
Table of Contents
Lab Overview - HOL-1706-SDC-6 - Guide to SDDC: VMware Validated Designs
    Lab Guidance
    Introduction to VMware Validated Designs
    VMware Validated Design for Software-Defined Data Center
Module 1 - VMware Validated Design for SDDC - Core Platform (15 minutes)
    Introduction
    Hands-on Labs Interactive Simulation: VMware Validated Design for SDDC - Core Platform
    VMware Validated Design for SDDC - Script
Module 2 - VMware Validated Design for SDDC – Software-Defined Storage (15 minutes)
    Introduction
    Hands-on Labs Interactive Simulation: VMware Validated Design for SDDC – Software-Defined Storage
    VMware Validated Design for SDDC – Software-Defined Storage - Script
Module 3 - VMware Validated Design for SDDC – Software-Defined Networking (15 minutes)
    Introduction
    Hands-on Labs Interactive Simulation: VMware Validated Design for SDDC – Software-Defined Networking
    VMware Validated Design for SDDC – Software-Defined Networking - Script
Module 4 - VMware Validated Design for SDDC – Cloud Operations with vRealize Operations (15 minutes)
    Introduction
    Hands-on Labs Interactive Simulation: VMware Validated Design for SDDC – Cloud Operations with vRealize Operations
    VMware Validated Design for SDDC – Cloud Operations with vRealize Operations - Script
Module 5 - VMware Validated Design for SDDC – Cloud Operations with vRealize Log Insight (15 minutes)
    Introduction
    Hands-on Labs Interactive Simulation: VMware Validated Design for SDDC – Cloud Operations with vRealize Log Insight
    VMware Validated Design for SDDC – Cloud Operations with vRealize Log Insight
Module 6 - VMware Validated Design for SDDC – Cloud Management and Automation with vRealize Automation (15 minutes)
    Introduction
    Hands-on Labs Interactive Simulation: VMware Validated Design for SDDC – Cloud Management and Automation with vRealize Automation
    VMware Validated Design for SDDC – Cloud Management and Automation with vRealize Automation - Script


Lab Overview - HOL-1706-SDC-6 - Guide to SDDC: VMware Validated Designs


Lab Guidance
Note: It will take more than 90 minutes to complete this lab. You should
expect to only finish 2-3 of the modules during your time. The modules are
independent of each other so you can start at the beginning of any module
and proceed from there. You can use the Table of Contents to access any
module of your choosing.

The Table of Contents can be accessed in the upper right-hand corner of the
Lab Manual.

VMware Validated Designs (VVD) provide the most comprehensive and extensively tested
blueprints to build and operate a Software-Defined Data Center (SDDC). VVD delivers
holistic, data center-level designs that span compute, storage, networking and
management, defining the gold standard for how to deploy and configure the complete
VMware SDDC stack in a wide range of use cases.

In this lab, you will focus on the fundamental architecture elements in the VMware
Validated Design for Software-Defined Data Center. Lab content is organized into six
15-minute lightning lab modules. Each module consists of an interactive simulation that
demonstrates the value that VVD brings to a specific topic. You may take any or all of
the modules in any order you like. Feel free to re-take the lab as many times as you like
to complete all of the modules.

Lab Module List:

• Module 1 - VMware Validated Design for SDDC - Core Platform (15 minutes)
• Module 2 - VMware Validated Design for SDDC - Software-Defined Storage (15 minutes)
• Module 3 - VMware Validated Design for SDDC - Software-Defined Networking (15 minutes)
• Module 4 - VMware Validated Design for SDDC - Cloud Operations with vRealize Operations (15 minutes)
• Module 5 - VMware Validated Design for SDDC - Cloud Operations with vRealize Log Insight (15 minutes)
• Module 6 - VMware Validated Design for SDDC - Cloud Management and Automation with vRealize Automation (15 minutes)

This lab manual can be downloaded from the Hands-on Labs Document site found here:

http://docs.hol.pub/HOL-2017

This lab may be available in other languages. To set your language preference and have
a localized manual deployed with your lab, you may utilize this document to help guide
you through the process:


http://docs.hol.vmware.com/announcements/nee-default-language.pdf

Location of the Main Console

1. The area in the RED box contains the Main Console. The Lab Manual is on the tab
to the Right of the Main Console.
2. Your lab starts with 90 minutes on the timer. The lab cannot be saved. All your
work must be done during the lab session, but you can click EXTEND to
increase your time. If you are at a VMware event, you can extend your lab time
twice, for up to 30 minutes. Each click gives you an additional 15 minutes.
Outside of VMware events, you can extend your lab time up to 9 hours and 30
minutes. Each click gives you an additional hour.
3. All work in this HOL Interactive Simulation will take place in the manual.

Activation Prompt or Watermark

When you first start your lab, you may notice a watermark on the desktop indicating
that Windows is not activated.

One of the major benefits of virtualization is that virtual machines can be moved and
run on any platform. The Hands-on Labs take advantage of this and run the labs out of
multiple datacenters. However, these datacenters may not have identical processors,
which triggers a Microsoft activation check through the Internet.

Rest assured, VMware and the Hands-on Labs are in full compliance with Microsoft
licensing requirements. The lab that you are using is a self-contained pod and does not
have full access to the Internet, which is required for Windows to verify the activation.


Without full access to the Internet, this automated process fails and you see this
watermark.

This cosmetic issue has no effect on your lab.


Introduction to VMware Validated Designs
VMware Validated Designs provide the most comprehensive and extensively-tested
blueprints to build and operate a private cloud. They deliver holistic data center-level
designs that span across compute, storage, networking and management, defining the
standard for how to deploy and configure the complete VMware Software-Defined Data
Center stack for a wide range of use cases.

Standardized, Data Center-level Design

VMware Validated Designs streamline and simplify the design and deployment process
for the SDDC. The designs are based on VMware’s core expertise in data center design
and further de-risk deployments through extensive product testing that provides
interoperability, availability, scalability and security.

Proven and Robust Designs

Each design is developed by experts, and rigorously tested and validated to provide
successful deployment and efficient operations. Continuous interoperability testing
helps a validated design stay valid as subsequent versions of components are
released.

Applicable to a Broad Set of Use-cases

VMware Validated Designs provide an agile platform to achieve a wide variety of
desired outcomes delivered by the SDDC. The SDDC shifts an organization’s focus
toward use-cases and away from just products.

VMware Validated Designs are a critical part of that shift. These designs provide a
structure that allows you to achieve specific use cases -- such as Micro-segmentation,
IT Automation and DevOps-Ready IT.

Comprehensive Documentation

The designs also include detailed guidance that synthesizes best practices on how to
deploy and optimally operate a VMware SDDC.

All designs are made available as free public documents from vmware.com/go/vvd.

Each includes:

• Release Notes
• Architecture Details
• Architecture Diagrams
• Planning and Preparation Guidance


• Pre-Deployment Checklists
• Step-by-step Deployment and Implementation Guides
• Configuration Workbooks
• Validation Workbooks
• Operational Guidance

Learn More in this Video

Learn more about the VMware Validated Designs in this video.

Design Objectives

Before creating a VMware Validated Design, the design objectives are established.
Design objectives set the stage for the key capabilities and attributes of each design.

For example, design objectives communicate the target customer profile and
requirements, such as:

• Scope
• Availability
• Redundancy
• Performance
• Security
• Recoverability.


Design Decisions

A design decision is an explicit record of the rationale applied during the design
process and the reasons why each decision was made.

Each design decision supports and ensures that the design meets the design objectives
by providing a means to record and communicate the argumentation and reasoning
behind the design process.

The process reduces customer risk by providing a baseline of standardization, and
reinforces this standardization with the justification and implications.

Design decisions in the VMware Validated Designs are presented in a simple checklist
form (a minimal sketch follows the list) and include the following:

• Reference ID
• Design Decision
• Design Justification
• Design Implications (If Any)
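The checklist structure above lends itself to simple tooling. Below is a minimal, hypothetical sketch (not part of the official documentation) of how the four checklist fields could be captured as a record for tracking decisions in your own environment; the reference ID and wording are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class DesignDecision:
    """One row of a VMware Validated Design decision checklist."""
    reference_id: str       # decision ID, as listed in the design documentation
    decision: str           # what was decided
    justification: str      # why it was decided
    implications: str = ""  # consequences of the decision, if any

# Hypothetical example entry (illustrative wording only):
example = DesignDecision(
    reference_id="SDDC-NET-EXAMPLE-001",
    decision="Use BGP as the dynamic routing protocol inside the SDDC.",
    justification="Avoids planning and designing access to OSPF area 0.",
    implications="Upstream physical routers must be configured for BGP peering.",
)
print(f"{example.reference_id}: {example.decision}")
```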


Design Decisions Example

Above is an example from the design decisions established in the VMware Validated
Design for SDDC.

These three design decisions are related to the routing model for the software-defined
networking. These decisions are part of the instantiation of the distributed logical
routing architecture in the SDDC.

All design decisions are included in a VMware Validated Design's comprehensive
documentation.

Architecture Fundamentals

The following section will provide an introduction to the architecture fundamentals in the
VMware Validated Designs.

Pod Architecture

VMware Validated Design uses a small set of common, standardized building blocks
called pods. Each pod encompasses the combinations of servers, storage, and network
equipment that are required to fulfill a specific role within the SDDC. These roles
typically include management, edge and compute, or a combination of edge and
compute.


Pods can be set up with varying levels of hardware redundancy and varying quality of
components. For example, one compute pod could use full hardware redundancy for
each component (power supply through memory chips) for increased availability. At the
same time, another compute pod in the same setup could use low-cost hardware
without any hardware redundancy. With these variations, the architecture can cater to
the different workload requirements in the SDDC.

For both small and large setups, homogeneity and easy replication are important.


Learn More in this Video

Learn more about the pod architecture used in the VMware Validated Designs in this
video.

Leaf-and-Spine Network Architecture

The physical network architecture used in the VMware Validated Designs is tightly
coupled with the pod architecture.

The VMware Validated Designs recommend a layer-3 leaf-and-spine network topology.
In this topology, each rack contains a redundant set of Top-of-Rack (ToR) switches,
commonly referred to as leaf switches. These leaf switches are interconnected with a
series of high-capacity spine switches that are used to instantiate a robust, high-speed
layer-3 network core that provides connectivity between the racks as well as the on- and
off-ramp access to and from external networks.

The number of spine and leaf switches in a deployment will vary depending on the
number of physical racks. Naturally, the larger the SDDC environment, the more
switches required to make up the overall fabric. A key benefit of the VMware Validated
Design is that it allows you to start small and easily scale out as you grow.

The following network design guidelines are used in the VMware Validated
Designs:

• Redundancy is built into the fabric in order to instantiate a highly resilient fabric
capable of sustaining individual link and/or switch failures without widespread
impact.


• If a link failure occurs between a spine switch and a leaf switch, the routing
protocol ensures that no traffic for the affected rack is attracted to the spine
switch that has lost connectivity to that rack.
• The total number of ports available across all spine switches, together with the
acceptable oversubscription ratio, determines the number of racks supported in
the SDDC (a worked sizing sketch follows this list).
• Because the number of hops between any two racks is consistent, the
architecture can utilize equal-cost multi-pathing (ECMP).
• VMware NSX is used to instantiate a robust, software-defined networking layer on
top of the physical leaf-and-spine network topology.
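To make the relationship between spine ports, oversubscription and supported rack count concrete, here is a small back-of-the-envelope sketch. All of the input values (port counts, uplinks, host counts and link speeds) are invented assumptions for illustration, not values prescribed by the design.

```python
# Back-of-the-envelope leaf-and-spine sizing; every input below is an assumption.
spine_switches = 4
ports_per_spine = 32        # usable leaf-facing ports per spine switch
uplinks_per_leaf = 4        # spine-facing uplinks per leaf switch
leaves_per_rack = 2         # redundant ToR pair per rack

# Each leaf uplink consumes one spine port, so spine port capacity caps the rack count.
total_spine_ports = spine_switches * ports_per_spine
max_leaves = total_spine_ports // uplinks_per_leaf
max_racks = max_leaves // leaves_per_rack
print(f"Spine ports available: {total_spine_ports}")
print(f"Racks supported by port count: {max_racks}")

# Leaf oversubscription: host-facing bandwidth versus spine-facing bandwidth per rack.
hosts_per_rack = 19
host_downlink_gbps = 10
leaf_uplink_gbps = 40       # e.g. 4 x 10 GbE uplinks per leaf
downlink = hosts_per_rack * host_downlink_gbps
uplink = leaves_per_rack * leaf_uplink_gbps
print(f"Rack oversubscription ratio: {downlink / uplink:.1f}:1")
```

If the resulting ratio exceeds what is acceptable for your workloads, you either add spine capacity or reduce the number of hosts per rack.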


Learn More in this Video

Learn more about the physical network architecture used in the VMware Validated
Designs in this video.

Distributed Logical Routing

In this diagram we depict the constructs for the universal distributed logical routing in
the SDDC across two regions. The configuration is largely the same across the
management stack and the compute stack.

The core platform solutions, such as the vCenter Server instances, Platform Services
Controllers instances, NSX Manager instances and the NSX Universal Controller cluster
run on a vSphere Distributed Port Group which has a VLAN provided down from the dual
leaf switches. This port group is on the vSphere Distributed Switch for the Management
Pod.

This same vDS has two uplink port groups – Uplink 01 and Uplink 02. On these port
groups we deploy NSX Edge Services Gateways in an ECMP - Equal Cost Multi-Pathing -
configuration for north/south routing for management.

This also occurs in both the edge cluster (3-pod) and the shared edge/compute cluster
(2-pod) for the compute stack, leveraging similar uplink port groups provided on the
vDS for the Edge.

The NSX Edge Services Gateway pair in management manages north/south traffic
for the SDDC infrastructure and the pair in edge manages north/south traffic for
SDDC workloads. The use of ECMP provides multiple paths in and out of the SDDC. This
results in much faster failover times than deploying Edge Services Gateways in HA mode.


The design uses BGP as the dynamic routing protocol inside the SDDC for a simple
implementation. There is no need to plan and design access to OSPF area 0 inside the
SDDC. OSPF area 0 varies based on customer configuration.

The management stack uses a single universal transport zone that encompasses all
management clusters and for the compute stack, the design uses a single universal
transport zone that encompasses all edge and compute clusters from all regions. The
use of a single Universal Transport Zone for management stack and one for the compute
stack supports extending networks and security policies across regions. This allows
seamless migration of management applications and business workloads
across regions either by cross vCenter vMotion or by failover recovery with Site
Recovery Manager.

Riding on top of these transport zones we instantiate a universal logical switch for use
as the transit network between the Universal Distributed Logical Routers (UDLRs) and
ESGs – this is done in both the management stack and the compute stacks. In this
diagram we’re just representing the construct with a single instance.

We then deploy a single NSX UDLR for the management cluster to provide east/west
routing across all regions as well as a single NSX UDLR for the compute and edge
cluster to provide east/west routing across all regions. Here again, we’re just
representing the construct with a single instance.

The UDLRs are peered via BGP with the north/south edges in their respective stacks, and
the universal logical switch that is used as the transit network allows the UDLRs and all
ESGs across regions to exchange routing information.

The distributed logical routing provides the ability to spin up logical networks that are
then accessible throughout the SDDC and out to the physical network.

In fact, we leverage this capability for the SDDC solutions themselves and create
application virtual networks for these solutions.
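To summarize the construct described above, the sketch below models the peering relationships as plain Python data: one universal transit logical switch per stack, a pair of ECMP Edge Services Gateways for north/south traffic, and a UDLR for east/west routing, peered over the transit network via BGP. The object names and counts are illustrative assumptions, not the names used in the design documentation.

```python
# Illustrative model of the universal distributed logical routing construct.
# All names are invented for clarity; substitute the names from your deployment.

stacks = {
    "management": {
        "transit_uls": "Universal-Transit-Mgmt",      # universal logical switch
        "udlr": "UDLR-Mgmt",                          # east/west, spans both regions
        "ecmp_esgs": ["Mgmt-ESG-01", "Mgmt-ESG-02"],  # north/south per region
    },
    "compute_edge": {
        "transit_uls": "Universal-Transit-Compute",
        "udlr": "UDLR-Compute",
        "ecmp_esgs": ["Edge-ESG-01", "Edge-ESG-02"],
    },
}

def bgp_peerings(stack):
    """Each UDLR peers with every ECMP ESG in its stack over the transit logical switch."""
    return [(stack["udlr"], esg, stack["transit_uls"]) for esg in stack["ecmp_esgs"]]

for name, stack in stacks.items():
    for udlr, esg, transit in bgp_peerings(stack):
        print(f"{name}: {udlr} <-- BGP over {transit} --> {esg}")
```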


Application Virtual Networks

An important design aspect of the VMware Validated Designs is the concept of an
Application Virtual Network (AVN), a construct wherein applications running in the SDDC
are isolated on NSX logical switches and connected to the distributed logical router.

AVNs improve security by isolating applications on their own networks. In addition,
application virtual networks, coupled with the universal transport zones and distributed
logical routing, facilitate application portability by allowing virtual machines to
be easily migrated between regions in response to a disaster recovery event,
eliminating the need to change IP addresses or routing.


Regions

VMware Validated Designs typically support two deployment types:

1. Single-Region
2. Dual-Region.

A single-region deployment is comprised of a single VMware Validated Design
implementation at one physical location. This deployment type provides no protection
against catastrophic events affecting the private cloud.

A dual-region deployment is comprised of two geographic locations. This deployment
type extends the private cloud across the separate regions, providing a level of
protection against natural disasters and other types of catastrophic events. It is well
suited for hosting production and other critical workloads.

Typically, single-region deployments are dual-region capable, making it easy to start
with a single-region deployment and later extend it to a dual-region deployment without
having to re-deploy or re-configure the private cloud.

Learn More in this Video

Learn more about business continuity and disaster recovery between VMware Validated
Designs regions in this video.

Storage

VMware Validated Design provides guidance for the storage of the management
components.


The design uses two storage technologies:

• Virtual SAN - Virtual SAN storage is the default storage type for the SDDC
management components. The VMware Validated Designs use rack-mount
Virtual SAN Ready Nodes to ensure seamless compatibility and support with
Virtual SAN during the deployment. The configuration and assembly process for
each system is then standardized, with all components installed in the same
manner on each host. Standardizing the entire physical configuration of the ESXi
hosts is critical to providing an easily manageable and supportable infrastructure
because standardization eliminates variability. Consistent PCI card slot location,
especially for network controllers, is essential for accurate alignment of physical
to virtual I/O. While there is no explicit requirement for running Virtual SAN on
hosts in the compute pods, it is recommended that you use Virtual SAN Ready
Nodes, as this not only enables you to leverage the benefits of Virtual SAN for
your compute workloads but also provides hardware consistency across all the
pods in the SDDC. Use storage that meets the application and business
requirements. This provides multiple storage tiers and SLAs for these business
workloads.
• NFS - NFS storage is the secondary storage for the SDDC management
components. It provides space for workload backups, archiving log data and
application templates. NFS storage requires an NFS-capable external storage
array. The VMware Validated Design calls for three specific NFS exports, which
control access between the endpoints and the underlying storage system (a
hedged mount example follows this list):
  1. Log Archive NFS Export for vRealize Log Insight. This export is used directly
     by vRealize Log Insight and is not presented as a datastore.
  2. Content Library NFS Export for templates that will be converted to a
     vRealize Automation format. It is presented as a datastore to the Compute
     Pods.
  3. Data Protection NFS Export on a separate volume for data protection
     services. Data protection is I/O intensive. This export is presented as a
     datastore to the Management Pod.
  For security purposes, access to each export is limited to only the application
  virtual machines or hosts requiring the ability to mount the storage.
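As an example of how one of the datastore-backed exports could be presented to a host, here is a hedged pyVmomi sketch that mounts an NFS export as a datastore on a single ESXi host. The vCenter address, credentials, array name, export path and datastore name are placeholders, and certificate verification is disabled only because this is a lab-style sketch.

```python
# Minimal sketch: mount an NFS export as a datastore on one ESXi host (pyVmomi).
# All names, paths and credentials are placeholders for illustration.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; use verified certificates in production
si = SmartConnect(host="mgmt01vc01.sf01.rainpole.local",        # placeholder vCenter
                  user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

# Simplified lookup of the target host by name.
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "mgmt01esx01.sf01.rainpole.local")

spec = vim.host.NasVolume.Specification(
    remoteHost="nfs-array.rainpole.local",    # placeholder NFS-capable external array
    remotePath="/exports/data-protection",    # e.g. the data protection export
    localPath="SFO01-NFS01-DATAPROTECTION",   # datastore name shown in vCenter
    accessMode="readWrite",
    type="NFS",
)
datastore = host.configManager.datastoreSystem.CreateNasDatastore(spec)
print("Mounted datastore:", datastore.name)
Disconnect(si)
```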

Get Started

You can get started with the VMware Validated Designs in three different ways:

Professional Services

When deploying VMware Validated Designs, you can choose to have an expert by your
side. VMware Professional Services delivers the right expertise and collaboration for a
rapid implementation of your SDDC architecture. Achieve faster value, increase business
efficiency and improve end user productivity with our project assistance and knowledge
transfer.



• Rapidly deploy an SDDC that delivers a production cloud platform for delivering IT
services
• Gain skills and knowledge, and get assistance with an initial deployment of the
foundational platform
• Improve end-user experience and business efficiency with an optimized
deployment

Certified Partner Architectures

VMware is partnering with the most prestigious System Integrators to bring VVD to
customers.

Through a rigorous process, VMware verifies and certifies that an SDDC design complies
with the VVD specifications. Certified SDDC designs receive the "VMware Ready" stamp
which gives customers a high level of assurance about the robustness of the solution
based on those designs.

Deploy It Yourself

For organizations that want to implement, operate and integrate the SDDC step-by-step
themselves, we’ve made all the documentation publicly available at
vmware.com/go/vvd.

VMware Validated Designs provide the most comprehensive set of prescriptive
documentation for you to build your SDDC.

This includes:

• Release Notes
• Architecture Details
• Architecture Diagrams
• Planning and Preparation Documents
• Pre-Deployment Checklists
• Step-by-step Deployment and Implementation Guides
• Configuration Workbooks
• Validation Workbooks
• Operational Guidance Documents – that include:
• Monitoring and Alerting
• Business Continuity
• Startup and Shutdown
• Plus many more Operations Add-ons!

Join the Community

Join the public VMware Validated Design community at vmware.com/go/vvd-community.
Here you can provide general feedback or ask a question on any design.


Follow the community by selecting the "Follow" button in the community's
banner and you will receive general notifications when new content is available. You will
even get early access to new designs as they become available.

For updates, follow @VMwareSDDC on Twitter and follow our YouTube playlist at
vmware.com/go/vvd-videos.


VMware Validated Design for Software-Defined Data Center
In this section you'll learn about the VMware Validated Design for Software-Defined Data
Center.

Let's get started.

Overview

The VMware Validated Design for Software-Defined Data Center is a comprehensive
guide that provides a prescriptive and extensively tested blueprint to deploy and
operate a Software-Defined Data Center using VMware’s technology. Each design
includes design guides, implementation and deployment procedures, and
documentation for on-going operations.

This Hands-on Lab is based on the VMware Validated Design for Software-Defined Data
Center 2.x.

It also includes updates on what's new in the 3.0 release announced at VMworld 2016.


VMware Validated Design for Software-Defined Data Center 2.x

The VMware Validated Design for the Software-Defined Data Center includes a
completely integrated software bill of materials.

Recall that our validation process rigorously tests and continuously validates the
entire integrated solution.

Let’s take a look at the components included in the VMware Validated Design for the
Software-Defined Data Center 2.x.

From a software component, or SDDC stack, perspective, this translates into a strong
foundation and management platform.

The foundation of the design, seen in green, includes:

• VMware vSphere for Compute Virtualization
• VMware Virtual SAN, as well as vSphere’s vVols, for Software-Defined Storage
• VMware NSX for Software-Defined Networking, Security and Extensibility
• VMware vRealize Log Insight for Log Aggregation and Analytics
• VMware vRealize Operations for Streamlined and Automated Data Center
Monitoring and Alerting
• VMware vSphere Data Protection for Data Protection Services.

Note: vSphere Data Protection is interchangeable with another data protection solution
as long as the design objectives are met.

This foundation is then extended to include the Cloud Management, Automation and
Orchestration components in addition to IT Financial Management.

These include:

• VMware vRealize Automation for Cloud Management and Automation
• VMware vRealize Orchestrator for Cloud Orchestration and integration with 3rd
Party Solutions
• VMware vRealize Business for Cloud for the IT Financial Management capabilities.

Next, let’s dive a bit deeper and look into some of the high-level technical aspects
before we jump into the lab modules.


Pod Architecture

VMware Validated Design uses a small set of common, standardized building blocks
called pods. Each pod encompasses the combinations of servers, storage, and network
equipment that are required to fulfill a specific role within the SDDC.

In the VMware Validated Design for Software-Defined Data Center 2.x these roles
include management, edge and compute.

Pods can be set up with varying levels of hardware redundancy and varying quality of
components. For example, one compute pod could use full hardware redundancy for
each component (power supply through memory chips) for increased availability. At the
same time, another compute pod in the same setup could use low-cost hardware
without any hardware redundancy. With these variations, the architecture can cater to
the different workload requirements in the SDDC.

For both small and large setups, homogeneity and easy replication are important.


Management Pod

As the name implies, the Management Pod hosts the infrastructure components used
to instantiate, manage, and monitor the private cloud. This includes the core
infrastructure components, such as the Platform Services Controllers, vCenter Server
instances, NSX Managers, and NSX Controllers, as well as SDDC monitoring solutions like
vRealize Operations Manager and vRealize Log Insight.

In the VMware Validated Design for Software-Defined Data Center, Cloud Management
Platform components are added to include vRealize Automation, vRealize
Orchestrator and vRealize Business for Cloud on top of this solid and robust
management platform.


Edge Pod

In this three-pod architecture, the SDDC network fabric does not provide external
connectivity. Most pod types, such as compute pods, are not set up with external
network connectivity. Instead, external connectivity is pooled into the Edge Pod. The edge
pod runs the software-defined networking services provided by VMware NSX to establish
north/south routing between the SDDC and the external network as well as east/west
routing inside the SDDC for the business workloads.


Compute Pod

Compute Pods host the SDDC workloads. An SDDC can mix different types of compute
pods and provide separate compute pools for different types of Service Level
Agreements (SLAs).

For example, compute pods can be set up with varying levels of hardware redundancy
and varying quality of components for different service levels. One compute pod could use full
hardware redundancy for each component (power supply through memory chips) for
increased availability. At the same time, another compute pod in the same setup could
use low-cost hardware without any hardware redundancy. With these variations, the
architecture can cater to the different workload requirements in the SDDC.

Leaf-and-Spine Network Architecture

The physical network architecture used in the VMware Validated Designs is tightly
coupled with the pod architecture.

The VMware Validated Designs recommend a layer-3 leaf-and-spine network topology.
In this topology, each rack contains a redundant set of Top-of-Rack (ToR) switches,
commonly referred to as leaf switches. These leaf switches are interconnected with a
series of high-capacity spine switches that are used to instantiate a robust, high-speed
layer-3 network core that provides connectivity between the racks as well as the on- and
off-ramp access to and from external networks.


The number of spine and leaf switches in a deployment will vary depending on the
number of physical racks. Naturally, the larger the SDDC environment, the more
switches required to make up the overall fabric. A key benefit of the VMware Validated
Design is that it allows you to start small and easily scale out as you grow.

The following network design guidelines are used in the VMware Validated
Designs:

• Redundancy is built into the fabric in order to instantiate a highly resilient fabric
capable of sustaining individual link and/or switch failures without widespread
impact.
• If a link failure occurs between a spine switch and a leaf switch, the routing
protocol ensures that no traffic for the affected rack is attracted to the spine
switch that has lost connectivity to that rack.
• The total number of ports available across all spine switches and the
oversubscription that is acceptable determine the number of racks supported in
the SDDC.
• Because the number of hops between any two racks is consistent, the
architecture can utilize equal-cost multi-pathing (ECMP).
• VMware NSX is used to instantiate a robust, software-defined networking layer on
top of the physical leaf-and-spine network topology.

Leaf-and-Spine Network in a Three Pod Architecture

In the three-pod architecture, the SDDC network fabric does not provide external
connectivity. Most pod types, such as compute pods, are not set up with external
network connectivity. Instead, external connectivity is pooled into the management and
edge pods. The edge pod runs the software-defined networking services provided by


VMware NSX to establish north/south routing between the SDDC workloads and the
external network as well as east/west routing inside the SDDC for the business
workloads.


Example Leaf Switch Configuration

In this diagram, a high-level illustration of typical leaf node is shown.

The leaf switches of each rack act as the Layer 3 interface for the corresponding subnet.
The Management and Edge Pods are provided with externally accessible VLANs for
access to the Internet and/or MPLS-based corporate networks.

Each ESXi host in the Management, Edge and Compute Pods uses VLANs and
corresponding Layer 2 networks presented for in-rack traffic.

The leaf switches of each rack act as the Layer 3 interface for the corresponding Layer 2
networks.


Host Connectivity in Three Pod Architecture

In this diagram, we illustrate the standard VLANs that are presented to the hosts in each
management, edge and compute pods.

You’ll notice that the vSphere Distributed Switches have an MTU of 9000 configured for
Jumbo Frames as do the necessary VMkernel ports – vMotion, VSAN, VXLAN and NFS.
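To illustrate how the jumbo frame setting could be verified or applied outside the Web Client, here is a hedged pyVmomi sketch that checks the MTU on each vSphere Distributed Switch and raises it to 9000 where needed. The vCenter address and credentials are placeholders; validate the reconfiguration in a lab before using it against a production vCenter.

```python
# Sketch: check, and optionally set, MTU 9000 on vSphere Distributed Switches (pyVmomi).
# Connection details are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="mgmt01vc01.sf01.rainpole.local",
                  user="administrator@vsphere.local", pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.dvs.VmwareDistributedVirtualSwitch], True)
for dvs in view.view:
    print(f"{dvs.name}: current MTU = {dvs.config.maxMtu}")
    if dvs.config.maxMtu != 9000:
        spec = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec(
            configVersion=dvs.config.configVersion,  # required for a reconfigure call
            maxMtu=9000)                             # jumbo frames for vMotion/VSAN/VXLAN/NFS
        WaitForTask(dvs.ReconfigureDvs_Task(spec))
        print(f"{dvs.name}: MTU set to 9000")

Disconnect(si)
```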


Distributed Logical Routing

Recall from the Introduction that the VMware Validated Designs use distributed logical
networking.

In this three-pod architecture, the SDDC network fabric does not provide external
connectivity. Most pod types, such as compute pods, are not set up with external
network connectivity. Instead, external connectivity is pooled into the Edge Pod. The edge
pod runs the software-defined networking services provided by VMware NSX to establish
north/south routing between the SDDC and the external network as well as east/west
routing inside the SDDC for the business workloads.

Application Virtual Networks with Example

An important design aspect of the VMware Validated Designs is the concept of an
Application Virtual Network (AVN), a construct wherein applications running in the SDDC
are isolated on NSX logical switches and connected to the distributed logical router.

AVNs improve security by isolating applications on their own networks. In addition,
application virtual networks, coupled with the universal transport zones and distributed
logical routing, facilitate application portability by allowing virtual machines to


be easily migrated between regions in response to a disaster recovery event,
eliminating the need to change IP addresses or routing.

In a dual region deployment, the designs dictate that three separate AVNs be deployed:

(1) A shared, region independent AVN that spans both regions. All management
applications that are configured to migrate, or failover between regions run inside this
AVN.

In the VMware Validated Design for SDDC these include:

• vRealize Automation
• vRealize Business for Cloud
• vRealize Orchestrator
• vRealize Operations Analytics Cluster

(2) Two region-dependent AVNs, one at each site. Management applications that are
specific to each region and which do not migrate between regions run in these AVNs.

In the VMware Validated Design for SDDC these include:

• vRealize Automation Proxy Agents
• vRealize Business for Cloud Collectors
• vRealize Operations Remote Collectors
• vRealize Log Insight Clusters

Integrated Bill of Materials

The VMware Validated Design for the Software-Defined Data Center includes a
completely integrated software bill of materials.


Recall that our validation process rigorously tests and continuously validates the
entire integrated solution.

VMware Validated Designs remove uncertainty, possible errors and downtime in
implementing and operating the SDDC by ensuring interoperability and compatibility of all
components upon deployment, and during upgrades between design versions.

Refer to the design documentation for products and versions included in the design.
Visit vmware.com/go/vvd.

What's New in the VMware Validated Design for Software-Defined Data Center v3.0

In this lab, we're excited to share a preview of what's new in the VMware Validated
Design for Software-Defined Data Center v3.0 announced at VMworld.

This release of the design will include all the features and capabilities of the prior
release along with the addition of the following:

Simplified Pod Architecture

This release includes an update to the base pod architecture from a three-pod design to
a new two-pod design by converging the previous Edge Pod and first Compute Pod.

Updates to the Software Bill of Materials

Expansion of Software-Defined Data Center solutions to include business continuity and
disaster recovery solutions.

Please refer to the design documentation for products and versions included in this
design.

Dual-Region Deployment and Operational Guidance


This release includes the expansion from single-region deployment and operations
guidance to include full dual-region support. This includes:

• Comprehensive guides and a prescriptive blueprint for the deployment of a dual-
region Software-Defined Data Center
• Dual-region operational guidance for Cloud Management and Cloud Operations,
including cloud management, business continuity and disaster recovery of the
SDDC solutions.


Software Bill of Materials

The VMware Validated Design for Software-Defined Data Center v3.0 includes the
addition of site replication and site protection services for the SDDC management,
automation and operations solution. In this release, we provide prescriptive guidance on
deploying and configuring vSphere Replication and Site Recovery Manager to protect:

• vRealize Operations
• vRealize Automation
• vRealize Orchestrator
• vRealize Business for Cloud

Please refer to the design documentation for products and versions included in this
design.


Two Pod Architecture

VMware Validated Design uses a small set of common, standardized building blocks
called pods. Each pod encompasses the combinations of servers, storage, and network
equipment that are required to fulfill a specific role within the SDDC.

In the VMware Validated Design for Software-Defined Data Center v3.0 release we have
included a consolidation of the edge and compute pods to make it easier to get started
with the design.

The roles for pods now include management and shared edge and compute.

Shared Edge and Compute Pod

In the new two-pod architecture, the SDDC network fabric provides external connectivity
to all pods. External connectivity is pooled into a Shared Edge and Compute Pod.
The shared pod runs the software-defined networking services provided by VMware NSX
to establish north/south routing between the SDDC and the external network as well as
east/west routing inside the SDDC for the business workloads. This shared pod may
also host the SDDC workloads.


As the SDDC grows, additional compute-only pods can be added to support a mix of
different types of workloads for different types of Service Level Agreements (SLAs).


Leaf-and-Spine Network in a Two Pod Architecture

In a two-pod architecture the SDDC network fabric provides external connectivity to all
pods. External connectivity is pooled into both the management pod and a shared edge
and compute pod.


Host Connectivity in a Two Pod Architecture

In this diagram, the host connectivity for the new two pod architecture prescribed in the
VMware Validated Design for Software-Defined Data Center v3.0 is shown.


Module 1 - VMware Validated Design for SDDC - Core Platform (15 minutes)


Introduction
This module will introduce you to the fundamental constructs of the VMware Validated
Design for SDDC architecture.

This lab is based on the VMware Validated Design for Software-Defined Data Center 2.0.

In this lab you'll be introduced to some key architecture concepts of the VMware
Validated Design for Software-Defined Data Center.

We've included several architecture reference diagrams and descriptions to provide
visual context to the topics seen in the lab simulations.

You can review these diagrams now and/or toggle back to them as you take the lab by
changing the active browser tab or window.

If you're ready to take the lab you can advance the manual and launch the interactive
simulation.

Architecture Reference Diagrams

Architecture reference diagrams for Module 1 are provided in the following sections.

Pods - Three Pod Architecture

Recall from the Lab Introduction that VMware Validated Design uses a small set
of common, standardized building blocks called pods. Each pod encompasses the
combinations of servers, storage, and network equipment that are required to fulfill a
specific role within the SDDC. These roles typically include management, edge and
compute, or a combination of edge and compute.

Pods can be set up with varying levels of hardware redundancy and varying quality of
components. For example, one compute pod could use full hardware redundancy for
each component (power supply through memory chips) for increased availability. At the
same time, another compute pod in the same setup could use low-cost hardware
without any hardware redundancy. With these variations, the architecture can cater to
the different workload requirements in the SDDC.

The types of pods include:

• Management Pod
• Edge Pod
• Compute Pod


In this diagram the three-pod architecture of the VMware Validated Design for Software-
Defined Data Center 2.x is shown.


Pods - Two Pod Architecture

The VMware Validated Design for Software-Defined Data Center 3.x introduces the two-
pod architecture.

• Management Pod
• Shared Edge and Compute Pod

As additional pods are added to the SDDC, they are compute-only pods.

Leaf-and-Spine Network - Three Pod Architecture

Recall that the physical network architecture used in the VMware Validated Designs is
tightly coupled with the pod architecture.

The VMware Validated Designs recommend a layer-3 leaf-and-spine network topology.
In this topology, each rack contains a redundant set of Top-of-Rack (ToR) switches,
commonly referred to as leaf switches. These leaf switches are interconnected with a
series of high-capacity spine switches that are used to instantiate a robust, high-speed
layer-3 network core that provides connectivity between the racks as well as the on- and
off-ramp access to and from external networks.


In this diagram, the three pod architecture as prescribed in the VMware Validated
Design for Software-Defined Data Center 2.x is shown.

In a three-pod architecture (management, edge and compute) the SDDC network fabric
does not provide external connectivity. Most pod types, such as compute pods, are not
set up with external network connectivity. Instead, external connectivity is pooled into
the management and edge pods. The edge pod runs the software-defined networking
services provided by VMware NSX to establish north/south routing between the SDDC
workloads and the external network as well as east/west routing inside the SDDC for the
business workloads.


Leaf-and-Spine Network - Two Pod Architecture

In this diagram the two pod architecture as prescribed in the VMware Validated Design
for Software-Defined Data Center 3.x is shown.

In a two-pod architecture (management and shared edge/compute) the SDDC network
fabric provides external connectivity to all pods. External connectivity is pooled into the
management pod and a shared edge/compute pod. The shared pod runs the software-
defined networking services provided by VMware NSX to establish north/south routing
between the SDDC workloads and the external network as well as east/west routing
inside the SDDC for the business workloads. This shared pod may also host the SDDC
workloads.


Host Connectivity - Three Pod Architecture

In this diagram, the three pod architecture host connectivity as prescribed in the
VMware Validated Design for Software-Defined Data Center 2.x is shown.

Host Connectivity - Two Pod Architecture

In this diagram, the two pod architecture host connectivity as prescribed in the VMware
Validated Design for Software-Defined Data Center 3.x is shown.


Pods and Clusters

In this diagram, a logical representation of the pods and clusters in the VMware
Validated Design for Software-Defined Data Center is shown.

The diagram represents the pods, clusters, host connectivity, storage, distributed
routing, virtual networks and placement of core platform components.

Note, while this diagram is based on the 3.x two pod architecture it is applicable to 2.x
as well.

Core vSphere Management

In this diagram, the multi-region deployment of vCenter Server, Platform Services
Controllers and vSphere Data Protection instances in the VMware Validated Design for
Software-Defined Data Center is shown.

Within each region, the design instantiates two Platform Service Controllers and two
vCenter Server systems in the appliance form factor. This includes one PSC and one
vCenter Server for the management pod and one PSC and one vCenter Server for the
shared edge and compute pods. The design also joins the Platform Services Controller


instances to the same vCenter Single Sign-On domain and points each vCenter Server
instance to its respective Platform Services Controller instance.

Note: This diagram is applicable to both the 2.x and 3.x designs.


NSX - Three Pod Architecture

In this diagram, the multi-region and cross-vCenter deployment of NSX in the VMware
Validated Design for Software-Defined Data Center 2.x is shown.

In both regions, two separate NSX Managers instances are deployed, one for the
Management pod and one for the Compute and Edge pods, along with an associated
NSX Universal Controller Cluster. In Region B, the secondary NSX Manager instances
automatically import the configurations of the NSX Universal Controller Clusters from
Region A.


NSX - Two Pod Architecture

In this diagram, the multi-region and cross-vCenter deployment of NSX in the VMware
Validated Design for Software-Defined Data Center 3.x is shown.

The general architecture is the same as in the three-pod architecture; however, the NSX
services for the Compute Stack are deployed on an initial shared edge and compute pod
and all NSX services are added to a resource pool to guarantee resources for the
network virtualization platform in this stack.

Universal Distributed Logical Routing

In this diagram we depict the constructs for the universal distributed logical routing in
the SDDC across two regions. The configuration is largely the same across the
management stack and the compute stack.

The core platform solutions, such as the vCenter Server instances, Platform Services
Controllers instances, NSX Manager instances and the NSX Universal Controller cluster
run on a vSphere Distributed Port Group which has a VLAN provided down from the dual
leaf switches. This port group is on the vSphere Distributed Switch for the Management
Pod.

This same vDS has two uplink port groups – Uplink 01 and Uplink 02. On these port
groups we deploy NSX Edge Services Gateways in an ECMP - Equal Cost Multi-Pathing -
configuration for north/south routing for management.


This also occurs in both the edge cluster (3-pod) and the shared edge/compute cluster
(2-pod) for the compute stack, leveraging similar uplink port groups provided on the
vDS for the Edge.

The NSX Edge Services Gateway pair in management manages north/south traffic
for the SDDC infrastructure and the pair in edge manages north/south traffic for
SDDC workloads. The use of ECMP provides multiple paths in and out of the SDDC. This
results in much faster failover times than deploying Edge Services Gateways in HA mode.

The design uses BGP as the dynamic routing protocol inside the SDDC for a simple
implementation. There is no need to plan and design access to OSPF area 0 inside the
SDDC. OSPF area 0 varies based on customer configuration.

The management stack uses a single universal transport zone that encompasses all
management clusters and for the compute stack, the design uses a single universal
transport zone that encompasses all edge and compute clusters from all regions. The
use of a single Universal Transport Zone for management stack and one for the compute
stack supports extending networks and security policies across regions. This allows
seamless migration of management applications and business workloads
across regions either by cross vCenter vMotion or by failover recovery with Site
Recovery Manager.

Riding on top of these transport zones we instantiate a universal logical switch for use
as the transit network between the Universal Distributed Logical Routers (UDLRs) and
ESGs – this is done in both the management stack and the compute stacks. In this
diagram we’re just representing the construct with a single instance.

We then deploy a single NSX UDLR for the management cluster to provide east/west
routing across all regions as well as a single NSX UDLR for the compute and edge
cluster to provide east/west routing across all regions. Here again, we’re just
representing the construct with a single instance.

The UDLRs are peered via BGP with the north/south edges in their respective stacks, and
the universal logical switch that is used as the transit network allows the UDLRs and all
ESGs across regions to exchange routing information.

The distributed logical routing provides the ability to spin up logical networks that are
then accessible throughout the SDDC and out to the physical network.

In fact, we leverage this capability for the SDDC solutions themselves and create
application virtual networks for these solutions.


Hands-on Labs Interactive Simulation: VMware Validated Design for SDDC - Core Platform
The interactive simulation will allow you to experience steps which are too time-
consuming or resource intensive to do live in the lab environment.

1. Click here to open the interactive simulation. It will open in a new browser
window or tab.
2. When finished, click the “Return to the lab” link to continue with this lab.


VMware Validated Design for SDDC - Script
Welcome to the VMware Validated Design for SDDC Interactive Simulation (iSIM).

In this iSIM, we will demonstrate some of the core platform components of a private
cloud based on this design.

The orange boxes highlight the next step.

Log in to the vSphere Web Client

Let’s start off by logging into the vSphere Web Client.

1. Click in the User name field, then the Password field. Finally click the Login
button.

vSphere Web Client

Here in the vSphere Web Client we can manage the entire platform for the software-
defined data center: the well-known compute virtualization along with software-defined
storage and software-defined networking.

The basis of the platform is vSphere 6 and the design uses key enhancements and
capabilities of the release. For example, Enhanced Linked Mode allows us to
interconnect both the management and compute stacks for a single view.

Let's take a look at Administration and System Configuration.

2. Click the Administration tab, then the System Configuration tab and finally
the Nodes link.

Here we can see the nodes and services running across the vCenter Server and Platform
Services Controller instances. Two nodes provide platform services, such as single
sign-on and certificate management. Two additional nodes provide the vCenter Server
instances. All nodes are registered with one SSO domain.

This linkage allows the infrastructure to be managed from a single user interface.

We also see the overall node health and service health.


Software Defined Data Center Building Blocks

3. Click Home and then the Hosts and Clusters icon.

The VMware Validated Design for SDDC uses a collection of physical data center racks
that are interconnected using a common network core. Inside the physical racks the
different functions of the Software-Defined Data Center are implemented as a
standardized set of building blocks referred to as pods. Each pod encompasses the
combinations of servers, storage, and network that are required to fulfill a specific role
within the SDDC.

This design uses three types of pods: management, edge and compute.

Management Pod

Let’s take a look at three of these essential pods starting with the Management Pod.

4. Click SF001-MGMT01

As the name implies, the management pod hosts the infrastructure components used to
instantiate, manage, and monitor the Software-Defined Data Center. This includes the
core infrastructure components, such as the Platform Services Controllers, vCenter
Server Instances, NSX Managers, and NSX Controllers, as well as the SDDC monitoring
solutions like vRealize Operations Manager and vRealize Log Insight.

The management pod is deployed inside a single data center rack and is comprised of a
minimum of four physical servers and two redundant Top-of-Rack switches, commonly
referred to as leaf switches.

VMware ESXi is installed on each of the four physical servers, which are then logically
grouped into a vSphere management cluster.

Notice that the advanced resource management settings for vSphere Distributed
Resource Scheduler and vSphere High Availability are enabled, as is Virtual SAN (a
hedged configuration sketch follows the steps below).

5. Click vSphere DRS to explore the section. When you have finished viewing,
click it again to close.
6. Click vSphere HA to explore the section. When you have finished viewing, click
it again to close.
7. Click Virtual SAN Capacity to explore the section. When you have finished
viewing, click it again to close.
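The sketch below shows, in hedged form, how those three cluster settings could be checked or enabled with pyVmomi. The cluster name is taken from the simulation, the connection details are placeholders, and newer vSphere releases expose a dedicated vSAN management API that may be preferable to the legacy vsanConfig field used here.

```python
# Sketch: enable DRS, HA and Virtual SAN on the management cluster (pyVmomi).
# Credentials and the vCenter name are placeholders; validate against your vSphere version.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="mgmt01vc01.sf01.rainpole.local",
                  user="administrator@vsphere.local", pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "SF001-MGMT01")

spec = vim.cluster.ConfigSpecEx(
    drsConfig=vim.cluster.DrsConfigInfo(enabled=True,
                                        defaultVmBehavior="fullyAutomated"),
    dasConfig=vim.cluster.DasConfigInfo(enabled=True),     # vSphere HA
    vsanConfig=vim.vsan.cluster.ConfigInfo(enabled=True),  # Virtual SAN (legacy-style config)
)
WaitForTask(cluster.ReconfigureComputeResource_Task(spec, modify=True))
print(f"{cluster.name}: DRS, HA and Virtual SAN enabled")
Disconnect(si)
```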

Virtual SAN Ready Nodes

8. Click on mgmt01esx01.sf01.rainpole.local


The physical servers used in the management pod must be Virtual SAN Ready Nodes,
meaning they have been certified for use in a Virtual SAN deployment. Storage for the
Management pod is provided using a combination of VMware Virtual SAN and NFS.
Virtual SAN is used for hosting the virtual machines that run in the management cluster
where NFS is used for storing backups, log archives and virtual machine templates.

To accommodate future growth and scalability, additional Virtual SAN Ready Servers can
be added to the management cluster in order to provide additional compute, network and
storage capacity.

Edge Pod

9. Click on SF001-MGMT01 to collapse it. Then click on SF001-EDGE01.

The Edge Pod provides a centralized gateway through which workloads running in the
SDDC are able to access external networks. Workloads in the SDDC are isolated on their
own logical networks and do not have direct access to external networks. To access
external networks traffic is routed through the edge pod over a transport zone using
distributed logical routing and edge service gateways.

Like the management pod, the edge pod contains a minimum of four Virtual SAN Ready
servers. This pod is typically co-located in the same physical rack as the
Management Pod.

While the management and edge pods can be consolidated into a single physical rack,
they are still logically divided into two separate vSphere clusters as seen here.

Here again, VMware ESXi is installed on each physical server and the servers logically
grouped into a vSphere edge cluster. The Edge Cluster is managed from the Compute
vCenter Server instance running in the management pod. Storage for the edge pod is
provided by VMware Virtual SAN.

10. Click vSphere DRS to explore the section. When you have finished viewing,
click it again to close.
11. Click vSphere HA to explore the section. When you have finished viewing, click
it again to close.
12. Click Virtual SAN Capacity to explore the section. When you have finished
viewing, click it again to close.

Compute Pod

Within the Software-Defined Data Center all business and end-user workloads run inside
the Compute Pods.

13. Click on SFO01-EDGE01 to collapse it and then click on SFO01-COMP01.


Like the management and edge pods, the compute pods are deployed inside data
center racks, with each rack representing a separate pod. Each compute pod contains a
minimum of four servers along with a pair of leaf switches.

Storage for the compute clusters can be any combination of supported vSphere storage.
The type of storage used is determined based on cost, performance, business
requirements, and desired service levels. It is recommended that you use Virtual SAN
Ready nodes as it enables you to leverage the benefits included in the hybrid and all-
flash options of Virtual SAN.

As with the other pods, VMware ESXi is installed on each server and the hosts are
logically grouped into vSphere clusters.

Virtual Machines

Let’s take a look at the virtual machines that instantiate and manage the SDDC.

14. Click on the Virtual Machines and Templates tab.

Here the SDDC components are organized into folders.

Let’s look at the instances in the management stack.

15. Click on the Platform Services folder, then the vCenter Server folder.

In a single-region deployment, two vCenter Server instances are deployed along with
two corresponding Platform Services Controller instances. The two Platform Services
Controller instances are configured as a replication pair for a single Single Sign-on
domain. One vCenter Server instance, which is referred to as the “Management
vCenter”, is used to instantiate and manage the management cluster itself. The second
vCenter Server instance, which is referred to as the “Compute vCenter”, is used to
instantiate and manage the vSphere clusters running in the edge and compute pods.

vSphere Data Protection

16. Click on the vCenter Server folder to collapse it, then click on the vSphere
Data Protection folder.

vSphere Data Protection is deployed and used to provide data protection for the
solutions residing in the management cluster.

Within vSphere Data Protection, backup policies are created for these virtual machines.


VMware NSX

17. Click on the vSphere Data Protection folder to collapse it, then click on the
NSX for vSphere folder.

VMware NSX is deployed to provide network and security services such as VXLAN,
virtual switching, firewalling and load balancing. Here we see the NSX Manager and NSX
Controller cluster for the management stack. We also see the NSX Edge Services
Gateways used for North/South routing and for load-balancing SDDC solutions.

18. Click on the NSX for vSphere and then the Platform Services folders to
collapse them.

Cloud Operations

Let’s look at Cloud Operations.

19. Click on the Cloud Operations folder.

The design uses vRealize Operations to provide monitoring and alerting services for the
Software-Defined Data Center. vRealize Operations is deployed in two parts, the
Analytics Cluster and the Remote Collectors.

20. Click on the vROps01 folder to expand it.

A four-node vRealize Operations analytics cluster is deployed inside an application
virtual network provided by NSX. The analytics cluster consists of one master node, one
master replica node, and two data nodes. The use of multiple nodes allows for both
scalability and high availability.

21. Click on the vROps01 folder to collapse it and then click on the vROps01RC
folder to expand it.

In addition to the analytics cluster, two Remote Collector nodes are also deployed.
Remote collectors help to lighten the load on the analytics cluster by collecting metrics
from applications and then forwarding them in bulk. The use of remote collectors
facilitates SDDC deployments that span multiple regions as it allows for separate remote
collectors to be deployed in each region.

vRealize Log Insight

22. Click on the vROps01RC folder to collapse it and then vRLI01 folder to expand
it.

And now vRealize Log Insight.


vRealize Log Insight provides scalable log aggregation and indexing for the SDDC with
near real-time search and analytics capabilities. vRealize Log Insight collects, imports,
and analyzes logs to provide real-time answers to problems related to systems,
services, and applications, and derive important insights.

As with vRealize Operations, vRealize Log Insight is also deployed in a highly available
and scalable manner inside an application virtual network. vRealize Log Insight
consists of a minimum of three nodes: a single master node and two worker nodes.

23. Click the vRLI01 and Cloud Operations folders to collapse them.

vRealize Automation plus vRealize Business

And vRealize Automation plus vRealize Business.

24. Click the Cloud Management folder and then the vRA01 folder to expand them.

vRealize Automation enables modeling of complex IT services inside reusable blueprints.


These blueprints are published within a centralized service catalog and made available
for automated provisioning. Along with automated provisioning of IT services, vRealize
Automation also provides a self-service portal that empowers IT to take full advantage
of the rapid provisioning capabilities of the Software-Defined Data Center. This includes
automated deployment of new workloads as well as virtual machine life cycle
management, including automated decommissioning of retired workloads and resource
reclamation.
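
For reference, the published catalog can also be queried over the vRealize Automation REST API. The sketch below is a hypothetical example that requests a token and lists entitled catalog items; the appliance address, tenant and credentials are placeholders, and the endpoint paths reflect the vRA 7.x API, so verify them against your version's documentation.

import requests
requests.packages.urllib3.disable_warnings()     # lab only: self-signed certificates

VRA = "https://vra01svr01.rainpole.local"        # assumed vRA appliance / load-balancer FQDN
token = requests.post(VRA + "/identity/api/tokens",
                      json={"username": "itac@rainpole.local",   # placeholder user
                            "password": "VMware1!",
                            "tenant": "rainpole"},               # placeholder tenant
                      verify=False).json()["id"]

items = requests.get(VRA + "/catalog-service/api/consumer/entitledCatalogItems",
                     headers={"Authorization": "Bearer " + token,
                              "Accept": "application/json"},
                     verify=False).json()
for entry in items.get("content", []):
    print(entry["catalogItem"]["name"])          # names of published blueprints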

25. Click the vRA01 folder to collapse it and then the vR01IAS folder to expand it.

vRealize Business for Cloud provides visibility into the costs associated with the private
cloud. The solution tracks the costs of deployed workloads and automatically estimates
the impact and associated cost of deploying additional workloads. vRealize Business
provides a centralized dashboard to view, monitor and track cost and spending
efficiency.

26. Click the vR01IAS and Cloud Management folders to collapse them.

Compute Stack

Let’s look at the instances in the compute stack.

27. Click on the Platform Services and then NSX for vSphere folders to expand
them.

Here again, VMware NSX is deployed to provide network and security services such as
VXLAN, virtual switching, firewalling and load balancing. While the NSX Manager
instance for the compute stack is deployed in the management cluster, its NSX
Controller Cluster runs inside the edge cluster. We also see the NSX Edge Services
Gateways used for North/South routing for SDDC workloads.

28. Click on the NSX for vSphere and then Platform Services folders to collapse
them.

Pod Storage

Now let’s take a quick look at the storage provided to these pods.

29. Click on the Storage tab.

Here we see how the types of storage are organized.

30. Click on the VSAN folder to explore it.

Recall that we use Virtual SAN for the management and edge pod workloads.

31. Click on the NFS folder to explore it.

We also use NFS in the management pod for backups, log archives and templates.

Click on comp01vc01.sfo01.rainpole.local.

32. Click on the VSAN folder to explore it.


33. Click on the NFS folder to explore it.

You can use any HCL supported storage in the Compute Pod.

Pod Networking

34. Click on the Networking tab and then the SFO01 Data Center under
mgmt01vc01.sfo01.rainpole.local.

The foundation for software-defined networking provided by NSX is based on the
vSphere Distributed Switch.

A vSphere Distributed Switch is created for each pod.

35. Click on vDS-Mgmt to expand it.

Here we see the Management vSphere Distributed Switch created within the
Management vCenter. It includes the necessary VMkernel port groups for management,
vMotion, Virtual SAN, and NFS. Additional port groups are provided for north/south
routing uplinks and others are created by NSX for the application virtual networks.


Jumbo Frames

36. Click the Manage tab.

Jumbo frames are enabled on the distributed switch and the MTU is set to 9000 bytes.

37. Click the Related Objects tab and then on mgmt01esx01.sfo01.rainpole.local.
Click the Manage tab and finally the Virtual switches tab.

Each host has two 10 GbE uplink connections to the top-of-rack leaf switches for
redundancy.

Each host is assigned to its respective distributed switch and VMkernel adapters are
configured. Jumbo frames are enabled on the VMkernel adapters for Virtual SAN, NFS and NSX.
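
A quick way to confirm these MTU settings outside the Web Client is with a short pyVmomi script. The sketch below reads the maximum MTU configured on the management vDS and the MTU of each host's VMkernel adapters; the vCenter address and credentials are placeholders.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only: skip certificate validation
si = SmartConnect(host="mgmt01vc01.sfo01.rainpole.local",  # assumed Management vCenter
                  user="administrator@vsphere.local", pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.dvs.VmwareDistributedVirtualSwitch], True)
for dvs in view.view:
    if dvs.name == "vDS-Mgmt":
        print(dvs.name, "switch MTU:", dvs.config.maxMtu)      # expect 9000
        for member in dvs.config.host:                         # hosts attached to the vDS
            host = member.config.host
            for vnic in host.config.network.vnic:              # VMkernel adapters
                print(" ", host.name, vnic.device, "MTU:", vnic.spec.mtu)
Disconnect(si)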

vSphere Distributed Switches

38. Click on the Networking tab, then vDS-Mgmt to collapse it. Click on the SFO01
Data Center to collapse it too.
39. Now click on the SFO01 Data Center under comp01vc01.sfo01.rainpole.local.

Similar vSphere Distributed Switches are also created in the Compute vCenter for both
the Edge and Compute pods.

40. Click on vDS-Edge to explore it. When you have finished, click on it again to
collapse it.
41. Click on vDS-Comp to explore it. When you have finished, click on it again to
collapse it.

Here we can see that the vSphere Distributed Switch has been applied to its
corresponding pod.

Conclusion

This concludes the demonstration on the core platform components of the VMware
Validated Design for SDDC.

Thank you!


Module 2 - VMware Validated Design for SDDC – Software-Defined Storage (15 minutes)


Introduction
This module introduces you to the software-defined storage provided by VMware Virtual
SAN in the VMware Validated Design for Software-Defined Data Center.

This lab is based on the VMware Validated Design for Software-Defined Data Center 2.0.

In this lab you'll be introduced to some key architecture concepts of the VMware
Validated Design for Software-Defined Data Center.

We've included several architecture reference diagrams and descriptions to provide
visual context to the topics seen in the lab simulations.

You can review these diagrams now and/or toggle back to them as you take the lab by
changing the active browser tab or window.

If you're ready to take the lab you can advance the manual and launch the interactive
simulation.

Architecture Reference Diagrams

Architecture reference diagrams for Module 2 are provided in the following sections.

Virtual SAN

In this diagram, a logical view of a Virtual SAN Ready Node is shown.

The VMware Validated Design for Software-Defined Data Center provides guidance for
the primary storage of the management components.

Virtual SAN storage is the default and primary storage for the SDDC management
components. The VMware Validated Design uses rack-mount Virtual SAN Ready Nodes
to ensure seamless compatibility and support with Virtual SAN during the deployment.
The configuration and assembly process for each system is then standardized, with all
components installed in the same manner on each host. Standardizing the entire physical
configuration of the ESXi hosts is critical to providing an easily manageable and
supportable infrastructure because standardization eliminates variability. Consistent PCI
card slot location, especially for network controllers, is essential for accurate alignment
of physical to virtual I/O.

VMware Validated Design for SDDC 2.x:

• Management Pod uses Virtual SAN Hybrid Storage Architecture


• Edge Pod uses Virtual SAN Hybrid Storage Architecture


• Compute Pods use any HCL supported storage. Both Hybrid and All-Flash Virtual
SAN are ideal options for the SDDC workloads.

VMware Validated Design for SDDC 3.x:

• Management Pod uses Virtual SAN Hybrid Storage Architecture


• Shared Edge and Compute Pods use any HCL supported storage. Both Hybrid and
All-Flash Virtual SAN are ideal options for the SDDC workloads.

While there is no explicit requirement for running Virtual SAN on hosts in the compute
pods, it is recommended that you use Virtual SAN Ready Nodes as this not only enables
you to leverage the benefits of Virtual SAN for your compute workloads, but it also
provides hardware consistency across all the pods in the SDDC. Use storage that meets
the application and business requirements. This provides multiple storage tiers and
SLAs for these business workloads.

Virtual SAN - Hybrid and All-Flash

In this diagram, both the hybrid and all-flash architectures of Virtual SAN are
shown.

Irrespective of the architecture, there is a flash-based caching tier that can be
configured from flash devices such as SSDs, PCIe cards and NVMe devices. The flash caching tier
acts as the read cache/write buffer that dramatically improves the performance of
storage operations.

In the all-flash architecture, the flash-based caching tier is intelligently used as a write
buffer only, while another set of SSDs forms the persistence tier to store data. Since this
architecture utilizes only flash devices, it delivers extremely high IOPS of up to 100K per
host, with predictable low latencies. Reads will primarily come from the capacity tier,
although data that was freshly written, is still “hot”, and has not yet been de-staged may be
read from the write cache tier.


In the hybrid architecture, server-attached magnetic disks are pooled to create a
distributed shared datastore that persists the data. In this type of architecture, you can
get up to 40K IOPS per server host.

VMware Validated Design for SDDC 2.x:

• Management Pod uses Virtual SAN Hybrid Storage Architecture


• Edge Pod uses Virtual SAN Hybrid Storage Architecture
• Compute Pods use any HCL supported storage. Both Hybrid and All-Flash Virtual
SAN are ideal options for the SDDC workloads.

VMware Validated Design for SDDC 3.x:

• Management Pod uses Virtual SAN Hybrid Storage Architecture


• Shared Edge and Compute Pods use any HCL supported storage. Both Hybrid and
All-Flash Virtual SAN are ideal options for the SDDC workloads.

NFS

In this diagram, we depict the secondary storage used in the VMware Validated Design
for Software-Defined Data Center.

In this design, the secondary storage tier for management and compute pods is
provided by NFS.

NFS is used as the target for:


• vSphere Data Protection backups


• vRealize Log Insight log archives in the management pod.
• Virtual machine templates in the compute pods consumed by vRealize
Automation


Templates and Content Library

In this diagram, the use of the content library to share VM-related content - like
templates consumed and used by vRealize Automation across regions - is shown.

Here we illustrate how some standard virtual machine templates are consumed by vRealize
Automation in Region A. Here you can see Microsoft Windows and two Linux
distributions. These templates are then imported into a published content library in
Region A.

Region B subscribes to the Region A library and synchronizes the content. The templates
are then exported to a format consumable by vRealize Automation in Region B.

In both regions, the templates and content libraries are stored on NFS.


Log Archives

In this diagram, the logical architecture of vRealize Log Insight is shown. Both compute
and storage resources for master and worker instances scale up and can perform log
archiving onto an NFS export that each vRealize Log Insight node can access.

Note: Local log data is stored on the Virtual SAN datastore in the Management
Cluster.

Backups

In this diagram, the logical architecture for data protection is shown.

In the VMware Validated Design for Software-Defined Data Center we use vSphere Data
Protection to back up all management components. vSphere Data Protection provides
the functionality that is required to back up full image VMs and applications in those
VMs, for example, Microsoft SQL Server. However, it's noted that another data
protection product can be used if it supports the design objectives in the architecture.

vSphere Data Protection protects the virtual infrastructure at the vCenter Server layer.
Because vSphere Data Protection is connected to the Management vCenter Server, it
can access all management ESXi hosts, and can detect the virtual machines that require
backups.


vSphere Data Protection uses deduplication technology to back up virtual environments
at the data block level, which enables efficient disk utilization. To optimize backups
and leverage the VMware vSphere Storage APIs, all ESXi hosts must have access to the
production storage. The backup datastore stores all the data that is required to recover
services according to a Recovery Point Objective (RPO).

The design allocates a dedicated NFS datastore for the vSphere Data Protection
appliance and the backup data in each region. Because vSphere Data Protection
generates a significant amount of I/O operations, especially when performing multiple
concurrent backups, a dedicated volume is presented. The storage platform must be
able to handle this I/O. If the storage platform does not meet the performance
requirements, backup windows might be missed, or backup failures and error messages
might occur.

Always run the vSphere Data Protection performance analysis feature during or after
virtual appliance deployment to assess performance.

vSphere Data Protection can dynamically expand the destination backup store from 2 TB
to 8 TB. Using an extended backup storage requires additional memory on the vSphere
Data Protection appliance. In this design we set the backup targets to 4 TB initially since
the management stack currently consumes approximately 2 TB of disk space,
uncompressed and without deduplication.

Even though vSphere Data Protection uses the Changed Block Tracking technology to
optimize the backup data, do not use a backup window when the production storage is
in high demand to avoid any business impact.

Backups are scheduled for each day and the design retains three days of backups by default.
This is aligned with the size of the NFS export.


Hands-on Labs Interactive Simulation: VMware Validated Design for SDDC –
Software-Defined Storage
The interactive simulation will allow you to experience steps which are too time-
consuming or resource intensive to do live in the lab environment.

1. Click here to open the interactive simulation. It will open in a new browser
window or tab.
2. When finished, click the “Return to the lab” link to continue with this lab.


VMware Validated Design for SDDC – Software-Defined Storage - Script
Welcome to the VMware Validated Design for SDDC Interactive Simulation (iSIM). In this
iSIM, we will demonstrate the configuration of VMware Virtual SAN for use in a private
cloud based on the design.

Software-Defined Storage

Within the Software-Defined Data Center all business and end-user workloads run inside
the Compute Pods.

1. Click the Hosts and Clusters icon.

The compute pods are deployed inside data center racks, with each rack representing a
separate pod. Each compute pod contains a minimum of four servers along with a pair
of top-of-rack leaf switches.

2. Click on the comp01esx01.sfo01.rainpole.local.

As with the other pods, VMware ESXi is installed on each server and the hosts are
logically grouped into vSphere clusters.

Compute clusters are all managed by the Compute vCenter Instance running in the
Management Pod. Multiple compute clusters can be created until the maximum number
of either hosts (1,000) or virtual machines (10,000) for vCenter Server is reached.
Should these maximums ever be reached, additional vCenter Server instances can be
provisioned to allow for additional compute clusters to be created.

There will typically be multiple compute pods deployed within a Software-Defined Data
Center. The pod design of the VVD makes it easy to start small and gradually expand
over time to accommodate growth. In addition, the pod design makes it possible to
deploy separate compute pods with varying levels of quality, redundancy and
availability.

Verify Virtual SAN Settings

Storage can be any combination of supported vSphere storage. The type of storage
used is determined based on cost, performance, business requirements, and desired
service levels. It is recommended that you use Virtual SAN Ready nodes as it enables
you to leverage the benefits included in the hybrid and all-flash storage configuration
for Virtual SAN.


In this interactive simulation, we will enable Virtual SAN for hybrid storage. Each of the
hosts in this cluster have one flash disk and two magnetic disks that are not partitioned
or formatted. These drives will be used for the Virtual SAN datastore in this compute
pod.

Before enabling Virtual SAN we’ll review the configuration prerequisites. Virtual SAN requires
a VMkernel adapter to be enabled for Virtual SAN traffic. Here we’ll check the
configuration of this host.

3. Click on the vmk4 VMkernel adapter.

Click on the pencil icon to edit or view the VMkernel adapter’s properties.

We can see that this host’s VMkernel adapter has Virtual SAN enabled for the traffic
type.

4. Click on NIC settings.

We also see that the MTU has been set to 9000 as prescribed in the design.

For this simulation, the configuration has already been enabled on all four hosts. Click
the Cancel button.
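
As an aside, the same prerequisite check can be scripted. The following pyVmomi sketch lists which VMkernel adapter on each host is selected for Virtual SAN traffic and prints its MTU; the vCenter address and credentials are placeholders.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only: skip certificate validation
si = SmartConnect(host="comp01vc01.sfo01.rainpole.local",  # assumed Compute vCenter
                  user="administrator@vsphere.local", pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    net_config = host.configManager.virtualNicManager.QueryNetConfig("vsan")
    selected = net_config.selectedVnic or []
    for vnic in net_config.candidateVnic:
        if vnic.key in selected:
            print(host.name, vnic.device, "carries vSAN traffic, MTU:", vnic.spec.mtu)
Disconnect(si)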

Enable Virtual SAN

5. Click on the SFO01-COMP01 cluster and then click the Manage tab.
6. Enabling Virtual SAN starts with a simple checkbox. Click on the Configure...
button to get started.

When adding drives to Virtual SAN, there are two options: Automatic and Manual. In this
demo, the aforementioned flash and magnetic disks will be added to Virtual SAN
automatically.

7. Select Automatic from the Add disks to storage drop-down menu. Click Next.
8. The Virtual SAN configuration wizard provides a network validation to ensure all
hosts meet the network prerequisites. Click Next.
9. The configuration of Virtual SAN with automatic disk claiming is ready to
complete. Click Finish to confirm your settings and configure Virtual SAN on this
cluster.

Virtual SAN is on and there are four hosts in the cluster as seen in the Resources
section.
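
The wizard steps above can also be performed programmatically. The following pyVmomi sketch enables Virtual SAN on the compute cluster with automatic disk claiming; names and credentials are placeholders, and the vsanConfig approach shown reflects the vSphere 6.x API that this design is based on.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only: skip certificate validation
si = SmartConnect(host="comp01vc01.sfo01.rainpole.local",  # assumed Compute vCenter
                  user="administrator@vsphere.local", pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "SFO01-COMP01")   # compute cluster

spec = vim.cluster.ConfigSpecEx(
    vsanConfig=vim.vsan.cluster.ConfigInfo(
        enabled=True,
        defaultConfig=vim.vsan.cluster.ConfigInfo.HostDefaultInfo(
            autoClaimStorage=True)))              # "Automatic" disk claiming

WaitForTask(cluster.ReconfigureComputeResource_Task(spec, modify=True))
print("vSAN enabled:", cluster.configurationEx.vsanConfigInfo.enabled)
Disconnect(si)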


Disk Management

10. Click the Disk Management tab. The "Disk Management" section is used to
review the disk groups created in the automatic configuration. In a manual
configuration, it would show disks eligible for use by Virtual SAN.

There are a total of four flash disks and eight data disks that have been automatically
added to Virtual SAN. One flash disk from each host and two magnetic disks from each.

Health and Performance

Virtual SAN provides fully integrated operations management for health and
performance using the native vSphere Web Client.

11. Click on the Health and Performance tab.


12. To enable the service, click on the Edit button. Verify that the Virtual SAN
performance service is turned on and it is using the Virtual SAN Default Storage
policy. Click the OK button.

The Performance Service is now turned on and Virtual SAN is healthy (the Stats object
health now shows 'Healthy').

Review the New Virtual SAN Datastore

13. Click on the Storage icon to review the new Virtual SAN datastore that has been
created in this compute pod.
14. Click on comp01vc01.sfo01.rainpole.local and then SFO01 to expand those
sections. Finally, click on vsanDatastore.

Here we see that the new datastore has been added with the name of “vsanDatastore”.

Renaming a Datastore

Let’s rename the datastore to follow our existing naming convention.

15. Click the Actions menu and then select Rename. Click in the name box and enter
SFO01A-DS-VSAN01-COMP01, then click OK to rename the datastore.
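
The same rename can be scripted. Below is a minimal pyVmomi sketch that finds the default vsanDatastore and renames it to the design's naming convention; the vCenter address and credentials are placeholders.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only: skip certificate validation
si = SmartConnect(host="comp01vc01.sfo01.rainpole.local",  # assumed Compute vCenter
                  user="administrator@vsphere.local", pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.Datastore], True)
for ds in view.view:
    if ds.name == "vsanDatastore":
        WaitForTask(ds.Rename_Task(newName="SFO01A-DS-VSAN01-COMP01"))
Disconnect(si)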


Conclusion

This concludes this Interactive Simulation on enabling the software-defined storage
using VMware Virtual SAN in the VMware Validated Design for SDDC.

Thank you!


Module 3 - VMware Validated Design for SDDC – Software-Defined Networking (15 minutes)


Introduction
This module introduces the fundamental concepts for software-defined networking with
VMware NSX in the VMware Validated Design for Software-Defined Data Center.

This lab is based on the VMware Validated Design for Software-Defined Data Center 2.0.

In this lab you'll be introduced to some key architecture concepts of the VMware
Validated Design for Software-Defined Data Center.

We've included several architecture reference diagrams and descriptions to provide
visual context to the topics seen in the lab simulations.

You can review these diagrams now and/or toggle back to them as you take the lab by
changing the active browser tab or window.

If you're ready to take the lab you can advance the manual and launch the interactive
simulation.

Architecture Reference Diagrams

Architecture reference diagrams for Module 3 are provided in the following sections.

Pod Architecture

VMware Validated Design uses a small set of common, standardized building blocks
called pods. Each pod encompasses the combinations of servers, storage, and network
equipment that are required to fulfill a specific role within the SDDC. These roles
typically include management, edge and compute, or a combination of edge and
compute.

Pods can be set up with varying levels of hardware redundancy and varying quality of
components. For example, one compute pod could use full hardware redundancy for
each component (power supply through memory chips) for increased availability. At the
same time, another compute pod in the same setup could use low-cost hardware
without any hardware redundancy. With these variations, the architecture can cater to
the different workload requirements in the SDDC.

For both small and large setups, homogeneity and easy replication are important.

As the name implies, the Management Pod hosts the infrastructure components used
to instantiate, manage, and monitor the private cloud. This includes the core
infrastructure components, such as the Platform Services Controllers, vCenter Server
instances, NSX Managers, and NSX Controllers, as well as SDDC monitoring solutions like vRealize
Operations Manager and vRealize Log Insight.


In the VMware Validated Design for Software-Defined Data Center, Cloud Management
Platform components, including vRealize Automation, vRealize Orchestrator and
vRealize Business for Cloud, are added on top of this solid and robust management
platform.

In a three-pod architecture in the VMware Validated Design for SDDC 2.x
(management, edge and compute) the SDDC network fabric does not provide external
connectivity. Most pod types, such as compute pods, are not set up with external
network connectivity. Instead, external connectivity is pooled into the Edge Pod. The edge
pod runs the software-defined networking services provided by VMware NSX to establish
north/south routing between the SDDC and the external network as well as east/west
routing inside the SDDC for the business workloads.

In the new two-pod architecture in the VMware Validated Design for SDDC 3.x
(management and shared edge/compute) the SDDC network fabric provides external
connectivity to all pods. External connectivity is pooled into a Shared Edge and
Compute pod. The shared pod runs the software-defined networking services provided
by VMware NSX to establish north/south routing between the SDDC and the external
network as well as east/west routing inside the SDDC for the business workloads. This
shared pod may also host the SDDC workloads.

As the SDDC grows, additional compute-only pods can be added to support a mix of different
types of workloads for different types of Service Level Agreements (SLAs).

Compute Pods host the SDDC workloads. An SDDC can mix different types of compute
pods and provide separate compute pools for different types of Service Level
Agreements (SLAs). For example, compute pods can be set up with varying levels of
hardware redundancy and varying quality of components for different service levels. One
compute pod could use full hardware redundancy for each component (power supply
through memory chips) for increased availability. At the same time, another compute
pod in the same setup could use low-cost hardware without any hardware redundancy.
With these variations, the architecture can cater to the different workload requirements
in the SDDC.


Pods and Clusters

In this diagram, a logical representation of the pods and clusters in the VMware
Validated Design for Software-Defined Data Center is shown.

The diagram represents the pods, clusters, host connectivity, storage, distributed
routing, virtual networks and placement of core platform components.

Note that while this diagram is based on the new two-pod architecture in the VMware
Validated Design for SDDC 3.x, it is applicable to 2.x as well.

Pods - Three Pod Architecture

Recall from the Lab Introduction that VMware Validated Design uses a small set
of common, standardized building blocks called pods. Each pod encompasses the
combinations of servers, storage, and network equipment that are required to fulfill a
specific role within the SDDC. These roles typically include management, edge and
compute, or a combination of edge and compute.

Pods can be set up with varying levels of hardware redundancy and varying quality of
components. For example, one compute pod could use full hardware redundancy for
each component (power supply through memory chips) for increased availability. At the
same time, another compute pod in the same setup could use low-cost hardware
without any hardware redundancy. With these variations, the architecture can cater to
the different workload requirements in the SDDC.

In the VMware Validated Design for SDDC 2.x the types of pods include:

• Management Pod
• Edge Pod
• Compute Pod

In this diagram the three-pod architecture of the VMware Validated Design for Software-
Defined Data Center 2.x is shown.


Pods - Two Pod Architecture

The VMware Validated Design for Software-Defined Data Center 3.x introduces the two-
pod architecture.

• Management Pod
• Shared Edge and Compute Pod

As additional pods are added to the SDDC, they are Compute Pods only.

Leaf-and-Spine Network - Three Pod Architecture

Recall that the physical network architecture used in the VMware Validated Designs is
tightly coupled with the pod architecture.

The VMware Validated Designs recommend a layer-3 leaf-and-spine network topology. In
this topology, each rack contains a redundant set of Top-of-Rack (ToR) switches,
commonly referred to as leaf switches. These leaf switches are interconnected with a
series of high-capacity spine switches that are used to instantiate a robust, high-speed
layer-3 network core that provides connectivity between the racks as well as the on- and
off-ramp access to and from external networks.


In this diagram, the three pod architecture as prescribed in the VMware Validated
Design for Software-Defined Data Center 2.x is shown.

In a three-pod architecture (management, edge and compute) the SDDC network fabric
does not provide external connectivity. Most pod types, such as compute pods, are not
set up with external network connectivity. Instead, external connectivity is pooled into the
management and edge pods. The edge pod runs the software-defined networking
services provided by VMware NSX to establish north/south routing between the SDDC
workloads and the external network as well as east/west routing inside the SDDC for the
business workloads.


Leaf-and-Spine Network - Two Pod Architecture

In this diagram the two pod architecture as prescribed in the VMware Validated Design
for Software-Defined Data Center 3.x is shown.

In a two-pod architecture (management and shared edge/compute) the SDDC network
fabric provides external connectivity to all pods. External connectivity is pooled into the
management pod and a shared edge/compute pod. The shared pod runs the software-
defined networking services provided by VMware NSX to establish north/south routing
between the SDDC workloads and the external network as well as east/west routing
inside the SDDC for the business workloads. This shared pod may also host the SDDC
workloads.


Host Connectivity - Three Pod Architecture

In this diagram, the three pod architecture host connectivity as prescribed in the
VMware Validated Design for Software-Defined Data Center 2.x is shown.

Host Connectivity - Two Pod Architecture

In this diagram, the two pod architecture host connectivity as prescribed in the VMware
Validated Design for Software-Defined Data Center 3.x is shown.


Core vSphere Management

In this diagram, the multi-region deployment of vCenter Server, Platform Services
Controllers and vSphere Data Protection instances in the VMware Validated Design for
Software-Defined Data Center is shown.

Within each region, the design instantiates two Platform Service Controllers and two
vCenter Server systems in the appliance form factor. This includes one PSC and one
vCenter Server for the management pod and one PSC and one vCenter Server for the
shared edge and compute pods. The design also joins the Platform Services Controller
instances to the same vCenter Single Sign-On domain and points each vCenter Server
instance to its respective Platform Services Controller instance.

Note: This diagram is applicable to both the 2.x and 3.x designs.


NSX - Three Pod Architecture

In this diagram, the multi-region and cross-vCenter deployment of NSX in the VMware
Validated Design for Software-Defined Data Center 2.x is shown.

In both regions, two separate NSX Managers instances are deployed, one for the
Management pod and one for the Compute and Edge pods, along with an associated
NSX Universal Controller Cluster. In Region B, the secondary NSX Manager instances
automatically import the configurations of the NSX Universal Controller Clusters from
Region A.


NSX - Two Pod Architecture

In this diagram, the multi-region and cross-vCenter deployment of NSX in the VMware
Validated Design for Software-Defined Data Center 3.x is shown.

The general architecture is the same as in the three-pod architecture; however, the NSX
services for the Compute Stack are deployed on an initial shared edge and compute pod
and all NSX services are added to a resource pool to guarantee resources for the
network virtualization platform in this stack.

Universal Distributed Logical Routing

In this diagram we depict the constructs for the universal distributed logical routing in
the SDDC across two regions. The configuration is largely the same across the
management stack and the compute stack.

The core platform solutions, such as the vCenter Server instances, Platform Services
Controllers instances, NSX Manager instances and the NSX Universal Controller cluster
run on a vSphere Distributed Port Group which has a VLAN provided down from the dual
leaf switches. This port group is on the vSphere Distributed Switch for the Management
Pod.

This same vDS has two uplink port groups – Uplink 01 and Uplink 02. On these port
groups we deploy NSX Edge Services Gateways in an ECMP - Equal Cost Multi-Pathing -
configuration for north/south routing for management.


This also occurs in either the edge cluster (3-pod) or the shared edge/compute cluster (2-pod) for the
compute stack and leverages similar uplink port groups provided on the vDS for the
Edge.

The NSX Edge Services Gateways pair in management will manage north/south traffic
for the SDDC infrastructure and the pair in edge will manage north/south traffic for
SDDC workloads. The use of ECMP provides multiple paths in and out of the SDDC. This
results in much faster failover times than deploying Edge service gateways in HA mode.

The design uses BGP as the dynamic routing protocol inside the SDDC for a simple
implementation. There is no need to plan and design access to OSPF area 0 inside the
SDDC. OSPF area 0 varies based on customer configuration.

The management stack uses a single universal transport zone that encompasses all
management clusters and for the compute stack, the design uses a single universal
transport zone that encompasses all edge and compute clusters from all regions. The
use of a single Universal Transport Zone for management stack and one for the compute
stack supports extending networks and security policies across regions. This allows
seamless migration of management applications and business workloads
across regions either by cross vCenter vMotion or by failover recovery with Site
Recovery Manager.

Riding on top of these transport zones we instantiate a universal logical switch for use
as the transit network between the Universal Distributed Logical Routers (UDLRs) and
ESGs – this is done in both the management stack and the compute stacks. In this
diagram we’re just representing the construct with a single instance.

We then deploy a single NSX UDLR for the management cluster to provide east/west
routing across all regions as well as a single NSX UDLR for the compute and edge
cluster to provide east/west routing across all regions. Here again, we’re just
representing the construct with a single instance.

The UDLRs peer via BGP with the north/south edges in their respective stack, and the
universal logical switch that is used as the transit network allows the UDLRs and all
ESGs across regions to exchange routing information.

The distributed logical routing provides the ability to spin up logical networks that are
then accessible throughout the SDDC and out to the physical network.

In fact, we leverage this capability for the SDDC solutions themselves and create
application virtual networks for these solutions.


Hands-on Labs Interactive Simulation: VMware Validated Design for SDDC –
Software-Defined Networking
The interactive simulation will allow you to experience steps which are too time-
consuming or resource intensive to do live in the lab environment.

1. Click here to open the interactive simulation. It will open in a new browser
window or tab.
2. When finished, click the “Return to the lab” link to continue with this lab.


VMware Validated Design for SDDC – Software-Defined Networking - Script
Welcome to the VMware Validated Design for SDDC Interactive Simulation (iSIM). In this
iSIM, we will demonstrate the configuration of VMware NSX for use in a private cloud
based on this SDDC architecture. This simulation focuses primarily on the configuration
in the management stack.

VMware NSX reproduces the complete set of Layer 2 through 7 networking services in
software. This includes: switching, routing, access control, firewalling, and load
balancing.

VMware NSX is a transformative approach to software-defined networking that not only
enables you to achieve magnitudes better agility and economics, but also allows for a
vastly simplified operational model for the underlying physical network. With the ability
to be installed on any IP network, including both traditional and next-generation fabric
architectures, NSX is a completely non-disruptive solution that can easily be deployed in
any data center.

Networking and Security

The VMware Validated Design uses a simple, scalable and resilient leaf-and-spine
network topology for the IP transport layer. Servers in each pod are redundantly
connected to the leaf switches, commonly referred to as top-of-rack switches, in their
rack. High-speed uplinks connect the leaf and spine switches to establish an access
layer for transport.

1. Click Networking and Security, then Installation.

Hosts in each pod are logically grouped into clusters. For example, in the management
pod, 4 Virtual SAN Ready Nodes are logically grouped into a management cluster. A
vSphere Distributed Switch is established for each pod and port groups defined for
services required by the cluster.

NSX Managers are deployed for both the management stack and compute stack and
provide a centralized management plane for the NSX for vSphere architecture.
Orchestration of the software-defined networks occurs through the management plane.

Universal NSX Controller Clusters are deployed for the network virtualization control
plane. Here control messages are used to set up networking attributes on logical
switches, as well as to configure distributed routing, and distributed firewalling in the
data plane.
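
The state of the control plane can also be inspected through the NSX for vSphere REST API. The sketch below queries the controller list from the NSX Manager; the manager address and credentials are placeholders, and the endpoint should be verified against the NSX 6.x API guide for your release.

import requests
requests.packages.urllib3.disable_warnings()     # lab only: self-signed certificates

NSX_MGR = "https://mgmt01nsxm01.sfo01.rainpole.local"    # assumed NSX Manager FQDN
resp = requests.get(NSX_MGR + "/api/2.0/vdn/controller",
                    auth=("admin", "VMware1!"), verify=False)
resp.raise_for_status()
print(resp.text)   # XML list of controllers with their IDs, IP addresses and status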

2. Click the Host Preparation tab. From the NSX Manager drop-down menu, select
172.27.12.65 and then click SFO01-MGMT01 to expand it.


Host Preparation

Hosts are prepared with the network virtualization components and establish their
VXLAN Tunnel Endpoint (VTEP) connections for communication on the IP transport layer.
Here we see that the hosts have the components installed and VXLAN is configured on
each of these management hosts.

3. Click Logical Network Preparation and then click SFO01-MGMT01 to expand it.

And here we can see that the VXLAN Tunnel Endpoint connections for communication
on the IP transport layer are established on each of these management hosts.

The design uses the hybrid replication mode for multi-destination traffic. Hybrid mode
offers operational simplicity while leveraging the Layer 2 Multicast capability of physical
switches.

4. Click the Segment ID tab.

5. Click the Transport Zones tab.

Hosts are joined to a transport zone that defines the scope of the logical switches, or
virtual networks, across the SDDC. For the management stack, the design uses a single universal
transport zone that encompasses all management clusters and for the compute stack,
the design uses a single universal transport zone that encompasses all edge and
compute clusters from all regions.
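
The transport zones themselves can be listed through the NSX REST API as well. The following sketch retrieves the configured scopes, which should include the universal transport zones described above; the manager address and credentials are placeholders.

import requests
requests.packages.urllib3.disable_warnings()     # lab only: self-signed certificates

NSX_MGR = "https://mgmt01nsxm01.sfo01.rainpole.local"    # assumed NSX Manager FQDN
resp = requests.get(NSX_MGR + "/api/2.0/vdn/scopes",
                    auth=("admin", "VMware1!"), verify=False)
resp.raise_for_status()
print(resp.text)   # vdnScopes XML: transport zone names, IDs and member clusters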

6. From the Actions drop-down menu, select Connect Clusters.

Here we see that the management cluster is connected to the Management Universal
Transport Zone. The use of a single Universal Transport Zone for management stack and
one for the compute stack supports extending networks and security policies across
regions.

7. Click Cancel.

Logical Switches

8. Click the Logical Switches tab.

Workload data is contained within logical switches provided by the data plane. The data
is carried over designated transport networks in the physical network and can be
extended across data centers in a multi-region deployment. For example, a virtual
network can be established across Management Pods in San Francisco and Los Angeles.
This enables workload migration and disaster recovery of applications without the need
to change the IP addressing.

9. Click on Mgmt-RegionA01-VXLAN.

The first application virtual network is a region-dependent virtual network.

10. Click NSX Edge.

A Universal Distributed Logical Router is instantiated in the management stack and
another in the compute stack. Universal Distributed Logical Routers run in the kernel of
each ESXi host and provide centralized administration and routing configuration for the
Software-Defined Data Center.

11. Click Virtual Machines.

On this virtual network, the solutions or solution portions tied to a region are deployed.
These include the vRealize Operations Remote Collectors, vRealize Log Insight master
and worker nodes, vRealize Automation Proxy Agents and vRealize Business for Cloud
Collectors.

13. Click the Networking and Security link to return to the Logical Switches page.
Then click Mgmt-xRegion01-VXLAN. The second application virtual network is
a region-independent virtual network.
14. Click NSX Edge.

Solutions that require load balancing for highly distributed and available deployments
use the load balancing capabilities of an NSX Edge Services Gateway. This ensures
availability, scalability and performance of the VMware Validated Designs’ SDDC
solutions.

15. Click on Virtual Machines.

On this virtual network, the solutions that are independent of region – that is, they are
portable to another region - are deployed. These include the vRealize Operations
master, master replica and data nodes; vRealize Automation appliances, IaaS Web
Servers, IaaS Manager Servers, and Distributed Execution Managers; and lastly, vRealize
Business for Cloud appliance. Both the vRealize Operations and vRealize Automation
solutions are load balanced by the NSX Edge Services Gateway.

16. Click the Networking and Security link to return to the Logical Switches page.

Universal Transit Network

An additional virtual network for Universal Transit Network is established to link and
distribute logical routing to the physical network.


17. Click Universal Transit Network, then click NSX Edge.

Here we see that the transit network is connected to the Universal Distributed Logical
Router. We also see two NSX Edge Service Gateways are deployed. These gateways
provide north/south routing between the Software-Defined Data Center and the physical
network with Equal Cost Multi-pathing. Two edges are instantiated in the management
pod to provide north/south access for the management stack and two are instantiated in
the edge pod to provide north/south access for the compute stack in a similar fashion.

18. Click the Networking and Security link to return to the Logical Switches page.

NSX Edges

19. Click NSX Edges.

Let’s take a look at the gateways.

20. Click SFOMGMT-ESG01.

In the global configuration we can see the size, hostname, syslog, high-availability
configuration and more.

21. Click the Routing tab.

The ECMP edge gateways are peered for route redistribution with the leaf switches for
their pod.

22. Click BGP.

Here we can see that this gateway peers with the top-of-rack leaf switches in the pod
as well as the Universal Distributed Logical Router.

23. Click Route Redistribution.

We also see that route redistribution is enabled through BGP.
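
For reference, the same BGP and redistribution settings can be read through the NSX routing API. The sketch below retrieves the BGP configuration of an edge; the edge ID, manager address and credentials are placeholders.

import requests
requests.packages.urllib3.disable_warnings()     # lab only: self-signed certificates

NSX_MGR = "https://mgmt01nsxm01.sfo01.rainpole.local"    # assumed NSX Manager FQDN
EDGE_ID = "edge-1"                                       # assumed ID of SFOMGMT-ESG01
resp = requests.get(NSX_MGR + "/api/4.0/edges/" + EDGE_ID + "/routing/config/bgp",
                    auth=("admin", "VMware1!"), verify=False)
resp.raise_for_status()
print(resp.text)   # bgp XML: local AS, BGP neighbours and redistribution rules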

24. Let's review and validate the same settings for the second edge gateway for
north / south routing. Click the Networking and Security link to go back to NSX
Edges, then click on SFOMGMT-ESG02.

• Click Interfaces - We can also view, modify or add IP addresses for the
interfaces of each vNic. Here we see that three vNics are connected. One uplink
to the transit network, and two uplinks to the physical network for access to the
leaf switches.
• Click the Routing tab.
• Click on BGP.
• Click Route Redistribution.


25. Click the Networking and Security link to return to NSX Edges.

Universal Distributed Logical Router

Recall that a Universal Distributed Logical Router is instantiated in the management
stack and another in the compute stack.

26. Click UDLR01.

The UDLR Control VM is deployed in a high-availability pair.

27. Click Interfaces.

We can also view, modify or add IP addresses for the interfaces of each vNic. Here we
see that three vNics are connected. One to the transit network, and one to each
application virtual network for the SDDC solutions.

28. Click the Routing tab.

Universal Distributed Logical Routers run in the kernel of each ESXi host and provide
centralized administration and routing configuration for the Software-Defined Data
Center. Note that ECMP is enabled.

29. Click BGP.

The Universal Distributed Logical Router for each stack is peered for route
redistribution with its respective northbound edges on a dedicated virtual network for
transit services.

30. Click Route Redistribution.

We also see that route redistribution is enabled through BGP, as well as OSPF.

This provides complete route redistribution and access between virtual networks in the
Software-Defined Data Center to and from the physical network.

31. Click the Home icon and select Home.

Route Redistribution Status

Let’s check the status of the route redistribution.

Click on the minimized Putty Session on the task bar.

32. Hit the 'Enter' key to start the ssh session. Hit 'Enter' again for the password.


In this lab environment, our top-of-rack leaf switches are simulated by an NSX Edge
Services Gateway.

33. Hit any key to enter the 'show ip bgp neighbors' command and hit 'Enter'.

Here we see that the top-of-rack leaf switches have established peering with the north /
south NSX Edge Service Gateways.

34. Hit any key to view the rest of the output.

Here we see that the top-of-rack leaf switches have established peering with the
upstream spine switch.

35. Hit any key to enter the 'show ip bgp route' command and hit 'Enter'.

Here we see that routes to the virtual networks instantiated by NSX in the management
stack have been distributed to the physical network from the north / south NSX Edge
Service Gateways.

The 192.168.10.0 network is the Management Universal Transit Network and the
192.168.11.0 and 192.168.31.0 networks are the application virtual networks for the
SDDC solutions.
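
If you prefer to script this verification instead of using an interactive PuTTY session, the following sketch runs the same commands over SSH with paramiko; the switch address and credentials are placeholders for the lab's simulated top-of-rack switch.

import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())   # lab only
client.connect("172.27.11.2", username="admin", password="VMware1!")   # assumed ToR address

for command in ("show ip bgp neighbors", "show ip bgp route"):   # same commands as the simulation
    stdin, stdout, stderr = client.exec_command(command)
    print("###", command)
    print(stdout.read().decode())
client.close()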

Conclusion

Consumption of the VMware Validated Designs’ software-defined networking is provided
through the consumption plane and is available through the vSphere Web Client and
REST API entry-points. This flexible architecture allows for automation of all
configurations via the Cloud Management Platform – vRealize Automation and vRealize
Orchestrator. Here, end-users consume all Layer 2 through Layer 7 services in seconds
where they are programmatically assembled on-demand.

And full monitoring is integrated with Cloud Operations solutions that are built into the
foundation of VMware Validated Designs. These include vRealize Operations and
vRealize Log Insight.

This concludes the overview of the software-defined networking provided by VMware
NSX in the VMware Validated Design for SDDC.


Module 4 - VMware Validated Design for SDDC – Cloud Operations with vRealize Operations (15 minutes)


Introduction
This module introduces the fundamentals of cloud operations with VMware vRealize
Operations in the VMware Validated Design for Software-Defined Data Center.

This lab is based on the VMware Validated Design for Software-Defined Data Center 2.0.

In this lab you'll be introduced to some key architecture concepts of the VMware
Validated Design for Software-Defined Data Center.

We've included several architecture reference diagrams and descriptions to provide
visual context to the topics seen in the lab simulations.

You can review these diagrams now and/or toggle back to them as you take the lab by
changing the active browser tab or window.

If you're ready to take the lab you can advance the manual and launch the interactive
simulation.

Architecture Reference Diagrams

Architecture reference diagrams for Module 4 are provided in the following sections.

vRealize Operations Logical Design

In this diagram, the dual-region deployment of vRealize Operations in the VMware
Validated Design for Software-Defined Data Center is shown.

Before outlining the deployment topology, we will outline the architecture of vRealize
Operations.

vRealize Operations tracks and analyzes the operation of multiple data sources within
the Software-Defined Data Center by using specialized analytics algorithms. These
algorithms help vRealize Operations Manager learn and predict the behavior of
every object it monitors. Users access this information by using views, reports, and
dashboards.

vRealize Operations contains functional elements that collaborate for data analysis and
storage, and support creating clusters of nodes with different roles.

For high availability and scalability, you can deploy several vRealize Operations Manager
instances in a cluster where they can have either of the following roles:

• Master Node. This is the initial node in a deployment and a cluster. In large-
scale environments the master node manages all other nodes. In small-scale
environments, the master node is the single standalone vRealize Operations
Manager node.
• Master Replica Node. Enables high availability of the master node.
• Data Node. Enables scale-out of vRealize Operations Manager in larger
environments. Data nodes have adapters installed to perform collection and
analysis. Data nodes also host vRealize Operations Manager management packs.
Larger deployments usually include adapters only on data nodes, not on the
master node or replica node
• Remote Collector Node. In distributed deployments, enables navigation
through firewalls, interfaces with a remote data source, reduces bandwidth across
regions, or reduces the load on the vRealize Operations Manager analytics
cluster. Remote collector nodes only gather objects for the inventory and
forward collected data to the data nodes. Remote collector nodes do not store
data or perform analysis. In addition, you can install them on a different operating
system than the rest of the cluster nodes. The master and master replica nodes
are data nodes with extended capabilities.

vRealize Operations can form two types of clusters according to the nodes that
participate in a cluster and both are used in the VMware Validated Designs:

• Analytics Clusters. Tracks, analyzes, and predicts the operation of monitored
systems. Consists of a master node, data nodes, and optionally of a master
replica node.
• Remote Collector Clusters. Only collects diagnostics data without storage or
analysis. Consists only of remote collector nodes.

In this diagram the multi-region deployment of vRealize Operations in the VMware
Validated Design for Software-Defined Data Center is shown. It consists of:

• In the Region A Management Pod, we deploy a 4-node (medium-size) vRealize
Operations analytics cluster that is highly available. This topology provides high
availability, scale-out capacity up to eight nodes, and failover.
• In the Management Pod for Region A and Region B, a 2-node remote collector
cluster is deployed. The remote collectors communicate directly with the data
nodes in the vRealize Operations analytics cluster. For load balancing and fault
tolerance, deploy two remote collectors in each region.

Each region contains its own remote collectors whose role is to ease scalability by
performing the data collection from the applications that are not subject to failover
and periodically sending collected data to the analytics cluster.

In a disaster, the design fails over only the analytics cluster because the analytics
cluster is the construct that analyzes and stores monitoring data.

vRealize Operations can monitor and perform diagnostics on all of VMware Validated
Design for SDDC systems by using management packs.


Management packs contain extensions and third-party integration software. They add
dashboards, alert definitions, policies, reports, and other content to the inventory of
vRealize Operations.

vRealize Operations is configured with the following Management Packs after
installation:

• Management Pack for NSX for vSphere


• Management Pack for vRealize Log Insight
• Management Pack for vRealize Automation
• Management Pack for Storage Devices

The solution is then configured to collect data from the following virtual infrastructure and cloud management components (a short API sketch follows this list):

• Management vCenter Server and Platform Services Controller
• Compute vCenter Server and Platform Services Controller
• All ESXi hosts
• NSX Manager, Controllers and Edges for Management and Compute Stacks
• vRealize Automation and Components
• vRealize Orchestrator
• vRealize Log Insight
• vRealize Operations Manager (Self Health Monitoring)
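As a hedged illustration, the management packs (adapter kinds) installed on the cluster can also be listed through the Suite API. The host name and credentials below are assumptions, and the JSON field names are read defensively because the exact response shape depends on the product version.

    # Minimal sketch: list the adapter kinds (management packs) known to
    # vRealize Operations through the Suite API. Host and credentials are illustrative.
    import requests

    VROPS = "https://vrops-cluster-01.rainpole.local"    # hypothetical cluster VIP

    resp = requests.get(
        f"{VROPS}/suite-api/api/adapterkinds",
        auth=("admin", "VMware1!"),                      # the Suite API also accepts basic auth
        headers={"Accept": "application/json"},
        verify=False,
    )
    for kind in resp.json().get("adapter-kind", []):
        print(kind.get("key"), "-", kind.get("name"))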


Application Virtual Networks for vRealize Operations

This diagram shows the distributed deployment of vRealize Operations in conjunction with the application virtual networks, distributed routing, and load balancing services for the components.


Hands-on Labs Interactive Simulation: VMware Validated Design for SDDC – Cloud Operations with vRealize Operations
The interactive simulation will allow you to experience steps which are too time-
consuming or resource intensive to do live in the lab environment.

1. Click here to open the interactive simulation. It will open in a new browser
window or tab.
2. When finished, click the “Return to the lab” link to continue with this lab.


VMware Validated Design for SDDC – Cloud Operations with vRealize Operations - Script
Real-time monitoring is a key aspect of Software Defined Data Center operations. Once
an SDDC has been deployed, its ongoing performance and health must be continually
monitored, and administrators must be notified any time problems develop, or capacity
constraints are reached. In the VMware Validated Design, this is achieved using vRealize
Operations and vRealize Log Insight.

In this Hands-on Labs Interactive Simulation, we will review the deployment and configuration of vRealize Operations based on this comprehensive SDDC architecture.

Review Virtual Networking and vRealize Operations

1. Click Networking and Security and then Logical Switches.

The design leverages the capabilities of software-defined networking by establishing virtual networks in the management stack for the SDDC management, automation, and operations solutions. These networks are referred to as application virtual networks. The use of application virtual networks provides isolation and portability across regions during disaster recovery.
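For readers who prefer the API to the UI, the same application virtual networks (NSX logical switches) can be listed from the NSX Manager REST API. This is a minimal sketch only; the NSX Manager address and credentials are assumptions for this lab, and the XML element names should be checked against the NSX for vSphere API guide.

    # Minimal sketch: list NSX for vSphere logical switches (virtual wires),
    # which back the application virtual networks in this design.
    import requests
    import xml.etree.ElementTree as ET

    NSX_MGR = "https://mgmt01nsxm01.sfo01.rainpole.local"   # hypothetical NSX Manager

    resp = requests.get(
        f"{NSX_MGR}/api/2.0/vdn/virtualwires",
        auth=("admin", "VMware1!"),
        verify=False,
    )
    root = ET.fromstring(resp.text)
    # Each <virtualWire> element carries the logical switch name and its VXLAN ID (VNI).
    for vw in root.iter("virtualWire"):
        print(f"{vw.findtext('name')} (VNI {vw.findtext('vdnId')})")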

Let's look at the virtual machines attached to this virtual network.

2. Click MGMT-xRegion01-VXLAN
3. Click the Related Objects tab
4. Click the Virtual Machines button

The deployment consists of two components – the analytics cluster and the remote collector cluster. A four-node analytics cluster stores and analyzes the collected metrics, validates them against established thresholds, and sends alerts when required. A two-node remote collector cluster is also deployed in each region. Remote collectors gather the metrics for the monitored components and forward this data to the analytics cluster.

Here we see the virtual machines contained within this application virtual network. This
virtual network is independent of the region and can be extended across data centers in
a multi-region deployment.

Here we filter and search for the vRealize Operations nodes deployed on this network.

5. Type “vrops” and press the Enter key to filter Virtual Machines


Here they are: the members of the vRealize Operations analytics cluster. The remote collectors reside in another application virtual network that is dedicated to a specific region.

6. Click the NSX Edges button.

A Universal Distributed Logical Router is instantiated in the management stack. Universal Distributed Logical Routers run in the kernel of each ESXi host and provide centralized administration and routing configuration for the Software-Defined Data Center.

7. Select edge-1 / SFOMGMT-LB01.

Solutions that require load balancing for highly distributed and available deployments
use the load balancing capabilities of an NSX Edge Services Gateway in HA mode. This
ensures availability, scalability and performance of the VMware Validated Design for
SDDC solutions.

8. Click Interfaces and then Show All under the IP Address column of OneArmLb.

Here we see the addresses assigned to the load balancer. These are used by the virtual
servers created for the SDDC solutions.

9. Click Cancel
10. Click the Load Balancer button.

The load balancer status indicates that the service is enabled.

Now let’s take a look at the pools.

11. Click Pools.


12. Type “vrops” in the Filter box and press the Enter key to filter the Pools.

By filtering the list of pools we see that one pool has been created for vRealize
Operations.

13. Select the vROPs_Pool and click the Pencil Icon to Edit the entry.

This pool indicates the name, connection algorithm as well as the four nodes of the
analytics cluster we saw earlier including their IP, port and weighted values.

14. Click Cancel.


15. Click Virtual Servers.

Virtual Servers are the load-balanced endpoints that users connect to in order to access
the solutions.
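The same virtual server and pool definitions can also be read from the NSX Edge load balancer configuration API. The sketch below is illustrative only; the NSX Manager address, credentials, and the edge-1 identifier follow this lab's naming and are assumptions, as are the XML element names.

    # Minimal sketch: dump the virtual servers and pools configured on the
    # NSX Edge Services Gateway load balancer (edge-1 in this lab).
    import requests
    import xml.etree.ElementTree as ET

    NSX_MGR = "https://mgmt01nsxm01.sfo01.rainpole.local"   # hypothetical NSX Manager

    resp = requests.get(
        f"{NSX_MGR}/api/4.0/edges/edge-1/loadbalancer/config",
        auth=("admin", "VMware1!"),
        verify=False,
    )
    lb = ET.fromstring(resp.text)
    for vs in lb.iter("virtualServer"):
        print("virtual server:", vs.findtext("name"), vs.findtext("ipAddress"),
              vs.findtext("protocol"), vs.findtext("port"))
    for pool in lb.iter("pool"):
        members = [m.findtext("ipAddress") for m in pool.iter("member")]
        print("pool:", pool.findtext("name"), "members:", members)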

16. Type “vrops” in the Filter box to filter the Virtual Servers.


By filtering the list of virtual servers, we see that two virtual servers have been created for vRealize Operations. Here we select the virtual server for the HTTPS protocol.

17. Click the down arrow in the Virtual Server list and then select the
virtualServer-5 for the https Protocol.
18. Click the Pencil Icon to Edit the entry.

The virtual server indicates if it is enabled, which it is. It also indicates the application profile, a descriptive name, an IP address from those we saw earlier, the protocol, the port, the pool, and connection rates and limits, if any.

19. Click Cancel.


20. Click the Home hamburger menu and then select Home.

vRealize Operations Cluster Configuration

Now that we understand the deployment of the vRealize Operations analytics cluster on an application virtual network, let’s explore the cluster configuration and status.

21. Click Second Browser Tab.

Here we see that the cluster is online and operational.

We also see that High Availability has been enabled for the cluster.

All four nodes in the vRealize Operations analytics cluster are online and running. This includes the master node, the master replica node, and two data nodes.

We also see that the two remote collector nodes are online and running in their own application virtual network.

vRealize Operations and Management Packs

22. Click Third Browser Tab.

Now let’s explore the basic configuration of the vRealize Operations Management Packs
used for the SDDC.

Here we are connecting to the virtual server on the load balancer that is distributing and
managing our connection to the solution.

Click Administration on the left-hand side navigation bar.

The VMware Validated Design for SDDC contains several solutions for network, storage,
and cloud management and operations. You can monitor and perform diagnostics on all
of them by using management packs.


vRealize Operations is configured with the following Management Packs after installation:

* Management Pack for vRealize Automation

* Management Pack for Storage Devices

* Management Pack for NSX for vSphere

* Management Pack for vRealize Log Insight

These are in addition to the native management packs for vSphere and operating systems.

23. Click Operating System Management Pack


24. Click Management Pack for Storage Devices
25. Click the down arrow on the scroll bar.
26. Click Management Pack for NSX-vSphere
27. Click vSphere
28. Click Home Icon in Navigation Pane
29. Click Environment

Management packs contain extensions that add dashboards, alert definitions, policies, reports, and other content to the inventory of vRealize Operations.

Here we see the addition of inventory items collected and available from these management packs.

30. Click Home Icon in Navigation Pane

Let’s explore the dashboards added by the management pack extensions.

Here we see the default dashboards for vSphere.

31. Click Dashboard List and then vSphere Dashboards

We also see the dashboards added by the management packs, such as:

* Management Pack for Storage Devices

32. Click on MPSD

* Management Pack for NSX for vSphere

33. Click on NSX-vSphere


34. Click on MPND Dashboards

* Management Pack for vRealize Automation

35. Click on vRealize Automation


And the default Recommendations, Diagnose, Self Health and Workload Utilization
dashboards.

vRealize Operations Sample Dashboard

Let’s take a look at an example dashboard. Here we select the Management Pack for
NSX and its main dashboard.

36. Click NSX-vSphere and then NSX-vSphere Main

This dashboard provides an overall view of the health of key NSX components in our
SDDC and any alerts that have been generated.

Here we see that a critical alert is reported on the NSX Manager for our compute stack.
Let’s take a look at it.

37. Click comp01nsxm01.sfo01.rainpole.local

Here we see a summary of the Health, Risk, and Efficiency for the inventory object. We also see that an item appears under the Top Risk Alerts, indicating that our ‘Backups are not using Secure FTP’.

38. Click Backups are not using secure FTP in the Risks section.

By selecting the alert, we can review what is causing the issue and any recommendations.

39. Click on the Arrow Under ‘What is Causing the Issue?’

In this example, we see that only FTP has been selected for our backup protocol when
we should be using Secure FTP.


Module 5 - VMware Validated Design for SDDC – Cloud Operations with vRealize Log Insight (15 minutes)


Introduction
This module introduces the fundamentals of cloud operations with VMware vRealize Log
Insight in the VMware Validated Design for Software-Defined Data Center.

This lab is based on the VMware Validated Design for Software-Defined Data Center 2.0.

In this lab you'll be introduced to some key architecture concepts of the VMware
Validated Design for Software-Defined Data Center.

We've included several architecture reference diagrams and descriptions to provide visual context to the topics seen in the lab simulations.

You can review these diagrams now and/or toggle back to them as you take the lab by
changing the active browser tab or window.

If you're ready to take the lab you can advance the manual and launch the interactive
simulation.

Architecture Reference Diagrams

Architecture reference diagrams for Module 5 are provided in the following sections.

vRealize Log Insight Logical Design

This diagram shows the dual-region deployment of vRealize Log Insight clusters in the VMware Validated Design for Software-Defined Data Center.

Before outlining the deployment topology, we will outline the architecture of vRealize
Log Insight.

vRealize Log Insight collects data from ESXi hosts using the syslog protocol. It connects
to vCenter Server to collect events, tasks, and alarms data, and integrates with vRealize
Operations Manager to send notification events and enable launch in context. It also
functions as a collection and analysis point for any system capable of sending syslog
data. In addition to syslog data an ingestion agent can be installed on Linux or Windows
servers to collect logs. This agent approach is especially useful for custom logs and
operating systems that don't natively support the syslog protocol, such as Windows.
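To make the syslog path concrete, the sketch below shows one hedged way to point every ESXi host in a vCenter Server inventory at the regional vRealize Log Insight VIP by setting the Syslog.global.logHost advanced option with pyVmomi. The vCenter address, credentials, and the Log Insight VIP name are assumptions for this lab; in practice you would also reload the host syslog service and confirm the outgoing syslog firewall rule.

    # Minimal sketch: set Syslog.global.logHost on all ESXi hosts via pyVmomi so
    # they forward logs to the regional vRealize Log Insight cluster VIP.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()            # lab certificates are self-signed
    si = SmartConnect(host="mgmt01vc01.sfo01.rainpole.local",   # hypothetical vCenter
                      user="administrator@vsphere.local",
                      pwd="VMware1!", sslContext=ctx)

    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.HostSystem], True)
    for host in view.view:
        opt_mgr = host.configManager.advancedOption
        opt_mgr.UpdateOptions(changedValue=[vim.option.OptionValue(
            key="Syslog.global.logHost",
            value="udp://vrli-cluster-01.sfo01.rainpole.local:514")])  # hypothetical VIP
        print("syslog target set on", host.name)

    Disconnect(si)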

For high availability and scalability, several instances of vRealize Log Insight can be deployed in a cluster, where they can have one of the following roles:

• Master Node - Required initial node in the cluster. The master node is responsible for queries and log ingestion. The Web user interface of the master node serves as the single pane of glass for the cluster. All queries against data are directed to the master, which in turn queries the workers as appropriate.
• Worker Node - Enables scale-out in larger environments. A worker node is responsible for ingestion of logs. A worker node stores logs locally. If a worker node is down, the logs on that worker become unavailable. You need at least two worker nodes to form a cluster with the master node.
• Integrated Load Balancer - Provides high availability. The ILB runs on one of
the cluster nodes. If the node that hosts the ILB Virtual IP (VIP) address stops
responding, the VIP address is failed over to another node in the cluster.

In the dual-region deployment of the VMware Validated Design for Software-Defined Data Center, a vRealize Log Insight cluster is established in each region and consists of a minimum of three nodes – a master node and two worker nodes. This allows for continued availability and increased log ingestion rates.

vRealize Log Insight clients connect to the load balancer VIP address and use the user interface and the ingestion interfaces (via syslog or the Ingestion API) to send logs to vRealize Log Insight.

The compute and storage resources of the vRealize Log Insight instances allow for scale-up, and the design can perform log archiving onto an NFS export that each vRealize Log Insight node can access.

vRealize Log Insight supports alerts that trigger notifications about its health. The
following types of alerts exist in vRealize Log Insight:


• System Alerts. vRealize Log Insight generates notifications when an important system event occurs, for example, when the disk space is almost exhausted and vRealize Log Insight must start deleting or archiving old log files.
• Content Pack Alerts. Content packs contain default alerts that can be configured to send notifications. These alerts are specific to the content pack and are disabled by default.
• User-Defined Alerts. Administrators and users can define their own alerts
based on data ingested by vRealize Log Insight.

vRealize Log Insight integrates with vRealize Operations Manager to provide a central
location for monitoring and diagnostics in the following ways:


• Notification Events. Forward notification events from vRealize Log Insight to vRealize Operations Manager.
• Launch in Context. Launch vRealize Log Insight from the vRealize Operations Manager user interface. You must install the vRealize Log Insight management pack in vRealize Operations Manager.

We also protect the vRealize Log Insight deployment by providing centralized role-based authentication and secure communication with the other components in the Software-Defined Data Center. The design enables role-based access control by using an existing Active Directory to realize fine-grained role- and privilege-based access for administrator and operator roles.

To simplify the design implementation, log sources that are syslog-capable are configured to send log data directly to vRealize Log Insight, which supports syslog natively.

We forward syslog data in vRealize Log Insight from Region B to Region A by using the Ingestion API. The vRealize Log Insight Ingestion API uses TCP communication. In contrast to syslog, the forwarding module supports the following features for the Ingestion API (a minimal ingestion example follows this list):


• Forwarding to other vRealize Log Insight instances


• Both structured and unstructured data, that is, multi-line messages.
• Metadata in the form of tags
• Client-side compression
• Configurable disk-backed queue to save events until the server acknowledges the
ingestion.
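As a minimal illustration of the Ingestion API itself, the sketch below posts a single event to a vRealize Log Insight cluster. The VIP name, port, and agent identifier are assumptions; check the ingestion API reference for your Log Insight version before relying on the exact path and payload shape.

    # Minimal sketch: send one event to vRealize Log Insight via the ingestion API.
    import time
    import requests

    VRLI = "https://vrli-cluster-01.sfo01.rainpole.local:9543"   # hypothetical ILB VIP
    AGENT_ID = "5b7f0c10-0000-0000-0000-000000000001"            # arbitrary agent UUID

    event = {
        "events": [{
            "text": "storage path redundancy restored on esxi host",
            "timestamp": int(time.time() * 1000),                # milliseconds since epoch
            "fields": [{"name": "appname", "content": "hol-demo"}],
        }]
    }
    resp = requests.post(f"{VRLI}/api/v1/events/ingest/{AGENT_ID}",
                         json=event, verify=False)
    print(resp.status_code, resp.text)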

vRealize Log Insight collects logs from the following virtual infrastructure and cloud management components by using configured content packs:

• Management vCenter Server and Platform Services Controller
• Compute vCenter Server and Platform Services Controller
• All ESXi hosts
• NSX Manager, Controllers and Edges for Management and Compute Stacks
• vRealize Automation and Components (Appliance and Windows-based)
• vRealize Orchestrator
• vRealize Log Insight
• vRealize Operations Manager


Application Virtual Networks for vRealize Log Insight

This diagram shows the distributed deployment of vRealize Log Insight in conjunction with the application virtual networks, distributed routing, and load balancing services for the components.


Hands-on Labs Interactive Simulation: VMware Validated Design for SDDC – Cloud Operations with vRealize Log Insight
This is a placeholder for the interactive simulation. The link below will only take you to a stub demo.

The interactive simulation will allow you to experience steps which are too time-
consuming or resource intensive to do live in the lab environment.

1. Click here to open the interactive simulation. It will open in a new browser
window or tab.
2. When finished, click the “Return to the lab” link to continue with this lab.


VMware Validated Design for SDDC – Cloud Operations with vRealize Log Insight - Script
Real-time monitoring is a key aspect of Software Defined Data Center operations. Once
an SDDC has been deployed, its ongoing performance and health must be continually
monitored, and administrators must be notified any time problems develop. In the
VMware Validated Design, this is achieved using vRealize Log Insight and vRealize
Operations.

In this interactive simulation, we will review the deployment and configuration of vRealize Log Insight based on this comprehensive SDDC architecture.

VMware vRealize Log Insight provides centralized log aggregation and log analytics,
increasing the operational visibility and facilitating troubleshooting and root cause
analysis. The VMware Validated Design for SDDC deploys a separate vRealize Log
Insight cluster in each region.

Log into vRealize Log Insight Administration

Begin by logging into the vRealize Log Insight Administration.

1. Click Login. Default is System Monitoring.


2. Scroll down to review resources.
3. Scroll up to Top.
4. Click Statistics.
5. Review Statistics.

Review Nodes in the Cluster. One Master and Two Workers

6. Click Cluster.

Each cluster is composed of one master node and two worker nodes. The design leverages the capabilities of software-defined networking by establishing virtual networks for this SDDC operations solution. These networks are referred to as application virtual networks. All nodes are configured with an integrated load balancer. This design allows for continued availability and increased log ingestion rates.

Once the vRealize Log Insight cluster has been deployed, the various components of the
Software Defined Data Center are configured to forward logs to the cluster where they
are processed and analyzed.


Review Integrated Load Balancer

7. Click Access Control

Role Based Access Control provides a powerful way to control access to events within vRealize Log Insight. Not only does it make it possible to support multiple users and groups within the product, it also makes it possible to restrict access based on job function using roles and data sets.

Review Users and Groups Added from Active Directory

8. Click Hosts

Administrators can view each host or device that has at least one local event in the
cluster. In addition to the hostname you can determine the last time vRealize Log Insight
received an event.

Review List of Hosts that are Sending Syslog Data to the Cluster.

9. Click Agents

vRealize Log Insight provides the ability to use an agent to collect logs from clients. The agent supports sending events over syslog, abiding by the syslog RFC, and over the solution’s ingestion API – this means the agent will work with any remote syslog destination. The agent supports both Linux and Microsoft Windows.

Review Agents Listed.

Review the vCenter Server and Platform Services Controller Appliances Sending Syslog
Data.

10. Click Dropdown and Select vRealize Automation 7 – Windows

Here we can see another custom group has been created for all of the Microsoft
Windows-based IaaS instances for vRealize Automation.

Review the vRealize Automation Window-based Systems Sending Syslog Data.

11. Click vSphere under Integration.


Review Registration of the Two vCenter Server Instances with the Cluster

vRealize Log Insight has tight integration with other VMware products. One of the out-of-
the-box integrations is with VMware vSphere. This integration allows Log Insight to
perform two operations:

* Collect events, tasks, and alarms from the vCenter Server database and ingest them
as log messages and

* Configure ESXi hosts to forward syslog events

Review Registration of the Two vCenter Server Instances with the Cluster. Also Collecting
Events, Tasks and Alarms Data.

12. Click vRealize Operations.

vRealize Log Insight integrates directly with vRealize Operations to provide insight
between the structured and unstructured data. For example, inventory mapping, alert
integration, and a two-way launch in context. With this integration you get a single pane
of glass from which to ensure the health of your environment, get notified of detected
issues, automatically respond to issues detected and perform complete troubleshooting
and root cause analysis.

Review that vRealize Log Insight is Integrated with vRealize Operations and Uses its Load-Balanced IP. Alerts Integration and Launch in Context are Also Enabled.

13. Click the List Dropdown (Hamburger Icon) and select Content Packs

Review Content Pack Marketplace

vRealize Log Insight content packs are immutable, or read-only, plugins that provide pre-defined knowledge about a specific set of events in a format easily digestible by administrators, engineers, monitoring teams, and executives.

A content pack marketplace is natively available within the product from the Content Packs page. This provides you with immediate access to content packs without needing to leave the product.

Review Content Pack Marketplace.

15. Scroll down to Bottom

Review the Content Packs that are Installed with the VVD for SDDC Architecture.

16. Scroll Up to Top


By default, vRealize Log Insight ships with the vSphere content pack, but additional
content packs can be imported as needed.

Out-of-the-Box content packs.

17. Click General


18. Click VMware - vSphere

Here we can see additional content packs that are deployed and configured in the
VMware Validated Design for SDDC. These include packs for: Virtual SAN, NSX, vRealize
Automation, vRealize Operations and vRealize Orchestrator.

The following have been installed.

• VMware - NSX
• VMware - vRealize Orchestrator
• VMware - Virtual SAN
• VMware - vRealize Automation
• VMware - vRealize Operations

19. Click vRealize Automation

A content pack is made up of information that can be saved from either the Dashboards
or Interactive Analytics pages in Log Insight. These include:

• Queries
• Fields
• Aggregations
• Alerts
• Dashboards

Review the List of Dashboards from the Content Pack

20. Click Dashboards

The dashboards page has an overview section. It contains mostly chart widgets and allows you to quickly digest log data and determine potential issues in your environment.

Review General Overview Dashboards.

Review some of the dashboards that come with the vRealize Automation content pack
included in the architecture.

21. Click VMware - vRA-7 Dropdown Menu and select General.


Dashboard widgets help you visualize information.

• A Chart widget that contains a visual representation of events with a link to a saved query.
• A Query List that contains title links to saved queries.
• A Field Table that contains events where each field represents a column.

Review dashboard VMware - vSphere to look at some of the dashboards that are
included by default with the out-of-the-box vSphere content pack.

22. Click General Dropdown menu and select VMware-vSphere

Content Pack dashboards cannot be modified, but you can clone these dashboards to
your Custom Dashboards space and modify the clones. A clone has been made and
named 'MyDashboards'.

You can add, modify, and delete dashboards in your Custom Dashboards space.

Review that the Item Has Been Added to the Dashboard

23. Click My Dashboards

Log Insight provides high performance search and visualization of log data for efficient
troubleshooting across heterogeneous environments with its Interactive Analytics.

24. Click Interactive Analytics

There is no need to learn a proprietary query language. Simply start typing; the search query and results will auto-populate.

Here in this example, we are troubleshooting a storage performance issue over the last
24 hours.
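As an aside, a comparable search can be run against the vRealize Log Insight query API instead of the UI. This is only a sketch under stated assumptions: the host, credentials, constraint syntax, and response fields shown here should be verified against the API reference for your version.

    # Minimal sketch: search for events whose text contains "scsi latency" via the
    # vRealize Log Insight query API. Time ranges can be added as extra path
    # constraints per the API documentation (not shown here).
    import requests

    VRLI = "https://vrli-cluster-01.sfo01.rainpole.local:9543"   # hypothetical ILB VIP

    # Authenticate and obtain a session token.
    session = requests.post(f"{VRLI}/api/v1/sessions",
                            json={"username": "admin", "password": "VMware1!",
                                  "provider": "Local"},
                            verify=False).json()
    headers = {"Authorization": f"Bearer {session['sessionId']}"}

    resp = requests.get(f"{VRLI}/api/v1/events/text/CONTAINS%20scsi%20latency",
                        params={"limit": 20, "timeout": 30000},
                        headers=headers, verify=False)
    for ev in resp.json().get("events", []):
        print(ev.get("timestamp"), str(ev.get("text", ""))[:120])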

25. Click in search filter field


26. Type ‘scsi latency’ or hit 'left-arrow' key and Press Enter

Change the Scope of Time to ‘Latest 24 Hours of Data’

27. Click Scope of Time dropdown menu (Latest 5 minutes of data) and select
Latest 24 Hours of Data

Review Results

28. Click in search filter field


29. Type ‘deteriorated’ or hit 'left-arrow' key next to ‘scsi latency’ and select deteriorated
30. Click Search button


Adding filters enables us to easily refine the search criteria to specified thresholds and constraints.

31. Click + Add Filter and click Text Filter to use the default
32. Click in search filter field
33. Type ‘vmw_esxi_scsi_l' or hit 'left-arrow' key and select
'vmw_esxi_scsi_latency (VMware-vSphere)'
34. Click comparison dropdown (=) and select ‘>’ for the Filter
35. Click in the items search field and type ‘10000’ or hit 'left-arrow' key
36. Click the Search button

Chart graphs can be modified based on visual preference. Change chart graph.

37. Click chart graph dropdown (Automatic) and select Line


38. Select Area

These visualizations and queries can then be saved to a custom dashboard with a custom name and description.

39. Click Add to Dashboard


40. Click in Name field
41. Type 'scsi latency deteriorated > 10000' or hit 'left-arrow' key
42. Click Add
43. Click Dashboards

Notice that the custom dashboard has been added to My Dashboards. Let's now delete it.

44. Click Gear icon dropdown menu and select 'Delete...'


45. Click Delete

Conclusion

This concludes the review of deployment and configuration of vRealize Log Insight in the
VMware Validated Design for SDDC.


Module 6 - VMware Validated Design for SDDC – Cloud Management and Automation with vRealize Automation (15 minutes)


Introduction
This module introduces the fundamentals of cloud management and automation with
VMware vRealize Automation in the VMware Validated Design for Software-Defined Data
Center.

This lab is based on the VMware Validated Design for Software-Defined Data Center 2.0.

In this lab you'll be introduced to some key architecture concepts of the VMware
Validated Design for Software-Defined Data Center.

We've included several architecture reference diagrams and descriptions to provide visual context to the topics seen in the lab simulations.

You can review these diagrams now and/or toggle back to them as you take the lab by
changing the active browser tab or window.

If you're ready to take the lab you can advance the manual and launch the interactive
simulation.

Architecture Reference Diagrams

Architecture reference diagrams for Module 6 are provided in the following sections.

Cloud Management Platform Logical Architecture

Let’s take a look at the logical deployment of the Cloud Management Portal systems.

vRealize Automation Appliances

The vRealize Automation virtual appliance (seen as ”vRA”) includes the cloud management system and database services. vRealize Automation allows self-service provisioning and management of cloud services, as well as authoring of blueprints, administration, and governance.

• Two instances of the vRealize Automation virtual appliance are deployed to achieve redundancy, enabling an active/active front-end portal for higher availability.
• The two appliances also replicate data using the embedded PostgreSQL database.

vRealize Automation IaaS Web Servers

The vRealize Automation IaaS Web server (seen as “IWS”) provides a user interface within the vRealize Automation portal Web site for the administration and consumption of IaaS components. This is a separate component from the vRealize Automation appliance.

• Two virtual machines are deployed to run the vRealize Automation IaaS Web server services; to achieve redundancy, we enable active/active load balancing for higher availability.

vRealize Automation IaaS Model Manager

The vRealize Automation IaaS Model Manager (seen as "IMS") and Distributed Execution Management (seen as "DEM") server are at the core of the vRealize Automation IaaS platform. The vRealize Automation IaaS Model Manager and DEM server support several functions:


• Manages the integration of vRealize Automation IaaS with external systems and
databases.
• Provides multi-tenancy.
• Provides business logic to the DEMs.
• Manages business logic and execution policies.
• Maintains all workflows and their supporting constructs.

A Distributed Execution Manager (DEM) runs the business logic of custom models,
interacting with the database and with external databases and systems as required.
DEMs also manage cloud and physical machines.

Each DEM instance acts in either an Orchestrator role or a Worker role. The DEM
Orchestrator monitors the status of the DEM Workers. If a DEM worker stops or loses the
connection to the Model Manager, the DEM Orchestrator puts the workflow back in the
queue. It manages the scheduled workflows by creating new workflow instances at the
scheduled time and allows only one instance of a particular scheduled workflow to run
at a given time. It also preprocesses workflows before execution. Preprocessing includes
checking preconditions for workflows and creating the workflow's execution history.

• Two virtual machines are deployed to run both the Automation IaaS Model
Manager and the DEM Orchestrator services in a load-balanced pool.

The vRealize Automation IaaS DEM Workers are responsible for the provisioning and de-provisioning tasks initiated by the vRealize Automation portal. DEM Workers communicate with vRealize Automation endpoints.

• Two DEM Worker virtual machines are deployed, each with three DEM Worker instances.

A minimum of one two-node vRealize Orchestrator cluster is also deployed to provide a production-class orchestration engine.

The vRealize Automation IaaS Proxy Agent is a Windows program that proxies
information gathering from vCenter Server back to vRealize Automation. The IaaS Proxy
Agent server provides the following functions.


1. vRealize Automation IaaS Proxy Agent can interact with different types of
hypervisors and public cloud services, such as Hyper-V and AWS. For this design,
only the vSphere agent is used.
2. vRealize Automation does not itself virtualize resources, but works with vSphere
to provision and manage the virtual machines. It uses vSphere agents to send
commands to and collect data from vSphere.

• Two vRealize Automation vSphere Proxy Agent virtual machines are deployed on
a separate virtual network in each region. This will allow for independent failover
of the main vRealize Automation components across regions.

vRealize Business for Cloud

vRealize Business for Cloud Standard provides end-user transparency into the costs that are associated with operating workloads. It gathers and aggregates the financial cost of workload operations, providing greater visibility both during a workload request and on a periodic basis, regardless of whether the costs are "charged back" to a specific business unit or are "shown back" to illustrate the value that the SDDC provides. vRealize Business integrates with vRealize Automation to display costing during workload requests and on an ongoing basis with cost reporting by user, business group, or tenant. Additionally, tenant administrators can create a wide range of custom reports to meet the requirements of an organization.

• A vRealize Business for Cloud appliance is deployed in Region A and integrates with vRealize Automation.
• A vRealize Business remote data collector is deployed in each region-specific logical network.

Business Groups and Reservations

This diagram shows a logical view of the initial business groups, reservations, and fabric groups in vRealize Automation.

The VMware Validated Design for SDDC implements a single vRealize Automation tenant
with two initial business groups:

1. Production
2. Development.


Within each business group, the tenant administrators are able to manage users and groups, apply tenant-specific branding, enable notifications, configure business policies, and manage the service catalog.


Application Virtual Networks for vRealize Automation, vRealize Orchestrator and vRealize Business for Cloud

This diagram illustrates the distributed deployment topology, application virtual networks, distributed routing, and load balancing services for the vRealize Automation, vRealize Orchestrator, and vRealize Business for Cloud components.

The design uses NSX logical switches to abstract the solutions and services onto
application virtual networks. This abstraction allows the solutions to be hosted in any
given region regardless of the underlying physical infrastructure such as network
subnets, compute hardware, or storage types. This design places the vRealize
Automation application and its supporting services in Region A. The same instance of
the application manages workloads in both Region A and Region B.


Templates and Content Library

This diagram shows the use of the content library to share VM-related content, such as templates consumed by vRealize Automation, across regions.

Here we illustrate how some standard virtual machine templates are consumed by vRealize Automation in Region A; you can see Microsoft Windows and two Linux distributions. These templates are imported into a published content library in Region A.

Region B subscribes to the Region A library and synchronizes the content. The templates
are then exported to a format consumable by vRealize Automation in Region B.

In both regions, the templates and content libraries are stored on NFS.


Hands-on Labs Interactive Simulation: VMware Validated Design for SDDC – Cloud Management and Automation with vRealize Automation
The interactive simulation will allow you to experience steps which are too time-
consuming or resource intensive to do live in the lab environment.

1. Click here to open the interactive simulation. It will open in a new browser
window or tab.
2. When finished, click the “Return to the lab” link to continue with this lab.


VMware Validated Design for SDDC – Cloud Management and Automation with vRealize Automation - Script
In this Interactive Simulation, we will review the deployment and configuration of
vRealize Automation based on this comprehensive SDDC architecture.

vRealize Automation provides the service catalog and self-service portal for the
Software-Defined Data Center.

• The service catalog is a library of templates and services that can be deployed in the private cloud.
• The self-service portal is the interface used to author, administer, and consume
the templates and services that exist in the Service Catalog.

Virtual Appliances

1. Click on the Networking and Security icon, then Logical Switches.

The design leverages the capabilities of software-defined networking by establishing virtual networks in the management stack for the SDDC management, automation, and operations solutions. These networks are referred to as application virtual networks. The use of application virtual networks provides isolation and portability across regions during disaster recovery.

Let's take a look at the virtual machine appliances attached to the network.

2. Click on MGMT-xRegionA01-VXLAN and then the Virtual Machines tab.

The deployment consists of multiple components. These include the vRealize Automation Appliance, IaaS Web Server, IaaS Model Manager, DEM Orchestration Server, DEM Worker Server, and the IaaS Proxy Server. vRealize Orchestrator is composed of a single virtual appliance. To provide redundancy and ensure availability, these components are deployed in pairs.

Here we see the virtual machines contained within this application virtual network. This
virtual network is independent of the region and can be extended across data centers in
a multi-region deployment.

3. Filter and search for the vRealize Automation and vRealize Orchestrator systems
deployed on this network by clicking in the Filter box and pressing the 'Enter'
key.


Here they are: the systems that comprise the vRealize Automation platform. Note that the IaaS Proxy Servers reside in another application virtual network that is dedicated to a specific region.

NSX Edges

4. Let's take a look at the NSX Edge Appliances by clicking the NSX Edges button.

A Universal Distributed Logical Router is instantiated in the management stack. Universal Distributed Logical Routers run in the kernel of each ESXi host and provide centralized administration and routing configuration for the Software-Defined Data Center.

5. Click on edge-1 / SFOMGMT-LB01.

Solutions that require load balancing for highly distributed and available deployments
use the load balancing capabilities of an NSX Edge Services Gateway in HA mode. This
ensures availability, scalability and performance of the VMware Validated Design for
SDDC solutions.

6. Click on OneArmLb in the interfaces list and then Show All.

Here we see the addresses assigned to the load balancer. These are used by the virtual
servers created for the SDDC solutions.

Click the Cancel button.

7. Next, click on the Load Balancer tab.

Pools and Virtual Servers

Now let’s take a look at the pools.

8. Click inside the Filter search box and then press the 'Enter' key.
9. By filtering the list of pools, we see that multiple pools have been created for the distributed vRealize Automation components.
10. Select the vra-svr-443.

This pool indicates the name, connection algorithm, as well as the two nodes of the vRealize Automation appliance we saw earlier, including their IP, port, and weighted values.

11. Click on the Virtual Servers link.

Virtual Servers are the load-balanced endpoints that users connect to in order to access the solutions.


12. Click inside the Filter search box and then press the 'Enter' key.

By filtering the list of virtual servers, we see that five virtual servers have been created for vRealize Automation. Here we select the virtual server for the vRealize Automation appliance’s HTTPS protocol.

13. Select the vra-svr-443 server and click the Pencil Icon to edit its properties.

The virtual server indicates if it is enabled, which it is. It also indicates the application profile, a descriptive name, an IP address from those we saw earlier, the protocol, the port, the pool, and connection rates and limits, if any.

14. When you have finished reviewing the settings, click the 'Cancel' button.

Workload Virtual Network

Now that we understand the deployment of vRealize Automation on an application virtual network, let’s explore some of the virtual networks created for workloads that will be deployed through vRealize Automation.

15. Click the Global Configuration link.

Here we can see that the load balancer is enabled.

16. Click MGMT-xRegionA01-VXLAN and then Networking & Security in the Navigation Pane.
17. In the NSX Manager drop-down box, select 172.27.12.66.

Here we see that virtual networks have been created for Web, App and Database roles
in both Production and Development.

18. Click NSX Edges and then click UDLR01.

Here we see that the Universal Distributed Logical Router has been connected to each of these virtual networks and has been assigned a network address.

Now that we have reviewed the deployment, let’s log in and explore some of the
vRealize Automation configuration in this architecture.

19. Click Home Icon and select Host and Clusters.

vRealize Automation - Endpoints and Compute Resources

20. Click on the Second Browser Tab, VMware Identity Manager.


21. Click in the first field to auto populate the user name and then in the password
field to auto populate it. Finally, click the 'Sign in' button.


Here we are at the Home tab in vRealize Automation. On-screen widgets provide
information on such items as open requests, recent requests, items owned by the user,
notifications, such as approvals, and more.

22. Let's review the Endpoints by clicking Infrastructure in the Top Navigation.
23. Click Endpoints and then Endpoints again.

Endpoints allow vRealize Automation to communicate with infrastructure services, such as vSphere, vRealize Orchestrator, and NSX.

Here we’ve added endpoints for our vSphere and vRealize Orchestrator. Note that this is
pointing to the vRealize Orchestrator instance deployed on the application virtual
network and load balanced by an NSX Edge Services Gateway. The vSphere Endpoint
allows vRealize Automation to communicate with the vSphere environment and discover
compute resources, collect data, and provision machines. The vRealize Orchestrator
Endpoint allows vRealize Automation to communicate with NSX and run out-of-the-box
or custom workflows during the machine lifecycle.

24. Let's take a look at the Compute Resources. Start by clicking '<Infrastructure'
25. Next, click Compute Resources and then Compute Resources again.

A compute resource is an object that represents a host, host cluster, or pool in a virtualization platform on which machines can be provisioned.

An administrator can add compute resources to or remove compute resources from a fabric group. A compute resource can belong to more than one fabric group, including groups that different fabric administrators manage.

Here we see that the compute and edge clusters have been added as compute
resources.

Resource Reservations - Storage and Network

After a compute resource is added to a fabric group, a fabric administrator can create
reservations on it for specific business groups. Users in those business groups can then
be entitled to provision machines on that compute resource.

Information about the compute resources on each infrastructure source endpoint and
machines provisioned on each compute resource is collected at regular intervals.

26. To take a look at the current Resource Reservations, click <Infrastructure, then Reservations and Reservations again.
27. Here we see that a reservation has been created for our production business group. To view more details, click on SFO01-Comp01-Prod-Res01. Let’s edit this reservation to remove NFS and add Virtual SAN in its place. Start by clicking on the Resources tab.


28. Click the Down Arrow on the scroll bar to scroll down the list of storage paths and uncheck SFO01A-DS-NFS-Primary-VRA.
29. Click Yes to confirm you want to remove this storage path.
30. Now check SFO01A-DS-VSAN01-COMP01 to add Virtual SAN storage to the
reservation.
31. Click in the “This Reservation Reserved” field twice to enter “4000” and then click in the “Priority” field to set it to “1”.
32. Click OK to confirm.
33. We can also assign network resources to the reservation. To do this, start by
clicking the Up Arrow on the scroll bar and select the Network tab.

Here we see that the production virtual networks have been added to the reservation
and assigned a Network Profile.

35. Click the Down Arrow on the scroll bar.


36. To save the changes we have made to this reservation, click the Up Arrow in the
scroll bar and click OK.

We have now saved the new storage policy to our reservation.

Network Profiles

Network profiles are used to specify network settings in reservations, relative to a


network path.

When a custom property is not used, vRealize Automation uses a reservation network path for the machine NIC for which the network profile is specified.

Network profiles then configure network settings during the machine provisioning. They
may also specify the configuration of NSX Edge devices.

You can create a network profile to define a type of available network. This includes external network profiles, as well as templates for network address translation and routed network profiles that build NSX logical switches and appropriate routing settings for a new network path to be used by provisioned machines, as assigned in the blueprint.

37. To view the current network profiles, click the Network Profiles tab.
38. Click on the Key Pairs link.
39. Click Ext-Net-Profile-Production-Web and click Pencil Icon.

You can specify the ranges of IP addresses that network profiles can use. Each IP address in the specified ranges that is allocated to a machine is reclaimed for reassignment when the machine is destroyed.

40. To review the current IP Allocations, click on the IP Ranges tab.


41. When you have finished reviewing the IP Ranges and allocations, click Cancel to
continue.


Blueprints

Blueprints are built with a dynamic drag-and-drop design canvas, allowing you to choose any supported component, drag it onto the canvas, build dependencies, and publish to the catalog.

The supported components include machine shells for all the supported OOTB
platforms, software components, endpoint networks, NSX-provided networks, XaaS
components, and even other blueprints that have already been published.

Once dragged over, you can build the necessary logic and any needed integration for each component of that particular service.

Application Authoring is done from within the canvas with the same drag-and-drop
capability.

vRealize Automation is the consumption plane for NSX. It consumes and automates NSX, including the ability to dynamically build on-demand network services. The canvas also allows the drag-and-drop of NSX security groups and supports app-centric isolation.

Blueprints can also be exported as YAML code using the CloudClient. Once exported, you can edit, change, and manipulate the content however you see fit, then import it as a new blueprint. And since it’s just text, the YAML can be shared, edited, and imported into other environments.
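Because the exported blueprint is ordinary YAML, even a few lines of scripting can prepare a copy for another environment. The sketch below is illustrative only: the file name and the top-level 'name' key are assumptions, since the exact schema depends on the CloudClient export.

    # Minimal sketch: load an exported blueprint, rename it, and write a copy
    # that could be imported into another vRealize Automation environment.
    import yaml

    with open("3-tier-app-blueprint.yaml") as f:      # hypothetical exported file
        blueprint = yaml.safe_load(f)

    # Rename the copy before import (assumes a top-level 'name' key exists).
    if isinstance(blueprint, dict) and "name" in blueprint:
        blueprint["name"] = blueprint["name"] + "-copy"

    with open("3-tier-app-blueprint-copy.yaml", "w") as f:
        yaml.safe_dump(blueprint, f, sort_keys=False)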

Let's take a look at how these work.

42. Start by clicking Design in the Top Navigation.


43. Select the 3 Tier App by clicking on it.
44. Click Software Components under Categories.
45. Click Blueprints under Categories.
46. Click Network & Security under Categories.

Here we see that we have three vSphere Machines, Web-0, App-0 and DB-0.

47. Let’s look at their configuration on the canvas. Start by clicking on Web-0.

We see that we will use a specific machine prefix for the virtual machine name.

48. To see how the Web-0 will be built, click on Build Information.

Here we see that this machine will be created by cloning a Ubuntu Linux template and
running a customization.

49. To view the resources the machine will use, click the Machine Resources tab.

We can also set the minimum and maximum CPU, Memory and Storage for this web
server.


50. Let's take a look at the network settings. Click the Network tab.

And we assign a network profile to allocate the network settings during machine
provisioning. This machine will be assigned a static IP address.

51. We can review the settings for the application and database server as well. Let's start with the application server by clicking on App-0.

We can see that the application server will also use a machine prefix when it is created.

52. By clicking on the Network tab, we can see that it too will use a static IP
address.
53. Now let's take a look at the database server. Click on DB-0.
54. It too will be using a machine prefix and by clicking on the Network tab it will
also be assigned a static IP address.
55. When you have finished, click the Cancel button at the bottom and click Yes to
discard your changes.

Catalogs and Requests

56. Let's take a look at the Catalog and request an item. To get started, click on the
Catalog tab in the Top Navigation.

The catalog provides a self-service portal for requesting services and also enables
business users to manage their own provisioned resources.

The items that are available in the catalog are grouped into service categories, which helps you find what you’re looking for. By selecting a catalog item, you can view its details to confirm that it is what you want before submitting a request.

When you request a catalog item, a form appears where you can provide information such as the reason for the request.
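The same request flow can be driven through the vRealize Automation 7 REST API rather than the portal. The sketch below is a hedged outline only; the appliance VIP, tenant, credentials, catalog item name, and the exact response fields are assumptions for this lab and should be checked against the vRealize Automation programming guide.

    # Minimal sketch: request a catalog item (the 3 Tier App) via the vRA 7 REST API.
    import requests

    VRA = "https://vra01svr01.rainpole.local"        # hypothetical vRA appliance VIP
    TENANT = "rainpole"

    # 1. Obtain a bearer token from the identity service.
    token = requests.post(f"{VRA}/identity/api/tokens",
                          json={"username": "itac-tenantadmin@rainpole.local",
                                "password": "VMware1!", "tenant": TENANT},
                          verify=False).json()["id"]
    hdrs = {"Authorization": f"Bearer {token}", "Accept": "application/json"}

    # 2. Find the entitled catalog item by name.
    items = requests.get(f"{VRA}/catalog-service/api/consumer/entitledCatalogItemViews",
                         params={"$filter": "name eq '3 Tier App'"},
                         headers=hdrs, verify=False).json().get("content", [])
    item_id = items[0]["catalogItemId"]

    # 3. Fetch the request template, set a description, and submit the request.
    base = f"{VRA}/catalog-service/api/consumer/entitledCatalogItems/{item_id}/requests"
    template = requests.get(f"{base}/template", headers=hdrs, verify=False).json()
    template["description"] = "My 3 Tier App"
    resp = requests.post(base, json=template, headers=hdrs, verify=False)
    print(resp.status_code)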

57. Let's submit a request for the 3 Tier App by clicking on the Request button.
58. Click in the Description field twice to add 'My 3 Tier App’.

You can also provide any parameters for the request. For example, you may be able to specify the number of CPUs, the amount of memory, or the amount of storage for a machine.

Here we will review the options and parameters for the 3-tier application.

59. Click on App-0.

For App-0, we have the option of requesting anywhere from 1 to 4 CPUs and 1024 to 4096 MB of memory. As we saw earlier when reviewing the Blueprint, the storage is fixed at 16 GB.

60. Click on Web-0.


For Web-0, we have the same options as App-0. We can request additional CPU and
Memory resources, but the storage resource is fixed.

61. Click on DB-0.

Again, for DB-0, we have the same options for CPU and Memory, but cannot modify the
storage.

62. Click on 3 Tier App.

We can see that the lease time has been set to 30 days, but can be adjusted for up to
90 days.

63. To request the 3 Tier App, click the Submit button.

After submitting a request, it may be subject to configured approvals. For example, the
unmodified request could require no approval, but any modification to include additional
CPU or Memory resources could require a request to go through an approval process.
You can review the Requests tab to track the progress of a request, including whether it
is pending approval, in progress, or completed.

If the request results in a catalog item being provisioned, it is added to your list of items on the Items tab.

We can see that our request was successfully submitted.

Viewing the Request

Now that we’ve requested the 3-tier app from the catalog, let’s return to the vSphere
Web Client and see the results.

64. Go back to the vSphere Web Client by clicking on the first tab, vSphere Web Client. We can see that our 3 Tier App has been deployed, customized, and powered on. These include prod-app-00003, prod-db-00003, and prod-web-00003, each on its own virtual network.

Let’s review the IP Addresses and hosts on which each of the virtual machines have
been deployed.

65. To review the application server, click on prod-app-00003.
66. To review the database server, click on prod-db-00003.
67. To review the web server, click on prod-web-00003.

Next, we’ll log in to the web server and see if we can ping the database and application servers across the distributed logical router.

68. Click on the Black Box for the VM to Launch the Remote Console.
69. Press the 'Enter' key to enter the password.


70. We can view the IP address by typing 'ifconfig' and pressing the 'Enter' key.

Press any key to clear the display window.

Let's try and ping the database and application server.

71. Type “ping -c 5 172.11.11.20” to ping the Database Server and press the 'Enter' key.

The successful pings mean we can communicate with the Database server from the
Web server.

72. Type “ping -c 5 172.11.12.20” to ping the Application Server and press the 'Enter' key.

We can also ping the Application server, meaning we can communicate with it too.

Conclusion

This concludes the review of the deployment and configuration of vRealize Automation in the VMware Validated Design for SDDC.


Conclusion
Thank you for participating in the VMware Hands-on Labs. Be sure to visit
http://hol.vmware.com/ to continue your lab experience online.

Lab SKU: HOL-1706-SDC-6

Version: 20170502-055051
