Table of Contents
Lab Overview - HOL-1706-SDC-6 - Guide to SDDC: VMware Validated Designs
    Lab Guidance
    Introduction to VMware Validated Designs
    VMware Validated Design for Software-Defined Data Center
Module 1 - VMware Validated Design for SDDC - Core Platform (15 minutes)
    Introduction
    Hands-on Labs Interactive Simulation: VMware Validated Design for SDDC - Core Platform
    VMware Validated Design for SDDC - Script
Module 2 - VMware Validated Design for SDDC – Software-Defined Storage (15 minutes)
    Introduction
    Hands-on Labs Interactive Simulation: VMware Validated Design for SDDC – Software-Defined Storage
    VMware Validated Design for SDDC – Software-Defined Storage - Script
Module 3 - VMware Validated Design for SDDC – Software-Defined Networking (15 minutes)
    Introduction
    Hands-on Labs Interactive Simulation: VMware Validated Design for SDDC – Software-Defined Networking
    VMware Validated Design for SDDC – Software-Defined Networking - Script
Module 4 - VMware Validated Design for SDDC – Cloud Operations with vRealize Operations (15 minutes)
    Introduction
    Hands-on Labs Interactive Simulation: VMware Validated Design for SDDC – Cloud Operations with vRealize Operations
    VMware Validated Design for SDDC – Cloud Operations with vRealize Operations - Script
Module 5 - VMware Validated Design for SDDC – Cloud Operations with vRealize Log Insight (15 minutes)
    Introduction
    Hands-on Labs Interactive Simulation: VMware Validated Design for SDDC – Cloud Operations with vRealize Log Insight
    VMware Validated Design for SDDC – Cloud Operations with vRealize Log Insight
Module 6 - VMware Validated Design for SDDC – Cloud Management and Automation with vRealize Automation (15 minutes)
    Introduction
    Hands-on Labs Interactive Simulation: VMware Validated Design for SDDC – Cloud Management and Automation with vRealize Automation
    VMware Validated Design for SDDC – Cloud Management and Automation with vRealize Automation - Script
Lab Overview - HOL-1706-SDC-6 - Guide to SDDC: VMware Validated Designs
Lab Guidance
Note: It will take more than 90 minutes to complete this lab. You should
expect to only finish 2-3 of the modules during your time. The modules are
independent of each other so you can start at the beginning of any module
and proceed from there. You can use the Table of Contents to access any
module of your choosing.
The Table of Contents can be accessed in the upper right-hand corner of the
Lab Manual.
VMware Validated Designs (VVD) provide the most comprehensive and extensively tested blueprints to build and operate a Software-Defined Data Center (SDDC). VVD delivers holistic, data center-level designs that span compute, storage, networking, and management, defining the gold standard for how to deploy and configure the complete VMware SDDC stack in a wide range of use cases.
In this lab, you will focus on the fundamental architecture elements in the VMware
Validated Design for Software-Defined Data Center. Lab content is organized into six 15-minute lightning lab modules. Each module consists of an interactive simulation that
demonstrates the value that VVD brings to a specific topic. You may take any or all of
the modules in any order you like. Feel free to re-take the lab as many times as you like
to complete all of the modules.
This lab manual can be downloaded from the Hands-on Labs Document site found here:
http://docs.hol.pub/HOL-2017
This lab may be available in other languages. To set your language preference and have
a localized manual deployed with your lab, you may utilize this document to help guide
you through the process:
http://docs.hol.vmware.com/announcements/nee-default-language.pdf
1. The area in the RED box contains the Main Console. The Lab Manual is on the tab
to the Right of the Main Console.
2. Your lab starts with 90 minutes on the timer. The lab cannot be saved; all your work must be done during the lab session. However, you can click EXTEND to increase your time. If you are at a VMware event, you can extend your lab time twice, for up to 30 additional minutes; each click gives you an additional 15 minutes. Outside of VMware events, you can extend your lab time up to 9 hours and 30 minutes; each click gives you an additional hour.
3. All work in this HOL Interactive Simulation will take place in the manual.
When you first start your lab, you may notice a watermark on the desktop indicating
that Windows is not activated.
One of the major benefits of virtualization is that virtual machines can be moved and run on any platform. The Hands-on Labs take advantage of this benefit, allowing us to run the labs out of multiple datacenters. However, these datacenters may not have identical processors, which triggers a Microsoft activation check through the Internet.
Rest assured, VMware and the Hands-on Labs are in full compliance with Microsoft
licensing requirements. The lab that you are using is a self-contained pod and does not
have full access to the Internet, which is required for Windows to verify the activation.
Without full access to the Internet, this automated process fails and you see this
watermark.
Introduction to VMware Validated Designs
VMware Validated Designs streamline and simplify the design and deployment process for the SDDC. The designs are based on VMware's core expertise in data center design and further de-risk deployments through extensive product testing that provides interoperability, availability, scalability, and security.
Each design is developed by experts, and rigorously tested and validated to provide successful deployment and efficient operations. Continuous interoperability testing helps a validated design stay valid as subsequent versions of components are released.
VMware Validated Designs are a critical part of this approach. These designs provide a structure that allows you to achieve specific use cases, such as micro-segmentation, IT automation, and DevOps-ready IT.
Comprehensive Documentation
The designs also include detailed guidance that synthesizes best practices on how to deploy and optimally operate a VMware SDDC.
All designs are made available as free public documents from vmware.com/go/vvd.
Each includes:
• Release Notes
• Architecture Details
• Architecture Diagrams
• Planning and Preparation Guidance
• Pre-Deployment Checklists
• Step-by-step Deployment and Implementation Guides
• Configuration Workbooks
• Validation Workbooks
• Operational Guidance
Design Objectives
Before creating a VMware Validated Design, the design objectives are established.
Design objectives set the stage for the key capabilities and attributes of each design.
For example, design objectives communicate the target customer profile and
requirements, such as:
• Scope
• Availability
• Redundancy
• Performance
• Security
• Recoverability
Design Decisions
A design decision is an explicit record of the rationale applied during the design process and the reasons why each decision was made.

Each design decision supports and ensures that the design meets the design objectives by providing a means to record and communicate the reasoning behind the design process.

Design decisions in the VMware Validated Designs are presented in a simple checklist form and include the following:
• Reference ID
• Design Decision
• Design Justification
• Design Implications (If Any)
Above is an example from the design decisions established in the VMware Validated Design for SDDC.

These three design decisions are related to the routing model for the software-defined networking. These decisions are part of the instantiation of the distributed logical routing architecture in the SDDC.
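To make the checklist structure concrete, here is a minimal, hypothetical Python sketch of how such a design decision record could be captured. The field names mirror the checklist items above; the class itself and the example values are illustrative, not part of the design.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DesignDecision:
    """One row of a VVD-style design decision checklist."""
    reference_id: str                # e.g. a hypothetical "SDDC-VI-SDN-XXX"
    decision: str                    # the design decision itself
    justification: str               # why the decision was made
    implications: List[str] = field(default_factory=list)  # if any

# Illustrative example only (not an actual decision from the design):
example = DesignDecision(
    reference_id="SDDC-VI-SDN-XXX",
    decision="Use BGP as the dynamic routing protocol inside the SDDC.",
    justification="Avoids planning and designing access to OSPF area 0, "
                  "which varies based on customer configuration.",
    implications=["Upstream physical routers must support BGP peering."],
)
print(example.reference_id, "-", example.decision)
```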
Architecture Fundamentals
The following section will provide an introduction to the architecture fundamentals in the
VMware Validated Designs.
Pod Architecture
VMware Validated Design uses a small set of common, standardized building blocks
called pods. Each pod encompasses the combinations of servers, storage, and network
equipment that are required to fulfill a specific role within the SDDC. These roles typically include management, edge, and compute, or a combination of edge and compute.
Pods can be set up with varying levels of hardware redundancy and varying quality of
components. For example, one compute pod could use full hardware redundancy for
each component (power supply through memory chips) for increased availability. At the
same time, another compute pod in the same setup could use low-cost hardware
without any hardware redundancy. With these variations, the architecture can cater to
the different workload requirements in the SDDC.
For both small and large setups, homogeneity and easy replication are important.
Learn more about the pod architecture used in the VMware Validated Designs in this
video.
The physical network architecture used in the VMware Validated Designs is tightly
coupled with the pod architecture.
The number of spine and leaf switches in a deployment will vary depending on the
number of physical racks. Naturally, the larger the SDDC environment, the more
switches required to make up the overall fabric. A key benefit of the VMware Validated
Design is that it allows you to start small and easily scale out as you grow.
The following network design guidelines are used in the VMware Validated Designs:
• Redundancy is built into the fabric in order to instantiate a highly resilient fabric
capable of sustaining individual link and/or switch failures without widespread
impact.
• If a link failure occurs between a spine switch and a leaf switch, the routing
protocol ensures that no traffic for the affected rack is attracted to the spine
switch that has lost connectivity to that rack.
• The total number of ports available across all spine switches and the acceptable oversubscription determine the number of racks supported in the SDDC (a rough sizing sketch follows this list).
• Because the number of hops between any two racks is consistent, the
architecture can utilize equal-cost multi-pathing (ECMP).
• VMware NSX is used to instantiate a robust, software-defined networking layer on
top of the physical leaf-and-spine network topology.
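As a rough illustration of how spine port count and acceptable oversubscription bound the rack count, here is a simplified sketch; the formulas and all numbers are illustrative assumptions, not figures from the design.

```python
def max_racks(ports_per_spine: int, uplinks_per_rack_per_spine: int) -> int:
    """Upper bound on racks: each rack's leaf switches consume a fixed
    number of ports on every spine, so free spine ports are the limit."""
    return ports_per_spine // uplinks_per_rack_per_spine

def oversubscription(hosts_per_rack: int, host_nic_gbps: float,
                     uplinks_per_rack: int, uplink_gbps: float) -> float:
    """Ratio of host-facing to spine-facing bandwidth on a rack's leaves;
    the acceptable ratio further constrains how far you can scale out."""
    return (hosts_per_rack * host_nic_gbps) / (uplinks_per_rack * uplink_gbps)

# Illustrative numbers: 32-port spines, one uplink per rack to each spine,
# 19 hosts per rack with dual 10Gb NICs, 4 x 40Gb uplinks per rack.
print(max_racks(32, 1))                      # -> at most 32 racks
print(oversubscription(19, 2 * 10, 4, 40))   # -> 2.375 (i.e. 2.375:1)
```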
Learn more about the physical network architecture used in the VMware Validated
Designs in this video.
In this diagram we depict the constructs for the universal distributed logical routing in
the SDDC across two regions. The configuration is largely the same across the
management stack and the compute stack.
The core platform solutions, such as the vCenter Server instances, Platform Services Controller instances, NSX Manager instances, and the NSX Universal Controller cluster,
run on a vSphere Distributed Port Group which has a VLAN provided down from the dual
leaf switches. This port group is on the vSphere Distributed Switch for the Management
Pod.
This same vDS has two uplink port groups – Uplink 01 and Uplink 02. On these port
groups we deploy NSX Edge Services Gateways in an ECMP - Equal Cost Multi-Pathing -
configuration for north/south routing for management.
This also occurs in both the edge cluster (3-pod) and the shared edge/compute cluster (2-pod) for the compute stack, and leverages similar uplink port groups provided on the vDS for the Edge.
The NSX Edge Services Gateway pair in management manages north/south traffic for the SDDC infrastructure, and the pair in edge manages north/south traffic for SDDC workloads. The use of ECMP provides multiple paths in and out of the SDDC. This results in much faster failover times than deploying Edge Services Gateways in HA mode.
The design uses BGP as the dynamic routing protocol inside the SDDC for a simple
implementation. There is no need to plan and design access to OSPF area 0 inside the
SDDC. OSPF area 0 varies based on customer configuration.
The management stack uses a single universal transport zone that encompasses all management clusters; for the compute stack, the design uses a single universal transport zone that encompasses all edge and compute clusters from all regions. The use of a single universal transport zone for the management stack and one for the compute stack supports extending networks and security policies across regions. This allows seamless migration of management applications and business workloads
seamless migration of management applications and business workloads
across regions either by cross vCenter vMotion or by failover recovery with Site
Recovery Manager.
Riding on top of these transport zones we instantiate a universal logical switch for use
as the transit network between the Universal Distributed Logical Routers (UDLRs) and
ESGs – this is done in both the management stack and the compute stacks. In this
diagram we’re just representing the construct with a single instance.
We then deploy a single NSX UDLR for the management cluster to provide east/west
routing across all regions as well as a single NSX UDLR for the compute and edge
cluster to provide east/west routing across all regions. Here again, we’re just
representing the construct with a single instance.
The UDLRs peer via BGP with the north/south edges in their respective stacks, and the universal logical switch that is used as the transit network allows the UDLRs and all ESGs across regions to exchange routing information.
The distributed logical routing provides the ability to spin up logical networks that are
then accessible throughout the SDDC and out to the physical network.
In fact, we leverage this capability for the SDDC solutions themselves and create application virtual networks (AVNs) for these solutions.
Regions

The designs support two deployment topologies:

1. Single-Region
2. Dual-Region
Learn more about business continuity and disaster recovery between VMware Validated
Designs regions in this video.
Storage
VMware Validated Design provides guidance for the storage of the management
components.
• Virtual SAN - Virtual SAN storage is the default storage type for the SDDC management components. The VMware Validated Designs use rack-mount Virtual SAN Ready Nodes to ensure seamless compatibility and support with Virtual SAN during the deployment. The configuration and assembly process for each system is then standardized, with all components installed in the same manner on each host. Standardizing the entire physical configuration of the ESXi hosts is critical to providing an easily manageable and supportable infrastructure because standardization eliminates variability. Consistent PCI card slot location, especially for network controllers, is essential for accurate alignment of physical to virtual I/O. While there is no explicit requirement for running Virtual SAN on hosts in the compute pods, it is recommended that you use Virtual SAN Ready Nodes, as this not only enables you to leverage the benefits of Virtual SAN for your compute workloads, but also provides hardware consistency across all the pods in the SDDC. Use storage that meets the application and business requirements to provide multiple storage tiers and SLAs for these business workloads.
• NFS - NFS storage is the secondary storage for the SDDC management components. It provides space for workload backups, archived log data, and application templates. NFS storage requires an NFS-capable external storage array. The VMware Validated Design calls for three specific NFS exports, which control the access between the endpoints and the underlying storage system: (1) a Log Archive NFS export for vRealize Log Insight, which is used directly by vRealize Log Insight and is not presented as a datastore; (2) a Content Library NFS export for templates that will be converted to a vRealize Automation format, presented as a datastore to the Compute Pods; and (3) a Data Protection NFS export on a separate volume for data protection services (data protection is I/O intensive), presented as a datastore to the Management Pod. For security purposes, we limit access to each export to only the application virtual machines or hosts requiring the ability to mount the storage (a sketch of such export definitions follows this list).
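Purely as an illustration of that per-export access restriction, the snippet below renders Linux-style /etc/exports entries from a mapping of the three exports to the endpoints allowed to mount them. The paths and client addresses are made-up placeholders, not values from the design.

```python
from typing import Dict, List

# Hypothetical export paths and client addresses, for illustration only.
EXPORTS: Dict[str, List[str]] = {
    "/exports/vrli-archive":    ["vrli-node01.example.local",
                                 "vrli-node02.example.local"],
    "/exports/content-library": ["172.16.31.0/24"],   # compute pod hosts
    "/exports/data-protection": ["172.16.11.0/24"],   # management pod hosts
}

def render_exports(exports: Dict[str, List[str]]) -> str:
    """Render /etc/exports lines that limit each export to only the
    endpoints that need to mount it, per the design's security note."""
    lines = []
    for path, clients in exports.items():
        entry = " ".join(f"{c}(rw,no_root_squash)" for c in clients)
        lines.append(f"{path} {entry}")
    return "\n".join(lines)

print(render_exports(EXPORTS))
```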
Get Started
You can get started with the VMware Validated Designs in three different ways:
Professional Services
When deploying VMware Validated Designs, you can choose to have an expert by your
side. VMware Professional Services delivers the right expertise and collaboration for a
rapid implementation of your SDDC architecture. Achieve faster value, increase business
efficiency and improve end user productivity with our project assistance and knowledge
transfer.
• Rapidly deploy an SDDC that delivers a production cloud platform for delivering IT
services
• Gain skills and knowledge, and get assistance with an initial deployment of the
foundational platform
• Improve end-user experience and business efficiency with an optimized
deployment
VMware is partnering with the most prestigious System Integrators to bring VVD to
customers.
Through a rigorous process, VMware verifies and certifies that a SDDC design complies
with the VVD specifications. Certified SDDC designs receive the "VMware Ready" stamp
which gives customers a high level of assurance about the robustness of the solution
based on those designs.
Deploy It Yourself
For organizations that want to implement, operate, and integrate the SDDC step-by-step themselves, we've made all the documentation publicly available at vmware.com/go/vvd.
This includes:
• Release Notes
• Architecture Details
• Architecture Diagrams
• Planning and Preparation Documents
• Pre-Deployment Checklists
• Step-by-step Deployment and Implementation Guides
• Configuration Workbooks
• Validation Workbooks
• Operational Guidance Documents – that include:
• Monitoring and Alerting
• Business Continuity
• Startup and Shutdown
• Plus many more Operations Add-ons!
Follow the community by selecting the "Follow" button in the community's banner and you will receive general notifications when new content is available. You will even get early access to new designs as they become available.
For updates, follow @VMwareSDDC on Twitter and follow our YouTube playlist at vmware.com/go/vvd-videos.
VMware Validated Design for Software-Defined Data Center
Overview
The VMware Validated Design for Software-Defined Data Center is a comprehensive guide from VMware that provides a prescriptive and extensively tested blueprint to deploy and operate a Software-Defined Data Center using VMware technology. Each design includes design guides, implementation and deployment procedures, and documentation for ongoing operations.
This Hands-on Lab is based on the VMware Validated Design for Software-Defined Data
Center 2.x.
It also includes updates on what's new in the 3.0 release announced at VMworld 2016.
The VMware Validated Design for the Software-Defined Data Center includes a
completely integrated software bill of materials.
Recall that our validation processes rigorously test and continuously validate the entire integrated solution.
Let’s take a look at the components included in the VMware Validated Design for the
Software-Defined Data Center 2.x.
From a software component, or SDDC stack, perspective, this translates into a strong foundation and management platform.

Note: vSphere Data Protection is interchangeable with another data protection solution as long as the design objectives are met.

This foundation is then extended to include the Cloud Management, Automation and Orchestration components, in addition to IT Financial Management.

Next, let's dive a bit deeper and look into some of the high-level technical aspects before we jump into the lab modules.
Pod Architecture
VMware Validated Design uses a small set of common, standardized building blocks
called pods. Each pod encompasses the combinations of servers, storage, and network
equipment that are required to fulfill a specific role within the SDDC.
In the VMware Validated Design for Software-Defined Data Center 2.x these roles
include management, edge and compute.
Pods can be set up with varying levels of hardware redundancy and varying quality of
components. For example, one compute pod could use full hardware redundancy for
each component (power supply through memory chips) for increased availability. At the
same time, another compute pod in the same setup could use low-cost hardware
without any hardware redundancy. With these variations, the architecture can cater to
the different workload requirements in the SDDC.
For both small and large setups, homogeneity and easy replication are important.
Management Pod
As the name implies, the Management Pod hosts the infrastructure components used
to instantiate, manage, and monitor the private cloud. This includes the core infrastructure components, such as the Platform Services Controllers, vCenter Server instances, NSX Managers, and NSX Controllers, as well as SDDC monitoring solutions like vRealize Operations Manager and vRealize Log Insight.
In the VMware Validated Design for Software-Defined Data Center, Cloud Management Platform components are added, including vRealize Automation, vRealize Orchestrator, and vRealize Business for Cloud, on top of the solid and robust management platform.
Edge Pod
In this three-pod architecture, the SDDC network fabric does not provide external
connectivity. Most pod types, such as compute pods, are not set up with external
network connectivity. Instead, external connectivity is pooled into the Edge Pod. The edge
pod runs the software-defined networking services provided by VMware NSX to establish
north/south routing between the SDDC and the external network as well as east/west
routing inside the SDDC for the business workloads.
Compute Pod
Compute Pods host the SDDC workloads. An SDDC can mix different types of compute
pods and provide separate compute pools for different types of Service Level
Agreements (SLAs).
For example, compute pods can be set up with varying levels of hardware redundancy and varying quality of components for different service levels. One compute pod could use full
hardware redundancy for each component (power supply through memory chips) for
increased availability. At the same time, another compute pod in the same setup could
use low-cost hardware without any hardware redundancy. With these variations, the
architecture can cater to the different workload requirements in the SDDC.
The physical network architecture used in the VMware Validated Designs is tightly
coupled with the pod architecture.
The number of spine and leaf switches in a deployment will vary depending on the
number of physical racks. Naturally, the larger the SDDC environment, the more
switches required to make up the overall fabric. A key benefit of the VMware Validated
Design is that it allows you to start small and easily scale out as you grow.
The following network design guidelines are used in the VMware Validated Designs:
• Redundancy is built into the fabric in order to instantiate a highly resilient fabric
capable of sustaining individual link and/or switch failures without widespread
impact.
• If a link failure occurs between a spine switch and a leaf switch, the routing
protocol ensures that no traffic for the affected rack is attracted to the spine
switch that has lost connectivity to that rack.
• The total number of ports available across all spine switches and the
oversubscription that is acceptable determine the number of racks supported in
the SDDC.
• Because the number of hops between any two racks is consistent, the
architecture can utilize equal-cost multi-pathing (ECMP).
• VMware NSX is used to instantiate a robust, software-defined networking layer on
top of the physical leaf-and-spine network topology.
In the three-pod architecture, the SDDC network fabric does not provide external
connectivity. Most pod types, such as compute pods, are not set up with external
network connectivity. Instead, external connectivity is pooled into the management and edge pods. The edge pod runs the software-defined networking services provided by
VMware NSX to establish north/south routing between the SDDC workloads and the
external network as well as east/west routing inside the SDDC for the business
workloads.
HOL-1706-SDC-6 Page 28
HOL-1706-SDC-6
The leaf switches of each rack act as the Layer 3 interface for the corresponding subnets. The Management and Edge Pods are provided with externally accessible VLANs for access to the Internet and/or MPLS-based corporate networks.

Each ESXi host in the Management, Edge, and Compute Pods uses VLANs and corresponding Layer 2 networks presented for in-rack traffic.
In this diagram, we illustrate the standard VLANs that are presented to the hosts in the management, edge, and compute pods.
You’ll notice that the vSphere Distributed Switches have an MTU of 9000 configured for
Jumbo Frames as do the necessary VMkernel ports – vMotion, VSAN, VXLAN and NFS.
Recall from the Introduction that the VMware Validated Designs use distributed logical
networking.
In this three-pod architecture, the SDDC network fabric does not provide external
connectivity. Most pod types, such as compute pods, are not set up with external
network connectivity. Instead, external connectivity is pooled into the Edge Pod. The edge
pod runs the software-defined networking services provided by VMware NSX to establish
north/south routing between the SDDC and the external network as well as east/west
routing inside the SDDC for the business workloads.
In a dual-region deployment, the designs dictate that three separate application virtual networks (AVNs) be deployed:

(1) A shared, region-independent AVN that spans both regions. All management applications that are configured to migrate, or fail over, between regions run inside this AVN.
• vRealize Automation
• vRealize Business for Cloud
• vRealize Orchestrator
• vRealize Operations Analytics Cluster
(2) Two region-dependent AVNs, one at each site. Management applications that are specific to each region and which do not migrate between regions run in these AVNs.
The VMware Validated Design for the Software-Defined Data Center includes a
completely integrated software bill of materials.
Recall that our validation processes rigorously test and continuously validate the entire integrated solution.
Refer to the design documentation for products and versions included in the design.
Visit vmware.com/go/vvd.
In this lab, we're excited to share a preview of what's new in the VMware Validated Design for Software-Defined Data Center v3.0, announced at VMworld. This release of the design includes all the features and capabilities of the prior release, along with the following additions.

This release includes an update to the base pod architecture from a three-pod design to a new two-pod design by converging the previous Edge Pod and first Compute Pod.

Please refer to the design documentation for products and versions included in this design.
This release expands the single-region deployment and operations guidance to include full dual-region support.
The VMware Validated Design for Software-Defined Data Center v3.0 includes the
addition of site replication and site protection services for the SDDC management,
automation and operations solution. In this release, we provide prescriptive guidance on
deploying and configuring vSphere Replication and Site Recovery Manager to protect:
• vRealize Operations
• vRealize Automation
• vRealize Orchestrator
• vRealize Business for Cloud
Please refer to the design documentation for products and versions included in this
design.
VMware Validated Design uses a small set of common, standardized building blocks
called pods. Each pod encompasses the combinations of servers, storage, and network
equipment that are required to fulfill a specific role within the SDDC.
In the VMware Validated Design for Software-Defined Data Center v3.0 release, we have included a consolidation of the edge and compute pods to make it easier to get started with the design.

The roles for pods now include management and shared edge and compute.
In the new two-pod architecture, the SDDC network fabric provides external connectivity
to all pods. External connectivity is pooled into a Shared Edge and Compute Pod.
The shared pod runs the software-defined networking services provided by VMware NSX
to establish north/south routing between the SDDC and the external network as well as
east/west routing inside the SDDC for the business workloads. This shared pod may
also host the SDDC workloads.
As the SDDC grows, additional compute-only pods can be added to support a mix of
different types of workloads for different types of Service Level Agreements (SLAs).
In a two-pod architecture the SDDC network fabric provides external connectivity to all
pods. External connectivity is pooled into both the management pod and a shared edge
and compute pod.
In this diagram, the host connectivity for the new two-pod architecture prescribed in the VMware Validated Design for Software-Defined Data Center v3.0 is shown.
Module 1 - VMware Validated Design for SDDC - Core Platform (15 minutes)
Introduction
This module will introduce you to the fundamental constructs of the VMware Validated
Design for SDDC architecture.
This lab is based on the VMware Validated Design for Software-Defined Data Center 2.0.
In this lab you'll be introduced to some key architecture concepts of the VMware
Validated Design for Software-Defined Data Center.
You can review these diagrams now and/or toggle back to them as you take the lab by
changing the active browser tab or window.
If you're ready to take the lab you can advance the manual and launch the interactive
simulation.
Architecture reference diagrams for Module 1 are provided in the following sections.
Recall from the Lab Introduction that VMware Validated Design uses a small set
of common, standardized building blocks called pods. Each pod encompasses the
combinations of servers, storage, and network equipment that are required to fulfill a
specific role within the SDDC. These roles typically include management, edge, and compute, or a combination of edge and compute.
Pods can be set up with varying levels of hardware redundancy and varying quality of
components. For example, one compute pod could use full hardware redundancy for
each component (power supply through memory chips) for increased availability. At the
same time, another compute pod in the same setup could use low-cost hardware
without any hardware redundancy. With these variations, the architecture can cater to
the different workload requirements in the SDDC.
• Management Pod
• Edge Pod
• Compute Pod
In this diagram, the three-pod architecture of the VMware Validated Design for Software-Defined Data Center 2.x is shown.
The VMware Validated Design for Software-Defined Data Center 3.x introduces the two-
pod architecture.
• Management Pod
• Shared Edge and Compute Pod
As additional pods are added to the SDDC, they are compute-only pods.
Recall that the physical network architecture used in the VMware Validated Designs is
tightly coupled with the pod architecture.
In this diagram, the three-pod architecture as prescribed in the VMware Validated Design for Software-Defined Data Center 2.x is shown.

In a three-pod architecture (management, edge, and compute), the SDDC network fabric does not provide external connectivity. Most pod types, such as compute pods, are not set up with external network connectivity. Instead, external connectivity is pooled into the management and edge pods. The edge pod runs the software-defined networking services provided by VMware NSX to establish north/south routing between the SDDC workloads and the external network, as well as east/west routing inside the SDDC for the business workloads.
In this diagram, the two-pod architecture as prescribed in the VMware Validated Design for Software-Defined Data Center 3.x is shown.
In this diagram, the three-pod architecture host connectivity as prescribed in the VMware Validated Design for Software-Defined Data Center 2.x is shown.
In this diagram, the two-pod architecture host connectivity as prescribed in the VMware Validated Design for Software-Defined Data Center 3.x is shown.
In this diagram, a logical representation of the pods and clusters in the VMware Validated Design for Software-Defined Data Center is shown.
The diagram represents the pods, clusters, host connectivity, storage, distributed
routing, virtual networks and placement of core platform components.
Note: While this diagram is based on the 3.x two-pod architecture, it is applicable to 2.x as well.
Within each region, the design instantiates two Platform Service Controllers and two
vCenter Server systems in the appliance form factor. This includes one PSC and one
vCenter Server for the management pod and one PSC and one vCenter Server for the
shared edge and compute pods. The design also joins the Platform Services Controller
instances to the same vCenter Single Sign-On domain and points each vCenter Server
instance to its respective Platform Services Controller instance.
Note: This diagram is applicable to both the 2.x and 3.x designs.
In this diagram, the multi-region and cross-vCenter deployment of NSX in the VMware
Validated Design for Software-Defined Data Center 2.x is shown.
In both regions, two separate NSX Manager instances are deployed, one for the Management Pod and one for the Compute and Edge Pods, along with an associated NSX Universal Controller Cluster. In Region B, the secondary NSX Manager instances automatically import the configuration of the NSX Universal Controller Clusters from Region A.
In this diagram, the multi-region and cross-vCenter deployment of NSX in the VMware
Validated Design for Software-Defined Data Center 3.x is shown.
The general architecture is the same as in the three-pod architecture; however, the NSX
services for the Compute Stack are deployed on an initial shared edge and compute pod
and all NSX services are added to a resource pool to guarantee resources for the
network virtualization platform in this stack.
In this diagram we depict the constructs for the universal distributed logical routing in
the SDDC across two regions. The configuration is largely the same across the
management stack and the compute stack.
The core platform solutions, such as the vCenter Server instances, Platform Services Controller instances, NSX Manager instances, and the NSX Universal Controller cluster,
run on a vSphere Distributed Port Group which has a VLAN provided down from the dual
leaf switches. This port group is on the vSphere Distributed Switch for the Management
Pod.
This same vDS has two uplink port groups – Uplink 01 and Uplink 02. On these port
groups we deploy NSX Edge Services Gateways in an ECMP - Equal Cost Multi-Pathing -
configuration for north/south routing for management.
This also occurs in both the edge cluster (3-pod) and the shared edge/compute cluster (2-pod) for the compute stack, and leverages similar uplink port groups provided on the vDS for the Edge.
The NSX Edge Services Gateway pair in management manages north/south traffic for the SDDC infrastructure, and the pair in edge manages north/south traffic for SDDC workloads. The use of ECMP provides multiple paths in and out of the SDDC. This results in much faster failover times than deploying Edge Services Gateways in HA mode.
The design uses BGP as the dynamic routing protocol inside the SDDC for a simple
implementation. There is no need to plan and design access to OSPF area 0 inside the
SDDC. OSPF area 0 varies based on customer configuration.
The management stack uses a single universal transport zone that encompasses all management clusters; for the compute stack, the design uses a single universal transport zone that encompasses all edge and compute clusters from all regions. The use of a single universal transport zone for the management stack and one for the compute stack supports extending networks and security policies across regions. This allows seamless migration of management applications and business workloads
seamless migration of management applications and business workloads
across regions either by cross vCenter vMotion or by failover recovery with Site
Recovery Manager.
Riding on top of these transport zones we instantiate a universal logical switch for use
as the transit network between the Universal Distributed Logical Routers (UDLRs) and
ESGs – this is done in both the management stack and the compute stacks. In this
diagram we’re just representing the construct with a single instance.
We then deploy a single NSX UDLR for the management cluster to provide east/west
routing across all regions as well as a single NSX UDLR for the compute and edge
cluster to provide east/west routing across all regions. Here again, we’re just
representing the construct with a single instance.
The UDLRs peer via BGP with the north/south edges in their respective stacks, and the universal logical switch that is used as the transit network allows the UDLRs and all ESGs across regions to exchange routing information.
The distributed logical routing provides the ability to spin up logical networks that are
then accessible throughout the SDDC and out to the physical network.
In fact, we leverage this capability for the SDDC solutions themselves and create application virtual networks for these solutions.
Hands-on Labs Interactive Simulation: VMware Validated Design for SDDC - Core Platform
1. Click here to open the interactive simulation. It will open in a new browser
window or tab.
2. When finished, click the “Return to the lab” link to continue with this lab.
VMware Validated Design for SDDC - Script
In this iSIM, we will demonstrate some of the core platform components of a private cloud based on this design.
1. Click in the User name field, then the Password field. Finally click the Login
button.
Here in the vSphere Web Client, we can manage the entire platform for the software-defined data center: the well-known compute virtualization along with software-defined storage and software-defined networking.
The basis of the platform is vSphere 6 and the design uses key enhancements and
capabilities of the release. For example, Enhanced Linked Mode allows us to
interconnect both the management and compute stacks for a single view.
2. Click the Administration tab, then the System Configuration tab and finally
the Nodes link.
Here we can see the nodes and services running across vCenter Server and Platform
Services Controllers instances. Two nodes provide platform services, such as single sign-
on and certificate management. Two additional nodes provide the vCenter Server
instances. All nodes are registered with one SSO domain.
This linkage allows the infrastructure to be managed from a single user interface.
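For readers who later want to script against this kind of environment, here is a minimal pyVmomi sketch of connecting to one of the vCenter Server instances and listing its datacenters. The credentials are placeholders, and this is an illustrative aside, not one of the lab steps; note that with Enhanced Linked Mode the single pane of glass is a UI feature, so each vCenter still exposes its own inventory over the API.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder endpoint and credentials -- not the lab's actual values.
ctx = ssl._create_unverified_context()  # tolerate lab-style self-signed certs
si = SmartConnect(host="mgmt01vc01.sfo01.rainpole.local",
                  user="administrator@vsphere.local",
                  pwd="VMware1!",  # placeholder password
                  sslContext=ctx)
try:
    content = si.RetrieveContent()
    for entity in content.rootFolder.childEntity:
        if isinstance(entity, vim.Datacenter):
            print("Datacenter:", entity.name)
finally:
    Disconnect(si)
```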
The VMware Validated Design for SDDC uses a collection of physical data center racks
that are interconnected using a common network core. Inside the physical racks the
different functions of the Software-Defined Data Center are implemented as a
standardized set of building blocks referred to as pods. Each pod encompasses the
combinations of servers, storage, and network that are required to fulfill a specific role
within the SDDC.
Management Pod
Let’s take a look at three of these essential pods starting with the Management Pod.
4. Click SF001-MGMT01
As the name implies, the management pod hosts the infrastructure components used to
instantiate, manage, and monitor the Software-Defined Data Center. This includes the
core infrastructure components, such as the Platform Services Controllers, vCenter
Server Instances, NSX Managers, and NSX Controllers, as well as the SDDC monitoring
solutions like vRealize Operations Manager and vRealize Log Insight.
The management pod is deployed inside a single data center rack and is comprised of a
minimum of four physical servers and two redundant Top-of-Rack switches, commonly
referred to as leaf switches.
VMware ESXi is installed on each of the four physical servers, which are then logically
grouped into a vSphere management cluster.
Notice that the advanced resource management settings for vSphere Distributed Resource Scheduler and vSphere High Availability are enabled, as is Virtual SAN.
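If you wanted to verify those cluster settings programmatically rather than in the Web Client, a hedged pyVmomi sketch might look like the following; it assumes a cluster object already retrieved from a live connection (for example, via the connection sketch shown earlier).

```python
from pyVmomi import vim

def report_cluster_settings(cluster: vim.ClusterComputeResource) -> None:
    """Print whether DRS, HA, and Virtual SAN are enabled on a cluster."""
    cfg = cluster.configurationEx
    print("DRS enabled: ", cfg.drsConfig.enabled)
    print("HA enabled:  ", cfg.dasConfig.enabled)
    # vsanConfigInfo may be absent on clusters without Virtual SAN.
    vsan = getattr(cfg, "vsanConfigInfo", None)
    print("VSAN enabled:", bool(vsan and vsan.enabled))
```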
5. Click vSphere DRS to explore the section. When you have finished viewing,
click it again to close.
6. Click vSphere HA to explore the section. When you have finished viewing, click
it again to close.
7. Click Virtual SAN Capacity to explore the section. When you have finished
viewing, click it again to close.
8. Click on mgmt01esx01.sf01.rainpole.local
The physical servers used in the management pod must be Virtual SAN Ready Nodes,
meaning they have been certified for use in a Virtual SAN deployment. Storage for the
Management pod is provided using a combination of VMware Virtual SAN and NFS.
Virtual SAN is used for hosting the virtual machines that run in the management cluster
where NFS is used for storing backups, log archives and virtual machine templates.
To accommodate future growth and scalability, additional Virtual SAN Ready servers can be added to the management cluster in order to provide additional compute, network, and storage capacity.
Edge Pod
The Edge Pod provides a centralized gateway through which workloads running in the
SDDC are able to access external networks. Workloads in the SDDC are isolated on their
own logical networks and do not have direct access to external networks. To access external networks, traffic is routed through the edge pod over a transport zone using distributed logical routing and edge service gateways.
Like the management pod, the edge pod contains a minimum of four Virtual SAN Ready servers. This pod is typically co-located in the same physical rack as the Management Pod.
While the management and edge pods can be consolidated into a single physical rack,
they are still logically divided into two separate vSphere clusters as seen here.
Here again, VMware ESXi is installed on each physical server and the servers logically
grouped into a vSphere edge cluster. The Edge Cluster is managed from the Compute
vCenter Server instance running in the management pod. Storage for the edge pod is
provided by VMware Virtual SAN.
10. Click vSphere DRS to explore the section. When you have finished viewing,
click it again to close.
11. Click vSphere HA to explore the section. When you have finished viewing, click
it again to close.
12. Click Virtual SAN Capacity to explore the section. When you have finished
viewing, click it again to close.
Compute Pod
Within the Software-Defined Data Center all business and end-user workloads run inside
the Compute Pods.
Like the management and edge pods, the compute pods are deployed inside data
center racks, with each rack representing a separate pod. Each compute pod contains a
minimum of four servers along with a pair of leaf switches.
Storage for the compute clusters can be any combination of supported vSphere storage.
The type of storage used is determined based on cost, performance, business
requirements, and desired service levels. It is recommended that you use Virtual SAN Ready Nodes, as this enables you to leverage the benefits included in the hybrid and all-flash options of Virtual SAN.
As with the other pods, VMware ESXi is installed on each server and the hosts are
logically grouped into vSphere clusters.
Virtual Machines
Let's take a look at the virtual machines that instantiate and manage the SDDC.
15. Click on the Platform Services folder, then the vCenter Server folder.
In a single-region deployment, two vCenter Server instances are deployed along with
two corresponding Platform Services Controller instances. The two Platform Services
Controller instances are configured as a replication pair for a single Single Sign-on
domain. One vCenter Server instance, which is referred to as the “Management
vCenter”, is used to instantiate and manage the management cluster itself. The second
vCenter Server instance, which is referred to as the “Compute vCenter”, is used to
instantiate and manage the vSphere clusters running in the edge and compute pods.
16. Click on the vCenter Server folder to collapse it, then click on the vSphere
Data Protection folder.
vSphere Data Protection is deployed and used to provide data protection for the
solutions residing in the management cluster.
Within vSphere Data Protection, backup policies are created for these virtual machines.
VMware NSX
17. Click on the vSphere Data Protection folder to collapse it, then click on the
NSX for vSphere folder.
VMware NSX is deployed to provide network and security services such as VXLAN,
virtual switching, firewalling and load balancing. Here we see the NSX Manager and NSX
Controller cluster for the management stack. We also see the NSX Edge Services
Gateways used for North/South routing and for load-balancing SDDC solutions.
18. Click on the NSX for vSphere and then the Platform Services folders to
collapse them.
Cloud Operations
The design uses vRealize Operations to provide monitoring and alerting services for the
Software-Defined Data Center. vRealize Operations is deployed in two parts, the
Analytics Cluster and the Remote Collectors.
21. Click on the vROps01 folder to collapse it and then click on the vROps01RC
folder to expand it.
In addition to the analytics cluster, two Remote Collector nodes are also deployed.
Remote collectors help to lighten the load on the analytics cluster by collecting metrics
from applications and then forwarding them in bulk. The use of remote collectors
facilitates SDDC deployments that span multiple regions as it allows for separate remote
collectors to be deployed in each region.
22. Click on the vROps01RC folder to collapse it and then vRLI01 folder to expand
it.
vRealize Log Insight provides scalable log aggregation and indexing for the SDDC with
near real-time search and analytics capabilities. vRealize Log Insight collects, imports,
and analyzes logs to provide real-time answers to problems related to systems,
services, and applications, and derive important insights.
As with vRealize Operations, vRealize Log Insight is also deployed in a highly available and scalable manner inside an application virtual network. vRealize Log Insight comprises a minimum of three nodes: a single master node and two worker nodes.
23. Click the vRLI01 and Cloud Operations folders to collapse them.
24. Click the Cloud Management folder and then the vRA01 folder to expand them.
25. Click the vRA01 folder to collapse it and then the vR01IAS folder to expand it.
vRealize Business for Cloud provides visibility into the costs associated with the private
cloud. The solution tracks the costs of deployed workloads and automatically estimates
the impact and associated cost of deploying additional workloads. vRealize Business
provides a centralized dashboard to view, monitor and track cost and spending
efficiency.
26. Click the vR01IAS and Cloud Management folders to collapse them.
Compute Stack
27. Click on the Platform Services and then NSX for vSphere folders to expand
them.
Here again, VMware NSX is deployed to provide network and security services such as
VXLAN, virtual switching, firewalling and load balancing. While the NSX Manager instance for the compute stack is deployed in the management cluster, its NSX
Controller Cluster runs inside the edge cluster. We also see the NSX Edge Services
Gateways used for North/South routing for SDDC workloads.
28. Click on the NSX for vSphere and then Platform Services folders to expand
them.
Pod Storage
Now let’s take a quick look at the storage provided to these pods.
Recall that we use Virtual SAN for the management and edge pod workloads.
We also use NFS in the management pod for backups, log archives and templates.
Click on comp01vc01.sfo01.rainpole.local.
You can use any HCL supported storage in the Compute Pod.
Pod Networking
34. Click on the Networking tab and then the SF001 Data Center under
mgmt01vc01.sfo01.rainpole.local.
Here we see the Management vSphere Distributed Switch created within the
Management vCenter. It includes the necessary VMkernel port groups for management,
vMotion, Virtual SAN, and NFS. Additional port groups are provided for north/south
routing uplinks and others are created by NSX for the application virtual networks.
Jumbo Frames
Jumbo frames are enabled on the distributed switch, and the MTU is set to 9000 bytes. Each host has two 10Gb uplink connections to the top-of-rack leaf switches for redundancy.

Each host is assigned to its respective distributed switch and the VMkernel adapters are configured. Jumbo frames are set on the VMkernel adapters for Virtual SAN, NFS, and NSX.
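A small pyVmomi sketch for spot-checking those MTU settings follows; it assumes switch and host objects already retrieved from a live connection, and is illustrative only, not a lab step.

```python
from pyVmomi import vim

def check_mtu(dvs: vim.dvs.VmwareDistributedVirtualSwitch,
              host: vim.HostSystem, expected: int = 9000) -> None:
    """Verify jumbo frames on a distributed switch and on each VMkernel
    adapter of a host (vMotion, Virtual SAN, NFS, VXLAN, and so on)."""
    status = "OK" if dvs.config.maxMtu == expected else "MISMATCH"
    print(f"{dvs.name} MTU: {dvs.config.maxMtu} ({status})")
    for vnic in host.config.network.vnic:
        print(f"{host.name} {vnic.device} MTU: {vnic.spec.mtu}")
```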
38. Click on the Networking tab, then vDS-Mgmt to collapse it. Click on the SFO01
Data Center to collapse it too.
39. Now click on the SFO01 Data Center under comp01vc01.sfo01.rainpole.local.
Similar vSphere Distributed Switches are also created in the Compute vCenter for both
the Edge and Compute pods.
40. Click on vDS-Edge to explore it. When you have finished, click on it again to
collapse it.
41. Click on vDS-Comp to explore it. When you have finished, click on it again to
collapse it.
Here we can see that the vSphere Distributed Switch has been applied to its corresponding pod.
Conclusion
This concludes the demonstration on the core platform components of the VMware
Validated Design for SDDC.
Thank you!
Module 2 - VMware Validated Design for SDDC – Software-Defined Storage (15 minutes)
Introduction
This module introduces you to the software-defined storage provided by VMware Virtual SAN in the VMware Validated Design for Software-Defined Data Center.
This lab is based on the VMware Validated Design for Software-Defined Data Center 2.0.
In this lab you'll be introduced to some key architecture concepts of the VMware
Validated Design for Software-Defined Data Center.
You can review these diagrams now and/or toggle back to them as you take the lab by
changing the active browser tab or window.
If you're ready to take the lab you can advance the manual and launch the interactive
simulation.
Architecture reference diagrams for Module 2 are provided in the following sections.
Virtual SAN
The VMware Validated Design for Software-Defined Data Center provides guidance for
the primary storage of the management components.
Virtual SAN storage is the default and primary storage for the SDDC management components. The VMware Validated Designs use rack-mount Virtual SAN Ready Nodes to ensure seamless compatibility and support with Virtual SAN during the deployment. The configuration and assembly process for each system is then standardized, with all components installed in the same manner on each host. Standardizing the entire physical configuration of the ESXi hosts is critical to providing an easily manageable and supportable infrastructure because standardization eliminates variability. Consistent PCI card slot location, especially for network controllers, is essential for accurate alignment of physical to virtual I/O.
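To illustrate what checking that consistency could look like in practice, here is a hedged pyVmomi sketch that compares vmnic-to-PCI-address layouts across hosts; it assumes host objects from a live connection and is illustrative only.

```python
from typing import Dict, List
from pyVmomi import vim

def nic_layout(host: vim.HostSystem) -> Dict[str, str]:
    """Map vmnic name -> PCI address for one ESXi host."""
    return {pnic.device: pnic.pci for pnic in host.config.network.pnic}

def check_consistency(hosts: List[vim.HostSystem]) -> None:
    """Flag hosts whose NIC/PCI layout differs from the first host --
    the variability that standardized Ready Nodes are meant to eliminate."""
    reference = nic_layout(hosts[0])
    for host in hosts[1:]:
        if nic_layout(host) != reference:
            print(f"{host.name}: NIC layout differs from {hosts[0].name}")
```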
• Compute Pods use any HCL-supported storage. Both Hybrid and All-Flash Virtual SAN are ideal options for the SDDC workloads.

While there is no explicit requirement for running Virtual SAN on hosts in the compute pods, it is recommended that you use Virtual SAN Ready Nodes, as this not only enables you to leverage the benefits of Virtual SAN for your compute workloads, but also provides hardware consistency across all the pods in the SDDC. Use storage that meets the application and business requirements. This provides multiple storage tiers and SLAs for these business workloads.
In this diagram, both the hybrid and all-flash architectures of Virtual SAN are shown.
NFS
In this diagram, we depict the secondary storage used in the VMware Validated Design
for Software-Defined Data Center.
In this design, the secondary storage tier for management and compute pods is
provided by NFS.
In this diagram, the use of the content library to share VM-related content - like
templates consumed and used by vRealize Automation across regions - is shown.
Here we illustrate how standard virtual machine templates are consumed by vRealize Automation in Region A; you can see Microsoft Windows and two Linux distributions. These templates are then imported into a published content library in Region A.
Region B subscribes to the Region A library and synchronizes the content. The templates
are then exported to a format consumable by vRealize Automation in Region B.
In both regions, the templates and content libraries are stored on NFS.
Log Archives
In this diagram, the logical architecture of vRealize Log Insight is shown. The compute and storage resources of the master and worker instances scale up, and log archiving can be performed onto an NFS export that each vRealize Log Insight node can access.
Note: Local logs are stored on the Virtual SAN datastore in the Management Cluster.
Backups
In the VMware Validated Design for Software-Defined Data Center we use vSphere Data Protection to back up all management components. vSphere Data Protection provides the functionality that is required to back up full image VMs and applications in those VMs, for example, Microsoft SQL Server. Note that another data protection product can be used if it supports the design objectives in the architecture.
vSphere Data Protection protects the virtual infrastructure at the vCenter Server layer.
Because vSphere Data Protection is connected to the Management vCenter Server, it
can access all management ESXi hosts, and can detect the virtual machines that require
backups.
The design allocates a dedicated NFS datastore for the vSphere Data Protection appliance and the backup data in each region. Because vSphere Data Protection generates a significant amount of I/O operations, especially when performing multiple concurrent backups, a dedicated volume is presented. The storage platform must be able to handle this I/O. If the storage platform does not meet the performance requirements, backup windows might be missed, and backup failures and error messages might occur.
Always run the vSphere Data Protection performance analysis feature during or after virtual appliance deployment to assess performance.
vSphere Data Protection can dynamically expand the destination backup store from 2 TB to 8 TB. Using extended backup storage requires additional memory on the vSphere Data Protection appliance. In this design we set the backup targets to 4 TB initially, since the management stack currently consumes approximately 2 TB of disk space, uncompressed and without deduplication.
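To make the sizing arithmetic above concrete, here is a minimal Python sketch of the calculation. Only the 2 TB consumption figure and the 2 TB to 8 TB store limits come from the design text; the growth rate and planning horizon below are illustrative assumptions, not VVD-prescribed values.

    import math

    VDP_MIN_TB, VDP_MAX_TB = 2, 8       # supported backup store range
    consumed_tb = 2.0                   # management stack, uncompressed
    annual_growth = 0.25                # hypothetical planning figure

    def initial_backup_store(consumed, growth, years=2):
        """Round projected consumption up to a whole TB within VDP limits."""
        projected = consumed * (1 + growth) ** years
        return min(max(math.ceil(projected), VDP_MIN_TB), VDP_MAX_TB)

    print(initial_backup_store(consumed_tb, annual_growth))  # -> 4 (TB)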
Even though vSphere Data Protection uses Changed Block Tracking technology to optimize the backup data, do not schedule the backup window while production storage is in high demand, to avoid any business impact.
Backups are scheduled for each day, and the design retains three days of backups by default. This is aligned with the size of the NFS export.
1. Click here to open the interactive simulation. It will open in a new browser
window or tab.
2. When finished, click the “Return to the lab” link to continue with this lab.
Software-Defined Storage
Within the Software-Defined Data Center all business and end-user workloads run inside
the Compute Pods.
The compute pods are deployed inside data center racks, with each rack representing a
separate pod. Each compute pod contains a minimum of four servers along with a pair
of top-of-rack leaf switches.
As with the other pods, VMware ESXi is installed on each server and the hosts are
logically grouped into vSphere clusters.
Compute clusters are all managed by the Compute vCenter Instance running in the
Management Pod. Multiple compute clusters can be created until the maximum number
of either hosts (1,000) or virtual machines (10,000) for vCenter Server is reached.
Should these maximums ever be reached, additional vCenter Server instances can be
provisioned to allow for additional compute clusters to be created.
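As a rough illustration of watching those maximums, the following pyVmomi sketch counts hosts and virtual machines in a Compute vCenter Server inventory. It assumes the pyvmomi package, a reachable vCenter Server, and placeholder credentials; it is not part of the lab steps.

    from pyVim.connect import SmartConnectNoSSL, Disconnect
    from pyVmomi import vim

    MAX_HOSTS, MAX_VMS = 1000, 10000    # vCenter Server maximums cited above

    si = SmartConnectNoSSL(host='comp01vc01.sfo01.rainpole.local',
                           user='administrator@vsphere.local', pwd='...')
    content = si.RetrieveContent()

    def count(vimtype):
        # Container views enumerate every object of a type in the inventory.
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vimtype], True)
        try:
            return len(view.view)
        finally:
            view.Destroy()

    print(count(vim.HostSystem), '/', MAX_HOSTS, 'hosts')
    print(count(vim.VirtualMachine), '/', MAX_VMS, 'virtual machines')
    Disconnect(si)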
There will typically be multiple compute pods deployed within a Software-Defined Data
Center. The pod design of the VVD makes it easy to start small and gradually expand
over time to accommodate growth. In addition, the pod design makes it possible to
deploy separate compute pods with varying levels of quality, redundancy and
availability.
Storage can be any combination of supported vSphere storage. The type of storage used is determined based on cost, performance, business requirements, and desired service levels. It is recommended that you use Virtual SAN Ready Nodes, as they enable you to leverage the benefits included in the hybrid and all-flash storage configurations for Virtual SAN.
In this interactive simulation, we will enable Virtual SAN for hybrid storage. Each of the hosts in this cluster has one flash disk and two magnetic disks that are not partitioned or formatted. These drives will be used for the Virtual SAN datastore in this compute pod.
Before enabling Virtual SAN we’ll review configuration prerequisites. Virtual SAN requires a VMkernel adapter to be enabled for Virtual SAN traffic. Here we’ll check the configuration of this host.
Click on the pencil icon to edit or view the VMkernel adapters properties.
We can see that this host’s VMkernel adapter has been enabled for the Virtual SAN traffic type.
We also see that the MTU has been set to 9000 as prescribed in the design.
For this simulation, the configuration has already been enabled on all four hosts. Click
the Cancel button.
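Outside the simulation, the same prerequisite check can be scripted. This is a minimal pyVmomi sketch, assuming an existing connection and a vim.ClusterComputeResource already looked up (the name 'cluster' is a placeholder); it verifies that each host has a VMkernel adapter selected for Virtual SAN traffic with the prescribed 9000-byte MTU.

    # Sketch: verify the Virtual SAN VMkernel prerequisite on every host.
    def check_vsan_vmkernel(cluster):
        for host in cluster.host:
            cfg = host.configManager.virtualNicManager.QueryNetConfig('vsan')
            selected = set(cfg.selectedVnic or [])
            for vnic in cfg.candidateVnic:
                if vnic.key in selected:
                    status = 'OK' if vnic.spec.mtu == 9000 else 'MTU mismatch'
                    print(host.name, vnic.device, vnic.spec.mtu, status)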
5. Click on the SFO01-COMP01 cluster and then click the Manage tab.
6. Enabling Virtual SAN starts with a simple checkbox. Click on the Configure...
button to get started.
When adding drives to Virtual SAN, there are two options: Automatic and Manual. In this
demo, the aforementioned flash and magnetic disks will be added to Virtual SAN
automatically.
7. Select Automatic from the Add disks to storage drop-down menu. Click Next.
8. The Virtual SAN configuration wizard provides a network validation to ensure all
hosts meet the network prerequisites. Click Next.
9. The configuration of Virtual SAN with automatic disk claiming is ready to
complete. Click Finish to confirm your settings and configure Virtual SAN on this
cluster.
Virtual SAN is on and there are four hosts in the cluster as seen in the Resources
section.
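For reference, the same enablement can be performed through the vSphere API. This is a minimal pyVmomi sketch against the vSphere 6.x automatic-claim model shown in this simulation; 'cluster' stands in for the SFO01-COMP01 vim.ClusterComputeResource and is an assumption, not a lab step.

    from pyVmomi import vim

    # Enable Virtual SAN with automatic disk claiming, mirroring the wizard.
    spec = vim.cluster.ConfigSpecEx(
        vsanConfig=vim.vsan.cluster.ConfigInfo(
            enabled=True,
            defaultConfig=vim.vsan.cluster.ConfigInfo.HostDefaultInfo(
                autoClaimStorage=True)))      # the "Automatic" option

    task = cluster.ReconfigureComputeResource_Task(spec, modify=True)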
Disk Management
10. Click the Disk Management tab. The "Disk Management" section is used to
review the disk groups created in the automatic configuration. In a manual
configuration, it would show disks eligible for use by Virtual SAN.
A total of four flash disks and eight data disks have been automatically added to Virtual SAN: one flash disk and two magnetic disks from each host.
Virtual SAN provides fully integrated operations management for health and performance using the native vSphere Web Client. The Performance Service is now turned on and Virtual SAN is Healthy (the Stats object health now shows 'Healthy').
13. Click on the Storage icon to review the new Virtual SAN datastore that has been
created in this compute pod.
14. Click on comp01vc01.sfo01.rainpole.local and then SFO01 to expand those
sections. Finally, click on vsanDatastore.
Here we see that the new datastore has been added with the name of “vsanDatastore”.
Renaming a Datastore
15. Click the Actions menu and then select Rename. Click in the box and rename the datastore to SFO01A-DS-VSAN01-COMP01. Click OK to rename the datastore.
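The rename can also be done programmatically. A short pyVmomi sketch, reusing the connection from the earlier sketches; Rename_Task is the generic managed-entity rename.

    from pyVmomi import vim

    def find_datastore(content, name):
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.Datastore], True)
        try:
            return next(d for d in view.view if d.name == name)
        finally:
            view.Destroy()

    ds = find_datastore(content, 'vsanDatastore')
    ds.Rename_Task('SFO01A-DS-VSAN01-COMP01')   # same name as the UI step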
Conclusion
This concludes this Interactive Simulation on enabling software-defined storage using VMware Virtual SAN in the VMware Validated Design for SDDC.
Thank you!
Module 3 - VMware Validated Design for SDDC – Software-Defined Networking (15 minutes)
Introduction
This module introduces the fundamental concepts for software-defined networking with
VMware NSX in the VMware Validated Design for Software-Defined Data Center.
This lab is based on the VMware Validated Design for Software-Defined Data Center 2.0.
In this lab you'll be introduced to some key architecture concepts of the VMware
Validated Design for Software-Defined Data Center.
You can review these diagrams now and/or toggle back to them as you take the lab by
changing the active browser tab or window.
If you're ready to take the lab you can advance the manual and launch the interactive simulation.
Architecture reference diagrams for Module 3 are provided in the following sections.
Pod Architecture
VMware Validated Design uses a small set of common, standardized building blocks called pods. Each pod encompasses the combinations of servers, storage, and network equipment that are required to fulfill a specific role within the SDDC. These roles typically include management, edge, and compute, or a combination of edge and compute.
Pods can be set up with varying levels of hardware redundancy and varying quality of
components. For example, one compute pod could use full hardware redundancy for
each component (power supply through memory chips) for increased availability. At the
same time, another compute pod in the same setup could use low-cost hardware
without any hardware redundancy. With these variations, the architecture can cater to
the different workload requirements in the SDDC.
For both small and large setups, homogeneity and easy replication are important.
As the name implies, the Management Pod hosts the infrastructure components used to instantiate, manage, and monitor the private cloud. This includes the core infrastructure components, such as the Platform Services Controllers, vCenter Server instances, NSX Managers, and NSX Controllers, as well as SDDC monitoring solutions like vRealize Operations Manager and vRealize Log Insight.
In the VMware Validated Design for Software-Defined Data Center, Cloud Management Platform components are added, bringing vRealize Automation, vRealize Orchestrator and vRealize Business for Cloud on top of this solid and robust management platform.
In the new two-pod architecture in the VMware Validated Design for SDDC 3.x
(management and shared edge/compute) the SDDC network fabric provides external
connectivity to all pods. External connectivity is pooled into a Shared Edge and
Compute pod. The shared pod runs the software-defined networking services provided
by VMware NSX to establish north/south routing between the SDDC and the external
network as well as east/west routing inside the SDDC for the business workloads. This
shared pod may also host the SDDC workloads.
As the SDDC grows, additional compute-only pods can be added to support a mix of different types of workloads for different types of Service Level Agreements (SLAs).
Compute Pods host the SDDC workloads. An SDDC can mix different types of compute pods and provide separate compute pools for different types of Service Level Agreements (SLAs). For example, compute pods can be set up with varying levels of hardware redundancy and varying quality of components for different service levels. One compute pod could use full hardware redundancy for each component (power supply through memory chips) for increased availability. At the same time, another compute pod in the same setup could use low-cost hardware without any hardware redundancy. With these variations, the architecture can cater to the different workload requirements in the SDDC.
In this diagram, a logical representation of the pods and clusters in the VMware Validated Design for Software-Defined Data Center is shown.
The diagram represents the pods, clusters, host connectivity, storage, distributed
routing, virtual networks and placement of core platform components.
Note: while this diagram is based on the new two-pod architecture in the VMware Validated Design for SDDC 3.x, it is applicable to 2.x as well.
Recall from the Lab Introduction that VMware Validated Design uses a small set of common, standardized building blocks called pods. Each pod encompasses the combinations of servers, storage, and network equipment that are required to fulfill a specific role within the SDDC. These roles typically include management, edge, and compute, or a combination of edge and compute.
Pods can be set up with varying levels of hardware redundancy and varying quality of
components. For example, one compute pod could use full hardware redundancy for
each component (power supply through memory chips) for increased availability. At the
same time, another compute pod in the same setup could use low-cost hardware
without any hardware redundancy. With these variations, the architecture can cater to
the different workload requirements in the SDDC.
In the VMware Validated Design for SDDC 2.x the types of pods include:
• Management Pod
• Edge Pod
• Compute Pod
In this diagram the three-pod architecture of the VMware Validated Design for Software-Defined Data Center 2.x is shown.
The VMware Validated Design for Software-Defined Data Center 3.x introduces the two-
pod architecture.
• Management Pod
• Shared Edge and Compute Pod
As additional pods are added to the SDDC, they are Compute Pods only.
Recall that the physical network architecture used in the VMware Validated Designs is
tightly coupled with the pod architecture.
In this diagram, the three-pod architecture as prescribed in the VMware Validated Design for Software-Defined Data Center 2.x is shown.
In a three-pod architecture (management, edge and compute) the SDDC network fabric does not provide external connectivity. Most pod types, such as compute pods, are not set up with external network connectivity. Instead, external connectivity is pooled into the management and edge pods. The edge pod runs the software-defined networking services provided by VMware NSX to establish north/south routing between the SDDC workloads and the external network, as well as east/west routing inside the SDDC for the business workloads.
In this diagram the two-pod architecture as prescribed in the VMware Validated Design for Software-Defined Data Center 3.x is shown.
In this diagram, the three-pod architecture host connectivity as prescribed in the VMware Validated Design for Software-Defined Data Center 2.x is shown.
In this diagram, the two-pod architecture host connectivity as prescribed in the VMware Validated Design for Software-Defined Data Center 3.x is shown.
Within each region, the design instantiates two Platform Service Controllers and two
vCenter Server systems in the appliance form factor. This includes one PSC and one
vCenter Server for the management pod and one PSC and one vCenter Server for the
shared edge and compute pods. The design also joins the Platform Services Controller
instances to the same vCenter Single Sign-On domain and points each vCenter Server
instance to its respective Platform Services Controller instance.
Note: This diagram is applicable to both the 2.x and 3.x designs.
In this diagram, the multi-region and cross-vCenter deployment of NSX in the VMware
Validated Design for Software-Defined Data Center 2.x is shown.
In both regions, two separate NSX Manager instances are deployed, one for the Management pod and one for the Compute and Edge pods, along with an associated NSX Universal Controller Cluster. In Region B, the secondary NSX Manager instances automatically import the configuration of the NSX Universal Controller Clusters from Region A.
In this diagram, the multi-region and cross-vCenter deployment of NSX in the VMware
Validated Design for Software-Defined Data Center 3.x is shown.
The general architecture is the same as in the three-pod architecture; however, the NSX
services for the Compute Stack are deployed on an initial shared edge and compute pod
and all NSX services are added to a resource pool to guarantee resources for the
network virtualization platform in this stack.
In this diagram we depict the constructs for the universal distributed logical routing in
the SDDC across two regions. The configuration is largely the same across the
management stack and the compute stack.
The core platform solutions, such as the vCenter Server instances, Platform Services Controller instances, NSX Manager instances and the NSX Universal Controller cluster, run on a vSphere Distributed Port Group which has a VLAN provided down from the dual leaf switches. This port group is on the vSphere Distributed Switch for the Management Pod.
This same vDS has two uplink port groups – Uplink 01 and Uplink 02. On these port
groups we deploy NSX Edge Services Gateways in an ECMP - Equal Cost Multi-Pathing -
configuration for north/south routing for management.
This also occurs in the edge cluster (3-pod) or shared edge/compute cluster (2-pod) for the compute stack, and leverages the similar uplink port groups provided on the vDS for the Edge.
The NSX Edge Services Gateways pair in management will manage north/south traffic
for the SDDC infrastructure and the pair in edge will manage north/south traffic for
SDDC workloads. The use of ECMP provides multiple paths in and out of the SDDC. This
results in much faster failover times than deploying Edge service gateways in HA mode.
The design uses BGP as the dynamic routing protocol inside the SDDC for a simple
implementation. There is no need to plan and design access to OSPF area 0 inside the
SDDC. OSPF area 0 varies based on customer configuration.
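The resulting BGP configuration can be inspected through the NSX for vSphere REST API. This is a hedged sketch using the Python requests library; the edge identifier and credentials are placeholders, and the /api/4.0 routing path reflects the NSX-v API as an assumption for illustration.

    import requests

    NSX_MANAGER = 'https://172.27.12.65'    # NSX Manager seen later in this lab
    EDGE_ID = 'edge-1'                      # hypothetical ESG/UDLR identifier

    resp = requests.get(
        f'{NSX_MANAGER}/api/4.0/edges/{EDGE_ID}/routing/config/bgp',
        auth=('admin', '...'), verify=False)
    print(resp.status_code)
    print(resp.text)    # XML body describing BGP state and neighbors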
The management stack uses a single universal transport zone that encompasses all
management clusters and for the compute stack, the design uses a single universal
transport zone that encompasses all edge and compute clusters from all regions. The
use of a single Universal Transport Zone for management stack and one for the compute
stack supports extending networks and security policies across regions. This allows
seamless migration of management applications and business workloads
across regions either by cross vCenter vMotion or by failover recovery with Site
Recovery Manager.
Riding on top of these transport zones we instantiate a universal logical switch for use
as the transit network between the Universal Distributed Logical Routers (UDLRs) and
ESGs – this is done in both the management stack and the compute stacks. In this
diagram we’re just representing the construct with a single instance.
We then deploy a single NSX UDLR for the management cluster to provide east/west
routing across all regions as well as a single NSX UDLR for the compute and edge
cluster to provide east/west routing across all regions. Here again, we’re just
representing the construct with a single instance.
The UDLRs are peered via BGP with the north/south edges in their respective stacks, and the universal logical switch that is used as the transit network allows the UDLRs and all ESGs across regions to exchange routing information.
The distributed logical routing provides the ability to spin up logical networks that are
then accessible throughout the SDDC and out to the physical network.
In fact, we leverage this capability for the SDDC solutions themselves and create application virtual networks for these solutions.
1. Click here to open the interactive simulation. It will open in a new browser
window or tab.
2. When finished, click the “Return to the lab” link to continue with this lab.
VMware NSX reproduces the complete set of Layer 2 through 7 networking services in software. This includes switching, routing, access control, firewalling, and load balancing.
The VMware Validated Design uses a simple, scalable and resilient leaf-and-spine network topology for the IP transport layer. Servers in each pod are redundantly connected to the leaf switches, commonly referred to as top-of-rack switches, in their rack. High-speed uplinks connect the leaf and spine switches to establish an access layer for transport.
Hosts in each pod are logically grouped into clusters. For example, in the management pod, 4 Virtual SAN Ready Nodes are logically grouped into a management cluster. A vSphere Distributed Switch is established for each pod, and port groups are defined for services required by the cluster.
NSX Managers are deployed for both the management stack and compute stack and
provide a centralized management plane for the NSX for vSphere architecture.
Orchestration of the software-defined networks occurs through the management plane.
Universal NSX Controller Clusters are deployed for the network virtualization control
plane. Here control messages are used to set up networking attributes on logical
switches, as well as to configure distributed routing, and distributed firewalling in the
data plane.
2. Click the Host Preparation tab. From the NSX Manager drop-down menu, select
172.27.12.65 and then click SFO01-MGMT01 to expand it.
Host Preparation
Hosts are prepared with the network virtualization components and establish their
VXLAN Tunnel Endpoint (VTEP) connections for communication on the IP transport layer.
Here we see that the hosts have the components installed and VXLAN is configured on each of these management hosts.
And here we can see that the VXLAN Tunnel Endpoint connections for communication on the IP transport layer are established on each of these management hosts.
The design uses the Hybrid replication mode for multi-destination traffic. Hybrid mode offers operational simplicity while leveraging the Layer 2 Multicast capability of physical switches.
Hosts are joined to a transport zone that defines the scope of the logical switches, or virtual networks, across the SDDC. For the management stack, the design uses a single universal transport zone that encompasses all management clusters, and for the compute stack, it uses a single universal transport zone that encompasses all edge and compute clusters from all regions.
Here we see that the management cluster is connected to the Management Universal Transport Zone. The use of a single Universal Transport Zone for the management stack and one for the compute stack supports extending networks and security policies across regions.
7. Click Cancel.
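Transport zones can likewise be listed from the NSX Manager REST API, where they appear as network "scopes". A minimal sketch, assuming NSX for vSphere and placeholder credentials:

    import requests

    resp = requests.get('https://172.27.12.65/api/2.0/vdn/scopes',
                        auth=('admin', '...'), verify=False)
    print(resp.text)    # XML listing each transport zone and its clusters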
Logical Switches
Workload data is contained within logical switches provided by the data plane. The data
is carried over designated transport networks in the physical network and can be
extended across data centers in a multi-region deployment. For example, a virtual
network can be established across Management Pods in San Francisco and Los Angeles.
This enables workload migration and disaster recovery of applications without the need
to change the IP addressing.
9. Click on Mgmt-RegionA01-VXLAN.
On this virtual network, the solutions or solution portions tied to a region are deployed.
These include the vRealize Operations Remote Collectors, vRealize Log Insight master
and worker nodes, vRealize Automation Proxy Agents and vRealize Business for Cloud
Collectors.
13. Click the Networking and Security link to return to the Logical Switches page. Then click Mgmt-xRegion01-VXLAN. The second application virtual network is a region-independent virtual network.
14. Click NSX Edge.
Solutions that require load balancing for highly distributed and available deployments
use the load balancing capabilities of an NSX Edge Services Gateway. This ensures
availability, scalability and performance of the VMware Validated Designs’ SDDC
solutions.
On this virtual network, the solutions that are independent of region – that is, they are
portable to another region - are deployed. These include the vRealize Operations
master, master replica and data nodes; vRealize Automation appliances, IaaS Web
Servers, IaaS Manager Servers, and Distributed Execution Managers; and lastly, vRealize
Business for Cloud appliance. Both the vRealize Operations and vRealize Automation
solutions are load balanced by the NSX Edge Services Gateway.
16. Click the Networking and Security link to return to the Logical Switches page.
An additional virtual network, the Universal Transit Network, is established to link the distributed logical routing to the physical network.
Here we see that the transit network is connected to the Universal Distributed Logical Router. We also see two NSX Edge Services Gateways are deployed. These gateways provide north/south routing between the Software-Defined Data Center and the physical network with Equal Cost Multi-Pathing. Two edges are instantiated in the management pod to provide north/south access for the management stack, and two are instantiated in the edge pod to provide north/south access for the compute stack in a similar fashion.
18. Click the Networking and Security link to return to the Logical Switches page.
NSX Edges
In the global configuration we can see the size, hostname, syslog, high-availability
configuration and more.
The ECMP edge gateways are peered for route redistribution with the leaf switches for their pod.
Here we can see that this gateway is peered with the top-of-rack leaf switches in the pod as well as the Universal Distributed Logical Router.
24. Let's review and validate the same settings for the second edge gateway for
north / south routing. Click the Networking and Security link to go back to NSX
Edges, then click on SFOMGMT-ESG02.
• Click Interfaces - We can also view, modify or add IP addresses for the
interfaces of each vNic. Here we see that three vNics are connected. One uplink
to the transit network, and two uplinks to the physical network for access to the
leaf switches.
• Click the Routing tab.
• Click on BGP.
• Click Route Redistribution.
25. Click the Networking and Security link to return to NSX Edges.
We can also view, modify or add IP addresses for the interfaces of each vNic. Here we
see that three vNics are connected. One to the transit network, and one to each
application virtual network for the SDDC solutions.
Universal Distributed Logical Routers run in the kernel of each ESXi host and provide
centralized administration and routing configuration for the Software-Defined Data
Center. Note that ECMP is enabled.
The Universal Distributed Logical Router for each stack is peered for route redistribution with its respective northbound edges on a dedicated virtual network for transit services.
We also see that route redistribution is enabled for BGP, as well as OSPF.
This provides complete route redistribution and access between virtual networks in the
Software-Defined Data Center to and from the physical network.
32. Hit the 'Enter' key to start the ssh session. Hit 'Enter' again for the password.
In this lab environment, our top-of-rack leaf switches are simulated by an NSX Edge
Services Gateway.
33. Hit any key to enter the 'show ip bgp neighbors' command and hit 'Enter'.
Here we see that the top-of-rack leaf switches have established peering with the north /
south NSX Edge Service Gateways.
Here we see that the top-of-rack leaf switches have established peering with the
upstream spine switch.
35. Hit any key to enter the 'show ip bgp route' command and hit 'Enter'.
Here we see that routes to the virtual networks instantiated by NSX in the management stack have been distributed to the physical network from the north / south NSX Edge Service Gateways.
The 192.168.10.0 network is the Management Universal Transit Network and the
192.168.11.0 and 192.168.31.0 networks are the application virtual networks for the
SDDC solutions.
Conclusion
Full monitoring is integrated with the Cloud Operations solutions that are built into the foundation of the VMware Validated Designs. These include vRealize Operations and vRealize Log Insight.
Module 4 - VMware Validated Design for SDDC – Cloud Operations with vRealize Operations (15 minutes)
Introduction
This module introduces the fundamentals of cloud operations with VMware vRealize
Operations in the VMware Validated Design for Software-Defined Data Center.
This lab is based on the VMware Validated Design for Software-Defined Data Center 2.0.
In this lab you'll be introduced to some key architecture concepts of the VMware
Validated Design for Software-Defined Data Center.
You can review these diagrams now and/or toggle back to them as you take the lab by
changing the active browser tab or window.
If you're ready to take the lab you can advance the manual and launch the interactive
simulation.
Architecture reference diagrams for Module 4 are provided in the following sections.
Before outlining the deployment topology, we will outline the architecture of vRealize Operations.
vRealize Operations tracks and analyzes the operation of multiple data sources within the Software-Defined Data Center by using specialized analytics algorithms. These algorithms help vRealize Operations Manager to learn and predict the behavior of every object it monitors. Users access this information by using views, reports, and dashboards.
vRealize Operations contains functional elements that collaborate for data analysis and
storage, and support creating clusters of nodes with different roles.
For high availability and scalability, you can deploy several vRealize Operations Manager
instances in a cluster where they can have either of the following roles:
• Master Node. This is the initial node in a deployment and a cluster. In large-scale environments the master node manages all other nodes. In small-scale environments, the master node is the single, standalone vRealize Operations Manager node.
vRealize Operations can form two types of clusters according to the nodes that participate in a cluster, and both are used in the VMware Validated Designs:
• Analytics cluster - stores and analyzes the collected metrics, validates them against established thresholds, and sends alerts when required.
• Remote collector cluster - gathers metrics from the monitored components and forwards the data to the analytics cluster.
Each region contains its own remote collectors, whose role is to ease scalability by performing the data collection from the applications that are not subject to failover and periodically sending the collected data to the analytics cluster.
In a disaster, the design fails over only the analytics cluster, because the analytics cluster is the construct that analyzes and stores monitoring data.
vRealize Operations can monitor and perform diagnostics on all of the VMware Validated Design for SDDC systems by using management packs.
Management packs contain extensions and third-party integration software. They add dashboards, alert definitions, policies, reports, and other content to the inventory of vRealize Operations.
The solution is then configured to collect data from the virtual infrastructure and cloud management components.
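That collected inventory is also queryable over the vRealize Operations REST interface (the suite-api). A minimal sketch, assuming the vROps 6.x token flow and placeholder hostname and credentials:

    import requests

    VROPS = 'https://vrops-cluster-01.rainpole.local'   # hypothetical VIP
    auth = requests.post(f'{VROPS}/suite-api/api/auth/token/acquire',
                         json={'username': 'admin', 'password': '...'},
                         headers={'Accept': 'application/json'},
                         verify=False)
    token = auth.json()['token']

    resources = requests.get(
        f'{VROPS}/suite-api/api/resources',
        headers={'Accept': 'application/json',
                 'Authorization': f'vRealizeOpsToken {token}'},
        verify=False)
    print(resources.json()['pageInfo'])     # paging info for the inventory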
1. Click here to open the interactive simulation. It will open in a new browser
window or tab.
2. When finished, click the “Return to the lab” link to continue with this lab.
In this Hands-on Labs Interactive Simulation, we will review the deployment and configuration of vRealize Operations based on this comprehensive SDDC architecture.
2. Click MGMT-xRegion01-VXLAN
3. Click the Related Objects tab
4. Click the Virtual Machines button
The deployment consists of two components – the Analytics Cluster and the Remote Collector Cluster. A four-node analytics cluster stores and analyzes the collected metrics, validates them against established thresholds and sends alerts, when required. A two-node remote collector cluster is also in each region. Remote collectors gather the metrics for the monitored components and forward this data to the analytics cluster.
Here we see the virtual machines contained within this application virtual network. This
virtual network is independent of the region and can be extended across data centers in
a multi-region deployment.
Here we filter and search for the vRealize Operations nodes deployed on this network.
5. Type “vrops” and press the Enter key to filter Virtual Machines
Here they are. The members of the vRealize Operations Analytics cluster. The Remote
Collectors reside in another application virtual network that is designated to a specific
region.
Solutions that require load balancing for highly distributed and available deployments
use the load balancing capabilities of an NSX Edge Services Gateway in HA mode. This
ensures availability, scalability and performance of the VMware Validated Design for
SDDC solutions.
8. Click Interfaces and then Show All under the IP Address column of OneArmLb.
Here we see the addresses assigned to the load balancer. These are used by the virtual
servers created for the SDDC solutions.
9. Click Cancel
10. Click the Load Balancer button.
By filtering the list of pools we see that one pool has been created for vRealize
Operations.
13. Select the vROPs_Pool and click the Pencil Icon to Edit the entry.
This pool indicates the name, connection algorithm as well as the four nodes of the
analytics cluster we saw earlier including their IP, port and weighted values.
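The same pool definition is retrievable from the Edge's REST API. A short sketch; the edge identifier is a placeholder and the endpoint follows the NSX-v load balancer API as an assumption:

    import requests

    resp = requests.get('https://172.27.12.65/api/4.0/edges/edge-3'
                        '/loadbalancer/config/pools',
                        auth=('admin', '...'), verify=False)
    print(resp.text)   # XML: pool name, algorithm, members (IP/port/weight)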
Virtual Servers are the load-balanced endpoints that users connect to in order to access
the solutions.
16. Type “vrops” in the Filter box to filter the Virtual Servers.
By filtering the list of virtual servers we see that two virtual servers have been created for vRealize Operations. Here we select the virtual server for the HTTPS protocol.
17. Click the down arrow in the Virtual Server list and then select the
virtualServer-5 for the https Protocol.
18. Click the Pencil Icon to Edit the entry.
The virtual server indicates if it is enabled, which it is. It also indicates the application profile, a descriptive name, an IP address from those we saw earlier, the protocol, the port, the pool, and any connection rates and limits.
We also see that High Availability has been enabled for the cluster.
All four nodes in the vRealize Operations Analytics cluster are online and running. This includes the master node, the master replica node and two data nodes.
We see that the two Remote Collector nodes are also online and running in their application virtual network.
Now let’s explore the basic configuration of the vRealize Operations Management Packs
used for the SDDC.
Here we are connecting to the virtual server on the load balancer that is distributing and
managing our connection to the solution.
The VMware Validated Design for SDDC contains several solutions for network, storage, and cloud management and operations. You can monitor and perform diagnostics on all of them by using management packs, as well as the native management packs for vSphere and operating systems.
Management packs contain extensions that add dashboards, alerts definitions, policies,
reports, and other content to the inventory of vRealize Operations.
Here we see the addition of inventory items collected and available from these management packs.
We also see the dashboards added by the management packs, as well as the default Recommendations, Diagnose, Self Health and Workload Utilization dashboards.
Let’s take a look at an example dashboard. Here we select the Management Pack for
NSX and its main dashboard.
This dashboard provides an overall view of the health of key NSX components in our
SDDC and any alerts that have been generated.
Here we see that a critical alert is reported on the NSX Manager for our compute stack.
Let’s take a look at it.
Here we see a summary of the Health, Risk and Efficiency for the inventory object. We also see that an item appears under the Top Risk Alerts indicating that our ‘Backups are not using Secure FTP’.
38. Click Backups are not using secure FTP in the Risks section.
By selecting the alert, we can review what is causing the issue and any recommendations.
In this example, we see that only FTP has been selected for our backup protocol when
we should be using Secure FTP.
Module 5 - VMware Validated Design for SDDC – Cloud Operations with vRealize Log Insight (15 minutes)
Introduction
This module introduces the fundamentals of cloud operations with VMware vRealize Log
Insight in the VMware Validated Design for Software-Defined Data Center.
This lab is based on the VMware Validated Design for Software-Defined Data Center 2.0.
In this lab you'll be introduced to some key architecture concepts of the VMware
Validated Design for Software-Defined Data Center.
You can review these diagrams now and/or toggle back to them as you take the lab by
changing the active browser tab or window.
If you're ready to take the lab you can advance the manual and launch the interactive
simulation.
Architecture reference diagrams for Module 5 are provided in the following sections.
In this diagram, the dual-region deployment of vRealize Log Insight clusters in the VMware Validated Design for Software-Defined Data Center is shown.
Before outlining the deployment topology, we will outline the architecture of vRealize
Log Insight.
vRealize Log Insight collects data from ESXi hosts using the syslog protocol. It connects
to vCenter Server to collect events, tasks, and alarms data, and integrates with vRealize
Operations Manager to send notification events and enable launch in context. It also
functions as a collection and analysis point for any system capable of sending syslog
data. In addition to syslog data an ingestion agent can be installed on Linux or Windows
servers to collect logs. This agent approach is especially useful for custom logs and
operating systems that don't natively support the syslog protocol, such as Windows.
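Because plain syslog is the common denominator, any syslog-capable source can feed the cluster. A minimal sketch using only the Python standard library to send a test event to the cluster's integrated load balancer VIP (the hostname is a placeholder):

    import logging
    from logging.handlers import SysLogHandler

    # UDP syslog to the vRealize Log Insight ILB VIP (hypothetical name).
    handler = SysLogHandler(address=('vrli-cluster-01.rainpole.local', 514))
    log = logging.getLogger('vvd-demo')
    log.addHandler(handler)
    log.warning('test event from a syslog-capable source')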
For high availability and scalability, several instances of vRealize Log Insight can be
deployed in a cluster where they can have either of the following roles:
• Master Node - Required initial node in the cluster. The master node is
responsible for queries and log ingestion. The Web user interface of the master
node serves as the single pane of glass for the cluster. All queries against data
are directed to the master, which in turn queries the workers as appropriate.
• Worker Node - Enables scale-out in larger environments. A worker node is responsible for ingestion of logs. A worker node stores logs locally. If a worker node is down, the logs on that worker become unavailable. You need at least two worker nodes to form a cluster with the master node.
• Integrated Load Balancer - Provides high availability. The ILB runs on one of
the cluster nodes. If the node that hosts the ILB Virtual IP (VIP) address stops
responding, the VIP address is failed over to another node in the cluster.
In the dual-region deployments of the VMware Validated Design for Software-Defined Data Center, vRealize Log Insight clusters are established in each region and consist of a minimum of three nodes – a master node and two worker nodes. This allows for continued availability and increased log ingestion rates.
vRealize Log Insight clients connect to the load balancer VIP address and use the user interface and ingestion (via syslog or the Ingestion API) to send logs to vRealize Log Insight.
The compute and storage resources of the vRealize Log Insight instances allow for scale-up, and log archiving can be performed onto an NFS export that each vRealize Log Insight node can access.
vRealize Log Insight supports alerts that trigger notifications about its health, including system alerts, content pack alerts, and user-defined alerts.
vRealize Log Insight integrates with vRealize Operations Manager to provide a central location for monitoring and diagnostics, sending notification events and enabling launch in context.
We also protect the vRealize Log Insight deployment by providing centralized role-based authentication and secure communication with the other components in the Software-Defined Data Center. The design enables role-based access control by using an existing Active Directory to realize fine-grained role- and privilege-based access for administrator and operator roles.
To simplify the design implementation for log sources that are syslog capable, we configure syslog sources to send log data directly to vRealize Log Insight – syslog is natively supported.
We forward syslog data in vRealize Log Insight from Region B to Region A by using the Ingestion API. The vRealize Log Insight Ingestion API uses TCP communication and, in contrast to syslog, the forwarding module supports additional features when using the Ingestion API.
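For illustration, this is what an event posted over the Ingestion API (CFAPI) can look like. Port 9543 and the /api/v1/messages/ingest path follow the Log Insight ingestion API; the agent UUID shown is a placeholder:

    import requests

    url = ('https://vrli-cluster-01.rainpole.local:9543'
           '/api/v1/messages/ingest/00000000-0000-0000-0000-000000000000')
    event = {'messages': [{'text': 'forwarded test event',
                           'fields': [{'name': 'source_region',
                                       'content': 'B'}]}]}
    print(requests.post(url, json=event, verify=False).status_code)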
vRealize Log Insight collects logs from the following virtual infrastructure and cloud
management components using configured Content Packs.
In this diagram the distributed deployment of vRealize Log Insight in conjunction with
the application virtual networks, distributed routing and load balancing services for the
components is shown.
The interactive simulation will allow you to experience steps which are too time-
consuming or resource intensive to do live in the lab environment.
1. Click here to open the interactive simulation. It will open in a new browser
window or tab.
2. When finished, click the “Return to the lab” link to continue with this lab.
VMware vRealize Log Insight provides centralized log aggregation and log analytics,
increasing the operational visibility and facilitating troubleshooting and root cause
analysis. The VMware Validated Design for SDDC deploys a separate vRealize Log
Insight cluster in each region.
6. Click Cluster.
Each cluster is comprised of one master and two worker nodes. The design leverages the capabilities of software-defined networking by establishing virtual networks for this SDDC operations solution. These networks are referred to as application virtual networks. All nodes are configured with an integrated load balancer. This design allows for continued availability and increased log ingestion rates.
Once the vRealize Log Insight cluster has been deployed, the various components of the
Software Defined Data Center are configured to forward logs to the cluster where they
are processed and analyzed.
Role Based Access Control provides a powerful way to control access to events within vRealize Log Insight. Not only does it make it possible to support multiple users and groups within the product, it also makes it possible to restrict access based on job function using roles and data sets.
8. Click Hosts
Administrators can view each host or device that has at least one local event in the
cluster. In addition to the hostname you can determine the last time vRealize Log Insight
received an event.
9. Click Agents
vRealize Log Insight provides the ability to use an agent to collect logs from clients. The agent supports sending events over syslog, abiding by the syslog RFC, and over the solution’s ingestion API – this means the agent will work with any remote syslog destination. The agent supports both Linux and Microsoft Windows.
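For reference, the agent is pointed at the cluster through its liagent.ini configuration file. A minimal sketch of the [server] section, with a placeholder hostname; proto can be cfapi (the Ingestion API) or syslog:

    [server]
    hostname=vrli-cluster-01.rainpole.local
    proto=cfapi
    port=9543
    ssl=yes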
Review the vCenter Server and Platform Services Controller Appliances Sending Syslog
Data.
Here we can see another custom group has been created for all of the Microsoft
Windows-based IaaS instances for vRealize Automation.
vRealize Log Insight has tight integration with other VMware products. One of the out-of-
the-box integrations is with VMware vSphere. This integration allows Log Insight to
perform two operations:
* Collect events, tasks, and alarms from the vCenter Server database and ingest them as log messages, and
* Configure ESXi hosts to forward their syslog data to vRealize Log Insight.
Events, Tasks and Alarms Data.
vRealize Log Insight integrates directly with vRealize Operations to provide insight
between the structured and unstructured data. For example, inventory mapping, alert
integration, and a two-way launch in context. With this integration you get a single pane
of glass from which to ensure the health of your environment, get notified of detected
issues, automatically respond to issues detected and perform complete troubleshooting
and root cause analysis.
Review that vRealize Log Insight is Integrated with vRealize Operations and Uses its Load-Balanced IP. Alerts Integration and Launch in Context are Also Enabled.
13. Click the List Dropdown (Hamburger Icon) and select Content Packs
vRealize Log Insight content packs are immutable, or read-only, plugins that provide pre-defined knowledge about a specific set of events in a format easily digestible by administrators, engineers, monitoring teams, and executives.
A content pack marketplace is natively available within the product from the Content Packs page. This provides you with immediate access to content packs without the need to leave the product.
Review the Content Packs that are Installed with the VVD for SDDC Architecture.
By default, vRealize Log Insight ships with the vSphere content pack, but additional
content packs can be imported as needed.
Here we can see additional content packs that are deployed and configured in the
VMware Validated Design for SDDC. These include packs for: Virtual SAN, NSX, vRealize
Automation, vRealize Operations and vRealize Orchestrator.
• VMware - NSX
• VMware - vRealize Orchestrator
• VMware - Virtual SAN
• VMware - vRealize Automation
• VMware - vRealize Operations
A content pack is made up of information that can be saved from either the Dashboards
or Interactive Analytics pages in Log Insight. These include:
• Queries
• Fields
• Aggregations
• Alerts
• Dashboards
The dashboards page has an overview section. It contains mostly chart widgets and allows you to quickly digest log data and determine potential issues in your environment.
Review some of the dashboards that come with the vRealize Automation content pack
included in the architecture.
Review dashboard VMware - vSphere to look at some of the dashboards that are
included by default with the out-of-the-box vSphere content pack.
Content Pack dashboards cannot be modified, but you can clone these dashboards to
your Custom Dashboards space and modify the clones. A clone has been made and
named 'MyDashboards'.
You can add, modify, and delete dashboards in your Custom Dashboards space.
Log Insight provides high performance search and visualization of log data for efficient troubleshooting across heterogeneous environments with its Interactive Analytics. There is no need to learn a proprietary query language. Simply start typing, and the search query and results will auto-populate.
Here in this example, we are troubleshooting a storage performance issue over the last
24 hours.
27. Click Scope of Time dropdown menu (Latest 5 minutes of data) and select
Latest 24 Hours of Data
Review Results
Adding filters enables us to easily refine search criteria to specified threshold and
constraints.
31. Click + Add Filter and click Text Filter to use the default
32. Click in search filter field
33. Type ‘vmw_esxi_scsi_l' or hit 'left-arrow' key and select
'vmw_esxi_scsi_latency (VMware-vSphere)'
34. Click comparison dropdown (=) and select ‘>’ for the Filter
35. Click in the items search field and type ‘10000' or hit 'left-arrow' key
36. Click the Search button
Chart graphs can be modified based on visual preference. Change the chart graph.
These visualizations and queries can then be saved to a custom dashboard with a custom name and description.
Notice that the custom dashboard has been added to My Dashboards. Let's now delete it.
Conclusion
This concludes the review of deployment and configuration of vRealize Log Insight in the
VMware Validated Design for SDDC.
Module 6 - VMware Validated Design for SDDC – Cloud Management and Automation with vRealize Automation (15 minutes)
Introduction
This module introduces the fundamentals of cloud management and automation with
VMware vRealize Automation in the VMware Validated Design for Software-Defined Data
Center.
This lab is based on the VMware Validated Design for Software-Defined Data Center 2.0.
In this lab you'll be introduced to some key architecture concepts of the VMware
Validated Design for Software-Defined Data Center.
You can review these diagrams now and/or toggle back to them as you take the lab by
changing the active browser tab or window.
If you're ready to take the lab you can advance the manual and launch the interactive
simulation.
Architecture reference diagrams for Module 6 are provided in the following sections.
Let’s take a look at the logical deployment of the Cloud Management Portal systems.
The vRealize Automation virtual appliance (seen as ”vRA”) includes the cloud management system and database services. vRealize Automation allows self-service provisioning and management of cloud services, as well as authoring blueprints, administration, and governance.
The vRealize Automation IaaS Web server (seen as “IWS”) provides a user interface within the vRealize Automation portal Web site for the administration and consumption of IaaS components.
• Two virtual machines are deployed to run the vRealize Automation IaaS Web server services, and to achieve redundancy, active/active load balancing is enabled for higher availability.
The vRealize Automation IaaS Model Manager (seen as "IMS") and Distributed Execution Management (seen as "DEM") servers are at the core of the vRealize Automation IaaS platform. The vRealize Automation IaaS Model Manager and DEM server support several functions:
• Manages the integration of vRealize Automation IaaS with external systems and
databases.
• Provides multi-tenancy.
• Provides business logic to the DEMs.
• Manages business logic and execution policies.
• Maintains all workflows and their supporting constructs.
A Distributed Execution Manager (DEM) runs the business logic of custom models,
interacting with the database and with external databases and systems as required.
DEMs also manage cloud and physical machines.
Each DEM instance acts in either an Orchestrator role or a Worker role. The DEM
Orchestrator monitors the status of the DEM Workers. If a DEM worker stops or loses the
connection to the Model Manager, the DEM Orchestrator puts the workflow back in the
queue. It manages the scheduled workflows by creating new workflow instances at the
scheduled time and allows only one instance of a particular scheduled workflow to run
at a given time. It also preprocesses workflows before execution. Preprocessing includes
checking preconditions for workflows and creating the workflow's execution history.
• Two virtual machines are deployed to run both the vRealize Automation IaaS Model Manager and the DEM Orchestrator services in a load-balanced pool.
The vRealize Automation IaaS DEM Workers are responsible for the provisioning and de-provisioning tasks initiated by the vRealize Automation portal. DEM Workers communicate with vRealize Automation endpoints.
• Two DEM Worker virtual machines are deployed, each with three DEM Worker instances.
The vRealize Automation IaaS Proxy Agent is a Windows program that proxies
information gathering from vCenter Server back to vRealize Automation. The IaaS Proxy
Agent server provides the following functions.
1. vRealize Automation IaaS Proxy Agent can interact with different types of
hypervisors and public cloud services, such as Hyper-V and AWS. For this design,
only the vSphere agent is used.
2. vRealize Automation does not itself virtualize resources, but works with vSphere
to provision and manage the virtual machines. It uses vSphere agents to send
commands to and collect data from vSphere.
• Two vRealize Automation vSphere Proxy Agent virtual machines are deployed on
a separate virtual network in each region. This will allow for independent failover
of the main vRealize Automation components across regions.
vRealize Business for Cloud Standard provides end-user transparency in the costs that are associated with operating workloads. It gathers and aggregates the financial cost of workload operations, providing greater visibility both during a workload request and on a periodic basis, regardless of whether the costs are "charged back" to a specific business unit, or are "shown back" to illustrate the value that the SDDC provides.
vRealize Business integrates with vRealize Automation to display costing during
workload request and on an ongoing basis with cost reporting by user, business group or
tenant. Additionally, tenant administrators can create a wide range of custom reports to
meet the requirements of an organization.
In this diagram a logical view of the initial business groups, reservations and fabric groups in vRealize Automation is shown.
The VMware Validated Design for SDDC implements a single vRealize Automation tenant
with two initial business groups:
1. Production
2. Development.
Within each business group the tenant administrators are able to manage users and groups, apply tenant-specific branding, enable notifications, configure business policies, and manage the service catalog.
The design uses NSX logical switches to abstract the solutions and services onto
application virtual networks. This abstraction allows the solutions to be hosted in any
given region regardless of the underlying physical infrastructure such as network
subnets, compute hardware, or storage types. This design places the vRealize
Automation application and its supporting services in Region A. The same instance of
the application manages workloads in both Region A and Region B.
In this diagram, the use of the content library to share VM-related content - like
templates consumed and used by vRealize Automation across regions - is shown.
Here we illustrate how standard virtual machine templates are consumed by vRealize Automation in Region A. Here you can see Microsoft Windows and two Linux distributions. These templates are then imported into a published content library in Region A.
Region B subscribes to the Region A library and synchronizes the content. The templates
are then exported to a format consumable by vRealize Automation in Region B.
In both regions, the templates and content libraries are stored on NFS.
1. Click here to open the interactive simulation. It will open in a new browser
window or tab.
2. When finished, click the “Return to the lab” link to continue with this lab.
vRealize Automation provides the service catalog and self-service portal for the
Software-Defined Data Center.
• The service catalog is a library of templates and services that can be deployed in the private cloud (a request sketch follows this list).
• The self-service portal is the interface used to author, administer, and consume
the templates and services that exist in the Service Catalog.
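As a sketch of that request flow, the vRealize Automation 7.x REST API can list the catalog items a user is entitled to. The hostname, tenant, and credentials are placeholders:

    import requests

    VRA = 'https://vra01svr01.rainpole.local'   # hypothetical appliance VIP
    token = requests.post(f'{VRA}/identity/api/tokens',
                          json={'username': 'user@rainpole.local',
                                'password': '...', 'tenant': 'rainpole'},
                          verify=False).json()['id']

    items = requests.get(
        f'{VRA}/catalog-service/api/consumer/entitledCatalogItems',
        headers={'Authorization': f'Bearer {token}'},
        verify=False).json()
    for item in items.get('content', []):
        print(item['catalogItem']['name'])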
Virtual Appliances
Let's take a look at the virtual machine appliances attached to the network.
Here we see the virtual machines contained within this application virtual network. This
virtual network is independent of the region and can be extended across data centers in
a multi-region deployment.
3. Filter and search for the vRealize Automation and vRealize Orchestrator systems
deployed on this network by clicking in the Filter box and pressing the 'Enter'
key.
Here they are. The systems that comprise the vRealize Automation platform. Note that
the IaaS Proxy Servers reside in another application virtual network that is designated to
a specific region.
NSX Edges
4. Let's take a look at the NSX Edge Appliances by clicking the NSX Edges button.
Solutions that require load balancing for highly distributed and available deployments
use the load balancing capabilities of an NSX Edge Services Gateway in HA mode. This
ensures availability, scalability and performance of the VMware Validated Design for
SDDC solutions.
Here we see the addresses assigned to the load balancer. These are used by the virtual
servers created for the SDDC solutions.
8. Click inside the Filter search box and then press the 'Enter' key.
9. By filtering the list of pools we see that multiple pools have been created for the distributed vRealize Automation components.
10. Select the vra-svr-443.
This pool indicates the name, connection algorithm as well as the two nodes of the vRealize Automation appliance we saw earlier, including their IP, port and weighted values.
Virtual Servers are the load-balanced endpoints that users connect to in order to access the solutions.
12. Click inside the Filter search box and then press the 'Enter' key.
By filtering the list of virtual servers we see that five virtual servers have been created for vRealize Automation. Here we select the virtual server for the vRealize Automation appliance’s HTTPS protocol.
13. Select the vra-svr-443 server and click the Pencil Icon to edit its properties.
The virtual server indicates whether it is enabled, which it is. It also indicates the
application profile, a descriptive name, an IP address from those we saw earlier, the
protocol, the port, the pool, and any connection rates and limits.
14. When you have finished reviewing the settings, click the 'Cancel' button.
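The same pool and virtual server configuration we just clicked through can also be read from the NSX Edge directly. The following is a minimal sketch against the NSX for vSphere load balancer API; the NSX Manager address, credentials, and edge ID are placeholders for whatever your environment uses.

    # Minimal sketch: read an NSX Edge load balancer configuration.
    import requests
    import xml.etree.ElementTree as ET

    NSX = "https://nsxmanager.example.local"  # placeholder NSX Manager
    EDGE_ID = "edge-1"                        # placeholder edge identifier

    resp = requests.get(f"{NSX}/api/4.0/edges/{EDGE_ID}/loadbalancer/config",
                        auth=("admin", "password"), verify=False)
    root = ET.fromstring(resp.text)

    # Print each pool with its algorithm and member nodes, mirroring what we
    # reviewed for vra-svr-443 in the UI (IP, port, and weight per member).
    for pool in root.iter("pool"):
        print(pool.findtext("name"), pool.findtext("algorithm"))
        for m in pool.iter("member"):
            print("  ", m.findtext("ipAddress"), m.findtext("port"),
                  "weight", m.findtext("weight"))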
Here we see that virtual networks have been created for Web, App and Database roles
in both Production and Development.
Here we see that the Universal Distributed Logical Router has been connected to each of
these virtual networks and has been assigned a network address.
Now that we have reviewed the deployment, let’s log in and explore some of the
vRealize Automation configuration in this architecture.
Here we are at the Home tab in vRealize Automation. On-screen widgets provide
information on such items as open requests, recent requests, items owned by the user,
notifications, such as approvals, and more.
22. Let's review the Endpoints by clicking Infrastructure in the Top
Navigation.
23. Click Endpoints and then Endpoints again.
Here we’ve added endpoints for our vSphere and vRealize Orchestrator. Note that this is
pointing to the vRealize Orchestrator instance deployed on the application virtual
network and load balanced by an NSX Edge Services Gateway. The vSphere Endpoint
allows vRealize Automation to communicate with the vSphere environment and discover
compute resources, collect data, and provision machines. The vRealize Orchestrator
Endpoint allows vRealize Automation to communicate with NSX and run out-of-the-box
or custom workflows during the machine lifecycle.
24. Let's take a look at the Compute Resources. Start by clicking '<Infrastructure'
25. Next, click Compute Resources and then Compute Resources again.
Here we see that the compute and edge clusters have been added as compute
resources.
After a compute resource is added to a fabric group, a fabric administrator can create
reservations on it for specific business groups. Users in those business groups can then
be entitled to provision machines on that compute resource.
Information about the compute resources on each infrastructure source endpoint and
machines provisioned on each compute resource is collected at regular intervals.
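For reference, reservations such as the one we are about to open can also be retrieved through the vRealize Automation 7.x REST API. This is a minimal sketch; the appliance FQDN, tenant name, and credentials shown are placeholders.

    # Minimal sketch: list reservations via the vRA 7.x REST API.
    import requests

    VRA = "https://vra.example.local"  # placeholder vRA appliance or VIP

    # Obtain a bearer token from the identity service.
    token = requests.post(f"{VRA}/identity/api/tokens",
                          json={"username": "admin@example.local",
                                "password": "password",
                                "tenant": "rainpole"},  # placeholder tenant
                          verify=False).json()["id"]
    headers = {"Authorization": f"Bearer {token}",
               "Accept": "application/json"}

    # Reservations (for example, SFO01-Comp01-Prod-Res01) are exposed by the
    # reservation service; results are paged under "content".
    resp = requests.get(f"{VRA}/reservation-service/api/reservations",
                        headers=headers, verify=False)
    for r in resp.json()["content"]:
        print(r["name"], "enabled" if r.get("enabled") else "disabled")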
26. To take a look at the current Resource Reservations, click <Infrastructure, then
Reservations and Reservations again.
27. Here we see that a reservation has been created for our production business
group. To view more details, click on SFO01-Comp01-Prod-Res01. Let’s edit
this reservation to remove NFS and add Virtual SAN in its place. Start by clicking
on the Resources tab.
28. Click the Down Arrow on the scroll bar to scroll down the list of storage paths
and uncheck SFO01A-DS-NFS-Primary-VRA.
29. Click Yes to confirm you want to remove this storage path.
30. Now check SFO01A-DS-VSAN01-COMP01 to add Virtual SAN storage to the
reservation.
31. Click in the “This Reservation Reserved” field twice to enter “4000” and then
set the “Priority” field to “1”.
32. Click OK to confirm.
33. We can also assign network resources to the reservation. To do this, start by
clicking the Up Arrow on the scroll bar and select the Network tab.
Here we see that the production virtual networks have been added to the reservation
and assigned a Network Profile.
Network Profiles
When a custom property is not used, vRealize Automation uses a reservation network
path for the machine NIC, along with the network profile assigned to that network path.
Network profiles then configure network settings during machine provisioning. They
may also specify the configuration of NSX Edge devices.
You can create a network profile to define a type of available network. These include
external network profiles, as well as templates for network address translation and
routed network profiles, which build NSX logical switches and the appropriate routing
settings for a new network path to be used by provisioned machines, as assigned in the
blueprint.
37. To view the current network profiles, click the Network Profiles tab.
38. Click on the Key Pairs link.
39. Click Ext-Net-Profile-Production-Web and click the Pencil Icon.
You can specify the ranges of IP addresses that network profiles can use. Each IP
address in the specified ranges that is allocated to a machine is reclaimed for
reassignment when the machine is destroyed.
Blueprints
Blueprints are built with a dynamic drag-and-drop design canvas, allowing you to
choose any supported components, drag them onto the canvas, build dependencies,
and publish to the catalog.
The supported components include machine shells for all the supported OOTB
platforms, software components, endpoint networks, NSX-provided networks, XaaS
components, and even other blueprints that have already been published.
Once the components are dragged over, you can build the necessary logic and any
needed integration for each component of that particular service.
Application Authoring is done from within the canvas with the same drag-and-drop
capability.
vRealize Automation is the consumption plane for NSX. It consumes and automates
NSX, including the ability to dynamically build on-demand network services. The canvas
also allows the drag-and-drop of NSX security groups and supports app-centric isolation.
Blueprints can also be exported as YAML code using the CloudClient. Once exported, you
can edit the content however you see fit, then import it as a new blueprint. And since it
is just text, the YAML can be shared, edited, and imported into other environments.
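As an illustration of that workflow, the export can be scripted. The sketch below drives CloudClient from Python; the CloudClient path and content ID are placeholders, and the command syntax is an assumption based on CloudClient 4.x documentation, so check the help output of your version.

    # Minimal sketch: script a blueprint export with CloudClient. The paths,
    # IDs, and exact command syntax are assumptions; verify them against
    # your CloudClient version before use.
    import subprocess

    CLOUDCLIENT = "./cloudclient.sh"  # placeholder install location

    def cc(command: str) -> str:
        """Run a single CloudClient command and return its output."""
        result = subprocess.run([CLOUDCLIENT, command],
                                capture_output=True, text=True, check=True)
        return result.stdout

    # List the available content, then export one blueprint as a zip of YAML.
    print(cc("vra content list"))
    print(cc("vra content export --path blueprint-export.zip --id my-3-tier-app"))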
Here we see that we have three vSphere Machines: Web-0, App-0, and DB-0.
47. Let’s look at their configuration on the canvas. Start by clicking on Web-0.
We see that we will use a specific machine prefix for the virtual machine name.
48. To see how the Web-0 will be built, click on Build Information.
Here we see that this machine will be created by cloning an Ubuntu Linux template and
running a customization.
49. To view the resources the machine will use, click the Machine Resources tab.
We can also set the minimum and maximum CPU, Memory and Storage for this web
server.
50. Let's take a look at the network settings. Click the Network tab.
And we assign a network profile to allocate the network settings during machine
provisioning. This machine will be assigned a static IP address.
51. We can review the settings for the application and database servers as well. Let's
start with the application server by clicking on App-0.
We can see that the application server will also use a machine prefix when it is created.
52. By clicking on the Network tab, we can see that it too will use a static IP
address.
53. Now let's take a look at the database server. Click on DB-0.
54. It too will use a machine prefix, and by clicking on the Network tab we can see
that it will also be assigned a static IP address.
55. When you have finished, click the Cancel button at the bottom and click Yes to
discard your changes.
56. Let's take a look at the Catalog and request an item. To get started, click on the
Catalog tab in the Top Navigation.
The catalog provides a self-service portal for requesting services and also enables
business users to manage their own provisioned resources.
The items that are available in the catalog are grouped into service categories, which
helps you find what you’re looking for. By selecting a catalog item, you can view its
details to confirm that it is what you want before submitting a request.
When you request a catalog item, a form appears where you can provide information
such as the reason for the request.
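The same request can also be submitted through the vRealize Automation catalog API instead of the portal. This is a minimal sketch assuming a vRA 7.x environment and a bearer token obtained as in the earlier reservation example; the catalog item name and token are placeholders.

    # Minimal sketch: request a catalog item via the vRA 7.x catalog service.
    import requests

    VRA = "https://vra.example.local"                  # placeholder
    headers = {"Authorization": "Bearer <token>",      # placeholder token
               "Accept": "application/json"}

    # Find the entitled catalog item by name (for example, the 3 Tier App).
    items = requests.get(
        f"{VRA}/catalog-service/api/consumer/entitledCatalogItems",
        headers=headers, params={"$filter": "name eq '3 Tier App'"},
        verify=False).json()["content"]
    item_id = items[0]["catalogItem"]["id"]

    # Fetch the request template, set a description, and submit the request.
    base = f"{VRA}/catalog-service/api/consumer/entitledCatalogItems/{item_id}/requests"
    template = requests.get(f"{base}/template", headers=headers, verify=False).json()
    template["description"] = "My 3 Tier App"
    resp = requests.post(base, headers=headers, json=template, verify=False)
    print(resp.status_code)  # 201 indicates the request was accepted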
57. Let's submit a request for the 3 Tier App by clicking on the Request button.
58. Click in the Description field twice to add 'My 3 Tier App'.
You can also provide parameters for the request. For example, you may be able to
specify the number of CPUs, the amount of memory, or the amount of storage for a
machine.
Here we will review the options and parameters for the 3-tier application.
For App-0, we have the option of requesting anywhere from 1 to 4 CPUs and 1024 to
4096 MB of memory. As we saw earlier when reviewing the Blueprint, the storage is
fixed at 16GB.
For Web-0, we have the same options as App-0. We can request additional CPU and
Memory resources, but the storage resource is fixed.
Again, for DB-0, we have the same options for CPU and Memory, but cannot modify the
storage.
We can see that the lease time has been set to 30 days, but can be adjusted for up to
90 days.
After submitting a request, it may be subject to configured approvals. For example, the
unmodified request could require no approval, but any modification to include additional
CPU or Memory resources could require a request to go through an approval process.
You can review the Requests tab to track the progress of a request, including whether it
is pending approval, in progress, or completed.
If the request results in a catalog item being provisioned, it is added to your list of
items on the Items tab.
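Programmatically, the same tracking is available from the catalog service. Here is a minimal sketch, reusing the token from the earlier examples; the request ID is a placeholder that would come from the submission response.

    # Minimal sketch: poll the state of a submitted catalog request.
    import requests

    VRA = "https://vra.example.local"              # placeholder
    headers = {"Authorization": "Bearer <token>",  # placeholder token
               "Accept": "application/json"}
    request_id = "<request-id>"                    # placeholder from submission

    state = requests.get(
        f"{VRA}/catalog-service/api/consumer/requests/{request_id}",
        headers=headers, verify=False).json()["state"]
    print(state)  # for example PENDING_PRE_APPROVAL, IN_PROGRESS, or SUCCESSFUL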
Now that we’ve requested the 3-tier app from the catalog, let’s return to the vSphere
Web Client and see the results.
64. Go back to the vSphere Web Client by clicking on the first tab, vSphere Web
Client. We can see that our 3 Tier App has been deployed, customized, and
powered on. These include prod-app-00003, prod-db-00003 and prod-
web-00003, each on its own virtual network.
Let’s review the IP addresses and the hosts on which each of the virtual machines has
been deployed.
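If you prefer to gather this from a script rather than the UI, a minimal pyVmomi sketch is shown below; the vCenter details are placeholders, and the prod- name prefix matches the machine prefix we saw in the blueprint.

    # Minimal sketch: list guest IP addresses and hosts with pyVmomi.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab self-signed certificate
    si = SmartConnect(host="vcenter.example.local",  # placeholder vCenter
                      user="administrator@vsphere.local",
                      pwd="password", sslContext=ctx)

    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        if vm.name.startswith("prod-"):  # e.g. prod-web-00003
            host = vm.runtime.host.name if vm.runtime.host else "n/a"
            print(vm.name, vm.guest.ipAddress, host)
    Disconnect(si)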
Next, we’ll log in to the web server and see if we can ping the database and application
servers across the distributed logical router.
68. Click on the Black Box for the VM to Launch the Remote Console.
69. Press the 'Enter' key to enter the password.
70. We can view the IP address by typing 'ifconfig' and pressing the 'Enter' key.
71. Type “ping -c 5 172.11.11.20” to ping the Database Server and press the
'Enter' key.
The successful pings mean we can communicate with the Database server from the
Web server.
72. Type “ping -c 5 172.11.12.20” to ping the Application Server and press the
'Enter' key.
We can also ping the Application server, meaning we can communicate with it too.
Conclusion
This concludes the review of the deployment and configuration of vRealize Automation
in the VMware Validated Design for SDDC.
Conclusion
Thank you for participating in the VMware Hands-on Labs. Be sure to visit
http://hol.vmware.com/ to continue your lab experience online.
Version: 20170502-055051