HANDS-ON LABS MANUAL - 2024

HOL-2446-05-HCI

Optimize and Modernize


Data Centers
HOL-2446-05-HCI: Optimize and Modernize Data Centers

Table of contents
Lab Overview - HOL-2446-05-HCI - VMware Cloud Foundation - Optimize and

Modernize Data Centers 6

Lab Overview and Guidance ............................................................. 6

VMware Cloud Foundation ............................................................... 11

Module 1 - Cloud Foundation Overview (15 minutes) Beginner 13

Cloud Foundation Overview ............................................................ 13

Module 2 - Lifecycle Management (30 minutes) Beginner 28

Patching and Upgrading..................................................................28

Conclusion..................................................................................... 50

Module 3 - Certificate Management (30 minutes) Beginner 52

Certificate Management ..................................................................52

Module 4 - Password Management (30 minutes) Beginner 75

Password Management ................................................................... 75

Module 5 - vSAN SPBM and Availability (30 minutes) Beginner 90

Introduction................................................................................... 90

What's new in vSAN 8 .................................................................... 90

Storage Policy Based Management ..................................................92

Advanced Storage Based Policy Management ................................. 110

Reserved Capacity .........................................................................142

Conclusion..................................................................................... 151

Module 6 - vSAN - Monitoring, Health, Capacity and Performance (30 minutes)

Beginner 152

Introduction .................................................................152
vSAN Health Check Validation ........................................................152

Monitoring vSAN Capacity .............................................................. 161

Monitoring vSAN Performance ...................................................... 165

Conclusion.....................................................................................170

Module 7 - Workload Domain Operations (iSIMs) (45 minutes) Beginner 172

Introduction................................................................................... 172

Hands-on Labs Interactive Simulation: Host Commissioning ............. 172

Hands-on Labs Interactive Simulation: Create a vLCM Image-Based

Workload Domain .......................................................................... 174

Hands-on Labs Interactive Simulation: Expand an Existing Cluster ....176


Hands-on Labs Interactive Simulation: Cluster Upgrades with vLCM

Images ..........................................................................................178

Conclusion.................................................................................... 180

Module 8 - Introducing Software Defined Networking: Segments and Distributed

Routing (45 Minutes) Intermediate 181

Module 8 - Overview ...................................................................... 181

Creating Network Segments ........................................................... 181

Viewing the packet flow within a host ............................................. 207

Viewing the packet flow between hosts ...........................................219

Adding router connectivity ............................................................ 228

Testing the Opencart Application ................................................... 249

Module 8 - Conclusion .................................................. 253

Module 9 - Changing the Security Game – Distributed Firewall (45 Minutes)

Intermediate 255

Module 9 - Overview.................................................................... 255

Tagging VMs and Grouping Workloads based on Tags.................... 258

Applying Distributed Firewall Rules based on Tagging on a

Segment....................................................................................... 279

Module 9 - Conclusion ..................................................................331

Module 10 - Basic Load Balancing with NSX (15 Minutes) Intermediate 332

Module 10 - Overview..................................................................... 332

Creating and Configuring the load balancer .................................... 332

Module 10 - Conclusion .................................................................. 352

Module 11 - Migrating Workloads (45 mins) Intermediate 353

Module Overview .......................................................................... 353


Hands-on Labs Interactive Simulation: Migrating to VCF with HCX... 353

Hands-on Labs Interactive Simulation: HCX installation, Activation, &

Site Pairing ................................................................................... 355

Hands-on Labs Interactive Simulation: HCX Network Profile, Compute

Profile, Service Mesh..................................................................... 357

Hands-on Labs Interactive Simulation: Extend Network, Migrate

VMs .............................................................................................359

Conclusion.....................................................................................361

Module 12 - Deploying Applications to a Pre-Existing NSX Network (45 minutes)

Intermediate 362

Module Overview .......................................................................... 362


Create an NSX Segment and DHCP Server ..................................... 362

Create the Assembler Network Profile............................................ 375

Deploy the Cloud Template ...........................................................404

Module 12 - Conclusion .................................................................430

Module 13 - Deploying Applications to an On-Demand NSX Network (45 minutes)

Intermediate 431

Module Overview ...........................................................................431

Create Assembler Network Profiles .................................................431

Deploy from a Template ................................................................ 453

Module 13 - Conclusion ................................................................. 482

Module 14 - Kubernetes Overview and Deploying vSphere Pod VMs (30 minutes)

Advanced 483

Module 14 - Overview ....................................................................483

Exercise 1 - Lab Overview ..............................................................483

Exercise 2 - Developer Access .......................................................489

Exercise 3 - Creating a Kubernetes Deployment ............................. 493

Exercise 4 - Scaling a Kubernetes Deployment ............................... 501

Exercise 5 - Kubernetes Integration with NSX .................................504

Exercise 6 - Deleting the Kubernetes Deployment ...........................515

Module 14 - Conclusion .................................................................. 517

Module 15 - Deploying Tanzu Kubernetes Clusters (30 minutes) Advanced 518

Module 15 - Overview.................................................................... 518

Exercise 1 - Deploy Tanzu Kubernetes Cluster ................................. 518

Module 15 - Conclusion ................................................................. 532

Module 16 - Adding Worker Nodes to a Tanzu Kubernetes Cluster (15 minutes)


Advanced 533

Module 16 - Overview .................................................................... 533

Exercise 1 - Add Worker Nodes to TKC........................................... 533

Module 16 - Conclusion..................................................................558

Module 17 - Adding Capacity to a Tanzu Kubernetes Worker Node (15 minutes)

Advanced 559

Module 17 - Overview....................................................................559

Exercise 1 - Add Capacity to TKC Worker Nodes .............................559

Module 17 - Conclusion ................................................................. 576

Module 18 - Upgrading a Tanzu Kubernetes Cluster (15 minutes) Advanced 578

Module 18 - Overview.................................................................... 578


Exercise 1 - Upgrade a TKC Cluster................................................. 578

Module 18 - Conclusion .................................................................595

Module 19 - Enabling the Embedded Image Registry (30 minutes) Advanced 596

Module 19 - Overview ...................................................................596

Exercise 1 - Enable the Embedded Image Registry ..........................596

Module 19 - Conclusion.................................................................626

Module 20 - Use Helm to Deploy a Sample Application (15 minutes)

Advanced 627

Module 20 - Overview .................................................................... 627

Exercise 1: Deploy OpenCart using Helm ........................................ 627

Module 20 - Conclusion.................................................................. 637

Conclusion 639

Conclusion....................................................................................639


Lab Overview - HOL-2446-05-HCI - VMware Cloud Foundation - Optimize and Modernize Data
Centers

Lab Overview and Guidance [2]

Note: It may take more than 90 minutes to complete this lab. You may only finish 2-3 of the modules during your time. However, you
may take this lab as many times as you want. The modules are independent of each other so you can start at the beginning of any
module and proceed from there. Use the Table of Contents to access any module in the lab. The Table of Contents can be accessed in
the upper right-hand corner of the Lab Manual.

VMware Cloud Foundation provides a complete set of highly secure software-defined services for compute, storage, network, security,
Kubernetes management, and cloud management. The result is agile, reliable, efficient and AI-ready cloud infrastructure that offers
consistent infrastructure and operations across private and public clouds. In addition, VMware Cloud Foundation contains built-in
automated lifecycle management to simplify the administration of the software stack, from initial deployment, to patching and
upgrading. As a cloud-connected offering, VMware Cloud Foundation+ delivers the benefits of public cloud to on-premises workloads by
combining industry-leading full stack cloud infrastructure, an enterprise-ready Kubernetes environment, and high-value cloud services
to transform existing on-premises deployments into SaaS-enabled infrastructure.

In this lab, you will walk through VMware Cloud Foundation and learn how to operate it as a private cloud platform to optimize and modernize your data center.

Lab Module List:

•Module 1 - Cloud Foundation Overview (15 minutes) (Beginner) A brief overview of the VCF platform and each of its components.

•Module 2 - Lifecycle Management (30 minutes) (Beginner) Use the Life Cycle Management capabilities to upgrade your VCF infrastructure.

•Module 3 - Certificate Management (30 minutes) (Beginner) Understand how to manage certificates for all external-facing Cloud Foundation component resources, including configuring a certificate authority, generating and downloading CSRs, and installing them using the VMware SDDC Manager.

•Module 4 - Password Management (30 minutes) (Beginner) Learn how to utilize SDDC Manager to manage component passwords in the Cloud Foundation platform.

•Module 5 - vSAN SPBM and Availability (30 minutes) (Beginner) Introduction to VMware vSAN. We will cover the power of Storage Policy Based Management (SPBM) and show you the availability features of vSAN.

•Module 6 - vSAN - Monitoring, Health, Capacity and Performance (30 minutes) (Beginner) Shows you how to enable vRealize Operations within vCenter Server. We will cover the vSAN Health Check and how you can monitor your vSAN environment.

•Module 7 - Workload Domain Operations (iSIM) (45 minutes) (Beginner) Walk through the process of adding additional host capacity to an existing Workload Domain using the VMware SDDC Manager.

•Module 8 - Introducing Software Defined Networking: Segments and Distributed Routing (45 Minutes) (Intermediate) Learn how software defined networking removes some of the barriers imposed by physical networking constructs.

•Module 9 - Changing the Security Game – Distributed Firewall (45 Minutes) (Intermediate) Implement an East-West firewall to protect your environment.

•Module 10 - Basic Load Balancing with NSX (15 Minutes) (Intermediate) Implement load balancing features built into the platform.

•Module 11 - Migrating Workloads (45 mins) (Intermediate) Utilize software defined networking and HCX to migrate your workloads to VMware Cloud Foundation.

•Module 12 - Deploying applications to a pre-existing NSX network (45 minutes) (Intermediate) In this module, we show how to use Aria Automation Assembler to deploy an OpenCart instance to pre-defined NSX networks configured on the Cloud Foundation management domain.

•Module 13 - Deploying applications to an on-demand NSX network (45 minutes) (Intermediate) In this module, we use Aria Automation Assembler to dynamically deploy software-defined networking objects inside VMware Cloud Foundation's NSX implementation as part of an application deployment.

•Module 14 - Kubernetes Overview and Deploying vSphere Pod VMs (30 minutes) (Advanced) A brief overview of Cloud Foundation with Tanzu and running vSphere Pod VMs in a vSphere Cluster.

•Module 15 - Deploying Tanzu Kubernetes Clusters (15 minutes) (Advanced) Learn how to deploy a Tanzu Kubernetes Cluster (TKC) inside a vSphere Namespace.

•Module 16 - Adding Worker Nodes to a Tanzu Kubernetes Cluster (15 minutes) (Advanced) Understand how to expand a Tanzu Kubernetes Cluster (TKC) by adding an additional worker node.

•Module 17 - Adding Capacity to a Tanzu Kubernetes Worker Node (15 minutes) (Advanced) Understand how to add capacity to a Tanzu Kubernetes Cluster (TKC) by resizing the worker nodes.

•Module 18 - Upgrading a Tanzu Kubernetes Cluster (15 minutes) (Advanced) Walk through the upgrade of a Tanzu Kubernetes Cluster (TKC) inside a vSphere Namespace.

•Module 19 - Enabling the Embedded Image Registry (15 minutes) (Advanced) Configure and use the embedded Harbor registry to store images for developer use.

•Module 20 - Use Helm to Deploy a Sample Application (15 minutes) (Advanced) Utilize Helm to deploy the OpenCart application from the Bitnami repository.

Lab Captains:

•Chris Horning - Staff Technical Account Manager, USA

•Jeff Wong - Staff Solutions Architect, USA

•Kevin Tebear - Staff Technical Marketing Architect, USA

Content Architect:

•Milena Chen, Content Architect, Costa Rica

This lab manual can be downloaded from the Hands-on Labs document site found here:

http://docs.hol.vmware.com

This lab may be available in other languages. To set your language preference and view a localized manual deployed with your lab,
utilize this document to guide you through the process:


http://docs.hol.vmware.com/announcements/nee-default-language.pdf

Credentials [3]

The following is a summary of the credentials used for this lab. For your convenience, links to the management interfaces are located in
the bookmark bar of Google Chrome shown in the image.

Additional credentials for components not listed below may be found in the README.txt file located on the desktop of the Main
Console.

•SDDC Manager

◦Username: administrator@vsphere.local

◦Password: VMware123!

•SDDC Manager as Sam Jones

◦Username: sam@vcf.sddc.lab

◦Password: VMware123!

•SDDC Manager as Alex Foster

◦Username: alex@vcf.sddc.lab

◦Password: VMware123!

•SDDC Manager as Ava

◦Username: ava@vsphere.local

◦Password: VMware123!

•vCenter Server Admin Console

◦Username: root

◦Password: VMware123!

•vSphere Web Client

◦Username: administrator@vsphere.local

◦Password: VMware123!

•VMware NSX Manager

◦Username: admin

◦Password: VMware123!VMware123!


First time using Hands-on Labs? [4]

Welcome! If this is your first time taking a lab, navigate to the Appendix in the Table of Contents to review the interface and features before proceeding.

For returning users, feel free to start your lab by clicking Next in the manual.


You are ready... is your lab? [5]

Please verify that your lab has finished all the startup routines and is ready for you to start. If you see anything other than "Ready",
please wait a few minutes. If after 5 minutes your lab has not changed to "Ready", please ask for assistance.


VMware Cloud Foundation [6]

VMware Cloud Foundation™ is VMware’s unified SDDC platform for a modern hybrid cloud. This product brings together VMware’s
compute, storage, and network virtualization into a natively integrated stack, and allows you to deliver enterprise-ready cloud
infrastructure with automation and management capabilities for simplified operations that are consistent across private and public
clouds.

A deployed VMware Cloud Foundation™ system includes the following VMware software as standard components:

•SDDC Manager - Virtual appliance that provides administrators with a centralized portal to provision, manage, and monitor the VMware Cloud Foundation™ solution.

•vSphere - Enterprise-class hypervisor for compute virtualization.

•vCenter Server Standard - Provides centralized management of vSphere virtual infrastructure.

•vSAN - Delivers flash-optimized, high-performance storage for hyper-converged infrastructure.

•NSX - VMware NSX is the network virtualization platform for the Software-Defined Data Center. NSX embeds networking and security functionality that is typically handled in hardware directly into the hypervisor.

•vSphere with Tanzu - Provides the capability to run Kubernetes workloads directly on ESXi hosts and to create upstream Tanzu Kubernetes Grid clusters within dedicated resource pools.

The following VMware software components may be optionally deployed as part of VMware Cloud Foundation:

•vRealize Operations - Correlates data from applications to storage in a unified, easy-to-use management tool that provides control over performance, capacity, and configuration, with predictive analytics driving proactive action, and policy-based automation.

•vRealize Automation - Automates the delivery of the compute, storage, and network resources on a per-application basis, delivered through repeatable blueprints and accessed through a self-service user portal.

•vRealize Log Insight - Allows administrators to view, manage, and analyze log information from various points within the solution.


Module 1 - Cloud Foundation Overview (15 minutes) Beginner

Cloud Foundation Overview [8]

VMware Cloud Foundation is the hybrid cloud platform for managing VMs and orchestrating containers, built on full-stack hyper-converged infrastructure (HCI) technology. With a single architecture that’s easy to deploy, VMware Cloud Foundation enables consistent, secure infrastructure and operations across private and public clouds.


Workload Domains [9]

VMware Cloud Foundation consists of two types of Workload Domains that make up the Cloud Foundation platform. Each Workload Domain is a pool of logical resources: one or more clusters of ESXi hosts managed by an associated vCenter Server and NSX Manager. Each cluster manages the resources of all the hosts that are assigned to it. Within each cluster, Cloud Foundation enables the VMware vSphere® High Availability (HA), VMware vSphere® Distributed Resource Scheduler™ (DRS), and VMware vSAN capabilities.

Management Domain

There is one management domain that is used to manage the SDDC infrastructure components within a Cloud Foundation deployment.
The management domain is automatically provisioned using the first four hosts when the environment is initially configured for Cloud
Foundation (a process referred to as "Bring Up"). The management domain contains all of the management components of the SDDC
Platform. This includes vCenter, vSAN, NSX Manager Controller Cluster, SDDC Manager, and any of the optional vRealize Suite
components, such as vRealize Suite Lifecycle Manager, vRealize Operations, vRealize Log Insight, vRealize Automation, and Workspace
ONE Access.

Virtual Infrastructure (VI) Workload Domain

A Virtual Infrastructure (VI) Workload Domain is designed to run your business applications. When creating VI Workload Domains, Cloud Foundation takes the number of hosts specified by the cloud administrator and automatically deploys the VI Workload Domain following VMware best practices. The first VI Workload Domain has its own vCenter Server and NSX Manager Controller Cluster. This creates a highly reliable and secure infrastructure for your business applications. Additional VI Workload Domains can be added; each additional VI Workload Domain has its own vCenter Server, but the customer has the choice to deploy a new NSX Manager Controller Cluster or share an existing one, depending on the customer’s needs.


Separating the Management Domain from the Workload Domains provides several benefits.

•Separating the Management Domain from the VI Workload Domains allows dedicated resource management for higher business application performance.

•Security is improved by creating separate role-based access control of the infrastructure components. This separation allows for more granular control of who has access to or control of resources inside your private cloud.

•Lifecycle management (patching and upgrades) can be completed on different schedules. The management domain will always be patched first, but the VI Workload Domains can be patched at different intervals that best suit the business application needs.

You use the SDDC Manager Web interface in a browser for the single point-of-control management of your VMware Cloud Foundation
system. The SDDC Manager provides centralized access as well as an integrated view of both the physical and virtual infrastructure of
the system.

SDDC Manager does not mask the individual component management products. Along with the SDDC Manager Web interface, you might also use the web interfaces of the associated VMware software components for certain administration tasks. All of these interfaces run in a browser, and you can launch many of them from locations within the SDDC Manager Web interface.
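Alongside its Web interface, SDDC Manager also exposes a REST API that follows a token-based flow: credentials are POSTed to /v1/tokens, and the returned access token is carried as a Bearer header on later calls such as GET /v1/domains. The sketch below uses this lab's host name and credentials and only builds the request pieces, so it runs without a live SDDC Manager; treat it as an illustration of the exchange, not a full client.

```python
# Sketch of SDDC Manager's token-based REST authentication. The endpoints
# shown (/v1/tokens, /v1/domains) follow the VMware Cloud Foundation public
# API; host name and credentials are the ones used in this lab.
import json

def token_request(host, username, password):
    """Build the request that exchanges credentials for an access token."""
    return {
        "method": "POST",
        "url": f"https://{host}/v1/tokens",
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"username": username, "password": password}),
    }

def authorized_request(host, path, access_token):
    """Build a follow-up API call carrying the token as a Bearer header."""
    return {
        "method": "GET",
        "url": f"https://{host}/{path.lstrip('/')}",
        "headers": {"Authorization": f"Bearer {access_token}"},
    }

login = token_request("sddc-manager.vcf.sddc.lab",
                      "administrator@vsphere.local", "VMware123!")
# After the token response arrives, list the workload domains:
domains = authorized_request("sddc-manager.vcf.sddc.lab",
                             "v1/domains", "<accessToken from response>")
```

The same pattern applies to the other inventory endpoints; only the path changes once a valid token is in hand.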

We have provided a full VMware Cloud Foundation experience in a virtual environment; however, procedures may have been modified to account for the simulated environment that the HOL uses or to accelerate time for the user's convenience.


Page Loading Symbol [10]

Note: In the Hands-on Labs environment, as you navigate through the various screens, you may encounter long refresh operations lasting on the order of 1-3 minutes. Please resist the urge to click refresh on the page during these times, as it will most likely extend the wait.

When building the lab we attempted to minimize these loading times; however, in some instances, operations such as timeouts while waiting for hardware to reply were unavoidable, as this is a nested environment not connected to physical hardware. Thank you for your patience!

Initial Log In [11]

1. Please ensure that the Lab Status is green and says “Ready”. If it does not, please let a proctor know by raising your virtual hand.

2. After you have verified that the lab is ready, launch Google Chrome using the shortcut on the desktop.


Log in to SDDC Manager [12]


Once the browser has launched, you will see two tabs open by default. The first tab is the SDDC Manager login; the second is the vCenter login.

1. Select the SDDC Manager tab and verify the page URL to ensure you have the correct user interface. The login URL should read https://mgmt-vcenter.vcf.sddc.lab

2. In the User name box enter: administrator@vsphere.local

3. In the Password box enter: VMware123!

4. Click the Login button

Note: The actual SDDC Manager URL is https://sddc-manager.vcf.sddc.lab/ui/ even though the screenshot shows the vCenter URL. This is because SDDC Manager uses vCenter SSO for authentication.

Log in to the vSphere Client [13]

1. After successfully logging in to the SDDC Manager, select the second tab in the Chrome browser for the vSphere Client.

2. Select the refresh button

This action should sign you in to the vSphere Client without having to enter any additional login credentials. Because we have already authenticated with the SDDC Manager, and both are in the same SSO domain, our credentials carry through to the second browser tab.


Dashboard [14]

The Dashboard page is the home page that provides the overall administrative view of your system. The Dashboard page provides a
top-level view of the physical and logical resources across all of the physical racks in your system, including available CPU, memory, and
storage capacity. From this page, you can start the process of creating a VI Workload Domain. You use the links on the dashboard to
drill down and examine details about the physical resources and the virtual environments that are provisioned for the management and
workload domains.

On the left side of the interface is the Navigation bar. The Navigation bar provides icons for navigating to the corresponding pages. We
will explore each of these in more detail later in the lab.

1. Select the SDDC Manager tab at the top of the browser window. Here we can see the dashboard view and recent tasks that have been completed.

2. Due to the resolution of the Hands-On Lab environment, the Tasks tray may need to be resized, or you will need to scroll while reviewing the tasks. You also have the option to minimize the Tasks tray by clicking the X.


Workload Domain Exploration [15]

Rainpole Inc. has just deployed VMware Cloud Foundation. Let’s begin by exploring the Workload Domains.

1. From the left-hand navigation pane, select the Inventory menu item, then select Workload Domains.


Workload Domains [16]

From the Workload Domains view, we can see the available CPU, Memory, and Storage capacity. We are also able to see the Workload Domains and the type of each workload domain that has been created within the environment. This new environment currently has one workload domain provisioned, the mgmt-wld Management Domain. In the future, Rainpole Inc. will deploy additional workload domains for its applications.

Each type of Workload Domain performs a different function. One, the Management Domain, is responsible for the overall VMware Cloud Foundation environment. The other Workload Domains will be used to provide resources for virtual server workloads and applications. VMware recommends that management servers be physically separated from user workloads.

Cloud Foundation now supports the ability for Workload Domains to use vSAN, NFS, vVols, or VMFS on FC as their principal storage and automates the deployment of the chosen storage solution.

1. Use the vertical and horizontal scroll bars at the side and bottom of the page to view more information about the existing Workload Domain.


Management Workload Domain [17]

You will now explore the Management Workload Domain in greater detail.

1. Click the scroll bar back to the left.

2. Click on the Management Workload Domain link labeled mgmt-wld at the bottom of the page.


MGMT - Deep Dive [18]


From the landing page of the mgmt-wld Workload Domain, we get an immediate picture of the status of CPU, Memory, and Storage
consumption by this workload domain. We are also able to determine the capacity of allocated resources as well as how much of that
capacity has been consumed.

Scrolling further down you will see several options along the bottom of the page that allow you to drill further into the status of the
workload domain. Each of these options is detailed below. Explore these by clicking on each in turn.

1. Summary: Provides network and storage information. In the Management Domain, vSAN is the storage type. In a Workload Domain, we can also use NFS or VMFS on Fibre Channel.

2. Services: Displays the FQDN and IP address of all associated components that have been deployed to support the specific Workload Domain. This includes the vCenter and NSX Manager.

3. Updates: Shows the pre-check workflow, as well as any updates that have been made available that apply to this specific Workload Domain. Also listed are the specific versions of software for the deployed components within the Workload Domain. Selecting a version number will take you to the Update History for that component.

4. Update History: Shows all updates that have already been applied to the system. You have the option to filter the period over which you'd like results displayed.

5. Hosts: Displays all the hosts that are part of this specific Workload Domain, including the Cluster that the host belongs to, the FQDN of the host, the Management IP address, Network Pool, Host Status, Resource Usage, and Storage Type (Hybrid or All-Flash).

6. Clusters: Lists all available clusters under a given Workload Domain.

7. Edge Clusters: Lists the Edge Clusters used by NSX for North/South traffic.

8. Certificates: Displays the certificate information for all components of the VMware Cloud Foundation environment. This interface can also automate the replacement of a certificate for all components inside of VMware Cloud Foundation. We will explore certificate management in a later module.

NOTE: You may need to scroll to the right to see all of the tabs.

HANDS-ON LABS MANUAL | 24


HOL-2446-05-HCI: Optimize and Modernize Data Centers

NSX Exploration [19]

VMware Cloud Foundation uses NSX for both Management and Workload Domains

1. Select the Services tab.


Workload Domain Summary [20]

VMware Cloud Foundation supports the deployment of NSX and multiple storage options for a Workload Domain.

Below is a snippet from the user manual regarding Workload Domains and support:

In the VI Configuration wizard, you specify the storage, name, compute, and NSX platform details for the VI Workload Domain. Based
on the selected storage, you provide vSAN parameters or NFS share details. You then select the hosts and licenses for the workload
domain and start the creation workflow.

The workflow automatically:

•Deploys an additional vCenter Server Appliance for the new workload domain within the management domain. By using a separate vCenter Server instance per workload domain, software updates can be applied without impacting other workload domains. It also allows each workload domain to have additional isolation as needed.

•Connects the specified ESXi servers to this vCenter Server instance and groups them into a cluster. Each host is configured with the port groups applicable for the workload domain.

•Configures networking on each host.

•Configures vSAN, NFS, or VMFS on FC storage on the ESXi hosts.

•For the first VI workload domain, the workflow deploys a cluster of three NSX Managers in the management domain and configures a virtual IP (VIP) address for the NSX Manager cluster. The workflow also configures an anti-affinity rule between the NSX Manager VMs to prevent them from being on the same host for High Availability. Subsequent VI workload domains can share an existing NSX Manager cluster or deploy a new one. To share an NSX Manager cluster, the workload domains must use the same update manager: they must both use vSphere Lifecycle Manager (vLCM) or both use vSphere Update Manager (VUM).

•Cloud Foundation can optionally create a two-node NSX Edge cluster on the management domain for use by the vRealize Suite components. You can add additional NSX Edge clusters to the management domain. By default, workload domains do not include any NSX Edge clusters and are isolated. Add one or more Edge clusters to a workload domain to provide north-south routing and network services. See Deploying NSX Edge Clusters. Note: Multiple Edge clusters cannot reside on the same vSphere cluster.

•NSX Managers deployed as part of a VI workload domain are configured to be periodically backed up to an SFTP server. By default, these backups are written to an SFTP server built into SDDC Manager, but you can register an external SFTP server for better protection against failures. SDDC Manager uses either the built-in or external SFTP server with all currently deployed NSX Managers and when deploying additional NSX Managers.

•Licenses and integrates the deployed components with the appropriate pieces in the Cloud Foundation software stack.

The result is a workload-ready SDDC environment.
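The inventory that SDDC Manager shows in the UI is also exposed through its public REST API. Below is a hedged sketch, assuming the lab's FQDN and a bearer token already obtained via POST /v1/tokens (the TOKEN variable is an assumption); verify the exact endpoints against the API Explorer in your deployment.

```shell
# List workload domains through the SDDC Manager public API (sketch).
# TOKEN is assumed to hold a bearer token from POST /v1/tokens.
SDDC_FQDN="sddc-manager.vcf.sddc.lab"

curl -sk -H "Authorization: Bearer ${TOKEN}" \
  "https://${SDDC_FQDN}/v1/domains" \
  | python3 -m json.tool   # pretty-print the JSON inventory
```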


End of Module 1 [21]

You have completed Module 1 and should now have a good understanding of how to navigate the SDDC Manager web interface. You
should also at this point conceptually understand what a workload domain is and what it is used for. Please continue to Module 2.


Module 2 - Lifecycle Management (30 minutes) Beginner

Patching and Upgrading [23]

In Cloud Foundation, the Life Cycle Management (LCM) capabilities include automated patching and upgrades for both the SDDC
Manager (SDDC Manager and LCM) and other VMware software components (vCenter Server, ESXi, NSX, and vSAN).

The high-level update workflow is described below.

1. Authorize VMware Customer Connect credentials.
2. Download the update bundle.
3. Select update targets and schedule updates.
4. Verify update.
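The first two steps of this workflow can also be driven through SDDC Manager's public API. The following is a sketch only, using the lab's FQDN and credentials; the accessToken field name follows the VCF API token schema, which you should confirm in the API Explorer.

```shell
SDDC_FQDN="sddc-manager.vcf.sddc.lab"

# Step 1: authenticate and extract an access token (POST /v1/tokens).
TOKEN=$(curl -sk -X POST "https://${SDDC_FQDN}/v1/tokens" \
  -H "Content-Type: application/json" \
  -d '{"username": "administrator@vsphere.local", "password": "VMware123!"}' \
  | python3 -c 'import sys, json; print(json.load(sys.stdin)["accessToken"])')

# Step 2: list update bundles known to SDDC Manager (GET /v1/bundles).
curl -sk -H "Authorization: Bearer ${TOKEN}" "https://${SDDC_FQDN}/v1/bundles"
```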


Even though SDDC Manager may be available while the update is installed, it is recommended that you schedule the update at a time
when it is not being heavily used.


Initial Log In [24]

1. Please ensure that the Lab Status is green and says “Ready”.
2. After you have verified that the lab is ready, please launch Google Chrome using the shortcut on the desktop.


Log in to SDDC Manager [25]


Once the browser has launched you will see two tabs open by default. The first tab is the SDDC Manager Login, the second is the
vCenter Login.

1. Select the SDDC Manager tab and verify the page URL to ensure you have the correct user interface. The SDDC Manager login URL should read https://sddc-manager.vcf.sddc.lab
2. In the User name box enter: administrator@vsphere.local
3. In the Password box enter: VMware123!
4. Click the Login button

Note: The actual SDDC Manager URL is https://sddc-manager.vcf.sddc.lab/ui/ even though the screenshot shows the vCenter URL.
This is due to the fact that SDDC Manager uses vCenter SSO for authentication.

Authorize VMware Customer Connect Account [26]

To check for available updates from VMware, a VMware Customer Connect account must be authorized in SDDC Manager.

1. Select Bundle Management from within the Lifecycle Management menu on the left.
2. Click on the VMware Customer Connect Account link.


Enter the following credentials to authorize. This is a global setting and will be set for all users once done.

1. Username: sam@vcf.sddc.lab
2. Password: VMware123!
3. Click AUTHORIZE.


Update Repository [27]

An update is now available for the VMware Cloud Foundation deployment. Let’s walk through our options for downloading and
deploying this update.

1. If you are not already on the Bundle Management page, select Bundle Management from the Lifecycle Management menu on the left.
2. Wait 1 to 2 minutes and then refresh your browser for the available bundle to appear.
3. Click DOWNLOAD NOW. NOTE: This update may take a minute to start and then another minute or two to download. You may proceed while the download continues.
4. Click on View Details to see more information.


Bundle Details [28]

Information such as the severity of the update, the number and types of software components, the minimum required software versions, and the bundle release date are shown under the details.

1. When you are done examining the details of the update, click the Exit Details link on the top right corner of the window.


Plan Upgrade [29]

In order to perform an upgrade, we need to select an upgrade plan.

1. Navigate to Inventory -> Workload Domains.
2. Click on mgmt-wld.


1. In the mgmt-wld screen, select the Updates tab.
2. Scroll down to the bottom of the page and click on PLAN UPGRADE.


In this lab, we will be upgrading to a mock version of VMware Cloud Foundation 5.0.0.1

1. Select VMware Cloud Foundation 5.0.0.1 from the drop-down menu.
2. By selecting this version of VMware Cloud Foundation, you can see a summary of all the software component changes. For lab purposes, this version will upgrade SDDC Manager from version 5.0.0.0 to 5.0.0.1. No other changes will occur.
3. Click CONFIRM to proceed.


Precheck [30]

It is good practice to ensure the environment is healthy before performing any upgrade activity.

1. Within the Updates tab of the same mgmt-wld workload domain page, click on RUN PRECHECK at the top of the page.


1. Select the target version of 5.0.0.1 we will be upgrading to.
2. Select the Precheck Scope of SDDC_MANAGER_VCF - 5.0.0.1-22053494, which is the component we will be upgrading.
3. Click RUN PRECHECK to proceed. The precheck will take a few minutes to run.


When the precheck completes, you will be presented with the results of all the checks performed against the environment, highlighting any areas that could potentially prevent the update or patch from being applied successfully.

1. As this is a simulated lab environment, some failures are expected. It is safe to ignore these failures and proceed for lab purposes.

2. Once you have completed reviewing the details, scroll up and click the BACK TO UPDATES link at the top left of the window.


Download Status [31]

1. At this point, the bundle download should have completed and will show up under the Available Updates section. You may need to click the browser refresh button to get the latest information.
2. As this bundle applies to the current mgmt-wld domain, it is now made available for update. You may review the details of this update.


Schedule Update [32]

In the Available Updates section, you are presented with 2 options for executing the deployment of the relevant patches or updates.

1. Choose the SCHEDULE UPDATE option if you'd like to specify a future date and time to execute the update. You may specify a day/time up to 365 days out from the present day.
2. Click the X to close the Schedule Update window.


Update Now [33]

1. Due to time constraints within the lab environment, click the UPDATE NOW button to begin an immediate update.
2. After you click the UPDATE NOW button, you will see a Scheduled message displayed. After a 1-2 minute wait, an update dialog window will appear.


View Activity [34]

1. You can follow the progress by clicking on VIEW UPDATE ACTIVITY.
2. Scroll down to view more details. Select the drop-down arrow to view more granular details around the status of specific Common Services. This update will take about 2-5 minutes to complete.

NOTE: You may need to refresh the browser if the screen does not refresh automatically after 3 minutes.


1. As this is an SDDC Manager update, the page may become unresponsive due to SDDC Manager service restarts. If you are presented with a browser 'Page Unresponsive' window, click Exit Page.

2. Select Reload to continue following the update progress.


Update Completion [35]

1. Upon completion, a green ribbon will display the date and time the update was completed.
2. Click the FINISH button to exit the update status screen.


Verify the Update has been Applied [36]

1. From the main SDDC Manager Dashboard interface, select Workload Domains from the Inventory menu item on the left side of the page.
2. Click the mgmt-wld link.


1. Select the Update History link to validate that the update you just applied was successful.
2. Clicking the ACTIONS drop-down link will allow you to download the log files associated with the update or view the update status.

End of Module [37]

You have completed the module and should now have a good understanding of the upgrade and patching process within the VMware
Cloud Foundation environment. Please continue to the next module.

Conclusion [38]

In this module, we worked in Cloud Foundation covering Life Cycle Management (LCM) capabilities including automated patching and
upgrades for both the SDDC Manager (SDDC Manager and LCM) and other VMware software components (vCenter Server, ESXi, NSX,
and vSAN).


You have finished the module [39]

Congratulations on completing the lab module.


Module 3 - Certificate Management (30 minutes) Beginner

Certificate Management [41]

An easy way to increase the security of an environment, and a common practice for most IT organizations, is to replace the self-signed certificates that are generated during installation with certificates signed by the organization's Certificate Authority (CA). VMware Cloud Foundation simplifies this process, allowing customers to easily update and manage these certificates.

You can manage certificates for all external-facing Cloud Foundation component resources, including configuring a certificate authority, generating and downloading CSRs, and installing the signed certificates. Cloud Foundation supports the use of Microsoft certificate authorities, OpenSSL, and third-party certificate authorities.

You can manage the certificates for the following components.

•vCenter Server

•NSX Manager

•SDDC Manager

SDDC Manager Log In [42]


1. Please ensure that the Lab Status is green and says “Ready”.
2. After you have verified that the lab is ready, please launch Google Chrome using the shortcut on the desktop.

Log in to SDDC Manager [43]


Once the browser has launched you will see two tabs open by default. The first tab is the SDDC Manager Login, the second is the
vCenter Login.

1. Select the SDDC Manager tab and verify the page URL to ensure you have the correct user interface. The SDDC Manager login URL should read https://sddc-manager.vcf.sddc.lab
2. In the User name box enter: administrator@vsphere.local
3. In the Password box enter: VMware123!
4. Click the LOGIN button

Note: The actual SDDC Manager URL is https://sddc-manager.vcf.sddc.lab/ui/ even though the screenshot shows the vCenter URL.
This is due to the fact that SDDC Manager uses vCenter SSO for authentication.

Log in to the vSphere Client [44]

1. After successfully logging in to the SDDC Manager, select the second tab in the Chrome browser for the vSphere Web Client.
2. Select the URL refresh button in the second browser tab. This action should allow you to be signed in to the vSphere Client without having to enter any additional login credentials. As we have already authenticated with the SDDC Manager, and since they are both in the same SSO domain, our credentials should carry through to the second browser tab.

The refresh process can take a couple of minutes to complete, but you can continue to the next step in the lab.


Review Certificate Authority [45]

1. Select the first browser tab to navigate to the SDDC Manager interface.
2. Expand the Security menu item in the left navigation window.
3. Click the Certificate Authority sub-menu item.

As we can see, the connection from SDDC Manager to the backend Certificate Authority has already been established.


Generate CSR [46]

1. Click the Inventory menu item in the left navigation window.
2. Click the Workload Domains sub-menu item.
3. On the resulting screen, click the mgmt-wld Domain link.


Generate CSR [47]

1. Select the Certificates tab.
2. Place a check in the box next to the sddcmanager.

NOTE: Due to time constraints, we will be replacing only the SDDC Manager certificate.

3. Uncheck any other boxes.
4. Click on the GENERATE CSRS button.

NOTE: Review the current date that the certificate is valid through.


Generate CSR Wizard [48]


Populate the fields in the CSR wizard with the following information.

Algorithm: RSA

Key Size: 2048

Email: sam@vcf.sddc.lab

Organizational Unit: IT

Organization: Rainpole

Locality: Palo Alto

State: CA

Country: US

1. Click NEXT


If you have any Subject Alternative Names you may enter them here. In this lab, we will leave this blank.

1. Click NEXT

1. Click GENERATE CSRS
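For comparison, the subject fields entered in the wizard map directly onto a standard OpenSSL certificate signing request. This is illustrative only (SDDC Manager generates its CSRs itself); the file names are arbitrary placeholders.

```shell
# Generate a CSR with the same subject fields used in the wizard (RSA, 2048-bit).
openssl req -new -newkey rsa:2048 -nodes \
  -keyout sddc-manager.key -out sddc-manager.csr \
  -subj "/C=US/ST=CA/L=Palo Alto/O=Rainpole/OU=IT/CN=sddc-manager.vcf.sddc.lab/emailAddress=sam@vcf.sddc.lab"

# Confirm the subject recorded in the request.
openssl req -in sddc-manager.csr -noout -subject
```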


Generate Signed Certificate [49]


1. Now that the CSR has been generated, click the GENERATE SIGNED CERTIFICATES button.
2. Select Microsoft as the Certificate Authority.
3. Click on the GENERATE CERTIFICATES button.

NOTE: This may take a minute or two to complete.

If you were using a third-party CA, you would instead click DOWNLOAD CSR after step 1 and submit the CSR to the third-party certificate provider.


Certificate Generation Validation [50]


Certificate Installation [51]

1. Place a check in the box next to the sddcmanager.
2. Click INSTALL CERTIFICATES.

Note: If the INSTALL CERTIFICATES button is not activated, refresh the browser to get the latest update.

Due to the formatting of the Hands-On Lab environment, you may need to scroll to the right to see the status of the sddc-manager.vcf.sddc.lab certificate replacement.

This process takes a couple of minutes to replace the certificate in the Hands-On Lab environment. While this is running, please proceed with the lab; you can come back to check this status later if you wish.

Verify that the Certificate Installation Status for the sddcmanager shows SUCCESSFUL.


Certificate Installation Validation [52]


SSH to SDDC Manager [53]


1. Launch PuTTY.
2. Select the sddc-manager.
3. Click Open.
4. Enter the password VMware123!


Restart the SDDC Manager Service [54]

1. Enter su to switch to the root user.
2. Enter VMware123! when prompted for a password.
3. Run the following command:

sh /opt/vmware/vcf/operationsmanager/scripts/cli/sddcmanager_restart_services.sh

4. Enter Y to proceed.
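Since the restart takes the UI offline for a few minutes, a convenient way to know when it is back is to poll the login page until it returns HTTP 200. This is a convenience sketch only, assuming the lab FQDN.

```shell
# Poll the SDDC Manager UI until it responds with HTTP 200 again.
until [ "$(curl -sk -o /dev/null -w '%{http_code}' https://sddc-manager.vcf.sddc.lab/ui/)" = "200" ]; do
  sleep 15   # check every 15 seconds
done
echo "SDDC Manager UI is responding."
```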


Log in to SDDC Manager [55]


After the service restart, you will need to log back in to SDDC Manager. It may take 2-3 minutes for the services to fully restart.

1. Select the SDDC Manager tab and verify the page URL to ensure you have the correct user interface. The SDDC Manager login URL should read https://sddc-manager.vcf.sddc.lab
2. In the User name box enter: administrator@vsphere.local
3. In the Password box enter: VMware123!
4. Click the Login button

Note: The actual SDDC Manager URL is https://sddc-manager.vcf.sddc.lab/ui/ even though the screenshot shows the vCenter URL.
This is due to the fact that SDDC Manager uses vCenter SSO for authentication.

Verify Certificate Replacement [56]


1. Select the lock icon.
2. Click on Connection is Secure.
3. Click Certificate is Valid.


Verify Certificate Continued [57]


1. Select the Details tab.
2. Verify that the Valid to date is 2 years from the current date.
3. Select the Serial Number. Note the number.
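The same check can be done from a terminal. The sketch below, assuming the lab FQDN, pulls the live certificate that SDDC Manager presents and prints its serial number and expiry, which should match what the browser shows.

```shell
# Fetch the presented certificate and show its serial and notAfter date.
echo | openssl s_client -connect sddc-manager.vcf.sddc.lab:443 \
    -servername sddc-manager.vcf.sddc.lab 2>/dev/null \
  | openssl x509 -noout -serial -enddate
```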

Navigate to the Management Workload Domain [58]

1. Select Workload Domains.
2. Select the mgmt-wld workload domain.


Verify Cert Serial Number [59]

1. Click Certificates.
2. Expand the SDDC Manager entry.

Note that the serial number matches.

Module Completed. [60]

Congratulations. You have completed the module on Certificate Management. We have demonstrated how Cloud Foundation can help
easily replace certificates. Please continue to the next module.


Module 4 - Password Management (30 minutes) Beginner

Password Management [62]

VMware Cloud Foundation provides the ability to manage passwords for logical and physical entities on all racks in your system. The
process of password rotation generates randomized passwords for the selected accounts.

You can change passwords for the following entities:

•ESXi

•vCenter / PSC

•NSX Manager

•NSX Edges

•Internal backup account

SDDC Manager Log In [63]


1. Please ensure that the Lab Status is green and says “Ready”.
2. After you have verified that the lab is ready, please launch Google Chrome using the shortcut on the desktop.

Log in to SDDC Manager [64]


Once the browser has launched you will see two tabs open by default. The first tab is the SDDC Manager Login, the second is the
vCenter Login.

1. Select the SDDC Manager tab and verify the page URL to ensure you have the correct user interface. The SDDC Manager login URL should read https://sddc-manager.vcf.sddc.lab
2. In the User name box enter: administrator@vsphere.local
3. In the Password box enter: VMware123!
4. Click the Login button

Note: The actual SDDC Manager URL is https://sddc-manager.vcf.sddc.lab/ui/ even though the screenshot shows the vCenter URL.
This is due to the fact that SDDC Manager uses vCenter SSO for authentication.

Password Update [65]


Once logged into the SDDC Manager interface:

1. Click Security.
2. Click Password Management.

1. Click the three vertical dots for the user root on host esxi-1.vcf.sddc.lab.
2. Select UPDATE.


Once the Update Password dialog box is open, enter the password you would like to change it to.

1. Use HOLR0cks! as the password.
2. Click UPDATE.


Monitor the Task [66]

Monitor the progress of the task by opening the Tasks window in the lower left:

1. Click the Tasks link.
2. Click the REFRESH link.

Validate the Password Change [67]

Once the password update has completed successfully, we will validate that the password change has occurred.

1. In the browser, open a new tab. From the bookmarks shortcut bar, select ESXi Hosts and then select ESXi-1.

Note: You may need to click on the Advanced link in the browser to proceed if there is a precautionary security warning presented.


Once the page opens, use the following credentials to validate that the password change was successful.

1. Fill in the values:

◦Username: root

◦Password: HOLR0cks! (or the password you supplied in the previous step when changing the root user password)

2. Click the Log in button.

A successful login shows that the password was updated.


Switch back to SDDC Manager [68]

1. Close the ESXi tab in Chrome.
2. Select the SDDC Manager tab.


Password Rotation [69]

The other option is to rotate the password instead of updating it. We can test this by navigating back to the first tab for SDDC Manager.

1. Click Security.
2. Click Password Management.


Change Password [70]

1. Select the check box next to root for host esxi-1.vcf.sddc.lab.
2. Click the ROTATE NOW button.

Rotate [71]


1. Click the ROTATE button again in the confirmation pop-up dialog box.

This will rotate the password to a randomly generated password that will be stored in the SDDC Manager database.

Validate the Password Rotation [72]

There are two ways to look up the password once it has been rotated: (1) SSH into the SDDC Manager and, following the admin guide, use the lookup_passwords command (this requires SSH access to the host); or (2) use the API to look up the credentials. We will do the latter in this exercise.

1. Navigate to Developer Center.
2. Click the API Explorer tab.
3. Expand the Credentials API category.
4. Expand GET /v1/credentials.


Get Credentials API [73]


1. Enter the resourceName esxi-1.vcf.sddc.lab.
2. Click EXECUTE.
3. Expand PageOfCredential.
4. Expand the second Credential (GUID) and view the password information (see yellow box below). Your password will be different from what is listed below.
5. Copy the random password without the quotes.

NOTE: There are two credentials returned: the first is for the VCF service account that has access to the host, and the second is for the root user account. For the lab, you will use the second (root user) account.
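The same lookup can be scripted against the endpoint used above. The following is a hedged sketch: it assumes a bearer token is already in TOKEN and that the response follows the PageOfCredential schema shown in the API Explorer (an elements array of Credential objects); verify the field names against your own EXECUTE output.

```shell
SDDC_FQDN="sddc-manager.vcf.sddc.lab"

# Fetch credentials for one host and print only the root account's password.
curl -sk -H "Authorization: Bearer ${TOKEN}" \
  "https://${SDDC_FQDN}/v1/credentials?resourceName=esxi-1.vcf.sddc.lab" \
  | python3 -c '
import sys, json
for cred in json.load(sys.stdin).get("elements", []):
    if cred.get("username") == "root":
        print(cred["password"])'
```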

Login to ESX [74]


Once the page opens, use the following credentials to validate that the password change was successful.

1. Open a new tab.
2. Expand the ESXi Hosts bookmark folder and select ESXi-1.vcf.sddc.lab.
3. Enter:

◦Username: root

◦Password: <password copied from the previous step> (this is the password shown in your Developer Center)

4. Click the Log In button. A successful login shows that the password rotation was successful.

End of Module [75]

This concludes this module. We explored how we can utilize SDDC Manager to update and rotate passwords. Please continue to the
next module.


Module 5 - vSAN SPBM and Availability (30 minutes) Beginner

Introduction [77]

vSAN delivers flash-optimized, secure shared storage with the simplicity of a VMware vSphere-native experience for all your critical virtualized workloads. vSAN runs on industry-standard x86 servers and components that help lower TCO by up to 50% versus traditional storage. It delivers the agility to easily scale IT with a comprehensive suite of software solutions and offers the first native software-based, FIPS 140-2 validated HCI encryption. A VMware vSAN SSD storage solution powered by Intel® Xeon® Scalable processors with Intel® Optane™ technology can help companies optimize their storage solution in order to gain fast access to large data sets.

vSAN delivers an HCI experience architected for the hybrid cloud, with operational efficiencies that reduce time to value through an intuitive user interface, and provides consistent application performance and availability through advanced self-healing and proactive support insights. Seamless integration with VMware's complete software-defined data center (SDDC) stack and leading hybrid cloud offerings makes it the most complete platform for virtual machines, whether running business-critical databases, virtual desktops, or next-generation applications.

What's new in vSAN 8 [78]

Before we jump into the lab, let's take a moment to review what's new in vSAN 8.

With vSAN 8, we are continuing to build on the robust features that make vSAN a high-performing general-purpose infrastructure. vSAN 8 makes it easy for you to standardize on a single storage operational model with three new capabilities: integrated file services, enhanced cloud-native storage, and simpler lifecycle management. You can now unify block and file storage on hyperconverged infrastructure with a single control plane, which reduces costs and simplifies storage management. Cloud-native applications also benefit from these updates, which include integrated file services, vSphere with Kubernetes support, and increased data services. Finally, vSAN 8 also simplifies HCI lifecycle management by reducing the number of tools required for Day 2 operations, while simultaneously increasing update reliability.


Product Enhancements [79]

The most significant new capabilities and updates of vSAN 8 include:

•Enhanced Cloud-Native Storage

vSAN supports file-based persistent volumes for Kubernetes on the vSAN datastore. Developers can dynamically create file shares for their applications and have multiple pods share data.

•Integrated File Services

In vSAN 8, integrated file services make it easier to provision and share files. Users can now provision a file share from their vSAN cluster, which can be accessed via NFS 4.1, NFS 3, and SMB. A simple workflow reduces the amount of time it takes to stand up a file share.

•Simpler Lifecycle Management

vSAN 8 provides consistent operations with a unified lifecycle management tool: vSphere Lifecycle Manager (vLCM) for Day 2 operations covering both software and server hardware. vLCM delivers a single lifecycle workflow for the full HCI server stack: vSphere, vSAN, drivers, and OEM server firmware. vLCM constantly monitors and automatically remediates compliance drift. In VMware Cloud Foundation, the vLCM component is driven and performed by SDDC Manager.

•Increased Visibility into vSAN Used Capacity

Replication objects are now visible in vSAN monitoring for customers using VMware Site Recovery Manager and vSphere Replication. The objects are labeled “vSphere Replicas” in the “Replication” category.

•Uninterrupted Application Run Time

vSAN 8 enhances uptime in Stretched Clusters by introducing the ability to redirect VM I/O from one site to another in the event of a capacity imbalance. Once the disks at the first site have freed up capacity, customers can redirect I/O back to the original site without disruption.


Why vSAN? [80]

VMware’s solution stack offers the flexibility needed for today’s rapidly changing needs. It is built on a foundation of VMware vSphere paired with vSAN. This provides the basis for a fully software-defined storage and virtualization platform that removes dependencies on legacy solutions using physical hardware. Next is VMware Cloud Foundation, the integrated solution that provides the full stack of tools for an automated private cloud. And finally, there is VMware Cloud on AWS: the same software that you already know, running in Amazon Web Services, providing consistent operations for all of the existing workflows used in your private clouds. The result is a complete solution regardless of whether the topology sits on-premises or in the cloud.

Storage Policy Based Management [81]

As an abstraction layer, Storage Policy Based Management (SPBM) abstracts storage services delivered by Virtual Volumes, vSAN, I/O
filters, or other storage entities. Multiple partners and vendors can provide Virtual Volumes, vSAN, or I/O filters support. Rather than
integrating with each individual vendor or type of storage and data service, SPBM provides a universal framework for many types of
storage entities.

SPBM offers the following mechanisms:

• Advertisement of storage capabilities and data services that storage arrays and other entities, such as I/O filters, offer.

• Bi-directional communications between ESXi and vCenter Server on one side, and storage arrays and entities on the other.


• Virtual machine provisioning based on VM storage policies.
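The matching mechanism above can be sketched as a toy model (hypothetical names, not the actual SPBM API): each storage entity advertises a set of capabilities, and a datastore is reported as compatible with a policy when it satisfies every rule the policy requires.

```python
# Toy model of policy-to-storage matching (hypothetical names, not the SPBM API).
# A datastore advertises capabilities; a policy is a set of required rules.
def is_compatible(policy_rules: dict, advertised_caps: dict) -> bool:
    """True if the datastore advertises every capability the policy requires."""
    return all(advertised_caps.get(rule) == value for rule, value in policy_rules.items())

# A vSAN datastore advertising its data services (illustrative values):
vsan_caps = {"storageType": "vSAN", "failuresToTolerate": 1, "raid": "RAID-1"}

# A policy requiring FTT=1 with RAID-1 mirroring:
raid1_policy = {"failuresToTolerate": 1, "raid": "RAID-1"}

print(is_compatible(raid1_policy, vsan_caps))        # True
print(is_compatible({"raid": "RAID-6"}, vsan_caps))  # False
```

This is the same idea the vSphere Client uses when it shows the "Storage Compatibility" list later in this module: datastores that satisfy all the policy's rules appear as compatible.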

Examine the Default Storage Policy [82]

vSAN requires that the virtual machines deployed on the vSAN Datastore are assigned at least one storage policy. When provisioning a virtual machine, if you do not explicitly assign a storage policy, the vSAN Default Storage Policy is assigned to the virtual machine.

The default policy contains vSAN rule sets and a set of basic storage capabilities, typically used for the placement of virtual machines
deployed on vSAN Datastore.

vSAN Default Storage Policy Specifications [83]

The following characteristics apply to the vSAN Default Storage Policy.

• The vSAN default storage policy is assigned to all virtual machine objects if you do not assign any other vSAN policy when you
provision a virtual machine.

• The vSAN default policy only applies to vSAN datastores. You cannot apply the default storage policy to non-vSAN datastores, such
as NFS or a VMFS datastore.

• You can clone the default policy and use it as a template to create a user-defined storage policy.

• You cannot delete the default policy.


Open Chrome Browser from the Windows Desktop [84]



1. Click on the Chrome Icon on the Windows Quick Launch Task Bar.

Login to vSphere Client [85]


1. Click the second tab in the browser

2.On the vSphere Client login screen, enter username: administrator@vsphere.local

3.Enter Password: VMware123!

4.Click LOGIN


Examine the Default Storage Policy [86]



1. From the Menu page of the vSphere Client

2.Select Policies and Profiles

Examine the Default Storage Policy [87]

1. Select VM Storage Policies

2.Select the VM Storage Policy called vSAN Default Storage Policy

3.Select Rules

The default rules for the Storage Policy are displayed.

4.Select Storage Compatibility

Here we can see that the mgmt-vsan is compatible with this storage policy. (not pictured)


Deploy VM with Default Policy [88]

We will now clone a VM and apply the Default Storage Policy

1. Select Menu

2.Select Inventory

Deploy VM with Default Policy [89]

We will deploy a VM from a template called tmpl-ubuntu to the mgmt-vsan vSAN Datastore and apply the Default Storage Policy.


1. Click to select the VMs and Templates view

2.Locate the template tmpl-ubuntu and right-click it

3.Select New VM from This Template


Deploy VM with Default Policy [90]

1. Give the Virtual Machine a name:

holostorage

2.Select mgmt-vcenter.vcf.sddc.lab > mgmt-datacenter

3.Click Next


Deploy VM with Default Policy [91]

1. Ensure mgmt-cluster is selected

2.Click NEXT


Deploy VM with Default Policy [92]


1. Click on mgmt-vsan

2.For the VM Storage Policy dropdown, select vSAN Default Storage Policy

The resulting list of compatible datastores will be presented, in our case the mgmt-vsan.

In the lower section of the screen we can see the vSAN storage consumption would be 13.33 GB disk space and 0.00 B reserved Flash
space. You may need to scroll down to see this section of the screen.

Since we have a VM with 10 GB disk and the Default Storage Policy is RAID 5, the vSAN disk consumption will be 13.33 GB disk.

3.Click NEXT then, click NEXT on the Deploy From Template (not pictured), then click FINISH (not pictured)
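The 13.33 GB figure comes from RAID-5's 4/3 capacity multiplier (3 data components plus 1 parity component). A quick sketch of the arithmetic, illustrative only since real consumption also includes witness, metadata, and checksum overhead:

```python
# Raw vSAN capacity consumed for a given VMDK size under common schemes.
# RAID-1 FTT=1 keeps 2 full copies; RAID-5 writes 3 data + 1 parity
# component (4/3x); RAID-6 writes 4 data + 2 parity components (1.5x).
# Illustrative only: actual usage adds witness/metadata/checksum overhead.
FACTORS = {"RAID-1": 2.0, "RAID-5": 4 / 3, "RAID-6": 1.5}

def vsan_consumed_gb(vmdk_gb: float, scheme: str) -> float:
    return round(vmdk_gb * FACTORS[scheme], 2)

print(vsan_consumed_gb(10, "RAID-5"))  # 13.33, matching the wizard's estimate
print(vsan_consumed_gb(10, "RAID-1"))  # 20.0
```

The same arithmetic explains why switching this VM to RAID-1 later in the module increases the reported vSAN storage consumption.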

Deploy VM with Default Policy [93]

Wait for the Clone operation to complete.

1. Check the Recent Tasks for a status update on the Clone virtual machine task.


Verify VM has Default Storage Policy [94]

Once the clone operation has been completed:

1. Click to return to Hosts and Clusters

2.Select the VM called holostorage

3.Select Summary

4.Scroll down

5.View Related Objects

The VM is now residing on the mgmt-vsan

6.Scroll down and View Storage Policies

Here we can see that the VM Storage Policy for this VM is set to vSAN Default Storage Policy and the policy is Compliant.


VM Disk Policies [95]

1. Select the VM called holostorage

2.Select Configure

3.Select Policies

4.Select Hard Disk 1

Here we can see the VM Storage Policy that is applied to VM Home Object and the Hard Disk Object.


VM Disk Policies [96]

1. Select the mgmt-cluster

2.Select Monitor

3.Select vSAN > Virtual Objects

4.Expand holostorage

5.Select Hard disk 1

Verify that the Object State is Healthy and vSAN Default Storage Policy is applied. You may need to scroll to the right to see the Storage
Policy.

6.Click View Placement Details


VM Disk Policies [97]

Here we can see the Component layout for the Hard Disk.

1. There are 4 Components spread across 4 different ESXi Hosts

2.Click CLOSE

Advanced Storage Based Policy Management [98]

Consider these guidelines when you configure RAID 5 or RAID 6 erasure coding in a vSAN cluster.

•RAID 5 or RAID 6 erasure coding is available only on all-flash disk groups.

•On-disk format version 3.0 or later is required to support RAID 5 or RAID 6.

•You must have a valid license to enable RAID 5/6 on a cluster.

•You can achieve additional space savings by enabling deduplication and compression on the vSAN cluster.


New VM Storage Policy ( Raid 5/6 - Erasure coding ) [99]


First, we will create a VM Storage Policy that defines a RAID-1 Failure Tolerance method; later in this module we will edit it to use RAID-5 erasure coding.

1. From the Menu page of the vSphere Client

2.Select Policies and Profiles

New VM Storage Policy ( Raid 1) [100]

1. Select VM Storage Policies

2.Click on CREATE


1. Create a new VM Storage Policy using the following name:

PFTT=1-Raid1

2.Click NEXT


New VM Storage Policy ( Raid 1 - Mirroring ) [101]

1. Select Enable rules for "vSAN" storage

2.Click NEXT


Create New VM Storage Policy ( Raid 1 - Mirroring) [102]

1. In the Availability tab, select the following options:

Site disaster tolerance : None - standard cluster


Failures to Tolerate: 1 failure - RAID-1 (Mirroring)

2.Click Storage rules


1. In the Storage rules tab, Select All flash for the Storage tier

2.Select Advanced Policy Rules tab


Review the options that are available here, but leave at the default settings.

1. Click NEXT


Create New VM Storage Policy ( Raid 1 - Mirroring ) [103]

Verify that the mgmt-vsan is compatible against the VM Storage Policy.

1. Click NEXT


New VM Storage Policy ( Raid 1 - Mirroring ) [104]

Here we can see the rules that make up our VM Storage Policy.

1. Review the settings and click FINISH


Assign VM Storage Policy to an existing VM [105]

Now that we have created a new VM Storage Policy, let's assign that policy to an existing VM on the vSAN Datastore.

1. Select Menu on the vSphere Client

2.Select Inventory


Assign VM Storage Policy to an existing VM [106]

1. Select the VM called holostorage

2.Select Configure

3.Select Policies

Here we can see that the vSAN Default Storage Policy is assigned to this VM.

4.Select EDIT VM STORAGE POLICIES


Assign VM Storage Policy to an existing VM [107]

1. Change the VM Storage Policy in the dropdown list to PFTT=1-Raid1. You should notice that the total vSAN storage consumption has increased.

2.Click OK


Assign VM Storage Policy to an existing VM [108]

Verify that the VM Storage Policy has been changed and that the VM is compliant against the new storage Policy. You might have to hit
the refresh button to see the change.

Assign VM Storage Policy to an existing VM [109]


1. Select the cluster called mgmt-cluster

2.Select Monitor

3.Select vSAN > Virtual Objects

4.Expand holostorage and select Hard disk 1

5.Select VIEW PLACEMENT DETAILS

Assign VM Storage Policy to an existing VM [110]

Here we can see the revised component layout for the VM with the RAID-1 Storage Policy (2 vSAN components and a Witness across 3 hosts; you may see components reconfiguring, as in the screenshot, if they have not had time to rebalance before you reach this step).

1. Click CLOSE


Stripe Width Improvements in vSAN 7.0 U1+ [111]

In vSAN 7.0 U1 and higher, the “number of disk stripes per object” storage policy rule attempts to improve performance by
distributing data contained in a single object (such as a VMDK) across more capacity devices.

Commonly known as “stripe width,” this storage policy rule will tell vSAN to split the objects into chunks of data (known in vSAN as
“components”) across more capacity devices.

Refer to the following Blog Post for a more detailed description of these changes: https://blogs.vmware.com/virtualblocks/2021/01/21/stripe-width-improvements-in-vsan-7-u1/

The optimizations introduced to the stripe width storage policy rule in vSAN 7 U1 help provide more appropriate levels of disk striping
when using storage policies based on RAID-5/6 erasure codes.

In the next task we will change the Storage Policy to Raid 5 to have a Stripe Width value of 4.
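Conceptually, a stripe width of N tells vSAN to split an object's data evenly across N chunks on different capacity devices. A simplified sketch of that even split (illustrative only; vSAN's actual component sizing also honors component-size limits and the RAID layout chosen by the policy):

```python
# Illustrative even split of an object's address space across `stripe_width`
# chunks ("components"), mimicking the "number of disk stripes per object"
# rule. Real vSAN placement also accounts for component-size limits and RAID.
def stripe_chunks(object_bytes: int, stripe_width: int) -> list[int]:
    base, remainder = divmod(object_bytes, stripe_width)
    return [base + (1 if i < remainder else 0) for i in range(stripe_width)]

chunks = stripe_chunks(10 * 1024**3, 4)  # a 10 GiB object, stripe width 4
print(len(chunks), sum(chunks) == 10 * 1024**3)
```

The point of the 7 U1 optimizations is that vSAN applies this splitting more sensibly when the policy already produces multiple components per object (as RAID-5/6 erasure coding does).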


Edit VM Storage Policy [112]


1. Select Menu in the vSphere Client

2.Select Policies and Profiles

Edit VM Storage Policy [113]

1. Select VM Storage Policies

2.Select the VM Storage Policy called PFTT=1-Raid1

3.Select EDIT (if EDIT is unavailable, ensure only PFTT=1-Raid1 is selected)


Edit VM Storage Policy [114]

1. On the Name and description dialog, update the Name to PFTT=1-Raid5

2.Click NEXT


Edit VM Storage Policy [115]

1. On the Policy structure dialog, Click NEXT


Update Availability [116]

1. Update Failures to tolerate to 1 Failure - RAID 5 (Erasure Coding)

2.Click Next


Edit VM Storage Policy [117]

1. On the vSAN dialog, select Advanced Policy Rules

2.Modify the Number of disk stripes per object to 4

3.Enable Force Provisioning

4.Click NEXT


Edit VM Storage Policy [118]

1. On the Storage compatibility dialog, click NEXT


Edit VM Storage Policy [119]

1. On the Review and Finish dialog, click FINISH


Edit VM Storage Policy [120]

The VM storage policy is in use by 1 virtual machine(s). Changing the VM storage policy will make it out of sync with those 1 virtual
machine(s).

1. Select Manually later

2.Select Yes


Edit VM Storage Policy [121]

1. Select VM Compliance

2.You will see that the Compliance Status of the VM (holostorage) has now changed to Out of Date since we have changed the

VM Storage Policy that this VM has been using.

3.Click REAPPLY


Edit VM Storage Policy [122]

Reapplying the selected VM storage policy might take significant time and system resources because it will affect 1 VM(s) and will move
data residing on vSAN datastore.

1. Click Show predicted storage impact


Modify an existing VM Storage Policy [123]

The changes in the VM storage policies will lead to changes in the storage consumption on some datastores. The storage impact can be
predicted only for vSAN datastores, but datastores of other types could also be affected.

After you reapply the VM storage policies, the storage consumption of the affected datastores is shown.

1. Click CLOSE


Edit VM Storage Policy [124]

1. Click OK to reapply the VM Storage Policy.


Edit VM Storage Policy [125]

1. Once the VM Storage Policy has been reapplied, verify that the VM is in a Compliant state again with the VM Storage Policy.

2.If the VM does not show Compliant, click CHECK (you may have to click CHECK a couple of times)


Edit VM Storage Policy [126]

1. From vSphere Client, select Menu

2.Select Inventory


Modify an existing VM Storage Policy [127]

1. Select the cluster called mgmt-cluster

2.Select Monitor

3.Select vSAN > Virtual Objects

4.Expand holostorage and select Hard disk 1

5.Select VIEW PLACEMENT DETAILS


Physical Placement [128]

Here we can see the revised component layout for the VM with the RAID-5 Storage Policy (4 components on 4 hosts).

We now have components spread across 4 ESXi hosts with RAID 5. (Given that this is a 4-node cluster, this is the default configuration; with 5 hosts you may see objects across 4 hosts in RAID 0 at the component level.)

1. Click CLOSE

Reserved Capacity [129]

We now have the ability to control the amount of capacity that is reserved for rebuild operations and for transient operations, such as the temporary capacity needed to change the policy on an object.

By default, the Capacity Reserve feature is disabled, meaning all vSAN capacity is available for workloads. You can enable capacity reservations for internal cluster operations and host failure rebuilds. Reservations are soft thresholds designed to prevent user-driven provisioning activity from interfering with internal operations, such as data rebuilds, rebalancing activity, or policy re-configurations. The capacity required to recover from a host failure matches the total capacity of the largest host in the cluster, and a minimum of 4 hosts is required.


By enabling reserved capacity in advance, vSAN prevents you from using that space to create workloads, keeping the capacity available in the cluster.

If there is enough free space in the vSAN cluster, you can enable the operations reserve and/or the host rebuild reserve.

•Operations Reserve - Reserved space in the cluster for vSAN internal operations.

•Host Rebuild Reserve - Reserved space for vSAN to be able to repair in case of a single host failure.

Reserved capacity is not supported on a stretched cluster, a cluster with fault domains or nested fault domains, a ROBO cluster, or a cluster with fewer than four hosts.
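The sizing rule above (the host rebuild reserve covers the largest host, and four hosts is the minimum) can be sketched as simple arithmetic, assuming equal, fully usable host capacities:

```python
# Rough sketch of the host-rebuild-reserve sizing rule described above:
# reserve enough capacity to re-protect data after losing the largest host.
# Illustrative only; vSAN's actual calculation accounts for overheads.
def host_rebuild_reserve_pct(host_capacities_tb: list[float]) -> float:
    if len(host_capacities_tb) < 4:
        raise ValueError("host rebuild reserve requires at least 4 hosts")
    return round(100 * max(host_capacities_tb) / sum(host_capacities_tb), 1)

print(host_rebuild_reserve_pct([10, 10, 10, 10]))  # 25.0 (% of cluster capacity)
```

This also shows why small clusters pay proportionally more: with 4 equal hosts the reserve is a quarter of the cluster, while with 10 equal hosts it drops to a tenth.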

Configure Reserved Capacity [130]

Now that we have 4 hosts in the cluster, we can start configuring the Reserve Capacity.

1. Select Menu

2.Select Inventory


Configure Reserve Capacity [131]

Once you edit the Capacity Reserve settings, you will be shown how much of the total vSAN datastore capacity (by default) is allocated to each reserve.

1. Navigate to the vSAN Cluster, mgmt-cluster

2.Select Configure

3.Select vSAN > Services

4.Scroll until you see Reservations and Alerts

5.Under Reservations and Alerts, click EDIT


Running Low on Capacity [132]

It looks like we are running too low on capacity to enable Operations reserve. Let's address this by creating another disk group.

1. Click Cancel


Claim Unused Disks [133]

1. Click Disk Management

2.Click Claim Unused Disks (8)


Create Disk Group [134]

Here we can see 2 disk types that are installed on each host in the cluster. The smaller of the drives has been identified as the Cache
Tier and the larger the Capacity Tier.

1. Click to expand to see the drives on each of the 4 hosts


2.Click Create


Refresh Capacity [135]

1. After waiting 30 seconds or less your screen will update and show that 4/4 disks are in use on each host.

Now that the capacity in the cluster has been increased let's see if we can configure our Reserve Capacity again.


Configure Reserve Capacity [136]

Once you edit the Capacity Reserve settings, you will be shown how much of the total vSAN datastore capacity (by default) is allocated to each reserve.

1. Navigate to the vSAN Cluster, mgmt-cluster

2.Select Configure

3.Select vSAN > Services

4.Scroll until you see Reservations and Alerts


5.Under Reservations and Alerts, click EDIT


Enable Capacity Reserve [137]

1. Hover over the Warning icon. You will see a warning notification when storage consumption reaches 70%

2.Hover over the Error icon. You will see an error alert at 90% of storage consumption

3.Toggle Operations reserve and Host rebuild reserve on

4.Select Customize alerts. You can set the warning and error alerts but we will keep it at default

5.Click on APPLY


Conclusion [138]

Storage Policy Based Management (SPBM) is a major element of your software-defined storage environment. It is a storage policy
framework that provides a single unified control panel across a broad range of data services and storage solutions.

The framework helps to align storage with application demands of your virtual machines.

You Finished Module 5 [139]

Congratulations on completing Module 5.


Module 6 - vSAN - Monitoring, Health, Capacity and Performance (30 minutes) Beginner

Introductions [141]

A critical aspect of enabling a vSAN Datastore is validating the Health of the environment. vSAN has over a hundred out of the box
Health Checks to not only validate initial Health but also report ongoing runtime Health. vSAN 8 introduces exciting new ways to
monitor the Health, Capacity and Performance of your Cluster via vRealize Operations within vCenter, all within the same User Interface
that VI Administrators use today.

vSAN Health Check Validation [142]

One of the ways to monitor your vSAN environment is to use the vSAN Health Check.

The vSAN Health runs a comprehensive health check on your vSAN environment to verify that it is running correctly, alerts you if it finds any inconsistencies, and offers options on how to fix them.

vSAN Health Check [143]

Running individual commands from one host to all other hosts in the cluster can be tedious and time consuming. Fortunately, since
vSAN 6.0, vSAN has a health check system, part of which tests the network connectivity between all hosts in the cluster. One of the first
tasks to do after setting up any vSAN cluster is to perform a vSAN Health Check. This will reduce the time to detect and resolve any
networking issue, or any other vSAN issues in the cluster.


Open Chrome Browser from the Windows Desktop [144]

1. Click on the Chrome Icon on the Windows Quick Launch Task Bar.


Login to vSphere Client [145]

1. Click the second browser tab

2.On the vSphere Client login screen, enter username: administrator@vsphere.local

3.Enter Password: VMware123!

4.Click LOGIN


Use Health Check to Verify vSAN Functionality [146]

1. Select Menu

2.Select Inventory


Use Health Check to Verify vSAN Functionality [147]

To run a vSAN Health Check,

1. Select the vSAN Cluster, mgmt-cluster

2.Select Monitor

3.Select vSAN > Skyline Health. Here we can see our overall health score is 93/100 (with this being a lab environment, this is to be expected)

4.Click to enable list view

5.Click on one of health checks to see what is being run

You can view the history of the health of the vSAN Cluster and when an event was unhealthy.

Note that some of the Health Checks are in a Warning State. This is due to the fact that we are running a vSAN cluster in a nested
virtualized environment. In addition some alerts have been silenced due to the nested environment.


Inducing a vSAN Health Check Failure [148]

Let's induce a vSAN Health Check failure to test the health Check.

1. Right click the ESXi host called esxi-4.vcf.sddc.lab

2.Select Connection

3.Select Disconnect


1. Click OK to disconnect the selected host.

Inducing a vSAN Health Check Failure [149]


Let's return to the vSAN Health Check

1. Select the vSAN Cluster, mgmt-cluster

2.Select Monitor

3.Select vSAN > Skyline Health

4.Click the RETEST button if the Health Check for Hosts disconnected from VC does not show as red.

Here we will see a vSAN Network Health Check that has failed.

Inducing a vSAN Health Check Failure [150]

1. Click the Hosts Disconnected from VC to get additional information

Here we can see that the ESXi host called esxi-4.vcf.sddc.lab is showing as Disconnected.

Each details view under the Info tab also contains an Ask VMware button where appropriate, which will take you to a VMware
Knowledge Base article detailing the issue, and how to troubleshoot and resolve it.


Resolving a vSAN Health Check Failure [151]

Let's resolve the vSAN Health Check failure.

1. Right click the ESXi host called esxi-4.vcf.sddc.lab

2.Select Connection

3.Select Connect

Answer OK to reconnect the selected host.


Resolving a vSAN Health Check Failure [152]

Let's return to the vSAN Health Check

1. Select the vSAN Cluster, mgmt-cluster

2.Select Monitor

3.Select vSAN > Skyline Health

4.Click Retest if your score still shows 89

5.The Host connection has been restored and our cluster health score increased.

Click the RETEST button if the Health Checks do not show an equivalent cluster health.

Conclusion [153]

You can use the vSAN health checks to monitor the status of cluster components, diagnose issues, and troubleshoot problems. The
health checks cover hardware compatibility, network configuration and operation, advanced vSAN configuration options, storage
device health, and virtual machine objects.

Monitoring vSAN Capacity [154]

The capacity of the vSAN Datastore can be monitored from a number of locations within the vSphere Client. First, one can select the Datastore view, and view the summary tab for the vSAN Datastore. This will show you the capacity, used and free space.

Datastore View [155]

1. Select Storage Icon

2.Select mgmt-datacenter > mgmt-vsan

3.Click Summary

4.Note the amount of Used and Free Capacity Information


Capacity Overview [156]

1. Select Inventory Icon

2.Select mgmt-cluster

3.Select Monitor

4.Scroll down and Click vSAN > Capacity

5.Note the Capacity Overview and Usable Capacity Analysis Information

The Capacity Overview displays the storage capacity of the vSAN Datastore, including used space and free space. The Deduplication
and Compression Overview indicates storage usage before and after space savings are applied, including a Ratio indicator.


Usage breakdown before dedup and compression [157]

1. Scroll Down to view the Usage breakdown

2.Click on EXPAND ALL (note - screenshot showing expanded)

These are all the different object types one might find on the vSAN Datastore. We have VMDKs, VM Home namespaces, and swap
objects for virtual machines. We also have performance management objects when the vSAN performance logging service is enabled.
There are also the overheads associated with on-disk format file system, and checksum overhead. Other (not shown) refers to objects
such as templates and ISO images, and anything else that doesn't fit into a category above.

It's important to note that the percentages shown are based on the current amount of Used vSAN Datastore space. These percentages
will change as more Virtual Machines are stored within vSAN (e.g. the File system overhead % will decrease, as one example).
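The shifting percentages can be illustrated with a small sketch using hypothetical numbers: each category's share is computed against currently used space, so a fixed overhead shrinks as a share of the total while VM data grows.

```python
# Each category's share is computed against *used* capacity, so a fixed
# overhead (e.g. file-system overhead) shrinks as VM data grows.
# Hypothetical category sizes for illustration.
def usage_breakdown_pct(categories_gb: dict) -> dict:
    used = sum(categories_gb.values())
    return {name: round(100 * gb / used, 1) for name, gb in categories_gb.items()}

before = usage_breakdown_pct({"VMDK": 40, "VM home": 5, "FS overhead": 5})
after = usage_breakdown_pct({"VMDK": 90, "VM home": 5, "FS overhead": 5})
print(before["FS overhead"], after["FS overhead"])  # 10.0 5.0
```

Here the same 5 GB of file-system overhead falls from 10% to 5% of used space simply because more VMDK data was written.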


Data Distribution [158]

1. Make sure you are in the Configure Tab

2.Select vSAN > Fault Domains

3.Note the data distribution across all 5 hosts in the cluster. vSAN is managing the distribution

Here we can see the amount of Used Capacity per ESXi Host.

Monitoring vSAN Performance [159]

A healthy vSAN environment is one that is performing well. vSAN includes many graphs that provide performance information at the
cluster, host, network adapter, virtual machine, and virtual disk levels. There are many data points that can be viewed such as IOPS,
throughput, latency, packet loss rate, write buffer free percentage, cache de-stage rate, and congestion. Time range can be modified to
show information from the last 1-24 hours or a custom date and time range. It is also possible to save performance data for later
viewing.


Performance Service [160]

With vSAN 7, the performance service is automatically enabled at the cluster level. The performance service is responsible for collecting
and presenting Cluster, Host and Virtual Machine performance related metrics for vSAN powered environments. The performance
service is integrated into ESXi, running on each host, and collects the data in a database, as an object on a vSAN Datastore. The
performance service database is stored as a vSAN object independent of vCenter Server. A storage policy is assigned to the object to
control space consumption and availability of that object. If it becomes unavailable, performance history for the cluster cannot be
viewed until access to the object is restored.

Performance Metrics are stored for 90 days and are captured at 5 minute intervals.
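At a 5-minute interval over 90 days, the retained history per metric works out as simple arithmetic on the figures above:

```python
# Number of data points retained per metric at the stated collection settings:
# 90 days of history, one sample every 5 minutes.
RETENTION_DAYS = 90
INTERVAL_MINUTES = 5

samples_per_day = 24 * 60 // INTERVAL_MINUTES        # 288 samples/day
samples_per_metric = RETENTION_DAYS * samples_per_day
print(samples_per_day, samples_per_metric)  # 288 25920
```

So each metric carries roughly 26 thousand data points, which is why the stats database is stored as its own vSAN object with a storage policy controlling its space and availability.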

Enable Performance Service [161]

1. Select mgmt-cluster

2.Select Configure

3.Select vSAN > Services

4.Click Enable on the Performance Service

5.Click Enable on the pop-up dialog (not pictured)

6.Click Refresh to update the page


Review Performance [162]

1. Note that the Performance Stats Database Object is reported as Healthy

2.Note that the Stats DB is using the vSAN Default Storage Policy (RAID 5 - Erasure Coding) and is reporting Compliant status

Let's examine the various Performance views next at a Cluster, Host and Virtual Machine level.


Cluster Performance [163]

1. Select mgmt-cluster

2.Select Monitor

3.Select vSAN > Performance

4.Note that we can choose to view VM, Backend, File Share and IOInsight Performance views at the Cluster level (you can also customize the Time Range if desired).

Note: You might need to click on 'Show Results'

Scroll-down to view the various metrics that are collected (IOPS


IOPS, Throughput
Throughput, Latency
Latency, etc.)

“Front End” VM traffic is defined as the type of storage traffic being generated by the VMs themselves (the reads they are requesting,
and the writes they are committing). “Back End” vSAN traffic accounts for replica traffic (I/Os in order to make the data redundant/
highly available), as well as synchronization traffic. Both of these traffic types take place on the dedicated vSAN vmkernel interface(s)
per vSphere Host.


Host Performance [164]

1. Select host, esxi-1.vcf.sddc.lab

2. Select Monitor

3. Select vSAN > Performance

4. Note that we can choose to view VM, Backend, Disks, Physical Adapters, Host Network and IOInsight Performance views at the Host level (you can also customize the Time Range if desired).

Scroll down to view the various metrics that are collected (IOPS, Throughput, Latency, etc.)

In this view, we can see more Performance related metrics at the Host level vs. Cluster. Feel free to examine the various categories
indicated in Step 4 to get a feel for the information that is available.


Virtual Machine Performance [165]

1. Select the VM called sddc-manager

2. Select Monitor

3. Select vSAN > Performance

4. Note that we can choose to view VM and Virtual Disks Performance views at the Virtual Machine level (you can also customize the Time Range if desired). (If no results show up, click Show Results in this box to refresh; you may need to wait 1-2 minutes before this will refresh)

Scroll down to view the various metrics that are collected (IOPS, Throughput, Latency, etc.)

Conclusion [166]

In this module, we showed you how to validate vSAN health and monitor vSAN capacity and performance, as well as how to utilize vRealize
Operations for vCenter and vRealize Operations Dashboards.

You Finished Module 6 [167]

Congratulations on completing Module 6.


Module 7 - Workload Domain Operations (iSIMs) (45 minutes) Beginner

Introduction [169]

This module contains a total of four different iSIMs that are all related to Workload Domain Operations. These demonstrations will walk
you through the following processes:

•Host Commissioning

•Creating a vLCM Image based Workload Domain

•How to expand an Existing Cluster

•Updating a Cluster using a vLCM Image

Hands-on Labs Interactive Simulation: Host Commissioning [170]

This part of the lab is presented as a Hands-on Labs Interactive Simulation. This will allow you to experience steps which are too time-consuming or resource-intensive to do live in the lab environment. In this simulation, you can use the software interface as if you are
interacting with a live environment.

Click the button below to start the simulation!

View the Interactive Simulation

You can hide the manual to use more of the screen for the simulation.


NOTE: When you have completed the simulation, click on the Manual tab to open it and continue with the lab.

[vlp:close-panel|manual|Close Instructions Panel]


Hands-on Labs Interactive Simulation: Create a vLCM Image-Based Workload Domain [171]

This part of the lab is presented as a Hands-on Labs Interactive Simulation. This will allow you to experience steps which are too time-consuming or resource-intensive to do live in the lab environment. In this simulation, you can use the software interface as if you are
interacting with a live environment.

Click the button below to start the simulation!

View the Interactive Simulation

You can hide the manual to use more of the screen for the simulation.


NOTE: When you have completed the simulation, click on the Manual tab to open it and continue with the lab.

[vlp:close-panel|manual|Close Instructions Panel]


Hands-on Labs Interactive Simulation: Expand an Existing Cluster [172]

This part of the lab is presented as a Hands-on Labs Interactive Simulation. This will allow you to experience steps which are too time-consuming or resource-intensive to do live in the lab environment. In this simulation, you can use the software interface as if you are
interacting with a live environment.

Click the button below to start the simulation!

View the Interactive Simulation

You can hide the manual to use more of the screen for the simulation.


NOTE: When you have completed the simulation, click on the Manual tab to open it and continue with the lab.

[vlp:close-panel|manual|Close Instructions Panel]


Hands-on Labs Interactive Simulation: Cluster Upgrades with vLCM Images [173]

This part of the lab is presented as a Hands-on Labs Interactive Simulation. This will allow you to experience steps which are too time-consuming or resource-intensive to do live in the lab environment. In this simulation, you can use the software interface as if you are
interacting with a live environment.

Click the button below to start the simulation!

View the Interactive Simulation

You can hide the manual to use more of the screen for the simulation.


NOTE: When you have completed the simulation, click on the Manual tab to open it and continue with the lab.

[vlp:close-panel|manual|Close Instructions Panel]


Conclusion [174]

You have completed Module 7 and should now have a good understanding of some operations, including the creation and lifecycle
management required to grow and maintain a Workload Domain. Please continue to Module 8.


Module 8 - Introducing Software Defined Networking: Segments and Distributed Routing (45
Minutes) Intermediate

Module 8 - Overview [176]

Software-Defined Networking in VMware Cloud Foundation is provided by VMware NSX. NSX operates as an “Overlay Network”,
where the networking capabilities are delivered in software, and “encapsulated” within standard TCP/IP packets transported by a
standard IP “underlay” network.

NSX enables customers to create elastic, logical networks that span physical network boundaries. NSX abstracts the physical network
into a pool of capacity and separates the consumption of these services from the underlying physical infrastructure. This model is similar
to the model vSphere uses to abstract compute capacity from the server hardware to create virtual pools of resources that can be
consumed as a service.

This module contains five activities:

•Creating network segments

•Viewing the packet flow within a host

•Viewing the packet flow between hosts

•Adding router connectivity

•Testing the Opencart application

Creating Network Segments [177]

This exercise will deploy the necessary networking components to support a simple two-tier application called “Opencart”. Opencart
is an open-source e-commerce platform that uses an Apache front end and a MySQL back end. This lab uses preconfigured Apache
and MySQL VMs that we will attach to newly created SDN segments.


Initial Console Check [178]

1. Please ensure that the Lab Status is green and says “Ready”.

2. After you have verified that the lab is ready, please launch Google Chrome using the shortcut on the desktop.


Log in to NSX Manager [179]

Once the browser has launched, you will see two tabs open by default. The first tab is the SDDC Manager Login, the second is the
vCenter Login.

1. Open a new tab and verify the page URL to ensure you have the correct user interface. The NSX Manager login URL should read https://mgmt-nsx.vcf.sddc.lab/login.jsp

2. In the User name field enter: admin

3. In the Password field enter: VMware123!VMware123!

4. Click the LOG IN button


Review Existing Topology [180]


1. From the NSX-T Manager interface click the Networking tab

2. Select Network Topology from the left-hand menu

3. What you see are the existing segments and routers that are used elsewhere in the lab environment. By the end of this lab exercise, you will create the configuration shown below, with 2 new segments connected to a Tier-1 gateway, with a load balancer and firewall configured.

OC-Web-Segment

•10.1.1.0/27

•Gateway 10.1.1.1/27

•OC-Apache-A 10.1.1.18

•OC-Apache-B 10.1.1.19

•OC-LB 10.1.1.2 (Load Balancer)

OC-DB-Segment

•10.1.1.32/27

•Gateway 10.1.1.33/27

•OC-MySQL 10.1.1.50
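The addressing plan above can be double-checked with Python's standard ipaddress module; a small sketch (variable names are just local labels for the two segments):

```python
import ipaddress

# The two /27 segments planned for the Opencart application.
web = ipaddress.ip_network("10.1.1.0/27")   # OC-Web-Segment
db = ipaddress.ip_network("10.1.1.32/27")   # OC-DB-Segment

# Each /27 provides 30 usable host addresses, and the ranges do not overlap.
print(web.num_addresses - 2)  # 30
print(web.overlaps(db))       # False

# The planned gateways and VM addresses fall inside their segments.
assert ipaddress.ip_address("10.1.1.1") in web    # web gateway
assert ipaddress.ip_address("10.1.1.18") in web   # OC-Apache-A
assert ipaddress.ip_address("10.1.1.33") in db    # DB gateway
assert ipaddress.ip_address("10.1.1.50") in db    # OC-MySQL
```

Because the two /27s are adjacent but disjoint, traffic between the web and DB tiers must be routed, which is exactly why a Tier-1 gateway is added later in this module.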

Add a New Segment [181]


1. Click on Segments in the left pane.

2. Click on the ADD SEGMENT button

Create the OC-DB-Segment [182]

1. In the Segment Name field, enter: OC-DB-Segment

2. Leave the Connected Gateway field as None

3. In the Transport Zone dropdown, select mgmt-domain-tz-overlay01 | Overlay

4. In the IPv4 gateway field enter: 10.1.1.33/27

Save the Segment [183]

1. Leave all other settings at their defaults, scroll to the bottom, and click SAVE


2. You will see your segment has been successfully created. Click NO on the Want to continue configuring this segment? prompt
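For automation, the same segment can be created with a single NSX Policy API call (PATCH /policy/api/v1/infra/segments/&lt;segment-id&gt;). The sketch below only builds the request body; the transport zone path shown is an assumption — a real path embeds the site, enforcement point, and transport zone ID from your environment.

```python
import json

def segment_body(gateway_cidr: str, transport_zone_path: str) -> dict:
    """Request body for creating/updating an NSX Policy overlay segment."""
    return {
        "subnets": [{"gateway_address": gateway_cidr}],
        "transport_zone_path": transport_zone_path,
    }

# Hypothetical Policy path for the lab's overlay transport zone.
tz = ("/infra/sites/default/enforcement-points/default"
      "/transport-zones/mgmt-domain-tz-overlay01")
print(json.dumps(segment_body("10.1.1.33/27", tz), indent=2))
```

The gateway address carries the prefix length, so the subnet definition and the gateway IP are expressed in one field, just as in the UI form above.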

Add a Second Segment [184]

1. Click on the ADD SEGMENT button


Create the OC-Web-Segment [185]

1. In the Segment Name field, enter: OC-Web-Segment

2. Leave the Connected Gateway field as None

3. In the Transport Zone dropdown, select mgmt-domain-tz-overlay01 | Overlay

4. In the IPv4 gateway field enter: 10.1.1.1/27

Save the Segment [186]

1. Leave all other settings at their defaults, scroll to the bottom, and click SAVE


2. You will see your segment has been successfully created. Click NO on the Want to continue configuring this segment? prompt


Log in to vCenter Server [187]


1. Select the vCenter tab and verify the page URL to ensure you have the correct user interface. The vCenter login URL should read https://mgmt-vcenter.vcf.sddc.lab

2. In the User name field enter: administrator@vsphere.local

3. In the Password field enter: VMware123!

4. Click on the LOGIN button

Connect Opencart Web Server VMs [188]

This step will attach our two Apache web server VMs to the OC-Web-Segment.


1. From the vCenter Server Hosts and Clusters view, find OC-Apache-A in the left-side scroll list.

2. Right-click on OC-Apache-A

3. Select Edit Settings

Browse Networks [189]


1. Click the dropdown next to Network Adapter 1

2. Select Browse

NOTE: The network segment that you just created in NSX is now visible in vSphere

Choose Correct Network [190]

1. Click on OC-Web-Segment

2. Click on the OK button


Save Network Changes [191]

1. Click on the OK button


Update OC-Apache-B VM [192]

1. From the vCenter Server Hosts and Clusters view, find OC-Apache-B in the left-side scroll list.

2. Right-click on OC-Apache-B

3. Select Edit Settings


Browse Networks [193]

1. Click the dropdown next to Network Adapter 1

2. Select Browse


Choose Correct Network [194]

1. Click on OC-Web-Segment

2. Click on the OK button


Save Network Changes [195]

1. Click on the OK button

Test Basic Connectivity [196]

With 2 VMs on the segment, we can test connectivity between the VMs. IP Assignment is as follows:

•OC-Apache-A 10.1.1.18

•OC-Apache-B 10.1.1.19


1. Open a web console on OC-Apache-B by clicking the LAUNCH WEB CONSOLE button


Log into VM Console [197]

You may need to hit enter in the window to get a login prompt

1. Login with:

◦Username: ocuser

◦Password: VMware123!


Verify the Current IP Address [198]

1. Check the interface configuration by typing ifconfig at the prompt and hitting Enter

Ping OC-Apache-A [199]


1. Test connectivity with OC-Apache-A by typing ping 10.1.1.18 at the prompt and hitting Enter

2. Press Ctrl+c to stop the ping

Notice we can communicate from OC-Apache-B to OC-Apache-A on the network we just created.

View NSX OC-Web-Segment in vCenter Server [200]


1. Click on the Management vCenter Server tab in your browser

2. Click on the Networking icon

3. Click on the OC-Web-Segment

4. Click on the Summary tab

5. Notice the “N” showing this is an NSX segment versus a Standard Port Group

6. The segment ID is shown along with the transport zone the segment is a part of

7. Notice the NSX Manager for this segment is hyperlinked

8. Lower on the screen you can see which VDS this segment is configured on

NOTE: With vSphere 7 and NSX-T 3.1 and higher versions, NSX segments are an extension of the vSphere Distributed Switch and are
completely visible to the vSphere team

Review Segment Ports [201]

1. Click on the Ports tab

2. Notice there is a port per virtual machine we attached to the segment, along with MAC addresses for the interfaces on the segment, and other port-specific data


Review Hosts with the Segment [202]

1. Click on the Hosts tab

2. Notice the hosts that this segment has been presented to

NOTE: Our OC-Web-Segment is connected to each ESXi host in the transport zone. When a segment is created, it is accessible to all
hosts in the transport zone.


Discover the Network Topology [203]


1. Click on the NSX Manager tab in the browser

◦If needed, log into NSX Manager with

◦Username: admin

◦Password: VMware123!VMware123!

2. From the NSX Manager interface click the Networking tab

3. Select Network Topology from the left menu

4. Locate our new OC-Web-Segment.

Note: You may need to drag the screen to the left and zoom in to see the right side of the topology display. Look for a segment that has 2 VMs connected to it

5. Click on 2 VMs under OC-Web-Segment

Review the Segment [204]

Notice the 2 VMs you configured on the OC-Web-Segment in the topology map.

Summary [205]

This lab shows how simple it is to create an overlay network using NSX Manager. In this example, we created a fully functional IP subnet
on an overlay network segment in just a few steps. Unlike traditional VLANs in vSphere, segments do not require any underlying VLAN
configuration.


Viewing the packet flow within a host [206]

In this exercise, we will use the Traceflow capability in NSX to view traffic moving between virtual machines on the same host, on the
same segment.

Traceflow injects packets into a vSphere distributed switch (VDS) port and provides observation points along the packet’s path as it
traverses the overlay and underlay network. Observation points include entry and exit of distributed firewalls, host and edge TEPs,
logical routers, etc. This allows you to identify the path (or paths) a packet takes to reach its destination or, conversely, where a packet
is dropped along the way. Each entity reports the packet handling on input and output, so you can determine whether issues occur
when receiving a packet or when forwarding the packet. Traceflow is not the same as a ping request/response that goes from guest-VM
stack to guest-VM stack. With the NSX prepped VDS in VCF, NSX Traceflow can inject and monitor packets at the point where a VM
vNIC connects to the VDS switch port. This means that a Traceflow can be successful even when the guest VM is powered down. Note:
If the VM has not been powered on since attaching the NSX segment, the NSX control plane cannot know which host to use to inject
packets from that VM as the source and that test will fail.

Setup VMs for Testing [207]


1. Click on the Management vCenter Server tab in the browser

◦If the session has timed out, log in with

▪Username: administrator@vsphere.local

▪Password: VMware123!

2. Click on the Hosts and Clusters view

3. Click on the OC-Apache-A virtual machine

4. Determine which ESXi host the VM is running on

Note: In this example, OC-Apache-A is running on host esxi-4.vcf.sddc.lab

Locate and Migrate OC-Apache-B [208]

Use the same steps as on the previous page to locate the host OC-Apache-B is on. If it is running on the same host as OC-Apache-A, skip
the next steps to migrate the VM.

NOTE: If OC-Apache-B is not on the same host as OC-Apache-A, initiate a vMotion to move it to the same host.


1. Right Click on the OC-Apache-B virtual machine

2. Select the Migrate... option


Choose vMotion Type [209]

1. Ensure Change compute resource only is selected

2. Click on the NEXT button


Select Compute Resource [210]

1. Select the same host that OC-Apache-A is running on

2. Click on the NEXT button


Confirm Networks [211]

1. Click on the NEXT button

Note: No change to the network is needed with this migration


Set vMotion Priority [212]

1. Click on the NEXT button


Confirm vMotion Settings [213]

1. Click on the FINISH button


Test Packet Flow [214]

1. Click on the NSX Manager tab in the browser

◦If necessary, log in with

▪Username: admin
▪Password: VMware123!VMware123!

2. Click on Plan and Troubleshoot on the top menu bar

3. Click on Traffic Analysis on the left menu bar

4. Click on the GET STARTED button in the Traceflow tile


Configure Traceflow [215]

1. Set up a Traceflow between OC-Apache-A and OC-Apache-B. All values other than the VM names can remain at defaults

2. Click on the TRACE button


Review Flow Path [216]

Notice the path the data packets take on the resulting topology view. We can see that packets move from OC-Apache-A to OC-Apache-B via the OC-Web-Segment.

Review Flow Details [217]


In the Observations panel, review the following

•We show 1 packet delivered

•The physical hop count is 0, indicating that the packet did not leave the host

•The packet was injected at the network adapter for OC-Apache-A virtual machine

•It is then received at the distributed firewall at the VDS port for OC-Apache-A

•With no rule blocking, the packet is then forwarded on from the sending VDS port

•The packet is then received on the distributed firewall at the receiving VDS port for OC-Apache-B.

•With no rule blocking forwarding, the packet is then forwarded to the destination

•The last step shows the packet is delivered to the network adapter for the OC-Apache-B VM

Summary [218]

This section shows how ICMP packets travel between the VDS ports for 2 virtual machines running on the same ESXi host. You can see
the packet enter the VDS at the source, pass through the source-side firewall, get forwarded to the destination distributed firewall, and
finally arrive at the destination VDS port.


Viewing the packet flow between hosts [219]

1. Click on the Management vCenter Server tab in the browser

◦If the session has timed out, log in with

◦Username: administrator@vsphere.local

◦Password: VMware123!

2. Click on the Hosts and Clusters view

3. Click on the OC-Apache-A virtual machine

4. Determine which ESXi host currently has the VM running

NOTE: In this example, OC-Apache-A is running on host esxi-4.vcf.sddc.lab.


Locate and Migrate OC-Apache-B [220]

1. Next click on the OC-Apache-B virtual machine

2. Determine which host it is running on

NOTE: If OC-Apache-B is on the same host as OC-Apache-A, initiate a vMotion to move OC-Apache-B to a different host. As we are
using vSAN storage, you only need to vMotion compute resources.


Viewing Packet Flow [221]

1. Click on the NSX Manager tab in the browser

◦If necessary, log in with:

◦Username: admin

◦Password: VMware123!VMware123!

2. Click on Plan and Troubleshoot on the top menu bar

3. Click on Traffic Analysis on the left menu bar

4. Click on the GET STARTED button in the Traceflow tile


Review Previous Traceflow [222]

1. If your most recent Traceflow is still on screen between OC-Apache-A and OC-Apache-B, click on the RETRACE button

2. Click on the PROCEED button and skip forward

New Traceflow [223]

1. If the previous Traceflow is not correct, click on the NEW TRACE button


Configure Traceflow settings [224]

1. Set up a Traceflow between OC-Apache-A and OC-Apache-B. All values other than the VM names can remain at the defaults

2. Click on the TRACE button


Review Traceflow Path [225]

NOTE: Notice the path the data packets take on the resulting topology view. We can see that packets move from OC-Apache-A to OC-Apache-B via the OC-Web-Segment.

Review Traceflow Detail [226]


In the Observations panel, review the following

•We show 1 packet delivered

•The physical hop count increments to 1 part way through the flow, indicating that the packet left the host

•The packet was injected at the network adapter for OC-Apache-A virtual machine

•It is then received at the distributed firewall at the VDS port for OC-Apache-A

•With no rule blocking, the packet is then forwarded on from the sending VDS port

•The packet then hits the physical layer to transmit to the second host. Notice the local and remote endpoints are shown

•The packet is then received on the second host. Notice the inverse local and remote endpoint IPs. The local and remote endpoints are the “Tunnel End Points” (TEPs). When the OC-Apache-B virtual machine was migrated to another host, the NSX manager updated all hosts in the transport zone with the new TEP for the virtual machine

•The packet is then received on the distributed firewall at the receiving VDS port for OC-Apache-B.

•With no rule blocking forwarding, the packet is then forwarded to the destination

•The last step shows the packet is delivered to the network adapter for the OC-Apache-B VM

View Host TEP Information [227]


1. Click on the System tab

2. In the left menu click on Fabric

3. Click on Hosts

4. Click the > by mgmt-cluster to expand and see the hosts

5. Click on the "and 4 more" hyperlink to see the additional IP addresses

Note: Each host has 2 TEP interfaces in the Host TEP VLAN. In the lab configuration, Host TEP addresses are DHCP-assigned on the
172.16.254.0/24 network with Cloud Builder acting as the DHCP server for ESXi hosts connecting to the Host TEP Network.

Network Overview [228]

Compare this to the logical layout of the environment. Notice each host has two TEP interfaces on the DHCP-based Host TEP Network.

The NSX Manager, interfaced with a vCenter Server instance, is responsible for updating all transport nodes in the transport zone any
time a VM powers on or is migrated. This provides a mapping of VM to TEP addresses to send the overlay traffic for a specific VM. As a
“Tunnel End Point,” the NSX prepped vSphere Distributed switch is responsible for de-encapsulating the overlay traffic to a VM and
encapsulating the traffic to communicate on the overlay. This is transparent to the VM and the underlay network.
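Conceptually, the control plane maintains a per-VM mapping to the TEP of the host the VM runs on, and pushes updates to every transport node on power-on or vMotion. A toy model of that table (the VM names match the lab, but the TEP addresses are illustrative):

```python
# Illustrative VM -> host TEP mapping, as distributed by the control plane.
tep_table = {
    "OC-Apache-A": "172.16.254.11",
    "OC-Apache-B": "172.16.254.11",  # currently on the same host
}

def on_vmotion(table: dict, vm: str, new_tep: str) -> dict:
    """After a migration, every transport node learns the VM's new TEP."""
    table[vm] = new_tep
    return table

# OC-Apache-B migrates to a different host with its own TEP.
on_vmotion(tep_table, "OC-Apache-B", "172.16.254.12")
print(tep_table["OC-Apache-B"])  # 172.16.254.12
```

This is why the Traceflow between hosts showed a physical hop: the source host looked up the destination VM's TEP and encapsulated the packet toward it.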


Summary [229]

In this section, we show how ICMP packets travel between the VDS ports for 2 virtual machines running on different ESXi hosts. As in
the previous section, you first see the packet pass from where it enters the VDS at the source through the source-side firewall. The packet
then gets encapsulated and placed on the segment (overlay network) via the source-side TEP and forwarded to the destination TEP.
The destination TEP de-encapsulates the packet and passes it to the destination-side distributed firewall, and finally to the VDS port. This
is a simple example of the power of overlay networks in VCF. The source and destination physical machines do not have to be in
the same subnet, as would be common in a multi-rack configuration with Spine/Leaf physical networking.

Adding router connectivity [230]

While we created two segments earlier in the lab, the segments are currently not connected to each other, or to other parts of the network. In this
section, we will create an NSX Tier-1 router, connect the OC-Web-Segment and OC-DB-Segment to the T1 router, then connect the T1
router to the existing T0 router in the lab configuration.

Create a T1 Router [231]


1. Click on the NSX Manager tab in the Chrome browser

◦If needed log in with:

▪Username: admin

▪Password: VMware123!VMware123!

2. From the NSX Manager interface click the Networking tab

3. Click on Tier-1 Gateways in the left navigation panel

4. Click on the ADD TIER-1 GATEWAY button

5. In the Name field enter: OC-T1

6. Select mgmt-edge-cluster-t0-gw01 for the Linked Tier-0 Gateway

7. Select mgmt-edge-cluster for the Edge Cluster

8. Expand Route Advertisement

9. Enable All Connected Segments & Service Ports by ensuring the toggle is green

Save T1 Configuration [232]

1. Scroll down and click on the SAVE button

2. Click on NO when asked if you want to continue configuring
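The equivalent NSX Policy API request for the gateway above is a PATCH to /policy/api/v1/infra/tier-1s/&lt;t1-id&gt;; the edge cluster association is configured separately on the Tier-1's locale-services child object. A hedged sketch of the request body (the Tier-0 ID in the path is an assumption based on the lab's display name):

```python
def tier1_body(tier0_path: str) -> dict:
    """Request body for creating a Tier-1 gateway via the Policy API."""
    return {
        "display_name": "OC-T1",
        "tier0_path": tier0_path,
        # Corresponds to the "All Connected Segments & Service Ports" toggle.
        "route_advertisement_types": ["TIER1_CONNECTED"],
    }

# Assumed Policy path for the lab's Tier-0 gateway.
body = tier1_body("/infra/tier-0s/mgmt-edge-cluster-t0-gw01")
```

Enabling TIER1_CONNECTED route advertisement is what lets the Tier-0 learn the 10.1.1.0/27 and 10.1.1.32/27 subnets once the segments are attached.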


Connect OC-DB-Segment to OC-T1 [233]

1. Click on Segments in the left navigation panel

2. Click the 3 dots to the left of OC-DB-Segment

3. Select Edit


Choose Gateway [234]

1. Under Connected Gateway, scroll to find OC-T1 and select it

Save Segment Configuration [235]

1. Scroll down and click on the SAVE button

2. Click on the CLOSE EDITING button
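Via the API, attaching a segment to the gateway amounts to patching the segment's connectivity_path to point at the Tier-1; a minimal sketch (the Tier-1 path assumes the gateway was created with the ID OC-T1):

```python
def attach_segment_body(tier1_path: str) -> dict:
    """Partial segment body that connects it to a Tier-1 gateway."""
    return {"connectivity_path": tier1_path}

body = attach_segment_body("/infra/tier-1s/OC-T1")
print(body["connectivity_path"])  # /infra/tier-1s/OC-T1
```

Because Policy API PATCH requests are partial updates, only this one field needs to be sent; the subnets and transport zone set earlier remain unchanged.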


Connect OC-Web-Segment to OC-T1 [236]

1. Click the 3 dots to the left of OC-Web-Segment then select Edit

2. Under Connected Gateway, scroll down and select OC-T1

Save Segment Configuration [237]

1. Scroll down and click on the SAVE button

2. Click on the CLOSE EDITING button


Attach OC-MySQL to the OC-DB-Segment [238]


1. Click on the vCenter Server tab in the Chrome browser window

2. From the vCenter Server, click on the Hosts and Clusters view

3. Find the OC-MySQL virtual machine in the left-side scroll list and right-click it

4. Select Edit Settings

Browse Network Segments [239]


1. Click the dropdown next to Network Adapter 1

2. Select Browse...

Select Network [240]

1. Click on OC-DB-Segment

2. Click on the OK button


Save Virtual Machine Settings [241]

1. Click on the OK button


Review the Network Topology [242]

1. Click on the NSX Manager tab in the Chrome browser window

2. From the NSX Manager interface click the Networking tab

3. Click on Network Topology in the left navigation panel

4. Review your newly created Tier-1 Router and see the segments that are connected


Test Web to SQL communications [243]


1. Click on the vCenter Server tab in the Chrome browser window

2. From the vCenter Server, click on the Hosts and Clusters view

3. Click on the OC-MySQL virtual machine

4. Click on the LAUNCH WEB CONSOLE button


Test Basic Ping Connectivity [244]


You may need to press Enter in the window to bring up a login prompt

1. Login with:

◦Username: ocuser

◦Password: VMware123!

2. Ping OC-Apache-A by typing ping 10.1.1.18 at the command prompt

◦Press Ctrl+c to stop the ping

3. Successful ping replies mean that OC-MySQL can communicate with OC-Apache-A via the OC-T1 router.

View Layer 3 Communications in NSX Traceflow [245]


1. Click on the NSX Manager tab in the Chrome browser window

2.From the NSX Manager interface click on the Plan & Troubleshoot tab

3.Click Traffic Analysis in the left navigation panel

4.Click on the GET STARTED button in the Traceflow tile

Configure Traceflow [246]

1. Configure the Traceflow to go from OC-Apache-A to OC-MySQL

2.Click on TRACE button


Review Traceflow Path [247]

Your output should look like this. Notice the communications go through the OC-T1 router we created.


Review Traceflow details [248]

In the Observations panel, review the following

•We show 1 packet delivered

•The packet was injected at the network adapter for OC-Apache-A virtual machine

•It is then received at the distributed firewall at the VDS port for OC-Apache-A

•With no rule blocking, the packet is then forwarded on from the sending VDS port

•The packet then hits the OC-T1 router and gets forwarded to the OC-DB-Segment

•Since OC-Apache-A and OC-MySQL are running on different ESXi hosts, you notice the physical hop between TEPs

•The packet is then received on the distributed firewall at the receiving VDS port for OC-MySQL

•With no rule blocking forwarding, the packet is then forwarded to the destination; the last step shows the packet delivered to the network adapter for the OC-MySQL VM


Test end-to-end communications [249]

1. Click on the PuTTY icon in the quick launch bar

2.Select OC-MySQL from the saved sessions

3.Click on the Load button

4.Click on the Open button


Accept SSH thumbprint [250]

1. Click on Yes to accept the ssh fingerprint if prompted


Log into OC-MySQL [251]

Login with the following information:

•Username: ocuser

•Password: VMware123!

You should now be logged in and at a command prompt


Network Topology Overview [252]

By successfully connecting PuTTY from the lab console to OC-MySQL we have tested the entire SDN connection. In this lab, the NSX
Edge Cluster connects via BGP to the pod router where our lab console is connected. SSH traffic flows from our Windows console to
the pod router, over BGP links to the Tier-0 router, to the OC-T1 router, and finally to the OC-MySQL VM on the OC-DB-Segment, and
back.

Summary [253]

In this section, we showed how packets travel between the VDS ports for 2 virtual machines running on different ESXi hosts across two segments connected by a Tier-1 router. The important distinction is that this router functionality is distributed across all hosts rather than residing in a physical device cabled somewhere else in the data center.


Testing the Opencart Application [254]

This section will validate that our web server VMs and database VM are working correctly before moving on to load balancing and
security modules.

Restart the Web Servers [255]

1. Click on the PuTTY icon in the quick launch bar

2.Click on OC-Apache-A in Saved Sessions

3.Click on the Load button

4.Click on the Open button


Accept SSH thumbprint [256]

1. Click Yes to accept SSH fingerprint if prompted


Log into OC-Apache-A [257]

1. Login with

◦Username: ocuser

◦Password: VMware123!

2.Run the command sudo systemctl restart apache2

Repeat the above steps for OC-Apache-B.


Test Opencart app [258]

1. Open a new tab in the Chrome browser

2.Connect to http://oc-apache-a.vcf.sddc.lab (10.1.1.18) using the bookmark in the bookmark bar

3. You should see the OC-Apache-A web page (the web servers for this lab module were modified to show the name of the host you are connecting to, for clarity)


Test alternate TCP Port [259]

1. Connect to the alternate port on http://oc-apache-a.vcf.sddc.lab:8080 (10.1.1.18:8080).

2. You should see identical output on port 8080; this port will be used later in the security lab.

Repeat the above steps for http://oc-apache-b.vcf.sddc.lab and http://oc-apache-b.vcf.sddc.lab:8080

Summary [260]

Congratulations! In just the time it took to get this far in the lab, you have deployed 2 brand new overlay network segments and a software-defined router, and connected them all so you can access applications from outside the network.

Module 9 - Conclusion [261]

You have completed Module 1 and should now have a good understanding of how to create NSX Segments and configure Distributed Routing. You should also understand how to test basic connectivity and how to use the Traceflow tool for further troubleshooting. Please continue to Module 2 - "Changing the Security Game with Microsegmentation"


Module 9 - Changing the Security Game – Distributed Firewall (45 Minutes) Intermediate

Module 10 - Overview [263]

Over the past 10+ years, traffic in the data center has changed. More and more traffic stays within the data center, moving between
distributed application components. This traffic, known as “East-West”, is difficult to secure using traditional perimeter firewalls,
which were predominantly designed for traditional “North-South” traffic.

Micro-segmentation enables administrators to increase the agility and efficiency of the data center while maintaining an acceptable
security posture. Micro-segmentation decreases the level of risk and increases the security posture of the modern data center.



Without Micro-segmentation [264]

Micro-segmentation with NSX in VCF is applied at the vNIC to VDS interface. Packets are inspected as they enter and leave each virtual machine. Micro-segmentation is effectively a centrally managed, distributed packet filtering solution that acts at every machine.


Without Micro-segmentation [265]

Tagging VMs and Grouping Workloads based on Tags [266]

This section will explore the use of tagging to create groups of VMs to apply specific distributed firewall rules to. In small environments,
creating groups based on VM names may suffice. However, as your environment grows, tagging may be a better alternative.

Terminology & definitions:

•Tags: A virtual machine is not directly managed by NSX; however, NSX allows the attachment of tags to a virtual machine. This tagging enables tag-based grouping of objects (e.g., you can apply a Tag called “AppServer” to all application servers).

•Security Groups: Security Groups enable you to assign security policies, such as distributed firewall rules, to a group of objects, such as virtual machines. In addition to Tags, you can also create groups based on VM attributes such as VM Name, OS, IP, Ports, etc.

•Security Policies: Each firewall rule contains policies that act as instructions that determine whether a packet should be allowed or blocked, which protocols it is allowed to use, which ports it is allowed to use, etc. Policies can be either stateful or stateless.


Tagging in NSX is distinct from tagging in vCenter Server, and at this time vCenter Server tags cannot be used to create grouping in
NSX. In a larger, more automated environment customers would use a solution such as Aria Automation to deploy virtual machines and
containers with security tagging set at the time of creation.

Given that the demo Opencart application only has two webservers and one database server, we’re going to create two tags as
criteria for two groups. This might seem somewhat redundant, creating one tag per group, but it’s essential to remember:

•This is a small sample 2-tier application. For applications leveraging micro-services, you'll be able to group more than one machine under one tag, and better leverage the security groups

•The advantage of using tags/groups is also an operational one. Once you create your infrastructure around Security Groups that contain tags, the moment you tag a machine with a specific tag, it immediately inherits the specific Security Group, Firewall rules, and so on. This brings us closer to the cloud delivery model.

•The downside is that a certain level of caution is needed when working with tags and security groups: it is easy to add a machine to an existing security group and avoid the complication of setting up firewall rules, but it is just as easy to undermine good security by granting the new machine too many permissions through old tags/security group configurations.

To show the capability of tags we will set up OC-Apache-A with the appropriate tag and a security group. Then we'll have OC-Apache-B inherit the web tag and see how easy it is to apply all the appropriate rules to “a new machine”. The VM → Tag → Group mapping is as follows:
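That VM → Tag → Group mapping can be sketched in a few lines of Python to make the inheritance behavior concrete (the names mirror this lab's tags and groups; this is an illustration of the model, not NSX code):

```python
# Tag assignments: the only per-VM operation an operator performs.
vm_tags = {
    "OC-Apache-A": {"OC-Web-Tag"},
    "OC-Apache-B": set(),          # tagged later in this lab
    "OC-MySQL":    {"OC-DB-Tag"},
}

# Group membership criteria: "VM whose tag equals X".
group_criteria = {
    "OC-Web-Group": "OC-Web-Tag",
    "OC-DB-Group":  "OC-DB-Tag",
}

def members(group: str) -> set[str]:
    """Resolve a group to the VMs currently carrying its tag."""
    tag = group_criteria[group]
    return {vm for vm, tags in vm_tags.items() if tag in tags}

print(members("OC-Web-Group"))   # {'OC-Apache-A'}

# The moment OC-Apache-B is tagged, it joins the group (and its rules):
vm_tags["OC-Apache-B"].add("OC-Web-Tag")
print(members("OC-Web-Group"))   # now also contains OC-Apache-B
```

Groups are evaluated dynamically from criteria, which is exactly why tagging OC-Apache-B later in this module will instantly pull it under the web group's firewall rules.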


Initial Console Check [267]

1. Please ensure that the Lab Status is green and says “Ready”.

2.After you have verified that the lab is ready, please launch Google Chrome using the shortcut on the desktop.


Log in to NSX Manager [268]

1. Once the browser has launched you will see two tabs open by default. The first tab is the SDDC Manager Login, the second is
the vCenter Login.

2. Open a new tab and verify the page URL to ensure you have the correct user interface. The NSX Manager login URL should read https://mgmt-nsx.vcf.sddc.lab/login.jsp

3.In the User name field enter: admin

4.In the Password box field enter: VMware123!VMware123!

5.Click the LOG IN button


Creating Tags [269]

1. Click on Inventory

2.Click on Tags

3.Click on the ADD TAG button

Name OC-Web-Tag [270]


1. In the name field enter: OC-Web-Tag

Note: Don’t hit save yet. As you can see, the note says the tag must be assigned to at least one object first

2.Click on the Set Virtual Machines hyperlink

Add Tag to first VM [271]

1. Select the OC-Apache-A virtual machine (you may need to scroll down)

Note: For now, do not select OC-Apache-B as it will be added later

2.Click on the APPLY button


Save your Tag [272]

1. The “Assigned To” value has incremented to 1

2. Click on the SAVE button

Add OC-DB-Tag [273]

1. Click on the ADD TAG button

2.In the name field enter: OC-DB-Tag

3.Click on the Select Virtual Machines hyperlink


Add Tag to Database VM [274]

1. Select the OC-MySQL virtual machine (you may need to scroll down)

2.Click on the APPLY button


Save your Tag [275]

1. Click on SAVE button

Verify newly created Tags [276]

1. Click on Inventory

2.Click on Tags

3.Click in the field: Filter by Name, Path, and more


Filter Search [277]

1. Select the Tag option

2. In the search box enter: OC

3. Click on OK


Filter Results [278]

1. Confirm you see OC-DB-Tag and OC-Web-Tag.

Verify the virtual machines are mapped to tags. [279]


1. Click Virtual Machines

2.Click in the Filter section

Filter Criteria [280]

1. Scroll down in Basic Detail

2.Select the Tag option


3.Select the 2 tags we created earlier

4.Click on the APPLY button


Verify Tags [281]

1. Verify OC-Apache-A and OC-MySQL are present.

Create Groups [282]

Group mapping is as follows:


Create OC-Web-Group [283]

1. Click on Groups in the left menu

2.Click on the ADD GROUP button

3.In the Name field enter: OC-Web-Group

4. Click on “Set” to add group members. In this example, we will use the tags we just created to populate the group


Add Member Criteria [284]

1. Click on + ADD CRITERION button

2. In the criteria fields select: Virtual Machine, Tag, Equals, and “OC-Web-Tag”

3.Click on the APPLY button


Save Group [285]

1. Click on SAVE button

Create OC-DB-Group [286]

1. Click on the + ADD GROUP button

2.In the Name field enter: OC-DB-Group

3. Click on “Set” to add group members. In this example, we will use the tags we just created to populate the group


Add Member Criteria [287]

1. Click on the ADD CRITERION button

2. In the criteria fields select: Virtual Machine, Tag, Equals, and “OC-DB-Tag”

3.Click on the APPLY button


Save the Group [288]

1. Click on SAVE button

Verify Groups [289]

1. Click in the Filter by Name, Path, and More field


Filter Criteria [290]

1. Select Name in the Basic Detail column

2. Type OC in the search field to filter for our group names

3. Select the OC-Web-Group and OC-DB-Group groups

4. Click on the APPLY button

View Group Members [291]

1. Click the View Members hyperlink for each group

2.Review Group Members

NOTE: Each group should have a single VM as a member (we can ignore IP addresses, Ports, and VIF for now).


Summary [292]

At this point, we have implemented the tag and group mappings planned earlier in this section.

This section showed how to create tags and groups in NSX. This capability allows the creation and management of a scalable set of distributed firewall rules.
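For reference, the same tag-based group could also be created through the NSX Policy API rather than the UI. The sketch below only builds the JSON body; the endpoint path (`PATCH /policy/api/v1/infra/domains/default/groups/<id>`) and field names follow NSX-T Policy API conventions but should be verified against the API reference for your NSX version before use:

```python
import json

def tag_group_body(display_name: str, tag: str) -> dict:
    """Body for an NSX Policy Group whose membership criterion is a VM tag."""
    return {
        "display_name": display_name,
        "expression": [
            {
                "resource_type": "Condition",
                "member_type": "VirtualMachine",
                "key": "Tag",
                "operator": "EQUALS",
                # NSX stores tags as "scope|tag"; with no scope the pipe leads.
                "value": "|" + tag,
            }
        ],
    }

body = tag_group_body("OC-Web-Group", "OC-Web-Tag")
print(json.dumps(body, indent=2))
```

Because the group is defined by a criterion rather than a static member list, the API object captures the same "tag it and it joins" behavior demonstrated in the UI steps above.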

Applying Distributed Firewall Rules based on Tagging on a Segment [293]

This section will show how to configure the distributed firewall to limit access in our OpenCart application. For this lab, we will create the following rules.

Keep in mind that this all happens at the distributed firewall level, where firewall rules are implemented at the VM switch port versus needing the services of a routed (perimeter) firewall to implement. Since we have created groups in the previous section, we can now create access rules based on these groups.
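Before walking through the UI, it may help to see the shape of the finished policy. The sketch below models the Opencart policy and a representative subset of the rules built in this section as a Policy API-style body; the group paths, service paths, and field names are assumptions patterned on the NSX-T Policy API, not output captured from this lab:

```python
def rule(name, sources, destinations, services, action="ALLOW"):
    """One distributed-firewall rule entry, Policy API style."""
    return {
        "display_name": name,
        "source_groups": sources,
        "destination_groups": destinations,
        "services": services,
        "action": action,
    }

WEB = "/infra/domains/default/groups/OC-Web-Group"
DB = "/infra/domains/default/groups/OC-DB-Group"
ANY = "ANY"

opencart_policy = {
    "display_name": "Opencart",
    "scope": [WEB, DB],   # apply only to our two groups, not the whole DFW
    "rules": [
        rule("inbound-web-80", [ANY], [WEB], ["/infra/services/HTTP"]),
        rule("Web-DB", [WEB], [DB], ["/infra/services/MySQL"]),
        rule("ssh-admin", ["10.0.0.0/24"], [WEB, DB], ["/infra/services/SSH"]),
        rule("Deny-All-Inbound", [ANY], [WEB, DB], [ANY], action="REJECT"),
    ],
}

# Rule order matters: Deny-All-Inbound sits last so the allows above match first.
assert opencart_policy["rules"][-1]["display_name"] == "Deny-All-Inbound"
```

Notice that the policy's `scope` restricts enforcement to the two Opencart groups, mirroring the "Applied To" change made in the UI steps that follow.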


Create a New Policy [294]

1. Ensure you are on the NSX Manager tab in the Chrome Browser

2.Click Security

3.Click Distributed Firewall

4.Click on + ADD POLICY

5.Double click New policy


Firewall Rule Name [295]

1. In the Policy Name field enter: Opencart

2. Hover over DFW, then click the Pencil icon

NOTE: The default scope is the entire distributed firewall; however, we want this policy to apply to the groups we created in the previous labs.


Set Policy Scope [296]

1. Click the Groups radio button

2.Search for OC

3.Select both OC-DB-Group and OC-Web-Group

4.Click on the APPLY button


Add new inbound-web-80 rule [297]

1. Click on the three vertical dots on the left of the Opencart policy

2.Select the Add Rule option


Name inbound-web-80 rule [298]

1. Double Click on New Rule and enter the name inbound-web-80

2. Hover over Destinations, then click the pencil icon. This brings up the Set Destination pop-up


Set inbound-web-80 destination [299]

1. Type OC in the search box

2.Click on the OC-Web-Group checkbox

3.Click on the APPLY button


Set inbound services [300]

1. Hover over Services, then click the pencil icon

Set inbound-web-80 services [301]


1. Type HTTP in the search box

2.Select the HTTP option

3.Click on the APPLY button

The result should be the following:

Review the inbound-web-80 rule [302]

Add inbound-web-8080 rule (clone inbound-web-80) [303]

1. Click on the three vertical dots on the left of the inbound-web-80 rule

2.Select the Clone Rule option


Change rule name [304]

1. Click on Copy of inbound-web-80 and change the name to inbound-web-8080

2. Hover over Services, then click the pencil icon


Set Services [305]

1. Search for HTTP

2.Make sure it is unchecked


Add Custom Service [306]

1. Click on Raw Port Protocols

2.Click the ADD SERVICE ENTRY button

3. Set Service Type as: TCP

4. Set the Destination Ports to: 8080

5.Click on the APPLY button


Review Rule Results [307]

Test inbound-web-XX rules [308]

Notice at this point we have two rules in place that are defaulted to Allow, and we have not yet published the rule changes. Leave this
as is for the moment. Next, we will test that both ports are currently active on our web server.

1. Open a tab on the browser and

2.Connect to OC-Apache-A using the bookmark in the bookmark bar


Test Port 8080 [309]

1. Open a second tab

2.Connect to OC-Apache-A on port 8080

Change Rule to Reject [310]

1. Return to the NSX Manager tab. Click the arrow next to Allow on inbound-web-8080

2.Select the Reject option


Publish New Rules [311]

1. Click on the PUBLISH button to make the rules active

NOTE: The Publish button is grayed out, showing there are no uncommitted changes. The green Success indicator is set at the policy
level, and our two rules now have ID numbers showing they have been activated.

Test connectivity on port 80 [312]


1. Go to the OC-Apache-A browser tab and refresh the page

2.The web page should load normally

Test connectivity on port 8080 [313]

1. Go to the OC-Apache-A port 8080 browser tab and refresh the page

2.The web page should fail to load


Extend security policies to new VMs based on tags [314]

1. Test operations of the OC-Apache-B webserver on port 80 using the bookmark in the bookmark bar

2.The page should load successfully

Test OC-Apache-B port 8080 [315]


1. Test operations of the OC-Apache-B webserver on port 8080

2.The page should load successfully

Observe that the Reject rule on port 8080 does not extend to OC-Apache-B.

Add tag to OC-Apache-B [316]

1. Return to the NSX manager browser tab

2.Click on Inventory

3.Click on Tags

4. Click on Filter, select Tag

5.In the Search field enter: OC-Web

6.Click on the OK button


Edit OC-Web-Tag [317]

1. Click the 3 dots next to OC-Web-Tag, then click Edit

Click on the Assigned To number [318]

2.Click on the number 1 under Assigned To


Add OC-Apache-B [319]

1. In the Filter field enter: OC-Apache

2.Select the OC-Apache-B VM

3.Click on the APPLY button


Save Changes [320]

1. Click SAVE


Test OC-Apache-B port 8080 Connectivity [321]

1. Return to the browser tab for OC-Apache-B web server on port 8080 and refresh browser tab

2. As soon as the tag was applied to the OC-Apache-B VM, it immediately became part of the Opencart Policy, because it became a member of the web group that the rules apply to.

Implement the Web-DB rule [322]

This step will allow communications from the Apache web servers to the MySQL server.


1. Return to the NSX Manager tab

2.Click on the Security tab

3.Click on Distributed Firewall in the left menu

4.Click on the 3 dots next to Opencart

5.Select the Add Rule option


Enter Rule Criteria [323]

1. In the name field enter: Web-DB

2.Set the Sources to OC-Web-Group

3.Set the Destinations to OC-DB-Group

4.Set the Services to MySQL

5.Click on the PUBLISH button

Validate Connectivity [324]


1. Open a new browser tab

2.Browse to http://oc-apache-a.vcf.sddc.lab

3.The page should load normally.

Update Rule to Reject [325]

1. Return to the NSX Manager browser tab

2.Set the Web-DB rule to Reject

3.Click on the PUBLISH button

NOTE: This will allow us to see the impact of the firewall blocking access from Apache to MySQL which will be useful later in the lab.


Test Update Rule [326]

1. Test access to OC-Apache-A by refreshing the browser tab

2.The web page load should fail.

Reset Rule to Allow [327]

1. Reset the Web-DB rule to Allow and Publish before moving on to the next step

Implement Deny All Inbound rule [328]

This step will implement a Deny-All-Inbound rule, which will deny all inbound traffic we have not explicitly allowed. It will also show the order of rule evaluation within a security policy.


1. Click on the Security tab

2.Click on Distributed Firewall in the left menu

3.Click on the 3 dots next to Opencart

4.Select the Add Rule option

Enter Rule Criteria [329]

1. In the name field enter: Deny-All-Inbound

2.Set the Destinations to groups OC-DB-Group and OC-Web-Group

3. Set the Action to Reject

4. Click on the PUBLISH button


Test Deny-All-Inbound Rule [330]

1. Open a new browser tab using the bookmark

2. Browse to http://oc-apache-a.vcf.sddc.lab

3.The web page load should fail. This is because the Deny-All rule is evaluated before our Allow rules.
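The behavior you just observed is plain first-match, top-down evaluation. A minimal sketch of that logic (hypothetical rule tuples that mirror this policy) shows why the rule order matters:

```python
def evaluate(rules, packet):
    """Return (rule_name, action) of the first rule matching (dest_group, port).

    rules:  list of (name, dest_group, port_or_None, action), top to bottom.
    packet: (dest_group, port). A rule port of None matches any port.
    """
    dest, port = packet
    for name, r_dest, r_port, action in rules:
        if r_dest in (dest, "ANY") and r_port in (port, None):
            return name, action
    return "default", "ALLOW"   # the lab's default DFW rule is Allow

shadowed = [
    ("Deny-All-Inbound", "ANY", None, "REJECT"),  # placed first: shadows all
    ("inbound-web-80", "OC-Web-Group", 80, "ALLOW"),
]
print(evaluate(shadowed, ("OC-Web-Group", 80)))  # ('Deny-All-Inbound', 'REJECT')

fixed = [shadowed[1], shadowed[0]]               # allow first, deny last
print(evaluate(fixed, ("OC-Web-Group", 80)))     # ('inbound-web-80', 'ALLOW')
print(evaluate(fixed, ("OC-Web-Group", 8080)))   # ('Deny-All-Inbound', 'REJECT')
```

This is exactly the fix performed in the next step: dragging Deny-All-Inbound below the allow rules restores access on port 80 while still rejecting everything not explicitly permitted.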


Update Rule Order [331]

1. Return to the NSX browser tab

2. Move the Deny-All-Inbound rule down by left-clicking and holding anywhere on the Deny-All-Inbound rule line, then dragging the rule below our inbound-web-80 rule

3.Click on the PUBLISH button


Recheck Connectivity [332]

1. Return to the OC-Apache-A browser tab and refresh

2.You should get a normal web page load.


Remove 8080 Rule [333]

1. Return to the NSX browser tab

2.Check the box to the left of the inbound-web-8080 rule

3.Click on the DELETE button

4.Click on the PUBLISH button

NOTE: We have a hard Deny-All rule in our policy. Unless port 8080 is explicitly allowed, it will be blocked, rendering this rule no
longer needed.


Validate Policy [334]

1. Open a new web browser tab

2.Browse to http://oc-apache-a.vcf.sddc.lab:8080

3.The page should fail to load properly.

Implement the SSH rule

This step will implement a rule to allow SSH connections to our Apache and MySQL VMs, but only from our inside admin network (10.0.0.0/24).
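The reason the admin network will be allowed while other sources are not comes down to a source-IP subnet match. Python's standard ipaddress module demonstrates the same check the rule performs (lab addresses assumed: the admin network is 10.0.0.0/24, and OC-Apache-A sits at 10.1.1.18 on the web segment):

```python
import ipaddress

ADMIN_NET = ipaddress.ip_network("10.0.0.0/24")

def ssh_allowed(source_ip: str) -> bool:
    """Mimic the ssh-admin rule: permit SSH only from the admin network."""
    return ipaddress.ip_address(source_ip) in ADMIN_NET

print(ssh_allowed("10.0.0.47"))   # True  - a host on the admin network
print(ssh_allowed("10.1.1.18"))   # False - OC-Apache-A on the web segment
```

This is why, later in this step, the console's PuTTY session succeeds while an SSH attempt from OC-Apache-A to OC-Apache-B is refused.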


1. Attempt to SSH to OC-Apache-A. Click on PuTTY and connect to OC-Apache-A

2.Your connection should time out


Create SSH Rule [335]


1. Return to the NSX browser tab

2.Click on the 3 dots to the left of the Opencart

3.Select the Add Rule option

Name the Rule [336]

1. Name the rule ssh-admin

2.Click on the pencil for Sources


Set Rule Sources [337]

1. Click on IP Addresses

2.Enter the IP: 10.0.0.0/24

3.Click on the APPLY button


Set Rule Services [338]

1. Set Services to SSH

2.Click on the PUBLISH button


Connect to OC-Apache-A with PuTTY [339]


1. Open PuTTY using the quick launch

2.Attempt to SSH to OC-Apache-A

3.Login with

◦Username: ocuser

◦Password: VMware123!

Your connection should succeed.

Attempt to SSH from OC-Apache-A to OC-Apache-B [340]

1. Type ssh 10.1.1.19

2.Your connection should be refused

NOTE: Observe that we have blocked SSH between the web servers but allow it from our admin network to the web servers.

Implement the ICMP-Admin rule [341]

This section will implement a rule to allow ICMP (ping) to our Apache and MySQL VMs, but only from inside our admin network (10.0.0.0/24), since ping is often used to probe host accessibility in many security threat situations.


Open a command prompt on the Windows desktop

1. Click the Windows icon

2.Click on Windows System

3.Click on Command Prompt


Ping OC-Apache-A [342]

1. Attempt to ping OC-Apache-A. Your connection should time out


Add ICMP Rule [343]


1. On the NSX Manager browser tab

2. Click on the Security tab

3. Click on Distributed Firewall in the left menu

4.Click on the 3 dots next to Opencart

5.Select the Add Rule option

Set the Rule Criteria [344]

1. In the Name enter: ICMP-Admin

2.Set the Sources to 10.0.0.0/24

3.Set the services to ICMP-ALL

4.Click on the PUBLISH button


Test Ping to OC-Apache-A [345]

1. Return to your Windows command prompt

2.Attempt to ping OC-Apache-A

3.Your connection should succeed


Use NSX Traceflow [346]

1. Return to the NSX Manager browser tab

2.Click the Plan & Troubleshoot tab

3. Click Traffic Analysis in the left navigation panel

4.Click on the GET STARTED button in the Traceflow tile


Configure Traceflow Settings [347]

1. Configure Traceflow from OC-Apache-A to OC-MySQL

2. Click on the TRACE button


Review Traceflow Path [348]

1. Traceflow should show the ICMP packet dropped at the first firewall point (OC-Apache-A) before being placed on the segment.

Review Traceflow Detail [349]

1. Review the Observations panel and notice the packet was dropped at OC-Apache-A between the VM NIC and OC-Web-Segment.


Edit Traceflow Settings [350]

1. On the Traceflow screen, click on the EDIT button to reconfigure the trace

2. Click on the PROCEED button on the warning pop-up


Change Traceflow Protocol [351]

1. Change Protocol Type from ICMP to TCP

2.In the Destination Port enter: 3306

3.Click on the TRACE button


Review Traceflow Path [352]

Your Traceflow should succeed.


Review Traceflow Detail [353]

In the Observations panel, review the following

•We show 1 packet delivered

•The packet was injected at the network adapter for OC-Apache-A virtual machine

•It is then received at the distributed firewall at the VDS port for OC-Apache-A

•With no rule blocking, the packet is then forwarded on from the sending VDS port

•The packet then hits the OC-T1 router and gets forwarded to the OC-DB-Segment

•Since OC-Apache-A and OC-MySQL are running on different ESXi hosts, you notice the physical hop between TEPs

•The packet is then received on the distributed firewall at the receiving VDS port for OC-MySQL

•With no rule blocking forwarding, the packet is then forwarded to the destination; the last step shows the packet delivered to the network adapter for the OC-MySQL VM

Notice the flexibility of Traceflow to allow us to troubleshoot our distributed firewall and distributed routing using appropriate
communications protocols.
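Traceflow can also be driven through the NSX API, which is handy for scripted regression checks after firewall changes. The sketch below only constructs an illustrative request body; the endpoint, the VIF (logical port) identifier, and the exact field names vary by NSX version and should be confirmed in the API reference, and the destination IP here is a placeholder, not OC-MySQL's real address:

```python
from typing import Optional

def traceflow_body(src_port_id: str, dst_ip: str, protocol: str = "ICMP",
                   dst_port: Optional[int] = None) -> dict:
    """Illustrative Traceflow request: inject a synthetic packet at a source
    VIF addressed to dst_ip, as either ICMP or TCP to a given port."""
    packet: dict = {"dst_ip": dst_ip}
    if protocol == "TCP":
        packet["tcp"] = {"dst_port": dst_port}   # e.g. 3306 for MySQL
    else:
        packet["icmp"] = {}
    return {"lport_id": src_port_id, "packet": packet}

# Placeholder identifiers - substitute your lab's VIF ID and OC-MySQL's IP.
icmp_trace = traceflow_body("oc-apache-a-vif", "198.51.100.10")
tcp_trace = traceflow_body("oc-apache-a-vif", "198.51.100.10", "TCP", 3306)
```

Switching the payload from ICMP to TCP/3306 reproduces exactly the edit made in the UI above: the ICMP trace is dropped by the DFW while the MySQL-port trace is delivered.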

Module 10 - Conclusion [354]

This module showed the power of the distributed firewall capability in NSX. Using tagging and grouping, we were able to create a
scalable set of rules for our OpenCart application that allows only the communications necessary for application operation, while blocking all
other traffic. This was all done directly at the vSphere VDS switch port level, rather than at a hardware appliance elsewhere in the data center.
Please continue to the next module, "Basic Load Balancing with NSX".


Module 11 - Basic Load Balancing with NSX (15 Minutes) Intermediate

Module 11 - Overview [356]

In this module, we will use NSX to create a basic load balancer for HTTP traffic to our OpenCart web servers.

The high-level steps include:

•Configuring the T1 router on an Edge Cluster

•Creating a Server Pool

•Creating the Load Balancer

•Creating Virtual Servers

Creating and Configuring the load balancer [357]

In this module, we will configure a Layer 7 load balancer on the NSX Tier-1 router created in Module 1.


Initial Console Check [358]

1. Please ensure that the Lab Status is green and says “Ready”.

2.After you have verified that the lab is ready, please launch Google Chrome using the shortcut on the desktop.


Log in to NSX Manager [359]

Once the browser has launched, you will see two tabs open by default. The first tab is the SDDC Manager login; the second is the
vCenter login.

1. Open a new tab and verify the page URL to ensure you have the correct user interface. The NSX Manager login URL should

read https://mgmt-nsx.vcf.sddc.lab/login.jsp

2.In the User name field enter: admin

3.In the Password box field enter: VMware123!VMware123!

4.Click the LOG IN button

Configure OC-T1 to run on an Edge Cluster [360]

To support stateful services, such as load balancing and the Layer 3-7 firewall, we need to configure OC-T1 as a Services Router (SR). This simply
means associating OC-T1 with our existing NSX Edge cluster in the management domain.


1. Click on the Networking tab

2.Click on Tier-1 Gateways in the left menu


3.Click the 3 dots next to OC-T1

4.Select Edit


Specify Edge Cluster [361]

1. Confirm the Edge Cluster is configured to the mgmt-edge-cluster

2.Click on the CLOSE EDITING button

Create Server Pool [362]

A server pool is a set of servers that can share the same content; in this example, it is our Apache web servers.


1. Click on Load Balancing in the left menu

2.Click on the Server Pools tab

3.Click on the ADD SERVER POOL button

4.In the name field enter: OC-LB-Pool

5.Click on the Select Members hyperlink


Add Member OC-Apache-A [363]

1. Click on the ADD MEMBER button

2.In the name field enter: OC-Apache-A

3.In the IP field enter: 10.1.1.18

4.In the port field enter: 80

5.Click on the SAVE button


Add Member OC-Apache-B [364]

1. Click on the ADD MEMBER button

2.In the name field enter: OC-Apache-B

3.In the IP field enter: 10.1.1.19

4.In the port field enter: 80

5.Click on the SAVE button


Apply Members [365]

1. Click on the APPLY button


Set Health Monitor [366]

1. Click the Set hyperlink next to Active Monitor


Choose Default HTTP Monitor [367]

1. Select the default-http-lb-monitor for Port 80

2.Click on the APPLY button


Save Pool Settings [368]

1. Click on the SAVE button

Create Load Balancer [369]


1. Click on the Load Balancers tab

2.Click on the ADD LOAD BALANCER button

3.In the name field enter: OC-LB

4.Under Attachment select OC-T1

5.Click on the SAVE button

6.When prompted Want to continue configuring this Load Balancer, select NO

Create Virtual Servers [370]

A virtual server is a combination of IP address, port, and protocol that acts as the front end for a server pool.


1. Click on the Virtual Servers tab

2.Click on the ADD VIRTUAL SERVER button

3.Select the L7 HTTP option


Enter Load Balancer Settings [371]

1. In the Name field enter: OC-VIP

2.In the IP Address field enter: 10.1.1.2

3.In the port field enter: 80

4.Under Load Balancer select OC-LB

5.Under Server Pool select OC-LB-Pool

6.Click on the SAVE button


Test Load Balancer [372]

1. Open a new Chrome browser tab

2.Browse to the OC-VIP using the bookmark in the bookmark bar

3.Observe the connected webserver


Test Load Balancer Functionality [373]

1. Refresh the browser for this tab.

2.You should see the opposite webserver
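The alternating pages you just observed can also be tallied programmatically. A minimal sketch, assuming each lab web server returns a page body that distinguishes it (the VIP address comes from the steps above):

```python
from collections import Counter
from urllib.request import urlopen

def distribution(url: str, attempts: int = 6) -> Counter:
    """Fetch url repeatedly and count the distinct response bodies seen.
    With round-robin balancing across two healthy pool members, both
    bodies should appear in the tally."""
    hits = Counter()
    for _ in range(attempts):
        with urlopen(url, timeout=5) as resp:
            hits[resp.read()[:64]] += 1  # leading bytes are enough to tell pages apart
    return hits

if __name__ == "__main__":
    # VIP configured earlier in this module
    print(distribution("http://10.1.1.2/"))
```

If one member is down (as in the health-monitor test later in this module), the tally collapses to a single body.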


SSH to OC-Apache-A [374]

1. Open a PuTTY session to OC-Apache-A

2.Login with

◦Username: ocuser

◦Password: VMware123!


Stop Apache Server [375]

1. Run the command sudo systemctl stop apache2

Test Health Monitor [376]

Wait approximately 30 seconds.

1. Refresh the browser using the VIP several times in a row

2.You should see the OC-Apache-B web page only

Note: The Active Monitor quickly detected the failure of OC-Apache-A and will no longer send requests to it.


Start Apache Server [377]

1. Return to the OC-Apache-A PuTTY session

2.Run the command sudo systemctl start apache2

Wait approximately 30 seconds.

Refresh the browser for the OC-VIP tab several times in a row. You should see both OC-Apache-A and OC-Apache-B web pages, as
the Active Monitor quickly detects the return of OC-Apache-A.

Summary [378]

In this section, we configured all the required components for a load balancer to function in NSX. You also had the opportunity to test its
functionality, including basic health monitoring.


Module 11 - Conclusion [379]

In this module, we explored how quickly a load balancer can be instantiated on the NSX Tier-1 router.

Module Key takeaways: [380]

•A load balancer is connected to a Tier-1 logical router. The load balancer hosts one or more virtual servers.

•A virtual server is an abstraction of an application service, represented by a unique combination of IP, port, and protocol. The

virtual server is associated with one or more server pools.

•A server pool consists of a group of servers. The server pools include individual server pool members.

•The NSX Data Center load balancer supports the following features:

◦Layer 4 - TCP and UDP

◦Layer 7 - HTTP and HTTPS with load balancer rules support

◦Health check monitors - an active monitor, which includes HTTP, HTTPS, TCP, UDP, and ICMP, and a passive monitor
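Everything this module configured through the UI can also be driven through the NSX Policy REST API. The sketch below only builds the request bodies for the lab's pool and virtual server; the resource paths and field names (lb-pools, lb-virtual-servers, active_monitor_paths, and so on) reflect our reading of the Policy API and should be verified against the API reference for your NSX release:

```python
import json

def lb_pool_payload(name, members, monitor="default-http-lb-monitor"):
    """Body for PUT /policy/api/v1/infra/lb-pools/<id> (NSX Policy API)."""
    return {
        "display_name": name,
        "members": [
            {"display_name": n, "ip_address": ip, "port": str(port)}
            for n, ip, port in members
        ],
        "active_monitor_paths": [f"/infra/lb-monitor-profiles/{monitor}"],
    }

def lb_virtual_server_payload(name, ip, port, pool_id, lb_id):
    """Body for PUT /policy/api/v1/infra/lb-virtual-servers/<id>."""
    return {
        "display_name": name,
        "ip_address": ip,
        "ports": [str(port)],
        "pool_path": f"/infra/lb-pools/{pool_id}",
        "lb_service_path": f"/infra/lb-services/{lb_id}",
    }

if __name__ == "__main__":
    # Values match the lab steps above.
    pool = lb_pool_payload("OC-LB-Pool",
                           [("OC-Apache-A", "10.1.1.18", 80),
                            ("OC-Apache-B", "10.1.1.19", 80)])
    vip = lb_virtual_server_payload("OC-VIP", "10.1.1.2", 80, "OC-LB-Pool", "OC-LB")
    print(json.dumps(pool, indent=2))
    print(json.dumps(vip, indent=2))
```

In a real call, each body would be sent as a PUT with basic authentication to the NSX Manager; only the payload construction is shown here.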


Module 11 - Migrating Workloads (45 mins) Intermediate

Module Overview [382]

This module provides an introduction to migrating application workloads from an existing vSphere 7.x environment to VMware Cloud
Foundation. For the purposes of this lab, imagine that we are working on a data center consolidation project. Our source site is in
Rhinelander, WI and our destination site is in Brussels, Belgium. We have been tasked with migrating the application VMs from our
source site to the destination site. Because we have VMware Cloud Foundation, we can use VMware HCX to migrate the application
VMs without downtime, and without changing their IPs. Within this lab, HCX has been installed and configured. We will review the
topology of both sites, then we will begin the process of migrating VMs and finish by migrating the gateway. At the end of the lesson,
we will review the installation and configuration of the HCX appliances to see how this all works behind the scenes.

This module consists of the following lessons:

1. Review sites, Create Network Extension

2.HCX Advanced Migrations

3.HCX Enterprise Migrations

4.HCX Software Defined Networking

5.Lab Topology Review

6.HCX Service Mesh Review

Hands-on Labs Interactive Simulation: Migrating to VCF with HCX [383]

This part of the lab is presented as a Hands-on Labs Interactive Simulation. This will allow you to experience steps which are too time-
consuming or resource intensive to do live in the lab environment. In this simulation, you can use the software interface as if you are
interacting with a live environment.

Click the button below to start the simulation!

[vlp:switch-console|HOL-2246-05-HCI_module13_part1|Start the Interactive Simulation]

You can hide the manual to use more of the screen for the simulation.


NOTE: When you have completed the simulation, click on the Manual tab to open it and continue with the lab.

[vlp:close-panel|manual|Close Instructions Panel]


Hands-on Labs Interactive Simulation: HCX installation, Activation, & Site Pairing [384]

This part of the lab is presented as a Hands-on Labs Interactive Simulation. This will allow you to experience steps which are too time-
consuming or resource intensive to do live in the lab environment. In this simulation, you can use the software interface as if you are
interacting with a live environment.

Click the button below to start the simulation!

[vlp:switch-console|HOL-2246-05-HCI_module13_part2|Start the Interactive Simulation]

You can hide the manual to use more of the screen for the simulation.


NOTE: When you have completed the simulation, click on the Manual tab to open it and continue with the lab.

[vlp:close-panel|manual|Close Instructions Panel]


Hands-on Labs Interactive Simulation: HCX Network Profile, Compute Profile, Service Mesh [385]

This part of the lab is presented as a Hands-on Labs Interactive Simulation. This will allow you to experience steps which are too time-
consuming or resource intensive to do live in the lab environment. In this simulation, you can use the software interface as if you are
interacting with a live environment.

Click the button below to start the simulation!

[vlp:switch-console|HOL-2246-05-HCI_module13_part3|Start the Interactive Simulation]

You can hide the manual to use more of the screen for the simulation.


NOTE: When you have completed the simulation, click on the Manual tab to open it and continue with the lab.

[vlp:close-panel|manual|Close Instructions Panel]


Hands-on Labs Interactive Simulation: Extend Network, Migrate VMs [386]

This part of the lab is presented as a Hands-on Labs Interactive Simulation. This will allow you to experience steps which are too time-
consuming or resource intensive to do live in the lab environment. In this simulation, you can use the software interface as if you are
interacting with a live environment.

Click the button below to start the simulation!

[vlp:switch-console|HOL-2246-05-HCI_module13_part4|Start the Interactive Simulation]

You can hide the manual to use more of the screen for the simulation.


NOTE: When you have completed the simulation, click on the Manual tab to open it and continue with the lab.

[vlp:close-panel|manual|Close Instructions Panel]


Conclusion [387]

In this lab, you have seen the powerful migration capabilities of VMware HCX. VMware HCX streamlines application migration,
workload rebalancing, and business continuity across data centers and clouds. We have demonstrated how HCX can move applications
seamlessly between environments at scale while avoiding the cost and complexity of refactoring applications. We dove into the details of
how to install, configure, and extend networks using HCX. When your business is ready to migrate applications between any
VMware-based clouds, you now know how easy this can be with VMware HCX. Only VMware gives you the freedom to run your
applications wherever you need them without downtime.


Module 12 - Deploying Applications to a Pre-Existing NSX Network (45 minutes) Intermediate

Module Overview [389]

It is anticipated that this module will take approximately 45 minutes to complete.

In this lab, we show how to use Aria Automation Assembler to deploy an OpenCart instance to pre-defined NSX networks configured
on the Cloud Foundation management domain.

This module consists of the following exercises:

•Creating NSX segments with DHCP services and connecting them to the OC-Auto-T1 Tier-1 router

•Creating an Assembler Network Profile for OC-DB-Auto-Seg

•Creating an Assembler Network Profile for OC-Web-Auto-Seg

•Reviewing an Assembler Cloud Template

•Deploying OpenCart from Cloud Template

•Reviewing a Provisioning Diagram

•Reviewing a Deployed Application

•Deleting a Deployed Application

Create an NSX Segment and DHCP Server [390]

In this exercise, we will create two new NSX segments to host the OpenCart web and database servers. Each segment will use a /24
subnet, reserving part of the address space for Aria Automation-deployed services (such as load balancers) and the remainder for DHCP
addressing of hosts.


Initial Console Check [391]

1. Please ensure that the Lab Status is green and says “Ready”.

2.After you have verified that the lab is ready, please launch Google Chrome using the shortcut on the desktop.


Log into the NSX Console [392]

1. Click on the Management NSX-T shortcut link

2.Log in to NSX Manager with

◦Username: admin

◦Password: VMware123!VMware123!

3.Click the LOG IN button


Explore Network Topology [393]


From the NSX Manager, start by reviewing the existing network topology.

1. Click the Networking tab

2.Click on Network Topology (left column)

Within the topology view, you should see the OC-Auto-T1 router with only one segment below it; you may need to zoom in to see the Tier-1
names. We are going to add more segments to this Tier-1 router for use by Aria Automation Assembler. The next steps will create the
following networks:

OC-DB-Auto-Seg

•10.1.3.0/24

•Gateway 10.1.3.1/24

•DHCP Server 10.1.3.254

•DHCP Range 10.1.3.100-10.1.3.253

•VRA Address space 10.1.3.2-10.1.3.99

OC-Web-Auto-Seg

•10.1.4.0/24

•Gateway 10.1.4.1/24

•DHCP Server 10.1.4.254

•DHCP Range 10.1.4.100-10.1.4.253

•VRA Address space 10.1.4.2-10.1.4.99
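The segments planned above can also be created through the NSX Policy REST API rather than the UI. The sketch below only builds the request body for one segment; the field names (connectivity_path, dhcp_config, SegmentDhcpV4Config) reflect our reading of the Policy API and should be checked against the reference for your NSX release:

```python
import json

def segment_payload(name, tier1, gateway_cidr, dhcp_server, dhcp_range, dns):
    """Body for PUT /policy/api/v1/infra/segments/<id> creating an overlay
    segment with a local (segment) DHCP server, mirroring the plan above.
    A transport_zone_path is also required in a real call; its value is
    environment-specific, so it is omitted here."""
    return {
        "display_name": name,
        "connectivity_path": f"/infra/tier-1s/{tier1}",
        "subnets": [{
            "gateway_address": gateway_cidr,
            "dhcp_ranges": [dhcp_range],
            "dhcp_config": {
                "resource_type": "SegmentDhcpV4Config",
                "server_address": dhcp_server,
                "dns_servers": [dns],
            },
        }],
    }

if __name__ == "__main__":
    # Values match the OC-DB-Auto-Seg plan above.
    print(json.dumps(segment_payload(
        "OC-DB-Auto-Seg", "OC-Auto-T1", "10.1.3.1/24",
        "10.1.3.254/24", "10.1.3.100-10.1.3.253", "10.0.0.2"), indent=2))
```

The same function with the 10.1.4.x values would produce the body for OC-Web-Auto-Seg.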

Add Network Segment [394]


From the NSX Manager, start by creating a new software-defined network and assigning the IP subnet 10.1.3.0/24.

1. Click on Segments (left column)

2.Click the ADD SEGMENT button

3.Fill in the Segment Name field with: OC-DB-Auto-Seg

4.For the Connected Gateway Tier 1 select: OC-Auto-T1 | Tier1

5.For the Transport Zone select: mgmt-domain-tz-overlay01 | Overlay

6.Fill in the Gateway CIDR IPv4 field with: 10.1.3.1/24

7. Click on SET DHCP CONFIG button

Configure the DHCP Server [395]


Enter the following values:

1. Set DHCP Type to Segment DHCP Server

2.Set DHCP Profile to the only option starting with DHCP.....

3.Fill in the DHCP Server Address field with: 10.1.3.254/24

4.Fill in the DHCP Ranges with: 10.1.3.100-10.1.3.253

5.Fill in the DNS Server field with: 10.0.0.2

6.Click on the APPLY button

Save the Segment Settings [396]

1. Scroll down and click on the SAVE button

2.Click on NO when asked if you want to continue editing this segment.


Review the new Segment [397]

Observe that a new segment with the name “OC-DB-Auto-Seg” has been created.

1. Click the View Topology Icon next to OC-DB-Auto-Seg


Topology View [398]

The topology view shows the new segment with its network path for routing out to the external network.

1. Click on the X to close the Topology View


Add Network Segment [399]

From the NSX Manager, start by creating a new software-defined network and assigning the IP subnet 10.1.4.0/24.

1. Click on Segments (left column)

2.Click the ADD SEGMENT button

3.Fill in the Segment Name field with: OC-Web-Auto-Seg

4.In the Connected Gateway Tier 1 field select: OC-Auto-T1 | Tier1

5.In the Transport Zone field select: mgmt-domain-tz-overlay01 | Overlay

6.Fill in the Gateway CIDR IPv4 field with: 10.1.4.1/24

7. Click on the SET DHCP CONFIG button


Configure DHCP Server [400]

Enter the following values:

1. Set DHCP Type to Segment DHCP Server

2.Set the DHCP Profile to the only option starting with DHCP.....

3.Fill in the DHCP Server Address field with: 10.1.4.254/24

4.Fill in the DHCP Ranges field with: 10.1.4.100-10.1.4.253

5.Fill in the DNS Server field with: 10.0.0.2

6.Click on the APPLY button


Save the Segment Settings [401]

1. Scroll down and click on the SAVE button

2.Click on NO when asked if you want to continue editing this segment.


Review Network Topology [402]

1. Click on Network Topology

2.Click the Zoom In button multiple times

View Segment Services [403]

Your topology should look like the following view.


1. If you hover your mouse over where it says '1 Service on OC-DB-Auto-Seg', you will see that it shows the DHCP service. This is
the DHCP server we configured in the previous steps.

Create the Assembler Network Profile [404]

In this exercise we will configure a new Network Profile in Aria Automation Assembler for the OC-DB-Auto-Seg segment and associated

DHCP Server that was created earlier.

For purposes of the lab, the prerequisite steps of adding a VMware Cloud Foundation cloud account to Aria Automation, and creating
the associated Project, Cloud Zones, and Image and Flavor mappings, have already been accomplished. Feel free to click through the
configuration and review these settings. However, be careful not to make any changes.


Browse to Aria Automation [405]

1. Click + in the Chrome browser to open a new tab

2.Click the Aria Suite bookmark folder

3.Click on Aria Automation

Note: you may be presented with a self-signed certificate warning and will need to accept it to proceed


Launch the login page [406]

1. Click the GO TO LOGIN PAGE button


Login Using Workspace One [407]

Note: Workspace ONE is used for authentication with Aria Automation.

Login with:

1. In the Username field enter: configadmin

2.In the Password field enter: VMware123!

3.Click the Sign in Button


Navigate to Assembler [408]

1. Click on the Assembler tile

Aria Automation Assembler allows cloud administrators to define application workloads, using cloud templates, which can be deployed
across different VMware Clouds.


Create a Network Profile [409]


Begin by creating a Network Profile for the OC-DB-Auto-Seg NSX Segment that was created in the previous exercises.

1. Click the Infrastructure tab

2.Click Network Profiles (left column)

3.Click + NEW NETWORK PROFILE

New Network Profile [410]


On the Summary page, enter the following values:

1. For the Account / Region, from the drop-down select: HOL-Site-1-Mgmt / mgmt-datacenter

2.In the Name field enter: OC-DB-Auto-Seg

3.In the Compatibility tags field enter: oc-fixed-network:oc-db

New Network Profile - Networks [411]

Next, we assign the network that will be used with this Network Profile.

1. Click the Networks tab

2.Click the + ADD NETWORK button


Add Network [412]

1. Select the OC-DB-Auto-Seg

Note: You may need to scroll down in the list

2.Click on the ADD button


Manage IP Ranges [413]

1. Click on the MANAGE IP RANGES button

Next, we assign a range of IP addresses that Assembler can use when deploying network services (such as a virtual server on the
load balancer).


Add New IP Range [414]

1. Click the + NEW IP RANGE button


New IP Range [415]

1. For the Source select: Internal

2.In the Name field enter: OC-DB-Auto-IP

3.In the Start IP Address field enter: 10.1.3.2

4.In the End IP Address field enter: 10.1.3.99

5.Click the ADD button


Complete IP Range Configuration [416]

Verify the creation of the IP range by making sure the network field is not blank.

1. Click CLOSE

Note: In the first section we created a DHCP server inside NSX and assigned it the IP address range 10.1.3.100 - 10.1.3.253. This range
represents the IPs that will be assigned to the VMs that get deployed on the OC-DB-Auto-Seg segment. Here we are assigning a
different IP range (10.1.3.2 - 10.1.3.99) to Aria Automation Assembler. This range represents the IPs that Assembler will assign to NSX
services that get created as part of the cloud template deployments. For example, IPs in this range will be assigned to any virtual
servers created on the NSX load balancer.
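The planning rule in this note — the DHCP range and the Assembler-managed range split the same /24 without overlapping — can be sanity-checked with a few lines of stdlib Python:

```python
from ipaddress import ip_address, ip_network

def ranges_disjoint(a_start, a_end, b_start, b_end):
    """True if the two inclusive IPv4 address ranges do not overlap."""
    a0, a1 = ip_address(a_start), ip_address(a_end)
    b0, b1 = ip_address(b_start), ip_address(b_end)
    return a1 < b0 or b1 < a0

def in_subnet(start, end, cidr):
    """True if both endpoints of the range fall inside the given subnet."""
    net = ip_network(cidr)
    return ip_address(start) in net and ip_address(end) in net

if __name__ == "__main__":
    # DHCP range vs. Assembler (VRA) range on OC-DB-Auto-Seg
    print(ranges_disjoint("10.1.3.100", "10.1.3.253", "10.1.3.2", "10.1.3.99"))  # True
    print(in_subnet("10.1.3.2", "10.1.3.99", "10.1.3.0/24"))                     # True
```

Overlapping ranges would lead to address conflicts between DHCP-assigned VMs and Assembler-assigned services, so a check like this is worth running whenever such a split is planned.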

Next, we identify the associated Edge Cluster for the network along with its Tier 0 gateway router.


Network Policies [417]

1. Click the Network Policies tab


2.Set Tier-0 logical router to: mgmt-edge-cluster-t0-gw01

3.Set Edge cluster to: mgmt-edge-cluster


Security Groups [418]

1. Click on the Security Groups tab

2.Click on the + ADD SECURITY GROUP button

Select the Security Group [419]


1. Select the OC-DB-Auto-Grp

2.Click on the ADD button

Complete Network Profile creation [420]

1. Click CREATE


Review Network Profile [421]

The Network Profile is created. With the network and related services created inside NSX, and the network profile defined in
Assembler, we are ready to deploy application workloads. To do this we will use a Cloud Template.


Create the OC-Web-Auto-Seg Network Profile [422]

Begin by creating a Network Profile for the OC-Web-Auto-Seg NSX Segment that was created in the previous exercises.

1. Click + NEW NETWORK PROFILE


New Network Profile [423]

1. For the Account / Region, from the drop down select: HOL-Site-1-Mgmt / mgmt-datacenter
2.In the Name field enter: OC-Web-Auto-Seg

3.In the Compatibility tags field enter: oc-fixed-network:oc-web


New Network Profile - Networks [424]

Next, we assign the network that will be used with this Network Profile.

1. Click the Networks tab

2.Click the + ADD NETWORK button

Add Network [425]


1. Select OC-Web-Auto-Seg

2.Click on the ADD button

Manage IP Ranges [426]

1. Click on the MANAGE IP RANGES button


Create New IP Range [427]

1. Click on the + NEW IP RANGE button


New IP Range [428]

1. For the Source select: Internal

2.In the Name field enter: OC-Web-Auto-IP

3.In the Start IP Address field enter: 10.1.4.2

4.In the End IP Address field enter: 10.1.4.99

5.Click on the ADD button


Complete IP Range Configuration [429]

Verify the creation of the IP range by making sure the network field is not blank.

1. Click on the CLOSE button

Note: In the first section we created a DHCP server inside NSX and assigned it the IP address range 10.1.4.100 - 10.1.4.253. This range
represents the IPs that will be assigned to the VMs that get deployed on the OC-Web-Auto-Seg segment. Here we are assigning a
different IP range (10.1.4.2 - 10.1.4.99) to Assembler. This range represents the IPs that Assembler will assign to NSX services that get
created as part of the cloud template deployments. For example, IPs in this range will be assigned to any virtual servers created on the
NSX load balancer.

Next, we identify the associated Edge Cluster for the network along with its Tier 0 gateway router.


Network Policies [430]

1. Click the Network Policies tab


2.Set Tier-0 logical router to: mgmt-edge-cluster-t0-gw01

3.Set Edge cluster to: mgmt-edge-cluster


Add a Load Balancer [431]

1. Click on Load Balancers tab

2.Click on the + ADD LOAD BALANCER button

Choose a Load Balancer [432]


1. Locate and select the OC-Auto-LB

Note: You may need to scroll down to find it

2.Click on the ADD button

Security Groups [433]

1. Click on the Security Groups tab

2.Click on the + ADD SECURITY GROUP button


Select the Security Group [434]

1. Select the OC-DB-Auto-Grp

2.Click on ADD button


Complete the Network Profile [435]

1. Click CREATE


Review Network Profile [436]

The Network Profile is created. With the network and related services created inside NSX, and the network profile defined in Assembler,
we are ready to deploy application workloads. To do this we will use a Cloud Template.

Deploy the Cloud Template [437]

In the previous exercises, we created new NSX segments with associated DHCP servers. We then added network profiles in Assembler for
these new networks. In this exercise, we will use a Cloud Template to deploy an application onto the networks.

Note: for purposes of the lab the cloud template has been pre-configured. Feel free to explore the template, however, be careful that
you don’t make any changes.


1. Click on the Design tab

2.Click on the OpenCart Fixed Network cloud template


OpenCart Fixed Network Template [438]

The OpenCart Fixed Network template comprises five components:

•Two network resources that connect deployed virtual machines to the correct networks

•A Cloud NSX Load Balancer which configures the virtual server for this instance of OpenCart on the existing OC-Auto-LB load
balancer specified as part of the template

•One or more Apache web servers (number of servers set when the user deploys the template)

•An instance of MySQL for this OpenCart application


Inspecting the Web Network [439]

1. Click on the OC-Web-Auto-Seg Network

2.This highlights the relevant part of the YAML file for this cloud template

Note the OC-Web-Auto-Seg resource is looking for an existing network with a capability tag of oc-fixed-network:oc-web.
These are known as “constraints”. Assembler needs to find a Network Profile with capabilities that meet these constraints
when deploying this template.
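The matching described in this note can be pictured as tag-set containment. This is a simplified illustration of the idea, not Aria Automation's actual placement logic (which also distinguishes hard and soft constraints):

```python
def profile_matches(constraints, capabilities):
    """True if every constraint tag on a cloud template resource
    (e.g. 'oc-fixed-network:oc-web') is present among a network
    profile's capability tags."""
    return set(constraints) <= set(capabilities)

if __name__ == "__main__":
    constraints = ["oc-fixed-network:oc-web"]
    web_profile = ["oc-fixed-network:oc-web"]   # the profile created earlier
    db_profile = ["oc-fixed-network:oc-db"]
    print(profile_matches(constraints, web_profile))  # True
    print(profile_matches(constraints, db_profile))   # False
```

This is why the capability tags entered on the two network profiles had to match the constraint tags in the template exactly; a typo in either place would make placement fail at deploy time.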


Inspecting the DB Network [440]

1. Click on the OC-DB-Auto-Seg Network

2.This highlights the relevant part of the YAML file for this cloud template. The DB NSX-Network has constraints of oc-fixed-network:oc-db


Inspecting the Load Balancer requirements [441]

1. Click on the OC-Auto-LB load balancer resource.

2.The load balancer resource will create virtual server resources on the OC-Web-Auto-Seg segment, with members of the
server pool (instances) based on the number of OC-Apache-Auto web servers this template deploys. The load balancer is
configured to listen on Port 80 (Protocol and Port), and talk to the backend Apache servers on Port 80 (InstanceProtocol and
InstancePort)


Inspect the OC-Apache-Auto resource [442]

1. Click on the OC-Apache-Auto resource

2.This resource creates an Apache server from a basic Ubuntu template using the extensive “Cloud Init” functionality built into

Assembler. Notice this resource uses both Flavor and Image mapping.

The remainder of the Apache resource definition will add needed Linux packages, configure users, and then configure the
Apache Webserver for our OpenCart application

Feel free to review the entire OC-Apache-Auto Cloud.Machine resource definition.


Inspect the MySQL Application [443]

1. Click on the OC-MySQL-Auto resource

2.This resource creates the MySQL database server from a basic Ubuntu template using the extensive “Cloud Init”

functionality built into Assembler. Notice this resource uses both Flavor and Image mapping.

For additional info on Cloud Init see https://cloudinit.readthedocs.io/en/latest/

For more information on Aria Automation Assembler see https://docs.vmware.com/en/VMware-Aria-Automation/index.html


Test the Template [444]

1. Click on the TEST button on the main design dashboard

Note: Leave the default Values of small Node Size and medium Front End Cluster Size
2.Click on the TEST button


Review the test results [445]

The test function will evaluate the blueprint against the infrastructure to verify that things are properly configured in the lab. Ensure the
test is successful. If the test fails, work with the lab moderator to resolve any problems. Typical problems at this point are related to the
network profile names and capabilities.

1. Click on the X to close the test results pop-up


Deploy the Cloud Template [446]

Deploy the Cloud Template:

1. Click on the DEPLOY button on the main design dashboard

2.In the Enter Deployment Name field enter: OpenCart-Fixed-Demo

3.Click on the NEXT button


Select Deployment Size [447]

1. In the Node Size field enter: small

2.In the Front End Cluster Size field enter: medium

3.Click on the DEPLOY button


Monitor the Deployment [448]

Aria Automation Assembler will deploy the template. This will take approximately 10 minutes in the lab.

You can monitor the progress from Assembler as well as by connecting to the vSphere Client to watch as the VMs are deployed.

Wait for the deployment to complete.

Note: If the deployment fails, attempt to restart it by navigating to the Deployments tab and, for the failed deployment, selecting "Update" from the three-dot menu. If the deployment still fails, contact the lab moderator for assistance.

1. Monitor the progress as it moves through the 14 tasks.

2.Click on the History tab to see more details about what is happening as it proceeds through deployment


Provisioning Diagram [449]

1. Click on the History tab if you haven't already

2.Click on the Provisioning Diagram hyperlink


Explore the OC-Web-Auto-Seg Provisioning [450]

This is the zoomed-out view of the provisioning diagram.

1. The topmost box describes the item to be created. In this case, we are allocating network space from an existing segment

2.The second box shows which project this template is derived from. Access to resources can be controlled with projects

3.The bottom row shows the process Assembler walks through to choose where to allocate this network. In effect, Assembler

chooses the first Network Profile it finds that meets the constraints of the object being provisioned.

•Network Profile OC-Web-Auto-Seg meets the constraints of this resource

•The other Network Profile does not meet the constraints and is ineligible


Explore the OC-DB-Auto-Seg Network Allocation [451]


1. Click on the Network Allocation step

2.Select the OC-DB-Auto-Seg component

3.Notice how the Network Profile that meets the constraints for OC-DB-Auto-Seg changes

4.Click on Close button twice

Explore the Deployed Application [452]


1. Click the > to explore the components of the OpenCart-Fixed Demo deployment

2. Observe two deployed OC-Apache-Auto-XXX web servers on the 10.1.4.x network, with IP addresses in the range controlled by NSX for DHCP on the OC-Web-Auto-Seg. (Note: The numeric suffix after the resource's name is set by Assembler to keep resource names unique. This naming mechanism was chosen during the initial Assembler setup in this environment.)

3.An OC-MySQL-Auto-XXX resource in the 10.1.3.x network

4.An NSX Load Balancer on the 10.1.4.x network, with IP address in the range controlled by Assembler on the OC-Web-Auto-

Seg

5.Observe this IP address of the load balancer as we will be using it to test our application

Note: The IP address of your load balancer may be different than shown in the above screenshot

Connect to Load Balancer [453]

1. Click + in Chrome to open a new web browser tab

2.Enter the IP address of the NSX Load Balancer (http://<ip>) in the URL field

Note: As you refresh the page, the name of the host shown in the “Connected to:” field changes. With this, we can confirm that the NSX load balancer is distributing connections across the two web servers.


View Web Servers in vSphere [454]


Click + in Chrome to open a new web browser tab

1. Click the Management vCenter shortcut link

2.Log in with:

◦Username: administrator@vsphere.local

◦Password: VMware123!

3.Click on the LOGIN button

Hosts and Clusters View [455]


1. Navigate to the Hosts and Clusters View. Observe the two “OC-Apache-Auto-###” VMs, as well as the single "OC-MySQL-Auto-###" VM running in our inventory. These VMs were deployed from Assembler as part of the template.

Note: Due to the nested nature of the lab environment it is common to see vSAN alerts/alarms in the vSphere Client. These

can be ignored in the lab.

2.Click OC-Apache-Auto-### to select one of the VMs

3.Note the following:

•CPU and Memory sizes match “Flavor = Small” from the Assembler Flavor Mapping. You can inspect Flavor Mapping in

Assembler by navigating to Assembler>Infrastructure (top bar)> Configure>Flavor Mappings (left bar), and then clicking on

small

•The VM is connected to OC-Web-Auto-Seg based on the OC-Web-Auto-Seg Network Profile selected for this VM. This was

selected by the constraint oc-fixed-network:oc-web being matched in the network profile

•Finally, note that the VMs may be in different resource pools. This is defined in Assembler and can be inspected by navigating to Assembler>Infrastructure (top bar)>Configure>Cloud Zones>Select mgmt domain>Select Compute (top bar). Currently, any resource pool can be selected; however, this can be restricted to specific resource pools as well. Feel free to change this and test, but first delete the existing deployed application.


View Deployment in NSX Manager [456]

1. Click the + in Chrome to open a new web browser tab


2.Click the Management NSX shortcut link

3.Log in to NSX Manager with:

◦Username: admin

◦Password: VMware123!VMware123!

4.Click the LOG IN button


View the Load Balancer [457]

1. Click on the Networking tab

2.Click on Load Balancing in the left column

3.Click the Virtual Servers Link on the OC-Auto-LB


View Virtual Servers [458]

1. Click the OC-Auto-LB-###-pool-1 under the server pool to inspect the members


Server Pool Members [459]

1. Here we can see the two OC-Apache-Auto-### servers that are listed as members on this Load Balancer. This will allow the

load balancer to distribute traffic between the server pool members.

2. Click on the Close button, and then the Close button again to return to the load balancer overview


Remove the Application [460]

1. Click the VMware Cloud Services browser tab. If you have been logged out, log in again using the info below:

Credentials: configadmin

Password: VMware123!

Click on the Assembler tile

2.Click on the Resources tab

3.Click on the Deployments tab

4.Click on the three dot menu next to OpenCart-Fixed Demo

5.Click on Delete


Confirm Delete [461]

1. Click on the red SUBMIT button

Note: It will take ~2 minutes for Cloud Assembly to delete the application.

Module 12 - Conclusion [462]

In this module, we showed how to use Aria Automation Assembler to deploy application workloads onto a pre-defined NSX network. We began by creating an NSX Segment. We then created a DHCP Server and associated it with the segment. Next, we associated a previously created NSX Load Balancer with the tier 1 logical router. After the network and related services had been created, we defined a Network Profile in Assembler for the network. Finally, we used a Cloud Template to deploy a sample web server application on a new network that was provisioned in minutes through software-defined networking. In fact, the networking team didn't even need to be engaged to create the network in the core.


Module 13 - Deploying Applications to an On-Demand NSX Network (45 minutes) Intermediate

Module Overview [464]

It is anticipated that this module will take approximately 45 minutes to complete.

In this module, we use Aria Automation Assembler to dynamically deploy software-defined networking objects inside VMware Cloud Foundation's NSX implementation as part of an application deployment. We begin by reviewing the vCenter and NSX inventory. We then create a network profile inside Assembler. Finally, we deploy a template to demonstrate how an on-demand NSX segment is deployed along with a corresponding tier 1 router, DHCP Server, and load balancer.

This module consists of the following exercises:

•Creating an Assembler Network Profile for OC-DB-Cloud-Seg

•Creating an Assembler Network Profile for OC-Web-Cloud-Seg

•Reviewing an Aria Automation Template

•Deploying OpenCart from a Template

•Reviewing the Deployed Application

•Deleting a Deployed Application

Prior to beginning the exercises, please close all the windows on the desktop.

Create Assembler Network Profiles [465]

A network profile defines a group of networks and network settings that are available for a cloud account in a particular region or
Datacenter in VMware Aria Automation.

You typically define network profiles to support a target deployment environment, for example a small test environment where an
existing network has outbound access only or a large load-balanced production environment that needs a set of security policies. Think
of a network profile as a collection of workload-specific network characteristics.


Initial Console Check [466]

1. Please ensure that the Lab Status is green and says “Ready”.

2.After you have verified that the lab is ready, please launch Google Chrome using the shortcut on the desktop.


Browse to Aria Automation [467]

1. Click + in the Chrome browser to open a new window

2.Click the Aria Suite bookmark folder

3.Click on Aria Automation

Note: you may be presented with a self-signed certificate warning and will need to accept it to proceed


Launch the login page [468]

1. Click the GO TO LOGIN PAGE button


Login Using Workspace One [469]

Note: Workspace ONE is used for authentication with Aria Automation.

Login with:

1. In the Username field enter: configadmin

2.In the Password field enter: VMware123!

3.Click the Sign in Button


Navigate to Assembler [470]

1. Click on the Assembler tile

Aria Automation Assembler allows cloud administrators to define application workloads, using cloud templates, which can be deployed
across different VMware Clouds.
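A cloud template is expressed in YAML; a minimal skeleton has the following shape (the resource and input names here are illustrative, not from this lab):

```yaml
formatVersion: 1
inputs:
  NodeSize:
    type: string
    enum: [small, medium]
resources:
  demo-machine:
    type: Cloud.Machine
    properties:
      image: ubuntu                 # resolved via Image Mapping
      flavor: '${input.NodeSize}'   # resolved via Flavor Mapping
```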


Open Infrastructure Tab [471]

1. Click on the Infrastructure Tab

Aria Automation Assembler has built-in integration with VMware NSX-T. This integration allows for the creation of software-defined
objects inside NSX-T directly from Assembler. This built-in integration includes support for creating tier-1 logical routers, segments,
DHCP Servers, and load balancers.

To enable this integration, you create a network profile. In the Network Profile:

- We identify the cloud location (i.e. endpoint) where the software-defined networking components will be created (in the lab this is the
VCF management domain).

- We specify the network isolation type. In this module, we will use a network isolation type of “on-demand”. This indicates to
Assembler that it will interface with NSX to create the software-defined objects as part of the application deployment.

- We identify the parameters (i.e. NSX transport zone, Tier-0 gateway, NSX Edge Cluster) that Cloud Assembly will use to connect to
NSX and create the software-defined objects.

Begin by creating a new network profile for the Cloud Foundation management domain.


Locate and Configure the Edge Networks [472]

Scroll down on the left column

1. Click on Networks in the left menu

Note: You may need to scroll down to locate it in the Resources section

2. In the filter field enter 'uplink'

3. Click on VCF-edge_mgmt-edge-cluster_segment_uplink1_11. We will update both edge networks, starting with this one.


Update the first Uplink [473]

1. In the IPv4 CIDR field enter: 172.27.11.0/24

2.In the IPv4 default gateway field enter: 172.27.11.1

3.In the DNS servers field enter: 10.0.0.2

4.Click on the SAVE button


Locate and Configure the Edge Networks [474]

1. In the filter field enter 'uplink'

2.Click on VCF-edge_mgmt-edge-cluster_segment_uplink2_12


Update the second Uplink [475]

1. In the IPv4 CIDR field enter: 172.27.12.0/24

2.In the IPv4 default gateway field enter: 172.27.12.1

3.In the DNS servers field enter: 10.0.0.2

4.Click SAVE


Create Network Profile [476]

1. Click on Network Profiles

2.Click on the + NEW NETWORK PROFILE button


New Network Profile [477]

On the Summary tab enter the following values:

1. In the Account / Region field select: HOL-Site-1-Mgmt domain/ mgmt-datacenter


2.In the Name field enter: OC-DB-Cloud-Seg

3.In the Capability tags field enter: oc-cloud-network:oc-db


Add Networks to the Network Profile [478]

1. Click on the Networks tab

2.Click on the + ADD NETWORK button


Adding the Networks [479]

1. Click the filter field and enter 'edge'

2.Select both VCF-edge segments shown (This allows Cloud Assembly created and routed networks to reach the outside via the

Tier-0 uplinks)

3.Click on the ADD button


Network Policies [480]


1. Click the Network Policies tab

2.Set Isolation policy to On-demand network

3.Set transport zone to mgmt-domain-tz-overlay01

4.Set External network to VCF-edge_mgmt-edge-cluster_segment_uplink1_11

5.Set Tier-0 to mgmt-edge-cluster-t0-gw01

6.Set Edge Cluster to mgmt-edge-cluster

7. Leave Source at Internal (VRA will act as IPAM for this segment)

8.In the CIDR field enter: 10.1.5.1/24

9.Set Subnet size to /28 (-14 IP addresses)

10.Leave the IP Range Assignment at Static and DHCP

11.Click on the CREATE button
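The /28 subnet size chosen in step 9 works out to 14 usable addresses per on-demand segment; a quick sanity check with Python's standard ipaddress module (the example network below is illustrative):

```python
import ipaddress

# A /28 slice carved from the 10.1.5.x range entered above
subnet = ipaddress.ip_network("10.1.5.0/28")

print(subnet.num_addresses)        # 16 addresses in total
print(len(list(subnet.hosts())))   # 14 usable hosts (network/broadcast excluded)
```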

Review Network Profile Creation [481]

1. Here we can see the New Database Network Profile has been created.


Add a New Web Network Profile [482]

1. Click on the + NEW NETWORK PROFILE button


New Network Profile [483]

On the Summary tab enter the following values:

1. Set the Account / Region to: mgmt domain/ mgmt-datacenter

2.In the Name field enter: OC-Web-Cloud-Seg

3.In the Capability tags field enter: oc-cloud-network:oc-web


Add Networks to the Network Profile [484]

1. Click on the Networks tab

2.Click on the + ADD NETWORK button


Adding the Networks [485]

1. Click the filter field and enter 'edge'

2.Select both VCF-edge segments shown (This allows Cloud Assembly created and routed networks to reach the outside via the

Tier-0 uplinks)

3.Click on the ADD button


Network Policies [486]


1. Click Network Policies tab

2.Set Isolation policy to On-demand network

3.Set transport zone to mgmt-domain-tz-overlay01

4.Set External network to VCF-edge_mgmt-edge-cluster_segment_uplink1_11

5.Set Tier-0 to mgmt-edge-cluster-t0-gw01

6.Set Edge Cluster to mgmt-edge-cluster

7. Leave Source at Internal (VRA will act as IPAM for this segment)

8.In the CIDR field enter: 10.1.6.1/24

9.Set Subnet size to /28 (-14 IP addresses)

10.Leave IP Range Assignment at Static and DHCP

11.Click on the CREATE button

Review Network Profile Creation [487]

1. Here we can see the New Web Server Network Profile has been created.

Deploy from a Template [488]

With the network profile created, we are ready to upload and deploy new Templates where we have defined resources for deploying an
on-demand NSX network and related objects.


1. Click on the Design tab

2.Click on the NEW FROM button

3.Select the Upload option

1. Click Close


Upload Template [489]

1. In the Name field enter: 'OpenCart Cloud Network'

2.Select the OpenCart project

3.Verify that Share only with this project is selected

4.Click the SELECT FILE button

5.The upload wizard should default to the downloads directory, select the Opencart Cloud Network.yaml file

6.Click on the Open button

7. Click on the UPLOAD button


Select the Template [490]

1. Click on the OpenCart Cloud Network template to review its design


Review the Template [491]

Note we have seven resources.

•Two network resources that create on-demand NSX networks and T1 routers

•On-demand NSX Load Balancer and virtual servers for this instance of OpenCart

•One or more Apache web servers (number of servers set when the user deploys the template)

•An instance of MySQL for this OpenCart application

•Two Security objects are attached to respective virtual machines, to create on-demand security policies per VM type


Inspect the OC-Web-Cloud-Seg Resource [492]

1. Click on the OC-Web-Cloud-Seg resource

2.This highlights the relevant part of the YAML file for this cloud template

◦Note the OC-Web-Cloud-Seg resource will create a new “routed” network. It will match with a network profile that has the capabilities oc-cloud-network:oc-web.
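A sketch of such an on-demand routed network resource in the template YAML (the resource name and tag are those shown in this lab; the remaining structure is illustrative):

```yaml
  OC-Web-Cloud-Seg:
    type: Cloud.NSX.Network
    properties:
      # "routed" asks Assembler to create a new NSX segment and
      # attach it to a Tier-1 gateway at deployment time
      networkType: routed
      constraints:
        - tag: 'oc-cloud-network:oc-web'
```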


Inspect OC-DB-Cloud-Seg Resource [493]

1. Click on the OC-DB-Cloud-Seg resource

2. The OC-DB-Cloud-Seg has a constraint of oc-cloud-network:oc-db that will need to be matched by a corresponding network profile


Inspect the OC-Cloud-LB [494]

1. Click on the OC-Cloud-LB load balancer resource.

2. The load balancer resource will create a new load balancer and virtual server resources on the OC-Web-Cloud-Seg segment, with members of the server pool (instances) based on the number of OC-Apache-Cloud web servers this template deploys. The load balancer is configured to listen on Port 80 (Protocol and Port), and to talk to the backend Apache servers on Port 80 (InstanceProtocol and InstancePort).


Inspect Apache Server Policies [495]

1. Click on the OC-Apache-Cloud-Sec-Group resource

2.This resource creates an on-demand distributed firewall policy that applies to virtual machines created by the OC-Apache-

Cloud VM resource.

◦This creates a set of rules similar to what you have created and used in the previous OpenCart lab modules.
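A sketch of what an on-demand security group resource like this can look like in the template YAML (the rule details are illustrative, not the lab's exact policy):

```yaml
  OC-Apache-Cloud-Sec-Group:
    type: Cloud.SecurityGroup
    properties:
      # "new" creates an on-demand NSX security group and firewall
      # policy scoped to the VMs that reference this resource
      securityGroupType: new
      rules:
        - name: allow-http-in
          direction: inbound
          protocol: TCP
          ports: 80
          access: Allow
          source: any
```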


Inspect MySQL Policies [496]

1. Click on the OC-MySQL-Cloud-Sec-Group resource

2. This resource creates an on-demand distributed firewall policy that applies to virtual machines created by the OC-MySQL-Cloud VM resource.

◦This creates a set of rules similar to what you have created and used in the previous OpenCart lab modules.


Test OpenCart Cloud Demo [497]

1. Click on the TEST button

2.Enter the following values:


◦In the Node Size field enter: small

◦Set the Front End Cluster Size to: medium

3. Click the TEST button a second time


Review Test Results [498]

1. The test function will evaluate the cloud template against the infrastructure to verify that things are properly configured in the

lab. Ensure the test is successful. If the test fails, work with the lab moderator to resolve any problems.

2.Click X to close the test results pop-up


Deploy OpenCart Cloud Demo [499]

1. Click on the DEPLOY button

2.In the Deployment Name field enter: OpenCart-Cloud-Demo

3.Click on the NEXT button


Finalize Deployment [500]

1. Enter the following values:

◦For the Node Size select: small

◦For the Front End Cluster Size select: medium

2.Click on the DEPLOY button


Monitor Progress [501]

Assembler will deploy the cloud template. This will take approximately 15 minutes in the lab.

You can monitor the progress from Assembler as well as by connecting to the vSphere Client to watch as the VMs are deployed.

Note: If the deployment fails, attempt to restart it by navigating to the Deployments tab and, for the failed deployment, selecting "Update" from the three-dot menu. If the deployment still fails, contact the lab moderator for assistance.

Wait for the deployment to complete; it is complete when the status shows Successful.

1. Once complete, you can see how long the deployment took. (If this takes 10 minutes or more, try refreshing your browser.)

2.Click on the History tab


Historical View [502]

1. Use the scroll bar to review the historical information about the application deployment

2.Click on the Provisioning Diagram hyperlink


Review Provisioning Diagram [503]

The provisioning diagram is one of the best troubleshooting tools available for diagnosing failing deployments. This exercise will only
show the initial network allocation to familiarize you with navigating the provisioning diagram

The initial screen presented will default to the first network provisioned, which in this lab is OC-Web-Cloud-Seg

1. The topmost box describes the item to be created. In this case, we are creating a new network space due to the type

ROUTED

2.The second box shows the project that this template is a part of. Access to resources can be controlled with projects

3.The bottom row shows the process Assembler walks through to choose where to allocate this network. In effect, Assembler

chooses the first Network Profile it finds that meets the constraints of the object being provisioned.

◦Network Profile OC-Web-Cloud-Seg meets the constraints of this resource

◦The remaining Network Profiles do not meet the constraints and are ineligible


Review OC-DB-Cloud-Seg Provisioning Diagram [504]

1. Click on the blue Network Allocation box

2.Select the OC-DB-Cloud-Seg

3.Notice how the Network Profile that meets the constraints for OC-DB-Cloud-Seg changes

4.Click on the Close button

5.Click on the Close button again (button not shown in the screenshot)


Review OpenCart Cloud Demo Deployment [505]

1. Click > to expand the OpenCart-Cloud-Demo deployment

Note the following:

2. Two deployed OC-Apache-Cloud-XXX web servers on the 10.1.6.x network, with IP addresses controlled by Assembler for DHCP on the OC-Web-Cloud-Seg. (Note: The numeric suffix after the resource's name is set by Assembler to keep resource names unique. This naming mechanism was chosen during the initial Assembler setup in this environment.)

3.An OC-MySQL-Cloud-XXX resource in the 10.1.5.x network

4.An NSX Load Balancer on the 10.1.6.x network, with an IP address in the range controlled by Assembler on the OC-Web-

Cloud-Seg.

Note down this IP address as you will need it in the following step.


Connect to Shopping Cart Demo [506]

1. Click the + to open a new Chrome tab

2. Enter the IP address of the NSX Load Balancer (http://<ip>) noted in the previous step; yours may be different than 10.1.6.2


Review Deployment in vCenter [507]


With the shopping cart application running, we will now look at the underlying components that were deployed in vSphere and NSX.

1. Click the + to open a new Chrome tab

2.Click the Management vCenter bookmark

Log in using the information below

◦Username: administrator@vsphere.local

◦Password: VMware123!

◦Click on the Login button

3.Navigate to the Hosts and Clusters view

4.From the hosts and clusters view, Select one of the OC-Apache-Cloud webservers identified in the Assembler Deployment

Summary. In this example, the machines are OC-Apache-Cloud-361 and OC-Apache-Cloud-362

5.Notice:

◦CPU and Memory sizes match “Flavor = Small” from Assembler Flavor Mapping.

◦The VM is connected to OC-Web-Cloud-Seg based on the OC-Web-Cloud-Seg Network Profile selected for this

VM. This was selected by the constraint oc-cloud-network:oc-web being matched in the network profile

Note the NSX segment (i.e. OC-Web-Cloud-Seg-###) the VM is connected to. We will connect to the NSX Manager and view details
for this network.


Review in NSX Manager [508]

1. Click the + to open a new Chrome Tab

2.Click the Management NSX bookmark - Log in to NSX Manager if needed

Username: admin

Password: VMware123!VMware123!
Click on the Login button

3.Click on the Networking tab

4.Click on Network Topology in the left menu

5.Click on Tier-1 Gateways in the topology view window

(Optional) Click zoom or scroll the mouse wheel; you can also click and drag to pan around

Note: A new segment has been created with a name similar to “OC-Web-Cloud-Seg-###”. This segment was created by Assembler as part of the application deployment.


Locate the Tier-1 Gateways [509]

You will be able to locate the OC-Web-Cloud-Seg-### and the OC-DB-Cloud-Seg-### Tier-1 Gateways that were dynamically created
with the use of the routed network type selected in the YAML file.

1. Click on 2 Services, and you will see a Load Balancer and Firewall rules have been provisioned

2. Click on 2 VMs at the bottom to expand and see the VMs as shown above

Both of these VMs were given DHCP address leases by a DHCP server that was dynamically provisioned by Assembler


Review Load Balancing [510]

1. Click on Load Balancing in the left menu

2.Examine the Dynamically Created Load Balancer called OC-Cloud-LB-###

3.Click on the Virtual Servers link

Inspect the Virtual Server [511]


Here we can see the virtual server, its associated IP address, ports, etc

1. Click on the OC-Cloud-LB-###-pool-1

Inspect the Server Pool [512]

1. Here we can see the list of both OC-Apache-Cloud-### servers, their IP address, port, weight, etc.

2.Click on the Close button


Close the Virtual Server window [513]


Inspect the Distributed Firewall [514]

1. Click on the Security tab

2.Click on Distributed Firewall from the left menu

3.Click the > to expand the OC-MySQL Policy

4.Click the > to expand the OC-Apache Policy

Notice the highlighted rules in the screenshot:

5. All of these rules and their states were derived from the cloud template, creating a dynamic security policy. This ensures that, from the time the application is deployed until it is retired, it adheres to the security policy set forth by the organization.


Delete Shopping Cart Demo [515]

1. Click on the Aria Automation Assembler browser tab. (Note: you may need to re-authenticate.)

Credentials: configadmin

Password: VMware123!

Click on the Assembler tile

We will now remove the deployed application.

2.Click on the three-dot menu next to OpenCart-Cloud-Demo

3.Select the Delete option


Confirm Deletion [516]

1. Click on the SUBMIT button

The delete process usually takes 2-3 minutes to complete.

Optional: If you have a vCenter Server window open during the delete process, you will see virtual machines power off and be deleted.

Module 14 - Conclusion [517]

In this module, we showed how to use Aria Automation Assembler to deploy application workloads onto an on-demand NSX network.
We began by creating a Network Profile in Assembler that identified the network attributes to be used. We then deployed a sample
shopping cart application on the new network from a template and explored the objects inside vSphere and NSX to become
familiar with the components that were deployed. Based on your environment, would you see an improvement in turnaround times
delivering infrastructure back to the business?


Module 15 - Kubernetes Overview and Deploying vSphere Pod VMs (30 minutes) Advanced

Module 15 - Overview [519]

This module provides an introduction to Cloud Foundation with Tanzu and shows how to run vSphere Pod VMs on a vSphere Cluster.

Cloud Foundation (VCF) with Tanzu introduces a new construct called the vSphere Pod, which is the equivalent of a Kubernetes pod.
A vSphere Pod is a special type of virtual machine with a small footprint that runs one or more Linux containers. Each vSphere Pod is
sized precisely for the workload it accommodates and has explicit resource reservations for that workload: it allocates the exact
amount of storage, memory, and CPU resources required for the workload to run. vSphere Pods are supported only with Supervisor
Clusters that are configured with NSX-T Data Center as the networking stack.

Note that while Pod VMs are unique to vSphere, they are deployed and managed the same as any upstream conformant Kubernetes
pod.

This module contains six exercises. The exercises are successive and must be completed in order.

•Exercise 1: Lab Overview

•Exercise 2: Developer Access

•Exercise 3: Creating a Kubernetes Deployment

•Exercise 4: Scaling a Kubernetes Deployment

•Exercise 5: Kubernetes Integration with NSX

•Exercise 6: Deleting the Kubernetes Deployment

It is estimated that it will take ~20 minutes to complete all six exercises.

Exercise 1 - Lab Overview [520]


Log into vCenter Server [521]


1. Open Chrome, click on the vCenter tab, and verify the page URL to ensure you have the correct user interface. The vCenter login URL should read https://mgmt-vcenter.vcf.sddc.lab

2. In the User name box enter: administrator@vsphere.local

3. In the Password box enter: VMware123!

4. Click the LOGIN button

Begin by reviewing the lab environment using the vSphere Client.


1. Navigate to Menu

2. Select Inventory

Review Tanzu Supervisor Cluster [522]


1. Expand mgmt-vcenter.vcf.sddc.lab > mgmt-datacenter > mgmt-cluster

2. Expand Namespaces > Expand ns01

Observe that there are five ESXi hosts configured in a vSphere cluster named "mgmt-cluster". Running on this cluster are two NSX
Edge transport nodes (mgmt-edge01, mgmt-edge02). Kubernetes is enabled on the cluster, as evidenced by the three
"SupervisorControlPlane" virtual machines.

Supervisor Cluster [523]


Note: the term "supervisor cluster" is used to denote a vSphere cluster on which Kubernetes has been enabled.

1. Click ns01

Workload Management [524]

vSphere Pod VMs and Tanzu Kubernetes Clusters get deployed inside vSphere Namespaces. Namespaces control developer access
and define resource boundaries. Namespaces are isolated from each other, enabling a degree of multi-tenancy.

In the lab, we see that the "devteam" group has been granted edit permissions to the "ns01" namespace. Also, developers working in
this namespace have access to the "K8s Storage Policy". There are currently no resource limits set.

To access the Kubernetes instance running on this vSphere cluster, developers use the control plane IP address. To get the Kubernetes
Control Plane IP address:

1. Navigate to Menu

2. Select Workload Management


Supervisor Control Plane [525]

1. Click the Supervisors tab

Note: In the vSphere client, the Kubernetes features are enabled under "Workload Management". Enabling Kubernetes is referred to as
enabling the "Workload Control Plane (WCP)".

The 'Control Plane Node IP Address' shown for "k8s-lab" is 172.16.10.2. This is the address the developer will use to connect to the
Kubernetes control plane.

In this exercise, we reviewed the lab configuration. We saw that the lab is comprised of a Cloud Foundation domain named "mgmt-wld"
containing a four-node vSphere cluster named "mgmt-cluster". An NSX-T Edge Cluster has been configured on the cluster
and Workload Management (i.e. Kubernetes) has been enabled. The Kubernetes control plane IP is 172.16.10.2.

Exercise 2 - Developer Access [526]

In this exercise, we use Putty to log in to the developer's Linux workstation where we will authenticate to the Kubernetes control plane
and choose a vSphere Namespace in preparation for deploying the container-based application.


Connect to the Linux workstation:

1. Click the Putty icon in the system tray

2. Click CentOS

3. Click Load

4. Click Open

◦Login: root


◦Password: VMware123!


Log into Kubernetes Control Plane [527]

From the Linux workstation, we use the "kubectl" command to connect to the Kubernetes control plane running on the vSphere cluster.
Authentication is done using vCenter Single Sign-On (SSO).

In the lab, we use the account "sam" that is in the "vsphere.local" SSO domain. This account is a member of the "devteam" group which
has been assigned "edit" privileges to the "ns01" namespace.

Notes:

•Before developers can authenticate, they need to download the vSphere CLI tools to their workstation. The CLI tools provide

a version of the "kubectl" command that includes a vCenter SSO authentication plug-in.

•The IP address passed as part of the "--server" flag is the IP address of the Kubernetes control plane that we looked at in the

previous exercise (172.16.10.2).

Run the following “kubectl vsphere login …” command to log into the Kubernetes Control Plane:

1. kubectl vsphere login --vsphere-username sam@vsphere.local --server 172.16.10.2


Switch to Namespace [528]

Next, set the Kubernetes context to the “ns01” namespace by running the command "kubectl config use-context ns01".

1. kubectl config use-context ns01

The developer Sam has successfully authenticated and set his context to the "ns01" namespace. Any Kubernetes objects deployed by
Sam will be created in the "ns01" namespace.

In this exercise, we saw how developers access the Kubernetes instance running on the vSphere Cluster. Developers must first
download the vSphere CLI Tools, which include a version of the "kubectl" command with a vSphere authentication plugin.
They then use the “kubectl” command to authenticate and set their context.

Exercise 3 - Creating a Kubernetes Deployment [529]

We will now deploy a container-based application that will run as a vSphere Pod VM on the vSphere cluster.


Change directory to 'demo':

1. cd ~/demo

Review the demo1.yaml file.

2. cat demo1.yaml

The "demo1.yaml" manifest deploys a simple container-based application that is comprised of a single "nginx" web server. The
application is implemented as a Kubernetes "deployment" comprised of a single pod and includes a "load balancer" service that will be
implemented inside of NSX.
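As a point of reference, a deployment-plus-LoadBalancer-service manifest of the shape described above generally looks like the following sketch. The names, labels, and ports shown here are illustrative assumptions, not the lab's exact demo1.yaml file (which is displayed in the Putty terminal):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo1
spec:
  replicas: 1                 # a single nginx pod
  selector:
    matchLabels:
      app: demo1
  template:
    metadata:
      labels:
        app: demo1
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: demo1
spec:
  type: LoadBalancer          # realized as an NSX load balancer via the NSX Container Plugin
  selector:
    app: demo1
  ports:
  - port: 80
    targetPort: 80
```

The "type: LoadBalancer" field in the Service is what triggers the automatic creation of the NSX load balancer explored later in this module.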


Deploy 'demo1' application [530]

Deploy the application using the "kubectl apply -f" command:

1. kubectl apply -f demo1.yaml


Review Deployment [531]

The output shows the creation of both the deployment and service. Wait ~60 seconds to allow the image to be pulled and the container
to start. Then run the following commands to view details about the deployment:

1. kubectl get deployments

2. kubectl get pods

3. kubectl get services

Wait for the pod to enter a running state before continuing. In the lab, this can take two or three minutes.


Review Deployment in vSphere [532]

Return to the vSphere Client:

1. Navigate to Menu

2. Select Inventory


1. Select ns01

Observe the vSphere Pod VM is now shown in the vCenter inventory under the ns01 namespace.


Further review in vSphere [533]

Follow the steps below to view additional details about the Kubernetes components:

1. Select ns01

2. Select MANAGE NAMESPACE


Follow the steps below to view additional details about the Kubernetes components:

1. Click the Compute tab

2. Click vSphere pods

3. Click Deployments

4. Click the Storage tab

5. Click the Network tab

6. Click Services (not shown above)

Note: Both the VCF administrator and the developer have full visibility into the vSphere Pod VMs deployed on the vSphere cluster.

In this exercise, we deployed a simple web server using the "nginx" container image. We saw a sample YAML manifest and the steps
developers use to deploy Kubernetes-based workloads directly on the vSphere cluster.

Exercise 4 - Scaling a Kubernetes Deployment [534]

In the previous exercise, a single vSphere Pod VM was deployed as part of a Kubernetes "deployment". Let's scale this up so that we
have three pods to provide redundancy for the web server.


1. Click the Putty icon to return to the Putty terminal

Run the following commands:

2. kubectl scale deployment demo1 --replicas=3

Wait 30 seconds and then query the deployments and pods. Note that there are now three pods running.

3. kubectl get deployments

4. kubectl get pods


Wait for all three pods to enter a running state. It may take two or three minutes. Feel free to re-run step 4 until all pods report a
running state.

5. Click the “vSphere” browser tab to return to the vSphere Client:

Observe that all three pods are shown in the vSphere client.

This exercise demonstrated how developers can interact with Kubernetes running on the vSphere cluster. VCF with Tanzu provides a
native Kubernetes experience to developers.


Exercise 5 - Kubernetes Integration with NSX [535]

The YAML manifest we used to deploy the vSphere Pod VMs includes a "service" object. This object is used to enable external network
access to the web server. This access is achieved using an NSX load balancer that was deployed by Kubernetes at the time of the
deployment.

Note: the load balancer was automatically created when the Kubernetes deployment was created. There is no requirement to manually
create NSX objects or do any configuration inside NSX prior to deploying Pod VMs.

Click the Putty icon to return to the Putty terminal

Run the "kubectl get services" command to view details about the service.

1. kubectl get services

Note that the EXTERNAL-IP 172.16.10.4 has been assigned to the demo1 service. This is the IP address that has been assigned to the NSX
load balancer; we access the web server using this IP.


Connect to 'demo1' application [536]

Click the “vSphere” browser tab to return to the Chrome Browser

1. Click + to open a new browser tab

2. Enter the URL http://172.16.10.4

Confirm that you can connect to the web server.


View Load Balancer in NSX [537]


Next, we will look at the Load Balancer inside NSX-T.

1. Click + to open a new browser tab

2. Click the Management NSX Chrome Bookmark folder (URL https://mgmt-nsx.vcf.sddc.lab)

If prompted by Chrome, click Advanced, and Proceed to mgmt-nsx.vcf.sddc.lab (unsafe)

3. Login:

◦Username: admin

◦Password: VMware123!

4. Click LOG IN

Click Save if you get the Customer Experience Improvement Program screen

Note: You may notice a warning that a 3-node cluster is recommended. This is expected in the lab to minimize resource
usage.

Search for Load Balancer [538]


Note: NSX is installed and configured with each workload domain. This includes the configuration of the NSX Container Plugin (NCP) at
the time that Kubernetes is enabled. The NCP provides NSX integration with Kubernetes.

From the NSX Home page:

1. Enter the Load Balancer Ingress IP address 172.16.10.4 in the search field and press Enter

In the search results:

2. Click the [Virtual Servers] tab

The NSX load balancer that is serving requests to the nginx web server running on the three pods is shown. Again, the NSX Load
Balancer was automatically created by Kubernetes when we created the deployment. This was done using the "Service" resource
defined in the demo1.yaml manifest.

3. Click the load balancer name link (domain-c8-<id>) to go to the Load Balancer configuration page.

View Server Pool [539]

Here we see the details for the virtual server, including the IP address, port, and type.

1. Click the link in the Server Pool column to view the Server Pool.


We see the three pods listed as members of the server pool. Let's add more pods to the deployment and watch the server pool
members list get updated.

1. Click Close to close the Server Pool Members window.


Add Pods to 'demo1' application [540]

Next, we return to the Putty terminal, where we will scale the deployment again to show how the server pool in the NSX load
balancer is automatically updated as the size of the Kubernetes deployment increases.

Click the Putty icon to return to the putty terminal.

Run the following commands to scale the deployment to 10 pods and verify all pods reach the "running" state:

Note: In the Linux shell, you can use the up-arrow key to scroll back through previous commands and then edit and modify them to
avoid retyping the full command.

1. kubectl scale deployment demo1 --replicas=10

Wait ~30 seconds for the new pods to deploy

2. kubectl get pods


Review Load Balancer in NSX [541]

1. Click the “NSX” browser tab to return to the browser window.

2. Click refresh at the bottom of the page.

3. Click the Server Pool link.


We see the Server Pool Members list automatically updated with the new pods (you may need to scroll down to see all 10).

1. Click Close to close the server pool member window.


Scale down 'demo1' Application [542]

Click the Putty icon to return to the putty terminal

Run the following commands to scale the deployment back down to 2 pods:

1. kubectl scale deployment demo1 --replicas=2

Wait ~30 seconds for the pods to shut down

2. kubectl get pods


Review Load Balancer in NSX [543]

1. Click the “NSX” browser tab to return to the browser window.

2. Click refresh at the bottom of the page.

3. Click the Server Pool link.


Confirm the Server Pool Members list is updated to include the 2 remaining Pod VMs. Your IPs may differ from what is shown.

1. Click Close.

In this exercise, we looked at the NSX integration that comes with VCF with Tanzu. NSX makes it easy for developers to configure
network-related services (such as NAT and load balancers) for their container-based applications. The developers define the objects in
a YAML manifest. When the manifest is applied, Kubernetes instantiates the objects defined in it, including interfacing with the
NCP plug-in to automate the creation of the related network objects inside NSX.

Exercise 6 - Deleting the Kubernetes Deployment [544]

In this exercise, we will remove the nginx deployment.


Click the Putty icon to return to the putty terminal

Run the command below to delete the "demo1" deployment.

1. kubectl delete -f demo1.yaml

Wait ~30 seconds and then run the following commands to verify the Kubernetes objects are deleted:

2. kubectl get deployments

3. kubectl get pods

4. kubectl get services

In this exercise we saw how to remove a Kubernetes deployment.


Module 15 - Conclusion [545]

In this module, we saw how vSphere with Tanzu enables customers to run containers directly on vSphere. We saw how
configuration and access to the environment is controlled by the VCF administrator from the vSphere Client, and how
developers authenticate and use YAML manifests to deploy Pod VMs. Pod VMs created by the developer are visible to the VCF admin.
We also looked at how VCF and NSX make it easy for developers to expose services on the network.

Module Key takeaways [546]

•VCF with Tanzu introduces a new construct called a vSphere Pod, which is the equivalent of a Kubernetes pod.

•vSphere Namespaces with SSO authentication enable the admin to restrict access and control resource consumption.

•VCF with Tanzu makes it easy to deploy Kubernetes networking services, such as Load Balancer (L4) and Ingress (L7),

through integration with VMware NSX.

•To the admin, VCF with Tanzu looks and feels like vSphere. To the developer, VCF with Tanzu looks and feels like Kubernetes.


Module 16 - Deploying Tanzu Kubernetes Clusters (30 minutes) Advanced

Module 16 - Overview [548]

This module shows how to deploy a Tanzu Kubernetes Cluster (TKC) inside a vSphere Namespace.

The vSphere supervisor cluster provides a management layer from which Tanzu Kubernetes Clusters (TKCs) are built. The Tanzu
Kubernetes Grid Service is a custom controller manager with a set of controllers that runs on the supervisor cluster. One of the roles of
the Tanzu Kubernetes Grid Service is to provision Tanzu Kubernetes clusters.

While there is a one-to-one relationship between the Supervisor Cluster and the vSphere cluster, there is a one-to-many relationship
between the supervisor cluster and Tanzu Kubernetes clusters. You can provision multiple Tanzu Kubernetes clusters within a single
supervisor cluster. The workload management functionality provided by the supervisor cluster gives you control over the cluster
configuration and lifecycle while allowing you to maintain concurrency with upstream Kubernetes.

You deploy one or more Tanzu Kubernetes clusters to a vSphere namespace. Resource quotas and storage policy are applied to a
vSphere Namespace and inherited by the Tanzu Kubernetes clusters deployed there.

When you provision a Tanzu Kubernetes cluster, a resource pool and VM folder are created in the vSphere Namespace. The Tanzu
Kubernetes cluster control plane and worker node VMs are placed within this resource pool and VM folder. Using the vSphere Client,
you can view this hierarchy by selecting the Hosts and Clusters perspective and selecting the VMs and Templates view.

This module contains one exercise.

1. Deploy Tanzu Kubernetes Cluster

It is estimated that it will take ~25 minutes to complete this exercise. It is recommended that you close all browsers or Putty windows on
the desktop prior to beginning this module.

Exercise 1 - Deploy Tanzu Kubernetes Cluster [549]

In this exercise, we deploy a TKC named "tkc02" in the "ns01" namespace. The TKC is deployed with one control plane and one worker
node.

Note that this configuration is intentionally small to facilitate the lab. It is not recommended for production-grade TKC deployments,
which should always have three control plane VMs with multiple worker nodes.


Navigate to the Hosts and Clusters view

1. Click to expand mgmt-vcenter.vcf.sddc.lab

2. Click to expand mgmt-cluster

3. Click to expand Namespaces

4. Click to expand ns01


Log into vSphere [550]

1. Click the Management vCenter bookmark (https://mgmt-vcenter.vcf.sddc.lab)


Login:

1. Username: administrator@vsphere.local

2. Password: VMware123!

3. Click LOGIN


Launch Putty [551]


TKC clusters are deployed inside vSphere namespaces. Here we see a TKC named "tkc01" that is deployed in the "ns01" namespace.
We will add a second TKC named "tkc02".

To create a TKC, developers need to:

•Login to the Kubernetes Control Plane and set the context to the vSphere namespace.

•Create a TKC deployment manifest YAML file.

Begin by logging in to the developer workstation.

1. Click the Putty icon in the system tray

2. Click CentOS

3. Click Load

4. Click Open

◦Login: root

◦Password: VMware123!


Review Deploy-TKC02.yaml [552]


From the Putty terminal, run the following commands to view the TKC manifest that will be used to deploy the TKC.

1. cd ~/tkc

2. cat Deploy-TKC02.yaml (case sensitive)

Below is an example of the YAML manifest file that will be used in this exercise. Take a few minutes to familiarize yourself with the
contents of this file.

apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: tkc02
  namespace: ns01
spec:
  distribution:
    version: 1.23.8
  topology:
    controlPlane:
      count: 1                           # control plane nodes
      class: guaranteed-xsmall           # form factor
      storageClass: k8s-storage-policy   # storage class for control plane
    workers:
      count: 1                           # worker nodes
      class: best-effort-xsmall          # form factor
      storageClass: k8s-storage-policy   # storage class for workers
  settings:
    network:
      cni:
        name: antrea
      services:
        cidrBlocks: ["198.51.100.0/12"]  # Cannot overlap with Supervisor Cluster
      pods:
        cidrBlocks: ["190.0.2.0/16"]     # Cannot overlap with Supervisor Cluster
    storage:
      classes: ["k8s-storage-policy"]    # Named PVC storage classes
      defaultClass: k8s-storage-policy   # Default PVC storage class

Notes:


The manifest file specifies the name of the TKC along with the name of the vSphere namespace where it will be deployed. It is also
where developers indicate the version of Kubernetes to use, the number and size of the control plane and worker nodes,
and the network and storage settings.

The storage classes in the YAML manifest map directly to vSphere storage policies that are part of vSphere Cloud Native Storage
(CNS). CNS is a vSphere feature that makes Kubernetes aware of how to provision storage on vSphere on demand, in a fully automated,
scalable fashion, as well as providing visibility into container volumes from the vSphere client.

The developer is able to specify the size (e.g. "class") and the number of virtual machines that get deployed as part of a TKC, but they
are bound by the resource limits set for the vSphere namespace. While TKCs allow developers to self-provision Kubernetes clusters on
vSphere, the vSphere administrator is able to control and manage the amount of vSphere cluster resources (CPU, memory, storage)
they can consume.
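On the Kubernetes side, namespace limits set by the vSphere administrator surface to developers as standard ResourceQuota objects, viewable with "kubectl get resourcequota". If the admin had set limits on "ns01", the quota would look roughly like this sketch. All names and values here are illustrative assumptions, not from the lab (recall that ns01 currently has no limits set):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: ns01-quota            # illustrative name
  namespace: ns01
spec:
  hard:
    requests.cpu: "4"         # total CPU reservations permitted in ns01
    requests.memory: 16Gi     # total memory reservations permitted
    requests.storage: 100Gi   # total persistent volume capacity permitted
```

A TKC whose requested node sizes would exceed these limits would fail admission, which is how the admin retains control over self-service provisioning.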

Connect to Control Plane [553]

With the YAML manifest in place, we are ready to connect to the Kubernetes Control Plane and set the context to the vSphere
namespace "ns01".

Run the following commands in the Putty terminal:

1. kubectl vsphere login --vsphere-username sam@vsphere.local --server 172.16.10.2

2. kubectl config use-context ns01


Review Namespace [554]

Begin by querying for a list of the currently deployed TKCs in the "ns01" namespace.

1. kubectl get tkc

Deploy TKC [555]

We see there is currently one TKC named "tkc01" deployed in the "ns01" namespace.

We will now create a second TKC named "tkc02" using the manifest file.

1. kubectl apply -f Deploy-TKC02.yaml


Review TKC Deployment [556]

We are notified that the "tkc02" cluster has been created inside of Kubernetes.

We can monitor the progress of the TKC deployment from the Linux workstation using these commands:

1. kubectl get tkc

2. kubectl describe tkc tkc02


Review TKC Deployment in vSphere [557]


Note that you may need to run these commands a few times, as it can take a couple of minutes for the deployment to begin in the lab.

You can also monitor the progress of the virtual machine deployments from the vSphere Web Client.

1. Click the “vSphere - Tanzu Kubernetes” browser tab to return to the vSphere Client.

Note that under the "ns01" namespace there is now a new TKC object named "tkc02".

2. Click to expand "tkc02"

We see the control plane node being deployed.

3. Open the "Recent Tasks" pane in the vSphere Client

Monitor the control plane and worker node OVF deployments from the vSphere client. A few minutes after the control plane node is
deployed, the worker node will be deployed. Wait for both virtual machines to be deployed and powered on.

Note: in the lab, this typically takes ~15 minutes. However, when the cloud back-end is highly congested it can take upwards of ~30
minutes to complete. Please be patient.

Review TKC Deployment in Control Plane [558]


After the control plane and worker virtual machines have deployed and powered on, return to the Putty terminal and confirm the TKC is
running.

1. Click the Putty icon to return to the putty terminal

Run the following commands:

2. kubectl get tkc

3. kubectl get virtualmachines

4. kubectl describe tkc tkc02

Delete TKC Deployment [559]

Note: wait for all nodes to show a status of "ready". It may take several minutes before the worker node reaches the ready state.

Next, we will delete "tkc02" from the inventory.

1. kubectl delete tkc tkc02

Wait ~30 seconds and then verify "tkc02" has been removed. Note that it may take a few (i.e. up to 5) minutes for the delete tkc command
to complete.

2. kubectl get tkc

Return to the vSphere web client and verify the TKC cluster "tkc02" is no longer shown under the namespace "ns01".


Module 16 - Conclusion [560]

This module showed how to deploy a Tanzu Kubernetes Cluster (TKC) inside a vSphere Namespace.

VCF with Tanzu enables developers to deploy one or more Tanzu Kubernetes clusters to a vSphere Namespace. Resource quotas and
storage policy are applied to a vSphere Namespace and inherited by the Tanzu Kubernetes clusters deployed there.

When you provision a Tanzu Kubernetes cluster, a resource pool and VM folder are created in the vSphere Namespace. The Tanzu
Kubernetes cluster control plane and worker node VMs are placed within this resource pool and VM folder.

Once deployed, TKCs can be expanded by adding additional worker nodes, resized by increasing the CPU and memory assigned to
each node, upgraded, and deleted. These functions are explored in the next sections of this lab.

Module Key Takeaways [561]

•The Tanzu Kubernetes Grid Service is a custom controller manager with a set of controllers that is part of the supervisor

cluster. The purpose of the Tanzu Kubernetes Grid Service is to provision Tanzu Kubernetes clusters.

•There is a one-to-many relationship between the supervisor cluster and Tanzu Kubernetes clusters. You can provision

multiple Tanzu Kubernetes clusters within a single supervisor cluster.

•The workload management functionality provided by the supervisor cluster gives developers control over the cluster

configuration and lifecycle while allowing you to maintain concurrency with upstream Kubernetes.

•You deploy one or more Tanzu Kubernetes clusters to a vSphere namespace. Resource quotas and storage policy are applied

to a vSphere Namespace and inherited by the Tanzu Kubernetes clusters deployed there.


Module 17 - Adding Worker Nodes to a Tanzu Kubernetes Cluster (15 minutes) Advanced

Module 17 - Overview [563]

In this exercise, we will expand a Tanzu Kubernetes Cluster (TKC) by adding an additional worker node.

Being able to dynamically allocate capacity on demand and add additional capacity over time is a critical capability of a modern private
cloud. When running Kubernetes workloads on top of Cloud Foundation with Tanzu, it is easy to allocate infrastructure for hosting TKCs
and to expand that infrastructure over time.

TKCs can be expanded in two ways:

1. You can scale the TKC horizontally by adding more worker nodes.

2. You can scale the TKC vertically by increasing the size of the worker nodes.

In this exercise, we will show how to add additional worker nodes to an existing TKC.

Note: While the developer is able to increase the size and number of virtual machines that get deployed as part of a TKC, the vSphere
administrator controls the amount of vSphere cluster resources (CPU, memory, storage) they can consume.
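With the v1alpha1 TanzuKubernetesCluster manifest reviewed in the previous module, horizontal scaling is declarative: the developer raises the worker count in the cluster spec and re-applies the manifest. A minimal sketch is shown below, assuming the field names from the earlier Deploy-TKC02.yaml example; the exercise in this module demonstrates the exact procedure used in the lab:

```yaml
# Sketch: to add a worker node, edit the TKC manifest and re-apply it
# with "kubectl apply -f <manifest>". Only the worker count changes.
spec:
  topology:
    workers:
      count: 2                          # was 1; adds one more worker node VM
      class: best-effort-xsmall
      storageClass: k8s-storage-policy
```

The Tanzu Kubernetes Grid Service reconciles the new desired state by cloning and joining an additional worker node VM, with no manual VM provisioning by the developer.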

This module contains one exercise:

1. Add Kubernetes Worker Nodes to Tanzu Kubernetes Cluster (TKC)

It is estimated that it will take ~20 minutes to complete this exercise.
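In manifest terms, the two scaling options map to two fields in the TKC topology. A sketch (field layout follows the manifest excerpt shown in the upgrade module later in this manual):

```yaml
# Sketch: the two TKC scaling knobs in the cluster manifest
nodePools:
- name: workers
  replicas: 1                  # horizontal scaling: raise this to add worker nodes
  vmClass: best-effort-xsmall  # vertical scaling: switch to a larger VM class
```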

Exercise 1 - Add Worker Nodes to TKC [564]

To avoid confusion while navigating through the lab it is recommended that you close all browsers and putty windows on the desktop
prior to starting this exercise.

Log into vSphere [565]

1. Click the Management vCenter bookmark (https://mgmt-vcenter.vcf.sddc.lab)


Login:

1. Username: administrator@vsphere.local

2. Password: VMware123!

3. Click LOGIN


Navigate to TKC [566]

We start by viewing the details of the TKC from the vSphere client. We then switch to the Linux workstation to show how easy it is for
developers to add additional worker nodes.

Navigate to the Inventory - Hosts and Clusters view

1. Click to expand mgmt-vcenter.vcf.sddc.lab > mgmt-datacenter > mgmt-cluster

2. Click to expand Namespaces > ns01 > tkc01

Under the "ns01" namespace note the TKC cluster named "tkc01". The TKC is made up of two VMs, one control plane node and one worker node.


TKC Cluster in vSphere [567]


Note, to facilitate the lab we use a TKC with a single control plane and worker node. This configuration is not recommended for production-grade TKC deployments, which should always have three control plane VMs with multiple worker nodes.

1. Click ns01.

2. Click MANAGE NAMESPACE on the right pane.

3. Click the Compute tab

4. Click to select Tanzu Kubernetes clusters

Note the details for the TKC. The status is "Running", the version is v1.23.8, and the Control Plane address is 172.16.10.3.

Supervisor Control Plane [568]


1. Click Menu

2. Click on Workload Management

3. Click the Supervisors tab

The details for k8s-lab are shown. Note the supervisor cluster Control Plane IP address (172.16.10.2). This is the IP address the
developer will use to connect and expand the TKC.


Launch Putty [569]


Connect to the Linux workstation:

1. Click the Putty icon in the system tray

2. Click CentOS

3. Click Load

4. Click Open

◦Login: root

◦Password: VMware123!

Connect to Control Plane [570]

Log into the Kubernetes Control Plane and set the context to the "ns01" namespace:

1. kubectl vsphere login --vsphere-username sam@vsphere.local --server 172.16.10.2

2. kubectl config use-context ns01


Query Deployed TKC [571]

Query the deployed TKCs in the "ns01" namespace.

1. kubectl get tkc

Note that "tkc01" is in a running state.

Run the following commands to view details about "tkc01":

2. kubectl get virtualmachines

3. kubectl describe tkc tkc01


Edit TKC [572]

The developer sees the same information that was shown to the vSphere admin in the vSphere client. Again, we are able to observe
that the TKC currently has two VMs - one Control Plane and one Worker node.

Next, we will expand the TKC by adding a second worker node.

Run the "kubectl edit tkc/tkc01" command to edit the TKC:

1. kubectl edit tkc/tkc01


Open Editor [573]


The YAML for the TKC will open in the vi editor.

Note: The vi editor has two modes. By default, you start in “command mode”. While in this mode you use the arrow keys to scroll through the doc and are able to run search commands. To edit the contents of the file you need to switch to “insert mode”. After making changes, press the “esc” key to switch back to command mode to save the file and exit the vi editor.

•Press the escape key (esc) to verify you are in command mode.

•Search the file for the string “workers”.

◦This is done by typing “/” followed by the string “workers”. Note that the command is displayed at the bottom of the putty window. You may need to press “n” to jump to the next occurrence.


Edit TKC configuration [574]

1. Use the arrow keys to move the cursor down to the “workers:” section and position the cursor on the “1” in the string “replicas: 1”

2. Press the “r” key (r = replace) then press “2”. The “1” will be replaced by “2”.

3. Press the escape key (esc) to verify you are in command mode.

4. Type “:wq” and press enter to save the change and exit the vi editor.
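After the save, the worker pool portion of the manifest should resemble the fragment below (a sketch; surrounding fields omitted):

```yaml
# Worker node pool after the edit: one extra worker replica
nodePools:
- name: workers
  replicas: 2    # was 1; Kubernetes reconciles by deploying a second worker VM
```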


Kubernetes automatically detects the change and begins the process to add the second worker node.

Review updates [575]

To monitor the progress re-run the following commands:

1. kubectl get tkc

2. kubectl get virtualmachines

3. kubectl describe tkc tkc01


Review in vSphere [576]



Note the TKC status shows "updating" while the additional node is being added and that the TKC now has three VMs.

Return to the vSphere client to monitor the progress of the VM deployment.

1. Click the “vSphere – Clusters” browser tab to return to the vSphere client

2. Navigate to the Hosts and Clusters view

3. Open the Recent Tasks pane

It will take approximately 5 minutes for the new worker node to be deployed.


Review in Control Plane [577]

After the worker node has been deployed and powered on, return to the Putty terminal.

1. Click the Putty icon to return to the Putty terminal

Run the "kubectl describe tkc tkc01" command to confirm the TKC now has two worker nodes.

2. kubectl describe tkc tkc01


Review running TKC State [578]

Repeat the "kubectl describe tkc tkc01" command until the worker node status is "ready" (approximately 5 minutes).

Query the TKC and verify it is still in a "Ready" state.

1. kubectl get tkc tkc01

Review TKC Deployment [579]

Next, we will shrink the TKC back to its original size by reducing the number of worker nodes back down to one.

Run the following commands in the Putty Window:

1. kubectl get tkc

2. kubectl get virtualmachines


Open Editor [580]

Note the TKC currently has one control plane node and two worker nodes. Edit the configuration a second time to reduce the worker
node count to one.

1. kubectl edit tkc/tkc01


Edit TKC configuration (1/2) [581]


The YAML for the TKC will open in the vi editor.

Note: The vi editor has two modes. By default, you start in “command mode”. While in this mode you use the arrow keys to scroll through the doc and are able to run search commands. To edit the contents of the file you need to switch to “insert mode”. After making changes, press the “esc” key to switch back to command mode to save the file and exit the vi editor.

1. Press the escape key (esc) to verify you are in command mode.

2. Search the file for the string “replicas: 2”.

◦This is done by typing “/” followed by the string “replicas: 2”. Note that the command is displayed at the bottom of the putty window.


Edit TKC configuration (2/2) [582]


1. Use the arrow keys to move the cursor down to the “workers:” section and position the cursor on the “2” in the string “replicas: 2”

2. Press the “r” key (r = replace) then press “1”. The “2” will be replaced by “1”.

3. Press the escape key (esc) to verify you are in command mode.

4. Type “:wq” and press enter to save the change and exit the vi editor.

Review TKC Deployment [583]

Wait ~30 seconds and then re-run the commands to query the TKC configuration.

1. kubectl get virtualmachines

2. kubectl get tkc tkc01

Note the number of worker nodes has been reduced to one.


Review update in vSphere [584]


Return to the vSphere Client. Verify there are now two VMs in the "tkc01" TKC: one control plane node and one worker node. In the Recent Tasks pane you can see the deletion of the tkc-worker VM.

1. Click the “vSphere” browser tab to return to the vSphere client

2. Navigate to the Inventory - Hosts and Clusters view

Module 16 - Conclusion [585]

Being able to dynamically allocate capacity on demand, and add additional capacity over time is a critical capability for any modern
cloud. When running Kubernetes workloads on top of Cloud Foundation with Tanzu it's easy to not only allocate infrastructure for
hosting TKCs, but also to resize TKCs as needed.

TKCs can be expanded in two ways.

1. You can scale the TKC horizontally by adding more worker nodes.

2. You can scale the TKC vertically by increasing the size of the worker nodes.

In this exercise, you saw how to add capacity to a TKC by adding an additional worker node.

Module Key Takeaways [586]

•Being able to dynamically allocate capacity on demand, and add additional capacity over time is a critical capability of a modern private cloud.

•When running Kubernetes workloads on top of Cloud Foundation with Tanzu it's easy to allocate infrastructure for hosting TKCs and to expand the infrastructure over time.

•It's easy to add additional worker nodes to an existing TKC.

•While the developer is able to increase the size and number of virtual machines that get deployed as part of a TKC, the vSphere administrator is able to control and manage the amount of vSphere cluster resources (CPU, memory, storage) they can consume.


Module 17 - Adding Capacity to a Tanzu Kubernetes Worker Node (15 minutes) Advanced

Module 17 - Overview [588]

In this exercise, we will add capacity to a Tanzu Kubernetes Cluster (TKC) by resizing the worker nodes.

Being able to dynamically allocate capacity on demand, and later grow that capacity over time is a critical capability for any modern
cloud. When running Kubernetes workloads on top of Cloud Foundation with Tanzu it's easy to not only allocate infrastructure for
hosting TKCs, but also to resize TKCs as needed.

TKCs can be expanded in two ways.

1. You can scale the TKC horizontally by adding more worker nodes.

2. You can scale the TKC vertically by increasing the size of the worker nodes.

In this exercise, we will show how to increase the size of the TKC worker nodes.

While the developer is able to increase the size and number of virtual machines that get deployed as part of a TKC, the vSphere
administrator controls the amount of vSphere cluster resources (CPU, memory, storage) they can consume.

This module contains one exercise:

1. Add capacity to TKC Worker Nodes

It is estimated that it will take ~15 minutes to complete this exercise.

Exercise 1 - Add Capacity to TKC Worker Nodes [589]

We start by viewing the details of the TKC from the vSphere client. We then switch to the Linux workstation to show how easy it is for
developers to increase the size of the TKC worker nodes.

To avoid confusion while navigating through the lab it is recommended that you close all browsers and putty windows on the desktop
prior to starting this exercise.


Navigate to the Hosts and Clusters view

1. Click to expand mgmt-vcenter.vcf.sddc.lab > mgmt-datacenter > mgmt-cluster

2. Click to expand Namespaces > ns01 > tkc01


Log into vSphere [590]

1. Click the Management vCenter bookmark (https://mgmt-vcenter.vcf.sddc.lab)


Login:

1. Username: administrator@vsphere.local

2. Password: VMware123!

3. Click LOGIN

Navigate to TKC [591]


View Worker Node configuration [592]


Under the "ns01" namespace note the TKC cluster is named "tkc01". The TKC is made up of two VMs, one control plane node, and one
worker node.

Note that the lab uses a TKC with a single control plane and a single worker node. This configuration is not recommended for production-grade TKC deployments, which should always have three control plane VMs with multiple worker nodes.

1. Right-click on the worker node ("tkc01-workers-")

2. Click Edit Settings

Note that the worker node is currently configured with 2 vCPUs and 2GB of memory.

3. Click Cancel to close the settings window


Navigate to Workload Management [593]

We will resize the worker nodes by adding an additional 2GB of memory so that each has a total of 4GB.

Get the IP address for the Supervisor Cluster. This is the IP address the developer uses to connect to the Kubernetes Control Plane.

1. Click Menu

2. Click Workload Management.

3. Click the Supervisor Clusters tab

The details for the Supervisor Cluster "k8s-lab" are shown. Note the Control Plane IP address is 172.16.10.2.


Launch Putty [594]


Log in to the Linux workstation:

1. Click the Putty icon in the system tray

2. Click CentOS

3. Click Load

4. Click Open

◦Login: root

◦Password: VMware123!

Connect to Control Plane [595]

Log onto the Kubernetes Control Plane as the user "sam@vsphere.local" and set the context to the "ns01" namespace:

1. kubectl vsphere login --vsphere-username sam@vsphere.local --server 172.16.10.2

2. kubectl config use-context ns01


Review TKC Deployment [596]

Query the deployed TKCs in the "ns01" namespace.

1. kubectl get tkc

Note that "tkc01" is in a running state with one control plane and one worker node.

Run the following commands to view details about "tkc01":

2. kubectl get virtualmachines

3. kubectl describe tkc tkc01


Open Editor [597]

The developer sees the same information that was shown to the vSphere admin in the vSphere client. Again, we are able to observe
that the TKC currently has two VMs - one Control Plane and one Worker node.

We will now increase the size of the worker node in the TKC by changing the virtual machine "class".

Run the "kubectl edit ..." command to edit the TKC:

1. kubectl edit tkc/tkc01


Edit TKC Configuration [598]


The YAML for the TKC will open in the vi editor.

Note: The vi editor has two modes. By default, you start in “command mode”. While in this mode you use the arrow keys to scroll through the doc and are able to run search commands. To edit the contents of the file you need to switch to “insert mode”. After making changes, press the “esc” key to switch back to command mode to save the file and exit the vi editor.

1. Press the escape key (esc) to verify you are in command mode.

2. Search the file for the string “replicas: 1”.

◦This is done by typing “/” followed by the string “replicas: 1”. Note that the command is displayed at the bottom of the putty window.


1. Use the arrow keys to move the cursor down to the “workers:” section. Position the cursor on the “x” in “xsmall” in the string “vmClass: best-effort-xsmall”

2. With the cursor on the “x”, press the “x” key to delete the letter “x”. The line should now read “vmClass: best-effort-small”.

3. Press the escape key (esc) to verify you are in command mode.

4. Type “:wq” and press enter to save the change and exit the vi editor.
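After the save, the worker pool fragment of the manifest should resemble the following (a sketch; the other fields are unchanged):

```yaml
# Worker pool after the vmClass edit: larger VM class, same replica count
nodePools:
- name: workers
  replicas: 1
  vmClass: best-effort-small   # was best-effort-xsmall; triggers a rolling node replacement
```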

Review Updates [599]

Kubernetes automatically detects the change and will immediately begin work to deploy a new worker node with the increased sizing
and to remove the original worker node. The new node will replace the original worker node.

Note: The Kubernetes cluster upgrade is achieved by deploying new VMs based on the new “class” size. These new VMs are then
added to the Kubernetes Cluster, after which the original VMs (based on the old class size) are removed.

To monitor the progress of the resize operation, query the TKC cluster:

1. kubectl get tkc

2. kubectl get virtualmachines


Review Updates in vSphere [600]

Return to the vSphere client to monitor the progress of the VM deployment.

1. Click the “vSphere – Summary” browser tab

2. Navigate to the Hosts and Clusters view

A new virtual machine is deployed with the new capacity settings and joined to the Kubernetes cluster. After a few minutes, the original worker node (based on the old settings) will be removed.

This will take approximately 10 minutes. If the cloud back-end is highly congested this can take as long as 20 minutes.

Return to the Putty window after the worker node has been removed and the total virtual machine count goes back to two.


Review updates in Control Plane [601]

From the Linux workstation re-run the "kubectl describe ..." command:

1. kubectl describe tkc tkc01

Repeat the "kubectl describe tkc tkc01" command until all VMs are in a "ready" state.


Review Worker Node Configuration in vSphere [602]

Return to the vSphere Client:

1. Click the “vSphere” browser tab

2. Right-click the worker node ("tkc01-workers-")

3. Click Edit Settings


Note that the worker node now has 2 vCPU and 4GB of memory assigned (previously it had 2 vCPUs and 2GB of memory).

1. Click OK

Module 17 - Conclusion [603]

Being able to dynamically allocate capacity on demand, and later grow that capacity over time is a critical capability for any modern
cloud. When running Kubernetes workloads on top of Cloud Foundation with Tanzu it's easy to not only allocate infrastructure for
hosting TKCs, but also to resize TKCs as needed.

TKCs can be expanded in two ways.


1. You can scale the TKC horizontally by adding more worker nodes.

2. You can scale the TKC vertically by increasing the size of the worker nodes.

In this exercise, you saw how to add capacity to a TKC by increasing the size of the TKC worker nodes.

Module Key Takeaways [604]

•Being able to dynamically allocate capacity on demand, and add additional capacity over time is a critical capability of a modern private cloud.

•When running Kubernetes workloads on top of Cloud Foundation with Tanzu it's easy to allocate infrastructure for hosting TKCs and to expand the infrastructure over time.

•It's easy to resize the nodes that make up a TKC.

•While the developer is able to increase the size and number of virtual machines that get deployed as part of a TKC, the vSphere administrator is able to control and manage the amount of vSphere cluster resources (CPU, memory, storage) they can consume.


Module 18 - Upgrading a Tanzu Kubernetes Cluster (15 minutes) Advanced

Module 18 - Overview [606]

This module shows how to upgrade a Tanzu Kubernetes Cluster (TKC) running inside a vSphere Namespace.

The Supervisor Cluster provides a Kubernetes control plane from which Tanzu Kubernetes clusters are built. The Tanzu Kubernetes Grid
Service is a custom controller manager with a set of controllers that is part of the Supervisor Cluster. The purpose of the Tanzu
Kubernetes Grid Service is to provision and lifecycle manage Tanzu Kubernetes clusters.

You can provision multiple Tanzu Kubernetes clusters within a single Supervisor Cluster. The workload management functionality
provided by the Supervisor Cluster gives you control over the cluster configuration and lifecycle while allowing you to maintain
concurrency with upstream Kubernetes.

You deploy one or more Tanzu Kubernetes clusters to a vSphere Namespace. Resource quotas and storage policy are applied to a
vSphere Namespace and inherited by the Tanzu Kubernetes clusters deployed there.

When you upgrade a Tanzu Kubernetes cluster, a new control plane and worker nodes are deployed (using the newer Kubernetes
version) and swapped out with the existing nodes. After the swap, the old nodes will be removed. The nodes are updated sequentially.
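In the cluster manifest, this rolling upgrade is expressed declaratively by changing the Tanzu Kubernetes release (tkr) reference. A sketch fragment using the versions from this exercise (the full before/after manifest appears later in the exercise):

```yaml
# Sketch: an upgrade is just a new tkr reference on the control plane
# (and, likewise, on each node pool)
controlPlane:
  tkr:
    reference:
      name: v1.24.9---vmware.1-tkg.4   # was v1.23.8---vmware.3-tkg.1
```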

This module contains one exercise:

1. Upgrade a TKC Cluster

It is estimated that it will take ~15 minutes to complete this exercise.

Exercise 1 - Upgrade a TKC Cluster [607]

To avoid confusion while navigating through the lab it is recommended that you close all browsers and putty windows on the desktop
prior to starting this exercise.

We start with a deployed TKC named "tkc01" that is running Kubernetes version 1.23.8. We will upgrade the TKC to Kubernetes version 1.24.9.

Notes:

To facilitate the lab we use a TKC with a single control plane and one worker node. This configuration is not recommended for
production-grade TKC deployments, which should always have three control plane VMs with multiple worker nodes.

Prior to updating a TKC it may be necessary to first update the vSphere Namespace. The steps to do this are not covered in this
exercise. For purposes of this exercise, we assume the vSphere Namespace is up to date.


Open the Chrome browser and navigate to the Hosts and Clusters view

1. Click to expand mgmt-vcenter.vcf.sddc.lab > mgmt-datacenter > mgmt-cluster

2. Click to expand Namespaces > ns01

3. Click MANAGE NAMESPACE on the right pane


Log into vSphere [608]

1. Click the Management vCenter bookmark (https://mgmt-vcenter.vcf.sddc.lab)


Login:

1. Username: administrator@vsphere.local

2. Password: VMware123!

3. Click LOGIN

Navigate to TKC [609]


Review TKC in vSphere [610]

We see that the TKC is made up of two virtual machines: one control plane node and one worker node.

1. Click ns01

2. Click the Compute tab

3. Click Tanzu Kubernetes clusters

Note the Kubernetes version for "tkc01" is currently 1.23.8. We will upgrade "tkc01" to Kubernetes version 1.24.9.


Launch Putty [611]

Note that there are multiple ways to upgrade a TKC. In this exercise, we will use the "kubectl edit ..." command. Refer to the documentation for information on alternative upgrade methods.

Open a Putty Terminal:

1. Click the Putty icon in the system tray

2. Click CentOS

3. Click Load

4. Click Open

◦Login: root

◦Password: VMware123!


Connect to Control Plane [612]

Connect to the Kubernetes control plane running on the supervisor cluster and set the context to the vSphere namespace "ns01".

1. kubectl vsphere login --vsphere-username sam@vsphere.local --server 172.16.10.2

2. kubectl config use-context ns01


Review TKC [613]

Query the details of the deployed TKC in the "ns01" namespace.

1. kubectl get tkc

2. kubectl get virtualmachines

3. kubectl describe tkc tkc01

We see the details for the TKC named "tkc01".


Query TKC versions [614]

Next, query for the list of available Kubernetes versions. This list is derived from the available OVF templates saved to the vSphere
Content Library that is associated with the namespace.

1. kubectl get tkr

The command above uses the shortened syntax for "kubectl get tanzukubernetesreleases".

Note the available version is "v1.24.9+vmware.1-tkg.4".

Note that the available Kubernetes versions are determined by querying the available Kubernetes OVF images that have been saved in the vSphere content library. If you don't see the version you expect, check the content library settings and ensure the necessary OVF image is present.


Open Editor [615]

Edit the "tkc01" cluster and change the distribution version in the .spec.distribution.version and .spec.distribution.fullVersion properties
of the cluster manifest.

1. kubectl edit tkc/tkc01

The YAML for the TKC will open in the vi editor.


Note: The vi editor has two modes. By default, you start in “command mode”. While in this mode you use the arrow keys to scroll through the doc and are able to run search commands. To edit the contents of the file you need to switch to “insert mode”. After making changes, press the “esc” key to switch back to command mode to save the file and exit the vi editor.

1. Press the escape key (esc) to verify you are in command mode.

2. Search the file backwards (from the end of the file toward the top) for the string “controlPlane”.

◦This is done by typing “?” followed by the string “controlPlane”. Note that the command is displayed at the bottom of the putty window.


Edit TKC Configuration [616]


Edit the manifest by changing the tkr reference name under both controlPlane and nodePools.

To set the name under controlPlane to “v1.24.9---vmware.1-tkg.4”:

•Use the arrow keys to position the cursor on the first letter (“v”) in the string: “v1.23.8---vmware.3-tkg.1”

•Press “x” to delete each character in the string “v1.23.8---vmware.3-tkg.1”

•Press “i” to enter input mode and type the string “v1.24.9---vmware.1-tkg.4”

•Press “esc” to exit input mode

To set the name under nodePools to “v1.24.9---vmware.1-tkg.4”:

•Use the arrow keys to position the cursor on the first letter (“v”) in the string: “v1.23.8---vmware.3-tkg.1”

•Press “x” to delete each character in the string “v1.23.8---vmware.3-tkg.1”

•Press “i” to enter input mode and type the string “v1.24.9---vmware.1-tkg.4”

•Press “esc” to exit input mode

Type “:wq” and press enter to save the change and exit the vi editor.

Example:

From:

topology:
  controlPlane:
    replicas: 1
    storageClass: k8s-storage-policy
    tkr:
      reference:
        name: v1.23.8---vmware.3-tkg.1
    vmClass: guaranteed-xsmall
  nodePools:
  - name: workers
    replicas: 1
    storageClass: k8s-storage-policy
    tkr:
      reference:
        name: v1.23.8---vmware.3-tkg.1
    vmClass: best-effort-xsmall

To:

topology:
  controlPlane:
    replicas: 1
    storageClass: k8s-storage-policy
    tkr:
      reference:
        name: v1.24.9---vmware.1-tkg.4
    vmClass: guaranteed-xsmall
  nodePools:
  - name: workers
    replicas: 1
    storageClass: k8s-storage-policy
    tkr:
      reference:
        name: v1.24.9---vmware.1-tkg.4
    vmClass: best-effort-xsmall


Review Updates [617]

You are notified that "tkc01" has been edited.

Monitor the progress of the upgrade:

1. kubectl get tkc tkc01

2. kubectl describe tkc tkc01

Review Updates in vSphere [618]

You can also monitor the upgrade progress from the vSphere client.

Return to the vSphere Client


Watch as the new VMs are deployed and the old VMs removed. Monitor the upgrade from the vSphere client. Wait until both the
control plane and worker nodes have been upgraded.

It will take approximately 10 minutes for both the control plane and worker nodes to be upgraded. If the cloud back-end is highly
congested this can take as long as 40 minutes.

1. Within the Hosts and Clusters view, expand mgmt-vcenter.vcf.sddc.lab -> mgmt-datacenter -> mgmt-cluster

2. Expand Namespaces -> ns01 -> tkc01

3. Click MANAGE NAMESPACE on the right pane


1. Click on the Compute tab.

2. Expand VMware Resources and click Tanzu Kubernetes clusters.

When the phase status changes from Upgrading to Running with the target version of v1.24.9, the upgrade deployment is complete. The Kubernetes cluster itself may still be finishing its upgrade, which we will monitor from the console.

Verify new version [619]

Confirm the TKC has been upgraded and is now running Kubernetes version 1.24.9.

•Click the Putty icon to return to the Putty terminal

Enter the command:

1. kubectl get tkc

The new version can also be verified from the Linux workstation. We have also confirmed there are no additional updates available.


Module 18 - Conclusion [620]

This module showed how to upgrade a Tanzu Kubernetes Cluster (TKC) inside a vSphere Namespace.

When you upgrade a Tanzu Kubernetes cluster, new virtual machines get deployed with the newer Kubernetes version. These new
nodes replace the existing nodes, which are subsequently removed. Both the control plane and worker nodes are updated using this
approach. The nodes are updated sequentially.

Module Key Takeaways [621]

•When you upgrade a Tanzu Kubernetes cluster, a new control plane and worker nodes are deployed (using the newer Kubernetes version) and swapped out with the existing nodes. After the swap, the old nodes will be removed. The nodes are updated sequentially.


Module 19 - Enabling the Embedded Image Registry (30 minutes) Advanced

Module 19 - Overview [623]

VMware Cloud Foundation with Tanzu comes with an embedded image registry that can be used to store and serve container images.

The embedded registry provides a secure image repository from which administrators and developers can push and pull the container
images that will be deployed as vSphere Pod VMs and inside Tanzu Kubernetes Clusters.

A separate registry is deployed for each supervisor cluster. A Harbor Project automatically gets created inside the registry for each
vSphere namespace.

Developers must copy the registry's root SSL certificate to their workstation and use the "docker-credential-vsphere" command to
authenticate. Once authenticated, they use docker commands to download, tag, and push images.

Notes:

• VMware NSX is required to use the embedded registry.

• The embedded registry is different from the open-source Harbor Registry project. The embedded registry has a limited feature set and is tailored for use with vSphere with Tanzu and deploying vSphere Pod VMs.

Exercise 1 - Enable the Embedded Image Registry [624]

To avoid confusion while navigating through the lab, it is recommended that you close all browser and Putty windows on the desktop prior to starting this exercise.

Log into vSphere [625]

1. Click the Management vCenter bookmark (https://mgmt-vcenter.vcf.sddc.lab)


Login:

1. Username: administrator@vsphere.local

2. Password: VMware123!

3. Click LOGIN


Enable Harbor [626]

Use the vSphere Client to enable the embedded image registry by following the steps below.

Navigate to the Hosts and Clusters view

1. Click on the home navigation button (3 lines)

2. Select Workload Management

3. Click on the k8s-lab supervisor cluster


1. Click on the Configure tab.

2. Click on Image Registry

3. Click on ENABLE HARBOR


For this exercise, we will be using the default k8s-storage-policy storage policy for Harbor components. It will take ~5-10 minutes for
the registry to be created.

1. Select the k8s-storage-policy

2. Click OK

Review Harbor Deployment [627]


The embedded registry runs on vSphere Pod VMs inside a special vSphere namespace. To view the vSphere Pod VMs:

1. Navigate to the Hosts and Clusters view

2. Expand mgmt-vcenter.vcf.sddc.lab -> mgmt-datacenter -> mgmt-cluster -> Namespaces

3. Click to expand vmware-system-registry-###...

The embedded registry comprises seven vSphere Pod VMs. Click through the list of vSphere Pod VMs to view details about each. The namespace and Pod VMs are created when the image registry is enabled. These objects are automatically removed if/when the registry is disabled.


View Harbor IP / URL [628]

The embedded registry is accessed and managed using the Harbor UI. A link to the UI is available from the Image Registry tab.

Navigate back to the Image Registry view on the vSphere Client:

1. Click on the home navigation button (3 lines)

2. Select Workload Management

3. Click on the k8s-lab supervisor cluster


1. Click on the Configure tab.

2. Click Image Registry on the left navigation menu

3. When the registry is up and running, the status will change to Running

4. Click the https://172.16.10.4 link to access the Harbor UI (note that the IP may be different in your lab)


Login to Harbor [629]


The registry UI opens in a new browser tab and you are prompted to log in. The embedded image registry uses vCenter SSO for user
authentication.

If prompted about the connection not being private, click Advanced and proceed to https://172.16.10.4.

1. Log in with the following credentials:

Username: sam@vsphere.local

Password: VMware123!

2. Click LOG IN

Review Harbor UI [630]

We see a project named "ns01". This project corresponds to the vSphere "ns01" namespace. Projects in Harbor are automatically
created for each new vSphere namespace.


Create Second Namespace [631]

We will now create a second namespace named "ns02" to demonstrate its effect on the Harbor registry from the vSphere UI.

1. Click on the vSphere client browser tab

2. Click on the home navigation button (3 lines)

3. Select Workload Management


1. Within the Workload Management page, click on Namespaces on the left navigation pane

2. Click on the Namespaces tab

3. Click NEW NAMESPACE


1. Select the k8s-lab Supervisor cluster

2. Enter ns02 for the name of the new namespace

3. Click CREATE


Add Permissions [632]

We will configure developer access to the new namespace.

1. Ensure you have selected the ns02 namespace

2. Under the Permissions widget, click ADD PERMISSIONS


1. Set "Identity source" to vsphere.local

2. Set "User/Group Search" to devteam

3. Set "Role" to Can edit

4. Click OK


Add Storage [633]

We will add storage for the ns02 namespace by selecting a storage policy.

In this lab, we will not be adding any capacity limits, but this can be done by editing limits from the Capacity and Usage widget.

1. Ensure you have selected the ns02 namespace

2. Under the Storage widget, click ADD STORAGE


1. Select the k8s-storage-policy checkbox

2. Click OK


View Project (Namespace) in Harbor UI [634]

1. Return to the Harbor tab in your browser.

2. Click on the browser refresh button. If prompted, log in again with the following credentials:

Username: sam@vsphere.local

Password: VMware123!

3. Confirm the newly created namespace ns02 has been automatically created in the Harbor registry as a Project with the matching name


Download Certificate [635]


To push and pull images to and from the embedded registry, you first need to enable access. This is done by downloading the SSL
certificate and using the "docker-credential-vsphere" command to create an authentication token.

To download the SSL certificate from the Harbor UI:

1. Click on the ns01 project

2. Select the Repositories tab

3. Click REGISTRY CERTIFICATE to download the certificate for this repository

4. The certificate is saved with the file name "ca.crt" to the "Downloads" folder.


SFTP to Developer Workstation [636]

Next, copy the "ca.crt" certificate to the /root folder on the Linux workstation. In the lab, we will do this once; in a customer environment, this step would need to be repeated on each developer's workstation where images will be pushed.

1. Launch WinSCP (Windows Secure Copy) from the taskbar

2. Populate the session login information as follows:

File protocol: SFTP

Host name: 10.0.0.3

User name: root

Password: VMware123!

3. Click Login


Copy Certificate to Developer Workstation [637]

1. On the left pane, navigate to C:\Users\Administrator\Downloads

2. On the right pane, navigate to /root

3. Drag the ca.crt file from the Windows download directory to the /root directory on the Linux Workstation.


Connect to Developer Workstation [638]


Connect to the developer workstation over SSH:

1. Click the Putty icon in the system tray

2. Click CentOS

3. Click Load

4. Click Open

◦ Login: root

◦ Password: VMware123!

Create Directory [639]

Run the following commands on the Linux workstation:

1. cd /etc/docker/certs.d

2. ls -l

3. mkdir 172.16.10.4

Note the directory "172.16.10.4" corresponds to the IP assigned to the embedded registry (this is the same IP used to connect to the
Harbor UI). You may need to use a different IP in your lab.


Copy Certificate [640]

Copy the ca.crt file from the /root directory into this directory:

1. cd /etc/docker/certs.d/172.16.10.4

2. cp /root/ca.crt .

3. ls -l
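The directory-creation and copy steps above can be combined into one small script. This is a hedged sketch: the PREFIX variable and placeholder ca.crt are assumptions that let it run outside the lab; on the real workstation you would drop the prefix and use the certificate downloaded from Harbor.

```shell
# Create the per-registry certificate directory Docker expects and copy the CA
# certificate into it. PREFIX and the placeholder ca.crt are assumptions that
# make this runnable outside the lab; drop PREFIX on the real workstation.
PREFIX=/tmp/demo
REGISTRY_IP=172.16.10.4            # must match your lab's registry IP
mkdir -p "$PREFIX/etc/docker/certs.d/$REGISTRY_IP"
touch "$PREFIX/ca.crt"             # stands in for the downloaded certificate
cp "$PREFIX/ca.crt" "$PREFIX/etc/docker/certs.d/$REGISTRY_IP/ca.crt"
ls "$PREFIX/etc/docker/certs.d/$REGISTRY_IP"
```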


Create Token [641]

Next, use the "docker-credential-vsphere" command to authenticate to the Harbor registry and create a token.

For the lab, the "docker-credential-vsphere" tool has already been downloaded to the Linux workstation. Note that it uses vCenter SSO
for user authentication.

1. Execute the command /usr/local/bin/docker-credential-vsphere login 172.16.10.4

2. When prompted, authenticate with the following credentials:

Username: sam@vsphere.local

Password: VMware123!

3. On successful login you will see the following output:

INFO[0009] Fetched auth token
INFO[0009] Saved auth token


List Pulled Images [642]

With the embedded registry enabled, the certificate copied to the developer's workstation, and a token successfully created, we are now ready to upload images.

Note that a valid Docker account is needed to pull images from Docker Hub; as such, we will not pull the images in the lab. The images have already been downloaded. The steps used to do this are shown below for informational purposes.

Do not run these commands in the lab. They are for reference only, intended to show how the images were pulled from Docker Hub and pushed to the embedded registry.

Login to Docker

docker login -u <username>

•Enter Username: [docker username]

•Enter Password: [password]

Download Images:

•# docker pull nginx:latest

•# docker pull wordpress:4.8-apache

•# docker pull mysql:5.6

Resume the lab from here

List the pulled images

1. Run the command

docker images


Tag Images [643]

Tag Images:

1. docker tag nginx:latest 172.16.10.4/ns01/nginx:latest

2. docker tag wordpress:4.8-apache 172.16.10.4/ns01/wordpress:4.8-apache

3. docker tag mysql:5.6 172.16.10.4/ns01/mysql:5.6


Push Images to Harbor [644]


Push Images to Harbor:

1. docker push 172.16.10.4/ns01/nginx:latest

2. docker push 172.16.10.4/ns01/wordpress:4.8-apache

3. docker push 172.16.10.4/ns01/mysql:5.6
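The three tag/push pairs follow one pattern, so they can be expressed as a loop. The sketch below builds and prints the command plan rather than executing docker, so it is safe to run anywhere; the registry path matches the lab's values.

```shell
# Build the tag/push command plan for each image. Printing instead of
# executing keeps this sketch side-effect free; to run for real, execute each
# line instead of collecting it.
REGISTRY=172.16.10.4/ns01
plan=""
for img in nginx:latest wordpress:4.8-apache mysql:5.6; do
  plan="$plan
docker tag $img $REGISTRY/$img
docker push $REGISTRY/$img"
done
echo "$plan"
```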

View Images in Harbor UI [645]

1. Return to the Harbor tab in your browser.

2. Click on the browser refresh button. If prompted, log in again with the following credentials:

Username: sam@vsphere.local

Password: VMware123!

3. If not already in the ns01 project view as shown, click on ns01 to enter the ns01 project view.

4. Click on the Repositories tab

5. Here we see the three images uploaded to the "ns01" project. Developers can use these images to deploy containers on the Supervisor Cluster and inside Tanzu Kubernetes Clusters.


Module 20 - Conclusion [646]

VMware Cloud Foundation with Tanzu comes with an embedded image registry that provides a secure image repository from which
administrators and developers can push and pull the container images deployed as vSphere Pod VMs and inside Tanzu Kubernetes
Clusters.

A separate registry is deployed for each supervisor cluster. A Harbor Project automatically gets created inside the registry for each
vSphere namespace.

Developers must copy the registry's root SSL certificate to their workstation and use the "docker-credential-vsphere" command to
authenticate. Once authenticated, they then use docker commands to download, tag, and push images.

Module Key Takeaways [647]

• The embedded registry provides a secure image repository from which administrators and developers can push and pull the container images that will be deployed as vSphere Pod VMs and inside Tanzu Kubernetes Clusters.

• A separate registry is deployed for each supervisor cluster. A Harbor Project automatically gets created inside the registry for each vSphere namespace.

• Developers must copy the registry's root SSL certificate to their workstation and use the "docker-credential-vsphere" command to authenticate. Once authenticated, they use docker commands to download, tag, and push images.

• VMware NSX is required to use the embedded Harbor Registry.

• The embedded Harbor registry is different from the open-source Harbor Registry project. The embedded version has a limited feature set and is tailored for use with vSphere with Tanzu and deploying vSphere Pod VMs.


Module 21 - Use Helm to Deploy a Sample Application (15 minutes) Advanced

Module 21 - Overview [649]

In this module we demonstrate how a developer using Helm charts (https://helm.sh/) can quickly deploy a sample container-based application inside a Tanzu Kubernetes Cluster (TKC) running on a vSphere cluster that is part of a Cloud Foundation workload domain.

In this exercise we will use the open-source OpenCart application, freely available from the Bitnami Application Catalog (https://bitnami.com/stack/opencart). We will deploy OpenCart inside an existing TKC (tkc01).

To avoid confusion while navigating through the lab, it is recommended that you close all browser and Putty windows on the desktop prior to starting this exercise.

Exercise 1: Deploy OpenCart using Helm [650]

Next, run “kubectl get pods” to verify the OpenCart application is running. Wait for both pods to enter a “running” state.

1. kubectl get pods


Connect to Developer Workstation [651]


If not already connected, connect to the developer workstation over SSH:

1. Click the Putty icon in the system tray

2. Click CentOS

3. Click Load

4. Click Open

◦ Login: root

◦ Password: VMware123!

Add Bitnami Repository [652]


Next, run the helm repo add command to add the public Bitnami repository https://charts.bitnami.com/bitnami to the local Helm
configuration. Then run the helm repo list command to verify the repository was successfully added.

Commands:

1. helm repo add bitnami https://charts.bitnami.com/bitnami

2. helm repo list

Login to the Tanzu Kubernetes Cluster [653]


With the Helm repository added, we are ready to login to the TKC and deploy the OpenCart chart.

Run the kubectl vsphere login command to log on to the Kubernetes control plane, followed by the kubectl config use-context command to set the context to tkc01:

Commands:

1. kubectl vsphere login --vsphere-username sam@vsphere.local --server 172.16.10.2 --insecure-skip-tls-verify --tanzu-kubernetes-cluster-namespace ns01 --tanzu-kubernetes-cluster-name=tkc01

2. kubectl config use-context tkc01

Begin OpenCart Installation [654]


Once we have successfully authenticated and set our context to "tkc01", we are ready to use Helm to deploy the OpenCart chart. To do this, run helm install, providing a name for the OpenCart instance (myopencart) along with the path to the chart in the Bitnami repository (bitnami/opencart).

Commands:

1. helm install myopencart bitnami/opencart

Helm starts the deployment of the OpenCart chart.

Confirm Mariadb is Running [655]

Next, run the kubectl get pods command and verify that the mariadb pod is running. Note that in some situations it can take three or four minutes for the mariadb pod to enter a running state. Re-run kubectl get pods until the status shows Running.

1. kubectl get pods
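Rather than re-running the command by hand, the wait can be scripted as a polling loop. This is a self-contained sketch: check_pods is a stand-in for the live `kubectl get pods` call and returns a canned line so the loop runs anywhere.

```shell
# Poll until the mariadb pod reports Running. check_pods is a stand-in for
# `kubectl get pods`; swap it for the real command in the lab.
check_pods() { echo "myopencart-mariadb-0   1/1   Running   0   2m"; }
until check_pods | grep -q 'mariadb.*Running'; do
  sleep 5
done
echo "mariadb is running"
```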


Run the Extra Commands [656]


Once the mariadb pod is in a running state, paste the four export commands into the Linux shell. These commands query for the passwords and assign them to four environment variables: APP_HOST, APP_PASSWORD, DATABASE_ROOT_PASSWORD, and APP_DATABASE_PASSWORD.

Note that when installing the OpenCart chart, Helm presents us with several additional commands that are needed to complete the
deployment.

Note: Helm begins by first deploying a mariadb pod. As part of this pod deployment, unique passwords are generated. The additional
commands are needed to (1) query for the generated passwords and assign them to environment variables in the Linux shell so we can
(2) run the helm upgrade command to deploy the frontend pod and complete the installation.

Click and drag each of the following to the putty window.

1. export APP_HOST=$(kubectl get svc --namespace default myopencart --template "{{ range (index .status.loadBalancer.ingress 0) }}{{ . }}{{ end }}")

2. export APP_PASSWORD=$(kubectl get secret --namespace default myopencart -o jsonpath="{.data.opencart-password}" | base64 -d)

3. export DATABASE_ROOT_PASSWORD=$(kubectl get secret --namespace default myopencart-mariadb -o jsonpath="{.data.mariadb-root-password}" | base64 -d)

4. export APP_DATABASE_PASSWORD=$(kubectl get secret --namespace default myopencart-mariadb -o jsonpath="{.data.mariadb-password}" | base64 -d)
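Before running helm upgrade, it is worth confirming that all four variables were actually populated. The sketch below sets placeholder values so it is runnable standalone; in the lab the exports above provide the real values and the placeholder lines would be omitted.

```shell
# Verify the four environment variables are non-empty before the upgrade.
# The placeholder assignments are assumptions for a standalone run; omit them
# in the lab, where the kubectl-based exports set the real values.
APP_HOST=172.16.10.5
APP_PASSWORD=placeholder
DATABASE_ROOT_PASSWORD=placeholder
APP_DATABASE_PASSWORD=placeholder
for v in APP_HOST APP_PASSWORD DATABASE_ROOT_PASSWORD APP_DATABASE_PASSWORD; do
  [ -n "$(eval echo "\$$v")" ] && echo "$v is set"
done
```

An empty variable here usually means the corresponding pod or secret was not ready yet when the export ran.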

Run Helm Upgrade Command [657]


Finally, run the helm upgrade command, passing in the names of the environment variables containing the passwords.

In the putty window, scroll up to find the helm upgrade command that was displayed in the output of the helm install command. Highlight the helm upgrade command to copy it to the clipboard. You can also click and drag the following command into the Putty window.

1. helm upgrade --namespace default myopencart bitnami/opencart --set opencartHost=$APP_HOST,opencartPassword=$APP_PASSWORD,mariadb.auth.rootPassword=$DATABASE_ROOT_PASSWORD,mariadb.auth.password=$APP_DATABASE_PASSWORD

2. Copy the URL or take note of it. In this case the URL would be http://172.16.10.5/, however you may have a slightly different URL provided.

*Note: if you did not use myopencart as the app name, you will need to update that part of the command.

Confirm Application is Running [658]


Access Application [659]

With the OpenCart application running, we are ready to access the shopping cart.

Double click the Google Chrome icon on the desktop to open the Chrome browser. (not pictured)

1. In the browser paste the URL for the OpenCart application (copied from the output of the helm upgrade command).

We are connected to the OpenCart web server.


View Additional Details about the Application Deployed [660]

With the OpenCart application running, we can use the kubectl and helm commands to view additional details about the container-
based application.

Return to your putty application

Commands:

1. kubectl get pods

2. kubectl get svc

3. kubectl get pvc

4. helm list

From the above outputs we can see additional information such as the Cluster IP, the Load Balancer, the ports in use, the PVCs consumed, and finally all applications deployed via Helm.
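The external IP shown by `kubectl get svc` can also be extracted programmatically. In the sketch below, parse_svc is a stand-in that returns a canned line matching the lab's service layout so the snippet runs anywhere; replace it with the live `kubectl get svc myopencart` call in the lab (your external IP may differ).

```shell
# Extract the LoadBalancer IP for the OpenCart service and build its URL.
# parse_svc stands in for `kubectl get svc myopencart --no-headers`; the
# sample values mirror the lab output.
parse_svc() { echo "myopencart   LoadBalancer   10.96.0.12   172.16.10.5   80:30080/TCP   5m"; }
ip=$(parse_svc | awk '{print $4}')
echo "OpenCart URL: http://$ip/"
```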

Module 21 - Conclusion [661]

In this module we used Helm to deploy the OpenCart application, available from the Bitnami Application Catalog (https://bitnami.com/
stack/opencart), in order to show how easy it is for a developer to deploy a container-based application inside a Tanzu Kubernetes
Cluster (TKC) running on a vSphere cluster that is part of a Cloud Foundation workload domain.

Module Key Takeaways

• TKCs are fully conformant Kubernetes clusters that are easily accessed using existing developer tools (e.g., Helm).

• Developers do not require any vSphere knowledge or skills to deploy, configure, and run container-based workloads inside a TKC.

• TKCs are an ideal place for developers to develop, run, and deploy container-based workloads.


Conclusion

Conclusion [663]

You have reached the end of our lab on Optimizing and Modernizing Data Centers powered by VMware Cloud Foundation. Please take a few minutes to provide feedback on your experience taking the lab, as this will help with future updates.

VMware, Inc. 3401 Hillview Avenue Palo Alto CA 94304 USA Tel 877-486-9273 Fax 650-427-5001 vmware.com.
Copyright © 2024 VMware, Inc. All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws. VMware products are covered by one or
more patents listed at vmware.com/go/patents. VMware is a registered trademark or trademark of VMware, Inc. and its subsidiaries in the United States and other jurisdictions. All other
marks and names mentioned herein may be trademarks of their respective companies. Lab SKU: HOL-2446-05-HCI Version: 20240112-193134
