
HPE Solutions with VMware

Rev. 23.21
© Copyright 2023 Hewlett Packard Enterprise Development LP
The information contained herein is subject to change without notice. The only warranties for
Hewlett Packard Enterprise products and services are set forth in the express warranty
statements accompanying such products and services. Nothing herein should be construed
as constituting an additional warranty. Hewlett Packard Enterprise shall not be liable for
technical or editorial errors or omissions contained herein.
This is a Hewlett Packard Enterprise copyrighted work that may not be reproduced without
the written permission of Hewlett Packard Enterprise. You may not use these materials to
deliver training to any person outside of your organization without the written permission of
Hewlett Packard Enterprise.
Microsoft, Windows, and Windows Server are registered trademarks of Microsoft
Corporation in the United States and other countries.

Printed in the United States of America


HPE Solutions with VMware
Rev. 23.21
Contents

Course Map .................................................................................................................................................. 1

Module 1: Overview of HPE VMware Solutions


Learning objectives .................................................................................................................................... 2
Overview of VMware solutions .................................................................................................................. 3
VMware vSphere ..................................................................................................................................... 4
VMware software-defined solutions ......................................................................................................... 5
Software-defined storage (SDS) ......................................................................................................... 5
Software-defined networking (SDN) ................................................................................................... 5
VMware Tanzu Kubernetes Grid ............................................................................................................. 6
VMware Cloud Foundation (VCF) ............................................................................................................ 7
VCF Components .................................................................................................................................... 8
SDDC Manager .................................................................................................................................. 8
vSphere (compute) ............................................................................................................................. 8
vSAN (storage) ................................................................................................................................... 8
NSX (networking) ................................................................................................................................ 8
Aria Suite (formerly vRealize) ............................................................................................................. 9
Overview of HPE solutions for VMware .................................................................................................. 10
HPE ProLiant DL servers for VMware ................................................................................................... 11
HPE Synergy ......................................................................................................................................... 12
Fluid resource pools ......................................................................................................................... 12
Unified API ........................................................................................................................................ 13
Software-defined intelligence ........................................................................................................... 13
HPE storage solutions for VMware ........................................................................................................ 14
Zerto for VMware ................................................................................................................................... 15
HPE hyperconverged solutions ............................................................................................................. 16
HPE GreenLake cloud services for VMware ......................................................................................... 17
HPE GreenLake Edge-to-Cloud Platform .............................................................................................. 18


HPE Aruba Networking Central ........................................................................................................ 18


Data Services Cloud Console ........................................................................................................... 18
HPE GreenLake Compute Ops Management .................................................................................. 18
HPE GreenLake Central ................................................................................................................... 19
VMware Compatibility Guide.................................................................................................................. 20
HPE and VMware together .................................................................................................................... 21
Why HPE and VMware .......................................................................................................................... 22
Why HPE................................................................................................................................................ 23
Security ............................................................................................................................................. 23
Performance ..................................................................................................................................... 23
Value adds ........................................................................................................................................ 23
Hardware and software solution with unified support experience .................................................... 23
Use cases addressed by VMware on HPE ............................................................................................ 24
Strategic change ............................................................................................................................... 24
Hybrid or multi-cloud ......................................................................................................................... 24
Hybrid workplace solutions ............................................................................................................... 25
Use cases addressed by VMware on HPE ............................................................................................ 26
Secure workloads and data .............................................................................................................. 26
Apps development ............................................................................................................................ 26
AI-Ready Enterprise ......................................................................................................................... 26
Activity ....................................................................................................................................................... 27
Customer scenario: Financial Services 1A ............................................................................................ 28
Activity 1 ................................................................................................................................................. 29
Company pain points ........................................................................................................................ 30
Your task ........................................................................................................................................... 30
Summary .................................................................................................................................................... 32
Learning Checks ....................................................................................................................................... 33

Module 2: Design HPE Compute for VMware


Learning objectives .................................................................................................................................. 35
General design guidelines ....................................................................................................................... 36
Overview of the design process ............................................................................................................. 37


Gathering information: Migrating an existing vSphere deployment ....................................................... 38


VM profiles ........................................................................................................................................ 38
Subscription expectations ................................................................................................................. 38
Cluster plans ..................................................................................................................................... 39
Current host profiles ......................................................................................................................... 39
Growth requirements ........................................................................................................................ 39
Gathering information: Migrating from physical machines..................................................................... 40
Gathering information: Considering workloads ...................................................................................... 41
Workload types ................................................................................................................................. 41
How do you get the information? ........................................................................................................... 43
HPE CloudPhysics ............................................................................................................................ 43
HPE Assessment Foundry Strategic Assessment Framework (SAF) .............................................. 43
HPE CloudPhysics ................................................................................................................................. 44
Overall process for using HPE CloudPhysics ........................................................................................ 45
Simple, customer-driven process for customers with Data Services Cloud Console ............................ 46
HPE CloudPhysics Observer requirements ........................................................................................... 47
Data collection with HPE CloudPhysics ................................................................................................. 48
SAF-based assessment process overview ............................................................................................ 49
Collection process .................................................................................................................................. 51
Supplemental material on using SAF Collector ................................................................................ 51
Collecting Windows performance data .................................................................................................. 53
Supplemental information: Collecting VMware performance data ................................................... 53
Uploading from SAF Collector ............................................................................................................... 55
Exporting data from SAF Analyze .......................................................................................................... 56
SAF integration with HPE sizers ............................................................................................................ 58
HPE Assessment Foundry training ........................................................................................................ 59
Common sizing tools overview .............................................................................................................. 60
HPE SSET ............................................................................................................................................. 61
Best practices for VMware on HPE ......................................................................................................... 62
Use VMware ESXi images for HPE servers .......................................................................................... 63
OS Support tool for HPE Synergy ......................................................................................................... 64
Leverage workload optimization profiles ................................................................................................ 65
Leverage HPE compute security ........................................................................................................... 66
HPE Gen11 security innovations ........................................................................................................... 67
TPM and Secure Boot integration with VMware vCenter ...................................................................... 68
Additional HPE ProLiant security capabilities ........................................................................................ 69
Consider VMware licensing ................................................................................................................... 70


Designing HPE ProLiant DL servers for VMware .................................................................................. 71


Reviewing the HPE ProLiant DL Portfolio (AMD and Ampere-based servers) ..................................... 72
Reviewing the HPE ProLiant DL Portfolio (Intel-based servers) ........................................................... 73
Positioning HPE ProLiant DL servers based on workload..................................................................... 74
Gen11 versus Gen10 and Gen10 Plus............................................................................................. 74
Introducing HPE ProLiant for vSphere Distributed Services Engine ..................................................... 75
Pre-configured bundles..................................................................................................................... 75
Licensing ........................................................................................................................................... 76
Why customers need data processing units (DPUs) ............................................................................. 77
Why the AMD Pensando DPU ............................................................................................................... 78
HPE ProLiant for vSphere Distributed Services Engine ........................................................................ 79
Key benefits of HPE ProLiant for vSphere Distributed Services Engine ............................................... 80
RAs for VMware solutions on HPE ProLiant.......................................................................................... 82
Designing HPE Synergy solutions for VMware...................................................................................... 83
Designing ESXi clusters for HPE Synergy............................................................................................. 84
Leveraging HPE Synergy templates ...................................................................................................... 85
Step 1................................................................................................................................................ 85
Step 2................................................................................................................................................ 85
Step 3................................................................................................................................................ 85
Step 4................................................................................................................................................ 85
Step 5................................................................................................................................................ 85
Technical guidance for VMware solutions on HPE Synergy ................................................................. 86
Learning checks ........................................................................................................................................ 87
Activity ....................................................................................................................................................... 88
Activity 2.1 .............................................................................................................................................. 89
Task 1 ............................................................................................................................................... 89
Task 2 ............................................................................................................................................... 91
Task 3 ............................................................................................................................................... 95
Understanding VCF on HPE ..................................................................................................................... 97
VCF SDDC Manager and domains ........................................................................................................ 98
Management domain ........................................................................................................................ 98
Virtual infrastructure workload domain ............................................................................................. 98
VCF architecture: Standard model ........................................................................................................ 99
Guidelines for deploying VCF ........................................................................................................... 99
VCF architecture: Consolidated model ................................................................................................ 101
Why VCF on HPE Synergy? ................................................................................................................ 102
RAs for VCF on HPE ........................................................................................................................... 103


HPE solutions to enhance management and monitoring ................................................................... 104


HPE compute management portfolio ................................................................................................... 105
HPE plug-ins to simplify management ................................................................................................. 106
HPE OV4VC benefits ........................................................................................................................... 107
Simplify operations and increase productivity ................................................................................ 107
Deploy faster ................................................................................................................................... 107
Impose configuration consistency .................................................................................................. 108
Increase visibility into environment ................................................................................................. 108
HPE OV4VC: Server only integration .................................................................................................. 109
HPE OV4VC licensing and managed devices ..................................................................................... 110
Licensing ......................................................................................................................................... 110
Managed devices ............................................................................................................................ 110
HPE OV4VC features .......................................................................................................................... 111
OV4VC Views ................................................................................................................................. 111
Enhanced Link Mode ...................................................................................................................... 111
OS deployment ............................................................................................................................... 111
Cluster-related features .................................................................................................................. 112
Cluster remediation ........................................................................................................................ 112
Proactive HA .................................................................................................................................. 112
Grow cluster ................................................................................................................................... 112
HPE OneView HSM plug-in for vLCM ................................................................................................. 113
Example of using the HPE OneView HSM plug-in for vLCM............................................................... 114
HPE OneView for VMware vRealize Log Insight ................................................................................. 115
HPE OneView for VMware vRealize Operations (HPE OV4VROPS) ................................................. 116
HPE OneView for vRealize Orchestrator (HPE OV4VRO) .................................................................. 117
HPE OneView Connector for VCF ....................................................................................................... 118
More automation benefits with HPE .................................................................................................... 119
HPE InfoSight: Industry’s most advanced AI for infrastructure............................................................ 120
Benefits of HPE InfoSight VMware integration .................................................................................... 121
HPE RESTful API ................................................................................................................................ 122
HPE RESTful API and Redfish conformance ................................................................................. 122
HPE DEV ............................................................................................................................................. 123
Activity ..................................................................................................................................................... 124
Activity 2.2 ............................................................................................................................................ 125
Summary .................................................................................................................................................. 126
Learning checks ...................................................................................................................................... 127


Module 3: Design HPE Storage Solutions for VMware


Learning objectives ................................................................................................................................ 129
Introduction to VMware storage ............................................................................................................ 130
Evolution of VMware storage integration ............................................................................................. 131
VMFS .............................................................................................................................................. 131
VAAI ................................................................................................................................................ 131
VASA .............................................................................................................................................. 132
vSAN ............................................................................................................................................... 132
vVols ............................................................................................................................................... 132
Overview of typical VMware storage options ....................................................................................... 133
VMware vSAN on HPE solutions ........................................................................................................... 134
VMware vSAN overview ...................................................................................................................... 135
VMware ESA versus OSA ................................................................................................................... 136
OSA versus ESA considerations ......................................................................................................... 137
HPE’s approach to vSAN ReadyNodes ............................................................................................... 138
Designing vSAN on HPE ProLiant DL servers .................................................................................... 139
Additional requirements for ESA deployments .................................................................................... 140
vSAN on HPE Synergy ........................................................................................................................ 141
Why HPE Synergy for vSAN ................................................................................................................ 142
Disaggregated compute and storage ............................................................................................. 142
Single infrastructure for any workload ............................................................................................ 142
High-speed interconnect between frames ...................................................................................... 142
Reduced complexity and cost ......................................................................................................... 142
HPE Synergy D3940—Ideal platform for vSAN: Flexibility .................................................................. 143
HPE Synergy D3940—Ideal platform for vSAN: Performance ............................................................ 144
Right-sized provisioning for any workload ........................................................................................... 145
Following best practices for vSAN on HPE Synergy: Cluster and network design ............................. 146
Following best practices for vSAN on HPE Synergy: Drivers and controllers ..................................... 147
Following best practices for vSAN on HPE Synergy: Redundant Connectivity for D3940s ................ 148
Selecting certified configurations for HPE Synergy and vSAN ............................................................ 149
Sizing HPE ProLiant DL or HPE Synergy for vSAN ............................................................................ 150
HPE storage arrays for VMware environments .................................................................................... 151
Use cases for SAN-backed storage ..................................................................................................... 152
HPE storage portfolio: Positioning for VMware ................................................................................... 153
Overview of HPE storage integration with VMware ............................................................................. 155
HPE management and automation portfolio for VMware .................................................................... 156
HPE Storage vCenter plugins .............................................................................................................. 157


Additional HPE Storage plugins ........................................................................................................... 158


Fully automated provisioning for HPE Synergy ................................................................................... 159
Step 1.............................................................................................................................................. 159
Step 2.............................................................................................................................................. 159
Step 3.............................................................................................................................................. 159
VMware vCenter Site Recovery Manager introduction ........................................................................ 160
HPE Alletra, Nimble, and Primera array benefits for SRM .................................................................. 161
Why vVols for VMware databases ....................................................................................................... 162
Overview of vVols Storage Architecture .............................................................................................. 163
How vVols changes storage management .......................................................................................... 164
How vVols transforms storage in vSphere........................................................................................... 165
The HPE Alletra and HPE Primera advantages with vVols ................................................................. 167
Solid and mature ............................................................................................................................. 167
Simple and reliable ......................................................................................................................... 167
Trail-blazing replication ................................................................................................................... 167
Innovative and efficient ................................................................................................................... 167
Rich and powerful ........................................................................................................................... 167
HPE solutions for SRM and vVols ....................................................................................................... 168
HPE Alletra 5000, Alletra 6000 and HPE Nimble Storage ............................................................. 168
HPE Alletra 9000 and HPE Primera ............................................................................................... 168
HPE is the market leader for vVols ...................................................................................................... 169
VMware and HPE Storage array support for NVMe over Fabrics (NVMeoF) ..................................... 170
HPE InfoSight ...................................................................................................................................... 171
Example of HPE InfoSight in action ..................................................................................................... 172
HPE InfoSight’s cross-stack analytics ................................................................................................. 173
Example: Diagnose abnormal latency with VM analytics .................................................................... 174
Data-centric visibility for every VM ....................................................................................................... 175
Summary of HPE storage array benefits for VMware environments ................................................... 176
Application aware ........................................................................................................................... 176
Deeply integrated ............................................................................................................................ 176
Predictive ........................................................................................................................................ 176
Leadership ...................................................................................................................................... 176
Specific guidelines for HPE storage for VCF ....................................................................................... 177
VMware Cloud Foundation Storage: Flexible storage options ............................................................ 178
Best practices for automating VCF storage ......................................................................................... 180
HPE VCF solution ................................................................................................................................ 181
Customer scenario: Financial Services 1A .......................................................................................... 182


Activity 3 ............................................................................................................................................... 183


Scenario .......................................................................................................................................... 183
Task ................................................................................................................................................ 183
Summary .............................................................................................................................................. 185
Learning checks ...................................................................................................................................... 186

Module 4: Design HPE Solutions for VMware Software-Defined


Networking
Learning objectives ................................................................................................................................ 187
HPE Synergy networking guidelines for VMware environments ....................................................... 188
Synergy compute-module-to-interconnect-module connections ..................................................... 189
HPE Synergy multi-frame networking .................................................................................................. 190
Synergy FlexNICs ................................................................................................................................ 191
Synergy FC convergence .................................................................................................................... 192
Mapped VLAN mode............................................................................................................................ 193
Tunneled mode .................................................................................................................................... 194
Redundancy with M-LAG and LACP-S ................................................................................................ 195
Redundancy with single ICM LAGs and Smart Link ............................................................................ 196
Internal networks .................................................................................................................................. 197
Private networks .................................................................................................................................. 198
HPE Synergy support for other networking features ........................................................................... 199
VMware NSX ............................................................................................................................................ 200
VMware NSX ....................................................................................................................................... 201
VMware NSX architecture.................................................................................................................... 202
Management plane ......................................................................................................................... 202
NSX Manager ................................................................................................................................. 202
Control plane .................................................................................................................................. 202
NSX Controller ............................................................................................................................... 202
Local Control Plane (LCP) ............................................................................................................. 202
Data plane ...................................................................................................................................... 203
NSX virtual switch .......................................................................................................................... 203
Edge services ................................................................................................................................. 203
Use case 1: Networking virtualization .................................................................................................. 204
Overlay networking .............................................................................................................................. 205
Overlay segments ................................................................................................................................ 207
Transport zones ................................................................................................................................... 208
Example: Original network ................................................................................................................... 209


Example: Plan for overlay segments ................................................................................................... 210


Use case 2: Microsegmentation .......................................................................................................... 211
Topology agnostic ........................................................................................................................... 211
Centralized control .......................................................................................................................... 211
Granular control based on high-level policies ................................................................................ 211
Network overlay-based segmentation ............................................................................................ 211
Policy-driven service insertion ........................................................................................................ 211
How NSX implements micro-segmentation ......................................................................................... 212
Security extensibility ............................................................................................................................ 213
Use case 3: Network automation with NSX + Aria .............................................................................. 214
HPE ProLiant for vSphere Distributed Services Engine and NSX offload ........................................ 215
HPE ProLiant with vSphere Distributed Services Engine .................................................................... 216
Offloading of NSX network services .................................................................................................... 217
Recommended configuration ............................................................................................................... 218
Configuration requirements.................................................................................................................. 219
Physical underlay network ..................................................................................................................... 220
Options for the physical underlay ........................................................................................................ 221
HPE Aruba Networking Fabric Composer ........................................................................................... 222
Design considerations for the physical infrastructure .......................................................................... 223
More details on MTU ............................................................................................................................ 224
Standard Ethernet ........................................................................................................................... 224
Jumbo frames ................................................................................................................................. 224
Advantages of jumbo frames .......................................................................................................... 224
Disadvantages of jumbo frames ..................................................................................................... 225
An example topology with HPE Aruba Networking CX switches ......................................................... 226
BGP routing between NSX Edge and HPE Aruba Networking CX switches ....................................... 227
HPE Aruba Networking NetEdit ........................................................................................................... 228
Cisco ACI ............................................................................................................................................. 229
Activity ..................................................................................................................................................... 230
Activity 4 ............................................................................................................................................... 231
Summary .................................................................................................................................................. 234
Learning Checks ..................................................................................................................................... 235
Appendix: Review VMware networking ................................................................................................ 236
Standard switch (vSwitch).................................................................................................................... 237
How vSwitch forwards traffic ................................................................................................................ 238
VMkernel adapters ............................................................................................................................... 239


Implementing VLANs ........................................................................................................................... 240


Virtual switch tagging (VST) ........................................................................................................... 240
External switch tagging (EST) ........................................................................................................ 240
Virtual guest tagging (VGT) ............................................................................................................ 240
vSphere distributed switch (VDS) ........................................................................................................ 241

Module 5: Design an HPE Hyperconverged Solution for a Virtualized


Environment
Learning objectives ................................................................................................................................ 243
Emphasizing the software-defined benefits of HPE SimpliVity ......................................................... 244
HPE SimpliVity Data Virtualization Platform ........................................................................................ 245
Presentation Layer .......................................................................................................................... 245
Data Management Layer: File System ........................................................................................... 245
Data Management Layer: Object Store .......................................................................................... 245
Deduplication with HPE SimpliVity ...................................................................................................... 246
HPE SimpliVity Data Virtualization Platform in action ......................................................................... 247
Storage IO reduction ............................................................................................................................ 248
Backups .......................................................................................................................................... 248
Mirror............................................................................................................................................... 248
Snapshots ....................................................................................................................................... 249
Workload ......................................................................................................................................... 250
Final result ...................................................................................................................................... 250
HPE SimpliVity data protection mechanisms: RAIN ............................................................................ 251
RAIN ............................................................................................................................................... 251
HPE SimpliVity data protection mechanisms: RAID ............................................................................ 252
Why HPE SimpliVity data protection is better: Multiple drive failure withstood ................................... 253
How HPE SimpliVity localizes data ...................................................................................................... 254
Keeping data local with HPE SimpliVity Intelligent Workload Optimizer ............................................. 255
Streamlining day-to-day operations ..................................................................................................... 256
Sizing an HPE SimpliVity solution ........................................................................................................ 257
HPE SimpliVity design process ........................................................................................................... 258
Data gathering ..................................................................................................................................... 259
Basic information to put into the sizer ............................................................................................. 259
Additional information about files .................................................................................................... 259
Additional information about backups ............................................................................................. 259
Data gathering tools ........................................................................................................................ 260


Reviewing choices for the HPE SimpliVity platform ............................................................................ 261


HPE SimpliVity 380G ...................................................................................................................... 261
HPE SimpliVity 325......................................................................................................................... 261
Preparing for sizing .............................................................................................................................. 262
Getting started with the HPE SimpliVity Sizing Tool ............................................................................ 263
Inputting information to size the cluster ............................................................................................... 264
Architecting the HPE SimpliVity solution ............................................................................................. 265
HPE SimpliVity design process ........................................................................................................... 266
Architectural design ............................................................................................................................. 267
Cluster............................................................................................................................................. 267
vCenter (site 1) ............................................................................................................................... 267
vCenter (site 2) ............................................................................................................................... 267
Arbiter ............................................................................................................................................. 267
Federation ....................................................................................................................................... 268
Site-to-Site links .............................................................................................................................. 268
Network design .................................................................................................................................... 269
Management ................................................................................................................................... 269
Storage ........................................................................................................................................... 269
Federation ....................................................................................................................................... 269
General cluster and federation sizing guidelines ................................................................................. 270
HPE SimpliVity Integration with VMware .............................................................................................. 271
HPE SimpliVity plug-ins for VMware .................................................................................................... 272
HPE SimpliVity Deployment Manager with VMware vCenter .............................................................. 273
1. vCenter pre-setup ....................................................................................................................... 273
2. Beginning to use Deployment Manager ..................................................................................... 273
3. Node discovery ........................................................................................................................... 274
4. Node deployment........................................................................................................................ 274
5. Additional node deployment ....................................................................................................... 274
Why REST API .................................................................................................................................... 275
HPE SimpliVity Upgrade Manager ....................................................................................................... 276
HPE Alletra dHCI: Emphasizing the software-defined benefits ......................................................... 277
HPE Alletra dHCI versus traditional HCI solutions .............................................................................. 278
Scaling compute and storage with disaggregated HCI ........................................................................ 279
HPE Alletra dHCI and vVols features .................................................................................................. 281
Summary of key HPE Alletra dHCI benefits ........................................................................................ 282
HPE Alletra dHCI: Sizing and architecting the solution ...................................................................... 283
HPE Alletra dHCI data gathering and sizing tools ............................................................................... 284


HPE Alletra dHCI platform building blocks .......................................................................................... 285


Storage ........................................................................................................................................... 285
Compute ......................................................................................................................................... 285
Hypervisor ....................................................................................................................................... 286
Management ................................................................................................................................... 286
Network ........................................................................................................................................... 286
Required VMware licenses .................................................................................................................. 287
VMware vCenter Server license ..................................................................................................... 287
VMware vSphere license ................................................................................................................ 287
HPE Alletra dHCI architecture ............................................................................................................. 288
Two deployment paths ......................................................................................................................... 289
Greenfield deployment.................................................................................................................... 289
Brownfield deployment ................................................................................................................... 289
HPE Alletra dHCI: Greenfield .............................................................................................................. 291
HPE Alletra dHCI: Brownfield .............................................................................................................. 292
HPE InfoSight Welcome Center: Guided deployments ........................................................ 293
Getting started ................................................................................................................................ 293
Physical installation ........................................................................................................................ 293
Software configuration .................................................................................................................... 293
Network automation: Extending dHCI stack setup to HPE switches ................................................... 294
Network automation—Supported network topology ............................................................................ 295
Network topology requirements: ..................................................................................................... 295
Deployment network prerequisites ................................................................................................. 295
Limitations ....................................................................................................................................... 295
HPE Alletra dHCI: Multiple vSphere HA/DRS cluster support ............................................................. 296
Restrictions .......................................................................................................................................... 297
Designing HPE Alletra dHCI specifically for VCF ................................................................................ 298
dHCI for VCF overview ........................................................................................................................ 299
dHCI for VCF prerequisites and limitations .......................................................................................... 300
Requirements ................................................................................................................................. 300
Limitations ....................................................................................................................................... 300
dHCI for VCF use cases / value adds ................................................................................ 301
Automating the lifecycle with HPE Alletra dHCI integrations with VMware ..................................... 302
HPE Alletra dHCI vCenter plug-in ........................................................................................................ 303
Intelligent upgrades for full-stack automation ...................................................................................... 304
Benefits of intelligent, one-click upgrades ........................................................................................... 305
Catalog matching ................................................................................................................................. 306


Catalog matching (cont.) ...................................................................................................................... 307


HPE Alletra dHCI upgrade process: Fully nondisruptive to VMs ......................................................... 308
HPE Alletra dHCI compute node: SPP update architecture—available for Gen10 servers and later .... 309
HPE Alletra dHCI replication and VMware SRM: Efficient private cloud disaster recovery ................ 310
HPE InfoSight for HPE Alletra dHCI ..................................................................................... 311
Activity ..................................................................................................................................................... 312
Activity 5 ............................................................................................................................................... 313
Scenario .......................................................................................................................................... 313
Additional background information ................................................................................................. 313
Task ................................................................................................................................................ 313
Summary .................................................................................................................................................. 315
Learning checks ...................................................................................................................................... 316

Module 6: Design HPE GreenLake Solutions for VMware


Learning objectives ................................................................................................................................ 317
HPE GreenLake for VMware Overview .................................................................................................. 318
aaS delivery models ............................................................................................................................. 319
Benefits of HPE GreenLake for customers .......................................................................................... 320
Less downtime ................................................................................................................................ 320
IT hours and resources freed up for other purposes ...................................................................... 320
Greater agility ................................................................................................................................. 320
Better alignment between usage and cost ..................................................................................... 320
Greater value .................................................................................................................................. 321
Why HPE GreenLake for VMware ....................................................................................................... 322
How it works ......................................................................................................................................... 323
HPE GreenLake for VMware offerings .................................................................................................. 324
Configurable HPE GreenLake cloud services ..................................................................................... 325
HPE GreenLake for VMs ................................................................................................................ 325
HPE GreenLake for VMware Cloud Foundation (VCF) .................................................................. 325
HPE GreenLake for Private Cloud Enterprise ................................................................................ 325
HPE GreenLake for HCI ................................................................................................................. 326
HPE GreenLake for VDI ................................................................................................................. 326
AI-Ready Enterprise Platform VMware on HPE GreenLake .......................................................... 326
HPE GreenLake Map Book ............................................................................................................ 326
Custom HPE GreenLake services ....................................................................................................... 327


Services and support ........................................................................................................................... 328


HPE Partner services ..................................................................................................................... 328
HPE Advisory & Professional Services .......................................................................................... 328
HPE GreenLake Management Services......................................................................................... 328
HPE GreenLake cloud services pricing model .................................................................................... 329
Sizing and quoting HPE GreenLake solutions ..................................................................................... 330
Summary of HPE GreenLake for VMware business values ................................................................ 331
Activity ..................................................................................................................................................... 332
Activity 6 ............................................................................................................................................... 333
Demo of HPE GreenLake for Private Cloud Enterprise ................................................................. 333
HPE GreenLake Edge-to-Cloud Platform Video ............................................................................ 335
Summary .................................................................................................................................................. 336
Learning Checks ..................................................................................................................................... 337

Module 7: Plan Cloud Migrations


Learning objectives ................................................................................................................................ 339
Migration process overview ................................................................................................................... 340
Plan the migration ................................................................................................................................... 341
Understand migration scope and implications ..................................................................................... 342
Understand licensing implications ....................................................................................................... 344
Application licenses ........................................................................................................................ 344
Virtualization licenses ..................................................................................................................... 344
Overview of migration technologies ..................................................................................................... 346
VMware vSphere vMotion............................................................................................................... 346
VMware HCX .................................................................................................................................. 346
VMware Converter .......................................................................................................... 346
Zerto, a Hewlett Packard Enterprise solution ................................................................................. 347
Live (hot) migration versus cold migration ........................................................................................... 348
Live migration ................................................................................................................................. 348
Cold migration ................................................................................................................................. 348
Prepare for and execute the migration ................................................................................................. 349
Migration documentation...................................................................................................................... 350
Pre-testing and protection .................................................................................................................... 351
Migration execution .............................................................................................................................. 352


VMware vSphere vMotion....................................................................................................................... 353


vSphere vMotion .................................................................................................................................. 354
Compute-only migration.................................................................................................................. 354
Storage-only migration.................................................................................................................... 355
Compute-and-storage migration ..................................................................................................... 355
Other examples of vMotion .................................................................................................................. 356
vMotion with vVols .......................................................................................................................... 356
vMotion with RDM ........................................................................................................................... 356
Scope for vMotion ................................................................................................................................ 357
HPE Peer Persistence to enhance automatic vMotion for disaster recovery ...................................... 358
vMotion networking best practices ....................................................................................................... 359
Requirements for vSphere vMotion ..................................................................................................... 360
Requirements for vMotion without shared storage .............................................................................. 361
Common issues that can occur during vMotion ................................................................................... 362
VMware EVC ....................................................................................................................................... 363
Guidelines for using EVC ..................................................................................................................... 364
VMware HCX ............................................................................................................................................ 365
HCX use cases .................................................................................................................................... 366
Data center consolidation ............................................................................................................... 366
Infrastructure update....................................................................................................................... 366
Migration to the cloud ..................................................................................................................... 366
Rebalancing .................................................................................................................................... 366
Disaster recovery and other business continuity use cases........................................................... 366
VMware HCX components................................................................................................................... 367
HCX Manager and installers ........................................................................................................... 367
HCX-IX Interconnect Appliance ...................................................................................................... 367
HCX WAN Optimization Appliance ................................................................................................. 367
HCX Network Extension Virtual Appliance ..................................................................................... 368
VMware HCX migration types .............................................................................................................. 369
Cold migration ................................................................................................................................. 369
Bulk migration ................................................................................................................................. 369
vMotion migration ........................................................................................................................... 369
Replication Assisted vMotion (RAV) ............................................................................................... 369
Example network migration process for VMware HCX for VMware Cloud .......................................... 371
VMware vCenter Converter .................................................................................................................... 372
VMware vCenter Converter overview .................................................................................................. 373
VMware vCenter Converter components ............................................................................................. 374


VMware vCenter Converter process .................................................................................................... 375


Zerto, a Hewlett Packard Enterprise company..................................................................................... 376
Zerto use cases ................................................................................................................................... 377
Infrastructure refreshes................................................................................................................... 377
Data center consolidation ............................................................................................................... 377
Cloud migration ............................................................................................................................... 377
Hybrid and multi-cloud migrations .................................................................................................. 378
Zerto for VMs architecture and components........................................................................................ 379
Zerto Virtual Manager (ZVM) .......................................................................................................... 379
Virtual Replication Appliance (VRA) ............................................................................................... 379
Other Zerto for VMs architectures ....................................................................................................... 380
Virtual Protection Group (VPGs) .......................................................................................................... 381
How Zerto for VMs works: Initial sync from source site to destination site .......................................... 382
How Zerto for VMs works: Ongoing sync ............................................................................................ 383
How Zerto for VMs works: Migration options ....................................................................................... 384
Prepping for the move.......................................................................................................................... 385
Pre-Move testing .................................................................................................................................. 386
Zerto Move process: Initial steps ......................................................................................................... 387
Zerto Move process: Committing the Move ......................................................................................... 388
Zerto benefits for migration .................................................................................................................. 389
Zerto licensing options ......................................................................................................................... 390
Activity 7 .................................................................................................................................................. 391
Summary .................................................................................................................................................. 393
Learning checks ...................................................................................................................................... 394

Module 8: Troubleshoot VMware-Based Infrastructure Solutions


Learning objectives ................................................................................................................................ 395
Troubleshooting tools ............................................................................................................................ 396
Overview of solutions that help you troubleshoot ................................................................................ 397
Requirements for using HPE InfoSight ................................................................................................ 398
Registering HPE compute solutions ............................................................................................... 398
Registering HPE Alletra 5000 and HPE Alletra 6000 storage arrays ............................................. 398
Registering HPE Primera and HPE Alletra 9000 storage arrays.................................................... 398
Requirements for HPE InfoSight Cross-Stack Analytics for VMware .................................................. 399
Requirements for HPE Alletra 5000 and HPE Alletra 6000 storage arrays ................................... 399
Registering HPE Primera and HPE Alletra 9000 storage arrays.................................................... 399
Using HPE InfoSight for troubleshooting ............................................................................................. 400


Using HPE plug-ins for VMware for troubleshooting ........................................................................... 401


HPE OneView for vCenter (OV4VC) .............................................................................................. 401
HPE Content Logs for VMware vRealize Log Insight ..................................................................... 401
HPE Alletra and HPE Nimble Storage Plug-in for VMware & HPE Storage Integration Pack for VMware ... 401
HPE OneView for VMware vRealize Operations & HPE Storage Management Pack for vRealize Operations Mgr ... 401
Using HPE GreenLake for Compute Ops Management for troubleshooting ....................................... 403
HPE CloudPhysics ............................................................................................................................... 404
VMware esxtop and resxtop ................................................................................................................ 405
Analyzing esxtop and resxtop CPU output .......................................................................................... 406
Analyzing esxtop/resxtop memory and storage output ........................................................................ 408
Memory Panel ................................................................................................................................. 408
Virtual Machine Storage Panel ....................................................................................................... 409
Common issues ...................................................................................................................................... 410
Common compute issues .................................................................................................................... 411
Common storage issues ...................................................................................................................... 412
Common networking issues ................................................................................................................. 413
Examples of the troubleshooting and remediation process .............................................................. 414
Example issue 1: Initial steps .............................................................................................................. 415
Example issue 1: Example esxtop/resxtop output ............................................................................... 416
Example issue 1: Make a remediation plan ......................................................................................... 417
Example issue 2: Initial steps .............................................................................................................. 418
Example issue 2: OV4VC Networking view ......................................................................................... 419
Example issue: VM not working post-migration: Make a remediation plan ......................................... 420
Best practices for prevention ................................................................................................................ 421
Maintain best practices for server and storage firmware patches and upgrades ................................ 422
Use automation — with appropriate testing ......................................................................................... 423
Address issues proactively .................................................................................................................. 424
Implement data protection ................................................................................................................... 425
Summary .................................................................................................................................................. 426
Activity: Explore HPE InfoSight ............................................................................................................. 427
Activity 8 ............................................................................................................................................... 428
Alternative recorded demo.............................................................................................................. 429
Learning Checks ..................................................................................................................................... 430

Appendix: Answers



Overview of HPE VMware Solutions
Module 1

Course Map

Figure 1-1: Course Map

This course includes eight modules.


Learning objectives
This module provides an overview of VMware solutions as well as HPE solutions for VMware. It also
highlights the HPE and VMware strategic alliance and underscores the benefits of their partnership.

After completing this module, you will be able to:

• Explain the benefits of a VMware-based infrastructure


• Position VMware solutions to solve a hypothetical customer’s challenges


Overview of VMware solutions


This module begins by describing several VMware solutions.


VMware vSphere

Figure 1-2: VMware vSphere

VMware vSphere is VMware’s core enterprise solution, providing an infrastructure for compute
virtualization. As of the release of this course, the latest VMware vSphere version is 8.
A vSphere environment consists of one or more vCenter Servers. A vCenter Server typically runs as a
virtual machine (VM); it manages multiple ESXi hosts. As you probably know, a VMware ESXi host
consists of a server running a bare-metal hypervisor. That hypervisor runs multiple VMs and grants
them access to its CPU, memory, and disk resources.
VMware vSphere enables ESXi hosts to join in clusters, which can provide simpler management and
better resiliency for virtualized services. A Distributed Resource Scheduler (DRS)-enabled cluster can
dynamically load balance VMs across cluster hosts to optimize resource usage and improve performance.
A high availability (HA)-enabled cluster can respond when a cluster host fails and automatically restart its
VMs on new hosts. A vSphere environment also supports mobility for VMs across hosts and clusters with
vMotion.
Admins connect to the vCenter Server using the vSphere Client to manage the virtualized environment as a
whole. From vCenter they can perform tasks such as adding new ESXi hosts, creating datastores for
storing virtual machine disks, setting up virtual networking, and creating VMs. In vSphere 7 and later, VMware
vSphere includes vSphere Lifecycle Manager (vLCM), which helps to simplify updates and patches on the
ESXi hosts.
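For illustration, the sketch below uses pyVmomi, the VMware vSphere Python SDK, to perform this kind of environment-wide task from a script instead of the client. It is a minimal sketch only: the hostname and credentials are placeholders, certificate checking is disabled for lab use, and error handling is omitted for brevity.

    # Minimal pyVmomi sketch (pip install pyvmomi); host and credentials are placeholders
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ssl_ctx = ssl._create_unverified_context()  # lab use only; validate certificates in production
    si = SmartConnect(host="vcenter.example.com",
                      user="administrator@vsphere.local",
                      pwd="<password>",
                      sslContext=ssl_ctx)
    try:
        content = si.RetrieveContent()
        # Walk the vCenter inventory and report each ESXi host with its running VM count
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.HostSystem], recursive=True)
        for host in view.view:
            running = [vm for vm in host.vm
                       if vm.runtime.powerState == vim.VirtualMachinePowerState.poweredOn]
            print(f"{host.name}: {len(running)} powered-on VMs")
    finally:
        Disconnect(si)

The same session object can drive cluster changes, datastore creation, and VM provisioning, which is what makes scripted management of a vSphere environment practical at scale.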
A VMware vSphere environment often incorporates additional VMware solutions, some of which you will
examine over the next pages.


VMware software-defined solutions

Figure 1-3: VMware software-defined solutions

Within a vSphere virtualized compute environment, you can abstract other formerly hardware-centric
needs, such as storage and networking.

Software-defined storage (SDS)


VMware vSAN is a software-defined storage (SDS) solution that is tightly integrated with vSphere,
running as software within the hypervisor layer.
vSAN is a hyper-converged infrastructure (HCI) solution in which ESXi hosts join in a cluster that pools
both compute and storage resources. The ESXi hosts contribute their local or direct-attached data
storage to an object storage pool that stores VMs’ disks. In this way, vSAN can eliminate the need for
external storage. vSAN tightly integrates the storage management with vSphere, offering a simpler option
for many customers.
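As a hedged illustration of that tight integration, the following pyVmomi sketch enables vSAN on an existing cluster and lets each host auto-claim its eligible local disks into the shared datastore. It assumes an authenticated ServiceInstance ("si") like the one created in the earlier vSphere sketch; the cluster name is a placeholder, and newer vSAN releases expose richer settings through the separate vSAN Management API.

    from pyVmomi import vim

    def enable_vsan(si, cluster_name="Prod-Cluster"):
        """Enable vSAN on a cluster and auto-claim each host's eligible local disks."""
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.ClusterComputeResource], recursive=True)
        cluster = next(c for c in view.view if c.name == cluster_name)

        spec = vim.cluster.ConfigSpecEx(
            vsanConfig=vim.vsan.cluster.ConfigInfo(
                enabled=True,
                defaultConfig=vim.vsan.cluster.ConfigInfo.HostDefaultInfo(
                    autoClaimStorage=True)))  # pool local disks into the vSAN datastore
        # Returns a vCenter task; real automation should monitor it for completion
        return cluster.ReconfigureComputeResource_Task(spec, modify=True)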
As you will learn later in this course, HPE also provides HCI solutions optimized for VMware, including
HPE SimpliVity and HPE Alletra dHCI.

Software-defined networking (SDN)


VMware NSX is a software-defined networking (SDN) solution that, like vSAN, is tightly integrated with
vSphere.
NSX converts network functions that were once solely in the realm of physical hardware to abstracted
software services. Network functions like routing, load balancing, and security are provided by these
software services, simplifying network management and configuration.
NSX establishes “overlay” networks, which are virtual networks that can extend at Layer 2 over an
underlying physical network—regardless of the number of routing hops in that physical network. NSX
increases network agility and flexibility by eliminating the need to physically restructure connections.
Instead, admins can make changes to network functions from a management console.
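To give a sense of what such changes look like in practice, here is a hedged sketch that creates an overlay segment through the declarative NSX Policy REST API. The manager address, credentials, transport zone path, and subnet are placeholders.

    import requests

    NSX = "https://nsx-mgr.example.com"

    segment = {
        "display_name": "web-overlay",
        # Path of an existing overlay transport zone (the UUID is a placeholder)
        "transport_zone_path": ("/infra/sites/default/enforcement-points/default"
                                "/transport-zones/<overlay-tz-uuid>"),
        "subnets": [{"gateway_address": "10.10.10.1/24"}],
    }

    # In the Policy API, PATCH against a policy path is create-or-update
    resp = requests.patch(f"{NSX}/policy/api/v1/infra/segments/web-overlay",
                          json=segment,
                          auth=("admin", "<password>"),
                          verify=False)  # lab use only
    resp.raise_for_status()

Once the segment exists, VMs attached to it share a Layer 2 network regardless of where their hosts sit in the physical topology.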


VMware Tanzu Kubernetes Grid

Figure 1-4: VMware Tanzu Kubernetes Grid

In addition to providing solutions for virtual compute, storage, and networking processes, VMware offers a
solution for container clusters: Tanzu Kubernetes Grid (TKG).[1] TKG is a turnkey solution for deploying,
running, and managing enterprise-grade Kubernetes clusters for hosting applications.

TKG provides the Kubernetes runtime for VMware and is central to many VMware Tanzu for Kubernetes
offerings. To deploy and manage Kubernetes workload clusters, TKG relies on a
management cluster. This management cluster executes CLI and UI requests using an open-source tool
for Kubernetes cluster operations (i.e., Cluster API).

TKG offers two deployment options based on where the management cluster runs:
• Supervisor: this option allows admins to create and operate workload clusters natively in vSphere
with Tanzu
• Standalone: this option allows admins to deploy clusters in private and public cloud environments
using vSphere (versions 6.7, 7, and 8), Amazon Web Services (AWS), or Microsoft Azure
TKG allows admins to make Kubernetes available to developers as a utility, just like an electricity grid.
Operators and developers can use this grid to create and manage clusters in a manner familiar to
Kubernetes users.
For admins, TKG deploys Kubernetes clusters in a VMware environment so they do not have to build a
Kubernetes environment themselves. TKG includes packaged tools (such as the Carvel package tools)
that deliver the networking, authentication, ingress control, and logging services that a production
Kubernetes environment requires.

[1] The content for this slide was adapted from two VMware support documents, available here:

https://docs.vmware.com/en/VMware-Tanzu-Kubernetes-Grid/2/about-tkg/index.html and
https://docs.vmware.com/en/VMware-Tanzu-Kubernetes-Grid/index.html


VMware Cloud Foundation (VCF)

Figure 1-5: VMware Cloud Foundation (VCF)

VMware Cloud Foundation (VCF) is a hybrid cloud platform that can be deployed on-premises as a
private cloud or can run as a service within a public cloud. This integrated software stack combines
compute virtualization (VMware vSphere), storage virtualization (VMware vSAN), network virtualization
(VMware NSX), and cloud management and monitoring (VMware Aria) into a single platform. (VMware
Aria was formerly called VMware vRealize.)
In the version 4 release of VCF, VMware added Tanzu, which embeds the Kubernetes runtime within
vSphere. VMware has also optimized its infrastructure and management tools for Kubernetes, providing a
single hybrid cloud platform for managing containers and VMs.


VCF Components

Figure 1-6: VCF Components

VCF includes several software components that function as building blocks for the private cloud.

SDDC Manager
SDDC Manager is the management platform for VCF, enabling admins to configure and maintain the
logical infrastructure. It also automatically provisions VCF hosts.

vSphere (compute)
VMware vSphere hypervisor technology enables organizations to run applications in a common operating
environment across clouds and devices. vSphere includes key features such as:
• VM migration
• Predictive load balancing
• High availability and fault tolerance
• Centralized administration and management

vSAN (storage)
vSAN is a storage solution that is embedded in vSphere. It delivers storage for VMs, with features like:
• Hyper-converged object storage
• All flash or hybrid
• Deduplication and compression data services
• Data protection and replication

NSX (networking)
Networking is often the last part of the stack to virtualize, but virtualizing the network is key to achieving
the full benefits of a virtualized data center. Without virtualized networking, network and security services
still require manual configuration and provisioning, ultimately becoming a bottleneck that slows delivery of
IT resources.
VMware NSX has been updated to support hybrid cloud environments. In addition to supporting ESXi
servers, NSX supports containers and bare-metal servers. It also supports Kubernetes and OpenShift as

well as AWS and Azure. Furthermore, it is not tied to a single hypervisor, so it also supports Microsoft
Hyper-V environments.
NSX also supports:
• Distributed switching/routing
• Micro-segmentation
• Load balancing
• L2-L7 networking services
• Distributed firewall
• Analytics

Aria Suite (formerly vRealize)


VMware Aria is the new branding for VMware vRealize Suite, an integrated management platform for
hybrid cloud environments.
VMware Aria operational capabilities continuously optimize workload placement for running services
based on policies that reflect business requirements. Aria automation capabilities can leverage those
same policies when deciding where to place a newly requested service.


Overview of HPE solutions for VMware


This section describes several HPE solutions for virtualized deployments.


HPE ProLiant DL servers for VMware

Figure 1-7: HPE ProLiant DL servers for VMware

HPE ProLiant servers are the most popular HPE servers. HPE ProLiant server models are built to
accommodate different workloads and environments, including these:
• HPE ProLiant DL320 and HPE ProLiant DL325 are entry-level models optimized for price
performance and edge workloads.
• HPE ProLiant DL345 offers expanded storage and I/O bandwidth for SDS workloads.
• HPE ProLiant DL360 offers expanded storage and I/O bandwidth for compute workloads.
• HPE ProLiant DL365 and HPE ProLiant DL380 offer high-performance computing in small 1U
packages for density-optimized environments.
• HPE ProLiant DL380, HPE ProLiant DL385, and HPE ProLiant ML350 offer more of everything—
particularly GPU accelerators—for demanding workloads.
HPE ProLiant servers offer features such as workload optimization and 360-degree security. Through
360-degree security, HPE embeds protections across the server lifecycle, from the manufacturing supply
chain to end of life.
HPE ProLiant servers also include features such as Workload Matching (for automated server
configuration) and Workload Performance Advisor (for real-time tuning advice).
You will learn about two categories of integrated HPE and VMware solutions later in this course:
• HPE ProLiant for vSphere Distributed Services Engine
• HPE vSAN Ready Node solutions


HPE Synergy

Figure 1-8: HPE Synergy

HPE Synergy was the first composable infrastructure for VCF and, as such, supports bare metal,
containerized, and virtualized workloads. HPE Synergy is designed to abstract compute, storage, and
networking resources from their physical locations to create fluid resource pools. Using a web-based
interface, customers can dynamically assign and release resources from these pools.
These virtual resource pools are programmable using open-standards-based APIs. This allows
scripts to allocate resources automatically in real time. Real-time resource allocation helps support on-
demand applications and services for many use cases but is particularly useful for developers.
By facilitating the management and automation of resource allocation, HPE Synergy provides an ideal
platform for VCF.
Customers often struggle to resolve the push-pull dynamic between traditional and cloud applications.
Traditional applications require stability and are carefully managed by IT operations teams. In contrast,
cloud applications are driven by developers’ requirements and the need for speed.
A composable infrastructure such as HPE Synergy helps customers bridge the gap between traditional
and cloud applications by providing a single platform that accommodates both. Whether customers need
to deploy workloads on bare metal, as VMs, or in containers—or some mixture of the three—HPE
Synergy provides the same fluid resource pools that can be composed to suit the demands of both
traditional and cloud applications. A composable infrastructure such as HPE Synergy delivers
programmable processes that ease deployment.
HPE Synergy compute modules also support the HPE Silicon Root of Trust. The HPE Silicon Root of
Trust provides advanced protection against firmware attacks by way of an immutable fingerprint in the
silicon.
The composable infrastructure that HPE Synergy provides offers several key benefits, such as these:

Fluid resource pools


• Single infrastructure of disaggregated resource pools: Admins can map packages of storage and
compute resources, called modules, to each other within the HPE Synergy framework; this creates
miniature networks of resources that have just the right proportion of storage and compute for a given
workload.
• Physical, virtual, and container workloads: Thanks to the Unified API and software-defined
infrastructure (SDI), you can provision and automate bare metal as easily as VMs.


• Auto-integration of resource capacity: The composer, which controls an HPE Synergy system,
discovers new modules as you add them to the system, and can be configured to provision them.

Unified API
• Single line of code to abstract every element of infrastructure: The API, which the composer hosts,
allows admins to write and run scripts that tell any part of the infrastructure what to do.
• Full infrastructure programmability: Because admins can script their commands, they can automate
management tasks that they previously had to perform manually (a minimal API sketch follows this list).
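As a hedged illustration only (the appliance address, API version, and credentials are placeholders), the sketch below logs in to the Composer's HPE OneView REST API and lists server profiles; the same request pattern covers creating profiles from templates, which is how scripts compose bare-metal, VM, or container hosts on demand.

    import requests

    OV = "https://composer.example.com"
    headers = {"X-Api-Version": "2000", "Content-Type": "application/json"}

    # Authenticate; OneView returns a session token to reuse on later calls
    login = requests.post(f"{OV}/rest/login-sessions",
                          json={"userName": "administrator", "password": "<password>"},
                          headers=headers, verify=False)  # lab use only
    login.raise_for_status()
    headers["Auth"] = login.json()["sessionID"]

    # List server profiles; POSTing a profile body to the same URI composes new ones
    profiles = requests.get(f"{OV}/rest/server-profiles", headers=headers, verify=False)
    for member in profiles.json().get("members", []):
        print(member["name"], member["status"])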

Software-defined intelligence
• Template-driven workload composition: Admins can dynamically compose workloads; for example,
they can write a script that directs HPE Synergy to support VDI during the day and perform analytics at
night.
• Frictionless operations: By automating so many processes, HPE Synergy helps IT teams reduce the
cost of human error, which commonly occurs when admins have to perform repetitive tasks manually.


HPE storage solutions for VMware

Figure 1-9: HPE storage solutions for VMware

HPE offers a variety of storage options for customers deploying VMware solutions.
HPE Modular Smart Array (MSA) is an entry-level SAN storage solution designed for businesses with
100 to 250 employees and remote and branch offices (ROBOs). HPE MSA offers the speed and
efficiency of flash and hybrid storage with advanced features such as Automated Tiering.
HPE Primera redefines what is possible in mission-critical storage with three key areas of unique value.
First, it delivers a simple user experience that enables on-demand mission-critical storage, reducing the
time it takes to manage storage. Second, HPE Primera delivers app-aware resiliency backed with 100%
availability, guaranteed. Third, it delivers predictable performance for unpredictable workloads so the
customer’s apps and business are always fast.
HPE Alletra is engineered to be tightly coupled with the HPE Data Ops Manager in Data Services Cloud
Console. Together, these solutions deliver a common, cloud operational experience across workload-
optimized systems on-premises and in the cloud. HPE Alletra solutions deliver agility and simplicity for
every application across their entire lifecycle, from edge to cloud. Customers can deploy, provision,
manage, and scale storage in significantly less time. For example, the platform can be set up in minutes,
and provisioning is automated.
HPE Alletra 5000 is a hybrid flash system, designed for primary and secondary workloads that require
reliable, cost-efficient storage. It guarantees 99.9999% availability and delivers up to 25% faster
performance than HPE Nimble Storage Adaptive Flash Arrays (the foundational architecture on which
HPE Alletra 5000 was based).
HPE Alletra 6000, supporting NVMe SSDs, is designed for business-critical workloads that require fast,
consistent performance. It delivers up to three times the performance of HPE Nimble Storage All Flash
Arrays. Alletra 6000 guarantees 99.9999% availability and scales easily. HPE Alletra 9000, on the other
hand, is designed for mission-critical workloads that have stringent latency and availability requirements.
It guarantees 100% availability and delivers the highest performance with NVMe SSDs.
HPE also offers data protection solutions. HPE StoreOnce meets the needs of customers who require
comprehensive, low-cost backup for a broad range of applications and systems. It provides extensive
support for applications and ISVs so customers can consolidate backups from multiple sources.
You will learn more about HPE storage solutions for VMware in Module 3 of this course.


Zerto for VMware

Figure 1-10: Zerto for VMware

Zerto for VMware is a simple, software-only solution that delivers cloud data management and protection
for virtualized and container-based environments running on-premises or in the cloud. This single,
scalable platform for disaster recovery, data backup, and cloud mobility simplifies IT and reduces costs by
eliminating the need for multiple legacy solutions.
Zerto for VMware eliminates downtime and data loss for VMware vSphere users and supports all VMware
features used in day-to-day management.
With Zerto’s always-on replication engine for VMware vSphere, you can do the following (see the API sketch after this list):
• Protect and replicate single or multiple VMs (up to thousands) with scale-out replication architecture
• Protect multiple virtual disks connected to the same VM
• Recover automatically and immediately
– recovery time objectives (RTOs) of minutes
– recovery point objectives (RPOs) of seconds
• Restore operations after a ransomware attack to a point in time only minutes to even seconds before
the attack
• Replicate VMware workloads on-premises to VMware on public cloud (or vice versa) without
changing your applications
• Eliminate snapshots and reduce impact on application performance
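For illustration, this hedged sketch polls a Zerto Virtual Manager (ZVM) for its virtual protection groups (VPGs) and the recovery point objective each is currently achieving, using the ZVM REST API. The address and credentials are placeholders; 9669 is the API port on classic Windows-based ZVM deployments, and newer Linux-based ZVM appliances authenticate through Keycloak instead.

    import requests

    ZVM = "https://zvm.example.com:9669"

    # Open an API session; the token is returned in the x-zerto-session header
    login = requests.post(f"{ZVM}/v1/session/add",
                          auth=("administrator", "<password>"),
                          verify=False)  # lab use only
    login.raise_for_status()
    token = login.headers["x-zerto-session"]

    # List protection groups with their achieved recovery point objectives
    vpgs = requests.get(f"{ZVM}/v1/vpgs",
                        headers={"x-zerto-session": token}, verify=False)
    for vpg in vpgs.json():
        print(vpg["VpgName"], "actual RPO (s):", vpg["ActualRPO"])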


HPE hyperconverged solutions

Figure 1-11: HPE hyperconverged solutions

HPE SimpliVity takes convergence to a new level by assimilating eight to twelve core data center
activities, including solid state drive (SSD) arrays for all-flash storage; appliances for replication, backup
and data recovery; real-time deduplication; WAN optimization; cloud gateways; backup software; and
more. All of these functions are accessible through a global, unified management interface.
With the convergence of all infrastructure below the hypervisor, HPE SimpliVity allows businesses of all
sizes to completely virtualize the IT environment while continuing to deliver enterprise-grade performance
for mission-critical applications.
A core set of values unites all HPE SimpliVity models. Customers gain simple VM-centric management
and VM mobility. As they add nodes, capacity and performance scale linearly, delivering peak and
predictable performance. Best-in-class data services, powered by the HPE SimpliVity Data Virtualization
Platform, deliver data protection, resiliency, and efficiency.
HPE Alletra dHCI provides a disaggregated hyperconverged infrastructure solution. It allows customers to
scale compute and storage separately, while providing a low-latency, high-performance solution.
You will learn more about HPE hyperconverged solutions for VMware in Module 5 of this course.


HPE GreenLake cloud services for VMware

Figure 1-12: HPE GreenLake cloud services for VMware

HPE GreenLake extends the benefits that customers associate with public cloud to all applications and
data, no matter where they live—at the edge, on premises, a colocation facility, or in public cloud.
To date, HPE GreenLake offers more than 70 cloud services across 18 categories, including
Virtualization, Virtual Desktop Infrastructure, Hybrid and Multi-cloud, and AI/ML/Data Analytics.
To learn more about these categorized services, visit https://www.hpe.com/us/en/greenlake/services.html
You will learn more about HPE GreenLake solutions for VMware in Module 6 of this course.


HPE GreenLake Edge-to-Cloud Platform

Figure 1-13: HPE GreenLake Edge-to-Cloud platform

HPE provides a unified cloud experience for customers, providing easy access to the applications they
need to configure and manage their hybrid environment. HPE GreenLake Edge-to-Cloud Platform
provides each customer with a single view of their environment and a single sign-on. Once customers
authenticate, they can manage their devices, storage, networking, support, and ordering—all in one
place.
Customers can access HPE GreenLake Edge-to-Cloud Platform here: https://common.cloud.hpe.com
You can find a demo here: https://greenlake.hpe.com

HPE Aruba Networking Central


HPE Aruba Networking Central brings the cloud experience to the edge, allowing customers to manage
their entire network infrastructure from one cloud-native command center.

Data Services Cloud Console


Because HPE understands that customers cannot, and do not want to, move all of their data to the cloud,
the Data Services Cloud Console brings the cloud experience to data wherever that data lives.
Key among the console's services is the Data Ops Manager, which orchestrates the on-prem storage
infrastructure and abstracts away storage complexity. Within Data Ops Manager, customers can quickly
provision storage for workloads and execute other tasks with simple-to-understand, intent-based
processes.

HPE GreenLake Compute Ops Management


HPE GreenLake Compute Ops Management simplifies operations with a seamless as-a-service
experience, edge-to-cloud, while eliminating manual tasks.
• It automatically discovers and configures servers, using zero touch provisioning.
• It delivers cloud operations to wherever compute lives.
• It helps customers accelerate innovation by delivering AI-driven insights.


HPE GreenLake Central


HPE GreenLake Central helps customers track both their utilization and the associated costs, eliminating
the unexpected costs often associated with public cloud.


VMware Compatibility Guide

Figure 1-14: VMware Compatibility Guide

HPE offers several solutions that are validated as compatible with VMware products, including many HPE
servers, HPE storage arrays, and HPE vSAN ReadyNode solutions.

Visit the VMware Compatibility Guide. Select the VMware solution that interests you. Then select Hewlett
Packard Enterprise as the vendor to see a list of compatible HPE solutions. To view more information
about each solution, simply click to select it.


HPE and VMware together


This section discusses the long-standing strategic alliance between HPE and VMware.


Why HPE and VMware

Figure 1-15: Why HPE and VMware

HPE and VMware have been collaborating on customer solutions for more than 20 years and together lay
claim to a combined 75-year history of IT innovation. Their alliance forms one of the largest joint Channel
Partner communities today, with more than 90 training centers, including the largest VMware Authorized
Training Center (run by HPE).

By integrating VMware Cloud Foundation (VCF) and HPE solutions, the two companies have made it
easier to design, install, validate, deploy, and manage a hybrid cloud solution. To date, their collaboration
helps reduce IT complexity for more than 200,000 customers. Additionally, thanks to HPE Synergy, the
world’s first composable infrastructure solution, moving to SDI is easier than ever for HPE VMware
customers.

Learn more about this ongoing alliance by visiting the: HPE and VMware Alliance page.


Why HPE

Figure 1-16: Why HPE

At HPE, we know that customers have many options, so why should customers choose HPE solutions for
VMware? The reasons are many.

Security
Security starts with the hardware, and HPE offers superior 360-degree security: we monitor our servers
from start to finish, from the time our servers hit the manufacturing supply chain to the point they reach
their end of life.
Part of our hardware security assurance stems from the HPE Silicon Root of Trust. The HPE Silicon Root
of Trust provides advanced protection against firmware attacks by way of an immutable fingerprint in our
servers’ silicon chips.

Performance
Our servers deliver optimized workload performance. Third-party test results confirm this claim. For
example, consider the results of benchmark evaluations conducted by the Standard Performance
Evaluation Corporation (SPEC). Across four categories (including HPE Superdome Flex Family,
HPC/Composable, AMD Rack, and Intel Rack), HPE servers earned 100 wins on eight workloads,
including virtualization, compute-intensive, and energy efficiency.

Value adds
Additionally, HPE solutions for VMware include value-added plug-ins that enhance visibility, automation,
and orchestration of customers’ virtualized deployments.

Hardware and software solution with unified support experience


As an HPE Partner, you can offer customers a complete hardware and software solution that includes
VMware licenses with HPE SKUs. The HPE OEM VMware licenses include support from HPE.
Customers benefit from a unified support experience in which a single support team takes a holistic
approach to solving any problems that might emerge. (Note that customers might sometimes ask about
the HPE OEM VMware licenses being more expensive than the same licenses sold by VMware. This is
not the case. HPE includes the support and the license within a single SKU, while VMware separates the
support from the license. Therefore, the two licenses cannot be compared directly.)


Use cases addressed by VMware on HPE

Figure 1-17: Use cases addressed by VMware on HPE

Customers can address many use cases when they choose HPE for their VMware environment. This
page and the next will highlight a few of these use cases.

Strategic change
VMware on HPE helps accelerate companies’ digital transformation and modernize their applications and
data center. HPE GreenLake offerings built on VMware lend themselves to consolidating companies’
myriad systems—including legacy ones—onto a single, manageable platform that empowers companies
to shift to consumption-based services.
A high-performance, general-purpose infrastructure solution built on vSAN is a foundational piece for companies in the early stages of digital transformation. In fact, vSAN is the only storage virtualization solution integrated directly with vSphere, delivering greater consolidation ratios and consistent performance.
HPE also offers many solutions for customers in the later stages of transformation, such as VCF on HPE
Synergy or HPE ProLiant DL, coupled with leading HPE storage arrays.

Hybrid or multi-cloud
Hosting a platform built for hybrid cloud, such as VCF, on VMware-validated HPE hardware helps
companies migrate systems to private cloud as well as multi- or hybrid cloud. Such migrations can
transform operations and automate lifecycle management. And by migrating to private, multi-, or hybrid
cloud, customers transition from the largely outdated CapEx model of upfront infrastructure purchases to
the OpEx model of pay-per-use services.
HPE offers many solutions ideal for the hybrid- and multi-cloud environment.
With VCF and HPE Synergy, customers get a simple path to a hybrid cloud through a fully integrated
software-defined solution on modular infrastructure across compute, storage, network, security and cloud
management to run enterprise applications—traditional or containerized—in private and public cloud
environments.


Hybrid workplace solutions


The recent pandemic forced companies to adapt to new worker expectations for work-from-home (WFH) or hybrid workplace models (which combine WFH and in-office work). VDI helps companies establish a hybrid workplace, while HPE, with its low-latency, GPU-rich solutions, helps VDI deliver the performance that users demand.

HPE GreenLake for VDI supports today's reality of an anywhere workspace and a distributed workforce. With these solutions, companies can offer workers an anywhere workspace without sacrificing endpoint protection, thanks to central, cloud-native endpoint visibility and management.


Use cases addressed by VMware on HPE

Figure 1-18: Use cases addressed by VMware on HPE

Secure workloads and data


HPE servers offer a highly secure foundation for customers' virtualized workloads. Customers can also protect their data and guard against ransomware attacks with Zerto. Or they can use Zerto's continuous replication capabilities to migrate their applications and data from on-premises infrastructure to public cloud, including AWS, Azure, and public-cloud VMware deployments.
HPE and VMware solutions also simplify data management and encourage the shift to Storage as a
Service. The HPE vSAN ReadyNode configurations make this shift an easy process. Or, for customers
who prefer to retain a SAN, HPE storage arrays offer many deep integrations, such as vVols (about which
you will learn more later in this course), that simplify management.

Apps development
HPE also helps customers modernize applications, increasing the speed at which they deliver
applications by capitalizing on the cloud (DevOps) without introducing its vulnerabilities (DevSecOps).
Customers can choose from several HPE solutions for VMware, such as an HPE GreenLake for
Containers solution built on the VMware Tanzu Kubernetes Grid.

AI-Ready Enterprise
Companies today need AI and ML to help automate processes and accelerate decision-making based on
data-derived analytics. For example, most companies want chatbots (which are driven by AI) because
many website visitors prefer to engage with a business via a chatbot. But AI workloads are data intensive
and frequently require costly and complex infrastructure that is difficult to manage and maintain.
The AI-Ready Enterprise Platform, the result of a joint effort between HPE, VMware and NVIDIA, helps
organizations adopt and make the most of AI—to future-proof their infrastructure.
For more information, visit https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/partners/accelerate-innovation-with-AI-solution-brief.pdf


Activity
You will now have the opportunity to practice applying what you have learned about VMware and HPE
solutions.


Customer scenario: Financial Services 1A

Figure 1-19: Customer scenario: Financial Services 1A

For this activity, you will create a customer presentation to explain how VMware on HPE can help a
hypothetical customer: Financial Services 1A.
Financial Services 1A is primarily a credit union (a member-owned banking association) with about
US$10 billion in assets. It has 120 branches and about 850,000 members (i.e., customers with accounts).
In addition to savings and checking accounts, the company offers loans, such as mortgages and auto
loans. The credit union has about 1200 employees, including a sizeable development and IT staff.
Financial Services 1A has long been a prominent institution in its region but is currently facing new
competition and its growth has slowed significantly. It needs to continue to build and protect its reputation.
To do so, the company has one main goal: to attract and retain more customers. After extensive
research, C-level executives have determined that the best way to reach this goal is to offer personalized
services, based on their members’ lifestyles, stages in life, and financial goals.
Like many financial institutions, Financial Services 1A offers self-service options for their customers, but
the company wants to add more financial services and to simplify access, while maintaining strict security.
IT is also investigating the possibility of using AI to make its fraud protection services more reliable.
These new initiatives will require some changes, including:
• a virtualized environment that easily accommodates change
• automated services
• seamless visibility of and communication between virtual workloads and physical infrastructure


Activity 1

Figure 1-20: Activity 1

The company already has a highly virtualized environment, with more than 80% of its workloads
virtualized.
About 12 years ago, Financial Services 1A consolidated services in a VMware vSphere deployment.
However, the company has never used vRealize (now Aria) solutions.
The company has one primary data center and a disaster recovery (DR) site. The primary data center has
30 VMware hosts in six clusters, running a variety of workloads including:
• General Active Directory services
• General enterprise solutions
• An extensive web farm for both internal and external sites
• Development platforms
• The Web front end interacts with a number of applications, including
– Customer banking and self-service applications
– Investment management
– Loan management
– Inventory management
– Business management
Financial Services 1A also has about 20 bare metal servers running more intensive data analysis and risk
management applications. Additionally, the company runs several load-balancing appliances and security
appliances, such as firewalls and an intrusion detection system/intrusion prevention system (IDS/IPS).
While the vSphere deployment hosts some business management solutions, the company moved some
of its customer relationship management (CRM), HR, and payroll services to the cloud about three years
ago. The customer also archives some of its less-sensitive data in AWS.
The ESXi hosts and bare metal servers are mostly HPE ProLiant DL servers (primarily 300 series and
Gen9). The customer also has about a dozen legacy Dell servers. The storage backend for the vSphere
deployment currently consists of Dell EMC storage arrays.


The data center has a leaf-and-spine network using HPE FlexFabric 5840 switches. Traffic is routed at the top of the rack.

Company pain points


The CIO has shared several concerns about the state of their current environment, including these:
• The virtual environment and the physical environment are out of sync. Admins can provision a new
VM very quickly, but getting a new host deployed takes a very long time. The same goes for setting
up new storage volumes and datastores.
• IT has started automating services using tools such as Ansible. Everyone is enthusiastic, but when
admins get down to trying to automate everything, they run into issues. For every attempted service
deployment, admins encounter resistance to automation, particularly with the physical infrastructure.
• The CIO does not have a good view of the entire environment. The bare metal workloads and virtual
workloads are siloed.
• The vSphere admins do not have a clear understanding of what is going on in the physical
infrastructure. They and the network and storage admins sometimes seem to struggle to
communicate what the virtual workloads need as far as physical resources.

Your task
After reviewing the customer scenario, take about 20 minutes to create a presentation about the HPE
solutions for VMware and how they can address Financial Services 1A pain points and goals.


Summary

Figure 1-21: Summary

This module introduced you to several VMware and HPE solutions. It also described the long-standing
relationship between VMware and HPE and the benefits of their ongoing partnership and commitment to
integrating their solutions. You also practiced applying what you learned from this module.


Learning Checks
1. What is one way HPE helps customers protect themselves from security threats?
a. HPE Silicon Root of Trust protects HPE servers against compromised firmware.
b. HPE Alletra 6000 applies intrusion protection system (IPS) defenses to data stored on its drives.
c. HPE embeds a next generation firewall (NGFW) in every HPE data center switch.
d. HPE encourages customers to move all applications to co-lo and cloud for better protection.
2. Which HPE solution offers simple, cloud-based management and 100% availability?
a. HPE Alletra dHCI
b. HPE Alletra 5000
c. HPE Alletra 6000
d. HPE Alletra 9000



Design HPE Compute for VMware
Module 2

Learning objectives
Module 2 delves into the practical details of designing HPE compute for VMware. The module presents general design guidelines before covering best practices for deploying VMware on HPE. Next, you will learn how to design HPE ProLiant DL servers for VMware vSphere and HPE Synergy solutions for VMware vSphere and VCF. The module also describes how HPE solutions enhance management and monitoring, and it includes an activity that gives you the opportunity to apply what you have learned.

After completing this module, you will be able to:

• Position HPE VMware solutions to solve a customer’s problems given a set of customer requirements
• Describe the integration points between HPE and VMware


General design guidelines


This module begins by reviewing general guidelines for designing VMware on HPE solutions.


Overview of the design process

Figure 2-1: Overview of the design process

Every good solution design begins with a thorough understanding of a customer’s requirements and an
assessment of their current environment. Once you understand the scope, goals, and requirements of the
customer’s vision, you can size the solution and create a bill of materials (BOM).
Naturally, each solution you design will be unique, specifically suited for a customer’s particular
requirements and needs. This module focuses on general guidelines that apply to all solutions. When
tools are available that can assist you, such as HPE assessment and sizing tools, this module draws your
attention to those and explains how to use them.
While this design process helps you to structure your efforts, bear in mind that the design process is not
strictly linear in the real world.


Gathering information: Migrating an existing vSphere deployment

Figure 2-2: Gathering information: Migrating an existing vSphere deployment

When migrating a customer’s existing vSphere deployment, gather information such as this:

VM profiles
VM profiles allow you to standardize the configuration of each type of VM. What VM profiles does the
customer need and how many? You can create one VM profile for each type of VM. As you plan the
migration, catalog the resources that each VM type requires.
• Number of vCPUs
• Allocated RAM
• Disk size
• Estimated input/output operations per second (IOPS)
• Estimated disk throughput
In addition to documenting resources required per VM profile, you should record how many of each type
of VM the customer needs.
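
For example, you might capture each profile in a simple, machine-readable form so the totals can feed your sizing work later. A minimal PowerShell sketch (the profile names and figures below are illustrative placeholders, not values from this course):

    # Hypothetical VM profile catalog; substitute the customer's real figures
    $vmProfiles = @(
        [pscustomobject]@{ Name = 'WebServer'; Count = 40; vCPU = 2; RamGB = 8;  DiskGB = 60;  IOPS = 200 }
        [pscustomobject]@{ Name = 'OLTP-DB';   Count = 6;  vCPU = 8; RamGB = 64; DiskGB = 500; IOPS = 5000 }
    )
    # Total the resources each profile contributes to the design
    $vmProfiles | ForEach-Object {
        [pscustomobject]@{ Profile = $_.Name; TotalvCPU = $_.Count * $_.vCPU; TotalRamGB = $_.Count * $_.RamGB }
    }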

Subscription expectations
The virtual resources allocated to VMs consume physical resources on the ESXi host. Because not every
VM will operate at 100% utilization at the same time, resources can be oversubscribed. However,
oversubscribing resources by too much can compromise performance.
For example, the number of VM vCPUs compared to the number of physical cores available is the vCPU-
to-core ratio. For most server workloads, a 4:1 vCPU-to-core ratio will deliver the performance that apps
and users generally demand. (In a 4:1 ratio, a host with 32 cores could support VMs with 128 vCPUs
total.)
If the customer has VMs running lower-priority or less CPU-intensive workloads, then the ratio could be
even higher, possibly as high as 8:1 (which might create a small degradation in performance).
Conversely, if the VMs are running workloads that are CPU-intensive, then the ratio should be lower to
maintain adequate performance.
VMware provides memory sharing and other technologies that might permit you to oversubscribe the host
memory by about 25% (in other words, if a host has 128 GB of memory, the VMs' memory requirements might add up to 160 GB). However, as much as possible, you should avoid memory oversubscription,
particularly for enterprise application workloads with high-performance requirements.
You should work with the customer to define the amount of oversubscription that the customer will
tolerate. Some customers might even want you to deploy mission-critical VMs without oversubscribing
resources.
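
To make the oversubscription discussion concrete, you can compute the resulting ratios directly. A minimal sketch, assuming totals already gathered from the VM profiles and host inventory:

    # Illustrative totals; replace with the customer's gathered figures
    $totalvCPU = 128; $hostCores = 32
    $totalVmRamGB = 160; $hostRamGB = 128

    $vcpuRatio = $totalvCPU / $hostCores                      # 4, i.e., a 4:1 vCPU-to-core ratio
    $memOversubPct = ($totalVmRamGB / $hostRamGB - 1) * 100   # 25, i.e., 25% memory oversubscription

    "vCPU-to-core ratio: ${vcpuRatio}:1"
    "Memory oversubscription: $memOversubPct%"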

Cluster plans
Does the existing vSphere deployment use clusters? A cluster consists of multiple ESXi hosts, which
work together to support VMs.
In a cluster environment, VMware Distributed Resource Scheduler (DRS) can assign VMs to hosts based on considerations such as the VM's load, as well as configurable affinity and anti-affinity rules. DRS spreads VM workloads across the vSphere hosts in a cluster, monitoring available resources. Depending on the configured automation level, DRS can migrate VMs to other hosts within the cluster to maximize performance.
A cluster can also implement high availability (HA). Among other features, HA ensures that if a host within
the cluster fails, its VMs restart on another host in the cluster.
If the customer uses clusters, you need to know which clusters support which VMs. You also need to
define the availability requirements. Should the cluster be able to tolerate the failure of one host for N+1
redundancy or more than one host?
Also, you should determine whether the cluster will apply Fault Tolerance (FT) to any VMs. FT creates a
standby copy of the VM, essentially doubling the requirements for that VM.

Current host profiles


Profiling the legacy ESXi hosts can also be useful. What processors do the hosts use and how many
cores do those processors have? What is the average CPU, memory, IOPS, disk throughput, and
network utilization on the hosts?
The last three values are particularly important for planning adequate storage performance. That said,
looking at the current average CPU and memory utilization is also useful; if those values exceed 80%,
then the customer’s VMs are currently oversubscribed. Knowing this will help guide your discussion with
the customer about desired oversubscription levels.
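
If the customer grants read-only access to vCenter, you can pull these averages yourself with VMware PowerCLI. A minimal sketch (the vCenter name is a placeholder) that averages the last seven days of host CPU and memory utilization:

    # Requires VMware PowerCLI; vcenter.example.local is a placeholder
    Connect-VIServer -Server vcenter.example.local
    $since = (Get-Date).AddDays(-7)
    foreach ($vmHost in Get-VMHost) {
        $cpu = Get-Stat -Entity $vmHost -Stat cpu.usage.average -Start $since |
               Measure-Object -Property Value -Average
        $mem = Get-Stat -Entity $vmHost -Stat mem.usage.average -Start $since |
               Measure-Object -Property Value -Average
        '{0}: CPU {1:N1}% / Memory {2:N1}%' -f $vmHost.Name, $cpu.Average, $mem.Average
    }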

Growth requirements
Discuss how quickly the virtualized environment is expanding. Agree on a growth rate per year and a
number of years for which the environment will accommodate that growth. For example, you might size
the new deployment to accommodate 5% growth year-over-year for three years.


Gathering information: Migrating from physical machines

Figure 2-3: Gathering information: Migrating from physical machines

In addition to working with customers migrating their existing vSphere deployments, you might also work
with customers who want to migrate workloads currently running on physical machines. To migrate
workloads to a VMware vSphere on HPE deployment, you would begin by profiling each physical
machine.
The graphic above lists the information you should collect for each physical machine running workloads
the customer wishes to migrate to VMs. You would work with the customer to convert this gathered
information into a profile for a VM that can handle the same workload. For example, if the physical
machine has 16 cores and currently operates at 15% to 20% utilization, you and the customer might
decide that four vCPUs is sufficient for the VM.
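
On Windows machines, the built-in Get-Counter cmdlet can sample the key metrics for such a profile. A minimal sketch (the counter list and sampling window are illustrative):

    # Sample CPU, memory, and disk activity once a minute for an hour
    $counters = '\Processor(_Total)\% Processor Time',
                '\Memory\Available MBytes',
                '\PhysicalDisk(_Total)\Disk Transfers/sec'
    Get-Counter -Counter $counters -SampleInterval 60 -MaxSamples 60 |
        Export-Counter -Path C:\Temp\baseline.blg -FileFormat BLG
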
You should also discuss desired oversubscription levels, plans for using VMware clusters, and expected
growth, just as you would in a discussion regarding migrating an existing vSphere environment.


Gathering information: Considering workloads

Figure 2-4: Gathering information: Considering workloads

Whether migrating an existing vSphere deployment or migrating physical machines, you need to collect
information about the customer’s existing environment and understand the expectations for the new
environment.

Workload types
Much of your design considerations for the virtual environment hinge on the customer’s workloads. These
are characteristics that six common workloads tend to exhibit:
• Traditional database (OLTP)
– Is designed for scale up (in other words, the application is designed for a single machine with a lot
of resources, rather than many machines with fewer resources each)
– Exhibits a high number of random reads and writes
– Requires data integrity
– Is frequently mission critical
• In-memory database (e.g., SAP HANA)
– Is designed for scale up
– Is memory and IOPS intensive and latency sensitive
– Exhibits a high number of writes
– Requires data integrity
• Business management
– Runs on databases (traditional or in-memory)
– Is IOPS intensive and latency sensitive
– Is frequently mission critical


• Object storage
– Is designed for scale out
– Is IOPS intensive
• Big data and analytics
– Is designed for scale out
– Is compute and IOPS intensive (whether the application is more compute or IOPS intensive
depends on application)
 Sorting and searching apps: More IOPS intensive
 Classification, feature extraction, and data mining apps: More compute intensive
– Is latency sensitive and sometimes memory intensive (e.g., Spark, Hive)
• EUC or VDI
Virtual desktop infrastructure (VDI) is a common example of end user computing (EUC), which refers
to any solution that allows users to access compute resources remotely
– Is latency sensitive
– May need GPU acceleration to support power users, who require resource-demanding apps like
Computer-Aided Design (CAD)


How do you get the information?

Figure 2-5: How do you get the information?

In addition to interviewing the customer and reviewing their documentation, you can obtain the
information you need to design a VMware on HPE solution using one of two tools: HPE CloudPhysics or the HPE Assessment Foundry Strategic Assessment Framework (SAF).
We strongly recommend that you use one of these tools to collect information. Customer documentation
can be spotty or outdated and the customer’s knowledge less than complete. Consequently, relying only
on your customer as the source of information for your design may lead you to undersize the solution.

HPE CloudPhysics
HPE CloudPhysics is a SaaS-based big data analytics platform for VMware infrastructures. It offers data-
driven insights that can help you size a solution with more confidence, improve IT performance, and
optimize workload placement.
HPE employees and Partners are entitled to use HPE CloudPhysics for free. You should use this tool
when a customer already has a VMware environment for which you are planning an upgrade.
You will learn more about HPE CloudPhysics in the coming pages. (For more information about HPE
CloudPhysics, click here.)

HPE Assessment Foundry Strategic Assessment Framework (SAF)


HPE Assessment Foundry Strategic Assessment Framework (SAF) is a suite of tools that helps you
collect data about your customer’s environment. It analyzes configuration and workloads, generating
detailed reports. HPE SAF also helps you size HPE solutions.
HPE Assessment Foundry is owned and developed by HPE and available to HPE employees and
Partners for free. You should use it when HPE CloudPhysics does not support the customer’s
environment. For example, a customer might want you to migrate bare metal workloads to a VMware
environment; HPE SAF can help you assess the physical environment to size the HPE infrastructure for
the new VMware solution.
You will learn more about HPE SAF later in this module. (For more information about HPE Assessment
Foundry, click here.)


HPE CloudPhysics

Figure 2-6: HPE CloudPhysics

HPE CloudPhysics helps you to assess a customer’s virtual environment as part of the design process for
migrating to new infrastructure.
When deployed in a customer’s environment, HPE CloudPhysics conducts in-depth analyses of
virtualization hosts, their storage, and the VMs that run on them. It begins by collecting about 200 metrics.
It then applies artificial intelligence (AI) to those metrics to deliver in-depth analytics for health checks,
performance troubleshooting, infrastructure optimization, and space-saving recommendations.
Insights into capacity requirements are of most interest to you for this scenario; HPE CloudPhysics offers
such insights and many, many more.


Overall process for using HPE CloudPhysics

Figure 2-7: Overall process for using HPE CloudPhysics

HPE Partners are entitled to use HPE CloudPhysics. If you have not used it before, register for an
account here: https://app.cloudphysics.com/partner/hpe/register.
You can then follow a simple process to begin an assessment for a particular customer:
• Log into the HPE CloudPhysics portal here: https://app.cloudphysics.com/login
• Add your customer and create an assessment
Doing so sends an email to the customer. Your customer must then accept the email invitation and follow
the instructions to install HPE CloudPhysics Observer. Once that is done, the assessment begins and
runs for 30 days: it is as easy as that. HPE CloudPhysics Observer collects data across your customer’s
environment and sends it to HPE CloudPhysics for analysis.
This assessment runs for weeks, but you can start seeing data in the HPE CloudPhysics portal within 15
minutes. However, HPE recommends that you begin your analysis of the collected data no sooner than
seven days after starting the assessment; by waiting, you ensure that the data and insights accurately
reflect the customer’s environment.


Simple, customer-driven process for customers with Data Services Cloud Console

Figure 2-8: Simple, customer-driven process for customers with Data Services Cloud Console

Data Services Cloud Console is a SaaS-based console available through HPE GreenLake Edge-to-Cloud
Platform. If your customer already uses Data Services Cloud Console, the simple assessment process
just described becomes even simpler.
With Data Services Cloud Console, a customer can initiate the assessment process on their own. You
would not need to create the assessment and invite the customer. Instead, the customer would simply
select HPE CloudPhysics from within Data Services Cloud Console. If the customer has not yet registered
with HPE CloudPhysics, then they would simply install HPE CloudPhysics Observer (described on the
next page). The assessment then continues in the same manner discussed on the previous page.


HPE CloudPhysics Observer requirements

Figure 2-9: HPE CloudPhysics Observer requirements

To assess a customer’s virtual environment, HPE CloudPhysics uses a virtual appliance called Observer.
Observer has minimal impact on your customer’s environment. The virtual appliance requires these
resources:
• 8 GB RAM
• 2 vCPUs
• 20 GB of disk space
• Network traffic: 5 MB per hour per 100 VMs
The virtual appliance must also have Internet access to at least entanglement.cloudphysics.com so that it
can report data for analysis. HPE CloudPhysics Observer supports proxies that implement the HTTP
Proxy protocol. It supports both unauthenticated and authenticated proxies that use basic, digest, or
NTLM authentication. However, it does not work through SOCKS and transparent (intercepting) proxies,
nor does it work through proxies that intercept TLS traffic.
To collect data, the virtual appliance needs connectivity to vCenter Server. It also needs credentials for a
read-only account on vCenter.
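
Before starting the assessment, you can verify both connections Observer depends on. A minimal sketch run from a machine on the same network (the vCenter name is a placeholder):

    # Outbound HTTPS to the HPE CloudPhysics reporting endpoint
    Test-NetConnection -ComputerName entanglement.cloudphysics.com -Port 443
    # Connectivity to the customer's vCenter Server
    Test-NetConnection -ComputerName vcenter.example.local -Port 443
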
Many customers will naturally be concerned about the security and anonymity of their data. Reassure
them that all communications between HPE CloudPhysics Observer and the cloud component of the
solution use TLS 1.2 and are encrypted with the latest supported security standards. In addition, HPE
CloudPhysics collects performance and configuration data, not identifiable personal data. The only
“personal” information that HPE CloudPhysics gathers is the company name, contact name, and email
address for issuing invitations to the portal.


Data collection with HPE CloudPhysics

Figure 2-10: Data collection with HPE CloudPhysics

Observer uses API calls to vCenter to collect data. It does not deploy agents to VMs or hosts, nor does it
impact performance on the customer's environment. Instead, Observer uses performance and
configuration data that vCenter already collects at a 20-second granularity. As mentioned, this data
includes about 200 metrics spanning VMs, hosts, clusters, storage, and networks.
VMware vCenter typically “rolls up” and destroys that data after an hour. However, Observer preserves
data with the 20-second granularity. It submits the data to HPE CloudPhysics, which applies complex
patented analytics to it.
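
You can inspect the same 20-second realtime samples yourself through PowerCLI, which queries the vCenter API in essentially the way Observer does. A minimal sketch (the VM name is a placeholder):

    # Realtime stats come from vCenter's one-hour, 20-second-granularity buffer
    $vm = Get-VM -Name 'app-vm-01'
    Get-Stat -Entity $vm -Stat cpu.usage.average -Realtime -MaxSamples 10 |
        Select-Object Timestamp, Value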


SAF-based assessment process overview

Figure 2-11: SAF-based assessment process overview

You will now turn your attention to HPE SAF, which HPE recommends using for customers who are
migrating physical machines to a VMware deployment. HPE SAF is a purpose-built suite of applications
that enable you to rapidly gather critical workload metrics from many different devices. Armed with these
metrics, you can then perform analyses to define and validate sizing requirements and propose solutions.
To begin an HPE SAF assessment process, you create an assessment opportunity in SAF Analyze and
generate an invitation email. Once generated, this email is sent to you, so you can review it before
forwarding to the customer. This also ensures that the email to the customer comes from a source they
recognize and trust.
The customer follows the instructions in the email to download the latest HPE SAF Collector. HPE SAF
Collector gathers required data and metrics from the customer’s existing infrastructure using native
functionality in a matter of minutes. It does not need to be “installed” and (in the majority of cases) does
not need to be left running all week at the customer’s site. Therefore, it places no additional ongoing load
on the customer’s environment.
Once HPE SAF Collector finishes gathering data, the customer uploads the collected data to the HPE SAF servers using one of several available options. They will most likely be prompted for the Upload Token, which was provided in the email they were sent.
You will be notified when the collected data has been uploaded to the HPE SAF Servers. At that point,
you will be able to see the new data in the opportunity you created to start this process. If you like, you
can share the opportunity with the customer so they can also view the reports.
Figure 2-12 shows the “Add NEW assessment” form, which you will fill out to begin a new assessment.


Figure 2-12: Add NEW assessment

After you create the assessment, you will receive an email with all instructions for GUI and CLI data collection. (See Figure 2-13.)

Figure 2-13: SAF Collector email


Collection process

Figure 2-14: Collection process

Your customer (or you, helping the customer) downloads the SAF Collector as a single zip file that contains the latest CLI and GUI versions. This single zip file contains everything your customer (or you) needs. Among other things, it includes the SAF Collector User Guide and a README file, which might contain last-minute notices.
HPE provides both a CLI and a GUI version of the SAF Collector to cover different situations. The CLI
version is easier to build into automation, for example.
The SAF Collector must be extracted from the zip file and copied to the machine where it will run. There is nothing to install; just copy the SAF Collector executable to the desired machine. This machine must run Windows 8 or Windows Server 2012 or newer, or at least have .NET 4.5 or newer.
This machine must have a network connection to the device from which your customer (or you) intend to
collect data.
The SAF Collector User Guide has a section for each of the supported data sources and notes the pre-
requisites (if any) for collecting data from each source.

Supplemental material on using SAF Collector


When your customer (or you) first launches the SAF Collector, the collector attempts to check with the
SAF servers to ensure it is up to date.
If a newer update is available, it automatically downloads and updates itself and may provide a message
asking you to stop and restart the SAF Collector.
The SAF Collector GUI is deliberately as simple as possible; all collection jobs are managed from a single screen. The Collector Jobs editor screen allows you to select any of the currently supported data sources.
Once you have chosen the desired data source, you must provide the connection details, including an IP
address and user credentials.
Currently, HPE provides automatic obfuscation of IP and MAC addresses. Select which data obfuscation
level you would like and press OK to save the job specification.
The SAF Collector saves all of the data it collects into output files that are in the same location as the
SAF Collector executable. It stores this data in human-readable format so that the customer can review the data before it leaves their site. Of course, SAF Collector transmits collected data over a secure connection to the HPE data centers.
If you would like to add another layer of obfuscation, you can do so manually using a text editor.
However, please note that any changes you make that damage the structure of the collected data, or the
relationship between objects within the collected data, will result in analysis failures.
Once data collection is complete, use the upload button to send the data back to the SAF servers.
Alternatively, click the “Browse to output” button to open an Explorer window. This window shows the
collection of zip files that you can then review or transfer to a USB device, for example.
The most common mistake is running the SAF Collector from within the zip file rather than extracting it first. If you run the SAF Collector from within the zip file, it will still work, but the collection output files will not be saved in the same location as the executable. However, the “Browse to output” button will still take you to where they have been saved.


Collecting Windows performance data

Figure 2-15: Collecting Windows performance data

To collect Windows performance data, HPE SAF Collector reads historical statistics from an archive data store on disk. By default, Windows does not store the data HPE requires. Hence, you need to prepare a Windows Perfmon Data Collector Set (DCS).
HPE provides an example PowerShell script to configure the DCS. Once you configure the DCS, leave the Windows host alone to record performance history data.
If you wish to collect seven days’ worth of performance history (the default recommendation), you must
wait seven days before running the SAF Collector. The SAF Collector will execute in a matter of minutes
as it is only copying the DCS output files that have been created by Perfmon. Repeat this process for
each Windows host involved in the collection.
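
HPE's example script defines the DCS for you; the sketch below shows the general idea using the built-in logman utility (the counter list, interval, and paths are illustrative, not HPE's actual DCS definition):

    # Create and start a Data Collector Set that records performance history to disk
    logman create counter SAF-DCS -c '\Processor(_Total)\% Processor Time' '\Memory\Available MBytes' '\PhysicalDisk(*)\Disk Transfers/sec' -si 00:00:30 -o C:\PerfLogs\SAF -f bincirc -max 1024
    logman start SAF-DCS
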
Because this can be a tedious task, you may want to consider automation with the SAF Collector CLI.
Perhaps even better, check to see if you could collect data from a common storage array rather than
individual host OSes.
HPE is working on providing a -realtime mode for Windows in the near future. Keep checking the SAF website.

Supplemental information: Collecting VMware performance data


While HPE generally recommends using HPE CloudPhysics to assess VMware environments, you can
also use HPE SAF to assess these environments.
Unlike third-party collector tools, SAF Collector enables you to choose from one of two data collection modes when gathering information in a VMware environment: you can collect data in -realtime mode or -historical mode.


Figure 2-16: Collecting VMware performance data with HPE SAF Collector

These two choices are possible due to the way vCenter stores performance data, that is, first in a memory
buffer and second in an archive.
• Memory buffer: The vCenter Server collects performance data at level 4, which is the highest level of
detail available. The vCenter Server captures this level 4 data into a one-hour FIFO memory buffer.
• Archive: The vCenter Server then archives the collected data into an on-disk historical database. The
level of detail that the vCenter stores in the archive depends on the customer’s settings. (The
customer sets this level in a vCenter configuration parameter.) The default setting is level 1, the
lowest level of detail.
When you run the SAF Collector for VMware using the -realtime mode, it collects the level 4 data from the one-hour memory buffer. When you run the SAF Collector using the -historical mode, it collects the data from the archive, which could have a level 1 or higher, depending on the customer's settings.
If you intend to use the -realtime mode, then no preparation is required. That said, you will need to run the SAF Collector for seven days if you need seven days' worth of performance history. (Other tools, such as Lanamark, use this method.)
Using the -historical mode requires one additional preparation step: to collect useful data for assessing a customer's environment, you need data with a detail level of at least 3. Hence, before you begin collecting data using -historical mode, confirm that the vCenter settings are set to level 3 or higher for the five-minute and 30-minute intervals; if not, change the settings accordingly.
With the detail level set at 3 or higher, the SAF Collector in -historical mode needs to run for only minutes because all of the required performance history data is already available. Once the collection has been performed, remember to change the settings back to what they were prior to making the change.
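
A PowerCLI sketch of checking, and temporarily raising, the archive detail level through the vSphere PerformanceManager API (run against a connected vCenter; remember to restore the original level afterward):

    # Inspect the current historical intervals and their detail levels
    $si = Get-View ServiceInstance
    $perfMgr = Get-View $si.Content.PerfManager
    $perfMgr.HistoricalInterval | Select-Object Name, SamplingPeriod, Level

    # Raise the five-minute interval to level 3 for the collection window
    $fiveMin = $perfMgr.HistoricalInterval | Where-Object { $_.SamplingPeriod -eq 300 }
    $fiveMin.Level = 3
    $perfMgr.UpdatePerfInterval($fiveMin)
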
Collecting data is far simpler using the -historical mode; however, some customers may not wish to change the vCenter archive settings. In these cases, you can use -realtime mode.
The customer may not want to use level 3 or level 4, perhaps due to a concern about disk space.
However, VMware support requires level 4 for diagnostics. Therefore, you should advise the customer to
increase the disk space available to the vCenter Server to avoid future problems with VMware support.


Uploading from SAF Collector

Figure 2-17: Uploading from SAF Collector

After collecting the data, the customer (or you) should upload it.
If the customer (or you) use the SAF Collector in GUI mode, they can simply click the “Upload” button to
open a dialogue prompting for the upload token. Once the upload token has been validated, the collection
data will be uploaded for processing.
Customers can find the upload token in the invitation email that you sent to initiate this process (see Figure 2-18). The upload token is also displayed in the Assessment properties field in SAF Analyze.

Figure 2-18: Uploading from SAF Collector


Exporting data from SAF Analyze

Figure 2-19: Exporting data from SAF Analyze

From SAF Analyze, you can generate reports and export all table data to CSV.
When viewing the analysis screen for a particular assessment, the upper half of the screen, shown bounded by a green box in Figure 2-19, contains a summary of the assessment statistics for that particular data type.
particular data type.
To reduce the space on screen taken up by this summary box, you can use the expand/collapse button to
shrink the summary box as shown in the figure.
From here you can generate reports, which are created in Word document format so that, when
necessary, you can customize them.
All table displays have a small menu icon in the top right-hand corner, which allows you to export all or
some (a selected portion) of the data displayed in the current table.
Sometimes there is more information than can fit on the current screen. In such cases, a small toggle
button will indicate that additional data is available. Click it to cycle between the available data views.
In the case of the server and VMware data types, individual servers or guest VMs are listed. Click on each device to open a new detail page showing the performance metrics for that device only, as shown in Figure 2-20.


Figure 2-20: Exporting data from SAF Analyze

As with other summary screens, by default the 95th percentile figures are used for the calculations.
However, that is also user selectable.
Click on the history chart to expand it. This improves visibility and shows more detail.


SAF integration with HPE sizers

Figure 2-21: SAF integration with HPE sizers

SAF is under continuous development to send collected data to other applications, easing the process of converting an assessment into a sizing. A good example is the integration with the HPE SimpliVity Sizing Tool.
When you are analyzing a VMware farm, a logical next step might be to launch the HPE SimpliVity sizer, so SAF Analyze provides a link for that. Notice that the Summary (the green box in the figure) shows headline numbers relevant to the HPE SimpliVity Sizing Tool. When you click the launch button, the HPE SimpliVity Sizing Tool opens with the numbers from SAF Analyze automatically populated.


HPE Assessment Foundry training

Figure 2-22: HPE Assessment Foundry training

HPE Assessment Foundry offers a Training page with materials to help you better understand the available features. The preferred delivery method is currently a short video, no more than five minutes long and typically focused on a single topic.


Common sizing tools overview

Figure 2-23: Common sizing tools overview

As you begin to size your solution, keep some additional best practices in mind. You should size to keep
VM load on the host’s resources at 80% or under. If the customer uses HA clusters, you also need to
consider redundancy. For example, if the customer wants N+1 redundancy, you should scope the solution
with an extra module so that the remaining modules can support the load if one module fails. If the
customer plans to use fault tolerance, you should double the requirements for each FT-protected VM.
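
Expressed as back-of-the-envelope arithmetic (a rough sketch only; the HPE sizers below account for far more variables):

    # Illustrative inputs; double the vCPU contribution of FT-protected VMs first
    $requiredvCPU = 512; $ratio = 4; $coresPerHost = 32
    $targetUtilization = 0.8   # keep host load at or under 80%

    $hosts = [math]::Ceiling($requiredvCPU / ($ratio * $coresPerHost * $targetUtilization))
    $hosts += 1                # N+1: one extra host so the cluster survives a failure
    "Minimum hosts: $hosts"
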
Whenever possible use an HPE sizer to size the solution. HPE offers several sizers, including these:
• Solution Sales Enablement Tool (SSET)
– Consolidates the individual sizer apps from the presales tools category, now with enhanced
functionality
– Use to size workloads on HPE Synergy and HPE ProLiant DL
– Available on HPE SSET or Products and Sales Now (PSNow)
• HPE SimpliVity Sizing Tool
– Use to size workloads on HPE SimpliVity
– Available on HPE PSNow
• HPE dHCI Sizer
– Use to size the compute components for HPE Alletra dHCI
– Available on HPE PSNow
• HPE NinjaSTARS
– An HPE Assessment Foundry SAF tool for sizing storage solutions, including the storage
components for HPE Alletra dHCI
– Also available on HPE PSNow


HPE SSET

Figure 2-24: HPE SSET

HPE SSET takes the guesswork out of sizing VMware-based solutions on HPE. From the dashboard, you can choose to size a VMware solution, including these:
• VMware ESXi on HPE Platform
• VMware Cloud Foundation (VCF) on HPE platform

To start the process, you enter values in various fields, including Customer Account and Guidance Name,
and select either a Basic or Expert sizing mode. Additionally, you are prompted to enter information about
the environment (which you will have gathered earlier in the design process), including the number of
VMs, vCPUs and memory per VM, IOPS per VM, and so forth. You will also be prompted to select the
platform (Synergy or DL).

The sizer returns a solution summary within minutes, indicating specific server specifications, cache and
data disk specifications, and recommended reference architectures for the solution. From here, you can
also automatically populate a BOM.


Best practices for VMware on HPE


This section covers best practices for VMware on HPE.


Use VMware ESXi images for HPE servers

Figure 2-25: Use VMware ESXi images for HPE servers

HPE recommends using Custom HPE VMware ESXi Images, which it provides for HPE ProLiant servers,
including both DL servers and HPE Synergy compute modules. Each image comes pre-loaded with HPE
management tools, utilities, and drivers, which help to ensure that HPE servers can perform tasks (such
as boot from SAN) correctly.
You can obtain the Custom HPE VMware ESXi Image for various HPE servers at this link.
You must ensure that the server’s firmware is updated to align with the driver versions used in the HPE
Custom Image. (See the Service Pack for ProLiant (SPP) documentation at https://hpe.com/info/spp and
the “HPE Servers and option firmware and driver support recipe” document on http://vibsdepot.hpe.com
for information on SPP releases supported with HPE Custom Images.) Later in this module, you will learn
how to use HPE plug-ins for VMware tools to automate managing HPE server firmware and ESXi images
together.

Some customers might want you to customize the image further. In that case, you can use VMware Image Builder, which is included with vSphere PowerCLI. You can add vSphere Installation Bundles
(VIBs) with additional drivers, HPE components, or third-party tools to a Custom HPE VMware ESXi
Image. You also have the option of downloading HPE ESXi Offline Bundles and third-party driver bundles
and applying them to an ESXi image that the customer is already using. As yet another alternative, you
can choose from the ESXi Offline Bundles and third-party drivers within the Custom HPE VMware ESXi
Image to create your own custom ESXi image.
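
A minimal Image Builder sketch in vSphere PowerCLI (the depot file names and the added package are placeholders; the HPE depot ZIP is the offline-bundle download mentioned above):

    # Load the HPE custom image depot plus a third-party driver depot
    Add-EsxSoftwareDepot -DepotUrl 'C:\Depots\HPE-Custom-ESXi-depot.zip'
    Add-EsxSoftwareDepot -DepotUrl 'C:\Depots\vendor-driver-depot.zip'

    # Clone the newest profile, add the extra VIB, and export a bootable ISO
    $base = Get-EsxImageProfile | Sort-Object CreationTime -Descending | Select-Object -First 1
    New-EsxImageProfile -CloneProfile $base -Name 'HPE-Custom-Plus' -Vendor 'Example'
    Add-EsxSoftwarePackage -ImageProfile 'HPE-Custom-Plus' -SoftwarePackage 'vendor-driver'
    Export-EsxImageProfile -ImageProfile 'HPE-Custom-Plus' -ExportToIso -FilePath 'C:\Depots\HPE-Custom-Plus.iso'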

If VMware updates the image in the future, HPE supports applying the update or patches to Custom HPE
VMware ESXi Image. However, HPE does not issue an updated custom image every time VMware
updates. Instead, it updates the image on its own cadence.


OS Support tool for HPE Synergy

Figure 2-26: OS Support tool for HPE Synergy

The OS Support tool for HPE Synergy helps you determine which HPE Synergy Service Packs (SSPs)
your solution supports based on the OS and HPE Synergy compute modules you select in the tool.

You may choose any combination of the following OS versions and HPE Synergy compute modules:

• OS
– VMware ESXi
– Windows Server
– SUSE Linux Enterprise Server
– Red Hat Enterprise Linux
– Oracle Linux
– Citrix Hypervisor
• Compute module
– HPE Synergy 480 Gen10 Plus
– HPE Synergy 480 Gen10
– HPE Synergy 480 Gen9
– HPE Synergy 660 Gen10
– HPE Synergy 660 Gen9
– HPE Synergy 680 Gen9
– HPE Synergy 620 Gen9
After clicking to select an OS and compute module, the tool returns a complete list of supported HPE
SSPs. These SSPs contain specific firmware and drivers for HPE Synergy Management combinations.

You can find the OS Support tool for HPE Synergy here:
https://techhub.hpe.com/us/en/enterprise/docs/index.aspx?doc=/eginfolib/synergy/sw_release_info/OS_Support.html


Leverage workload optimization profiles

Figure 2-27: Leverage workload optimization profiles

HPE recommends that you leverage HPE ProLiant workload optimization profiles to deliver better
performance for the VMware environment.

HPE ProLiant servers allow you to easily tune the server’s resources by choosing from a collection of
preconfigured workload profiles, such as “Low Latency,” “Mission Critical,” and two Virtualization profiles.
When you select a workload profile, the server automatically configures the BIOS settings to optimize
performance for the expected workload. (For a complete list of available profiles, consult the UEFI manual
for the HPE model.)

For VMware on HPE, you should typically set the workload profile to one of the two Virtualization options:
“Virtualization—Power Efficient” or “Virtualization—Max Performance.” Both of these profiles ensure that
all available virtualization options are enabled. The profile you choose depends on the customer’s need
for increased power efficiency or enhanced performance.
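
You can also set the workload profile out of band through the iLO Redfish API. A sketch only, assuming PowerShell 7 and iLO 5/6 defaults (the WorkloadProfile attribute name and the value string can vary by server generation, so check the model's Redfish schema first):

    # PATCH the pending BIOS settings on iLO; address and credentials are placeholders
    $cred = Get-Credential
    $body = @{ Attributes = @{ WorkloadProfile = 'Virtualization-MaxPerformance' } } | ConvertTo-Json
    Invoke-RestMethod -Method Patch -Uri 'https://ilo.example.local/redfish/v1/Systems/1/Bios/Settings' `
        -Credential $cred -Authentication Basic -ContentType 'application/json' -Body $body -SkipCertificateCheck
    # The new profile takes effect at the next server reboot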

Regardless of the profile you choose, HPE ProLiant DL Gen11 servers offer these advantages.
• Task optimization
HPE servers are optimized to meet the demands of applications requiring advanced graphics and
data accelerations
• Broadest portfolio
Our portfolio was designed to offer servers that meet the needs of a broad range of workloads and
use cases
• Technology responsiveness
At HPE, we strive continually to match the pace of technology innovation, building on decades of our
commitment to industry standards. We were the first to deliver a composable infrastructure and we
continue to meet the latest needs. Our servers offer the foundation for efficient and scalable cloud-
native workloads and infrastructure as code.
• Operational efficiency
HPE ProLiant servers deliver new levels of efficiency to power applications with the right economics
for optimal business value both in the data center and across the edge.


Leverage HPE compute security

Figure 2-28: Leverage HPE compute security

HPE servers offer built-in protections. HPE’s proprietary server management technology, Integrated
Lights Out (iLO), is responsible for much of this protection, which is embedded at the server-board and
firmware level.

• Silicon Root of Trust


The HPE iLO chip acts as a silicon root of trust to ensure protection at the very heart of our compute
modules. The silicon root of trust makes it virtually impossible to corrupt the server boot process (by
inserting malware, a virus, or other compromised code).

A unique digital fingerprint of the iLO firmware is embedded in each iLO chip at the factory. At startup,
the iLO chip confirms that the iLO firmware matches this digital fingerprint, verifying the system BIOS,
iLO firmware, and other essential firmware types at server startup. This process protects firmware
from compromise even by attackers with physical access to the server.

• Firmware runtime validation


More than a million lines of firmware code run before an operating system even starts, making it
essential to confirm that server firmware is free from malware or compromised code. From the
moment the server is powered on, HPE iLO periodically scans the server’s firmware to verify its
ongoing integrity.
• Authenticated firmware update
For further protection against compromised code, all firmware types that can be flashed by iLO are
digitally verified before installation (firmware “flashing” refers to writing to or upgrading the firmware).
Flashable firmware types include the iLO firmware, system BIOS, CPLD, and Innovation engine
firmware. The authenticated firmware update feature protects firmware from attackers without
physical access to the server (who may attempt to execute malicious code remotely).
• Secure Recovery
If firmware fails the validation or authentication step, the system automatically resets the iLO firmware
to the iLO recovery image (if available). This feature is supported with the iLO standard license.
During server startup, iLO also validates the server’s ROM. If the active system ROM fails validation
and a redundant system ROM is deemed valid, then the redundant system ROM becomes active.
If both the active and the redundant system ROM are invalid, then a firmware verification scan starts.
For all servers with the standard iLO license, iLO logs this failure to notify the user, who could then
manually complete the repair. If this server has an advanced iLO license and both the active and
redundant ROM are invalid, iLO will automatically initiate a repair.


HPE Gen11 security innovations

Figure 2-29: HPE Gen11 security innovations

HPE Gen11 servers feature HPE iLO 6, which further enhances security. In previous generations, customers could choose to enhance security by purchasing HPE servers with Secure Device Identity (iDevID), platform certificates, and Trusted Platform Modules (TPMs). HPE Gen11 provides these security enhancements by default.

HPE iDevID and platform certificates are factory installed, preventing alterations to each server’s unique
identity.

iDevID is a factory-provisioned form of identification uniquely bound to a server. With iDevID, a server can
prove its identity across industry standards and protocols that authenticate, provision, and authorize
communicating devices. For example, customers might be implementing Zero Trust Security in their data
centers by applying 802.1X authentication. The HPE server can use its iDevID to pass that authentication
out of the box. This cryptographic identity follows a server for its entire lifecycle, regardless of OS
reimaging.

Platform certificates protect against supply-chain tampering, cryptographically verifying that the server’s
hardware and most of its firmware have not changed since it left the manufacturing plant. To customers,
these features equate to assurance that a newly-installed HPE Gen11 server is automatically a trusted
device.

With HPE Gen11 servers, customers gain an additional layer of protection through the use of TPMs. A
TPM provides hardware dedicated to secure cryptographic operations. TPMs store the private keys for
certificates, making it virtually impossible for unauthorized parties to obtain and misuse those keys. TPM
chips are fully integrated in the new HPE ProLiant servers.
As you would expect, HPE ProLiant servers’ secure foundation extends to secure communications to
services within HPE GreenLake Edge-to-Cloud Platform, including HPE GreenLake for Compute Ops
Management. These communications use robust AES-256 encryption.


TPM and Secure Boot integration with VMware vCenter

Figure 2-30: TPM and Secure Boot integration with VMware vCenter

It is important for you to understand how VMware vCenter reacts to ESXi hosts with TPMs. You can then
ensure that you set up HPE servers with TPMs to work correctly within the VMware environment.
When customers add a server with a TPM as an ESXi host, VMware vCenter Server attempts to “attest”
the host’s integrity all the way from its firmware up through its ESXi software. It also attests hosts with
TPMs when the hosts reboot or reconnect to vCenter Server. For the ESXi host to pass vCenter’s
attestation, it must use UEFI Secure Boot, which is included in the HPE VMware Upgrade Pack. UEFI
Secure Boot ensures that only signed software is loaded at boot time.
The following steps outline at a high level the attestation process for an HPE server acting as an ESXi
host:
• During the first attestation process, the vCenter Server verifies that the host’s TPM was produced by
a known and trusted vendor (which HPE servers’ TPMs are). It has the host create an Attestation Key
(AK), stored in the TPM.
• On every subsequent boot, the ESXi host uses UEFI Secure Boot to load firmware and software
securely and to generate an Attestation Report.
– The UEFI firmware validates the Bootloader against a digital certificate stored in the firmware.
– The Bootloader validates the VMkernel. It takes various measurements of the settings, which it
hashes and stores in the TPM.
– The host sends these measurements to vCenter, which validates them against the host event log
and VIB metadata.
– Meanwhile, the init process runs the Secure Boot Verifier to boot the ESXi OS and ensure that
every component is uncompromised. It validates all the VIB metadata; every VIB digital signature
links to the VMware digital certificate that is stored in the Secure Boot Verifier.
• The host signs the Attestation Report with its AK to prove that a trusted source is sending it. It sends
the report to vCenter, which determines whether the host is running trusted software. vCenter then
assigns the host a “passed” or “failed” attestation status, which your customer (or you) can view in the
vSphere Client.
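If you or the customer want to check attestation status programmatically rather than in the vSphere
Client, the open-source pyVmomi SDK can read it from each host object. The sketch below assumes the
summary.tpmAttestation property (available in the vSphere API from 6.7 onward); the vCenter address
and credentials are placeholders.

    # Sketch: report each ESXi host's TPM attestation status from vCenter.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()   # lab use only
    si = SmartConnect(host="vcenter.example.com",
                      user="administrator@vsphere.local",
                      pwd="password", sslContext=ctx)
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        att = host.summary.tpmAttestation    # None if the host has not attested
        if att:
            print(host.name, att.status, att.time)
        else:
            print(host.name, "no attestation information")
    Disconnect(si)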


Additional HPE ProLiant security capabilities

Figure 2-31: Additional HPE ProLiant security capabilities

HPE ProLiant servers include several other security features, including these.

• Device-level firewall and audit logs


HPE ProLiant servers comply with NIST BIOS Protection Guidelines for Servers, which specifies
requirements for mitigating the execution of malicious or corrupt BIOS code. Among other measures,
the system BIOS and iLO firmware on HPE servers reside on flash chips that the iLO chip protects,
preventing unauthorized access from the host. Only the iLO firmware can write to the iLO and BIOS
flash chips, authenticating any images written to the latter.
HPE ProLiant servers also enable security admins to download audit log files.
• Secure User Data Erase
Following the guidelines outlined in DoD 5220.22-M, HPE offers secure erase functionality for the
internal storage system and hard disks. Secure erase overwrites all block devices, including hard
disks, storage systems attached to the server, and any internal storage that iLO used. For servers
that need to be repurposed or disposed of, this process ensures that all critical data is sanitized by
applying random patterns in a three-pass process. The process is thorough and can take several
hours or even days to complete.
• Multi-factor authentication for iLO
HPE iLO supports two-factor authentication (2FA) to provide additional security; enabling 2FA is
generally wise, particularly if you make connections remotely or outside of your local network.


Consider VMware licensing

Figure 2-32: Consider VMware licensing

As another consideration for designing HPE solutions for VMware, consider VMware vSphere licensing
models to guide your choice of processors.

VMware vSphere versions 7 and 8 require one license per processor on ESXi hosts. However, the
license only covers processors with up to 32 physical cores. If a processor has more than 32 cores, the
customer needs additional licenses for it. Hence, you should generally recommend processors
with a high core count because they allow hosts to support more VMs with fewer total processors. Fewer
processors require fewer licenses, which in turn lowers the solution’s cost.

But VMware vSphere+ requires one license per core, with a minimum of 16 cores per CPU. When working
with the vSphere+ per-core license, focus only on selecting processors that best serve the needs of
the workloads they will run.
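The arithmetic behind these two models is easy to script. The following sketch simply encodes the rules
described above (a per-CPU license covers up to 32 cores in vSphere 7/8; vSphere+ counts every core
with a 16-core minimum per CPU); the processor counts are illustrative.

    # Worked example of the two licensing models described above.
    import math

    def vsphere_per_cpu_licenses(cpus: int, cores_per_cpu: int) -> int:
        # One license per CPU, plus another for each additional 32-core block
        return cpus * math.ceil(cores_per_cpu / 32)

    def vsphere_plus_core_licenses(cpus: int, cores_per_cpu: int) -> int:
        # One license per core, counting at least 16 cores per CPU
        return cpus * max(cores_per_cpu, 16)

    # A host with two 48-core processors:
    print(vsphere_per_cpu_licenses(2, 48))    # 4 (each 48-core CPU needs 2 licenses)
    print(vsphere_plus_core_licenses(2, 48))  # 96 per-core licenses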


Designing HPE ProLiant DL servers for VMware


In this section, you will learn more about positioning HPE ProLiant DL servers for various VMware use
cases.


Reviewing the HPE ProLiant DL Portfolio (AMD and Ampere-based servers)

Figure 2-33: Reviewing the HPE ProLiant DL Portfolio (AMD and Ampere-based servers)

HPE ProLiant servers provide customers with proven building blocks to accelerate digital transformation
and offer validated solutions that cover the range of customers’ workload requirements.
The latest Gen11 AMD- and Ampere-based HPE ProLiant servers power your customers’ varying
compute workloads:
• HPE ProLiant DL325 features higher-performing CPUs and increased storage performance to lower
TCO for software-defined compute.
• HPE ProLiant DL345 reduces licensing costs and delivers expanded storage and enhanced I/O
bandwidth for software-defined storage (SDS) and content delivery network (CDN) workloads.
• HPE ProLiant DL365 offers maximum core density and high memory capacity in a small 1U package,
ideally suited for electronic design automation (EDA) and VDI workloads.
• HPE ProLiant DL385 offers more of everything (performance and processing) with maximum GPU
support for demanding workloads, like AI/ML and Big Data Analytics.
All HPE Gen11 servers offer the following advantages:
• 50% more cores (96 compared to 64 in AMD 3rd Generation EPYC processor)
• Higher performance (PCIe Gen5 with twice the I/O bandwidth as AMD 3rd Generation EPYC
processor)
• Double Data Rate 5 (DDR5) with 125% more memory bandwidth (compared to AMD 3rd Generation
EPYC processor)
• Improved CPU performance
For customers, all of this adds up to more value from their HPE investment: faster insights, greater
application reliability, and fewer service interruptions. The new Gen11 portfolio also lowers costs by
processing more data in a smaller footprint, which means less space in data center facilities.


Reviewing the HPE ProLiant DL Portfolio (Intel-based servers)

Figure 2-34: Reviewing the HPE ProLiant DL Portfolio (Intel-based servers)

HPE ProLiant DL Gen11 servers, based on 4th Gen Intel® Xeon® Scalable processors, power customers’
compute workloads:

• HPE ProLiant DL320 is an entry-level model optimized for price performance and edge workloads. As
compared to previous generations, the Gen11 model offers higher performing CPUs, better storage
performance, and denser GPUs. It can therefore work well with more AI and VDI at the edge.
• HPE ProLiant DL360 is a great choice for many enterprise workloads. The Gen11 model offers
greater CPU and memory density, as well as expanded storage and I/O bandwidth, which accelerate
performance for enterprise compute workloads.
• HPE ProLiant DL380 is a high-performance server, ideal for diverse workloads. Its maximum core
density and high memory capacity, paired with dense storage, enables this server to accelerate a
wide variety of workloads.
• HPE ProLiant ML350 is a powerful tower-based productivity workhorse for Remote Office/Branch
Office (ROBO) and virtualization workloads.
Gen11 servers help to improve user experience for many workloads:

• Virtualization/VDI workloads
– More cores and internal cache to run VM software
– Memory bandwidth to support all virtual machines
– More PCIe lanes to reduce latency
– Data security without impacting server performance
• Data management and big data workloads
– Large number of storage devices for management software and data storage
– More PCIe lanes and network bandwidth to reduce latency and ensure high throughput


Positioning HPE ProLiant DL servers based on workload

Figure 2-35: Positioning HPE ProLiant DL servers based on workload

The figure provides a guideline for positioning HPE ProLiant DL servers, which are built to deliver
optimized workloads and solutions. The Workload Characteristics row demonstrates typical requirements
for the workload in question. The “lead with” platforms shown under each workload are well-suited to
meet that workload’s particular characteristics.
For instance, a workload like transcoding/visualization (a subset of Compute for AI) demands very high
processing power. This workload will benefit from being run on a 2P server, which supports high-
performance CPUs. It can also benefit from GPU acceleration. Hence, HPE recommends an
HPE ProLiant DL380 (specifically, DL380a) or DL385, which are both 2P/2U servers that support up to
four double-wide GPUs in Gen11 (compared to three in Gen10 Plus).

Gen11 versus Gen10 and Gen10 Plus


When starting a customer conversation, lead with servers from the HPE ProLiant DL Gen11 portfolio.
Customers particularly benefit from Gen11’s increased power when they need to support AI/ML, data
analytics, other data solutions, and VDI. Gen11 can also increase performance for hybrid cloud and
container platforms.
You can still sell Gen10 and Gen10 Plus servers to price-sensitive customers, such as small-to-medium
businesses (SMB) and transactional customers, with less demanding workloads. If customers have
limited power capacity per rack and no ability to increase that power, Gen10 or Gen10 Plus might also
provide a better fit.


Introducing HPE ProLiant for vSphere Distributed Services Engine

Figure 2-36: Introducing HPE ProLiant for vSphere Distributed Services Engine

Over the rest of this topic, you will examine an exciting new HPE ProLiant DL offering, optimized for
VMware vSphere and VMware NSX.

HPE has developed the next generation of virtualized architecture built on HPE ProLiant DL380 Gen10
Plus, VMware vSphere and VMware NSX, and AMD Pensando’s Data Processing Unit (DPU)-enabled
architecture: HPE ProLiant with vSphere Distributed Services Engine. This HPE solution, factory-
integrated with vSphere Distributed Services Engine, supports any workload that currently runs on
vSphere 8 Enterprise Plus Edition or later. Its key innovation consists in offloading overhead services to
the DPU. In this way, a host retains more core cycles for running VMs and the VMs’ production
workloads.

HPE ProLiant for vSphere Distributed Services Engine also provides improved network security
capabilities by way of its air-gapped architecture. Additionally, with this offering, customers can meet (or
even improve upon) their IT sustainability goals, reducing their footprint and energy use while optimizing
operations.

Pre-configured bundles
HPE ProLiant for vSphere Distributed Services Engine is currently based on HPE ProLiant DL380 Gen10
Plus servers. HPE makes ordering the solution simple with three pre-configured bundles, each pre-optimized
for popular workloads or verticals:

• Small for edge deployments, common in:


– Retail
– Manufacturing
• Medium for VDI and rendering, common in:
– Financial Services Industry (FSI)
– Healthcare and Life Sciences (HLS)
– Communications, Media, and Entertainment (CME)


• Large for database, data analytics, and streaming/broadcast, common in:


– FSI
– HLS
– CME
The table shows the hardware in the pre-configured bundles as of the release of this course.
Table 2-1: HPE ProLiant for vSphere Distributed Services Engine
Small: HPE ProLiant DL380 Gen10 Plus 6326 Small 12TB Server with vSphere Distributed Services Engine
• Processor type: 6326 (16-core, 2.9GHz, 185W)
• Memory: 384GB (12 x 32GB) DDR4 2R 3200MT/s

Medium: HPE ProLiant Gen10 Plus 6342 Medium 20TB Server with VMware vSphere Distributed Services Engine
• Processor type: 6342 (24-core, 2.8GHz, 230W)
• Memory: 512GB (16 x 32GB) DDR4 2R 3200MT/s

Large: HPE ProLiant Gen10 Plus 6338 Large 38TB Server with VMware vSphere Distributed Services Engine
• Processor type: 6338 (32-core, 2.0GHz, 205W)
• Memory: 1024GB (16 x 64GB) DDR4 2R 3200MT/s

Common to all three bundles:
• Processor number: 2 processors
• DPU: 1 Pensando DSC245v2 10/24G 2p 32GB SPL Card
• Network controller: 1x Marvell QL41132HLCU 10GbE 2-port SFP+ Adapter
• Boot controller: HPE NS204i-p x2 Lanes NVMe PCIe3 x8 OS Boot Device
• PCIe slots: 3 PCIe: x8/x16/x8
• Power supply: 2 x 1600W
• Fans: HPE DL38X Gen10 Plus Maximum Performance Fan Kit
• Management: HPE iLO 5
• Security: Trusted Platform Module (TPM)
• Form factor/chassis/rail kit: 2U Rack/8SFF/SFF Easy Install

Licensing
HPE ProLiant for vSphere Distributed Services Engine comes with the vSphere ESXi OS pre-installed.
However, it comes only with a 60-day evaluation license, which is activated when the server first boots
up. To continue functioning, the server must receive a valid license within the 60-day timeframe.
You should recommend a 2-processor VMware vSphere Enterprise Plus license key. This license will
enable the VMware vSphere Distributed Services Engine functionality on the AMD Pensando DPU, as
well as the host CPUs.


Why customers need data processing units (DPUs)

Figure 2-37: Why customers need data processing units (DPUs)

In this figure, you see that between 22% and 78% of CPU cycles are spent on overhead in a typical
VMware vSphere environment. When you offload that overhead to a DPU, you free up those CPU cycles
for applications’ use.
The result? Better application performance, which is especially useful for customers’ AI/ML and other
similarly demanding workloads.


Why the AMD Pensando DPU

Figure 2-38: Why the AMD Pensando DPU

HPE ProLiant for vSphere Distributed Services Engine uses the AMD Pensando DPU.
The Pensando DPU is built on the P4 Programmable Architecture and based on the same technology
that hyperscalers use. The Pensando DPU ensures an accelerated data path with support for cloud,
compute, network, storage, and security services at cloud scale with minimal latency, jitter, and power
requirements.
Based on a 7-nanometer (nm) process that can support 400G throughput, the AMD DPU secures its
position as leading the competition by “at least a generation.” (Source: Fritts, Harold, “AMD Pensando
DPU Support for VMware vSphere 8 Announced,” StorageReview, 30 Sept 2022.)


HPE ProLiant for vSphere Distributed Services Engine

Figure 2-39: HPE ProLiant for vSphere Distributed Services Engine

HPE ProLiant delivers unique value to VMware vSphere Distributed Services Engine, offering
continuous zero-trust protection from silicon to cloud, plus optional data protection services for backup
and recovery with ransomware protection.
HPE ProLiant for vSphere Distributed Services Engine is pre-optimized and ready to deploy. It includes
the integrated AMD Pensando DPU to offload services from CPU. The HPE Custom Image for VMware
vSphere ESXi 8.0 and the HPE Service Pack for ProLiant are pre-installed on the server.
Additionally, HPE and HPE Partners have the expertise to help customers plan, design and deploy this
solution with HPE Financial Services available to help with budget. You can deliver this solution as a
traditional, up-front sale or as part of an HPE GreenLake offering.
As shown in this figure, in its initial launch, the solution will offload Network and NSX Services for ESXi
hosts. In future releases, offloading Storage and Host Management Services might also be supported.


Key benefits of HPE ProLiant for vSphere Distributed Services Engine

Figure 2-40: Key benefits of HPE ProLiant for vSphere Distributed Services Engine

Key benefits of HPE ProLiant for vSphere Distributed Services Engine include:

• Better performance
– The solution offloads VMware NSX networking and vSphere networking services from the CPU to
the DPU.
– With extra CPU cycles now available for them, virtualized workloads perform better.
• Continuous security
– The server supports zero-trust protection from silicon to cloud. (Note that the HPE Silicon
Root of Trust covers the server’s CPUs. It does not cover the AMD Pensando DPU because AMD
uses its own security schema.)
– HPE offers an uncompromised and trusted supply chain.
– If customers opt for the optional HPE Data Protection solution, they can protect against ransomware
with fast, reliable backup and recovery and cost-effective archive.
• Reduced complexity
– Customers can simplify DPU lifecycle management by using existing tools and
workflows.
– Customers can consolidate more workloads per CPU, potentially reducing the number of ESXi
hosts.
– Admins use HPE iLO and a single management port to manage the server host and the DPU.
• Single, secure operating model for traditional and cloud native
– With this solution, customers can manage virtualized and containerized workloads at the edge, on
premises or in the cloud.
• Lower energy use, better sustainability, and lower TCO
– Workload consolidation delivers power savings and lowers TCO.
– Customers can minimize energy usage, lower cooling costs, and run more apps on a smaller
footprint. In this way, they can increase sustainability and lower TCO.
• Future-proof architecture
– This solution is flexible and able to accommodate evolving workloads.
– This solution supports demanding next-gen applications and heterogeneous infrastructure, as well
as provides a baseline for a modern data center architecture.


• Easier to buy (with three sizes pre-optimized for popular workloads/verticals), deploy, and operate
– HPE offers worry-free, flexible, turnkey solutions.
– The VMware software stack (VMware ESXi) is pre-installed.
– HPE Pointnext services can aid in installing and deploying the systems.
– As you will learn later in this module, HPE offers many VMware plug-ins that simplify managing
HPE servers.


RAs for VMware solutions on HPE ProLiant

Figure 2-41: RAs for VMware solutions on HPE ProLiant

For more information on designing HPE ProLiant for VMware solutions, consult Reference Architectures
such as these:

• HPE Reference Architecture for HPE ProLiant with VMware vSphere Distributed Services Engine
• HPE Reference Architecture for VMware Cloud Foundation 4.4 on HPE ProLiant DL Servers

You can find these and other Reference Architectures at: https://www.hpe.com/docs/reference-
architecture


Designing HPE Synergy solutions for VMware


In this section, you will turn your attention toward designing HPE Synergy solutions for VMware.


Designing ESXi clusters for HPE Synergy

Figure 2-42: Designing ESXi clusters for HPE Synergy

When designing an ESXi cluster for HPE Synergy, make sure to consider redundancy requirements as
you distribute nodes in the same cluster across frames.
For example, your solution might have six clusters: two with three modules and four with six modules. The
figure above illustrates how you could distribute those clusters across three frames. Distributing the
nodes evenly minimizes the impact if a full frame fails. In this example, a frame failure will leave each
cluster at two-thirds capacity. Of course, sometimes clusters will have a number of hosts that is not evenly
divisible by the number of Synergy frames. But you should still strive to distribute the hosts as evenly as
possible.
You should also clarify the customer’s requirements. Should the cluster operate without degradation of
services if one compute module fails or if an entire frame fails? In the latter case, you might need to
expand the cluster beyond what you would otherwise expect. For example, a cluster might require five
hosts to fully support the VMs assigned to that cluster. If the customer requires simple N+1 redundancy
(up to one compute module can fail), you could design a six-host cluster with two hosts in each of three
frames. However, what if the customer requires services to run without degradation if an entire frame
fails? With the design shown in the figure, a failed frame would leave only four hosts in the cluster. If the
customer absolutely requires at least five hosts at all times, you would need to add another frame and
more compute modules. For example, you might create a seven-host cluster with one or two hosts in
each of four Synergy frames. Then a frame could fail and leave the desired number of hosts still up.
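This distribution and frame-failure check is simple to prototype. The sketch below encodes the reasoning
from this section: spread hosts as evenly as possible across frames, then assume the most heavily loaded
frame fails. The cluster sizes are the ones used in the example above.

    # Sketch: distribute cluster hosts across Synergy frames and check
    # how many hosts survive the loss of one full frame.
    def distribute_hosts(hosts: int, frames: int) -> list[int]:
        base, extra = divmod(hosts, frames)
        # The first `extra` frames each carry one additional host
        return [base + (1 if i < extra else 0) for i in range(frames)]

    def hosts_after_frame_failure(layout: list[int]) -> int:
        # Worst case: the frame carrying the most hosts fails
        return sum(layout) - max(layout)

    layout = distribute_hosts(7, 4)            # the seven-host cluster above
    print(layout)                              # [2, 2, 2, 1]
    print(hosts_after_frame_failure(layout))   # 5 hosts remain, as required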


Leveraging HPE Synergy templates

Figure 2-43: Leveraging HPE Synergy templates

You can automate provisioning and simplify firmware updates by leveraging HPE Synergy templates.
These templates allow admins to define standardized settings, which they can dynamically apply to bays
in the Synergy frames to compose and recompose resources.
A server profile template (SPT) applies to compute modules. It specifies settings such as:
• Boot settings
• BIOS settings
• Local and storage settings
• Network settings
• Firmware assignment
HPE Synergy also supports logical interconnect groups (LIGs), which define network settings for
interconnect modules.
Consider an example of how these templates can speed provisioning. Suppose a customer currently has
one rack with three Synergy frames and now wants to add another rack. Review the steps below to learn
how templates facilitate scaling the solution.

Step 1
HPE Synergy Systems feature a Composer and redundant Composer, which manage multiple frames. To
add the new frames, you open the management ring on the existing rack and easily integrate the frame
link modules on the three new frames into the ring. The existing Composer will now manage both racks.
You could move the redundant Composer to a frame in the new rack for rack-level redundancy.

Step 2
You power everything on, and Composer auto-discovers the new frames.

Step 3
Admins can apply existing templates to the new frames, which quickly establishes the correct connectivity
and network settings.

Step 4
The logical enclosure settings are applied in tandem with firmware updates. Within a few hours and with
minimal admin work, the new Synergy frames are available.

Step 5
Admins can apply the existing SPTs to compute modules in the new frames to quickly scale up the
desired workloads.
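If admins prefer scripting over the Composer GUI, the same templates are reachable through the HPE
OneView REST API. Below is a minimal sketch that lists the server profile templates on an appliance; the
address, credentials, and X-API-Version value are placeholders, so check the API version your appliance
supports.

    # Sketch: list server profile templates via the HPE OneView REST API.
    import requests

    OV = "https://composer.example.com"        # placeholder Composer address
    HDRS = {"X-API-Version": "4000", "Content-Type": "application/json"}

    # Authenticate and attach the session token to later requests
    login = requests.post(f"{OV}/rest/login-sessions", headers=HDRS, verify=False,
                          json={"userName": "administrator", "password": "password"})
    HDRS["Auth"] = login.json()["sessionID"]

    templates = requests.get(f"{OV}/rest/server-profile-templates",
                             headers=HDRS, verify=False).json()
    for spt in templates.get("members", []):
        print(spt["name"], spt.get("serverHardwareTypeUri"))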


Technical guidance for VMware solutions on HPE Synergy

Figure 2-44: Technical guidance for VMware solutions on HPE Synergy

For additional reference material, consult HPE Synergy Software Releases at:
https://techhub.hpe.com/us/en/enterprise/docs/index.aspx?doc=/eginfolib/synergy/sw_release_info/index.
html


Learning checks
1. You are advising a customer about how to deploy VMware vSphere on HPE Synergy. What is a
simple way to ensure that the ESXi host has the proper HPE monitoring tools and drivers?
a. Provision the hosts with the HPE custom image for ESXi.
b. Use Insight Control server provisioning to deploy the ESXi image to the hosts.
c. Manage the ESXi hosts exclusively through Synergy, rather than in vCenter.
d. Customize a Service Pack for ProLiant and upload it to Synergy Composer before using
Composer to deploy the image.
2. How does HPE ProLiant for vSphere Distributed Services Engine improve performance for
workloads, as compared to other HPE ProLiant servers?
a. By enabling Intelligent System Tuning (IST)
b. By offloading overlay network processing to a Pensando DPU
c. By providing slots for up to 8 processors
d. By supporting hardware-accelerated virtualization


Activity
You will now have the opportunity to practice applying what you have learned about designing VMware
on HPE solutions.


Activity 2.1

Figure 2-45: Activity 2.1

For this activity, you will return to the Financial Services 1A customer scenario and size an environment
for one of the company’s clusters. You can choose an HPE ProLiant DL or HPE Synergy solution, based
on what you think is best for this customer.
This cluster supports a variety of Web applications and services for the customer's website and mobile
banking apps. The customer has told you that this cluster must support 200 VMs with this per-VM profile:
• 4 vCPUs
• 16 GB RAM
• 100 GB disk

Task 1
What additional information do you need to collect to properly size the deployment?

__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________


__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________


Task 2
In response to your questions, the customer has indicated that a 4:1 vCPU-to-core oversubscription ratio
is acceptable. The customer wants to fully allocate memory to VMs without oversubscription. The customer
wants N+1 redundancy for the cluster (one host can fail without impacting performance). You used HPE
CloudPhysics to discover this information:
• VM count, vCPUs, and allocated RAM given by customer are confirmed as correct
• Total IOPS = 2034 write; 4325 read
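Before you size the cluster in the tool, a back-of-envelope calculation helps sanity-check the scale. The
sketch below applies the stated requirements; the per-host capacities (two 32-core CPUs and 1 TB of
RAM per host) are illustrative assumptions, and the sizing tool remains the authoritative source.

    # Back-of-envelope sizing from the customer's requirements.
    import math

    vms, vcpus_per_vm, ram_per_vm_gb = 200, 4, 16
    vcpu_to_core = 4                             # approved 4:1 oversubscription
    cores_per_host, ram_per_host_gb = 64, 1024   # assumed host configuration

    cores_needed = vms * vcpus_per_vm / vcpu_to_core     # 200 cores
    ram_needed_gb = vms * ram_per_vm_gb                  # 3200 GB (no oversubscription)

    hosts_for_cpu = math.ceil(cores_needed / cores_per_host)
    hosts_for_ram = math.ceil(ram_needed_gb / ram_per_host_gb)
    hosts = max(hosts_for_cpu, hosts_for_ram) + 1        # +1 host for N+1 redundancy
    print(f"{hosts} hosts: {cores_needed:.0f} cores and {ram_needed_gb} GB RAM required")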
Create a BOM for this cluster. Use SSET, which you can find by following the steps below.
3. Access https://psnow.ext.hpe.com

Figure 2-46: Task 2: Products & Solutions Now

4. Log in with your credentials.


5. Click the arrow next to Tools & Resources.

Figure 2-47: Task 2: Access Tools & Resources


6. Select the check box next to 4. Sizing.

Figure 2-48: Task 2: Select Sizing

7. Scroll down and select the HPE SSET (Solution Sales Enablement Tool).

Figure 2-49: Task 2: Select HPE SSET

8. If prompted, log in again.


9. Select + New Guidance.

Figure 2-50: Task 2: Start New Guidance

10. In the list of HPE Enterprise Solutions, select VMware ESXi on HPE Platform.

Figure 2-51: Task 2: Choose sizing from list

11. Click Start.


12. Use Basic guidance. Fill in the information that the customer provided. Select the desired
platform. (Indicate no preference for storage at this point.) Click Review.
13. Export the BOM.


Task 3
1. Take notes on how you will present the BOM and its benefits to the customer. Explain how the
solution will help the customer solve their issues.

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________


2. Make a list of best practices to recommend to the customer.

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________


Understanding VCF on HPE


You will now learn a bit more about VMware Cloud Foundation (VCF) architectures and why customers
benefit from deploying VCF on HPE infrastructure.


VCF SDDC Manager and domains

Figure 2-52: VCF SDDC Manager and domains

VCF includes many of the same components familiar from VMware vSphere, including ESXi hosts and
vCenter Servers. NSX, about which you will learn more in Module 4, also forms a key component of VCF.
VCF adds SDDC Manager to configure and manage the logical infrastructure. SDDC Manager also
automates tasks such as provisioning hosts.
VCF domains are used to create logical pools across compute, storage, and networking. VCF includes
two types of domains: the management domain and virtual infrastructure workload domains.

Management domain
The management domain is created during the VCF “bring-up,” or installation, process. The management
domain contains all the components that are needed to manage the environment, such as one or more
instances of vCenter Server, the required NSX components, and the components of the VMware vRealize
Suite. The management domain uses vSAN storage.
You can set up availability zones to protect the management domain from hosts failing. Regions enable
you to locate workloads near users. Regions help you apply and enforce local privacy laws and
implement disaster recovery solutions for the SDDC.

Virtual infrastructure workload domain


Virtual infrastructure (VI) workload domains are reserved for user workloads. A workload domain consists
of one or more vSphere clusters. Each cluster must include a minimum of three hosts and can scale up to
a maximum of 64 hosts. (Check the current version of VCF for up-to-date information about scalability.)
SDDC manager automates the creation of the workload domain and the underlying vSphere cluster(s).
Within a cluster, all the servers must be homogeneous. That is, they must be the same model and type. If
the VI domain contains more than one cluster, however, the servers in the different clusters do not need
to be homogeneous. For example, if the domain has two clusters, the servers in cluster 1 must be the
same model and type, and the servers in cluster 2 must be the same model and type. However, the
servers in cluster 1 do not need to be the same model and type as the servers in cluster 2.
When the first VI workload domain is created, SDDC Manager creates a vCenter Server and an NSX
Manager, which are placed in the management domain. For each additional VI workload domain, another
vCenter Server is deployed to the management domain. You can choose to have the additional VI
workload domain share an existing NSX Manager cluster or deploy a new NSX Manager cluster.


VCF architecture: Standard model

Figure 2-53: VCF architecture: Standard model

VMware supports two architecture models: standard and consolidated.


Most installations will use the standard model. As this figure shows, the standard model includes a
dedicated management domain, which hosts only management workloads.
The standard model also includes at least one VI workload domain, which hosts the user workloads. As
mentioned earlier, one vCenter Server is required for each VI workload domain.
VMware recommends companies use the standard model because it separates management workloads
from user workloads and provides greater flexibility and scalability.
It is important to know that you do not select the architecture model you will use when you bring up VCF.
For every VCF deployment, you start by deploying the management domain. If you are using the
standard architecture, you continue by setting up a VI workload domain and deploying the user workloads
in this domain.

Guidelines for deploying VCF


Below are some general guidelines for deploying VCF in a standard model:
• Management domain:
– vSAN storage
– One vCenter Server
• VI workload domain:
– Deploy one vCenter Server for each VI workload domain.


– When setting up a cluster, ensure all servers in the cluster are the same model and type. As noted
earlier:
   ◦ Servers in cluster 1 must be the same model and type
   ◦ Servers in cluster 2 must be the same model and type
   ◦ Servers in cluster 1 do not need to be the same as servers in cluster 2
– You can use SAN arrays to enhance performance for a VI workload domain. Supported storage
includes vSAN, vVols, NFS, or VMFS on FC.
– For vSAN-backed VI workload domains, vSAN Ready Node configurations are required. (You will
learn more about HPE vSAN Ready Nodes in Module 3.)
Customers also require the necessary VMware vSphere, vSAN, and NSX licenses to support the specific
VI workload domain deployment.


VCF architecture: Consolidated model

Figure 2-54: VCF architecture: Consolidated model

The consolidated model is designed for companies that have a small VCF deployment or special use
cases that do not require many hosts. For example, customers might use a consolidated model for a
demo, lab, or testing environment. With the consolidated model, both management and user workloads
run in the management domain. You manage the VCF environment from a single vCenter Server. You
can use resource pools to isolate the management workloads and the user workloads.
Remember that when you bring up VCF, you do not select the architecture model. No matter which
architecture model you are using, you first deploy and bring up the management domain. If you are
using a consolidated architecture, you then deploy the user workloads in that management domain, using
resource pools to isolate them from the management workloads.
If you later want to migrate a consolidated architecture to a standard architecture, the process is fairly
straightforward. You create a VI workload domain and then move the workload VMs to the new domain.


Why VCF on HPE Synergy?

Figure 2-55: Why VCF on HPE Synergy?

Over the next pages you will learn about deploying VCF on HPE Synergy. HPE Synergy provides the
ideal foundation for VCF.
As you see here, Synergy eliminates Top of Rack (ToR) switches by bringing networking inside the frame;
in this way it greatly reduces infrastructure cost and complexity. Synergy Virtual Connect (VC) modules
provide profile-based network configuration, designed for server admins. Because server admins no
longer need to wait for network admins to reconfigure the infrastructure, they can move server profiles
from one Synergy compute module to another as required, making infrastructure management simpler
and more flexible. HPE Synergy also stands out from other solutions because it disaggregates storage
and compute. In other words, rather than each server having its own local drives, forcing companies to
scale compute and storage together, Synergy has separate compute modules and storage modules.
Admins can use profiles to flexibly connect or disconnect compute modules from drives on the storage
modules. Because Synergy provides the same flexibility and profile-driven management to both
virtualized and bare metal workloads, customers can consolidate traditional data center applications and
their VCF-based private cloud on the same infrastructure, reducing management complexity and costs.
Another key benefit of deploying VCF on HPE lies in the deep integrations between HPE and VMware,
including the HPE OneView Connector for VCF. These integrations help customers simplify, automate,
and orchestrate management; you will learn more in the next section.


RAs for VCF on HPE

Figure 2-56: RAs for VCF on HPE

For more information on designing HPE solutions for VCF, consult these Reference Architectures.

• HPE Reference Architecture for VMware Cloud Foundation 4.4 on HPE Synergy
• HPE Reference Architecture for VMware Cloud Foundation 4.4 on HPE ProLiant
• HPE Reference Architecture for VMware Cloud Foundation 4.3.1 on HPE ProLiant as Management
and HPE Synergy as Workload Domains
You can find these and other Reference Architectures at: https://www.hpe.com/docs/reference-
architecture


HPE solutions to enhance management and monitoring


In this section, you will consider how to use HPE solutions to enhance management and monitoring.


HPE compute management portfolio

Figure 2-57: HPE compute management portfolio

The HPE compute management portfolio includes three key solutions:


• HPE iLO
HPE iLO is embedded server management that enables customers to securely configure, monitor,
and update HPE servers from anywhere.
• HPE OneView
HPE OneView helps automate IT operations and simplifies infrastructure lifecycle management
across compute, storage, and networking.
• HPE GreenLake for Compute Ops Management
HPE GreenLake for Compute Ops Management delivers unified operations as a service from edge to
cloud. With it, customers can manage the lifecycle of HPE ProLiant servers across many sites.


HPE plug-ins to simplify management

Figure 2-58: HPE plug-ins to simplify management

HPE provides several plug-ins for VMware integration. The figure summarizes those plug-ins, and you will
learn more about each over the rest of this topic.
As indicated in the name, most of these integrations rely on HPE OneView. To benefit from them,
customers must use HPE OneView to manage their HPE servers. HPE OneView is built into HPE
Synergy solutions, but for other servers, customers will need to deploy HPE OneView and manage their
servers within it.
Please note that VMware recently re-branded the vRealize Suite as Aria; however, this rebranding was
still underway as of the release of this course. For example, the VMware Compatibility Guide was still
using vRealize names when this course was being developed. The HPE plug-in names also still use the
vRealize name, and this course will refer to the plug-ins by their precise name. If you need details about
which HPE plug-in versions are supported with which VMware management products, check the VMware
Compatibility Guide (Management and Orchestration section).


HPE OV4VC benefits

Figure 2-59: HPE OV4VC benefits

The HPE OneView for vCenter (OV4VC) plug-in brings the power of HPE OneView to VMware
environments. With OV4VC, VMware admins can continue to use the VMware interfaces with which they
are familiar but gain access to HPE’s deep management ecosystem. The single-console access simplifies
administration. IT can further reduce their efforts by automating responses to hardware events.
With this plug-in, customers can launch HPE management tools directly from vCenter and proactively
manage changes with detailed relationship dashboards that extend across the virtual and physical
infrastructure. Through automation, IT can deliver on-demand server and storage capacity.
With OV4VC, customers can achieve a more stable and reliable environment with automation that
enables online firmware updates and workload deployment.
As of the release of this course, OV4VC 11.3 is the most recent version. Version 11.x supports VMware
vSphere 8 and updated VMware tools (version 11.0.5). As you will see in more detail later, HPE OV4VC
can help customers automate firmware and device updates together with ESXi OS image updates.
OV4VC 11.x has made these updates even easier by supporting multiple add-ons with one SPP and
improving the ESXi OS deployment feature. In earlier versions, OV4VC only supported deploying an ESXi
image to Synergy compute modules in environments using HPE Synergy with VC modules. OV4VC 11.x
makes this feature available for all HPE ProLiant DL and HPE Synergy deployments.
More key benefits of OV4VC are described below.

Simplify operations and increase productivity


OV4VC simplifies management by integrating the physical and virtual infrastructure, allowing:
• Comprehensive health monitoring and alerting
• Firmware and driver updates

Deploy faster
OV4VC simplifies on-demand provisioning. Template-based tools let customers:
• Leverage the HPE OneView automation engine
• Quickly and easily create or expand a VMware cluster


Impose configuration consistency


OV4VC integrates directly into VMware consoles for a consistent experience. Admins can use familiar
VMware tools for HPE management tasks. They can also launch HPE tools directly from vCenter.

Increase visibility into environment


OV4VC provides customers with non-disruptive insight into the complete (virtual and physical)
environment.


HPE OV4VC: Server only integration

Figure 2-60: HPE OV4VC: Server only integration

HPE OV4VC 9.6 (and below) supports both server and storage integration in vCenter. With HPE OV4VC
10.0 and above, however, the plug-in includes support for servers only.
You can download OV4VC from the Software Center.
Storage integration is provided in the HPE Storage Integration Pack for VMware. You will learn more
about this plug-in later in this course.
Note: When upgrading from OV4VC 9.6, be aware that version 9.x backups cannot be restored using
version 10.0 and later.


HPE OV4VC licensing and managed devices

Figure 2-61: HPE OV4VC licensing and managed devices

You deploy HPE OV4VC as a VM. The OV4VC VM must have access to vCenter and HPE OneView, and
you must register it with vCenter. All vCenter clients connected to this vCenter Server can then access
the OV4VC views and features.

Licensing
OV4VC can be licensed with OneView standard or advanced licenses:
• Standard—Supports basic health and inventory features
• Advanced—Supports advanced features such as server profiles.
HPE Synergy includes the Advanced license, so no additional license is required when using HPE
Synergy.

Managed devices
With OV4VC 9.4 and above, all servers, enclosures, and Virtual Connect devices must be managed by
HPE OneView. OV4VC will report an error when trying to manage non-OneView managed devices. If
companies want to use OV4VC to manage devices that are not managed by OneView, they can use
OV4VC 9.3 (rather than upgrading to a later version).
As of the release of this course, supported devices include:
• HPE ProLiant BladeSystem c-Class
• HPE ProLiant 100, 300, 500, 700, or 900 series ML or DL servers
• HPE Synergy D3940 Storage Module
• HPE Synergy 12Gb SAS Connection Module
• HPE Synergy Server


HPE OV4VC features

Figure 2-62: HPE OV4VC features

Only the basic monitoring and inventory features are available with a standard HPE OneView license.
Make sure that devices have advanced licenses (or use HPE Synergy) to obtain the full features,
summarized in the figure above. The following provides an overview of some of the key features of
OV4VC.

OV4VC Views
HPE OV4VC provides links to iLO interfaces on HPE servers and to HPE OneView. The plug-in also
provides a wide range of information about the infrastructure directly to vCenter in an HPE Server
Hardware tab. When admins select this tab, they can choose from several views:
• Hardware Detail displays information about server processors, memory, and physical adapters
• Firmware Inventory shows the firmware version installed on every server component
• Monitoring and Alerts displays information about security-related activities and events
• Port Reporting lists network adapters and helps admins correlate physical and virtual settings

Enhanced Link Mode


Enhanced Link Mode allows access to the Network Diagram, which displays a complete view of
connections between virtual switches, server adapters, VC modules, and uplinks to help admins set up
and troubleshoot networking.
For HPE Synergy solutions, this mode also provides an Enclosure Summary where admins can find
information about the enclosure where the server is installed.

OS deployment
In earlier versions, OV4VC could only deploy the ESXi OS to HPE Synergy servers in VC-based
environments. With OV4VC 11.1 and later, customers can also deploy the OS to HPE rack servers and
HPE Synergy servers in non-VC environments.


Cluster-related features
When the infrastructure that underlies VMware consists of an HPE Synergy or HPE BladeSystem or any
solution that uses Virtual Connect (VC) modules, customers can import VMware clusters into OneView.
Admins can then implement cluster-aware maintenance on the clusters from HPE OneView. Integrating
management within HPE OneView enables admins to automate tasks that would otherwise require
hopping between tools.
For example, admins can use HPE OneView cluster management to shrink a cluster as well as:
• Check cluster members' consistency with a template
• Apply cluster-aware firmware updates (i.e., cluster remediation)

Cluster remediation
Upgrading the hosts is as simple as choosing to remediate the deviation from the SPT. OV4VC upgrades
each host in a cluster one at a time, first moving that host's VMs to another host to avoid disruption.

Although OV4VC only supports cluster remediation for HPE Synergy compute modules in a VC
environment, OV4VC also provides another way for customers to automate driver and firmware
updates—if the customer uses VMware vSphere Lifecycle Manager (vLCM). You will examine this option,
which applies to all HPE OneView managed servers, a bit later.

Proactive HA
HPE OV4VC enhances VMware's HA capabilities to prevent downtime. When selected as a partial failure
provider in the cluster's HA settings, OV4VC monitors hosts' health and notifies vCenter of impending
issues on a host. Admins can choose from a broad range of failure conditions for OV4VC to monitor,
including issues with memory, storage, networking adapters, fans, and power. When OV4VC informs
vCenter of an issue, the cluster can then move VMs to other hosts or take another remediation action, as
specified in the cluster HA settings. In this way, workloads move to a fully operational host before a
hardware issue causes an outage.

Grow cluster
HPE OV4VC makes it simple for admins to expand a cluster. To do so, admins initiate the grow cluster
process in the Grow Cluster wizard. The cluster is associated with an SPT and an OS deployment plan,
which together define many required settings, including the OS build plans that are stored and managed
on OV4VC itself.
IT admins simply need to indicate the cluster, the new hardware, and the networking settings for the new
host. The networking settings can include vDS settings for particular functions such as management, fault
tolerance (FT), and vMotion. They can also configure multi-NIC vMotion.
HPE OneView configures the server settings, and then OV4VC installs the ESXi OS on the server.
Deployment takes about 30 minutes, and OV4VC can run eight concurrent deployments. OneView uses
VMware integration to automatically add the new ESXi host to the proper cluster.


HPE OneView HSM plug-in for vLCM

Figure 2-63: HPE OneView HSM plug-in for vLCM

VMware vLCM helps customers deploy clusters quickly and easily. Replacing vSphere Update Manager
(VUM) in vSphere 7 and above, vLCM automates tasks such as updating and patching ESXi host
software.
HPE OneView Hardware Support Manager (HSM) for VMware vLCM integrates with vLCM, providing
one-click lifecycle management for ESXi, HPE drivers, and HPE firmware—all directly in the vSphere user
interface.
HSM makes VMware-compatible HPE server drivers and firmware (SPPs) available to vLCM. HPE
recommends that customers use vLCM as the single source of truth for upgrades. That is, they should set
baselines for ESXi images and HPE firmware versions within vLCM. vLCM will automatically check hosts
and validate that all components meet the baseline. If any do not, vLCM will automate updating the non-
compliant components.
In more detail, vLCM hands the firmware upgrade to HPE OneView while vLCM itself handles the
non-disruptive orchestration of VM migration, maintenance mode, and reboots. From the user
perspective, the process is seamless.
When customers manage their HPE servers with HPE OneView, they must use the OneView HSM for
VMware vLCM. This plug-in is included with HPE OV4VC 10.1 and above. It supports any HPE Gen10
server certified for ESXi 7.0 and HPE OneView. In addition, one HPE OV4VC instance supports multiple
vCenters/OneViews and an external HPE firmware repository. Note that the OneView HSM plug-in for
vLCM only manages server drivers and firmware, not firmware for HPE Synergy Composers or
interconnect modules.
When customers do not manage their HPE servers with OneView, they can enable the HSM for vLCM
plug-in within the iLO Amplifier Pack.


Example of using the HPE OneView HSM plug-in for vLCM

Figure 2-64: Example of using the HPE OneView HSM plug-in for vLCM

As you see, the HPE OneView HSM plug-in for vLCM allows customers to integrate the necessary
HPE packs into the image baseline for a cluster. The desired cluster image in this example includes:
• the ESXi version (required)
• the HPE Add-on
This add-on provides HPE Customization for HPE Servers, a collection of drivers, patches, and
solutions. It is required because you are using the VMware ESXi image, rather than a Custom HPE
Image for VMware ESXi.
• the HPE Hardware Support Pack (HSP), delivered by HPE OneView HSM
The HSP is a firmware and drivers add-on that allows vLCM to assist in the firmware update process.
The HPE HSP can consist of either a Service Pack for ProLiant (SPP) or a VMware Upgrade Pack
(VUP). The SPP is the standard service pack for HPE ProLiant servers with all the firmware, drivers,
and system management software. A VUP is a special version that contains only firmware and
VMware drivers. HPE sometimes delivers VUPs to add VMware components that are not yet
supported in the SPP.
To use SPPs or VUPs with the HPE OneView HSM plug-in for vLCM, admins must load them into the
HPE OneView (or HPE Synergy) repository.
Note a few caveats:
• HPE often recommends first imaging HPE servers with the appropriate Custom HPE Image for
VMware ESXi, as discussed earlier. Customers can then use a vLCM image—consisting of the
VMware base image, HPE Add-On, and HPE HSP—to update servers.
• To use vLCM with a VCF workload domain, admins must define an image when they create the
workload domain. However, at that point, the HSM plug-in is not integrated with the workload
domain’s vCenter Server (which is not yet deployed). So admins should first create a cluster image
with only the VMware base image and HPE Add-On. They use that image to build the workload
domain. They can then register HPE OV4VC and the HSM plug-in with the workload domain’s
vCenter Server. That enables them to add the HSP to the image. Then they can remediate the cluster
with the new image.
For more details on these processes and HPE recommendations, refer to the HPE OneView for vCenter
User Guide and to HPE Reference Architectures for VCF.


HPE OneView for VMware vRealize Log Insight

Figure 2-65: HPE OneView for VMware vRealize Log Insight

You will now look at the HPE plugins for VMware vRealize Suite, starting with its Log Insight component.
(VMware is rebranding this solution as VMware Aria Operations for Logs; this course will continue to use
the vRealize Log Insight name for clarity.)
Included with the OneView Advanced license, HPE OneView for VMware vRealize Log Insight (OV4VLI)
provides content packs which admins can load into vRealize Log Insight. The content packs add
dashboards, extracted fields, saved queries, and alerts that are specific to the server hardware.
With operational intelligence and deep visibility across all tiers of their IT infrastructure and applications,
admins have a more complete picture of all the factors behind performance and possible issues. They
can troubleshoot and optimize more quickly using Log Insight's intuitive, easy-to-use GUI to run searches
and queries. And analytics help admins to find the patterns behind data.
OV4VLI 2.0 has added several dashboards, including ones for auditing logs, viewing security events such
as failed login attempts, and viewing HPE OneView tasks.


HPE OneView for VMware vRealize Operations (HPE OV4VROPS)

Figure 2-66: HPE OneView for VMware vRealize Operations (HPE OV4VROPS)

VMware vRealize Operations (being rebranded as VMware Aria Operations) helps customers monitor and
assess the health of their VMware environment.
HPE OneView for vRealize Operations (OV4VROPS) enhances the solution’s monitoring capabilities,
helping customers to gain visibility into the underlying physical infrastructure. It helps admins monitor
server health, power, temperature, performance, and system alerts. With these added insights, admins
can solve problems more quickly.
Admins can browse through the infrastructure tree, checking each device’s health and efficiency. Risk
alerts are clearly shown, ready to grab admins’ attention.
Admins can drill in on alerts to quickly discover potential issues for faster troubleshooting.
The figure above shows examples of hierarchy and heatmap dashboards provided by OV4VROPS.
OV4VROPS 3.2 provides these dashboards:
• HPE OneView Infrastructure Dashboard (hierarchy)
• HPE OneView Networking Dashboard (hierarchy)
• HPE OneView Servers Overview Dashboard (heatmap)
• HPE OneView Enclosure Overview Dashboard (heatmap)
• HPE OneView Uplink Port Overview Dashboard (heatmap)


HPE OneView for vRealize Orchestrator (HPE OV4VRO)

Figure 2-67: HPE OneView for vRealize Orchestrator (HPE OV4VRO)

VMware vRealize Orchestrator (vRO) helps customers to automate complex IT tasks and standardize
operations with workflows. (VMware is in the process of rebranding VMware vRealize Orchestrator as
VMware Aria Automation Orchestration.)
With vRO, a library of building block actions defines functions such as powering on or stopping a VM. A
wide array of plug-ins, including third-party ones, define various actions. Admins can easily drag and drop
actions to define a workflow, which ensures repeatable and reliable operations. The workflow can feature
logical constructs such as if/then statements or an instruction to wait for a particular event to occur.
HPE OV4VRO offers many predefined workflows and actions, including:
• Performing actions on HPE OneView clusters
• Configuring HPE OneView
• Managing hypervisors on HPE OneView clusters
• Performing hardware actions on servers, such as powering them on or off or assigning server profiles
While offering many predefined workflows and actions, OV4VRO also permits admins to customize and
extend the workflows so that they can automate based on their company’s needs. HPE OV4VRO now
supports invoking the HPE iLO Redfish API, which offers even more actions.
HPE OV4VRO helps admins to easily automate the lifecycle of OneView-managed hardware from
deployment to firmware update to other maintenance tasks. Customers can make their existing workflows
more powerful by incorporating HPE OneView’s advanced management capabilities within them.


HPE OneView Connector for VCF

Figure 2-68: HPE OneView Connector for VCF

HPE and VMware have tightly integrated SDDC Manager and HPE OneView powering HPE Synergy to
deliver simplicity in managing composable infrastructure and private cloud environments. By
introducing the HPE OneView Connector for VCF, HPE brings composability features to VCF. Through
this unique integration and enhanced automation, customers can dynamically compose resources within a
single console using SDDC Manager to meet the needs of VCF workloads, thus saving time and
increasing efficiency. This integration simplifies infrastructure management by enabling customers to
respond quickly to business needs, adding capacity on demand directly from SDDC Manager. It does so
seamlessly to increase business agility and help reduce costs from overprovisioning or underprovisioning
of resources.
The HPE OneView Connector provides the interface between HPE OneView and SDDC Manager, using
DMTF’s Redfish APIs to communicate with SDDC Manager.
When you install the OneView Connector, you install it on a Linux VM. As part of the installation process,
you import the OneView Connector’s certificate into SDDC Manager. After the Connector is installed, you
must register it with SDDC Manager.
The OneView Connector for VCF enables you to complete tasks such as:
• Create server profile templates that are visible in SDDC Manager
• Compose resources, which includes allocating resources to servers, storage, and networking
interfaces
• Decompose resources, returning them to Synergy’s fluid resource pools


More automation benefits with HPE

Figure 2-69: More automation benefits with HPE

This topic will conclude by highlighting three more ways HPE helps customers automate and optimize
their VMware environments.


HPE InfoSight: Industry’s most advanced AI for infrastructure

Figure 2-70: HPE InfoSight: Industry’s most advanced AI for infrastructure

HPE InfoSight gives customers a new way to approach troubleshooting and optimization. Collecting
millions of pieces of data each day from deployments across the world, this AI-based solution can detect
potential issues and recommend solutions before the issues grow into larger problems. HPE InfoSight
extends across HPE storage, compute, and hyperconverged infrastructure, and it extends to the
virtualization layer. With the breadth and depth of insight delivered by HPE InfoSight, customers can home
in on the true causes of issues and better optimize their infrastructure.


Benefits of HPE InfoSight VMware integration

Figure 2-71: Benefits of HPE InfoSight VMware Integration

HPE InfoSight’s integration with VMware provides greater insight into the environment. HPE InfoSight can
look at the entire VMware infrastructure and provide detailed advice on both optimizing the environment
and mitigating and avoiding problems. With deep analysis of its cross-stack telemetry, HPE
InfoSight provides in-depth VMware analysis and troubleshooting. (You will learn more about that feature,
empowered by HPE storage arrays, in another module.) As shown in this figure, HPE InfoSight can report
symptoms of an issue, pinpoint the root cause, and then suggest a solution.


HPE RESTful API

Figure 2-72: HPE RESTful API

The HPE APIs are a critical component of HPE's ability to deliver a software-defined infrastructure.
HPE uses a Representational State Transfer (REST) model for its APIs. REST is a web-service model that
allows clients to use basic HTTP commands to perform create, read, update, and delete (CRUD)
operations on resources. When an application provides a RESTful API, it is called a RESTful application.
A RESTful API makes infrastructure programmable in ways that CLIs and GUIs cannot. For example, a
CLI show command provides output that an admin can read, but a script cannot. On the other hand, a
simple GET call to an API returns information in JSON format that is easily extractable for a script.
With RESTful APIs, developers can use their favorite scripting or programming language to script HTTP
calls for automating tasks, such as inventorying, updating BIOS settings, and many more. Because
RESTful APIs provide a simple, stateless, and scalable approach to automation, they are common in
many modern web environments, and customers’ staff should be quite familiar with developing them.

HPE RESTful API and Redfish conformance


Redfish is an open, industry-standard RESTful API sponsored and controlled by the Distributed Management
Task Force (DMTF), an industry-recognized standards body. Redfish provides a schema for
managing heterogeneous servers in today’s cloud and web-based data center infrastructures, helping
organizations to transform to a software-defined data center.
In accordance with HPE’s commitment to open standards, the iLO API, used by OneView and other tools
to manage HPE ProLiant servers, is Redfish conformant. The Redfish API offers many advantages over
earlier interfaces such as IPMI because Redfish is designed for security and scalability.
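As a simple illustration of how a script consumes such an API, the following minimal Python sketch queries an iLO's Redfish service for a server's model and power state. The hostname and credentials are placeholders, and the /redfish/v1/Systems/1/ path, while typical for iLO, should be verified in your own environment; error handling is kept minimal for brevity.

```python
# Minimal sketch: read server facts from an iLO Redfish endpoint.
# The hostname, credentials, and resource path are illustrative placeholders.
import requests

ILO_HOST = "ilo.example.com"  # placeholder iLO address
AUTH = ("admin", "password")   # placeholder credentials

# GET the ComputerSystem resource; iLO typically exposes it at Systems/1/
resp = requests.get(
    f"https://{ILO_HOST}/redfish/v1/Systems/1/",
    auth=AUTH,
    verify=False,              # lab use only; validate certificates in production
)
resp.raise_for_status()

system = resp.json()           # JSON output is trivially parsed by a script
print("Model:      ", system.get("Model"))
print("Power state:", system.get("PowerState"))
```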


HPE DEV

Figure 2-73: HPE DEV

HPE DEV is a website for developers in the HPE ecosystem. It is a hub that serves a community of
individuals and partners that want to share open-source software for HPE products and services. It offers
numerous resources to help developers learn and connect with each other, such as blogs, monthly
newsletters, technical articles with sample code, links to GitHub projects, and on-demand workshops. A
Software Development Kit with libraries and rich sample code helps developers easily create scripts for
the HPE RESTful API in their own environments.
You can find HPE DEV at: https://developer.hpe.com


Activity
You will now watch a demonstration.


Activity 2.2

Figure 2-74: Activity 2.2

In this activity, you will view HPE OV4VC in action, seeing how easily you can use it to manage HPE
Synergy-based clusters. To complete this activity, you will need access to the Internet and HPE Partner
credentials.
Follow these steps:
1. Log into https://hpedemoportal.ext.hpe.com/
2. Search for OV4VC. Select this demo: HPE OneView For VCenter (OV4VC): Grow/Shrink, monitor
and manage VMware cluster on HPE Synergy.
3. Click Details and then in the new window click OnDemand.
4. Fill out the form, selecting Dry Run / Self-Paced Instruction and indicating that you do not have an
OPP Id. Click Submit.

5. Wait a moment. You will be taken to the My demo event(s) page. There you can download a demo
guide with detailed instructions.

6. In the My demo event(s) page, click Launch. You might need to install an app, or you can use the
light version.
7. Use the instructions you downloaded to complete the demo.
8. When you are finished with the demo, return to the My demo event(s) page, and cancel the demo.

If you are not able to receive on-demand access or want a shorter activity, return to the main Demo Portal
page and search for this recorded demo: HPE Synergy managed from vRealize Operation Manager
(vRops). Open the demo and play it.


Summary

Figure 2-75: Summary

In this module, you started by reviewing the design process to set the stage for the module’s focus:
designing a VMware solution built on HPE. You reviewed information on HPE tools for gathering
information about and assessing your customer’s environment and tools for sizing the solutions you
design. You read about HPE solutions for VMware, most notably, the ProLiant Gen11 portfolio and HPE
Synergy solutions. Module 2 ended by offering you more information about HPE plug-ins for VMware that
will simplify deploying, managing, and operating any VMware on HPE solution that you build. Finally, the
module gave you the opportunity to apply what you learned from reading this module.


Learning checks
3. What is one difference between a VCF standard architecture and a consolidated architecture?
a. The standard architecture supports more than one VI domain while the consolidated supports only
one VI domain.
b. The consolidated architecture supports SAN arrays to improve storage performance, but the
standard architecture does not.
c. The standard architecture separates management workloads from user workloads.
d. The consolidated architecture uses a wizard to simplify the installation process rather than
requiring the Cloud Builder VM.
4. What is one benefit of HPE OneView for vRealize Orchestrator (OV4VRO)?
a. It integrates a dashboard with information and events from HPE servers into vRO.
b. It provides an end-to-end view of servers' storage (fabric) connectivity within vRO.
c. It adds pre-defined workflows for HPE servers to vRO.
d. It integrates multi-cloud management into the VCF environment.
5. Which HPE plug-in helps customers manage HPE server firmware, driver, and ESXi images
together?
a. HPE OV4VROPS
b. HPE OV4VLI
c. HPE OneView Connector for VCF
d. HPE HSM for vLCM



Design HPE Storage Solutions for VMware
Module 3

Learning objectives
In this module, you will explore multiple HPE solutions for making storage more software-defined and
better integrated within a VMware environment. You will review VMware vSAN, the VMware option for
software-defined storage (SDS). You will also learn how HPE makes it easier for you to provide
customers with the vSAN-ready solutions they need. You will then look at the options that HPE storage
arrays provide for integrating with VMware.
After completing this module, you will be able to:

• Position HPE storage solutions for VMware


• Given a set of customer requirements, determine the appropriate storage virtualization technologies
and solutions


Introduction to VMware storage


You will first review the evolution of VMware storage technologies, including vSAN and Virtual Volumes
(vVols). You will then focus on the storage options that most customers implement in a VMware
environment: vSANs and Storage Area Network (SAN) arrays.


Evolution of VMware storage integration

Figure 3-1: Evolution of VMware storage integration

Reviewing how VMware storage has evolved can help you understand the challenges that customers
have faced in managing storage for their virtual environments. The focus for this course is vSAN and
vVols.
The following sections outline these technologies. If you want more information, you can click here.

VMFS
A VM's drive is traditionally backed by a virtual machine disk (VMDK). This VMDK is a file, which can be
stored on a Storage Area Network (SAN) array. Virtual Machine File System (VMFS) is the file system
imposed on the SAN array for storing the VMDKs. VMware created VMFS in ESX 1.0 in 2001 to fulfill the
special requirements of block storage and to impose a file structure on block storage. This file structure
was initially flat but became clustered in later versions. VMFS enables multiple hosts to access the
same block storage, locking each individual VM's VMDKs for that VM's exclusive access.
VMware added support for Network File System (NFS) volumes, which use an NFS server rather than
block storage to store VMDKs, as an alternative to VMFS in VMware Infrastructure 3 (VI3).
With vSphere 7.0, VMware introduced clustered VMDKs. Clustered VMDKs require VMFS 6; they are
useful for supporting clustered applications such as Microsoft Windows Server Failover Cluster (WSFC).
Many customers still use VMFS datastores, but VMFS can be challenging and require a lot of
coordination with storage admins.

VAAI
vStorage API for Array Integration (VAAI) was introduced in ESX 4.1 in 2010 to enhance functionality for
VMFS datastores; it was extended with more primitives in ESX 5.0. VAAI aimed to enlist the storage as
an ally to vSphere by offloading certain storage operations to the storage hardware. For example, cloning an
image requires xcopy operations. With VAAI, a VAAI primitive requests that the storage array perform the
operations, freeing up ESXi host CPU cycles. Other VAAI primitives include unmap and block zero. VAAI
also introduced a better locking mechanism called atomic test and set (ATS).
VAAI is an important enhancement, which is fully supported out-of-the-box on HPE Nimble and HPE
Primera arrays. However, all vendors that support VAAI do so in the same way. In addition to supporting
VAAI, HPE extends its VMware integration to vSAN and vVols, which you will learn more about in this
module, and vCenter, which you will learn more about later in this course.

VASA
vStorage APIs for Storage Awareness (VASA) was introduced in vSphere 5.0 in 2011. VASA APIs let the
storage array communicate its attributes to vCenter. This lets VMware recognize capabilities on storage
arrays such as RAID, data compression, and thin provisioning. While VASA 1.0 was basic, admins can
now create VASA storage profiles to define different tiers of storage, helping them to choose the correct
datastore on which to deploy a VM.
However, VASA only characterizes capabilities at the datastore level. Admins cannot, for example, select
different services for VMDKs stored within the same datastore.

vSAN
VMware introduced virtual SAN (vSAN) in vSphere 5.5 U1 in 2014. This software-defined storage solution
is VMware's second try at virtual storage. vSAN transforms physical servers and their local disks into a
VMware-centric storage service. It is integrated in vSphere and does not require separate virtual storage
appliances (VSAs). In a vSAN, VMs write objects to the disks provided by the vSAN nodes without the
requirement of a file system. vSAN features an advanced storage policy-based management engine.
You will look at HPE platforms for supporting vSAN throughout this module.

vVols
VMware introduced Virtual Volumes (vVols) in vSphere 6.0 in 2015 as an alternative to VMFS and NFS
datastores. With this solution, a VM's drive can be a vVol—which is an actual volume on the SAN array—
rather than a VMDK file.
The vVol technology provides a similar level of sophistication and VMware-integration as vSAN but for
customers who want to use a storage array backend rather than servers with local drives. Building on
VASA 2.0/3.0, vVols transforms storage to be VM-centric. VMs can write natively to the vVols instead of
through a VMFS file system. As of vSphere 6.5, replication is supported with vVols, and, as of vSphere
7.0, Site Recovery Manager (SRM) integrates with vVols. These features make vVols much more
attractive to enterprises for which availability and disaster recovery (DR) are critical.
Storage vendors create their own vVols solutions to plug into vSphere so vendors such as HPE can
provide a lot of value adds to customers. You will look at the benefits of HPE's solutions for vVols later in
this module.


Overview of typical VMware storage options

Figure 3-2: Overview of typical VMware storage options

Many customers who have a VMware environment choose to implement vSANs or Storage Area Network
(SAN) arrays. These are the solutions you will focus on in this module.
Customers may also choose to implement other hyperconverged infrastructure solutions, such as HPE
SimpliVity or HPE Alletra dHCI. (You will learn more about these solutions in Module 5.)


VMware vSAN on HPE solutions


You will now focus on implementing VMware vSAN on HPE solutions.


VMware vSAN overview

Figure 3-3: VMware vSAN overview

VMware vSAN is VMware's integrated SDS solution. It enables a cluster of ESXi hosts to contribute their
local HDDs, SSDs, or NVMe drives to create a unified vSAN datastore. VMs that run on the cluster can
then be deployed on this datastore. The vSAN cluster can also present the datastore for use by other
hosts and clusters using iSCSI. vSAN provides benefits such as establishing a high-speed cache tier and
automatically moving more frequently accessed data to that tier.
Because vSAN eliminates the need for a SAN backend, it can save customers money and simplify their
data center administration. VMware vSAN appeals to customers who want the simplicity of a storage
solution that is integrated with the compute solution and is easy to install with their existing VMware
vCenter server. A vSAN solution can also provide simplicity of scaling; to expand, you simply add another
host to the vSAN cluster.


VMware ESA versus OSA

Figure 3-4: VMware ESA versus OSA

With the release of vSAN 8, VMware introduced Express Storage Architecture (ESA), which is designed
to take advantage of NVMe hardware. Because many customers will continue to use the previous
architecture, Original Storage Architecture (OSA), it’s important to understand both architectures.
As the original VMware architecture, OSA is designed to support older, slower storage devices. To
improve performance, OSA uses a two-tier architecture, providing a cache tier and a capacity tier. As
the names suggest, the cache tier provides read caching and write buffering, and the capacity tier
provides a pool of storage. Once data is saved, it is stored in the capacity tier.
You use device groups to implement the two tiers. Specifically, you can create one to five device groups
for each host. Each device group includes one cache device, which is a flash device, and one to seven
capacity devices.
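The device-group rules above translate into simple structural constraints. The following minimal Python sketch, using hypothetical class and field names, validates a host's OSA device-group layout against them: one to five device groups per host, each with exactly one flash cache device and one to seven capacity devices.

```python
# Minimal sketch: validate an OSA device-group layout for one host.
# Class and field names are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class DeviceGroup:
    cache_devices: int      # must be exactly 1 flash device
    capacity_devices: int   # 1 to 7 devices

def validate_osa_host(groups: list[DeviceGroup]) -> list[str]:
    """Return a list of rule violations (an empty list means compliant)."""
    problems = []
    if not 1 <= len(groups) <= 5:
        problems.append("A host supports one to five device groups.")
    for i, g in enumerate(groups, start=1):
        if g.cache_devices != 1:
            problems.append(f"Group {i}: exactly one cache (flash) device required.")
        if not 1 <= g.capacity_devices <= 7:
            problems.append(f"Group {i}: one to seven capacity devices required.")
    return problems

# Example: two device groups, each with 1 cache device and 4 capacity devices
print(validate_osa_host([DeviceGroup(1, 4), DeviceGroup(1, 4)]))  # -> []
```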
Built to support NVMe-based flash devices, ESA uses a single tier. Because a cache tier is no longer
required, data can be processed and stored more efficiently. ESA uses a new log-structured file system
called vSAN LFS. This system enables the vSAN to “ingest new data fast and efficiently while preparing
the data for a very efficient full stripe write. The vSAN LFS also allows vSAN to store metadata in a highly
efficient and scalable manner.” (“Integrating an Optional, Hardware-Optimized Architecture into vSAN”
https://core.vmware.com/blog/introduction-vsan-express-storage-architecture.) The new architecture also
includes an optimized log-structure manager and data structure, which provides near device-level
performance capabilities.


OSA versus ESA considerations

Figure 3-5: OSA versus ESA considerations

Because the vSAN architecture controls how data is processed and stored, you must help your
customers decide whether to use OSA or ESA when they purchase and deploy vSAN nodes. Once you
make this decision, you cannot change it. In other words, there is no path for upgrading existing OSA
nodes to ESA nodes.

So why would you recommend ESA or OSA?

As mentioned earlier, ESA is optimized for direct-attached NVMe. Because you do not have to dedicate
storage to a cache tier, you can take advantage of all the storage capacity to store data.

You may choose OSA for customers who are running workloads that do not require NVMe-level
performance. You may also choose OSA if your customer needs vSAN features not currently supported
in ESA. As ESA matures, however, VMware is likely to add support for more features in ESA. If you are
helping a customer upgrade an existing vSAN cluster to vSphere 8, explain that the customer should
select OSA for that cluster's architecture.


HPE’s approach to vSAN ReadyNodes

Figure 3-6: HPE’s approach to vSAN ReadyNodes

To meet customer needs, HPE provides vSAN ReadyNodes that are optimized for vSAN ReadyNode
profiles, including both ESA and OSA profiles.
HPE works with VMware to certify each configuration, and once the configuration is tested and verified,
we add the new node to the catalog for our partners to recommend to customers.
HPE provides configurations for each supported platform (HPE ProLiant DL325, HPE ProLiant DL345,
HPE ProLiant DL360, and HPE ProLiant DL380) that cover all vSAN profiles (HY2, HY4, HY6, HY8, AF4,
AF6, and AF8).
To view or order HPE vSAN ReadyNodes, use OCA. Within OCA, go to “Search From Product Catalog”;
then look for “Reference Builds” and “HyperConverged Infrastructure.”
When you select a vSAN ReadyNode, OCA only permits you to customize its configuration with a limited
set of certified options, helping to prevent you from making mistakes. HPE has done this to better help
you as a partner.
The VMware Compatibility Guide provides certified options, but there are many options to choose from
without much guidance as to when you would choose one over the other. With the HPE vSAN
ReadyNodes, you have fewer choices to sort through and can more easily determine the best solution
that meets the customer’s requirements.


Designing vSAN on HPE ProLiant DL servers

Figure 3-7: Designing vSAN on HPE ProLiant DL servers

When you design vSAN clusters, you must first consider whether the client is using OSA or ESA.
For a smaller branch using OSA, the minimum cluster size is two nodes plus one witness VM. Although
VMware supports this minimum configuration for smaller branches, the recommended minimum for OSA
is four nodes.
With ESA, the minimum cluster size is always four nodes.
For both OSA and ESA, the maximum cluster size is 64 nodes.
When considering the cluster size for a customer, you should also factor in what level of failure the
customer can tolerate. That is, how many nodes can fail before the customer's data is at risk? VMware
recommends calculating the minimum number of nodes from the failures to tolerate (FTT) using the
following equation:
minimum nodes = 2 x FTT + 1
For example, a three-node cluster can tolerate one node failure; if a second node fails, the customer's
data is at risk.
2 x <1 failure> + 1 = 3 nodes
If the customer wants an FTT of 2, you would need to plan for a cluster size of 5, as shown below.

2 x <2 failures> + 1 = 5 nodes

For more information about FTT, visit the VMware documentation at this link.
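A minimal Python sketch of this arithmetic follows. It simply applies the mirroring-based rule quoted above; treat it as a planning aid, not a substitute for the official VMware sizing guidance.

```python
# Minimal sketch: minimum vSAN cluster size for a desired failures-to-tolerate (FTT),
# using the mirroring-based rule from the text: nodes = 2 x FTT + 1.

def min_nodes_for_ftt(ftt: int) -> int:
    """Return the minimum node count that can tolerate `ftt` node failures."""
    if ftt < 0:
        raise ValueError("FTT cannot be negative")
    return 2 * ftt + 1

for ftt in (1, 2, 3):
    print(f"FTT={ftt} -> minimum cluster size {min_nodes_for_ftt(ftt)} nodes")
# FTT=1 -> 3 nodes, FTT=2 -> 5 nodes, FTT=3 -> 7 nodes
```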

When planning clusters, you should also consider I/O requirements. Again, you must pick ESA or OSA
when you purchase and deploy the vSAN. For OSA, you must also consider if the workload is write-
intensive or read-intensive. Typically, if the workload is write-intensive, you should use SSDs (flash) for
both the cache and capacity tiers. Typically, if the workload is read-intensive, you can use flash storage
devices for the cache tier and HDDs for the capacity tier.


Additional requirements for ESA deployments

Figure 3-8: Additional requirements for ESA deployments

If you choose ESA, there are additional requirements. ESA requires VMware vSAN 8 and above and 25
Gbps or higher network connections. VMware recommends that when you select switches you prioritize
faster link speeds over switches with larger buffers. For example, you should choose a 100 Gbps switch
over a 25 Gbps switch that offers large buffers. At the same time, VMware cautions against selecting a
switch that has limited or shared buffers. (See “Designing vSAN Networks—2022 Edition vSAN ESA” at
https://core.vmware.com/blog/designing-vsan-networks-2022-edition-vsan-esa.)


vSAN on HPE Synergy

Figure 3-9: vSAN on HPE Synergy

The HPE Synergy D3940 modules fully support VMware vSAN. Use cases for vSAN on HPE Synergy
include supporting a VM farm and virtual desktop infrastructure (VDI). Running vSAN on HPE Synergy
can also provide the flexible support for shared DevOps volumes that app development environments
need, and it works well for web development. You can also deploy vSANs on HPE Synergy to provide
managed data services for mid-tier storage.


Why HPE Synergy for vSAN

Figure 3-10: Why HPE Synergy for vSAN

What are the benefits of running VMware vSAN on Synergy?

Disaggregated compute and storage


HPE Synergy allows customers to scale compute and storage independently. Because customers do not
have to purchase more compute just to get more storage, they can save on upfront expenditure. They
can optimize the storage and compute ratio to meet the needs for their specific workloads. Because they
can easily re-provision compute modules or recompose how compute and storage connect together, if
their needs change in the future, they can repurpose extra compute and storage for other use cases.

Single infrastructure for any workload


With HPE Synergy, customers can deploy vSAN on some compute modules while running traditional
workloads that use SAN connected storage on other modules. They obtain a standard architecture for
vSAN, SAN-connected storage, virtualization, containers, and bare-metal—all managed and monitored by
HPE OneView.

High-speed interconnect between frames


HPE Synergy frames feature a built-in, high-speed, redundant 20Gbps fabric. The iSCSI traffic used for
vSAN can flow east-west without being routed through top-of-rack (ToR) switches, yielding low latency for
server-to-storage access.

Reduced complexity and cost


HPE Synergy can help customers reduce overprovisioning by giving them access to fluid pools of
capacity. For example, multiple compute modules can share drives on D3940 modules. Admins can alter
the drive mappings flexibly as required.
In addition, HPE Virtual Connect (VC) modules help customers to deploy rack-scale fabric that eliminates
ToR switches in favor of end of row (EoR) switches only. This reduces costs and simplifies cabling and
networking.
HPE Synergy also provides a consistent operational experience that lets customers leverage existing
tools, processes and people as they deploy new workloads.


HPE Synergy D3940—Ideal platform for vSAN: Flexibility

Figure 3-11: HPE Synergy D3940—Ideal platform for vSAN: Flexibility

You will now consider what makes the D3940 the ideal platform for vSAN.
The D3940 provides a flexible ratio of zoned drives to compute nodes. That means that customers can
choose to assign as many drives to each node as makes sense for their business needs. This flexibility
represents a vast improvement over legacy blade solutions in which storage blades were tied to a single
server blade, causing inefficient use of resources.
Each D3940 storage module provides up to 40 drives and 600 TB capacity. With a fluid pool of up to five
storage modules per frame, up to 200 drives can be zoned to any compute module in the frame.
Each compute module uses its own Smart Array controller to manage the drives zoned to it, so a single
module can support File, Block and Object storage formats together.
The conductor-satellite fabric enabled by VC modules also creates a flat, high-speed iSCSI network for
vSAN that extends over multiple frames, allowing vSAN clusters to span frames as well.


HPE Synergy D3940—Ideal platform for vSAN: Performance

Figure 3-12: HPE Synergy D3940—Ideal platform for vSAN: Performance

A non-blocking SAS fabric provides optimal performance between vSAN hosts and the drives zoned to
them on D3940 modules. HPE tests showed that the non-blocking SAS fabric delivers up to 2M IOPS for
a 4KB random read workload using SSDs. (The 2M IOPS figure is for a single storage module connected
to multiple compute modules in a DAS scenario.)
Because HPE Synergy enables customers to deploy a customized mix of compute and storage resources
and to scale them separately, it provides an ideal SDS platform.


Right-sized provisioning for any workload

Figure 3-13: Right-sized provisioning for any workload

The flexibility in drive-to-compute module ratio means that the D3940 can deliver the right-sized
provisioning for any workload.
This graph depicts three scenarios with different combinations of half-height compute modules and
D3940 modules in a frame. In the first scenario, the frame has 10 compute modules and one D3940
module, meaning that each compute module can have an average of 4 SFFs zoned to it. This scenario is
ideal for small databases and file sharing servers.
In the second scenario, the frame has six half-height compute modules and three D3940s, giving each
compute module an average of 20 SFFs. This configuration could work for SDS cluster nodes.
The final configuration has four half-height compute modules and four D3940s, meaning that each
compute module can have 40 SFFs dedicated to it, which is ideal for mail and collaboration services, VDI
or VM farms, and mid-sized databases.
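The arithmetic behind these scenarios is straightforward; the minimal sketch below reproduces it, assuming the 40 SFF drive bays per D3940 module stated earlier.

```python
# Minimal sketch: average SFF drives available per compute module in a frame,
# assuming 40 drive bays per D3940 module (as described above).
DRIVES_PER_D3940 = 40

def avg_drives_per_compute(d3940_modules: int, compute_modules: int) -> float:
    return (d3940_modules * DRIVES_PER_D3940) / compute_modules

# The three scenarios from the graph:
for d3940s, computes in [(1, 10), (3, 6), (4, 4)]:
    print(f"{d3940s} x D3940, {computes} compute modules -> "
          f"{avg_drives_per_compute(d3940s, computes):.0f} drives each on average")
# -> 4, 20, and 40 drives per compute module
```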


Following best practices for vSAN on HPE Synergy: Cluster and network design

Figure 3-14: Following best practices for vSAN on HPE Synergy: Cluster and network design

You should follow a few best practices to ensure that the vSAN cluster, deployed on HPE Synergy,
functions optimally. Use a minimum 3-node cluster. All nodes in the cluster must act as vSAN nodes. The
vSAN cluster can present datastores to other clusters.
You should provide redundant connections for the vSAN network. Currently HPE Synergy only supports
OSA, and for OSA, you should raise the bandwidth limit on each connection to at least 10 Gbps.
The vSAN network can be an internal network as long as the cluster is confined within a logical frame,
which can include multiple Synergy frames connected with a conductor/satellite architecture. If the cluster
extends beyond the logical frame, the vSAN networks should be carried in conductor module uplink sets,
following the guidelines for iSCSI networks laid out in the previous module.


Following best practices for vSAN on HPE Synergy: Drivers and controllers

Figure 3-15: Following best practices for vSAN on HPE Synergy: Drivers and controllers

Each vSAN node should use a P416ie-m Smart Array controller operating in HBA-only mode to access
D3940 drives (through two SAS Connection Modules in bays 1 and 4). The controller should configure
these drives as just a bunch of disks (JBODs). It is important not to use RAID for these drives.
VMware requires a caching (SSD) drive and one or more capacity drives per node. The Compatibility
Guide will indicate the number and type of drives for each tier. In the SPT or server profile for the vSAN
nodes you should configure the recommended set of caching drives as a single caching logical JBOD.
You can configure the capacity drives as one or more capacity logical JBODs.
You should help the customer understand that vSAN has some restrictions on the boot options. The
compute node can boot from internal M.2 drives (mirrored), but this requires a P204i storage controller.
PXE boots are also supported, as are USB boots. However, with USB boots, VMware requires the
customer to make other accommodations for log files so that they are stored in persistent storage.
You cannot configure the P416ie-m in mixed mode and create a boot volume from D3940 drives.
These recommendations are based on “HPE Synergy Installation and Recommended Practices Guide:
VMware vSAN ReadyNode on HPE Synergy for Gen10 and Gen9.” (Please note that this reference guide
was the most up-to-date when this course was released, but you should check for an updated version.)


Following best practices for vSAN on HPE Synergy: Redundant connectivity for D3940s

Figure 3-16: Following best practices for vSAN on HPE Synergy: Redundant Connectivity for D3940s

It is also best practice to provide redundant connectivity for the D3940s used in the vSAN solution. You
should install two I/O adapters on each D3940. You must also install two Synergy 12Gb SAS Connection
Modules in the Synergy frame, one in ICM bay 1 and one in ICM bay 4.


Selecting certified configurations for HPE Synergy and vSAN

Figure 3-17: Selecting certified configurations for HPE Synergy and vSAN

To ensure a successful vSAN deployment for your customers, begin by proposing HPE Synergy module
configurations that HPE has tested and validated with VMware.
As you have learned, you have a very simple way to find HPE ProLiant DL configurations that are certified
for vSAN: select vSAN ReadyNodes in OCA. However, this is not an option if you are planning an HPE
Synergy deployment. In that case, you will need to find certified components on your own using the
VMware Compatibility Guide, available by clicking here. Select that you are looking for vSAN and choose
Hewlett Packard Enterprise as the vSAN ReadyNode Vendor. You can also choose a vSAN ReadyNode
Profile. Select HY for hybrid HDD and flash or AF for all flash. The profile also has a number that
indicates its general scale. For vSAN ReadyNode Server Type, select Blade.
Then select Update and View Results. You can scroll through the results and find an HPE Synergy
compute module model and components that are certified for your profile.


Sizing HPE ProLiant DL or HPE Synergy for vSAN

Figure 3-18: Sizing HPE ProLiant DL or HPE Synergy for vSAN

When sizing HPE ProLiant or HPE Synergy for a vSAN solution, you will need to gather information such
as:
• VM profiles—The VM profile determines the number of resources per VM, including number of
vCPUs, amount of memory, and disk capacity.
• Growth percentage—You should factor in the anticipated growth, so you can size the solution to
accommodate increases in capacity.
• Whether to include capacity for swapping in calculations—VMs might sometimes need to swap their
memory to the disk. To follow best practices, you should also calculate capacity for this swapping.
• Plans for compression and deduplication—You should determine if the customer plans to use these
features. If so, you need to know what compression/deduplication ratio to use for particular
workloads; these ratios can vary widely based on use case but can often be rather high for VM disks.
You should conduct tests with the customer.
• Number of failures to tolerate—As explained earlier, you need to know the customer’s tolerance for
failure.
• Workload I/O profile—The sizer will allow you to choose a workload profile. To select the correct
profile, you should discuss with the customer the general ratio of reads to writes. You should also
discuss whether the reads/writes will be random or sequential.
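Before running the sizer, you can combine these inputs into a rough raw-capacity estimate. The following minimal Python sketch assumes RAID-1 mirroring (where each VM's data is stored FTT + 1 times) and treats growth and data reduction as simple multipliers; real sizing with SSET accounts for many more variables, so treat this only as a sanity check.

```python
# Minimal sketch: rough raw-capacity estimate for a vSAN cluster.
# Assumes RAID-1 mirroring (FTT + 1 copies of the data) and simple multipliers
# for growth and data reduction; use the official sizer (SSET) for real designs.

def estimate_raw_capacity_tb(
    vm_count: int,
    disk_per_vm_gb: float,
    memory_per_vm_gb: float,      # reserved as swap space per powered-on VM
    ftt: int,
    growth_pct: float,            # e.g., 0.20 for 20% anticipated growth
    reduction_ratio: float = 1.0  # e.g., 2.0 for an expected 2:1 dedupe/compression
) -> float:
    logical_gb = vm_count * (disk_per_vm_gb + memory_per_vm_gb)
    mirrored_gb = logical_gb * (ftt + 1)       # copies stored under mirroring
    after_reduction_gb = mirrored_gb / reduction_ratio
    with_growth_gb = after_reduction_gb * (1 + growth_pct)
    return with_growth_gb / 1000               # decimal TB

# Example: 200 VMs, 100 GB disk and 8 GB memory each, FTT=1, 20% growth, 2:1 reduction
print(f"{estimate_raw_capacity_tb(200, 100, 8, 1, 0.20, 2.0):.1f} TB raw (approx.)")
```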
You can use SSET, which is accessible from HPE Products and Solutions Now (PSNow), to size your
solutions. Select “New Guidance” and then select “vSAN on HPE Platform.” You can then enter the
information you have gathered about the customer’s environment.

You can also use the Cost Estimator for VMware Protection to show the financial value of a particular
HPE solution.


HPE storage arrays for VMware environments


You will now consider the benefits of HPE storage arrays for VMware environments. These benefits apply
whether the storage arrays support an HPE Synergy solution or another HPE compute solution such as
HPE ProLiant DL servers.


Use cases for SAN-backed storage

Figure 3-19: Use cases for SAN-backed storage

Use cases for SAN-backed storage include service providers that want to offer managed data services
such as Quality of Service (QoS). SAN-backed storage is also ideal for organizations that need high
availability with disaster recovery.

Another key use case for SAN-backed storage is supporting low-latency, high-I/O workloads, such as
CRM, ERP, Oracle, and SQL, for an organization's own workforce.


HPE storage portfolio: Positioning for VMware

Figure 3-20: HPE storage portfolio: Positioning for VMware

HPE offers a variety of storage options for customers deploying VMware solutions.
For mission-critical apps, HPE offers:
• HPE Alletra 9000—HPE Alletra is tightly coupled with the HPE Data Services Cloud Console.
Together, they deliver a common, cloud operational experience across workload-optimized systems
on-premises and in the cloud. Customers can deploy, provision, manage, and scale storage in
significantly less time. For example, the platform can be set up in minutes, and provisioning is
automated. HPE Alletra 9000 is designed for mission-critical workloads that have stringent latency
and availability requirements. It guarantees 100% availability.
• HPE Primera delivers app-aware resiliency backed with 100% availability, guaranteed. HPE Primera
also delivers predictable performance for unpredictable workloads so the customer’s apps and
business are always fast.
• HPE XP8—HPE XP8 is designed for mission-critical Open System, HPE NonStop, and mainframe
environments. It combines an extremely reliable architecture—full online scalability and completely
redundant hardware—with industry-leading disaster recovery (DR) solutions. Like HPE Alletra, it
comes with a 100% data availability guarantee.
For business-critical apps, HPE offers:
• HPE Alletra 6000—HPE Alletra 6000 is designed for business-critical workloads that require fast,
consistent performance. It guarantees 99.9999% availability and scales easily.
• HPE Alletra dHCI—HPE Alletra dHCI is designed to provide the flexibility of converged infrastructure
with the simplicity of hyperconverged infrastructure (HCI). It is built using HPE ProLiant DL servers
and HPE Alletra arrays, which automatically form a stack. Customers manage the stack from an
intuitive management UI and integrate it into vCenter, as easily as a traditional HCI stack. HPE Alletra
dHCI allows customers to scale compute and storage separately.
Existing customers may also be using HPE Nimble, which has been replaced with HPE Alletra 6000
and 5000.
For general-purpose apps, HPE offers:
• HPE Alletra 5000—HPE Alletra 5000 guarantees 99.9999% availability and scales easily. It supports
a hybrid array, with a mix of SSDs and HDDs.


• HPE MSA—HPE MSA is an entry-level SAN storage solution, designed for businesses with 100 to
250 employees and remote office/branch offices (ROBOs). MSA offers the speed and efficiency of
flash and hybrid storage and advanced features such as Automated Tiering.
• HPE SimpliVity—HPE SimpliVity offers HCI with many included components, including built-in data
protection and backup capabilities.


Overview of HPE storage integration with VMware

Figure 3-21: Overview of HPE storage integration with VMware

Whatever the compute solution underlying the VMware environment, HPE storage arrays can make the
VMware environment work more efficiently, deliver simpler management, and provide higher performance
and availability.
Key features that you will examine in this topic include plugins for vCenter, vVols, NVMe over Fabrics
(NVMeoF), and HPE InfoSight Cross-Stack Analytics.
You should also be aware of HPE Recovery Manager Central for VMware (RMC-V). RMC enables
customers to enhance HPE Primera snapshots by easily copying them to HPE StoreOnce appliances.
RMC-V provide backup and replication for VMware environments. Backups are stored on an HPE
StoreOnce system and can be restored to the original or a different HPE Primera array.


HPE management and automation portfolio for VMware

Figure 3-22: HPE management and automation portfolio for VMware

In Module 2, you learned about HPE OneView for vCenter (OV4VC), which supports server integration.
HPE also provides storage integration:
• HPE Storage Integration Pack for vCenter
• HPE Storage Management Pack for vRealize Operations Manager
• HPE Storage Plug-in for vRealize Orchestrator
• HPE Storage Replication Pack for Site Recovery Manager
• HPE Automation Pack for vRealize Orchestrator
You will learn more about these integrations in the next few pages.
Please note that VMware recently re-branded the vRealize Suite as Aria; however, this rebranding was
still underway as of the release of this course. For example, the VMware Compatibility Guide was still
using vRealize names when this course was being developed. The HPE plug-in names also still use the
vRealize name, and this course will refer to the plug-ins by their precise name. If you need details about
which HPE plug-in versions are supported with which VMware management products, check the VMware
Compatibility Guide (Management and Orchestration section).


HPE Storage vCenter plugins

Figure 3-23: HPE Storage vCenter plugins

HPE plugins for vCenter are designed to simplify the provisioning process for both VMFS and vVol
datastores.
The HPE Alletra vCenter plug-in supports both the vSphere Web Client and HTML5. Customers can easily
create datastores based on HPE Alletra 5000/6000 volumes and then attach those to hosts directly
without having to search for LUNs. They can also perform other management tasks directly from vCenter,
including growing, cloning, deleting, and editing datastores. They can also make snapshots of datastores
or view performance and space details. (If your customers have legacy HPE Nimble Storage arrays, you
should also be aware that those arrays support this plugin.)
HPE Storage Integration Pack for VMware vCenter provides similar benefits for HPE Primera and HPE
Alletra 9000. Admins can create and manage VMFS and vVol based datastores on their Primera or
Alletra 9000 arrays directly from vCenter.


Additional HPE Storage plugins

Figure 3-24: Additional HPE Storage plugins

In Module 2, you learned about vRealize Operations Manager (vROps, but being rebranded as Aria
Operations) and how it provides deeper monitoring capabilities than vCenter alone. HPE Storage
Management Pack for VMware vRealize Operations Manager provides automated performance, capacity,
configuration compliance, and cost management features for HPE Alletra 9000, HPE Alletra 6000, HPE
Alletra 5000, and HPE Primera. The Management Pack uses the vROps analytics engine to monitor and
analyze the availability, performance, health, capacity, and workload of the supported HPE storage
arrays. Dashboards provide a simple view to easily monitor the important aspects of the environment and
identify problem areas. Alerts, triggered when a metric breaches a threshold, are available across
various entities.
Module 2 also introduced you to VMware vRealize Orchestrator (vRO, but being rebranded as Aria
Automation Orchestrator). As you recall, this solution helps customers to automate complex IT tasks and
standardize operations with workflows. The HPE Automation Pack for vRO makes workflows more
powerful by enabling storage actions.
The HPE Automation Pack for vRO:
• Enables customers to build end-to-end workflows to automate common storage provisioning tasks
• Builds workflows for mass-provisioning of VMs or complex deployments
As of the release of this course, the HPE Automation Pack for vRO supports HPE Alletra 9000, HPE
Alletra 6000, HPE Alletra 5000, and HPE Primera.
If your customers have legacy HPE Nimble Storage arrays, you should also be aware that both plugins
discussed on this page support those arrays.


Fully automated provisioning for HPE Synergy

Figure 3-25: Fully automated provisioning for HPE Synergy

Traditionally, getting a volume hosted on a storage array attached to an ESXi host involves many
relatively complex steps. Storage admins must create the volume. They need to find out the ESXi host's
World Wide Names (WWNs), add the host to the array, and export the volume to it. SAN admins must
also zone the SAN to permit the server's WWNs to reach the array. Server admins must find the exported
volume by LUN and add it. HPE Synergy provides fully automated volume provisioning for volumes on
HPE Alletra, Nimble, or Primera.
In the steps below, you can see how HPE Synergy simplifies provisioning volumes.

Step 1
HPE Synergy admins can add SAN Managers such as Cisco, Brocade, and HPE to bring SAN switches
into Synergy. Admins can then create networks for the SANs and manage servers’ SAN connectivity
using templates and profiles, as they do with servers’ Ethernet connectivity.

Step 2
HPE Synergy admins can also add HPE Alletra, Nimble, or Primera arrays to Synergy and create
volumes on them from Synergy. They can use storage pools and templates to apply policies to volume
management.

Step 3
When admins create server profiles and server profile templates (SPTs), they can add connections for the
servers in the managed SANs. They can also attach volumes to the servers. When the profile is applied
to a compute module bay, HPE Synergy will automate all the heavy lifting of configuring the SAN zoning,
as well as exporting and attaching the volume.


VMware vCenter Site Recovery Manager introduction

Figure 3-26: VMware vCenter Site Recovery Manager introduction

A plugin to the vCenter Server, VMware vCenter Site Recovery Manager (SRM) enables you to create
disaster recovery plans for a VMware environment. The recovery plan automates bringing up VMs in a
recovery site to replace failed VMs at a primary site. Because such plans can be complex and require
precise ordering to function correctly, SRM provides a testing feature that lets admins test their plans
before a failure actually happens. SRM also supports sub-site failover scenarios and failback to move
services back to the primary site again.
SRM can work in environments without stretched clusters (a stretched cluster has ESXi hosts in the same
cluster at two sites). In this case, SRM brings VMs back up on a new cluster after some downtime. As of
version 6.1, SRM can also work with stretched clusters.


HPE Alletra, Nimble, and Primera array benefits for SRM

Figure 3-27: HPE Alletra, Nimble, and Primera array benefits for SRM

SRM requires storage array replication to ensure that VMs can access the correct data at the recovery
site if the primary site fails.
HPE Alletra, Nimble, and Primera arrays support Storage Replication Adapters (SRAs) for SRM. These
SRAs integrate the arrays' volume replication features with SRM. The HPE Alletra 5000/6000 SRA as
well as the Nimble SRA bring the inherent efficiency of their replication features. HPE Alletra 5000/6000
and Nimble also support zero-copy clones for DR testing. In other words, HPE Alletra 5000/6000 and
Nimble can create the clones without copying any data, making them highly space efficient and fast to
create.
The HPE Alletra 9000 and Primera SRA also support a broad range of features:
• Synchronous, asynchronous periodic, and asynchronous streaming replication (Remote Copy [RC])
modes
• Synchronous Long Distance (SLD) operation in which an array uses synchronous replication to a
secondary array at a metro distance and asynchronous replication to a tertiary array at long distance
• Peer Persistence with synchronous replication and 3 Data Center Peer Persistence (3DC PP) with
SLD
• VMware SRM stretched storage with 2-to-1 remote copy
Refer to the VMware Compatibility Guide to look up the SRA versions compatible with various SRM
versions.
You should also be aware that VMware SRM v8.3 added support for vVols. As a result, SRM can
replicate and restore vVols and include vVols in DR plans. When companies use SRM with vVols, SRM
can handle the replication natively and seamlessly. No SRA is required.
HPE provided day 0 integration with this feature and now supports it on all the arrays that you have
examined in this module. Companies can use SRM with vVols on the HPE storage arrays in a vSphere
6.5/6.7 or 7 environment. Because SRM is so important to companies, the ability to use vVols with SRM
will encourage many more enterprises to start using vVols and leveraging the other benefits of this
technology. HPE remains one of the few vendors to support SRM with vVols, positioning HPE storage
well in the VMware space.


Why vVols for VMware databases

Figure 3-28: Why vVols for VMware databases

You will now examine an alternative solution that is specific to VMware. vVols replace the existing VMFS
and NFS implementations that have been used since ESXi was first launched.

vVols create a unified standard and architecture for all storage vendors and storage protocols, using the
vSphere APIs for Storage Awareness (also known as VASA). vVols enable vSphere to write virtual
machines (VMs) natively to storage arrays without using any kind of file system.
With vVols, common storage management tasks are automated, eliminating operational dependencies to
simplify management.
vVols are designed to be dynamic and efficient. No storage is pre-provisioned. vVols use thin provisioning
so the storage is not allocated until VMs are created and running. vVols give storage admins more
granular control of storage resources and data services at the VM level.
Further, LUN provisioning is no longer required, and storage arrays can automatically reclaim space
when VMs are deleted or moved to another storage device.
And because vVols make storage arrays VM-aware, array features can be applied directly to individual
VMs instead of to entire LUNs.
To deliver these benefits, vSphere Storage Policy Based Management (SPBM) is used to allow policies
built based on array capabilities to be assigned to VMs.
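As a small illustration of this VM-centric model, the hedged sketch below uses the pyVmomi SDK to list the vVol datastores that vSphere can see. The vCenter hostname and credentials are placeholders, and disabling certificate checks is for lab use only.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Connect to vCenter (hostname and credentials are placeholders).
ctx = ssl._create_unverified_context()  # lab only; verify certificates in production
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="<password>", sslContext=ctx)
content = si.RetrieveContent()

# Walk the inventory and report datastores backed by vVol storage containers.
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.Datastore], True)
for ds in view.view:
    if ds.summary.type == "VVOL":
        cap_gib = ds.summary.capacity / (1024 ** 3)
        print(f"vVol datastore: {ds.name}  capacity: {cap_gib:.0f} GiB")

view.Destroy()
Disconnect(si)
```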


Overview of vVols Storage Architecture

Figure 3-29: Overview of vVols Storage Architecture

You will now explore the vVols architecture.


• Protocol endpoint—logical I/O proxy that serves as the data path between ESXi hosts and the vVols of their VMs
• VASA provider—software component that mediates out-of-band (control path) communication about vVols between the vCenter Server, ESXi hosts, and the storage array
• Storage container—pool of raw storage capacity that becomes a logical grouping of vVols, seen as a virtual datastore by ESXi hosts
• Virtual Volume (vVol)—container that encapsulates VM files, virtual disks, and their derivatives
• Storage Policy Based Management (SPBM)—set of rules that define storage requirements for VMs based on capabilities provided by the storage array; the same policy manager as vSAN


How vVols changes storage management

Figure 3-30: How vVols changes storage management

vVols empower vSphere admins to control the functions that they need to control. For example, vSphere
admins can choose to create a VM snapshot, to thin provision a VM, to create a virtual disk, or to delete a
VM. At the same time, vSphere ESXi hosts should not spend CPU cycles copying or deleting data. Under vSphere's direction, the storage array executes such tasks automatically. For example, when admins delete
a VM, the array deletes the VM in the vVols container and reclaims space. This automation eliminates
common tasks for storage admins and frees up their time for more sophisticated optimization.


How vVols transforms storage in vSphere

Figure 3-31: How vVols transforms storage in vSphere

vVols transforms the VMware storage environment.


VMFS is LUN-centric. Storage pools are siloed away from VMware management, and because the
storage array cannot see inside the VMFS datastore, it can only apply features to the entire LUN. For
example, array-based snapshots are a great value add to a VMware environment, but with VMFS, the
array must take a snapshot of the entire LUN. Customers cannot set up different snapshot policies for
different VMs. vVols, on the other hand, breaks down the silos, and array services are aligned to VMs.
Because vVols lets arrays see VMs as objects, arrays can apply features on a granular basis based on
the company's needs and priorities.

Figure 3-32: How vVols transforms storage in vSphere

With VMFS, storage volumes are pre-allocated, which typically means that companies must over-provision resources, leading to inefficiency. With vVols, vSphere admins can dynamically allocate storage only when they need it.


Figure 3-33: How vVols transforms storage in vSphere

With VMFS, provisioning storage is complicated because it requires vendor-specific tools for the storage array. vVols provides simple provisioning and management through vSphere interfaces. The vSphere admins can easily add a vVol datastore based on a storage container created on an HPE storage array and attach the datastore to ESXi hosts. The LUNs are managed in the background, making the process much simpler and more intuitive for non-storage experts.
In addition to reducing the lengthy VMFS provisioning processes, vVols enables vSphere actions to drive automation on HPE Nimble and Primera arrays, as you saw before. For example, when a vSphere admin deletes a VM, the array automatically reclaims space.


The HPE Alletra and HPE Primera advantages with vVols

Figure 3-34: The HPE Alletra and HPE Primera advantages with vVols

It is important to understand that vVols is not strictly a VMware product; rather, it is a design specification that storage vendors can use to plug their functionality into vSphere. Therefore, vendors like HPE have a great opportunity to innovate and prove their value in this space. HPE’s solutions are among the most mature in this area. The sections below explain the benefits that differentiate HPE
Alletra, Nimble, and Primera solutions for vVols.

Solid and mature


HPE has already taken vVols well beyond the growing-pains stage. As an integral VMware design partner, HPE offers a mature vVols solution—the result of over 8 years of development.

Simple and reliable


HPE Alletra, Nimble, and Primera have internal VASA Providers built into the solution, rather than
requiring external appliances. For customers this means that they gain the benefits of vVols with zero
additional installation requirements. This approach also increases the solution’s availability because it
avoids introducing another failure point.

Trail-blazing replication
Replication helps customers protect their data and recover from disaster. Both HPE and VMware are
leaders in the industry. HPE was the first vendor to provide support for replication for vVols, and HPE
Alletra 5000, HPE Alletra 6000, HPE Alletra 9000, and HPE Primera arrays all currently support
asynchronous replication for vVols. They can integrate this replication with VMware SRM 8.3 just as they
can when not using vVols.
VMware also continues to enhance solutions. For example, VMware is adding support for stretched
clusters to the VASA spec, which will make vVols a possibility for even more use cases.

Innovative and efficient


The HPE storage arrays' innovative vVols features help customers to operate more efficiently. For
example, they support placing snapshots on different tiers. The vVol-based snapshots are highly efficient.
HPE Alletra 5000 and 6000 arrays, for example, can quickly snapshot a vVol without actually copying any
data. This efficiency helps customers to snapshot more frequently while reducing VMs' footprint in storage
capacity. HPE storage arrays also let admins manage snapshots easily, folder by folder.

Rich and powerful


HPE Alletra, Nimble, and Primera provide application-consistent snapshots for apps such as Microsoft
SQL Server or Exchange. Without application-consistent snapshots, databases can fail to recover from
the snapshot correctly, so the HPE arrays help customers protect their mission-critical services.


HPE solutions for SRM and vVols

Figure 3-35: HPE solutions for SRM and vVols

As you design HPE SRM and vVol solutions, you must consider how many vVols your customers need and their replication requirements with SRM. Here you can see the differences in capabilities between different models of HPE storage. These capabilities were correct as of the development of this course; a quick design-check sketch based on these numbers follows the lists below.

HPE Alletra 5000, Alletra 6000 and HPE Nimble Storage


• Support up to 10,000 vVols
• Support up to 400 replicated VMs with SRM
• Require NimOS 5.1.4.200 or later
If your customers are using legacy HPE Nimble Storage arrays, note that these capabilities apply to them
also.

HPE Alletra 9000 and HPE Primera


• Support up to 42,000 vVols
• Support up to 500 replicated VMs with SRM
• Require HPE Primera OS 4.2 or later
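The toy helper below simply encodes the numbers above so you can sanity-check a design at a glance. The values come from this course and may change, so verify them against current HPE documentation before relying on them.

```python
# Capability numbers as stated in this course; verify against current HPE
# documentation before using them in a real design.
LIMITS = {
    "alletra_5000_6000_nimble": {"max_vvols": 10_000, "max_srm_replicated_vms": 400},
    "alletra_9000_primera":     {"max_vvols": 42_000, "max_srm_replicated_vms": 500},
}

def fits(platform: str, vvols_needed: int, replicated_vms: int) -> bool:
    """Return True if the requested scale fits the platform's stated limits."""
    lim = LIMITS[platform]
    return (vvols_needed <= lim["max_vvols"]
            and replicated_vms <= lim["max_srm_replicated_vms"])

# 8,000 vVols fits, but 450 SRM-replicated VMs exceeds the 400-VM limit.
print(fits("alletra_5000_6000_nimble", vvols_needed=8_000, replicated_vms=450))  # False
```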


HPE is the market leader for vVols

Figure 3-36: HPE is the market leader for vVols

HPE has partnered with VMware for more than 10 years to define, develop, and test vVols and is leading
the market in VMware vVol adoption. HPE was selected as the Fibre Channel reference platform for the
VMware engineering team. Through that partnership, HPE provides a tightly integrated experience that
does not require an additional plug-in or software piece to enable vVols and to support the VMware VASA
3.0 specification.
HPE was the first vendor to provide Day 0 support for VMware vVols and VMware SRM 8.3. HPE will
design the deployment and establish what resources and requirements are needed where, further
solidifying HPE’s lead in not only vVol adoption but also customer satisfaction and support.
In your next opportunity, you should try to capture your customers’ mindshare on Tanzu and VCF by
deploying vVols on HPE storage arrays.


VMware and HPE Storage array support for NVMe over Fabrics
(NVMeoF)

Figure 3-37: VMware and HPE Storage array support for NVMe over Fabrics (NVMeoF)

You will now move on to another benefit of HPE storage arrays: their support for NVMe over Fabrics
(NVMeoF).
As you know, SSDs support much greater IOPS than HDDs, but for many years, devices have accessed
the SSDs with the same SCSI-based protocols designed for HDDs. NVMe, a more modern transfer
protocol, enables much faster access to SSDs, dramatically improving performance. However, in most
cases NVMe only runs over PCIe connections between a storage controller and the SSDs. Consider an
ESXi host with an FC connection to a storage array that supports NVMe. The storage array’s storage
controller can use NVMe to access the SSDs, but the host-to-array communications still use a much
slower SCSI-based protocol.
NVMe over Fabrics (NVMeoF) is designed to remove this limitation, running the NVMe protocol over one
of the media used to connect hosts to external arrays:
• NVMeoF FC—Carries NVMe over FC
• NVMeoF RDMA—Carries NVMe over a Remote Direct Memory Access (RDMA) fabric
• NVMeoF TCP/IP—Carries NVMe over a TCP/IP network such as one used for iSCSI
In a VMware environment NVMeoF allows SAN arrays to deliver the benefits of NVMe to virtualized
workloads.
HPE Alletra 9000 is purpose-built for NVMeoF, delivering all the benefits of this technology. Check the
latest documentation of other HPE storage arrays to determine if they support NVMeoF.


HPE InfoSight

Figure 3-38: HPE InfoSight

You cannot leave your examination of how HPE arrays make the infrastructure more software-defined
without examining HPE InfoSight. HPE InfoSight is the AI-driven engine behind HPE Alletra, Primera, and
Nimble solutions, helping the data center to manage and monitor itself. HPE InfoSight is a game-changer
for customers, transforming the support experience. With HPE InfoSight, 86 percent of issues are
automatically opened and resolved. In addition, it reduces costs, leading to 79 percent lower storage
operational expenses. And because HPE InfoSight can solve problems proactively before dire
consequences occur, HPE storage systems can deliver six nines or even 100% availability.


Example of HPE InfoSight in action

Figure 3-39: Example of HPE InfoSight in action

This example illustrates how HPE InfoSight protects customer environments. The particular case
occurred with HPE Nimble Storage arrays but could also apply to HPE Alletra and HPE Primera arrays.
• HPE InfoSight detected that a controller in a storage array went down unexpectedly. Because HPE arrays include built-in redundancy, this failure did not cause an impact. However, because the failure indicated a serious issue, HPE InfoSight flagged the event for analysis.
• Analysis revealed a bug, for which HPE Nimble engineers created and pushed out a fix within 24
hours.
• HPE InfoSight then provided 40 customers with a non-disruptive update to avoid potential issues of the same kind.


HPE InfoSight’s cross-stack analytics

Figure 3-40: HPE InfoSight’s cross-stack analytics

When customers deploy their VMware environment on HPE storage arrays, they benefit from HPE
InfoSight Cross-Stack Analytics for VMware.
HPE InfoSight’s cross-stack analytics identifies VM noisy neighbors. Noisy neighbors are VMs or applications that consume a disproportionate share of resources and cause performance issues for other VMs. By identifying these high-consuming VMs, HPE InfoSight allows companies to take corrective actions.
HPE InfoSight also reports on resource utilization, offering visibility into host CPU and memory usage. HPE InfoSight not only identifies latency issues but also helps IT admins pinpoint the root causes across hosts or storage. It also reveals inactive VMs, allowing IT admins to repurpose or reclaim their resources. IT admins can also view reports showing the top-performing VMs, based on IOPS and latency.
By providing this detailed visibility into their environment and offering recommendations for optimizing
performance and remedying issues, HPE InfoSight helps admins better manage their environment,
ensure they have necessary resources, and optimize the distribution of workloads across the physical
infrastructure.


Example: Diagnose abnormal latency with VM analytics

Figure 3-41: Example: Diagnose abnormal latency with VM analytics

Consider just one example of how HPE InfoSight enables admins to discover the root cause of an issue.
Suppose a customer's applications are experiencing excessive latency. HPE InfoSight VMVision pulls data from the VMware environment and correlates it with data from across the infrastructure. Admins no longer need to run extensive tests to determine whether the storage, the network, or another factor lies behind the latency. They can pinpoint the true root cause and then take steps to resolve the issue.


Data-centric visibility for every VM

Figure 3-42: Data-centric visibility for every VM

With HPE InfoSight VMVision, admins can examine and compare performance for all VMs. A heat map
helps the admins to quickly detect which VMs are experiencing issues. HPE InfoSight further helps
admins with explicit root cause diagnostics for the underperforming VMs. It even provides recommendations for improving their performance.


Summary of HPE storage array benefits for VMware environments

Figure 3-43: Summary of HPE storage array benefits for VMware environments

Take a few minutes to review the HPE storage solution benefits for VMware environments.

Application aware
vVols on HPE Alletra, Nimble, and Primera enables VM-level storage awareness that helps customers to align storage resources with VMs and their workload requirements.

Deeply integrated
HPE arrays provide full VAAI & VASA 1.0, 2.0, 3.0, and 4.0 support. HPE Alletra, Nimble, and Primera
also provide SRAs to enhance SRM's disaster recovery capabilities.

Predictive
HPE InfoSight delivers predictive AI for the data center. It supports a broad array of HPE infrastructure,
including HPE Alletra, Nimble, and Primera arrays as well as HPE servers. HPE InfoSight’s ability to
proactively solve issues and help the data center manage itself represents a key value add for HPE
solutions. HPE InfoSight, as well as other technologies embedded in the HPE storage solutions, helps HPE deliver six-nines uptime on HPE Alletra 6000/5000 and Nimble, and a 100% availability guarantee on
HPE Alletra 9000 and Primera. In this way, HPE storage helps to protect critical VMs.

Leadership
HPE has partnered with VMware for more than 20 years, delivering proven solutions from the datacenter
to the desktop to the cloud. HPE was the first vendor to support the vVols array-based replication
capability that was first available in vSphere 6.5, and one of only three vendors to support replication as
of 2021. HPE also supported vVols in SRM as soon as v8.3 added that feature. Because
replication and SRM are key features for many enterprises, HPE storage provides the natural choice for
companies who want the benefits of vVols.


Specific guidelines for HPE storage for VCF


VMware Cloud Foundation (VCF) is a hybrid cloud platform, which can be deployed on-premises as a
private cloud or can run as a service within a public cloud. This integrated software stack combines
compute virtualization (VMware vSphere), storage virtualization (VMware vSAN), network virtualization
(VMware NSX), and cloud management and monitoring (VMware vRealize Suite) into a single platform.
In the version 4 release of VCF, VMware added Tanzu, which embeds the Kubernetes runtime within
vSphere. VMware has also optimized its infrastructure and management tools for Kubernetes, providing a
single hybrid cloud platform for managing containers and VMs.
In the last section of this module, you will review specific guidelines for implementing HPE storage for VCF.


VMware Cloud Foundation Storage: Flexible storage options

Figure 3-44: VMware Cloud Foundation Storage: Flexible storage options

VCF supports flexible storage options for workload domains, which are used to create logical pools
across compute, storage, and networking.
VCF includes two types of domains: the management domain and virtual infrastructure workload
domains. The management domain contains all the components that are needed to manage the
environment, such as one or more instances of vCenter Server, the required NSX components, and Aria
components. The management domain uses vSAN storage.
Virtual infrastructure (VI) workload domains are reserved for user workloads. A workload domain consists
of one or more vSphere clusters.
VCF supports two types of storage for workload domains:
• Principal storage
• Supplemental storage
Principal storage is the storage type selected when you create a new workload domain. For the management domain, vSAN-based storage is the only principal storage type supported. When you create VI workload domains or clusters, the following storage types can be used for principal storage:
• vVols
• VMFS on FC (VMFS formatted datastore, no raw device support)
• vSAN
• NFS v3
Supplemental storage can be used to add storage capacity to any domain or cluster, including the
management domain. Supplemental storage is used mainly for data-at-rest such as virtual machine
templates, backup data, ISO images, and so on. Supplemental storage is not used for workloads.
Supplemental storage options include:
• vVols
• Block options: VMFS on FC, VMFS on iSCSI, and NVMe-oF
• NFS options: NFS v3 and NFS v4.1
It is important to note that admins can only add and scale vSAN storage from within SDDC Manager. However, other options can still be the best choice for your customer’s needs.


When designing a solution for customers, make sure to check the VMware Compatibility Guide for all of
the storage components. If the customer will use VMFS on FC, you must verify that the storage arrays
and the HBAs are on the hardware compatibility list (HCL).


Best practices for automating VCF storage

Figure 3-45: Best practices for automating VCF storage

With HPE customers can choose the storage operational model that best meets their needs. They can
use vSAN and automate storage management with SDDC. Or they can use HPE storage arrays with their
low latency performance and superior availability. As you learned earlier, HPE storage arrays simplify
management through deep integrations with VMware such as vVols and HPE storage plug-ins for
vSphere and Aria (formerly vRealize).


HPE VCF solution

Figure 3-46: HPE VCF solution

HPE and VMware collaborate to help customers accelerate their journey to hybrid cloud. HPE solutions
combined with VCF deliver a simplified, more secure private cloud that is flexible, easy to deploy,
seamless to manage, and easier to operate. Customers receive enterprise agility, reliability, and
efficiency from initial deployment through Day 2 operations. Together HPE and VMware have
revolutionized the data center by enabling consistent security and operations across private and public
clouds—delivering a true hybrid cloud experience.
HPE ProLiant DL Servers provide an optimized infrastructure for VCF. The combination of HPE ProLiant
DL Server infrastructure and VMware SDDC dramatically improves business outcomes as well as overall
value for customers. The integrated solution simplifies deployment and lifecycle management, giving
customers a faster time to value. Customers can begin with a small infrastructure footprint and expand to
meet changing business requirements.
Customers also have the option of combining the composable infrastructure of HPE Synergy with VCF.
As you know, HPE Synergy is designed to reduce infrastructure complexity and cost, providing flexibility
and profile-driven management for both virtualized and bare metal workloads. In addition, HPE and
VMware have tightly integrated SDDC Manager and HPE OneView powering HPE Synergy to deliver
simplicity in managing composable infrastructure and private cloud environments. With the HPE OneView
connector for VCF, HPE brings composability features to VCF. Through this unique integration and
enhanced automation, customers can dynamically compose resources within a single console using
SDDC Manager to meet the needs of VCF workloads, thus saving time and increasing efficiency. This
integration further simplifies infrastructure management by enabling admins to respond quickly to business needs, adding capacity on demand directly from SDDC Manager.
Customers also have their choice of scalable, high-performance storage: HPE Alletra or HPE Primera.
HPE provides Reference Architectures for VCF on HPE ProLiant and HPE Synergy. These Reference Architectures provide pre-validated designs. Below are some Reference Architectures that were available when this course was published, but you should always check for updated ones.
• HPE Reference Architecture for VMware Cloud Foundation 4.3.1 on HPE ProLiant as Management
and HPE Synergy as Workload Domain
• HPE Reference Architecture VMware Cloud Foundation 4.3.1 on HPE ProLiant DL Servers
• HPE Reference Architecture VMware Cloud Foundation 4.3.1 on HPE Synergy


Customer scenario: Financial Services 1A

Figure 3-47: Customer scenario: Financial Services 1A

You are still in the process of helping a company migrate its vSphere deployment to HPE Synergy, and
you need to propose an HPE storage component of the solution. Customer discussions have revealed a
few key requirements. The customer is tired of endless struggles with storage being a black box that
VMware admins have little insight into and that slows down provisioning processes. For the upgrade, they
want a storage solution that provides tight integration with VMware. Ideally, VMware admins should be
able to provision and manage volumes on demand.
Because Financial Services 1A runs mission-critical services on vSphere, the company is also concerned with protecting its own data, as well as its customers' data. Their current backup processes are too time-consuming and complex, and the customer is concerned that the complexity will lead to mistakes—and lost data.


Activity 3

Figure 3-48: Activity 3

Begin this activity by reviewing more details for the scenario.

Scenario
You are still in the process of helping Financial Services 1A migrate its vSphere deployment to HPE
Synergy, and you need to propose an HPE storage component of the solution. Customer discussions
have revealed a few key requirements. The customer is tired of endless issues with storage being a black
box that VMware admins have little insight into and that slows down provisioning processes. For the
upgrade, they want a storage solution that provides tight integration with VMware. Ideally, VMware
admins should be able to provision and manage volumes on demand.
Because Financial Services 1A runs mission-critical services on vSphere, the company is also concerned
with protecting its own data, as well as its customers' data. The company's current backup processes are
too time consuming and complex, and the customer is concerned that the complexity will lead to
mistakes—and lost data.

Task
Prepare a presentation on the relative benefits of vSAN or an HPE storage array as the storage solution
for this customer. In your presentation, note the advantages and disadvantages of both solutions. Also
emphasize the particular distinguishing benefits of HPE for either solution.
__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

See the Appendix for possible answers for this activity.


Summary

Figure 3-49: Summary

This module has guided you through designing HPE storage solutions for VMware environments. You
learned how HPE vSAN ReadyNodes can simplify the process of identifying and deploying an HPE
storage solution to meet a customer’s requirements. You also learned how to deploy vSAN on HPE
Synergy. Finally, you learned about using HPE storage arrays and the many benefits that these arrays
provide for VMware environments.


Learning checks
1. What is one benefit of HPE Synergy D3940 modules?
a. A single D3940 module can provide up to 40 SFF drives each to 10 half-height
compute modules.
b. Customers can assign drives to connected compute modules without fixed ratios of
the number per module.
c. A D3940 module provides advanced data services like Peer Persistence.
d. D3940 modules offload drive management from compute modules, removing the
need for controllers on compute modules.
2. Why would you recommend OSA or ESA?
a. OSA is designed for enterprise environments while ESA is designed for mid-sized or
small VMware environments.
b. Customers using HPE Synergy should use OSA, while customers using HPE
ProLiant should always use ESA.
c. Customers with demanding workloads should use ESA, while customers with less
demanding workloads can use OSA.
d. ESA replaces OSA so all customers should immediately move to ESA.
3. What is one strength of HPE Nimble and Primera for vVols?
a. They help the customer unify management of vVol and vSAN solutions.
b. They have mature vVols solutions that support replication.
c. They automatically convert VMFS datastores into simpler vVol datastores.
d. They provide AI-based optimization for Nimble volumes exported to VMware ESXi
hosts.
You can check the correct answers in “Appendix: Answers.”



Design HPE Solutions for VMware
Software-Defined Networking
Module 4

Learning objectives
This module outlines options for making the network as software defined as the rest of the data center. In
this module you will learn how to use a combination of VMware and HPE technologies to virtualize and
automate the network.
You will first learn about HPE Synergy networking. Then you will examine VMware NSX. You will then
look at using HPE Aruba Networking CX switches as the underlay for the data center and how HPE
Aruba Networking NetEdit helps companies automate. Finally, you will briefly review Cisco ACI for cases
in which you need to integrate with this third-party solution.
After completing this module, you will be able to:
• Position HPE software-defined networking (SDN) solutions based on use case
• Design HPE SDN solutions


HPE Synergy networking guidelines for VMware environments
HPE Synergy embeds networking within the composable infrastructure solution. In this topic of the
module, you will review guidelines for designing HPE Synergy networking for VMware environments.


Synergy compute-module-to-interconnect-module connections

Figure 4-1: Synergy compute-module-to-interconnect-module connections

To understand HPE Synergy networking, you must understand the internal connections between compute
modules and interconnect modules.
The figure above reviews which interconnect module bay connects to which mezzanine on compute
modules. Bays 1-3 connect to the first port on the mezzanines, and bays 4-6 to the second port.
If you want to connect compute modules to D3940 modules for local storage, to external storage arrays
using Fibre Channel (FC), and to the Ethernet data center network for management and production
traffic, follow these population rules:
• Install SAS Interconnect Modules in bays 1 and 4; install Smart Array controllers in compute modules’
mezzanine 1
• Install your choice of FC interconnect modules in bays 2 and 5; install FC HBAs in compute modules’
mezzanine 2
• Install your choice of Ethernet interconnect or satellite modules in bays 3 and 6; install Ethernet or
FlexFabric NICs in compute modules’ mezzanine 3
Virtual Connect FC modules, for FC, and Virtual Connect FlexFabric modules, for Ethernet plus optional
Fibre Channel over Ethernet (FCoE), are recommended to unlock the full benefits of Synergy
composability.
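If it helps to see the population rules above in a compact form, the short sketch below encodes the mezzanine-to-bay mapping as data: mezzanine N on a compute module maps to interconnect bays N and N+3. The structure and names are purely illustrative.

```python
# The population rules above, expressed as a lookup table: mezzanine N on a
# compute module maps to the redundant interconnect bay pair (N, N + 3).
FABRIC_PLAN = {
    1: {"bays": (1, 4), "adapter": "Smart Array controller", "icm": "SAS"},
    2: {"bays": (2, 5), "adapter": "FC HBA", "icm": "Fibre Channel (VC FC)"},
    3: {"bays": (3, 6), "adapter": "Ethernet/FlexFabric NIC", "icm": "Ethernet (VC FlexFabric)"},
}

def bays_for_mezzanine(mezz: int) -> tuple:
    """Return the interconnect bay pair served by a mezzanine slot."""
    return (mezz, mezz + 3)

for mezz, plan in FABRIC_PLAN.items():
    assert plan["bays"] == bays_for_mezzanine(mezz)
    print(f"Mezzanine {mezz}: {plan['adapter']} -> ICM bays {plan['bays']} ({plan['icm']})")
```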


HPE Synergy multi-frame networking

Figure 4-2: HPE Synergy multi-frame networking

You can deploy HPE VC interconnect modules (ICMs) in a conductor-satellite architecture to extend
networking across three to five frames and aggregate uplinks.
The figure shows a three-frame architecture. A VC ICM resides in bay 3 of frame 1. This module will act
as a conductor. It connects to satellite ICMs in the other two frames’ bay 3. Those satellite ICMs provide
little intelligence of their own. They simply forward traffic to the conductor, which provides the uplinks.
Together these ICMs form a fabric. Similarly, one conductor and two satellites are installed in the three
frames’ bay 6 and form a different fabric. You should install the bay 6 conductor in a different frame from
the bay 3 conductor for better redundancy in case a frame fails.
The two conductors can also interconnect, allowing them to establish multi-chassis link aggregation
groups (M-LAGs) with their uplinks to the data center LAN.


Synergy FlexNICs

Figure 4-3: Synergy FlexNICs

You will now explore how an HPE Synergy compute module, acting as an ESXi host, receives IP
connectivity through the interconnect modules in bays 3 and 6. A Converged Network Adapter (CNA) plus
a VC ICM together unlock the full benefits of composable networking.
You can divide a CNA into multiple FlexNICs or connections, each of which looks like a physical port to
the OS running on the compute module. Admins can assign different networks to each connection and
set bandwidth policies per connection.
For the purposes of this course, the compute module is an ESXi host. The host has virtual switches or
virtual distributed switches (VDS). Each virtual switch or VDS owns one or more of the FlexNICs and
connects VMkernel adaptors or VM port groups to them. In this example, the compute module has a
single two-port CNA; each port is divided into four FlexNICs. The ESXi host has four virtual switches, each of which has one FlexNIC on each port assigned to it for redundancy. The four virtual switches establish management, iSCSI, vMotion, and production networks as follows:
• Mgmt virtual switch has a management VMkernel. It owns ports 3:1a and 3:2a. (You will see later that
it can use LACP on the ports.)
• iSCSI virtual switch has two iSCSI VMkernels. It owns ports 3:1b and 3:2b; one port is assigned to
each VMkernel.
• vMotion virtual switch has a vMotion VMkernel; this virtual switch could also carry FT traffic if the
company uses that feature. It owns ports 3:1c and 3:2c.
• Production virtual switch has multiple port groups assigned to it with different VLANs. It owns ports
3:1d and 3:2d.
These are the corresponding compute module network settings:
• Ports 3:1a and 3:2a are assigned to the Mgmt network
• Port 3:1b is assigned to the iSCSI A network, and port 3:2b is assigned to the iSCSI B network.
• Ports 3:1c and 3:2c are assigned to the vMotion network.
• Ports 3:1d and 3:2d are assigned to the Production network set, which has multiple networks inside it.
In this example, ports have four FlexNICs. The number of supported FlexNICs per port depends on the
VC ICM and CNA capabilities, whichever is lower. For example, the VC SE 40Gb F8 module supports
eight per port, as does the 4820C CNA, but the 3820C CNA supports only four.


Synergy FC convergence

Figure 4-4: Synergy FC convergence

Admins can also configure one of the FlexNICs on a CNA port, and the paired FlexNIC on the other port,
to use FC or enhanced iSCSI; the FlexNICs are then called FlexHBAs. In this example, 3:1b and 3:2b
operate in FCoE mode and are assigned to Synergy FC networks. The ports appear as storage adapters
on the ESXi host. The host uses these adapters to reach SAN storage arrays, accessible through the VC
ICMs, which require FC licenses.
This design could eliminate the need for a mezzanine 2 adapter and interconnect modules in bays 2 and 5. On
the other hand, fewer FlexNICs are available for other purposes.


Mapped VLAN mode

Figure 4-5: Mapped VLAN mode

As you have seen, you assign each compute module connection to a network. ICMs have uplink sets that
own one or more external ports on the interconnect module. The uplink set also has networks assigned to
it. The compute module connection can send traffic to any other compute module connections in the same network, and over the uplink ports of an uplink set that carries its network.
With mapped VLANs, every network is assigned a VLAN ID. An uplink set can support multiple networks
so that those networks can share the uplinks. To maintain network divisions, traffic for all networks,
except the one marked as the native network, is tagged with the network VLAN ID as it is sent over the
uplink.
If a compute module connection is assigned to a single network, the traffic is untagged on the connection.
For example, the Mgmt network in this example is assigned to port 3:1a; it sends and receives untagged
traffic to and from the Mgmt virtual switch. But a downlink connection can also support multiple networks,
which are bundled in a network set. Again, traffic for all networks, except the network set’s native
network, is tagged on the downlink. This is useful for connecting to virtual switches that send tagged
traffic for multiple port groups.
The example you see in the figure above has fewer connections for simplicity, but the same principles
apply even if you are using more connections.
Mapped VLANs give Synergy the most control and are recommended in most circumstances. However,
they do require VMware admins to coordinate the VLANs that they set up in VMware and in Synergy.
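To illustrate that coordination, the hedged sketch below creates a tagged (mapped VLAN) network through the HPE OneView REST API so that its VLAN ID matches the VLAN used by a VMware port group. The appliance address, token, and exact field names are assumptions based on the OneView ethernet-networks schema and may differ by API version. Note the smartLink and privateNetwork flags, which relate to features discussed later in this topic.

```python
import requests

ONEVIEW = "https://oneview.example.local"              # placeholder appliance
HEADERS = {"Auth": "<session-token>", "X-Api-Version": "2000"}

# A mapped-VLAN network: vlanId must match the VLAN that the VMware admins
# configure on the corresponding port group.
network = {
    "type": "ethernet-networkV4",      # type string varies by API version
    "name": "Production-110",
    "vlanId": 110,
    "ethernetNetworkType": "Tagged",   # "Tunnel" would create a tunneled-mode network
    "purpose": "General",
    "smartLink": True,                 # see the Smart Link discussion later in this module
    "privateNetwork": False,           # True would block downlink-to-downlink traffic
}

resp = requests.post(f"{ONEVIEW}/rest/ethernet-networks", json=network,
                     headers=HEADERS, verify=False)  # lab only
resp.raise_for_status()
print("Created network:", resp.json().get("uri"))
```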


Tunneled mode

Figure 4-6: Tunneled mode

Tunneled mode opens up the network to support any VLAN tags. If a virtual switch uses a connection
with a tunneled mode network, admins can add new port groups using new VLANs without needing to
change the Synergy configuration. However, tunneled mode causes all networks to share the same
broadcast domain and ARP table. If upstream switches bridge VLANs, this will cause MAC addresses to
be learned incorrectly and disrupt traffic. Therefore, tunneled mode is only recommended for very
changeable environments such as with DevOps.
A bit later, you will learn how to create an even better solution with NSX; that solution will keep
mapped VLAN networks stable on Synergy while allowing VMware admins to add new VM networks
flexibly.


Redundancy with M-LAG and LACP-S

Figure 4-7: Redundancy with M-LAG and LACP-S

Now that you understand how networks link compute module connections to uplink sets on ICMs, you can
look at some best practices for using redundant links. The following sections describe the two main ways
to establish multiple connections to the data center LAN.
For most Ethernet networks, it is recommended that you use LACP-S, or S-channel, to create link
aggregations between pairs of compute module connections. Pairs of connections are defined as
FlexNICs with the same letter on different ports. Connections in the same LAG are assigned to the same
network, and the OS that runs on the compute module must define the connections as a LAG too. For
ESXi this means that a distributed switch configured with a LAG must own the connections. LACP-S
provides faster fault recovery and better load balancing compared to traditional NIC teaming with OS load
balancing.
LACP-S works best when the connected ICMs use an M-LAG to carry the connections’ networks. The
ICMs automatically establish an M-LAG when the same uplink set has ports on both ICMs. The two ICMs
present themselves as a single entity to the devices connected to those ports. They could connect to one
data center switch or two switches in a stack that also supports M-LAG. VC SE 40Gb F8 modules support
up to eight active links per M-LAG. (Each module has six 40GbE uplinks, which can be split into four
10GbE links each. All links in the M-LAG must be the same speed).
When you use LACP-S and M-LAG together, whichever ICM receives traffic from the downlink LACP-S
LAG forwards the traffic across a local link in the M-LAG. Similarly, when an ICM receives traffic from
upstream, destined to the compute module connection, it forwards the traffic on its local downlink in the
S-channel. This reduces traffic on the links between ICMs.
Note also that this view shows the compute module connected directly to the ICMs for simplicity. In
reality, the compute module might connect to satellite modules, which connect to the conductor VC ICMs
in another frame. Only conductor ICMs have uplinks. Logically, though, the topology is the same.


Redundancy with single ICM LAGs and Smart Link

Figure 4-8: Redundancy with single ICM LAGs and Smart Link

For iSCSI a different configuration is recommended. The compute module’s pair of iSCSI connections
should be assigned to the two different networks with no aggregation. To decrease unnecessary traffic
over the conductor-to-conductor links, the VC conductor modules should have different uplink sets, which
only support their own downlink’s network. They can establish a LAG to the uplink switch with their own
links, but not an M-LAG.
This design requires Smart Link to handle failures. Without Smart Link, if all uplinks on an interconnect
module fail, but the downlinks are still operational, the compute modules will continue to send traffic on the failed iSCSI network, causing disruption. Smart Link shuts down the downlinks in a network if all
the uplinks fail, allowing the compute module to detect the failure and fail over to the other connection.
You might also choose to use this design to permit an active/active configuration if the data center
switches do not support a stacking technology such as HPE Comware Intelligent Resilient Framework
(IRF) or HPE Aruba Networking Virtual Switching Extension (VSX). The virtual switch could load balance
with originating source port (by VM), for example, so some VMs would use the uplinks on ICM 3 and
some would use the uplinks on ICM 6.
Although the last two figures have shown the two approaches separately for clarity, the same CNA can
combine the two approaches on different FlexNICs. For example, you can have the iSCSI connections
using Smart Link and no link aggregation while the management and production connections use LAGs.
Similarly, the ICMs can have some uplink sets that use LAGs and some that use M-LAGs, but each uplink
set owns ports exclusively.


Internal networks

Figure 4-9: Internal networks

Internal networks are not assigned to uplink sets on interconnect modules, but are assigned to downlink
ports on compute modules. That means that compute modules can communicate with each other through
the interconnect modules, but their traffic does not pass out into the data center network. The traffic
extends as far as the connected conductor and satellite modules, which could be three frames.
If a cluster is confined to three frames, internal networks can be useful for functions like FT. A production
network, to which VMs connect, can also be an internal network, but only if the VMs in that network only
need to communicate within the three-frame Synergy solution. Also remember that VC modules are not
routers. Consider whether VMs need to communicate at Layer 3, even with VMs on hosts in the same
Synergy frames. If the data center network is providing the routing, the VMs' networks must be carried on
an uplink set.


Private networks

Figure 4-10: Private networks

A private network blocks connections between downlinks, but permits traffic out uplinks. This can be
useful if the network includes less trusted or more vulnerable VMs. Many hackers attempt to move from
one compromised machine to others, seeking to find more privileges and sensitive information as they go.
Preventing VMs from accessing VMs on another host can limit the extent of an attack. Of course, a
private network does not work when VMs need to communicate together as part of their functionality.


HPE Synergy support for other networking features

Figure 4-11: HPE Synergy support for other networking features

The Synergy adapters support some additional functions for virtualization workloads with specialized needs.
Single root input/output virtualization (SR-IOV) enables network traffic to bypass the software switch layer typical in a hypervisor stack, which results in less network overhead and performance that more closely mimics non-virtualized environments. To make this feature available to the customer, you must choose an
Ethernet adapter that supports it. You must also deploy compatible ICMs for the selected adapter.
The SR-IOV architecture on VC allows up to 512 VFs, but the Ethernet adapter itself might support fewer.
When admins create a connection in a Synergy server profile or SPT, they can enable VFs and set the
number of VFs from 8 to the max supported by the adapter. Admins can then assign individual VMs on
that host to a port group and the SR-IOV-enabled adapter. Each VM is assigned its own VF on the
adapter and has its own IP address and dynamic MAC Address; VLAN settings come from the port group
and should match what is configured for the network on Synergy. In this way, admins can continue to
manage VM connections in a mostly familiar way, but the VMs experience dramatically improved
performance.
Many Synergy adapters also support DirectPath IO. This technology improves performance and
decreases the CPU load on the hypervisor by allowing VMs direct access to the hardware. However, this
technology is only recommended for workloads that need maximum network performance as it comes
with some significant drawbacks. It is not compatible with HA, vMotion, or snapshots.


VMware NSX
You will now learn more about VMware NSX. While step-by-step implementation instructions and detailed
technology dives are beyond the scope of this course, by the end of this section, you should understand
the most important capabilities of NSX and be able to make key design decisions for integrating NSX into
your data center solutions.


VMware NSX

Figure 4-12: VMware NSX

Originally VMware NSX came in two versions. NSX-V was specific to ESXi hosts controlled by VMware
vSphere while NSX-T worked with ESXi hosts, KVM hosts, and bare metal hosts, enabling companies to
orchestrate networking for virtualized, containerized, and bare metal workloads. However, VMware ended
general support for NSX-V in January 2022. NSX-T is now simply called NSX.
NSX helps customers virtualize networking and then to automate and orchestrate networks in sync with
their compute workloads. It moves networking to software, creating never-before-seen levels of flexibility.
It fundamentally transforms the data center’s network operational model as server virtualization did 10
years ago. In just minutes admins can move VMs and all their associated networks across Layer 3
boundaries within a data center and also between data centers. No interruption to the application occurs,
enabling active-active data centers and immediate disaster recovery options.
On the security front, NSX brings firewall capabilities inside hosts with automated, fine-grained policies
tied to VMs or container workloads. NSX enables micro-segmentation, in which security policies can be
enforced between every VM or workload to significantly reduce the lateral spread of threats inside the
data center. By making network micro-segmentation operationally feasible, NSX brings an inherently
better security model to the data center.


VMware NSX architecture

Figure 4-13: VMware NSX architecture

A brief look at the VMware NSX architecture will give you the foundation you need to understand the NSX
features. Review each section to learn about that component of the architecture.

Management plane
The management plane consists of the NSX Manager, which holds and manages the configuration. It
plugs into vCenter, NSX Container Plugin, and Cloud Service Manager.

NSX Manager
Admins can access the NSX Manager through a GUI, as well as through a plugin to vCenter, and
configure and monitor NSX functions. The NSX Manager also provides an API, which enables it to
integrate with third-party applications. By allowing these applications to program network connectivity, the
NSX API provides the engine for wide-scale network orchestration.
You deploy an NSX Manager together with a Controller in an NSX Manager Appliance VM. VMware
recommends deploying a cluster of three NSX Manager Appliances for redundancy.

Control plane
The control plane builds MAC forwarding tables and routing tables.

NSX Controller
Each NSX Manager Appliance also includes an NSX Controller. The controllers form the Central Control
Plane (CCP). They perform tasks such as building MAC forwarding tables and routing tables, which they
send to the Local Control Plane.
Control plane objects are distributed redundantly across controllers such that they can be reconstructed if
one controller fails.

Local Control Plane (LCP)


ESXi hosts in the data plane are collectively called "transport nodes." (NSX also supports KVM hosts and
bare metal servers as transport nodes, but this course focuses on ESXi.) The transport nodes receive
forwarding information from the controllers in the CCP, enabling them to forward traffic in a more efficient,
distributed fashion. They also receive firewall rules from the CCP.


Data plane
The data plane consists of the transport nodes. They are responsible for receiving traffic in logical
networks, switching the traffic toward its destination, and implementing any encapsulation necessary for
tunneling the traffic through the underlay network. The data plane also routes traffic and applies edge
services.

NSX virtual switch


Each transport node has one or more NSX virtual switches.
On ESXi hosts, the NSX virtual switch was originally a specialized NSX Virtual Distributed Switch (N-VDS). Now the NSX virtual switch can be a familiar VDS, provided that it is VDS 7.0 or above. (On non-ESXi hosts, such as KVM hosts, the NSX virtual switch is based on Open vSwitch [OVS].)

Edge services
NSX provides a distributed router (DR) in the data plane for routing traffic directly on transport nodes.
However, some services such as NAT, DHCP, and VPNs are not distributed. A Services Router (SR),
which is deployed in an edge cluster, provides these services. The SR is also responsible for routing
traffic outside of the NSX domain into the physical data center network.
NSX also supports edge bridges, which can connect physical servers into the NSX networks.


Use case 1: Networking virtualization

Figure 4-14: Use case 1: Networking virtualization

You will now examine NSX features in more detail, starting with the network virtualization use case.
Network virtualization enables VMs to connect into a common logical network regardless of where their
hosts are located in the physical network. The physical network can implement routing at the top of the
rack without compromising the portability of VMs.


Overlay networking

Figure 4-15: Overlay networking

NSX uses overlay networking to provide network virtualization. A brief discussion of overlay networking in
general will be useful.
When designing a data center network, network architects typically prioritize values such as stability,
load-sharing across redundant links, and fast failover. They have found that an architecture that routes
between each network infrastructure device delivers these values well. However, such an architecture
can make it harder to extend application networks wherever they need to go.
With overlay networking, the physical infrastructure remains as it is: scalable, stable, and load-balancing.
Virtualized networks, or overlay networks, lie over the physical infrastructure, or underlay network. An
overlay network can be extended without regard to the architecture of the underlay network. Companies
can then deploy workloads in any location, but still the workloads can belong to the same subnet and
communicate at Layer 2. VMware managers can also deploy overlay networks on demand, without
having to coordinate IP addressing and other settings with the data center network admins.
Overlay networking technologies are also highly scalable, typically offering millions of IDs for the virtual
(overlay) networks.
There are many strategies for building an overlay network. Here you will focus on one of the most common: tunnel endpoints (TEPs), also called virtual tunnel endpoints (VTEPs), create tunnels between themselves. The tunnels are based on UDP encapsulation. When a VTEP needs to deliver Layer 2 traffic in an
delivery header, which directs the traffic to the VTEP behind which the destination resides. The underlay
network only needs to know how to route traffic between VTEPs; it has no visibility into the addressing
used for the overlay networks.
Common overlay technologies include Virtual Extensible LAN (VXLAN), Network Virtualization using
Generic Routing Encapsulation (NVGRE), and Generic Network Virtualization Encapsulation (Geneve).
Geneve is a newer standard that supports the capabilities of VXLAN, NVGRE and other network
virtualization techniques; NSX uses this technology.
Geneve encapsulates L2 frames into UDP segments and uses a 24-bit Virtual Network Identifier. Geneve
has a variable length header format, making it possible to add extra information to the header. This
information can be used by the underlay network to decide how to handle the traffic in the best way. The
Geneve header is also extensible. This means that it will be easier to add new optional features to the
protocol by adding new fields to the header.
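To make the encapsulation overhead concrete, here is a quick back-of-the-envelope sketch in Python. The header sizes are the base values for Geneve over IPv4 with no options (an assumption; Geneve options add 4-byte multiples, which is why the overhead is often quoted as "50 or more bytes"):

```python
# Estimate the underlay MTU needed to carry a full-size guest frame
# through a Geneve tunnel (base headers only, no Geneve options).
INNER_MTU = 1500     # guest payload (standard Ethernet MTU)
INNER_ETH = 14       # inner Ethernet header carried inside the tunnel
GENEVE_BASE = 8      # fixed Geneve header; options add 4-byte multiples
OUTER_UDP = 8        # UDP encapsulation
OUTER_IPV4 = 20      # outer IP delivery header

overhead = INNER_ETH + GENEVE_BASE + OUTER_UDP + OUTER_IPV4
print(f"Encapsulation overhead: {overhead} bytes")         # 50 bytes
print(f"Minimum underlay IP MTU: {INNER_MTU + overhead}")  # 1550 bytes
```

This is why overlay transport networks are configured with a larger MTU; NSX documentation commonly calls for at least 1600 bytes, as discussed later in this module.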


Technologies such as Geneve do not provide automation on their own. However, NSX provides the
orchestration layer, enabling admins to simplify and automate the configuration of overlay networks.


Overlay segments

Figure 4-16: Overlay segments

In NSX, overlay networks are called overlay segments.


In a vSphere deployment without NSX, a distributed port group (dvportgroup) on a VDS creates a
network, or VLAN, to which VMs on multiple hosts can connect. However, those hosts must all be in the
same Layer 2 domain; otherwise, the VMs cannot connect on the same network.
An NSX overlay segment, on the other hand, creates a logical network that interconnects VMs at Layer 2,
even when their hosts are divided by Layer 3 boundaries. The overlay segment is analogous to the
dvportgroup. It is associated with a VDS or VDSes through a transport zone, and VMs connect to the
overlay segment. In fact, the overlay segment even appears as an NSX dvportgroup in vCenter
(when the ESXi host is using a VDS as the NSX virtual switch). However, rather than define a VLAN, the
overlay segment defines a logical, overlay network that can extend anywhere.
NSX also supports traditional networks, called VLAN segments.
Note that segments used to be called logical switches, and you might sometimes still hear this term.
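For orientation, here is a minimal sketch of how an overlay segment might be created programmatically through NSX's declarative Policy REST API, using Python's requests library. The manager address, credentials, transport zone path, and subnet are all hypothetical placeholders:

```python
import requests

NSX_MANAGER = "https://nsx-manager.example.com"   # hypothetical address
TZ_PATH = ("/infra/sites/default/enforcement-points/default/"
           "transport-zones/<overlay-tz-uuid>")   # placeholder overlay TZ

segment = {
    "display_name": "web_front-end",
    "transport_zone_path": TZ_PATH,                   # places segment in the zone
    "subnets": [{"gateway_address": "10.1.10.1/24"}],
}

# PATCH creates or updates the object in the declarative Policy API.
resp = requests.patch(
    f"{NSX_MANAGER}/policy/api/v1/infra/segments/web-front-end",
    json=segment,
    auth=("admin", "VMware1!"),   # replace with real credential handling
    verify=False,                 # lab only; validate certificates in production
)
resp.raise_for_status()
```

In practice, admins often create segments in the NSX Manager UI instead; the API form simply shows how little definition a segment needs: a name, a transport zone, and (optionally) a gateway subnet.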


Transport zones

Figure 4-17: Transport zones

NSX uses transport zones to group segments. An overlay transport zone includes one or more overlay
segments, while a VLAN transport zone contains one or more VLAN segments.
NSX admins assign transport nodes to the transport zone, which makes the segments in that zone
available to those nodes. In this example, a compute ESXi cluster and an edge ESXi cluster have been
assigned to the overlay transport zone, "my overlays." Admins can then connect VMs running on those
clusters to the overlay segments in the transport zone.
A gateway (which consists of DR and SR components) can route traffic between the overlay segments in
the same zone. Tier-1 gateways route traffic between overlay segments. Tier-0 gateways connect to Tier-1 gateways and route traffic out to the physical network.


Example: Original network

Figure 4-18: Example: Original network

You will now look at a simplified example of how NSX can alter the network architecture.
In this example, a company has an ESXi cluster called "compute cluster" with a VDS called
"Compute_VDS." Compute_VDS has a port group for "web_front-end" VMs and for "web_app" VMs. It
also has networks for vMotion and management traffic.


Example: Plan for overlay segments

Figure 4-19: Example: Plan for overlay segments

Now the company is deploying NSX. NSX will enable the company to virtualize the production networks
with the Web front-end and Web app VMs.
Admins create two overlay segments: "web_front-end" and "web_app." They place the segments in an
overlay transport zone. They attach the compute cluster to that zone.
Now the company can remove the VLANs that used to be associated with these networks from the
Compute_VDS uplinks, as well as from the connected physical infrastructure. Instead, the uplink carries
VLAN 100, which is the transport VLAN in this example. Even more importantly, admins can add new
overlay segments in the future without having to add corresponding VLANs and subnets in the physical
infrastructure.
Note that it is typically best practice to leave the management and vMotion networks in traditional VLAN-backed segments. The same holds true for storage networks, such as vSAN or other iSCSI traffic.
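On the switch side, the change amounts to pruning the workload VLANs from the server-facing trunk and leaving only the transport and infrastructure VLANs. An illustrative AOS-CX-style snippet follows (port numbers and VLAN IDs are hypothetical; verify exact syntax for your switch model and software version):

```
! Server-facing uplink after the migration to overlay segments:
! the web_front-end and web_app VLANs are gone; only the NSX transport
! VLAN (100) and infrastructure VLANs (mgmt/vMotion/storage) remain.
interface 1/1/10
    no shutdown
    no routing
    vlan trunk allowed 100,200,201,202
    mtu 9198    ! jumbo MTU leaves headroom for Geneve encapsulation
```

New overlay segments can now be added entirely within NSX, with no further changes to this port.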


Use case 2: Microsegmentation

Figure 4-20: Use case 2: Microsegmentation

You will now learn how NSX fulfills the microsegmentation use case, helping customers enhance control over their virtualized workloads more easily and more flexibly. VMware has explained what it means by micro-segmentation in a blog post; the sections below summarize the key points.

Topology agnostic
With traditional security solutions, traffic must pass through the firewall to be filtered, and the firewall
location determines the extent of security zones. But as workloads become more portable, companies
need more flexibility in creating security zones based on business need, not location. NSX micro-
segmentation deploys an instance of the firewall to each host, enabling companies to implement
topology-agnostic controls.

Centralized control
While firewall functionality is distributed to the ESXi hosts, the firewall is controlled centrally. Admins
create security policies for their distributed services through an API or management platform, and those
policies are implemented everywhere.

Granular control based on high-level policies


NSX micro-segmentation uses a policy-based approach. The distributed firewall can filter traffic at many
levels and based on criteria—such as OS type—beyond the traditional packet-header related criteria.

Network overlay-based segmentation


Companies can use network overlays to divide VMs into logical groups based on security policy and
business need.

Policy-driven service insertion


NSX can insert third-party applications into security policies to provide enhanced IDS/IPS and other
security features.


How NSX implements micro-segmentation

Figure 4-21: How NSX implements micro-segmentation

NSX includes two firewall types. The distributed firewall (DFW) enables microsegmentation across the complete NSX domain. Defined centrally, the DFW is instantiated on every transport node and filters all traffic that enters and leaves every VM or container. An edge firewall is implemented on an Edge Services Gateway (ESG), and it filters traffic between the NSX domain and external networks.
These stateful firewalls use rules that should be familiar to you from other firewall applications. Rules specify the source and destination for traffic, the service (defined by protocol and possibly TCP or UDP port), a direction, and an action—either allow or deny. However, NSX permits great flexibility in defining the source and destination, making it easy for admins to group devices together based on the company's security requirements.
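As a concrete illustration, here is a hedged sketch of a DFW rule pushed through the NSX Policy REST API in Python. The manager address, group paths, and policy name are hypothetical, and the referenced groups would need to exist first:

```python
import requests

NSX_MANAGER = "https://nsx-manager.example.com"   # hypothetical address

policy = {
    "display_name": "web-to-app",
    "category": "Application",
    "rules": [{
        "display_name": "allow-web-to-app-https",
        # Groups (built on tags, names, and so on) make rules topology agnostic.
        "source_groups": ["/infra/domains/default/groups/web-vms"],
        "destination_groups": ["/infra/domains/default/groups/app-vms"],
        "services": ["/infra/services/HTTPS"],
        "direction": "IN_OUT",
        "action": "ALLOW",
    }],
}

resp = requests.patch(
    f"{NSX_MANAGER}/policy/api/v1/infra/domains/default/"
    "security-policies/web-to-app",
    json=policy,
    auth=("admin", "VMware1!"),   # replace with real credential handling
    verify=False,                 # lab only; validate certificates in production
)
resp.raise_for_status()
```

Note how the source and destination are groups rather than IP addresses: membership can follow tags or VM attributes, so the rule applies wherever the workloads move.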


Security extensibility

Figure 4-22: Security extensibility

NSX provides a platform for bringing the industry’s leading networking and security solutions into the
VMware ESXi hosts. By taking advantage of tight integration with the NSX platform, third-party products
can not only deploy automatically as needed, but also adapt dynamically to changing conditions in the
data center. NSX enables two types of integration. With network introspection, a third-party security
solution such as an IDS/IPS registers with NSX. A third-party service VM is then deployed on each ESXi
host and connected to the VDS used as the NSX virtual switch. The host redirects all traffic from vNICs to
the service VM. The service VM filters the traffic, which is then redirected back to the VDS. Examples of
supported next-generation firewalls and IDS/IPSes are listed in the figure above.
The second type of security extensibility is guest introspection. Guest introspection installs a thin third-
party agent directly on the VM, and this agent then takes over monitoring for viruses or vulnerabilities on
the VM. The figure above lists examples of supported solutions in this area.


Use case 3: Network automation with NSX + Aria

Figure 4-23: Use case 3: Network automation with NSX + Aria

Companies can deploy VMware Aria solutions (formerly VMware vRealize), which fully integrate with
NSX, to permit orchestrated delivery of virtualized services, including compute, storage, and networking
components, through ordered workflows and API calls. Companies can create policies to govern how
resources are allocated to services to ensure that applications are matched to the correct service level,
based on business priorities. IT can deliver a private cloud experience, allowing users to obtain their own
services through an IT catalog. Aria also provides extensibility through an API, allowing customers to
integrate the applications of their own choice and use those applications to dynamically provision
workloads.


HPE ProLiant with vSphere Distributed Services Engine and NSX offload

In this topic, you will learn more about using HPE ProLiant with vSphere Distributed Services Engine for NSX offload, including configuration recommendations and requirements.


HPE ProLiant with vSphere Distributed Services Engine

Figure 4-24: HPE ProLiant with vSphere Distributed Services Engine

Module 2 introduced you to HPE ProLiant with vSphere Distributed Services Engine solutions.
HPE ProLiant with vSphere Distributed Services Engine offloads processing from server CPUs to a Data
Processing Unit (DPU). By offloading processing to DPUs, the solution helps to improve efficiency of
east-west communications. HPE ProLiant with vSphere Distributed Services Engine currently supports
offloading network and NSX services to the DPU. In this way, customers can benefit from NSX network
virtualization without negatively impacting application performance.


Offloading of NSX network services

Figure 4-25: Offloading of NSX network services

By moving NSX network services to the AMD Pensando DPU, companies gain better segregation
between production and overhead processes, creating a security air gap. As discussed earlier, they also
gain better performance for workloads, which now have more CPU cycles available to serve them.
Customers also obtain a future-proof solution. In the future, when offloading NSX security services is
enabled, customers will be able to secure their traffic at wire speed. In other words, they will be able to
add sophisticated micro-segmentation policies without causing a performance hit to production apps and
services.


Recommended configuration

Figure 4-26: Recommended configuration

You will now examine some configuration recommendations for the HPE Reference Architecture for HPE
ProLiant with VMware vSphere Distributed Services Engine. This RA includes three servers, but you
could scale out the solution.
Each server is equipped with two adapters:
• 2-port AMD Pensando DSC25v2 Spl Card—Used by the DPU to carry the NSX networks (including
the NSX transport network, which tunnels all the overlay segments)
• 2-port HPE Marvell QL41132HLCU SFP+ Adapter—Used by the CPUs to carry all host management, storage (vSAN), and vMotion traffic. This design follows VMware’s recommendation to place the
vSphere management network on a different controller from the DPU. It also segregates the VM data
traffic from the infrastructure services traffic.
The data ports, including the ports used by the CPUs and DPU, connect to a pair of HPE Aruba
Networking CX 8325 switches. These switches form a highly available VSX group. Network admins must
set up the 8325 switch ports that connect to the DPU ports with the correct VLAN for the NSX transport
network. They must set up the switch ports that connect to the CPUs’ (Marvell) ports as trunk ports, which allow the correct VLANs for host management, vSAN, and vMotion.
The servers’ iLO ports connect to HPE Aruba Networking 6300M switches, which are deployed as a
highly available Virtual Switching Fabric (VSF) pair.
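The switch-port intent described above might look like the following AOS-CX-style snippet (port numbers and VLAN IDs are hypothetical; check the Reference Architecture and your switch documentation for exact values and syntax):

```
! Port facing a DPU (Pensando DSC25v2) port: NSX transport VLAN only.
interface 1/1/1
    no shutdown
    no routing
    vlan trunk allowed 100    ! NSX transport VLAN for Geneve tunnels
    mtu 9198                  ! jumbo MTU for overlay encapsulation

! Port facing a CPU (Marvell) port: trunk for infrastructure traffic.
interface 1/1/2
    no shutdown
    no routing
    vlan trunk allowed 10,20,30   ! host management / vSAN / vMotion
```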


Configuration requirements

Figure 4-27: Configuration requirements

As you learned in Module 2, HPE ProLiant with VMware vSphere Distributed Services Engine is pre-installed with the HPE Custom Image for VMware vSphere ESXi 8.0 and the HPE Service Pack for ProLiant. It boots up with an evaluation license, which must be updated to a 2-processor VMware vSphere Enterprise Plus license key (covering up to 32 cores per processor). This license enables the VMware vSphere Distributed Services Engine functionality on the AMD Pensando DPU as well as the x86 host CPUs.
HPE ProLiant with VMware vSphere Distributed Services Engine supports firmware updates with VMware
vSphere Lifecycle Manager (vLCM) when licensed with HPE OneView Advanced and integrated into an
environment configured to utilize HPE OneView and HPE OneView for VMware vCenter (OV4VC). The
servers’ cluster must be managed by vLCM for vCenter to be able to configure the DPU-enabled servers
into a cluster.
Once a cluster is created, add the HPE ProLiant with VMware vSphere Distributed Services Engine ESXi
servers to the cluster using the hostname/IP address. HPE recommends enabling Distributed Resource
Scheduler (DRS) and High Availability (HA) on the cluster. The vSAN services are optional. HPE
recommends having the vMotion and vSAN VMkernel ports assigned to network adapters on each host
before configuring these cluster services.

You must enable the network offloading capability from within vCenter. Create distributed switches and port groups for NSX as you would for any NSX network, but enable the offload to Pensando option on the distributed switches in vCenter and in the transport node profiles in NSX Manager.

When you configure VMs’ NIC settings, you can enable unified passthrough v2 (UPTv2) mode. UPTv2 mode, which is supported by HPE ProLiant with VMware vSphere Distributed Services Engine, uses the enhanced vmxnet3 driver (v7). It establishes an enhanced data path directly from VMs to the DPU, much as SR-IOV does with its virtual functions (VFs). Because it fully offloads all network processing to the DPU, UPTv2 mode provides the best performance. (Check the AMD Pensando DPU documentation for the number of VFs.) The default mode is MUX, which leaves some processing in the CPU but still offloads the networking stack to the DPU.


Physical underlay network


You will now learn more about setting up the physical underlay network to properly connect to the
VMware environment. This course focuses on using HPE Aruba Networking CX switches.


Options for the physical underlay

Figure 4-28: Options for the physical underlay

You have explored the major use cases for NSX and understand at a high level how NSX provides
software-defined networking and security for your customer's VMware-centric data center.
While NSX is meant to be deployed over any underlay, that does not mean that the underlay is immaterial
to the success of the solution. The tunneled traffic still ultimately crosses the underlay network, and
issues there can compromise traffic delivery or network performance. Because different teams usually
manage the virtual and physical networks, no one team has all the information that they need, and IT staff
can find it difficult to troubleshoot.
In short, the physical data center network matters. In the next section, you will learn how HPE Aruba
Networking CX switches fulfill this role, integrating with and enhancing an NSX solution.
HPE Aruba Networking also provides an SDN solution called HPE Aruba Networking Fabric Composer,
which provides tight integration with VMware and enhanced visibility across physical and virtual networks.


HPE Aruba Networking Fabric Composer

Figure 4-29: HPE Aruba Networking Fabric Composer

HPE Aruba Networking Fabric Composer is a software-defined networking (SDN) solution that automates
and orchestrates fabrics based on HPE Aruba Networking CX switches. This solution offers many
integrations with VMware vSphere, simplifying management and enhancing performance.
HPE Aruba Networking Fabric Composer can automatically discover ESXi hosts and the networking
components on them. With visibility into these components, network admins can more easily establish the
correct VLANs and topology. HPE Aruba Networking Fabric Composer even automates VLAN configuration with lifecycle awareness. For example, Fabric Composer is aware when a VM moves to a new host and can automatically extend the correct VLAN to the HPE Aruba Networking CX switch ports connected to that new host. HPE Aruba Networking Fabric Composer can also improve performance for vSAN, optimizing the traffic flow specifically for that network.


Design considerations for the physical infrastructure

Figure 4-30: Design considerations for the physical infrastructure

Next you will examine how to connect switches to a VMware NSX network. You just need to check a few
settings on your HPE Aruba Networking CX switches.
Determine the settings in the uplink profile for each transport node. You will need to match those settings on the ToR switches that connect to those nodes. The ToR switch ports must support the transport node's transport VLAN ID. Typically, these switches will also be the default gateway for that VLAN. Also make sure that the MTU configured on the switch for this VLAN matches the MTU in the uplink profile. The next page explains more.
Also remember VLANs for any non-overlay networks, such as management, vMotion, and storage. The
physical infrastructure will need to be tagged for the correct VLANs.
Also make sure that the link aggregation settings sync with the VMware NIC teaming settings, both on the
transport network and other networks. You will generally deploy ToR switches in pairs for redundancy.
You should deploy HPE Aruba Networking CX switches with VSX, which unifies the data plane between
two switches, but leaves the control plane separate. A LAG on a transport node can connect to both
switches in the VSX group. The switches use an M-LAG technology to make this possible.


More details on MTU

Figure 4-31: More details on MTU

You will now look at the MTU requirements in a bit more detail.

Standard Ethernet
The standard Ethernet payload, or Maximum Transmission Unit (MTU), is 1500 bytes.
The Ethernet protocol adds a header and a checksum to the payload; in total, the default maximum frame size defined by IEEE 802.3 (and Ethernet II) is 1518 bytes. The 802.1Q tag adds 4 bytes to the standard Ethernet frame header, so the maximum tagged frame size is 1522 bytes.

Jumbo frames
Ethernet frames between 1500 and 1600 bytes are called baby giant (or baby jumbo) frames, and Ethernet frames up to 9216 bytes are called jumbo frames.
Jumbo frames can cause problems in the underlay network because every component in the path, from end to end, must support them. That means careful planning and careful implementation. In other words, you must increase the MTU on the ToR switches that connect to transport nodes and on all network infrastructure devices in between.

Advantages of jumbo frames


Jumbo frames can be more efficient than smaller frames because they only need one header to transport
a larger payload.
In theory, compared to a 1500-byte MTU, a 9000-byte MTU can send six times the data with the same number of frames, which means less frame handling. For example, a 9.78Gbps transfer at 8900 MTU produces about the same number of frames per second as a 1.65Gbps transfer at 1500 MTU, as the quick check below shows.
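A few lines of Python verify that equivalence (frame rate is simply throughput divided by the bits per frame payload):

```python
# Frames per second = throughput (bits/s) / payload size (bits).
def frames_per_second(gbps: float, payload_bytes: int) -> float:
    return gbps * 1e9 / (payload_bytes * 8)

print(round(frames_per_second(9.78, 8900)))   # ~137,360 frames/s
print(round(frames_per_second(1.65, 1500)))   # ~137,500 frames/s
```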
NSX transport networks require increasing the MTU due to the extra headers added for tunneling the
traffic (UDP encapsulation and new IP delivery header). You must match the NSX transport network MTU
on the Ethernet switches in the data center network.
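After raising the MTU end to end, it is worth validating the path before rolling out workloads. A common check from an ESXi host is a don't-fragment ping sized just under the target MTU (the interface name and address here are hypothetical; the 8972-byte payload is 9000 minus 28 bytes of IP and ICMP headers, per VMware KB guidance; verify the flags on your ESXi version):

```
# From the ESXi shell: -I selects the VMkernel interface, -d sets
# don't-fragment, and -s sets the ICMP payload size.
vmkping -I vmk10 -d -s 8972 192.168.100.12
```

If the ping fails at this size but succeeds at smaller sizes, some device in the underlay path is still running a smaller MTU.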


Disadvantages of jumbo frames


On the other hand, jumbo frames can also introduce issues.
When a jumbo frame is lost, more data is lost than with a regular frame. Therefore, in unreliable networks,
jumbo frames can be counterproductive.
Remember that encapsulation adds extra bytes to the frame, and keep in mind whether applications are increasing the payload on their side. For instance, Geneve can add 50 or more bytes of headers. Adding bytes to an already expanded payload could make traffic exceed the MTU, forcing the payload to be sent in two frames, which makes the transport inefficient. Some network components might even drop frames that are too large, which would result in no communication at all.


An example topology with HPE Aruba Networking CX switches

Figure 4-32: An example topology with HPE Aruba Networking CX switches

You will now look at some example topologies for using HPE Aruba Networking CX switches as the
physical underlay for NSX. These examples come from the AOS-CX Switch and VMware NSX-T Interop
Solution Guide; you can refer to it for more details.
This example illustrates three ESXi hosts acting as NSX transport nodes. They support VMs in two
overlay segments, which are using these subnets: 10.1.10.0/24 and 10.1.20.0/24. The ESXi hosts have
established tunnels with each other. Two of the ESXi hosts are within the same Layer 2 domain in the
physical underlay network; they use VLAN 10 as the transport VLAN. The third ESXi host is separated from them by a Layer 3 boundary and uses transport VLAN 20.
The HPE Aruba Networking CX switches simply need to connect to the ESXi hosts on the correct VLAN (10 for the first two hosts and 20 for the third host). The switches also need to match the MTU on that VLAN.


BGP routing between NSX edges and HPE Aruba Networking CX switches

Figure 4-33: BGP routing between NSX edges and HPE Aruba Networking CX switches

This next example illustrates how HPE Aruba Networking CX switches can communicate routes with NSX
routers using Border Gateway Protocol (BGP).
As you recall, Tier-1 routers can route traffic between overlay networks within the NSX fabric. However, if
customers want external clients or other data center servers, outside the NSX environment, to reach the
VMs, the Tier-0 router must advertise routes to their networks.
In this example, the Tier-0 router is established on a pair of redundant edge VMs. Each edge VM has two
network connections, one on VLAN 2 and one on VLAN 3. The edge VMs connect to one HPE Aruba Networking CX switch on VLAN 2 and establish BGP sessions with that switch. They connect to the other switch on VLAN 3 and similarly establish BGP sessions with that switch. These ToR HPE Aruba Networking CX switches can use BGP (or another routing protocol) to communicate with routing switches in the rest of the data center, propagating the routes to the VMs' networks.
As you see in the figure on the right, the port configuration between each HPE Aruba Networking CX
switch and the ESXi host or hosts with the edge VMs is simple: the port simply needs to support the
VLAN on which the edge VM has an IP address and implements BGP.
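On the switch side, the BGP peering might look like the following AOS-CX-style sketch (the ASNs, VLAN, and addresses are hypothetical; verify exact syntax for your AOS-CX version):

```
! One ToR switch: peer with the Tier-0 uplinks of both edge VMs on VLAN 2.
interface vlan 2
    ip address 192.168.2.1/24
router bgp 65001
    neighbor 192.168.2.11 remote-as 65000   ! edge VM 1 uplink
    neighbor 192.168.2.12 remote-as 65000   ! edge VM 2 uplink
    address-family ipv4 unicast
        neighbor 192.168.2.11 activate
        neighbor 192.168.2.12 activate
```

The second switch would mirror this configuration on VLAN 3, giving each edge VM two independent routed paths out of the NSX domain.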


HPE Aruba Networking NetEdit

Figure 4-34: HPE Aruba Networking NetEdit

You will now look at another way that HPE Aruba Networking makes managing the physical infrastructure
simpler and more automated: HPE Aruba Networking NetEdit.
Network operators are often slowed down as they make configurations because they do not have all the
relevant information at their fingertips. For example, they might not know the IP address of a server or
what address is available on the management network for a new switch. And even expert operators can
make mistakes, which can cause serious repercussions for the network. Fully 74% of companies report
that configuration errors cause problems more than once a year.
HPE Aruba Networking CX switches offer HPE Aruba Networking NetEdit, which provides orchestration
through the familiar CLI. It gives operators the intelligent assistance and continuous validation they need
to ensure that device configurations are consistent, compliant, and error free. IT operators edit
configurations in much the way that they are used to, working within a CLI, so no knowledge of scripting
or retraining is necessary. However, they create the configuration safely in advance with all the help tools
they need. They can search through multiple configurations and quickly find information such as the IP
addresses that other switches are using. They can also tag devices based on role or location. The editor
also provides validation so that a simple error does not get in the way of the successful application of a
configuration. Admins can then deploy the configuration with confidence. An audit trail helps admins
easily track changes for simpler change management and troubleshooting.


Cisco ACI

Figure 4-35: Cisco ACI

While the solutions covered earlier are the preferred SDN solutions for VMware environments on HPE,
some customers have Cisco entrenched as their data center networking solution. If you cannot dislodge
Cisco in the network, you can still win the compute and storage components and integrate them with
Cisco.
In Cisco ACI, Cisco switches, deployed in a leaf-spine topology, provide the data plane. They also
provide the control plane, using OSPF as the underlay protocol and VXLAN as the overlay protocol.
However, management of the switches is completely taken over by Application Policy Infrastructure
Controllers (APICs). The APICs manage all aspects of the fabric. Instead of configuring OSPF, VXLAN,
VLANs, and other features manually, admins configure policies about how they want to group endpoints
and handle their traffic. The APICs then configure the underlying protocols as required to implement
desired functions.
For customers with VMware-centric environments, APICs can integrate with VMware.

If you are deploying HPE compute infrastructure in a data center that uses Cisco ACI, you should
investigate requirements for integrating with ACI. HPE has specific recommendations for connecting HPE
Synergy to Cisco ACI. Create a tunneled network to assign to the compute module downlinks and
interconnect module downlinks. Then Cisco ACI can apply whatever VLAN IDs are required, and the HPE Synergy interconnect modules will pass the tagged VLAN traffic on.


Activity
You will now practice applying what you have learned about VMware networking and HPE solutions.


Activity 4

Figure 4-36: Activity 4

Assume that Financial Services 1A wants to use NSX. Answer these questions.
1. Does this decision affect your proposal? If so, explain what changes you would make.

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________


__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

2. Assume that the customer needs a ToR switch upgrade, and you are proposing HPE Aruba Networking CX switches.
a. Explain benefits of your proposal.

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________


b. Make a list of settings that need to coordinate between the VMware environment and the switches.

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________


Summary

Figure 4-37: Summary

In the module, you reviewed guidelines for HPE Synergy networking in VMware environments. You
learned about NSX and how its overlay capabilities make data center networks more flexible and aligned
with virtualized workload requirements. You then learned how to use HPE ProLiant with vSphere Distributed Services Engine servers to offload NSX to a DPU.
You also learned how to use HPE Aruba Networking CX switches as the physical underlay and how to
use HPE Aruba Networking NetEdit to automate management.


Learning Checks
1. What benefit do overlay segments provide to companies?
a. They provide encryption to enhance security.
b. They provide admission controls on connected VMs.
c. They enhance performance, particularly for demanding and data-driven workloads.
d. They enable companies to place VMs in the same network regardless of the
underlying architecture.
2. What is one way that NetEdit helps to provide orchestration for HPE Aruba Networking
CX switches?
a. It provides the API documentation and helps developers easily create scripts to
monitor and manage the switches.
b. It lets admins view and configure multiple switches at once and makes switch
configurations easily searchable.
c. It integrates the HPE Aruba Networking CX switches into HPE GreenLake Central
and creates a single pane of glass management environment.
d. It virtualizes the switch functionality and enables the switches to integrate with
VMware NSX.
You can check the correct answers in “Appendix: Answers.”


Appendix: Review VMware networking


This appendix covers some foundational VMware networking concepts, which might be helpful if you are
not familiar with traditional VMware networking.


Standard switch (vSwitch)

Figure 4-38: Standard switch (vSwitch)

You will start by examining a standard virtual switch (or vSwitch), which is deployed on a single ESXi
host. A vSwitch is responsible for connecting VMs to each other and to the data center LAN. When you
define a vSwitch on an ESXi host, you can associate one or more physical NICs with that switch. The
vSwitch owns those NICs—no other vSwitch is allowed to send or receive traffic on them. You should
define a new vSwitch for every set of NICs that you want to devote to a specific purpose. For example, if
you want to use a pair of NICs for traffic associated with one tenant's VMs and a different pair of NICs for
another tenant's VMs, you should define two vSwitches. However, if you want the tenants to share
physical NICs, you should connect them to the same vSwitch using port groups to separate them.
In the vSphere client, adding a port group is called adding a network of the Virtual Machine type. The port
group defines settings such as the NIC teaming policy, which determines how traffic is distributed over
multiple physical NICs associated with the vSwitch, and the VLAN assignment—more on that later. The
port group also controls traffic shaping settings and other features such as promiscuous mode.
When you deploy a VM, you can add one or more vNICs to the VM, and connect each vNIC to a port
group. Each vNIC connects to a virtual port on exactly one port group on one vSwitch.
The figure above shows how the vCenter client presents the vSwitch and connected components.


How vSwitch forwards traffic

Figure 4-39: How vSwitch forwards traffic

Like a physical Ethernet switch, a vSwitch creates a MAC forwarding table that maps each MAC address
to the port that should receive traffic destined to that address. However, the vSwitch does not build up the
MAC table by learning MAC addresses from traffic. Instead, the hypervisor already knows the VMs' MAC
addresses. The vSwitch forwards any traffic not destined to a virtual NIC MAC address out its physical
NICs.
The vSwitch also knows, based on the hypervisor, for which multicast groups VMs are listening. It
replicates and forwards multicasts to the correct VMs accordingly. (In vSphere 6 and above, you can
enable multicast filtering, which includes IGMP snooping, to ensure that the vSwitch always assesses the
multicast group memberships correctly). The vSwitch does flood broadcasts.
The way that vSwitches handle unicasts and multicasts ensures better security. Because the switch does
not need to flood unicasts to unknown destinations, it does not ever need to forward traffic destined to
one VM's MAC address to another VM. And it helps to prevent reconnaissance and eavesdropping
attacks in which a hacker overloads the MAC table and forces a switch to flood all packets out all ports.
The figure above provides an example of the traffic flow. Assume that VMs' ARP tables are already
populated. Now VM 1 sends traffic to VM 2's IP address and MAC address. The vSwitch forwards the
traffic to VM 2, based on the MAC forwarding table. When VM 3 sends traffic to a device at 10.2.20.15,
which is in a different subnet, VM 3 uses its default gateway MAC address as the destination. The default
gateway is not on this host, so the vSwitch forwards this traffic out its physical NIC.

As you see, the vSwitch is not a router. It directly forwards traffic only between VMs that are in the same
port group and VLAN.
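The forwarding logic described above is simple enough to capture in a toy Python model (purely illustrative, not VMware code): the MAC table is populated by the hypervisor rather than learned, unknown unicasts go out the uplink instead of being flooded, and only broadcasts are flooded:

```python
class ToyVSwitch:
    """Illustrative model of standard vSwitch unicast forwarding."""

    def __init__(self, vm_ports: dict):
        # MAC -> virtual port, supplied by the hypervisor (never learned).
        self.vm_ports = vm_ports

    def forward(self, dst_mac: str) -> str:
        if dst_mac == "ff:ff:ff:ff:ff:ff":
            return "flood to all ports in the VLAN"      # broadcast
        if dst_mac in self.vm_ports:
            return f"deliver to {self.vm_ports[dst_mac]}"
        return "send out physical NIC (uplink)"          # unknown unicast: no flooding

vswitch = ToyVSwitch({"00:50:56:aa:00:01": "VM 1", "00:50:56:aa:00:02": "VM 2"})
print(vswitch.forward("00:50:56:aa:00:02"))   # deliver to VM 2
print(vswitch.forward("00:50:56:bb:00:99"))   # send out physical NIC (uplink)
```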


VMkernel adapters

Figure 4-40: VMkernel adapters

You can create a second type of network connection on a virtual switch—a VMkernel adapter. The
VMkernel adapter is somewhat analogous to a port group. However, instead of connecting to VMs and
carrying their traffic, it carries traffic for the hypervisor. A VMkernel adapter can carry all the types of traffic
that you see in the figure above. When you create the adapter, you choose the function for which the
adapter carries traffic. In the figure above, you are creating a VMkernel adapter for the ESXi host's
management connection. You also give the adapter an IP address.
The figure above shows how VMware shows the settings after you have created the VMkernel and
connected it to a switch.
You can make the same VMkernel adapter carry multiple types of traffic—you simply select multiple types when you create the adapter. However, some functions, such as vMotion, should have a dedicated
adapter with its own IP address. In the past, admins preferred to dedicate a pair of 1GbE interfaces to
each VMkernel adapter. With 10GbE to the server edge so common now, though, you might connect
multiple VMkernel adapters to the same switch and consolidate traffic.


Implementing VLANs

Figure 4-41: Implementing VLANs

VMware vSwitches define a VLAN for each port group and VMkernel adapter. Like a physical switch, the
vSwitch enforces VLAN boundaries, only forwarding traffic between ports in the same VLAN. A vSwitch
can take one of three approaches in defining the VLAN for a port group or VMkernel adapter. Read each
section to learn more about that approach.

Virtual switch tagging (VST)


• VLAN ID: Any ID between 1 and 4094
• Device that determines network's VLAN assignment: vSwitch (no awareness on VMs)
• Where traffic is tagged: Between vSwitch and physical switch
• Typical use: Permitting VMs in multiple subnets, and even VMkernel adapters, to share the same physical NICs with logical separation

External switch tagging (EST)


• VLAN ID: 0
• Device that determines network's VLAN assignment: Physical switch, based on the physical port's
native (untagged) VLAN
• Typical use: A vSwitch that supports a single network such as a vSwitch dedicated to the
management VMkernel adapter

Virtual guest tagging (VGT)


• VLAN ID: 4095
• Device that determines network's VLAN assignment: VMs (and physical switches)
• Where traffic is tagged: All the way between VM and physical switch
• Typical use: A network with VMs that must support multiple VLANs on a single vNIC (802.1Q support in the guest OS required)


vSphere distributed switch (VDS)

Figure 4-42: vSphere distributed switch (VDS)

For deployments with many hosts and clusters, defining standard vSwitches individually on each host is
tedious and error prone. If an admin forgets to define a network on one host, moving a VM that requires
that network to that host will fail. A vSphere distributed switch (VDS) provides a centralized way to
manage network connections, simplifying administrators’ duties and reducing these risks. The
management plane for the VDS resides centrally on vCenter. There you create distributed port groups,
which include the familiar VLAN and NIC teaming policies. You also define a number of uplinks based on
the maximum number of physical NICs that a host should dedicate to this VDS.
You deploy the VDS to hosts, each of which replicates the VDS in its hypervisor. The individual instances
of the VDS hold the data and control plane and perform the actual switching. When you associate a host
to the VDS, you must associate a physical NIC with at least one uplink. Each uplink can be associated
with only one NIC, but if the VDS has additional uplinks defined, you can associate other physical NICs
with them. The multiple NICs act as a team much as they do on an individual virtual switch, using the
settings selected on the VDS centrally. VDSes also support LACP for link aggregation.
The VDS’s distributed port groups are available on the hosts for attaching VMs or VMkernel adapters.
Note that for VDSes, the VMkernel adapter attaches to a distributed port group, rather than directly to the
switch.




Module 5: Design an HPE Hyperconverged Solution for a Virtualized Environment

Learning objectives
• Given a set of customer requirements, position hyperconverged SDI solutions to address the customer’s requirements
• Given a set of customer requirements, determine the appropriate hyperconverged platform
• Explain the integration points between HPE hyperconverged solutions and VMware solutions
• Use the HPE SimpliVity Upgrade Manager


Emphasizing the software-defined benefits of HPE SimpliVity


You need to explain to the customer how HPE SimpliVity makes their data center more software-defined
with an emphasis on simplicity. This topic shows how the SimpliVity Data Virtualization Platform simply
delivers responsive, always-there storage without a lot of tuning.


HPE SimpliVity Data Virtualization Platform

Figure 5-1: HPE SimpliVity Data Virtualization Platform

The HPE SimpliVity nodes look like standard x86 servers with components such as SSDs, DRAM, and
CPUs. And like any virtualized hosts, they run ESXi or Hyper-V.
But the Data Virtualization Platform provides simple software-defined storage (SDS), built into the solution. In the logical architecture, it sits between the hardware and the hypervisor, abstracting the hardware from the VMs and apps that run on top.
The following sections summarize each part of the architecture.

Presentation Layer
The Presentation Layer interacts with the VMware hypervisor and presents datastores to the hypervisor.
From the point of view of the hypervisor—and the VMs and apps running on top of it—each datastore appears to contain all of the data written to it. However, this layer does not hold any actual data or metadata.

Data Management Layer: File System


The Data Management Layer links the Presentation Layer with the disks that store the actual data. The
top part of this layer is the File System, which stores containers representing VMs and VM backups. The
File System does not store any actual data, only metadata, pointing to data in the object store. To back
up or clone a VM, the File System simply creates a new container with the same metadata. Because no
data is actually copied, the process completes very quickly.

Data Management Layer: Object Store


The Object Store forms the rest of the Data Management Layer. The Object Store stores deduplicated
data. As you see, if VMs (and the backups) have identical data blocks, the object store contains only one
copy of the data. The metadata in the File System simply refers to the same object more than once. The
Object Store data is physically stored on local drives contributed from each node.


Deduplication with HPE SimpliVity

Figure 5-2: Deduplication with HPE SimpliVity

Some legacy hyperconverged vendors support either inline deduplication or post-process deduplication.
While their inline deduplication does have the intended effect of reducing IOPS and capacity demands on
their drives, it is CPU-intensive, taking power away from production VMs and reducing available IOPS.
Post-process deduplication does the same while also adding IOPS demands on the disk drives.
The HPE SimpliVity Data Virtualization Platform delivers inline deduplication and compression for all data
without compromising the performance of the application VMs running on the same hardware platform.


HPE SimpliVity Data Virtualization Platform in action

Figure 5-3: HPE SimpliVity Data Virtualization Platform in action

This figure shows the Data Virtualization Platform in action. The figure simplifies a bit by collapsing the two parts of the Data Management Layer. As you see, the data management layer only writes to the disk
when a VM sends a write request with a new block. If the block already exists, the data management
layer simply updates metadata, and no IO actually occurs on the disk. Because the best IO is the one that
you don't have to do, HPE SimpliVity doesn't just dramatically reduce capacity requirements, it also
improves performance.


Storage IO reduction

Figure 5-4: Storage IO reduction

In a legacy solution, workload IO makes up only a fraction of the total IO requirements. Snapshots, data
mirroring, and backups all add IOs too. With its ultra lightweight approach to protecting data and by
applying inline deduplication for all data, HPE SimpliVity helps customers to reduce their storage IO and
improve performance with less infrastructure.
Read the following sections to see how SimpliVity makes IO disappear.

Backups
When backups run, any data that has been changed since the last backup (at the very least) needs to be
read off the array and sent across the network to the backup storage location. In traditional solutions, this
creates a major spike every night, which is the reason backups are generally only scheduled in the
evenings. By taking local backups via metadata, HPE SimpliVity is able to take full backups with
essentially no I/O, thus eliminating the largest chunk of I/O.

Mirror
To replicate data to a remote site, a traditional solution must read data from the array and send it across
the WAN. This results in additional I/O. By intelligently only moving unique data between data centers,
HPE SimpliVity dramatically reduces the amount of data moved.

Figure 5-5: Storage IO reduction—Mirror


Snapshots
Array-level or vSphere snapshots are quick and often used as a short-term recovery point. While their
effect is relatively small, these snapshots do add to IO requirements. Because HPE SimpliVity backups
can be taken in seconds and have no IO impact, they make an easy replacement for local snapshots.

Figure 5-6: Storage IO reduction—Snapshots


Workload
HPE SimpliVity leaves just the primary application workload, with just a bit of data protection overhead.
And remember that SimpliVity deduplicates and compresses all data, not just data protection. This
reduces the I/O profile even further.

Figure 5-7: Storage IO reduction—Workload

Final result
HPE SimpliVity has dramatically reduced IO requirements while delivering data protection as good or
better than the legacy solution.

Figure 5-8: Storage IO reduction—Final result


HPE SimpliVity data protection mechanisms: RAIN

Figure 5-9: HPE SimpliVity data protection mechanisms: RAIN

HPE SimpliVity clusters combine two ways of protecting data: redundant array of independent nodes
(RAIN) and redundant array of independent disks (RAID). RAIN is described below, and RAID is
described on the next page.

RAIN
The cluster assigns every VM to a replica set with two nodes. Each node has a copy of the VM’s data,
and writes to the VM’s virtual drive are synchronously replicated to both nodes.
To decrease latency, the OmniStack Virtual Controller (OVC) on the node receiving replicated data sends an Ack as soon as it receives a write request. The original OVC then sends an Ack to the VM. Meanwhile, both nodes individually deduplicate and compress the data and write it to each node’s local drives.
The RAIN function described above is SimpliVity's typical behavior. However, as of OmniStack v4.0.1,
customers can choose to create single-replica datastores. VMs created on single-replica datastores are
single-replica VMs, for which the cluster maintains a copy on only one node. The company might choose
to use single-replica VMs for non-critical apps.


HPE SimpliVity data protection mechanisms: RAID

Figure 5-10: HPE SimpliVity data protection mechanisms: RAID

SimpliVity further protects data by having each node use RAID to store data. A single node can lose one
drive without losing any data. By combining RAID and RAIN, the cluster can lose at least two, and
possibly more, drives without losing any data.


Why HPE SimpliVity data protection is better: Multiple drive failure


withstood

Figure 5-11: Why HPE SimpliVity data protection is better: Multiple drive failure withstood

Combining RAIN and RAID makes HPE SimpliVity more resistant to lost data. With vSAN’s basic
protection mechanisms, when a drive fails on two nodes, a customer risks losing some data. (However,
you can design vSAN to withstand single drive failures on more nodes.) But HPE SimpliVity can always
withstand the failure of a drive on multiple nodes without losing any data.
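A toy Python check (an illustrative model that assumes each node's RAID layout tolerates exactly one drive loss, per the previous page) makes the claim concrete: a replica set loses data only when both of its nodes lose more drives than RAID can absorb:

```python
def replica_set_data_lost(failed_drives_node_a: int,
                          failed_drives_node_b: int) -> bool:
    """Model: RAID on each node absorbs one drive failure (assumption)."""
    node_a_intact = failed_drives_node_a <= 1
    node_b_intact = failed_drives_node_b <= 1
    return not (node_a_intact or node_b_intact)

print(replica_set_data_lost(1, 1))   # False: one drive down on each node
print(replica_set_data_lost(2, 1))   # False: node B still holds a full copy
print(replica_set_data_lost(2, 2))   # True: both nodes degraded beyond RAID
```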


How HPE SimpliVity localizes data

Figure 5-12: How HPE SimpliVity localizes data

For any solution that features SDS, data localization can become an important consideration.
Hyperconverged solutions transform local drives on the clusters’ nodes into an abstracted pool of storage,
which is good from the point of view of simplicity and management. However, from the point of view of
performance, it is best when a VM’s virtual disk is stored on the local drives that belong to the node that
hosts that VM. At the same time, the data also needs to be stored on one or more other nodes to protect
against failures.
The solution could take a few different approaches. In the primary data localization approach, the VM’s
primary data is localized on its node while copies are distributed across multiple other nodes. The RF2
approach makes one copy (in addition to the original) while RF3 makes two. In either case, the peak
performance when all nodes are up is good because the VM’s data is localized. However, replication
takes a toll on performance because the primary node needs to calculate to write each copied block. And
performance becomes poor when a VM moves because data is no longer localized. The system can
rebalance and move data to the VM’s current node, but this takes time and generates IOs that can
decrease performance across the system. In short, these approaches cannot deliver consistent,
predictable performance.
Having no data localization improves predictability because the performance is the same whether all nodes are up or one fails. However, without data localization, the performance is only fair.
HPE SimpliVity takes a full data localization approach so that it provides the best peak performance and
the best predictability. A VM’s data is localized on the node that hosts it, and all of its data is also
replicated to the same other node. Replication takes less of a toll on performance because the primary
node knows that it always replicates to the same other node.
If the first node fails—or if its local drives fail—the VM can move to the second node and continue to receive exactly the same excellent performance without any data rebalancing.


Keeping data local with HPE SimpliVity Intelligent Workload Optimizer

Figure 5-13: Keeping data local with HPE SimpliVity Intelligent Workload Optimizer

If the VM needs to move, how does the HPE SimpliVity cluster guarantee that it moves to the node that
already has its data? You will look at an HPE SimpliVity solution built on VMware as an example. The
HPE SimpliVity cluster is a VMware cluster that uses VMware Distributed Resource Scheduler (DRS) and
High Availability (HA). DRS handles choosing the node to which each VM is deployed while HA helps the
cluster restart VMs on a new node if the original host fails. DRS can take factors such as CPU and RAM
load into account when it schedules where to deploy or move a VM. However, DRS does not have insight
into where the SimpliVity DVP stores data. It assumes all data is external to the hosts and, therefore,
moves VMs around freely within the cluster with no regard to where the data may be.
Some competing hyperconvergence solutions simply react to DRS. After DRS moves the VM, the solution
moves data around until it is local again. However, this “follow the VM” approach takes time and impacts
performance with a lot of extra network traffic and IOs. The SimpliVity Intelligent Workload Optimizer
takes a proactive approach. It integrates with DRS and creates DRS rules to ensure that each VM is
deployed on one of the two nodes that stores its data.
This allows VMs to have the peak and predictable performance that data locality and DRS can both
provide, while avoiding the extra I/O and network load of the "follow the VM" approach. The HPE
SimpliVity DVP handles the configuration automatically. In fact, SimpliVity self-heals the configuration
even if an admin changes the groups or rules.
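
For readers who want to see the underlying vSphere construct, the sketch below uses pyVmomi to create the kind of soft VM-to-host affinity rule that the Intelligent Workload Optimizer maintains automatically. This is not HPE's implementation; the function, group names, and objects are hypothetical, and in a real SimpliVity cluster the DVP creates and self-heals these rules for you.

# Sketch only: a DRS VM-to-host "should run on" rule, created with pyVmomi.
# IWO creates and self-heals equivalent rules automatically; this simply
# illustrates the underlying vSphere construct. Names are hypothetical.
from pyVmomi import vim

def pin_vm_to_data_nodes(cluster, vm, data_hosts):
    """Keep `vm` on the hosts that hold its data (soft rule)."""
    vm_group = vim.cluster.GroupSpec(
        info=vim.cluster.VmGroup(name=f"{vm.name}-vm-group", vm=[vm]),
        operation="add")
    host_group = vim.cluster.GroupSpec(
        info=vim.cluster.HostGroup(name=f"{vm.name}-data-hosts",
                                   host=list(data_hosts)),
        operation="add")
    rule = vim.cluster.RuleSpec(
        info=vim.cluster.VmHostRuleInfo(
            name=f"{vm.name}-stay-local",
            enabled=True,
            mandatory=False,          # "should", not "must": HA can still act
            vmGroupName=f"{vm.name}-vm-group",
            affineHostGroupName=f"{vm.name}-data-hosts"),
        operation="add")
    spec = vim.cluster.ConfigSpecEx(groupSpec=[vm_group, host_group],
                                    rulesSpec=[rule])
    return cluster.ReconfigureComputeResource_Task(spec, modify=True)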


Streamlining day-to-day operations

Figure 5-14: Streamlining day-to-day operations

SimpliVity’s restore capabilities set it apart from the competition, allowing companies to restore data in
seconds.
The City of Encinitas experienced how HPE SimpliVity can make customers more efficient, cut costs, and
free up precious IT time. During the COVID-19 pandemic, the City of Encinitas struggled to get their staff
working remotely. Before the pandemic, many of their residents would access city services in person, but
this was no longer possible once shelter in place orders went into effect. The customer knew that they
needed to invest in online services so that residents could help themselves and wouldn’t try to go into city
offices during COVID. Additionally, the City of Encinitas employees were tied to their desktops, which
limited their mobility and were expensive to procure and maintain.
The City of Encinitas chose to deploy HPE SimpliVity with VMware Horizon so that users could remotely
and securely access their desktop PCs or VDI sessions during the pandemic. By updating their
infrastructure and replacing desktops with mobile devices such as laptops, the City of Encinitas cut costs.
The HPE SimpliVity solution has also saved time for IT staff, as the new solution requires significantly
less maintenance. Now staff members have more time to work on other projects and engage in training.
(“Enabling daily life in a coastal city,” HPE Digital Game Changers, 2022.)


Sizing an HPE SimpliVity solution


This section focuses on sizing an HPE SimpliVity solution.


HPE SimpliVity design process

Figure 5-15: HPE SimpliVity design process

This section covers the first two steps of the HPE SimpliVity design process. You will first review
strategies and tools for collecting the data necessary for sizing the solution. You will then look at how to input
what you have learned into the HPE SimpliVity Sizing Tool to determine the number and type of nodes to
deploy.
Please note that you will need HPE employee or partner credentials to access some of the tools
referenced in this section.


Data gathering

Figure 5-16: Data gathering

Begin by reviewing the data gathering process. Read the following sections to review.

Basic information to put into the sizer


• Number of VMs
• Compute (processor): number of vCPUs, desired vCPU to physical core ratio (V:P), and peak GHz
• Compute (memory): vRAM requirements
• Storage capacity: Used vs allocated
• Backup requirements
• Storage performance
• IOPS
• Percentage read vs write
• IO sizes
• Growth
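
As a first-pass illustration of how the compute inputs above combine, the sketch below uses hypothetical values; the HPE SimpliVity Sizing Tool applies many more factors (deduplication, backup retention, growth) and should always be the source of truth.

# Hypothetical first-pass sizing math; the HPE SimpliVity Sizing Tool
# applies many more factors (dedupe/compression, backups, growth).
import math

vms            = 120
vcpus_per_vm   = 4
vp_ratio       = 4.0      # desired vCPU : physical-core ratio
vram_gb_per_vm = 16
node_cores     = 2 * 24   # hypothetical node: 2 sockets x 24 cores
node_ram_gb    = 768

cores_needed = vms * vcpus_per_vm / vp_ratio
ram_needed   = vms * vram_gb_per_vm

nodes_for_cpu = math.ceil(cores_needed / node_cores)
nodes_for_ram = math.ceil(ram_needed / node_ram_gb)
nodes = max(nodes_for_cpu, nodes_for_ram) + 1   # +1 for N+1 failover headroom

print(f"cores={cores_needed:.0f}, ram={ram_needed} GB -> {nodes} nodes")
# cores=120, ram=1920 GB -> 4 nodes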

Additional information about files


• Are file systems large? Do file servers represent a significant percentage of the total storage?
• Do hosts or applications implement compression or deduplication? Do they implement encryption?
• Do VMs have a lot of media and archive files that are already compressed (such as .tz, .gz, .zip,
.mp3, .mp4)?

Additional information about backups


• What are the customer’s backup and archival needs?
• What are the primary SLAs? (These will be fulfilled with datastore-level policies.)
– Where will data be backed up (local and remote)?


– How frequently will backups be taken?
o Most frequent = 10 minutes (more frequent requires DSR)
o How long will backups be maintained?
o What is the change rate?
• Does the customer have any workload-specific SLAs? (These will be fulfilled by individual VM-level policies.)

Data gathering tools


• Interviews
• HPE CloudPhysics—This AI-driven solution gives you deep insights and rich information about a
customer’s VMware environment. As an HPE Partner, you can register for an account here:
https://app.cloudphysics.com/partner/hpe/register
• HPE Assessment Foundry (SAF)—This free suite of tools helps you collect data from customer
environments. It analyzes configuration and workloads, generating detailed reports. It also helps you
size HPE solutions. For more information, click HPE Assessment Foundry.
You can also examine information within VMware vCenter and use third-party solutions. However, these
HPE tools make your life easier.


Reviewing choices for the HPE SimpliVity platform

Figure 5-17: Reviewing choices for the HPE SimpliVity platform

HPE SimpliVity platforms can meet a variety of customers' requirements, workloads, and preferences.
Read the sections below for a brief review of each model.

HPE SimpliVity 380G


• Option for extra PCIe card (GPU or NIC)
• GPU acceleration useful for workloads such as CAD and analytics
• Ideal for VDI deployments

HPE SimpliVity 325


• Powered by AMD EPYC for 2P-like performance with 1P
• Ideal for VDI and other use cases:
– ROBO and edge
– Small-to-medium businesses (SMBs)
– Space-constrained environments


Preparing for sizing

Figure 5-18: Preparing for sizing

You will now begin to plan the HPE SimpliVity clusters.


Several factors affect how many clusters you plan. Location can play a role. For example, if you are
designing a solution for a customer with several branch offices, each site might have its own cluster.
Stretched clusters can span WAN links and multiple sites. However, you should only use a stretched
cluster when you want to distribute services across the two sites. For a ROBO solution, it can make
more sense to deploy a separate cluster at each site so that VMs for that site stay local. Clusters can
back up data to a cluster at another site for higher availability.
You might need to plan multiple clusters at the same site if you need a large solution with more than the
recommended number of nodes per cluster.
And you might also want to create multiple clusters even if you have fewer than 16 nodes. It can be
beneficial to isolate latency sensitive applications such as VDI on their own clusters. When in doubt,
separate your workload types for optimal performance.
Finally, consider the need for separate compute nodes, which you might want to deploy for power users
in a VDI solution or to support processor hungry applications.


Getting started with the HPE SimpliVity Sizing Tool

Figure 5-19: Getting started with the HPE SimpliVity Sizing Tool

If you are an HPE Partner, you can access the HPE SimpliVity Sizing Tool. (Click here to access the
sizer. If you have trouble accessing the sizer at this link, check HPE Products and Solutions Now for
updated information about it.)
The figure above shows the first sizer window, which will show any saved sizings.
1. Click the Create New Sizing button to begin sizing a new solution.
2. In the pop-up window that is displayed, enter a name in the Sizing Name field and select the type of
deployment: You can choose Infrastructure Cluster for general virtualization and End-User
Computing (GPU) or End-User Computing (Non-GPU) for VDI.
3. Click Create Sizing.
4. Click Add Cluster.
5. In the pop-up window that is displayed, enter a name in the Cluster Name field and click Add
Cluster.


Inputting information to size the cluster

Figure 5-20: Inputting information to size the cluster

When you add a cluster, you will see a window like the one shown in the figure above. Each cluster
consists of one or more VM groups. You enter sizing information for each VM group separately. Refer to
the latest tool documentation and help for more information about the various fields.


Architecting the HPE SimpliVity solution


This topic reviews architectural designs and decisions for an HPE SimpliVity solution.


HPE SimpliVity design process

Figure 5-21: HPE SimpliVity design process

You will now review elements of the HPE SimpliVity architecture and best practices for designing them.


Architectural design

Figure 4-22: Architectural design

You should create an architecture diagram that shows the components within each cluster and how
clusters connect together. Read the sections below to review the different components.

Cluster
Include each cluster and the site at which it is located. Indicate the number of nodes and the model.
Attached to the diagram, you can add more information about the model such as processor choices and
amount of memory.

vCenter (site 1)
For vCenter and vSphere VDI deployments, you should indicate where vCenter servers are located. They
can be deployed on a separate management SimpliVity cluster, which is generally preferred for larger
deployments. For small deployments, you can place vCenter on the same SimpliVity cluster that hosts
production VMs. You can also deploy vCenter outside of SimpliVity. If you choose to deploy vCenter on a
SimpliVity cluster that it manages, you must deploy vCenter first and then move it to the cluster.
For Hyper-V deployments, you should similarly indicate where Microsoft System Center (MSSC) is
deployed.

vCenter (site 2)
A single vCenter server can manage multiple HPE SimpliVity clusters in a Federation. However, the
Federation can also include up to 5 vCenter servers. In this example, site 2 has its own vCenter server for
resiliency. When a Federation has multiple vCenter servers, they must connect with Enhanced Linked
mode.
For Hyper-V, a single MSSC instance is supported, but MSSC can use Microsoft clustering.

Arbiter
An Arbiter helps to break ties in failover situations. HPE SimpliVity 3.7.10 or earlier always required the
installation of an arbiter. For OmniStack v4 and above, Arbiters are only required for two-node clusters or
for any stretch clusters. However, they are also recommended for four-node clusters.


An Arbiter can never be deployed on a cluster for which it acts as Arbiter. However, it can be deployed on
a different cluster. It can also act as Arbiter for multiple clusters.
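
The Arbiter guidance above can be condensed into a small decision helper. This is an illustrative summary of the rules stated in this section for OmniStack v4 and later; always verify against the documentation for your version.

# Encodes the Arbiter guidance above (OmniStack v4 and later); illustrative only.
def arbiter_guidance(cluster_nodes: int, stretched: bool) -> str:
    if cluster_nodes == 2 or stretched:
        return "required"        # two-node and stretched clusters need one
    if cluster_nodes == 4:
        return "recommended"     # not mandatory, but advised
    return "optional"

for n, s in [(2, False), (4, False), (3, True), (8, False)]:
    print(f"{n} nodes, stretched={s}: {arbiter_guidance(n, s)}")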

Federation
A Federation includes multiple HPE SimpliVity clusters that are managed by the same vCenter
infrastructure. This infrastructure could consist of one vCenter server or multiple vCenter servers
operating in Linked mode.

Site-to-Site links
You need to indicate the links between sites, specifying their bandwidth and latency. This example has
separate clusters at each site, so the latency requirements are less strict. A link used by a stretched
cluster, which has members at multiple sites, must have round trip time (RTT) latency of 2ms or less.


Network design

Figure 5-23: Network design

Every cluster requires three networks: a management, storage, and federation network.

Management
The Management network is the network on which external devices reach the SimpliVity cluster and on
which SimpliVity communicates with vCenter. This network has a default gateway, and it should be
advertised in the routing protocol used by the network so that it is reachable from other subnets.
It can use 1, 10, or 25 GbE NICs, which are shared by VMs' production networks using tagged VLANs.

Storage
Each node has a VMkernel adapter for storage traffic. This adapter connects to the Storage network, as
does each OVC. The Storage network carries NFS traffic for mounting the SimpliVity datastore to the host
and handles IO requests from VMs.
If the cluster has compute nodes, their VMkernel adapters should connect to this network, too.
This network should be dedicated to this purpose; it is not routed. It requires an MTU of 9000 and a
latency of 2ms or under. It can be 10GbE or 25GbE.
With v4.1.0, HPE SimpliVity allows IT admins to control how much bandwidth HPE SimpliVity uses for
backup and restore operations. This feature is particularly useful for customers who deploy HPE
SimpliVity at branches, remote locations, or any location that has limited bandwidth.

Federation
The Federation network carries OVC-to-OVC communications between nodes. Only OVCs should be
connected to this network.
This network should be dedicated to this purpose; it is not routed. It requires an MTU of 9000. It should
use 10GbE.
OVCs contact OVCs in other clusters on their Federation IP addresses, but the traffic is routed out the
Management network, which has the default gateway.
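
A simple way to sanity-check a planned design against these requirements is a small validation script like the hypothetical sketch below. The storage and federation thresholds come from the text above; the management MTU floor, the data structures, and the names are placeholder assumptions.

# Validates a planned node network layout against the requirements above.
# Storage/federation thresholds come from the text; the management MTU
# floor and the plan structure are hypothetical.
REQUIREMENTS = {
    "management": {"routed": True,  "min_mtu": 1500},
    "storage":    {"routed": False, "min_mtu": 9000, "max_latency_ms": 2},
    "federation": {"routed": False, "min_mtu": 9000},
}

plan = {
    "management": {"routed": True,  "mtu": 1500},
    "storage":    {"routed": False, "mtu": 9000, "latency_ms": 0.4},
    "federation": {"routed": False, "mtu": 1500},   # misconfigured on purpose
}

for net, req in REQUIREMENTS.items():
    cfg = plan[net]
    problems = []
    if cfg["routed"] != req["routed"]:
        problems.append("routing")
    if cfg["mtu"] < req["min_mtu"]:
        problems.append(f"MTU {cfg['mtu']} < {req['min_mtu']}")
    if "max_latency_ms" in req and cfg.get("latency_ms", 0) > req["max_latency_ms"]:
        problems.append("latency")
    print(net, "OK" if not problems else f"FAIL: {', '.join(problems)}")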


General cluster and federation sizing guidelines

Figure 5-24: General cluster and federation sizing guidelines

To properly plan an HPE SimpliVity solution, you need to understand the maximum number of nodes
supported for clusters and federations.
HPE SimpliVity supports single-node clusters, which provide only RAID protection for data. However,
HPE generally recommends that clusters consist of at least two nodes. The maximum recommended
cluster size is 16 nodes.
If the customer wants HA and remote backups, the federation needs at least two clusters. A federation
supports up to 96 nodes. For large ROBO environments, a federation could consist of 48 2-node clusters.
In v4.0.1 and above, companies can deploy the HPE SimpliVity Management Virtual Appliance to help
manage the federation. This appliance is a dedicated, highly available VM that helps to centralize
management and coordinate operations. A federation managed with the SimpliVity Management Virtual
Appliance is called a centrally managed federation, while other federations are called peer-to-peer
federations. A centrally managed federation supports up to 96 nodes, all managed by a single vCenter
Server. (Customers can also choose to use two to five vCenter Servers in Enhanced Linked Mode to
manage the centrally-managed architecture. The federation still supports a maximum of 96 nodes.)
A peer-to-peer federation does not use an HPE SimpliVity Management Virtual Appliance. In that case, a
single vCenter Server can only manage up to 32 nodes. With two vCenter Servers in Enhanced Linked
Mode, the federation can support up to 64 nodes. The federation can support up to 96 nodes with three to
five vCenter Servers in Enhanced Linked Mode. (Each vCenter Server can support up to 32 nodes, but
the maximum for the federation remains 96.)
These guidelines are based on HPE OmniStack 4.1.3. If you are using a different version, check the
Administrator Guide for that version.
If your customer has specialized requirements that do not work with these guidelines, you can contact
your Partner Business Manager (PBM) for guidance.
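
The maximums described above can be expressed as a small helper function, shown below as an illustrative summary of the HPE OmniStack 4.1.3 guidance in this section.

# Condenses the OmniStack 4.1.3 federation maximums described above.
def max_federation_nodes(centrally_managed: bool, vcenters: int) -> int:
    if not 1 <= vcenters <= 5:
        raise ValueError("a federation supports 1 to 5 vCenter Servers")
    if centrally_managed:
        return 96                       # a single vCenter can manage all 96
    return min(32 * vcenters, 96)       # peer-to-peer: 32 nodes per vCenter

print(max_federation_nodes(True, 1))    # 96
print(max_federation_nodes(False, 1))   # 32
print(max_federation_nodes(False, 2))   # 64
print(max_federation_nodes(False, 4))   # 96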


HPE SimpliVity Integration with VMware


The next section focuses on how HPE SimpliVity is integrated with VMware.


HPE SimpliVity plug-ins for VMware

Figure 5-25: HPE SimpliVity plug-ins for VMware

In addition to providing simple, out-of-the-box SDS, HPE SimpliVity integrates with the virtualization
solution to help customers manage SimpliVity from a single interface. The HPE SimpliVity plug-in for
VMware enables admins to manage SimpliVity nodes as VMware hosts just as they are used to doing,
but also adds extra functionality specific to SimpliVity. For example, admins monitor the HPE SimpliVity
Federation as a whole. They can also manage automatic backup policies and initiate manual backup and
recoveries. The plug-in also lets admins monitor databases and the underlying storage from a single tool.
They can create new datastores and expand existing ones. With a single view for monitoring resource
utilization, they can more quickly find and resolve issues. Finally, the HPE SimpliVity plug-in for VMware
includes a Deploy Management Virtual Appliance wizard, which allows you to convert a peer-managed federation to
a centrally managed federation. The wizard gives you more flexibility in deploying and managing
federations.
HPE SimpliVity also offers seamless integration with Aria Automation (formerly vRealize Automation or
vRA) and Aria Automation Orchestrator (formerly vRealize Orchestrator or vRO). Earlier in this course,
you learned about how these solutions help companies use powerful workflows to orchestrate their
services. HPE has developed workflows specific to HPE SimpliVity to accelerate companies' efforts to
automate their SimpliVity environment. The figure above shows a list of the tasks customers can
automate with the workflows.


HPE SimpliVity Deployment Manager with VMware vCenter

Figure 5-26: HPE SimpliVity Deployment Manager with VMware vCenter

After admins install the HPE SimpliVity nodes in the data center on Day 0, the HPE SimpliVity
Deployment Manager helps to automate the deployment of the solution. Read the sections below to see a
high-level overview of the process.

1. vCenter pre-setup
First, admins should create in vCenter the clusters to which they want to add HPE SimpliVity nodes.

2. Beginning to use Deployment Manager


The admin enters settings on the Deployment Manager to connect to vCenter. The admin chooses a
cluster and specifies whether to use an existing Federation or create a new one.
The admin can then define network settings or import them from an XML.


3. Node discovery

Figure 5-27: HPE SimpliVity Deployment Manager with VMware vCenter (cont.)

Admins first discover and add a single node to the cluster. They can then add more.
Here you see that the first node receives a DHCP address. The admin then just needs to scan for the host,
and the Deployment Manager automatically discovers it.

4. Node deployment
The admin now tells the Deployment Manager to deploy network settings and the ESXi hypervisor to the
host.
After adding the first node to the cluster, admins can quickly deploy the same settings to add more nodes.

5. Additional node deployment


The cluster rapidly discovers the other nodes and deploys settings to them.


Why REST API

Figure 5-28: Why REST API

Admins can quickly complete common tasks for managing HPE SimpliVity clusters in a GUI. But
sometimes admins need to repeat the same task many times. For example, they might need to clone
multiple VMs every morning for a test team, so clicking through a GUI would be tedious. That’s why HPE
has created the HPE SimpliVity REST API: to allow companies to script the most common administrative
tasks available in the GUI.
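
As a hedged example of what such a script can look like, the sketch below authenticates to an OmniStack Virtual Controller and lists VMs with Python and the requests library. The token endpoint, the "simplivity" client ID, and the resource path follow commonly documented HPE SimpliVity REST API conventions, but you should verify them against the API reference for your OmniStack version; the address and credentials are placeholders.

# Sketch: authenticate to an OVC and list VMs via the HPE SimpliVity REST API.
# Endpoint paths and the "simplivity" client ID follow commonly documented
# conventions; verify against the API reference for your OmniStack version.
import requests

OVC = "https://192.0.2.10"          # placeholder OVC management address
USER, PASSWORD = "svtuser", "..."   # placeholder credentials

# Obtain an OAuth bearer token.
token_resp = requests.post(
    f"{OVC}/api/oauth/token",
    auth=("simplivity", ""),        # fixed client id, empty secret
    data={"grant_type": "password", "username": USER, "password": PASSWORD},
    verify=False)                   # lab only; use proper CA trust in production
token = token_resp.json()["access_token"]

# List the virtual machines the federation knows about.
vms = requests.get(
    f"{OVC}/api/virtual_machines",
    headers={"Authorization": f"Bearer {token}",
             "Accept": "application/vnd.simplivity.v1.1+json"},
    verify=False).json()

for vm in vms.get("virtual_machines", []):
    print(vm["name"], vm["state"])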


HPE SimpliVity Upgrade Manager

Figure 5-29: HPE SimpliVity Upgrade Manager

Now look at another way HPE SimpliVity simplifies management.


The HPE SimpliVity Upgrade Manager helps customers to quickly upgrade a complete Federation to new
software without impacting services. Admins choose the new software and run the Upgrade Manager.
The Upgrade Manager upgrades one node in each cluster at a time, first moving that node's VMs to other
nodes. After upgrading one node, the Upgrade Manager moves the VMs for the next node and upgrades
that node until all nodes are on the same software. As you see, if the Federation has multiple clusters, the
Upgrade Manager can upgrade multiple clusters at once.
After the upgrade is complete, admins can choose to roll back the upgrade on all nodes or individual
nodes. While all nodes in a Federation typically must be on the same version, they are permitted to be on
different versions while the Federation is in this state. Once admins are sure that all nodes are running
the new software and the upgraded Federation is working as expected, they can commit the upgrade.
After that point, they can no longer roll back.
Keep in mind that you can update firmware at the same time as the ESXi OS, but in that case you cannot
enable secure boot.


HPE Alletra dHCI: Emphasizing the software-defined benefits


You will now explore the benefits of HPE Alletra dHCI, a disaggregated hyperconverged infrastructure
(dHCI) solution.


HPE Alletra dHCI versus traditional HCI solutions

Figure 5-30: HPE Alletra dHCI versus traditional HCI solutions

Customers need to simplify how they deploy and manage the infrastructure that supports their virtualized
workloads. Although HCI solutions offer an attractive solution, they can have one drawback. Traditional
HCI stacks consist of servers, which contribute both the compute and storage resources, in the form of
local storage drives. When customers need to scale the solution, they add another server, and compute
and storage scale uniformly. However, many workloads feature complex architectures that scale less
cleanly. For these unpredictable workloads, the requirements for the storage-hungry database layer might
grow more quickly than requirements for the compute-intensive application layer. With traditional HCI, the
customer must invest in more compute power than needed simply to obtain the required storage. Or the
opposite may occur.
In either case, customers with unpredictable workloads face a difficult choice. Do they deploy a
converged bundle of servers and storage arrays so that they can scale storage and compute separately,
but miss out on the simplicity and operational benefits of HCI? Or do they deploy HCI and end up over-
provisioning?
HPE Alletra dHCI provides the flexibility of converged infrastructure with the simplicity of HCI. It consists
of HPE ProLiant DL servers and HPE Alletra arrays, which automatically form a stack. Customers
manage the stack from an intuitive management UI and integrate it into vCenter, as easily as a traditional
HCI stack.
HPE Alletra dHCI is designed to deliver high performance and availability while allowing customers to
scale compute and storage separately.


Scaling compute and storage with disaggregated HCI

Figure 5-31: Scaling compute and storage with disaggregated HCI

The ability to scale compute and storage separately translates to concrete benefits for the customer, as
some examples will illustrate. In the figure above, the customer initially deploys a pool of 32 compute
nodes, but far less storage. At this point, the customer does not require much storage, so they save
money by avoiding overprovisioning.
Later the company needs more storage capacity. The customer can easily scale up the storage capacity
without disruption to the stack. The customer can scale up capacity within a single chassis by adding new
drives, including ones with mixed capacities. The customer can scale up further by attaching capacity
expansion shelves—each one being its own independent RAID group.

Figure 5-32: Scaling compute and storage with disaggregated HCI


Later the customer’s workloads might need more IOPS. Again, the customer can easily adjust the
solution for the new needs, scaling out the number of storage nodes.

Figure 5-33: Scaling compute and storage with disaggregated HCI


HPE Alletra dHCI and vVols features

Figure 5-34: HPE Alletra dHCI and vVols features

HPE Alletra dHCI supports all the same features as HPE Alletra, including its support for VMware vVols.

As you recall, a vVol is a volume on a SAN array, which a VM uses to back its disk, rather than a VMDK
file within a VMFS datastore. vVols can simplify storage management. When a VMware admin performs a
task like creating a new virtual disk, or snapshotting a disk, the storage array automatically provisions the
vVol or takes the snapshot. The vVol approach also enables admins to apply policies at a VM-level rather
than a LUN-level.

Alletra arrays offer mature vVol support with features such as QoS, thin provisioning, data encryption and
deduplication. Alletra snapshots are fast and efficient. Alletra supports application-aware snapshots for
vVols, which help ensure consistency for data backed up with Volume Shadow Copy (VSS). The VM
recycle bin helps to protect companies from mistakes. Alletra defers deleting VMs for 72 hours, allowing
admins to reclaim the VMs within that time period, if necessary.

Companies using vVols can also take advantage of HPE Alletra replication features.


Summary of key HPE Alletra dHCI benefits

Figure 5-35: Summary of key HPE Alletra dHCI benefits

HPE Alletra dHCI allows customers to reimagine HCI without limitations. It offers a turnkey experience
that radically simplifies IT. Built for business-critical apps and mixed workloads, it unlocks IT agility while
ensuring apps are always-on and always-fast. HPE Alletra dHCI is intelligently simple with native, full-
stack intelligence from storage to VMs and policy-based automation for virtualized environments. Its
99.9999% guaranteed data availability and sub-millisecond latency make it ideal for demanding
workloads. HPE Alletra dHCI offers write latency an order of magnitude better than traditional HCI’s.
With HPE Alletra dHCI, customers can independently scale compute and storage non-disruptively. This
design reduces costs by eliminating overprovisioning and wasted resources.
HPE Alletra dHCI offers the simplicity of a cloud-like experience. And when you deliver with HPE
GreenLake, customers couple the benefits of a simple pay-as-you-go payment model with the security
and control of private cloud.
HPE Alletra dHCI can meet the needs of your customers’ demanding workloads while still delivering cost-
effective operations.


HPE Alletra dHCI: Sizing and architecting the solution


In this section, you will focus on designing and deploying an Alletra dHCI solution for a VMware
environment.


HPE Alletra dHCI data gathering and sizing tools

Figure 5-36: HPE Alletra dHCI data gathering and sizing tools

You will need to assess the environment currently hosting the workloads that the customer plans to
migrate to HPE Alletra dHCI. As you learned in Module 2, you can use HPE CloudPhysics to assess a
customer’s current VMware environment. Alternatively, you can use HPE Assessment Foundry (SAF),
which works for VMware and other environments. You can access these tools here:
• HPE Cloud Physics Portal
• HPE Assessment Foundry Portal
After you have collected information about the current environment and future needs, you can size the
solution. For Alletra dHCI, you size the storage and compute separately:
• Storage in NinjaSTARS, accessible here: HPE Assessment Foundry Portal
• Compute in the dHCI Sizer Tool, accessible from the sizing tools here: PSNow
For many of these tools, you can import SAF output directly into the tool, saving you time. Check also for
emerging support for importing HPE CloudPhysics data.
HPE also provides a dHCI Networking Tool to point you toward the correct switches, cables, and
transceivers. You can access this tool here: HPE Infosight Downloads


HPE Alletra dHCI platform building blocks

Figure 5-37: HPE Alletra dHCI platform building blocks

HPE Alletra dHCI supports a range of products, which can be combined into the stack. This figure shows
the products that customers can use to build the solution. (Note that this information was current when
this course was created; please monitor the Alletra dHCI briefcase in HPE Seismic for updates such as
support for HPE ProLiant Gen11.)

Storage
The storage consists of either HPE Alletra 5000 or HPE Alletra 6000 arrays. The HPE Alletra 5000 offers
hybrid flash for a balance of cost-effectiveness and good performance. The HPE Alletra 6000 arrays feature
all-flash to provide very low latency for demanding workloads. HPE Alletra dHCI only supports iSCSI, not
FC.

Compute
Alletra dHCI supports several popular HPE ProLiant DL servers, enabling you to choose the options for
the customer’s workload requirements and processor vendor preferences:
• AMD-based servers
– HPE ProLiant DL325 Gen10/Gen10+/Gen10+v2
– HPE ProLiant DL385 Gen10/Gen10+/Gen10+v2
• Intel-based servers
– HPE ProLiant DL360 Gen9/Gen10/Gen10+
– HPE ProLiant DL380 Gen9/Gen10/Gen10+
– HPE ProLiant DL560 Gen9/Gen10
– HPE ProLiant DL580 Gen9/Gen10
The Gen 9 models are supported only in brownfield deployments, which allows customers to use their
existing servers for an Alletra dHCI deployment. You will learn more about both greenfield and brownfield
deployments later in this module.


Hypervisor
HPE Alletra dHCI supports VMware vSphere 8.0, 7.0 or 6.7 for greenfield deployments or VMware
vSphere 6.5 for brownfield deployments.

Management
For management, HPE Alletra dHCI enables admins to use the familiar VMware vCenter. It also includes
tools to set up, manage, and upgrade the stack:
• Integrated dHCI Stack Setup
• Integrated dHCI Stack Manager
• Integrated dHCI Stack Upgrades

Network
For greenfield deployments, Alletra dHCI supports HPE StoreFabric M-Series, HPE FlexFabric
57x0/59x0, and HPE Aruba Networking 6300/83xx switches.


Required VMware licenses

Figure 5-38: Required VMware licenses

VMware vCenter Server license


HPE Alletra dHCI requires a VMware vCenter Server Standard license.

VMware vSphere license


The HPE Alletra dHCI solution requires a VMware vSphere license that supports high-availability
functionality and APIs for Array Integration and Multipathing. HPE recommends using vSphere Enterprise
Plus because only this license supports DRS, which is used for one-click upgrades. However, if
customers do not require that feature, they can use vSphere Standard licenses.


HPE Alletra dHCI architecture

Figure 5-39: HPE Alletra dHCI architecture

HPE Alletra dHCI integrates HPE ProLiant hosts running vSphere, 10 Gbps switches, and an HPE Alletra
array into a single stack. As this figure shows, the integrated solution has a single management plane,
which is VMware vCenter.
Before this integrated stack can be created, the HPE Storage Connection Manager (SCM) must be
installed on each host where the HPE Alletra dHCI solution will be deployed. Depending on whether this
is a greenfield or brownfield deployment, HPE or the customer might install the SCM.
HPE provides a number of tools which help to automate the HPE Alletra dHCI solution:
• dHCI Stack Setup—Customers run this wizard to set up the HPE Alletra dHCI solution. In a greenfield
deployment, the wizard guides admins through the process of creating a vCenter server, setting up
datastores and a cluster, setting up new switches, and adding and configuring new ProLiant servers.
In a brownfield deployment, the wizard guides admins through the process of adding an HPE Alletra
array to an existing vCenter server, as well as specifying and discovering the HPE ProLiant servers
and switches that will become part of Alletra dHCI.
• dHCI Stack Management—Stack management is implemented as a vCenter plug-in, allowing admins
to manage and monitor Alletra dHCI from within the familiar vCenter interface.
• dHCI DNA Collector—The Collector gathers information about the storage system, including
configuration settings, health, and statistics. This information is reported in the vCenter plug-in.
• dHCI Stack Upgrades—This tool manages and streamlines the process of upgrading the devices in
the integrated stack.
As you can see, the devices use heartbeats to ensure the stack is healthy and intact.


Two deployment paths

Figure 5-40: Two deployment paths

HPE Alletra dHCI offers two deployment options: greenfield or brownfield.

Greenfield deployment
As the name suggests, a greenfield solution is a new deployment. For switches, customers can choose
from HPE StoreFabric M-Series, HPE FlexFabric 57x0/59x0, or HPE Aruba Networking 6300/83xx
switches. Whatever switch you choose must support Data Center Bridging (DCB).
At the time this course was created, new Alletra dHCI deployments supported the following servers:
• HPE ProLiant DL325 Gen10 and Gen10+
• HPE ProLiant DL385 Gen10 and Gen10+
• HPE ProLiant DL360 Gen 10
• HPE ProLiant DL380 Gen 10
• HPE ProLiant DL560 Gen 10
• HPE ProLiant DL580 Gen 10
As always, you should check for updated information. In greenfield deployments, HPE integrates the
correct Custom HPE VMware ESXi Image on the HPE ProLiant servers.
The greenfield solution also includes at least one new HPE Alletra 5000 or 6000 array. HPE Alletra dHCI
is only tested and supported for iSCSI.

Brownfield deployment
Brownfield deployments allow customers to use some existing components. They bring their own
switches, which must support DCB.
The customer can choose to obtain new HPE ProLiant servers as in a greenfield deployment. Or the
customer can provide their own existing HPE ProLiant servers. At the time this course was created, the
following servers were supported:
• HPE ProLiant DL325 Gen10 and Gen10+
• HPE ProLiant DL385 Gen10 and Gen10+


• HPE ProLiant DL360 Gen 10 and Gen 9


• HPE ProLiant DL380 Gen 10 and Gen 9
• HPE ProLiant DL560 Gen 10 and Gen 9
• HPE ProLiant DL580 Gen 10 and Gen 9
As always, you should check for updated information.
To incorporate the existing HPE ProLiant DL servers into the HPE Alletra dHCI stack, the servers must
use a Custom HPE VMware ESXi Image.
The brownfield deployment must use a new HPE Alletra array or arrays. The supported models are the
same as for a greenfield deployment.


HPE Alletra dHCI: Greenfield

Figure 5-41: HPE Alletra dHCI: Greenfield

With greenfield deployments, HPE handles many of the deployment tasks at the factory. HPE installs the
VMware vSphere dHCI image on the new HPE ProLiant servers. HPE also installs the HPE Storage
Connection Manager. HPE also installs the dHCI system image on the HPE Alletra arrays.
At the customer site, admins only need to complete a few tasks. They rack and stack the systems. Then
they initialize the network and run the simple HPE Alletra dHCI Setup Wizard. The solution is then ready
for production.


HPE Alletra dHCI: Brownfield

Figure 5-42: HPE Alletra dHCI: Brownfield

For brownfield deployments using existing HPE servers, HPE only sets up the new HPE Alletra arrays.
You must help the customer ensure that the existing network and compute components meet the
requirements for being part of Alletra dHCI. For example, you must install the correct Custom HPE
VMware ESXi Image and the Storage Connection Manager on the servers. Then you can run the HPE
Alletra dHCI Setup Wizard.


HPE Infosight Welcome Center: Guided deployments

Figure 5-43: HPE Infosight Welcome Center: Guided deployments

The HPE InfoSight Welcome Center is designed to help you quickly and easily deploy HPE storage
solutions. In addition to HPE Alletra dHCI, the InfoSight Welcome Center supports:
• HPE Primera
• HPE Alletra
• HPE GreenLake for Block Storage
The sections that follow describe the guidance the Welcome Center provides for Alletra dHCI.

Getting started
The “Getting started” section provides a preinstallation checklist for both greenfield (new) and brownfield
(existing) installations.
For Alletra dHCI, the preinstallation checklist helps you prepare so you can install the actual solution in 30
to 45 minutes. For example, the preinstallation checklist provides details on:
• Required components
• Recommendations for location
• Power sources
• Network layout
• Network ports and cabling
• Guidelines for creating firewall policies to allow Nimble dHCI traffic
• Storage and server configuration
• Network configuration

Physical installation
The Welcome Center also guides you through the installation. For Alletra dHCI, it provides videos to walk
you through the steps of physically installing and cabling the storage array, servers, and switches.

Software configuration
This section explains the process of configuring the switch, preparing the environment, discovering the
array, setting up the array, configuring the vCenter Server, and validating the array.


Network automation: Extending dHCI stack setup to HPE switches

Figure 5-44: Network automation: Extending dHCI stack setup to HPE switches

dHCI stack management now extends to HPE Aruba Networking switches. You can automate switch
configuration for setting up the stack and adding servers, thereby reducing the number of steps required
for switch configuration by over 70%. dHCI stack management will help you easily scale with the number
of compute nodes for even greater time savings. dHCI stack management adds extra switch configuration
and health checks to Configuration Checks and removes the potential for misconfigurations by applying
dHCI best practices every time you configure switches.


Network automation—Supported network topology

Figure 5-45: Network automation—Supported network topology

There are several requirements, prerequisites, and potential limitations for using HPE Alletra dHCI’s
network automation features.

Network topology requirements:


The solution must be a new deployment. The topology must be a two-switch architecture, and the
switches must be either HPE Aruba Networking 8325 or HPE Aruba Networking 8360 switches.
HPE ProLiant DL compute nodes must have HPE iLO connections; shared or dedicated connections are
supported.

Deployment network prerequisites


To enable dHCI stack management’s network automation, the deployment network must meet several
prerequisites.
• Switch management interfaces must be configured.
• ISLs must be configured.
• Network uplinks must be configured.
• All remaining ports (for dHCI or unused) are assigned to the Management VLAN.

Limitations
You cannot use an out-of-band (OOB) switch for HPE iLO or array management. Instead, the servers’ iLO
ports must connect to the HPE Aruba Networking 8325 or 8360 switches. This topology only supports two
switches.


HPE Alletra dHCI: Multiple vSphere HA/DRS cluster support

Figure 5-46: HPE Alletra dHCI: Multiple vSphere HA/DRS cluster support

At the time this course was released, HPE Alletra dHCI supported a maximum of one vSphere cluster in
the integrated management plane of the solution. However, customers can create additional vSphere
clusters outside the dHCI management plane, and those external clusters can use the standard iSCSI
shared storage backed by the HPE Alletra dHCI array. Admins can provision this storage using the HPE
Alletra dHCI array’s GUI, and they connect the clusters to it just as they would in a standard vSphere
solution. However, the dHCI management plane has no visibility into the external clusters’ servers.
This design is much more flexible and adaptable than those of classic HCI vendors, which also support only a
single vSphere cluster in the management plane but cannot provision storage for external clusters.


Restrictions

Figure 5-47: Restrictions

Set customers’ expectations correctly as to what is and is not supported with HPE Alletra dHCI. HPE
Alletra dHCI arrays can implement replication with Peer Persistence to other arrays of the same exact
model. Those arrays can be within the dHCI cluster or outside it. It does not matter. However, HPE Alletra
dHCI arrays cannot replicate with Peer Persistence to arrays of different models.
HPE Alletra dHCI can perform asynchronous replication without any restrictions beyond the usual
restrictions for asynchronous replication on HPE Alletra.
Also remember that only iSCSI is officially supported for HPE Alletra dHCI.


Designing HPE Alletra dHCI specifically for VCF


In this section you will learn how to design HPE Alletra dHCI specifically for VMware Cloud Foundation
(VCF).


dHCI for VCF overview

Figure 5-48: dHCI for VCF overview

HPE Alletra dHCI for VMware Cloud Foundation (VCF) makes it simpler for customers to deploy VCF.
This solution consists of a dedicated set of SKUs for the array and servers. HPE delivers a specialized
HPE Alletra dHCI array, which is designed to integrate as a new VI workload domain into an existing VCF
environment.

For this integration to work, the customer must first have a VCF management workload domain present.
When the customer runs the dHCI Stack Setup wizard, the wizard will automatically integrate the HPE
Alletra dHCI stack as a new VI workload domain under that management domain.


dHCI for VCF prerequisites and limitations

Figure 5-49: dHCI for VCF prerequisites and limitations

There are several requirements and limitations you must consider as you design an HPE Alletra dHCI for
VCF solution.

Requirements
The customer should be using VCF 4.3. However, if a customer wants to use another version of VCF, the
version of VMware ESXi on the servers can be reinstalled to match the desired version. Refer to the VCF
compatibility matrix. For example, if VCF 4.2 is installed, then you would install VMware ESXi 7.0U1 Build
17551050. But VCF integration will only work with version 4.1 and later.
You must be careful to order the correct SKU, which is different from the standard HPE Alletra dHCI
solution. Order the SKU for dHCI with VCF-enabled HPE Alletra 6000 array. Then HPE will configure the
array at the factory to operate within the VCF solution.
The HPE Alletra dHCI cluster must use a minimum of 3 HPE ProLiant DL servers. HPE will preinstall new
servers with VMware ESXi 7.0U2 Build 17867351. As noted above, customers can later install a different
supported build to work with a different supported VCF version.
Admins must also configure an iSCSI network pool in SDDC prior to the deployment.

Limitations
The HPE Alletra dHCI cluster always joins VCF as a new workload domain. Joining an existing domain is
not currently possible.
The deployment creates a single vVol datastore. Creating additional datastores in the dHCI deployment
or using a plugin is not supported at this time.
The dHCI Network Automation feature is not supported for this solution.
Converting an existing HPE Alletra dHCI array to a VCF-enabled HPE Alletra dHCI array in the field is not
supported.


dHCI for VCF use cases / value adds

Figure 5-50: dHCI for VCF use cases / value adds

HPE Alletra dHCI for VCF offers many advantages over a traditional VCF cluster. The Stack Setup wizard
simplifies and automates the deployment of new VI workload domains. HPE Alletra dHCI for VCF also
offers six 9s of data availability and sub-millisecond latency. Additionally, HPE Alletra dHCI for VCF has
superior TCO versus a traditional vSAN architecture. And updating clusters is simple through dHCI One-
Click Update and SDDC Manager.


Automating the lifecycle with HPE Alletra dHCI integrations with VMware


In this section you will learn how to automate the lifecycle for HPE Alletra dHCI and HPE Alletra dHCI
integrations with VMware.


HPE Alletra dHCI vCenter plug-in

Figure 5-51: HPE Alletra dHCI vCenter plug-in

Once HPE Alletra dHCI is set up, admins can manage it using the dHCI vCenter plug-in. They can
complete tasks such as:
• Add new servers
• Create a new VMFS datastore
• Grow the VMFS datastore
• Clone a VMFS datastore
• Create a snapshot of a VMFS datastore
• Create a vVol datastore
Because admins are using the familiar vCenter interface, managing Alletra dHCI is straightforward. Also
note that, if customers have deployed an HPE Alletra dHCI for VCF solution, the plug-in features
automatically adapt for that environment.
The vCenter plug-in also allows admins to perform a consistency check to ensure their Alletra dHCI
cluster is set up correctly.


Intelligent upgrades for full-stack automation

Figure 5-52: Intelligent upgrades for full-stack automation

HPE Alletra dHCI contains many components and integration points, each with their own firmware and
software. Fortunately, HPE Alletra dHCI supports one-click upgrades to dramatically simplify the upgrade
process. Unlike some vendors’ upgrade solutions, HPE Alletra dHCI does not require a control VM to run
the process. Instead, the storage node’s controllers drive the full-stack upgrades.

HPE Alletra dHCI also uses catalogs and allowlists to ensure compatibility and validity for the
interconnecting components from storage node OS, compute node SPP, VMware ESXi OS, dHCI
integrations, and Storage Configuration Manager (SCM). Admins will only need to update one component
manually: VMware vCenter.


Benefits of intelligent, one-click upgrades

Figure 5-53: Benefits of intelligent, one-click upgrades

HPE Alletra dHCI one-click upgrades allow users to upgrade only what is essential to maintain
compliance, saving IT time by reducing maintenance tasks. The process also helps to prevent the issues
that can plague upgrade processes. For example, a new ESXi OS version might have a known issue with
a particular HPE SPP. The HPE Alletra dHCI upgrade process would prevent deploying those two
components together. That is, the HPE dHCI Stack Manager validates upgrades by checking in with HPE
InfoSight. HPE InfoSight allowlists or denylists catalogs (groups of software component versions) based
on whether the catalogs would introduce issues for the HPE Alletra dHCI stack, considering its current
software versions.

These features intelligently protect HPE Alletra dHCI stacks from known problems as they arise, making
the HPE Alletra dHCI upgrade process much more intelligent and resilient than classic HCI vendors’
processes.


Catalog matching

Figure 5-54: Catalog matching

HPE uses dHCI catalogs to group together compatible software versions. You will now explore an
example of how catalog matching works.
In this example, the customer initially deployed the HPE Alletra dHCI solution with versions from Catalog
1.0. Later the customer manually updated the vCenter Server from 6.7 U1 to 6.7 U3. Now the customer
wants to upgrade the HPE storage array OS and ESXi OS to versions in Catalog 2.0. The allowlist
permits the upgrade because the vCenter Server’s current version is certified in Catalog 2.0.
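
In simplified terms, the allowlist check works roughly like the sketch below. This is an illustrative model of the matching logic described in this example, not HPE's actual implementation; the version strings are hypothetical.

# Simplified model of catalog matching; not HPE's actual implementation.
CATALOG_2_0 = {
    "array_os": {"6.0.0"},
    "esxi":     {"7.0U3"},
    "vcenter":  {"6.7U3", "7.0"},    # hypothetical certified versions
}

current = {"array_os": "5.2.1", "esxi": "6.7U1", "vcenter": "6.7U3"}

def catalog_allowed(catalog: dict, current: dict, managed: set) -> bool:
    """Components dHCI does not upgrade itself (e.g., vCenter) must already
    be on a version certified in the target catalog."""
    return all(current[c] in catalog[c] for c in current if c not in managed)

# vCenter is updated manually, so it is the component that must match:
print(catalog_allowed(CATALOG_2_0, current, managed={"array_os", "esxi"}))
# True -> the upgrade to Catalog 2.0 is allowlisted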


Catalog matching (cont.)

Figure 5-55: Catalog matching (cont.)

In this next example, the customer also deployed an HPE Alletra dHCI cluster based on Catalog 1.0
software. Later admins updated vCenter Server manually to 6.7 U3. They also manually deployed an
ESXi patchfix release.
Now admins want to update the HPE Alletra dHCI cluster to Catalog 2.0 software versions. However,
Catalog 2.0 is denylisted because the 6.7 patchfix is not certified in it. The Catalog 1.0 updates are also
denylisted.
To resolve this issue, you will need to help the customer manually update the ESXi OS to a certified
release in the desired catalog. You should recommend that the customer manage updating all of the
Alletra dHCI cluster software (except vCenter Server) with one-click upgrades in the future. Then the
customer will avoid incompatibilities and issues like this.


HPE Alletra dHCI upgrade process: Fully nondisruptive to VMs

Figure 5-56: HPE Alletra dHCI upgrade process: Fully nondisruptive to VMs

The HPE Alletra dHCI upgrade process is a built-in, pre-validated, yet simple way to perform full-stack
lifecycle management. It follows a careful step-flow process.

1. An admin issues a dHCI catalog update request.


2. dHCI downloads the HPE array OS, VMware ESXi, HPE ProLiant SPP, and HPE SCM from HPE
InfoSight to a storage node.
3. dHCI performs checks on applicable dHCI components.
4. The HPE array OS and dHCI system image are upgraded. The upgrade process does not disrupt VM
functionality.
5. The first compute node enters maintenance mode. VMware DRS migrates that node’s VMs to other
nodes.
6. The VMware ESXi OS and HPE SCM update on the compute node. The compute node reboots. This
step typically takes about five minutes plus the node reboot time.
7. The compute node’s HPE ProLiant SPP is updated. The node reboots. This step typically takes about
75 minutes plus the host reboot time.
8. The compute node is now updated and online. It exits maintenance mode. DRS moves its VMs back
to it.
Steps 5-8 repeat for other compute nodes in the cluster.

Finally, post-upgrade checks are performed, and the HPE Alletra dHCI cluster upgrade is complete.
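
For illustration only, the rolling pattern in steps 5 through 8 maps onto standard vSphere primitives, as in the pyVmomi sketch below. dHCI orchestrates the real process internally from the array controllers; update_host is a hypothetical placeholder for the ESXi/SCM and SPP update work.

# Illustrative rolling pattern for steps 5-8; dHCI drives the real process
# from the array controllers. `update_host` is a hypothetical placeholder.
from pyVim.task import WaitForTask

def rolling_update(hosts, update_host):
    for host in hosts:                 # one node at a time
        # DRS migrates the node's VMs away as it enters maintenance mode.
        WaitForTask(host.EnterMaintenanceMode_Task(timeout=0))
        update_host(host)              # ESXi/SCM, then SPP, with reboots
        WaitForTask(host.ExitMaintenanceMode_Task(timeout=0))
        # DRS rebalances VMs back onto the refreshed node.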


HPE Alletra dHCI compute node: SPP update architecture—available for Gen10 servers and later

Figure 5-57: HPE Alletra dHCI compute node: SPP update architecture—available for Gen10 servers
and later

Now that you know the overall process, examine the SPP upgrade process in more detail.

The compute node SPP firmware upgrade process is controlled directly from the dHCI storage node,
using a combination of HPE iLO, Integrated Smart Update Tools (iSUT), and Smart Update Manager
(SUM).

iSUT is akin to an agent that runs on every HPE ProLiant Gen10 (and later) server. It helps install OS-
specific components and is part of every HPE ESXi image starting from the 2019.12 release. For previous
releases of the HPE ESXi image, iSUT gets installed when ESXi is updated to a newer release. iSUT
must be installed and running in AutoDeploy mode before the SPP update begins. HPE Alletra dHCI
checks that the mode is set correctly as part of the pre-check process. If it is set incorrectly, HPE Alletra
dHCI changes the mode to AutoDeploy. If it cannot change the mode, the precheck will fail.

SUM is part of the dHCI SPP package, and it is executed to run directly on the HPE Alletra array
controllers. It discovers the installed versions of hardware, firmware, and software on each compute
node. It deploys updates in the correct order and ensures that all dependencies are met before deploying
an update.

Although it is possible to update multiple compute nodes simultaneously, dHCI deploys updates to only
a single node at a time to ensure minimal cluster disruption.

Note that these tools are available only for Gen10 and later servers. If a brownfield deployment uses
Gen9 servers, dHCI will not be able to upgrade the servers’ SPP in this way.


HPE Alletra dHCI replication and VMware SRM: Efficient private cloud
disaster recovery

Figure 5-58: HPE Alletra dHCI replication and VMware SRM: Efficient private cloud disaster recovery

HPE Alletra dHCI integrates with VMware Site Recovery Manager (SRM) to deliver efficient disaster
recovery. As you learned, HPE Alletra dHCI arrays offer the same replication capabilities as other HPE Alletra arrays. The dHCI
Storage Replication Adapter (SRA) allows VMware SRM to leverage those capabilities to protect
customers’ data in a disaster recovery site.

The arrays provide application-driven protection for VMs’ data. Data reduction techniques minimize the
amount of data that must flow over the WAN. HPE Alletra dHCI arrays can replicate data to other
HPE Alletra arrays outside of their own cluster, and, as long as they use asynchronous replication, they
can do so to disparate HPE Alletra models, generations, and OS releases. All of these capabilities are
integrated into VMware SRM, which provides VM-centric disaster recovery testing and failover.


HPE InfoSight for HPE Alletra dHCI

Figure 5-59: HPE InfoSight for HPE Alletra dHCI

The same benefits of HPE InfoSight that you have examined throughout this course extend to HPE
Alletra dHCI.
To integrate HPE Alletra dHCI, you must visit the HPE InfoSight portal and register HPE Alletra dHCI.
Once Alletra dHCI is registered, you must enable telemetry streaming for HPE InfoSight and cross-stack
analysis:
1. From the settings menu (the gear icon) on the HPE InfoSight Portal, select Telemetry Settings.
2. Locate the array you want to monitor and set the Streaming toggle to On. This setting enables data
streaming from the array.
3. In the same row, set the VMware toggle to On. This setting allows data to be collected from
VMware. Wait for HPE InfoSight to process the vCenter registration and start streaming VMware and
array data (up to 48 hours).
As you recall, HPE InfoSight Cross-Stack Analytics gives admins deep insight into VMs’ performance and
the root cause for under-performing VMs. It offers actionable recommendations for resolving issues and
can often help diagnose even the rarest of conditions.


Activity
You will now design an HPE HCI solution for a customer.


Activity 5

Figure 5-60: Activity 5

This activity uses a new scenario, different from the scenario in previous modules.

Scenario
A small community college is struggling to maintain its data center, which has grown organically over the
years. The data center has a lot of aging equipment that is difficult for the limited IT staff to manage. The
college has shifted some services to the cloud, and, while the college wants to maintain other services
on-prem, the customer has made simplifying the data center a priority. The customer has already begun
virtualizing with VMware; your contact originally brought you in to help with a server refresh to handle the
consolidated workloads.
In this discussion, you have discovered some more issues. The CIO wants to improve availability by
adding VMware clustering. He realizes that clustering requires shared storage, but the data center does
not have a SAN—and the CIO does not want to add one. The IT staff doesn't have the expertise to run a
SAN. The CIO also has received complaints from IT staff about the organization's current manual
processes for backups. But—he tells you—he doesn't have the budget for another project at this point.

Additional background information


The community college has approximately 5,000 students and 500 faculty and employees. The IT staff is
small with few full-time positions. The virtualized environment currently has just five hosts, which support
about 10-12 VMs each. The customer is using Dell servers.

Task
Decide whether you will propose an HPE SimpliVity or HPE Alletra dHCI solution for this customer.
Explain your reasons and the benefits that your solution provides.

__________________________________________________________________________

__________________________________________________________________________


__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________


Summary

Figure 5-61: Summary

In this module, you reviewed how HPE SimpliVity helps customers to simplify their management
processes and protect their data. You also learned how to size and design HPE SimpliVity solutions. You
then learned all about designing, deploying, and updating HPE Alletra dHCI solutions.


Learning checks
1. On which network do HPE SimpliVity nodes have their default gateway address?
a. Storage
b. Management
c. Cluster
d. Federation
2. How does an HPE SimpliVity cluster protect data from loss in case of drive failure?
a. Only RAIN (replicating data to at least three nodes)
b. Only RAID (with the level depending on the number of drives)
c. Both RAID (with the level depending on the number of drives) and RAIN (replicating data to two
nodes)
d. Only RAID (always RAID 10)
3. What is one benefit of an HPE Alletra dHCI one-click upgrade?
a. It performs all upgrades simultaneously, ensuring that the process completes quickly.
b. It upgrades vCenter Server first and then has vCenter Server upgrade other components.
c. It stages upgrades on one of the HPE Alletra dHCI compute nodes to pre-test them.
d. It upgrades one server at a time to prevent disruption to services.
4. What is always required for a brownfield HPE Alletra dHCI deployment?
a. New HPE ProLiant DL Gen10+ or later server
b. New HPE Alletra 5000 or 6000 array
c. Existing HPE ProLiant DL Gen10+ or later server
d. Existing HPE Alletra 5000 or 6000 array



Design HPE GreenLake Solutions for VMware
Module 6

Learning objectives
In this module you will learn about HPE GreenLake, which allows you to take an as-a-service approach
to delivering VMware solutions on HPE infrastructure. Upon completion of this module, you will be able
to:

• Explain the as-a-service model of HPE GreenLake for VMware
• Understand the financial benefits of HPE GreenLake for VMware for customers
• Describe HPE GreenLake for VMware services and offerings
• Understand the business values of HPE GreenLake for VMware


HPE GreenLake for VMware Overview


In this topic you will learn about the general benefits of HPE GreenLake for customers and Partners.


aaS delivery models

Figure 6-1: aaS delivery models

The on-demand scalability of cloud has become a necessity for many customers. HPE makes it possible
for customers to benefit from this experience on-prem.

Customers have traditionally associated aaS consumption with the public cloud. While individual
customers might define aaS in different ways, aaS generally has these characteristics:
• Pay-as-you-go or pay-per-use; customers pay a variable bill based on how many resources they
actually use.
• Some level of shared responsibility; the service provider is responsible for managing and delivering
some pieces of the solution, while the customer is responsible for others. Who is responsible for what
varies widely on the type of service. For example, in infrastructure as a service (IaaS), the service
provider maintains the physical infrastructure and delivers a virtualized OS; the customer is
responsible for maintaining that OS and any applications and data on it.
Customers enjoy the aaS delivery model because they can more quickly obtain the services that they
require without having to invest as much upfront capital. Because the service provider might help deliver
and manage aspects of the solution, customers can also obtain a faster path to a tested solution without
the need for as much in-house expertise. In this way aaS can help customers keep up with an ever-
accelerating pace of innovation and transformation.


Benefits of HPE GreenLake for customers

Figure 6-2: Benefits of HPE GreenLake for customers

You can see here the specific benefits that HPE GreenLake offers customers and how as-a-service,
cloud-everywhere solutions like HPE GreenLake accelerate your customers' business.

Less downtime
Built-in services and support help to ensure that the solution meets the customer's availability
requirements. And because HPE dedicates an account team to the solution, the customer’s IT staff can
focus on other critical areas.

IT hours and resources freed up for other purposes


Customers can take the IT funds that they would typically pour into heavy, upfront infrastructure costs and
invest them in the areas that make the most sense for their business. Fewer upfront costs mean more
budget for innovation.

As-a-service delivery also frees up IT staff from mundane tasks so they can focus on more value-
generating initiatives.

Greater agility
HPE GreenLake solutions include a buffer. As a customer's requirements grow, the customer can
instantly start using resources in the buffer. No pause for new deployment or provisioning means that
customers can bring new products to market more quickly—and thus generate more revenue and beat
out the competition.

Better alignment between usage and cost


With HPE GreenLake, HPE commits to a unit price. Customers commit to a certain level of usage, but
beyond that they pay only for what they use. They have visibility into the relationship between usage and
spending. Customers can even choose to implement chargeback to departments or other groups.

If the customer needs to expand, a simple change order often suffices to expand the capacity. And the
greater the capacity, the lower the unit price, adding up to better value for the customer.


Greater value
All of these benefits add up to real economic value. When you present an HPE GreenLake solution to a
customer, make sure that you emphasize the total value of the solution, including the infrastructure, the
services, the support, and the OpEx savings.


Why HPE GreenLake for VMware

Figure 6-3: Why HPE GreenLake for VMware

Many customers are seeking ways to make their VMware environment more automated. Their users want
a more on-demand experience for obtaining virtualized applications. HPE GreenLake is the answer for
customers who want to get the benefits of the cloud model but are reluctant to move data outside of the
data center.

HPE GreenLake customers enjoy a faster time to value with rapid deployment. HPE GreenLake’s pay-
per-use model means that customers can obtain the environment they need now with less upfront capital
outlay. Customers obtain all the hardware, software, and services that they need under a single contract,
with a single invoice, and with HPE as the single point of contact.

Customers have the choice of where and how they would like to deploy the solution. Customers can
easily scale the HPE GreenLake service so they stay ahead of demand. Since workloads stay on-prem,
customers can maintain compliance requirements while still being able to take advantage of the flexibility
of the cloud.


How it works

Figure 6-4: How it works

HPE GreenLake services for VMware are easy to deploy and use. Depending on the particular service,
admins or users can self-provision VMs of various sizes. Customers are metered and billed for the virtual
environment, based on their actual usage. (The exact pricing model depends on the particular service;
you will learn more later in this module.)

HPE Consumption Analytics and HPE Capacity Planning grant customers better insights into their
utilization. They can better forecast their finances. And they can plan for future projects without the fear of
unexpected costs creeping in. These tools give customers visibility through showback and chargeback
reports, so they know exactly where each dollar in the solution goes.

Customers choose where and how the solution is deployed and have the ability to scale as they need. By
giving customers a flexible consumption model, HPE GreenLake solutions can help customers reduce
their in-house IT costs.
To obtain all these benefits, customers pay a single monthly bill. This bill includes hardware, software,
support, installation services, and operational services.


HPE GreenLake for VMware offerings


You will now take a closer look at HPE GreenLake cloud services for VMware environments.


Configurable HPE GreenLake cloud services

Figure 6-5: Configurable HPE GreenLake cloud services

HPE GreenLake offers many powerful, pre-approved cloud services that can be integrated into an HPE
GreenLake for VMware solution.

HPE GreenLake for VMs


HPE GreenLake for General Purpose Virtual Machine infrastructure provides an on-premises private-
cloud experience that enables exceptional DevOps performance with point-and-click simplicity. This
solution features VMware or Red Hat hypervisors and either HPE converged or industry-standard
systems to deliver cost- or performance-optimized infrastructure in a pay-per-use model. These
standardized configurations, built on the HPE ProLiant or HPE Synergy platforms, are composed of tried-
and-tested infrastructure building blocks that maximize efficiency, asset utilization, and agility in a hybrid
IT environment. They are suitable for a variety of common VM workloads.

HPE GreenLake for VMware Cloud Foundation (VCF)


HPE GreenLake for VMware Cloud Foundation (VCF) customers get a simple path to hybrid cloud
through a fully integrated, software-defined solution across compute, storage, network, security, and
cloud management to run enterprise applications in private and public cloud environments. This new
approach allows IT infrastructure to be deployed quickly for any application, using automation to deploy
VMs and containers within minutes, not days. With HPE GreenLake, customers can access VCF as a
fully managed service on the HPE ProLiant and HPE Synergy platforms, dynamically configure
composable resources to manage the demands of VCF workloads, and accelerate their journey to a true
hybrid cloud model.

HPE GreenLake for Private Cloud Enterprise


HPE GreenLake for Private Cloud Enterprise offers everything customers love about public cloud—
business agility, cloud economics, automation, scalability—without the usual tradeoff of giving up control. With HPE
GreenLake for Private Cloud Enterprise, customers retain control over data sovereignty, performance,
workload optimization, security, privacy and total cost of ownership (TCO).
This solution supports bare metal, containers, and VMs in any combination, delivering their value without
the costly investment of dedicated infrastructure. For new application development, the HPE GreenLake
approach to private cloud offers developers access to the resources they need to accelerate deployment
while eliminating the bottlenecks of traditional processes.
This solution uses familiar hypervisors and containers and, like all HPE GreenLake solutions, offers self-
service provisioning that enables customers to allocate capacity across services as needed.
To learn more about HPE GreenLake for Private Cloud Enterprise, visit
https://www.hpe.com/us/en/hpe-greenlake-private-cloud-enterprise.html


HPE GreenLake for HCI


With HPE GreenLake for HCI customers gain cloud-based VM management across hybrid cloud. They
can build self-service cloud on demand where they need it — on-premises, in data centers, colocation
sites, or at the edge. Customers can access VMs across hybrid cloud on demand.
A unified cloud offering simplifies hybrid cloud data protection, seamlessly protecting VMs and enabling
VM mobility.

HPE GreenLake for VDI


HPE GreenLake for VDI offers unique benefits for customers seeking aaS delivery for VDI. Like desktop
as a service (DaaS), it gives customers a faster path to VDI as opposed to “do it yourself” options. At the
same time, it delivers on-prem services, not services in the public cloud. While the public cloud may be
suited for rapidly scaling general-purpose workloads, it often fails to deliver the performance required for
more demanding workloads—or it makes purchasing this performance too expensive. HPE GreenLake
for VDI delivers secure desktops from customer data centers, enabling customers to obtain the necessary
performance for demanding workloads at a more attractive price point.

AI-Ready Enterprise Platform VMware on HPE GreenLake


HPE GreenLake for AI-Ready Enterprise Platform solutions combine the powers of HPE, NVIDIA, and
VMware to create an end-to-end enterprise platform optimized for AI workloads on a consumption-based
model. Customers can control data compliance, security, and performance without compromise.
Customers can even leverage virtualization to fold AI deployments into existing enterprise
infrastructure.
This AI-Ready solution is an integrated platform, featuring NVIDIA AI Enterprise Suite optimized (and
certified) for VMware vSphere with Tanzu—all factory-integrated on an HPE ProLiant DL380/DL385
NVIDIA-certified system.
This offering is available in two packages:
• Training, which offers pre-built options for training models before deploying them in enterprise-grade
production
• Inference, which focuses on model deployment and scoring needs to optimize newly trained models
to be more efficient; once in production, the Inference package can service incoming client requests
for an AI inference cluster, supporting both GPU and CPU workloads

HPE GreenLake Map Book


To learn more about different HPE GreenLake offerings, see the HPE GreenLake Map Book in HPE
Seismic. This map book helps you to select and sell the correct HPE GreenLake cloud services for your
customer requirements.


Custom HPE GreenLake services

Figure 6-6: Custom HPE GreenLake services

HPE GreenLake cloud services offer hardware optimized for a variety of workloads. But occasionally a
customer wants very granular control over particular hardware elements (such as the ability to select an
exact processor). Or a customer might need you to add other components not included in the HPE
GreenLake cloud services. For these scenarios, you can create a custom HPE GreenLake service.
You can deliver any HPE hardware, as well as HPE OEM VMware licenses, as an HPE GreenLake
service. You will look at the sizing and quoting process in a moment.


Services and support

Figure 6-7: Services and support

Customers can choose how much or how little they would like their HPE GreenLake solution managed by
HPE. Every HPE GreenLake offering comes with capacity management, proactive and reactive support,
and infrastructure. The HPE GreenLake cloud services that you examined a moment ago also come with
virtualization software. The proactive support includes benefits such as these:
• Assigned account teams
• Accelerated escalation of incidents
• Rapid response to critical hardware incidents (24x7)
• Personalized incident reports
You can enhance the HPE GreenLake offering with additional services, which are described below.

HPE Partner services


Your partner organization can add its own in-house services. You can include these services as part of
the HPE GreenLake offering; the customer still pays one monthly bill.

HPE Advisory & Professional Services


You can augment your services with advisory and migration services from the HPE Advisory &
Professional Services (A&PS) portfolio. This portfolio is aligned to major digital transformation initiatives
and business drivers. HPE helps customers accelerate digital transformation by redefining experiences
and operations at the intelligent edge, bringing the cloud experience and operating model to all apps and
data, and unlocking the value of all data from the edge to the cloud. HPE A&PS can advise customers on
the next steps of their transformation journey and map out their priority initiatives.

HPE GreenLake Management Services


HPE GreenLake Management Services offloads the heavy lifting of running modern IT, when and where
companies need it. You can offer your customers comprehensive monitoring, operations, administration,
optimization, and nearly continuous improvement across all areas of their HPE GreenLake solution.
Customers can choose how much of the application stack they want HPE to manage from the
infrastructure to the workload. For example, HPE can manage the hardware infrastructure and
virtualization environment only. Or HPE can manage the hardware infrastructure, virtualization
environment, and guest OS. You and the customer will need to define the precise services required
during the quoting process.


HPE GreenLake cloud services pricing model

Figure 6-8: HPE GreenLake cloud services pricing model

Customers benefit from having one monthly bill that bundles all their hardware, software license, and
service costs for the solution. They also benefit from a bill that aligns their costs with their utilization. As
with public cloud services, customers using HPE GreenLake pay a monthly bill based on their usage. If
their usage goes up, they pay more. If it goes down, they pay less.

Because HPE deploys the solution within customers’ own data centers,
the customer remains responsible for facilities costs such as heating and cooling. However, customers
can choose colocation options, in which case the facilities costs are bundled with the per-unit cost for the
rest of the solution.
At a high level, the monthly bill is calculated by multiplying the number of units consumed by the per-unit
cost. Different HPE GreenLake offerings use different units, which are clearly spelled out in the contract.
If a service includes compute and storage elements, it typically has a unit for metering each. Some
examples of units include:

• Peak GB allocated to VM RAM (for compute)
• Usable GiB on storage arrays (for storage)
You should be aware, though, that customers commit to utilize a particular percentage of the deployed
infrastructure; this is called the committed or reserved capacity. That percentage is often 80% but can be
more or less depending on the contract. Customers will always pay at least for the committed capacity
regardless of the actual consumption. As an HPE Partner, you should size the solution such that the
customer will use more than the committed capacity.
Another benefit of HPE GreenLake services for customers is this: as they expand the capacity of their
HPE GreenLake service, they might grow into new pricing bands, which offer lower per-unit costs.
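
The following worked example shows the billing arithmetic described above. The unit price and the 80% commitment are illustrative numbers only; real values are set in the contract.

    def monthly_bill(units_used, deployed_units, unit_price, commit_pct=0.80):
        # Customers always pay at least for the committed (reserved)
        # capacity, and pay per unit for anything used beyond it.
        committed_units = deployed_units * commit_pct
        billable_units = max(units_used, committed_units)
        return billable_units * unit_price

    # 1,000 GB deployed at a hypothetical $0.05 per GB:
    print(monthly_bill(700, 1000, 0.05))   # under commit: 800 GB -> $40.00
    print(monthly_bill(900, 1000, 0.05))   # above commit: 900 GB -> $45.00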


Sizing and quoting HPE GreenLake solutions

Figure 6-9: Sizing and quoting HPE GreenLake solutions

For some HPE GreenLake cloud services, you might be able to use the HPE GreenLake Quick Quote tool.
This tool speeds up the proposal and quoting process with pre-approved pricing. The HPE GreenLake
cloud services available within the tool depend on your region, so you will need to check. You can access
the HPE GreenLake Quick Quote tool from the quoting tools in HPE PSNow. However, you might need to
fill out a form to request access.
If HPE GreenLake Quick Quote does not offer the HPE GreenLake cloud service you need, you can
create a BOM in HPE One Configuration Advanced (OCA). You must use OCA for custom solutions. To
size a custom HPE GreenLake service, you can use the same tools, such as HPE SSET, that you would
use to size a traditional HPE solution.
When using OCA for an HPE GreenLake service, keep in mind that you will need to create two BOMs: a
start BOM and an end BOM. You should size the start BOM to meet the customers’ current needs with a
suitable buffer. Customers should typically be utilizing about 80% to 90% of the start BOM capacity within
the first six months after deployment. You size the end BOM for the expected capacity requirements at
the end of the contract term (three, four, or five years).
Remember to include all hardware and software licenses in the BOMs. Consult with your partner business
manager (PBM) about the proper way to include services in the BOMs. The start and end BOMs must
include exactly the same products, differing only in quantity.
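
As a rough illustration of the start BOM sizing guidance, the sketch below sizes deployed capacity so that projected six-month demand lands near a target utilization. The unit size and demand figures are hypothetical.

    import math

    def start_bom_units(six_month_demand, target_utilization=0.85, unit_size=32):
        # Deploy enough capacity that projected demand sits near the
        # 80-90% utilization target within the first six months.
        capacity_needed = six_month_demand / target_utilization
        return math.ceil(capacity_needed / unit_size)

    # 340 GB of projected VM RAM at six months, in 32 GB units:
    print(start_bom_units(340))   # -> 13 units (416 GB deployed, ~82% used)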
Refer to the HPE GreenLake Tools Briefcase for more details on this process.


Summary of HPE GreenLake for VMware business values

Figure 6-10: Summary of HPE GreenLake for VMware business values

In summary, HPE GreenLake cloud services can provide a robust, secure, highly available physical and
virtual infrastructure with flexibility and the agility to scale with customer demands.
HPE takes pressure off customers by performing installation and setup services. Customers can select
the best services for their requirements, including design services and management services. When they
choose a fully managed solution, customers can free up resources from managing the VM infrastructure
and have employees work on more strategic projects.
Customers enjoy HPE support for the complete hardware and software solution with a single-point-of-
contact relationship. They can add colocation services to minimize their data center footprint if they
choose.
Customers can use services in the HPE GreenLake Edge-to-Cloud Platform, such as Consumption
Analytics and Capacity Planning, to monitor their utilization and take control of their costs.
In short, customers couple the pay-per-use economic benefits and flexibility of cloud with the customized
experience, control, and security of an on-prem environment.


Activity
You will now view some of the HPE GreenLake cloud services in more detail.


Activity 6

Figure 6-11: Activity 6

You can choose to explore the HPE GreenLake Private Cloud Enterprise interface. For this activity, you
will need Internet access and HPE Partner credentials.
Or you can watch a brief video about HPE GreenLake Edge-to-Cloud Platform. You only need Internet
access for that activity.

Demo of HPE GreenLake for Private Cloud Enterprise


1. Log into https://hpedemoportal.ext.hpe.com/
2. Search for Private Cloud Enterprise. Select this demo: HPE GreenLake for Private Cloud
Enterprise – Interactive Demo.
3. Click Details and then in the new window click OnDemand.
4. Fill out the form, selecting Dry Run / Self-Paced Instruction and indicating that you do not have an
OPP Id. Click Submit.


5. Wait a moment. You will be taken to a view of the Private Cloud Enterprise dashboard. Options that
you can select will be outlined in orange.
6. Click Virtual Machines and explore the list of VMs.
7. Click Launch Service Console to see how easy it is to add a VM as an “instance.”
8. Click Provisioning > Instances and then click Add.

9. You will be guided through adding the instance.


10. When you are finished with the demo, click the X to close the demo. Cancel the demo.


HPE GreenLake Edge-to-Cloud Platform Video


Follow these steps:
1. Access https://greenlake.hpe.com
2. Click Launch HPE GreenLake and Get Started
3. Watch the recorded demonstration


Summary

In this module, you learned about the benefits and use cases for HPE GreenLake for VMware. You also
learned about the many options for services and offerings that are available to customers. You finished
the module by learning more about HPE GreenLake for VMware components and how to size and meter
the solution and by exploring more resources related to HPE GreenLake in an activity.


Learning Checks
1. You are working on an HPE custom quote for an HPE GreenLake for VDI solution. What do you need
to create in OCA?
a. A BOM for each HPE GreenLake pricing band
b. A Start BOM and an End BOM
c. Separate BOMs for HPE compute, HPE storage, and software
d. An Infrastructure BOM and a Support BOM
2. What accurately describes the as-a-service financial model of HPE GreenLake?
a. Customers pay for the infrastructure upfront and services monthly.
b. Customers pay only for what they use each month in one bill (but commit to a minimum usage).
c. Customers pay for a specific tier of usage for each month (with a commitment to a minimum tier).
d. Customers pay for the solution in one lump sum at the end of the contract.



Plan Cloud Migrations
Module 7

Learning objectives
Needs change. Customers outgrow the environment that amply met their requirements four years ago.
They decide to move workloads to the cloud. Or after an enthusiastic push to place all workloads in the
cloud, they realize that some of those workloads actually belong on-prem. No matter what the reasons,
customers often need to migrate virtual machines (VMs) from one location to another. VM migration can
even come into play on a day-to-day basis. VMware Distributed Resource Manager can automatically
rebalance loads by migrating VMs from one host to another host in a cluster.

In this module, you will learn about VMware and HPE solutions that help customers migrate VMs for
these and many other use cases. After completing this module, you will be able to:

• Identify appropriate migration methodologies
• Understand the migration process including planning, validation, execution, and documentation
• Explain migration-related licensing considerations
• Understand VMware vSphere vMotion
• Understand using Zerto, a Hewlett Packard Enterprise company, for migration


Migration process overview

Figure 7-1: Migration process overview

Migrations involve many moving pieces, and a successful migration demands careful planning and
execution. While it is beyond the scope of this module to make you an expert in implementing large-scale
migrations, it will give you a foundation in some of the important concepts.
The migration process naturally begins with in-depth planning. In the next topic of this module, you will
learn about many of the important questions customers must consider at this stage. When customers are
migrating critical workloads, they cannot afford anything to go wrong. To minimize risks and unpleasant
surprises, they should always test the migration in advance. They should also protect their workloads.
Only then can they execute the migration process—typically during a carefully scheduled maintenance
window. After the migration, customers must validate again, ensuring that all services are running as
expected. They should also document the new environment carefully to help users better use the new
environment and to help admins more quickly resolve issues that might emerge later.


Plan the migration


In this topic, you will explore the planning stage of the migration process. You will learn about important
information to document and questions to consider.


Understand migration scope and implications

Figure 7-2: Understand migration scope and implications

Planning begins by identifying the workloads that the customer wants to migrate. In a VMware
environment, those workloads are VMs.
You can then start to map relationships between the VMs. What are your applications’ architectures? Map
out VMs and their places within those architectures. For example, an application might feature several
VMs acting as front-end servers and one VM acting as a database. You should also map dependencies
between applications; you should know which VMs depend on which other VMs to function successfully.
These relationship maps will help you to group related VMs for migration together. You can deploy HPE
CloudPhysics in the customer’s environment to obtain detailed dependency mappings quickly and easily.
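
One simple way to capture dependency mappings is as a graph. In the sketch below, the VM names and the edges are hypothetical; the point is that a VM and everything it transitively depends on should migrate as one group.

    # "Depends on" edges for a hypothetical three-tier application.
    deps = {
        "web-1": ["db-1"],
        "web-2": ["db-1"],
        "report": ["db-1"],
        "db-1": [],
    }

    def migration_group(vm, deps, seen=None):
        # Collect the VM plus everything it transitively depends on.
        seen = set() if seen is None else seen
        if vm not in seen:
            seen.add(vm)
            for dep in deps.get(vm, []):
                migration_group(dep, deps, seen)
        return seen

    print(migration_group("web-1", deps))   # {'web-1', 'db-1'} move together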
You also need to understand the relationship between workloads and data. You need to know, of course,
where and how VMs’ disks are stored. But you also need to understand what other data a VM might need
to access. For example, consider an application that fetches data from and submits data to a database.
Where will that database reside post-migration? If the workload accessing the data will now be running in
a different location from the database, will the application suffer from undue latency? Customers need to
consider these questions to ensure that users remain satisfied with application performance post-
migration.
Migrating workloads calls up all sorts of architecture implications, particularly when you are considering a
major migration from on-prem to cloud or vice versa. At the application level, developers will need to
assess whether applications will need any re-coding for the new environment. However, this course
focuses on architecture concerns from the virtualization layer and down, not on the application layer. You
will need to assess the compute infrastructure for VM hosts. What information can you gather about the
processors used by source hosts and destination hosts? Are they the same vendor and generation? If
migrating to the cloud, can you even know? Many migration methodologies completely recreate the VM
and then attach the new VM to the source VM’s disks. In that case, the CPU architecture does not matter.
But when you migrate a live VM, which has been created on a host using a particular CPU architecture,
you need to use the proper technologies to avoid issues with CPU incompatibilities.

You also need to consider the storage architecture. It is easiest to migrate VMs when the destination
hosts have access to the storage currently used to store the VMs’ disks. However, often that is not
possible. Perhaps one of the goals of the migration is to move to an upgraded storage solution. Or, as
another example, when you migrate VMs to public cloud instances, the public cloud provider controls how
and where the instances’ (VMs’) disks are stored. Migrating VMs to a new compute and storage
infrastructure is certainly possible—you simply need to select the right technologies. You should also
consider any implications that the change in underlying storage architecture might have on workloads.
Pre-testing will help customers understand whether migrated VMs continue to function correctly.

Moving VMs from one location to another often places the VMs across Layer 3 boundaries. In that case,
you and the customer need to plan how to handle the IP addressing at the new location. Does the

Rev. 23.21 342 Confidential – For Training Purposes Only


Module 7: Plan Cloud Migrations

customer require maintaining the existing IP addresses on the VMs? In that case, network admins will
need to make a plan for “cutting” routing over from the source environment to the destination
environment—in other words, for changing how the network routes traffic to the IP networks in question
and also advertising those changes. Does the customer plan to change the IP address? That approach
causes complexity also because DNS entries for services must be updated with the new IP addresses.
And if any applications refer to IP addresses instead of hostnames, the applications can break. Clearly,
customers must carefully formulate and document their plan.


Understand licensing implications

Figure 7-3: Understand licensing implications

Application licenses
Issues with licenses can negatively affect services after a migration. If customers fail to carefully track
which applications are using which licenses in the new environment, they might fail a licensing audit.
Even worse, if licenses enforce consequences for violations, real or mistaken—as some licenses do—
applications might stop working after the migration.

Before migrating the VMs, customers should carefully document every application slated for migration
and the licenses used by that application. They should also document how the license is enforced and
whether it is bound to a particular VM. For example, some vendors associate each license with a
particular MAC address. In that case, the customer will need to plan either how to keep VM MAC
addresses consistent as the VMs migrate or how to migrate the licenses to new MAC addresses.

When customers are expanding services, they might need more application licenses. They should plan
and obtain those licenses in advance so that services are ready to run immediately after the migration.

Customers must carefully document the original and then the final location of all licenses so that they can
demonstrate that they are complying with their license agreements. When customers move workloads to
the cloud, they can face additional challenges. For example, some vendors require that licenses reside on
dedicated machines, which can be difficult to guarantee in a cloud environment. The customers will need
to explore their public cloud vendor’s offerings for dedicated instances.

Virtualization licenses
Customers must, of course, have sufficient VMware vSphere licenses for the destination virtualization
environment, if they own that environment. VMware vSphere is licensed per ESXi host processor (for
processors up to 32 cores). Perhaps the customer is consolidating VMs onto fewer hosts with processors
that have more cores (up to 32). Then the number of required licenses for the future might actually
decrease. In other cases, the new environment might feature more total ESXi host processors, and the
customer will need to purchase additional licenses before the migration.
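
The per-processor arithmetic is easy to check with a small example. The host counts below are hypothetical; the rule applied is simply one license per processor, with an additional license for each additional 32 cores.

    import math

    def licenses_needed(hosts):
        # hosts: list of (processors_per_host, cores_per_processor) tuples
        return sum(cpus * math.ceil(cores / 32) for cpus, cores in hosts)

    old_env = [(2, 16)] * 8   # 8 dual-socket 16-core hosts
    new_env = [(2, 32)] * 4   # consolidated: 4 dual-socket 32-core hosts
    print(licenses_needed(old_env), licenses_needed(new_env))   # 16 8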
When customers move workloads to a public cloud, generally the customer pays per cloud instance. The
customer needs to license guest OSs and applications, but not the underlying virtualization infrastructure.
If the cloud migration enables the customer to decommission some on-prem ESXi hosts, the customer no
longer needs as many VMware vSphere licenses.


Understand egress costs for a public cloud solution

Figure 7-4: Understand egress costs for a public cloud solution

Customers migrating workloads to the public cloud should understand all cost implications. Most public
cloud vendors charge egress fees for data stored in the cloud. Customers must pay the egress fee
whenever they access the stored data as well as if they move the stored data back on-prem or to a
different cloud.
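
A quick, hypothetical calculation shows why egress fees belong in the plan; actual rates vary by provider and tier, so the figure below is a placeholder, not a quote.

    egress_gb_per_month = 10_000     # data read back on-prem each month
    rate_per_gb = 0.09               # hypothetical egress fee in $/GB

    print(f"${egress_gb_per_month * rate_per_gb:,.2f} per month")  # $900.00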


Overview of migration technologies

Figure 7-5: Overview of migration technologies

As part of the migration plan, you and the customer will need to select appropriate migration technologies.
The sections below introduce you to the technologies, but you will examine each in more detail
throughout this module.

VMware vSphere vMotion


VMware vSphere offers a native migration technology: vMotion. vMotion allows a powered-on (live) VM to
move from one ESXi host to another ESXi host.
vSphere can implement vMotion automatically as part of implementing other features. When an ESXi
cluster uses Distributed Resource Scheduler (DRS), DRS can decide to move VMs to balance the load
across cluster hosts; DRS uses vMotion to execute the move. In a cluster using Proactive HA—such as
the Proactive HA enabled by HPE OneView for vCenter (OV4VC)—vMotion can move VMs off of a host
that is experiencing issues.
Admins can also execute vMotion manually. As you will learn in more detail later, vMotion not only allows
VMs to move from one host to another, but it can also migrate VMs’ disks to new storage. vMotion can
also work over long distances.
Admins can use vMotion for many types of migration, including data center consolidation or migration to
an upgraded infrastructure. Admins can also use vMotion to migrate workloads to the cloud, from the
cloud back on-prem, or, in some cases, even between clouds.

VMware HCX
VMware HCX is a migration platform well suited for all the migration use cases listed for vMotion. HCX
can use vMotion as well as other technologies to move VMs from a variety of source environments to a
variety of destination environments. It provides many benefits beyond vMotion to make the migration
execute more smoothly, including providing scheduling and handling the networking during the migration.

VMware Converter
VMware Converter enables customers to convert non-VMware machines, which might include physical
machines or third-party VMs, to VMware VMs. VMware Converter also supports converting one type of
VMware VM to another type of VMware VM (such as a VMware Player VM to a vSphere VM).


Zerto, a Hewlett Packard Enterprise solution


Zerto is a leading disaster recovery and ransomware resilience solution. But the same technologies that
provide these data protections also allow customers to move workloads from one environment to another.
Zerto can help customers consolidate their data centers; migrate workloads to upgraded infrastructure;
and migrate workloads to, from, and between clouds.


Live (hot) migration versus cold migration

Figure 7-6: Live (hot) migration versus cold migration

During the planning stage, customers must also look at their service requirements and determine whether
a live (also called hot) migration or a cold migration better suits their needs.

Live migration
Live migration involves moving a powered-on VM. The VM moves with its disks, memory, and state intact.
A common live migration technology is vMotion.
Live migration offers the best option for applications that can tolerate absolutely no downtime. However,
the migration itself might be more complex than a cold migration. The technology needs to synchronize
ongoing changes to the VM and ensure nothing is lost as the VM moves. As you will see, vMotion has
many prerequisites and requirements to ensure that it proceeds correctly. A large-scale live migration
might be more complex than a large-scale cold migration. For example, maintaining clients’ access to
applications during the process might involve extra work; customers might need to establish
some form of overlay networking to ensure VMs in the new location maintain the same IP addresses.
Customers should also be aware that if an issue occurs during the live migration, services might
experience downtime. Admins might need to respond to those issues by “rolling back” the migration (in
other words, returning VMs and services to the source environment). Again, that can create downtime
even though live migration usually imposes no downtime.

Cold migration
In a cold migration, the source VM powers down, which fixes the VM in a known state. Then the migration
occurs. Finally, the VM is powered on at the destination location. Because a cold migration is designed to
include downtime, admins have time to perform supporting tasks for the migration. Customers often
choose cold migration for applications that can tolerate some downtime. (However, a live migration could
also, of course, meet those applications’ needs.)


Prepare for and execute the migration


You will now examine the rest of the migration stages at a high level. This discussion will give you some
foundational knowledge, which you can apply as you then learn about each technology in detail.


Migration documentation

Figure 7-7: Migration documentation

You should carefully document as you plan, test, and execute the migration.
Document every VM’s location in the source environment and in the destination environment.
Make sure that you understand VMs’ disks. Are they VMDKs stored in a VMFS datastore? Are they
vVols? Are they Raw Disk Mappings (RDMs)? The disk type can affect requirements for the migration.
Also document disk sizes to ensure that the destination environment has sufficient capacity. Also
document the storage architecture at the source and destination. Determine whether the virtual disk will
be moving to a new datastore and document that datastore’s name and location. Make sure that the
destination host has access to the required storage.
Document each VM’s current IP address and the plan for that VM’s IP address in the destination
environment. VMs often receive IP addresses dynamically with DHCP. Sometimes admins configure the
DHCP server to bind a particular IP address to a particular MAC address. If the customer is using this
approach, VMs will need to retain their current MAC addresses to retain their current IP addresses. Some
technologies keep the MAC address consistent, but other migration types might not. You should
determine what will happen in your migration. If you cannot determine the MAC address in advance, you
might need to set the IP address statically.
You and the customer should also consider the customers’ backup solution. In addition to backing up
VMs before the migration, you should verify that the backup solution will continue working after the
migration.
You should create and document your testing plan, as well as document the results. You will learn more
about this testing on the next slide.
It is critical to document the migration schedule. Many migrations involve downtime, so the customer will
need to schedule a maintenance window for the outage. They must schedule that window well in advance
and communicate it clearly to all stakeholders.
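
A structured per-VM record helps keep this documentation consistent. The sketch below is one possible shape for such a record, with hypothetical sample values; the fields simply mirror the checklist above.

    from dataclasses import dataclass

    @dataclass
    class VmMigrationRecord:
        name: str
        source_host: str
        dest_host: str
        disk_type: str           # "VMDK", "vVol", or "RDM"
        disk_size_gb: int
        dest_datastore: str
        current_ip: str
        planned_ip: str          # same address, new address, or "DHCP"
        mac_preserved: bool      # needed if DHCP reservations must survive
        backup_verified: bool    # full backup confirmed before migration

    rec = VmMigrationRecord("db-1", "esxi-a.corp", "esxi-b.corp", "VMDK",
                            500, "ds-new-01", "10.1.1.20", "10.1.1.20",
                            True, True)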


Pre-testing and protection

Figure 7-8: Pre-testing and protection

Migrations typically involve too many critical applications for customers to simply execute them and hope
they work. Unexpected issues often crop up, and the only way to plan for and mitigate those issues is to
test the migration in advance.
If possible, you should try moving VMs to the destination environment in advance while leaving the
source VMs running and delivering services. Ideally you should use solutions with features that make
such testing simple. First test that migrated VMs can boot successfully. If not, you might look for issues
with the disk migration. Once you have a booted VM, continue to test it. Does it have the expected
settings, such as all of its disks and the expected IP address? Do the required applications start up
smoothly, or do they encounter any errors, such as failure to reach necessary support services or failed
licensing? Make sure that after the real migration occurs, clients will be able to access the VMs’ services.
Take careful note of any issues and resolve them.
Pre-testing also gives you valuable data about how long the migration will take. You can then schedule
the maintenance window with more confidence.
After you have tested the migration, you can schedule the real thing. You are nearly ready. However,
soon before the actual migration, HPE strongly recommends that you create a full backup of every VM
that you plan to migrate. Do not rely on VM snapshots alone. Create a full backup, which is ideally stored
on a backup solution separate from the VMware environment. Then if something goes wrong during the
migration, you will at least have a backup of the functional VM.
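
Part of this pre-testing can be automated with a basic smoke test, such as the sketch below, which checks that a migrated test VM answers on its expected service ports. The hostname and ports are hypothetical, and a real validation pass would also check application logs and licensing.

    import socket

    def smoke_test(host, ports, timeout=3):
        # Report whether each expected service port accepts connections.
        results = {}
        for port in ports:
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    results[port] = "open"
            except OSError:
                results[port] = "unreachable"
        return results

    print(smoke_test("test-web-1.lab.example", [22, 443, 8080]))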


Migration execution

Figure 7-9: Migration execution

The migration process itself depends mostly on the selected methodology. This module will cover these
methodologies:
• vMotion with optional Enhanced vMotion Compatibility (EVC)
• VMware HCX
• VMware Converter
• Zerto


VMware vSphere vMotion


You will first examine vMotion.


vSphere vMotion

Figure 7-10: vSphere vMotion

vMotion supports many types of migrations. Here you see several examples for a VM with a
VMDK-based disk.

Compute-only migration
vMotion compute migration moves a VM to a new ESXi host. For the destination, you can choose either
an ESXi host or, only if a cluster uses DRS, a cluster. In the latter case, DRS chooses the specific
destination host.
When you choose a compute-only migration, the VM moves to a new host, but its virtual disk remains in
the same location. Therefore, the source and destination host must both have access to the storage with
that disk.
At a high level, the process is:
1. vCenter sends a vMotion request to the destination host and ensures that the host is ready to receive
the VM.
2. vCenter collects information about the VM, such as its configuration file location and disk size, and
communicates that information to the source and destination hosts.
3. vCenter sends a vMotion request to the source host and ensures the host is ready to send the VM. It
also temporarily places the VM’s configuration file in read-only mode.
4. The destination host creates the VM, attaches it to the correct virtual disk(s), and attaches it to the
correct network(s).
5. The source host copies the source VM’s state to the destination host over the vMotion network. The
state consists of the VM’s memory (the largest component) as well as settings such as the VM’s MAC
address. As long as the vMotion network provides sufficient bandwidth, the copy should finish in just
a couple of seconds.
Because the VM is still active on the source host, changes continue to occur in its memory. VMware
handles this issue by copying the memory iteratively. At the beginning of the process, the host installs
page tracers in the guest OS, which track which memory pages are changed (become “dirty”). After
the initial state copy, the source host copies any dirty pages. That copy will take even less time than
the first copy. But a few more pages might become dirty in that time, so the host sends those pages.


The process continues until VMware determines that the copy will take less than 500 ms. The source
host then stuns the source VM to stop any more changes from occurring during that final copy.
Now the destination VM and source VM are converged.
6. The “cutover” from source to destination occurs.
The source host suspends the source VM. The destination host powers on the destination VM.
However, the destination VM does not need to fully boot. Instead, the host directs it to the memory
copied from the source VM.
The vCenter Server registers the destination host as the VM’s host. The source host powers down its
VM, unregisters it, and frees up any resources allocated to it.
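
The iterative pre-copy in step 5 can be modeled with a few lines of arithmetic. The bandwidth and dirty-page rates below are invented for illustration; the only faithful element is the convergence rule, which stuns the VM once the remaining copy is predicted to take less than 500 ms.

    def precopy_passes(memory_mb, bandwidth_mb_s=1000, dirty_mb_s=200):
        to_copy = memory_mb
        passes = 0
        # Keep copying while the remaining data would take over 500 ms.
        while (to_copy / bandwidth_mb_s) * 1000 > 500:
            copy_seconds = to_copy / bandwidth_mb_s
            to_copy = copy_seconds * dirty_mb_s   # pages dirtied meanwhile
            passes += 1
        return passes, to_copy                    # final copy after the stun

    print(precopy_passes(8192))   # -> (2, ~328 MB left for the final copy)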

Storage-only migration
vMotion can also execute a storage-only migration, in which it copies a VM’s virtual disk (or disks) from
one datastore to another. This is sometimes called Storage vMotion. Like compute-only vMotion, Storage
vMotion allows the VM to remain live during the process.
Customers might use Storage vMotion to keep VMs up while they take a storage array down for
maintenance. Or they might use it to migrate from a legacy array to a new array or to redistribute the
storage load across datastores.
The source and destination datastore can be on the same storage location (such as a storage array) or
on different ones. The ESXi host simply needs access to both datastores. You can also choose to place
the disk files and VM configuration file on the same datastore or on different datastores.
At a high level the process is:
1. VMware sets up the file directory with the VM’s files on the destination datastore. (This directory
includes the configuration, log, swap, and snapshot files.)
2. The ESXi host creates a destination VM, which is not yet live. The host needs this destination VM,
even though it already hosts the source VM, because the destination VM will be attached to the new
disk.
3. VMware copies the virtual disk (in this example, a VMDK file) to the destination datastore.
Much as with compute vMotion’s state copy, the virtual disk copy occurs in successively smaller
iterations. The first copy takes the longest. Then VMware copies any changes that occurred during
that copy, and so on. When the data to be copied is small enough, the host briefly stuns the VM to
stop changes and executes the final copy.
4. The “cutover” occurs.
The destination VM becomes live. The source VM is powered down and unregistered. VMware
deletes all the VM files from the source datastore.

Compute-and-storage migration
Sometimes you want to use vMotion to move a VM to a destination host, but the source and destination hosts lack shared storage. In other words, the destination host cannot access the VM’s disks. In that case, you use compute-and-storage migration, which combines the two processes about which you just learned. VMware copies the VM’s disk(s) to the destination datastore and also sends the VM to a new ESXi host.
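
In PowerCLI, the same Move-VM cmdlet covers all three migration types; the parameters you supply determine which one runs. A minimal sketch, again with hypothetical names and an existing vCenter session:

# Storage-only migration (Storage vMotion): relocate the VM's disks to another datastore
Move-VM -VM $vm -Datastore (Get-Datastore -Name "DS-New")

# Compute-and-storage migration: new host and new datastore in one operation,
# for source and destination hosts that lack shared storage
Move-VM -VM $vm -Destination (Get-VMHost -Name "esxi03.example.com") -Datastore (Get-Datastore -Name "DS-New")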


Other examples of vMotion

Figure 7-11: Other examples of vMotion

You can also execute vMotion on VMs with different types of disks.

vMotion with vVols


VMs with vVols support the same options for vMotion as VMs with VMDKs, including compute only,
storage only, and compute and storage. Of course, when you use Storage vMotion, the destination
storage must support vVols.

vMotion with RDM


VMs that use RDM also support the same options for vMotion as other VMs, including compute only,
storage only, and compute and storage. But you should keep some additional guidelines in mind. Storage
vMotion (as part of storage-only or compute-and-storage migrations) only copies the RDM mapping file
from one datastore to another. The destination ESXi host must have access to the RDM LUN itself. In
other words, even when you use compute-and-storage migration, the source and destination host must
both have access to the same storage array.
However, you can convert the RDM to a VMDK. In that case, the destination ESXi host no longer needs access to the RDM LUN, so that host can reside in an entirely different location with access to different storage.
In general, you should avoid using RDMs unless necessary.


Scope for vMotion

Figure 7-12: Scope for vMotion

vSphere vMotion enables customers to migrate VMs to any ESXi host managed by the same vCenter
server, whether that host is in the same cluster, a different cluster, or not in a cluster at all.
With the standard vMotion options (compute only, storage only, and compute and storage), you can even
migrate a VM to an ESXi host managed by a different vCenter server. However, the source and
destination vCenter servers must be in Enhanced Linked Mode or Hybrid Linked Mode. Those modes
require the vCenter servers to be within the same Single Sign On (SSO) domain.
Whether or not the source and destination hosts are managed by the same vCenter server, the hosts might be separated by large physical distances. If you need to conduct vMotion over long distances, you should ensure that the connection between the sites provides a round-trip latency of 150 milliseconds or less.
As of vSphere 7 Update 1c, vSphere also offers Advanced Cross vCenter Server vMotion (XVM). With
XVM, you can migrate VMs to any vCenter server with no need for linked mode or the same SSO domain.
Often companies use this option to migrate VMs to a public cloud provider such as VMware Cloud on
AWS. The XVM option allows you to choose a new destination host and new storage.
Note that some of these vMotion options require different licenses. You will examine the licensing
requirements a bit later.
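
To illustrate a migration between vCenter Servers, the following PowerCLI sketch connects to both servers and moves a VM from one to the other, remapping its network adapter to a port group at the destination. All names are hypothetical, and you should verify the licensing and linked-mode requirements for your vSphere version before relying on this pattern.

# Connect to the source and destination vCenter Servers
$src = Connect-VIServer -Server "vcenter-a.example.com"
$dst = Connect-VIServer -Server "vcenter-b.example.com"

$vm = Get-VM -Name "app01" -Server $src

# Move the VM to a host, datastore, and port group in the other vCenter inventory
Move-VM -VM $vm `
    -Destination (Get-VMHost -Name "esxi10.example.com" -Server $dst) `
    -Datastore (Get-Datastore -Name "DS-B" -Server $dst) `
    -NetworkAdapter (Get-NetworkAdapter -VM $vm) `
    -PortGroup (Get-VDPortgroup -Name "Prod-PG" -Server $dst)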


HPE Peer Persistence to enhance automatic vMotion for disaster recovery

Figure 7-13: HPE Peer Persistence to enhance automatic vMotion for disaster recovery

This module focuses on migration; however, vMotion can also play a role in disaster recovery based on
an ESXi cluster stretched over two sites. If the primary site fails, the VMs come up on cluster hosts at the
disaster recovery data center. Or, as illustrated in the figure, two active data centers provide protection for
each other.
These scenarios require the stretched cluster hosts to have access to the same shared storage. Of
course, the customer wants redundancy for that storage, so both sites have a storage array, which serves
VMs active at the site. HPE storage arrays use replication and Peer Persistence to support this use case.
The volumes used to store VM data are protected by replication. One array is the primary array for the
volume and responds to IO requests for it. It also continuously replicates the volume’s data to the array at
the other site. Peer Persistence enables automatic failover. An array recognizes when the other array is
down, becomes the primary array for the volume, and automatically fails over the ESXi host connections
to the volume. With these technologies the HPE storage arrays help to protect the virtualized services.
To learn more about these technologies, attend the Advanced HPE Storage Solutions course.


vMotion networking best practices

Figure 7-14: vMotion networking best practices

Now that you understand vMotion options at a high level, you will explore best practices for ensuring that
vMotion proceeds without errors.
Remember that vMotion copies a VM’s state over a vMotion network. The ESXi host connects to this network with a VMkernel adapter. When you create a VMkernel adapter, you can choose the functions that it supports. VMware recommends that you dedicate a VMkernel adapter to vMotion only. That adapter will have its own TCP/IP stack with its own IP address. Note that each host supports a maximum of one VMkernel adapter for vMotion.
VMware also recommends that you provide the vMotion VMkernel adapter with a dedicated active and standby NIC. (While not recommended, you could place the adapter on a virtual switch that also supports production networks. In that case, VMware recommends using vSphere Network I/O Control.) For
modern networks, those NICs should support 10 Gbps or 25 Gbps. VMware recommends at least 250
Mbps per concurrent vMotion session. You should also pay attention to latency, particularly for vMotion
across long distances. The RTT should be less than 150 ms. If the vMotion network lacks bandwidth or
has excessive latency, the state copy stage might reach a point where the VM “dirties” memory pages
faster than the pages can copy. Then the source and destination VMs will never reach convergence, and
vMotion will fail.
For the VM state copy, you want to transmit as much data as you can as quickly as possible. You can
reduce network processing overhead in a TCP/IP network by transmitting that data in as few packets as
possible. To do so, increase the IP maximum transmission unit (MTU) above the standard 1500 bytes to
as much as 9000 bytes. It is very important that every network device on the vMotion network has the
same MTU. Otherwise, network devices will detect that packets are too large and drop them. Remember
that on some switches, you need to increase the MTU separately, sometimes called enabling jumbo
frames. The Ethernet MTU must be at least 18 bytes larger than the IP MTU to accommodate the
Ethernet headers. Again, you must enable a consistent MTU across all the devices.
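
As an example of applying these MTU guidelines on the ESXi side, this PowerCLI sketch raises the MTU on a standard virtual switch and on the vMotion VMkernel adapter. The host, switch, and adapter names are hypothetical, and the physical switches in the path must be configured separately.

# Enable jumbo frames on the virtual switch that carries the vMotion VMkernel adapter
Get-VirtualSwitch -VMHost "esxi01.example.com" -Name "vSwitch1" | Set-VirtualSwitch -Mtu 9000 -Confirm:$false

# Raise the MTU on the vMotion VMkernel adapter itself
Get-VMHostNetworkAdapter -VMHost "esxi01.example.com" -VMKernel -Name "vmk2" | Set-VMHostNetworkAdapter -Mtu 9000 -Confirm:$false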
So that vMotion works wherever and whenever you need it, you must ensure that all your hosts can reach
each other’s vMotion IP addresses. They can have Layer 2 or Layer 3 connectivity.
This final guideline applies to the migrating VMs’ network connection, rather than the vMotion network
itself. You need to pay attention to the port group configuration for the VMs’ networks on both the source
and destination network. If the destination host has port groups whose names match the ones assigned to the VM, VMware will automatically use those networks. You must make sure that those same-named port groups actually refer to the same networks. If VMware cannot find matching port groups, the vMotion wizard asks you to select the networks.


Requirements for vSphere vMotion

Figure 7-15: Requirements for vSphere vMotion

You must carefully check that the VMware environment meets VMware requirements for the types of
vMotion that you want to use.
First, the environment must have the correct licenses. The customer must have a vSphere license (any edition) and a licensed vCenter Server to use vMotion at all. All vSphere editions support both vMotion and Storage vMotion. XVM requires a vSphere Enterprise Plus license, as do DRS and HA.
You must verify that vMotion is enabled on both the source and destination hosts.
If you are planning to implement a compute-only migration, make sure that the source and destination hosts have access to the shared storage holding the VM’s disks. Also validate that the SAN is zoned correctly to provide this access.
Also verify that the destination host has sufficient resources to host the VM, including CPU and memory.
If the VM uses vGPU, the destination host must have a GPU with an available vGPU to allocate. In addition, the vgpu.hotmigrate.enabled advanced setting on the vCenter Server must be set to true.
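
One way to check and change that setting is through PowerCLI, because vgpu.hotmigrate.enabled is exposed as a vCenter Server advanced setting. A short sketch, assuming a connected session:

# Read the current value of the vGPU hot-migration setting on the connected vCenter
Get-AdvancedSetting -Entity $global:DefaultVIServer -Name "vgpu.hotmigrate.enabled"

# Enable vMotion for vGPU-backed VMs
Get-AdvancedSetting -Entity $global:DefaultVIServer -Name "vgpu.hotmigrate.enabled" | Set-AdvancedSetting -Value $true -Confirm:$false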
VMware imposes limitations on the number of active vMotion sessions. If you are planning a wide-scale migration, make sure to stay within those limits. Refer to the latest documentation for specific numbers.
When you are implementing vMotion between vCenter servers, but are not using Advanced Cross
vCenter vMotion (XVM), make sure that the vCenter servers meet the requirements. Enable Enhanced
Linked Mode or Hybrid Linked Mode between them. For the link to work, the servers must be in the same
SSO domain. You use Enhanced Linked Mode between on-prem vCenter servers and Hybrid Linked
Mode between an on-prem vCenter server and cloud vCenter server in the same SSO domain. Also
make sure that the vCenter Servers have their time synchronized to a consistent source such as an NTP
server. They do not have to be in the same time zone, but they must have consistent clocks.


Requirements for vMotion without shared storage

Figure 7-16: Requirements for vMotion without shared storage

When you want to run vMotion between hosts without shared storage (compute and storage vMotion),
you must meet all the requirements for vMotion plus additional requirements for Storage vMotion.
Storage vMotion requires VMs’ disks (backed by VMDK files or vVols) to be in persistent mode. (The vCenter UI calls this Independent – Persistent mode.) The default mode for disks, dependent, is intended to support snapshots. In persistent mode, reads and writes to the disk persist regardless of whether a snapshot is reverted. Putting the disk in persistent mode allows VMware to copy the complete disk to the destination. Because this mode changes snapshot behavior, communicate with other admins when you are preparing to use vMotion and plan to place the disk in persistent mode.
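
Before a planned Storage vMotion, you can quickly audit each disk’s mode with PowerCLI. A small sketch, assuming $vm already references the VM in a connected session:

# List each virtual disk with its persistence mode and storage format
Get-HardDisk -VM $vm | Select-Object Name, Persistence, StorageFormat, CapacityGB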
Storage vMotion can also apply to VMs with RDM disks. As discussed earlier, though, Storage vMotion
does not copy the RDM LUN to a new location. It copies the mapping file. The destination host, therefore,
requires access to the RDM LUN. On the other hand, you can convert the RDM to a VMDK file in a
destination datastore.
To support the fastest migration, VMware recommends that you schedule storage-only or compute-and-storage migrations for times of low activity for the workload and storage environment.
For more details on requirements in vSphere 8, refer to this link:
https://docs.vmware.com/en/VMware-vSphere/8.0/vsphere-vcenter-esxi-management/GUID-6068ECD7-E3FA-4155-A326-D996BDBDF00C.html


Common issues that can occur during vMotion

Figure 7-17: Common issues that can occur during vMotion

If vMotion fails, you will need to troubleshoot.


An error near the beginning of the process can indicate that either the source or destination host does not
support vMotion or has Migrate.Enabled set to false.
Also look for connectivity issues on the vMotion network. From the source ESXi host, you can run the vmkping command to the destination ESXi host’s vMotion IP address. For example, the source host’s vMotion VMkernel adapter might be named vmk2, and the destination host’s vMotion IP address might be 10.28.28.4. Use this command:
vmkping -I vmk2 10.28.28.4
If you are using 9000 for the MTU, make sure that the MTU is correct across the network by sending a larger, non-fragmentable ping with a payload of 8972 bytes (9000 minus 20 bytes for the IP header and 8 bytes for the ICMP header). Use this command:
vmkping -I vmk2 -d -s 8972 10.28.28.4
You should also double-check that both ESXi hosts have access to DNS services.
If the destination host lacks sufficient resources, it will not accept the vMotion request. Double-check the
destination host’s resources, following the guidelines about which you learned earlier. Also verify that the
ESXi hosts and vCenter servers take their time from a consistent source.
If you are running into issues with a compute-only migration, the destination host might not be able to reach the VM’s disk(s). Make sure that the destination host has access to the datastore in question. Verify the host’s connectivity to the SAN array (or other storage option). When connectivity seems good, issues with SAN zoning can be the culprit. The destination host needs access to the correct LUNs.
vMotion can also encounter issues when the source and destination hosts have incompatible CPUs. You will learn how to address this issue with VMware EVC in a moment.
This course only has time to suggest some of the most common issues. For a more exhaustive list, refer
to this VMware resource: https://kb.vmware.com/s/article/1003734


VMware EVC

Figure 7-18: VMware EVC

Remember what happens near the end of the vMotion process. The destination VM powers on, but its
normal boot process is bypassed; instead, the VM is linked to the state sent from the source VM. For this
process to work correctly, the destination ESXi host’s processors must be compatible with the processors on the host on which the VM originally booted. That is, the destination and source hosts’ processors must come from the same vendor, and the destination hosts’ processors must be from the same generation as the source hosts’ processors or a newer one.
Often customers want to migrate VMs without having to worry about the particular CPU hardware underlying the environments. VMware EVC (Enhanced vMotion Compatibility) lets them do so. This technology masks newer CPU features so that all hosts present a consistent baseline feature set to VMs. The destination VM can then go live without issues, no matter which CPU generation underlies each host (the hosts’ CPUs must still come from the same vendor).


Guidelines for using EVC

Figure 7-19: Guidelines for using EVC

You can enable EVC at the cluster level. VMware recommends enabling it when you create the cluster; that way, the feature is already available if you need it in the future. You can also enable per-VM EVC. This feature ties the EVC setting to the VM rather than to the cluster alone. The VM then carries the EVC capability with it as it moves between clusters, even if the new cluster does not have EVC enabled. Per-VM EVC helps to ensure that VMs can migrate from newer to older clusters without issue.
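
Enabling EVC is typically done in the vSphere Client, but you can also set a cluster’s EVC baseline with PowerCLI. A sketch with a hypothetical cluster name and an example Intel baseline key; choose the baseline that matches the oldest CPU generation in the cluster:

# Set the cluster EVC mode; the baseline cannot exceed the oldest host's CPU generation
Set-Cluster -Cluster (Get-Cluster -Name "Prod-Cluster") -EVCMode "intel-broadwell" -Confirm:$false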


VMware HCX
You will next learn about the use cases and general capabilities of VMware HCX.


HCX use cases

Figure 7-20: HCX use cases

VMware vSphere vMotion provides a great option for live migrations. However, customers planning large-
scale migrations between environments often want additional features and capabilities. VMware HCX is
VMware’s platform for application migrations of all kinds.
VMware HCX is licensed per processor (socket) in the destination environment. Customers can select the
Advanced edition or the more fully featured Enterprise edition. Refer to the “VMware HCX Licensing and
Packaging Overview” for information about the editions.

Data center consolidation


Customers can use VMware HCX to rebalance VMs and consolidate them on fewer resources. Through
mergers and acquisitions, a company might have acquired multiple data centers. Customers can move
VMs from the acquired data centers to the main data center and save on operational costs.

Infrastructure update
Most customers already have highly virtualized environments. When you sell them an upgraded
infrastructure, you often also need to help them move services from their existing infrastructure to the new
devices.

Migration to the cloud


VMware HCX can also help customers move VMs from an on-prem VMware environment to a cloud one.

Rebalancing
On the other hand, some customers want to rebalance their cloud workloads and bring them back on-
prem. Or they might want to decrease their dependence on one provider by moving some services to
another cloud. VMware HCX can help with these use cases as well.

Disaster recovery and other business continuity use cases


VMware HCX Disaster Recovery helps customers protect their virtualized services against unplanned
downtime. It can protect any VMs managed by vSphere, whether on-prem or in the cloud.


VMware HCX components

Figure 7-21: VMware HCX components

The figure illustrates key VMware HCX components.

HCX Manager and installers


VMware HCX features an HCX Manager appliance at the source and destination site. Depending on the
site role, customers use a different installer for the HCX Manager.
First, consider a scenario in which the customer is migrating VMs from an on-prem vSphere environment
to a vSphere environment in the public cloud. Many public cloud providers, such as VMware Cloud on
AWS, Azure VMware Solution, and Google Cloud VMware Engine, are HCX enabled. That means that
the HCX Cloud Manager is already available in that cloud. Customers can follow the public cloud
provider’s instructions to obtain the HCX Cloud Connector installer (an OVA file). They then install that
Cloud Connector in their on-prem vSphere environment, and the installation will deploy the HCX Manager
appliance.
Customers will need to run the HCX Cloud Manager installer at the destination site in two cases: when the destination cloud is not HCX enabled, or when the destination site is an on-prem vSphere environment. (Note that the HCX Cloud Manager installer is the correct installer for the destination site even though, in the latter case, the destination is not a cloud.) Again, the installation deploys the HCX Manager appliance.
Once each site has its HCX Manager installed, customers can pair the source and destination site.
Customers create a service mesh to define the HCX services that they want to use between the sites.
The source HCX Manager will then automatically deploy the required components as virtual appliances at
both the source and destination site. The sections below briefly describe these components’ roles.

HCX-IX Interconnect Appliance


The IX Interconnect Appliance handles the migration technologies, including replication and vMotion. IX Interconnect Appliances at different sites establish encrypted IPsec tunnels to protect the customer’s data.

HCX WAN Optimization Appliance


The WAN Optimization appliance optimizes connectivity between the sites to help the migration run more
quickly and smoothly. It uses deduplication to reduce the amount of data that must flow over the site-to-site or Internet link. It also uses line conditioning to resolve issues such as packet errors or out-of-order packets.

HCX Network Extension Virtual Appliance


The Network Extension Virtual Appliance uses VMware NSX to establish overlay networks between the sites. These overlay networks enable VMs to migrate with their MAC and IP addresses intact. That goes a long way toward maintaining continuity of services during and after the migration.


VMware HCX migration types

Figure 7-22: VMware HCX migration types

HCX offers several types of migration for different use cases.

Cold migration
In a cold migration, VMs are shut down. HCX uses VMware NFC to replicate the VMs’ disks to the
destination site. It then brings up the destination VMs.
Because VMs are shut down at the beginning of the process, cold migration can impose significant
downtime. VMware recommends cold migration only for non-production workloads that can tolerate this
downtime.

Bulk migration
Bulk migration can work well for many production workloads—as long as those workloads can tolerate a
small amount of downtime.
In bulk migration, HCX replicates the source VMs’ disks to the destination environment while the source
VMs remain active. Customers can choose to begin the switchover automatically after the replication
finishes or to schedule the switchover for a later time. Scheduling the switchover allows customers to
predict when downtime will occur and schedule a maintenance window at that time. For scheduled
switchovers, HCX performs ongoing replication with a target of keeping the destination disks no more
than two hours out of date.
To switch over each VM, HCX powers down the source VM, which prevents further changes during the
migration process. HCX then performs the final synchronization, creates the destination VM, and powers
that VM on. The downtime can be similar to a VM reboot or a bit longer, depending on how much data needs to be synced.

vMotion migration
This form of migration uses vMotion. Therefore, it is well suited for applications that cannot tolerate any
downtime.

Replication Assisted vMotion (RAV)


Customers should use RAV for larger-scale environments to combine the benefits of bulk migration and
live migration.


RAV begins much like a bulk migration. HCX replicates source VMs’ disks to the destination environment.
Source VMs remain active. Again, admins can choose to begin the switchover immediately after the
replication completes or to schedule the switchover. If admins choose to schedule the switchover, RAV
continuously replicates data from the source VMs’ disks to the destination disks to keep the disks in sync.
To conduct the switchover, RAV operates much like vMotion. It migrates live VMs’ state to the destination
environment.


Example network migration process for VMware HCX for VMware Cloud

Figure 7-23: Example network migration process for VMware HCX for VMware Cloud

The figure, sourced from VMware, illustrates how networking remains stable during an HCX migration
from an on-prem vSphere environment to VMware Cloud.
First, the HCX-IX Interconnect Appliance establishes secure tunnels. The source appliance replicates VM
disks to the new environment using one of the methods previously discussed. The HCX Network
Extension Virtual Appliances (labeled L2C in the figure for Layer 2 Connectors) also establish a secure
tunnel between each other. They create overlay networks for the migrating VMs, which they carry over
that tunnel. Initially the on-prem environment continues to handle the routing for the networks extended in
this way.
The figure shows a point during the switchover between the source and target VMs. HCX has switched
over some of the VMs in network 10.1.9.0/24, but not all of them. Therefore, the on-prem environment
continues to handle the routing for this network. If a cloud VM in this network needs to reach any
resources outside of that network, its traffic traverses the tunnel to the on-prem site, where it is routed.
This process adds some latency for the VMs’ traffic, but it will only persist relatively briefly during the
migration.
Each network also has its own switchover. When all the VMs in a network have switched over to the new environment, HCX switches over the network itself. The Network Extension Virtual Appliances disconnect the overlay network. The network is removed from the on-prem site, and the destination SDDC in VMware Cloud takes over routing for the network. Network 10.1.8.0/24 is in this state in the example figure.


VMware vCenter Converter


VMware Converter allows organizations to convert physical machines, third-party VMs, and VMware VMs
to many types of VMware VMs. This section will explain more about this tool.


VMware vCenter Converter overview

Figure 7-24: VMware vCenter Converter overview

VMware vCenter Converter helps non-VMware customers more easily migrate to a VMware environment.
It can convert machines to any VMware VM type, including:
• VMware ESXi
• VMware Workstation
• VMware Workstation Player
• VMware Fusion
For enterprise use cases, you will usually convert customers to VMware ESXi.
You can convert VMware VMs of other types to VMware ESXi, but, even more useful, you can convert
third-party VMs to VMware. As of the release of this course, Hyper-V was supported. For example, a
customer might have decided to move from Microsoft Hyper-V to VMware. Or perhaps mergers have left
the customer with heterogeneous environments, and the customer wants to standardize on VMware. You
can convert the Hyper-V VMs to VMs capable of running on VMware ESXi hosts.
While many customers are now highly virtualized, a few might still be in the process of converting.
VMware vCenter Converter can convert physical Windows or Linux servers to VMware VMs.
Finally, users might have a system image that they want to run as a VM on VMware vSphere. VMware
vCenter Converter can convert that image to a VMware VM.
For a full list of supported sources and destinations, refer to VMware vCenter Converter documentation.


VMware vCenter Converter components

Figure 7-25: VMware vCenter Converter components

VMware vCenter Converter consists of three main components. VMware vCenter Converter Standalone
Server provides the conversion functionality. Admins access the server’s UI with a VMware vCenter
Converter Standalone Client.
The VMware vCenter Converter Standalone Agent runs on the source that you want to convert. The
VMware vCenter Converter Standalone Server automatically installs the agent on the source when you
begin the conversion. Agents are only required for converting Windows machines. For Linux machines,
VMware vCenter Converter Standalone Server creates a “helper” VM on the destination ESXi host. It then copies the source Linux machine to that helper VM.


VMware vCenter Converter process

Figure 7-26: VMware vCenter Converter process

When VMware vCenter Converter converts a machine, it clones the machine and configures it properly
for VMware. It sets the destination VM’s vCPU and memory allocations, disks, and networks. You can
also tell VMware vCenter Converter to perform tasks such as installing VMware Tools and customizing
guest OS settings.
VMware vCenter Converter can convert powered-on Windows or Linux machines, whether those machines are physical or virtual; this is called a hot conversion. Refer to VMware vCenter Converter documentation for a list of prerequisites for the powered-on machine, such as turning off antivirus software, firewalls, Windows User Account Control (UAC) and Remote UAC, and simple file sharing. Hot conversions use volume-based cloning, in which VMware vCenter Converter copies the source machine’s volumes to the destination machine at either the file level (Windows or Linux) or block level (Windows only). It converts all volumes to basic volumes except LVM2 logical volumes, which it can preserve. When you set up the conversion job, you can select options such as which volumes you want to copy and the destination ESXi host datastore to which you will copy them.
VMware vCenter Converter can also convert powered-off VMs. (It does not support converting powered-off physical machines.) Powered-off conversions can use disk-based cloning or, for Windows guest OSs only, volume-based cloning. Disk-based cloning copies the source VM’s disks and volume metadata to the selected destination (such as an ESXi host datastore).
After cloning, the converted VM will resemble the source OS in almost all ways. It will have the same OS
settings such as computer name, Windows SID, and user profiles. It will have the same data, including all
applications and files. It can even preserve the disk partitions and volume serial numbers.
But the converted VM will have new underlying hardware, from CPU to USB adapters, graphics cards, disk controllers, hardware disks and partitions, and network adapters. These changes might affect some applications. For example, the guest OS’s NICs will have new MAC addresses, which would affect licenses tied to MAC addresses. As part of the conversion process, you can configure guest OS options and change the guest OS’s MAC address.
When you run a conversion job, you also choose how the job concludes. You can have VMware vCenter
Converter automatically shut down the source and switch over to the destination after the conversion.
Usually customers will want that behavior, allowing the converted VM to take over operations. However,
they could also choose to keep both the source and converted VM running. (The conversion process
does not affect the source’s functionality.) In this latter case, though, customers must put the machines on
different isolated networks or change the name, SID, and IP address of one of the machines. Otherwise, conflicts will occur.


Zerto, a Hewlett Packard Enterprise company


The same technologies that make Zerto stand out in the data protection industry enable Zerto to migrate
VMs between environments. In this topic, you will learn about migration use cases supported by Zerto
and how Zerto works.


Zerto use cases

Figure 7-27: Zerto use cases

Zerto offers industry-leading ransomware resiliency and disaster recovery. It helps customers
experiencing catastrophic issues, such as an entire failed data center, resume services with little lost data
and a speedy recovery time.
However, this module focuses on Zerto’s mobility benefits. Largely vendor-agnostic, Zerto enables
customers to migrate many types of workloads to many types of destination environments. For example,
you can migrate VMs from one vSphere environment to another, or you can migrate Hyper-V VMs to
vSphere. You can migrate both Hyper-V and vSphere VMs to many public cloud services, and vice versa.
Refer to the “Interoperability Matrix for All Zerto Versions” to find supported source and destination
environments.
Zerto can migrate most VMs in supported environments, but some limitations do exist. (For example,
Zerto cannot replicate data for VMs with no disks.) For a full list of VMs that cannot be replicated, refer to
Zerto's online help. Also note that this course will focus on Zerto for VMs, but you and your customers
should be aware that Zerto supports other types of workloads as well.

Infrastructure refreshes
Customers can use Zerto to migrate their existing VMware vSphere environment from legacy storage
arrays or compute servers to modern storage or compute. They can also migrate from Hyper-V to
VMware vSphere as part of the refresh.

Data center consolidation


Customers can use Zerto to rebalance workloads across a data center. They can deal with mergers and
acquisitions by consolidating workloads onto less infrastructure. Zerto helps them to handle just about
any use case that involves migrating virtualized workloads from one on-prem location to another.

Cloud migration
With Zerto, customers can more easily adopt public cloud services, migrating the virtualized workloads to
that cloud. Zerto supports many common public cloud vendors as well as common technologies for
private clouds. Zerto can also help customers migrate from a managed service provider (MSP) to a
private cloud or even rebalance workloads from the cloud back to an on-prem environment.


Hybrid and multi-cloud migrations


Zerto also helps customers to move workloads between multiple public clouds or between an on-prem
environment and a public cloud.


Zerto for VMs architecture and components

Figure 7-28: Zerto for VMs architecture and components

Zerto continuous data protection (CDP) continuously replicates data from a source to a destination. This
technology forms the foundation of Zerto’s ransomware resilience and disaster recovery features, but also
enables Zerto to migrate workloads from a source site to a destination site.

Zerto designates the source site as the “protection site” and the destination site as the “recovery site”; these terms make sense for data protection use cases. For clarity in this discussion, however, this
course will continue to call these sites the source and destination sites. In this example, both the source
and destination sites feature on-prem virtualization platforms. The sections below describe the
components of the Zerto architecture.

Zerto Virtual Manager (ZVM)


A Zerto Virtual Manager (ZVM) helps to orchestrate the solution. A ZVM is required for each source or
destination environment. (An environment might correspond with a site, as shown in the figure, but it does
not have to.) The ZVM runs as a Windows service on a dedicated VM with a Windows OS.
The ZVM handles all aspects of the solution except the actual replication of data. It integrates with the
hypervisor manager for the protected or recovery environment. That hypervisor manager is either
VMware vCenter or Microsoft SCVMM, depending on the hypervisor vendor. The ZVM collects the
inventory of hosts, VMs, virtual disks, and networks. It can then furnish that information to the Zerto GUI, allowing users to choose the VMs that they want to protect and perform other such actions. The ZVM also keeps
track of changes in the environment. For example, if vMotion or Hyper-V Live Migration moves a VM to a
new host, the ZVM learns the new location of the VM and updates the UI.

Virtual Replication Appliance (VRA)


You must install a Virtual Replication Appliance (VRA) on each host with VMs that you want to migrate.
You must also install a recovery VRA on each host to which you want to migrate VMs. The VRAs handle
the actual replication of data. You will learn more about the replication technology in a moment.
Based on VMware limitations (4 SCSI controllers per VM and 15 SCSI targets per controller), a recovery
VRA is limited to 60 SCSI targets. Each virtual disk is one SCSI target. However, Zerto can scale to
migrate more than 60 virtual disks to a target host. Zerto automatically spins up Virtual Replication Appliance Helpers (VRA-H), which connect to the additional disks but consume no IP addresses and almost no other resources.


Other Zerto for VMs architectures

Figure 7-29: Other Zerto for VMs architectures

These example architectures use a cloud for the destination.


In the example on the left, the customer is migrating on-prem VMs to the cloud. The source site has a
ZVM and VRAs as usual. The cloud recovery site requires a Zerto Cloud Appliance (ZCA). The ZCA is a
public cloud instance that runs both the ZVM and VRA as services. The ZCA receives the replicated data
and stores it in cloud storage. It also creates the journal used during replication. You could use a similar
architecture for a reverse migration, moving VMs from the cloud to an on-prem environment.
The example on the right illustrates migration between two clouds. For this scenario, both clouds use
ZCAs.


Virtual Protection Groups (VPGs)

Figure 7-30: Virtual Protection Groups (VPGs)

You will now look at how Zerto migrates VMs in more detail. You must first understand Virtual Protection Groups (VPGs). When using Zerto for data protection, customer admins create VPGs to define the VMs that they want to protect and the protection settings for those VMs. Similarly, they can use VPGs to define the groups of VMs that they want to migrate together.
VPGs help to empower application-centric migration for complex and multi-tiered applications. For example, one VPG might include all the front-end servers, middleware, and the database used by an Enterprise Resource Planning (ERP) application. Another VPG might include all the VMs related to a Customer Relationship Management (CRM) application. The protected VMs can reside on the same or different hosts and still be part of the same VPG.
In addition to defining the source VMs, the VPG supports a smoother migration by defining the boot order
for the VMs, and the MAC and IP addresses for destination VMs.
Admins can create three types of VPGs, two of which are intended for data protection. A Data Mobility
and Migration VPG enables mobility and migration use cases. It defines the source VMs and the
destination site to which to replicate them. After the initial replication is complete, admins can apply a
Move workflow, which migrates the VMs to the destination site.
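
Zerto also exposes VPGs through the ZVM’s REST API, which can be useful for auditing VPG status at scale before a migration. The sketch below reflects Zerto’s published API conventions (TCP port 9669, a session endpoint that returns an x-zerto-session header, and a /v1/vpgs collection); treat the exact port and paths as assumptions to verify against the API documentation for your Zerto version.

# Authenticate to the ZVM and obtain a session token (hypothetical host name; PowerShell 7+)
$zvm = "https://zvm.example.com:9669"
$resp = Invoke-WebRequest -Uri "$zvm/v1/session/add" -Method Post -Credential (Get-Credential) -Authentication Basic -SkipCertificateCheck

# Zerto returns the session token in the x-zerto-session response header
$headers = @{ "x-zerto-session" = [string]$resp.Headers["x-zerto-session"] }

# List all VPGs and their status
Invoke-RestMethod -Uri "$zvm/v1/vpgs" -Headers $headers -SkipCertificateCheck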


How Zerto for VMs works: Initial sync from source site to destination site

Figure 7-31: How Zerto for VMs works: Initial sync from source site to destination site

After admins create a VPG, the Zerto solution implements replication, beginning with the synchronization
phase. The source VRA and destination VRA establish an HTTPS session. The source VRA replicates all
the existing data for the VMs' virtual disks to the recovery VRA, which stores it on mirror disks. The
source VMs must be powered on during this process to create an active IO stack. During synchronization,
admins cannot execute other actions on the VRA.
Zerto supports pre-seeding to reduce the time and bandwidth impact of the synchronization process. That
is, admins first send data to the destination site in another way. Perhaps they physically transfer drives to
the recovery site. Then Zerto can compare a source VM's virtual disk to the virtual disk already at the
recovery site and only synchronize the differences.


How Zerto for VMs works: Ongoing sync

Figure 7-32: How Zerto for VMs works: Ongoing sync

After the initial synchronization, CDP with journaling begins. The source VRA detects any new write to
protected VMs. It sends the write to the recovery VRA, which records the write in a journal. These
journals allow the destination environment to stay in sync with the still powered-on source VMs.
Note that Zerto checks that the destination VRA will have enough capacity for this journal before Zerto begins the replication process. The datastore in which the VRA stores the journal must have either 15% of its total capacity free or 30 GB available, whichever is smaller. (For example, a 100 GB datastore needs 15 GB free, while a 1 TB datastore needs only 30 GB free, because 15% of 1 TB would be 150 GB.) This requirement helps to ensure that the VPG’s journal cannot completely fill the storage, which would freeze the VPG and end replication for the VMs within it.


How Zerto for VMs works: Migration options

Figure 7-33: How Zerto for VMs works: Migration options

The VPG keeps the source and destination virtual disks in sync, but the migration will not occur until
admins apply a workflow for cutting over to the destination.
Zerto supports two cutover methods. The Live Failover option switches over from a powered-on VM; this
option is typically intended for recovering from unplanned events. The Move option, which is more typical
for migrations, powers down the source VM during the migration. This locks VMs in a set state during the
migration but does add some downtime.


Prepping for the move

Figure 7-34: Prepping for the move

You should help customers prepare before applying the Move workflow.
First, verify that replication occurs on the VPGs without any errors.
Also double-check that the destination environment has the required capacity for the migrating services. Pay close attention to all the considerations in the “Plan the migration” and “Prepare for and execute the migration” topics.
Make a plan for the networking switchover as well. You can define the correct IP addresses for
destination VMs within the VPG. However, Zerto itself does not handle network migrations. For example,
if customers want the destination VMs to reside on the same subnets as the source VMs, they must
establish overlay routes with a technology such as NSX. Network architects must also make a plan for
migrating the routing path out of those overlay networks from the source to the destination site. Discuss
plans with the network architects and coordinate.


Pre-Move testing

Figure 7-35: Pre-Move testing

HPE strongly encourages customers to test before initiating the Move workflow. They can then ensure
that their migrated services will continue to work correctly. They can also gather real-world data on how
long the migration will take so that they can schedule an outage during a maintenance window.
Zerto makes this testing easy. You simply run a Failover Test workflow on the VPG with the VMs that you
want to test. The test will bring up the destination VMs while leaving the source VMs still running. For the
test, you can choose to use the same MAC and IP addressing specified in the VM for the move or
different ones just for the test. In that way, you can choose the correct settings for the current point in the
network switchover process. You also select the “checkpoint” for the test. The checkpoint specifies the point in time for the source VMs that you want to capture in the failover test. You might choose the most recent checkpoint (which will be mere seconds ago); keep in mind, though, that the test will not capture any changes after that point.
After you start the test, Zerto creates the VMs at the destination site, applying the specified MAC and IP
addressing. It boots the VMs in the order indicated in the VPG. The destination VRA also “promotes”
changes recorded in journals to VMs’ disks so that the VMs are synched to the checkpoint you selected.
You can then log into the VMs. Verify that they booted correctly. Then perform other checks. Did
applications and services start correctly? Did any errors occur? Can VMs reach each other and other
necessary services? Test as much functionality as you can to eliminate unpleasant surprises later.
Record any issues and make sure to resolve them before the final migration.
When you are finished, stop the failover test. Zerto then powers down and deletes the destination VMs.


Zerto Move process: Initial steps

Figure 7-36: Zerto Move process: Initial steps

Now you will examine what happens when you apply the Move workflow to a VPG and start the migration.
The high-level process includes these steps:
1. Zerto gracefully shuts down the source VMs to fix the VMs in place and ensure data integrity for
applications, files, and services.
Zerto uses VMware Tools or Hyper-V Integration Services to execute the shutdown, so make sure that you have made these tools available. Alternatively, you can shut down the VMs gracefully before the migration. You can choose an option in the Move workflow to forcibly power down VMs, but HPE generally recommends against this option.
Also note that Zerto automatically turns HA off on source clusters to prevent clusters from trying to
bring VMs back up.
2. Zerto creates clean checkpoints for the source VMs.
3. The source VRA replicates writes from the latest checkpoints to the destination VRA, which records
them in VMs’ journals.
4. Zerto creates the destination VMs with the settings (such as MAC address and IP address) defined in
the VPG. It attaches the VMs to their disks.
5. It powers on the destination VMs.
These steps complete the pre-commit process.


Zerto Move process: Committing the Move

Figure 7-37: Zerto Move process: Committing the Move

At this point, the migration remains in a non-finalized state. You can still roll it back to revert services to
the source site. The move must be committed to finalize the transition.
When you initiate the Move workflow, you choose one of three options:
• Auto-commit—After a configurable timeout, Zerto automatically commits the move. This is the default
option.
• Auto-rollback—After a configurable timeout, Zerto automatically rolls back the move.
• None—Zerto does not take any automatic action. You must choose what you want to do and then
manually commit or roll back the move.
When the move is committed manually or automatically, these steps happen:
6. Zerto deletes the source VMs. (You can alternatively choose to keep the source VMs, but typically
you delete them.)
7. Zerto promotes data in the journals to the destination VMs’ virtual disks. The VMs are fully operational at the correct checkpoint while this process occurs. The VRA automatically directs reads to the journal or the virtual disk as required.
8. You can optionally choose to reverse replication. In that case, the VRA at the destination site
becomes the source VRA. It replicates data in the new VMs’ virtual disks back to mirror disks and
journals at the source site. You might choose this option so the customer retains the ability to “roll
back” services even though the move has been committed. (Admins would need to initiate a new
Move for the “rollback.”) Customers can also use this option to retain the source environment as a
data protection environment.


Zerto benefits for migration

Figure 7-38: Zerto benefits for migration

As you have seen, Zerto helps to simplify complex migration processes. This vendor-agnostic solution
helps customers to automate and orchestrate migration to and from a broad range of on-prem and cloud
environments. VPGs make it easier for customers to migrate groups of interconnected VMs together and
then boot those VMs in the correct order. Zerto’s pre-testing options help customers to resolve issues
before they cause unplanned downtime and thus migrate critical services with more confidence.
Reversing synchronization at the end of the migration process can further reduce risk. Customers can
use the former environment as a data protection environment and retain the ability to roll back to that
environment if desired.


Zerto licensing options

Figure 7-39: Zerto licensing options

Migration capabilities are included with the same Zerto licenses that entitle customers for ransomware
resiliency and disaster recovery:
• Perpetual or subscription-based Zerto Data Protection
• Perpetual or subscription-based Zerto Enterprise Cloud Edition (ECE)
However, if customers only want the migration capabilities and not ongoing data protection, they can
choose the Zerto Migration License as a more cost-effective option. This license is only valid for 180
days. If, near the end of the term, customers decide that they want to obtain the ransomware resiliency
and disaster recovery capabilities, they can upgrade to another license.
For all of these licenses, customers require one license per protected (source) VM. Zerto offers packs of
various sizes.


Activity 7

Figure 7-40: Activity 7

For this activity you will return to the scenario presented in modules 1-4 with the company Financial
Services 1A. Financial Services 1A consolidated services in a VMware vSphere deployment about 12
years ago and has one primary data center and a disaster recovery site. The primary data center has 30
VMware hosts in six clusters running a variety of workloads. Throughout this course, you have designed a
solution for the customer. Now you will help the customer plan the migration from the legacy environment
to the proposed environment.
For this activity, you do not need to make a full plan. Instead, consider the methodologies presented in
this module. Which would you recommend for this scenario? (There could be more than one correct
answer.) Explain your choice and any key considerations for your plan.

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________


__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________


Summary
HPE Partners can add value for their customers by helping them migrate from their legacy environment to
their new HPE-based environment. They can also honor customers’ hybrid cloud strategy and help them
migrate workloads to, from, and between clouds. You now understand many of the factors at play in these
migration processes. You also understand the technologies and HPE solutions, such as Zerto, that can
help facilitate the migration.


Learning checks
1. A customer is using vMotion, but not Advanced Cross vCenter vMotion. What is a requirement?
a. The source and destination hosts must be at the same site.
b. The source and destination hosts must be managed by the same vCenter Server.
c. The source and destination hosts must be managed by the same vCenter Servers or vCenter
Servers with licenses owned by the same organization.
d. The source and destination must be managed by the same vCenter Servers or vCenter Servers
joined by Enhanced Linked Mode.
2. You are helping a customer use VMware HCX. The customer needs to migrate a very large number
of VMs, which cannot tolerate any downtime. Which option should you select?
a. Bulk migration
b. Cold migration
c. HCX vMotion
d. HCX RAV
3. You are using Zerto to migrate VMs, and you apply the Move workflow to a VPG. What step does
Zerto perform first?
a. Creating a new checkpoint for the source VM
b. Powering up the target VM
c. Powering down the source VM
d. Promoting data in the journal to the target VM
4. What are two benefits of using Zerto for migration? (Select two.)
a. It provides options for pre-testing.
b. It embeds network overlay technology inside it.
c. It is a free solution for customers with HPE storage arrays.
d. It is capable of moving VMs to and from a large variety of on-prem and cloud environments.



Troubleshoot VMware-Based Infrastructure Solutions
Module 8

Learning objectives
You can demonstrate your value as a trusted advisor by helping customers troubleshoot their virtualized services. You can combine VMware tools with HPE tools to zero in on root causes and address issues before they become widespread. By the time that you complete this module, you will be able to:

• Given a scenario, troubleshoot a client's VMware-based issue using the appropriate tool and/or
process
• Given a scenario, identify the steps to remediation and create an action plan
• Given a scenario, describe why a VM might not work after migration
• Given a scenario, describe how to implement preventive measures and remediate based on an action
plan


Troubleshooting tools
In this section, you will learn about several useful HPE and third-party troubleshooting tools.


Overview of solutions that help you troubleshoot

Figure 8-1: Overview of solutions that help you troubleshoot

Because you never know when an issue will occur, you must have tools in place to gather information
and give you insights. Throughout this course, you have learned about HPE tools that help you monitor
VMware environments, as well as VMware tools with which HPE integrates. Those same tools can help
you troubleshoot. Many of these tools not only provide rich information, but they also offer intelligent
analysis and remediation recommendations. Over the next pages, you will review how to use these tools.
You will also learn how to use VMware’s esxtop and resxtop tools.


Requirements for using HPE InfoSight

Figure 8-2: Requirements for using HPE InfoSight

HPE InfoSight supports devices across the HPE compute and storage portfolios. If customers want to
gain the benefits of HPE InfoSight, they must take just a few simple steps to register the solutions. For all
of these processes, the customer must obtain an HPE InfoSight organization. HPE generally recommends that a customer use a single HPE InfoSight organization.

Registering HPE compute solutions


To register compute solutions such as HPE ProLiant servers and HPE Alletra 4000 Systems in HPE
InfoSight, customers need an HPE Passport account. They must also install the iLO Amplifier Pack on a
VM and discover their servers with the iLO Amplifier Pack.
Then they log into HPE InfoSight with their HPE Passport account and obtain a claim token from a device
enrollment page. They can then return to the iLO Amplifier Pack and link it to HPE InfoSight using the
claim token.
For a list of supported compute solutions and step-by-step configuration instructions, refer to the HPE
InfoSight for Servers Getting Started Guide.

Registering HPE Alletra 5000 and HPE Alletra 6000 storage arrays
During the sales process for HPE Alletra 5000 and HPE Alletra 6000 storage arrays, HPE provides the
customer with an HPE InfoSight organization and adds the arrays to it. For more information, refer to the HPE InfoSight
User Guide for HPE Alletra 6000, HPE Nimble Storage.

Registering HPE Primera and HPE Alletra 9000 storage arrays


Customers require an HPE Passport account to log into HPE InfoSight. Within HPE InfoSight, they can
obtain a claim token in the device enrollment page for HPE Primera and HPE Alletra arrays.
When configuring the array, admins must enable Remote Support. For HPE InfoSight to work, the array
must be able to periodically “call home” to HPE. Then, in the telemetry support settings, admins should
choose to send data to HPE InfoSight. They should follow the prompts to configure HPE InfoSight,
specifying the claim token.
For step-by-step configuration instructions, refer to the HPE InfoSight User Guide (HPE Alletra 9000,
Primera, 3PAR).

Requirements for HPE InfoSight Cross-Stack Analytics for VMware

Figure 8-3: Requirements for HPE InfoSight Cross-Stack Analytics for VMware

As you learned in Module 2, HPE InfoSight offers Cross-Stack Analytics for VMware. This feature gives
customers greater insight into VMs’ performance and the reasons behind performance degradations. But
to obtain these benefits, customers must take extra steps beyond just registering their HPE products in
HPE InfoSight. First, make sure customers understand that HPE InfoSight only supports this feature
when the VMware environment uses HPE Alletra or Primera arrays. Then explain the requirements based
on the type of array.

Requirements for HPE Alletra 5000 and HPE Alletra 6000 storage arrays
First install the HPE Alletra and HPE Nimble Storage plug-in for VMware. Then you can configure
VMware settings in HPE InfoSight and enable Cross-Stack Analytics.

Requirements for HPE Primera and HPE Alletra 9000 storage arrays


For HPE Primera and HPE Alletra 9000 storage arrays, install the VMware Storage Integration Pack.
Then configure the VMware settings in HPE InfoSight.

Using HPE InfoSight for troubleshooting

Figure 8-4: Using HPE InfoSight for troubleshooting

As you learned earlier in this course, HPE InfoSight gathers a vast breadth and depth of information.
When you are troubleshooting an issue, you can draw on that information. You can view systems’ health
and find malfunctioning components. You can view performance metrics to help you find under-
provisioned resources.
Even better, HPE InfoSight helps you make sense of this information. It can be difficult to know whether
the values you are seeing are normal or not. You might struggle to translate metrics into a diagnosis on
your own. But HPE InfoSight automates most of the investigation. Its root cause analysis points you
toward the probable cause of an issue. And HPE InfoSight does not leave you wondering about next
steps. You can follow its actionable recommendations to remediate issues quickly.
For example, admins often struggle to determine why workloads experience storage latency because the
root cause can lie in many areas from the application to the guest OS to the hypervisor to the storage
network to storage arrays. HPE InfoSight analyzes the situation to help you find the right root cause.
Cross-Stack Analytics for VMware provides key insights for VMware environments. With it you can
identify “noisy neighbors” that interfere with other VMs. You can pinpoint VMs experiencing latency issues
and identify the source of the latency. Again, HPE InfoSight Cross-Stack Analytics not only helps admins
find issues, but also provides diagnoses of the issues and remediation recommendations.

Using HPE plug-ins for VMware for troubleshooting

Figure 8-5: Using HPE plug-ins for VMware for troubleshooting

You learned about the many plug-ins that HPE offers for VMware vSphere in Module 2. Take a moment
to review how these plug-ins can help you troubleshoot.

HPE OneView for vCenter (OV4VC)


This plug-in applies to customers using HPE Synergy or managing their HPE servers with HPE OneView.
It provides monitoring and alerts for the servers within the VMware vSphere environment. Admins can use
its Cluster Consistency Checks to find servers with the wrong firmware or settings. By standardizing cluster
hosts in this way, customers can stave off future problems. Network Diagrams, which show how virtual
networks map to the physical network, come in handy for troubleshooting connectivity issues.

HPE Content Logs for VMware vRealize Log Insight


This plug-in adds HPE hardware logs to VMware Aria Log Insight (formerly vRealize Log Insight). You
can then search for logs relevant to both the virtualization platform and the underlying infrastructure from
within the same UI. VMware Aria Log Insight also provides analytics for the logs.

HPE Alletra and HPE Nimble Storage Plug-in for VMware & HPE Storage
Integration Pack for VMware
In Module 3 you learned how these packs enable admins to execute many storage management tasks
directly from VMware vCenter. They provide performance and space information to VMware vCenter.
Remember that these packs are also prerequisites for using HPE InfoSight Cross-Stack Analytics for
VMware.

HPE OneView for VMware vRealize Operations & HPE Storage Management Pack
for vRealize Operations Mgr
VMware Aria Operations Manager (formerly vRealize Operations Manager) helps customers more quickly
identify and remediate issues in the VMware vSphere environment. Because issues can originate from
anywhere in the infrastructure stack, Aria Operations Manager becomes more intelligent when it has
access to information about the physical infrastructure. These HPE plug-ins provide that information. You
can then use Aria Operations Manager to proactively identify and remediate emerging issues with
performance and capacity. You can also more easily identify and remediate configuration issues that
might cause future problems.

Using HPE GreenLake for Compute Ops Management for troubleshooting

Figure 8-6: Using HPE GreenLake for Compute Ops Management for troubleshooting

When customers manage HPE ProLiant servers in HPE GreenLake for Compute Ops Management,
admins can monitor and troubleshoot servers across multiple sites from the cloud. They can check the
status of servers and their components. They can also configure alerts and notifications that call their
attention to issues before support calls start pouring in. Customers also have access to FAQs and a
Community Support Forum. Even better, HPE GreenLake for Compute Ops Management always comes
with support from HPE Pointnext Services.
HPE GreenLake for Compute Ops Management can help customers gain insights into their environment
with analytics, detailed reports, and other logs. If customers need to work with HPE support during an in-
depth troubleshooting process, they can turn on enhanced logging.

HPE CloudPhysics

Figure 8-7: HPE CloudPhysics

You have learned about using HPE CloudPhysics to assess and size VMware environments. You can
also use this solution to run best practice checks. If you are setting up a solution for a customer, you
might run the check after completing the configuration to double-check your work. You can also use this
feature when troubleshooting to find out whether a misconfiguration might be causing the issue.
When you run the check, HPE CloudPhysics assesses the physical infrastructure running VMware
vSphere and verifies that infrastructure devices are configured to VMware’s specifications. It then makes
recommendations to bring any misconfigurations into alignment with VMware requirements.

VMware esxtop and resxtop

Figure 8-8: VMware esxtop and resxtop

VMware’s esxtop/resxtop utilities can help you with in-depth troubleshooting. These utilities collect
performance metrics on ESXi hosts, including statistics for CPU, memory, and storage. They collect
metrics for the host as a whole and for individual VMs and processes on the host.
The main difference between the utilities lies in how you run them. You must run esxtop locally from the
ESXi Shell of the ESXi host that you want to analyze. You run resxtop on a Linux machine to collect
metrics from remote ESXi hosts.
When you run esxtop/resxtop you select the mode. In interactive mode, the utility outputs the resource
utilization to the console, updating in real-time. You can then look through the metrics in that display. If
you want to collect the metrics in a file to analyze later or to hand off to other support team members, you
can use batch mode. That mode collects the information in a file. Only the esxtop utility also supports
replay mode. To use that mode, you must first run the vm-support command to capture a performance
snapshot. Then you run esxtop in replay mode to replay the metrics collected in that snapshot.
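As a hedged sketch of the command syntax, the examples below show how you might invoke each mode.
Exact options can vary by ESXi release, so confirm them with each utility’s built-in help; the host name,
user name, sample counts, and file and directory names here are placeholders.

    esxtop                                       # interactive mode, run locally in the ESXi Shell
    esxtop -b -d 5 -n 120 > perf-capture.csv     # batch mode: 120 samples at 5-second intervals
    resxtop --server esxi01.example.com --username root    # interactive mode against a remote host
    vm-support -p -d 60 -i 10                    # collect a performance snapshot for replay
    esxtop -R <extracted-snapshot-directory>     # replay the metrics from that snapshot

In interactive mode, single-key commands switch panels, for example c (CPU), m (memory), n (network),
and v (VM storage). Batch-mode output is CSV, so you can open it in a spreadsheet or in Windows
Performance Monitor for offline analysis.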

Analyzing esxtop and resxtop CPU output

Figure 8-9: Analyzing esxtop and resxtop CPU output

You will now explore some tips for analyzing the esxtop/resxtop output. This output consists of several
“panels.” The figure shows the CPU panel. To learn how to navigate through the panels in interactive
mode, refer to VMware vSphere documentation on “Performance Monitoring Utilities: resxtop and esxtop.”
As with all panels, the CPU panel begins with a line indicating the time, host uptime, and the number of
worlds, VMs, and vCPUs. A world refers to a unit that the VMware hypervisor needs to schedule. It is
roughly equivalent to a process or thread.
The first line also shows the CPU load average. You will see three numbers for the load; each shows the
load average over a different time period—one minute (left), five minutes (middle), and 15 minutes (right).
For this metric, 1.00 = 100%, 0.99 = 99%, and so on. The utilization can be more than 1. Usually, you
want the utilization to be around 1, indicating that the CPUs are fully utilized but not too oversubscribed. If
the utilization is 2 or over, the host’s processors are very oversubscribed, and you should add more CPU
resources or move some of the VMs to another host.
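As a brief worked example with hypothetical numbers: a one-minute load average of 1.25 means that
runnable worlds demanded 125% of the host’s physical CPU capacity over the last minute, so roughly one
fifth of that demand had to wait for a CPU; that level of oversubscription is worth investigating.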
The PCPU USED and PCPU UTIL lines break down the utilization per physical core (no Hyperthreading)
or logical core (Hyperthreading). Each line shows the statistic for each core and then the average across
all cores. The PCPU USED metric reports the percentage of CPU cycles used. The PCPU UTIL metric
reports the percentage of time the core was not idle.
The CPU panel also shows more detailed statistics for groups of worlds. A VM is a group of worlds (the
processes running on that VM), and so is a resource pool. If the world is not part of a VM or resource
pool, it has its own group statistics. For simplicity, the discussion below refers to VM only, but keep in
mind that the same descriptions apply to worlds grouped in other ways. For these metrics 100 = 100%.
Some of the most relevant metrics for troubleshooting include:
• %USED—The percentage of physical core cycles used by this VM. (The percentage is of one
physical core, so it can be higher than 100%.) The %USED metric includes the percentage of CPU
cycles scheduled for the VM’s worlds to run (%RUN) minus any system time spent on resources
other than this one (%OVRLP) plus any system time spent on behalf of this VM (%SYS). System time
spent on behalf of a VM would include tasks such as processing packets for a VM’s NICs. (However,
%RUN + %SYS - %OVRLP can yield a different value than %USED due to the use of features like
Hyperthreading and turbo mode.) You can use the %USED statistic to identify “noisy” VMs. (A brief
worked example follows this list.)
• %RDY—The percentage of time that this VM spends waiting for its worlds (processes) to run. Any
value over 0 indicates some level of contention for CPU resources. You should watch out for
percentages over five. At or above that point, the VM is waiting enough to degrade application
performance. The %RDY statistic might be high across many VMs because the host has too high of a
vCPU-to-core ratio. CPU limits set on the VM could also cause a high %RDY metric on that VM. In
most cases, customers should avoid setting CPU limits and instead rely on a properly sized
environment.
• %CSTP—The percentage of time that this VM spends in what VMware calls a “ready, co-deschedule
state.” In other words, the VM has processes ready to execute, but it has multiple vCPUs, and the
host is waiting to schedule the vCPUs to execute together. If this percentage is over three, you should
try reducing the number of vCPUs on the VM. While reducing the vCPU number to boost
performance might seem counter-intuitive, doing so can actually give the VM greater access to the
host’s CPU resources by reducing that co-scheduling waiting time.
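As a brief worked example with hypothetical values: if a VM shows %RUN = 180, %SYS = 12, and
%OVRLP = 6, then %USED ≈ 180 + 12 - 6 = 186, meaning the VM’s worlds consumed roughly 1.86
physical cores’ worth of cycles. (As noted above, Hyperthreading and turbo mode can make the reported
%USED deviate from this simple sum.)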
To learn what all of the metrics indicate, refer to VMware resources:
https://docs.vmware.com/en/VMware-vSphere/8.0/vsphere-monitoring-performance/GUID-AC5FAD2D-96DE-41C4-B5C6-A06FE65F34C6.html

Analyzing esxtop/resxtop memory and storage output

Figure 8-10: Analyzing esxtop/resxtop memory and storage output

Next you will learn about analyzing statistics in esxtop/resxtop’s Memory and Virtual Machine Storage
Panels.

Memory Panel
The first line in the Memory Panel indicates how oversubscribed the memory is. As with CPU load, the
three values indicate averages over one, five, and 15 minutes. Ideally, you should see zeros here. A value
of 0.1 indicates that the host would need 110% of its memory to fully meet the needs of VMs and other
resources. Some oversubscription might not degrade performance, but the oversubscription should alert
you to continue checking for signs of issues.
You can check the VMKMEM /MB line for more information. This line indicates how much memory the
hypervisor (ESXi VMkernel) manages (managed), how much it wants to keep free (minfree), how much of
the memory VMs and other resources have reserved (rsvd), and how much machine memory is
unreserved (unrsvd). Just because the VMs and resources have reserved memory does not mean that
they are actually using that memory. So the system can sometimes tolerate a degree of oversubscription
if the VMs are not actually contending for the memory.
To continue your check, look at the VMKMEM state to see how much pressure the VMkernel memory is
under. This state corresponds to the percentage of memory that is free as compared to the total memory
(see the PMEM line).
• High—Free memory is 6% of total or higher
• Soft—Free memory is 4-6% of total
• Hard—Free memory is 2-4% of total
• Low—Free memory is 0-2% of total
You typically want the state to be high, while low indicates a problem (and soft and hard might indicate
problems).
For a strong sign that memory oversubscription is causing issues, check the SWAP/MB line. Swapping
indicates that VMs must deal with a lack of memory by copying data on and off disks; the swapping
process can seriously degrade performance. The curr field indicates the current swap usage, r/s is
the rate at which memory is being swapped in from the disk, and w/s is the rate at which memory is being
swapped to the disk.
Sometimes you will see that VMs are swapping memory even though memory is not oversubscribed and
the memory state is high. This situation can happen when a VM has insufficient memory assigned to it or
has a memory limit set on it. You might try removing any limits and allocating more memory to the VM.
To learn what all of the metrics indicate, refer to VMware resources:
https://docs.vmware.com/en/VMware-vSphere/8.0/vsphere-monitoring-performance/GUID-4B6BD1C0-AA99-47F1-93EF-4921D56AE175.html

Virtual Machine Storage Panel


This panel shows disk statistics per VM. You should look particularly at the latency statistics: LAT/rd (read
latency) and LAT/wr (write latency), which report latency in ms. Generally, latency should be no more
than 10 ms on average, with no more than 20 ms at peak times. However, latency tolerance is
highly workload specific. For example, virtual desktop infrastructure (VDI) can require lower latency.
You should discuss particular requirements with the stakeholders for each application.
To learn what all of the metrics indicate, refer to VMware resources:
https://docs.vmware.com/en/VMware-vSphere/8.0/vsphere-monitoring-performance/GUID-F3B4000D-B76A-4E67-A855-BD3246F20469.html

Common issues
This topic will provide lists of common issues for your reference.

Common compute issues

Figure 8-11: Common compute issues

Often a mis-sized environment lies at the root of performance issues. Perhaps someone who came
before you sized the environment incorrectly. Perhaps the customer did not accurately convey their
requirements. Or perhaps a customer outgrew an environment too quickly. You can address these issues
by upgrading processors, expanding memory per host, or adding hosts.
Sometimes the VMware ESXi hosts are sized properly as a whole, but VMs are allocated unevenly across
them. Make sure that hosts are grouped in appropriate clusters. Recommend that the customer enable
Distributed Resource Scheduler (DRS) on the clusters so that the cluster can balance loads
automatically. Note that customers must purchase vSphere Enterprise Plus licenses to use DRS.
Also look out for the mistakes with VM settings pointed out earlier. CPU and memory limits on VMs can
cause issues. Often sizing hosts adequately is a better strategy than limiting VMs’ resource utilization.
Also remember that you might need to decrease the number of vCPUs on a VM if it is showing a %CSTP
over three.
You can also look for issues with the HPE server settings. Make sure that hardware-assisted virtualization
is enabled. Also make sure that Hyperthreading is enabled (if desired; sometimes customers want to
avoid Hyperthreading for specific workloads, but in most cases Hyperthreading improves performance).
Also check the HPE ProLiant server’s workload profile, which automatically optimizes many settings
based on the workload requirements. For example, maybe the current profile is “General Power Efficient
Compute.” You might try changing it to “Virtualization – Power Efficient.” If the performance is still lower
than expected, you could try “Virtualization – Max Performance.” You can also use HPE Intelligent
System Tuning (IST) to customize settings for workloads based on real-time monitoring.

Common storage issues

Figure 8-12: Common storage issues

Perhaps you have seen excessive latency on your VMs. You might need to look for the root cause in
many places. The storage might be under-provisioned, of course, and the customer might need to
upgrade from HDD to flash or from flash to NVMe.
However, the storage array might not be at fault at all. You should also look for issues in the storage
network. iSCSI networks use TCP/IP, usually over Ethernet, but they should be segregated from other
Ethernet networks. Often admins raise the MTU for iSCSI to 9000 so that each packet can carry more
data; however, raising the MTU brings the risk of discrepancies across the network. Incompatible MTUs
lead to dropped packets. Make sure that the MTU is consistent across the complete iSCSI network. In
addition to setting the IP MTU, network admins will also need to ensure that Ethernet ports are configured
to carry the jumbo frames.
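As a hedged illustration, you could verify an end-to-end 9000-byte MTU from an ESXi host with
commands such as these. The vSwitch name, VMkernel interface name, and target IP address are
placeholders, and option syntax can vary by ESXi version.

    esxcli network vswitch standard list                       # confirm each vSwitch MTU
    esxcli network vswitch standard set -v vSwitch1 -m 9000    # set a vSwitch MTU to 9000
    esxcli network ip interface set -i vmk1 -m 9000            # set a VMkernel interface MTU
    vmkping -d -s 8972 192.0.2.50                              # test the path without fragmentation

The vmkping test uses an 8972-byte payload because the IP and ICMP headers add 28 bytes to reach
9000; if the ping fails with the do-not-fragment (-d) flag set, some device in the path is dropping jumbo
frames.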
Look for ways that customers can use their storage network bandwidth more efficiently. Typical storage
network topologies feature redundant pathways; however, the ESXi hosts often use just one of those
pathways. Customers might be using the Most Recently Used (MRU) multipathing policy. However, if you
change this setting to Round Robin, hosts will use all available paths, which can improve performance. On the
other hand, it is very important for you to follow the instructions in arrays’ implementation guides. The
HPE Alletra 9000: VMware ESXi Implementation Guide and HPE Primera: VMware ESXi Implementation
Guide explain how to use the proper plug-in and plug-in rules to support Round Robin.
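As a hedged sketch, you could review and change the path selection policy from the ESXi Shell with
commands like the following; the device identifier is a placeholder, and the implementation guides may
direct you to use a specific plug-in or SATP rules rather than setting the policy per device.

    esxcli storage nmp device list                                            # show devices and their current policy
    esxcli storage nmp device set --device naa.600508b1... --psp VMW_PSP_RR   # switch one device to Round Robin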
If you see an increase in the number of I/O requests from workloads after deploying new storage, the new
storage might have an issue with I/O misalignment. VMware defines a specific block size for VMFS
datastores. If the storage array defines a different size for its blocks, the VMFS blocks become staggered
across multiple storage array blocks. Then VMs might need to send two I/O requests to access one
VMFS block. As all those extra I/O requests add up, latency can increase. You should find the correct
block size in the specific storage array’s Implementation Guide for VMware environments.
When new problems occur without any changes to settings, look for faulty HBAs, SFPs, and cables. If
you are using HPE InfoSight, that solution can also give clues about device components’ health. Try
swapping out the suspected faulty component with a new component to see if issues resolve.

Common networking issues

Figure 8-13: Common networking issues

When troubleshooting connectivity issues for VMs, carefully examine the configuration on ESXi hosts in
the same cluster. A port group or dvport group should have consistent uplinks across all the hosts; the
NICs’ speeds should be the same, and the NICs should have the same duplex settings (duplex should
almost always be full).
Also check for discrepancies between hosts’ virtual switches and top of rack (ToR) switches. The ToR
switch ports must support the right VLAN tags for port groups assigned to specific VLANs. If a port group
does not specify a VLAN (VLAN 0), the ToR switch ports must accept untagged traffic and assign that
traffic to the correct VLAN for that port group’s network. And if a port group uses VLAN 4095, the switch
port should support the VLANs used by the guest OS. If distributed virtual switches are using LACP over
multiple uplinks, the ToR switch should be using LACP on a link aggregation group (LAG) connected to
each ESXi host. HPE Aruba Networking switches, deployed as Virtual Switching Extension (VSX) pairs,
can establish multi-chassis LAGs.
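As a hedged example, one quick way to compare VLAN assignments on a host’s standard virtual
switches against the ToR configuration is the following command (output details can vary by ESXi
version):

    esxcli network vswitch standard portgroup list    # lists each port group, its vSwitch, and its VLAN ID

For distributed virtual switches, esxcli network vswitch dvs vmware list shows the host’s view of each
distributed switch, including its uplinks and MTU.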
When you’re checking virtual switch and distributed virtual switch settings, remember that admins can set
switch-wide settings, but they can also override them at the port level. Make sure to check any overridden
settings as well.
Some non-storage networks, such as vMotion networks and NSX transport networks, might also use
increased MTUs. Again, be very careful to check for a consistent MTU across the entire end-to-end path.
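For example, a hedged way to test the vMotion path’s MTU is to run vmkping from one host against the
peer host’s vMotion IP address; the interface name and address below are placeholders.

    vmkping -I vmk2 -d -s 8972 198.51.100.22    # test jumbo frames toward the peer's vMotion interface

On hosts where vMotion runs on its own TCP/IP stack, you might also need a stack selector option such
as -S vmotion; check the vmkping help on your ESXi build, because option availability varies by version.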
Networks often feature redundancy. Sometimes admins make misconfigurations across a standby path,
which they miss until a failover situation occurs. Check for any discrepancies, such as missing VLANs.
Faulty adapters or cables can also cause intermittent or unpredictable networking issues. Swap out
components with known good ones to isolate problem components.

Examples of the troubleshooting and remediation process


You will now examine some simplified examples of tracking down the source of an issue and making a
plan for remediating that issue.

Example issue 1: Initial steps

Figure 8-14: Example issue 1: Initial steps

In the first scenario, a customer has asked for your help understanding why an application is performing
poorly. This application runs on several VMs in the customer’s VMware vSphere environment, which you
migrated to HPE ProLiant DL 360 servers and HPE Alletra 6000 arrays about a year ago.
You ask the customer to define “poor performance.” The customer explains that sometimes the
application is fine, but at busy times, users start complaining about slow responses. You next attempt to
determine what has changed in the environment. The customer has not recently deployed any new
infrastructure devices or changed the infrastructure. The customer claims that usage for this particular
application has also remained steady. Then you learn that the customer recently began a large new
project, which involved adding many VMs to one of the ESXi clusters—the same cluster that runs the
poorly performing application.
You decide to check resource utilization and other statistics on hosts in the cluster. You make a list of
appropriate tools, including VMware esxtop/resxtop and HPE InfoSight Cross-Stack Analytics.

Example issue 1: Example esxtop/resxtop output

Figure 8-15: Example issue 1: Example esxtop/resxtop output

The figure shows example esxtop/resxtop CPU and memory output on the ESXi hosts supporting VMs
exhibiting poor performance.
The average CPU load is around one, sometimes exceeding one, but not necessarily excessive. The
detailed VM statistics show that the %RDY is low, so the VMs seem to have access to sufficient CPU
resources.
The memory statistics show that memory is oversubscribed. You check the VMkernel memory state next.
The state is hard, which indicates that memory is coming under pressure. The swapping statistics show
that VMs need to swap a lot of data on and off disks. This swapping could explain the decrease in
application performance. This host likely has too many VMs, making too many demands on its memory.
The other hosts in the cluster are likely experiencing similar issues.
While this example focused on the esxtop/resxtop utility, you also could have used HPE InfoSight to
investigate the issue.

Example issue 1: Make a remediation plan

Figure 8-16: Example issue 1: Make a remediation plan

You document the suspected issue: hosts do not have sufficient memory to meet the increasing
demands. Your plan? Add a host to expand the cluster, decreasing the VM load per host and hopefully
eliminating the memory contention. After you present your findings, backed by statistics, the customer
agrees to the plan.
This cluster uses DRS, so after you deploy the new host, the cluster automatically moves VMs to the new
host and eases the load on the existing hosts. You run esxtop/resxtop and validate that the hosts are no
longer swapping memory to disks. You also work with application stakeholders to stress test the
application. Performance has indeed returned to the expected levels, and users are much happier.
If the fix hadn’t worked, you could have rolled back that change and tried something else. Or if the
problem seemed partially fixed, you could have left the new host in place and also investigated other
underlying causes for the poor performance.

Example issue 2: Initial steps

Figure 8-17: Example issue 2: Initial steps

In this next scenario, a customer is experiencing an issue after migrating several VMs to a new
environment. One of the applications is no longer accessible.

A lot changed in the migration. The VMs moved to new ESXi hosts with new processors. These hosts
connect to a new storage array. They are located in a different area of the data center with different ToR
switches. The customer gives you a clue to the issue, however. The VMs associated with the application
show limited IP connectivity.

You decide to check HPE OV4VC's network view to start the troubleshooting process. If necessary, you
might also check VMware Aria logs for errors.

Example issue 2: OV4VC Networking view

Figure 8-18: Example issue 2: OV4VC Networking view

The figure shows what you might see in the OV4VC Networking view. (This figure is not an exact
screenshot so that it is simple and clear enough for you to read, but it looks much like what you would see
in the UI.)
As you can see, you can use this view to trace connectivity from each VMware kernel adapter and VM
NIC to the Virtual Connect module and then to the physical network. The VM in question is assigned to
the VLAN40_myapp port group, which uses VLAN 40. However, there is no network on a Virtual Connect
module with that VLAN ID.

Example issue: VM not working post-migration: Make a remediation plan

Figure 8-19: Example issue: VM not working post-migration: Make a remediation plan

You have seen that the VM’s port group is using VLAN 40, but that VLAN does not exist on compute
module ports 3:1a and 3:2a, which connect to the port group’s virtual switch. Admins likely missed adding
this VLAN to the network set assigned to compute module ports 3:1a and 3:2a. It is also possible that the
port group’s VLAN ID is incorrect. After discussions with the customer, you determine that the former
possibility is correct. You must add the missing network to the network set. You then test out the VM’s
connectivity. Now the VM can successfully reach external resources, and clients can reach it as well.

Best practices for prevention


In the final section of this module, you will review some best practices, which can help organizations
prevent issues before they occur.

Maintain best practices for server and storage firmware patches and
upgrades

Figure 8-20: Maintain best practices for server and storage firmware patches and upgrades

It is best practice to keep servers’ firmware up to date with the latest firmware version supported for the
current ESXi version. You should also keep ESXi patches and VMware Tools up to date. Keeping up with
updates and patches prevents customers from falling prey to known issues. Before installing any
upgrades or patches, though, verify that all components are supported in the VMware Compatibility Guide
(https://www.vmware.com/resources/compatibility/search.php). Also maintain the same firmware and
ESXi versions across all servers in a cluster. Consistent versions help to deliver consistent behavior as
features like DRS and HA move VMs to new ESXi hosts in the cluster.
As you learned in Module 2, when possible, you should use VMware vSphere Lifecycle Manager (vLCM)
to automate upgrades. Integrate the HPE Hardware Support Manager (HSM) with vLCM to combine HPE
SPPs and add-ons with the ESXi image.

Use automation — with appropriate testing

Figure 8-21: Use automation — with appropriate testing

By automating patches and upgrades with tools like vLCM you can accelerate upgrades and reduce the
chance for human error. The same principle applies to automating tasks such as cluster expansion, which
you can do with HPE OV4VC.
At the same time, you should advise customers about implementing appropriate pre-testing. For example,
you might test a patch in an ESXi cluster that supports a development environment before deploying the
patch to an ESXi cluster with production workloads.

Address issues proactively

Figure 8-22: Address issues proactively

Customers can leverage the monitoring capabilities provided by HPE tools such as HPE InfoSight
(including its Cross-Stack Analytics capabilities), HPE plug-ins for VMware, and HPE GreenLake for
Compute Ops Management.
These tools can help bring issues to customers’ attention before the issues grow into larger problems. For
example, they can find hosts or VMs that are not yet down but which have health issues that might cause
unexpected downtime later. HPE InfoSight can also alert customers to emerging hardware issues. And it
can show resource bottlenecks, which, if they become worse, might start to degrade application
performance. By periodically checking for and remediating such issues, admins improve their chances of
resolving issues before users submit trouble tickets.
The HPE plug-ins help customers identify compliance issues, such as HPE servers with the incorrect
firmware version.

Implement data protection

Figure 8-23: Implement data protection

Despite all the best practices, errors can occur, and catastrophe can strike. A solid data protection plan
helps to protect customers against issues such as accidentally deleted files, failed storage drives, and
even wide-scale disasters. Different technologies provide protection for different types of data loss. HPE
recommends protecting VMs’ virtual disks with multiple layers of data protection.
VMware supports VM snapshots, which help customers remediate configuration errors, accidentally
deleted files, and corrupted files. Admins simply revert the VM to a snapshot taken before the problem
occurred. HPE storage arrays also support snapshots for volumes. As touched on in Module 7, HPE
storage arrays support replication technologies, which help customers recover from storage array failure
or even wide-scale disaster.
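Admins usually take VM snapshots from the vSphere Client, but as a hedged host-level sketch, they can
also script snapshots from the ESXi Shell with vim-cmd. The VM ID below is a placeholder obtained from
the first command, and the trailing 0 0 arguments request a snapshot without memory state and without
guest quiescing.

    vim-cmd vmsvc/getallvms                                                 # list the VM IDs on this host
    vim-cmd vmsvc/snapshot.create 42 pre-change "before maintenance" 0 0    # snapshot VM ID 42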
Backups create point-in-time copies of data. They can protect against many types of data loss, from file
loss, file corruption, and hardware platform failure to a full data center outage or power failure. A
continuous data protection technology such as Zerto also protects against all of these issues, as well as malware such
as ransomware.
When a customer wants to retain data for many years in order to comply with data retention regulations,
tape-based archives provide the best solution.
For more information about using these technologies in a comprehensive data protection plan, take the
Creating HPE Data Protection Solutions course.

Summary
This module has prepared you for addressing issues both proactively and reactively. You reviewed how
to use HPE and VMware tools in these efforts, and you applied what you learned in two example
scenarios.

Activity: Explore HPE InfoSight


You will now explore the proactive insights provided by HPE InfoSight.

Activity 8

Figure 8-24: Activity 8

In this example, you will explore how you can use HPE InfoSight to troubleshoot issues. For this activity,
you need access to the Internet and HPE Partner credentials. You will preferably run a live demo, or you
can watch a recorded demo. Follow these steps:
1. Log into https://hpedemoportal.ext.hpe.com/
2. Search for InfoSight. Select this demo: Self-service, self-driving HPE InfoSight Demo.
3. Click Details and then in the new window click OnDemand.
4. Fill out the form, selecting Dry Run / Self-Paced Instruction and indicating that you do not have an
OPP ID. Click Submit.

5. Because you do not have an OPP ID, you might not receive on-demand access. In that case, refer to
the steps at the end of this activity to learn how to view a pre-recorded demo.

6. If you did gain access, choose Test-drive HPE InfoSight.

7. Choose a troubleshooting scenario that interests you.

8. The demo will give you all the instructions that you need.
9. If you have time, you can choose another scenario.
10. When you are finished, return to the My demo event(s) page. Click Cancel to end the demo.
11. Reflect on how you could use a demo such as this with your customers.

Alternative recorded demo


If you are not able to receive an on-demand environment, search for this recorded demo: HPE InfoSight
test-drive and troubleshooting. Click the details and play the recording.
After you watch the demo, reflect on how you could use a demo such as this with your customers.

Learning Checks
1. What condition must be met for customers to use HPE InfoSight Cross-Stack Analytics for VMware?
a. The VMware environment must have a vSphere Enterprise Plus license.
b. Customers must deploy the HPE OV4VROPS plug-in.
c. Customers must purchase the VMware vSphere licenses through HPE OEM.
d. The VMware environment must use supported HPE arrays, such as HPE Alletra or Primera.
2. Using esxtop, you find that an ESXi host has oversubscribed memory. What other sign should you
look for, which could indicate that VMs do not have access to enough memory?
a. The host memory state is high.
b. The host is swapping memory to disk.
c. The VMs show a high CSTP%.
d. The VMkernel memory shows a high memfree value.

Answers
Appendix

Module 1
Activity 1 answers
Your presentation might have mentioned ideas such as these:
IT is struggling because some processes are software-defined while infrastructure management remains
manual. The customer needs to simplify and introduce more automation. HPE offers tight integration with
VMware environments and has been a trusted partner with VMware for more than 20 years for good
reason. HPE plugins help customers orchestrate the physical and virtual environment together. With
greater automation, IT can spend less time on provisioning tasks and meet line of business needs more
quickly and reliably.
HPE can also offer a unified support experience for the physical and virtual environment, which can help
reduce the time for resolving issues and improve the customer experience.
You can even offer the customer an as-a-service experience much like public cloud, but with
infrastructure dedicated to the customer in their data center. Admins can quickly provision VMs on this
infrastructure, and the company pays as they go. This self-service experience could give the customer
the flexibility and speed that the company requires.

Learning check answers


1. What is one way HPE helps customers protect themselves from security threats?
a. HPE Silicon Root of Trust protects HPE servers against compromised firmware.
b. HPE Alletra 6000 applies intrusion protection system (IPS) defenses to data stored on its drives.
c. HPE embeds a next generation firewall (NGFW) in every HPE data center switch.
d. HPE encourages customers to move all applications to co-lo and cloud for better protection.
2. Which HPE solution offers simple, cloud-based management and 100% availability?
a. HPE Alletra dHCI
b. HPE Alletra 5000
c. HPE Alletra 6000
d. HPE Alletra 9000

Module 2
Activity 2.1 answers
Task 1
What additional information do you need to collect to properly size the deployment?
Some of the information that you might have listed includes:
• What level of oversubscription is acceptable? (vCPU-to-core? RAM subscription?)
• What level of redundancy does the customer require? Will the customer use DRS and HA enabled
clusters? Is N+1 redundancy sufficient, or does the customer need N+2, or more?
• What levels of growth does the customer anticipate year over year?
• What is the current datastore capacity?
• What kind of IOPS requirements do VMs have?
• More data about current hosts and resource utilization, which you could collect with these HPE tools
– HPE CloudPhysics
– HPE Assessment Foundry (SAF)

Task 2
If you chose HPE ProLiant DL, your solution details will resemble this:

If you chose HPE Synergy, your solution details will resemble this:

While this is a small solution for HPE Synergy, remember that you are planning only one cluster for this
activity. For a real-world scenario, you would use expert sizing and plan multiple clusters.

Task 3
3. Take notes on how you will present the BOM and its benefits to the customer. Explain how the
solution will help the customer solve their issues.
Your presentation might include points such as this:
• HPE ProLiant servers have come with many features to secure the firmware and other server
components—from the factory to retirement. HPE ProLiant Gen11 servers come with IDevIDs,
allowing them to authenticate out of the box. HPE ProLiant Gen11 servers come with TPMs that
VMware vCenter Server can use to attest ESXi hosts’ software. In short, HPE servers help to
protect customers from rootkits, ransomware, and other malware. Because responding to such
attacks is expensive, these features can save customers money in the long run.
• HPE ProLiant servers have profiles that admins can apply to easily optimize the hardware
settings for the virtualized environment. They support Intelligent System Tuning (IST) to further
optimize based on real-time monitoring. In short, customers will notice improved performance
with less time required from IT.

• HPE offers solutions such as HPE OneView to manage the servers and integrate them with
VMware management, monitoring, and automation solutions. Automation leads to fewer errors
and less unexpected downtime. IT has the time to spend on strategic projects and can respond to
LOB more quickly. Customers can get projects to market more quickly.
• (For HPE Synergy) A composable network makes it easier for admins to provision new
workloads. Admins can assemble compute and storage resources flexibly. This flexibility can
reduce overprovisioning.
4. Make a list of best practices to recommend to the customer.
• Use the Custom HPE VMware ESXi Image; determine whether the customer can use the custom
HPE image or whether you need to help them assemble their own custom image.
• Make sure the Gen11 servers are using UEFI Secure Boot so that VMware vCenter Server can add
them as TPM-enabled hosts and attest their software.
• Set the correct workload profile on servers for power efficient virtualization or max performance
virtualization, depending on the workload requirements.
• If you are recommending HPE Synergy:
– Discuss the importance of a template-based approach to management
– Recommend distributing cluster hosts across frames
These features will be discussed later in the module, but you might have mentioned:
• Recommend registering the servers in HPE InfoSight for faster troubleshooting and proactive
optimization.
• Recommend adding HPE OneView plugins for vCenter and other VMware solutions.

Learning check answers


1. You are advising a customer about how to deploy VMware vSphere on HPE Synergy. What is a
simple way to ensure that the ESXi host has the proper HPE monitoring tools and drivers?
a. Provision the hosts with the HPE custom image for ESXi.
b. Use Insight Control server provisioning to deploy the ESXi image to the hosts.
c. Manage the ESXi hosts exclusively through Synergy, rather than in vCenter.
d. Customize a Service Pack for ProLiant and upload it to Synergy Composer before using
Composer to deploy the image.
2. How does HPE ProLiant for vSphere Distributed Services Engine improve performance for
workloads, as compared to other HPE ProLiant servers?
a. By enabling Intelligent System Tuning (IST)
b. By offloading overlay network processing to a Pensando DPU
c. By providing slots for up to 8 processors
d. By supporting hardware-accelerated virtualization

3. What is one difference between a VCF standard architecture and a consolidated architecture?
a. The standard architecture supports more than one VI domain while the consolidated supports only
one VI domain.
b. The consolidated architecture supports SAN arrays to improve storage performance, but the
standard architecture does not.
c. The standard architecture separates management workloads from user workloads.
d. The consolidated architecture uses a wizard to simplify the installation process rather than
requiring the Cloud Builder VM.
4. What is one benefit of HPE OneView for vRealize Orchestrator (OV4VRO)?
a. It integrates a dashboard with information and events from HPE servers into vRO.
b. It provides an end-to-end view of servers' storage (fabric) connectivity within vRO.
c. It adds pre-defined workflows for HPE servers to vRO.
d. It integrates multi-cloud management into the VCF environment.
5. Which HPE plug-in helps customers manage HPE server firmware, driver, and ESXi images
together?
a. HPE OV4VROPS
b. HPE OV4VLI
c. HPE OneView Connector for VCF
d. HPE HSM for vLCM

Module 3
Activity 3 answers
• vSAN benefits
– Cost effective
– Highly integrated with VMware
– Relatively simple to deploy
• HPE benefits for vSAN (for HPE ProLiant DL solution)
– Simple ordering process for certified products with HPE vSAN Ready Nodes
– Several ESA options for the highest performance and future proof design
• HPE benefits for vSAN (for HPE Synergy solution)
– Flexibility on D3940 (no fixed number of drives per compute module)
– High performance flat iSCSI network across frames
• HPE storage array benefits
– Advanced services such as QoS, snapshotting, and replication (important for mission critical web
and business management services)
– Arrays that offer very high availability and even 100% availability
– Tight integration with VMware
– Simplified provisioning with vVols and/or vCenter plugins; ability to perform many storage
management tasks directly from vCenter
– Integration with VMware SRM for disaster recovery use cases
– Ability to use replication and integrate with SRM when using vVols
– On the HPE Alletra 9000, NVMeoF to deliver the extremely high performance of NVMe to ESXi
hosts from an external array
– Self-driving troubleshooting and optimization with HPE InfoSight and Cross-Stack Analytics for
VMware

Learning check answers


1. What is one benefit of HPE Synergy D3940 modules?
a. A single D3940 module can provide up to 40 SFF drives each to 10 half-height compute modules.
b. Customers can assign drives to connected compute modules without fixed ratios of the
number per module.
c. A D3940 module provides advanced data services like Peer Persistence.
d. D3940 modules offload drive management from compute modules, removing the need for
controllers on compute modules.
2. Why would you recommend OSA or ESA?
a. OSA is designed for enterprise environments while ESA is designed for mid-sized or small
VMware environments.
b. Customers using HPE Synergy should use OSA, while customers using HPE ProLiant should
always use ESA.
c. Customers with demanding workloads should use ESA, while customers with less
demanding workloads can use OSA.
d. ESA replaces OSA so all customers should immediately move to ESA.
3. What is one strength of HPE Alletra and Primera for vVols?
a. They help the customer unify management of vVol and vSAN solutions.
b. They have mature vVols solutions that support replication.
c. They automatically convert VMFS datastores into simpler vVol datastores.
d. They provide AI-based optimization for volumes exported to VMware ESXi hosts.

Module 4
Activity 4 answers
Assume that Financial Services 1A wants to use NSX. Answer these questions.
1. Does this decision affect your proposal? If so, explain what changes you would make.
You might discuss how interested the customer is in offloading the NSX overlay processing away from
CPUs. This offloading is particularly important for demanding workloads such as VDI, visualization, data
analytics, artificial intelligence (AI)/machine learning (ML), and databases. For clusters supporting those
workloads, you might recommend HPE ProLiant DL with vSphere Distributed Engine servers.

2. Assume that the customer needs a ToR switch upgrade, and you are proposing HPE Aruba
Networking CX switches.
a. Explain benefits of your proposal.
The HPE Aruba Networking CX switches support Virtual Switching Extension (VSX), which allows
multiple switches to operate as a highly available group. The switches can use VSX to provide multi-
chassis link aggregation groups (M-LAG), which make the network more resilient against hardware
failures.
You can propose several great ways for customers to simplify management of the CX switches. HPE
Aruba Networking NetEdit makes it easy for admins to manage multiple CX switches using familiar CLI
tools. HPE Aruba Networking Fabric Composer transforms the switches into an automated fabric. This
solution integrates tightly with VMware and can automatically adjust the network based on lifecycle
changes such as moving VMs. That automation reduces the risk of error and greatly speeds up
provisioning.
b. Make a list of settings that need to coordinate between the VMware environment and the switches.
• The switches must support the VLAN ID used for the NSX transport network and exactly the
same MTU specified for that network.
• They must also support the correct VLAN IDs for non-tunneled networks such as the
management network, vMotion network, and vSAN network (if used). If any of those networks are
using increased MTUs, the MTU must match on the switches.
• The switches’ link aggregation settings must match up with what is being used on virtual
switches. For example, if distributed switches are using LACP, the physical switches must do so
also.
• The switches might need to use a protocol such as Border Gateway Protocol (BGP) to exchange
routes with the NSX Tier 0 router.

Learning check answers


1. What benefit do overlay segments provide to companies?
a. They provide encryption to enhance security.
b. They provide admission controls on connected VMs.
c. They enhance performance, particularly for demanding and data-driven workloads.
d. They enable companies to place VMs in the same network regardless of the underlying
architecture.
2. What is one way that NetEdit helps to provide orchestration for HPE Aruba Networking CX switches?
a. It provides the API documentation and helps developers easily create scripts to monitor and
manage the switches.
b. It lets admins view and configure multiple switches at once and makes switch
configurations easily searchable.
c. It integrates the HPE Aruba Networking CX switches into HPE GreenLake Central and creates a
single pane of glass management environment.
d. It virtualizes the switch functionality and enables the switches to integrate with VMware NSX.

Module 5
Activity 5 answers
Decide whether you will propose an HPE SimpliVity or HPE Alletra dHCI solution for this customer.
Explain your reasons and the benefits that your solution provides.
HPE SimpliVity is intended for edge and ROBO environments. This customer is looking for a data center
solution, so HPE Alletra dHCI could be the better choice. On the other hand, the customer is an SMB.
The discussion below focuses on HPE Alletra dHCI, but you could also point out similar benefits if
you selected HPE SimpliVity.
You should have mentioned benefits such as these. This customer craves simplicity. HPE Alletra dHCI is
very simple to deploy and manage. For a greenfield deployment, which you are recommending, HPE
handles most of the integration at the factory. Customer admins only need to run a simple wizard. With
the switches that you are proposing (HPE Aruba Networking CX 8325s or 8360s), even the network is
automated. Non-storage experts like the college's IT staff can easily deploy VMs across the cluster
without having to worry about attaching LUNs. This customer does not want to have to think about and
fuss with storage. These solutions offer simple management, integrated with vCenter Server. They
feature built-in protections for data and a simple one-click upgrade process.
Even though the HPE Alletra dHCI cluster is so simple to manage, it delivers benefits you would expect
from enterprise storage. HPE Alletra dHCI offers much lower latency than other HCI solutions. It can
protect data with replication to other HPE Alletra arrays (including ones outside the dHCI cluster).

Learning check answers


1. On which network do HPE SimpliVity nodes have their default gateway address?
a. Storage
b. Management
c. Cluster
d. Federation
2. How does an HPE SimpliVity cluster protect data from loss in case of drive failure?
a. Only RAIN (replicating data to at least three nodes)
b. Only RAID (with the level depending on the number of drives)
c. Both RAID (with the level depending on the number of drives) and RAIN (replicating data to
two nodes)
d. Only RAID (always RAID 10)
3. What is one benefit of an HPE Alletra dHCI one-click upgrade?
a. It performs all upgrades simultaneously, ensuring that the process completes quickly.
b. It upgrades vCenter Server first and then has vCenter Server upgrade other components.
c. It stages upgrades on one of the HPE Alletra dHCI compute nodes to pre-test them.
d. It upgrades one server at a time to prevent disruption to services.

4. What is always required for a brownfield HPE Alletra dHCI deployment?


a. New HPE ProLiant DL Gen10+ or later server
b. New HPE Alletra 5000 or 6000 array
c. Existing HPE ProLiant DL Gen10+ or later server
d. Existing HPE Alletra 5000 or 6000 array

Module 6
Learning check answers
1. You are working on an HPE custom quote for an HPE GreenLake for VDI solution. What do you need
to create in OCA?
a. A BOM for each HPE GreenLake pricing band
b. A Start BOM and an End BOM
c. Separate BOMs for HPE compute, HPE storage, and software
d. An Infrastructure BOM and a Support BOM
2. What accurately describes the as-a-service financial model of HPE GreenLake?
a. Customers pay for the infrastructure upfront and services monthly.
b. Customers pay only for what they use each month in one bill (but commit to a minimum
usage).
c. Customers pay for a specific tier of usage for each month (with a commitment to a minimum tier).
d. Customers pay for the solution in one lump sum at the end of the contract.

Module 7
Activity 7 answers
VMware vSphere vMotion, VMware HCX, and Zerto can all work for this use case.
• Some vMotion benefits:
– Likely already supported with the customer’s current VMware licensing
– Managed from vCenter, probably already familiar to admins
– Live migration without VM downtime
• Some vMotion considerations:
– Ensuring VMs’ disks meet the criteria for a combined compute and storage migration (see the sketch after this list)
– Ensuring networks are correctly set up on the destination hosts
– Ensuring all other vMotion and Storage vMotion requirements are met

• Some VMware HCX benefits:
– Live migration for large numbers of VMs with Replication Assisted vMotion (RAV)
– Other migration options also supported if customer prefers
– Ability to keep MAC addresses and IP addresses for VMs and integrated handling of
network cutover
– Options for scheduling the migration
• Some VMware HCX considerations:
– Licensing and deploying the solution
• Some Zerto benefits:
– Support for large migrations
– Ability to group VMs in Virtual Protection Groups (VPGs) to migrate together; ability to set
destination MAC addresses and IP addresses in the VPGs
– Ability to pre-test
– Ability to reverse replication after the migration for rollback or data protection in the
legacy environment
– Options for scheduling the migration
• Some Zerto considerations:
– Licensing and deploying the solution
– Updating or cutting over the networking during the migration
• Some considerations for all the technologies:
– Ensuring sufficient VMware vSphere licenses for the new environment
– Documenting application licenses and how to keep them valid post-migration
– Assessing workload availability requirements and scheduling the migration
– Creating full backups for VMs before the migration
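As referenced in the vMotion considerations above, the sketch below shows what a combined compute and storage relocation looks like through the open-source pyVmomi Python SDK. Treat it as a minimal outline, not a production script: the vCenter address, credentials, and object names are placeholders, certificate checking is disabled for a lab, and a cross-cluster move would also need a destination resource pool in the RelocateSpec.

```python
# Minimal pyVmomi sketch: combined compute + storage vMotion (lab use only).
# Placeholder host names, credentials, and object names throughout.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # lab only; verify certs in production
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

def find_obj(vimtype, name):
    """Return the first managed object of the given type with a matching name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next((o for o in view.view if o.name == name), None)
    finally:
        view.Destroy()

vm = find_obj(vim.VirtualMachine, "app-vm-01")
dest_host = find_obj(vim.HostSystem, "esxi-02.example.com")
dest_ds = find_obj(vim.Datastore, "datastore-02")

# One RelocateVM_Task call moves both compute (host) and storage (datastore).
spec = vim.vm.RelocateSpec(host=dest_host, datastore=dest_ds)
task = vm.RelocateVM_Task(spec, vim.VirtualMachine.MovePriority.defaultPriority)
```

In practice, most admins drive this migration from the vSphere Client; the sketch simply illustrates that, once the prerequisites in the list above are met, a compute and storage vMotion reduces to a single relocation task.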

Learning check answers


1. A customer is using vMotion, but not Advanced Cross vCenter vMotion. What is a requirement?
a. The source and destination hosts must be at the same site.
b. The source and destination hosts must be managed by the same vCenter Server.
c. The source and destination hosts must be managed by the same vCenter Server or by vCenter
Servers with licenses owned by the same organization.
d. The source and destination hosts must be managed by the same vCenter Server or by vCenter
Servers joined by Enhanced Linked Mode.
2. You are helping a customer use VMware HCX. The customer needs to migrate a very large number
of VMs, which cannot tolerate any downtime. Which option should you select?
a. Bulk migration
b. Cold migration
c. HCX vMotion
d. HCX RAV

3. You are using Zerto to migrate VMs, and you apply the Move workflow to a VPG. What step does
Zerto perform first?
a. Creating a new checkpoint for the source VM
b. Powering up the target VM
c. Powering down the source VM
d. Promoting data in the journal to the target VM
4. What are two benefits of using Zerto for migration? (Select two.)
a. It provides options for pre-testing.
b. It embeds network overlay technology inside it.
c. It is a free solution for customers with HPE storage arrays.
d. It is capable of moving VMs to and from a large variety of on-prem and cloud environments.
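Partners who automate Zerto can also enumerate the VPGs discussed in questions 3 and 4 through the Zerto Virtual Manager (ZVM) REST API. The sketch below is a hypothetical outline: the host and credentials are placeholders, the session-header authentication shown matches older ZVM releases (newer releases use a token-based flow), and endpoint and field names can vary by version, so check the API documentation for the release in use.

```python
# Hypothetical sketch against the Zerto Virtual Manager REST API.
# Placeholder host/credentials; endpoint details vary by ZVM version.
import requests

ZVM = "https://zvm.example.com:9669"

# Older ZVM releases return a session token in the x-zerto-session header.
resp = requests.post(f"{ZVM}/v1/session/add",
                     auth=("administrator@vsphere.local", "secret"),
                     verify=False)               # lab only; verify certs in production
token = resp.headers["x-zerto-session"]

# List the Virtual Protection Groups (VPGs) that could be moved.
vpgs = requests.get(f"{ZVM}/v1/vpgs",
                    headers={"x-zerto-session": token},
                    verify=False).json()
for vpg in vpgs:
    print(vpg.get("VpgName"), vpg.get("Status"))
```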

Module 8
Learning check answers
1. What condition must be met for customers to use HPE InfoSight Cross-Stack Analytics for VMware?
a. The VMware environment must have a vSphere Enterprise Plus license.
b. Customers must deploy the HPE OV4VROPS plug-in.
c. Customers must purchase the VMware vSphere licenses through HPE OEM.
d. The VMware environment must use supported HPE arrays, such as HPE Alletra or Primera.
2. Using esxtop, you find that an ESXi host has oversubscribed memory. What other sign should you
look for, which could indicate that VMs do not have access to enough memory?
a. The host memory state is high.
b. The host is swapping memory to disk.
c. The VMs show a high %CSTP value.
d. The VMkernel memory shows a high memfree value.
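Tying question 2 together: the decisive symptom of genuine memory pressure is the VMkernel swapping VM memory to disk. Besides watching esxtop, you could flag the same condition programmatically. The following is a minimal sketch, assuming the pyVmomi SDK and placeholder vCenter credentials, that reports hosts whose VMs are configured with more memory than is physically present and lists any VMs with swapped memory.

```python
# Minimal pyVmomi sketch: flag memory oversubscription and VMkernel swapping.
# Placeholder vCenter address and credentials; lab use only.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # lab only; verify certs in production
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()
hosts = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)

for host in hosts.view:
    physical_mb = host.hardware.memorySize // (1024 * 1024)
    granted_mb = sum(v.summary.config.memorySizeMB or 0 for v in host.vm)
    if granted_mb > physical_mb:                # memory is oversubscribed
        print(f"{host.name}: {granted_mb} MB granted vs {physical_mb} MB physical")
        for v in host.vm:
            swapped_mb = v.summary.quickStats.swappedMemory   # MB swapped by VMkernel
            if swapped_mb:
                print(f"  {v.name}: {swapped_mb} MB swapped to disk")
hosts.Destroy()
```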

To learn more about HPE solutions, visit www.hpe.com