
VXFLEX INTEGRATED RACK ADMINISTRATION - CLASSROOM
Version 2.0

PARTICIPANT GUIDE
Dell Confidential and Proprietary

Copyright © 2019 Dell Inc. or its subsidiaries. All Rights Reserved. Dell Technologies,
Dell, EMC, Dell EMC and other trademarks are trademarks of Dell Inc. or its
subsidiaries. Other trademarks may be trademarks of their respective owners.

Table of Contents

Course Introduction .................................................................................... 1

VxFlex Integrated Rack Administration ................................................................... 2


Course Objectives................................................................................................................ 3
Prerequisite Skills ................................................................................................................ 4
VxFlex System FLEX Rebranding ........................................................................................ 5
Course Agenda .................................................................................................................... 6

VxFlex Integrated Rack Overview and Management Tools .................... 7

VxFlex Integrated Rack Architecture and Components ......................................... 8


Dell EMC VxFlex Integrated Rack ........................................................................................ 9
VxFlex Integrated Rack Workload Examples ..................................................................... 11
VxFlex Integrated Rack Hardware Components................................................................. 12
Dell EMC VxFlex OS – Software Defined Storage ............................................................. 15
VxFlex Integrated Rack Deployment Options ..................................................................... 18
VxFlex Integrated Rack Networking - Physical View .......................................................... 20
Spine-leaf Architecture ....................................................................................................... 22
VxFlex Integrated Rack Networking - Logical View ............................................................ 24
VxFlex OS Deployment on ESXi ........................................................................................ 28
VxFlex OS Cluster in a VxFlex Integrated Rack Example .................................................. 29
Dell EMC Cabinet .............................................................................................................. 30
Reference Documentation ................................................................................................. 31

VxFlex Integrated Rack Administration Scope and Tools .................................... 32


VxFlex Integrated Rack Administration Scope ................................................................... 33
Tools for Managing VxFlex Integrated Rack ....................................................................... 34
Introduction to VxFlex Manager ......................................................................................... 35
VxFlex Manager User Interface.......................................................................................... 37
Integrated Dell Remote Access Controller (iDRAC) ........................................................... 41
VMware vCenter Server and vSphere Web Client .............................................................. 43
Red Hat Virtualization Manager ......................................................................................... 45

Cisco NX-OS CLI to Configure Nexus Switches ................................................................. 47

VxFlex OS Management Interfaces......................................................................... 48


VxFlex OS GUI Dashboard ................................................................................................ 49
Capacity Utilization ............................................................................................................ 51
VxFlex OS Frontend and Backend View ............................................................................ 52
VxFlex OS GUI Monitor View ............................................................................................. 54
VxFlex OS vSphere Plug-In Overview................................................................................ 55
VxFlex OS CLI Overview ................................................................................................... 57
VxFlex OS Gateway and Installer ...................................................................................... 58
Lab: Explore the VxFlex Integrated Rack ........................................................................... 59

Storage Resource Management .............................................................. 60

System Capacity Components ............................................................................... 61


Protection Domain Recommendations ............................................................................... 62
Add Protection Domain ...................................................................................................... 64
View Protection Domain Properties .................................................................................... 65
Storage Pool Recommendations........................................................................................ 67
Medium Versus Fine Granularity Layout ............................................................................ 69
Add Storage Pool ............................................................................................................... 71
View Storage Pool Properties ............................................................................................ 73
Inline Data Compression .................................................................................................... 74
Spare Capacity Overview................................................................................................... 75
Modify Spare Capacity Policy............................................................................................. 76

Storage Provisioning and Volume Management ................................................... 78


VxFlex OS Volumes ........................................................................................................... 79
Volume Provisioning Considerations for MG Pools ............................................................ 81
Add Volumes ..................................................................................................................... 82
Map a Volume in GUI ......................................................................................................... 84
Add Volume using VxFlex Manager ................................................................................... 85
Add and Map Volume Using VxFlex OS Plug-in ................................................................. 86
Locating Volumes in vSphere............................................................................................. 87

Modify Volume Operations ................................................................................................. 88
Increase Volume Size ........................................................................................................ 90
Unmap Volumes ................................................................................................................ 91
Volume Limits .................................................................................................................... 92
Query Volume Limits .......................................................................................................... 94
Remove Volumes............................................................................................................... 95
Volume Migration ............................................................................................................... 96
Supported Migration Paths................................................................................................. 98

VxFlex OS Rebuild and Rebalance ......................................................................... 99


VxFlex OS Rebuild Operation .......................................................................................... 100
VxFlex OS Rebalance Operation ..................................................................................... 102
Managing Rebuild and Rebalance Settings...................................................................... 103
Rate Limits: Network Throttling ........................................................................................ 104
Configure Network Throttling............................................................................................ 105
Rebuild and Rebalance Throttling .................................................................................... 106
Setting I/O Priority ............................................................................................................ 108
Enable and Disable Rebuild and Rebalance at Storage Pool ........................................... 109

Managing System Parameters .............................................................................. 110


MDM Virtual IP Addressing .............................................................................................. 111
Manage Capacity Alert Threshold .................................................................................... 112
Checksum Protection Mode ............................................................................................. 113
Background Device Scanner Mode .................................................................................. 115
Performance Profiles for System Components ................................................................. 117
VxFlex OS License Management ..................................................................................... 118
View Oscillating Failure Counters .................................................................................... 119

VxFlex OS User Management ............................................................................... 120


VxFlex OS User Accounts................................................................................................ 121
VxFlex OS User Authentication ........................................................................................ 122
User Roles and Permissions ............................................................................................ 123
Configuring LDAP Authentication ..................................................................................... 125
Manage LDAP Authentication .......................................................................................... 127

Managing Local Users ..................................................................................................... 128
Lab: Managing VxFlex Integrated Rack Storage .............................................................. 130

Compute and Network Resource Management ................................... 131

Compute Resource Management ......................................................................... 132


Compute Resources in VxFlex Integrated Rack ............................................................... 133
VMware vSphere Environments for VxFlex Integrated Rack ............................................ 134
VMware vCenter for Compute Resource Management .................................................... 135
VxFlex Management Controller Cluster ............................................................................ 136
VxFlex Management Controller vSAN Datastore.............................................................. 137
VxFlex Node Cluster ........................................................................................................ 138
ESXi Boot Device............................................................................................................. 139
Provisioning Storage for Production Virtual Machines ...................................................... 140
Create Datastore on VxFlex OS Volume .......................................................................... 141
Virtual Machine Deployment Options ............................................................................... 142
Create VM Example ......................................................................................................... 144
Add Storage to VM........................................................................................................... 145
Virtual Machine Management........................................................................................... 146
VM Migration Using vSphere VMotion .............................................................................. 147
Migration Wizard .............................................................................................................. 149
Red Hat Virtualization Manager for Compute Resource Management ............................. 150

Network Resource Management ........................................................................... 152


Physical Switch Port Configuration................................................................................... 153
Virtual Port-Channels ....................................................................................................... 155
NX-OS Query Commands................................................................................................ 156
Virtual Networking with VMware vSphere......................................................................... 159
VxFlex Node Cluster DVswitch0 ...................................................................................... 160
VxFlex Node Cluster DVswitch1 and DVswitch2 .............................................................. 162
Summarize VxFlex Integrated Rack Logical and Physical Network Layout ...................... 163
ESXi Host Network Interfaces .......................................................................................... 165
VxFlex Management Controller Virtual Networking .......................................................... 166
Adding Production Network .............................................................................................. 167

Configure Access Switches .............................................................................................. 168
Adding New Distributed Port Group ................................................................................. 170
Adding Production Networking in RHV ............................................................................. 171
Lab: Managing Virtual Machines ...................................................................................... 173

Data Protection and Backup .................................................................. 174

Data Protection Using VxFlex OS Snapshots ...................................................... 175


VxFlex Integrated Rack Data Protection Options ............................................................. 176
VxFlex OS Snapshots ...................................................................................................... 177
Consistency Group Snapshots ......................................................................................... 178
VxFlex OS Snapshots Structure....................................................................................... 179
Create Snapshots ............................................................................................................ 180
Snapshot Policy ............................................................................................................... 181
Removing Snapshots ....................................................................................................... 182
Restore Snapshot ............................................................................................................ 183

VMware Data Protection Options.......................................................................... 184


VMware Snapshots .......................................................................................................... 185
How to Create VM Snapshots .......................................................................................... 186
VMware vSphere High Availability (HA) ........................................................................... 187
Configuring High Availability ............................................................................................ 188
VMware vSphere Fault Tolerance (FT) ............................................................................ 189
Configure Fault Tolerance................................................................................................ 190
VMware Distributed Resource Scheduler (DRS) .............................................................. 191
RHV Data Protection Options........................................................................................... 192

Integration with Dell EMC Data Protection Solutions ......................................... 194


Dell EMC Data Protection Solutions ................................................................................. 195
Introduction to Avamar ..................................................................................................... 196
Avamar Integration with Data Domain .............................................................................. 197
VMware Image Backup with Avamar................................................................................ 198
Guest Backup .................................................................................................................. 199
Avamar Replication .......................................................................................................... 200
Avamar with VxFlex Integrated Rack ............................................................................... 201

Replication Using RecoverPoint for Virtual Machines ........................................ 202
RecoverPoint for Virtual Machines Overview ................................................................... 203
RecoverPoint for VMs Use Cases .................................................................................... 204
RecoverPoint for VMs Architecture .................................................................................. 205
Repository Volume........................................................................................................... 207
Journal Volumes .............................................................................................................. 208
RecoverPoint for VMs vCenter Web Client Plugin ............................................................ 209
Protect VM Wizard ........................................................................................................... 210
Setting Replication Policy................................................................................................. 211
RecoverPoint for VMs Management: Using Plug-in .......................................................... 212
Lab: Protecting Virtual Machines...................................................................................... 213

System Monitoring .................................................................................. 214

Virtual Environment Monitoring............................................................................ 215


Monitoring Virtual Compute and Network ......................................................................... 216
Resource Monitoring ........................................................................................................ 217
Monitoring Inventory Objects with Performance Charts .................................................... 218
Monitoring VMs ................................................................................................................ 219
VM Performance Monitoring ............................................................................................ 220
vSphere Distributed Switch Health ................................................................................... 221
vSphere Distributed Switch - Port Groups and Uplinks..................................................... 222
Monitor vSAN Health ....................................................................................................... 223
Monitoring vSphere Events and Alerts ............................................................................. 224
Monitoring RHV Environment ........................................................................................... 225
Monitor the RHV Environment.......................................................................................... 226

VxFlex OS Monitoring ............................................................................................ 227


Monitoring VxFlex OS Cluster Using GUI ......................................................................... 228
View Object Properties..................................................................................................... 229
View System Alerts .......................................................................................................... 230
State Summary View ....................................................................................................... 231
VxFlex OS Events Overview ............................................................................................ 232

Event Record Structure .................................................................................................... 233
Recommended Action for VxFlex OS Events ................................................................... 234
Forward VxFlex OS Events to syslog ............................................................................... 235

Hardware Monitoring ............................................................................................. 236


Server Hardware Monitoring Using iDRAC....................................................................... 237
Server System Information ............................................................................................... 238
Server Storage Monitoring ............................................................................................... 239
Configuring iDRAC Alerts................................................................................................. 240
Configuring OME Alerts ................................................................................................... 241
Check Interfaces on Physical Switch................................................................................ 242
Check VLAN Status on Switch ......................................................................................... 243
Physical Switch Port-Channel Summary .......................................................................... 244
Show Details of Switch Interface/Port .............................................................................. 245

Health Monitoring with VxFlex Manager .............................................................. 246


VxFlex Manager Dashboard - Health ............................................................................... 247
VxFlex Manager Dashboard - Utilization and Storage ...................................................... 248
VxFlex Manager - Service Details .................................................................................... 249
VxFlex Manager - Resources ........................................................................................... 250
VxFlex OS Details ............................................................................................................ 251
Node Details .................................................................................................................... 252
VxFlex Manager - Compliance Scan ................................................................................ 253
Secure Remote Services for Call Home ........................................................................... 254
Secure Remote Services with VxFlex Manager................................................................ 255
SNMP to Monitor VxFlex Integrated Rack ........................................................................ 256
SNMP and Syslog Forwarding with VxFlex Manager ....................................................... 257
Configuring vCenter to Send SNMP Alerts ....................................................................... 258
Configuring SNMP Trap Forwarding for Cisco Nexus Switches ....................................... 259
Lab: System Monitoring ................................................................................................... 260

VxFlex Integrated Rack Upgrade and Troubleshooting ...................... 261

System Life Cycle Management and Upgrade ..................................................... 262


Life Cycle Management Using Release Certification Matrix ............................................. 263

Compliance Check Using VxFlex Manager ...................................................................... 264
Adding Compliance File to VxFlex Manager..................................................................... 266
VxFlex Integrated Rack Upgrade ..................................................................................... 268
Professionally Managed Upgrade Benefits....................................................................... 269
Upgrade Planning and Preparations ................................................................................ 270
Upgrade Considerations and Best Practices .................................................................... 271
Upgrade Using VxFlex Manager ...................................................................................... 273

VxFlex Integrated Rack Basic Troubleshooting .................................................. 275


VxFlex Integrated Rack Problem Management ................................................................ 276
Troubleshoot Network Issues ........................................................................................... 277
Troubleshoot I/O Path ...................................................................................................... 278
Troubleshoot VM Access ................................................................................................. 279
Validate DVswitch Uplink Status ...................................................................................... 280
VxFlex OS SDS Connectivity Status ................................................................................ 281
Check Physical Switch Status .......................................................................................... 282
Troubleshoot Performance Issues ................................................................................... 283
MTU Size Errors .............................................................................................................. 284
MTU Size on Kernel Adapter............................................................................................ 285
Switch Interface Errors ..................................................................................................... 286
Load Balancing Policy ...................................................................................................... 287
Show Switch Interface CRC Details ................................................................................. 288
Common VxFlex OS Issues ............................................................................................. 289
vSAN Troubleshooting in Controller Cluster ..................................................................... 290
VxFlex Manager Service Mode ........................................................................................ 291
Drive Replacement Using VxFlex Manager ...................................................................... 293
VxRack Node Health Check ............................................................................................. 294

Log Collection ........................................................................................................ 295


Log Collection Using VxFlex Manager ............................................................................. 296
VMware vCenter, ESXi, and vSAN Logs .......................................................................... 297
Access VMware Logs Using CLI ...................................................................................... 298
Access Logs for Storage-Only Nodes .............................................................................. 299

Collect VxFlex OS Logs ................................................................................................... 300
Collect Logs Using VxFlex OS Plug-in ............................................................................. 301
Collect Server Logs Using iDRAC .................................................................................... 302
Lab: Maintenance Mode and Licensing ............................................................................ 303
Course Summary ............................................................................................................. 304
Dell EMC Proven Professional Certification Path ............................................................. 305

Course Introduction

VxFlex Integrated Rack Administration

Introduction

This course focuses on the administration and management of the VxFlex Integrated Rack (formerly VxRack FLEX). It presents the tools and processes used to monitor, configure, and maintain the VxFlex integrated rack. The course also provides demonstrations and hands-on labs covering VxFlex integrated rack administration activities.


Course Objectives

By the end of this course, you will be able to:

- Describe system architecture and theory of operation
- Describe and use various tools for system management
- Provision and manage storage and virtual compute resources
- Configure and manage data protection and security
- Monitor alerts, events, and health of the system
- Perform basic maintenance and troubleshooting


Prerequisite Skills

The following skills are prerequisites:

- Knowledge of hyperconverged infrastructure and the software-defined data center
- VxFlex integrated rack product fundamentals
- Configuring and managing virtual resources using VMware vSphere tools
- Configuring and managing IP networking and Cisco switches and routers (3K and 9K)


VxFlex System FLEX Rebranding


Course Agenda

Introductions

VxFlex Integrated Rack Overview and Management Tools

This module revisits the VxFlex integrated rack architecture and configurations. It
also introduces management tools that are used by an administrator to configure,
monitor, and troubleshoot VxFlex integrated rack.

Upon completing this module, you will be able to:

- Describe VxFlex integrated rack architecture, components, and capabilities
- Describe and use the management interfaces to perform VxFlex integrated rack administration


VxFlex Integrated Rack Architecture and Components

This lesson reviews the VxFlex integrated rack architecture and operations. It
presents hardware and software components, networking, and the logical
configuration of the system. It also covers a list of resources and documentation
available to support managing and maintaining VxFlex integrated rack in your
environment.

This lesson presents the following topics:

- VxFlex integrated rack hardware components
- VxFlex OS components and operation
- VxFlex integrated rack deployment options
- VxFlex integrated rack networking


Dell EMC VxFlex Integrated Rack

Dell EMC VxFlex integrated rack is a hyperconverged, rack-scale engineered system with integrated networking. It provides the scalability, performance, and ease of management required by both traditional and modern workloads. VxFlex integrated rack is purpose-built to enable customers to quickly deploy Infrastructure-as-a-Service (IaaS) and private cloud architectures. The system helps businesses reduce cost and risk by buying a pre-engineered, fully configured system rather than building one.

- VxFlex integrated rack is designed for extreme scalability. It includes a fully integrated Cisco rack-scale network fabric. You can start small and grow to hundreds of nodes in a cluster.
- With a flexible architecture, you have a choice of hypervisor: VMware vSphere or Red Hat Virtualization. Other VxFlex OS (formerly called ScaleIO) compatible operating systems and hypervisors are supported through bare-metal nodes.
- VxFlex integrated rack uses the VxFlex OS self-healing architecture, which employs data mirroring and rebuild mechanisms. This enables a "six nines" (99.9999%) availability profile. In addition, the hardware provides no single point of failure throughout the cluster. The system also supports enterprise-class backup and disaster recovery solutions from Dell EMC.
- VxFlex integrated rack uses Dell EMC VxFlex OS, a software-defined storage solution. VxFlex OS provides massive parallelism, eliminating hot spots by evenly distributing data among servers and media devices. The resulting symmetry leads to high and predictable system performance.
- VxFlex integrated rack comes with a rich set of tools to manage the environment. VxFlex Manager enables administrators to deploy and manage the infrastructure and workloads. VxFlex OS interfaces and hypervisor-native tools also provide options to configure and manage system resources. The Release Certification Matrix (RCM) enables system life cycle assurance through compliance.


VxFlex Integrated Rack Workload Examples

Here are common deployment use cases for the VxFlex integrated rack. The
applications/workloads can vary from high-performance business database
applications such as Oracle, SAP, and Microsoft, to modernized analytic
applications such as Hadoop and Splunk.

The image on the slide represents workloads that run on VxFlex integrated rack.
These workloads are sorted by industry verticals.


VxFlex Integrated Rack Hardware Components

A VxFlex integrated rack system consists of VxFlex nodes, a Management Controller, access switches, aggregation switches, and out-of-band management switches.

VxFlex nodes: VxFlex integrated rack nodes use Dell PowerEdge servers to provide computing power and storage. The available server options for VxFlex nodes are PowerEdge R640, R740xd, and R840. Older systems use PowerEdge models R630 and R730xd. VxFlex nodes are available as hyper-converged, storage-only, or compute-only nodes. These nodes come with various combinations of processor, storage, and memory to provide the choice and flexibility needed in a customer environment.

The storage-only nodes run the Red Hat Enterprise Linux operating system and provide storage capacity independent of processing power. The compute-only nodes provide computing power independent of storage. The hyper-converged nodes provide both processing power and storage capacity. All nodes have four 10/25-GbE SFP28 ports along with a management port for iDRAC.


Tip: All nodes use a storage controller (RAID) card that contains a pair of M.2 SSDs as a boot device called BOSS (Boot Optimized Storage Solution). BOSS is a plug-in PCIe device that provides protection and high-speed connectivity. Refer to the VxFlex integrated rack datasheet for details on node types and specifications.

VxFlex Management Controller: A minimum of three nodes are clustered to provide the Management Controller. They are clustered separately from the production VxFlex nodes and run various management and service VMs. The Management Controller uses 5-disk or 8-disk PowerEdge R640 servers and runs VMware ESXi. The Controller node drives are used as storage for the virtual machines running on them. However, this storage pool is separate from the production storage and uses VMware vSAN software-defined storage technology. A separate VMware vCenter is required to manage the Controller cluster.

The networking in VxFlex integrated rack includes:

- A pair of Cisco Nexus 93180YC-EX access switches for communication between nodes. In a multirack configuration, each rack requires a pair of access switches. Another option for the access switch is the Cisco Nexus 93240YC-FX2, which is designed for spine-leaf deployment in data centers. You may see a pair of Cisco Nexus 3132Q-X or Cisco Nexus 3164Q switches in older VxFlex integrated rack environments.
- A pair of Cisco Nexus 9332PQ or 9236C aggregation switches used for inter-rack communication.
- A Cisco Nexus 31108TC-V or 3172TQ management switch that provides the connection for out-of-band traffic, such as iDRAC. More management switches can be added based on the port count requirement.


Example of a cabinet elevation


Dell EMC VxFlex OS – Software Defined Storage

VxFlex OS is foundational to VxFlex integrated rack. It is software-defined distributed shared storage that can deliver high levels of performance and scalability. VxFlex OS pools together local storage devices from each node. Volumes are then created from the pooled storage and used as shared storage among all nodes. The software components that make up VxFlex OS include the Storage Data Client (SDC), the Storage Data Server (SDS), and the Metadata Manager (MDM).

The SDC is a lightweight driver that enables a server to consume storage, providing front-end storage volume access for file systems and applications.

The SDS enables a node in the VxFlex OS cluster to contribute its local storage to
the aggregated pool. Any server that contributes storage to the VxFlex OS cluster
needs a running instance of the SDS.

The MDM manages the metadata, cluster configurations, monitoring, rebalance, and rebuild tasks. Additionally, the MDM handles all user interaction with the system. In production clusters, the MDM is configured in redundant Cluster mode. For example, a 5-node MDM cluster consists of one Master MDM, two Slave MDMs, and two Tie-Breakers spread across five nodes.
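
The MDM cluster state can be queried from the VxFlex OS CLI (scli) on the Master MDM. The following is a minimal sketch, assuming a local (non-LDAP) admin user; output details vary slightly between VxFlex OS releases:

    scli --login --username admin --password <password>
    scli --query_cluster          # cluster mode, Master/Slave MDMs, and Tie-Breakers
    scli --query_all_sds          # SDS nodes and their connectivity state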


Example of a Protection Domain containing two fault sets (Fault Set A and Fault Set B)

A Protection Domain is a subset of SDSs. When data is written to a VxFlex OS cluster, it is mirrored onto two different devices on two different SDSs within a Protection Domain.

Storage Pools enable the creation of different storage tiers in the VxFlex OS
cluster. A Storage Pool is a set of physical storage devices in a Protection Domain.
When a volume is configured from a Storage Pool, it is distributed over all devices
and servers contributing to that pool.

Depending on the allocation unit size, a storage pool layout can be of either the "Medium Granularity (MG)" or "Fine Granularity (FG)" type. In MG storage pools, volumes are divided into 1 MB allocation units, distributed and replicated across all disks contributing to a pool. FG storage pools are more space efficient, with an allocation unit of just 4 KB and a physical data placement scheme based on the Log Structured Array (LSA) architecture.

A Fault Set is a logical entity that creates groups of SDSs in a Protection Domain
that are likely to fail as a group. For example, SDSs that are all powered in the
same rack. By design, VxFlex OS never maintains both copies of a block of data in
the same Fault Set.
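
Protection Domains, Fault Sets, and Storage Pools can also be created from the VxFlex OS CLI. The following is a hedged sketch using placeholder object names (PD1, FS1, SP1); the exact options available depend on the installed VxFlex OS version:

    scli --add_protection_domain --protection_domain_name PD1
    scli --add_fault_set --protection_domain_name PD1 --fault_set_name FS1
    scli --add_storage_pool --protection_domain_name PD1 --storage_pool_name SP1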


VxFlex OS stores data in small chunks that are 1 MB (MG storage pool) or 4 KB (FG storage pool) in size. The MDM orchestrates placement and movement of data chunks within a VxFlex OS cluster. The MDM also assigns a chunk to a specific SDS. When a chunk location changes from one SDS to another, as in the case of rebalancing, the MDM tells the SDC that its metadata is out-of-date and to update it as needed.

Each SDC node has an in-memory map that holds information about which SDS owns which chunk of data. The SDC cache is extremely space-efficient, consuming a minimum of memory. For example, for the 1-MB allocation unit, VxFlex OS can store all needed metadata for 10 PB of data in 3 MB of RAM. VxFlex OS holds hundreds of thousands of metadata entries within this 3 MB, which makes the system highly scalable. The SDS controls the allocation of chunks to specific data locations within the SDS. The SDS also maintains its own metadata for this information.


VxFlex Integrated Rack Deployment Options

VxFlex integrated rack can be deployed as a hyperconverged architecture, a 2-layer architecture, or a mixed architecture.

- Hyperconverged deployment: This architecture uses hyper-converged nodes. These nodes perform both the storage data server and data client roles. You can also add storage-only nodes for capacity expansion.
- Two-layer deployment: This architecture consists of separate layers of compute-only and storage-only nodes. The SDC runs on the compute nodes and the SDS runs on the storage-only nodes. Besides the separation of roles, this architecture also separates the front-end (host) and back-end (storage) data traffic. In this deployment, SDC-to-SDS and SDS-to-SDS communication uses a separate, dedicated pair of NICs/VLANs.
- Mixed deployment: This architecture enables a deployment that mixes the fully converged and two-layer models. This flexibility allows organizations to adopt both approaches and enables each department to choose the model that best aligns with its processes.

Important: Storage-only nodes used for 2-layer deployment come with two additional ports for dedicated storage communication.


Best Practice: Dell EMC recommends a base configuration of six VxFlex nodes (four minimum) for hyperconverged deployment to gain the benefit of VxFlex OS performance. For two-layer deployment, the recommended base configuration is a minimum of three compute nodes and six storage nodes (four minimum).
After the base configuration, nodes can be added in any increment. However, the Dell EMC recommendation is to add nodes in increments of four at a time. More racks can be added for expansion or for data protection capabilities, such as Avamar and Data Domain.


VxFlex Integrated Rack Networking - Physical View

The one-rack-unit Cisco Nexus 93180YC-EX access switch provides 48 1/10/25-Gbps SFP+ ports and six 40/100-Gbps QSFP+ uplink ports. It provides 10 GbE and 25 GbE IP connectivity between the FLEX nodes and Controller nodes. It also provides a 40 Gb uplink connection to the external network or aggregation layer.

A pair of Cisco Nexus 9332PQ or 9236C aggregation switches is used to scale beyond a single rack. A 9236C switch provides 32 ports for access switch connectivity. Therefore, a pair of 9236C switches can support up to 16 cabinets with a single uplink, or eight cabinets with dual uplinks, from the access switches. Sixteen cabinets scale up to 384 nodes, including the VxFlex Controller nodes. The Cisco 9332PQ provides 28 ports per switch for access switch connectivity. Therefore, with a single uplink from each access switch, it can aggregate up to 14 access switch pairs and 336 nodes.
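
As a rough check on these node counts (a sketch, assuming each node consumes two of the 48 access ports on each switch in the pair, consistent with the four-connections-per-node cabling described later in this lesson):

    48 access ports per switch / 2 ports per node = 24 nodes per cabinet
    24 nodes per cabinet x 16 cabinets = 384 nodes
    24 nodes per cabinet x 14 cabinets = 336 nodes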

The Cisco Nexus 3172TQ management switch provides 1-Gb connections for out-of-band management. It uplinks directly to the customer management network to provide access to the component management ports.


Important: Each node requires four connections to the access switches (two for each switch). Two of these physical connections are aggregated using the Cisco virtual port channel (vPC) technology. The remaining two connections (one for each switch) are dedicated for data traffic only.

Tip: Refer to the VxFlex Integrated Rack Architecture Overview document for details on physical and network topology and switch port configurations.
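
On the access switches themselves, the vPC and port-channel state can be verified from the NX-OS CLI. This is a minimal sketch; interface and vPC numbers vary per deployment:

    show vpc brief              # vPC domain status, peer link, and per-vPC state
    show port-channel summary   # port channels and their member interfaces
    show interface status       # link state, speed, and VLAN of each port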


Spine-leaf Architecture

For greater scalability without network oversubscription, VxFlex integrated rack can be configured with a spine-leaf architecture. In this architecture, the access switches (Cisco Nexus 93180YC-EX) are replaced by leaf switches (Cisco Nexus 93240YC-FX2), and the aggregation switches (Cisco Nexus 9236C) are replaced with spine switches (Cisco Nexus 9336C-FX2). The leaf switches connect to the spine switches in a full-mesh topology. This environment can scale out by adding leaf switches, and oversubscription can be reduced by adding spine switches. The spine-leaf architecture ensures predictable latency because server traffic always travels the same number of hops to reach another server (except for servers on the same leaf). Uplink to the customer core network is provided through two or four border leaf switches. With the spine-leaf architecture, Layer 3 gateways are available at the leaf switches. This distributed gateway enables seamless VM migration between racks. VxFlex integrated rack currently supports three to six spine switches.

With the spine-leaf architecture, the maximum number of nodes in a single cluster is still 384, including the controller nodes. This is the same maximum available with the access/aggregation topology, but with six spine switches there is a 1:1 oversubscription ratio on the switches, meaning effectively no oversubscription. As a result, there is no performance degradation even if every node is using its network at full capacity.
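
A back-of-the-envelope check of the 1:1 ratio (a sketch, assuming each Cisco Nexus 93240YC-FX2 leaf uses its 48 x 25-GbE server-facing ports and 12 x 100-GbE uplinks toward the spines):

    Downlink bandwidth per leaf: 48 ports x 25 Gbps  = 1,200 Gbps
    Uplink bandwidth per leaf:   12 ports x 100 Gbps = 1,200 Gbps
    Oversubscription ratio:      1,200 : 1,200       = 1:1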


Example of a spine-leaf network layout: leaf switches (Access 1A, Access 1B, Access 2A, Access 2B) connecting to the customer network through Border Leaf 01 and Border Leaf 02


VxFlex Integrated Rack Networking - Logical View

In VxFlex integrated rack, logical networking is provided by the hypervisor. Distributed virtual switches, or DVswitches, are configured to manage virtual networking. Both the Management Controller and the VxFlex node cluster use three DVswitches, each with multiple port groups. Port groups on DVswitches are identified with a network label and associated VLANs.

Each VxFlex node contributes four physical adapters, or vmnics, to the DVswitch uplinks. Two of these connections are aggregated using Virtual Port Channel (vPC). vPC is used only for nondata traffic on VxFlex nodes. The figure shows a range of typical vPC identifiers that might be used for a series of servers. For example, the first FLEX node is vPC 111 and the second is vPC 112. The other two connections are dedicated to VxFlex OS data traffic. DVSwitch 0 is used for all the management and customer production VLANs. DVSwitch 1 and DVSwitch 2 are used for the VxFlex OS data 1 and data 2 VLANs, respectively. Each VxFlex node also provides an Ethernet port for the out-of-band iDRAC connection.

VxFlex Management Controller logical networking is slightly different from the VxFlex node cluster, as shown in the graphic. Besides the four ports (using vPC) and the iDRAC connection, another port is configured for the connection from DVSwitch2 to the management switch. Note that the VLANs are illustrated here with their default identifiers. They might be different if there is a conflict with the customer's existing networking.


Logical networking of storage-only nodes in the fully converged deployment is shown below. Two ports are bonded together for nondata traffic, and the other two are bonded to carry data traffic.


In a 2-layer deployment, logical networking for the Management Controller and the compute layer is similar to the fully converged deployment. Remember that compute-only nodes support only the ESXi hypervisor. The difference lies in the storage layer networking. Nodes in the storage layer run Red Hat Enterprise Linux. Each node in this layer provides six ports: four 25G and two 10G. The two 10G ports are bonded together to carry all the nondata traffic; for simplicity, em1 and em2 are shown as one port in the graphic. Two 25G ports are used for data traffic between the SDC and SDS, shown here as the data1 and data2 networks. The remaining two 25G ports are dedicated to back-end storage traffic only, shown here as the data3 and data4 networks.
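
On a storage-only node, the bonded interfaces can be inspected from the Red Hat Enterprise Linux shell. This is a minimal sketch; bond0 is an assumed bond name, and the actual interface names depend on how the node was deployed:

    cat /proc/net/bonding/bond0   # bonding mode, link status, and member interfaces
    ip -br addr show              # state and addresses of all interfaces, including bonds and VLANs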


Note that both compute-only and storage-only nodes have iDRAC connections for out-of-band node management (not shown in the graphic).


VxFlex OS Deployment on ESXi

In a VMware environment, the MDM and SDS components are installed on a dedicated virtual machine called the Storage VM (SVM). An SVM is configured on each ESXi host, whereas the SDC is installed in the hypervisor kernel as a VIB (vSphere Installation Bundle) agent. The SVM is a Linux (CentOS 7.5) based virtual machine that is dedicated to VxFlex OS. The SDC creates a logical adapter, which is an ESXi kernel construct. The logical adapter informs ESXi about the arrival and disappearance of SCSI devices. These LUNs can be formatted with VMFS and then exposed to the virtual machines.
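
The presence of the SDC VIB on an ESXi host can be checked with esxcli. This is a hedged sketch; the exact VIB name varies by VxFlex OS/ScaleIO release, so the filter pattern below is only illustrative:

    esxcli software vib list | grep -i -E "scaleio|sdc"   # installed VIBs, filtered for the SDC
    esxcli storage core adapter list                      # the SDC appears as a logical storage adapter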

VMware DirectPath I/O is a technology that enables guests to directly access hardware devices. A VM with DirectPath I/O can access the HBA/RAID card directly instead of going through the hypervisor. DirectPath I/O improves VM performance by saving CPU cycles for high-performance workloads. Also, if a storage drive fails, the SVM is alerted immediately without going through the VMware stack.

Caution: PowerEdge Rx40 (R640, R740, ...) servers with any NVMe devices cannot be added to a DirectPath I/O-based system. Instead, you can use an RDM-based system.


VxFlex OS Cluster in a VxFlex Integrated Rack Example

The environment presented here is an example of a VxFlex OS cluster in a VxFlex integrated rack. The cluster consists of four nodes: three ESXi nodes and one storage-only node running Red Hat Enterprise Linux. All the nodes are configured in a single Protection Domain and contribute to the same Storage Pool. The nodes form a 3-node MDM cluster, with one node each acting as the Master MDM, Slave MDM, and Tie-Breaker. The RHEL node in the cluster is a storage-only node and runs only the VxFlex OS SDS. The volumes are created from the VxFlex OS Storage Pool and mapped to the SDCs. These volumes appear as raw LUNs to the OS/hypervisor.
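
From an ESXi host that runs the SDC, mapped VxFlex OS volumes show up as block devices. A hedged sketch (the device display names depend on the VxFlex OS release, so the filter string is only illustrative):

    esxcli storage core device list | grep -i -B 1 -A 3 scaleio   # mapped volumes appear as ScaleIO/VxFlex OS devices
    scli --query_all_volumes                                      # on the Master MDM: volumes, sizes, and SDC mappings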


Dell EMC Cabinet

Dell EMC cabinets are configured with a Panduit Intelligent Physical Infrastructure (IPI) appliance. The IPI Appliance provides an intelligent gateway to gather information about power, thermals, security, alerts, and all components in the physical infrastructure for each cabinet.

The IPI Appliance incorporates door thermal sensors, door handle sensors, HID
security door handles, and intelligent PDUs. The IPI Appliance is the central point
of information for all intelligent operations in the cabinet. The PDUs enable remote
monitoring capabilities in each IPI cabinet and outlet-level control for each PDU.
Each cabinet has its own appliance with standard, redundant power.

The IPI Appliance is configured with default settings in the factory. Within the
solution, there are environmental, security, and power requirements, in addition to
asset and thermal management considerations.


Reference Documentation

Listed here is important documentation that administrators need to understand their VxFlex integrated rack solution and its specifications. Also listed are links to obtain more resources and documentation for VxFlex integrated rack, Cisco, VMware, and other Dell EMC products.


VxFlex Integrated Rack Administration Scope and Tools

This lesson presents a high-level overview of key VxFlex integrated rack administration and management activities. Then, it presents tools to perform these activities. The lesson assumes that an administrator is already familiar with VMware and Cisco management utilities to manage their compute and networking environment.

This lesson presents the following topics:


 Introduction to VxFlex integrated rack administration activities
 Overview of VxFlex Manager
 Overview of iDRAC, hypervisor management tools, and Cisco NX-OS CLI


VxFlex Integrated Rack Administration Scope

VxFlex integrated rack systems are delivered prebuilt, preconfigured, and fully tested at the factory for optimum system operation. This eliminates the need for most system configuration, monitoring, and performance tuning tasks. Many of the availability, load balancing, and capacity management tasks are highly automated, which makes it a low-touch, easy-to-manage solution. However, VxFlex integrated rack administrators are still responsible for managing its day-to-day operation.

Administrators are primarily responsible for managing system health based on alerts and notifications. They also configure VMs, VM networks, VxFlex OS volumes, system users, backup, and data protection. Besides these day-to-day activities, administrators are also required to monitor system compliance, security, and availability. They have the option to collect various reports on capacity trends and compliance scans from the system. These reports enable administrators to see how the system is operating and identify where improvements should be made. These tasks can be performed using the various management tools available with VxFlex integrated rack.


Tools for Managing VxFlex Integrated Rack

VxFlex integrated rack consists of various hardware and software components. Each of these has its own interface for monitoring and management.

 VxFlex Manager – A unified management and orchestration solution to deploy, manage, and maintain the VxFlex integrated rack infrastructure and workloads.
 Integrated Dell Remote Access Controller (iDRAC) – An out-of-band management interface for the PowerEdge servers.
 NX-OS (Nexus Operating System) CLI – An interface used to configure and manage physical networking with Cisco switches.
 VMware vSphere Client and Web Client – Interfaces used to manage and monitor the VMware environment, including the ESXi hosts, VMs, and virtual networks.
 Red Hat Virtualization Manager – An interface used to manage hosts, VMs, and other virtual resources in a Red Hat Virtualization environment.
 VxFlex OS GUI, CLI, and VxFlex OS vSphere plug-in – Interfaces used to configure and manage VxFlex OS resources. The VxFlex OS plug-in for VMware vSphere allows management of the VxFlex OS cluster directly from the vSphere Web Client.


Introduction to VxFlex Manager

VxFlex Manager is a management and orchestration tool developed for VxFlex integrated rack. VxFlex Manager features a user interface that provides an intuitive, end-to-end infrastructure automation experience through a unified console.

VxFlex Manager is a virtual appliance that runs on the management cluster of the VxFlex integrated rack. It uses REST clients and services for all communication between the VxFlex Manager UI and its underlying resources. Using VxFlex Manager, you can define and capture infrastructure requirements using templates. Once defined, the process can be easily repeated through automation. With VxFlex Manager, you can easily add resources such as nodes, volumes, and VLANs.

VxFlex Manager enables discovery of servers, switches, element managers, and the VxFlex OS Gateway. Integration with OpenManage Enterprise (OME) and Secure Remote Services provides call-home capabilities for critical hardware and VxFlex OS alerts. It also provides automated Release Certification Matrix (RCM) compliance monitoring and non-disruptive remediation.


VxFlex Manager Architecture


VxFlex Manager User Interface

The Getting Started page provides a guided flow through the common configurations that are required to prepare a new VxFlex Manager environment. A green check mark on a step indicates that you have completed the step. As an administrator, you should find VxFlex Manager (VxFM) already configured during implementation; however, for any future expansions or infrastructure changes, you can revisit this page. The Getting Started page provides the following information:

 Step 1: Release Certification Matrix - Provides the RCM location and


authentication information for use within VxFlex Manager.
 Step 2: Networks - Provides detailed information about the available networks
in the environment. The networks defined in VxFlex Manager are used in
templates to specify the networks or VLANs that are configured on nodes and
switches for services.
 Step 3: Discover - Grants VxFlex Manager access to resources in the
environment by providing the management IP and credential for the
discoverable resources.
 Step 4: Add Existing Service - Discovers VMware clusters that are already
deployed in the environment and manages them within VxFlex Manager.
 Step 5: Templates - Creates a template with requirements to follow during
deployment. Templates enable you to automate the process of configuring and
deploying infrastructure and workloads.


The Dashboard provides a utilization overview of the services and resources being managed by VxFlex Manager. It displays at-a-glance information about service history, resource overview, appliance overview, and activity logs. The dashboard contains the following sections:

 The Service Overview section displays a graphical representation of the


available services based on the status.
 The Resource Overview section displays information about Discovered
Resources, Node Health, Node Utilization, and Storage Utilization in detail.
 The Appliance Overview section provides a set of buttons that enable you to
perform the most commonly used actions within VxFlex Manager. In addition,
the Appliance Overview displays a gauge meter that shows the storage used by
the appliance.
 The Activity Log section lists the most recent user and system-initiated
activities.


A service is an object in VxFlex Manager representing the infrastructure configured during a deployment. VxFlex Manager supports the deployment of three types of VxFlex OS services: Hyper-converged, Storage Only, and Compute Only. The Services page displays the services along with their health status. On this page, you can Deploy New Service or Add Existing Service to VxFM.

When a Service is selected, a comprehensive view of the Service Details is displayed. The Port View tab displays the port view for the selected component.

A template specifies requirements for the deployment of a set of infrastructure resources through VxFlex Manager's built-in automation workflows. On the Templates page, you can access the default sample templates or create templates that meet your specific requirements. For example, you can create a template for
deploying a physical server. After creating a template, the template is saved in a draft state. You must publish the template before deploying it.

A resource is a physical or virtual data center object that VxFlex Manager interacts with, including but not limited to servers/nodes, network switches, VM managers, and element managers. The Resources page displays detailed information about all the resources and server pools that VxFlex Manager has discovered and inventoried.

VxFM online help is built into the tool and serves as the user guide.


Integrated Dell Remote Access Controller (iDRAC)

The Integrated Dell Remote Access Controller (iDRAC) is designed to make server
administrators more productive and improve the overall availability of the Dell
servers. iDRAC alerts administrators to server issues, helps them perform remote
server management, and reduces the need for physical access to the server.

iDRAC with Lifecycle Controller is embedded in every Dell PowerEdge server. It provides functionality that helps you deploy, update, monitor, and maintain the Dell PowerEdge servers. Because it is embedded within each server from the factory, it requires no operating system or hypervisor to work.

As a part of iDRAC, the Dell Lifecycle Controller simplifies server life cycle management tasks such as provisioning, deployment, servicing, user customization, patching, and updating. It is a collection of out-of-band automation services, embedded pre-OS applications, and remote interfaces that give you deployment, update, and maintenance capabilities through managed, persistent storage. Lifecycle Controller reduces the time spent on management tasks, reduces the potential for error, improves security, and increases overall efficiency in your VxFlex integrated rack environment.

Key iDRAC features are:


 Inventory and monitoring


   Inventory and monitor server health, network adapters, and the storage subsystem
   Support for SNMPv3 GET requests
 Deployment
   OS, network settings, storage device, and virtual media configuration
   Enable auto-discovery
   Configure policies for virtual addresses, initiators, and storage targets
   Update BIOS and device firmware
 Maintenance and Troubleshooting
   Optimize system performance and power consumption
   Log events and set alerts
   Blink/unblink component LEDs
   Back up and restore the server profile

When you log in to the iDRAC web interface, the system Dashboard page provides a summary of the managed server. You can view system health, information, and the virtual console. The tabs at the top provide information about system and storage components, configuration options, and server maintenance.
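iDRAC can also be queried remotely with the racadm utility. The following is a minimal sketch; the IP address, user name, and password are placeholders for your environment:

    racadm -r 192.168.10.21 -u root -p <password> getsysinfo    # summary of server, iDRAC, and firmware information
    racadm -r 192.168.10.21 -u root -p <password> getsel        # read the System Event Log for hardware events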


VMware vCenter Server and vSphere Web Client

As in any typical VMware vSphere deployment, the vCenter Server is the centralized tool used to configure and manage the virtual compute and network environment. VMware vCenter Server Appliance (vCSA) is a preconfigured Linux-based virtual machine that is optimized for running vCenter Server and its components. vCSA provides functionality that includes cloning of VMs, template creation, VMware vMotion, and Storage vMotion. It is also used for the initial configuration of VMware Distributed Resource Scheduler (DRS) and VMware vSphere high-availability clusters.

Important: For vSphere 6.0, VMware vCenter Update Manager (VUM) is integrated into the vCSA and runs as a service to assist with host patch management. For vSphere 6.5, the VUM patch management is part of vCSA.

VxFlex integrated rack has two separate vSphere environments - one for the
VxFlex node cluster (running production applications), and the other one for the
VxFlex Management Controller cluster. Both the environments have distinct
elements such as ESXi hosts, networks, VMs, and datastores.

vCSA provides functionality that includes:


 Cloning of VMs
 Template creation
 VMware vSphere vMotion, and VMware Storage vMotion
 Initial configuration of VMware Distributed Resource Scheduler (DRS) and
VMware vSphere high-availability (HA) clusters

For more information about VMware vSphere and vCenter Server, see
www.vmware.com.

The VMware vSphere Web Client is a browser-based, fully extensible, platform-independent user interface. The vSphere Web Client uses the VMware API to mediate communication between the browser and the vCenter Server. From the vSphere Web Client, you can configure, monitor, and manage the VMware virtualization environment. This environment includes the clusters, networks, distributed virtual switches, datastores, ESXi hosts, and virtual machines. You can also get information about the hardware environment when needed.

The vSphere Client is an HTML5-based interface used to administer vCenter Server and ESXi. With version 6.7 U1, the vSphere Client has all the features available in the Web Client. Note that the VxFlex OS plug-in for vSphere is only accessible through the Web Client.


Red Hat Virtualization Manager

Red Hat Virtualization Manager (RHV-M) provides a user interface and a RESTful API to manage the resources in a Red Hat Virtualization environment. RHV-M is an appliance-style management package that is installed as a virtual machine in the VxFlex Management Controller cluster (similar to the VMware vCSA). It provides a rich set of capabilities to monitor and manage virtual resources. Besides handling standard virtual machine tasks, such as VM creation, virtual networking, and VM storage, it provides policy-based VM scheduling, user access management, and automation through Ansible.

RHV-M also provides comprehensive monitoring of the system resources through the Dashboard, logs, and event notifications. You can define quality of service at the data center level and assign it to individual resources such as storage, VM networks, and CPU for fine-grained control over these resources.

The engine-backup tool provides capability to back up and restore the RHV-M
database and configuration. Backup and restore APIs used by RHV-M enable an
administrator to perform full or file-level backup and restore of a virtual machine
and its data.
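For example, a full RHV-M backup can be run from the Manager VM with the engine-backup tool. This is a minimal sketch; the file names are placeholders, and options may vary by RHV version:

    engine-backup --mode=backup --scope=all --file=rhvm-backup.tar.bz2 --log=rhvm-backup.log    # back up the engine database and configuration

Restores use --mode=restore with the same file, typically with additional database options depending on the recovery scenario.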

The VxFlex Management Controller also contains a Red Hat Satellite Server VM integrated with RHV-M. The Satellite Server manages both Red Hat Enterprise Linux and RHV subscriptions required for updates and software patches. The Satellite Server also enables RHV-M to view and receive available errata.


For more information about Red Hat Virtualization, see www.redhat.com/rhv

The RHV-M Dashboard provides an overview of the health and status of the RHV
environment. It displays the summary of system resources and their utilization. The
top section of the Dashboard provides a global inventory of resources and their
status. The Global Utilization section shows the overall utilization of the system
resources, displayed as percentage and line graph of utilization in the last 24
hours. The Cluster Utilization section shows the cluster utilization for the CPU and
memory in a heatmap.

You can further drill down to each type of resource by navigating through the
resource tabs in the right-side panel of the Dashboard. The Compute tab provides
access and configuration to compute resources including VMs, Templates, and
Hosts. The Network tab provides access to various network resources and vNIC
Profiles. The Storage tab provides access to Domains, Volumes, and Disks.

The Administration tab provides options to configure and manage system administration tasks. It includes managing users, roles, Scheduling Policy, MAC Address Pools, and resource Quotas. Here you can also manage external resource providers including RH Satellite server and OpenStack resources. You can configure and manage system events and notifications under the Events tab.


Cisco NX-OS CLI to Configure Nexus Switches

NX-OS is the operating system for the Cisco Nexus series Ethernet switches. NX-OS provides switch management functionality through a CLI. It provides:

 Static and dynamic routing, and VLAN configuration
 Authentication, configuration, performance, and health statistics
 Debugging and logging
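For example, a few commonly used NX-OS show commands for verifying switch configuration and health (a sketch; output depends on the switch model and configuration):

    show vlan brief              # list configured VLANs and their port membership
    show interface status        # verify port state, speed, and VLAN assignment
    show port-channel summary    # check port-channel/vPC member health
    show running-config          # display the active switch configuration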


VxFlex OS Management Interfaces

This lesson presents the VxFlex OS management interfaces that are extensively
used to configure and manage storage resources in the VxFlex integrated rack
environment.

This lesson presents the following topics:


 VxFlex OS GUI
 VxFlex OS Plug-in for VMware vSphere
 VxFlex OS CLI


VxFlex OS GUI Dashboard

The VxFlex OS GUI can be used to perform standard VxFlex OS configuration, monitoring, and maintenance operations. The VxFlex OS GUI is designed with large-scale clusters in mind and works effectively over high-latency networks. It communicates with the MDM to accomplish configuration changes and obtain data for display.

Important: The VxFlex OS GUI is a Java-based utility requiring Java version 1.8 (or higher) 64-bit. To access the GUI, you need the Master MDM IP address/hostname, or the MDM virtual IP address (if used), and user credentials.

The Dashboard tiles provide a visual overview of the storage system status. The
tiles are dynamic, and contents are refreshed at the interval set in the system
preferences (default: 10-second intervals). System preferences can also be used to
set the display to basic or advanced reporting. Below is an overview of the various
tiles:

 Physical Capacity: Displays the storage capacity status of the VxFlex OS cluster, rounded off to multiples of 8 GB. The available raw capacity is represented by rings. When you hover over a ring, it provides specifics on
the capacity that is Protected, Degraded, Unused, Spare, and so on. The number at the center displays the total amount of available raw storage.
 The toggle at the bottom-left corner of the physical capacity tile shows
Capacity Utilization. This tile displays the capacity used in the VxFlex OS
system. Besides displaying the physical, allocated, and provisioned
capacity, it indicates the ratio of compression and the capacity savings due
to thin provisioning.
 I/O Workload: Displays the performance statistics of the system—IOPS,
bandwidth, and I/O size.
 Rebalance and Rebuild: This tile displays the system's internal IOPS. This
internal I/O may be generated due to rebuilding or rebalancing the data within
the system.
 SDCs: Displays the number of SDCs in the system.
 Volumes: Displays the number of volumes created, available capacity, and the
used capacity in the system. The amount of free capacity shown on this tile is
the maximum amount that can be used for creating a volume. This capacity
considers how much raw data is needed for maintaining data mirroring and
system spares.
 Protection Domains: Displays the number and status of all Protection Domains
that are defined in the system. This tile also displays the number and status of
all Storage Pools defined in the Protection Domains.
 SDSs: Displays the number and status of all SDSs in the system. If any SDSs
are currently in Maintenance Mode, the orange maintenance icon is displayed
on this tile. This tile also displays the number and status of all storage devices
that are defined in the Storage Pools.
 Management: Displays the status of the MDM cluster. The status is displayed
graphically as a combination of the MDM cluster elements, and an alert icon if
active alerts exist. When you hover your mouse pointer over this tile, a tooltip
displays the MDM IP information of the cluster.


Capacity Utilization

This is a representation of capacity from a storage efficiency perspective. The storage efficiency view is displayed with details on the compressed storage within the pool. Mouse over each of the Physical, User Data, and Volumes category bars to see details. Pause the cursor for a few seconds for the pop-up to display.

In this example, mousing over the Volumes bar graph shows 40 GB of base volumes and 72 GB in total, implying 32 GB of virtual snapshot capacity. The User Data bar shows how much virtual capacity is presented to users. Since it is compressed, it is not a reflection of the actual net capacity consumed.

You can change the filter option at the top-left of the screen to choose the view for a Protection Domain and its pools.


VxFlex OS Frontend and Backend View

The Frontend view provides detailed information about frontend objects in the system, including volumes, SDCs, and snapshots, and lets you perform various configuration operations. The Backend view provides detailed information about VxFlex OS backend objects such as Protection Domains, Storage Pools, and storage devices in the system. You can configure these objects from the Backend view. The device view shows all the devices in the system. Expand all the devices by clicking the + signs. You can also perform a few media device operations from this view, such as turning a device LED on or off.

The main areas of the Frontend/Backend view include:

 Filter - Lets you filter the information displayed in the table and Property Sheets.
 Table - Displays detailed information about system objects. The table displays a wide range of information, which can be filtered. Certain commands can be performed on objects using the context-sensitive menu for the desired row in the table, or the Command menu on the toolbar.
 Property Sheet - Displays detailed read-only information about the object selected in the table.
 Toolbar Buttons - The Frontend Volume view and Backend Storage view display different toolbar options. The Volume toolbar provides options to view volume details, including V-Tree Migration and Snapshot Policy. Similarly, the Storage view toolbar displays information on capacity, I/Os, and other storage
parameters. Each button provides a different combination of properties that can be displayed together in the table.


VxFlex OS GUI Monitor View

The Monitor > Alerts view provides a list of the alert messages currently active in the system, in table format. You can filter the table rows according to alert severity and according to object types in the system.

An example of an alert message is shown on the slide. The license alert message shows that a trial license is in use, so the user must purchase and install a license.


VxFlex OS vSphere Plug-In Overview

The VxFlex OS plug-in for vSphere enables VMware administrators to perform basic VxFlex OS management directly from the vSphere Web Client. The VxFlex OS plug-in communicates with the MDM and the vCenter server. After the plug-in is installed (and registered) with VMware vCenter, the VxFlex OS icon is added to the vCenter Home Inventories pane.

Many VxFlex OS configuration and provisioning operations can be performed from the VxFlex OS option in the left pane under the Global Inventory Lists. Select the proper VxFlex OS component first, and then select the desired operation from the Actions drop-down menu.


The plug-in also provides a wizard to deploy VxFlex OS in a VMware environment. The installation wizard can be used to install the MDM, SDS, and SDC components, collect server logs, and update the SDC parameters.


VxFlex OS CLI Overview

The VxFlex OS CLI (scli) enables you to perform all provisioning, maintenance, and monitoring activities in VxFlex OS. Use SSH (or RDP) to log in to the shell running on the MDM servers to use the CLI.

All CLI commands use the following format: scli [--mdm_ip <IP>] [command] <parameters>. The order of the parameters in the command is insignificant. SCLI commands are lowercase and case-sensitive. All parameters are preceded by '--'. The mdm_ip value indicates the IP address of the Master MDM that receives and runs the command. You must execute SCLI commands on the active Master MDM, not on any of the slaves. However, you can send a command from a Slave MDM by using the --mdm_ip parameter and providing the Master MDM IP address. If the command is run from the Master MDM, this switch may be omitted. For more information about these commands, see the online help or the VxFlex OS CLI Reference Guide.
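For example, a typical CLI session first authenticates to the MDM and then queries the system. This is a minimal sketch; the credentials are placeholders, and exact options can vary by VxFlex OS version:

    scli --login --username admin --password <password>    # authenticate against the Master MDM
    scli --query_all                                        # summary of cluster state, capacity, and components
    scli --query_all_volumes                                # list all volumes defined in the system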


VxFlex OS Gateway and Installer

The VxFlex OS Gateway includes the REST Gateway and the SNMP trap sender functionality. In a VxFlex integrated rack, the Gateway is installed on the VxFlex Management Controller. As part of the installation process, the VxFlex OS Lightweight Installation Agent (LIA) is also installed. LIA works seamlessly with the Gateway, enabling future upgrades and collection of system logs. You can enable and disable the Gateway components.

The VxFlex OS Installer is part of the VxFlex OS Gateway. The Installer is used to install and configure VxFlex OS components. The Installer also provides the capability to upgrade the system and to analyze and collect system logs. System administrators are not expected to use this interface for day-to-day system administration and management.


Lab: Explore the VxFlex Integrated Rack

This lab presents an introduction to VxFlex Manager, iDRAC, the VxFlex OS management interfaces, and VMware vSphere Web Client.

Storage Resource Management

This module presents the storage resource management aspect of VxFlex integrated rack. It presents configuration and validation of the VxFlex OS elements in your environment.

Upon completing this module, you will be able to:


 Configure and manage the VxFlex OS storage resources
 Provision storage from VxFlex OS
 Manage the VxFlex OS system components


System Capacity Components

This lesson presents the VxFlex OS capacity components and their properties.

This lesson presents the following topics:


 Managing Protection Domain
 Managing Storage Pools
 Managing spare capacity


Protection Domain Recommendations

In a VxFlex integrated rack system, each SDS node that contributes storage to the
VxFlex OS is a part of a VxFlex OS protection domain. A protection domain
contains a group of SDSs that provide backups and protection for each other.
Typically, a VxFlex integrated rack system has only one protection domain, but it is
possible to distribute SDSs into multiple protection domains (typically in a multi-
tenant environment).

VxFlex OS allows the reconfiguration of the VxFlex OS nodes across Protection Domains to accommodate dynamic data center environments with constantly changing workloads. Moving an SDS to a new Protection Domain requires first removing the SDS and then adding it back to the VxFlex OS system. This operation can be done without the loss of access to existing VxFlex OS volumes.

To ensure protection, the smallest supportable VxFlex OS cluster on VxFlex integrated rack consists of four SDS nodes. A volume that is provisioned from such a system can survive a planned or unplanned shutdown of any one of these nodes.

Configuring more than one Protection Domain may help to contain the impact of
server downtime on storage availability. With multiple Protection Domains, one
server or device can fail in every Protection Domain, and production I/O is
unaffected. Multiple Protection Domains improve system resilience.


Important: If a VxFlex integrated rack environment contains both ESXi nodes and RHV nodes, they can be added to the same VxFlex OS cluster. However, they should be separated into different Protection Domains.

There are other compelling reasons to take advantage of Protection Domains beyond resilience:
 To separate volumes for performance planning, for example, assigning highly accessed volumes to "less busy" domains, or dedicating a particular domain to an application.
 For data location and partitioning in multitenancy deployments, so that tenants can be segregated efficiently and securely.

Best Practice: Dell EMC recommends that the Protection Domain members are homogeneous in terms of node type, disk type, disk count, and disk size. The homogeneity of VxFlex nodes in a Protection Domain ensures predictable performance and scalability.


Add Protection Domain

To add a Protection Domain using the VxFlex OS GUI:


1. Select Backend > Storage
2. Right-click the system. Select Add Protection Domain
3. Specify the name for the new Protection Domain

To add a Protection Domain using the VxFlex OS vSphere plug-in:

1. Select the VxFlex OS icon from the vSphere Web Client under Home > Inventories.
2. Click VxFlex OS Systems in the left pane.
3. Select the desired VxFlex OS system, click Actions, and select Create Protection Domain.
4. Specify the name for the new Protection Domain.
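The same operation is available from the CLI. A minimal sketch, with the Protection Domain name as a placeholder:

    scli --add_protection_domain --protection_domain_name PD-02    # create a new, empty Protection Domain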

Note: Usually, administrators are not required to create Protection Domains as they are preconfigured at the factory.


View Protection Domain Properties

You can get all the information about a Protection Domain (PD) from the VxFlex OS CLI or GUI. The scli --query_protection_domain command provides information about the PD ID, the number of Storage Pools, Fault Sets, and SDS nodes in the PD, the number of volumes, and the available capacity. Details on each Storage Pool, SDS, and volume are also displayed. In the GUI, you can view the property sheet for the Protection Domain.
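A minimal sketch of the query, with the Protection Domain name as a placeholder:

    scli --query_protection_domain --protection_domain_name PD-01    # show Storage Pools, SDSs, volumes, and capacity for the PD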

The GUI provides information that is organized into various categories:


 General – Visual representation of PD capacity, I/O workload, and
rebuild/rebalance activity
 Identity – PD name and ID
 MDM Cluster – Information on the cluster, the master MDM, and virtual IP
addresses
 Alerts – Any outstanding alert on PD
 Capacity – Detail representation of capacity allocated, thick/thin allocation,
Snapshot, Spare, and unused capacity
 Health – Provides PD health status, showing the capacity that is protected, in
maintenance, degraded, and so on
 Workload – Details of current read/write, rebuild/rebalance, and total workload
on a PD in terms of bandwidth and IOPS


 Rebuild/Rebalance – Current rebuild and rebalance workload


 RAM Read Cache – Cache size, status, and hit ratio
 Performance – Performance profile setting
 Oscillating Failure parameters – Preconfigured monitoring parameters—
discussed later in the module
 Related Objects – High-level object information


Storage Pool Recommendations

The most common use of storage pools is to establish performance tiering. For
example, within a protection domain, you can combine all the flash devices into one
pool and all the hard disk drives into another pool. By assigning volumes, you can
guarantee that frequently accessed data resides on low-latency flash devices while
the less frequently accessed data resides on high-capacity HDDs. Thus, you can
establish a performance tier and a capacity tier. You can divide the device
population as you see fit to create any number of storage pools.

To provide consistent performance, Dell EMC recommends that all devices in a Storage Pool have similar storage properties. Mixing different types of media in the same pool is not recommended. Because data is distributed evenly across a Storage Pool, performance is limited by the lowest-performing member of the pool.

VxFlex OS might not perform optimally if there are large differences between the
sizes of the devices in the same Storage Pool. For example, if one device has a
much larger capacity than the rest of the devices, performance may be affected.

With version 3.x, VxFlex OS introduces a new, storage-efficient Fine Granularity (FG) layout. Space allocation in the FG layout is based on a 4 KB allocation unit, which can significantly reduce the number of unused allocation units as data is written, especially for small write I/Os. FG storage pools can live alongside Medium Granularity (MG) pools in a given SDS. Volumes can be migrated between the two layouts.


Caution: Storage Pools are set at the time of deployment and should
not be changed.

Each Storage Pool can work in one of the following modes:

 Zero padding enabled: Ensures that every read from an area not previously written returns zeros. This behavior incurs some performance overhead on the first write to every area of the volume. This setting should be enabled to take advantage of various available data protection solutions. Zero padding is the default setting for FG pools and cannot be changed.
 Zero padding disabled: A read from an area that was not previously written returns unknown content. This content might change on subsequent reads.

The zero padding policy cannot be changed after the addition of the first device to a
specific Storage Pool. You can add Storage Pools during installation. Although
storage pools are configured for optimal performance during installation, you can
modify the Storage Pools post-installation.


Medium Versus Fine Granularity Layout

The FG layout requires an NVDIMM-based acceleration pool. NVDIMM-based acceleration pools provide caching and prepare data for physical write operations. This helps minimize physical I/O activity and improves the endurance of the SSD media. The acceleration pool, along with SSD or NVMe media, is assigned to the FG pool at the time of creation. If you only have one NVDIMM on each SDS, you can only have one FG storage pool in the cluster. If there are two NVDIMMs per SDS, you can either have a larger acceleration pool for a single FG pool or have two separate FG pools.

Note: An NVDIMM failure in an SDS will result in the failure of all the SSDs associated with that NVDIMM in the node.

FG enables data compression, which allows for faster reads and writes. Data compression is not supported for the Medium Granularity (MG) layout. FG pools support thin-provisioned, zero-padded volumes. FG pools use a log-structured array (LSA) to store data in fixed-size (256 KB) containers called logs. This architecture mitigates fragmentation issues and minimizes empty regions. It also enables inline defragmentation and performs garbage collection of full logs during rewrites.


The benefit of the space-efficient FG layout comes at some performance cost. In most cases (but not all), FG is slower than MG. In the FG layout, when compression is enabled, reads are slower than when compression is disabled. There are exceptions to these rules. For example, write response time is similar in FG and MG. Another exception is using snapshots. In FG, snapshots have no impact on the performance of applications and workloads running during snapshot creation. In MG, snapshots might have an impact on performance in some cases. For this reason, in cases that are very sensitive to performance and do not require snapshots, MG would be a better choice. It would also be a good option where the data is not compressible, for example, if it is already compressed or encrypted by the application. Note that new writes and changes are cached in the NVDIMMs. This minimizes the number of writes to the SSDs and results in higher SSD endurance.

Another consideration for FG pools is that 256x more metadata is written due to the 4 KB allocation unit, compared to the 1 MB allocation unit of MG. Byte alignment further increases the amount of metadata. Compression results in more data being stored, which further adds to the metadata. The metadata of FG pools cannot be held entirely in memory as in MG pools, so FG reserves some space on each disk to store the metadata.


Add Storage Pool

A Storage Pool is always added to a Protection Domain. Each time that you add devices to the system, you must map them to Storage Pools. Create Storage Pools before you start adding SDSs and devices to the system. Storage Pools can be added from the CLI, the GUI, and the vSphere plug-in. Adding an FG pool first requires an acceleration pool.

NVDIMM Acceleration Pools: Acceleration data must reside on NVDIMM devices. These devices must be mapped to a logical DAX device, which is created with a Linux package called ndctl. DAX logical devices reside on NVDIMM persistent memory. The acceleration pools are assigned to an FG storage pool. Writes are assembled and buffered in acceleration pools. The benefit of the DAX device is that it protects the buffered data in the event of a power failure.

The acceleration pool is added in a similar way as other pool types. Choose
NVDIMM under the Pool Type, and add NVDIMM devices (DAX devices) for each
SDS node in the cluster.

To create a Storage Pool using the GUI:


1. Select Backend > Devices


2. Right-click the desired Protection Domain and select Add > Add Storage Pool.
3. Specify the name for the new Storage Pool.
4. Select the desired Storage Pool configuration options.
5. Once a pool is created, right-click the desired SDSs that are contributing storage to the pool, and choose Add devices.
6. Choose the Path, Name, and Storage Pool to add the device.
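The equivalent CLI operation is a single command per pool. A minimal sketch; the names are placeholders, and layout-related options (such as fine granularity) are version-dependent:

    scli --add_storage_pool --protection_domain_name PD-01 --storage_pool_name SP-SSD-01    # create a Storage Pool inside an existing Protection Domain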

Note: When you select the fine-granularity option in the wizard, it allows you to select the acceleration pool and assign it to the FG pool. Also, notice the option to enable compression at the pool level.

To add a Storage Pool using the VxFlex OS vSphere plug-in:

1. Select the VxFlex OS icon from the vSphere Web Client under Home.
2. Click Protection Domains in the left pane.
3. Select the desired Protection Domain, click Actions, and select Create Storage Pool.
4. Specify the name for the new Storage Pool.
5. You may be asked for the VxFlex OS authentication password.

Note: Usually administrators are not required to create Storage Pools as they are preconfigured at the factory. Acceleration Pools are also preconfigured at the time of VxFlex OS deployment.


View Storage Pool Properties

You can get all the information about a Storage Pool from the VxFlex OS CLI or
GUI. The CLI command provides detailed information about volumes that are
created from the Storage Pools and available storage capacity. It also provides
information about all other attributes of a Storage Pool. You can also view
information about storage pools in the GUI by opening the properties sheet.
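A minimal sketch of the corresponding CLI query, with placeholder names:

    scli --query_storage_pool --protection_domain_name PD-01 --storage_pool_name SP-SSD-01    # show capacity, devices, and volumes for the pool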


Inline Data Compression

Starting with version 3.0, VxFlex OS provides the capability for inline data
compression. The compression reduces the volume of data that is stored on the
disk and improves space utilization. This storage efficiency feature is enabled via
NVDIMM devices that are used for fine-grained storage pools.

Compression can be enabled at the volume level. However, an administrator can also set compression at the pool level. Setting compression at the pool level makes it the default setting when a volume is created on that storage pool.

FG compressed pools can be chosen for situations where space efficiency is more valuable than I/O performance and the data is compressible. If VxFlex OS determines that the user data is not compressible, it overrides the compression attribute. FG non-compressed pools are used where it does not make sense to enable compression, but you still need read-intensive performance. If a volume has compression enabled and non-compressible data is being written, the system can determine that it should not compress the data, thus avoiding the additional CPU utilization required by compression.


Spare Capacity Overview

VxFlex OS protects data during server failures by reserving spare capacity, which
cannot be used for volume allocation. If there is a failure, VxFlex OS must have
enough capacity available to rebuild the data that was on the failed component.

Having enough spare capacity ensures full system protection during the event of a
node or disk failure. Ensure that the spare capacity is at least equal to the capacity
of the node containing the maximum capacity or the maximum Fault Set capacity.
Spare capacity is configured as a percentage of the total capacity. Therefore, if all
nodes contain equal capacity, set the capacity value to at least 1/N of the total
capacity—where N is the number of SDS nodes.

For example, you have an 11-node cluster of 3 TB each where the system must
protect against single node failure. The spare capacity should be at least 3 TB or
1/11th of the capacity—about 9.1%. Keep in mind that although 30 TB is available
for production data out of the 33 TB total, data is mirrored, so the protected
capacity available is only 15 TB.


Modify Spare Capacity Policy

There usually is no reason to modify the spare capacity from default values.
However, it is possible. Having a larger spare capacity allows the system to tolerate
some cascaded failures since there is more capacity to store rebuilt data. However,
less capacity is then available for storing user data.

Decreasing the spare capacity must be done with extreme caution. Although the
system gains more usable capacity, there may not be enough space to rebuild the
data protection after a failure.

To modify the Spare Capacity Policy using the VxFlex OS GUI:


1. In the Backend, select and right-click the desired Storage Pool
2. Select Settings > Configure Spare Policy
3. Specify the desired Spare Percentage Policy number


To modify the Spare Capacity Policy using the VxFlex OS CLI:


 The scli --modify_spare_policy command modifies the current spare
capacity reservation policy
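A minimal sketch of the command; the names and percentage are placeholders, and the confirmation switch should only be used after verifying that the new value still leaves room for rebuilds:

    scli --modify_spare_policy --protection_domain_name PD-01 --storage_pool_name SP-SSD-01 --spare_percentage 10 --i_am_sure    # set spare capacity to 10% of the pool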


Storage Provisioning and Volume Management

This lesson presents activities that a VxFlex integrated rack administrator would
perform for storage provisioning. It presents creating volumes, expanding volume
capacity, mapping and unmapping volumes, and removing a volume.

This lesson presents the following topics:


 Creating and mapping volumes
 Expanding volume capacity
 Managing volume I/O limits


VxFlex OS Volumes

Volumes are created from a Storage Pool and can be exposed to the applications
as a local storage device using the SDCs. When a volume is configured from a
Storage Pool, it is distributed over all devices residing in that pool. Each volume
block has two copies on two different SDSs. This allows the system to maintain
data availability following a single-point failure. The data is available following
multiple failures, as long as each failure took place in a different storage pool.

Internally, every VxFlex OS volume is organized into 1 MB (Medium Granularity) or 4 KB (Fine Granularity) chunks that are distributed across all devices within the Storage Pool from which the volume is provisioned. VxFlex OS maintains mirrored copies of every data chunk in a volume on two different devices in different servers. This ensures resilience, with the ability to survive the loss of one device or one server participating in a Storage Pool.

It is important to understand that the VxFlex OS volume chunks are not the same
as data blocks. The I/O operations are performed at the block level. If an
application writes out 4 KB of data, only 4 KB are written, not 1 MB. The same goes
for read operations—only the required data is read.


Volumes can be configured as:


 Thick: Capacity is allocated immediately, even if not used. This can cause
capacity to be allocated, but never used, leading to wasted capacity. Thick
capacity provisioning is limited to available capacity.
 Thin: Capacity is “on reserve,” but not allocated until used. This policy enables
more flexibility in provisioning. Whereas thick capacity is limited to available
capacity, thin capacity provisioning can be oversubscribed.


Volume Provisioning Considerations for MG Pools

VxFlex OS supports thin provisioning, which means that overprovisioning a device can occur. Overprovisioning occurs when you allocate a thin device, or group of devices, that is not backed by real disk space.

Since there is no capacity alert available in vCenter or Red Hat Virtualization Manager for VxFlex OS volume consumption, Dell EMC recommends using thick provisioning for creating VxFlex OS volumes. You can then use thin provisioning when creating VM disks on these volumes.

If thin provisioning must be used for the VxFlex OS volumes, ensure that you
monitor their consumption in the VxFlex OS interfaces. Also, ensure that you have
the appropriate capacity threshold alert set in the VxFlex OS system. A best
practice is to keep extra free space to avoid any issues with oversubscribed
storage pools.


Add Volumes

You can create and map volumes using various management tools. To start
allocating volumes, the system requires that there be at least four SDS nodes. The
created volume cannot be used until it is mapped to at least one SDC.

Volumes can be added using the GUI. After selecting the Volumes submenu from the Frontend tab, the system administrator can right-click the desired Storage Pool to create a volume. If you want to create more than one volume, type the number of volumes that you would like to add in the Copies box. If you are adding multiple volumes, they are created with the same name, with a number appended to distinguish them. The number in the Size box represents the volume size in GB. The basic allocation granularity is 8 GB. You can select either the Thick or the Thin provisioning option. Thin provisioning is the default for volumes created from an FG pool. Leave the RAM Read Cache option cleared for MG pools provisioned by SSD devices.

Administrators can assign a user-defined name to each volume. User-defined names are optional; however, it is highly recommended to give each volume a meaningful name associated with its operational role. You can define volume names according to the following rules:
 Contain fewer than 32 characters


 Contain only alphanumeric and punctuation characters


 Be unique within the object type

When a name has not been defined, the system displays default system-defined
names, using the volume’s ID. In place of the Protection Domain and Storage Pool
names, you can also use Protection Domain ID and Storage Pool ID respectively. It
is highly recommended to use thick provisioning for creating VxFlex OS volumes.
Also, do not enable RAM Read Cache since it is disabled on the pools.

You can alternately use a CLI command when logged into the master MDM:

scli --add_volume --protection_domain_name <NAME> --storage_pool_name <NAME> --volume_name <NAME> [--thin_provisioned | --thick_provisioned]
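For example, a sketch that creates an 8 GB thick volume; the names are placeholders, and the --size_gb parameter specifies the requested size, which is rounded up to a multiple of 8 GB:

    scli --add_volume --protection_domain_name PD-01 --storage_pool_name SP-SSD-01 --volume_name VOL-APP-01 --size_gb 8 --thick_provisioned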


Map a Volume in GUI

To map volumes through the GUI, perform the following steps:

 In the Frontend > Volumes view, go to the volumes, and select the desired
volumes.
 Right-click a storage pool and select Map. The Map Volumes window is
displayed, showing a list of the volumes to be mapped.
 In the Select Nodes panel, select one or more SDCs to which you want to map
the volumes.

 You can use the search box to find SDCs.


 You are warned if you select an SDC that is already mapped to the volume.
A green icon is displayed in the mapping matrix on the right side of the
window.
 Click Map Volumes.

The progress of the operation is displayed at the bottom of the window. Keep the
window open until the operation is completed and until you can see the result of the
operation.
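The same mapping can be performed from the CLI, one command per SDC. A minimal sketch; the volume name and SDC IP address are placeholders:

    scli --map_volume_to_sdc --volume_name VOL-APP-01 --sdc_ip 192.168.20.11    # expose the volume to the SDC at this IP address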


Add Volume using VxFlex Manager

You can also create and add volumes directly from VxFlex Manager. In the Volume
Name field, select Create New Volume, or select an existing volume. For a
compute-only service, you can select only an existing volume that has not yet been
mapped. For a hyper-converged service, VxFlex Manager shows both options.
Select the volume name, Storage Pool, size, and volume type - thick or thin.

The new volume icon appears on the Service Details page. The volume is grayed out because the service deployment is still in progress. After the deployment completes successfully, the volume shows a check mark and appears in the Storage list on the Service Details page. For a storage-only service, the volume is created but not mapped. For a compute-only or hyper-converged service, the volume is mapped to all the SDCs in the cluster.


Add and Map Volume Using VxFlex OS Plug-in

The procedure to create and map volumes using the VxFlex OS plug-in is as
follows:
 Click the VxFlex OS plug-in from the vSphere Web Client home tab.
 From the Storage Pools screen, select the storage pool and click Create
volume.
 In the Create Volume dialog box, enter the volume information.
 To map the volume to ESXs, select Map volume to ESXs.
 In the Select ESXs area, select the clusters or ESXs to which this volume
should be mapped.


Locating Volumes in vSphere

After a volume is created and mapped to the desired SDC, you can use the volume to provide storage to your virtual machines. The volume can be used to expand an existing datastore, or a new VMFS datastore can be created on it. To identify the unique ID, select the volume in the VxFlex OS GUI and look at its ID in the properties sheet. You can also identify the unique ID using VMware. The VMware management interface shows each device named as EMC Fibre Channel Disk, followed by an ID number starting with the prefix eui.
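A minimal sketch of checking this from the ESXi shell; the filter assumes the VxFlex OS devices are reported with eui identifiers, as described above:

    esxcli storage core device list | grep -i eui    # filter the SCSI device list for eui-identified LUNs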


Modify Volume Operations

In addition to adding, mapping, removing, and unmapping a volume, there are many other tasks that allow the System Administrator to modify a volume. From the VxFlex OS GUI, select the targeted volume and right-click to see all the available volume modification tasks.

The tasks include:


 Rename: Rename a volume when you want its name to reflect its current purpose or the operation being performed on it.
 Unmap Volumes: Volumes can be unmapped to remove an SDC's access to
them. Unmapping a volume does not destroy any data on the volume or free
any capacity. It only makes the volume unavailable to the SDCs and
applications on those SDCs.
 Snapshot Volume: Snapshots of volumes can be used for testing purposes. You can perform the same operations on a snapshot as on a volume.
Snapshots are discussed in detail later in the course.
 Increase Size: The volume size can be increased to present more capacity to
the SDCs.


 Set Volume Limits: Volume limits are set on a per SDC basis and ensure that
the Quality of Service is being maintained.
 Remove: Before removing a volume from a system, you must ensure that it is
not mapped to any SDCs. If it is mapped, unmap it before removing it. Removal
of a volume erases all the data on the corresponding volume.


Increase Volume Size

You can increase, but not decrease, a volume capacity at any time, as long as
there is enough capacity for the volume size to grow. The size of an existing
volume can be increased while it is still mapped to the SDCs. However, the
operating system must also recognize the capacity increase. In the case of an ESXi host, the capacity of the datastore on that volume must be increased.

Also, the new size will be rounded up to the next multiple of 8 GB.

If the volume is being used for a VMware datastore, the datastore does not
increase its size automatically. You have to increase the datastore afterwards in
order for it to use the additional capacity of the volume.
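The resize can also be done from the CLI. A minimal sketch, assuming the option names shown here (check the scli help for your release for the exact syntax):

scli --modify_volume_capacity --volume_name <NAME> --size_gb <NEW_SIZE>

The new size is specified in GB and, as noted above, is rounded up to the next multiple of 8 GB.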


Unmap Volumes

If you no longer need to use a volume, you may unmap it. You can unmap a VxFlex
OS volume from the VxFlex OS GUI. Remember to first remove them in an orderly
manner from the application and host environment. From VxFlex OS GUI,
Frontend > Volumes, right-click the target volume that you want to unmap and
select Unmap Volumes from the drop-down list.

 The Unmap Volumes window is displayed, showing a list of the volumes that are to be unmapped.
 If you want to exclude some SDCs from the unmap operation, clear the
checkbox for those nodes—these cleared SDCs retain mapping to the volume.
 Then click Unmap Volumes.

The progress of the operation is displayed at the bottom of the window. Keep the
window open until the operation is completed, and until you can see the result of
the operation. You can verify the number of SDCs mapped to a volume from the
Mapped SDCs column in the Frontend window.

Snapshots are unmapped in the same way as volumes are unmapped. Unmapping
a volume does not delete the volume or return the volume capacity to the storage
pool.
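Unmapping is also available from the CLI. A hedged example; the SDC identifier flag (--sdc_name versus --sdc_id or --sdc_ip) depends on your version:

scli --unmap_volume_from_sdc --volume_name <NAME> --sdc_name <NAME>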


Volume Limits

With the VxFlex OS Quality of Service (QoS) feature, the Administrator can control or throttle the IOPS and/or bandwidth of any volume. It ensures that a volume does not monopolize all the potential IOPS of the Storage Pool.

The bandwidth and IOPS limits for volumes can be monitored and set for each SDC that the volume is mapped to, allowing administrators to adjust the amount of bandwidth and IOPS that any given SDC can use. You can configure this QoS feature with the CLI or GUI on a client or volume basis. For the limits to take effect, the volumes must be mapped before setting them. The defaults are unlimited. Here are the steps:

 In the Frontend > Volumes view, right-click the target volume. From the drop-
down list, select Set Volume Limits.
 The Set Volume Limits window is displayed.
 In the Bandwidth Limits and IOPS Limits boxes, type the required values, or
select the corresponding Unlimited option.
 The number of IOPS must be larger than 10.
 The volume network bandwidth is in MB/sec.


 In the Select Nodes panel, select the SDCs to which you want to apply the
changes.
 Click Set Limits.
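The same limits can be applied from the CLI. The sketch below assumes the option names --limit_iops and --limit_bandwidth_in_kbps; confirm the exact flags in the scli help for your release:

scli --set_sdc_volume_limits --volume_name <NAME> --sdc_name <NAME> --limit_iops <IOPS> --limit_bandwidth_in_kbps <KBPS>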


Query Volume Limits

The example here shows how to retrieve volume bandwidth limits through the GUI.
To view the limits through the CLI, use the following command:

scli --query_sdc_volume_limits --volume_name <NAME> --sdc_name <NAME>


Remove Volumes

Before removing a volume from a system, you must ensure that it is not mapped to
any SDC. If it is, unmap it before removing it.

Removal of a volume erases all the data on the corresponding volume. To remove
one or multiple volumes, perform these steps:

 In the Frontend > Volumes view, right-click the target volume that you want to
remove.
 From the drop-down list, select Remove.
 The Remove Volumes window is displayed, showing a list of the volumes that
will be removed.
 Click OK.

You may be asked to validate the VxFlex OS credential to complete the operation.

You can follow the same procedure if you want to remove a volume’s related
snapshots or remove snapshots only. Before removing a volume or snapshots, you
must ensure that they are not mapped to any SDCs. If they are, unmap them
before removing them.
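From the CLI, a volume can be removed with a command along these lines (a hedged sketch; verify the exact flags in your release before use):

scli --remove_volume --volume_name <NAME>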


Volume Migration

Volume migration between Storage Pools or Protection Domains may be needed for many reasons. Performance tiering may be the primary driver, as application workloads are dynamic and may change over time. You may also want to migrate volumes through the development cycle, from test and development to production.

With VxFlex OS 3.0, volumes and their snapshots can be migrated across storage
pools within the same Protection Domain and across the Protection Domains. It is
done using V-Tree Migration. Snapshots can only be migrated within the same
storage pool type. If you need to migrate a source volume between the different
storage types, you need to delete all the snapshots attached to the volume before
performing the migration.

To perform the migration, select the V-Tree Migration from the volume toolbar
menu. From displayed volumes, right-click on the volume that needs to be
migrated. Select V-Tree Migration > Migrate. From the migration wizard, select
the destination pool, and click OK.

The direction of the flow of data and the status of the migration are visible in the V-
Tree Migration view. The migration can also be performed using the CLI.
Immediately after initiating the migration, a new volume is created in the target storage pool, so the volume resides in both pools for a time. New writes are sent to the new target volume as the migration proceeds. Once the migration is complete, the volume in the source pool disappears.
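The CLI equivalent looks roughly like the following. This is an illustrative sketch only: the command name and the destination-pool option shown here are assumptions based on the V-Tree migration feature and should be checked against the VxFlex OS 3.0 CLI reference:

scli --migrate_vtree --volume_name <NAME> --dest_sp_name <TARGET_POOL>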


Note that the migration workload can be throttled to manage its impact on the production workload. To do so, right-click the storage pool, select Settings > Set I/O Priority, and adjust the Migration policy.


Supported Migration Paths

The table in the slide displays all the possible migration paths currently supported between storage pool configurations. If the source volume is on a medium-grained pool, that pool must be zero-padded for the volume to be relocated to a fine-grained pool. All volumes on fine-grained pools are zero-padded, so they can be migrated to medium-grained pools.


VxFlex OS Rebuild and Rebalance

This lesson presents techniques to manage rebuild and rebalance workloads and
resource consumption.

This lesson presents the following topics:


 Network throttling to limit rebuild/rebalance I/O
 Setting I/O priorities


VxFlex OS Rebuild Operation

VxFlex OS mirrors all user data. Each piece of data is stored on two different
servers within a Protection Domain. The copies are randomly distributed across the
storage devices in the Storage Pool.

When a failure occurs, such as on a server, device, or network, VxFlex OS immediately starts the process of rebuilding the data by promoting the secondary copies of data to primary data, while creating new secondary copies on other media devices.

There are two types of rebuild:


 Forward rebuild is the process of creating another copy of the data on a new
server. A forward rebuild occurs when one or more devices fail. When this
happens, some storage blocks lose their mirrored copy. In the forward rebuild,
all the devices in the Storage Pool work together, in a many-to-many fashion, to create copies of all the failed storage blocks (either 1 MB for medium-granularity pools or 4 KB for fine-granularity pools). This method ensures a fast rebuild.
 Backward rebuild is the process that resynchronizes the copies. A backward
rebuild occurs when a previously inaccessible device comes online and still has
valid data on it. In this case, the system performs a backward rebuild since the
forward rebuild would move more data than a backward rebuild. Backward
rebuild copies only the changes made to the data while this device was inaccessible. This process minimizes the amount of data that is transferred over
the network during recovery.

Minimizing rebuild time is important, and VxFlex OS automatically selects the type of rebuild to perform. Sometimes more data is transferred in order to minimize the time during which the user data is not fully protected.


VxFlex OS Rebalance Operation

Rebalance is the process of moving data copies between the SDSs to balance the
workloads evenly across the nodes. It distributes data evenly across servers and
storage media. It occurs when VxFlex OS detects that the user data is not evenly
balanced across the devices in a Storage Pool. Rebalance can occur as a result of several conditions, such as SDS addition or removal, device addition or removal, or following recovery and rebuild operations.

VxFlex OS moves copies of the data from the most used devices to the least used
ones.


Managing Rebuild and Rebalance Settings

Both rebuild and rebalance compete with the application I/O for the system
resources including network, CPU, and storage media. VxFlex OS provides a rich
set of parameters that can control this resource consumption. The system is
factory-tuned for balancing between speedy rebuild/rebalance and minimization of
the effect on the application I/O. The user has fine-grained control over the rebuild
and rebalance behavior.

Various settings control the resources that are used for these operations; tuning them can improve system performance but also affects recovery times after a failure.

The options are:


 Set network throttling at the Protection Domain level
 I/O priority setting at the Pool level
 Enable/disable Rebuild and Rebalance


Rate Limits: Network Throttling

Network throttling affects network limits and is used to control the flow of traffic over
the network. It is configured per Protection Domain. The SDS nodes transfer data
between themselves. This data consists of user data being replicated as part of the
VxFlex OS data protection, and data copied for internal rebalancing and recovery
from failures. You can modify the balance between these types of workloads by
throttling the data copy bandwidth. This change affects all SDSs in the specified
Protection Domain.


Configure Network Throttling

To set Network throttling:


 Go to the Backend > Storage view, and right-click a protection domain. Select
Settings, and then click Set Network throttling.
 The Set Network Throttling window is displayed.
 Set Rebalance and Rebuild throughput limit, and then click OK.

When both rebuild and rebalance occur simultaneously, the aggregate bandwidth
that is consumed by both does not exceed the individual maximum for each type.
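The same limits can be set from the CLI. A hedged sketch, assuming the option names below (values are in MB/s; check the scli help for the exact spelling in your release):

scli --set_sds_network_limits --protection_domain_name <NAME> --rebuild_limit_mb_per_second <N> --rebalance_limit_mb_per_second <N>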


Rebuild and Rebalance Throttling

Rebuild and rebalance throttling is a way to manage the priority of application I/O versus rebuild and rebalance I/O. On installation, both the rebuild and rebalance processes are set to not use any throttling. This setting is recommended for most environments.

The Rebuild throttling policy determines the priority between the rebuild I/O and the
application I/O when accessing SDS devices. Application I/Os are continuously
served. Rebuild throttling increases the time the system is exposed with a single
copy of data but also reduces the impact on the application. If modifying the rebuild
throttling, choose the right balance between the two.

Rebalance throttling sets the rebalance priority policy for a Storage Pool. The policy
determines the priority between the rebalance I/O and the application I/O when
accessing SDS devices. Application I/Os are continuously served. Unlike rebuild,
rebalance does not impact the reliability of the system so reducing its impact is not
risky.

The following possible priority policies may be applied:


 No Limit: No limit on rebuild or rebalance I/Os. Any rebuild or rebalance I/O is
submitted to the device immediately, without further queuing. Rebuild and
rebalance I/Os are relatively large so setting this policy speeds up the process,
but has the maximum effect on the application I/O.


 Limit Concurrent I/O: Limits the number of concurrent rebuild or rebalance I/Os per SDS device. The I/Os are limited to a predefined number of concurrent I/Os. Once the limit is reached, the next incoming rebuild or rebalance I/O waits until a currently executing I/O completes. For example, setting the value to "1" guarantees that the device only has one rebuild or rebalance I/O at any given moment, which ensures that application I/Os wait for at most one such I/O in the worst case.
 Favor Application I/O: Limits rebuild or rebalance in both bandwidth and concurrent I/Os. As long as the number of concurrent rebuild or rebalance I/Os, and the bandwidth they consume, do not exceed the predefined limits, those I/Os are served. Once either limit is reached, further I/Os wait until both values fall below the limits again. This option imposes a bandwidth limit in addition to the Limit Concurrent I/O behavior.


Setting I/O Priority

To set I/O priority using VxFlex OS GUI:


1. Click the Backend > Storage tab.
2. Then right-click any storage pool, select Settings, and click Set I/O Priority.
3. In Set I/O Priority window, select the appropriate options for rebuild, rebalance,
and migration and then click OK.


Enable and Disable Rebuild and Rebalance at Storage Pool

By default, the Rebuild and Rebalance features are enabled in the system,
because they are essential for system health, optimal performance, and data
protection. These features are only disabled temporarily in specific circumstances,
and should not be left disabled for long periods of time. Rebuild and Rebalance
features are enabled and disabled per Storage Pool.

For example, new servers are added to the cluster during the application peak
workload hours. To avoid network congestion from rebuild and rebalance, defer
these operations to off-peak hours.

The decision to defer rebuilds should be carefully considered, since a rebuild addresses the risk of data existing in only a single copy. Deferring rebalance, but NOT rebuild, may still be acceptable in a production environment.

Enabling or disabling the rebuild or rebalance features can be done through the
GUI and CLI and should be done with extreme caution.
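For reference, the CLI operations look roughly like the following. This is a hedged sketch and should be verified against the CLI reference before use:

scli --disable_rebalance --protection_domain_name <NAME> --storage_pool_name <NAME>
scli --enable_rebalance --protection_domain_name <NAME> --storage_pool_name <NAME>

Corresponding rebuild commands (--disable_rebuild / --enable_rebuild) follow the same pattern and carry more risk, because data remains in a single copy while rebuilds are deferred.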


Managing System Parameters

Introduction

This lesson presents managing VxFlex OS system parameters, such as configuring a capacity alert threshold, managing data checksum, background device scanner mode, and reviewing performance profiles.

This lesson presents the following topics:


 Set capacity alert threshold
 Review Storage Pool parameters
 Review Protection Domain parameters
 Performance Profiles


MDM Virtual IP Addressing

In a VxFlex integrated rack system, the MDM cluster is assigned a virtual IP address that is used for communications between the MDM cluster and the SDCs. MDM roles are sometimes switched between MDM cluster members during normal operation of the cluster; however, the virtual IP address is always mapped to the active MDM. The virtual IP address on the MDM cluster ensures that all SDCs remain connected to the MDM, even after a physical server is replaced. It simplifies maintenance procedures and prevents the need to reconfigure all SDCs in the system.


Manage Capacity Alert Threshold

The capacity alert threshold is an important parameter for managing system capacity. Setting it appropriately helps administrators receive capacity consumption alerts in advance. The alert can be set per Storage Pool.

This parameter provides two threshold values:


 High capacity threshold: Threshold from the nonspare capacity of the Storage
Pool that triggers a HIGH priority alert.
 Critical capacity threshold: Threshold from the nonspare capacity of the Storage
Pool that triggers a CRITICAL priority alert.
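These thresholds can also be set from the CLI. A sketch, assuming percentage values and the option names shown (verify them against the scli help for your release):

scli --set_capacity_alerts_threshold --protection_domain_name <NAME> --storage_pool_name <NAME> --high_threshold 80 --critical_threshold 90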


Checksum Protection Mode

The checksum feature addresses errors that change the payload during the transit
through the VxFlex OS system. VxFlex OS protects data in-flight by calculating and
validating the checksum value for the payload at both ends.

During write operations, the checksum is calculated when the SDC receives the
write request from the application. This checksum is validated just before each
SDS writes the data on the storage device. During read operations, the checksum
is calculated when the data is read from the SDS device. It is validated by the SDC
before the data returns to the application.

If the validating end detects a discrepancy, it initiates a retry. The checksum is calculated at the granularity of a sector, which is typically 512 bytes. This feature applies to all I/Os: Application, Rebuild, Rebalance, and Migrate. The checksum is also kept in the Read Memory Cache, protecting every block that is maintained in SDS memory against memory corruption. The checksum feature can be enabled at the Protection Domain level and is defined at the Storage Pool level.

Pools with Fine Granularity, with or without compression, have persistent checksum by default; this cannot be changed. Each I/O goes through compression, and the checksum is calculated before the data is written to disk. There are therefore two types of checksum: in-flight checksum and persistent checksum.

The checksum feature may have a major impact on performance and availability
during periods of high sustained I/Os and is usually disabled. To modify this setting,
perform the following steps:


 In the Backend view, navigate to, and select the desired Storage Pools
 Right-click and select Configure Inflight Checksum from the drop-down list
 The Configure Inflight Checksum window is displayed
 To enable the Checksum feature, select the Enable Inflight Checksum
option
 To disable the Checksum feature, clear the Enable Inflight Checksum
option
 Click OK


Background Device Scanner Mode

The Background Device Scanner enhances the resilience of VxFlex integrated rack
by constantly searching for, and fixing device errors before they can affect the
system. It provides increased data reliability compared to what the media
checksum scheme provides. The scanner seeks out corrupted sectors on the
devices in the pool, provides SNMP reporting about errors that are found, and
keeps statistics about its operation. When a scan is completed, the process
repeats, thus adding constant protection to the system.

You can set the scan rate (default: 1 MB/second per device), which limits the
bandwidth that is allowed for scanning. The following scan modes are available:

 Device only mode: The scanner uses the device's internal checksum
mechanism to validate the primary and secondary data. If a read succeeds in
both devices, no action is taken. If a faulty area is read, an error is generated. If
a read fails on one device, the scanner attempts to correct the faulty device with
the data from the good device. If the fix succeeds, the error-fixes counter is
increased. If the fix fails, a device error is issued. If the read fails on both
devices, the scanner skips to the next storage block.

A similar algorithm is performed every time an application read fails on the primary
device.

 Data comparison mode: This is only available if zero padding is enabled. The
scanner performs the same algorithm as above. In addition, after successful reads of primary and secondary, the scanner calculates and compares their
checksums. If this comparison fails, the compare errors counter is increased,
and the scanner attempts to overwrite the secondary device with the data from
the primary device. If it fails, a device error is issued.

The scanning function is enabled and disabled (default) at the Storage Pool level,
and this setting affects all devices in the Storage Pool. You can make these
changes at any time, and you can add/remove volumes and devices while the
scanner is enabled.

Scanning starts about 30 seconds after a device is added to a Storage Pool in which the scanner is enabled.
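From the CLI, the scanner can be enabled per Storage Pool. The following is a hedged sketch; the mode keyword shown is an assumption to be confirmed in the CLI reference:

scli --enable_background_device_scanner --protection_domain_name <NAME> --storage_pool_name <NAME> --scanner_mode device_only

A matching --disable_background_device_scanner command should be available to turn the scanner off for the pool.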


Performance Profiles for System Components

You can use the GUI to apply performance profiles to system components. The high-performance profile configures a predefined set of parameters for high-performance use cases. The main difference between the high and default profiles is the amount of server resources (CPU and memory) that are consumed. The high-performance profile always consumes more resources.

When a container is provided in the command (System, Protection Domain, or Fault Set), all the objects currently in that container are configured. The profiles can also be applied separately to SDSs, SDCs, and the MDM cluster.


VxFlex OS License Management

Using VxFlex OS in a production environment requires a license.


 A storage upgrade may require a license update
 VxFlex OS supports the following license options:
 Basic - Enables all basic functionality
 Enterprise - Enables advanced features
 VxFlex OS license management can be performed from either the CLI or the
GUI

VxFlex OS licenses are purchased by physical device capacity in TB. You can
activate your licensed capacity over multiple VxFlex OS systems, each system with
its unique installation ID. The license is installed on the MDM cluster, using the
set_license command. Since VxFlex OS licenses are purchased by physical device
capacity in TB, any upgrade or addition may require extra licensing. You can view
current license information using the CLI or the GUI.

In the VxFlex OS GUI, open the drop-down list next to the username in the upper right corner and select About. To update the license information, choose System Settings > License, and then copy and paste the license key.


View Oscillating Failure Counters

Oscillating failure reporting provides visibility into error situations and helps to reduce their impact on normal system operation. This feature detects and reports various oscillating failures, in cases where components fail repeatedly and cause unnecessary failovers. You can configure the time interval associated with each window type, and the number of failures that are allowed before reporting commences for each window type, per counter.

The currently configured counter parameters are displayed in the corresponding Property Sheet, in the Oscillating Failure Parameters section. The counters shown depend on the object that is selected in the table.

 Window: The sliding time window for each interval—Short, Medium, and Long
 Threshold: The number of errors that may occur before error reporting
commences
 Period: The time interval of each window, in seconds


VxFlex OS User Management

This lesson presents VxFlex OS user access and authentication.

This lesson presents the following topics:


 VxFlex OS user authentication and authorization
 VxFlex OS access control


VxFlex OS User Accounts

There are multiple components in a VxFlex OS environment which have different sets of user accounts.

The primary accounts are the VxFlex OS user accounts. These accounts are used
to authenticate against the VxFlex OS cluster itself. They are sometimes referred to
as the MDM accounts, since the MDM handles the authentication. This is the
account that you use to log in to the VxFlex OS GUI. Also, you often have to log in
using a VxFlex OS account before running a scli command. The default
administrator account is admin, but you can create more accounts as needed.

VxFlex OS runs on nodes and virtual machines that use either the CentOS or
RHEL operating system. These have their own individual OS user accounts. You
would use one of these user accounts when opening an SSH session to an SVM,
storage-only node, or KVM host to perform maintenance or to run a scli command.
You may have to log in to the VxFlex OS Gateway virtual machine's operating
system for maintenance as well. The default user name for these VMs is root.

The VxFlex OS Gateway hosts a web-based interface. It has its own admin
account that is separate from the others.


VxFlex OS User Authentication

VxFlex OS User accounts grant users role-based access to the GUI and specific
SCLI commands.

User authentication can be performed using local authentication, or using Active Directory (AD) over LDAP or LDAPS (Secure LDAP). VxFlex OS can concurrently support both AD users, who are fully controlled through the customer's existing centralized authentication server, and local users.
the AD with the existing VxFlex OS roles to ensure the Role-Based Access (RBAC)
model. When a user logs in to the VxFlex OS system, the MDM identifies that the
user belongs to the AD domain. The MDM then authenticates the user against the
AD server over secured communications. After the user is authenticated, VxFlex
OS accepts the group to which the user belongs, and associates the appropriate
role and permissions to the user.

Access to the VxFlex OS Gateway requires defining a dedicated named user. This
user may either be a local user or an LDAP user. Access to the Installation
Manager (IM) requires a user name and password which should be the VxFlex OS
Gateway user.


User Roles and Permissions

With VxFlex OS, the following roles are available:

 Monitor Role - Monitoring/Read-only view, no access to configuration change.


 Frontend Configurator Role - Perform any frontend related configurations
which include volume manipulations, such as adding, deleting, mapping,
snapshots, and SDC operations. In the VxFlex OS GUI, it correlates with the
Frontend view’s operations.
 Backend Configurator Role - Perform any backend related configurations
which include Protection Domain and Storage Pool operations, and
manipulating Fault Sets and SDS. In the VxFlex OS GUI, it correlates with the
Backend view’s operations.
 Configurator Role - Includes both Backend and Frontend roles which are only
applicable for local users.
 Administrator Role - Configure Configurator and Monitor users.
 Security Role – Define admin accounts and LDAP access.
 Super User Role - Only local user with all privileges. Only one Super User is
allowed per system.

The authorization permissions of each user role are defined differently for local
authentication and for LDAP authentication. Although the role names are similar,
the permissions that are granted to them are not.


User roles that are defined locally are defined in a nested manner. Higher-level
roles automatically include the abilities of lower roles. For example, a user with a
configurator role also has the abilities of the monitor role. A user with the
administrator role has the abilities of both a configurator and monitor.

User roles that are defined in the LDAP domain are mutually exclusive, with no
overlap, apart from the Configurator role. For example, if you want to give an LDAP
user permission to perform both monitoring and configuration roles, assign that
user to both Backend/Frontend Configurator and Monitor LDAP groups.


Configuring LDAP Authentication

Before you begin, ensure that the OpenLDAP package is installed and configured
on each server running the MDM. LDAP configuration steps are operating system
dependent and not presented here. Steps for preparing a server may be different
for secure and nonsecure LDAP authentication. Once servers are ready, follow the
following steps to configure LDAP/LDAPS authentication:

 Use the --add_ldap_service command to add the LDAP service that the MDM uses for authentication. The command returns the LDAP service ID. LDAP should be configured on all the MDMs in the system.
 Once the service is ready, the next step is to associate the Active Directory
(AD) groups with the different VxFlex OS roles.
Use --assign_ldap_groups_to_roles command to assign AD/LDAP
groups to VxFlex OS roles. This command maps LDAP groups to VxFlex OS
system roles. The LDAP service must be configured in advance. Once you have
mapped the roles, a user in an LDAP group has those roles.
 Once the LDAP service is set, and groups are assigned, you need to specify the
authentication method in which VxFlex OS authenticates the users. The users
may be restricted to the local domain only, to the LDAP server only, or both
types may be allowed. This decision is at the discretion of the system
administrator, and is dictated by the security policy of the organization.


Keep in mind that LDAP user roles are not nested in the way that local VxFlex OS
user roles are. For example, granting an LDAP group an administrator role does
not give it the monitor role. LDAP users cannot log in to the VxFlex OS GUI or the VMware plug-in unless they have the monitor role.


Manage LDAP Authentication

Once authentication is set, a user can log in to the system according to the defined
method. When logging in locally, the command expects a user name and
password. The LDAP command should also include the LDAP domain that it is
using, and the LDAP authentication parameter.

When logging into the GUI, use a local username, such as admin, or an LDAP
username with the domain name. An example of an LDAP username is
sunder@corp.local.
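For example, CLI logins typically look like the following; the --ldap_authentication flag shown for the LDAP case is an assumption to verify against the CLI reference for your release:

scli --login --username admin --password <PASSWORD>
scli --login --username sunder@corp.local --ldap_authentication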


Managing Local Users

You can only create local users with the CLI interface, and they are only effective
within the VxFlex OS CLI environment. This command is only available to
administrator roles. The --add_user command is used to set the username and
role for the new user. When a new user is created, the administrator that created
the user receives an automatically generated password that is required for first-time
authentication. When the new user logs in the first time, they are required to
change this password. When the system authenticates a user, all commands that
are performed are tracked to their credentials until a logout is performed, or until
the session expires.

You can modify existing VxFlex OS user's roles. You can also disable the default
Super User to ensure that all users are associated with specific people. If you need
to re-enable the SuperUser, use the reset_admin command.

Examples of user management commands:

 To add a user: scli --add_user --username <NAME> --user_role <ROLE>
 To check existing users: scli --query_users or scli --query_user --username <NAME>
 To modify a user's role: scli --modify_user --username <NAME> --user_role <ROLE>


 To reset password: scli --reset_password --username <NAME> and scli --set_password --old_password <OLD PASSWORD> --new_password <NEW PASSWORD>
 Disable SuperUser: scli --disable_admin [--i_am_sure]


Lab: Managing VxFlex Integrated Rack Storage

This lab presents managing the VxFlex OS storage resources.

Compute and Network Resource Management

This module presents compute and network resource management in a VxFlex integrated rack at a high level. The assumption here is that an administrator is already familiar with compute and network resource management activities in a VMware environment.

For detailed information about this topic, see training and documentation at
www.vmware.com.

Upon completing this module, you will be able to:


 Create and manage virtual machine resources
 Manage virtual network resources


Compute Resource Management

This lesson presents compute resource management activities in a VMware environment.

This lesson presents the following topics:


 VxFlex integrated rack VMware environments
 VM deployment and migration
 VM storage provisioning and creating a datastore


Compute Resources in VxFlex Integrated Rack

The VxFlex integrated rack compute only and hyperconverged nodes provide the
required computing resources. These resources are pooled together and
configured to host virtual machines. On these VMs you can run your application
workloads and other services. You have a choice of hypervisors to use on these
nodes: VMware vSphere or Red Hat Virtualization (RHV).

Virtual machines play a few different roles in a VxFlex integrated rack system.
 Production VMs: You deploy your production virtual machines onto the VxFlex
integrated rack system to perform your computing needs. These VMs could be
database servers, application servers, or any other kind of virtual machine.
 VxFlex integrated rack services VMs: Many internal functions and VxFlex
integrated rack management tools run on virtual machines. This includes
vCenter, Red Hat Virtualization Manager, VxFlex Manager, and support
services.
 Storage Services VMs: Storage virtual machines are needed to use the
physical storage on a hyperconverged node. These VMs give VxFlex OS
access to storage.


VMware vSphere Environments for VxFlex Integrated Rack

VxFlex integrated rack has two separate vSphere environments - one for the
VxFlex Management Controller cluster, and the other for the VxFlex node cluster.

The Controller cluster hosts VMs that provide services for the VxFlex integrated
rack system itself. It is a VMware vSphere cluster where all the nodes run the ESXi
hypervisor, and they are managed by VMware vCenter. For storage, the Controller
cluster uses VMware vSAN. Similar to VxFlex OS, vSAN aggregates the locally
attached disks of the VxFlex Controller nodes to create a pool of distributed shared
storage.

The VxFlex node clusters primarily host the production virtual machines. Nodes in
the VxFlex cluster run either ESXi or RHV hypervisors and are managed by
VMware vCenter or by Red Hat Virtualization Manager. Unlike the Controller
cluster, the VxFlex node cluster uses VxFlex OS for all the customer production
data. VxFlex OS provides massive scalability and flexibility in terms of
hypervisor/OS and bare-metal deployments. If using ESXi nodes, Storage Virtual
Machines are needed to provide storage to VxFlex OS.


VMware vCenter for Compute Resource Management

As in any vSphere deployment, you can use the VMware vCenter as the
centralized tool to manage ESXi hosts and VMs that are running in the VxFlex
cluster.

VMware vCenter enables you to standardize and simplify configuration and management of VMware ESXi hosts. With VMware vCenter, you can perform the following activities:
 Capture a template of a known, validated VM configuration—including compute,
networking, storage, and security settings—and deploy it to many hosts.
 Allocate processor and memory resources to virtual machines, and, if RAM hot-
add and CPU hot-plug are enabled, modify allocations of the resources while
virtual machines are running.
 Enable applications to dynamically acquire more resources to adapt to periods
of peak demands.
 Automatically allocate available resources among virtual machines according to
predefined rules that reflect business needs and changing priorities.


VxFlex Management Controller Cluster

The VxFlex Controller cluster maintains the environment for overall VxFlex integrated rack management. The virtual machines running on the Controller cluster include the vCenter Server Virtual Appliance (vCSA), the VxFlex OS Gateway VM, VxFlex Manager, OpenManage Enterprise VMs, Secure Remote Services VMs, and Windows jump servers for support access. This cluster uses vSAN for storage, so you do not need VxFlex OS in the Controller cluster.

vSphere provides a high availability solution for vCenter Server, which is known as
vCenter Server High Availability (VCHA). The vCenter High Availability architecture
uses a three-node cluster to provide availability against multiple types of hardware
and software failures. A vCenter HA cluster consists of one active node that serves
client requests, one passive node to assume the role of the active node in case of
failure, and one quorum node called the witness node.

Starting with vSphere 6.5, VUM and SQL are integrated into the vCSA, so the SQL license requirement is removed, along with the separate virtual machines that previously hosted them.


VxFlex Management Controller vSAN Datastore

The vSAN datastore is created during initial setup, which uses the storage local to
Controller nodes. The vSAN datastore size is an aggregate of all the Capacity
drives in the Controller cluster. All VMs created in the VxFlex Controller cluster are
stored on the vSAN datastore.

To view the vSAN Datastore backing in vSphere Web Client, go to Storage, select
the vsanDatastore, and click Configure > Device Backing. This view shows the
physical disks that make up the vSAN. These are disks from the VxFlex Management Controller nodes running ESXi.

 The top portion of this screen shows each node and the disk group. Notice that
the disk group for each node has the same number of disks.
 The bottom screen shows the physical disks that each node has contributed to
the VxFlex Controller vSAN.

For more information about VMware vSAN, see www.vmware.com.


VxFlex Node Cluster

The VxFlex node clusters (or production clusters) provide compute and storage to
customer applications. The node's local storage is pooled together by VxFlex OS.

To provide storage to VxFlex OS, ESXi hosts need a Storage VM (SVM). The SVM
has the node's storage controller mapped to it using DirectPath I/O. This gives the
SVM direct access to the storage controller. Rebooting or powering off the SVM
causes VxFlex OS to believe that the node failed.

The Storage VMs cannot be migrated to other hosts because they need direct
access to the local storage of the node. However, other VMs consuming the VxFlex
OS storage can be migrated from one ESXi host to another or from one RHV host
to another.

Nodes running RHV do not need storage VMs. Instead, VxFlex OS SDC and SDS
software run directly on the RHEL OS of the node.

The Storage-only node runs RHEL and contributes storage to the VxFlex OS
cluster. No customer applications run on storage-only nodes. Compute-only nodes provide computing power, but do not contribute any storage to the VxFlex OS storage pool.

The ESXi nodes in the VxFlex cluster are managed by a VCSA that is hosted on
the controller cluster. Similarly, RHV nodes are managed by an RHV-M virtual
machine on the controller cluster.


ESXi Boot Device

Each ESXi node requires a Storage VM, to access VxFlex OS storage. Because
the SVM provides access to VxFlex OS storage, it cannot be stored on VxFlex OS
storage. Instead, its files are stored on a small datastore that uses the node's
internal BOSS (PowerEdge 14G) or SATADOM (13G) storage. These datastores
are labeled DASXX or something similar.

DASXX datastores should only store the SVMs and system files. Production virtual
machines should be stored on datastores that are backed by VxFlex OS storage.

Shown is the storage view in the vSphere Web Client for the FLEX cluster. Notice
the five DASXX datastores in this example. There is one datastore for each node.
The device backing of these datastores is labeled as a local SATADOM device.
Along with the datastore, these devices also host the ESXi operating system.


Provisioning Storage for Production Virtual Machines

Any virtual machine that is used for production must be stored on VxFlex OS
storage. The first thing that you need to do is to create a volume in VxFlex OS. It is
recommended to use thick provisioning for the VxFlex OS volume. Thick
provisioning is recommended because the hypervisor will not be aware of whether
the volume is overprovisioned. Thin provisioning can be subsequently used when
creating the virtual machine disks (if needed).

You must also map the volume to all the SDCs so that they have access to the
volume. Then make a note of the ID number of the new volume. This number is
needed to locate the volume in vSphere or RHV-M.

In vSphere, you can see the details of the Storage Devices available to a VxFlex node cluster. The EMC Fibre Channel Disk devices that you see here are the VxFlex OS volumes that are mapped or available to a specific host. The ends of their identifiers match the ones that are shown in the VxFlex OS interface.

If you have recently created and mapped a volume, you may have to rescan for
storage devices.
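If a newly mapped volume does not appear, a rescan can also be triggered from the ESXi command line; this is a standard ESXi command, independent of VxFlex OS:

esxcli storage core adapter rescan --all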


Create Datastore on VxFlex OS Volume

After you have created the VxFlex OS volumes, you can create a datastore on the
VxFlex OS EMC Fibre Channel Disk. When creating a datastore, be sure to select
a device that uses a VxFlex OS volume. Here, you see the wizard screen to select
the device. The selected device has an Identification number that matches the one
that was previously created in VxFlex OS. After completing the wizard, select
Finish, and the datastore is created.


Virtual Machine Deployment Options

VxFlex integrated rack provides the same methods for building virtual machines in
vSphere as in any vSphere environment. Some of these methods, such as cloning
or deploying VMs from a template, require vCenter. Others are universal to
whatever management platform is being used. Using the New Virtual Machine
wizard makes it easy. One key difference when allocating storage for a VM is that
you should choose a datastore that uses VxFlex OS volumes.

To review, some of the most common methods of VM deployment are:


 Building a virtual machine – building a VM from the ground up involves building
the VM, installing a Guest OS into the VM, and installing VMware Tools.
Installing an OS into a VM takes about the same amount of time it would take
on a physical system. Most other deployment methods involve imaging a base
VM template to avoid installing the OS repeatedly for each VM.
 Cloning a VM – Cloning takes a copy of the configuration and storage files of a base VM (image) and uses them as the basis for a new virtual machine.


 Deploying a VM from Template – Deploying a VM from template involves


building a template off a base image VM and then using it to provision the new
VMs. Templates consist of the same files that support a virtual machine.
Deploying a VM from a template essentially involves copying these files to
create a VM.
 Importing a virtual appliance – To be imported as a virtual appliance, a VM must first be packaged in Open Virtualization Format (OVF). After it is imported, the virtual appliance can be started from the ESXi or vCenter inventory.


Create VM Example

When creating VMs in the VxFlex integrated rack environment, ensure that you
select the VxFlex OS storage and not the individual datastore on each host.
Allocating storage is part of the process when using the New Virtual Machine
wizard in the vSphere Web Client.


Add Storage to VM

You can expand the storage capacity of a virtual machine by adding a virtual disk.
To add a disk to a virtual machine, select the New Hard disk under New Device in
the Edit Settings screen. Specify the size of the new disk, expand the New Hard
disk and then expand Location. Select either Store with the virtual machine or
Browse. Browse shows you a list of devices (as shown in the image). If you set up
your VMFS datastores with meaningful names, it can help you to choose the
correct device.


Virtual Machine Management

There are many options to control a VM and its environment. Some of these
options include monitoring, creating a snapshot, cloning, creating a template, and
adding/removing devices. To see all the options, select the VM in the left navigation
pane and right-click.

You can modify virtual machine settings with the Edit Settings option. With
supported guest operating systems, you can also add CPU and memory while the
virtual machine is powered on.


VM Migration Using vSphere vMotion

A vSphere vMotion migration moves a powered-on virtual machine from one host to another in a cluster. It provides capabilities such as:
 Improves overall hardware use
 Enables continuous VM operation during scheduled downtime
 Supports vSphere DRS to balance workloads across hosts

A VM must meet the following requirements for migration:


 It must not have a connection to an internal standard switch (a virtual switch with zero uplink adapters).
 It must not have CPU affinity configured.
 vSphere vMotion must be able to create a swap file accessible to the destination host before migration can begin.

Types of migrations:
 Cold: Migrate a virtual machine that is powered off
 Suspended: Migrate a virtual machine that is suspended
 vSphere vMotion: Migrate a virtual machine that is powered on


Concurrent migrations are possible. Refer to the VMware website for the latest information about the maximum number of concurrent migrations to a single vSphere VMFS datastore.


Migration Wizard

There is a Migration wizard to guide you through the migration process.

Note: You cannot migrate the Storage virtual machines; because they use the node's local storage, they are not candidates for migration.


Red Hat Virtualization Manager for Compute Resource Management

If using Red Hat Virtualization, you use Red Hat Virtualization Manager (RHV-M) to manage the environment. RHV-M runs on a virtual machine that is hosted on the controller cluster.

RHV-M enables you to perform many common virtualization management tasks, including:
 Creating virtual machines from scratch or from an OVF template
 Allocating processor and memory resources to virtual machines
 Migrating virtual machines from one node to another

To create a VM from an OVF template, go to Compute, select Virtual Machines, and select Import from the drop-down in the top right corner.


Network Resource Management

This lesson presents network resource management activities.

This lesson presents the following topics:


 Managing physical switch configurations
 Managing virtual switch configurations


Physical Switch Port Configuration

The access switches provide networking for both the controller nodes and VxFlex
nodes. The traffic coming from these nodes uses various VLANs to allow the traffic
to remain separated, even if it is traveling over the same physical cable. Also, the
switches are configured with virtual port channels (vPC) to allow multiple physical
connections to act as one, even if they are on separate switches.

Each VxFlex node has four connections to the access switches: two connections to
each switch. Two of these four connections (one from each switch) are used to
provide networking for management and production VM traffic. A virtual port
channel (vPC) is created for these connections to enable them to act as one.
Because different types of traffic from different VLANs are traveling on this port, it is
configured in switchport trunk mode. This allows multiple VLANs to use that port.

The other two connections (one from each switch) are dedicated to the VxFlex OS
data traffic. These ports tag all incoming traffic with a VLAN ID for that data
network. The two VxFlex OS data ports on the two switches use different VLAN
IDs, and they are not part of a vPC. Instead, VxFlex OS handles the load balancing
of traffic on these two connections.

Note that dedicated storage-only nodes in a two-layer deployment have three connections to each access switch. Two of these connections are dedicated to the VxFlex OS data traffic, and the remaining connection from each switch is port-channeled to provide connectivity for the management and VM traffic.

Each access switch has two ports that connect to each controller node. Since each
port carries traffic that is segregated on different VLANs, they are configured in
switchport trunk mode. This allows them to accept traffic tagged with multiple
VLANs. Also, to provide higher bandwidth, they are configured as virtual port
channels.

Virtual Port-Channels

VxFlex integrated rack uses virtual port channels (vPCs) for all management and production traffic. With vPC, links that are connected to two different network devices act as a single port channel toward a third device. vPCs are already set up between the management switches and the uplink switches, providing high availability. Other benefits of vPC include fast convergence and increased bandwidth. Do not change these settings.

NX-OS Query Commands

To see the configuration of the physical switches, log in to the switch and use Cisco
NX-OS commands. Here are a few useful commands.

The show running-config command displays the full configuration file for the
switch. With this command, you can see every configuration setting on a switch.
Although the large amount of output can be difficult to read, if you know what to
look for, you can find the full details of any component.

Shown are two portions of the show running-config command output. They show the configuration of some of the ports that connect to VxFlex nodes and to controller nodes. The description for the port usually helps identify its purpose.

The first port for the VxFlex node is used for management traffic. It is set to switchport mode trunk, and the VLAN IDs that are allowed are shown. The channel-group specifies that it is a part of a virtual port channel (vPC). The second port for the VxFlex node is for VxFlex OS data. The switchport access vlan command shows which VLAN ID the switch tags all data from this port with.

The two ports shown that connect to controller nodes are also trunked and the
allowed VLANs are shown. They are both part of a vPC, because they have a
channel-group assigned.
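The following excerpt is a minimal, illustrative sketch of what such port configurations typically look like in NX-OS. The interface numbers, descriptions, and most VLAN IDs are hypothetical and will differ in a real VxFlex integrated rack:

interface Ethernet1/1/1
  description vxflex-node-01-mgmt
  switchport mode trunk
  switchport trunk allowed vlan 110,151
  channel-group 111 mode active

interface Ethernet1/2/1
  description vxflex-node-01-sio-data1
  switchport access vlan 231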

The show interface brief command displays a list of all interfaces on the
switch, their status, which VLAN they are tagging packets with, and which virtual
port channel they are a part of.

In the command output shown, Eth1/1/1 is a port that connects to a VxFlex node for management traffic. The Mode column shows that it is a trunk that accepts multiple VLAN IDs. The accepted VLANs are not shown with this command. You can also see that it is part of vPC 111 in the Port Ch # column.

Eth1/2/1 is a port that connects to a VxFlex node for VxFlex OS data traffic. The VLAN column shows that it is tagging packets with a VLAN ID of 231.
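An abbreviated, illustrative excerpt of such output, consistent with the description above (columns are trimmed and values other than those described are hypothetical):

Ethernet      VLAN   Type  Mode    Status  Speed    Port Ch #
Eth1/1/1      1      eth   trunk   up      10G(D)   111
Eth1/2/1      231    eth   access  up      10G(D)   --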

The show vpc command displays a list of all virtual port channels. The Active
VLANs column shows which VLANs are allowed on that port channel. A VLAN only
appears in that column if it is configured for that port channel on both switches.

The show vlan command displays a list of all VLANs, their name, and which ports
and vPCs are using them. This command is helpful when determining which VLAN
IDs are already in use and what they are used for.

Virtual Networking with VMware vSphere

In the VxFlex integrated rack environment, each ESXi node uses distributed virtual
switches that may contain multiple port groups depending on the network and the
cluster. The distributed virtual switches span all nodes in the cluster. Each
distributed switch has port groups and uplinks.

A port group is an aggregate of multiple ports with a common configuration. Each port group is identified with a network label. In VxFlex integrated rack, traffic can be tagged with a VLAN at the port group level. A port group can have both virtual machine ports and VMkernel ports, which provide network access to the ESXi kernel. Any VM or VMkernel port in a port group can communicate with one another.

Because port groups span multiple ESXi hosts, they need access to physical
networking to allow communication between hosts. Each distributed switch has
uplinks to provide connections between hosts and to other components outside the
vSphere environment.

At the access switch side, the switch port that is connected to the ESXi node must be configured to accept the tagged traffic. This occurs if the port is configured in trunk mode and allows the VLAN traffic to pass through. Once configured, the switch accepts the tagged packets.

The screenshot displays two uplinks each with six vmnics (one for each node) in
this specific VxFlex integrated rack deployment.

VxFlex Node Cluster DVswitch0

In the VxFlex node cluster, DVswitch0 carries the traffic for management processes
and for any production data. To keep the different traffic segregated, the distributed
virtual switch uses multiple port groups. Each port group is given its own VLAN, so
that the data remains separate even when traveling across the physical switches.
Upon installation, there are three port groups.

The vcesys-esx-mgmt port group provides networking for ESXi management. Each ESXi server has a VMkernel port, vmk0, on this network. The VMkernel port provides network access to the ESXi kernel. Vmk0 is configured to provide ESXi management.

The vcesys-esx-vmotion port group is for vMotion traffic. It also contains a VMkernel port from each ESXi server, vmk1. This VMkernel port is configured to send vMotion traffic. Whenever a virtual machine is migrated from one ESXi host to another, it travels over this port group.

The vcesys-sio-mgmt port group provides networking for VxFlex OS management traffic. Because VxFlex OS management traffic always goes to the SVM on an ESXi server, no VMkernel ports are needed for this port group. Instead, each SVM's management network interface is a part of this port group.

DVswitch0 has two uplinks on each ESXi host. Each uplink connects to a separate
access switch. The ports on these switches for these connections are configured
as a VLAN trunk. This allows the traffic from all port groups, which have been
tagged with different VLAN IDs, to travel over those ports.

Note: The screenshot has had some VMkernel ports and virtual machines removed to save space.

VxFlex Node Cluster DVswitch1 and DVswitch2

DVswitch1 and DvSwitch2 in the VxFlex node cluster are for the VxFlex OS data
traffic. This traffic is sent over separate uplinks than the management and
production traffic on DVswitch0 to provide high performance for the VxFlex OS
data.

Both of these distributed switches have one port group each, vcesys-sio-data1
and vcesys-sio-data2. Each of these port groups has a VMkernel port for each
ESXi host. These VMkernel ports allow the SDC software, which runs on the kernel
as a driver, to access VxFlex OS volumes over the network. Each SVM also has its
network interfaces for data on each port group. This allows the VxFlex OS SDS
software running on the SVM to send data over the network.

Each distributed switch has only one uplink. Instead of applying a VLAN tag at the
port group, VLAN tagging is performed at the physical switch. Each data network
has its own VLAN ID.

To review, let's look at how an SDC running on an ESXi server would access data
on a VxFlex OS volume. The SDC runs as a driver on the ESXi kernel, so it must
communicate through a VMkernel port, vmk2 or vmk3. The VMkernel port is part of
one of the VxFlex OS port groups on a distributed switch. It contacts an SVM,
either through an uplink that is configured for that distributed switch, or directly if
the SVM is on the same host.
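To confirm which VMkernel adapters exist on an ESXi host and how they are addressed, you can use the ESXi command line. These are standard ESXi commands; the interfaces and addresses returned depend on your deployment:

esxcli network ip interface list
esxcli network ip interface ipv4 get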

Summarize VxFlex Integrated Rack Logical and Physical Network Layout

Let's take a closer look at the network portion of the solution and how it can affect
the VxFlex OS data I/O path.

The diagram shows three distributed virtual switches, DVswitch 0, 1, and 2, each
with one or more port groups. The lines show how traffic from those port groups
travels to the physical switches.

In DVswitch0 at the top are three port groups, each used for management purposes. These port groups have two vmnics to use as uplinks for their traffic. The ports are set up for link aggregation. This requires setting the proper teaming and failover settings for the port groups and enabling vPC on the physical switch.

Since each port group on DVswitch0 uses a different VLAN, the switch port that is
connected to the ESXi node is configured to accept VLAN tagged traffic. This
occurs if the port is configured to be in trunk mode and configured to allow the
VLAN traffic to pass through.

DvSwitch1 and DvSwitch2 both carry VxFlex OS data traffic. Two separate
distributed switches are used for the data traffic to provide traffic isolation. Notice
that they each use separate vmnics that are connected to separate physical
switches. This spreads the I/O workload and makes data access available across two different 10/25-Gb connections for high throughput.

The two VxFlex OS data networks do not have a VLAN assigned at the port group.
However, a VLAN tag is assigned at the physical switch.

Finally, in the physical area at the top, you can see the iDRAC 1-GbE switch which
is separate from the other switches. There are two reasons for this. First, you do
not need high-speed data ports for management traffic. And second, it separates
management control traffic from the production traffic. All these come together to
comprise different I/O paths in the VxFlex integrated rack. It is good to know these
elements and their relationships, especially if you need to troubleshoot an issue.

ESXi Host Network Interfaces

You can view the physical adapters on an ESXi host by selecting it and going to the
Configure > Physical adapters section. This shows a list of network interface
cards and their speeds, MAC addresses, and so on. In this image, we are looking
at physical network interface cards, which show us four 10Gb ports that are used
for VxFlex integrated rack networking. It also shows which distributed switch is
using each interface as an uplink.
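The same information is available from the ESXi command line. For example, this standard ESXi command lists the physical NICs with their link status, speed, and MAC address (output varies by node):

esxcli network nic list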

VxFlex Management Controller Virtual Networking

The VxFlex Management Controller cluster also has three DVswitches; however, the layout is different from that of the VxFlex node cluster. Using the Topology view gives a good high-level view of the networking.

In this image, we see VLAN ID 110 is used for the vcesys-esx-mgmt port group and
VLAN ID 151 is used for the vcesys-sio-mgmt, VxFlex OS management port group.
These VLANs are the same as the VxFlex node cluster networking. VLANs should
be consistent across clusters and laid out globally. For DVswitch0, we see that
there are two uplinks each with three vmnics, for the three nodes in this VxFlex
Controller cluster.

Adding Production Network

Production virtual machines need their own port groups and VLANs to have
networking. They should not use the existing port groups or VLANs, which are for
VxFlex integrated rack system use only.

Since production traffic should be separated from VxFlex OS data traffic, it should
use the uplinks that are configured for DVswitch0. Also, because production traffic
should be logically separated from the other management traffic on DVswitch0, you
will create a separate VLAN and port group for production.

There are two things that you must do to add networking for production virtual machines. You must configure the access switches to accept traffic tagged with the new VLAN ID. Because the uplinks of DVswitch0 use virtual port channels, you must configure those vPCs. You also must create a port group that will use the new VLAN ID. That way, traffic from any virtual machine using that port group will be tagged with the correct VLAN ID, and be allowed to travel across the access switches.

Depending on the need, you may create multiple port groups and VLANs for
production. Multiple VLANs and port groups will allow you to create separate
networks for different applications in your production environment.

Configure Access Switches

To configure the physical switches, you will create a VLAN and add it to the
relevant port channels. Log in to the access switches and use the show vlan
command to list the currently configured VLANs. Find a VLAN number that is not
currently used. This will be your new VLAN.

Also, display the port channel interfaces with the show interface
description command. Make a note of the interface names, starting with Po,
that are used for the uplink, peer-link, and connections to all ESXi hosts. These are
the port channels that will need the new VLAN added.

Once you have gathered the necessary information, you can configure both access
switches. First, create the new VLAN using the available VLAN ID that you
identified. Give it a name to make it easier to know its purpose.

Next, add that VLAN to virtual port channels for the peer link, uplink, and ESXi
hosts. Use the vPC numbers identified earlier.
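A minimal NX-OS sketch of these two steps is shown below. The VLAN ID (200), VLAN name, and port-channel number are hypothetical examples; repeat the trunk change for each relevant port channel and run the same configuration on both access switches:

configure terminal
 vlan 200
  name Production-App1
  exit
 interface port-channel 111
  switchport trunk allowed vlan add 200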

Confirm that the VLANs have been added with the show vpc command. This
shows your new VLAN listed under each of the vPCs. This command only shows
VLANs that are configured across the virtual port channel. Therefore, VLANs are
not listed here until they are configured.

The switches are now configured to accept traffic tagged with the new VLAN ID.

Adding New Distributed Port Group

To create a network for production virtual machines, create a new Distributed Port
Group. Begin this process using the wizard:

 On the vSphere Web Client Home menu, select Networking.


 Right-click DVswitch0, and under Distributed Port Group, select New Distributed Port Group.

In the New Distributed Port Group wizard, you can configure most settings to
meet your requirements. However, you must set the VLAN ID to match the VLAN
you had configured on the access switches. Also, under Teaming and failover,
select Route Based on IP hash for the Load balancing. This setting is required for
any uplinks that use vPC, like the ones on DVswitch0. Route based on IP Hash
works by taking the source and destination IP addresses and performing a calculation on each packet to determine which uplink to use. Because the
load balancing is based on the source/destination IP addresses, a VM
communicating with multiple IP addresses can balance its load across all the
network adapters. This makes better use of the available bandwidth.

Adding Production Networking in RHV

Adding networking for production hosts in a Red Hat Virtualization environment is similar to creating one in vSphere. However, since VxFlex OS components run directly on the Red Hat server, the networking is simpler.

First, the access switches must be configured to accept traffic tagged with your
new VLAN ID. Then, in Red Hat Virtualization Manager (RHV-M) create a network.
Enable VLAN tagging, and provide the VLAN ID. Also, set the network to use an
MTU of 9000.

For each Red Hat host, allow it to use the new network, and set it to use bond0 on
the host. bond0 is used for management and production traffic. The other
interfaces are for VxFlex OS data traffic only.

Lab: Managing Virtual Machines

This lab presents activities that are related to managing your virtual compute and
network resources.

Data Protection and Backup

This module focuses on various methods of data backup and protection on a VxFlex integrated rack.

Upon completing this module, you will be able to:


 Configure VxFlex OS Snapshots
 Configure VMware data protection solutions for VxFlex integrated rack
 Describe integration with Dell EMC backup and data protection solutions

Data Protection Using VxFlex OS Snapshots

This lesson presents VxFlex OS snapshots and how they can be used to protect VxFlex integrated rack data.

This lesson presents the following topics:


 VxFlex OS Snapshot overview
 Configuring and managing VxFlex OS snapshots
 Configuring and managing the Consistency Group snapshots

VxFlex Integrated Rack Data Protection Options

Daily backups provide minimal required data insurance by protecting against data
corruption, accidental data deletion, storage component failure, and site disaster.
The daily backup process creates fully recoverable, point-in-time copies of
application data. Successful daily backups ensure that, in a disaster, a business
can recover with not more than 24 hours of lost data. The best practice is to
replicate the backup data to a second site to protect against a total loss of data if
there is a full site disaster. Most daily backups are saved for 30 days to 60 days.

For datasets that are more valuable, data replication achieves a higher level of
data insurance. Typically, data replication is done in addition to daily backup.
Replication cannot always protect against data corruption, because a corrupted file
replicates as a corrupted file.

Business continuity provides application availability insurance by ensuring zero data loss and near-zero recovery time for business-critical data.

VxFlex integrated rack supports a wide range of data protection options for both
operational recovery and business continuity. Besides VxFlex OS and hypervisor-
based solutions, VxFlex integrated rack can integrate with Dell EMC backup, recovery, and business continuity solutions.

VxFlex OS Snapshots

Snapshots are point-in-time copies of a volume. VxFlex OS provides the capability to take snapshots of a volume. Once a snapshot is created, it exists as a separate unmapped volume that can be used in the same manner as any other VxFlex OS volume. Snapshots are thin provisioned 1 and are generated instantaneously. This means that they can be created quickly and do not use much space. An administrator can create up to 126 snapshots per volume for MG pools and 126 for an FG pool. Out of this, 64 can be policy managed. Compared to MG storage pools, snapshots of FG storage pools create significant capacity savings.

Snapshot Policy Management provides the ability to manage snapshots by selective interval through an automated, granular retention policy. The snapshots are taken according to the rule defined. You can define the time interval between two rounds of snapshots as well as the number of snapshots to retain, in a multi-level structure.

In the context of VxFlex integrated rack, you could perform a snapshot of a volume
that is being used for a VMware datastore. This would create a point-in-time copy
of the contents of that datastore which includes all the virtual machines that are
stored there.
1 Maximum thin capacity provisioning = 5 * (gross capacity - used capacity)
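As an illustration of this formula (the numbers are hypothetical): a system with 100 TB of gross capacity and 20 TB of used capacity would allow up to 5 * (100 - 20) = 400 TB of thin capacity to be provisioned.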

Consistency Group Snapshots

If you select multiple volumes and create a snapshot of them, they are placed into a consistency group. A consistency group enables manipulation of the snapshots as one set. For example, selecting one snapshot of a consistency group and then removing the consistency group deletes all snapshots that are a part of that consistency group. However, a user can also remove individual snapshots if needed.

Consistency groups are used when multiple volumes have a contextual relationship
and must have snapshots performed simultaneously. They can be especially useful
for creating crash-consistent backups for database applications.

VxFlex OS Snapshots Structure

The structure related to all the snapshots resulting from one volume is referred to as a volume tree, or VTree. It is a tree spanning from the source volume as the root, whose descendants are either snapshots of the volume itself or snapshots of those snapshots. Thus, some snapshot operations are related to the VTree, and may affect parts of it.

You can also capture a snapshot of a snapshot, which creates a new branch in the
V-tree. One limitation concerning V-Trees is that they cannot be moved from a
traditional medium-grained storage pool to a new fine-grained pool.

The graphic on the slide represents an example of a VTree structure. The blue items, S11 and S12, are snapshots of V1. S111 is a snapshot of snapshot S11. Together, V1, S1x, and S1xx are the VTree of V1. When you migrate a volume, the volume tree and all its snapshots are migrated together.

Create Snapshots

Creating snapshots is done through the VxFlex OS GUI. Select the volumes that
you want to create a snapshot of, right-click, and select Snapshot Volume. Next,
set the name of the snapshot and confirm the settings.

You can also create snapshots using VxFlex OS CLI commands. The following
example shows the commands to create a snapshot, and map it to an SDC.

 Create a snapshot:
scli --snapshot_volume --volume_name vol_1 --snapshot_name snap_1
 Map a snapshot:
scli --map_volume_to_sdc --volume_name snap_1 --sdc_ip 192.168.123.123

The image shows a command to create a consistent snapshot.
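For illustration, a consistent snapshot of multiple volumes can be requested in a single scli call by listing the volumes. The comma-separated form below is a sketch based on the single-volume syntax above; verify the exact syntax against the VxFlex OS CLI Reference Guide:

scli --snapshot_volume --volume_name vol_1,vol_2 --snapshot_name vol_1_snap,vol_2_snap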

Snapshot Policy

Snapshot policies contain a few attributes and elements, offering the ability to
automate snapshots for specified volumes based on specified retention schedules.

Up to 60 policy-managed snapshots can be retained per root volume. However, one should use caution when using manual or CLI-initiated snapshots, as there is no mechanism to prevent them from consuming a portion of that 60-snapshot “pool.”

To create a snapshot policy through the VxFlex OS GUI:

 From the Frontend menu, choose Volumes > Snapshot policy > Add new policy.
 Add the policy parameters. Click Create.
 The next screen displays a list of available volumes in the system. Select the volumes, then click Add to add each source volume to the policy.

Multiple source volumes can be managed per policy, but a source volume cannot span policies; it can be the source volume for only a single policy.

Removing Snapshots

You can remove a volume together with its snapshots, or remove individual
snapshots. Snapshots can also persist after the base volume is removed, so they
are independent.

You can also remove a consistency group and its snapshots. Before removing a
volume or snapshot, you must ensure that they are not mapped to any SDCs. If
they are, unmap them before removing them. Removal of a volume or snapshot
erases all the data on the corresponding volume or snapshot.

Best Practice: Avoid deleting volumes or snapshots while the MDM cluster is being upgraded, to avoid causing a Data Unavailability status.

 In Frontend > Volumes > V-Trees view, select the volume from which you
want to remove the snapshots and right-click. You can either remove the
volume or consistency group.
 The Remove Volumes window is displayed, showing a list of the objects that
will be removed.
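The same clean-up can also be scripted with the VxFlex OS CLI. The following is a minimal sketch that mirrors the GUI workflow (the snapshot name and SDC IP address are illustrative):

scli --unmap_volume_from_sdc --volume_name snap_1 --sdc_ip 192.168.123.123
scli --remove_volume --volume_name snap_1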

Restore Snapshot

In case of data corruption on the source volume, you can revert the data from a snapshot using the Overwrite Content feature available on the volume. You can choose the desired snapshot as the source to revert the volume to that point in time.

You can also recover data from a snapshot manually by mounting it to the SDC
from which the data originated, and copying data back to the production volume.

VMware Data Protection Options

This lesson presents VMware vSphere data protection features. It also provides an overview of RHV protection features.

This lesson presents the following topics:


 Virtual Machine snapshots
 VMware High Availability
 VMware Fault Tolerance
 Distributed Resource Scheduler
 RHV data protection options

VMware Snapshots

A snapshot is a point-in-time copy of a virtual machine. Snapshots can be taken while the virtual machine is either powered on or off, so they are useful for capturing the state of a virtual machine. Then, if needed, you can roll back to a snapshot to revert to the state it was in. This is often done before making large configuration changes to enable you to undo any changes that were made.

Although a snapshot acts as a copy of the entire virtual machine, only the changes
to the virtual machine are stored. This means that the initial size of a snapshot is
small. The longer a snapshot is retained, the more capacity it uses, since the
number of changes to a VM grows. For this reason, snapshots are good for short
periods of time. For longer retention, a backup solution is needed.

How to Create VM Snapshots

To create a snapshot, right-click the virtual machine and select Snapshots > Take
Snapshot. You can then give the snapshot an identifying name and description. If
the VM is powered on, you can also snapshot the virtual machine’s memory. This
takes longer since it needs to copy the memory to disk, but it enables the VM to be
rolled back without requiring a reboot.

VMware vSphere High Availability (HA)

VMware High Availability (HA) automatically restarts virtual machines on another host in the cluster. For example, if an ESXi host goes down or is isolated, another ESXi host accesses the VM files on shared storage and starts the VM again. High availability can monitor if the ESXi host loses connectivity to its storage. It can also monitor the VMware Tools heartbeat so that a nonfunctioning virtual machine is restarted on a separate host.

Keep in mind that VMware High Availability does not provide seamless recovery. During normal operation, the virtual machine only uses one ESXi host as its compute source. When there is a problem, the virtual machines go down and then restart on another host. When virtual machines are restarted on another host, they reboot. High Availability requires that the virtual machines use shared storage and that the hosts are placed in a cluster with a shared management network. VxFlex integrated rack already has all hosts in a cluster with a shared management network, and VxFlex OS provides shared storage between all hosts. The VxFlex integrated rack environment, therefore, can enable the use of the VMware vSphere High Availability feature.

Configuring High Availability

HA can be configured when the cluster is created. In a VxFlex integrated rack environment, that cluster has already been created during the deployment and implementation. To configure HA, edit the cluster settings and turn on vSphere HA. Then select which components are to be monitored. This determines what triggers a VM to restart on another host.

VMware vSphere Fault Tolerance (FT)

VMware Fault Tolerance can be enabled for individual virtual machines to provide
zero downtime on a host failure. It works by creating a secondary copy of the virtual
machine on another ESXi host of the cluster. This secondary copy has its own set
of virtual machine files and memory which are kept synchronized with the primary
virtual machine. The synchronization happens every 10 to a few hundred
milliseconds using a method called Fast Checkpoints. This option is ideal for
uninterrupted availability of critical virtual machines.

Configure Fault Tolerance

Fault Tolerance is enabled for individual virtual machines. To enable Fault Tolerance, the cluster must already be configured to use High Availability. There must be VMkernel adapters with the Fault Tolerance logging service enabled. Fault Tolerance uses those adapters to send the fast checkpoint traffic. That traffic can be significant, so ensure that there is enough bandwidth.

To configure, right-click the virtual machine, and select Fault Tolerance > Turn on
Fault Tolerance. Select datastores for the secondary VM files and other fault
tolerance files, and select the ESXi host that runs the secondary VM.

VMware Distributed Resource Scheduler (DRS)

VMware Distributed Resource Scheduler (DRS) automatically optimizes resources in a cluster. Virtual machines automatically migrate to underutilized hosts or storage. This way, if one ESXi server is running many virtual machines while others are not, the VMs can automatically migrate to the other hosts. It is also possible to configure affinity rules so that certain VMs prefer certain resources, or so that a group of VMs runs on the same resource.

DRS is especially helpful if High Availability or Fault Tolerance is being used. Those features can cause virtual machines to become unevenly distributed. For example, if a host fails, High Availability causes the VMs on that host to restart on other hosts. When the failed host comes back online, no virtual machines are running on it. DRS redistributes VMs back to that host.

RHV Data Protection Options

RHV Manager enables you to take snapshots of a virtual machine for operational recovery. Taking a snapshot causes a new copy-on-write (COW) layer to be created. All writes performed after a snapshot is created are written to the new COW layer. Any virtual machine that is not being cloned or migrated can have a snapshot taken while it is running, paused, or stopped. Snapshots of VMs that are based on Direct LUN connections are not supported, live or otherwise.

The backup and restore API is used to perform full or file-level backup and
restoration of virtual machines. The API combines several components of RHV,
such as live snapshots and the REST API, to create and work with temporary
volumes. These volumes can be attached to a VM containing backup software
provided by an independent software provider.

The engine-backup tool can be used to back up the RHV Manager. The tool backs
up the engine database and configuration files into a single file, and can be run
without interrupting the ovirt-engine service.
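For example, a minimal engine-backup invocation on the RHV-M machine looks like the following (the file names are illustrative):

engine-backup --mode=backup --scope=all --file=engine-backup.tar.gz --log=engine-backup.log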

RHV supports two types of disaster recovery solutions to ensure that environments
can recover when a site outage occurs. Both solutions support two sites, and both
require replicated storage.

 Active-Active Disaster Recovery solution is implemented using a stretch cluster configuration. This means that there is a single RHV environment with a cluster that contains hosts capable of running the required virtual machines in
the primary and secondary site. Virtual machines automatically migrate to hosts
in the secondary site if an outage occurs. However, the environment must meet
latency and networking requirements.
 Active-Passive Disaster Recovery solution is implemented by configuring two separate RHV environments: the active primary environment, and the passive secondary (backup) environment. Failover and failback between sites must be executed manually, and are managed by Ansible.

For more information about Red Hat Virtualization data protection, see
www.redhat.com/rhv

Integration with Dell EMC Data Protection Solutions

This lesson presents data protection solutions for VxFlex integrated rack systems.

This lesson presents the following topics:


 Introduction to Avamar and Data Domain
 Dedicated backup system
 Shared backup system

Dell EMC Data Protection Solutions

The best data protection strategies include the combination of deduplicated backup, archive, data replication, and workload mobility capabilities. VxFlex integrated rack supports this strategy by integrating with industry-leading data protection solutions such as Avamar, Data Domain, and RecoverPoint for virtual machines.

Integrated Data Protection products are pre-architected, assembled, and supported by Dell EMC. While customers are free to use any data protection system they choose for their VxFlex integrated rack, integrated data protection solutions are faster to deploy, optimized, and come with single-call support.

Introduction to Avamar

Dell EMC Avamar is a complete backup solution. It performs scheduled and on-
demand backups and provides the backup storage.

One of the key Avamar features is global deduplication. It identifies redundant subfile blocks of data and does not store, or even transfer, the same block of data twice. This drastically improves backup speed and reduces capacity utilization, since most of the backup data is redundant. Since most data on a backup client remains static from one backup to another, only the changed data that has not already been backed up is sent. Although only some data is sent to Avamar during a backup, that data gets merged with the data that is already stored on the Avamar. This provides a full backup of the data with the speed and efficiency of incremental backups.

Avamar is available in a few different deployment options: single-node, multinode, and Avamar Virtual Edition. Single-node is suitable for small environments and is often integrated with a Data Domain. Avamar Virtual Edition is a single-node server that runs as a virtual machine. Multinode servers have more capacity and can be expanded by adding more nodes. However, multinode servers are not supported with VxFlex integrated rack.

Avamar Integration with Data Domain

Data Domain is a deduplicated storage system that can be integrated with Avamar.
In this type of configuration, Avamar is used to manage backup clients, schedules,
datasets, and other policies, while Data Domain is used as a storage device.
Backup data is sent directly from the client to the Data Domain system using Data
Domain's DD Boost technology. Backup metadata used to identify files and backups is stored on Avamar. The backup process uses Data Domain deduplication methods rather than Avamar's method, which can provide faster backup and recovery, especially for large active databases. Data Domain provides
flexibility since storage can be shared with other Avamar servers or other
applications.

Storing Avamar backup data on Data Domain adds a few capabilities. One
capability, Instant Access, enables a backup of a failed VMware virtual machine to
be powered on and available almost instantly. Rather than waiting for all the virtual
machine data to transfer back to the original datastore, the VM data is presented to
the hypervisor through an NFS share so that it can be instantly powered on. Later,
the data can be transferred back to the original datastore using VMware vMotion
for Storage.

Another capability, Cloud Tier, is a Data Domain feature that sends older data to
cloud storage. Avamar is integrated with the Cloud Tier feature so that you can
manage tiering policies from the Avamar GUI. Data that has been sent to the cloud
tier is recalled automatically if a restore from that data is needed.

VMware Image Backup with Avamar

Avamar integrates with VMware to back up virtual machines as images. This means that the entire virtual machine image is backed up, rather than backing up
individual files within the virtual machine. Avamar accomplishes image backups by
using an image proxy virtual machine. When a backup is performed, Avamar
initiates a snapshot of the virtual machine. The image proxy VM then mounts the
snapshot data, and backs up those files using deduplication to only transfer unique
blocks of data. Since the image proxy processes the backup, the impact to the
production virtual machine is minimal. Usually, multiple image proxies are deployed
to handle large amounts of backups and for high availability in case a proxy is not
working.

Avamar image backups also take advantage of VMware’s changed block tracking.
This means that only the changed blocks on a virtual drive are scanned by the
Avamar image proxy. This further increases backup performance. Avamar can
back up an entire vApp as a single entity. When a virtual machine needs to be
restored, you may either restore the entire virtual machine, individual virtual drives,
or individual files from the image backup.

Guest Backup

Guest backups involve installing the backup agent directly onto the guest and
performing backups. Guest backups provide more granularity than image backups.
Administrators can choose to back up only certain files or directories, and can use
plug-ins for databases and applications. However, guest level backups use CPU
and memory resources of the machine that is being backed up. Some features that
require image backups, such as Avamar's Instant Access, are not available.

Avamar Replication

Avamar can replicate its backup data to another Avamar server for disaster
recovery. With replication, you can be sure that your backups are not lost, even if
the primary Avamar server becomes unavailable or lost. Replication takes
advantage of deduplication technology, so that only changed data is sent over the
network. Replication is available with an Avamar integrated Data Domain as well.
However, the target replication site must also have both an Avamar and Data
Domain system. Many replication topologies are supported which enables flexibility
in deploying Avamar servers.

Avamar with VxFlex Integrated Rack

Avamar can be used to back up a VxFlex integrated rack system. Only Avamar with
an integrated Data Domain is supported. The Avamar server can be a single node
or a virtual edition server. Multi-node Avamar servers, or grids, are not supported.

The Avamar and Data Domain systems can either be racked separately from the
VxFlex system or, in smaller environments, with it. In a small environment, the
Avamar and Data Domain systems are connected directly into the VxFlex
integrated rack access switches. Because ports on the access switch are used for
the backup components, the VxFlex integrated rack supports four fewer nodes.

In larger environments, the Avamar and Data Domain can be racked separately.
Since the Avamar node and Data Domain systems require extra network
connections, a pair of Cisco Nexus 9K switches is used. These switches provide
communication between the VxFlex integrated rack and the Avamar/Data Domain.

A separate backup VLAN is created to separate the backup traffic. A vCenter account with the proper permissions is also created so that Avamar can initiate snapshots and perform other backup-related tasks.

Also, image proxies need to be deployed in the vSphere environment. For the best
performance, these proxies should have access to the datastores of the VMs that
you want to back up.

Replication Using RecoverPoint for Virtual Machines

This lesson presents the overview of RecoverPoint for VMs solution.

This lesson presents the following topics:


 RecoverPoint for VMs architecture and design
 RecoverPoint for VMs configuration and management

RecoverPoint for Virtual Machines Overview

RecoverPoint for VMs is a hypervisor-based data protection solution for VMware virtual machines. It enables both local and remote replication, enabling recovery to any point in time. RecoverPoint enables the access of a point-in-time image, either locally or at another site, while still performing replication. For synchronous replication, data is replicated after every write. For asynchronous replication, administrators can set the replication schedule.

RecoverPoint for VMs integrates with VMware vCenter Server to enable administrators to view VMs protected by RecoverPoint.

RecoverPoint for VMs Use Cases

RecoverPoint for VMs also provides protection against natural disasters, accidents,
utility outages, and technical malfunctions. It also helps administrators recover from
daily operational mishaps like data corruptions, virus attacks, and operational
errors.

RecoverPoint for VMs also helps during system upgrades and data migrations for a
data center migration or expansion.

RecoverPoint for VMs Architecture

The RecoverPoint for VMs virtual RecoverPoint Appliances (vRPA) are installed in
the VMware vSphere environment. For high availability and performance, vRPAs
are deployed in clusters of two to eight nodes. The vRPAs are delivered in an Open
Virtualization (OVA) format.

Each VMware ESXi host that participates in protecting virtual machines requires
the RecoverPoint for VMs splitter to be installed. The splitter is a vSphere
Installation Bundle (VIB) file. Splitters are aggregated within a VMware cluster. As
the ESXi splitter operates from within the virtual layer, it can replicate any storage.
ESXi Splitters can be shared by multiple vRPA clusters.

When a production VM performs a write to its virtual disk (VMDK), the RecoverPoint splitter intercepts it and sends a copy to the production VMDK and
also to the vRPA cluster. The vRPAs send the writes to the replica journal enabling
the users to recover to any point-in-time for operational recovery. The write IO is
then read from the Replica Journal and rewritten to the Replica VMDK. The vRPAs
can also talk to splitters on other local ESXi hosts to provide protection for the VMs
on those systems.

Management of the solution is done through the RecoverPoint for VMs plug-in for VMware, which interacts with the vSphere API on the vCenter Server and the REST API on the vRPAs. If remote protection is required, a WAN link can be used to copy data to a vRPA cluster at another location.

Repository Volume

The repository is a unique system volume that is dedicated to each vRPA cluster. The repository volume is used for storing configuration and consistency group information, which is required for transparent failover between RPAs. The standard repository volume size for converged systems is 5.72 GB.

Journal Volumes

Journal volumes hold snapshots of data to be replicated. Each journal volume holds as many point-in-time images as its capacity allows, after which the oldest image is removed to make space for the newest. Journals consist of one or more volumes, presented to all RPAs in the cluster. Space can be added to enable a longer history to be stored, without affecting replication. The size is determined by analyzing the environment and adjusting later. The size of a journal volume is based on several factors:

 The change rate of the data being protected


 The amount of time between point-in-time images—could be as small as each
write
 The number of point-in-time images that are kept
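As a rough, hypothetical illustration of how these factors interact (ignoring journal overhead and compression): protecting data with an average change rate of 2 MB/s for a 24-hour rollback window would need roughly 2 MB/s x 86,400 s, or about 170 GB, of journal capacity.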

Journal volumes are required on local and remote sites. Each copy of data in a consistency group must contain one or more volumes that are dedicated to hold the point-in-time history of the data. The type and amount of information that is contained in the journal differs according to the journal type. The maximum size of a journal volume should be 250 GB per copy for a consistency group. There are
two types of journal volumes:

 Copy journals
 Production journals

RecoverPoint for VMs vCenter Web Client Plugin

The RecoverPoint for VMs Management vCenter plug-in is automatically installed on the vCenter Server during setup of the RP cluster.

Protect VM Wizard

To protect a Virtual Machine, you can launch the Protect VM Wizard from the
vSphere Web Client.

To start the wizard, right-click a VM, and from the drop-down menu select All RecoverPoint for Virtual Machine Actions and then Protect. This starts the Protect Wizard. Alternatively, you can open this wizard from Configure > RecoverPoint for virtual machines (under More at the bottom). Click the link Protect this VM. Make sure that the VM is selected.

The first step in the Protect VM Wizard is the Select VM protection method. To
protect the virtual machine, choose one of the following options: Create a new
consistency group creates a consistency group for the virtual machine. In the
Create new consistency group screen, enter a descriptive name for the
consistency group. Then select the production vRPA cluster for the VM. The other
protection option is to Add VM to an existing consistency group which enables
you to add the VM to an existing consistency group. From Select consistency
group, select the consistency group to which the virtual machine is added.

Next, Configure production settings. Enter a name for the production or source
copy. Choose the size for the Production Journal. See recommendations for the
Journal size in RPVM documentation. Select the Datastores displayed in the table.
If a datastore in a different location is desired, it can be manually registered.

Setting Replication Policy

Next, Add a copy. Enter a name for the remote copy. For example, remote vRPA
cluster in RoundRock. In this example, if Boston is selected, the copy would be
local.

Next, Configure copy settings. The Protection Policy section is also on this
page.
 If Synchronous mode is chosen, no data is lost between the production VM
and the Replica if there is a disaster.
 If Asynchronous mode is chosen, you must choose the RPO (Recover Point
Objective) which determines how much data is acceptable to be lost. From the
drop-down menu in the RPO section a user can choose the size such as Bytes,
KB, MB, GB and TB, number of writes, or the passage of time from seconds to
hours.

Select copy resources (a remote cluster), and storage. Define failover networks
for the copy, and on Ready to complete page, review the settings and click
Protect.

RecoverPoint for VMs Management: Using Plug-in

You can manage the replication environment from the plug-in. From the plug-in, select Protection, and then Consistency Groups. Highlight the consistency group. Select Topology and review the details.

For more information about RecoverPoint for virtual machines, see product
documentation at www.dellemc.com.

Lab: Protecting Virtual Machines

This lab presents activities that are related to protecting virtual machines using
available snapshot technologies.

System Monitoring

This module focuses on monitoring a VxFlex integrated rack. It provides an overview of component monitoring for vSphere, VxFlex OS, and PowerEdge servers. It also presents VxFlex integrated rack health and compliance monitoring using Vision.

Upon completing this module, you will be able to:


 Monitor virtual environment host, storage, network, and performance
 Monitor VxFlex OS components
 Monitor VxFlex integrated rack using VxFlex Manager
 Monitor the node hardware
 Configure SNMP to send alerts

Virtual Environment Monitoring

This lesson presents key monitoring activities for virtual compute and network
environments. For detailed information, see VMware vSphere monitoring
documentation.

This lesson presents the following topics:


 Virtual machine monitoring for performance
 VM network monitoring
 vSAN health monitoring

Monitoring Virtual Compute and Network

Most of the virtual compute and network monitoring is done with the VxFlex node cluster vCenter. Here you can monitor the health of the ESXi hosts and the clusters to which they belong. All production VMs run in this vCenter and must be monitored for resource usage and performance. You can also monitor and manage virtual networks, such as distributed virtual switches, port groups, and VLAN settings. The VxFlex node cluster uses datastores, which are also monitored for capacity usage and performance. These datastores are created on the VxFlex OS storage. vCenter provides some VxFlex OS monitoring capability through the VxFlex OS plug-in.

Resource Monitoring

Resource usage is reported on the Summary tab. Select the cluster, host, or VM in
the Navigator pane, and review the USED vs CAPACITY values of the CPU,
memory, and storage.


Monitoring Inventory Objects with Performance Charts

The vSphere statistics subsystem collects data on the resource usage of inventory
objects. Data on a wide range of metrics is collected at frequent intervals. The data
is processed and archived in the vCenter Server database. You can access
statistical information through command line monitoring utilities or by viewing
performance charts in the vSphere Web Client.
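
As a brief illustration of the command-line option mentioned above, esxtop can capture the same statistics directly on an ESXi host; the sampling interval, iteration count, and output path below are example values only:

# Capture 12 samples at 5-second intervals in batch (CSV) mode for offline analysis
esxtop -b -d 5 -n 12 > /tmp/esxi-perf.csv

The resulting CSV file can then be reviewed in a spreadsheet or a performance analysis tool.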


Monitoring VMs

The vSphere Web Client lets you look at a virtual machine at a high level with the
Summary tab. It also enables you to monitor a specific aspect of a VM. The
Monitor tab gives you options to look at Issues, Performance, Tasks and Events,
Policies, and Utilization. The screenshot on this slide shows the recent events that
have occurred on the VM.


VM Performance Monitoring

You can review information about CPU utilization on virtual machines that are
available in vCenter Server.

Temporary spikes in CPU usage indicate that you are making the best use of CPU
resources. Consistently high CPU usage might indicate a problem. You can use the
vSphere Web Client CPU performance charts to monitor CPU usage for hosts,
clusters, resource pools, virtual machines, and vApps.

Host machine memory is the hardware back-up for the guest virtual memory and
guest physical memory. Host machine memory must be at least slightly larger than
the combined active memory of the virtual machines on the host. A virtual
machine's memory size must be slightly larger than the average guest memory
usage. Increasing the virtual machine memory size results in more overhead
memory usage.

Network performance depends on application workload and network configuration.


Dropped network packets indicate a bottleneck in the network. Slow network
performance can be a sign of load-balancing problems. If you suspect that a virtual
machine is network constrained, confirm that VMware Tools are installed and
measure the effective bandwidth between the virtual machine and its node system.
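
As a sketch of a host-level check for the dropped-packet symptom described above, recent ESXi releases expose per-NIC counters through esxcli; vmnic0 is a placeholder for the uplink you want to inspect:

# Review packet, error, and drop counters for one physical NIC
esxcli network nic stats get -n vmnic0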


vSphere Distributed Switch Health

You can view the status of each host, or node, and its VDS from the vSphere Web
Client. To view VDS health, go to Network, select the VDS in the left pane, select
the Monitor tab, and then Health.


vSphere Distributed Switch - Port Groups and Uplinks

Under Networks, Topology provides a high-level view of the distributed switch
layout, its VMkernel ports, and uplink status. Here, you can see all of the VMkernel
ports and virtual machine ports and their connections. If you click a port, it shows
which uplinks it uses.

The green color of the port plugs shows that a port is active. One of the ports is
down in the example shown.


Monitor vSAN Health

Monitoring vSAN health is critical for the proper functioning of the VxFlex Controller
cluster. To validate the health of the VxFlex Controller cluster, perform the vSAN
health test periodically. Select the cluster, and under the Monitor tab, select vSAN
> Health and then click the Retest button. In a healthy system, all tests should
pass successfully.
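
In addition to the Retest button, supported ESXi versions can list the vSAN health check results from the command line of a controller-cluster host; this is a sketch rather than a required procedure:

# Summarize the most recent vSAN health check results for the cluster
esxcli vsan health cluster list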


Monitoring vSphere Events and Alerts

vSphere includes a user-configurable events and alarms subsystem. This
subsystem tracks events happening throughout vSphere and stores the data in log
files and the vCenter Server database. This subsystem also enables you to specify
the conditions under which alarms are triggered. This functionality is useful when
you want to be informed, or take immediate action, based on events or conditions.

Events are records of user actions or system actions that occur on objects in
vCenter Server or on a host. Examples of events include license key expiry, VM
power on, or lost host connection. Event data includes details about the event such
as who generated it, when it occurred, and what type of event it is.

Alarms are notifications that are activated in response to an event, a set of
conditions, or the state of an inventory object. Triggered alarms are visible in
several locations throughout the vSphere Web Client. To view all triggered alarms,
click All in the Alarms sidebar panel.


Monitoring RHV Environment

Regular monitoring of a Red Hat Virtualization system can be performed through
RHV-M. You should always monitor the utilization of storage, CPU, RAM, and
network of the RHV hosts. If any of these reach capacity, then more RHV nodes
might be needed.


Monitor the RHV Environment

RHV generates events when errors occur. These events can be forwarded to an
email or SNMP server.


VxFlex OS Monitoring

This lesson presents VxFlex OS monitoring, alerts, and events.

This lesson presents the following topics:


 VxFlex OS cluster monitoring
 VxFlex OS alerts and events


Monitoring VxFlex OS Cluster Using GUI

Administrators can view and monitor various VxFlex OS components using the
user interface. The Dashboard tiles provide a visual overview of storage system
status. The tiles are dynamic, and their contents are refreshed at the interval set in
the system preferences (default: 10 seconds). The Dashboard's navigation button
switches the display of the navigation tree. You can change the Dashboard display
by double-clicking the desired navigation tree node.

The VxFlex OS Dashboard provides status on cluster capacity, workload, and any
rebalance and rebuilds in operation. You can monitor capacity in use, unused, and
reserved for spare capacity. The Dashboard also has an alert indicator that
displays the number of active alerts in the system, using the system-wide color
codes.

You can also view MDM cluster information and the master MDM IP address by
hovering over the Management pane.
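
The same MDM cluster information can also be queried from the command line with SCLI after logging in to the system; the credentials shown here are placeholders:

scli --login --username admin --password <admin_password>
# Displays cluster mode, MDM roles, and their management IP addresses
scli --query_cluster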


View Object Properties

The VxFlex OS GUI provides capabilities to view detailed information about various
objects and their status.

 Navigate to the desired object. Click the expandable Property Sheet on the
right side of the window.
 The Property Sheets display detailed read-only information about the element.
 Users can work with multiple Property Sheets simultaneously, one for each of
several related elements, such as a device, an SDS, a Storage Pool, and a
Protection Domain.


View System Alerts

The Alerts indicators show the overall error state of the system. When lit, indicators
show the number of active alerts of each severity. Similar indicators are displayed
in some views of the Backend table, and also on Property Sheets. You can view
details about the alerts active in the system in the Alerts view.

The Alerts view provides a list of the alert messages currently active in the system,
in table format. You can filter the table rows according to alert severity and
according to object types in the system. RED indicates a critical-severity alert,
ORANGE a medium-severity alert, and YELLOW a low-severity alert.

The image on the slide shows an example of alerts in a system.


State Summary View

A quick way to see alerts is to use the State Summary view in the Backend view.
This view shows alerts next to the items that the alert pertains to. This can make it
easy to find which components need attention.


VxFlex OS Events Overview

The VxFlex OS system generates and records events in response to changing
conditions within the cluster. Event messages notify you of the changes, in case
your intervention is needed. Every VxFlex OS event has a severity level that is
associated with it that ranges from INFO to CRITICAL, depending on the condition
causing the event.

 Critical events definitely require user intervention. They indicate a data
unavailability condition that the VxFlex OS cluster cannot recover from without
explicit user actions for recovery.
 Warning and Error events may be transient conditions, or they may require
user intervention.
 Info events inform you of conditions that you should be aware of, but that do not
put the system at risk (no urgency).


Event Record Structure

Events may be viewed when logged on to the Master MDM using the
showevents.py script that is provided as part of the VxFlex OS installation. The
output of this command may be color-coded according to the severity of the
events: green entries for normal operations, and yellow and orange for warnings
and more critical events. The MDM stores the events in a persistent, private
database file and archives them periodically.

Shown here is the structure of a VxFlex OS event as recorded in the system. Every
VxFlex OS event has six distinct fields: ID, Date, Name, Severity, Message, and
Extended. These fields are populated for each event displayed by the
showevents.py command.

As shown here, the showevents.py output produces one line of output per event.
Each line uses a color-coded font, based on the severity level for the particular
event.
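
For illustration only, a typical invocation might pipe the script output through a filter; the installation path shown below is an assumption and may differ between VxFlex OS versions:

# Run on the master MDM; show only higher-severity entries (path assumed)
/opt/emc/scaleio/mdm/diag/showevents.py | grep -E "WARNING|ERROR|CRITICAL"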


Recommended Action for VxFlex OS Events

The VxFlex OS Manage and Monitor Guide documents every possible event by a
uniquely identifying Name field. For each type of event, it indicates the
recommended action.

For a detailed list of events and recommended actions, see the latest VxFlex OS
Manage and Monitor Guide.


Forward VxFlex OS Events to syslog

VxFlex OS events may be forwarded to a remote syslog server. Syslog allows for
convenient monitoring in data centers that have standardized syslog as a
mechanism to aggregate logs from all applications.

Events are forwarded to a local or remote syslog server with the
scli --start_remote_syslog command.

 Mandatory parameter: server IP address
 Optional parameters: TCP port number, syslog facility number
 Example:

scli --start_remote_syslog --remote_syslog_server_ip 192.168.1.103 --remote_syslog_server_port 514

You can stop logging at any time with the --stop_remote_syslog flag.


Hardware Monitoring

This lesson presents monitoring hardware components and subcomponents for
their status and health.

This lesson presents the following topics:


 PowerEdge Server hardware monitoring
 Cisco Nexus switches monitoring


Server Hardware Monitoring Using iDRAC

The iDRAC home page displays the Dashboard, which provides high-level health
status of the various system components. The GREEN check indicates that there
are no health issues with the server. You can click each component to find more
details.

The Dashboard also provides basic information about the system including model,
service tag, iDRAC MAC address, BIOS, and firmware version.

You can also launch a console session from the Dashboard. Power controls are
available here in the blue button just under the Dashboard title. The tabs on the top
of the home page take you to specific details based on which action you would like
to perform.


Server System Information

From the System view, iDRAC allows you to monitor different hardware
components such as Batteries, CPU, and Power Supplies. You can drill down into
the various components to find more information. For example, information about
fans and system temperature can be found under Cooling. If a fan has an issue, its
status changes to a warning or critical state. There are similar details for the
memory, network devices, and other components.
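
Where remote RACADM access is enabled, the same hardware health information can be pulled from the command line; the iDRAC IP address and credentials below are placeholders:

# Sensor status (fans, temperatures, voltages) and basic system inventory
racadm -r 192.0.2.10 -u root -p <password> getsensorinfo
racadm -r 192.0.2.10 -u root -p <password> getsysinfo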


Server Storage Monitoring

Storage is examined under its own tab where you can see the high-level status of
the physical disks and drill down into detail on each device. If a device has an
issue, the status will change color.


Configuring iDRAC Alerts

You can configure how iDRAC handles different alerts on the Configuration,
System Settings page. For example, you may want some alerts to generate an
email or SNMP trap. Some events can even be configured to perform an action
when they occur, such as an automatic reboot of the server.

To use email and SNMP, you must configure their settings, such as the SMTP
server and email address information, in the SMTP (Email) Configuration selection.


Configuring OME Alerts

Once VxFlex Manager discovers OpenManage Enterprise, you can open the
application and manage critical and error alerts for the device. You can configure
the email (SMTP) address that receives system alerts, SNMP destinations, and
Syslog properties in OME. To manage these settings, you must have the
OpenManage Enterprise administrator-level credentials.


Check Interfaces on Physical Switch

Here is an example from a Cisco 3172 showing brief information about the
interfaces on this switch. This is a good first command to run to see attributes
such as Status and Speed. You can see the assigned VLAN, its access mode
(whether it is in access or trunk mode), and the port speed. Notice that the 40G
trunk is used to connect to other switches. You can also see the reason that a port
is down. When you see Administratively down, it means the admin set the port to
down or shutdown. The only way this port can be active again is if the admin
purposely activates it with the no shutdown command on the port.
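
For reference, a minimal version of the command described above, as entered on the Nexus switch:

show interface brief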


Check VLAN Status on Switch

Showing the VLANs provides a high-level view of the VLAN definitions. This
example is from an access switch. Notice that there are port channels in use here,
indicated as Po in the Ports column. To get more details on the port channels, you
can run a show port-channel command.
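
A sketch of the corresponding NX-OS commands:

show vlan brief
show port-channel summary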


Physical Switch Port-Channel Summary

In this example, you can see the virtual port channels that are set up on the Cisco
switch and their status. All port channels are tied to physical ports, also known as
interfaces. Notice that on Port-channel 40, one Ethernet interface is down and one
is up. The next thing that you may want to do is show the details of the down
interface, if you think it should be up. Use the show interface command to gather
further detailed information about the port.


Show Details of Switch Interface/Port

To see details of a port, use the show interface ethernet x/x command, where x/x
is the port or interface number. Notice that you can see the MTU size here along
with the status. This example is from the Cisco 3172 switch.


Health Monitoring with VxFlex Manager

This lesson presents monitoring VxFlex integrated rack components using VxFlex
Manager. It also covers configuring SNMP for various components.

This lesson presents the following topics:


 Monitor VxFlex integrated rack system health with VxFlex Manager
 Configure SNMP for a customer monitoring system
 Secure Remote Services for call home


VxFlex Manager Dashboard - Health

The VxFlex Manager Dashboard provides a quick overview of your VxFlex
integrated rack system, all in one page. It is a quick and easy way to see if there
have been any errors or warnings throughout the system, without having to log in to
multiple interfaces. VxFlex Manager gathers information from the switches, nodes,
virtualization components, and VxFlex OS, and presents it on the dashboard.

Shown is the dashboard for a VxFlex integrated rack system with one service. A
service is a collection of resources in a VxFlex integrated rack. In this example, the
service is a vSphere Cluster and VxFlex OS storage running on six nodes. This
service shows a warning.

The example also shows that there is a total of 14 nodes in the system. One of
them also shows a warning.

By clicking the blue warning links, you can quickly see which nodes or systems are
showing the warning.


VxFlex Manager Dashboard - Utilization and Storage

Further down in the Dashboard, you can see the utilization of nodes. In the
example, 42% of the nodes, or 6 out of 14, are in use by a service. The remaining
nodes can be added later if more capacity is needed. They can be configured to
provide compute and storage to the existing service, or to a new service.

The Dashboard also shows the VxFlex OS storage usage. In the example shown,
there is a single VxFlex OS cluster with only 512 GB of storage provisioned.


VxFlex Manager - Service Details

From the Services section of VxFlex Manager, you can view your services.
Selecting a service shows a diagram of all the resources in the service. You can
quickly see each resource along with their statuses. You can see details on each
resource by clicking it. You can then view logs from that resource. Some
maintenance tasks are also available. For example, you can place a node into
Service Mode which places the VxFlex OS and VMware services into maintenance
mode.


VxFlex Manager - Resources

A VxFlex Manager resource is any physical or virtual element in a VxFlex
integrated rack. This includes switches, nodes, vCenter servers, and VxFlex OS
clusters. You can view all of the resources in the Resources section. This includes
both resources that are currently used by a service and ones that are not in use.

In this view, you can quickly see information about each resource. This includes
whether they are healthy and whether they are compliant with the RCM level. It
also provides links to the management interfaces of each component, or in the
case of a switch, its IP address that you can connect to.


VxFlex OS Details

You can view details of a resource by selecting it in the Resources view and
clicking View Details. Here, you can see detailed information about the resource
including performance statistics. Shown is the details page for a VxFlex OS
system. It shows its capacity and historical IOPS data.


Node Details

The Resources page displays detailed information about all the resources and
node pools that VxFlex Manager has discovered and inventoried. You can perform
various operations from the All Resources and Node Pools tabs. Here, you can see
that the Resource Details page displays detailed information about the resource
and associated components. Performance details, including system usage, CPU
usage, memory usage, and I/O usage are displayed. Performance usage values
are updated every five minutes.

If there is a drive failure, VxFlex Manager provides wizards to guide you through
the process of selecting a disk to remove and completing the disk replacement.
VxFlex Manager supports drive replacement for storage-only and hyperconverged
SSD drives for Rx40 (R640, R740xd...) models only. It enables drive replacement
for NVMe disks only on storage-only nodes.


VxFlex Manager - Compliance Scan

VxFlex Manager monitors current firmware and software levels and compares them
to the active RCM definition, which contains the baseline firmware and software
versions. It shows any deviation from the baseline in the compliance status of the
resources. You can use VxFlex Manager to update the servers to a compliant
state. Using VxFlex Manager, you can choose a default RCM for compliance, or
add new RCMs.

You can view RCM compliance by clicking a service in the Services window and
clicking the View Compliance Report button.


Secure Remote Services for Call Home

VxFlex Manager and VxFlex OS connect with Secure Remote Services to transmit
encrypted RCM compliance assessments. They also transmit server hardware and
VxFlex OS alerts to Dell EMC Customer Support.

If there is an issue or potential issue that requires attention, customer support
engineers can connect back in to troubleshoot or repair.


Secure Remote Services with VxFlex Manager

VxFlex Manager can be configured to send alerts to support staff using Secure
Remote Services. Secure Remote Services routes alerts to the Dell EMC support
queue for diagnosis and dispatch.

To configure Secure Remote Services with VxFlex Manager, OpenManage
Enterprise Tech Release (OME) must also be deployed on the controller cluster.
OME collects alerts from the PowerEdge server components in your environment.
It then forwards them to the Alert Connector, which is a component of VxFlex
Manager. The Alert Connector forwards those alerts to the SRS gateway.

For information about how to configure Secure Remote Services, see the Dell EMC
VxRack FLEX Administration Guide.


SNMP to Monitor VxFlex Integrated Rack

Simple Network Management Protocol (SNMP) is a network management protocol
that is used for collecting status information from network devices, such as servers
and switches. An SNMP-enabled device runs an SNMP agent and communicates
with the SNMP management server to share information about device status. In a
VxFlex integrated rack, all SNMP traps should be directed towards the customer's
active SNMP monitoring system. This provides proactive alerting for critical and
warning level events. These events include, but are not limited to, hardware
failures requiring field replacement and software faults that could negatively impact
the stability of the system.


SNMP and Syslog Forwarding with VxFlex Manager

VxFlex Manager enables users to forward SNMP traps or syslog messages to local
servers of their choice. It acts as an aggregator for all devices in the VxFlex
integrated rack. Authentication is provided by VxFlex Manager through the
configuration settings provided. VxFlex Manager can be configured to forward
syslogs to up to five destination remote servers.

To configure SNMP, specify the access credentials for the SNMP version you are
using and then add the remote server as a trap destination. VxFlex Manager and
the network management system use access credentials with different security
levels to establish two-way communication. For SNMPv2 traps to be sent from a
device to VxFlex Manager, you need to provide VxFlex Manager with the
community strings on which the devices are sending the traps.

VxFlex Manager receives SNMPv2 traps from devices, and forwards SNMPv2 or
v3 traps to the network management system.


Configuring vCenter to Send SNMP Alerts

To configure vCenter to send SNMP alerts, perform the following steps (a host-level
SNMP example follows the list):


 From a selected host in a vCenter, add the new alarm.
 Under the General page:
 Add the Alarm name vcesys-supportalarms.
 Select Host as the inventory object to monitor.
 Under the Triggers page:
 Choose ANY for conditions that generate the trigger.
 Click '+' and select specific trigger conditions, for example, CPU usage.
 Click the settings for Operator, Warning, and Critical Condition.
 Under the Actions page:

 Select Send a notification trap as the action.
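
Separately from the vCenter alarm definition above, the SNMP agent on each ESXi host can also be pointed at a trap receiver with esxcli; the community string and target address below are placeholders:

esxcli system snmp set --communities public
esxcli system snmp set --targets 192.0.2.50@162/public
esxcli system snmp set --enable true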


Configuring SNMP Trap Forwarding for Cisco Nexus Switches

Cisco NX-OS generates SNMP notifications as either traps or informs. A trap is an
unacknowledged message that is sent from the agent to the SNMP managers
listed in the host receiver table. Informs are messages that are sent from the SNMP
agent to the SNMP manager, which the manager must acknowledge. You can
configure Cisco NX-OS to send notifications to multiple host receivers.

The image shows commands to configure SNMP trap on the Cisco Nexus
switches. For more information, see www.Cisco.com.
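
A hedged sketch of typical NX-OS trap-forwarding commands is shown below; the receiver address and community string are placeholders and should match your monitoring system:

configure terminal
snmp-server host 192.0.2.50 traps version 2c public
snmp-server enable traps
end
copy running-config startup-config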


Lab: System Monitoring

This lab presents monitoring activities in a VxFlex integrated rack environment.

VxFlex Integrated Rack Upgrade and Troubleshooting

This module focuses on the VxFlex integrated rack system life cycle management
and basic maintenance tasks.

Upon completing this module, you will be able to:


 Describe the system life cycle management and upgrade procedures
 Perform basic system analysis and maintenance tasks
 Perform log collection and basic troubleshooting


System Life Cycle Management and Upgrade

This lesson presents an overview of system life cycle management and upgrades.

This lesson presents the following topics:


 Release Certification Matrix
 RCM upgrade planning and resources
 Upgrade considerations and best practices


Life Cycle Management Using Release Certification Matrix

Managing the system life cycle is key to having a stable, secure, and compliant
system. Dell EMC converged and hyperconverged systems use the Release
Certification Matrix (RCM) for system life cycle management. Each RCM
version document defines the specific hardware components and related software
version combinations that are tested and certified for integrity and compatibility.
The main purpose of the RCM is to provide a reference of known, approved, and
supported configurations of these systems. These documents are regularly
updated as new software and firmware are released. Using the RCM to update and
maintain a system results in a consistent, secure, up-to-date, and validated
platform over its entire life cycle.

Important: Dell EMC systems are supported only when all
components are running the certified version in the Release
Certification Matrix.

You can download the Release Certification Matrix documents from the RCM portal
available at the Technical Resource Center.


Compliance Check Using VxFlex Manager

Not adhering to the RCM can put the integrity of the VxFlex integrated rack system
at risk. VxFlex Manager provides the capability to check system compliance with
the designated RCM version. VxFlex Manager identifies noncompliant components
and recommends remediation.

The following steps are performed to view the RCM compliance report:
 Select the resource for which you want to view the compliance report.
 Under the RCM Compliance column, click the link corresponding to the RCM
Compliance option.
 The Release Certification Matrix Compliance Report page is displayed.
 Select the Firmware Components option to view the details of the firmware
available on the selected resource. Select the Software Components option to
view the software components in the compliance report.

To update the noncompliant resources, click Update Resources from the
Compliance Report page. Note that if the RCM that contains an update is not
available in the Compliance and OS Repository, you cannot perform the firmware
or software update.

The compliance report can also be exported for the available resources.

 Export as CSV or PDF


 Display roll-up by category, such as compute, network, storage, and
virtualization.
 Further break down each node into current and expected versions for all
components.

Sample of a compliance report exported as PDF.


Adding Compliance File to VxFlex Manager

VxFlex Manager enables you to load newer compliance versions and specify a
default version for compliance checking. You can load multiple compliance
versions into VxFlex Manager. VxFlex Manager enables you to load the
compliance files either using Secure Remote Services or from a local repository.

If your system is registered with Secure Remote Services, VxFlex Manager
displays a banner at the top of the Compliance and OS Repositories page to notify
about a new RCM release. The UI only displays the banner for users with the
Administrator role. It will also enable the administrator to download the RCM
version.

If you selected to load RCM from configured Secure Remote Services, click the
Available RCMs drop-down list, and select the RCM.

Tip: If you want to be able to add the RCM with Secure Remote
Services, you must first configure the alert connector.

If the desired compliance version exists on the support VM, you can load it to
VxFlex Manager. From VxFlex Manager, click Settings, and then choose
Compliance and OS Repositories. On the Compliance and OS Repositories


page, select the Compliance Versions tab. You can use the Compliance
Versions tab to load RCM versions and specify a default version for compliance
checking. To load a new compliance version, click the Add button. The default
compliance version is always used for shared resources, such as switches. You
can specify that each service uses the default compliance version or a different
compliance version. The operating system images for ESXi and VxFlex OS are
included with the RCM.


VxFlex Integrated Rack Upgrade

Applying a new software or firmware upgrade may require a temporary shutdown
of a VxFlex integrated rack component. Redundant features enable your
applications to continue running throughout the upgrade process if managed
carefully.

It is recommended that the individual performing the upgrade should be well versed
with the system operation and interdependencies. Be certain to review the RCM
release notes, and the upgrade procedure in detail before performing the upgrade.

Dell EMC VxFlex integrated rack Upgrade Guide provides a detailed procedure to
upgrade between current and targeted RCM versions. Some upgrades may require
multiple hops from one RCM to another. Multiple-step upgrades may involve
significant extra effort and complexity. These upgrades occur in a specific order as
noted in the Upgrade Guide. Also, identify if there are other interrelated
components or applications that should be upgraded at the same time. System
health assessment and remediation are required prior to performing an upgrade.
The primary objective of the assessment and remediation activity is to get the
system into a good known state to perform the upgrade successfully.


Professionally Managed Upgrade Benefits

The VxFlex integrated rack Upgrade service is also available from Dell EMC
Professional Services. This service reduces risk by providing the necessary
activities and tasks by highly skilled professionals to perform a Release
Certification Matrix (RCM) upgrade. It also includes an assessment and
remediation to ensure system stability.

Dell EMC Professional Services and Partner teams minimize risk with pre-
engineered, predefined, and pretested upgrade paths. Their expertise helps to
reduce the costs and time that is typically associated with “do-it-yourself” upgrades.
Best practices, unique depth, and breadth of knowledge are brought to the delivery
of this service. Also, if an issue arises, Dell Technologies technical support can
more quickly and easily diagnose and fix the problem.


Upgrade Planning and Preparations

As part of the preparation, check the health of the system a few days before the
upgrade time. If you find something unusual, you have time to correct it before the
targeted upgrade starts. Also, check the health again just before starting the actual
upgrade to ensure that the environment is still healthy and stable. See the Dell
EMC VxFlex integrated rack Health Assessment and Remediation Guide for more
information.

Make sure that you review this entire upgrade document and each referenced
component upgrade document before you begin the upgrade. If you are uncertain
or have questions, call Dell EMC Support. The upgrade guide also provides the
approximate duration for each component upgrade. Estimate the total upgrade time
based on the components in your environment. Schedule some additional time in
case there are other unforeseen issues. Create a configuration backup file for each
system component before you begin the upgrade.


Upgrade Considerations and Best Practices

When upgrading VxFlex integrated rack, observe the following best practices:

 Upgrade redundant components one at a time. When upgrading paired
switches, upgrade one switch and verify that it is operational before continuing
to the next one. To migrate the active virtual machines, upgrade each node in a
cluster individually.
 VxFlex Manager automates some of the steps that are required to upgrade from
one VxFlex integrated rack RCM release to another. With VxFlex Manager, you
can automate many upgrade tasks such as upgrading Cisco switches, firmware
updates for all the managed nodes, and the ESXi driver for the ESXi nodes.
 For a manual upgrade, it is important to closely follow the procedures when
performing an upgrade involving VxFlex OS. It is acceptable to have a single
SVM offline while the SDS is in VxFlex OS maintenance mode. However, if
another SVM is rebooted or powered off at the same time, data unavailability
will occur. Always validate that the SVM is back online, healthy, and has exited
the VxFlex OS maintenance mode in the VxFlex OS GUI. This should be
completed especially before rebooting or powering off another SVM.
 Enter Service Mode in VxFM places the SDS in maintenance mode and
powers off the SVM. It also places the ESXi host into maintenance mode. For
maximum availability purposes, this button is enabled only when four or more
nodes exist in a service. Only one node can be put into service mode at a time
within a service. Once a node in the service mode, the Enter Service Mode


button is disabled for all other nodes. This option can be used to eliminate the
manual procedure of taking the node offline.


Upgrade Using VxFlex Manager

You need to upgrade the VxFlex Manager virtual appliance before proceeding with
the RCM upgrade within VxFlex Manager. Refer to the upgrade guide for the
procedure to upgrade the VxFM virtual appliance.

To update firmware and software components using VxFlex Manager, go to the
Services page. Select a service to view the compliance report. On the Service
Details page, click View Compliance Report. To update the noncompliant
resources, click Update Resources. To apply the updates right away, choose
Allow VxFlex Manager to perform firmware and software updates now and
click apply. You can also update resources from the Resources page, but if a
resource is in use in a service it will redirect to the Services page.

VxFlex Manager performs rolling updates of the VxFlex OS nodes within a single
service, meaning that it updates one node at a time. This is necessary because
each service is associated with a single Protection Domain, and a Protection
Domain might not span across services.

During execution, the workflow performs the following steps for each node and its
associated SVM:
 Checks to see if the node is part of the VxFlex OS cluster. This check ensures
that the workflow does not attempt to update nodes that are not a part of the
cluster.


 Finds the SVM for each node that is part of the VxFlex OS cluster by performing
a lookup within the VxFlex OS Gateway.
 The workflow then puts the SDS into maintenance mode, turns off the SVM,
and finally puts the ESXi node in maintenance mode.
 Runs the update task for the firmware that is specified in the RCM. To perform
the update, it installs the required update and reboots the ESXi node. To ensure
that the ESXi node restarts successfully, it performs a verification check after
the reboot.
 Takes the node out of maintenance mode, powers on the SVM and calls the
VxFlex OS Gateway.

This process repeats for every node that requires an update. The updates for each
node take approximately 45 minutes to complete. The RCM update process
handles node, BIOS, firmware, and ESXi driver updates automatically. However,
updates to the VxFlex OS software and to the top of rack switches must be
performed manually.


VxFlex Integrated Rack Basic Troubleshooting

This lesson introduces basic troubleshooting in a VxFlex integrated rack
environment. In this lesson, we will look at troubleshooting some common issues
and failure scenarios.

This lesson presents the following topics:


 Troubleshooting common network issues
 Troubleshooting performance issues
 Node health and switch configuration check


VxFlex Integrated Rack Problem Management

VxFlex integrated rack systems are engineered for high availability and reliability;
however, problems can still occur. To help identify a problem, it is best to know the
normal operating environment. This can be achieved by saving logs and
configuration files after the initial installation, or when things are running normally.
This information can be used as a baseline for comparison.

The most common cause of many issues is configuration changes. Since VxFlex
integrated rack has many integrated components, it is important to validate the
impact on the overall system before making any changes. Even a small change or
fix in one area may introduce a problem in some other system areas.
Understanding the overall I/O path and component integration can help identify the
root cause of a problem.

Understanding the network is key to a successful environment for normal
operation, upgrades, and any kind of maintenance. Before doing any maintenance,
upgrades, or expansions, ensure that all devices on all the system networks are
communicating properly.

See Dell EMC VxFlex Integrated Rack Administration Guide for procedures and
best practices about administrating the system.


Troubleshoot Network Issues

Since nearly all services in a VxFlex integrated rack are distributed, networking
could be a key trouble point. Validating communication between all physical and
virtual components is a key troubleshooting task. When troubleshooting network
connectivity, you should check related server NICs, the DVswitches, and the
VLANs. You may also need to verify the Virtual Port Channels configured for the
management networks. If new production VLANs were added, check the location of
these VLANs. In VMware environments, all the production VLANs should be
configured on DVswitch 0. The ping command is the most useful tool to verify the
connectivity. Refer to the port map and LCS to verify the network information.


Troubleshoot I/O Path

Depending on which components in the I/O Path fail, you will see different results.

If a VMDK file for a virtual machine is corrupted, then only that VM is affected. If a
datastore becomes corrupted, all VMs on that datastore are affected.

If an SDC fails, then the ESXi server loses access to all VxFlex OS volumes. Other
ESXi servers will not be affected.

Network failures can also cause one ESXi server, or multiple ESXi servers, to lose
access to VxFlex OS storage. In addition, the SDSs might lose connectivity to each
other, resulting in various VxFlex OS errors.

If an SVM or Red Hat SDS fails, then VxFlex OS shows errors for that SDS.
However, VxFlex OS storage is still available, due to its resilient nature. You will
also see a rebuild operation just after the failure.

A failure of DirectPath I/O or other device failures will also show errors for the
VxFlex OS devices. If all devices on an SDS have failed, then DirectPath I/O or a
storage controller has likely been disabled or failed. An individual device failure
indicates a problem with that device.


Troubleshoot VM Access

If you have created a VLAN for an application and that new VM/application cannot
access the new VLAN, it could be a problem with the VLAN. Check to see if it is
defined correctly in all places. For example, check the settings on the VM, on the
DVswitch, and on the physical switch. Understand whether the network traffic
needs to leave the VxFlex integrated rack environment or not, which determines if
the VLAN is enabled on the customer uplink ports. Using switch commands,
display the status of the interface ports, virtual PortChannel and VLANs.


Validate DVswitch Uplink Status

When looking at the ESXi elements for network troubleshooting, examine each
Distributed Virtual Switch. Validate that its uplinks are active. If they are, you can
see the green designation. If you see red, it is inactive and should be investigated.
You should check the physical elements such as the switch and cables if you find
an uplink down.


VxFlex OS SDS Connectivity Status

You can query the status of the VxFlex OS SDSs by using this scli command. It
pings the different SDSs, validating traffic to and from each SDS on its specific IP
addresses. You could also use this command to validate IPs if a change to the IP
address of an SDS was required.
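
The specific command in the referenced screenshot is not reproduced here; as an assumption-labeled sketch, SDS state and connectivity can be reviewed from the master MDM with SCLI:

# Lists every SDS with its IP addresses, state, and membership
scli --query_all_sds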


Check Physical Switch Status

There are multiple commands that you can run on the switch to show the status of
different components. The show interface brief command, for example, lets you
see the assigned VLAN, its access mode, and the port speed.


Troubleshoot Performance Issues

If you are seeing performance issues, you may notice symptoms such as reduced
reported IOPS, SDS errors, or "packet too big" messages. If you see these errors,
check whether auto-negotiation is ON for a switch port. Switch ports should not use
auto-negotiation and should be set to the proper speed. Remember that there are
specific policy settings that are required on different networks. Sometimes
intermittent network hardware failures can affect latency.


MTU Size Errors

You may have MTU size mismatch error messages or suspect a mismatch. MTU
mismatch errors can occur when the components along a network path are
configured with different MTU sizes. MTU mismatch is undesirable because it can
increase CPU utilization on the switches due to the overhead of disassembly and
reassembly of the packets. If there is an MTU mismatch, you may see ICMP
“packet too big” messages sent back to the source. However, if the traffic passes
through a firewall, the ICMP messages may be blocked. IPv4 routers fragment
packets on behalf of the source node that is sending a larger packet. However,
IPv6 routers do not fragment IPv6 packets on behalf of the source; they drop the
packet and send back an ICMPv6 Type 2 (Packet Too Big) message to the source
indicating the proper MTU size. You can validate the MTU size by using the ping
command and telling it not to fragment the packet, as shown in the example below.
After validating end-to-end MTU, check the DVswitches in the VxFlex integrated
rack environment.
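
A sketch of the do-not-fragment ping tests referenced above; the VMkernel interface and target IP address are placeholders, and 8972 bytes corresponds to a 9000-byte MTU after IP and ICMP headers:

# From an ESXi host, across a jumbo-frame VxFlex OS data VMkernel interface
vmkping -I vmk1 -d -s 8972 192.168.152.11

# From a Linux SVM or storage-only node
ping -M do -s 8972 -c 4 192.168.152.11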

Check each DVSwitch to ensure that its MTU size is properly set. Verify that the
MTU size for DVswitch0 is set to 1500, and DVswitches 1 and 2 are set to 9000.


MTU Size on Kernel Adapter

Next, validate the MTU size on each kernel adapter on all ESXi hosts. In the
vSphere web client, select the host in the left navigation pane, and then select a
VMkernel adapter. Adapter vmk0 is shown in the graphic. Select Configure >
Networking > VMkernel adapters and, in the bottom window under Properties,
validate that the MTU size is set to 1500. Similarly, verify that the VMkernel
interfaces on DVSwitch 1 and 2 for the VxFlex OS data paths are set to 9000.
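
The same values can be confirmed from the ESXi command line; this simply lists each VMkernel interface with its configured MTU:

esxcli network ip interface list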


Switch Interface Errors

Check for counter errors on a physical switch to see if packets are getting dropped
as this may be an indication of an MTU size mismatch or other error.
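
On a Cisco Nexus switch, one quick way to review these counters is, for example:

show interface counters errors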


Load Balancing Policy

For vSphere port groups, validate the load balancing policy by selecting Configure
> Settings > Policies > Teaming and failover. The flexmgr-install and Sio-data
port group policies should be set to Route based on originating virtual port. All
other port group policies should be set to Route based on IP hash.


Show Switch Interface CRC Details

To see details of a switch port, or interface, use the show interface ethernet x/x
command, where x/x is the port number. Notice that you can see the MTU size here
in addition to the status, which is also where you check for CRC errors. CRC issues
can point to an MTU size mismatch along the I/O path, or to a cable or port speed
that is not set properly. This example is from the 3172 switch.


Common VxFlex OS Issues

For VxFlex OS networking, ensure end-to-end connectivity across all components.


Remember that the component network ports are configured on various different
VLANs. For example, if the VxFlex OS Gateway has an incorrect IP address for the
MDMs, it cannot communicate with VxFlex OS. Carefully review the IP addresses
from the Logical Configuration Survey. In cases when the addresses are not
sequential, it can be difficult to remember the IP addresses that you need to
troubleshoot. Remember again that the management and data ports of the VxFlex
OS components are on different networks. Make sure that you confirm the
operating system identities of the servers. There is no standard for mixing Storage-
Only and ESXi nodes, so they could be in any location or have any IP address.
Knowing the operating system of each node also helps avoid confusion.


vSAN Troubleshooting in Controller Cluster

vSAN is running in the controller cluster. Although minimal changes may occur in
this environment, issues can still arise. To do some preliminary troubleshooting,
first run the Retest check of vSAN from the vSphere Web client. You can also look
at some of the documentation listed below or call Dell Technologies support.

How to troubleshoot the vSAN network is discussed in KB-212043.

Detecting a vSAN network partition is extensively discussed in KB 303788. For
more information about the vSAN health check, see
https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/products/products/vsan/vmw-gdl-vsan-health-check.pdf

For troubleshooting information, see the VMware® Virtual SAN Diagnostics and
Troubleshooting Reference Manual at
https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/products/vsan/vsan-troubleshooting-reference-manual.pdf


VxFlex Manager Service Mode

A VxFlex integrated rack node may need to be taken down for various reasons. It is
critical to follow an orderly shutdown sequence which enables you to bring it back
online. The first task is to gather information about the node. If the targeted node is
the primary MDM, the recommendation is to switch the MDM ownership manually
before shutting it down. Refer to Administration Guide or VxFlex OS documentation
for the detailed procedure.

The best approach to bring the node offline is to use the VxFlex Manager Service
Mode feature, as shown in the graphic. It automates an orderly shutdown
sequence to bring the node into maintenance mode safely. Click the node from the
Services view. Under Node Actions, select Enter Service Mode. The node in
Service Mode is displayed with a wrench icon. Service Mode also limits the
activities that can be performed on that particular service. After the node
maintenance is complete, click Exit Service Mode to bring the node back online.

Alternately, you can use the manual process to bring the node offline.


 For hyperconverged nodes running ESXi: Put the SDS for the node into
Maintenance Mode from the VxFlex OS GUI. Then, from VMware vCenter,
migrate any running VMs off the node (except the Storage VM), shut down the
Storage VM, and place the ESXi node into maintenance mode.
 For storage-only nodes: After you put the node into Maintenance Mode from
the VxFlex OS GUI, no steps are required in VMware vCenter, since storage-only
nodes do not run ESXi or an SVM.

Entering a node into Maintenance Mode from VxFlex OS GUI.


Drive Replacement Using VxFlex Manager

VxFlex Manager enables the replacement of a drive in a deployed node. Before
starting this process, ensure that you have a replacement drive and access to the
node. Once the process has started, it cannot be undone. Note that the node is
placed in Service Mode until the entire process is complete. To replace the drive,
from the Services view, select the node where the drive needs to be replaced.
From Node Actions, select Drive Replacement. The administrator can then select
the drive to replace, assisted by a graphical view and other identifying data. VxFlex
Manager automates the removal of the drive from VxFlex OS, and you can blink
and unblink the drive LED for physical identification. Once the drive is replaced,
VxFlex Manager automates the addition of the new drive to VxFlex OS.


VxRack Node Health Check

Sometimes when hardware fails, you may see logical failures in other areas of
system operation. Error logs help determine which component is failing, whether
the same component fails repeatedly, or whether a common component is involved
across multiple nodes. Log in to iDRAC and examine the server to see if something
is occurring at the hardware layer. If you are unsure, after collecting the logs you
can reset the error counters or clear the logs so that you are capturing and
examining only the latest information.
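
As a sketch, the System Event Log can be reviewed and then cleared from the racadm command line after opening an SSH session to the iDRAC. The iDRAC address shown is a placeholder.

# SSH to the iDRAC of the suspect node (address is a placeholder)
ssh root@192.168.101.21

# Display the System Event Log entries
racadm getsel

# After the output has been saved, clear the SEL so that only new events appear
racadm clrsel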


Log Collection

This lesson presents the key system logs and the log collection process.

This lesson presents the following topics:


 The log collection procedure for various system components, such as VxFlex
Manager, VxFlex OS, and iDRAC


Log Collection Using VxFlex Manager

VxFlex Manager provides server logs, VxFlex OS logs, and an activity log of user
and system-generated actions. Other log categories include deployment,
infrastructure or hardware configuration, infrastructure or hardware monitoring,
licensing, network configuration, and template configuration. By default, log entries
display in order of occurrence. These logs contain information about VxFlex
Manager activities.

To see component-level logs, generate a troubleshooting bundle in the virtual
appliance management section. A troubleshooting bundle is a compressed file that
contains logging information for the components managed by VxFlex Manager. If
necessary, download the bundle and send it to Dell EMC Support for debugging.
The troubleshooting bundle includes VxFlex Manager application logs, alert
connector logs, VxFlex OS Gateway logs, iDRAC Lifecycle logs, Dell switch logs,
Cisco switch logs, and VMware ESXi logs.


VMware vCenter, ESXi, and vSAN Logs

To troubleshoot the virtual environment, you may need to collect the VMware
support log bundle. You can view vCenter Server logs by selecting the vCenter
Server and going to Monitor > System Logs. Additionally, you can right-click the
ESXi host or a VM and select Export System Logs to start the Export Logs
wizard. Generally, you do not need the performance data. Some of the selection
choices contain additional options; under the Storage selection, for example, there
are elements related to vSAN. If you are working in the VxFlex OS cluster, you do
not need to select vSAN.

See the VMware KB articles for more information about:


 Collecting diagnostic information for ESX/ESXi hosts and vCenter Server using
the vSphere Web Client (2032892)
 Collecting vSAN support logs and uploading to VMware (2072796)
 Collecting diagnostic information for VMware vCenter Server and ESX/ESXi
using the vSphere PowerCLI (1027932)


Access VMware Logs Using CLI

On the VCSA virtual machine, the vc-support.sh script can be run to collect the
vCenter log bundle. It records all logs and information from the VCSA up to the
time of the collection. The script creates a vcsupport.zip file in the /root directory of
the VCSA. Either use SCP to export the generated support bundle to another
location, or download it from https://<VCSAIP>:443/appliance/<support-bundle>.tgz
using root credentials. See VMware knowledge base article 2110014 for details
about the types of files that are included in the bundle.

The vm-support command can be run on the ESXi nodes to generate the
vSphere log bundle. The bundle is written to /var/tmp, /var/log, or the current
working directory. This is the standard vm-support bundle that is collected in
typical ESXi troubleshooting; it is a .tgz file that contains the logfiles for a specific
node. You can also download the logs remotely from the URL
https://<esxihost>/cgi-bin/vm-support.cgi. Another important script for debugging,
or for re-creating files from the collected data, is reconstruct.sh, which is
created in the root directory of the support bundle. Certain commands in
vm-support generate large files that consume additional resources and are likely to
time out or take a considerable amount of time to collect. To keep the bundle
manageable, these larger files are broken into fragments when they are added to
the vm-support bundle. After the support tool completes and the bundle is
extracted, you can re-create the larger files by running the reconstruct.sh
script in the top-level directory of the extracted bundle.
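
The following sketch shows one way these collections might be run from the command line. Host names and destinations are placeholders, and the exact bundle file names vary by version.

# On the VCSA (via SSH as root): collect the vCenter support bundle
vc-support.sh

# Copy the generated bundle off the appliance (file name per the text above; may vary by version)
scp /root/vcsupport*.zip admin@jumphost:/tmp/

# On an ESXi node: collect the standard vm-support bundle into /var/tmp
vm-support -w /var/tmp

# Or pull the ESXi bundle remotely (prompts for the root password)
wget --no-check-certificate --user=root --ask-password https://<esxihost>/cgi-bin/vm-support.cgi -O esxi-support.tgz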


Access Logs for Storage-Only Nodes

Linux logfiles can be collected using the EMC Grab utility (emcgrab), which
performs a comprehensive collection of key elements from the Linux storage-only
node. Alternatively, log on to the Linux console with the root user ID and retrieve
the operating system logs from the /var/log directory using the scp command. For
more information, search for EMC Grab at https://www.dell.com/support/home.
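
A minimal sketch of the manual approach, assuming root SSH access to the storage-only node (the host name and destination directory are placeholders):

# Copy the operating system logs from the storage-only node to an administration host
scp -r root@storage-node-01:/var/log ./storage-node-01-logs/

# Or archive /var/log on the node first, then copy the single archive off
ssh root@storage-node-01 "tar czf /tmp/storage-node-01-varlog.tgz /var/log"
scp root@storage-node-01:/tmp/storage-node-01-varlog.tgz .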


Collect VxFlex OS Logs

For VxFlex OS-related issues, you can retrieve logs through the VxFlex OS
Gateway or using the VxFlex OS CLI.

To retrieve logs using the VxFlex OS Gateway, log in to the VxFlex OS Gateway
and, from the top menu bar, select Maintain. Provide the MDM login credentials
and the LIA password, and click Retrieve system topology. In the Maintenance
operation screen, click Collect Logs, enter the MDM admin password, and click
Collect Logs. You can monitor the progress of the operation in the Monitor tab.

Refer to the VxFlex OS User Guide for information about the VxFlex OS CLI
command to retrieve system logs.

You can also collect logfiles for each VxFlex OS component directly by using SSH.
The logfile is a compressed .tar archive. If there is more than one component on a
VxFlex OS virtual machine, the script gathers that information as well. Run the
get_info.sh script, providing the MDM user and password:
/opt/emc/scaleio/mdm/diag/get_info.sh -u <MDMuser> -p
<MDMpassword> -r. The resulting files are located in the /tmp/scaleio-getinfo
directory.
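
As a sketch, the SSH-based collection might look like the following; the SVM address and credentials are placeholders.

# SSH to the Storage VM (or storage-only node) hosting the MDM (address is a placeholder)
ssh root@svm-mdm-01

# Run the diagnostic collection script with the MDM credentials
/opt/emc/scaleio/mdm/diag/get_info.sh -u admin -p <MDMpassword> -r
exit

# Copy the collected output back to an administration host
scp -r root@svm-mdm-01:/tmp/scaleio-getinfo ./vxflexos-logs/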


Collect Logs Using VxFlex OS Plug-in

You can collect the VxFlex OS installation log by using the Show server log option
in the vSphere Web Client. Use copy and paste to save the information to a file.
Use the server log in the VxFlex OS plug-in to troubleshoot any VxFlex OS
installation issues.


Collect Server Logs Using iDRAC

After logging in to iDRAC, you can download the System Event Log from the
Maintenance section. Lifecycle Log entries can be viewed online.
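
The same information can be pulled from the command line with remote racadm; this is a hedged sketch, and the iDRAC address and credentials are placeholders.

# Save the System Event Log to a file on the administration host
racadm -r <idrac-ip> -u root -p <password> getsel > node01-sel.txt

# View the Lifecycle Log entries
racadm -r <idrac-ip> -u root -p <password> lclog view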


Lab: Maintenance Mode and Licensing

This lab presents activities that are related to maintaining and troubleshooting the
VxFlex integrated rack.


Course Summary

This course presented tasks and activities that are required to manage the VxFlex
integrated rack in your environment.

This concludes the training.


Dell EMC Proven Professional Certification Path

Shown here is the certification path to the Systems Administration Multi-Cloud
exam. The VxFlex Integrated Rack Administration course is part of the Dell EMC
Proven Professional Specialist track. For more information about the Dell EMC
Proven Professional program and the requirements to achieve the Dell EMC
Certified Specialist credential, go to Dell EMC Education Services at
https://education.emc.com/content/emc/en-us/home/certification-overview.html
