
Dell EMC PowerFlex Appliance

Administration Guide

July 2021
Rev. 7.1
Notes, cautions, and warnings

NOTE: A NOTE indicates important information that helps you make better use of your product.

CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid
the problem.

WARNING: A WARNING indicates a potential for property damage, personal injury, or death.

© 2019 - 2021 Dell Inc. or its subsidiaries. All rights reserved. Dell, EMC, and other trademarks are trademarks of Dell Inc. or its subsidiaries.
Other trademarks may be trademarks of their respective owners.
Contents
Revision history........................................................................................................................................................................ 10

Chapter 1: Introduction................................................................................................................ 12

Chapter 2: Administering the network......................................................................................... 13


Using an embedded operating system-based jump server...................................................................................... 13
Jump server   ................................................................................................................................................................13
Jump server tools.........................................................................................................................................................14
Jump server access..................................................................................................................................................... 14
File sharing services.....................................................................................................................................................14
Security and administration....................................................................................................................................... 15
Jump server updates...................................................................................................................................................15
Install the embedded operating system-based iDRAC tools................................................................................... 15
Converting the Windows jump VM to the embedded operating system jump VM............................................15
Installing the offline repository................................................................................................................................. 16
Verifying connectivity between Storage Data Server (SDS) and Storage Data Client (SDC)....................... 16
Verifying connectivity between Storage Data Server (SDS) and PowerFlex Gateway....................................17
Checking the maximum transmission unit on all switches and servers................................................................ 18
Checking the maximum transmission unit on the access switch..................................................................... 18
Checking the maximum transmission unit on a VMkernel port.........................................................................18
Checking the maximum transmission unit on all port groups or ports............................................................ 19
Adding a network to the deployed service using PowerFlex Manager................................................................. 19
Add a network to a service ............................................................................................................................................20
Add a VLAN to an access switch connected to a PowerFlex appliance cluster................................................20
Verifying a VLAN configuration...................................................................................................................................... 21
Gather logs from the network switch for troubleshooting...................................................................................... 21
Customer switch port configuration examples.......................................................................................................... 22
Upgrading Dell EMC PowerSwitch switches using ONIE........................................................................................35

Chapter 3: Administering the storage.......................................................................................... 36


Determining and switching the PowerFlex Metadata Manager.............................................................................36
Update resource inventory..............................................................................................................................................37
Add volumes to the service............................................................................................................................................ 38
Adding a PowerFlex appliance node to an existing cluster..................................................................................... 40
Removing a PowerFlex node for maintenance .......................................................................................................... 41
Entering and exiting service mode........................................................................................................................... 41
Rebooting a PowerFlex node..........................................................................................................................................42
Resize a volume................................................................................................................................................................. 42
Unmapping a volume........................................................................................................................................................ 43
Unmapping a volume using a PowerFlex version prior to 3.5.................................................................................43
Removing a volume ..........................................................................................................................................................43
Removing a volume using a PowerFlex version prior to 3.5................................................................................... 44
Disabling persistent checksum on medium granularity storage pools.................................................................. 44
Using PowerFlex GUI presentation server to disable persistent checksum..................................................44
Enabling persistent checksum for medium granularity storage pools.................................................................. 45

Using PowerFlex to enable persistent checksum................................................................................................45
Add licenses to PowerFlex and PowerFlex Manager................................................................................................46
Managing volumes, nodes, and network components............................................................................................. 46
Monitoring system health................................................................................................................................................ 47
Upgrading PowerFlex appliance firmware...................................................................................................................48
Mapping a volume using PowerFlex version prior to 3.5 to a Windows PowerFlex compute-only node.... 49
Mapping a volume using Windows PowerFlex compute-only node...................................................................... 49
Enabling and disabling SDC authentication.................................................................................................................50
Preparing for SDC authentication........................................................................................................................... 50
Configuring SDCs to use authentication............................................................................................................... 50
Windows and Linux SDC nodes................................................................................................................................ 51
Enabling SDC authentication ...................................................................................................................................52
Disabling SDC authentication................................................................................................................................... 52
Expanding an existing PowerFlex cluster with SDC authentication enabled................................................53

Chapter 4: Administering the storage with asynchronous replication...........................................54


Remote replication on PowerFlex hyperconverged nodes .....................................................................................54
Remote consistency group (RCG)................................................................................................................................54
Replication direction and mapping................................................................................................................................ 54
Adding a replication consistency group....................................................................................................................... 55
Checking the current copy status.................................................................................................................................56
Modifying the recovery point objective.......................................................................................................................56
Adding a replication pair to a remote consistency group........................................................................................ 56
Unpairing from a remote consistency group.............................................................................................................. 56
Freezing a remote consistency group.......................................................................................................................... 57
Unfreezing a remote consistency group......................................................................................................................57
Setting the target to inconsistent mode..................................................................................................................... 57
Setting the target to consistent mode........................................................................................................................ 58
Running a test failover.....................................................................................................................................................58
Stopping test failover.......................................................................................................................................................59
Running a failover............................................................................................................................................................. 59
Restoring replication.........................................................................................................................................................59
Reversing replication........................................................................................................................................................ 60
Creating a snapshot of the remote consistency group (RCG) volume............................................................... 60
Pausing the remote consistency group........................................................................................................................ 61
Pausing the initial copy..................................................................................................................................................... 61
Resuming the initial copy................................................................................................................................................. 61
Resuming the replication consistency group...............................................................................................................61
Setting priority................................................................................................................................................................... 62
Mapping remote consistency groups to the Storage Data Clients (SDC).......................................................... 62
Mounting a VMFS datastore copy on the target VMware ESXi cluster..............................................................62
Unmapping an Storage Data Client (SDC) from the remote consistency group target volumes................. 63
Configuring replication on PowerFlex storage-only nodes......................................................................................63
Add storage data replication to PowerFlex...........................................................................................................63
Extract and add the MDM certificate.................................................................................................................... 64
Create the replication consistency group..............................................................................................................64
Disabling replication on PowerFlex storage-only nodes.....................................................................................66
Freeze the remote consistency group................................................................................................................... 66
Remove the remote consistency group................................................................................................................. 66
Remove a peer system...............................................................................................................................................66

Remove replication trust for peer system............................................................................................................. 67
Enter SDS in maintenance mode............................................................................................................................. 67
Remove storage data replication from PowerFlex.............................................................................................. 67
Remove a storage data replication RPM............................................................................................................... 68
Clean up network configurations.............................................................................................................................68
Exit SDS in maintenance mode................................................................................................................................ 68
Remove journal capacity............................................................................................................................................69
Remove target volumes from the destination system....................................................................................... 69

Chapter 5: Configuring and viewing alerts................................................................................... 70


Configure the alert connector........................................................................................................................................70
Configuring SNMP trap and syslog forwarding.......................................................................................................... 71
Configure SNMP trap forwarding............................................................................................................................72
Configure syslog forwarding..................................................................................................................................... 73

Chapter 6: Administering PowerFlex Manager............................................................................. 75


Back up and restore PowerFlex Manager................................................................................................................... 75
Add or modify user accounts..........................................................................................................................................75
Assigning users to services............................................................................................................................................. 76
Recovering a lost password............................................................................................................................................ 77
Access switch password management.........................................................................................................................77
VMware vCenter password management................................................................................................................... 78
VMware ESXi operating system password management........................................................................................ 78
Adding a non-root user to VMware ESXi...............................................................................................................79
Minimum VMware vCenter permissions...................................................................................................................... 79
Create a user in monitoring mode........................................................................................................................... 79
Create a user in lifecycle mode................................................................................................................................80
Create a user in managed mode.............................................................................................................................. 80
Windows server operating system password management ................................................................................... 81
Updating passwords in PowerFlex Manager............................................................................................................... 81
Update passwords for the PowerFlex Gateway................................................................................................... 81
Updating passwords for PowerFlex Gateway components ............................................................................. 82
Updating passwords for system components...................................................................................................... 82
Updating passwords for nodes................................................................................................................................ 83
Embedded operating system password management..............................................................................................83
Adding users................................................................................................................................................................. 84
Granting sudo privileges to a user...........................................................................................................................84
Managing users with sudo privileges......................................................................................................................85
Deleting users...............................................................................................................................................................85
Presentation server root password management..................................................................................................... 85
Red Hat Enterprise Linux user and password management................................................................................... 86
Enabling sudo on a user............................................................................................................................................. 86
SUSE user and password management....................................................................................................................... 87
Creating users.............................................................................................................................................................. 87
Deleting users............................................................................................................................................................... 87
Enabling sudo on a user............................................................................................................................................. 87
Credentials management.................................................................................................................................................88
Restarting the PowerFlex Manager virtual appliance...............................................................................................88

Chapter 7: Deploying PowerFlex nodes using PowerFlex Manager............................................... 89
Full network automation.................................................................................................................................................. 89
Full network automation: Deploying a PowerFlex compute-only node with Red Hat Enterprise
Linux or CentOS...................................................................................................................................................... 90
Full network automation: Deploying a PowerFlex storage-only node.............................................................93
Full network automation: Deploying a VMware ESXi PowerFlex hyperconverged node or
PowerFlex compute-only node............................................................................................................................ 98
Adding volumes to a PowerFlex hyperconverged node or PowerFlex compute-only node ................... 103
Partial network automation........................................................................................................................................... 103
Partial network automation: Deploying a PowerFlex compute-only node with Red Hat Enterprise
Linux or CentOS.....................................................................................................................................................104
Partial network automation: Deploying a PowerFlex storage-only node......................................................107
Partial network automation: Deploying a VMware ESXi PowerFlex hyperconverged node or
PowerFlex compute-only node............................................................................................................................ 111
Adding volumes to a PowerFlex hyperconverged node or PowerFlex compute-only node .................... 116

Chapter 8: Restoring the PowerFlex Gateway............................................................................. 117


Configure SNMP for PowerFlex................................................................................................................................... 118
Installing the PowerFlex Gateway................................................................................................................................ 118
Installing the PowerFlex Gateway prior to PowerFlex 3.5..................................................................................... 119
Changing the root password on the VM....................................................................................................................120
Configuring the PowerFlex Gateway network interfaces......................................................................................120
Configuring the PowerFlex Gateway NTP client...................................................................................................... 121
Configuring the PowerFlex Gateway hostname.......................................................................................................122
Installing the Java and PowerFlex Gateway RPMs................................................................................................. 122
Restoring the PowerFlex Gateway configuration.................................................................................................... 122
Deploying the PowerFlex GUI presentation server................................................................................................. 123
Linking and unlinking the MDM to the presentation server web UI.................................................................... 124
Link the MDM to the presentation server web UI............................................................................................. 124
Unlink the MDM to the presentation server web UI......................................................................................... 124

Chapter 9: Upgrading VMware vSphere for patch releases.........................................................125


Upgrading VMware vSphere infrastructure management components.............................................................125
Stage and upgrade the iDRAC and firmware............................................................................................................ 126
Shutting down all the VMs running on the controller host....................................................................................127
Upgrading VMware vSphere ESXi............................................................................................................................... 127
Powering on all the VMs running on the controller host....................................................................................... 128
Upgrading the iDRAC service module.........................................................................................................................128
Change the SVM CPU clock reservation...................................................................................................................128
Find the CPU and clock speed............................................................................................................................... 129
Migrating vCLS VMs on controller nodes .................................................................................................................129
Upgrading the embedded operating system jump VM........................................................................................... 129
Installing the offline repository.....................................................................................................................................130

Chapter 10: Upgrading a PowerFlex appliance environment........................................................ 131


Intelligent catalog (IC) trains and the upgrade process..........................................................................................131
Change the maximum transmission unit (MTU) value............................................................................................132
Back up and verify the dvSwitch configuration................................................................................................. 132
Change the maximum transmission unit (MTU) on the access switch ....................................................... 133

Change the maximum transmission unit (MTU) on the cust_dvswitch....................................................... 133
Change the maximum transmission unit (MTU) for VMware vMotion VMK.............................................. 133
Upgrading the PowerFlex Manager virtual appliance............................................................................................. 134
Upgrade PowerFlex Manager using backup and restore................................................................................. 134
Upgrading the PowerFlex Manager virtual appliance using Secure Remote Services............................. 140
Restarting the PowerFlex Manager virtual appliance....................................................................................... 142
Upgrading components............................................................................................................................................ 142
Adding a new Intelligent Catalog file and OS images to PowerFlex Manager.................................................. 143
Upgrade the PowerFlex presentation server............................................................................................................ 143
Upgrading PowerFlex Gateway.................................................................................................................................... 144
Upgrading Java on the PowerFlex Gateway and PowerFlex GUI presentation server.................................. 144
Update the PowerFlex GUI presentation server...................................................................................................... 145
Update PowerFlex appliance nodes............................................................................................................................ 146
Migrating VMware vSphere Cluster Services (vCLS) VMs.................................................................................. 147
Upgrading Cisco NX-OS 7.x to Cisco NX-OS 9.x.................................................................................................... 147
Upgrading the electronic programmable logic device (EPLD)............................................................................. 149

Chapter 11: Upgrading VMware NSX-T Edge nodes.....................................................................153


Stage and upgrade the iDRAC and firmware............................................................................................................153
Validate the vSAN health ............................................................................................................................................. 154
Shut down all the VMs on the NSX-T Edge Gateway host.................................................................................. 154
Put VMware NSX-T Edge Gateway host into maintenance mode......................................................................154
Upgrade VMware vSphere ESXi.................................................................................................................................. 154
Exit maintenance mode..................................................................................................................................................155
Power on all VMs running on the VMware NSX-T Edge Gateway host............................................................ 156
Upgrade the iDRAC service module............................................................................................................................156
Upgrade the VMware vSphere Distributed Switch.................................................................................................156
Upgrade the VMware vSAN disk format (vSAN storage option only)............................................................... 157
Verifying VMware vSAN health (vSAN storage option only)............................................................................... 157

Chapter 12: Enable replication on existing PowerFlex hyperconverged nodes............................. 158


Prerequisites..................................................................................................................................................................... 158
Workflow............................................................................................................................................................................158
Remove an existing PowerFlex hyperconverged service from PowerFlex Manager...................................... 159
Create and configure replication port groups...........................................................................................................159
Preparing the SVMs for replication.............................................................................................................................159
Set the SDS NUMA...................................................................................................................................................159
Enabling replication on a PowerFlex appliance with FG Pool..........................................................................160
Verify Network Manager is disabled..................................................................................................................... 160
Update the network configuration........................................................................................................................ 160
Update the grub configuration file......................................................................................................................... 161
Enter the SDS nodes into maintenance mode and power off.............................................................................. 162
Add virtual NICs to SVMs..............................................................................................................................................162
Record the MAC address of the newly added network interface controllers.................................................. 162
Modifying the vCPU, memory, vNUMA and CPU reservation settings on SVMs........................................... 163
Modify the memory size...........................................................................................................................................163
Increase the vCPU count.........................................................................................................................................163
Setting the vNUMA advanced option...................................................................................................................163
Set the vNUMA advanced option..........................................................................................................................164

Modifying the memory size according to the SDR requirements for FG pool-based PowerFlex
systems with replication ..................................................................................................................................... 164
Increasing the vCPU count according to the SDR requirement.................................................................... 165
Setting the vNUMA advanced option...................................................................................................................165
Editing the SVM configuration............................................................................................................................... 165
Powering on the SVM and configuring network interfaces .................................................................................166
Configure the newly added network interface controllers for SVMs........................................................... 166
Add a permanent static route for replication external networks .................................................................. 166
Install SDR RPMs on the SDS nodes (SVMs)...........................................................................................................167
Exit SDS maintenance mode......................................................................................................................................... 167
Verify communication between the source and destination................................................................................. 167
Add journal capacity percentage..................................................................................................................................168
Calculate journal capacity to allocate................................................................................................................... 168
Add allocated journal capacity................................................................................................................................ 168
Adding the Storage Data Replicator to a PowerFlex appliance........................................................................... 169
Create the peer system between the source and destination site .................................................................... 169
Adding the peer system ................................................................................................................................................ 170
Create the replication consistency group.................................................................................................................. 170
Finding the current copy status.............................................................................................................................. 171
Modifying the recovery point objective.................................................................................................................171
Defining the network for replication in PowerFlex Manager................................................................................. 171
Adding an existing service to PowerFlex Manager..................................................................................................172

Chapter 13: Retrieving PowerFlex performance metrics............................................................. 176


Retrieving PowerFlex performance metrics using the PowerFlex GUI.............................................................. 176
Retrieving PowerFlex performance metrics using a PowerFlex version prior to 3.5...................................... 176

Chapter 14: Performing maintenance activities in a PowerFlex cluster....................................... 178


Maintenance modes........................................................................................................................................................ 178
Entering protected maintenance mode...................................................................................................................... 179
Exiting protected maintenance mode......................................................................................................................... 179

Chapter 15: Administering the CloudLink Center........................................................................ 180


Adding and managing CloudLink Center licenses.................................................................................................... 180
License CloudLink Center........................................................................................................................................ 180
Add the CloudLink Center license in PowerFlex Manager...............................................................................180
Delete expired or unused CloudLink Center licenses from PowerFlex Manager........................................ 181
Configure custom syslog message format........................................................................................................... 181
Registering KMIP on CloudLink Center.................................................................................................................181
Manage a self-encrypting drive (SED) from CloudLink Center........................................................................... 182
Manage a self-encrypting drive from the command line....................................................................................... 182
Release a self-encrypting drive.................................................................................................................................... 183
Release management of a self-encrypting drive from the command line......................................................... 184
Changing the CloudLink secadmin user password.................................................................................................. 184
Unlocking the CloudLink secadmin user.....................................................................................................................185
Setting CloudLink Vault passcodes............................................................................................................................. 185
Back up and restore CloudLink Center...................................................................................................................... 185
Viewing back up information...................................................................................................................................185
Changing the schedule for automatic backups.................................................................................................. 186

Generating a backup file manually......................................................................................................................... 186
Generating a backup key pair..................................................................................................................................187
Downloading the current backup file.................................................................................................................... 187
Restoring the CloudLink backup............................................................................................................................ 188

Chapter 16: Powering off and on the PowerFlex appliance cluster.............................................. 189
Powering off a PowerFlex appliance hyperconverged cluster............................................................................. 189
Powering on a PowerFlex appliance hyperconverged cluster.............................................................................. 190
Powering off PowerFlex appliance two-layer cluster..............................................................................................191
Powering on PowerFlex appliance two-layer cluster.............................................................................................. 192
Powering off PowerFlex compute-only nodes with Windows Server 2016 or 2019....................................... 193
Powering off PowerFlex compute-only nodes with Red Hat................................................................................193

Chapter 17: Ports and authentication protocols..........................................................................194


PowerFlex Manager ports and protocols...................................................................................................................194
PowerFlex ports and authentication........................................................................................................................... 194
VMware vSphere ports and protocols....................................................................................................................... 195
Red Hat Virtualization Manager and Red Hat Virtualization Host ports and protocols................................. 195
CloudLink Center ports and protocols........................................................................................................................195

Chapter 18: Additional documentation........................................................................................196

Revision history

Date             Document revision  Description of changes
July 2021        7.1                Updated the Upgrade PowerFlex Manager using backup and restore process.
June 2021        7.0                Added content for:
                                    ● Administering storage with asynchronous replication
                                    ● Remote replication on PowerFlex storage-only nodes
                                    ● Minimum VMware vCenter permissions required to support PowerFlex Manager
                                    ● VMware vCLS VM migration
                                    ● Enabling replication on existing PowerFlex hyperconverged nodes
                                    ● Dell PowerSwitch S5296F
                                    ● Upgrading VMware NSX-T Edge Gateway nodes
December 2020    6.1                Added content for:
                                    ● Upgrading VMware vSphere for patch releases
                                    Updated content for:
                                    ● Native asynchronous replication
November 2020    6.0                Added content for:
                                    ● Customer switch port examples
                                    ● Persistent checksum for data integrity
                                    ● SDC authentication
                                    ● Full and partial network automation
                                    Updated content for:
                                    ● CloudLink
September 2020   5.1                Updated content for:
                                    ● PowerFlex Gateway
June 2020        5.0                Added content for:
                                    ● Storage data replication (SDR)
                                    ● Cisco NX-OS upgrade to 9.x
                                    ● PowerFlex 3.5
                                    ● CloudLink 6.9
                                    ● Protected maintenance mode (PMM)
March 2020       4.0                Updated content for:
                                    ● CloudLink
                                    ● Windows compute-only nodes
                                    ● Dell EMC Networking
November 2019    3.0                Updated for:
                                    ● CloudLink support
                                    ● Windows Server OS support
                                    ● Changes to embedded operating systems
                                    Removed:
                                    ● OpenManage Enterprise tasks
September 2019   2.0                Updated and added new topics for the September release.
August 2019      1.0                Initial release.

1
Introduction
This guide provides procedures for administering the PowerFlex appliance.
It provides the following information:
● Administering the operating system, network, and storage
● Managing components with PowerFlex Manager
● Monitoring system health
● Monitoring and alerting using Secure Remote Services
● Configuring SNMP trap and syslog forwarding
● Backing up and restoring
● Managing PowerFlex appliance passwords
● Powering on and off
The dvswitch names are for example only and may not match the configured system. Do not change these names, or a data-unavailable or data-loss event may occur.
Depending on when the system was built, it uses an embedded operating system-based jump server or a Windows-based jump
server. The specific procedures in this guide describe using the Windows-based jump server. You can accomplish the same tasks
using the tools available for the embedded operating system-based jump server. If you are using a system with an embedded
operating system-based jump server, refer to Using an embedded operating system-based jump server for more details.
Dell EMC PowerFlex appliance was previously known as Dell EMC VxFlex appliance. Similarly, Dell EMC PowerFlex Manager
was previously known as Dell EMC VxFlex Manager, and Dell EMC PowerFlex was previously known as Dell EMC VxFlex OS.
References in the documentation will be updated over time.
PowerFlex Manager provides the management and orchestration functionality for PowerFlex appliance.
See the Glossary for terms, definitions, and acronyms.

2
Administering the network
Perform these procedures to administer the PowerFlex appliance network.

Using an embedded operating system-based jump server
Depending on when the system was built, it will use either an embedded operating system-based jump server or a Windows-based jump server. The tools available for each jump server accomplish the same tasks. The procedures in this guide use the Windows-based jump server. If you are using a system with an embedded operating system-based jump server, refer to this topic to determine which tools to use instead.
A Windows-based jump server configuration uses the following tools:
● WinSCP for secure copies
● PuTTY for SSH access
● Remote Desktop (RDP) for remote login
An embedded operating system-based jump server configuration uses the following tools:
● SCP for secure copies
● SSH for login through secure shell
● VNC for remote login
● FileZilla for secure FTP (interactive SCP is not supported)
● Browsers, for example Chrome and Firefox
The following table lists Windows-based tools and the equivalent embedded operating system-based tool location:

Windows-based tool                       Embedded operating system-based tool
WinSCP                                   SCP (from a terminal or console window)
D:\                                      /shares/
SSH (PuTTY)                              SSH (from a terminal or console window)
RDP                                      VNC
PowerShell (Windows command terminal)    bash (from a terminal or console window)

Jump server  
The PowerFlex appliance management environment may include a jump server used to complete routine maintenance and
troubleshooting. Remote access is provided using VNC (GUI) and SSH, which is always on. The jump server has an integrated
configuration for various file sharing services which can be enabled and disabled as needed. The enable and disable services
scripts are located on the desktop.
The VM installation is relatively minimal, but includes Xorg and KDE (a graphical desktop environment). A nonroot account (admin) is provided for use. The admin account has full administrator escalation privileges (sudo), which must be used to perform some tasks (the account password is required). All yum repos are disabled or non-existent, to prevent inadvertent or ad hoc updates from being applied.
NOTE: Most maintenance, management, and orchestration operations are still intended to be performed using PowerFlex
Manager.



Jump server tools
The jump server is a standard embedded operating system based on CentOS 7 installation running Xorg and KDE for a
desktop environment. Current versions of tftp-server, nfs-server, httpd (Apache), samba (CIFS), and vsftpd (FTP)
are installed for use, as needed. Desktop icons for starting and stopping each service have been prepared to make daemon
and firewall control relatively simple. Also installed are versions of Firefox and Google Chrome browsers,  command-line  SCP is
available to transfer files to and from the jump server.  

Jump server service scripts 


Each service has a bash script for starting and stopping the associated daemon. The use of sudo is built into each script. The scripts test whether the target daemon is (or is not) running, then perform the set tasks (start or stop the service, open or close firewall rules). After running successfully, the scripts sleep (persist) for 10 seconds on screen, then exit the running script shell.
The scripts are located in the ~admin/service-scripts directory; the desktop icons run these same scripts.
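The following is a minimal sketch of what a start script of this kind might look like, assuming systemd service management and firewalld; it is illustrative only, and the shipped scripts may differ:

#!/bin/bash
# Hypothetical start script for the NFS file sharing service.
# The real scripts in ~admin/service-scripts may differ in detail.
SERVICE=nfs-server
if systemctl is-active --quiet "$SERVICE"; then
    echo "$SERVICE is already running."
else
    # sudo is built into the script, as noted above.
    sudo systemctl start "$SERVICE"
    # Open the matching firewall rule while the service runs.
    sudo firewall-cmd --add-service=nfs
    echo "$SERVICE started and firewall rule added."
fi
# Persist on screen for 10 seconds, then exit the script shell.
sleep 10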

Jump server access


You can access the jump server in several ways:
● A network-based graphical login is provided using VNC (tigervnc-server); a vncviewer client is needed to access this service.
● OpenSSH provides command-line or text-based interface access. Any SSH client can access the VM using this method (see the example after this list).
● The VMware vSphere client can be used to access the VM using either a graphical or text-based console.
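For example, a command-line session can be opened from any SSH-capable workstation; the address shown is a placeholder:

ssh admin@192.0.2.10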

Virtual network computing (VNC)


tigervnc-server is installed and running as the administration user on ports 5901 and 5902. To access this network-based graphical login, download the vncviewer binary. Downloads for MacOSX, Linux, and Windows are available. Select the appropriate package for your environment.
Once installed, run the viewer binary and enter the hostname or the IP address of the VNC server, appending 5901 or 5902 to connect to the appropriate port. This stage of the authentication is completed using vncserver directly. Unless X509 certificates are configured and installed on both ends for identification purposes, the client reports that the connection is insecure. This alarm is related to server identity only, as the connection itself is encrypted (similar to Microsoft Remote Desktop).
A separate (encrypted) password protects the VNC, only the first eight characters of which are significant. The Xorg or KDE screen is configured to lock after 15 minutes of inactivity, and requires the account password in order to regain access.
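For example, from a workstation with the viewer installed, you might connect to display 1 (port 5901) as follows; the hostname is a placeholder:

vncviewer jump-server.example.local:1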

OpenSSH
An OpenSSH server is listening on the default port (22/tcp). Non-root connections are permitted, and any client capable of handling the cipher suites that are presented can connect without issue. SSH client selection and configuration are beyond the scope of this guide.

VMware web console


The VMware vSphere client is the integrated console connection method present within the vCSA. The running VM allows both admin and root access from the console. Use Ctrl + Alt + F2 to switch to a virtual text-based login screen.

File sharing services


NFS, CIFS, and HTTPD all use the same share or document root: /shares. This is a Logical Volume Manager (LVM) volume that is mounted as a separate disk device. FTP is restricted to user accounts, and the administrator has the shares directory as a bind mount underneath the user's home directory. Main features of the file sharing services are:



● FTP uses the user account password in order to grant access.
● CIFS relies on a separate smbpassword db.
● NFS and HTTP are not secured by a password (an NFS mount example follows this list).
● TFTP is UDP-based and not secure. The other end must allow for UDP packets on port 69 in order to retrieve any files.
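As an illustration, after the NFS service script has been started, a Linux client could mount the share as follows; the jump server address is a placeholder and mount options depend on the environment:

sudo mkdir -p /mnt/jumpshare
sudo mount -t nfs 192.0.2.10:/shares /mnt/jumpshare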

Security and administration


A nonroot user account (admin) is provided for regular use. The account has access to full sudo (root privilege) escalation, and a bind mount for the large secondary drive (/shares) where most data should be stored. If the password for the admin account is changed, it is recommended that you also change the vnc and smb passwords (as the admin user, run vncpasswd and/or smbpasswd to step through a password change cycle).
The server has SELinux enabled in enforcing mode, and only provides two publicly available service ports (22/tcp and 5901/tcp). Other services are only permitted through the firewall when their associated service scripts are run.

Jump server updates


The new embedded operating system (CentOS-based) jump server is an RCM/Intelligent Catalog object, and has patch releases associated with it. These patches are applied using the same process as for the embedded operating system SVMs. The patch clusters are part of an RCM payload, where one is present.

The yum update model has been disabled where possible (repositories are removed or disabled).

Install the embedded operating system-based iDRAC tools
Perform this procedure to install the iDRAC tools on an embedded operating system-based jump server.

Steps
1. Locate the embedded operating system-based iDRAC tools and installation instructions on the Dell Technologies Support site, where the latest Linux version is available for download.
2. Run the following command on the embedded operating system-based jump box to create a specific symlink to satisfy SSL
requirements:

sudo ln -s /usr/lib64/libssl.so.10 /usr/lib64/libssl.so

When the symlink is in place, RACADM tools will function as expected.
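As an illustration, once the tools are installed you might verify remote RACADM connectivity to a node's iDRAC with a command like the following; the IP address and credentials shown are placeholders:

racadm -r 192.0.2.20 -u root -p <iDRAC-password> getsysinfo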

Converting the Windows jump VM to the embedded operating system jump VM
Use this procedure to convert the Windows jump VM to the embedded operating system jump VM.

Steps
1. Obtain the updated embedded OS image from the IC software repository.
2. Deploy the embedded jump VM and assign a valid IP address with internet connectivity. A valid DNS entry must be defined.
The embedded OS jump VM will replace the existing Windows server.
3. Run df -h to verify that there is enough available free space on the /shares partition of the embedded jump VM to
download the RPM packages and create the ZIP file. At least 15 GB is recommended.



4. Run uname -a to verify the Linux kernel version by reviewing the output (a combined example of the verification commands in steps 3 through 5 follows this procedure).

5. Run cat /etc/centos-release to verify the embedded operating system version.
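For example, the verification commands in steps 3 through 5 can be run together as follows:

# Verify free space on /shares (at least 15 GB recommended)
df -h /shares
# Verify the Linux kernel version
uname -a
# Verify the embedded operating system release
cat /etc/centos-release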

Installing the offline repository


Use this task to install an offline repository.

Steps
1. Create a directory in the /shares volume called centos-RPM, type: sudo mkdir /shares/Centos-RPM.
2. Copy the repository update ZIP file to the /tmp directory of the embedded operating system VM using WinSCP or similar.
3. Extract the contents of the repository update ZIP file to the /shares/Centos-RPM directory, type: sudo unzip /tmp/
repofilename.zip -d /shares/Centos-RPM.
4. Create and modify a new repository file in the (/etc/yum.repos.d) directory, type: sudo vi /etc/yum.repos.d/
centos.rpm.repo. In this example, the file that is created is (/etc/yum.repos.d/centos.rpm.repo).
5. Clean the yum cache, type: sudo yum clean all.
6. Verify access to the new repository, type: sudo yum repolist.
7. Deploy the updates from the repository, type: sudo yum update. When prompted, answer y.
8. When the process is complete, reboot the system, type: sudo reboot.
9. Once the system reboot has completed, verify the kernel version, type: uname -a.
10. Verify the embedded operating system version, type: cat /etc/centos-release.
11. Remove the RPM files, type: sudo rm -f -r /shares/Centos-RPM.
12. Remove the repository index file, type: sudo rm /etc/yum.repos.d/centos.rpm.repo.
13. Clean yum cache, type sudo yum clean all.
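The repository file created in step 4 might contain entries similar to the following minimal sketch; the repository ID and name are arbitrary, and the baseurl assumes the RPMs were extracted to /shares/Centos-RPM as described above:

[centos-rpm-offline]
name=CentOS offline RPM repository
baseurl=file:///shares/Centos-RPM
enabled=1
# gpgcheck is disabled in this sketch; enable it if your offline bundle provides GPG keys
gpgcheck=0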

Verifying connectivity between Storage Data Server (SDS) and Storage Data Client (SDC)
Use this procedure to ping the Storage Data Server (SDS) from Storage Data Client (SDC).

Steps
1. Open an SSH session with a VMware ESXi host using PuTTY or a similar SSH client.
2. Log in to the host using root.
3. Type vmkping <ping command> to ping each SDC using the following commands:



Ping command Description

-s Specifies the packet size and the number of data bytes
sent. (The packet size is 8972 because the IP header is 20
bytes and the ICMP header is 8 bytes.)
-d Prohibits fragmentation.
-I Specifies the outgoing VMkernel interface.

4. Repeat from each VMware ESXi host using the same commands.

For example:
NOTE: The following command requires the vmk number to reference the port group. For a standard build, flex-
data3-<vlanid> is vmk2 and flex-data4-<vlanid> is vmk3.

[root@node7:~] vmkping -d -s 8972 -I vmk3 192.168.176.6


8980 bytes from 192.168.176.6: icmp_seq=0 ttl=64 time=0.191 ms

Verifying connectivity between Storage Data Server (SDS) and PowerFlex Gateway
Use this procedure to ping the SDS and PowerFlex Gateway from an SDS.

Steps
1. Open an SSH session with an SDS host using PuTTY or a similar SSH client.
2. Log in to the host using root.
3. Ping each SDS and the PowerFlex Gateway using a 9000-byte packet without fragmentation on the SDS-to-SDS data networks.
4. Repeat for each SDS host.
5. Repeat for the PowerFlex Gateway.
In the ping command example below:

Ping command Description


-s Specifies the packet size and the number of data bytes to
be sent. (The packet size is 8972 because the IP header is
20 bytes and the ICMP header is 8 bytes.)
-M do Prohibits fragmentation.

For example:

[root@node4 ~]# ping -s 8972 -M do 192.168.152.102


8980 bytes from 192.168.152.102: icmp_seq=1 ttl=64 time=0.299 ms



Checking the maximum transmission unit on all
switches and servers
Maximum transmission unit (MTU) is the largest physical packet size, measured in bytes, that a network can transmit. Any
messages larger than the MTU are divided into smaller packets before transmission.

Checking the maximum transmission unit on the access switch


Use this procedure to check the maximum transmission unit on either the Dell EMC PowerSwitch switch or the Cisco Nexus
switch.

Steps
1. From the switch CLI, log in to the switch you want to check.
2. Check each interface for its MTU configuration.

Switch type: Dell EMC PowerSwitch

For the Dell EMC S5048F PowerSwitch switch, type the following:
NOTE: port-channel 100 is used as an example in the following:

S5048F-45#show interfaces port-channel 100 | grep MTU
MTU 9216 bytes, IP MTU 9198 bytes

For the Dell EMC S5224F PowerSwitch switch, type the following:

S5224F#show interfaces port-channel 100 | grep MTU
MTU 9216 bytes, IP MTU 9198 bytes

For the Dell EMC S4148F PowerSwitch switch, type the following:

S4148F#show interfaces port-channel 100 | grep MTU
MTU 9216 bytes, IP MTU 9198 bytes

Switch type: Cisco Nexus

Cisco_Access-A# show interface port-channel 100 | grep MTU
MTU 9216 bytes, BW 1000000 Kbit, DLY 10 usec

Checking the maximum transmission unit on a VMkernel port


Use this procedure to check maximum transmission unit on a VMkernel port.

Steps
1. In VMware vSphere Client, navigate to the VMware ESXi host.
2. Click the Configure tab, and click Networking.
3. Select VMkernel adapters.
4. Select the VMkernel adapter from the table.



5. Click Edit.
6. Verify the MTU setting is set to 9000.
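Alternatively, the same check can be made from the VMware ESXi shell; the following lists each VMkernel adapter together with its configured MTU value:

[root@node7:~] esxcli network ip interface list | grep -E "Name:|MTU:"

Each vmk adapter that carries PowerFlex data traffic should report MTU: 9000.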

Checking the maximum transmission unit on all port groups or ports
Use this procedure for checking the maximum transmission unit on all port groups or ports.

Steps
1. Log in to VMware vCenter web interface.
2. On the menu, click Home.
3. From the navigation pane, click Networking.
4. Select the virtual switch that you want to check.
5. Click the Configure tab.
6. In the navigation pane, select Settings > Properties.
7. In the Properties window, under Advanced, verify the MTU setting is set to 9000.

Adding a network to the deployed service using PowerFlex Manager
Use this procedure to add a network to the deployed service using PowerFlex Manager.

Steps
1. Log in to PowerFlex Manager.
2. From the menu, click Services.
3. Select a service for which you want to add a network and in the right pane, click View Details.
4. Under Resource Action, from the Add Resources list, click Add Network.
The Add Network window is displayed. All used resources and networks are displayed under Resource Name and
Networks.
5. From the Available Networks list, select the network, and click Add.
The selected network is displayed under Network Name. You can define a new network by clicking Define a new network
and selecting the check box to configure Static IP Ranges.

The following is an example network definition:

Name: app10
Description: Customer Applications_1
Network Type: General Purpose LAN
VLAN ID: 10
Gateway: 192.168.10.254
Primary DNS: 192.168.200.101
Starting IP address: 192.168.10.1
Ending IP address: 192.168.10.10
6. Select Port Group from the Select Port Group list.


NOTE: Select New Port group to create a port group for the newly defined network.

7. Click Save.
It may take about 15 minutes for PowerFlex Manager to complete the actions of adding the VLAN to the access switches
and the VMware ESXi cluster.
NOTE: PowerFlex Manager supports scale up to 400 general-purpose LAN networks.



Add a network to a service
You can add an available network to a service or choose to define a new network for a configuration that was initially deployed
outside of PowerFlex Manager. You cannot remove an added network using PowerFlex Manager.

About this task


Before you can add a network to a service, define the network.
You can add a static route to allow nodes to communicate across different networks. The static route can also be used to
support replication in storage-only and hyperconverged services.

Prerequisites
Ensure that a new VLAN is created on any switches that need access to that VLAN and is added to any management cluster
server-facing ports. The VLAN is then added to any northbound trunks to other switches that it must communicate with.

Steps
1. Log in to PowerFlex Manager.
2. On the menu bar, click Services.
3. Select a service for which you want to add a network and in the right pane, click View Details.
4. Under Resource Action, from the Add Resources list, click Add Network.
The Add Network window is displayed. All used resources and networks are displayed under Resource Name and
Networks.
5. Click Add Additional Network to add an additional network:
a. From the Available Networks list, select the network, and click Add.
The selected network is displayed under Network Name. You can define a new network by clicking Define a New
Network.
b. Select Port Group from the Select Port Group list.
c. Click Save.
6. Click Add Additional Static Route to add an additional static route:
a. Click Add New Static Route.
b. Select a Source Network.
The source network must be a PowerFlex data network or a replication network.

c. Select a Destination Network.


The destination network must be a PowerFlex data network or a replication network.

d. Type the IP address for the Gateway.


e. Click Save.
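After PowerFlex Manager applies the static route, you can optionally confirm it from a Linux-based node (for example, a PowerFlex storage-only node); the destination address below is a placeholder for an IP address on the remote data or replication network:

ip route show
ping -c 4 <destination-network-ip>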

Add a VLAN to an access switch connected to a PowerFlex appliance cluster
Configure an access switch that is connected to a PowerFlex appliance cluster.

About this task


The following commands are an example of how to add VLAN 10 to the uplink port channel 100. You must run these
commands on both access switches.

Steps
On the command prompt, type the following:



Switch type Command

Dell EMC PowerSwitch
Dell#configure
Dell(conf)#interface vlan <vlan id>
Dell(config-vlan)#no shutdown
Dell(config-vlan)#end
Dell#copy running-config startup-config

Cisco Nexus
Cisco_Access-A# configure
Cisco_Access-A(config)# vlan 10
Cisco_Access-A(config-vlan)# exit
Cisco_Access-A(config)# interface port-channel 100
Cisco_Access-A(config-if)# switchport trunk allowed vlan add 10
Cisco_Access-A(config-if)# end
Cisco_Access-A# copy running-config startup-config

Verifying a VLAN configuration


Verify the VLAN as part of adding a VLAN to the production network.

Steps
1. Create a VM on PowerFlex compute-only node or PowerFlex hyperconverged node.
2. Assign the newly created distributed port group to the VM.
3. Configure an IP address, mask, and gateway on the VM that corresponds to the new VLAN.
4. Ping the gateway from the VM.
5. After you have successfully pinged the gateway from the VM, delete the VM.
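For example, using the app10 network defined earlier (gateway 192.168.10.254), the ping in step 4 might look like the following from the VM:

ping -c 4 192.168.10.254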

Gather logs from the network switch for troubleshooting
Generate logs to troubleshoot your network switch.

Steps
1. Open an SSH session with the Cisco Nexus switch using PuTTY or a similar SSH client.
2. Log in with admin or other credentials that have sufficient privileges.
3. Enable session logging. If using PuTTY, right-click the menu bar and go to Change Settings > Session > Logging.
4. Select All session output.
5. Type a log file name and click Apply.
6. In the switch CLI, type the following:

Switch type Commands

Dell EMC PowerSwitch
show tech-support
show process cpu

Cisco Nexus
show tech-support details | no-more
show tech-support vpc | no-more
show processes cpu history | no-more



Customer switch port configuration examples
If PowerFlex Manager is deploying a template with partial network automation, you must configure the access switches manually
before deployment. This section explains how to configure link aggregation control protocol (LACP) if you are using your own
access switches.
Each brand of switch is configured differently; see the vendor documentation for the correct commands. Due to
the number of switch vendors available, it is not possible to provide configurations for each switch. However, configuration
examples are provided for the following switches:
● Cisco Nexus 93180YC-EX
● Dell EMC PowerSwitch S5248
● Dell EMC PowerSwitch S5048
● Dell EMC PowerSwitch S5296F
● Arista 7280-C
The following VLANs within the examples are represented as follows:
● flex-node-mgmt-<vlanid>
● flex-vmotion-<vlanid>
● flex-stor-mgmt-<vlanid>
● flex-data1-<vlanid>
● flex-data2-<vlanid>
● flex-data3-<vlanid>
● flex-data4-<vlanid>
● 1000 - flex-prod

Cisco Nexus 93180YC-EX switch configuration example


The following example pertains to PowerFlex hyperconverged node or VMware ESXi PowerFlex compute-only node
connectivity. Port examples for management are as follows:

Switch A Switch B
Port
channel interface port-channel37 interface port-channel37
switchport switchport
switchport mode trunk switchport mode trunk
switchport trunk allowed vlan switchport trunk allowed vlan
104,106,150 104,106,150
spanning-tree port type edge trunk spanning-tree port type edge trunk
spanning-tree bpduguard enable spanning-tree bpduguard enable
spanning-tree guard root spanning-tree guard root
speed 25000 speed 25000
mtu 9216 mtu 9216
no lacp suspend-individual no lacp suspend-individual
vpc 37 vpc 37

Ethernet
port interface Ethernet1/15 interface Ethernet1/15
switchport switchport
switchport mode trunk switchport mode trunk
switchport trunk allowed vlan switchport trunk allowed vlan
104,106,150 104,106,150
spanning-tree port type edge trunk spanning-tree port type edge trunk
spanning-tree bpduguard enable spanning-tree bpduguard enable
spanning-tree guard root spanning-tree guard root
speed 25000 speed 25000
mtu 9216 mtu 9216
channel-group 37 mode active channel-group 37 mode active
no shutdown no shutdown

The following table provides port examples for PowerFlex:



Switch A and Switch B use an identical configuration.

Port channel:

interface port-channel38
switchport
switchport mode trunk
switchport trunk allowed vlan 151,152,153,154
spanning-tree port type edge trunk
spanning-tree bpduguard enable
spanning-tree guard root
speed 25000
mtu 9216
no lacp suspend-individual
vpc 38

Ethernet port:

interface Ethernet1/16
switchport
switchport mode trunk
switchport trunk allowed vlan 151,152,153,154
spanning-tree port type edge trunk
spanning-tree bpduguard enable
spanning-tree guard root
speed 25000
mtu 9216
channel-group 38 mode active
no shutdown

The following example pertains to PowerFlex storage-only node connectivity.


NOTE: In a non-two layer deployment, the data1 network and data3 network are defined on port 1 along with PowerFlex
management. Port 2 will have the data2 network and data4 network.
Port examples for management are as follows:

Switch A and Switch B use an identical configuration.

Port channel:

interface port-channel37
switchport
switchport mode trunk
switchport trunk allowed vlan 150,151,153,1000
spanning-tree port type edge trunk
spanning-tree bpduguard enable
spanning-tree guard root
speed 25000
mtu 9216
no lacp suspend-individual
vpc 37

Ethernet port:

interface Ethernet1/15
switchport
switchport mode trunk
switchport trunk allowed vlan 150,151,153,1000
spanning-tree port type edge trunk
spanning-tree bpduguard enable
spanning-tree guard root
speed 25000
mtu 9216
channel-group 37 mode active
no shutdown

The following table provides port examples for PowerFlex:



Switch A and Switch B use an identical configuration.

Port channel:

interface port-channel38
switchport
switchport mode trunk
switchport trunk allowed vlan 152,154
spanning-tree port type edge trunk
spanning-tree bpduguard enable
spanning-tree guard root
speed 25000
mtu 9216
no lacp suspend-individual
vpc 38

Ethernet port:

interface Ethernet1/16
switchport
switchport mode trunk
switchport trunk allowed vlan 152,154
spanning-tree port type edge trunk
spanning-tree bpduguard enable
spanning-tree guard root
speed 25000
mtu 9216
channel-group 38 mode active
no shutdown

The following example pertains to PowerFlex storage-only node connectivity with separated SDS and SDC traffic.
NOTE: In a two layer deployment, the SDC only data1 (SDC traffic only) network and SDC only data2 (SDC traffic only)
network are defined on port 1 along with PowerFlex management. Port 2 will have SDS only data1 (SDS traffic only) and
SDS only data2 (SDS traffic only).
Port examples for management are as follows:

Switch A and Switch B use an identical configuration.

Port channel:

interface port-channel37
switchport
switchport mode trunk
switchport trunk allowed vlan 150,151,152,1000
spanning-tree port type edge trunk
spanning-tree bpduguard enable
spanning-tree guard root
speed 25000
mtu 9216
no lacp suspend-individual
vpc 37

Ethernet port:

interface Ethernet1/15
switchport
switchport mode trunk
switchport trunk allowed vlan 150,151,152,1000
spanning-tree port type edge trunk
spanning-tree bpduguard enable
spanning-tree guard root
speed 25000
mtu 9216
channel-group 37 mode active
no shutdown

The data networks for these ports are used for SDS traffic only. The following table provides port examples for PowerFlex:



Switch A and Switch B use an identical configuration.

Port channel:

interface port-channel38
switchport
switchport mode trunk
switchport trunk allowed vlan 153,154
spanning-tree port type edge trunk
spanning-tree bpduguard enable
spanning-tree guard root
speed 25000
mtu 9216
no lacp suspend-individual
vpc 38

Ethernet port:

interface Ethernet1/16
switchport
switchport mode trunk
switchport trunk allowed vlan 153,154
spanning-tree port type edge trunk
spanning-tree bpduguard enable
spanning-tree guard root
speed 25000
mtu 9216
channel-group 38 mode active
no shutdown

Dell PowerSwitch S5248 and Dell PowerSwitch S5296F-ON switch configuration example
The following example pertains to PowerFlex hyperconverged node or VMware ESXi PowerFlex compute-only node
connectivity. Port examples for management are as follows:

Switch A and Switch B use an identical configuration.

Port channel:

interface port-channel117
no shutdown
switchport mode trunk
switchport trunk allowed vlan 104,105,150
spanning-tree bpduguard enable
spanning-tree guard root
spanning-tree port type edge
lacp fallback enable
mtu 9216
vlt-port-channel 117

Ethernet port:

interface Ethernet1/1/7
switchport mode trunk
switchport trunk allowed vlan 104,105,150
spanning-tree bpduguard enable
spanning-tree guard root
spanning-tree port type edge
no switchport
mtu 9216
speed 25000
flowcontrol receive off
channel-group 117 mode active
no shutdown

The following table provides port examples for PowerFlex:



Switch A and Switch B use an identical configuration.

Port channel:

interface port-channel118
no shutdown
switchport mode trunk
switchport trunk allowed vlan 150,152,154
spanning-tree bpduguard enable
spanning-tree guard root
spanning-tree port type edge
lacp fallback enable
mtu 9216
vlt-port-channel 117

Ethernet port:

interface Ethernet1/1/8
switchport mode trunk
switchport trunk allowed vlan 150,152,154
spanning-tree bpduguard enable
spanning-tree guard root
spanning-tree port type edge
no switchport
mtu 9216
speed 25000
flowcontrol receive off
channel-group 118 mode active
no shutdown

The following example pertains to PowerFlex storage-only node connectivity.


NOTE: In a non-two layer deployment, the data1 and data3 networks are defined on port 1 along with PowerFlex management.
Port 2 will have the data2 network and data4 network.
Port examples for management are as follows:

Switch A and Switch B use an identical configuration.

Port channel:

interface port-channel117
no shutdown
switchport mode trunk
switchport trunk allowed vlan 150,151,152
spanning-tree bpduguard enable
spanning-tree guard root
spanning-tree port type edge
lacp fallback enable
mtu 9216
vlt-port-channel 117

Ethernet port:

interface Ethernet1/1/7
switchport mode trunk
switchport trunk allowed vlan 150,151,152
spanning-tree bpduguard enable
spanning-tree guard root
spanning-tree port type edge
no switchport
mtu 9216
speed 25000
flowcontrol receive off
channel-group 117 mode active
no shutdown

The following table provides port examples for PowerFlex:



Switch A and Switch B use an identical configuration.

Port channel:

interface port-channel118
no shutdown
switchport mode trunk
switchport trunk allowed vlan 150,153,154
spanning-tree bpduguard enable
spanning-tree guard root
spanning-tree port type edge
lacp fallback enable
mtu 9216
vlt-port-channel 117

Ethernet port:

interface Ethernet1/1/8
switchport mode trunk
switchport trunk allowed vlan 150,153,154
spanning-tree bpduguard enable
spanning-tree guard root
spanning-tree port type edge
no switchport
mtu 9216
speed 25000
flowcontrol receive off
channel-group 118 mode active
no shutdown

The following example pertains to PowerFlex storage-only node connectivity with separated SDS and SDC traffic.
NOTE: In a two layer deployment, the SDC only data1 (SDC traffic only) network and SDC only data2 (SDC traffic only)
network are defined on port 1 along with PowerFlex management. Port 2 will have SDS only data1 (SDS traffic only) and
SDS only data2 (SDS traffic only).
Port examples for management are as follows:

Switch A and Switch B use an identical configuration.

Port channel:

interface port-channel117
no shutdown
switchport mode trunk
switchport trunk allowed vlan 150,151,152
spanning-tree bpduguard enable
spanning-tree guard root
spanning-tree port type edge
lacp fallback enable
mtu 9216
vlt-port-channel 117

Ethernet port:

interface Ethernet1/1/7
switchport mode trunk
switchport trunk allowed vlan 150,151,152
spanning-tree bpduguard enable
spanning-tree guard root
spanning-tree port type edge
no switchport
mtu 9216
speed 25000
flowcontrol receive off
channel-group 117 mode active
no shutdown



The following table provides port examples for PowerFlex:

Switch A and Switch B use an identical configuration.

Port channel:

interface port-channel117
no shutdown
switchport mode trunk
switchport trunk allowed vlan 150,153,154
spanning-tree bpduguard enable
spanning-tree guard root
spanning-tree port type edge
lacp fallback enable
mtu 9216
vlt-port-channel 117

Ethernet port:

interface Ethernet1/1/7
switchport mode trunk
switchport trunk allowed vlan 150,153,154
spanning-tree bpduguard enable
spanning-tree guard root
spanning-tree port type edge
no switchport
mtu 9216
speed 25000
flowcontrol receive off
channel-group 117 mode active
no shutdown

Dell PowerSwitch S5048 switch configuration example


The following example pertains to PowerFlex hyperconverged node or VMware ESXi PowerFlex compute-only node
connectivity. Port examples for management are as follows:

Switch A and Switch B use an identical configuration.

Port channel:

interface Port-channel 37
no ip address
mtu 9216
portmode hybrid
switchport
spanning-tree mstp edge-port
spanning-tree rstp edge-port
spanning-tree 0 portfast
spanning-tree pvst edge-port
vlt-peer-lag port-channel 37
no shutdown

LACP:

lacp ungroup member-independent port-channel 37

Ethernet port:

interface twentyFiveGigE 1/35
no ip address
mtu 9216
port-channel-protocol LACP
port-channel 37 mode active
no shutdown

Add VLANs:

Interface vlan 105
tagged Port-channel 37
Interface vlan 106
tagged Port-channel 37
Interface vlan 150
tagged Port-channel 37
Interface vlan 1000
tagged Port-channel 37

The following table provides port examples for PowerFlex:

Switch A and Switch B use an identical configuration.

Port channel:

interface Port-channel 38
no ip address
mtu 9216
portmode hybrid
switchport
spanning-tree mstp edge-port
spanning-tree rstp edge-port
spanning-tree 0 portfast
spanning-tree pvst edge-port
vlt-peer-lag port-channel 38
no shutdown

LACP:

lacp ungroup member-independent port-channel 38

Ethernet port:

interface twentyFiveGigE 1/35
no ip address
mtu 9216
port-channel-protocol LACP
port-channel 38 mode active
no shutdown

Add VLANs:

Interface vlan 151
tagged Port-channel 38
Interface vlan 152
tagged Port-channel 38
Interface vlan 153
tagged Port-channel 38
Interface vlan 154
tagged Port-channel 38

The following example pertains to PowerFlex storage-only node connectivity.


NOTE: In a non-two layer deployment, the data1 and data3 networks are defined on port 1 along with PowerFlex
management. Port 2 will have the data2 network and data4 network.
Port examples for management are as follows:

Switch A and Switch B use an identical configuration.

Port channel:

interface Port-channel 37
no ip address
mtu 9216
portmode hybrid
switchport
spanning-tree mstp edge-port
spanning-tree rstp edge-port
spanning-tree 0 portfast
spanning-tree pvst edge-port
vlt-peer-lag port-channel 37
no shutdown

LACP:

lacp ungroup member-independent port-channel 37

Ethernet port:

interface twentyFiveGigE 1/35
no ip address
mtu 9216
port-channel-protocol LACP
port-channel 37 mode active
no shutdown

Add VLANs:

Interface vlan 150
tagged Port-channel 37
Interface vlan 151
tagged Port-channel 37
Interface vlan 153
tagged Port-channel 37
Interface vlan 1000
tagged Port-channel 37

The following table provides port examples for PowerFlex:

Switch A and Switch B use an identical configuration.

Port channel:

interface Port-channel 37
no ip address
mtu 9216
portmode hybrid
switchport
spanning-tree mstp edge-port
spanning-tree rstp edge-port
spanning-tree 0 portfast
spanning-tree pvst edge-port
vlt-peer-lag port-channel 37
no shutdown

LACP:

lacp ungroup member-independent port-channel 37

Ethernet port:

interface twentyFiveGigE 1/35
no ip address
mtu 9216
port-channel-protocol LACP
port-channel 37 mode active
no shutdown

Add VLANs:

Interface vlan 152
tagged Port-channel 37
Interface vlan 154
tagged Port-channel 37

The following example pertains to PowerFlex storage-only node connectivity with separated SDS and SDC traffic.

NOTE: In a two layer deployment, the SDC only data1 (SDC traffic only) network and SDC only data2 (SDC traffic only)
network are defined on port 1 along with PowerFlex management. Port 2 will have SDS only data1 (SDS traffic only) and
SDS only data2 (SDS traffic only).
Port examples for management are as follows:

Switch A and Switch B use an identical configuration.

Port channel:

interface Port-channel 37
no ip address
mtu 9216
portmode hybrid
switchport
spanning-tree mstp edge-port
spanning-tree rstp edge-port
spanning-tree 0 portfast
spanning-tree pvst edge-port
vlt-peer-lag port-channel 37
no shutdown

LACP:

lacp ungroup member-independent port-channel 37

Ethernet port:

interface twentyFiveGigE 1/35
no ip address
mtu 9216
port-channel-protocol LACP
port-channel 37 mode active
no shutdown

Add VLANs:

Interface vlan 150
tagged Port-channel 37
Interface vlan 151
tagged Port-channel 37
Interface vlan 152
tagged Port-channel 37
Interface vlan 1000
tagged Port-channel 37

The following table provides port examples for PowerFlex. The data networks for these ports are used for SDS traffic only.

Switch A and Switch B use an identical configuration.

Port channel:

interface Port-channel 37
no ip address
mtu 9216
portmode hybrid
switchport
spanning-tree mstp edge-port
spanning-tree rstp edge-port
spanning-tree 0 portfast
spanning-tree pvst edge-port
vlt-peer-lag port-channel 37
no shutdown

LACP:

lacp ungroup member-independent port-channel 37

Ethernet port:

interface twentyFiveGigE 1/35
no ip address
mtu 9216
port-channel-protocol LACP
port-channel 37 mode active
no shutdown

Add VLANs:

Interface vlan 153
tagged Port-channel 37
Interface vlan 154
tagged Port-channel 37

Arista 7280-C switch configuration example


The following example pertains to PowerFlex hyperconverged node or VMware ESXi PowerFlex compute-only node
connectivity. Port examples for management are as follows:

Switch A and Switch B use an identical configuration.

Port channel:

interface Port-Channel104
switchport mode trunk
switchport trunk allowed vlan 105,150,1000
port-channel lacp fallback individual
port-channel lacp fallback timeout 5
mlag 104
spanning-tree portfast
no spanning-tree portfast auto
spanning-tree bpduguard enable

Ethernet port:

interface Ethernet35/4
switchport mode trunk
switchport trunk allowed vlan 105,150,1000
spanning-tree portfast
no spanning-tree portfast auto
spanning-tree bpduguard enable
mtu 9216
speed forced 25gfull
channel-group 104 mode active

The following table provides port examples for PowerFlex:

Switch A and Switch B use an identical configuration.

Port channel:

interface Port-Channel105
switchport mode trunk
switchport trunk allowed vlan 151,152,153,154
port-channel lacp fallback individual
port-channel lacp fallback timeout 5
mlag 104
spanning-tree portfast
no spanning-tree portfast auto
spanning-tree bpduguard enable

Ethernet port:

interface Ethernet35/5
switchport mode trunk
switchport trunk allowed vlan 151,152,153,154
spanning-tree portfast
no spanning-tree portfast auto
spanning-tree bpduguard enable
mtu 9216
speed forced 25gfull
channel-group 105 mode active

The following example pertains to PowerFlex storage-only node connectivity.


NOTE: In a non-two layer deployment, the data1 and data3 networks are defined on port 1 along with PowerFlex
management. Port 2 will have the data2 network and data4 network.
Port examples for management are as follows:

Switch A and Switch B use an identical configuration.

Port channel:

interface Port-Channel104
switchport mode trunk
switchport trunk allowed vlan 150,151,153,1000
port-channel lacp fallback individual
port-channel lacp fallback timeout 5
mlag 104
spanning-tree portfast
no spanning-tree portfast auto
spanning-tree bpduguard enable

Ethernet port:

interface Ethernet35/4
switchport mode trunk
switchport trunk allowed vlan 150,151,153,1000
spanning-tree portfast
no spanning-tree portfast auto
spanning-tree bpduguard enable
mtu 9216
speed forced 25gfull
channel-group 104 mode active

The following table provides port examples for PowerFlex:

Switch A and Switch B use an identical configuration.

Port channel:

interface Port-Channel105
switchport mode trunk
switchport trunk allowed vlan 152,154
port-channel lacp fallback individual
port-channel lacp fallback timeout 5
mlag 104
spanning-tree portfast
no spanning-tree portfast auto
spanning-tree bpduguard enable

Ethernet port:

interface Ethernet35/5
switchport mode trunk
switchport trunk allowed vlan 152,154
spanning-tree portfast
no spanning-tree portfast auto
spanning-tree bpduguard enable
mtu 9216
speed forced 25gfull
channel-group 105 mode active

The following example pertains to PowerFlex storage-only node connectivity with separated SDS and SDC traffic.

NOTE: In a two layer deployment, the SDC only data1 network and the SDC only data2 network are defined on port 1
along with PowerFlex management. Port 2 will have the SDS only data1 and SDS only data2 network.
Port examples for management are as follows:

Switch A and Switch B use an identical configuration.

Port channel:

interface Port-Channel104
switchport mode trunk
switchport trunk allowed vlan 150,151,152,1000
port-channel lacp fallback individual
port-channel lacp fallback timeout 5
mlag 104
spanning-tree portfast
no spanning-tree portfast auto
spanning-tree bpduguard enable

Ethernet port:

interface Ethernet35/4
switchport mode trunk
switchport trunk allowed vlan 150,151,152,1000
spanning-tree portfast
no spanning-tree portfast auto
spanning-tree bpduguard enable
mtu 9216
speed forced 25gfull
channel-group 104 mode active

The following table provides port examples for PowerFlex. The data networks for these ports are used for SDS traffic only.

Switch A and Switch B use an identical configuration.

Port channel:

interface Port-Channel105
switchport mode trunk
switchport trunk allowed vlan 153,154
port-channel lacp fallback individual
port-channel lacp fallback timeout 5
mlag 104
spanning-tree portfast
no spanning-tree portfast auto
spanning-tree bpduguard enable

Ethernet port:

interface Ethernet35/5
switchport mode trunk
switchport trunk allowed vlan 153,154
spanning-tree portfast
no spanning-tree portfast auto
spanning-tree bpduguard enable
mtu 9216
speed forced 25gfull
channel-group 105 mode active

Upgrading Dell EMC PowerSwitch switches using ONIE
Retrieve the needed binaries, release notes, and more information from https://www.force10networks.com/CSPortal20/Software/SSeriesDownloads.aspx (login is required).



3
Administering the storage
Perform the following procedures to administer the PowerFlex appliance storage.
Observe the following considerations when administering the storage:
● Using PowerFlex Manager to enter a node into maintenance mode ensures that no more than one host is in maintenance
mode at any given time.
● If you make manual changes outside of PowerFlex Manager, (for example, using PowerFlex GUI or scli), you might need to
perform some steps within PowerFlex Manager to ensure that the external changes are reflected within the user interface
and the environment is kept in a healthy state. See Managing external changes in the PowerFlex Manager online help.

Determining and switching the PowerFlex Metadata Manager
Use this procedure to switch the PowerFlex Metadata Manager.

Steps
1. Log in to PowerFlex Manager to determine the primary MDM.
2. To view the details of a service, select the component. Scroll down on the Service Details page; the following information is
displayed based on the resource types in the service:

Section Description
Physical Nodes View the following information about the nodes that are part of the service:
● Health
● Asset/Service Tag
● iDRAC Management IP
● Hostname
● PowerFlex Mode
The mode for each node is one of the following:
○ Hyper-converged includes both SDS and SDC components.
○ Storage Only includes only the SDS component.
○ Compute Only includes only the SDC component.
● Associated IPs
● MDM Role
The MDM role is the metadata manager role. The MDM role applies only to those
nodes that are part of a PowerFlex cluster. The MDM role is one of the following:
○ Primary: The MDM in the cluster that controls the SDSs and SDCs. The
primary MDM contains and updates the MDM repository, the database that
stores the SDS configuration, and how data is distributed between the SDSs.
This repository is constantly replicated to the secondary MDMs, so they can
take over with no delay.
Every PowerFlex cluster has one primary MDM.
○ Secondary: An MDM in the cluster that is ready to take over the primary MDM
role if necessary.
○ Tie Breaker: An MDM whose sole role is to help determine which MDM is the
primary.
○ Standby MDM: A standby MDM can be called on to assume the position of a
manager MDM when it is promoted to be a cluster member.



Section Description
○ Standby Tie Breaker: A standby node that is prepared to take over as a
tiebreaker.
● Fault Set: A logical group of SDSs within a protection domain that defines by the
way it is grouped where the copies of data exist.

3. Access the primary MDM:


a. In a hyperconverged deployment, use SSH to connect to the SVM that is acting as primary MDM.
b. In a two-layer deployment, connect to the PowerFlex storage-only node that is acting as primary MDM using SSH.
4. From the PowerFlex CLI, type the following (see the example session after these steps):
a. Type scli --login --username admin --password MDM_password to connect to the source node.
b. Type scli --query_cluster to verify the primary MDM.
c. Type scli --switch_mdm_ownership to switch the primary MDM to the secondary MDM.
d. Type scli --query_cluster to reverify the primary MDM.
5. Connect to the new SVM that is acting as primary MDM using SSH.
a. Type scli --query_all_sds and verify that all servers are connected.
b. Type scli --query_all_sdc and verify that all servers are connected.
6. Run inventory on the PowerFlex Gateway to update PowerFlex Manager with the location of the new primary MDM.
a. On the menu bar, click Resources.
b. On the Resources page, click the All Resources tab.
c. From the list of resources, click a resource, and in the Details pane, click Run Inventory.
The resource state changes to Pending. When the inventory is complete, the resource state changes to Available. See
PowerFlex Manager logs to view the start time and end time of the resource inventory operation.
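The following is a minimal example of the scli session from step 4. The MDM password and target MDM IP address are placeholders, and depending on the PowerFlex version, --switch_mdm_ownership may require the target MDM to be identified explicitly (for example, with --new_master_mdm_ip):

scli --login --username admin --password <MDM_password>
scli --query_cluster
# Verify the target MDM option against your scli version before running:
scli --switch_mdm_ownership --new_master_mdm_ip <secondary_mdm_ip>
scli --query_cluster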

Update resource inventory


Use this procedure to run inventory to incorporate external changes that are made to resource data outside of PowerFlex
Manager.

About this task


NOTE: Run inventory to incorporate external changes that are made to resource data outside of PowerFlex Manager. After
running the inventory to incorporate these changes, you can update the details on any service that must include the new
resource data.
Administrators and standard users can run the inventory on any resources. Standard users can run the inventory only on
resources that are part of a node pool for which they have permission.

Steps
1. On the menu bar, click Resources.
2. On the Resources page, click the All Resources tab.
3. From the list of resources, select the check box next to the resources that you want to inventory.
4. From the Details pane, click Run Inventory.

Next steps
See the PowerFlex Manager logs, go to Settings > Logs to view the start time and end time of the resource inventory
operation.



Add volumes to the service
Use this task to add volumes to the service.

About this task


After the service is deployed, if volumes were not created for the service, add two volumes for a fully functional cluster.
In PowerFlex Manager, when a PowerFlex hyperconverged node deployment is complete, PowerFlex Manager automatically
creates two 16 GB thin-provisioned volumes named powerflex-service-vol-1 and powerflex-service-vol-2.
For a PowerFlex storage-only node deployment, the service is incomplete; follow the steps below to add the volumes.
For a PowerFlex compute-only node deployment, the service is in lifecycle mode as there is no information on the protection
domain (PD) and storage. The vCLS VMs must be moved using the migration wizard.
If you are using PowerFlex Manager 3.6 or earlier, volumes are added to a service manually. Verify that the machines
are in a connected state.

Steps
1. Log in to PowerFlex Manager.
2. On the Services page, click the Add Resources button and choose Add Volumes.
3. When PowerFlex Manager displays the Add Volume wizard, click Add Existing Volumes or Create New Volumes.
NOTE: The Add Existing Volumes option is only available for a PowerFlex hyperconverged node service.

4. If you select Add Existing Volumes, select the Volume and provide the Datastore Name Template from Add Existing
Volumes page.
5. If you are creating a new volume for a hyperconverged service, provide the following information:
a. Click Add New Volume.
b. In the Volume Name field, select Create New Volume to create a new volume now, or select Auto generate name
when you create multiple volumes.
c. In the New Volume Name field, type the volume name, if you are creating a new volume.
d. In the Datastore Name field, select Create New Datastore to create a new datastore, or select an existing datastore.
If you choose a volume that is mapped to a datastore that was created previously in another hyperconverged or
compute-only service, you need to select the same datastore that was associated with the volume in the other service.
e. In the New Datastore Name field, type the datastore name, if you are creating a new datastore.
f. In the Storage Pool drop-down, choose the storage pool where the volume will reside.
g. Select the Enable Compression check box to take advantage of the PowerFlex NVDIMM compression feature.
h. In the Volume Size (GB) field, select the size in GB. The minimum size is 8 GB and the value you specify must be
divisible by eight.
i. In the Volume Type field, select thick or thin.
A thick volume provides a larger amount of storage in advance, whereas a thin volume provides on-demand storage and
faster setup and startup times.

j. In the New Volume Name field, if you select Auto Generate name, complete the following:

Field Description
Volume Name Template Modify the template based on your volume naming
convention.
How Many Volumes Enter the number of volumes to be created.
Datastore Name Template Modify the template based on your datastore naming
convention.
Storage Pool Choose the storage pool where the volume will reside.
Volume Size (GB) Select the size in GB. The minimum size is 8 GB and the
value you specify must be divisible by eight.
Volume Type Select Thick or Thin.



k. Click Next > Finish.
If you are creating a new volume for a storage-only service, provide the following information:

a. In the Volume Name field, select Create New Volume to create a new volume now.
b. In the New Volume Name field, type the volume name.
c. In the Storage Pool drop-down, choose the storage pool where the volume will reside.
d. Select the Enable Compression check box to take advantage of the PowerFlex NVDIMM compression feature.
e. In the Volume Size (GB) field, select the size in GB. The minimum size is 8 GB and the value you specify must be
divisible by eight.
f. In the Volume Type field, select thick or thin.
A thick volume provides a larger amount of storage in advance, whereas a thin volume provides on-demand storage and
faster setup and startup times.
If you enable compression for the volume, thin is the only option available for Volume Type.

g. In the New Volume Name field, if you select Auto Generate name, complete the following:

Field Description
Volume Name Template Modify the template based on your volume naming
convention.
How Many Volumes Enter the number of volumes to be created.
Datastore Name Template Modify the template based on your datastore naming
convention.
Storage Pool Choose the storage pool where the volume will reside.
Volume Size (GB) Select the size in GB. The minimum size is 8 GB and the
value you specify must be divisible by eight.
Volume Type Select Thick or Thin.

h. Click Next > Finish.


If you are creating a new volume for a compute-only service, provide the following information:

a. In the Volume Name field, select an existing volume. For a compute-only service, you can only select an existing volume
that has not yet been mapped.
b. In the Datastore Name field, select Create New Datastore to create a new datastore, or select an existing datastore.
The Datastore Name field is only available for a hyperconverged or compute-only service, as it applies only to services
with ESXi. If the volume was originally created in a storage-only service, you must select Create New Datastore to
create a new datastore. Alternatively, if the volume was originally created in a hyperconverged service, you must select
the datastore that was already mapped to the selected volume in the other service.
c. In the New Datastore Name field, type the datastore name, if you are creating a new datastore.
6. Optionally, click Add volume again to add another volume. Then, provide the required information for the volume.
7. Click Save.
The service moves to the In Progress state and the new volume icons appear on the Service Details page. After the
deployment completes successfully, the new volumes are displayed and indicated by a check mark in the Storage list on the
Service Details page. The PowerFlex 3.0.1.2 and older GUI shows the new volumes under the storage pool. In PowerFlex
3.5, new volumes are under Configuration > Volumes. For a storage-only service, the volumes are created, but not
mapped. For a compute-only or hyperconverged service, the volumes are mapped to SDCs. In the vSphere client, you can
see the volumes in the storage section and also see the hosts that are mapped to the volumes, once the mappings are in
place.



Adding a PowerFlex appliance node to an existing
cluster
Use this procedure to add a node to an existing cluster.

Steps
1. Connect the new PowerFlex appliance node's network interface cards (NICs) to the access switches and management switch
exactly like the existing nodes.
2. Ensure that the newly connected switch ports are not shut down.
3. Set the IP address of the iDRAC management port, username, password, and SNMP settings to what is expected by
PowerFlex Manager.
4. Log in to PowerFlex Manager.
5. In the Services page, click Add Resources and click Add Nodes.
6. In the Duplicate Node wizard:
a. From the Resource to Duplicate list, select a node.
Select a node that is of the same type as the other nodes within the service.
b. In the Number of Instances box, enter the number of nodes instances that you want to add to the service.
The number of instances is fixed for this action.
c. Click Next.
d. Under PowerFlex Settings, specify the PowerFlex Storage Pool Spare Capacity setting by choosing one of the
following options:
i. Recommended Spare Capacity <n>% sets the spare capacity to 1 divided by the current number of SDSs in the
protection domain, plus the number of nodes that you want to duplicate. For example, if you have three SDSs and you
want to add one more node instance, the recommended spare capacity is set to 25 percent, based on the formula
1/4.
ii. Current Spare Capacity <n>% sets the spare capacity to 1 divided by the current number of SDSs in the protection
domain. For example, if you currently have three Storage Data Servers (SDSs) in the protection domain, the current
spare capacity is set to 34 percent, based on the formula 1/3, rounded up.
e. Under OS Settings, set the Host Name Selection to Auto-Generate, Specify at Deployment Time, or Reverse
DNS Lookup.
f. If you choose Specify at Deployment, provide a name for the host in the Host Name field. If you choose Auto-
Generate, specify a template for the name in the Host Name Template field.
For an existing service that was not deployed by PowerFlex Manager, the Host Name Selection option is automatically
set to Specify at Deployment Time and you must type the hostname.
g. If you are adding a node to a hyperconverged service, specify the Host Name Selection under SVM OS Settings and
provide details about the hostname, as you did for the OS Settings.
h. In the IP Source box, provide an IP address. For an existing service that was not deployed with PowerFlex Manager,
the default choice is User Entered IP and the IP settings for each network default to Manual Entry. However, you can
change the setting to PowerFlex Manager Selected IP.
Under Hardware Settings, the Target Boot Device option is automatically set to Local Flash Storage for Dell EMC
PowerFlex for an existing hyperconverged or compute-only service that was not deployed by PowerFlex Manager.
i. Under Hardware Settings, in the Node Source box, select Node Pool or Manual Entry.
For an existing service not deployed by PowerFlex Manager, the node source defaults to Manual Entry, but you can
change it to Node Pool.
j. In the Node Pool box, select the node pool. Alternatively, if you chose Manual Entry, select the specific node in the
Choose Node box.
You can view all user-defined node pools and the global pool. Standard users can see only the pools for which they have
permission.
For an existing service not deployed by PowerFlex Manager, the Node Pool defaults to Global.



k. Click Next.
l. Review the Summary page and click Finish.
If the node you are adding has a different type of disk than the base deployment, PowerFlex Manager displays a banner
at the top of the Summary page to inform you of the different disk types. You can still go to the node expansion.
However, your service may have suboptimal performance.

Removing a PowerFlex node for maintenance


Use this procedure to remove a PowerFlex node for maintenance.

Steps
1. Log in to PowerFlex Manager.
2. From the menu, click Services.
3. On the Services page, select a service and click View Details.
4. Click Enter Service Mode on the Service Details page.
5. Select one or more nodes on the Node Lists page and click Next.
You can only put multiple nodes in service mode simultaneously if all the nodes are in the same fault set.
6. Select one of the following options:
● Instant Maintenance Mode enables you to perform short-term maintenance that lasts less than 30 minutes.
● Protected Maintenance Mode enables you to perform long-term maintenance that lasts more than 30 minutes.
● Evacuate Node from PowerFlex enables you to perform long-term maintenance that lasts more than 30 minutes.
NOTE: Evacuate node is only available for PowerFlex versions prior to 3.5.
7. Click Enter Service Mode.
PowerFlex Manager displays a yellow warning banner at the top of the service page. The Service Mode icon is displayed for
the Overall Service Health and for the Resource Health for the selected node.
8. When you are ready to leave service mode, click Service Actions > Exit Service Mode.

Entering and exiting service mode


PowerFlex Manager enables you to put a node in service mode when you must perform maintenance operations on the node.
When you put a node in service mode, you can specify whether you are performing short-term maintenance or long-term
maintenance work. The option that you use for long-term maintenance depends on the PowerFlex version you are using.

Prerequisites
Before evacuating a node for long-term maintenance work, ensure that you have at least four nodes in the cluster. Also, ensure
that you have sufficient storage space on the remaining nodes to evacuate the data from the node that is placed in service
mode. If you are using protected maintenance mode (PowerFlex 3.5), the sum of the spare capacity and the free capacity must
be greater than the size of the node being put in protected maintenance mode.

About this task


PowerFlex Manager detects when a node is in VMware ESXi or PowerFlex maintenance mode. It automatically places the node
in service mode and also ensures that the service itself goes into service mode.
If DAS Cache is installed on a node, or if the node has a VMware NSX-T or NSX-V configuration, PowerFlex Manager does
not enable you to enter service mode. PowerFlex Manager also does not enable you to enter service mode if the PowerFlex
Gateway used in the service is being updated on the Resources page.

Steps
1. On the menu bar, click Services.
2. On the Services page, select a service and click View Details in the right pane.
3. Click Enter Service Mode under Service Actions.
4. Select one or more nodes on the Node Lists page and click Next.



You can only put multiple nodes in service mode simultaneously if all the nodes are in the same fault set.

5. Specify the type of maintenance you want to perform by selecting one of the following options:
● Instant Maintenance Mode enables you to perform short-term maintenance that lasts less than 30 minutes. PowerFlex
Manager does not migrate the data.
● Protected Maintenance Mode enables you to perform maintenance that requires longer than 30 minutes in a safe
and protected manner. When you use protected maintenance mode, PowerFlex makes a temporary copy of the data
so that the cluster is fully protected from data loss. Protected maintenance mode applies only to hyperconverged and
storage-only services.
● Evacuate Node from PowerFlex (earlier versions of PowerFlex) enables you to perform long-term maintenance that
lasts more than 30 minutes. PowerFlex Manager migrates the data to other nodes in the cluster. It takes longer to
evacuate a node, but it is safer because there is no risk of a reboot causing data to be unavailable. Evacuation mode
applies only to hyperconverged and storage-only services.
6. Click Finish.
PowerFlex Manager displays a yellow warning banner at the top of the service page. The Service Mode icon displays for the
Deployment State and Overall Service Health, as well as for the Resource Health for the selected nodes.
7. When you are ready to leave service mode, click Service Actions > Exit Service Mode.

Rebooting a PowerFlex node


Use this procedure to reboot a PowerFlex appliance node.

About this task


PowerFlex Manager prevents two nodes from being in service mode simultaneously to protect data.

Steps
1. See Entering and exiting service mode to put the node in service mode.
2. After PowerFlex Manager shows that the node has entered service mode, turn off the PowerFlex node by using the iDRAC
interface to run a graceful shutdown.
3. Use the iDRAC interface to power on the PowerFlex node.
4. See Entering and exiting service mode to exit the node from service mode.
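If you prefer to script the power cycle rather than click through the iDRAC web interface, remote racadm is one option. The following is a sketch only: the address and credentials are placeholders, powerstatus and powerup are standard serveraction subcommands, and whether a graceful (rather than hard) shutdown subcommand is available depends on your iDRAC release, so check racadm help serveraction before relying on it:

racadm -r <iDRAC IP> -u <user> -p <password> serveraction powerstatus
racadm -r <iDRAC IP> -u <user> -p <password> serveraction powerup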

Resize a volume
After adding volumes to a service, you can resize the volumes.

About this task


For a storage-only service, you can increase the volume size. For a VMware ESXi compute-only service, you can increase the
size of the datastore that is associated with the volume. For a hyperconverged service, you can increase the size of both the
volume and the datastore.
If you resize a volume in a storage-only service, you must update the datastore size in the corresponding VMware ESXi
compute-only service. The datastore size cannot exceed the size of the volume.

Steps
1. On the Services page, click the volume component and choose Volume Actions > Resize.
2. Choose the volume that you want to resize:
a. Click Select Volume.
b. Enter a volume or datastore name search string in the Search Text box.
c. Optionally, apply additional search criteria by specifying values for the Size, Type, Compression, and Storage filters.
d. Click Search.
PowerFlex Manager updates the results to show only those volumes that satisfy the search criteria. If the search returns
more than 50 volumes, you must refine the search criteria to return only 50 volumes.
e. Select the row for the volume you want to resize.



f. Click Apply.
3. Update the sizing information:
If you are resizing a volume for a hyperconverged service, perform these steps:

a. In the New Volume Size (GB) field, specify a value that is greater than the current volume size.
b. Optionally, select Resize Datastore to increase the size of the datastore.
If you are resizing a volume for a storage-only service, enter a value in the New Volume Size (GB) field. Specify a value
that is greater than the current volume size. Values must be in multiples of eight, or an error occurs.

If you are resizing a volume for a compute-only service, review the Volume Size (GB) field to see if the volume size is
greater than Current Datastore Size (GB). If it is, PowerFlex Manager expands the datastore size.

4. Click Save.
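For reference, a volume can also be grown directly with scli; the following is a sketch only, with an illustrative volume name and size, and it assumes an authenticated scli session on the primary MDM. The new size must still be a multiple of 8 GB:

scli --modify_volume_capacity --volume_name vol01 --size_gb 16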

Unmapping a volume
Use this procedure to unmap an existing volume from the PowerFlex cluster using the PowerFlex GUI presentation server.

Steps
1. Log in to the PowerFlex GUI presentation server.
2. Click the Configuration tab.
3. Click Volumes.
4. Select Volume, and click Mapping.
5. Click Unmap.
6. Select the nodes from the shown list and click Unmap.

Unmapping a volume using a PowerFlex version prior to 3.5
Use this procedure to unmap an existing volume from the PowerFlex cluster, using a PowerFlex version prior to 3.5.

Steps
1. In the PowerFlex GUI, select Frontend > Volumes.
2. Expand the correct storage pool to see the mapped volumes.
3. Right-click the volume that you want to unmap and select Unmap.
4. Select the nodes from which you want to unmap this volume and click Unmap Volumes.
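The same unmap can be issued from the command line; a minimal scli sketch, assuming an authenticated session on the primary MDM and illustrative names (repeat once per SDC):

scli --unmap_volume_from_sdc --volume_name vol01 --sdc_ip 172.16.151.36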

Removing a volume
Use this procedure to remove a volume.

About this task


If using a PowerFlex version prior to 3.5, see Removing a volume using a PowerFlex version prior to 3.5.

Prerequisites
PowerFlex Manager does not currently support removing a volume.

Steps
1. Log in to the PowerFlex GUI and click the Configuration tab.
2. Click Volumes, select the volume that you want to unmap, and click Mapping > Unmap.



3. Select the nodes from which you want to unmap this volume and click Unmap.
4. Click the volume that you want to remove, click More and select Remove.

Removing a volume using a PowerFlex version prior to 3.5
Use this procedure to remove a volume with the PowerFlex GUI management software.

About this task


PowerFlex Manager does not currently support removing a volume.

Steps
1. In the PowerFlex GUI, select Frontend > Volumes.
2. Expand the correct storage pool to see the mapped volumes.
3. Right-click the volume that you want to delete and select Unmap.
4. Select ALL the nodes to unmap this volume and click Unmap Volumes.
5. Right-click the volume that you want to delete and select Remove > Volume and click OK.
6. Type the MDM password when prompted and click Close.
7. Update the PowerFlex Manager inventory by doing the following steps:
a. In the PowerFlex Manager GUI, go to the Resources page, select the PowerFlex Gateway and click Run Inventory.
b. To confirm that the process completes with no errors, go to Settings > Logs.
c. In the PowerFlex Manager GUI, go to the Services page for the hyperconverged, storage, and compute clusters and then
click Update Service Details.
d. After the Update Service Details process completes, confirm that all cluster objects report as healthy (green check mark).
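For reference, once a volume is unmapped it can also be removed with scli rather than the GUI; a sketch with an illustrative volume name, assuming an authenticated session on the primary MDM (this is destructive, so double-check the volume name):

scli --remove_volume --volume_name vol01

Afterward, rerun the PowerFlex Manager inventory update described in step 7 so that PowerFlex Manager reflects the change.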

Disabling persistent checksum on medium granularity storage pools
All medium granularity storage pools have persistent checksum enabled by default in PowerFlex. PowerFlex calculates
and validates the checksum value for the payload during transit to protect data in flight. Checksum protection is applied to all
inputs and outputs. Use the following procedures to disable the persistent checksum, if desired.

Using PowerFlex GUI presentation server to disable persistent checksum
Use this procedure to disable the persistent checksum using PowerFlex GUI presentation server.

Prerequisites
You will need the following information:
● IP or hostname of the PowerFlex GUI presentation server
● Valid credentials for the PowerFlex cluster
● Names of the protection domains to be worked on
● Names of the storage pools to be modified

Steps
1. Log in to the PowerFlex GUI presentation server with access to the PowerFlex cluster containing the Storage Pool you
want to modify.
2. Expand the Configuration menu in the navigation pane (underneath Dashboard) by left-clicking the entry.



3. Select Storage Pools.
4. Select the check box to the left of the Storage Pool you plan to modify.
5. Click More.
6. Select Background Device Scanner.
7. Clear Enable Background Device Scanner.
8. Click Apply.
9. Click Settings.
10. Click General.
11. In the resulting dialog box, leave the box checked for Enable Inflight / Persistent Checksum.
12. Clear the Persistent option.
13. Click Apply.
14. Repeat steps 1 through 13 for all additional Storage Pools to be modified.
NOTE: The background scanner service is not re-enabled automatically once the persistent checksum is disabled. To
re-enable it, repeat steps 1 through 6 above to check whether the Enable Background Device Scanner option is still
cleared; if desired, select it again and click Apply.

Enabling persistent checksum for medium granularity storage pools
Systems upgraded to PowerFlex 3.5 do not have persistent checksum enabled for existing medium granularity storage
pools. Persistent checksum is enabled using either the PowerFlex GUI or the PowerFlex SCLI command-line tool.

Using PowerFlex to enable persistent checksum


This procedure shows you how to enable the persistent checksum using PowerFlex.

Prerequisites
You will need the following information:
● IP or hostname of the PowerFlex presentation server
● Valid credentials for the PowerFlex cluster
● Names of the protection domains to be worked on
● Names of the storage pools to be modified

Steps
1. Log in to PowerFlex with access to the PowerFlex cluster containing the Storage Pool you want to modify.
2. Expand the Configuration menu in the navigation pane (underneath Dashboard) by left-clicking the entry.
3. Select Storage Pools.
4. Select the check box to the left of the Storage Pool you plan to modify.
5. Click More.
6. Select Background Device Scanner.
7. Clear Enable Background Device Scanner.
8. Click Apply.
9. Click Settings.
10. Click General.
11. In the resulting dialog box, leave the box checked for Enable Inflight / Persistent Checksum.
12. Select one or both of the Inflight and Persistent options.
13. If desired, check Validate on read (this validation may incur a performance penalty).
14. Click Apply.



NOTE: The background scanner service should be re-enabled automatically once the persistent checksum is enabled.
Repeat steps 1 through 6 to verify that the Enable Background Device Scanner option has been reselected.

Add licenses to PowerFlex and PowerFlex Manager


Use this procedure to add licenses to PowerFlex and PowerFlex Manager.

Steps
1. To add a license for PowerFlex, do the following:
a. Identify and copy the contents of the PowerFlex license file.
b. In the PowerFlex GUI presentation server, click Settings > Licenses.
c. Paste the contents of the license file into the space provided.
2. To add a PowerFlex Manager license:
a. Log in to PowerFlex Manager.
b. On the Licensing page of the Initial Setup wizard, click Choose File to the right of the Upload License field, and
select a valid license file.
Based on the license selected, the following information is displayed:
● Type—Displays the license type. PowerFlex Manager supports two license types:
○ Standard—Full-access license type.
○ Trial—Evaluation license that expires after a specified number of days and only supports a limited number of
resources. The number of days before expiration and the number of resources supported both depend on the
license you choose.
● Total Resources—Displays the maximum number of resources allowed by the license.
● Expiration Date—Displays the expiration date of the license (only shown for a trial license).
c. To activate the license, click Save and Continue.

Managing volumes, nodes, and network components


Use this procedure to manage components using PowerFlex Manager.

Steps
Log in to PowerFlex Manager.
The following table describes common tasks for managing system components and what steps to take in PowerFlex Manager to
initiate each.

If you want to... Do this in PowerFlex Manager...


View network topology a. Click Services.
b. On the Services page, select a service.
c. On the Service Details tab, click the Port View tab.
Run inventory (nodes, switches, a. Click Resources and then click the All Resources tab.
PowerFlex Gateway, and VMware b. Click the check box for the resource you want to update and then click Run
vCenter cluster) Inventory.
c. After running the inventory, click Update Service Details on the Services
page for any service that requires the updated resource data.
Add an existing service Click Services and click +Add Existing Service.
Perform node expansion a. Click Services. On the Services page, select a service.
b. On the Service Details tab, under Resource Actions, expand the Add
Resources list and click Add Nodes.
The procedure is the same for new services and existing services.
Remove a node a. Click Services.



If you want to... Do this in PowerFlex Manager...
b. On the Services page, select a service.
c. On the Service Details tab, under Resource Actions, click Remove
Resource.
d. Select the node and click Next.
e. Select Delete Resource for the Resource removal type.
Enter service mode a. Click Services.
b. On the Services page, select a service.
c. On the Service Details tab, under Service Actions, click Enter Service
Mode.
Exit service mode a. Click Services.
b. On the Services page, select a service.
c. On the Service Details tab, under Service Actions, click Exit Service Mode.
Replace a drive a. Click Services.
b. On the Services page, select a service.
c. On the Service Details tab, select a node and click Node Actions>Drive
Replacement.
Reconfigure MDM roles a. Click Services.
b. On the Services page, select a service.
c. On the Service Details tab, select a node and click Node
Actions>Reconfigure MDM Roles or click Reconfigure MDM Roles under
Service Actions.
You can also reconfigure MDM roles from the Resources page. Select a PowerFlex
Gateway and click View Details. Then, click Reconfigure MDM Roles.

Monitoring system health


Use this procedure to monitor system health.

Steps
Log in to PowerFlex Manager.
The following table describes common tasks for monitoring system health and managing software and firmware compliance and
what steps to take in PowerFlex Manager to initiate each.

If you want to... Do this in PowerFlex Manager...


Monitor system resources and health On the Dashboard, look at the Service Overview and Resource Overview
sections.
Monitor software and firmware a. Click Services.
compliance b. On the Services page, select a service.
c. On the Details page, under Service Actions, click View Compliance Report.
Perform software and firmware From the compliance report, view the firmware or software components. Click
remediation Update Resources to update non-compliant resources.
Generate a troubleshooting bundle a. Click Settings and then click Virtual Appliance Management.
b. Click Generate Troubleshooting Bundle.
Download a report that lists compliance a. Click Resources.
details for all resources b. Click Export Report and select either PDF or CSV from the drop-down list.
View alerts Click Settings and then click Alerts.



Upgrading PowerFlex appliance firmware
Use this procedure for upgrading the intelligent catalog and operating system repositories.

Steps
1. Log in to PowerFlex Manager.
2. Click Settings > Compliance & OS Repositories.
3. Select the Compliance Versions tab to load compliance versions and specify a default version for compliance checking.
The +Add button is available in both the Compliance Versions and OS Image Repositories tabs.
You cannot make a minimal compliance version the default version for compliance checking, since it only includes server
firmware updates. The default version must include the full set of compliance update capabilities. PowerFlex Manager does
not show any minimal compliance versions in the Default Version dropdown menu.
The Compliance Versions tab displays the following information:
● State —Displays an icon indicating one of the following states:
○ Available—Indicates that the compliance file is downloaded and copied successfully.
○ Downloading—Indicates that the compliance file is being downloaded and provides the percentage complete for the
download operation.
○ Synchronizing—Indicates that the compliance file is being synchronized with the virtual appliance after unpacking.
○ Unpacking—Indicates that the compliance file is being unpacked and provides the percentage complete for the
unpacking operation.
○ Pending—Indicates that the compliance file download process is in progress.
○ Error—Indicates that there is an issue downloading the compliance file.
● Version—Displays the compliance version.
● Source—Displays the share path of the compliance version in a file share.
● File Size—Displays the size of the compliance file in GB.
● Type—Displays Minimal if the compliance file only contains firmware updates, or Full if it contains firmware and
software updates.
● View bundles—Displays details about any bundles added for the compliance version.
● Available Actions—Select one of the following options:
○ Delete
○ Resynchronize

4. Select the OS Image Repositories tab to create operating system image repositories and view the following information:
● State — Displays the following states:
○ Available—Indicates that the operating system image repository is downloaded and copied successfully on the
appliance.
○ Pending—Indicates that the operating system image repository download process is in progress.
○ Error—Indicates that there is an issue downloading the operating system image repository.
● Repositories—Displays the name of the repository.
● Image Type—Displays the operating system type.
● Source Path—Displays the share path of the repository in a file share.
● In Use—Displays the following options:
○ True—Indicates that the operating system image repository is in use.
○ False—Indicates that the operating system image repository is not in use.
● Available Actions—Select one of the following options:
○ Delete
○ Resynchronize
You cannot perform any actions on repositories that are in use. However, you can delete repositories that are in an Available
state, but not in use and not set as a default version.
All the options are available only for repositories in an Error state. The Resynchronize option appears only when you must
perform a backup and restore of a previous image.
If a new compliance version becomes available, the Compliance and OS Repositories page displays a notification banner
at the top of the screen with the text A new compliance version is available for download. View Details. To the far
right of the banner, you should see an Actions menu that gives you the following choices:



● View details lets you see details about the new compliance version and download it.
● Hide lets you hide the banner for the new compliance version. This action applies only to the current user and session. If
the current user logs off, the banner reappears when this user logs in again. In addition, the banner appears if a different
user logs in. The notification banner also displays if another compliance version becomes available.
● Dismiss 30 days lets you dismiss the banner for this particular compliance version for 30 days. This action applies
only to the current user. The banner appears if a different user logs in. The notification banner also displays if another
compliance version becomes available.
The new compliance version banner shows up only if you have registered with Secure Remote Services.

Mapping a volume using a PowerFlex version prior to 3.5 to a Windows PowerFlex compute-only node
Use this procedure to map a PowerFlex volume to a Windows PowerFlex compute-only node.

Steps
1. Open the PowerFlex GUI, click Front-end, and select Volumes.
2. Right-click the volume, and then select Map.
3. Select the Windows compute-only nodes, and click Map Volumes.
4. Log in to the Windows Server compute-only node and open disk management.
5. Right-click the Windows icon, and then select Disk Management.
6. Rescan the disk by selecting Action > Rescan Disks.
7. Find the disk in the bottom frame, right-click in left area of the disk, and select Online.
8. Initialize the disk by doing the following steps:
a. Find the disk in the bottom frame, right click in right area of disk, and then select New Simple Volume.
b. In the New Simple Volume Wizard, click Next.
c. Select the default, and click Next.
d. Assign the drive letter, and click Next.
e. Select the default, and click Next.
f. Click Finish.

Mapping a volume to a Windows PowerFlex compute-only node
Use this procedure to map a PowerFlex volume to a Windows PowerFlex compute-only node.

Steps
1. Log in to the PowerFlex GUI and click the Configuration tab.
2. Click Volumes.
3. Select the volume, click Mapping, and select Map.
4. Select the required Windows compute-only node and click Map.
5. Select the volume to map and click Apply.
6. Select the Windows compute-only nodes, and click Map Volumes.
7. Log in to the Windows Server compute-only node and open disk management.
8. Right-click the Windows icon, and then select Disk Management.
9. Rescan the disk by selecting Action > Rescan Disks.
10. Find the disk in the bottom frame, right-click in left area of the disk, and select Online.
11. Initialize the disk by performing the following steps:
a. Find the disk in the bottom frame, right click in right area of disk, and then select New Simple Volume.
b. In the New Simple Volume Wizard, click Next.



c. Select the default, and click Next.
d. Assign the drive letter, and click Next.
e. Select the default, and click Next.
f. Click Finish.

Enabling and disabling SDC authentication


PowerFlex allows authentication and authorization to be enabled for all SDCs connected to a cluster. Once authentication and
authorization are enabled, older SDC clients and SDCs without a configured password are disconnected.
NOTE: If SDC authentication is enabled in a production environment, data unavailability may occur if clients are not properly
configured.

Preparing for SDC authentication


Prerequisites
You will need the following information:
● Primary and secondary MDM IP address
● PowerFlex cluster credentials

Steps
1. Log in to the primary MDM.
2. Authenticate against the PowerFlex cluster using the credentials provided.
3. List and record all connected SDCs (either NAME, GUID, ID, or IP), type: scli --query_all_sdc.
4. For each SDC in your list, use the identifier you recorded to generate and record a CHAP secret, type: scli --
generate_sdc_password --sdc_IP (or NAME, GUID, or ID) --reason "CHAP setup".

NOTE: This secret is specific to that SDC and cannot be reused for subsequent SDC entries.

For example, scli --generate_sdc_password --sdc_ip 172.16.151.36 --reason "CHAP setup"

Example output:

[root@svm1 ~]# scli --generate_sdc_password --sdc_ip 172.16.151.36 --reason "CHAP setup"
Successfully generated SDC with IP 172.16.151.36 password:
AQAAAAAAAAAAAAA8UKVYp0LHCDFD59BrnEXNPVKSlGfLrwAk
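If you have many SDCs, generating the secrets one at a time is tedious. The following shell loop is a convenience sketch, not part of the product: it assumes a file named sdc_ips.txt that you create with one SDC IP address per line, and an already authenticated scli session on the primary MDM:

# Generate a CHAP secret for every SDC listed in sdc_ips.txt
while read -r ip; do
  scli --generate_sdc_password --sdc_ip "$ip" --reason "CHAP setup"
done < sdc_ips.txt

Record each printed password against its IP address; the pairs are needed when configuring the SDCs in the next procedure.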

Configuring SDCs to use authentication


Use this procedure to configure all the SDCs for authentication.

About this task


For each SDC, you must populate the generated CHAP password. On a VMware ESXi host, this requires setting a new scini
parameter using the esxcli tool. Use this procedure to perform the configuration change. For Windows and Linux SDC hosts, the
included drv_cfg utility can be used to update the driver and configuration file in real time. An example will be given after the
VMware ESXi procedure.

NOTE: VMware ESXi hosts must be rebooted for the new parameter to take effect.

Prerequisites
Ensure you have generated preshared secrets (passwords) for all SDCs to be configured.



Ensure you have the following information:
● Primary and secondary MDM IP address or NAMEs
● Credentials to access all SDC hosts or VMs

Steps
1. SSH to the VMware ESXi host using the provided credentials.
2. List the host's current scini parameters, type: esxcli system module parameters list -m scini | grep Ioctl

IoctlIniGuidStr string 10cb8ba6-5107-47bc-8373-5bb1dbe6efa3
   Ini Guid, for example: 12345678-90AB-CDEF-1234-567890ABCDEF

IoctlMdmIPStr string 172.16.151.40,172.16.152.40
   Mdms IPs. IPs for MDMs in the same cluster should be comma separated. To configure more
   than one cluster, use '+' to separate between IPs. For example:
   10.20.30.40,50.60.70.80+11.22.33.44. Max 1024 characters.

IoctlMdmPasswordStr string
   Mdms passwords. Each value is <ip>-<password>. Multiple passwords are separated by a ';'
   sign. For example: 10.20.30.40-AQAAAAAAAACS1pIywyOoC5t;11.22.33.44-tppW0eap4cSjsKIc.
   Max 1024 characters.

NOTE: The third parameter, IoctlMdmPasswordStr, is currently empty.

3. Using esxcli, configure the driver with the existing and new parameters. To specify multiple IP addresses here, use a
semi-colon (;) between the entries, as shown in the following example:

esxcli system module parameters set -m scini -p
"IoctlIniGuidStr=10cb8ba6-5107-47bc-8373-5bb1dbe6efa3
IoctlMdmIPStr=172.16.151.40,172.16.152.40
IoctlMdmPasswordStr=172.16.151.40-AQAAAAAAAAA8UKVYp0LHCFD59BrnExNPvKSlGfLrwAk;172.16.152.40-AQAAAAAAAAA8UKVYp0LHCFD59BrnExNPvKSlGfLrwAk"

NOTE: Note the spaces between the Ioctl parameter fields and the opening and closing quotes. The above command is
entered on a single line.

4. Now the SDC configuration is ready to be applied. On VMware ESXi nodes a reboot is necessary for this to happen. If the
SDC is a hyperconverged node, proceed with step 5. Otherwise, skip to step 8.
5. For hyperconverged nodes, use PowerFlex or the scli tool to place the corresponding SDS into maintenance mode.
6. If the SDS is also the cluster primary MDM, switch the cluster ownership to a secondary MDM and verify cluster state
before proceeding, type: scli --switch_mdm_ownership --mdm_name <secondary MDM name>
7. Once the cluster ownership has been switched (if needed) and the SDS is in maintenance mode, the SVM may be powered
down safely.
8. Place the ESXi host in maintenance mode. If workloads need to be manually migrated to other hosts, have those actions
performed now prior to maintenance mode being engaged.
9. Reboot the ESXi host.
10. Once the host has completed rebooting, remove it from maintenance mode and power on the SVM (if present).
11. Take the SDS out of maintenance mode (if present).
12. Repeat steps 1 through 11 for all VMware ESXi SDC hosts.
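After each host comes back from the reboot, a quick sanity check is to rerun the query from step 2 and confirm that the password parameter is now populated, for example:

esxcli system module parameters list -m scini | grep IoctlMdmPasswordStr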

Windows and Linux SDC nodes


Windows and Linux hosts have access to the drv_cfg utility, which allows driver modification and configuration in real time. See
below for an example. The --file option allows for persistent configuration to be written to the driver's configuration file (so that
the SDC remains configured after a reload or reboot).

Windows: drv_cfg --set_mdm_password --ip <MDM IP> --port 6611 --password <secret>
Linux: /opt/emc/scaleio/sdc/bin/drv_cfg --set_mdm_password --ip <MDM IP> --port 6611 --password <secret> --file /etc/emc/scaleio/drv_cfg.txt

Iterate through the relevant SDCs, using the command examples above along with the recorded information.
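For instance, on a Linux SDC, reusing the MDM IP addresses and secret format from the VMware ESXi example earlier (both values are illustrative), run the utility once per MDM IP address:

/opt/emc/scaleio/sdc/bin/drv_cfg --set_mdm_password --ip 172.16.151.40 --port 6611 --password AQAAAAAAAAA8UKVYp0LHCFD59BrnExNPvKSlGfLrwAk --file /etc/emc/scaleio/drv_cfg.txt
/opt/emc/scaleio/sdc/bin/drv_cfg --set_mdm_password --ip 172.16.152.40 --port 6611 --password AQAAAAAAAAA8UKVYp0LHCFD59BrnExNPvKSlGfLrwAk --file /etc/emc/scaleio/drv_cfg.txt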



Enabling SDC authentication
Once the SDCs have been prepared and configured for SDC authentication, you may proceed with enabling the feature.

Prerequisites
Ensure all SDCs are configured with their appropriate CHAP secret. Any older or unconfigured SDC will be disconnected from
the system when authentication is turned on.
You will need the following information:
● The primary MDM IP address
● Credentials to access the PowerFlex cluster

Steps
1. SSH to the primary MDM address.
2. Log in to the PowerFlex cluster using the provided credentials.
3. Enable the SDC authentication, type: scli --set_sdc_authentication --enable
4. Verify that the SDC authentication and authorization is turned on, and the SDCs are connected with passwords, type: scli
--check_sdc_authentication_status
Example output:

[root@svm1 ~]# scli --check_sdc_authentication_status
SDC authentication and authorization is enabled.
Found 4 SDCs.
The number of SDCs with generated password: 4
The number of SDCs with updated password set: 4

5. If the number of SDCs does not match, or you experience disconnected SDCs, list any disconnected SDCs and then disable
SDC authentication by using the following commands:

scli --query_all_sdc | grep "State: Disconnected"
scli --set_sdc_authentication --disable
Recheck the disconnected SDCs to ensure they have the proper configuration applied. If necessary, regenerate their shared
secret and reconfigure the SDC. If unable to resolve SDC disconnection, leave the feature disabled and engage Dell EMC
support as needed.

Disabling SDC authentication


Prerequisites
Ensure all SDCs are configured with their appropriate CHAP secret. Any older or unconfigured SDC will be disconnected from
the system when authentication is turned on.
You will need the following information:
● Primary MDM IP address
● Credentials to access the PowerFlex cluster

Steps
1. SSH to the primary MDM address.
2. Log in to the PowerFlex cluster using the provided credentials.
3. Disable the SDC authentication, type: scli --set_sdc_authentication --disable
Once disabled, SDCs will reconnect automatically unless otherwise configured.



Expanding an existing PowerFlex cluster with SDC authentication enabled
Once a PowerFlex cluster has SDC authentication enabled, new SDCs must have the configuration step performed after the
client is installed.

Prerequisites
Ensure you have the following information:
● Primary MDM IP address
● Credentials for the PowerFlex cluster
● The IP address of the new cluster members
Ensure that SDC authentication is enabled on the PowerFlex cluster.

Steps
1. Install and add the SDCs as per normal procedures (whether using PowerFlex Manager or manual expansion process).
NOTE: New SDCs will show as Disconnected at this point, as they cannot authenticate to the system.

2. SSH to the primary MDM.


3. Log in to the PowerFlex cluster using the scli tool.
4. For each of your newly added SDCs, generate and record a new CHAP secret, type: scli --generate_sdc_password
--sdc_IP <IP of SDC> --reason "CHAP setup - expansion."
5. SSH and log in to the SDC host.
6. If the new SDC is a VMware ESXi host, follow the rest of this procedure. If Windows or Linux based, see Adding Windows
or Linux Authenticated SDCs.
7. Type esxcli system module parameters list -m scini | grep Ioctl to list the current scini parameters of
the host.
8. Using esxcli, type esxcli system module parameters set -m scini -p to configure the driver with the existing
and new parameters.
For example, esxcli system module parameters set
-m scini -p "IoctlIniGuidStr=09bde878-281a-4c6d-ae4f-d6ddad3c1a8f
IoctlMdmIPStr=10.234.134.194,192.168.152.199,192.168".
9. At this stage, the SDC's configuration is ready to be applied. On ESXi nodes a reboot is necessary for this to happen. If the
SDC is a hyperconverged node, proceed with step 10. Otherwise, go to step 12.
10. For PowerFlex hyperconverged nodes, use the presentation manager or scli tool to place the corresponding SDS into
maintenance mode.
11. Once the SDS is in maintenance mode, the SVM may be powered off safely.
12. Place the ESXi host in maintenance mode. No workloads should be running on the node, as we have not yet configured the
SDC.
13. Reboot the ESXi host.
14. Once the host has completed rebooting, remove it from maintenance mode and power on the SVM (if present).
15. Take the SDS out of maintenance mode (if present).
16. Repeat steps 5 through 15 for all ESXi SDC hosts.



4
Administering the storage with asynchronous replication
Perform the following procedures to administer the PowerFlex appliance storage with asynchronous replication.

Remote replication on PowerFlex hyperconverged nodes
Remote replication ensures data protection of the PowerFlex appliance. It creates a remote copy of one volume from one
cluster to another. PowerFlex appliance supports asynchronous replication.
Setting up the peer system is the first step when configuring remote protection. The volumes from each of the systems must be
the same size. If the network is up, then the systems should be connected.

Remote consistency group (RCG)


Remote Consistency Group (RCG) is an entity that includes a set of consistent volume pairs. The volume on the source from a
single Protection Domain (PD) is replicated to a remote volume from a single PD on the target. This creates a consistent pair of
volumes.
When replication is first activated for an RCG, the target volumes will be synchronized with the source volumes. For each
volume pair, the entire contents of each source volume are copied to the corresponding target volume. When there is more than
one volume pair in the RCG, the order in which the volumes are synchronized is determined by the order in which the volume
pairs were created. The initial synchronization occurs while all applications are running and performing I/O. Any writes to an
area of the volume that has already been synchronized will be sent to the journal. Writes to an area of the volume that has not
already been synchronized will be ignored, as the updated content will be copied over eventually as part of the synchronization.
The initial synchronization can also take place while the system is offline, however the application I/O must first be paused. You
can add and manage RCG on both the source and target systems.

Replication direction and mapping


Replication direction and mapping according to subsequent Remote Consistency Group (RCG) operations and possible actions
are as follows:

Subsequent RCG operation | Possible actions | Replication direction / access | Access to volumes

Normal | Switchover / test failover / failover; Remove | A to B | Access to volumes is allowed only through the source (system A).

After failover | Reverse / restore; Remove | N/A - data is not replicated | By default, access to the volume is allowed through the original target (system B). It is possible to enable access through the original source (system A).

After failover + reverse (switchover and test failover are only possible after the peers are synchronized) | Switchover / test failover / failover; Remove | B to A | Access to the volumes is allowed only through the original target (system B).

After failover + restore (switchover and test failover are only possible after the peers are synchronized) | Switchover / test failover / failover; Remove | A to B | Access to the volumes is allowed only through the source (system A).

After switchover | Switchover / test failover / failover; Remove | B to A | Access to the volumes is allowed only through the original target (system B).

After test failover | Switchover / test failover / failover; Remove | A to B | Access to the volumes is allowed through both systems (system A and system B).

Adding a replication consistency group


Use this procedure to add a replication consistency group (RCG).

Steps
1. Log in to the PowerFlex GUI presentation server: https://presentation_server_ip:8443.

NOTE: Use the primary MDM IP address and credentials to log in to the PowerFlex cluster.

2. In the left pane, select Protection > RCGs.


3. Click Add.
a. In the General tab, provide the RCG Name and RPO (recovery point objective).
RPO is the amount of time of data loss that is tolerated if replication between the systems is compromised.
NOTE: It is recommended to enter the minimal amount of time the feature allows. In this case, it is one minute.

b. Select the Source Protection Domain, Target System, and Target Protection Domain from the menu and click
Next.
4. On the Add Replication Pairs page:
a. Click the volume from the Source column and click the corresponding size volume from the Target column.
NOTE: Source and target volumes must be identical in size.

b. Click Add pair, select the added pair that must be replicated and click Next.
5. On the Review Pairs page:
a. Select the added pair and click Add RCG & Start Replication.
b. Verify that the operation completes successfully and click Dismiss.
The RCG is added to both the source and target systems. Wait for the initial copy to complete before starting to use the
replicated volumes.



Checking the current copy status
Use this procedure to check the current copy status through SCLI or the PowerFlex GUI.

Steps
1. Using SCLI, complete the following:
a. Log in to the primary MDM using SSH.
b. Log in to scli to add the peer system, type: scli --login --username admin
c. Enter the MDM cluster password.
d. Type scli --query_all_replication_pairs to verify replication status.
Once the initial copy is complete, the PowerFlex replication system is ready for use.
2. Using the PowerFlex GUI, complete the following:
a. From the PowerFlex GUI, in the left pane, click Protection > RCGs.
b. In the right pane, select the relevant RCG check box.
c. Select the Volume Pairs tab and in the Details pane, verify the initial copy status and progress.
Once initial copy is complete, PowerFlex replication system is ready for use.

Modifying the recovery point objective


Use this procedure to modify the recovery point objective (RPO) if it is required.

Steps
1. In the PowerFlex GUI, in the left pane, click Protection > RCGs.
2. In the right pane, select the relevant RCG check box, and click Modify > Modify RPO.
3. In the Modify RPO for RCG <rcg name> dialog box, enter the new RPO time and click Apply.
4. Verify that the operation completed successfully and click Dismiss.

Adding a replication pair to a remote consistency group
Use this procedure to add a replication pair to a remote consistency group (RCG).

Steps
1. From the PowerFlex GUI, in the left pane, click Protection > RCGs.
2. In the right pane, select the relevant RCG check box, and click Modify > Add Pair.
3. In the Add Pairs wizard, on the Add Replication Pairs page, select a volume from the source and a volume from the target
and then click Add Pair.
4. Click Next.
5. In the Review Pairs page, verify the selected volumes are the correct volumes and click Add Pairs.

Unpairing from a remote consistency group


Use this procedure to unpair from a remote consistency group (RCG).

Steps
1. From the PowerFlex GUI, in the left pane, click Protection > RCGs.
2. In the right pane, select the relevant RCG check box, and in the Details pane, in the Volume Pairs tab, click Unpair.
3. In the Remove Pair from RCG <RCG name> dialog box, click Remove Pair.



4. Verify the operation completed successfully and click Dismiss.

Freezing a remote consistency group


Use this procedure to freeze a remote consistency group (RCG).

About this task


Freezing stops writing data from the target journal to the target volume. This option is used while creating a snapshot or copy
of the replicated volume.

Steps
1. From the PowerFlex GUI, in the left pane, click Protection > RCGs.
2. In the right pane, select the relevant RCG check box, and click More > Freeze apply.
3. Click Freeze Apply.
4. Verify that the operation completed successfully and click Dismiss.

Unfreezing a remote consistency group


Use this procedure to unfreeze a remote consistency group (RCG).

About this task


Unfreezing the RCG is used while creating a snapshot or copy of the replicated volume.

Steps
1. From the PowerFlex GUI, in the left pane, click Protection > RCGs.
2. In the right pane, select the relevant RCG check box, and click More > Unfreeze apply.
3. Click Unfreeze Apply to resume data transfer from target journal to target volume.
4. Verify that the operation completed successfully and click Dismiss.

Setting the target to inconsistent mode


Use this procedure to set the target to inconsistent mode.

About this task


Set the target to inconsistent mode to pause apply from the target journal to the target volume until the source journal has
completed sending data to the target journal. If there is no consistent image on the target journal, then the system does not
apply.
NOTE: It is recommended to take a snapshot of the target before setting the target to inconsistent mode for recovery
purposes of a consistent snapshot.

Steps
1. From the PowerFlex GUI, in the left pane, click Protection > RCGs.
2. In the right pane, select the relevant RCG check box, and click Modify > Set Target to Inconsistent Mode.
3. In the Set Target to Inconsistent Mode RCG <RCG name> dialog box, click Apply.
4. Verify that the operation completed successfully and click Dismiss.



Setting the target to consistent mode
Use this procedure to set the target to consistent.

About this task


If the target is set to inconsistent mode, you can set it back to consistent mode. As data is transferred from source to target,
the SDR verifies that the data in the journal is consistent with the data from the source. The SDR then sends an apply to the
journal to prompt the SDR to send the data to the volume.

Steps
1. From the PowerFlex GUI, in the left pane, click Protection > RCGs.
2. In the right pane, select the relevant RCG check box, and click Modify > Set Target to Consistent Mode.
3. In the Set Target to Consistent Mode RCG <RCG name> dialog box, click Apply.
4. Verify that the operation completed successfully and click Dismiss.

Running a test failover


Use this procedure to run a test failover of the latest copy of snapshots of source and target systems before running a failover.

About this task


Running a test failover provides the following:
● Enables you to perform resource-intensive operations on secondary storage without impacting production
● Test application upgrades on the target system without production impact
● Ability to attach different, and higher-performing compute systems or media in the target environment
● Ability to attach systems with different hardware attributes such as GPUs in the target domain
● Ability to run analytics on the data without impeding your operational systems
● Perform what-if actions on the data because that data will not be written back to production
● Eliminates many manual storage tasks because the test is fully automated along with the snapshots

Prerequisites
Ensure replication is still running and is in a healthy state.
Before running a test failover, map the target volumes with the appropriate access mode. By default, volumes are mapped with
read_write access. This creates a conflict with the mapping of target volumes, since PowerFlex sets the remote access mode
of the Replication Consistency Group (RCG) point of view to read_only. This is incompatible with the default read_write
volume mapping offered by the PowerFlex GUI; therefore, log on to the target system and manually map all volumes in the
RCG to the target system using the scli command.

Example: # scli --map_volume_to_sdc --volume_name volume1 --sdc_id 47c091f200000004 --access_mode read_only

Once the remote volumes are mapped, you can test the RCG failover. The test failover command:
● Creates a snapshot on the target system for all volumes attached to the RCG.
● Replaces the pointer used by the volume mapping for each volume with a pointer to its snapshot.
● Changes the access mode of the volume mapping of each volume on the target system to read_write.

A test failover operation is only possible after the peers are synchronized.
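Where an RCG contains many volumes, the documented map command can be repeated in a short helper loop; this is a convenience sketch only, assuming a file named rcg_volumes.txt that you create with one volume name per line, and the SDC ID recorded from scli --query_all_sdc:

# Map every RCG volume to the target SDC as read_only
while read -r vol; do
  scli --map_volume_to_sdc --volume_name "$vol" --sdc_id 47c091f200000004 --access_mode read_only
done < rcg_volumes.txt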

Steps
1. From the PowerFlex GUI, in the left pane, click Protection > RCGs.
2. In the right pane, select the relevant RCG check box, and click More > Test Failover.
3. In the RCG <RCG name> Test Failover dialog box, click Start Test Failover.
4. In the RCG <RCG name> Test Failover using target volumes dialog box, click Proceed.
5. Verify that the operation completed successfully and click Dismiss.



Stopping test failover
This procedure automatically deletes the snapshots created during test failover.

Steps
1. From the PowerFlex GUI, in the left pane, click Protection > RCGs.
2. In the right pane, select the relevant RCG check box, and click More > Test Failover Stop.
3. Click Approve.
4. Verify that the operation completed successfully and click Dismiss.

Running a failover
Use this procedure to failover the source role to the target system.

About this task


If the system is not healthy, you can failover the source role to the target system. When the source is compromised, data
from the host stops sending I/Os to the source volume, replication is then stopped, and the target system takes on the role of
source. The host on the target starts sending I/Os to the volume. The target takes on the role of source, and the source takes
on the role of target.
There are two options when choosing to failover a remote consistency group (RCG):
● Switchover - This option is a complete synchronization and failover between the source and the target. Application I/Os are
stopped at the source, and the source and target volumes are synchronized. The access mode of the target volumes is changed
for the target host, the roles are switched, and finally the access mode of the new source volumes is changed to read/write.
● Latest PiT - The system prevents any write to the source volumes.

Prerequisites
Before performing failover, ensure you stop the application and unmount the file systems at the source (if the source is
available). Target volumes are only mapped after performing a failover. Target volumes can also be mapped using scli.

Steps
1. From the PowerFlex GUI, in the left pane, click Protection > RCGs.
2. In the right pane, select the relevant RCG check box, and click More > Failover.
3. In the Failover RCG <RCG name> dialog box, select one of the following options:
● Switchover: (sync and failover)
● Latest PiT: (date and time)
4. Click Apply Failover.
5. In the RCG <RCG name> Sync & Failover dialog box, click Proceed.
6. Verify that the operation completed successfully and click Dismiss.
7. From the top right, click Running Jobs and check the progress of the failover.

Restoring replication
Use this procedure to restore replication when the remote consistency group (RCG) is in failover.

About this task


When the RCG is in failover mode, you can reverse or restore the replication. Restoring replication maintains the replication
direction from the original source and overwrites all data at the target. This option may be selected from either source or target
systems.



Prerequisites
This option is available when RCG is in failover mode, or when the target system is not available. It is recommended to take a
snapshot of the original destination before restoring the replication for backup purposes.

Steps
1. From the PowerFlex GUI, in the left pane, click Protection > RCGs.
2. In the right pane, select the relevant RCG check box, and click More > Restore.
3. In the Restore Replication RCG <RCG name> dialog box, click Apply.
4. Verify that the operation completed successfully and click Dismiss.

Reversing replication
Use this procedure to reverse replication if the remote consistency group (RCG) is in failover or switchover mode.

About this task


When the RCG is in failover or switchover mode, you can reverse or restore the replication. Reversing replication changes the
direction so that the original target becomes the source. All data at the original source is overwritten by the data at the target.
This option may be selected from either source or target systems.

Prerequisites
This option is available when RCG is in failover mode, or when the target system is not available. It is recommended to take a
snapshot of the original source before reversing the replication for backup purposes.

Steps
1. From the PowerFlex GUI, in the left pane, click Protection > RCGs.
2. In the right pane, select the relevant RCG check box, and click More > Reverse.
3. In the Reverse Replication RCG <RCG name> dialog box, click Apply.
4. Verify that the operation completed successfully and click Dismiss.

Creating a snapshot of the remote consistency group (RCG) volume
Use this procedure to create a snapshot of the RCG.

About this task


Create a snapshot of the RCG volume from the target system. The latest image of the volume is used for the snapshot. When
creating a snapshot, the RCG enters a freeze mode.

Steps
1. From the PowerFlex GUI, in the left pane, click Protection > RCGs.
2. In the right pane, select the relevant RCG check box, and click More > Create Snapshots.
3. In the Create Snapshots RCG <RCG name> dialog box, click Create Snapshots.
4. Verify that the operation completed successfully and click Dismiss.



Pausing the remote consistency group
Use this procedure to pause the replication for the remote consistency group (RCG).

About this task


Pausing stops the transfer of data from the source to the target.

Steps
1. From the PowerFlex GUI, in the left pane, click Protection > RCGs.
2. In the right pane, select the relevant RCG check box, and click More > Pause RCG.
3. In the Pause RCG <RCG name> dialog box, click one of the following options:
● Stop Data Transfer - this option saves all the data in the source journal volume until no capacity is available.
● Track Changes - this option enables manual slim mode where only metadata in the source journal volumes is saved.
4. Click Pause.
5. Verify that the operation completed successfully and click Dismiss.

Pausing the initial copy


Use this procedure to pause replication of the initial copy from the source to the target.

Steps
1. From the PowerFlex GUI, in the left pane, click Protection > RCGs.
2. In the right pane, select the relevant RCG check box, and click Initial copy > Pause Initial copy.
3. In the Pause Initial Copy <RCG name> dialog box, click Pause Initial Copy.

Resuming the initial copy


Use this procedure to resume replication of the initial copy from the source to the target.

Steps
1. From the PowerFlex GUI, in the left pane, click Protection > RCGs.
2. In the right pane, select the relevant RCG check box, and click More > Resume.
3. In the Resume Initial Copy <RCG name> dialog box, click Resume Initial Copy.
4. Verify that the operation completed successfully and click Dismiss.

Resuming the replication consistency group


Use this procedure to resume the replication consistency group (RCG).

Steps
1. From the PowerFlex GUI, in the left pane, click Protection > RCGs.
2. In the right pane, select the relevant RCG check box, and click More > Resume.
3. In the Resume RCG <RCG name> dialog box, click one of the following options:
● Stop Data Transfer - this option saves all the data in the source journal volume until no capacity is available.
● Track Changes - this option enables manual slim mode where only metadata in the source journal volumes is saved.
4. Click Resume RCGs.
5. Verify that the operation completed successfully and click Dismiss.



Setting priority
Use this procedure to set the order priority for copying volume pairs.

About this task


Set the priority to the highest priority for pairs to be copied first, or set to the lowest priority to be copied last.

NOTE: Setting the priority is only valid during initial copy.

Steps
1. From the PowerFlex GUI, in the left pane, click Protection > RCGs.
2. In the right pane, select the relevant RCG check box.
3. In the Volumes Pairs tab, click Initial copy > Set Priority.
4. In the Set Priority for Pair <RCG name> dialog box, select Default or High and click Save.
5. Verify that the operation completed successfully and click Dismiss.

Mapping remote consistency groups to the Storage Data Clients (SDC)
Use this procedure to designate which SDCs can access the remote consistency groups (RCGs) from the target volumes.

Prerequisites
This mapping is only enabled from the target RCG.

Steps
1. From the PowerFlex GUI, in the left pane, click Protection > RCGs.
2. In the right pane, click the relevant RCG check box and click Mapping > Map.
3. In the Map RCG Target Volumes dialog box, click the relevant SDC check box, and click Map.
4. In the Mappings section of the dialog box, select the volume check box and select the access mode.
NOTE: Read Access mode applies to all platforms, except Windows clusters, which require the No Access mode.

5. In the Map RCG Target Volumes dialog box, click Map RCG Target Volumes.
6. Click Apply.
7. Verify that the operation completed successfully and click Dismiss.

Mounting a VMFS datastore copy on the target VMware ESXi cluster
Use this procedure to mount a VMFS datastore copy on the target VMware ESXi cluster.

Prerequisites
Ensure you perform a storage rescan on your host to update the view of storage devices that are presented to the host.

Steps
1. In the VMware vSphere web client navigator, browse to a host, a cluster, or a data center.
2. From the right-click menu, select Storage > New datastore.
3. Select VMFS as the datastore type.
4. Enter the datastore name and if necessary, select the placement location for the datastore.



5. From the list of storage devices, select the volume that is mapped to the cluster, and click Next.
6. Select Keep existing signature and click Next.
NOTE: The Assign a new signature option is recommended only when you want to mount the volume on the same
VMware ESXi host where the original volume is present. Also, be aware that assigning a new signature is an
irreversible operation.

7. Click Finish.
8. Click OK.
9. Rescan for new VMFS volumes:
a. In the VMware vSphere client, browse to a host, a cluster, or a data center.
b. From the right-click menu, select Storage > Rescan Storage > Scan for new VMFS Volumes.
c. Click OK.

Unmapping a Storage Data Client (SDC) from the remote consistency group target volumes
Use this procedure to unmap a Storage Data Client (SDC) from the remote consistency group (RCG) target volumes.

Steps
1. From the PowerFlex GUI, in the left pane, click Protection > RCGs.
2. In the right pane, click the relevant RCG check box and click Mapping > Unmap.
3. In the Unmap dialog box, click the relevant SDC check box, and click Unmap.
4. Verify that the operation completed successfully and click Dismiss.

Configuring replication on PowerFlex storage-only nodes
This section describes how to enable or disable replication on PowerFlex storage-only nodes manually.

Add storage data replication to PowerFlex


Use this task to add storage data replication to PowerFlex.

Prerequisites
Replication is supported on PowerFlex storage-only nodes with dual CPU. The node should be migrated to an LACP bonding NIC
port design.

Steps
1. Connect to the PowerFlex GUI presentation server through https://<ipaddress>:8443 using MDM.
2. Click the Protection tab in the left pane.
3. Click SDR > Add, and enter the storage data replication name.
4. Choose the protection domain.
5. Enter the IP address to be used, choose its role, and click Add IP. Repeat this for each IP address you are adding, and click Add
SDR.
NOTE: While adding storage data replication, it is recommended to add IP addresses for flex-data1-<vlanid>, flex-data2-
<vlanid>, flex-data3-<vlanid>, and flex-data4-<vlanid>, along with flex-rep1-<vlanid> and flex-rep2-<vlanid>. Choose the
Application and Storage role for all data IP addresses and the External role for the replication IP addresses.

6. Repeat steps 3 through 5 for each storage data replication you are adding.



7. Click Protection > Journal Capacity > Add, and provide the journal capacity percentage. The default is 10%; customize it
if needed.

Extract and add the MDM certificate


Use this procedure to extract and add the MDM certificate.

About this task


The MDM certificate must be exchanged between the replicating clusters to protect from possible security attacks. This
procedure is performed using the PowerFlex scli. On each system, a certificate is created and sent to the other host in the
replicated pair.

Prerequisites

NOTE: This procedure can only be completed when the secondary site is active.

Steps
1. Log in to the primary MDM on both the source and destination using SSH.
2. Run scli --login --username admin and provide the MDM cluster password when prompted.
See the following example to extract the certificate on source and destination primary MDM.
● Example for source: scli --extract_root_ca --certificate_file /tmp/Source.crt
● Example for destination: scli --extract_root_ca --certificate_file /tmp/destination.crt
3. Copy the extracted certificate of the source (primary MDM) to the destination (primary MDM) using SCP, and vice versa.
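For example, the certificates can be copied as follows (the IP address placeholders must be replaced with your MDM
management IP addresses):
● Run on the source primary MDM: scp /tmp/Source.crt root@<destination_primary_MDM_IP>:/tmp/source.crt
● Run on the destination primary MDM: scp /tmp/destination.crt root@<source_primary_MDM_IP>:/tmp/destination.crt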
See the following example to add the copied certificate:
● Example for source: scli --add_trusted_ca --certificate_file /tmp/destination.crt --comment
destination_crt
● Example for destination: scli --add_trusted_ca --certificate_file /tmp/source.crt --comment
source_crt
4. Run scli --list_trusted_ca to verify the added certificate.
5. Once the journal capacity is set, log in to the primary MDM using SSH, and log in to scli using scli --login
--username admin to add the peer.

Logged in. User role is SuperUser. System ID is 2e6ccfd208ef120f

Note the system ID.


6. Add a peer system on the primary site: scli --add_replication_peer_system --peer_system_ip <remote
primary MDM management IP,remote secondary MDM management IP> --peer_system_id <ID of the remote
site> --peer_system_name <remote site name>.
7. Add a peer system on the remote site: scli --add_replication_peer_system --peer_system_ip <primary
MDM management IP,secondary MDM management IP> --peer_system_id <ID of the primary site>
--peer_system_name <primary site name>.
NOTE:
● For a three node cluster, add two management IP addresses (primary and secondary).
● For a five node cluster, add three management IP addresses (primary, secondary1, and secondary2).
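For illustration, a matched pair of commands might look like the following; the IP addresses, the remote system ID, and
the site names are placeholders (2e6ccfd208ef120f is the example primary-site system ID shown in step 5):
● On the primary site: scli --add_replication_peer_system --peer_system_ip 192.168.20.11,192.168.20.12
--peer_system_id 3f7bdd10c2a941aa --peer_system_name Remote_Site
● On the remote site: scli --add_replication_peer_system --peer_system_ip 192.168.10.11,192.168.10.12
--peer_system_id 2e6ccfd208ef120f --peer_system_name Primary_Site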

Create the replication consistency group


Use this task to create the RCG. Perform this task only when the remote site is up and running.

About this task


The RCG is a logical container for volumes whose application data must be replicated consistently with each other. It includes a
set of consistent volume pairs. The volume on the source from a single protection domain is replicated to a remote volume from
a single protection domain on the target. This creates a consistent pair of volumes. You can add and manage RCGs on both the
source and target systems.



Before proceeding, create source and destination volumes of the same size. It is recommended, but not mandatory, that
the volumes in the volume pair have the same attributes (including zero padding and granularity); not doing so can impact
performance and capacity.
If you already have a volume at the source site, create a volume of the same size at the destination site.

NOTE: Do not map the volume that is created on target system to SDC.

Steps
1. Log in to the source site presentation server: https://<presentation_server_IP>:8443.

NOTE: Use the primary MDM IP address and credentials to log in to the PowerFlex cluster.

2. In the left pane, click REPLICATION > RCGs.


3. In the right pane, click Add.
4. In the Add RCG wizard, enter the following on the General page:
a. Enter the RCG Name.
b. Enter the number of RPO (recovery point objective) minutes. This is the amount of data loss, measured in time, that can
be tolerated if replication between the systems is compromised.
c. Select Source Protection Domain.
d. Select Target System.
e. Select Target Protection Domain.
5. Click Next.
6. On the Add Replication Pairs page:
a. Click the volume from the Source column and then click the same size volume from the Target column.
b. Click Add Pair. The volume pair is added.
c. Click Next.
7. On the Review Pairs page:
a. Ensure that the correct source and volume pair are selected and click ADD RCG & START REPLICATION.
b. Verify that the operation completed successfully and click Dismiss.
The RCG is added to both the source and target systems.
Wait for the initial copy to finish before you start using the replicated volumes.

Finding the current copy status


Use this task to find the current copy status.

Steps
1. Log in to the primary MDM using SSH, and log in to scli by typing scli --login --username admin. Enter the MDM
cluster password when prompted.
2. Verify the replication status by typing scli --query_all_replication_pairs.
Once the initial copy is complete, the PowerFlex replication system is ready for use.

Modifying the recovery point objective


Use this task to update the recovery point objective (RPO) time as required.

Steps
1. From https://Presentation_Server_IP:8443 (PowerFlex GUI), in the left pane, click Protection > RCGs.
2. In the right pane, select the relevant RCG check box, and click Modify > Modify RPO.
3. In the Modify RPO for RCG <rcg name> dialog box, enter the updated RPO time and click Apply.
4. Verify that the operation completed successfully and click Dismiss.



Disabling replication on PowerFlex storage-only nodes
Use this workflow to disable replication on PowerFlex storage-only nodes.

Steps
1. Freeze the remote consistency group.
2. Remove the remote consistency group.
3. Remove a peer system.
4. Remove replication trust for the peer system, including its certificates.
5. Enter SDS into maintenance mode.
6. Remove the storage data replication from PowerFlex.
7. Remove a storage data replication RPM.
8. Clean up the network configurations.
9. Exit SDS from maintenance mode.
10. Remove the journal capacity.
11. Remove the target volumes from the destination system.

Freeze the remote consistency group


Perform this procedure to freeze the remote consistency group (RCG). Freeze stops writing data from the target journal to the
target volume. Use this option while creating a snapshot or copying the replicated volume.

Steps
1. Connect to the PowerFlex GUI presentation server through https://<ipaddress>:8443 using MDM.
2. From the left pane, click Protection > RCGs.
3. In the right pane, select the relevant RCG check box, click More > Freeze, and click Apply.
4. Verify that the operation completes successfully and click Dismiss.

Remove the remote consistency group


Use this procedure to remove the volume pairs and stop all remote consistency group (RCG) replication input and output.

Steps
1. Connect to the PowerFlex GUI presentation server through https://<ipaddress>:8443 using MDM.
2. From the left pane, click Protection > RCGs.
3. In the right pane, select the relevant RCG, and click More > Remove RCG.
4. Verify that the operation completes successfully, and click Dismiss.

Remove a peer system


Use this procedure to remove replication between the peer systems.

Steps
1. Connect to the PowerFlex GUI presentation server through https://<ipaddress>:8443 using MDM.
2. From the left pane, click Protection > Peer Systems.
3. In the right pane, select the relevant peer system, and click Remove.
4. Verify that the operation completes successfully, and click Dismiss.



Remove replication trust for peer system
Use this optional procedure to remove the trusted certificates from the source and target systems.

Steps
1. Open an SSH session using PuTTY or a similar SSH client.
2. Log in to the primary MDM with admin credentials.
3. In the PowerFlex CLI, type scli --list_trusted_ca to display the list of trusted certificates in the system. Note the
fingerprint details.
4. Type scli --remove_trusted_ca --fingerprint <fingerprint> to remove the certificate.
5. Verify that the following message is received:
The Certificate was successfully removed.
6. Remove the certificate files and their trusted CA entries on both systems. For example, on the system that holds the
target certificate:

rm /tmp/target.crt
scli --list_trusted_ca
9A:14:00:5F:3F:A0:01:73:D9:8F:69:E3:9C:53:C5:FB:CB:7B:AE:CA
scli --remove_trusted_ca --fingerprint 9A:14:00:5F:3F:A0:01:73:D9:8F:69:E3:9C:53:C5:FB:CB:7B:AE:CA

And on the system that holds the source certificate:

rm /tmp/source.crt
scli --list_trusted_ca
E4:07:A4:BF:A3:2B:6B:DD:93:F4:76:87:C0:8A:8C:6D:31:83:7A:23
scli --remove_trusted_ca --fingerprint E4:07:A4:BF:A3:2B:6B:DD:93:F4:76:87:C0:8A:8C:6D:31:83:7A:23
7. Verify that the following message is received:
The Certificate was successfully removed.

Enter SDS in maintenance mode


Use this procedure to place an SDS into maintenance mode to perform nondisruptive maintenance on the SDS.

About this task


Perform this procedure if you need to clean the network configurations.

Steps
1. Connect to the PowerFlex GUI presentation server through https://<ipaddress>:8443 using MDM.
2. In the left pane, click Configuration > SDSs.
3. In the right pane, select the relevant SDS and click More > Enter Maintenance Mode.
4. In the Enter SDS into Maintenance Mode dialog box, select Instant. If maintenance mode takes more than 30 minutes,
select PMM.
5. Click Enter Maintenance Mode.
6. Verify that the operation completes successfully and click Dismiss.

Remove storage data replication from PowerFlex


Use this procedure to remove storage data replication from PowerFlex.

Steps
1. Connect to the PowerFlex GUI presentation server through https://<ipaddress>:8443 using MDM.
2. In the left pane, click Protection > SDRs.
3. In the right pane, select the SDR Name and click More > Remove.
4. Repeat for all SDRs.



Remove a storage data replication RPM
Use this procedure to remove a storage data replication RPM.

Steps
1. SSH to the PowerFlex node.
2. List all installed Dell EMC RPMs on a PowerFlex node by entering the following command: rpm -qa | grep -i emc.
3. Identify the SDR rpm - EMC-ScaleIO-sdr-x.x.xxx.el7.x86_64.rpm.
4. Remove the RPM by entering the following command: rpm -e EMC-ScaleIO-sdr-x.x.xxx.el7.x86_64 (the installed
package name, without the .rpm extension).
5. Verify that the RPM is removed and the service is stopped.
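A minimal verification sketch (the grep pattern is illustrative; the exact package name varies by release):

rpm -qa | grep -i sdr

No output indicates that the SDR package has been removed.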

Clean up network configurations


Use this procedure to clean a network configuration.

About this task


If this network is used for other functions, these steps are optional.

Steps
1. Remove the route-bond# files that are associated with the replication network, using the following commands:
cd /etc/sysconfig/network-scripts/

rm route-bond(x).xxx

Repeat this command for the second route.

2. Remove the ifcfg-bond# files that are associated with the replication network, using the following commands:
cd /etc/sysconfig/network-scripts/

rm ifcfg-bond(x).xxx

Repeat this command for the second interface.
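For example, if the replication network uses VLANs 161 and 162 on bond1 (illustrative values; substitute your bond
interface and VLAN IDs):

cd /etc/sysconfig/network-scripts/
rm route-bond1.161 route-bond1.162
rm ifcfg-bond1.161 ifcfg-bond1.162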

Exit SDS in maintenance mode


Use this procedure to exit an SDS from maintenance mode.

Steps
1. Connect to the PowerFlex GUI presentation server through https://<ipaddress>:8443 using MDM.
2. In the left pane, click Configuration > SDSs.
3. In the right pane, select the relevant SDS and click More > Exit Maintenance Mode.
4. In the Exit SDS from Maintenance Mode dialog box, select Instant.
5. Click Exit Maintenance Mode.
6. Verify that the operation completes successfully and click Dismiss.
Repeat for each PowerFlex node in the protection domain.



Remove journal capacity
Use this procedure to remove the journal capacity.

Steps
1. Connect to the PowerFlex GUI presentation server through https://<ipaddress>:8443 using MDM.
2. From the left pane, click Protection > Journal Capacity.
3. In the right pane, select the Protection Domain, and click Remove.
4. Verify that the operation completes successfully and click Dismiss.

Remove target volumes from the destination system


Use this procedure to remove target volumes from the destination system.

Steps
1. Connect to the PowerFlex GUI presentation server through https://<ipaddress>:8443 using MDM.
2. Remove the volumes used as target in the volume pair.
3. From the left pane, click Configuration > Volumes.
4. In the right pane, select the target volumes.
5. Click More > Remove.
6. Select Remove volume with all of its snapshots.
7. Click Remove.
8. Verify that the operation completes successfully and click Dismiss.



5
Configuring and viewing alerts
You can configure PowerFlex Manager to receive and display email alerts from discovered PowerFlex appliance components.
The alert connector is available through PowerFlex Manager. It sends email alerts on the health of PowerFlex nodes securely
through Secure Remote Services. Secure Remote Services routes alerts to the Dell EMC support queue for diagnosis and
dispatch.
When using the alert connector with Secure Remote Services, critical alerts can automatically generate service requests.
Dell Technologies Services continuously evaluates and updates which alerts automatically generate service requests. For more
information, contact Dell Technologies Services.
During node discovery, you can configure iDRAC nodes to automatically send alerts to PowerFlex Manager. PowerFlex Manager
receives SNMP alerts directly from iDRAC and forwards them to Secure Remote Services. You must manually configure
CloudLink and Dell EMC Networking OS10 switches to send alerts to PowerFlex Manager.
If not done at discovery, you can configure iDRAC nodes to automatically send alerts to PowerFlex Manager by editing the alert
connector settings and selecting the Configure nodes for alert connector option.
PowerFlex Manager fetches telemetry reports from PowerFlex and forwards these reports as is to Secure Remote Services.
The reports are then sent to the Dell Managed File Transfer (MFT) portal, where they can be leveraged by CloudIQ.
PowerFlex Manager forwards four different reports:
● The configuration report is sent once a day.
● The capacity report is sent every hour.
● The performance report is sent every five minutes.
● The alerts report is sent every five minutes.
CloudIQ integration is enabled by default. CloudIQ enables PowerFlex Manager to transport telemetry data, alerts and analytics
via Secure Remote Services to assist Dell EMC in providing support.
NOTE: The alert connector does not replace any monitoring software that you might already have, including any already
available through the PowerFlex appliance such as PowerFlex Connect Home.
As of PowerFlex Manager version 3.3, OpenManage Enterprise is no longer required to connect to Secure Remote Services. If
you use OpenManage Enterprise for other functionality, note that it is no longer installed and we recommend using PowerFlex
Manager instead. In future upgrades, we will recommend removing this software module.

Configure the alert connector


Configure the alert connector to register the device with Secure Remote Services using a unique software ID.

About this task


Configuring the alert connector enables critical and error alerting for node and PowerFlex resources that are managed by
PowerFlex Manager.
CloudIQ is enabled by default.

Prerequisites
Before you configure the alert connector, ensure:
● The primary MDM in the PowerFlex cluster is valid and up and running.
● Secure Remote Services gateway is configured in the data center and connected to Secure Remote Services.

Steps
1. Log in to PowerFlex Manager (username: admin and password: admin).
2. On the menu bar, click Settings and click Virtual Appliance Management.
3. Click Add in the Alert connector section.



4. Complete the following steps in the Device Registration section:
a. Select the device type.
b. Enter your unique software ID in the Enterprise License Management Systems (ELMS) Software Unique ID box. For
information about how to obtain the ID, see the License Authorization email that you received.
c. Enter the unique number associated with your system in the Solution Serial Number box, for example, V1234567.
d. Select one or more of the following options for the Connection type:
● Secure Remote Services
● Email
e. Optionally, disable CloudIQ integration by clearing Enable CloudIQ.
f. Select the severity level for which you want to see alerts by choosing one of the following Alert Filter values:
● Critical (Recommended)
● Warning
● Info
g. Specify how often you want to check for alerts by entering an Alert Polling Interval value in hours or minutes.
5. For a Secure Remote Services configuration, complete the following steps in the Secure Remote Services Section under
Connector Settings:
a. Enter a node address for the Secure Remote Services gateway in the SRS Gateway Host IP or FQDN field.
NOTE: Secure Remote Services support recommends using the IP address when registering.

b. Enter the port number in the SRS Gateway Host Port field.
c. Enter the required username in the User ID field.
d. Enter the required password in the Password or NT Token field.
6. For an email configuration, complete the following steps in the Email Server Configuration under Connector Settings:
a. Choose the Server type.
● SMTP
● SMTPS over SSL
● SMTPS STARTTLS
b. Enter an IP address or fully qualified domain name for the email server in the Server IP or FQDN field.
c. Enter the port number for the email server in the Port field.
d. Enter the required username in the User ID field.
e. Enter the required password in the Password field.
f. Enter the email address for the sender in the Sender Address field.
g. Enter one or more email recipient addresses.
7. Click Save.
8. Click Send Test Alert to verify that the alert connector is receiving alerts.
9. Click Test Connection to verify the connection.
When the device is registered for alerting, topology and telemetry reports are automatically sent to Secure Remote Services
weekly, starting at the time that the device was registered.

Configuring SNMP trap and syslog forwarding


You can configure PowerFlex Manager for SNMP trap and syslog forwarding.
Configure SNMP communication to enable PowerFlex Manager to receive and forward SNMP traps. PowerFlex Manager can
receive SNMP traps from system devices and forward them to one or more remote network management systems.
You can configure PowerFlex Manager to forward syslogs it receives from system components to a remote network
management system. Authentication is provided by PowerFlex Manager, through the configuration settings you provide.



Configure SNMP trap forwarding
To configure SNMP trap forwarding, specify the access credentials for the SNMP version you are using and then add the
remote server as a trap destination.

About this task


PowerFlex Manager supports different SNMP versions, depending on the communication path and function. The following table
summarizes the functions and supported SNMP versions:

Function SNMP version


PowerFlex Manager receives traps from all devices, including iDRAC v2
PowerFlex Manager receives traps from iDRAC devices only v3
PowerFlex Manager forwards traps to the network management system v2, v3

NOTE: SNMPv1 is supported wherever SNMPv2 is supported.

PowerFlex Manager can receive an SNMPv2 trap and forward it as an SNMPv3 trap.
SNMP trap forwarding configuration supports multiple forwarding destinations. If you provide more than one destination, all
traps coming from all devices are forwarded to all configured destinations in the appropriate format.
PowerFlex Manager stores up to 5 GB of SNMP alerts. Once this threshold is exceeded, PowerFlex Manager automatically
purges the oldest data to free up space.
For SNMPv2 traps to be sent from a device to PowerFlex Manager, you must provide PowerFlex Manager with the community
strings on which the devices are sending the traps. If during resource discovery you selected to have PowerFlex Manager
automatically configure iDRAC nodes to send alerts to PowerFlex Manager, you must enter the community string used in that
credential here.
For a network management system to receive SNMPv2 traps from PowerFlex Manager, you must provide the community
strings to the network management system. This configuration happens outside of PowerFlex Manager.
For a network management system to receive SNMPv3 traps from PowerFlex Manager, you must provide the PowerFlex
Manager engine ID, user details, and security level to the network management system. This configuration happens outside of
PowerFlex Manager.

Prerequisites
PowerFlex Manager and the network management system use access credentials with different security levels to establish
two-way communication. Review the access credentials that you need for each supported version of SNMP. Determine the
security level for each access credential and whether the credential supports encryption.
To configure SNMP communication, you need the access credentials and trap targets for SNMP, as shown in the following
table:

If adding... You must know...


SNMPv2 Community strings by which traps are received and forwarded
SNMPv3 User and security settings

Steps
1. Log in to PowerFlex Manager.
2. On the menu bar, click Settings, and click Virtual Appliance Management.
3. On the Virtual Appliance Management page, in the SNMP Trap Configuration section, click Edit.
4. To configure trap forwarding as SNMPv2, click Add community string. In the Community String box, provide the
community string by which PowerFlex Manager receives traps from devices and by which it forwards traps to destinations.
You can add more than one community string. For example, add more than one if the community string by which PowerFlex
Manager receives traps differs from the community string by which it forwards traps to a remote destination.



NOTE: An SNMPv2 community string that is configured in the credentials during discovery of the iDRAC or through
management is also displayed here. You can create a new community string or use the existing one.

5. To configure trap forwarding as SNMPv3, click Add User. Enter the Username, which identifies the ID where traps are
forwarded on the network management system. The username must be at most 16 characters. Select a Security Level:

Security Level   Details        Description                                     authPassword                            privPassword

Minimal          noAuthNoPriv   No authentication and no encryption             Not required                            Not required
Moderate         authNoPriv     Messages are authenticated but not encrypted    Required (MD5, at least 8 characters)   Not required
Maximum          authPriv       Messages are authenticated and encrypted        Required (MD5, at least 8 characters)   Required (DES, at least 8 characters)

Note the current engine ID (automatically populated), username, and security details. Provide this information to the remote
network management system so it can receive traps from PowerFlex Manager.
You can add more than one user.

6. In the Trap Forwarding section, click Add Trap Destination to add the forwarding details.
a. In the Target Address (IP) box, enter the IP address of the network management system to which PowerFlex Manager
forwards SNMP traps.
b. Provide the Port for the network management system destination. The standard SNMP trap port is 162.
c. Select the SNMP Version for which you are providing destination details.
d. In the Community String/User box, enter either the community string or username, depending on whether you are
configuring an SNMPv2 or SNMPv3 destination. For SNMPv2, if there is more than one community string, select the
appropriate community string for the particular trap destination. For SNMPv3, if there is more than one user-defined,
select the appropriate user for the particular trap destination.
7. Click Save.
The Virtual Appliance Management page displays the configured details as shown below:
Trap Forwarding <destination-ip>(SNMP v2 community string or SNMP v3 user)

NOTE: To configure nodes with PowerFlex Manager SNMP changes, go to Settings > Virtual Appliance
Management, and click Configure nodes for alert connector.

Configure syslog forwarding


You can configure PowerFlex Manager to forward syslogs it receives from system components to a remote network
management system. PowerFlex Manager provides authentication through the configuration settings you provide.

About this task


You can configure PowerFlex Manager to forward syslogs to up to five destination remote servers. You can set only one
forwarding entry per remote server.
You can apply forwarding filters based on facility type and severity level. For example, you can configure PowerFlex Manager to
forward all syslog messages to one remote server and then forward syslog messages of a given severity to a different remote
server. The default is to forward syslog messages of all facilities and severity levels to the remote syslog server.



Prerequisites
Ensure that the system components are configured to send syslog messages to PowerFlex Manager. This configuration happens
outside of PowerFlex Manager.
Ensure that you have the following information:
● Obtain the IP address or hostname of the remote syslog server and the port where the server is accepting syslog
messages.
● If sending only some syslog messages to a remote server, you must know the facility and severity of the log messages to
forward.

Steps
1. Log in to PowerFlex Manager.
2. On the menu bar, click Settings and click Virtual Appliance Management.
3. On the Virtual Appliance Management page, in the Syslog section, click Edit.
4. Click Add syslog forward.
5. For Host, enter the destination IP address of the remote server to which you want to forward syslogs.
6. Enter the destination Port where the remote server is accepting syslog messages. The default is 514.
7. Select the network Protocol used to transfer the syslog messages. The default is UDP.
8. Optionally enter the Facility and Severity Level to filter the syslogs that are forwarded. The default is to forward all.
9. Click Save to add the syslog forwarding destination.
The Virtual Appliance Management page displays the configured details as shown below:
Syslog Forwarding <destination-ip>(<Facility><Severity Level>)



6
Administering PowerFlex Manager
This section includes information about key PowerFlex Manager activities.
These activities include:
● Backing up and restoring data
● Adding or modifying user accounts
● Assigning users to services
● Recovering lost passwords

Back up and restore PowerFlex Manager


Use this task to schedule backups, perform an immediate backup, and perform a restore.

About this task


Performing a backup saves all user-created data to a remote share from which it can be restored.

Steps
1. Log in to PowerFlex Manager.
2. Click Settings and click Backup and Restore.
3. The Backup and Restore page displays information about the last backup operation that was performed on the PowerFlex
Manager virtual appliance. Information in the Settings and Details section applies to both manual and automatically
scheduled backups and includes the following:
● Last backup date
● Last backup status
● Backup directory path to an NFS or a CIFS share
● Backup directory username

4. The Backup and Restore page also displays information about the status of automatically scheduled backups (enabled or
disabled).
On this page, you can:
● Manually start an immediate backup - using Backup Now option
● Restore an earlier configuration - using Restore Now option
● Edit general backup settings
● Edit automatically scheduled backup settings

Add or modify user accounts


Add or modify user accounts using PowerFlex Manager.

Steps
Log in to PowerFlex Manager.



If you want to ... Do this...
Create a user a. On the menu bar, click Settings and click Users.
b. On the Users page, click Create.
c. Enter current password (password of user that is currently logged in).
d. Enter a unique User Name to identify the user account.
e. Enter a Password that the user enters to access PowerFlex Manager. Confirm the password. The
password must be 8–32 characters and include at least one number, one capital letter, and one
lowercase letter.
f. Enter the user’s First Name and Last Name.
g. From the Role drop-down list, select one of the following roles:
● Administrator
● Standard
● Read-only
● Operator
h. Enter the Email address and Phone number for contacting the user.
i. Select Enable User to create the account with an Enabled status, or clear this option to
create the account with a Disabled status.
j. Click Save.
Edit a user a. On the menu bar, click Settings and click Users.
b. On the Users page, select the user account that you want to edit.
c. Click Edit. For security purposes, confirm your password before editing the user.
d. You can modify the following user account information from this window:
● First name
● Last name
● Role
● Email
● Phone
● Enable user (If you select the Enable user check box, the user can log in to PowerFlex
Manager. If you clear the check box, the user cannot log in.)
e. Click Save.
Delete a user a. On the menu bar, click Settings and click Users.
b. On the Users page, select the user account that you want to delete.
c. Click Delete. Click Yes in the warning message to delete the accounts.
Enable or disable a user a. On the menu bar, click Settings and click Users.
b. On the Users page, select one or more user accounts to enable/disable.
c. In the menu, click Enable or Disable to update the state to enabled or disabled, as selected.
NOTE: For a user account whose State is already Enabled, the Enable option in the menu is
deactivated; for a user account whose State is already Disabled, the Disable option in the menu
is deactivated.

Assigning users to services


You can assign users to services using PowerFlex Manager.

Steps
1. Log in to PowerFlex Manager.
2. On the menu bar, click Services.
3. On the Services page, click the service, and in the right pane of the Service Details page, click View Details.
4. On the Service Details page, in the right pane, click Edit.
5. Specify permissions for the service under Who should have access to the service deployed from this template?.
● Only PowerFlex administrators - Only users with administration rights can access the service



● PowerFlex Manager administrators and specific standard and operator users - This option restricts
access to specific users
● PowerFlex Manager administrators and all standard and operator users - Allows all users
6. Click Save.

Recovering a lost password


To recover a lost password, contact Dell Technologies Support.

Access switch password management


Use this procedure to change the password of the access switches.

About this task


During predeployment, the person doing the installation sets the password for the access switches. During deployment of
PowerFlex Manager, you also set the switch password in PowerFlex Manager. When the access switch password is changed
after deployment, you must also change the access switch password within PowerFlex Manager to maintain manageability by
PowerFlex Manager.
The terms <OLD_PASSWORD> and <NEW_PASSWORD> represent the current and new passwords, respectively.

Steps
1. Change the PowerFlex Manager access switch password by doing the following:
a. In PowerFlex Manager, go to Settings > Credentials Management, select the access switch credential, click Edit,
change the Password to the <NEW_PASSWORD>, and click Save. See Credentials management for more information.
2. Change the password of the access switches by doing the following:
a. Use an SSH client program like PuTTY to log in to an access switch console.
b. Type the following commands:

Switch type            Command

Dell EMC PowerSwitch   configure
                       username admin password <NEW_PASSWORD> privilege 15
                       end
                       copy running-config startup-config

Cisco Nexus            configure
                       username admin password 0 <NEW_PASSWORD>
                       end
                       copy running-config startup-config

3. Test the changes.


a. In the PowerFlex Manager GUI, go to the Resources page, select the access switches, and click Run Inventory.
b. To confirm that the process completes with no errors, check Settings > Logs.



VMware vCenter password management
Use this procedure to change the VMware vCenter password.

About this task


During deployment of PowerFlex appliance, the person doing the installation sets the VMware vCenter password in PowerFlex
Manager. When the VMware vCenter password is changed after deployment, the password must also be changed within
PowerFlex Manager to maintain manageability by PowerFlex Manager.
The terms <OLD_PASSWORD> and <NEW_PASSWORD> represent the current and new passwords, respectively.

Steps
1. Change the PowerFlex Manager VMware vCenter password by completing the following:
a. In PowerFlex Manager, go to Settings > Credentials Management, select the VMware vCenter credential, click Edit,
change the Password to the <NEW_PASSWORD>, and click Save. See Credentials management for more information.
2. Change the VMware vCenter password by completing the following:
a. Log in to the VMware vCenter web interface using the <OLD_PASSWORD>.
b. Click the username in upper right of page and select Change password.
c. Type the <OLD_PASSWORD> and the <NEW_PASSWORD> and click OK.
3. Test the changes. Even though the cluster is operating properly, because of the time between changing the password in
PowerFlex Manager and changing the password in VMware vCenter, nodes may show a critical error on the Services page in
PowerFlex Manager. The following steps return the nodes to the healthy state.
a. In the PowerFlex Manager GUI, go to Resources page, select vCenter and click Run Inventory.
b. To confirm that the process completes with no errors, check Settings > Logs.
c. In the PowerFlex Manager GUI, go to Services page for ESXi nodes and click Update Service Details.
d. After Update Service Details completes the process, confirm that all cluster objects report as healthy (green check
mark).

VMware ESXi operating system password management
Use this procedure to change the VMware ESXi operating system root password.

About this task


During deployment of PowerFlex appliance, the person completing the installation sets the VMware ESXi operating system
password in PowerFlex Manager. When PowerFlex Manager deploys VMWare ESXi, it sets the password in the operating
system. When the VMware ESXi operating system password is changed after deployment, the ESXi operating system password
must also be changed within PowerFlex Manager to maintain manageability by PowerFlex Manager.
In the following procedure, the terms <OLD_PASSWORD> and <NEW_PASSWORD> represent the current and new
passwords, respectively.

Steps
1. To change the PowerFlex Manager VMware ESXi operating system password, complete the following:
a. In PowerFlex Manager, go to Settings > Credential Management, select the VMware ESXi operating system
credential, click Edit, change the Password to the <NEW_PASSWORD>, and click Save. See Credentials management for
more information.
2. To change the VMware ESXi operating system root password on every hyperconverged or PowerFlex compute-only node,
complete the following:
a. Log in to VMWare ESXi web interface on the PowerFlex node using root and the <OLD_PASSWORD>.
b. In upper right of page, click the root@<ip address>, and select Change password.
c. Type the <NEW_PASSWORD> twice and click Change password.



3. Test the changes: Even though the cluster is operating properly, because of the time between changing the password in
PowerFlex Manager and changing the password in the VMware ESXi operating system, nodes may show a critical error on
the Services page in PowerFlex Manager. The following steps return the nodes to the healthy state.
a. In the PowerFlex Manager GUI, go to Resources page, select the VMware ESXi nodes and VMware vCenter and click
Run Inventory.
b. To confirm that the process completes with no errors, check Settings > Logs.
c. In the PowerFlex Manager GUI, go to Services page for VMware ESXi nodes and click Update Service Details.
d. After the Update Service Details process completes, confirm that all cluster objects report as healthy (green check
mark).

Adding a non-root user to VMware ESXi


Steps
1. Log in to the VMware ESXi UI (enter the host IP address in a browser and log in as the root user).
2. Go to Manage > Security & Users > Users.
3. Click Add user.
4. Provide username and password.
5. Confirm password and click Add.
6. Go to Host > Actions > Permissions.
7. Click Add User and select the user that you created from the drop-down menu.
8. Select administrator in the second menu and click Add user.
Once the user is added, it appears with the administrator role.
9. Verify the login with the new user.

Minimum VMware vCenter permissions


PowerFlex Manager supports managing VMware vCenter objects without root permissions. There are three PowerFlex Manager
VMware vCenter management modes available:
● Monitoring mode
● Lifecycle mode
● Management mode
These procedures provide information for creating VMware vCenter user accounts with access for each of the listed modes.
VMware vCenter default permissions meet the specified requirements, so no additional permission changes are needed.

Create a user in monitoring mode


Steps
1. Log in to VMware vSphere client, select Administration > Users and Groups.
2. Click Add User to create a user account and enter the username and password.
3. In root view of the VMware vSphere client, click Administration and select Roles.
a. Create a new role and select the following permissions:
● Profile-driven storage > Profile-driven storage view
● VM > Read customization specifications (under provisioning)
b. Assign a name to the new role.
4. Click Hosts and Clusters and right-click the VMware vCenter.
a. Choose Add permission and select the user account previously created.
b. Select the name of the role you created and check Propagate to children.
5. Log in to PowerFlex Manager.
6. Click Settings and create a new credential of type vCenter.



NOTE: Ensure the username and password coincide with the vSphere credentials created earlier.

7. Create a user credential for the vCenter server that matches the account created in vCenter earlier.
8. Add the vCenter server object to the inventory using those credentials from PowerFlex Manager. For more information on
the PowerFlex Manager credential creation see the PowerFlex Manager online help.

Create a user in lifecycle mode


Steps
1. Log in to VMware vSphere client, select Administration > Users and Groups.
2. Click Add User to create a user account and enter the username and password.
3. In root view of the VMware vSphere client, click Administration and select Roles.
a. Create a new role and select the following permissions:
● Profile-driven storage > Profile-driven storage view
● VM > Read customization specifications (under provisioning)
● Host: Connection, firmware, maintenance, power, query patch, system management, system resources
b. Assign a name to the new role.
4. Click Hosts and Clusters and right-click the VMware vCenter.
a. Choose Add permission and select the user account previously created.
b. Select the name of the role you created and check Propagate to children.
5. Log in to PowerFlex Manager.
6. Click Settings and create a new credential of type vCenter.
NOTE: Ensure the username and password coincide with the vSphere credentials created earlier.

7. Create a user credential for the vCenter server that matches the account created in vCenter earlier.
8. Add the vCenter server object to the inventory using those credentials from PowerFlex Manager. For more information on
the PowerFlex Manager credential creation see the PowerFlex Manager online help.

Create a user in managed mode


Steps
1. Log in to VMware vSphere client, select Administration > Users and Groups.
2. Click Add User to create a user account and enter the username and password.
3. In root view of the VMware vSphere client, click Administration and select Roles.
4. Click Hosts and Clusters and right-click the VMware vCenter.
a. Choose Add permission and select the user account previously created.
b. Select the name of the role you created and check Propagate to children.
5. Log in to PowerFlex Manager.
6. Click Settings and create a new credential of type vCenter.
NOTE: Ensure the username and password coincide with the vSphere credentials created earlier.

7. Create a user credential for the vCenter server that matches the account created in vCenter earlier.
8. Add the vCenter server object to the inventory using those credentials from PowerFlex Manager. For more information on
the PowerFlex Manager credential creation see the PowerFlex Manager online help.



Windows Server operating system password management
Use this procedure to change the Windows server operating system password.

About this task


The terms <OLD_PASSWORD> and <NEW_PASSWORD> represent the current and new passwords, respectively.

Steps
1. Log in to PowerFlex Manager GUI (admin/admin) from a web browser.
2. In PowerFlex Manager, go to Settings > Credentials Management, select the Windows Compute-Only nodes
credential. See Credentials management for more information.
3. Click Edit and change the password to the <NEW_PASSWORD> and click Save.
4. To change the Windows Server operating system password on every hyperconverged or compute-only node, complete the
following:
a. Log in to the server either directly or by using Remote Desktop.
b. Right-click Computer, and select Manage.
c. Select Configuration.
d. Click Local Users and Groups > Users.
e. Find and right-click the Administrator user.
f. Click Set Password > Proceed.
g. Type and confirm the new password.
5. Test the changes. The PowerFlex nodes may show a critical error. The error is due to the time lag between changing the
password in PowerFlex Manager and changing the password in the Windows operating system. The following steps return
the PowerFlex nodes to a healthy state:
a. In the PowerFlex Manager GUI, go to Resources page, select the Windows CO nodes, and click Run Inventory.
b. To confirm that the process completes with no errors, check Settings > Logs.
c. In the PowerFlex Manager GUI, go to the Services page for the Windows compute-only nodes and click Update Service
Details.
d. After the Update Service Details process completes, confirm that all cluster objects report as healthy (green check
mark).

Updating passwords in PowerFlex Manager


Use this procedure to update the passwords for the iDRAC, VMware ESXi compute-only node, and SVM storage-only node
operating systems.

Steps
1. Log in to PowerFlex Manager.
2. Go to the Resources page, select the required node, and click Update Password.
3. In the Update Password wizard, select the component for which you want to update the password and click Next.
4. In the Select Credentials page, select the new credential from the menu or create a credential.
5. Click Finish.
6. Click Yes to confirm.

Update passwords for the PowerFlex Gateway


Steps
1. Log in to PowerFlex Manager.
2. Go to the Resources page, select PowerFlex Gateway and click Update Password.



3. In the Update Password wizard, select the Component and click Next.
4. Select the new credential (which includes admin and root) from the menu or create a credential and click Finish.
5. Click Yes to confirm.
6. Once completed, verify both the PowerFlex Gateway UI and operating system logins.

Updating passwords for PowerFlex Gateway components


You can update the passwords for one or more PowerFlex Gateway components from PowerFlex Manager.

Steps
1. On the menu bar, click Resources.
2. On the All Resources tab, select one or more PowerFlex Gateway components for which you want to change the
passwords.
3. Click Update Password.
PowerFlex Manager displays the Update Password wizard.
4. On the Select Components page, select PowerFlex Password.
5. Click Next.
6. On the Select Credentials page, create a credential with a new password or change to a different credential.
a. Open the PowerFlex ( n ) object under the Type column to see details about each gateway you selected on the
Resources page.
b. To create a credential that has the new password, click the plus sign (+) under the Credentials column.
Specify the Credential Name, as well as the Gateway Admin User Name and Gateway OS User Name for which you
want to change passwords. Enter the new passwords for both users and confirm these passwords.

c. To modify the credential, click the pencil icon for one of the nodes under the Credentials column and select a different
credential.
d. Click Save.
7. Click Finish.
8. Click Yes to confirm.

Results
PowerFlex Manager starts a new job for the password update operation, and a separate job for the device inventory. If
PowerFlex Manager is managing a cluster for any of the selected PowerFlex Gateway components, it updates the credentials
for the Gateway Admin User and Gateway OS User, as well as any related credentials, such as the LIA and lockbox
credentials. If PowerFlex Manager is not managing the cluster, it only updates the credentials for the Gateway Admin User and
Gateway OS User.

Updating passwords for system components


You can update the passwords for some system components from PowerFlex Manager.

Steps
1. On the menu bar, click Resources.
2. On the All Resources tab, select one or more resources of the same type for which you want to change passwords.
For example, you could select one or more iDRAC nodes or you could select one or more PowerFlex Gateway components.
3. Click Update Password.
PowerFlex Manager displays the Update Password wizard.
4. On the Select Components page, select one or more components for which you want to update a password and click
Next.
The component choices vary depending on which resource type you initially selected on the Resources page.
5. On the Select Credentials page, create a credential or change to a different credential having the same username.
6. Click Finish and click Yes to confirm the changes.



Updating passwords for nodes
You can update the passwords for one or more nodes from PowerFlex Manager.

Steps
1. On the menu bar, click Resources.
2. On the All Resources tab, select one or more nodes for which you want to change the passwords.
3. Click Update Password.
PowerFlex Manager displays the Update Password wizard.
4. On the Select Components page, specify which passwords you want to update for the selected nodes by clicking one or
more of the following check boxes.
● iDRAC Password
● Node Operating System Password
● SVM Operating System Password
PowerFlex Manager does not support password changes for the Windows operating system.

5. Click Next.
6. On the Select Credentials page, create a credential with a new password or change to a different credential.
a. Open the iDRAC ( n ) object under the Type column to see details about each node you selected on the Resources
page.
b. To create a credential that has the new password, click the plus sign (+) under the Credentials column.
Specify the Credential Name and the User Name for which you want to change the password. Enter the new password
in the Password and Confirm Password fields.

c. To modify the credential, click the pencil icon for the nodes under the Credentials column and select a different
credential.
d. Click Save.
You must perform the same steps for the node operating system and SVM operating system password changes. For a node
operating system credential, only the OS Admin credential type is updated.

7. Click Finish.
8. Click Yes to confirm.

Results
PowerFlex Manager starts a new job for the password update operation, and a separate job for the device inventory. The
node operating system and SVM operating components are updated only if PowerFlex Manager is managing a cluster with the
operating system and SVM. If PowerFlex Manager is not managing a cluster with these components, these components are not
displayed and their credentials are not updated. Credential updates for iDRAC are allowed for managed and reserved nodes only.
Unmanaged nodes do not provide the option to update credentials.

Embedded operating system password management


Use this procedure to change the embedded operating system root password. The embedded operating system is the Linux
operating systems on PowerFlex storage-only nodes.

About this task


During deployment of PowerFlex appliance, the person doing the installation sets the embedded operating system password in
PowerFlex Manager. When PowerFlex Manager deploys the embedded operating system, it sets the password in the operating
system. When the embedded operating system password is changed after deployment, you must also change the embedded
operating system password within PowerFlex Manager to maintain manageability by PowerFlex Manager.
The terms <OLD_PASSWORD> and <NEW_PASSWORD> represent the current and new passwords, respectively.

Steps
1. To change the PowerFlex Manager embedded operating system password, complete the following:



a. In PowerFlex Manager, go to Settings > Credential Management, select the embedded operating system credential,
click Edit, change the Password to the <NEW_PASSWORD>, and click Save. See Credentials management for more
information.
2. To change the embedded operating system root password on every PowerFlex storage-only node, complete the following:
a. Use an SSH client program like PuTTY to log in as root to the embedded operating system console using the
<OLD_PASSWORD>.
b. Change the embedded operating system root password using passwd command:

[root@node1 ~]# passwd


Changing password for user root.
New password: <NEW_PASSWORD>
Retype new password: <NEW_PASSWORD>
passwd: all authentication tokens updated successfully.

3. Test the changes: Even though the cluster is operating properly, because of the time between changing the password in
PowerFlex Manager and changing the password in the embedded operating system, nodes may show a critical error on the
Services page in PowerFlex Manager. The following steps return the nodes to the healthy state.
a. In the PowerFlex Manager GUI, go to Resources page, select the embedded operating system nodes, and click Run
Inventory.
b. To confirm that the process completes with no errors, check Settings > Logs.
c. In the PowerFlex Manager, go to Services page for embedded operating system nodes and click Update Service
Details.
d. After the Update Service Details process completes, confirm that all cluster objects report as healthy (green check
mark).

Adding users
Steps
1. If you are signed in as the root user, you can create a user at any time by typing: adduser username.
2. If you are a sudo user, add a new user by typing: sudo adduser username.
3. Give your user a password so that they can log in by typing: passwd username.

NOTE: If you are signed in as a nonroot user with sudo privileges, add sudo ahead of the command.

4. Type in the password twice to confirm it.


The user is set up and ready for use.

Granting sudo privileges to a user


If the new user should have the ability to run commands with root (administrative) privileges, you must give the new user
access to sudo.

Steps
To grant sudo privileges, add the user to the wheel group (which gives sudo access to all its members by default) using
gpasswd.

If you are logged in as ... Type the following:


root user gpasswd -a username wheel

nonroot user with sudo privileges sudo gpasswd -a username wheel

Now the new user can run commands with administrative privileges. Type sudo ahead of the command that you want to run as
an administrator:
sudo some_command
You are prompted to enter the password of the regular user account that you are signed in as. Once the correct password has
been submitted, the command you entered is performed with root privileges.
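For example, to view a log file that is readable only by root (the file path is illustrative):

sudo tail /var/log/messages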



Managing users with sudo privileges
About this task
While you can add and remove users from a group (such as wheel) with gpasswd, the command does not have a way to show
which users are members of a group. To see which users are part of the wheel group (and thus have sudo privileges by
default), use the lid command. lid is normally used to show which groups a user belongs to, but with the -g flag you can
reverse it and show which users belong to a group: sudo lid -g wheel. The output shows the usernames and unique
identifiers (UIDs) that are associated with the group. This is a good way of confirming that your previous commands were
successful, and that the user has the privileges that they need.
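An illustrative run (the usernames and UIDs are placeholders):

sudo lid -g wheel
 admin(uid=1000)
 jsmith(uid=1001)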

Deleting users
The choice of deletion method depends on whether you are deleting the user and the user's files, or the user account only.

Steps
1. SSH to the server and log in as root.
2. In the command prompt, choose either of the following:

If you want to delete the user ...                            Type the following:

without deleting any of their files                           userdel username

along with their home directory and the user account itself  userdel -r username

NOTE: Add sudo ahead of the command if you are signed in as a nonroot user with sudo privileges.

With either command, the user is automatically removed from any groups that they were added to. This includes the
wheel group if they were given sudo privileges. If you later add another user with the same name, they have to be added
to the wheel group again to gain sudo access.

Presentation server root password management


Use this procedure to change the presentation server root password.

About this task


During deployment of PowerFlex appliance, the person doing the installation sets the presentation server root password in
PowerFlex Manager. When PowerFlex Manager deploys the presentation server, it sets the password in the operating system.
When the presentation server password is changed after deployment, the presentation server password must also be changed
within PowerFlex Manager to maintain manageability by PowerFlex Manager.
The terms <OLD_PASSWORD> and <NEW_PASSWORD> are used to represent the current and new passwords, respectively.

Steps
1. To change the PowerFlex Manager presentation server root password, do the following:
a. In PowerFlex Manager, go to Settings > Credential Management, select the presentation server credential, click Edit,
change the Password to the <NEW_PASSWORD>, and click Save. See Credentials management for more information.
2. To change the presentation server root password, do the following:
a. Use an SSH client program like PuTTY to log in as root to the presentation server using <OLD_PASSWORD>.
b. Change the presentation server root password using passwd command:

[root@presentation-server ~]# passwd
Changing password for user root.
New password: <NEW_PASSWORD>
Retype new password: <NEW_PASSWORD>
passwd: all authentication tokens updated successfully



Red Hat Enterprise Linux user and password
management
Steps
1. To create a new user, SSH to the jump server, log in as root, and type useradd <options> username.
Where <options> are command-line options as outlined in the following table:

Option Description

-c <comment> <comment> can be replaced with any string. This option is generally used to specify the full
name of a user.

-d home_directory Home directory to be used instead of the default /home/username/.

-e date Date for the account to be disabled, in the format YYYY-MM-DD.

-f days Number of days after the password expires until the account is disabled. If 0 is specified, the
account is disabled immediately after the password expires. If -1 is specified, the account is not
disabled after the password expires.

-g group_name Group name or group number for the user's default (primary) group. The group must exist prior
to being specified here.

-G group_list List of additional (supplementary, other than default) group names or group numbers, separated
by commas, of which the user is a member. The groups must exist prior to being specified here.

-m Create the home directory if it does not exist.

-M Do not create the home directory.

-N Do not create a user private group for the user.

-p password The password encrypted with crypt.

-r Create a system account with a UID less than 1000 and without a home directory.

-s shell User's login shell, which defaults to /bin/bash.

-u uid User ID for the user, which must be unique and greater than 999.

2. By default, useradd creates a locked user account. To unlock the account, run the following command as root to assign a
password: passwd username.
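The options can be combined in a single command. A minimal sketch, assuming a hypothetical user jdoe:

# Create jdoe with a full-name comment, a home directory, a bash login
# shell, and supplementary membership in the wheel group, then unlock
# the account by assigning a password.
useradd -c "Jane Doe" -m -s /bin/bash -G wheel jdoe
passwd jdoe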

Enabling sudo on a user


Steps
To enable sudo for your user on Red Hat Enterprise Linux, add the user to the wheel group:
a. SSH to the jump server and switch to the root user by running su.
b. Type usermod -aG wheel <your_user_id>.
c. Log out and log in again.
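To confirm that the change took effect, the following sketch can be used, assuming a hypothetical user jdoe:

su - jdoe    # log in again so that the new group membership is applied
groups       # wheel should now appear in the list
sudo -v      # validates sudo access; prompts for jdoe's password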



SUSE user and password management
Creating users
About this task
useradd allows you to add users and specify certain criteria such as comments, the user's home directory, shell type, and
many other account properties for the SUSE Linux operating system.

Steps
1. SSH to the server, and type: Server1:~# useradd -m -c "<test username>" -s /bin/bash <test>.
Where <test> is the username of the user.
The following table explains what each qualifier is used for:

Qualifier Description
-m This qualifier makes the useradd command create the user's home directory.

-c "test username" This qualifier specifies a comment about the user.

-s /bin/bash This qualifier specifies which shell the user should use.

test The final qualifier is the username of the user.

2. Set the associated password by typing: server1:~ # passwd <test>.

server1:~ # passwd <test>


Changing password for <test>.
New Password:
Reenter New Password:
Password changed.

Once the password is set, the user can successfully log in to the server.

Deleting users
The command to delete users is userdel. Specify the -r qualifier to also remove the user's home directory and mail
spool.

Steps
SSH to the server and type: server1:~ # userdel -r <test>
Once you have issued the userdel command, you will notice that the /home/<test> directory is removed. If you only want
to delete the user but leave their home directory intact, you can issue the same command but without the -r qualifier.

Enabling sudo on a user


Steps
1. SSH to the server and log in as root.
2. Type the following: sudo usermod -a -G wheel USERNAME
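NOTE: Depending on the sudoers policy on the system, the wheel rule may be commented out. The following is a minimal
sketch of enabling it, assuming the stock /etc/sudoers layout:

visudo
# While in the editor, ensure that the following line is present and uncommented:
%wheel ALL=(ALL) ALL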



Credentials management
PowerFlex Manager requires a root-level username and password to access and manage nodes, switches, VMware vCenter,
element managers, PowerFlex Gateway, Presentation Server, and operating system resources.
The Credentials Management page displays the following information about the credentials:
● Name—A user-defined name that identifies the credentials.
● Type—A type of resource that uses the credential.
● Resources—The total number of resources to which the credential is assigned.
From the credential list, click a credential to view its details in the Summary tab:
● Name of the user who created and modified the credential.
● Date and time that the credential was created and last modified.
On the Credentials Management page, you can:
● Create credentials
● Edit existing credentials
● Delete existing credentials

Restarting the PowerFlex Manager virtual appliance


Use this task to restart PowerFlex Manager.

About this task


To restart the virtual appliance, you must be a user with the administrator role. The restart operation logs off all other users and
cancels any running jobs.

Steps
1. On the menu bar, click Settings, and then click Virtual Appliance Management.
2. On the Virtual Appliance Management page, click Reboot Virtual Appliance. A message displays, asking you to confirm
that you want to restart the virtual appliance.
3. Click Yes to confirm. The system restarts.
4. Once the reboot is complete, click Click to log in and provide your credentials.



7
Deploying PowerFlex nodes using PowerFlex Manager
This section provides steps on how to automate node configuration with PowerFlex Manager. PowerFlex Manager provides two
different types of deployment:
● Full network automation
● Partial network automation
The full network automation feature allows for the configuration of nodes with supported switches, and the partial network
automation feature allows for the configuration of nodes with unsupported switches.
If you choose to use partial network automation, you give up the error handling and network automation features that are
available with a full network configuration that includes supported switches. It also requires more manual configuration before
deployments can proceed successfully.
The table below provides the supported features of both the full and partial network automation templates.

Template components           Full network automation               Partial network automation

Operating system images       ● VMware ESXi                         ● VMware ESXi 6.7 and 7.0
                              ● CentOS                              ● Embedded OS
                              ● Red Hat                             ● Red Hat 7.6
                              ● Windows

PowerFlex roles               ● Hyperconverged                      ● Hyperconverged
                              ● Compute-only                        ● Compute-only (ESXi or Red Hat/CentOS)
                              ● Storage-only                        ● Storage-only (embedded operating system)

Switch PowerFlex roles        ● Port channel (LACP-enabled)         ● Port channel (LACP-enabled)
                              ● Port channel
                              ● Trunk port

Target boot device settings   ● Local flash storage for Dell EMC    ● Local flash storage
                              ● Local hard drive

Network settings              ● 10 GB, 25 GB, 100 GB                ● 25 GB
                              ● Required PXE network                ● No PXE network (using iDRAC virtual media)
                              ● Network automation type: full       ● Network automation type: partial

Full network automation


Full network automation allows for the configuration of nodes with supported switches.



Full network automation: Deploying a PowerFlex compute-only
node with Red Hat Enterprise Linux or CentOS
This procedure describes how to deploy a PowerFlex compute-only node with Red Hat Enterprise Linux or CentOS using the
full network automation option with PowerFlex Manager. The full network automation option configures changes to physical
switches.

Prerequisites
This procedure steps through how to deploy a service by creating a new template. A sample template can also be used to
create a template; those steps are not shown here. To create a new template from a clone, do the following:
1. Click Templates > Add a Template to open the Add a Template wizard.
2. Select Clone an existing PowerFlex Manager template.
3. Click ? to access the online help.
4. Follow the instructions on how to add a new template from a sample template.

Steps
1. Log in to PowerFlex Manager.
2. Add OS image to the repository.
NOTE: Skip this step if the OS Image is already added to the repository.

a. Click Settings from menu bar and click Compliance and OS Repositories.
b. Click the OS Image Repositories tab.
c. Click Add to open Add OS Image Repository wizard and enter the following:

OS image information Details


Repository name Enter <Enter Red Hat or Embedded OS image name>.
Image type Select Red Hat/CentOS 7.
Source path and filename Enter http://<server IP>/<folder>/<image name>.iso.
Username Enter <username>.
Password Enter <password>.

d. To validate that the path is working correctly, click Test Connection. (An optional command-line check follows step 2.)


e. Click Add to upload the ISO image.
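If Test Connection fails, the repository path can be checked independently from the jump server. A minimal sketch using
curl against the placeholder URL from the preceding table:

# An HTTP 200 response indicates that the ISO path is reachable.
curl -I http://<server IP>/<folder>/<image name>.iso
# Add -u <username>:<password> if the web server requires credentials.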
3. Click Templates.
4. Click Add a Template to open the Add a Template page.
a. In Create a New Template, enter the Template Name.
b. Click Next.
c. Under Create Template, enter the following details:

Clone template Details


Template name Enter <Template name>.

Template Category Select Create New Category.


Enter a name in the New Category Name box.

Template Description Enter <Template Description> .


Example: compute-only nodes with RedHat or
CentOS

Firmware and software compliance Select the latest Intelligent Catalog version from the
list.

Who should have access to the service deployed from this Select from list who should have access to this service
template? template.

5. Click Save.
6. Click Add Node to open the Node wizard and select Full Network Automation.
a. Click Continue.
b. Enter the following details:

Node Details
Component name Enter <Red Hat or CentOS>.

Number of instances Enter <Number>.

Related components Select Associate Selected.


Check checkbox for PowerFlex cluster.

c. Click Continue.
d. Under OS Settings, enter the following settings:

Description Values
Host name selection Select <appropriate host name selection>.
OS image Select < Red Hat or CentOS Image>.
OS credentials Select <OS Credential Name>.
Timezone Select <Time zone>
NTP server Select <NTP Server>
Use node For Dell EMC PowerFlex Click checkbox.
PowerFlex role Select Compute Only.
Enable encryption Leave checkbox cleared.
Switch port configuration Select Port Channel (LACP enabled).
Teaming and bonding configuration Select Mode 4 (IEEE 802.3ad policy).

e. Under Hardware Settings, enter the details within the following table:

Hardware settings Details


Target boot device Select Local Flash Storage for Dell EMC PowerFlex.
Node pool Select <compute pool>.

f. Under BIOS Settings, enter the details within the following table:

BIOS settings Details


System profile Select Performance.
User accessible USB ports Select All Ports On.
Number of cores per processor Select All.
Virtualization technology Select Enabled.
Logical processor Select Enabled.
Execute disable Select Enabled.

Node interleaving Select Enabled.

g. Under Network Settings, follow the steps below to add interfaces.


h. Click Add New Interface to create the first interface.
i. Under Interface 1, enter the following details:

Network settings Details


Port layout Select Two port 25 gigabit.

j. Under Port 1, click Choose Networks to open Interface 1 Port 1 Network Configuration window.
k. Select the checkboxes for the following networks:

Selected networks Description


Powerflex-mgmt-<vlanid > Powerflex-Management
pxe<vlanid> PXE Network
general purpose <vlanid> General purpose network

l. Click >> to add the selected networks to the right column and click Save.
m. Under Port 2, click Choose Networks to open Interface 1 Port 2 Network Configuration window.
n. Select the checkboxes for the following networks:

Selected networks Description


powerflex-data1-<vlanid > Powerflex-Data1
powerflex-data2-<vlanid > Powerflex-Data2
powerflex-data3-<vlanid > Powerflex-Data3
powerflex-data4-<vlanid > Powerflex-Data4

o. Click >> to add the selected networks to the right column and click Save.
p. Click Add New Interface to create the second interface.
q. Under Interface 2, enter the following details:

Hardware settings Details


Port Layout Select Two port 25 gigabit

r. Under Port 1, click Choose Networks to open Interface 2 Port 1 Network Configuration window.
s. Select the checkboxes for the following networks:

Selected networks Description


Powerflex-mgmt-<vlanid > Powerflex-Management
pxe<vlanid> PXE Network
general purpose <vlanid> General purpose network

t. Click >> to add the selected networks to the right column and click Save.
u. Under Port 2, click Choose Networks to open Interface 2 Port 2 Network Configuration window.
v. Select the checkboxes for the following networks:

Selected networks Description


powerflex-data1-<vlanid > Powerflex-Data1
powerflex-data2-<vlanid > Powerflex-Data2

powerflex-data3-<vlanid > Powerflex-Data3
powerflex-data4-<vlanid > Powerflex-Data4

w. Click >> to add the selected networks to the right column and click Save.
x. Click Validate Settings; if there are any errors, correct them and click Close.
y. Click Save to complete the clone creation.
7. Create the clusters.
a. Click Add Cluster to create PowerFlex Cluster.
b. Click Component Name > PowerFlex Cluster.
c. Select Associate All or Associate Selected.
d. Click Continue.
e. Under PowerFlex Settings, enter the details within the following table:

PowerFlex settings Details


Target PowerFlex Gateway Select <New Target PowerFlex Gateway VM>.

f. Click Save.
8. In the Template Information box, click Publish Template.
9. In the pop-up, click Yes.
10. On the Compute Template page, under Template Information, click Deploy and select the following:

Deploy Settings Details


Select Published template Select <Current Name of Template>.
Service name Enter < Service Name>.
Service description Enter < Service Description>.
Firmware and software compliance Select the latest Intelligent Catalog version from list.
Who should have access to the service deployed from Select from list who should have access to this service
this template template.

11. Click Next to go to the Deployment Settings page.
12. Validate the settings and click Next to go to the Schedule Deployment page.
13. Leave the default, Deploy Now.
14. Click Next.
15. Verify the summary page and click Finish.

Full network automation: Deploying a PowerFlex storage-only node


This procedure describes how to deploy a PowerFlex storage-only node with Red Hat Enterprise Linux or CentOS using the
full network automation option with PowerFlex Manager. The full network automation option configures changes to physical
switches.

Prerequisites
This procedure shows how to deploy a service by creating a new template. A sample template can also be used to create a
template; those steps are not shown here. To create a new template from a clone, do the following:
1. Click Templates > Add a Template to open the Add a Template wizard.
2. Select Clone an existing PowerFlex Manager template.
3. Click ? to access the online help.
4. Follow the instructions on how to add a new template from a sample template.



Steps
1. Log in to PowerFlex Manager.
2. Add OS image to the repository.
NOTE: Skip this step if the OS Image is already added to the repository.

a. Click Settings from menu bar and click Compliance and OS Repositories.
b. Click the OS Image Repositories tab.
c. Click Add to open Add OS Image Repository wizard and enter the following:

OS image information Details


Repository name Enter <Enter Embedded OS image name>.
Image type Select Red Hat / CentOS 7.
Source path and filename Enter http://<server IP>/<folder>/<image name>.iso.
Username Enter <username>.
Password Enter <password>.

d. To validate that the path is working correctly, click Test Connection.


e. Click Add to upload the ISO image.
3. Click Templates.
4. Click Add a Template to open the Add a Template page.
a. Enter the Template Name.
b. Click Next.
c. Under Create Template, enter the following details:

Clone template Details


Template name Enter <Template name>.

Template Category Select Create New Category.


Enter a name in the New Category Name box.

Template Description Enter <Template Description> .


Example: storage-only nodes with RedHat or
CentOS

Firmware and software compliance Select the latest Intelligent Catalog version from the
list.
Who should have access to the service deployed from this Select from list who should have access to this service
template? template.

5. Click Save.
6. Click Add Node to open the Node wizard and select Full Network Automation.
a. Click Continue.
b. Enter the following details:

Node Details
Component name Enter <Embedded OS Image>.

Number of instances Enter <Number>.

Related components Select Associate Selected.


Check checkbox for PowerFlex cluster.

c. Click Continue.



d. Under OS Settings, enter the following settings:

Description Values
Host name selection Select <appropriate host name selection>.
OS image Select < Embedded OS Image>.
OS credentials Select <OS Credential Name>.
Timezone Select <Time zone>
NTP server Select <NTP Server>
Use node for Dell EMC PowerFlex Click checkbox.
PowerFlex role Select Storage Only.
Enable compression Select the check box (based on your requirement).
Enable encryption Select the check box (based on your requirement).
Enable replication Select the check box (based on your requirement).
Switch port configuration Select Port Channel (LACP enabled)
Teaming and bonding configuration Select Mode 4 (IEEE 802.3ad policy)

e. Under SVM OS Settings, enter the details within the following table:

SVM OS Settings Details


Host name selection Select < appropriate host name selection >.
Host name template Enter < host name template >.
OS credentials Select < OS Credentials >.
NTP server Enter < IP address >.

f. Under Hardware Settings, enter the details within the following table:

Hardware settings Details


Target boot device Select Local Flash Storage for Dell EMC PowerFlex.
Node pool Select <pool name >.

g. Under BIOS Settings, enter the details within the following table:

BIOS settings Details


System profile Select Performance.
User accessible USB ports Select All Ports On.
Number of cores per processor Select All.
Virtualization technology Select Enabled.
Logical processor Select Enabled.
Execute disable Select Enabled.
Node interleaving Select Enabled.

h. Under Network Settings, follow the steps below to add interfaces.


i. Click Add New Interface to create the first interface.
j. Under Interface 1, enter the following details:



Network settings Details
Port layout Select Two port 25 gigabit.

k. Under Port 1, click Choose Networks to open Interface 1 Port 1 Network Configuration window.
l. Select the checkboxes for the following networks:

Selected networks Description


powerflex-data2-<vlanid > Powerflex-Data2
powerflex-data4-<vlanid > Powerflex-Data4
Powerflex-mgmt-<vlanid > Powerflex-Management
powerflex-prod-<vlanid> Production Network

m. Click >> to add the selected networks to the right column and click Save.
n. Under Port 2, click Choose Networks to open Interface 1 Port 2 Network Configuration window.
o. Select the checkboxes for the following networks:

Selected networks Description


powerflex-data1-<vlanid > Powerflex-Data1
powerflex-data3-<vlanid > Powerflex-Data3

p. Click >> to add the selected networks to the right column and click Save.
q. Click Add New Interface to create the second interface.
r. Under Interface 2, enter the following details:

Hardware settings Details


Port Layout Select Two port 25 gigabit.

s. Under Port 1, click Choose Networks to open Interface 2 Port 1 Network Configuration window.
t. Select the checkboxes for the following networks:

Selected networks Description


powerflex-data2-<vlanid > Powerflex-Data2
powerflex-data4-<vlanid > Powerflex-Data4
Powerflex-mgmt-<vlanid > Powerflex-Management
pxe-<vlanid> PXE Network

u. Click >> to add the selected networks to the right column and click Save.
v. Under Port 2, click Choose Networks to open Interface 2 Port 2 Network Configuration window.
w. Select the checkboxes for the following networks:

Selected networks Description


powerflex-data1-<vlanid > Powerflex-Data1
powerflex-data3-<vlanid > Powerflex-Data3

x. Click >> to add the selected networks to the right column and click Save.
y. Click Validate Settings; if there are any errors, correct them and click Close.
z. Click Save to complete the clone creation.
7. Create the clusters.
a. Click Add Cluster to create PowerFlex Cluster.
b. Click Component Name > PowerFlex Cluster.
c. Select Associate All or Associate Selected.



d. Click Continue.
e. Under PowerFlex Settings, enter the details within the following table:

PowerFlex settings Details


Target PowerFlex Gateway Select < New Target PowerFlex Gateway VM >.
Protection domain name Select Auto generate protection domain name
(Recommended).
Protection domain name template Leave as default: PD-${num}
Acceleration pool name Select Auto generate acceleration pool name (Recommended).
NOTE: Only available if compression is enabled.

Acceleration pool name template Leave as default: Site-AP-${num}
NOTE: Only available if compression is enabled.

Storage pool name Select Auto generate storage pool name (Recommended).

Number of storage pools Select < number >.

Storage pool name template Leave as default.

Granularity Select < Fine or Medium >.
NOTE: Only available if compression is enabled.

Enable fault sets Check box if fault sets need to be enabled.
NOTE: If the PowerFlex configuration includes fault sets, contact Dell EMC support for assistance. Do not proceed with
the procedure until you have received guidance from a support representative.

f. Click Save.
8. In the Template Information box, click Publish Template.
9. In the pop-up, click Yes.
10. On the Storage Template page, under Template Information, click Deploy and select the following:

Deploy Settings Details


Select published template Select < Current Name of Template >.
Service name Enter < Service Name >.
Service description Enter < Service Description >.
Firmware and software compliance Select the latest Intelligent Catalog version from list.
Who should have access to the service deployed from Select from list who should have access to this service
this template template.

11. Click Next to go to the Deployment Settings page.
12. Validate the settings and click Next to go to the Schedule Deployment page.
13. Leave the default, Deploy Now.
14. Click Next.
15. Verify the summary page and click Finish.



Full network automation: Deploying a VMware ESXi PowerFlex
hyperconverged node or PowerFlex compute-only node
This procedure describes how to deploy a PowerFlex hyperconverged node or PowerFlex compute-only node with VMware ESXi
using the full network automation option with PowerFlex Manager. The full network automation option configures changes to
physical switches.

Prerequisites
This procedure shows how to deploy a service by creating a new template. A sample template can also be used to create a
template; those steps are not shown here. To create a new template from a clone, do the following:
1. Click Templates > Add a Template to open the Add a Template wizard.
2. Select Clone an existing PowerFlex Manager template, choose Category and Template to be cloned.
3. In Category, select Sample Template and in Template to be cloned select Compute Only ESXi and click Next.
4. Click ? > Help to access the online help for more information on Category and Sample Templates.
5. Follow the instructions on how to add a new template from a sample template.

Steps
1. Log in to PowerFlex Manager.
2. Add OS image to the repository.
NOTE: Skip this step if the OS Image is already added to the repository.

a. Click Settings from menu bar and click Compliance and OS Repositories.
b. Click the OS Image Repositories tab.
c. Click Add to open Add OS Image Repository wizard and enter the following:

OS image information Details


Repository name Enter <Enter ESXi image name>.
Image type Select ESXi.
Source path and filename Enter http://<server IP>/<folder>/<image name>.iso.
Username Enter <username>.
Password Enter <password>.

d. To validate that the path is working correctly, click Test Connection.


e. Click Add to upload the ISO image.
3. Click Templates.
4. Click Add a Template to open the Add a Template page.
a. Enter the Template Name.
b. Click Next.
c. Under Create Template, enter the following details:

Clone template Details


Template name Enter <Template name>.
Example name: HCI or CO VMware ESXi Compute
Template.

Template Category Select Create New Category.


Enter a name in the New Category Name box.

Template Description Enter <Template Description> .


Example: compute-only or HC nodes with
VMware ESXi

Firmware and software compliance Select the latest Intelligent Catalog version from the
list.
Who should have access to the service deployed from this Select from list who should have access to this service
template? template.

5. Click Save.
6. Click Add Node to open the Node wizard and select Full Network Automation.
a. Click Continue.
b. Enter the following details:

Node Details
Component name Enter <ESXi>.

Number of instances Enter <Number>.

Related components Select Associate Selected.


Check checkbox for PowerFlex cluster.

c. Click Continue.
d. Under OS Settings, enter the following settings:

Description Values
Host name selection Select <appropriate host name selection>.
Host name template (auto-generated) Enter <host name template >
OS image Select < ESXi Image>.
OS credentials Select <OS Credential Name>.
NTP server Enter <NTP server IP address>

NOTE: Multiple NTP server IP addresses can be entered by using commas.

Use node for Dell EMC PowerFlex Click checkbox.


PowerFlex role Select Compute Only or Hyperconverged.
Enable compression Select appropriate.
Enable encryption Select appropriate.
Enable replication Select appropriate.
Switch port configuration Select Port Channel (LACP enabled).
Teaming and bonding configuration Select Route Based on IP hash.

e. Under SVM OS Settings, enter the details within the following table:

SVM OS Settings Details


Host name selection Select < appropriate host name selection >.
Host name template Enter < host name template >.
OS credential Select < OS credentials >.
NTP server Enter < IP address >.

f. Under Hardware Settings, enter the details within the following table:



Hardware settings Details
Target boot device Select Local Flash Storage for Dell EMC PowerFlex.
Node pool Select < pool name >.

g. Under BIOS Settings, enter the details within the following table:

BIOS settings Details


System profile Select Performance.
User accessible USB ports Select All ports on.
Number of cores per processor Select All.
Virtualization technology Select Enabled.
Logical processor Select Enabled.
Execute disable Select Enabled.
Node interleaving Select Enabled.

h. Under Network Settings, follow the steps below to add interfaces.


i. Click Add New Interface to create the first interface.
j. Under Interface 1, enter the following details:

Network settings Details


Port Layout Select Two port 25 gigabit.

k. Under Port 1, click Choose Networks to open Interface 1 Port 1 Network Configuration window.
l. Select the checkboxes for the following networks:

Selected networks Description


powerflex-esx-mgmt-<vlanid> Hypervisor management
powerflex-vmotion-<vlanid> Hypervisor migration
Powerflex-mgmt-<vlanid > Powerflex management
pxe-<vlanid> PXE network

m. Click >> to add the selected networks to the right column and click Save.
n. Under Port 2, click Choose Networks to open Interface 1 Port 2 Network Configuration window.
o. Select the checkboxes for the following networks:

Selected networks Description


powerflex-data1-<vlanid > Powerflex-Data1
powerflex-data2-<vlanid > Powerflex-Data2
powerflex-data3-<vlanid > Powerflex-Data3
powerflex-data4-<vlanid > Powerflex-Data4

p. Click >> to add the selected networks to the right column and click Save.
q. Click Add New Interface to create the second interface.
r. Under Interface 2, enter the following details:

Hardware settings Details


Port Layout Select Two port 25 gigabit.

s. Under Port 1, click Choose Networks to open Interface 2 Port 1 Network Configuration window.



t. Select the checkboxes for the following networks:

Selected networks Description


powerflex-esx-mgmt-<vlanid> Hypervisor management
powerflex-vmotion-<vlanid> Hypervisor migration
Powerflex-mgmt-<vlanid > Powerflex management
pxe-<vlanid> PXE network

u. Click >> to add the selected networks to the right column and click Save.
v. Under Port 2, click Choose Networks to open Interface 2 Port 2 Network Configuration window.
w. Select the checkboxes for the following networks:

Selected networks Description


powerflex-data1-<vlanid > Powerflex-Data1
powerflex-data2-<vlanid > Powerflex-Data2
powerflex-data3-<vlanid > Powerflex-Data3
powerflex-data4-<vlanid > Powerflex-Data4

x. Click >> to add the selected networks to the right column and click Save.
y. In Static Routes, select Enabled.
z. Click Validate Settings; if there are any errors, correct them and click Close.
aa. Click Save to complete the clone creation.
7. Create the clusters.
a. Click Add Cluster to create PowerFlex Cluster.
b. Click Component Name > PowerFlex Cluster.
c. Select Associate All or Associate Selected.
d. Click Continue.
e. Under PowerFlex Settings, enter the details within the following table:

PowerFlex settings Details


Target PowerFlex Gateway Select < New Target PowerFlex Gateway VM >.
Protection domain name Select Auto generate protection domain name
(Recommended)
Protection domain name template Leave as default: PD-${num}
Acceleration pool name Select Auto generate acceleration pool name (Recommended).
NOTE: Only available if compression is enabled.

Acceleration pool name template Leave as default: Site-AP-${num}
NOTE: Only available if compression is enabled.

Storage pool name Select Auto generate storage pool name (Recommended).

Number of storage pools Select < number >.

Storage pool name template Leave as default.

Granularity Select < Fine or Medium >.
NOTE: Only available if compression is enabled.

Enable fault sets Check box if fault sets need to be enabled.
NOTE: If the PowerFlex configuration includes fault sets, contact Dell EMC support for assistance. Do not proceed with
the procedure until you have received guidance from a support representative.

f. Click Save.
8. Create VMware Cluster.
a. Click Add Cluster to create VMware Cluster.
b. Select VMware Cluster for the component name.
c. Select the Associate All option.
d. Click Continue.
e. Under Cluster Settings, enter the details within the following table:

Cluster Settings Details


Target virtual machine manager Select < vCenter Server hostname >.
Data center name Select < Create New Datacenter or an existing
Datacenter >.
New datacenter name Select < Datacenter Name >.
Cluster name Select < Create New Cluster or an existing Cluster >.
New cluster name Select Cluster Name.
Cluster HA enabled Select checkbox to enable.
Cluster DRS enabled Select checkbox to enable.

f. Under vSphere VDS Settings, click Configure VDS Settings button to open Configure VDS Settings wizard.
g. Select Existing port group or create new port group.
h. Assuming deployment is standard, select Auto Create All Port Groups.
i. Click Next to VDS Naming page.
j. Enter the details within the following table:

VDS Label Details


VDS1 VDS Name Enter <VDS Name>
VDS2 VDS Name Enter <VDS Name>

9. Click Next to go to the Port Group Select page.
10. Validate the port group names that were automatically generated.
11. Click Next to continue to the Port Group Select page.
12. Validate that the appropriate port groups are created on the correct VDS.
13. In Advanced Networking Selection, select the appropriate MTU values for the port groups <esxi-mgmt> and <vmotion> and
click Next.
14. Click Finish.
15. In the Confirm pop-up, click Yes.
16. Click Save.
17. In the Template Information box, click Publish Template.
18. In the pop-up, click Yes.
19. On the Compute Template page, under Template Information, click Deploy and select the following:

Deploy Settings Details


Select Published template Select <Name of Template>.
Service name Enter < Service Name >.
Service description Enter < Service Description >.

Firmware and software compliance Select the latest Intelligent Catalog version from list.
Who should have access to the service deployed from Select from list who should have access to this service
this template template.

20. Click Next to go to the Deployment Settings page.
21. Validate the settings and click Next to go to the Schedule Deployment page.
22. Leave the default, Deploy Now.
23. Click Next.
24. Verify the summary page and click Finish.

Adding volumes to a PowerFlex hyperconverged node or PowerFlex


compute-only node
Steps
1. In PowerFlex Manager, click Services.
2. Click Service Name to open service.
3. In the Service Information action box, click Add Resources > Add Volumes in Resource Actions to open the Add
Volume wizard.
4. In the Add Volume wizard, select Add existing volume or Create new volume and click Next.
5. In the Create New Volume page, click Add New Volume and select options and enter the following details:

Volume 1 Details
Volume name Select Create New Volume.
New volume name Enter < New Volume Name >.
Storage pool Select < Storage Pool >.
Volume size (GB) Enter < Size Number >.
Datastore name Select < Datastore Name >.
New datastore name Enter < New Datastore Name >.
Volume type Select Thick or Thin.

6. Repeat Steps 1 through 5 for each additional volume.


7. Click Save.
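For reference, volumes can also be created with the PowerFlex CLI (scli) on the primary MDM instead of through PowerFlex
Manager. A minimal sketch with hypothetical object names; verify the names against your system before running the
commands:

scli --login --username admin                 # prompts for the MDM password
scli --add_volume --protection_domain_name PD-1 --storage_pool_name SP-1 \
     --size_gb 512 --volume_name vol-example --thin_provisioned
scli --map_volume_to_sdc --volume_name vol-example --sdc_ip 192.168.151.21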

Partial network automation


Partial network automation allows for the configuration of nodes with unsupported switches.
Partial network automation does not have the error handling and network automation features that are available with a full
network configuration that includes supported switches. It requires more manual configuration before deployments can proceed
successfully.



Partial network automation: Deploying a PowerFlex compute-only
node with Red Hat Enterprise Linux or CentOS
This procedure describes how to deploy a PowerFlex compute-only node with Red Hat Enterprise Linux or CentOS using the
partial network automation option with PowerFlex Manager.

About this task


The partial network automation option does not configure any changes to physical switches. You must configure the required
changes on the switch ports before deploying this service. See the switch example configurations in Customer Switch Port
Configuration Examples; an illustrative sketch follows this paragraph.
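The following is a minimal sketch of the kind of LACP port-channel configuration that the node-facing switch ports
typically require, written in Dell OS10-style syntax with hypothetical VLAN IDs, port-channel number, and interface; the
Customer Switch Port Configuration Examples remain the authoritative reference:

! Hypothetical example only; substitute your VLAN IDs and ports.
interface port-channel 10
 description powerflex-node-01
 switchport mode trunk
 switchport trunk allowed vlan 105,151-154
 mtu 9216
 no shutdown
!
! "mode active" enables LACP negotiation on the member port.
interface ethernet 1/1/1
 description powerflex-node-01-port1
 channel-group 10 mode active
 no shutdown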

Prerequisites
The procedure steps through how to deploy a service by creating a new template. A sample template can also be used to
create a template; those steps are not shown here. To create a new template from a clone, do the following:
1. Click Templates > Add a Template to open the Add a Template wizard.
2. Select Clone an existing PowerFlex Manager template.
3. Click ? to access the online help.
4. Follow the instructions on how to add a new template from a sample template.

Steps
1. Log in to PowerFlex Manager.
2. Add OS image to the repository.
NOTE: Skip this step if the OS Image is already added to the repository.

a. Click Settings from menu bar and click Compliance and OS Repositories.
b. Click the OS Image Repositories tab.
c. Click Add to open Add OS Image Repository wizard and enter the following:

OS image information Details


Repository name Enter <Enter Red Hat or CentOS image name>
Image type Select Red Hat/CentOS 7
Source path and filename Enter http://<server IP>/<folder>/<image name>.iso
Username Enter <username>
Password Enter <password>

d. To validate that the path is working correctly, click Test Connection.


e. Click Add to upload the ISO image.
3. Click Templates.
4. Click Add a Template to open the Add a Template page.
a. Enter the Template Name.
b. Click Next.
c. Under Create Template, enter the following details:

Clone template Details


Template name Enter <Template name>.
Example name: CO redhat or CentOS Compute
Template.

Template Category Select Create New Category.


Enter a name in the New Category Name box.

Template Description Enter <Template Description>
Example: compute-only nodes with RedHat or
CentOS

Firmware and software compliance Select the latest Intelligent Catalog version from the
list.
Who should have access to the service deployed from this Select from list who should have access to this service
template? template.

5. Click Save.
6. Click Add Node to open the Node wizard and select Partial Network Automation.
a. Click Continue.
b. Enter the following details:

Node Details
Component name Enter <Red Hat or CentOS>.

Number of instances Enter <Number>

Related components Select Associate Selected.


Check checkbox for PowerFlex cluster.

c. Click Continue.
d. Under OS Settings, enter the following settings:

Description Values
Host name selection Select <appropriate host name selection>
OS image Select < Red Hat or CentOS Image>
OS credentials Select <OS Credential Name>
Use node for Dell EMC PowerFlex Click checkbox
PowerFlex role Select Compute Only
Switch port configuration Select Port Channel (LACP enabled)
Teaming and bonding configuration Select Mode 4 (IEEE 802.3ad policy)

e. Under Hardware Settings, enter the details within the following table:

Hardware settings Details


Target boot device Select Local Flash Storage for Dell EMC PowerFlex
Node pool Select < compute pool >

f. Under BIOS Settings, enter the details within the following table:

BIOS settings Details


System profile Select Performance
User accessible USB ports Select All Ports On
Number of cores per processor Select All
Virtualization technology Select Enabled
Logical processor Select Enabled

Execute disable Select Enabled
Node interleaving Select Enabled

g. Under Network Settings, follow the steps below to add interfaces.


h. Click Add New Interface to create the first interface.
i. Under Interface 1, enter the following details:

Network settings Details


Port Layout Select Two port 25 gigabit

j. Under Port 1, click Choose Networks to open Interface 1 Port 1 Network Configuration window.
k. Select the checkboxes for the following networks:

Selected networks Description


powerflex-data2-<vlanid > Powerflex-Data2
powerflex-data4-<vlanid > Powerflex-Data4
Powerflex-mgmt-<vlanid > Powerflex-Management
pxe-<vlanid> PXE Network

l. Click >> to add the selected networks to the right column and click Save.
m. Under Port 2, click Choose Networks to open Interface 1 Port 2 Network Configuration window.
n. Select the checkboxes for the following networks:

Selected networks Description


powerflex-data1-<vlanid > Powerflex-Data1
powerflex-data3-<vlanid > Powerflex-Data3

o. Click >> to add the selected networks to the right column and click Save.
p. Click Add New Interface to create the second interface.
q. Under Interface 2, enter the following details:

Hardware settings Details


Port Layout Select Two port 25 gigabit
Redundancy Leave the default checkbox cleared.

r. Under Port 1, click Choose Networks to open Interface 2 Port 1 Network Configuration window.
s. Select the checkboxes for the following networks:

Selected networks Description


powerflex-data2-<vlanid > Powerflex-Data2
powerflex-data4-<vlanid > Powerflex-Data4
Powerflex-mgmt-<vlanid > Powerflex-Management
pxe-<vlanid> PXE Network

t. Click >> to add the selected networks to the right column and click Save.
u. Under Port 2, click Choose Networks to open Interface 2 Port 2 Network Configuration window.
v. Select the checkboxes for the following networks:



Selected networks Description
powerflex-data1-<vlanid > Powerflex-Data1
powerflex-data3-<vlanid > Powerflex-Data3

w. Click >> to add the selected networks to the right column and click Save.
x. Click Validate Settings; if there are any errors, correct them and click Close.
y. Click Save to complete the clone creation.
7. Create the cluster.
a. Click Add Cluster.
b. Click Component Name > PowerFlex Cluster.
c. Select Associate All or Associate Selected.
d. Click Continue.
e. Under PowerFlex Settings, enter the details within the following table:

PowerFlex settings Details


Target PowerFlex Gateway Select < New Target PowerFlex Gateway VM >

f. Click Save.
8. In the Template Information box, click Publish Template.
9. In the pop-up, click Yes.
10. On the Compute Template page, under Template Information, click Deploy and select the following:

Deploy Settings Details


Select published template Select <Current Name of Template>.
Service name Enter < Service Name >.
Service description Enter < Service Description >.
Firmware and software compliance Select the latest Intelligent Catalog version from list.
Who should have access to the service deployed from Select from list who should have access to this service
this template template.

11. Click Next to go to the Deployment Settings page.
12. Validate the settings and click Next to go to the Schedule Deployment page.
13. Leave the default, Deploy Now.
14. Click Next.
15. Verify the summary page and click Finish.

Partial network automation: Deploying a PowerFlex storage-only


node
This procedure describes how to deploy a PowerFlex storage-only node with embedded operating system using the partial
network automation option with PowerFlex Manager.

About this task


The partial network automation option does not configure any changes to physical switches. You must configure the required
changes on the switch ports before deploying this service. See the switch example configurations in Customer Switch Port
Configuration Examples.

Prerequisites
The procedure steps through how to deploy a service by creating a new template. A sample template can also be used to
create a template; those steps are not shown here. To create a new template from a clone, do the following:
1. Click Templates > Add a Template to open the Add a Template wizard.
2. Select Clone an existing PowerFlex Manager template.
3. Click ? to access the online help.
4. Follow the instructions on how to add a new template from a sample template.

Steps
1. Log in to PowerFlex Manager.
2. Add OS image to the repository.
NOTE: Skip this step if the OS Image is already added to the repository.

a. Click Settings from menu bar and click Compliance and OS Repositories.
b. Click the OS Image Repositories tab.
c. Click Add to open Add OS Image Repository wizard and enter the following:

OS image information Details


Repository name Enter <Enter embedded OS image name>
Image type Select Red Hat / CentOS 7
Source path and filename Enter http://<server IP>/<folder>/<image name>.iso
Username Enter <username>
Password Enter <password>

d. To validate that the path is working correctly, click Test Connection.


e. Click Add to upload the ISO image.
3. Click Templates.
4. Click Add a Template to open the Add a Template page.
a. Enter the Template Name.
b. Click Next.
c. Under Create Template, enter the following details:

Clone template Details


Template name Enter <Template name>.
Example name: embedded os storage Template.

Template Category Select Create New Category.


Enter a name in the New Category Name box.

Template Description Enter <Template Description>


Example: storage-only nodes with embedded os

Firmware and software compliance Select the latest Intelligent Catalog version from the
list.
Who should have access to the service deployed from this Select from list who should have access to this service
template? template.

5. Click Save.
6. Click Add Node to open the Node wizard and select Partial Network Automation.
a. Click Continue.
b. Enter the following details:

Node Details
Component name Enter <embedded os image>.

Number of instances Enter <Number>

Related components Select Associate Selected.


Check checkbox for PowerFlex cluster.

c. Click Continue.
d. Under OS Settings, enter the following settings:

Description Values
Host name selection Select <appropriate host name selection>
OS image Select < Embedded OS Image>
OS credentials Select <OS Credential Name>
Timezone Select <Time zone>
NTP server Select <NTP Server>
Use Node for Dell EMC PowerFlex Click checkbox
PowerFlex role Select Storage Only
Enable compression Select checkbox.
Enable encryption Select checkbox.
Enable replication Select checkbox.
Switch port configuration Select Port Channel (LACP enabled)
Teaming and bonding configuration Select Mode 4 (IEEE 802.3ad policy)

e. Under SVM OS Settings, enter the details within the following table:

SVM OS Settings Details


Host name selection Select < appropriate host name selection >
Host name template Enter < host name template >
OS credentials Select < OS Credentials >.
NTP server Enter < IP Address >.

f. Under Hardware Settings, enter the details within the following table:

Hardware settings Details


Target boot device Select Local Flash Storage for Dell EMC PowerFlex
Node pool Select < pool name >

g. Under BIOS Settings, enter the details within the following table:

BIOS settings Details


System profile Select Performance
User accessible USB ports Select All Ports On
Number of cores per processor Select All
Virtualization technology Select Enabled
Logical processor Select Enabled
Execute disable Select Enabled

Node interleaving Select Enabled

h. Under Network Settings, follow the steps below to add interfaces.


i. Click Add New Interface to create the first interface.
j. Under Interface 1, enter the following details:

Network settings Details


Port Layout Select Two port 25 gigabit

k. Under Port 1, click Choose Networks to open Interface 1 Port 1 Network Configuration window.
l. Select the checkboxes for the following networks:

Selected networks Description


powerflex-data2-<vlanid > Powerflex-Data2
powerflex-data4-<vlanid > Powerflex-Data4
Powerflex-mgmt-<vlanid > Powerflex-Management
powerflex-prod-<vlanid> Production Network

m. Click >> to add the selected networks to the right column and click Save.
n. Under Port 2, click Choose Networks to open Interface 1 Port 2 Network Configuration window.
o. Select the checkboxes for the following networks:

Selected networks Description


powerflex-data1-<vlanid > Powerflex-Data1
powerflex-data3-<vlanid > Powerflex-Data3

p. Click >> to add the selected networks to the right column and click Save.
q. Click Add New Interface to create the second interface.
r. Under Interface 2, enter the following details:

Hardware settings Details


Port Layout Select Two port 25 gigabit

s. Under Port 1, click Choose Networks to open Interface 2 Port 1 Network Configuration window.
t. Select the checkboxes for the following networks:

Selected networks Description


powerflex-data2-<vlanid > Powerflex-Data2
powerflex-data4-<vlanid > Powerflex-Data4
Powerflex-mgmt-<vlanid > Powerflex-Management
pxe-<vlanid> PXE Network

u. Click >> to add the selected networks to the right column and click Save.
v. Under Port 2, click Choose Networks to open Interface 2 Port 2 Network Configuration window.
w. Select the checkboxes for the following networks:

Selected networks Description


powerflex-data1-<vlanid > Powerflex-Data1

powerflex-data3-<vlanid > Powerflex-Data3

x. Click >> to add the selected networks to the right column and click Save.
y. Click Validate Settings; if there are any errors, correct them and click Close.
z. Click Save to complete the clone creation.
7. Create the cluster.
a. Click Add Cluster.
b. Click Component Name > PowerFlex Cluster.
c. Select Associate All or Associate Selected.
d. Click Continue.
e. Under PowerFlex Settings, enter the details within the following table:

PowerFlex settings Details


Target PowerFlex Gateway Select < New Target PowerFlex Gateway VM >

f. Click Save.
8. In the Template Information box, click Publish Template.
9. In the pop-up, click Yes.
10. On the Storage Template page, under Template Information, click Deploy and select the following:

Deploy Settings Details


Select published template Select <Current Name of Template>.
Service name Enter < Service Name >.
Service description Enter < Service Description >.
Firmware and software compliance Select the latest RCM version from list.
Who should have access to the service deployed from Select from list who should have access to this service
this template template.

11. Click Next to go to the Deployment Settings page.
12. Validate the settings and click Next to go to the Schedule Deployment page.
13. Leave the default, Deploy Now.
14. Click Next.
15. Verify the summary page and click Finish.

Partial network automation: Deploying a VMware ESXi PowerFlex


hyperconverged node or PowerFlex compute-only node
This procedure describes how to deploy a PowerFlex hyperconverged node or PowerFlex compute-only node with VMware ESXi
using the partial network automation option with PowerFlex Manager.

Prerequisites
This procedure shows how to deploy a service by creating a new template. A sample template can also be used to create a
template; those steps are not shown here. To create a new template from a clone, do the following:
1. Click Templates > Add a Template to open the Add a Template wizard.
2. Select Clone an existing PowerFlex Manager template.
3. Click ? to access the online help.
4. Follow the instructions on how to add a new template from a sample template.

Steps
1. Log in to PowerFlex Manager.



2. Add OS image to the repository.
NOTE: Skip this step if the OS Image is already added to the repository.

a. Click Settings from menu bar and click Compliance and OS Repositories.
b. Click the OS Image Repositories tab.
c. Click Add to open Add OS Image Repository wizard and enter the following:

OS image information Details


Repository name Enter <Enter ESXi image name>
Image type Select ESXi
Source path and filename Enter http://<server IP>/<folder>/<image name>.iso
Username Enter <username>
Password Enter <password>

d. To validate that the path is working correctly, click Test Connection.


e. Click Add to upload the ISO image.
3. Click Templates.
4. Click Add a Template to open the Add a Template page.
a. In Create a New Template, enter the Template Name.
b. Click Next.
c. Under Create Template, enter the following details:

Clone template Details


Template name Enter <Template name>.
Example name: HCI or CO VMware ESXi Compute
Template.

Template Category Select Create New Category.


Enter a name in the New Category Name box.

Template Description Enter <Template Description>


Example: compute-only or HC nodes with
VMware ESXi

Firmware and software compliance Select the latest RCM version from the list.
Who should have access to the service deployed from this Select from list who should have access to this service
template? template.

5. Click Save.
6. Click Add Node to open the Node wizard and select Partial Network Automation.
a. Click Continue.
b. Enter the following details:

Node Details
Component name Enter <ESXi>.

Number of instances Enter <Number>

Related components Select Associate Selected.


Check checkbox for PowerFlex cluster.

c. Click Continue.



d. Under OS Settings, enter the following settings:

Description Values
Host name selection Select <appropriate host name selection>
OS image Select < ESXi Image>
OS credentials Select <OS Credential Name>
NTP server Enter <NTP server IP address>

NOTE: Multiple NTP server IP addresses can be entered by using commas.

Use Node For Dell EMC PowerFlex Click checkbox


PowerFlex Role Select Compute Only or Hyperconverged
Enable compression Select appropriate.
Enable encryption Select appropriate.
Enable replication Select appropriate.
Switch Port Configuration Select Port Channel (LACP enabled)
Teaming and bonding configuration Select Route Based on IP hash

e. Under SVM OS Settings, enter the details within the following table:

SVM OS Settings Details


Host name selection Select < appropriate host name selection >
Host name template Enter < host name template >
OS credentials Select < OS Credentials >.
NTP server Enter < IP address >.

f. Under Hardware Settings, enter the details within the following table:

Hardware settings Details


Target boot device Select Local Flash Storage for Dell EMC PowerFlex
Node pool Select < pool name >

g. Under BIOS Settings, enter the details within the following table:

BIOS settings Details


System profile Select Performance
User accessible USB ports Select All ports on
Number of cores per processor Select All
Virtualization technology Select Enabled
Logical processor Select Enabled
Execute disable Select Enabled
Node interleaving Select Enabled

h. Under Network Settings, follow the steps below to add interfaces.


i. Click Add New Interface to create the first interface.
j. Under Interface 1, enter the following details:



Network settings Details
Port Layout Select Two port 25 gigabit

k. Under Port 1, click Choose Networks to open Interface 1 Port 1 Network Configuration window.
l. Select the checkboxes for the following networks:

Selected networks Description


powerflex-esx-mgmt-<vlanid> Hypervisor Management
powerflex-vmotion-<vlanid> Hypervisor Migration
Powerflex-mgmt-<vlanid > Powerflex-Management
pxe-<vlanid> PXE network

m. Click >> to add the selected networks to the right column and click Save.
n. Under Port 2, click Choose Networks to open Interface 1 Port 2 Network Configuration window.
o. Select the checkboxes for the following networks:

Selected networks Description


powerflex-data1-<vlanid > Powerflex-Data1
powerflex-data2-<vlanid > Powerflex-Data2
powerflex-data3-<vlanid > Powerflex-Data3
powerflex-data4-<vlanid > Powerflex-Data4

p. Click >> to add the selected networks to the right column and click Save.
q. Click Add New Interface to create the second interface.
r. Under Interface 2, enter the following details:

Hardware settings Details


Port layout Select Two port 25 gigabit.

s. Under Port 1, click Choose Networks to open Interface 2 Port 1 Network Configuration window.
t. Select the checkboxes for the following networks:

Selected networks Description


powerflex-esx-mgmt-<vlanid> Hypervisor Management
powerflex-vmotion-<vlanid> Hypervisor Migration
Powerflex-mgmt-<vlanid > Powerflex-Management
pxe-<vlanid> PXE network

u. Click >> to add the selected networks to the right column and click Save.
v. Under Port 2, click Choose Networks to open Interface 2 Port 2 Network Configuration window.
w. Select the checkboxes for the following networks:

Selected networks Description


powerflex-data1-<vlanid> Powerflex-Data1
powerflex-data2-<vlanid> Powerflex-Data2
powerflex-data3-<vlanid> Powerflex-Data3
powerflex-data4-<vlanid> Powerflex-Data4

x. Click >> to add the selected networks to the right column and click Save.



y. Click Validate Settings. If there are any errors, correct them, and then click Close.
z. Click Save to complete the clone creation.
7. Create the clusters.
a. Click Add Cluster to create PowerFlex Cluster.
b. Select PowerFlex Cluster for the component name.
c. Select Associate All option.
d. Click Continue.
e. Under PowerFlex Settings, enter the details within the following table:

PowerFlex settings Details


Target PowerFlex Gateway Select < New Target PowerFlex Gateway VM >

f. Click Save.
8. Create VMware Cluster.
a. Click Add Cluster to create VMware Cluster.
b. Select VMware Cluster for the component name.
c. Select Associate All option.
d. Click Continue.
e. Under Cluster Settings, enter the details within the following table:

Cluster Settings Details


Target Virtual Machine Manager Select < vCenter Server hostname >
Data Center Name Select < Create New Datacenter or an existing
Datacenter >.
New Datacenter Name Select < Datacenter Name >.
Cluster Name Select < Create New Cluster or an existing Cluster >.
New Cluster Name Select < Cluster Name >.
Cluster HA Enabled Select checkbox to enable.
Cluster DRS Enabled Select checkbox to enable.

f. Under vSphere VDS Settings, click the Configure VDS Settings button to open the Configure VDS Settings wizard.
g. For a standard deployment, select Auto Create All Port Groups or Create New Port Groups.
h. Click Next to go to the VDS Naming page.
i. Enter the details within the following table:

VDS Label Details


VDS1 VDS Name Enter <VDS Name>
VDS2 VDS Name Enter <VDS Name>

9. Click Next to go to the Port Group Select page.


10. Validate the port group names that were automatically generated.
11. Click Next to continue to the Advanced Networking page.
12. Validate that the appropriate port groups are created on the correct VDS.
13. Click Finish.
14. In the Confirm pop-up, click Yes.
15. Click Save.
16. In the Template Information box, click Publish Template.
17. In the pop-up, click Yes.
18. On the Compute Template page, under Template Information, click Deploy and select the following:



Deploy Settings Details
Select Published template Select <Name of Template>.
Service name Enter < Service Name >.
Service description Enter < Service Description >.
Firmware and software compliance Select the latest RCM version from list.
Who should have access to the service deployed from this template Select from the list who should have access to this service template.

19. Click Next to go to the Deployment Settings page.


20. Validate the settings and click Next to go to the Schedule Deployment page.
21. Leave the default, Deploy Now.
22. Click Next.
23. Verify the summary page and click Finish.

Adding volumes to a PowerFlex hyperconverged node or PowerFlex compute-only node
Steps
1. In PowerFlex Manager, click Services.
2. Click Service Name to open service.
3. In the Service Information action box, click Add Resources > Add Volumes in Resource Actions to open the Add
Volume wizard.
4. In the Add Volume wizard, select Add existing volume or Create new volume and click Next.
5. In the Create New Volume page, click Add New Volume and select options and enter the following details:

Volume 1 Details
Volume name Select Create New Volume.
New volume name Enter < New Volume Name >.
Storage pool Select Storage Pools.
Volume size (GB) Enter < Size Number >.
Datastore name Select Datastore Name.
New datastore name Enter < New Datastore Name >.
Volume type Select Thick or Thin.

6. Repeat Steps 1 through 5 for each additional volume.


7. Click Save.



Chapter 8: Restoring the PowerFlex Gateway
Use this procedure when the PowerFlex Gateway has been lost and must be restored.

About this task


During deployment, PowerFlex Manager sets the same password for the PowerFlex Gateway admin account, the PowerFlex
Gateway lockbox, MDMs, and LIA. When restoring a lost PowerFlex Gateway, you must set these passwords in the PowerFlex
Gateway to match the PowerFlex Gateway admin password set during deployment (or in the PowerFlex Manager Settings >
Credentials Management page) to maintain manageability by PowerFlex Manager.
You must also set the PowerFlex Gateway root password to match the PowerFlex Gateway root password set during
deployment (or in the PowerFlex Manager Settings > Credentials Management) page to maintain manageability by
PowerFlex Manager.

Prerequisites
You must have the following information available before beginning this procedure. The identifiers in brackets (<IDENTIFIER>)
are used in the procedure to represent the required values.

Description Identifier
PowerFlex Management IP address <MGMT IP>
PowerFlex Management VLAN <MGMT VLAN>
PowerFlex Data 1 IP <DATA1 IP>
PowerFlex Data 1 VLAN <DATA1 VLAN>
PowerFlex Data 2 IP <DATA2 IP>
PowerFlex Data 2 VLAN <DATA2 VLAN>
PowerFlex Gateway root password <ROOT PWD>
PowerFlex Gateway admin password <ADMIN PWD>
Default Gateway IP <DEF GW IP>
DNS Server IP <DNS IP>
NTP Server IP <NTP IP>
PowerFlex Gateway Domain <DOMAIN>
PowerFlex Gateway hostname <HOSTNAME>
Primary MDM IP <PRIMARY MDM IP>
Secondary MDM IP 1 <SECONDARY MDM IP 1>
Secondary MDM IP 2 (if 5-node MDM cluster) <SECONDARY MDM IP 2>

Steps
1. Install the PowerFlex Gateway.
a. Install the PowerFlex Gateway OVF and VMDK files.
b. Change the root password.
c. Configure the PowerFlex Gateway network interfaces.
d. Configure the PowerFlex Gateway DNS client.
e. Configure the PowerFlex Gateway NTP client.
f. Install the Java and PowerFlex Gateway RPMs.



2. Restore the PowerFlex Gateway configuration.

Configure SNMP for PowerFlex


Perform this procedure to configure SNMP for PowerFlex.

Prerequisites
Ensure that a lockbox exists and that it contains MDM credentials.
Enable the SNMP feature in the gatewayUser.properties file.

Steps
1. Use a text editor to open the gatewayUser.properties file, which is located in the following directory on the
PowerFlex installer / PowerFlex Gateway server:
● Linux: /opt/emc/scaleio/gateway/webapps/ROOT/WEB-INF/classes
● Windows: C:\Program Files\EMC\ScaleIO\Gateway\webapps\ROOT\WEB-INF\classes\
2. Locate the parameter features.enable_snmp and edit it as follows:
features.enable_snmp=true

3. Add the PowerFlex Manager IP address by editing the parameter snmp.traps_receiver_ip.


The SNMP trap receivers' IP address parameter supports up to two comma-separated or semicolon-separated host names
or IP addresses.

4. Optionally change the following parameters:

Option Description
snmp.sampling_frequency The MDM sampling period. The default is 30.
snmp.resend_frequency The frequency of resending existing traps. The default is 0, which means that traps for active
alerts are sent every sampling cycle.
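For reference, a minimal sketch of the relevant gatewayUser.properties lines after these edits (the trap receiver address is a placeholder for your PowerFlex Manager IP):

features.enable_snmp=true
snmp.traps_receiver_ip=192.168.105.105
snmp.sampling_frequency=30
snmp.resend_frequency=0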
5. Save and close the file.
6. Run the following command to restart the PowerFlex Gateway service:
service scaleio-gateway restart

Installing the PowerFlex Gateway


Use this procedure to install the PowerFlex Gateway.

Steps
1. Log in to PowerFlex Manager.
2. Click Templates > Sample template > Management PowerFlex Gateway > Clone.
3. In Template Name enter a template name.
4. Select a template category from the Template Category list. To create a template category, select Create New Category
and enter the Category name.
5. In Template Description enter a description for the template.
6. To specify the version to use for compliance, select the version from the Firmware and Software Compliance list or
select Use PowerFlex Manager appliance default catalog.
NOTE: You cannot select a minimal compliance version for a template, since it only includes server firmware updates.
The compliance version for a template must include the full set of compliance update capabilities. PowerFlex Manager
does not show any minimal compliance versions in the Firmware and Software Compliance list.

7. Indicate who should have access to the service deployed from this template by selecting one of the following options:



● Grant access to only PowerFlex Manager administrators.
● Grant access to PowerFlex Manager administrators and specific standard and operator users. Click Add Users to add
one or more standard and or operator users to the list. Click Remove Users to remove users from the list.
● Grant access to PowerFlex Manager administrators and all standard and operator users.
8. On the Additional Settings page, provide new values for the Network Settings, PowerFlex Gateway Settings, and
Cluster settings.
9. Click Finish.
10. Once the template is created, click Templates, select the PowerFlex Gateway template, and click Edit.
11. Edit each component (PowerFlex Gateway and VMware Cluster), select the required fields, and click Save.
12. Publish the template.

Installing the PowerFlex Gateway prior to PowerFlex 3.5
Use this procedure to install the PowerFlex Gateway on the PowerFlex management environment.

About this task


Download the PowerFlex Gateway from the Dell Support site. The PowerFlex Gateway uses the SVM OVF and
VMDK files. The SVM OVF and VMDK files have the filename format of: ScaleIOVM_Xnics_x.x.xxxxxxx.xxx.ovf, and
ScaleIOVM_Xnics_x.x.xxxxxxx.xxx.vmdk.

Steps
1. Download the SVM OVF and VMDK files and save them to a location that is accessible to the vCenter being used to manage
the PowerFlex appliance management environment.
2. Log in to VMware vCenter.
3. Deploy the PowerFlex Gateway OVF and VMDK files.
4. Type a unique name for the PowerFlex Gateway VM name and select a location for the VM.
5. Select a compute resource for the PowerFlex Gateway VM.
6. On the Review details page, click Next.
7. On the Select storage page, complete the following:
a. Set Virtual disk provisioning to Thick Provision Lazy Zeroed.
b. Set VM Storage policy to Datastore Default.
c. Select a datastore on which to install the VM. Do not install VMs on BOSS cards.
d. Click Next.
8. On the Select networks page:
a. Set VM Networks to <MGMT VLAN>.
b. Click Next.
9. Review details on Ready to complete page and then click Finish.
10. Wait for the PowerFlex Gateway OVF deployment to complete.
11. Right-click the PowerFlex Gateway VM and select Edit Settings. Set the following:
a. Network adapter 1: <MGMT VLAN>
b. Network adapter 2: <DATA1 VLAN>
c. Network adapter 3: <DATA2 VLAN>
d. Network adapter 4: <DATA3 VLAN>
e. Network adapter 5: <DATA4 VLAN>
12. Click OK.



Changing the root password on the VM
After deploying the PowerFlex Gateway OVF, you must change the root password of the VM.

Steps
1. Log in to VMware vCenter.
2. Power on the PowerFlex Gateway VM.
3. Use VMware virtual console to connect to the PowerFlex Gateway VM.
4. Log in using these credentials: User is root, password is admin.
5. Use the Linux passwd command to change the default password to <ROOT PWD>.
6. To log out of the console, type exit.
7. Log in to the root account using the new password <ROOT PWD> to ensure it works.

Configuring the PowerFlex Gateway network interfaces
Use this procedure to configure your PowerFlex Gateway network interfaces.

Steps
1. Find the MAC addresses of the PowerFlex Gateway VM by doing the following:
a. Log in to vCenter.
b. Right-click the PowerFlex Gateway VM and select Edit Settings.
c. Select Network adapter 1 (<MGMT VLAN>) and record the MAC address.
d. Repeat this step for Network adapter 2 (<DATA1 VLAN>), Network adapter 3 (<DATA2 VLAN>), Network adapter 4 (<DATA3 VLAN>), and Network adapter 5 (<DATA4 VLAN>).
e. Use the VMware virtual console to connect to the PowerFlex Gateway VM.
f. At the command prompt, type:

nmtui

g. In the NetworkManager TUI screen, select Edit a connection.


h. Under Ethernet, select Wired connection 1.
i. Compare the MAC address listed on the Device line to the MAC addresses recorded above so that you know which VLAN corresponds to Wired connection 1.
j. Select Cancel.
k. Repeat for Wired connection 2 and Wired connection 3 so that you have recorded which VLAN corresponds to each
wired connection.
2. Using the nmtui command, configure the Wired connection corresponding to PowerFlex appliance management VLAN
(<MGMT VLAN>).
a. On the =ETHERNET line, select Show.
b. Leave the Cloned MAC address line blank.
c. Leave the MTU line blank.
d. On the =IPv4 CONFIGURATION line, select Automatic and change to Manual then select Show.
e. On the Addresses line, select Add and enter the IP address of this interface (<MGMT IP>).
f. On the Gateway line, enter the default gateway (<DEF GW IP>).
g. On the DNS Servers line, select Add and enter the DNS server IP address (<DNS IP>).
h. On the Search domains line, select Add and enter the domain (<DOMAIN>). Do not select Never use this network
for the default route.
i. Select Ignore automatically obtained routes.
j. Select Ignore automatically obtained DNS parameters.
k. Select Require IPv4 addressing for this connection.
l. On the =IPv6 CONFIGURATION line, select Automatic and change to Ignore.
m. Select Automatically connect.



n. Select Available to all users.
o. To exit the screen, select OK.
3. Using the nmtui command, configure the Wired connection corresponding to PowerFlex Data 1 VLAN (<DATA1 VLAN>).
a. On the =ETHERNET line, select Show.
b. Leave the Cloned MAC address line blank.
c. Set MTU to 9000.
d. On the =IPv4 CONFIGURATION line, select Automatic and change to Manual, then select Show.
e. On the Addresses line, select Add and enter the IP address of this interface (<DATA1 IP>).
f. Leave the Gateway line blank.
g. Leave the DNS Servers line blank.
h. Leave the Search domains line blank.
i. Select Never use this network for the default route.
j. Select Ignore automatically obtained routes.
k. Select Ignore automatically obtained DNS parameters.
l. Select Require IPv4 addressing for this connection.
m. On the =IPv6 CONFIGURATION line, select Automatic and change to Ignore.
n. Select Automatically connect.
o. Select Available to all users.
p. To exit, select OK.
4. Using the nmtui command, configure the Wired connection corresponding to the PowerFlex Data2 VLAN (<DATA2
VLAN>).
a. On the =ETHERNET line, select Show.
b. Leave the Cloned MAC address line blank.
c. Set MTU to 9000.
d. On the =IPv4 CONFIGURATION line, select Automatic and change to Manual, then select Show.
e. On the Addresses line, select Add and enter the IP address of this interface (<DATA2 IP>).
f. Leave the Gateway line blank.
g. Leave the DNS Servers line blank.
h. Leave the Search domains line blank.
i. Select Never use this network for the default route.
j. Select Ignore automatically obtained routes.
k. Select Ignore automatically obtained DNS parameters.
l. Select Require IPv4 addressing for this connection.
m. On the =IPv6 CONFIGURATION line, select Automatic and change to Ignore.
n. Select Automatically connect.
o. Select Available to all users.
p. Select OK.
NOTE: If adding data5 and data6 VLANs for native asynchronous replication, repeat Steps 3 and 4.

5. On the Ethernet screen, select Back.


6. On NetworkManager TUI screen, select Quit and then OK.
7. Review network configuration with the Linux ip addr command.
8. Verify that you can ping each of the network interfaces and the default gateway (<DEF GW IP>). Also, ping a PowerFlex
node on all three networks.
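As a quick sanity check, a sketch of the verification (all addresses are placeholders; substitute the peer node IPs on each network):

ping -c 3 <DEF GW IP>
ping -c 3 <node management IP>
ping -c 3 <node data1 IP>
ping -c 3 <node data2 IP>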

Configuring the PowerFlex Gateway NTP client


Use this procedure to configure the PowerFlex Gateway NTP client.

Steps
1. Edit the chrony.conf file: vi /etc/chrony.conf
2. At about line 7, add the line: server <NTP IP> iburst
3. Save chrony.conf file and quit the editor: <ESC>:wq!
4. Set the timezone. For example, for the Chicago timezone: timedatectl set-timezone America/Chicago.



5. Reboot the PowerFlex Gateway by typing: reboot.
6. A few minutes after the system boots, the time synchronizes with the NTP server time. Verify this by using the Linux
command date in the VMware virtual console.
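Assuming the chrony tools shipped with the gateway VM, you can also confirm synchronization directly:

chronyc sources

The entry for <NTP IP> should show a ^* marker once the server is selected for synchronization.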

Configuring the PowerFlex Gateway hostname


Use this procedure to set the PowerFlex Gateway hostname.

Steps
1. Use the VMware virtual console to connect to the PowerFlex Gateway VM.
2. At the command prompt, type:

hostnamectl set-hostname <HOSTNAME>
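To confirm the change, a quick check:

hostnamectl status

The Static hostname line should show <HOSTNAME>.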

Installing the Java and PowerFlex Gateway RPMs


Use this procedure to install the Java and PowerFlex Gateway RPMs.

Steps
1. Use VMware virtual console to connect to PowerFlex Gateway VM.
2. At the command prompt, type: cd /root/install
3. Install the Java RPM by typing the following: rpm -ivh java-1.8.0-openjdk-headless-1.8.0.292.b10-1.el7_9.rpm
4. Install gateway RPM by typing:

GATEWAY_ADMIN_PASSWORD=<ADMIN PWD> rpm -i EMC-ScaleIO-gateway-3.0-100.208.x86_64.rpm
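As an optional sanity check before testing the web UI, a sketch:

rpm -qa | grep -Ei 'openjdk|scaleio-gateway'

Both packages should be listed.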

5. Confirm the correct network configuration and the installation of the RPMs by using a web browser to connect to the
PowerFlex Gateway (<MGMT IP>).
The PowerFlex Installer login dialog box opens.
6. Close the PowerFlex Installer box without logging in.
7. Dell EMC recommends creating a snapshot of the PowerFlex Gateway to allow recovery if there is a system failure.

Restoring the PowerFlex Gateway configuration


Use this procedure to restore the PowerFlex Gateway configuration.

Steps
1. Use an SSH client program like PuTTY to log in to the PowerFlex Gateway console (for example: Login: root, Password:
<ROOT PWD>).
2. Modify the gatewayUser.properties file:
a. Enter: cd /opt/emc/scaleio/gateway/webapps/ROOT/WEB-INF/classes.
NOTE: See Determining and switching the PowerFlex Meta Data Manager to find MDM primary and secondary IP
addresses.

b. Enter: vi gatewayUser.properties to edit the file and modify the following. IP addresses should be on the <MGMT IP> network:
● If you have a 3-node cluster: At about line 17: mdm.ip.addresses=<PRIMARY MDM IP>,<SECONDARY MDM IP
1>
● If you have a 5-node MDM cluster: At about line 17, mdm.ip.addresses=<PRIMARY MDM IP>,<SECONDARY
MDM IP 1>, <SECONDARY MDM IP 2>



● At about line 53: features.notification_method=none
● At about line 82: security.bypass_certificate_check=true
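For reference, a sketch of the edited lines for a 3-node MDM cluster (IP addresses are placeholders on the <MGMT IP> network):

mdm.ip.addresses=192.168.105.21,192.168.105.22
features.notification_method=none
security.bypass_certificate_check=true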
3. Create the PowerFlex Gateway lockbox credentials:

/opt/emc/scaleio/gateway/bin/FOSGWTool.sh --change_lb_passphrase --new_passphrase <ADMIN PWD>

4. Create the PowerFlex Gateway MDM credentials:

/opt/emc/scaleio/gateway/bin/FOSGWTool.sh --set_mdm_credentials --mdm_user admin --mdm_password <ADMIN PWD>

5. Create the PowerFlex Gateway LIA password:

/opt/emc/scaleio/gateway/bin/FOSGWTool.sh --set_lia_password --lia_password <ADMIN PWD>

6. Restart the PowerFlex Gateway service:

service scaleio-gateway restart

7. Log in to PowerFlex Manager, go to the Resources page, select the PowerFlex Gateway, and then click Run Inventory.
8. Go to Services and verify Overall Service Health.

Deploying the PowerFlex GUI presentation server


You can use a sample template to clone a PowerFlex GUI presentation server and deploy it using the PowerFlex Manager.

Prerequisites
Discover and set the PowerFlex management controller VMware vCenter as Managed in the PowerFlex Manager and select
this VMware vCenter and vSAN datastore for the presentation server template.

Steps
1. Log in to PowerFlex Manager.
2. On the PowerFlex Manager menu bar, click Template > Sample template > Management - presentation server and
click Clone in the right pane.
3. In the Clone Template dialog box, enter a template name under Template Name.
4. Select a template category from the Template Category list. To create a template category, select Create New Category
and enter the Category name.
5. In the Template Description, enter a description for the template.
6. To specify the version to use for compliance, select the version from the Firmware and Software Compliance list or
choose Use PowerFlex Manager appliance default catalog.
You cannot select a minimal compliance version for a template, since it only includes server firmware updates. The
compliance version for a template must include the full set of compliance update capabilities. PowerFlex Manager does
not show any minimal compliance versions in the firmware and software compliance list.
7. Indicate access rights to the service deployed from this template by selecting one of the following options:
● PowerFlex Manager administrators
● PowerFlex Manager administrators and specific standard and operator users
○ Click Add Users to add one or more standard and or operator users to the list and click Remove Users to remove
users from the list.
● PowerFlex Manager administrators and all standard and operator users
8. Click Next.
9. On the Additional Settings page, provide new values for the Network Settings, PowerFlex Presentation Server
Settings, and Cluster Settings.



Under PowerFlex Presentation Server settings, select the presentation server credential that is created for the
presentation server.
10. Select the PowerFlex management controller VMware vCenter or single vCenter.
11. Click Finish.
12. Once the template is created, click Templates, select the PowerFlex presentation server template, and click Edit.
13. Edit each component (PowerFlex presentation server and VMware Cluster), select the required fields, and click Save.
14. Publish the template and click Deploy.
NOTE: The presentation server is autodiscovered on the Resources page after the service is successfully deployed.

Linking and unlinking the MDM to the presentation server web UI
You can only link one MDM at any given time (1:1). Unlink the existing system if you want to link another system. Unlink the
MDM cluster from the web UI if you want to connect to another MDM cluster, and follow the first-time log in procedure to log
in to the new MDM cluster.

Link the MDM to the presentation server web UI


Steps
1. Log in to the presentation server web UI link (https://Presentation_Server_IP_Address:8443/).
2. Enter the primary MDM IP address.
NOTE: This is a one-time setup wizard that appears the first time you link the presentation server to the primary MDM.

3. Approve Certificates.
4. Enter the MDM cluster username and password.

Unlink the MDM from the presentation server web UI


About this task

NOTE: Unlinking should be done from the presentation server login page.

Steps
1. Log in to the presentation server web UI link (https://Presentation_Server_IP_Address:8443/).
2. Log in to the PowerFlex GUI.
3. Click Settings > Unlink system.



Chapter 9: Upgrading VMware vSphere for patch releases
This section provides details on upgrading a patch release of VMware vSphere on the PowerFlex appliance controller node.

Upgrading VMware vSphere infrastructure management components
Use this task to upgrade the VMware vCenter Server Appliance (vCSA) on the PowerFlex appliance controller node.

About this task


When upgrading to VMware vSphere 7.0 Update 2a, VMware vCenter deploys vSphere Cluster Services (vCLS) VMs that are automatically created once a host is added to a cluster, and on existing clusters. These VMs are managed by VMware vCenter. Avoid making any changes to these VMs, as doing so impacts the HA and DRS services on VMware vCenter.
vCLS VMs should be migrated to shared storage; when performing any maintenance activity, these VMs are migrated to the next available host or datastore in the cluster. The VMs are created on the first three hosts added to the cluster, and there is a maximum of three VMs per cluster.

Prerequisites
Ensure the following are completed before you initiate the upgrade process:
● Back up the VMware vSphere infrastructure management components.
● Download the appropriate VMware vSphere vCenter patch and VMware ESXi ISO files from the download repository to the
jump server in the PowerFlex controller node.
● Take a snapshot of the vCSA prior to upgrading. See the VMware KB article for more information.

Steps
1. Take a snapshot of the VMware vSphere Management VMs (PowerFlex Manager appliance, VMware Controller vCenter, embedded operating system jump server, Secure Remote Services Gateway, PowerFlex Gateway, PowerFlex presentation server, and optionally the CloudLink Center).
a. Check the datastore disk usage to verify that enough disk space is available to create snapshots.
b. Right-click and select Snapshot > Take Snapshot.
c. Enter a name and description, clear the Snapshot the virtual machine's memory and click OK.
d. Repeat these steps for each management VM.
2. Log in to the VMware vCenter appliance management port and create a backup.
a. Use the backup utility https://{FQDN}:5480 to create a backup.
3. Using the VMware vSphere client, upload the VMware-vCenter-Server-Appliance-X.x.x.xxxxx-xxxxxxx-
patch-FP.iso to the local datastore on the PowerFlex controller node.
4. From the VMware vSphere client, click Storage > Datacenter > PERC-01 > Files.
a. Create a folder named ISO (if not created already).
b. Click the upload icon and upload the required ISO file.
NOTE: This step may fail if the browser finds a certificate that it does not trust. If a failure occurs, upload the ISO
files to an existing folder.

c. Allow time for the upload to complete. You can view the status at the bottom of the screen.
d. From the VMware vSphere client, select the VM to attach the ISO.
e. Go to Hosts and Clusters.
f. On the Summary screen, expand VM Hardware.



g. Click Edit settings, then from the CD and DVD drive row, choose Datastore ISO file from the menu and select the Connected check box.
h. Click Browse, choose the ISO folder, and select VMware-vCenter-Server-Appliance-X.xxxx. Click OK.
i. Note the IP address of the VM; it is used later.
5. Open Mozilla Firefox on the jump server and go to the IP address of the vCSA appliance VM to be upgraded on port 5480 as
noted in the previous step. For example, https://<IP-ADDRESS>:5480.
6. Log in to the interface with username root and default password VMwar3!!.
NOTE: If you have to change the root password, see the VMware KB article.

7. On the left menu, select Update > Check Updates. Click Check CD ROM. Wait while the system validates the ISO
attached earlier.
8. When complete, select Stage and Install. Click I accept and click Next. Clear Join the VMware Customer... and click
Next. Check I have backed up vCenter... and click Finish.
9. Click OK. To reboot, right-click the VM from vCenter and select Power > Restart Guest OS. Allow up to 10 minutes for the VM
to reboot.
NOTE: When rebooting the PowerFlex management controller vCSA, web client connectivity is lost. After the reboot,
log back on to the web client.

10. Log in to the VMware vSphere client again and validate that the SSO domain is running, and disconnect the ISO.
NOTE: The VMware vSphere client may take some time to start, as the vCSA can take up to 15 additional minutes to
start all VMware vCenter services.

11. Verify that you have the correct Intelligent Catalog vCenter version, as follows:
a. Use the vSphere client to log in to the vCenter server.
b. Click Help > About VMware vSphere.
c. A dialog appears with the build number of the VMware vCenter Server. Verify that it matches the requirement.

Stage and upgrade the iDRAC and firmware


Use this task to stage the iDRAC and firmware.

Prerequisites
The iDRAC firmware upgrade must be done before any other upgrades: perform it first, and then upgrade the other component firmware.

Steps
1. Log in to the iDRAC web interface by opening a Mozilla Firefox or Google Chrome browser and go to https://<ip-
address-of-idrac>.
NOTE: Under Server Information, review the System Host Name and verify that you have connected to the correct
hostname.

2. Select Maintenance > System Update > Manual Update and click Choose File.
3. Go to the Intelligent Catalog folder /shares/xxxxx and select the component update file. The components to update
include:
● iDRAC service module
● Dell BIOS
● Dell BOSS controller
● Dell iDRAC/Lifecycle controller
● Dell Intel X550/X540/i350
● Dell Mellanox ConnectX-4 LX
● Dell PERC H740P Mini RAID controller
4. Click Upload.
5. Select the firmware that you uploaded and click Install Next Reboot.



CAUTION: Do NOT click Install and Reboot, as it could cause a system outage.

NOTE: The installation will be in the job queue for the next reboot. Click Job Queue from the prompted information
message to monitor the progress for the installation.

Shutting down all the VMs running on the controller host
Use this task to shut down all the VMs running on the controller node.

Steps
1. Log in to the web UI of the controller VMware ESXi host directly.
2. Go to Virtual Machines.
3. Shut down all the VMs except the jump server running on the controller host.

Upgrading VMware vSphere ESXi


Use this task to upgrade VMware vSphere ESXi.

Steps
1. Use WinSCP to copy the ESXi-X.x.0-xxxxxx.zip patch file to the /vmfs/volumes/PERC-01/ISO folder on the
VMware ESXi server.
2. Using SSH, connect to the VMware ESXi host and check for the uploaded file by typing the following command: cd /vmfs/volumes/PERC-01/ISO.
3. For VMware ESXi 7.0 use the following command to install VMware ESXi .zip patches: esxcli software vib update
-d /vmfs/volumes/PERC-01/ISO/VMware-ESXi-7.0<version>-depot.zip

NOTE: To run this command, use the path that was used when connecting from WinSCP to transfer the ZIP file.

4. To update the Profile image on the host, complete the following:


a. To optionally list the profile of the VMware ESXi .zip archive, type esxcli software sources profile list -d /vmfs/volumes/PERC-01/<VMware.zip>.
b. To upgrade the VMware ESXi version, type esxcli software profile update -p DellEMC-ESXi-X.x-xxxxxxxxx-xxx -d /vmfs/volumes/PERC-01/VMware-VMvisor-Installer-X.x-xxxxxxxxx-xxx.zip.
When the upgrade completes successfully, a completion message displays, followed by the list of upgraded packages.
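As an optional check after the upgrade (and again after the reboot), a sketch:

esxcli software profile get

Confirm that the reported image profile matches the Intelligent Catalog VMware ESXi version.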
5. Go to iDRAC > Launch virtual console and select Boot > UEFI Device Path to enter system BIOS.
NOTE: Steps 5 through 8 apply only to PowerEdge R640, R740xd, and R840 servers for the initial bootmode change
from BIOS to UEFI.

6. Reboot the VMware ESXi host. Select Power > Reset (Warm Boot).
7. Press F2 to enter system setup.
8. Click System BIOS > Boot setting, select Boot mode as UEFI.
NOTE: Ensure that the BOSS card is set as the primary boot device from the UEFI Device Path under the Boot tab.
If the BOSS card is not set as the primary boot device, reboot the server and change the UEFI boot sequence from
System BIOS > Boot setting > UEFI BOOT Settings.

9. Click Back > Back > Finish > Yes > Finish > OK > Finish > Yes. The node reboots. Go to Exit maintenance mode.



Powering on all the VMs running on the controller host
Use this task to power on all the VMs running on the controller node.

Steps
1. Log in to the web UI of the controller VMware ESXi host directly.
2. Go to Virtual Machines.
3. Power on all the VMs running on the controller host.

Upgrading the iDRAC service module


Use this task to upgrade the iDRAC Service Module (iSM).

Steps
1. Use WinSCP to upload ISM-Dell-Web-3.4.x-xxxx.VIB-ESX6i-Live_A00.zip to the /vmfs/volumes/DASxx/ISO folder.
2. Use SSH to access the VMware ESXi nodes and type esxcli software vib install -d /vmfs/volumes/DASxx/ISO/ISM-Dell-Web-3.4.x-xxxx.VIB-ESX6i-Live_A00.zip.

Change the SVM CPU clock reservation


Use this task to change the SVM CPU clock reservation and CPU shares on VMware vCenter.

Prerequisites
Ensure you have completed the following:
● Take a snapshot of the SVM.
● Check the CPU and clock speed.

Steps
1. Log in to the VMware vCenter with administrator credentials.
2. Right-click the SVM and select Edit Settings.
3. Expand CPU.
4. Select Reservation and enter the value in GHz.
5. Select Shares and select High from the menu.

Reservation (GHz) = (SVM vCPU count / 2) x (clock speed of the underlying CPU in GHz)


CPU Clock speed (GHz) SVM vCPU Reservation (GHz)
6248R 3 16 24
6230 2.1 10 10.5
6242 2.8 14 19.6
5215 2.5 8 10.5
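For example, an SVM with 16 vCPUs on a 3 GHz (6248R) processor reserves (16 / 2) x 3 = 24 GHz, matching the first row of the table.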



Find the CPU and clock speed
Steps
1. Log in to VMware vCenter.
2. Click Host and Cluster.
3. Expand the Cluster and select Physical node.
4. Find the details against Processor Type under the Summary tab.

Migrating vCLS VMs on controller nodes


Use this task to migrate vCLS VMs on controller nodes.

About this task


The VMware vSphere vCSA 7.0Ux update creates VMware vCLS VMs when the host is added to the cluster.
WARNING: VMware vCSA manages these VMs; do not make changes to them, as doing so may impact the HA and DRS services. Skip this task if the VMs are already migrated to a shared datastore.

Steps
1. Click Administration > vCenter Server Extension > vSphere ESX Agent Manager > VMs. The VMs are also visible on
the VMs and templates view.
2. The VMs are created under the vCLS folder once the host is added to the cluster.
3. On the VMs and Templates view, click the vCLS folder.
4. Right-click the VM and click Migrate.
5. On the window, click Yes.
6. Click Change storage only.
7. For controller nodes, migrate them to the vSAN datastore.
8. Repeat the above procedure for all the vCLS VMs.

Upgrading the embedded operating system jump VM


Use this procedure to upgrade the embedded operating system jump VM.

Steps
1. Obtain the updated image from the IC software repository.
2. Deploy the existing embedded jump VM and assign a valid IP address with Internet connectivity. A valid DNS entry must be defined.
3. Run df -h to verify that there is enough available free space on the /shares partition of the embedded jump VM to
download the RPM packages and create the ZIP file. At least 15 GB is recommended.



4. Run uname -a and review the output to verify the Linux kernel version.

5. Run cat /etc/centos-release to verify the embedded operating system version.

Installing the offline repository


Use this task to install an offline repository.

Steps
1. Create a directory in the /shares volume called Centos-RPM, type: sudo mkdir /shares/Centos-RPM.
2. Copy the repository update ZIP file to the /tmp directory of the embedded operating system VM using WinSCP or similar.
3. Extract the contents of the repository update ZIP file to the /shares/Centos-RPM directory, type: sudo unzip /tmp/
repofilename.zip -d /shares/Centos-RPM.
4. Create a new repository file in the /etc/yum.repos.d directory, type: sudo vi /etc/yum.repos.d/centos.rpm.repo. In this example, the file that is created is /etc/yum.repos.d/centos.rpm.repo.
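A minimal sketch of the repository file contents, assuming the RPMs were extracted to /shares/Centos-RPM (the repository ID and name are illustrative):

[centos-rpm]
name=CentOS local RPM repository
baseurl=file:///shares/Centos-RPM
enabled=1
gpgcheck=0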
5. Clean the yum cache, type: # sudo yum clean all.
6. Verify access to the new repository, type: # sudo yum repolist.
7. Deploy the updates from the repository, type: sudo yum update. When prompted, answer y.
8. When the process is complete, reboot the system, type: reboot.
9. Once the system reboot has completed, verify the kernel version, type: uname -a.
10. Verify the embedded operating system version, type: cat /etc/centos-release.
11. Remove the RPM files, type: sudo rm -f -r /shares/Centos-RPM.
12. Remove the repository index file, type: sudo rm /etc/yum.repos.d/centos.rpm.repo.
13. Clean yum cache, type sudo yum clean all.



Chapter 10: Upgrading a PowerFlex appliance environment
Use this section when there are new versions of PowerFlex Manager and Intelligent Catalog available.

About this task


You can update the PowerFlex appliance environment by following these steps in the order that is shown here.
If your system has native asynchronous replication enabled, note the following:
● The standard upgrade process should be followed on each system.
● Upgrade one system fully, before upgrading the second system.
● It is ideal to have both systems running the same Intelligent Catalog version. The recommendation is to upgrade them both
when logistically possible.
● Do not pause the replication process when following standard upgrade procedures.
WARNING: If the PowerFlex hyperconverged nodes or PowerFlex compute-only nodes are part of NSX-T, ensure
you or VMware Services upgrades the NSX-T Data Center before upgrading VMware vSphere ESXi on the nodes.
Dell EMC is not responsible for installing or upgrading NSX-T Data Center.

Prerequisites
Complete the following workflow to upgrade a PowerFlex appliance environment:
● Upgrade PowerFlex Manager.
● Add new compliance file (Intelligent Catalog) and operating system images to PowerFlex Manager.
● In PowerFlex Manager, add the new compatibility management file. If you are using PowerFlex Manager 3.6 or prior, skip this step.
● Upgrade the PowerFlex Gateway.
● Upgrade to a supported version of VMware vCenter.
NOTE: The version depends on the VMware ESXi version available with the Intelligent Catalog. For example, if you are going to upgrade VMware ESXi from 6.5 to 6.7, first upgrade your VMware vCenter to 6.7 before starting the upgrade.
● Upgrade the PowerFlex appliance.
● Upgrade the PowerFlex GUI presentation server.

Intelligent catalog (IC) trains and the upgrade process


An Intelligent Catalog (IC) is a catalog of storage, firmware, and drivers that have been engineered and validated together. Staying on an engineered IC reduces the chance of a system outage due to conflicting components or other problems, such as known issues.
A new IC train is created when major version changes occur.

NOTE: IC jumps greater than two are considered high risk. Contact Dell EMC Support before proceeding.

To upgrade to a new IC train, you first upgrade to the end of the IC train on which your system resides, and then upgrade to the
new IC train. Performing these two upgrades keeps the system on an engineered and validated path. This is the safest choice
for system stability and data integrity.
For example, the following diagram shows the multihop upgrade from IC 33_30_00 to IC 37_361_00, and a two-step upgrade if the customer is upgrading from IC 36_360_00 to IC 37_361_00.



Change the maximum transmission unit (MTU) value
This section provides details on changing the MTU values on the VMware VMkernel port group and the dvSwitch on the VMware vCenter.
Before the MTU value is updated, back up the switch port configuration and verify that the port channel for the impacted host is updated to 9216 using show running-configuration interface port-channel <portchannel number>. When this is completed, verify that the dvSwitch is backed up. To back up the dvSwitch, see Back up and verify the dvSwitch configuration.

NOTE: If the MTU value is already set to 9000, ignore the Change the maximum transmission unit... tasks.

See the following tables for more details on MTU values:

Switch MTU

Switch Default/current Recommended
Dell PowerSwitch - 9216
Cisco Nexus - 9216
cust_dvswitch 1500 9000

VMK MTU

VMK Default/current Recommended
vMotion 1500 9000
mgmt 1500 1500/9000

Back up and verify the dvSwitch configuration


Steps
1. Click the menu and choose Networking.
2. Click the impacted dvSwitch and click the Configure tab.
3. On the properties screen, verify the MTU value.
If the MTU value is already set to 9000, ignore the configuration below.



Change the maximum transmission unit (MTU) on the access switch
Use this task to change the maximum transmission unit (MTU) values to 9216/jumbo on physical switch port.

Steps
Log in to the access switch with administrative credentials.

If you are updating a... Type the following...


Cisco Nexus switch

interface port-channel31
description Downlink-Port-Channel-to-r840-01-dvswitch1
no shutdown
switchport mode trunk
switchport trunk allowed vlan 89,91-92,152,160
mtu 9216
vpc 31
spanning-tree port type edge

Dell PowerSwitch switch

interface port-channel31
description Downlink-Port-Channel-to-r840-01-dvswitch1
no shutdown
switchport mode trunk
switchport trunk allowed vlan 89,91-92,152,160
mtu 9216
vlt-port-channel 31
spanning-tree port type edge
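To verify the change, a sketch using the command referenced earlier (Dell OS10 syntax shown; on Cisco NX-OS the equivalent is show running-config interface port-channel 31):

show running-configuration interface port-channel 31

Confirm that the output includes mtu 9216.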

Change the maximum transmission unit (MTU) on the cust_dvswitch
Use this task to change the MTU on the cust_dvswitch.

Steps
1. Log in to the VMware vCenter with administrator credentials.
2. Select Networking.
3. Select cust_dvswitch.
4. Right-click and select Edit Settings.
5. Select Advanced and change the MTU value to 9000.

Change the maximum transmission unit (MTU) for VMware vMotion VMK
About this task
This task is optional for the management VMware VMK. If you are ready to use or implement jumbo frames, repeat this task on the management VMware VMK.

Steps
1. Click Host and Clusters.



2. Select the node and click Configure.
3. From Networking, select VMkernel adapters.
4. Select the vMotion VMK and click Edit.
5. On the Port Properties tab, change the MTU to 9000.
6. Repeat steps 1 through 5 for the other nodes.
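As an optional check from each upgraded host, a sketch using the VMware ESXi CLI:

esxcli network ip interface list

Confirm that the vMotion VMkernel interface reports an MTU of 9000.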

Upgrading the PowerFlex Manager virtual appliance


Use the backup and restore method to upgrade to PowerFlex Manager version 3.7.
PowerFlex Manager 3.7 introduces file system sizing changes on the virtual appliance. To take advantage of these changes,
we recommend using the backup and restore method to upgrade. For details, see https://www.dell.com/support/kbdoc/en-us/
000189267/powerflex-manager-how-to-upgrade-to-version-3-7-0-using-backup-and-restore-method?lang=en.
In version 3.7, 100 GB of additional capacity is added to the /var partition, increasing the total size to 300 GB for the PowerFlex
Manager virtual appliance. This accommodates larger database sizes and backing up the database. If you choose to upgrade
with Secure Remote Services or upgrade from a local path, you get the latest PowerFlex Manager code, but the capacity is still
200 GB. If you then want to get 300 GB of capacity, you can do the backup/restore step on top of the Secure Remote Services
or local upgrade.

Upgrade PowerFlex Manager using backup and restore


PowerFlex Manager 3.7 includes significant changes in the filesystem. To take advantage of the changes, it is recommended that you follow this workflow:
● Back up the PowerFlex Manager 3.6 appliance
● Deploy PowerFlex Manager 3.7
● Restore the backup of the PowerFlex Manager appliance.
NOTE: You must upgrade to PowerFlex Manager 3.6 to use backup and restore for upgrading.

Back up using PowerFlex Manager


Use this procedure to run the backup of the appliance manually.

About this task


● PowerFlex Manager backup files include the following information:
○ Activity logs
○ Credentials
○ Deployments
○ Resource inventory and status
○ Events
○ Initial setup
○ IP addresses
○ Jobs
○ Licensing
○ Networks
○ Templates
○ Users and roles
○ Resource module configuration files
○ Performance metrics
CAUTION: If you back up a PowerFlex Manager virtual appliance with a working alert connector configuration,
and restore that backup onto a different IP address, the alert connector shows an error state. The Secure
Remote Services gateway allows communication on only the original IP address. You must deregister the alert
connector after restoring the backup and then re-register the alert connector.



Steps
1. From the menu, click Settings > Backup and Restore.
2. From the Backup and Restore page, click Backup Now.
3. Select one of the following:
● To use general settings that are applied to all backup files, from Settings and Details, click Use Backup Directory
Path and Encryption Password.
● To use custom settings, from Backup Directory Path:
○ Enter a file path where the backup file is saved using either NFS (host:/share) or CIFS (\\host\share\).
○ Optionally, enter a username and password in the Backup Directory, User Name, and Backup Directory Password
fields.
○ From the Encryption Password field, enter a password that is required to open the backup file, and verify the
encryption password by entering the password in the Confirm Encryption Password field. The password can
include any alphanumeric characters.
4. Click Backup Now.
NOTE: Note the PowerFlex Manager management IP address, OOB IP address, netmask, gateway, PXE network, DNS, and domain name for configuring the IP address after the deployment of the new PowerFlex Manager appliance.

Power off the PowerFlex Manager appliance


Use this procedure to power off the PowerFlex Manager appliance.

Steps
1. Log in to the Management Controller VMware vCenter.
2. Right-click on the PowerFlex Manager appliance.
3. Click Power > Shut Down Guest OS.

Take a snapshot of the PowerFlex Manager appliance


Use this procedure to take a snapshot of the PowerFlex Manager appliance.

Steps
1. Log in to the Management Controller VMware vCenter.
2. Right-click on the PowerFlex Manager appliance.
3. Select Snapshots > Take snapshot.
4. Uncheck Snapshot the virtual machine's memory and enter a description.
5. Click OK.

Deploy PowerFlex Manager


Use this procedure to download the PowerFlex Manager Open Virtual Appliance (OVA) and deploy the PowerFlex Manager
virtual appliance.

Prerequisites
Log in to the Dell Technologies Support site, download the PowerFlex Manager OVA file, and save it to a location that is
accessible to the VMware vSphere Client.

Steps
1. Log in to VMware vSphere Client.
2. Right-click Management ESXi host and select Deploy OVF Template.
The Deploy OVF Template wizard displays.
3. On the Select template page, enter the URL where the OVA is located or select Local file and browse to the location
where the OVA is saved.



4. Click Next.
5. On the Select name and folder page, enter a name for the VM (up to 80 characters) and select a datacenter where the template will be stored.
6. Click Next.
7. On the Select a compute resource page, select a location where the deployed template runs.
8. Click Next.
9. On the Review details page, verify that the template details are correct and click Next.
10. On the License agreements page, read the license agreement, and select I accept all License agreements and click
Next.
11. On the Select storage page, complete the following:
a. Select Thin provision from the Select virtual disk format menu.
b. Select a datastore from the Datastore Clusters menu and click Next.
12. On the Select networks page, complete the following:
a. Select a destination network for the management source network.
b. Select a destination network for the OS Installation source network.
The operating system installation network is the PXE network.
c. Select a destination network for the OOB Management source network.
The OOB management network is the dedicated iDRAC network.
d. Click Next.
13. On the Ready to Complete page, review the configuration data and click Finish to deploy the PowerFlex Manager virtual
appliance.

Configure the networks


Configure the VMware ESXi management, out-of-band (OOB) management, and operating system installation networks through
the Dell EMC Initial Appliance Configuration UI.

Prerequisites
● Ensure you have the information gathered in Back up using PowerFlex Manager (IP address, netmask, gateway, DNS and
domain) from the old PowerFlex Manager to configure the new PowerFlex Manager.

Steps
1. Map the networks:
a. Log in to VMware vSphere Client.
b. Right-click the PowerFlex Manager virtual appliance and select Edit Settings.
c. On the Virtual Hardware tab, click the Network adapter menu for the VMware ESXi management, the OOB
management, and the operating system installation networks and take note of the Port ID and MAC Address values for
each network.
d. Power on PowerFlex Manager.
e. Log in to the PowerFlex Manager virtual appliance through the VM console using the following credentials:
● Username: delladmin
● Password: delladmin
f. Click Agree to accept the License Agreement and click Submit.
g. Log out of the Dell EMC Initial Appliance Configuration UI.
h. Enter ifconfig.
The network connections display.
i. To identify which network connection is mapped to which network, you must check the MAC address of each of the
three network connections that are displayed against the MAC address of each of the three networks that you noted
above in step c.
The networks are mapped as follows:

Network adapter Network Network connection
1 VMware ESXi management ens160
2 OOB management (dedicated iDRAC network) ens192
3 Operating system installation (PXE network) ens224

2. Enter pfxm_init_shell to restart the Dell EMC Initial Appliance Configuration UI and click Network Configuration.
If the UI does not display, enter sudo pfxm_init_shell and enter the username delladmin and your password.
3. Configure the VMware ESXi management network, complete the following:
a. On the Network Connections page, select ens<network_connection> and click Edit the selected connection.
b. On the General tab, ensure the Automatically connect to this network when it is available check box is selected.
c. Click the IPv4 Settings tab and from the Method list, select Manual.
d. In the Addresses pane, click Add, and enter the Address, Netmask, and Gateway.
e. Enter the IP addresses in the DNS servers box.
f. Enter the search domain.
g. Click Save.
4. Configure the OOB management network, which is the dedicated iDRAC network, complete the following:
a. On the Network Connections page, select ens<network_connection> and click Edit the selected connection.
b. On the General tab, ensure the Automatically connect to this network when it is available check box is selected.
c. Click the IPv4 Settings tab and from the Method list, select Manual.
d. In the Addresses window, click Add, and enter the Address and Netmask.
e. Click Routes and ensure Use this connection only for resources on its network is selected.
f. Click Save.
5. Configure the operating system installation network, which is the PXE network, complete the following:
a. On the Network Connections page, select ens<network_connection> and click Edit the selected connection.
b. On the General tab, ensure the Automatically connect to this network when it is available check box is selected.
c. Click the IPv4 Settings tab and from the Method list, select Manual.
d. In the Addresses pane, click Add, and enter the Address and Netmask.
e. Click Routes and ensure the Use this connection only for resources on its network check box is selected and click
OK.
f. Click Save and exit the window.
g. Select Date/Time Properties, click the Time Zone tab, and verify the system clock uses UTC.
h. Select Time zone > OK.
i. Click Change hostname and enter the hostname.
j. Click Update Hostname > OK.
6. Log out of the PowerFlex Manager virtual appliance.

Next steps
Log in to the PowerFlex Manager UI through your browser using the URL that is displayed in the Dell EMC Initial Appliance
Configuration UI. For example, https://<IP_Address>/ui, using the following credentials:
● Username: admin
● Password: admin
If you can successfully log in to the PowerFlex Manager UI, PowerFlex Manager successfully deployed.
If you cannot log in to PowerFlex Manager, ensure you are using the correct <IP_Address> by entering ip address in the
command line and searching for the IP address of the PowerFlex Manager virtual appliance. The <IP_Address> should be the
same <IP_Address> that displayed in the Dell EMC Initial Appliance Configuration UI.
Click Cancel to cancel the Setup Wizard.

Map networks
Use this procedure if the networks are not mapped.

Steps
1. Log in to VMware vSphere Client.



2. Right-click the PowerFlex Manager virtual appliance and select Edit Settings.
3. On the Virtual Hardware tab, click the Network adapter menu for the VMware ESXi management, the OOB management,
and the operating system installation networks and take note of the Port ID and MAC Address values for each network.
4. Power on PowerFlex Manager.
5. Log in to the PowerFlex Manager virtual appliance through the VM console using the following credentials:
● Username: delladmin
● Password: delladmin
6. Click Agree to accept the License Agreement and click Submit.
7. Log out of the Dell EMC Initial Appliance Configuration UI.
8. Enter ifconfig.
The network connections display.
9. To identify which network connection is mapped to which network, you must check the MAC address of each of the three
network connections that are displayed against the MAC address of each of the three networks that you noted above in
step 3.
The networks are mapped as follows:

Network adapter Network Network connection


1 VMware ESXi management ens160
2 OOB management (dedicated iDRAC network) ens192
3 Operating system installation (PXE network) ens224

Restore PowerFlex Manager


Use this procedure to restore PowerFlex Manager user created data to an earlier configuration that is saved in a backup file.

About this task


Ensure you perform frequent backups to prevent data loss and corruption.
CAUTION: Restoring an earlier configuration restarts PowerFlex Manager and deletes data created after the
backup file to which you are restoring. Any running jobs could be terminated.

Steps
1. On the menu bar, click Settings and click Backup and Restore.
2. On the Backup and Restore page, click Restore Now.
3. Enter a file path name in the backup directory path and file name box that specifies the backup file to be restored. Use one
of the following formats:
● NFS—host:/share/filename.tar.gz
● CIFS—\\host\share\filename.tar.gz
4. Enter the username and password in the Backup Directory User Name and Backup Directory Password fields to log in
to the location where the backup file is stored.
5. Enter the encryption password in the Encryption Password field to access the backup file. This is the password that was
provided when the backup file was created.
6. Click Test Connection > Close and click Restore Now.
7. When a confirmation message is displayed, click Yes or No.
The restore process starts. During the restore, PowerFlex Manager reboots.
NOTE: If you back up a PowerFlex Manager virtual appliance with a working alert connector configuration and restore
that backup onto a different IP address, the alert connector comes up in an error state. The Secure Remote Services
gateway allows communication on only the original IP address. In this case, deregister the alert connector after restoring
the backup, and then re-register it.



Resynchronize an operating system image repository and compliance
version
Use the resynchronize option to restore the OS image and Compliance version from the database after a backup and restore.

About this task


If an operating system image was uploaded as a part of an ISO file, you must resynchronize the OS image repository from the
OS Image Repositories tab. However, if the operating system image was uploaded as a part of a compliance ZIP file, go to the
Compliance Versions tab in Compliance and OS Repositories and resynchronize the ZIP file there.
This procedure provides steps for performing the resynchronization from the OS Image Repositories and Compliance
version tabs.

Steps
1. To resynchronize the operating system image repository for an operating system image that was uploaded as part of an ISO
file:
a. On the Compliance and OS Repositories page, click the OS Image Repositories tab.
b. From the Available Actions drop-down menu, click Resynchronize for a repository in an Error state.
The Resynchronize OS Repository page is displayed.
c. Enter the user credentials and click Test Connection to test the network connection.
NOTE: You cannot edit the Source Path and Filename.

d. Click Resynchronize.
The repository state changes to Copying state.
2. To resynchronize the Compliance bundle at Compliance version tab:
a. On the Compliance and OS Repositories page, click Compliance Versions tab.
The compliance bundle will be in an Error state.
b. On the Available Actions drop-down menu, click Resynchronize for a repository in an Error state.

Add a new compatibility management file


Use this procedure to add a new compatibility management file to PowerFlex Manager.

About this task


Compatibility management helps PowerFlex Manager to recognize the correct Intelligent Catalog version and provides valid
upgrade path details for the appliance and Intelligent Catalog.
NOTE: If the compatibility management file is not uploaded to the PowerFlex appliance, upgrading the PowerFlex Manager
appliance and service to the latest version is blocked.
Compatibility management also helps bring the system into compliance and provides details about supported and valid upgrade paths.

Steps
1. Log in to PowerFlex Manager.
2. Click Settings and select Virtual appliance management.
3. On the Compatibility management section, click Add.
4. Download the compatibility management file from Dell Technologies Support site to the jump server.
5. Click Upload from Local to use a local file. Then, click Choose File to select the GPG file and click Save.



Enable remote access to the PowerFlex Manager VM
Use this procedure to enable SSH and access the PowerFlex Manager VM through the command line to complete tasks like
administration or system maintenance tasks.

Steps
1. Log in to the PowerFlex Manager appliance console using the delladmin username. If you do not see the command line
prompt, log out of the shell and log back in. Type sudo su. Enter the delladmin password.
2. Type:
systemctl enable sshd
systemctl start sshd
3. Type exit to log out, and return to the delladmin user.
4. Connect to the PowerFlex Manager management IP address with the SSH client to verify that SSH is enabled.
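For example, from the jump server (the address is a placeholder for your PowerFlex Manager management IP):

ssh delladmin@<PowerFlex-Manager-management-IP>

A successful login confirms that the SSH service is enabled and reachable.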

Post validation
Ensure that all PowerFlex services in PowerFlex Manager are up and accessible.

Delete snapshot of old PowerFlex appliance


After successful deployment and restore of the PowerFlex Manager appliance, delete the snapshot of the old PowerFlex
Manager VM.

Steps
1. Log in to Management Controller VMware vCenter.
2. Right-click on the PowerFlex Manager appliance.
3. Select Snapshots > Manage Snapshots.
4. Select the snapshot and click Delete.
5. On the Delete Snapshot window, click Delete to delete the VM snapshot.

Delete the old PowerFlex Manager VM


After successful deployment and restore of the PowerFlex Manager appliance, delete the old PowerFlex Manager VM from the
disk.

Steps
1. Log in to Management Controller VMware vCenter.
2. Right-click on the PowerFlex Manager appliance.
3. Click Delete from Disk.
4. On the Confirm Delete window, click Delete.

Next steps
Add the compatibility management file.

Upgrading the PowerFlex Manager virtual appliance using Secure Remote Services
You can use Secure Remote Services to upgrade the PowerFlex Manager virtual appliance.

About this task


To upgrade the virtual appliance with Secure Remote Services, you must first configure the alert connector.



If you have registered with Secure Remote Services, whenever a new version of the virtual appliance is available, PowerFlex
Manager displays a banner at the top of all pages to notify you of the new release. The banner displays only for users with the
administrator role. It displays until a user with the administrator role takes an action.

Prerequisites
Take a backup of the PowerFlex Manager appliance settings.

Steps
1. Log in to PowerFlex Manager.
2. To perform a backup of the appliance, go to Settings > Backup and Restore.
a. If a backup has never been performed, you must configure backup settings before you can click the Backup Now button.
b. For help on how to configure the backup settings, click Edit next to settings and click ? in the Settings and Details
screen. This action takes you to the online help in PowerFlex Manager, which provides information about configuring the
backup settings.
3. On the banner, click View Details on the Actions menu.
Alternatively, on the menu bar, click Settings, and then click Virtual Appliance Management.

4. In the Appliance Upgrade Settings section, you can see the Current Virtual Appliance Version and check the Available
Virtual Appliance Version field to see if a newer version of PowerFlex Manager is available.
5. To the right of the Appliance Upgrade Settings section, click Edit.
6. To update to the latest version using Secure Remote Services, select Update Appliance from configured Dell EMC
Secure Remote Services.
If you indicate that you want to update the virtual appliance using Secure Remote Services, the Repository Path field on
the Virtual Appliance Management page shows Dell EMC Secure Remote Services (SRS). Otherwise, the field
shows the network path that is entered in the Edit Appliance Upgrade Settings dialog.
7. At the top of the Virtual Appliance Management page, click Update Virtual Appliance.
8. On the Update PowerFlex Manager page, verify the following fields are correct:
● PowerFlex Manager version compatible
● Are current Intelligent Catalogs compatible
● Current virtual appliance version
● Available virtual appliance version
● Repository path

9. In the Type UPDATE POWERFLEX MANAGER to confirm field, type Update PowerFlex Manager and click Yes to
update your appliance.
The update process displays messages indicating the progress of the update. Once the update is complete, the system
restarts and you are redirected to the login page.

10. Log in to the PowerFlex Manager virtual appliance.

Next steps
If you are updating PowerFlex Manager from a release prior to 3.3, you must configure iDRAC nodes to automatically send alerts
to PowerFlex Manager:
1. Click Settings.
2. Under Settings, click Credentials Management.
3. On the Credentials Management page, edit the credential for each node and ensure that the correct SNMP community
string is included in the credential.
Select a node and click Edit to review the SNMP v2 community string and make any required changes.
The default community string is public. To use a different value, overwrite this string. The string that you specify must
match the current community string setting on the iDRAC server.
4. Under Settings, click Virtual Appliance Management.
5. In the SNMP Trap Forwarding section, review the iDRAC SNMP community strings.



Click Edit to see the list of SNMP community strings. The list should include any that were previously added on the
Credentials Management page. For these community strings, the Used By column shows all the credentials that use
these community strings, and the Created By column shows Admin. You cannot update or delete SNMP community
strings that are in use by credentials. When a credential that uses a community string is deleted from the Credentials
Management page, that community string is automatically removed from the SNMP Trap Forwarding page.
6. In the Alert Connector section, click Configure nodes for alert connector.
Check the Jobs page to see the running job. Wait for it to complete before proceeding.
7. To verify that the alert connector is receiving alerts, click Send Test Alert.
8. Go to the Alerts page to verify that you are receiving the alerts that you expect to see from the servers.

Add a new compatibility management file using Secure Remote Services


Use this procedure to add a new compatibility management file to PowerFlex Manager.

About this task


Compatibility management helps PowerFlex Manager to recognize the correct Intelligent Catalog version and provides valid
upgrade path details for the appliance and Intelligent Catalog.
NOTE: If the compatibility management file is not uploaded to the PowerFlex appliance, upgrading the PowerFlex Manager
appliance and service to the latest version is blocked.
Compatibility management also helps bring the system into compliance and provides details about supported and valid upgrade paths.

Steps
1. Log in to PowerFlex Manager.
2. Click Settings and select Virtual appliance management.
3. On the Compatibility management section, click Add.
4. Click Download from Secure Remote Services (Recommended).
5. Alternatively, click Upload from Local to use a local file. Then, click Choose File to select the GPG file and click Save.

Restarting the PowerFlex Manager virtual appliance


Use this task to restart PowerFlex Manager.

About this task


To restart the virtual appliance, you must be a user with the administrator role. The restart operation logs off all other users and
cancels any running jobs.

Steps
1. On the menu bar, click Settings, and then click Virtual Appliance Management.
2. On the Virtual Appliance Management page, click Reboot Virtual Appliance. A message displays confirming that you
want to restart the virtual appliance.
3. Click Yes to confirm. The system restarts.
4. Once the reboot is complete, click Click to log in and provide your credentials.

Upgrading components
About this task
If a PowerFlex Manager upgrade added new required fields to components within the template from which a service was
deployed, Confirm Service Settings is displayed on the Services page. Although upgrading components is not mandatory,
certain service or resource functions are not available until the upgrade is complete.



Steps
1. Click Confirm Service Settings to launch the Upgrade Service Components window.
Fields in this window vary depending on which components contain newly required settings.
2. Complete all the displayed fields, and click Save.

Adding a new Intelligent Catalog file and OS images to PowerFlex Manager
Add a new compliance file (Intelligent Catalog) and new OS images files using PowerFlex Manager.

About this task


PowerFlex Manager only supports Intelligent Catalog upgrades. Intelligent Catalog downgrades are not supported. Once you
initiate an upgrade, it must run to completion. Contact your Dell EMC account team if you need further assistance with an
upgrade.

Steps
1. Log in to PowerFlex Manager.
2. On the menu bar, click Settings and select Compliance and OS Repositories.
3. Click the question mark ? in the upper right corner of the Add Compliance File page and follow the online help.
4. Verify that Make this the default version for compliance checking is selected.
NOTE: The Intelligent Catalog does not contain OS image files. You must load OS files separately by clicking Settings
and selecting Compliance and OS Repositories.

5. Go to dell.com/support and log in using the Service Tag associated with any of the PowerFlex nodes in the PowerFlex
appliance.
6. Go to the Drivers & Download tab to download the Intelligent Catalog and OS image files.
NOTE: To be notified when new software releases are available, click Driver Notifications at the bottom of the
Drivers & Downloads tab.

7. On the Compliance and OS Repositories page, click the OS Image Repositories tab and click Add.
8. In the Add OS Image Repository dialog box, enter the name of the repository, the image type, and the path and file
name of the OS image.
9. Click Add.

Upgrade the PowerFlex presentation server


Use this procedure to upgrade the PowerFlex presentation server.

Prerequisites
Ensure that you have the package for the PowerFlex presentation server, and access to the server hosting the PowerFlex
presentation server.

Steps
1. Use SSH/SFTP to copy the PowerFlex presentation server package to the /tmp directory on the PowerFlex presentation
server.
2. Type rpm -Uvh EMC-ScaleIO-mgmt-server-3.5-X.noarch.rpm.
3. After the upgrade is complete, reconnect the MDM to the PowerFlex GUI:
a. Navigate to https://<presentation server IP>:8443.
b. Enter the MDM IP address and click Next.
c. Agree to the certificates.
d. Log in with administrative credentials.



Upgrading PowerFlex Gateway
PowerFlex Gateway requires a minimum of 8 GB memory and 2 vCPUs for upgrading to and running PowerFlex 3.0.x.x and later
versions. Verify the PowerFlex Gateway memory and update it if required before performing this task.

About this task


In PowerFlex Manager, the PowerFlex Gateway upgrade includes the RPM, operating system patch and all the software
components.
NOTE: PowerFlex Manager requires the LIA password to be the same as the MDM cluster or PowerFlex Gateway admin
password. If it is different, PowerFlex Manager resets the LIA password to match the MDM cluster or PowerFlex Gateway
admin password during the upgrade of PowerFlex storage-only components.

Steps
1. Log in to PowerFlex Manager.
2. On the menu bar, click Resources.
3. On the Resources page, select the checkbox for the PowerFlex Gateway resource and click Update Resources.
4. In the Update Resources wizard, check the Needs Attention section to see whether any of the nodes need to be
reconfigured before upgrade. Select any nodes that you want to reconfigure. To select all nodes, click the button to the left
of SDS Name.
5. Click Next.
6. On the Summary page, select Allow PowerFlex Manager to perform non-disruptive updates now or Schedule
nondisruptive updates to run later.
7. Specify the type of update you want to perform by selecting one of the following options:
● Instant Maintenance Mode enables you to perform updates quickly. PowerFlex Manager does not migrate the data.
● Protected Maintenance Mode (PowerFlex 3.5 and later) enables you to perform updates that require longer than 30
minutes in a safe and protected manner.
NOTE: To verify that the PowerFlex appliance node is ready for a PowerFlex upgrade, check the Needs attention tab.
If a node appears in this tab, select the node and click Finish. This ensures that the SVM has the required CPU and
RAM capacity.

8. If you only selected a subset of the nodes for reconfiguration, confirm the reconfiguration by typing Reconfigure nodes.
Otherwise, confirm the update action by typing Update PowerFlex.
If you reconfigured only a subset of the nodes, you need to restart the wizard later to reconfigure the remaining nodes
before you can complete the upgrade process.
9. If you are updating a PowerFlex Gateway, type Update PowerFlex to confirm that you are ready to proceed with the
update.
10. Click Finish and click Yes to confirm.
NOTE: When you perform this task, PowerFlex Gateway, all MDMs, and all SDS nodes are updated in a rolling,
nondisruptive update. After the update is initiated, you cannot stop this process until it completes.

Upgrading Java on the PowerFlex Gateway and PowerFlex GUI presentation server
Upgrade Java to OpenJDK on the PowerFlex Gateway and PowerFlex GUI presentation server.

Prerequisites
● Skip this task if using PowerFlex Manager 3.7. In PowerFlex Manager 3.7, Java gets updated as part of the PowerFlex
Gateway upgrade using PowerFlex Manager.
● In PowerFlex Manager 3.6 or prior, Java is updated manually by this task.
● Download OpenJDK and its dependency packages from the release repository.



● Copy the downloaded files (JavaPackages.tar.gz and java-1.8.0-openjdk-
headless-1.8.0.292.b10-1.el7_9.rpm) to /root/install on the PowerFlex Gateway VM and PowerFlex GUI
presentation server using WinSCP.
● On the PowerFlex Gateway and PowerFlex GUI presentation server VM, verify the version, type: # java -version.
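Example output of # java -version before the upgrade (a sketch only; the exact version and build strings vary by environment):

java version "1.8.0_281"
Java(TM) SE Runtime Environment (build 1.8.0_281-b09)
Java HotSpot(TM) 64-Bit Server VM (build 25.281-b09, mixed mode)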

Steps
1. For the PowerFlex Gateway, shut down the gateway service, type: # systemctl stop scaleio-gateway.
2. Install or upgrade the OpenJDK dependencies on the PowerFlex Gateway and PowerFlex GUI presentation server.
a. Change directory to /root/install, type: # cd /root/install.
b. List the dependencies, type: # ls
c. Check that the copied OpenJDK dependency file JavaPackages.tar.gz is present.
d. Decompress the file, type: # tar -zxf JavaPackages.tar.gz.
e. Install or upgrade OpenJDK dependency packages, type:

#cd JavaPackages/
#rpm -Uvh *.rpm

3. Query the existing Oracle Java package information, type: # rpm -qa | grep -i jre.
4. Capture the existing Java version and delete it, type: #rpm -e <java_version>.
5. Install new OpenJDK, type: #rpm -ivh /root/install/java-1.8.0-openjdk-
headless-1.8.0.292.b10-1.el7_9.rpm.
6. Verify the version, type: # java -version.
The version is OpenJDK 64-bit server VM build 25.XXX-b09, mixed mode.
NOTE: The upgrade will automatically upgrade LockBox and restart the service.

7. Validate the gateway service is running, type: # systemctl status scaleio-gateway.service.
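Example output (trimmed; the unit file path and details are illustrative and vary by release):

# systemctl status scaleio-gateway.service
scaleio-gateway.service - ...
   Loaded: loaded (/usr/lib/systemd/system/scaleio-gateway.service; enabled)
   Active: active (running)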


8. Validate the LockBox credentials, type:

# cd /opt/emc/scaleio/gateway/bin
# ./FOSGWTool.sh --query_esx_credentials

Example of the command output:

Default ESX credentials exist.


Specific ESX credentials configuration:
Ip: 192.168.100.1-192.168.100.20

9. Validate the gateway is working:


a. Log in to PowerFlex Gateway GUI.
b. Retrieve the system topology, enter the MDM details.
c. Verify all PowerFlex clusters details are listed here.

Update the PowerFlex GUI presentation server


Use this procedure to upgrade the PowerFlex GUI presentation server using PowerFlex Manager.

About this task


PowerFlex GUI presentation server can be upgraded using the PowerFlex Manager. The update includes the RPM, OS patch and
all required software components.
NOTE: If you are using PowerFlex Manager version 3.6 or prior, you must manually discover the presentation server in
PowerFlex Manager.

Prerequisites
To discover the resource, complete the following:



1. In the menu bar, click Resources. On the Resources page, click Discover from the All Resources tab.
2. On the Welcome page, read the instructions, and click Next.
3. On the Identify Resources page, click Add Resource Type, and perform the following steps:
a. From the Resource Type list, select a Presentation server.
b. Enter the PowerFlex GUI presentation server IP address in the IP/hostname range field.
c. Select one of the Managed options from the Resource State list.
d. Select an existing or create a credential from the Credentials list to discover resource types. To create a credential,
click + to the right of Credentials. PowerFlex Manager maps the credential type to the type of resource you are
discovering. Click Next > Finish.
The discovered presentation server should be listed on the Resources page.

Steps
1. Log in to PowerFlex Manager.
2. Click Resources.
3. Select the PowerFlex GUI presentation server and click Update Resources.
4. On the Apply Resource page, choose either:
● Allow PowerFlex Manager to perform firmware and software updates now

PowerFlex Manager applies the firmware updates and reboots this resource immediately. This update could be disruptive if
the resource is in use.
● Schedule firmware and software updates

PowerFlex Manager applies the firmware updates at the selected date and time and reboots this resource. This update
could be disruptive if the resource is in use.

5. Click Apply.
6. On the Confirm page, click Yes.

Update PowerFlex appliance nodes


You can update a PowerFlex appliance node using PowerFlex Manager.

Steps
1. In PowerFlex Manager web UI, select Services.
2. Select the existing deployment that you are upgrading to view its details.
3. On the Services page, select a service. To change the IC, select View Compliance Report.
4. On the Node Compliance Report page, click Change next to the Compliance status. Select the preferred
compliance file (RCM). Confirm the change by typing CHANGE COMPLIANCE FILE and click Save and Close.
NOTE: In earlier versions of PowerFlex Manager, on the Details page, see the Target version/Target IC version at
the top right. To change the IC, click Change Target /Change Target IC. You can set it to the default IC or different
IC.

5. On the Service Details page, in the right pane under Service Actions, click View Compliance Report.
6. From the compliance report, view the firmware or software components, select the specific nodes that are non-compliant,
and click Update Resources.
a. To perform a non-disruptive update right away, select Allow PowerFlex Manager to perform firmware and software
updates now. Select one of the following to specify the type of update:
● Instant Maintenance Mode - provides quick updates. PowerFlex does not migrate the data.
● Protected Maintenance Mode - provides updates that require longer than 30 minutes in a safe and protected
manner.
b. To perform a non-disruptive update at a later time, select Schedule firmware and software updates.
c. To perform a disruptive update right away for a full upgrade, select Allow PowerFlex Manager to perform disruptive
updates now.



The full system upgrade process is faster. However, the nodes, as well as all of the data, are unavailable while the
upgrade is in process. If you are certain that you want to proceed, type REBOOT ALL NODES AT ONCE.
d. Click Apply and click Yes to confirm.
The update process handles node, BIOS, firmware, VMware ESXi driver updates, and VMware ESXi major version upgrades
automatically. For PowerFlex, the update process also updates SDCs in any hyperconverged and compute-only services, if
these SDCs are not in compliance with the new version for the service. For CloudLink, PowerFlex Manager automatically
updates the CloudLink Agent.
PowerFlex Manager does not upgrade VMware vCenter itself. However, it does check the VMware vCenter version to
determine if it matches the VMware ESXi version. If the VMware ESXi version is greater than the VMware vCenter version,
PowerFlex Manager blocks the VMware ESXi host upgrade and displays an error. PowerFlex Manager instructs you to
upgrade VMware vCenter first, or use a different compliance version that is compatible with the installed VMware vCenter
version.

7. If you encounter any errors while performing firmware or software updates, you can view the PowerFlex Manager logs for
the service to see where the error might have occurred.
a. On the Service Details page, in the right pane, under Service Actions, click Generate Troubleshooting Bundle.
This creates a compressed file that contains PowerFlex Manager application logs, PowerFlex Gateway logs, iDRAC
lifecycle logs, Dell EMC PowerSwitch switch logs, Cisco Nexus switch logs, and VMware ESXi logs. The logs are for the
current service only.
Alternatively, you can access the logs from a VMware console, or by using SSH to log in to PowerFlex Manager, if you
have SSH enabled.

Migrating VMware vSphere Cluster Services (vCLS) VMs
Use this task to migrate the VMware vCLS VMs to a service datastore using the Migrate vCLS VM wizard in PowerFlex
Manager and bring the service into managed mode.

About this task


VMware vCLS is a new feature in VMware vSphere 7.0 Update 2a. This feature ensures cluster services such as vSphere DRS
and vSphere HA are available to maintain the resources and health of the workloads running in the clusters independent of the
VMware vCenter server instance availability.

Steps
1. Log in to PowerFlex Manager and select the Services tab.
2. Select the existing hyperconverged cluster.
3. Go to the Service page and click the Migrate vCLS wizard.
4. Select the volume and datastore to migrate the vCLS VMs.
NOTE: For example, the volumes could be named powerflex-service-vol-1 and powerflex-service-vol-2, and the
datastores powerflex-esxclustershotname-ds1 and powerflex-esxclustershotname-ds2.

5. Click Finish.
This action creates two volumes and two datastores of 16 GB each, and the vCLS VMs are migrated to the service datastores.

Upgrading Cisco NX-OS 7.x to Cisco NX-OS 9.x


Use this procedure to upgrade Cisco NX-OS 7.x to Cisco NX-OS 9.x.

About this task


Extra steps are required to compact the Cisco NX-OS image files for the upgrade to complete successfully.



Steps
1. Start an SSH session to the switch.
2. Commit the running configuration to persistent storage, type: copy running-config startup-config. In addition,
copy the configuration to a remote server (jump server), as shown in the example below.
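For example, the configuration can be copied off the switch with SCP (the user, address, and file name shown are illustrative;
depending on your management VRF configuration, a vrf management keyword may be required):

copy running-config scp://filescp@x.x.x.x/tor1-backup.cfg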
3. Determine the current running version, type: show version.
NOTE: The output from running the show version command displays a running firmware version. Depending on your
switch model, near the bottom of the display, the previous running version may display and should not be confused with
the current running version.

Software
BIOS: version 07.61
NXOS: version 7.0(3)I7(3)

4. Check the contents of the bootflash directory to verify that enough free space is available for the new Cisco NX-OS
software image.
a. To check the free space on the flash, type: dir bootflash:.
For example:

Usage for bootflash://
1275002880 bytes used
375902208 bytes free
1650905088 bytes total

b. Delete older firmware files to make additional space, if needed, type: delete bootflash:nxos.7.0.2.I7.6.bin.
NOTE: Do not delete the current running version of the firmware files, as shown in the previous show version.
The Cisco Nexus 3000 and Cisco Nexus 9000 switches do not provide a confirmation prompt before deleting them.

5. If upgrading a Cisco Nexus 3000 series switch, type the following to compact the current running image file: switch#
install all nxos bootflash:nxos.7.0.3.I7.bin compact
6. Using an SCP, FTP, or TFTP server, type the following to copy the firmware file to local storage on the Cisco Nexus switch:
● Use the following TFTP command to copy the image:

copy tftp://XXX.XXX.XXX.XXX/nxos.9.3.3.bin bootflash:

● Use SCP to copy the image:

copy scp://filescp@x.x.x.x//home/filescp/image/nxos.9.3.3.bin bootflash:

The firmware files are hardware model-specific. The firmware follows the same naming convention as the current running
firmware files (show version).
NOTE: For Cisco Nexus 3000 series switches, use the following command to copy the image:

copy scp://filescp@x.x.x.x//home/filescp/image/nxos.9.3.3.bin bootflash:nxos.9.3.3 compact

NOTE: If warnings about not enough space to copy files continue, perform an SCP copy with the compact option to
compact the file as it is copied over. Doing this may result in encountering a defect. The workaround for this defect
requires cabling the management port and configuring its IP address on a shared network with the SCP server, allowing
the copy to take place across that management port. Once complete, go to Step 7.

7. Identify the upgrade impact, type: show install all impact.

switch# show install all impact nxos bootflash:nxos.9.3.3.bin

Validate from the output that the image is compatible for an upgrade.

8. Start the upgrade process, type: install all nxos bootflash:nxos.9.3.3.bin.


NOTE: For Cisco Nexus 3000 series switches use the following command for the install process:

install all nxos bootflash:nxos.9.3.3.bin compact



NOTE: If you receive errors regarding free space on the bootflash, go to Step 3 and ensure you removed older firmware
files to free additional disk space for the upgrade to take place. Check all subdirectories on bootflash when searching for
older bootflash files.
Installer is forced disruptive. Pre-upgrade check failed. Return code 0x40930062 (free
space in the filesystem is below threshold).
After the upgrade, the switch reboot could take 5 to 10 minutes. Use a continuous ping command from the jump server to
validate when the switch is back online.

Installer will perform compatibility check first. Please wait.
Installer is forced disruptive

Verifying image bootflash:/nxos.9.3.3.bin for boot variable "nxos".
[###############################] 100% -- SUCCESS
Verifying image type.
[###############################] 100% -- SUCCESS
Preparing "nxos" version info using image bootflash:/nxos.9.3.3.bin.
[###############################] 100% -- SUCCESS
Preparing "bios" version info using image bootflash:/nxos.9.3.3.bin.
[###############################] 100% -- SUCCESS
Performing module support checks.
[###############################] 100% -- SUCCESS
Notifying services about system upgrade.
[###############################] 100% -- SUCCESS

Switch will be reloaded for disruptive upgrade.
Do you want to continue with the installation (y/n)? [n] y

Install is in progress, please wait.
Performing runtime checks.
[###############################] 100% -- SUCCESS
Setting boot variables.
[###############################] 100% -- SUCCESS
Performing configuration copy.
[###############################] 100% -- SUCCESS
Module 1: Refreshing compact flash and upgrading bios/loader/bootrom.
Warning: please do not remove or power off the module at this time.
[###############################] 100% -- SUCCESS

Finishing the upgrade, switch will reboot in 10 seconds.

● For continuous ping, type: ping 1.1.1.1 -t.


9. Using SSH, log back in to the switch with username and password.
10. Display the entire upgrade process, type: switch# show install all status.
11. Verify that the switch is running the correct new version, type: switch# show version.
For example:

Software
BIOS: version 07.66
NXOS: version 9.3(3)
BIOS compile time: 06/11/2019
NXOS image file is: bootflash://nxos.9.3.3.bin
NXOS compile time: 12/22/2019 2:00:00 [12/22/2019 09:00:37]

Upgrading the electronic programmable logic device (EPLD)
Steps
1. Start an SSH session to the switch.
2. Commit the running configuration to persistent storage, type: copy running-config startup-config. In addition,
copy the configuration to a jump server.
3. Determine the current running version, type show version module <number> epld.

Wasps-N93180YC-TOR1-A# show version module 1 epld

EPLD Device Version



-------------------------------
MI FPGA 0X4
IO FPGA 0X9

4. Check the contents of the bootflash directory to verify that enough free space is available for the image.
a. Check the free space on the flash, type: dir bootflash:.
Example command output:

Usage for bootflash://
1275002880 bytes used
375902208 bytes free
1650905088 bytes total

b. Delete older firmware files to make additional space, if needed.


NOTE: The Cisco Nexus 3000 and Cisco Nexus 9000 switches do not provide a confirmation prompt before deleting
them.

NOTE: The Cisco Nexus 3172 switch and Cisco Nexus 3132 switch do not require EPLD upgrade.

5. Using an SCP, FTP, or TFTP server, type the following to copy the firmware file to local storage on the Cisco Nexus switch:
● Use the following TFTP command to copy the image:

copy tftp://XXX.XXX.XXX.XXX/n9000-epld.9.3.3.img bootflash:

● Use SCP to copy the image:

copy scp://filescp@x.x.x.x//home/filescp/image/n9000-epld.9.3.3.img bootflash:
6. To determine if you must upgrade, type show install all impact epld bootflash:n9000-epld.9.3.3.img.

Wasps-N93180YC-TOR1-A# show install all impact epld bootflash:n9000-epld.9.3.3.img

Retrieving EPLD versions.... Please wait.


Images will be upgraded according to following table:
Module Type EPLD Running-Version New-Version
Upg-Required
------ ---- ----------- --------------- -----------
------------
1 SUP MI FPGA 0x04 0x04
No
1 SUP IO FPGA 0x09 0x15
Yes
Compatibility check:
Module Type Upgradable Impact Reason
------------------------------------------------
1 SUP Yes disruptive Module Upgradable

7. Start the upgrade process, type: install epld bootflash:n9000-epld.9.3.3.img module all.

Wasps-N93180YC-TOR1-A# install epld bootflash:n9000-epld.9.3.3.img module all


Digital signature verification is successful
Compatibility check:
Module Type Upgradable Impact Reason
------------------------------------------------
1 SUP Yes disruptive Module Upgradable

Retrieving EPLD versions... Please wait


Images will be upgraded according to following table:
Module Type EPLD Running-Version New-Version
Upg-Required
------ ---- ----------- --------------- -----------
------------
1 SUP MI FPGA 0x04 0x04
No
1 SUP IO FPGA 0x09 0x15
Yes
The above modules require upgrade.
The switch will be reloaded at the end of the upgrade
Do you want to continue (y/n) ? [n] y



Proceeding to upgrade Modules.

Starting Module 1 EPLD Upgrade

Module 1 : IO FPGA [Programming] : 100.00% ( 64 of 64 sectors)


Module 1 EPLD upgrade is successful.
Module Type EPLD Running-Version New-Version
Upg-Required
------ ---- ----------- --------------- -----------
------------
1 SUP MI FPGA 0x04 0x04
No
Module 1 EPLD upgrade is successful.

NOTE: After the upgrade, the switch reboot could take 5 to 10 minutes. Use a continuous ping command from the jump
server to validate when the switch is back online.

8. Using SSH, log back in to the switch with username and password.
9. Verify that the switch is running the correct new version, type: switch# show install epld status.

Wasps-N93180YC-TOR1-A# show install epld status

1) Module 1 upgraded on Wed Apr 8 02:26:31 2020(545665 us)


EPLD Install Image: EPLD image file 9.3.3. built on Sun Dec 22 02:25:45 2019

Status: EPLD Upgrade was Successful

EPLD
Curr Ver Old Ver
------------------------------------------------------
IO FPGA
0x15 0x9

2) Module 1 upgraded on Wed Apr 8 02:23:31 2020 (545546 us)


EPLD Install Image: EPLD image file 9.3.3. built on Sun Dec 22 02:25:45 2019

Status: EPLD Upgrade was Successful

The Golden (primary backup) copy of the EPLD now needs to be updated.

10. Type show version module 1 epld.

Vikings-N93180YC-A# sh version module 1 epld

EPLD Device Version


---------------------------------------
MI FPGA 0x10
IO FPGA 0x17

11. Update the Golden EPLD image, type: install epld bootflash:n9000-epld.9.3.3.img module 1 golden.

Vikings-N93180YC-A# install epld bootflash:n9000-epld.9.3.3.img module 1 golden


Digital signature verification is successful
Compatibility check:
Module Type Upgradable Impact Reason
------ ----------------- ---------- ---------- ------
1 SUP Yes disruptive Module Upgradable

Retrieving EPLD versions.... Please wait.


Images will be upgraded according to following table:
Module Type EPLD Running-Version New-Version Upg-Required
------ ---- ------------- --------------- ----------- ------------
1 SUP MI FPGA 0x10 0x10 Yes
1 SUP IO FPGA 0x17 0x20 Yes
The above modules require upgrade.
The switch will be reloaded at the end of the upgrade
Do you want to continue (y/n) ? [n] y



Proceeding to upgrade Modules.

Starting Module 1 EPLD Upgrade

Module 1 : MI FPGA [Programming] : 100.00% ( 64 of 64 sectors)


Module 1 : IO FPGA [Programming] : 100.00% ( 64 of 64 sectors)
Module 1 EPLD upgrade is successful.
Module Type Upgrade-Result
------ ------------------ --------------
1 SUP Success

Module 1 EPLD upgrade is successful.

Resetting Active SUP (Module 1) FPGAs. Please wait...

NOTE: After the upgrade, the switch reboot could take 5 to 10 minutes. Use a continuous ping command from the jump
server to validate when the switch is back online.

12. Using SSH, log back in to the switch with username and password.
13. Verify that the switch is running the correct new version, type: switch# show version module 1 epld.

Vikings-N93180YC-A# sh version module 1 epld

EPLD Device Version


---------------------------------------
MI FPGA 0x10
IO FPGA 0x20



11
Upgrading VMware NSX-T Edge nodes
Use the procedures in this chapter to upgrade the VMware NSX-T Edge nodes to the latest Intelligent Catalog when available.
To upgrade the VMware NSX-T Edge nodes to the latest Intelligent Catalog, you or VMware Services must upgrade the NSX-T
Data Center before upgrading VMware vSphere ESXi on the NSX-T Edge nodes.
Upgrade one VMware NSX-T Edge node fully before proceeding to upgrade the next node.
Ideally, all the NSX-T Edge nodes should be running the same Intelligent Catalog version. The recommendation is to upgrade
all of them together when logistically possible.
Use the following workflow to complete the upgrade:
1. Stage and upgrade the iDRAC and firmware.
2. Validate that the vSAN is error free (vSAN storage option only).
3. Shut down all the VMs running on the NSX-T Edge Gateway hosts.
4. Place the NSX-T Edge Gateway ESXi host in maintenance mode.
5. Upgrade VMware vSphere ESXi.
6. Power on all the VMs running on the NSX-T Edge Gateway hosts.
7. Upgrade the iDRAC service module.
8. Upgrade the VMware distributed switches.
9. Upgrade the VMware vSAN disk format (vSAN storage option only).
10. Verify the VMware vSAN health (vSAN storage option only).
11. Migrate the vCLS VM to NSX-T Edge nodes (vSAN storage option only).

Stage and upgrade the iDRAC and firmware


Stage the iDRAC and firmware for the VMware NSX-T Edge nodes.

Prerequisites
The iDRAC firmware upgrade must be done before any other upgrades. Perform the iDRAC firmware upgrade first, then upgrade
the other component firmware.

Steps
1. Log in to the iDRAC web interface by opening a Mozilla Firefox or Google Chrome browser and going to https://<ip-
address-of-idrac>.
NOTE: Under Server Information, review the System Host Name and verify that you have connected to the correct
hostname.

2. Select Maintenance > System Update > Manual Update and click Choose File.
3. Go to the Intelligent Catalog folder /shares/xxxxx and select the component update file. The components to update
include:
● iDRAC service module
● Dell BIOS
● Dell BOSS controller
● Dell iDRAC/Lifecycle controller
● Dell Intel X550/X540/i350
● Dell Mellanox ConnectX-4 LX
● Dell PERC H740P mini raid controller
4. Click Upload.
5. Select the firmware that you uploaded and click Install Next Reboot.



CAUTION: Do NOT click Install and Reboot, as it could cause a system outage.

NOTE: The installation will be in the job queue for the next reboot. Click Job Queue from the prompted information
message to monitor the progress for the installation.

Validate the vSAN health


Validate that the vSAN is error free only if the vSAN is configured as the storage option on the NSX-T Edge Gateway nodes.

Steps
1. From the VMware vSphere Client, click Cluster > Monitor > vSAN > Skyline Health.
2. Ensure the vSAN is healthy.
If the vSAN is not healthy, address the issues before continuing with the upgrade.

Shut down all the VMs on the NSX-T Edge Gateway host
Use this procedure to shut down all the VMs running on the NSX-T Edge Gateway node.

Steps
1. Log in to the web UI of the controller VMware ESXi host directly.
2. Select Virtual Machines.
3. Shut down all the VMs except the jump server running on the NSX-T Edge Gateway host.

Put VMware NSX-T Edge Gateway host into maintenance mode
Place the VMware NSX-T Edge Gateway host into maintenance mode.

Prerequisites
Migrate the online VMs before putting the host into maintenance mode.

Steps
1. On the VMware vSphere Client, click Hosts and Clusters.
2. Right-click the host and select Maintenance Mode > Enter Maintenance Mode.
3. Verify Move powered-off and suspended virtual machines to other hosts in the cluster is not selected.
4. Verify Ensure data accessibility is selected.
5. Click OK to put the host into maintenance mode.

Upgrade VMware vSphere ESXi


Use this task to upgrade VMware vSphere ESXi.

Steps
1. Use WinSCP to copy the ESXi-6.x.0-xxxxxx.zip patch file to the /vmfs/volumes/vsanDatastore/ISO folder
on the VMware ESXi server (where XX is unique for each host).



2. Using the SSH shell, connect to the VMware ESXi host and check for the uploaded file by typing the following commands:
cd /vmfs/volumes/vsanDatastore/ISO, then ls.
3. To update the profile image on the host:
a. To optionally list the profile of the ESXi zip archive, type esxcli software sources profile list
-d /vmfs/volumes/vsanDatastore/ISO/Esxi-6.7.0-16713306-3.5.4.0_Dell_14G.zip. The following
output appears:

Name Vendor Acceptance Level


------------------------------------------ ------------ ----------------
ESXi-6.7.0-20200804001-standard-customized VMware, Inc. PartnerSupported

b. To upgrade the VMware ESXi version, type esxcli software profile update
-p ESXi-6.7.0-20200804001-standard-customized -d /vmfs/volumes/vsanDatastore/ISO/
Esxi-6.7.0-16713306-3.5.4.0_Dell_14G.zip
When the upgrade completes successfully, the following message displays, followed by the list of upgraded packages:

Update Result
Message: The update completed successfully, but the system needs to be rebooted for
the changes to be effective.
Reboot Required: true

4. To upgrade the iDRAC service module:


a. Use WinSCP to upload ISM-Dell-Web-3.x.x-xxxx.VIB-ESX6i-Live_A00.zip to the /vmfs/volumes/
vsanDatastore/ISO folder.
b. Use SSH to access the VMware ESXi nodes and type esxcli software vib install -d /vmfs/volumes/
vsanDatastore/ISO/ISM-Dell-Web-3.x.x-xxxx.VIB-ESX6i-Live_A00.zip.
5. Reboot the ESXi host. Select Power > Reset (Warm Boot).
6. Press F2 to enter system setup.
7. Under System BIOS > Boot Settings, set Boot Mode to UEFI.
NOTE: Ensure that the BOSS card is set as the primary boot device from the UEFI Device Path under the Boot tab.
If the BOSS card is not set as the primary boot device, reboot the server and change the UEFI boot sequence from
System BIOS > BOOT settings > UEFI BOOT settings.

8. Click Back > Back > Finish > Yes > Finish > OK > Finish > Yes. The node reboots. Proceed to the Exit maintenance mode
section.
9. Repeat these steps on all VMware ESXi servers.

Next steps
You must complete the upgrade for all hosts before proceeding to the Distributed Virtual Switch upgrade.

Exit maintenance mode


Use this procedure to take the VMware NSX-T Edge Gateway host out of maintenance mode.

Steps
1. From the VMware vSphere Client Home screen, select Hosts and Clusters.
2. Right-click the host and select Exit Maintenance Mode.



Power on all VMs running on the VMware NSX-T Edge
Gateway host
Use this procedure to power on all the VMs running on the VMware NSX-T Edge Gateway node.

Steps
1. Log in to the VMware NSX-T Edge Gateway host.
2. Select Virtual Machines.
3. Power on all the VMs running on the VMware NSX-T Edge Gateway host.

Upgrade the iDRAC service module


Use this procedure to upgrade the iDRAC Service Module (iSM).

Steps
1. Use WinSCP to upload ISM-Dell-Web-3.x.x-xxxx.VIB-ESX6i-Live_A00.zip to the /vmfs/volumes/
vsanDatastore/ISO folder.
2. Use SSH to access the VMware ESXi nodes and type esxcli software vib install -d /vmfs/volumes/
vsanDatastore/ISO/ISM-Dell-Web-3.x.x-xxxx.VIB-ESX6i-Live_A00.zip.

Upgrade the VMware vSphere Distributed Switch


Use this procedure to upgrade the VMware vSphere Distributed Switch.

Steps
1. Connect to the VMware vCenter Server using the VMware vSphere Client.
2. Click Networking and select the VMware Distributed Switch you want to upgrade.
3. Right-click the DVswitch and select Settings > Export Configuration.
4. For the configuration to export, select Distributed switch and all port groups.
5. Enter a description and click OK and Yes.
6. Select the location, enter the file name and click Save.
NOTE: For VMware vSphere 6.7, from the vSphere client HTML5 and Mozilla Firefox, click OK twice. There is no
prompt for filename or save. With Google Chrome, click OK once. There is no prompt for filename or save.

7. To upgrade VMware vSphere Distributed Switch, right-click Distributed Switch > Upgrade > Upgrade Distributed
Switch.
NOTE: There are two Upgrade options available: Upgrade Network I/O Control, and Enhanced LACP Support. The
Network I/O Control upgrade is required. The Enhanced LACP Support option is required only if it is enabled.

8. On the first screen, select Next to confirm the upgrade.


9. VMware vCenter performs a compatibility check to verify that the connected hosts are compatible. Click Next to continue.
The last screen summarizes the steps of the upgrade to the VMware Distributed Switch.
10. Click Finish.
11. Repeat the steps for all VMware vSphere Distributed Switches.



Upgrade the VMware vSAN disk format (vSAN
storage option only)
Use this procedure to upgrade the VMware vSAN disk format.

Prerequisites
● Verify that you are using the updated version of VMware vCenter Server.
● Verify that you are using the latest version of VMware NSX-T Edge Gateway hosts.
● Verify that the disks are in a healthy state. (In the vSphere Client, navigate to Host and Clusters, highlight your PowerFlex
management controller cluster, click the vSAN tab, and click Physical Disks to verify the object status in the right-hand
column.)
● Verify that your hosts are not in maintenance mode. When upgrading the disk format, do not place the hosts in maintenance
mode. When any member host of a vSAN cluster enters maintenance mode, the member host no longer contributes capacity
to the cluster. The cluster capacity is reduced and the cluster upgrade might fail.

Steps
1. Navigate to the vSAN cluster in the VMware vSphere Client.
2. From Host and Clusters, highlight your PowerFlex management controller cluster and click the Configure tab on the right
hand pane.
3. Under vSAN, select General.
4. Under On-Disk Format Version, click Pre-Check Upgrade.
The upgrade pre-check analyzes the cluster to uncover any issues that might prevent a successful upgrade. Some of the
items checked are host status, disk status, network status, and object status. Upgrade issues appear in the disk pre-check
status text box.
Run the pre-check before initiating the on-disk format upgrade task.

5. Under On-Disk Format Version, click Upgrade.


6. Verify the check box beside Allow Reduced Redundancy is cleared.
7. Click Yes on the Upgrade box to perform the upgrade of the on-disk format.

Verifying VMware vSAN health (vSAN storage option only)
Use this procedure to verify VMware vSAN health.

Steps
1. From the VMware vSphere Client, navigate to the vSAN cluster.
2. Navigate to Home > Host and Clusters, and highlight the PowerFlex management controller cluster.
3. Click Monitor > vSAN > Skyline Health.
4. Verify that all the tests have passed.



12
Enable replication on existing PowerFlex
hyperconverged nodes
Use this chapter to convert existing non-replication PowerFlex hyperconverged nodes to replication enabled PowerFlex
hyperconverged nodes.
This guide assumes you have two standard PowerFlex appliances (source and target) deployed, each having a separate MDM
cluster. Networking must be in place between the two sites before proceeding with replication.
It is also possible to create replication between PowerFlex hyperconverged nodes and PowerFlex storage-only nodes.

Prerequisites
The following requirements are needed before proceeding with enabling replication:
● LACP bonded NIC port design
● PowerFlex node with PowerFlex 3.6
● PowerFlex hyperconverged node with a minimum of 2 sockets with 12 cores each
● Journal capacity (sized on delta change rate for each replicated volume)
● Additional external VLANs for replication must be added (flex-rep1-<vlanid>, flex-rep2-<vlanid>) used for Storage Data
Replication (SDR) to SDR communication between source and destination sites for replicating data.
● At least one protection domain (source and destination)
● At least one storage pool (source and destination)
● SDS devices that have been added to the appropriate storage pool (source and destination)
● PowerFlex systems installed at the source and destination sites with communication between them. (MDM to MDM
communication required in addition to external networks)
● At least one identical size volume on both source and destination sites. The volume at the source site must be mapped and
the volume on the destination site is used for replication and must be unmapped.

Workflow
The workflow for removing the existing PowerFlex hyperconverged nodes from PowerFlex Manager and enabling replication on
the existing PowerFlex hyperconverged nodes is as follows:
1. Remove the existing PowerFlex hyperconverged nodes from PowerFlex Manager.
2. Create and configure replication port groups (flex-rep1-<vlanid> and flex-rep2-<vlanid>) in flex_dvswitch.
3. Prepare SVM for replication, as follows:
a. Enter the Storage Data Server (SDS) node (SVMs) into maintenance mode.
b. Add the virtual NICs to the SVMs for Storage Data Replication (SDR) external communication.
c. Modify vCPU, memory, virtual Non-Uniform Memory Access (vNUMA), and CPU reservation settings on SVMs.
4. Power on the SVM and configure the network interfaces.
5. Install the SDR on the SDS nodes (SVMs).
6. Exit SDS maintenance mode.
7. Add journal capacity percentage. The recommended starting value is 10%.
8. Add the Storage Data Replicator to PowerFlex nodes.
9. Create the peer system between the source and destination site.
10. Add the peer system.
11. Create the replication consistency group (RCG).
12. Define network for replication in PowerFlex Manager. Do not define the gateway.
13. Add an existing service to PowerFlex Manager.



Remove an existing PowerFlex hyperconverged
service from PowerFlex Manager
Remove an existing PowerFlex hyperconverged service from PowerFlex Manager to install replication components.

About this task


The service must be removed so that it can be added back to PowerFlex Manager after the replication components are
installed. Remove hyperconverged services at both the source and destination sites only if both are hyperconverged;
otherwise, remove the service only at the hyperconverged site.

Steps
1. Log in to PowerFlex Manager.
2. On the menu bar, click Services.
3. On the Services page, click the service and in the right pane, click View Details.
4. On the Service Details page, in the right pane, under Service Actions, click Remove Service.
5. In the Remove Service dialog box, select Remove Service.
6. Select Leave nodes in PowerFlex Manager inventory and set the state to Managed.
7. Click Remove.

Create and configure replication port groups


Use this task to create and configure replication port groups (flex-rep1-<vlanid> and flex-rep2-<vlanid>) in flex_dvswitch.

Steps
1. Log in to the VMware vSphere client and select the Networking inventory view.
2. Select Inventory, right-click flex_dvswitch and select New Port Group.
3. Type flex-rep1 and click Next.
4. From the VLAN type menu, select VLAN, and in VLAN ID, enter 161 (as per the Logical Configuration Survey (LCS)).
5. Select Customize default policies configuration under the Advanced option.
6. Click Next > Next > Next.
7. From Teaming and failover tab:
a. Change Load Balancing to Route based on IP hash.
b. Move up the LACP-Lag uplink under Active uplinks.
c. Move down uplink1 and uplink2 under Unused uplinks.
d. Click Next.
8. Click Next > Next > Finish.
9. Repeat Steps 2 through 8 to create the following port group: flex-rep2 (VLAN ID as per the LCS).

Preparing the SVMs for replication


Use the following tasks to prepare the SVMs for replication.

Set the SDS NUMA


Use this task to allow the SDS to use the memory from the other NUMA.

Steps
1. Log in to the SDS (SVMs) using PuTTY.



2. Append the line numa_memory_affinity=0 to the SDS configuration file /opt/emc/scaleio/sds/cfg/conf.txt,
type: # echo numa_memory_affinity=0 >> /opt/emc/scaleio/sds/cfg/conf.txt.
3. Verify that the line is appended by running: #cat /opt/emc/scaleio/sds/cfg/conf.txt.
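If the append succeeded, the last line of the cat output is the newly added setting:

numa_memory_affinity=0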

Enabling replication on a PowerFlex appliance with FG Pool


Use this task to enable replication on a PowerFlex appliance with FG Pool.

About this task


If the PowerFlex appliance has an FG pool and you want to enable replication, set the SDS thread count to ten from the default of eight.

Steps
1. SSH to primary MDM, then log in to PowerFlex cluster, using #scli --login --username admin.
2. Query the current value, type: #scli --query_performance_parameters --print_all --tech --all_sds|
grep -i SDS_NUMBER_OS_THREADS.
3. Set the value of SDS_number_OS_threads to 10, type: # scli --set_performance_parameters --sds_id
<ID> --tech --sds_number_os_threads 10.

NOTE: Do not set the SDS threads globally, set the SDS threads per SDS.
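For example, a minimal sketch of querying the SDS list and then setting the value for one SDS (the SDS ID shown is
illustrative; repeat the set command for each SDS):

# scli --query_all_sds
# scli --set_performance_parameters --sds_id 6b1e2f5c00000000 --tech --sds_number_os_threads 10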

Verify Network Manager is disabled


Use this task to ensure that Network Manager is disabled.

Steps
1. Log in to the SDS (SVMs) using PuTTY.
2. Run # systemctl status NetworkManager to ensure that Network Manager is not running.
Output must display disabled and inactive.
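Example output on a correctly configured SVM (trimmed; the unit file path varies by release):

# systemctl status NetworkManager
NetworkManager.service - Network Manager
   Loaded: loaded (/usr/lib/systemd/system/NetworkManager.service; disabled; vendor preset: enabled)
   Active: inactive (dead)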
3. If it is enabled and active, stop and disable the service, run:

# systemctl stop NetworkManager


# systemctl disable NetworkManager

Update the network configuration


Use this task to update the network configuration file for all the network interfaces.

Steps
1. Log in to SDS (SVMs) using PuTTY.
2. Make a note of MAC addresses of all the interfaces, using: #ifconfig or #ip a.
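For example, in the output of # ip a, the MAC address of each interface appears in its link/ether field (the values shown are
illustrative):

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP
    link/ether 00:50:56:80:fd:80 brd ff:ff:ff:ff:ff:ff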



3. Edit all the interface configuration files (ifcfg-eth0, ifcfg-eth1, ifcfg-eth2, ifcfg-eth3, ifcfg-eth4) and update the NAME,
DEVICE and HWADDR to ensure correct MAC address and NAME are assigned.
NOTE: Ignore the entries with correct values.
● Use the vi editor to update the file # vi /etc/sysconfig/network-scripts/ifcfg-ethX
or
● Append the line using the following command:

# echo NAME=ethX >> /etc/sysconfig/network-scripts/ifcfg-ethX


# echo HWADDR=xx:xx:xx:xx:xx:xx >> /etc/sysconfig/network-scripts/ifcfg-ethX

Example file:

BOOTPROTO=none
ONBOOT=yes
HOTPLUG=yes
TYPE=Ethernet
DEVICE=eth2
IPADDR=192.168.155.46
NETMASK=255.255.254.0
DEFROUTE=no
MTU=9000
PEERDNS=no
NM_CONTROLLED=no
NAME=eth2
HWADDR=00:50:56:80:fd:82

Update the grub configuration file


Use this task to update the grub configuration file.

About this task


Remove net.ifnames=0 and biosdevname=0 from the /etc/default/grub file to avoid the interface name issue when you
add virtual NICs to SVM for SDR communication.

Steps
1. Log in to the SVM using PuTTY.
2. Edit the grub configuration file located in /etc/default/grub, type: # vi /etc/default/grub.
3. From the last line, remove net.ifnames=0 and biosdevname=0, and save the file.
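For example, a GRUB_CMDLINE_LINUX line such as the following (illustrative; your other kernel arguments may differ):

GRUB_CMDLINE_LINUX="crashkernel=auto rhgb quiet net.ifnames=0 biosdevname=0"

becomes:

GRUB_CMDLINE_LINUX="crashkernel=auto rhgb quiet"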
4. Rebuild the grub configuration file, using: # grub2-mkconfig -o /boot/grub2/grub.cfg



Enter the SDS nodes into maintenance mode and
power off
Use this task to put the SDS into maintenance mode.

About this task


While entering the SDS nodes into maintenance mode, if the SDS node is also the primary MDM, switch the MDM role
before placing the SDS node into maintenance mode.

NOTE: Place one SDS node into maintenance mode at a time.

Steps
1. Log in to the PowerFlex GUI presentation server: https://<presentation_server_IP>:8443.
2. In the left pane, click Configuration > SDSs.
3. In the right pane, select the relevant SDS and click More > Enter Maintenance Mode.
4. In the Enter SDS into Maintenance Mode dialog box, select Instant (if maintenance mode takes more than 30 minutes,
then select Protected).
5. Click Enter Maintenance Mode.
6. Verify that the operation is completed successfully and click Dismiss.
7. Shut down the appropriate SVM:
a. Log in to VMware vCenter using VMware vSphere Client.
b. Right-click the SVM and select Power > Shut Down Guest OS.

Add virtual NICs to SVMs


Use this task to add two more NICs to each SVM for SDR external communication.

Steps
1. Log in to the VMware vCenter vSphere client and go to Host and Clusters.
2. Right-click the SVM and click Edit Setting.
3. Click Add new device, select Network Adapter from the list.
4. Select the appropriate port group created for SDR external communication, click OK.
5. Repeat steps 2 through 4 to create the additional NICs.

Record the MAC address of the newly added network interface controllers
Use this task to record the MAC addresses of the newly added adapters from the VMware vCenter.

Steps
1. Right-click the SVM and click Edit Setting.
2. Click the newly added network interface controllers from Virtual Hardware list and make note of the MAC address.



Modifying the vCPU, memory, vNUMA and CPU
reservation settings on SVMs
Specific memory and CPU settings must be updated when you enable replication on your PowerFlex appliance with
PowerFlex hyperconverged nodes.

Modify the memory size


Use this task to modify the memory size according to the SDR requirements on a replication-enabled PowerFlex node.

About this task


NOTE: 12 GB of additional memory is required for SDR. For example, if you have 24 GB memory existing in the SVM for an
MG pool, add 12 GB for enabling replication, 24+12 = 36 GB. If you have 32 GB memory existing in the SVM for an FG pool,
add 12 GB for enabling replication so it would be 32 + 12 = 44 GB.

Steps
1. Log in to the VMware vCenter vSphere client.
2. Right-click the VM you want to change and select Edit Settings.
3. Under the Virtual Hardware tab, expand Memory and modify the memory size according to the SDR requirement.
4. Click OK.

Increase the vCPU count


Use this task to increase the vCPU count according to the SDR requirement.

About this task


The physical core requirement is two sockets with ten cores each (the vCPUs per NUMA domain cannot exceed the physical cores).
Consider the following examples for the vCPU count:
● Total number of vCPUs for an MG pool: 8 (SDS)+8 (SDR)+2 (MDM/TB) + 2(CloudLink) = 20 vCPUs.
● Total number of vCPUs for an FG pool: 10 (SDS)+10 (SDR)+2 (MDM/TB) + 2(CloudLink) = 24 vCPUs

Steps
1. Log in to VMware vCenter vSphere client.
2. Right-click the virtual machine that you want to change, then select Edit Settings.
3. Under the Virtual Hardware tab, expand CPU and increase the vCPU count according to the SDR requirement.
4. Click OK.

Setting the vNUMA advanced option


Use this task to set numa.vcpu.maxPerVirtualNode.

About this task


Ensure that the CPU hot plug feature is disabled. If it is enabled, disable it before configuring the vNUMA parameter.

Steps
1. Log in to the production VMware vCenter using vSphere client.
2. Right-click the VM that you want to change and select Edit Settings.
3. Under the Virtual Hardware tab, expand CPU and ensure that the CPU Hot Plug option is unchecked.



Set the vNUMA advanced option
Use this task to set the SVM numa.vcpu.maxPerVirtualNode value to half the vCPUs assigned to the SVM.

About this task


For example, if the SVM for an MG pool has 20 vCPUs, set numa.vcpu.maxPerVirtualNode=10. If the SVM for an FG
pool has 24 vCPUs, set numa.vcpu.maxPerVirtualNode = 12.

Prerequisites
Ensure that the CPU hot plug feature is disabled. Do the following to disable it before configuring the vNUMA parameter:
1. Log in to the VMware vCenter vSphere client.
2. Right-click the VM that you want to change and select Edit Settings.
3. Under the Virtual Hardware tab, expand CPU and verify that the CPU Hot Plug option is cleared.

Steps
1. Go to the SVM in the VMware vSphere client.
2. Select a data center, folder, cluster, resource pool, or host to find a VM.
3. Click the VMs tab.
4. Right-click the VM and select Edit Settings.
5. Click VM Options and expand Advanced.
6. Under Configuration Parameters, click Edit Configuration.
7. In the dialog box that appears, click Add Configuration Params to enter a new parameter name and its value.
For example, if the SVM for an MG pool has 20 vCPUs, set numa.vcpu.maxPerVirtualNode = 10. If the SVM for an
FG pool has 24 vCPUs, set numa.vcpu.maxPerVirtualNode = 12.
8. Click OK twice.
Ensure the following:
● Under CPU, Shares are set to High.
● 50% of the vCPU capacity is reserved on the SVM. For example, if the SVM for an MG pool is configured with 20 vCPUs and the CPU speed is 2.8 GHz, set a reservation of 28 GHz (20 x 2.8 / 2). If the SVM for an FG pool is configured with 24 vCPUs and the CPU speed is 3 GHz, set a reservation of 36 GHz (24 x 3 / 2).

9. Right-click the VM you want to change and select Edit Settings.


10. Under the Virtual Hardware tab, expand CPU and verify Reservation and Shares as mentioned above.
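For scripted environments, the advanced parameter can also be set with the open-source govc CLI (a sketch only, not part of the PowerFlex tooling; assumes govc is installed with GOVC_URL, GOVC_USERNAME, and GOVC_PASSWORD pointing at the vCenter, and the SVM name is illustrative):

govc vm.change -vm ScaleIO-10-234-81-10 -e numa.vcpu.maxPerVirtualNode=10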

Modifying the memory size according to the SDR requirements for FG pool-based PowerFlex systems with replication
Use this task to add additional memory required for SDR.

About this task


NOTE: 12 GB of additional memory is required for the SDR. For example, if the SVM has 32 GB of existing memory, add 12 GB to enable replication: 32 + 12 = 44 GB.


Steps
1. Log in to the production VMware vCenter using vSphere client.
2. Right-click the VM you want to change and select Edit Settings.
3. Under the Virtual Hardware tab, expand Memory and modify the memory size according to the SDR requirement.
4. Click OK.



Increasing the vCPU count according to the SDR requirement
Use this task to increase the vCPU count according to the SDR requirement.

About this task


The physical core requirement is two sockets with ten cores each (the number of vCPUs per NUMA domain cannot exceed the number of physical cores).
vCPU total: 10 (SDS) + 10 (SDR) + 2 (MDM/TB) + 2 (CloudLink) = 24 vCPUs

Steps
1. Log in to the production VMware vCenter using VMware vSphere client.
2. Right-click the virtual machine that you want to change, then select Edit Settings.
3. Under the Virtual Hardware tab, expand CPU and increase the vCPU count according to the SDR requirement.
4. Click OK.

Setting the vNUMA advanced option


Use this task to set numa.vcpu.maxPerVirtualNode.

About this task


Ensure that the CPU hot plug feature is disabled. If it is enabled, disable it before configuring the vNUMA parameter.

Steps
1. Log in to the production VMware vCenter using vSphere client.
2. Right-click the VM that you want to change and select Edit Settings.
3. Under the Virtual Hardware tab, expand CPU and ensure that the CPU Hot Plug option is unchecked.

Editing the SVM configuration


Use this task to set the SVM numa.vcpu.maxPerVirtualNode value to half the vCPUs assigned to the SVM.

About this task


For example, if the SVM has 24 vCPUs, set numa.vcpu.maxPerVirtualNode = 12.

Steps
1. Browse to the SVM in the VMware vSphere client.
2. To find a VM, select a data center, folder, cluster, resource pool, or host.
3. Click the VMs tab.
4. Right-click the VM and select Edit Settings.
5. Click VM Options and expand Advanced.
6. Under Configuration Parameters, click Edit Configuration.
7. In the dialog box that appears, click Add Configuration Params to enter a new parameter name and its value.
Example: numa.vcpu.maxPerVirtualNode = 12
8. Click OK twice.
Ensure the following:
● Under CPU, Shares are set to High.
● 50% of the vCPU capacity is reserved on the SVM. For example, if the SVM is configured with 24 vCPUs and the CPU speed is 3 GHz, set a reservation of 36 GHz (24 x 3 / 2).

9. Right-click the VM you want to change and select Edit Settings.


10. Under the Virtual Hardware tab, expand CPU and verify Reservation and Shares as mentioned above.



Powering on the SVM and configuring network interfaces
Use the following tasks to power on the SVMs and create interface configuration files for the newly added network adapters:
● Configure new added network interface controllers for the SVM
● Add a permanent static route for replication external networks

Configure the newly added network interface controllers for SVMs


Use this task to configure the newly added network interface controllers for the SVMs.

Steps
1. Log in to VMware vCenter using vSphere client.
2. Select the SVM, right-click Power > Power on.
3. Log in to SVM using PuTTY.
4. Create the rep1 network interface, type: cp /etc/sysconfig/network-scripts/ifcfg-eth0 /etc/sysconfig/network-scripts/ifcfg-eth5.
5. Create the rep2 network interface, type: cp /etc/sysconfig/network-scripts/ifcfg-eth0 /etc/sysconfig/network-scripts/ifcfg-eth6.
6. Edit the newly created configuration files (ifcfg-eth5, ifcfg-eth6) using the vi editor and modify the entries for IPADDR, NETMASK, GATEWAY, DEFROUTE, DEVICE, NAME, and HWADDR (a sample file follows the note below), where:
● DEVICE is the newly created device, eth5 or eth6
● IPADDR is the IP address on the rep1 or rep2 network
● NETMASK is the subnet mask
● GATEWAY is the gateway for the SDR external communication
● DEFROUTE is changed to no
● HWADDR is the MAC address recorded in Record the MAC address of the newly added network interface controllers
● NAME is the newly created device name, eth5 or eth6
NOTE: Ensure that the MTU value is set to 9000 for the SDR interfaces at both the primary and secondary sites, and on all devices in the end-to-end path. Confirm the existing MTU values with the customer or in the Logical Configuration Survey (LCS), and configure them accordingly.
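The following is a sample ifcfg-eth5 file. All values are illustrative; substitute the MAC address recorded earlier and the site-specific addressing from the LCS:

DEVICE=eth5
NAME=eth5
HWADDR=00:50:56:aa:bb:cc
IPADDR=10.0.30.10
NETMASK=255.255.254.0
GATEWAY=10.0.30.1
DEFROUTE=no
BOOTPROTO=none
ONBOOT=yes
MTU=9000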

Add a permanent static route for replication external networks


Use this task to create a permanent route.

Steps
1. Go to /etc/sysconfig/network-scripts and create a route-<interface> file for each new interface, type:

#touch /etc/sysconfig/network-scripts/route-eth5
#touch /etc/sysconfig/network-scripts/route-eth6

2. Edit each file and add the appropriate network information.


For example, 10.0.10.0/23 via 10.0.30.1, where 10.0.10.0/23 is the network address and prefix length of the
remote or destination network. The IP address 10.0.30.1 is the gateway address leading to the remote network.
Sample file

/etc/sysconfig/network-scripts/route-eth5
10.0.10.0/23 via 10.0.30.1
/etc/sysconfig/network-scripts/route-eth6
10.0.20.0/23 via 10.0.40.1



3. Reboot the SVM, type: #reboot.
4. Ensure all the changes persist after the reboot.
5. Once the SVM is up, ensure all the interfaces are configured properly, type: #ifconfig or #ip a.
6. Verify the new routes were added to the system, type: #netstat -rn.
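On SVM images where the net-tools package is not installed, the same routing table is available through iproute2: # ip route show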

Install SDR RPMs on the SDS nodes (SVMs)


About this task
If both sites have PowerFlex hyperconverged nodes, the Storage Data Replicator (SDR) RPM must be installed on all SVMs at both the source and destination sites. SDRs are responsible for processing all I/Os of replication volumes. All application I/Os of replicated volumes are processed by the source SDRs. At the source, application I/Os are sent by the SDC to the SDR. The I/Os are then sent to the target SDRs and stored in their journals. The target SDR journals apply the I/Os to the target volumes. A minimum of two SDRs are deployed at both the source and target systems to maintain high availability. If one SDR fails, the MDM directs the SDC to send the I/Os to an available SDR.

Steps
1. Use WinSCP or SCP to copy the SDR package to the /tmp folder.
2. SSH to the SVM and run the following command to install the SDR package: #rpm -ivh /tmp/EMC-ScaleIO-sdr-3.6-x.xxx.el7.x86_64.rpm
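To confirm that the package installed, a quick check can be run on the SVM (the exact package name and version string vary by release): #rpm -qa | grep -i sdr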

Exit SDS maintenance mode


Steps
1. Log in to the source site presentation server: https://presentation_server_IP:8443.
2. In the left pane, click Configuration > SDSs.
3. In the right pane, select the relevant SDS and click More > Exit maintenance mode.
4. Select Exit Maintenance Mode.
5. Verify that the operation completed successfully and click Dismiss.
6. Wait for the rebuild and rebalance operation to finish before starting activity on the next SVM.
7. Repeat the following tasks on all the SVMs in the source and destination sites:
● Prepare SVMs for replication
● Enter the SDS node (SVM) into maintenance mode and power off the SVM
● Add virtual NICs to SVMs
● Modify memory and CPU settings on SVMs
● Power on the SVM and configure network interfaces
● Exit SDS maintenance mode

Verify communication between the source and destination
Steps
1. Log in to all the SVMs and PowerFlex nodes at the source and destination sites.
2. Ping the following IP addresses from each of the SVMs and PowerFlex nodes at the source site:
● Management IP addresses of the primary and secondary MDMs
● External IP addresses configured for SDR-SDR communication
3. Ping the following IP addresses from each of the SVMs and PowerFlex nodes at the destination site:
● Management IP addresses of the primary and secondary MDMs
● External IP addresses configured for SDR-SDR communication
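Because the SDR interfaces require a 9000-byte MTU, you can also validate the MTU end to end with a do-not-fragment ping from each SVM (a sketch only; 8972 = 9000 minus 28 bytes of IP and ICMP headers): # ping -M do -s 8972 -c 3 <remote_SDR_external_IP>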



Add journal capacity percentage
The journal is a component of the SDR. It stores the data at the source before it is sent to the destination. At the destination,
the journal stores the data before it is applied to the destination volumes. At the source, application I/Os are sent by the SDC
to the SDR. The SDR packages I/Os in bundles and sends them to the target journal. Once the I/Os are sent to the destination
journal, they are cleared from the source journal. Once the I/Os are applied to the target volumes, they are cleared from the
destination journal.
Journal capacity is defined as a percentage of the total storage capacity (usable capacity) in the storage pool and must equal at least 28 GB per SDR. The journal capacity is allocated from every pool where there are replicated volumes. The capacity allocated from each pool is at least 5% of the usable capacity of the replicated volumes. The total allocated journal capacity from all the pools in the PD must be at least equal to the number of SDRs x 28 GB.
Example for a PowerFlex system with one PD and one SP: if you have four SDRs in your PD and an SP with 36 TB of usable capacity of replicated volumes, the minimum journal capacity is the maximum of (5% of 36 TB) and (4 x 28 GB), that is, the maximum of 1.8 TB and 112 GB, which is 1.8 TB.
NOTE: Because the journal capacity is defined as a percentage of the total storage capacity in the storage pool, increasing the total storage capacity by adding devices increases the journal capacity. Similarly, if you decrease the total storage capacity by removing devices from the storage pool, the journal capacity automatically decreases.

Calculate journal capacity to allocate


The journal is shared between all of the replicated RCGs in the protection domain.

About this task


Journal capacity should be allocated from storage pools that are as fast as (or faster than) the storage pool of the fastest replicated application in the protection domain. The journal should use the same drive technology and about the same drive count and distribution across nodes.

Steps
1. Select the storage pool from which to allocate the journal capacity.
2. Consider the minimal requirement (28 GB multiplied by the number of SDRs); the final journal capacity is the maximum of this minimum and the capacity calculated in the following steps.
Also consider the expected outage time. The minimal outage allowance is one hour, but at least three hours are recommended.
3. Calculate the journal capacity needed per application: maximal application throughput x maximum outage interval.
4. Calculate the percentage of capacity based on the previously calculated needs, as journal capacity is defined as a percentage of storage pool capacity.
For example, an application generates 1 GB/s of writes. The maximal supported outage is three hours (3 hours x 3600 seconds = 10800 seconds). The journal capacity needed for this application is 1 GB/s x 10800 s = ~10.547 TB. Since the journal capacity is expressed as a percentage of the storage pool capacity, divide the 10.547 TB by the storage pool usable capacity, which is 200 TB: 100 x 10.547 TB / 200 TB = 5.27%; round this up to 6%. (A quick arithmetic check appears after this procedure.)

5. Repeat this for each application being replicated.


NOTE: When the storage pool capacity is critical, capacity cannot be allocated for new volumes or for expanding
existing volumes. This behavior must be considered when planning the capacity available for journal usage. The volume
usage must leave enough capacity available in the storage pool to allow provisioning of journal volumes. The plan should
account for the storage pool staying below critical capacity even when the journal capacity is almost fully utilized.
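As a quick sanity check of the arithmetic in the example above (a sketch using bc; the figures are the ones from step 4): # echo "scale=2; 100 * (1 * 10800 / 1024) / 200" | bc returns 5.27.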

Add allocated journal capacity


Add allocated journal capacity from the storage pool.

Steps
1. In the left pane, click Replication > Journal Capacity.



2. In the right pane, click Add.
3. In the Add Journal Capacity dialog box, select the relevant storage pool and add the percentage for journal capacity.
4. Click Add to allocate journal capacity from the storage pool.
5. Verify the operation completed successfully and click Dismiss.

Adding the Storage Data Replicator to a PowerFlex appliance
Use this task to add the SDR to the PowerFlex appliance.

Prerequisites
The IP address of the node must be configured for SDR. The SDR communicates with several components:
● SDC (application)
● SDS (storage)
● Remote SDR (external)

Steps
1. In the left pane, click Protection > SDRs.
2. In the right pane, click Add.
3. In the Add SDR dialog box, enter the connection information of the SDR:
a. Enter the SDR name.
b. Update the SDR Port, if required (default is 11088).
c. Select the relevant Protection Domain.
d. Enter the IP Address of the MDM that is configured for SDR.
e. Select Role External for the SDR to SDR external communication.
f. Select Role Application and Storage for the SDR to SDC and SDR to SDS communication.
g. Click ADD SDR to initiate a connection with the peer system.
4. Verify that the operation completed successfully and click Dismiss.
5. Modify the IP address role if required:
a. From the PowerFlex GUI, in the left pane, click Protection > SDRs.
b. In the right pane, select the relevant SDR check box, and click Modify > Modify IP Role.
c. In the <SDR name> Modify IPs Role dialog box, select the relevant role for the IP address.
d. Click Apply.
e. Verify that the operation completed successfully and click Dismiss.
6. Repeat both tasks, Add journal capacity and Adding the Storage Data Replicator to a PowerFlex appliance, on the source and destination PowerFlex appliances.

Create the peer system between the source and destination site
Use this task to create the peer system between the source and destination site.

Steps
1. Log in to the primary MDM using SSH on both the source and the destination to extract and add the MDM certificates.
2. Type #scli --login --username admin and enter the MDM cluster password at the password prompt.
3. Extract the certificate on the source and destination primary MDM, type:
● For the source: #scli --extract_root_ca --certificate_file /tmp/source.crt
● For the destination: # scli --extract_root_ca --certificate_file /tmp/destination.crt
4. Copy the extracted certificate of the source (primary MDM) to the destination (primary MDM) using SCP, and vice versa.



● From the source MDM: #scp /tmp/source.crt <MDM_Mgmt_IP_of_Destination>:/tmp/
● From the destination MDM: #scp /tmp/destination.crt <MDM_Mgmt_IP_of_Source>:/tmp/

5. Add the copied certificate, type:


● For source: # scli --add_trusted_ca --certificate_file /tmp/destination.crt --comment
destination_crt
● For destination: # scli --add_trusted_ca --certificate_file /tmp/source.crt --comment
source_crt
6. Verify the new certificate by typing: # scli --list_trusted_ca.
7. From the Replication tab, click Journal Capacity in the left pane to verify that the journal capacity is set according to the requirement.

Adding the peer system


Use this task to add the peer system.

Steps
1. Type scli --login --username admin and enter the MDM cluster password at the password prompt.
NOTE: From the output, obtain the System ID. It is used in the following step to add a peer system on the primary site.

Example output: Logged in. User role is SuperUser. System ID is 2e6ccfd208ef120f

2. Add the peer system to the primary site, type: # scli --add_replication_peer_system --peer_system_ip
(remote system mdm management ips) --peer_system_id (system id of remote site) --
peer_system_name (remote site name)
3. Add the peer system to the remote site, type: # scli --add_replication_peer_system --peer_system_ip
(primary system mdm management ips) --peer_system_id (system id of primary site) --
peer_system_name (primary site name)
NOTE:
● For a 3-node cluster, you need two IP addresses - comma separated (primary, secondary).
● For a 5-node cluster, you need three IP addresses - comma separated (primary, secondary1, secondary2).
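For illustration only, with hypothetical management IP addresses and site names (the system ID shown is the one from the example output in step 1): # scli --add_replication_peer_system --peer_system_ip 192.168.100.11,192.168.100.12 --peer_system_id 2e6ccfd208ef120f --peer_system_name site-B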

Create the replication consistency group


Use this task to create the RCG. Perform this task only when the remote site is up and running.

About this task


The RCG is a logical container for volumes whose application data must be replicated consistently with each other. It includes a set of consistent volume pairs. A volume on the source from a single protection domain is replicated to a remote volume from a single protection domain on the target. This creates a consistent pair of volumes. You can add and manage RCGs on both the source and target systems.
Before proceeding, create source and destination volumes of the same size. It is recommended, but not mandatory, that the volumes in the volume pair have the same attributes (including zero padding and granularity); not doing so can impact performance and capacity.
If you already have a volume at the source site, create a volume of the same size at the destination site.

NOTE: Do not map the volume that is created on target system to SDC.

Steps
1. Log in to the source site presentation server: https://<presentation_server_IP>:8443.

NOTE: Use the primary MDM IP address and credentials to log in to the PowerFlex cluster.



2. In the left pane, click REPLICATION > RCGs.
3. In the right pane, click Add.
4. In the Add RCG wizard, enter the following on the General page:
a. Enter the RCG Name.
b. Enter the number of RPO (recovery point objective) minutes. This is the amount of data loss, measured in time, that is tolerated if replication between the systems is compromised.
c. Select Source Protection Domain.
d. Select Target System.
e. Select Target Protection Domain.
5. Click Next.
6. On the Add Replication Pairs page:
a. Click the volume from the Source column and then click the same size volume from the Target column.
b. Click Add Pair. The volume pair is added.
c. Click Next.
7. On the Review Pairs page:
a. Ensure that the correct source and target volume pairs are selected and click ADD RCG & START REPLICATION.
b. Verify that the operation completed successfully and click Dismiss.
The RCG is added to both the source and target systems.
Wait for the initial copy to finish before starting to use the replicated volumes.

Finding the current copy status


Use this task to find the current copy status.

Steps
1. Log in to the primary MDM using SSH and log in to scli: type # scli --login --username admin and enter the MDM cluster password at the password prompt.
2. Verify the replication status, type: # scli --query_all_replication_pairs.
Once the initial copy is complete, the PowerFlex replication system is ready for use.

Modifying the recovery point objective


Use this task to update the recovery point objective (RPO) time as required.

Steps
1. From https://Presentation_Server_IP:8443 (PowerFlex GUI), in the left pane, click Protection > RCGs.
2. In the right pane, select the relevant RCG check box, and click Modify > Modify RPO.
3. In the Modify RPO for RCG <rcg name> dialog box, enter the updated RPO time and click Apply.
4. Verify that the operation completed successfully and click Dismiss.

Defining the network for replication in PowerFlex Manager
Use this task to define the network for SDR external communication.

Steps
1. On the menu bar, click Settings > Networks.
The Networks page opens.
2. Click Define.



The Define Network page opens.
3. In the Name field, enter the name of the network. Optionally, in the Description field, enter a description for the network.
4. From the Network Type drop-down menu, select PowerFlex Replication.
5. In the VLAN ID field, enter a VLAN ID between 1 and 4094.
6. Select the Configure Static IP Address Ranges check box, and then do the following:
a. In the Subnet box, enter the IP address for the subnet. The subnet is used to support static routes for data and
replication networks.
b. In the Subnet Mask box, enter the subnet mask.
NOTE: Do not define the gateway when you define the network for PowerFlex replication and PowerFlex data.

c. Optionally, in the Primary DNS and Secondary DNS fields, enter the IP addresses of primary DNS and secondary DNS.
d. Optionally, in the DNS Suffix field, enter the DNS suffix to append for hostname resolution.
e. To add an IP address range, click Add IP Address Range. In the row, specify a starting and ending IP address for the
range.
Repeat this step to add IP address ranges based on the requirement. For example, you can use one range for flex-rep1
network and the second range for flex-rep2 network.
7. Click Save.

Adding an existing service to PowerFlex Manager


Use this task to add an existing service to discover and import hardware resources that were not originally deployed with
PowerFlex Manager.

Prerequisites
Ensure the following conditions are met before you add an existing service:
● The vCenter, PowerFlex Gateway, CloudLink Center, and hosts must be discovered in the resource list.
● The PowerFlex Gateway must be in the service.

Steps
1. On the menu bar, click Services and then click + Add Existing Service.
2. On the Add Existing Service page, enter a service name in the Name field.
3. Enter a description in the Description field.
4. Select the Type for the service.
The choices are Hyperconverged, Compute Only, and Storage Only.
PowerFlex Manager checks to see whether there are any vCLS VMs on local storage. If it finds any, it puts the service in
lifecycle mode and gives you the opportunity to migrate these to shared storage.

5. To specify the compliance version to use for compliance, select the version from the Firmware and Software Compliance
list or choose Use PowerFlex Manager appliance default catalog.
You cannot specify a minimal compliance version when you add an existing service, since it only includes server firmware
updates. The compliance version for an existing service must include the full set of compliance update capabilities.
PowerFlex Manager does not show any minimal compliance versions in the Firmware and Software Compliance list.
NOTE: Changing the compliance version might update the firmware level on nodes for this service. Firmware on shared
devices is maintained by the global default firmware repository.

6. Specify the service permissions under Who should have access to the service deployed from this template? by
performing one of the following actions:
● To restrict access to administrators, select the Only PowerFlex Manager Administrators option.
● To grant access to administrators and specific standard users, select the PowerFlex Manager Administrators and
Specific Standard and Operator Users option, and perform the following tasks:
a. Click Add User(s) to add one or more standard or operator users to the list.
b. To delete a standard or operator user from the list, select the user and click Remove User(s).



c. After adding the standard or operator users, select or clear the check box next to each user to grant or block access to use this template.
● To grant access to administrators and all standard users, select the PowerFlex Manager Administrators and All
Standard and Operator Users option.
7. Click Next.
8. Choose one of the following network automation types:
● Full Network Automation
● Partial Network Automation
When you choose Partial Network Automation, PowerFlex Manager skips the switch configuration step, which is normally
performed for a service with Full Network Automation. Partial network automation allows you to work with unsupported
switches. However, it also requires more manual configuration before a deployment can proceed successfully. If you choose
to use partial network automation, you give up the error handling and network automation features that are available with a
full network configuration that includes supported switches.

In the Number of Instances box, provide the number of component instances that you want to include in the template.

9. On the Cluster Information page, enter a name for the cluster component in the Component Name field.
10. Select values for the cluster settings:
For a hyperconverged or compute-only service, select values for these cluster settings:
a. Target Virtual Machine Manager—Select the vCenter name where the cluster is available.
b. Data Center Name—Select the data center name where the cluster is available.
NOTE: Ensure that the selected vCenter has unique cluster names if there are multiple clusters in the vCenter.
c. Cluster Name—Select the name of the cluster you want to discover.
d. OS Image—Select the image or choose Use Compliance File ESXi image if you want to use the image provided with
the target compliance version. PowerFlex Manager filters the operating system image choices to show only ESXi images
for a hyperconverged or compute-only service.
For a storage-only service, select values for these cluster settings:
a. Target PowerFlex Gateway—Select the gateway where the cluster is available.
b. Protection Domain—Select the name of the protection domain in PowerFlex.
c. OS Image—Select the image or choose Use Compliance File Linux image if you want to use the image provided with
the target compliance version. PowerFlex Manager filters the operating system image choices to show only Linux images
for a storage-only service.
11. Click Next.
12. On the OS Credentials page, select the OS credential that you want to use for each node and SVM.
You can select one credential for all nodes (or SVMs), or choose credentials for each item separately. You can create the
operating system credentials on the Credentials Management page under Settings.
PowerFlex Manager validates the credentials for the nodes and SVMs before it creates the service. This validation makes it
possible for PowerFlex Manager to run a full inventory on all nodes and SVMs before creating the service. The process of
running the inventory can take five to ten seconds to complete.
To import a VMware NSX-T or NSX-V configuration, PowerFlex Manager must have the operating system inventory to
recognize that NSX VIBs are on the node. Without the inventory, it is unable to tell if a node has NSX-T or NSX-V.
PowerFlex Manager runs the inventory on all nodes and SVMs for which the credentials are valid. The service uses any
nodes and SVMs for which it has a successful inventory. For example, if you have four nodes, and one node has an invalid
operating system password, PowerFlex Manager adds the three nodes for which the credentials are valid and ignores the
one with the invalid password.

13. Click Next.


The list of resources available in the cluster is displayed on the Inventory Summary page.
14. Review the inventory on the Inventory Summary screen.
The summary shows all nodes that are available. If a node is not available, it might be because the node does not match the Type you selected for the service (Hyperconverged, Compute Only, or Storage Only).
Depending on how the node is configured, the summary might show additional inventory information. For example, for a node
that has NVDIMM compression, the summary shows additional information about the acceleration pool and compression
settings.



If the resources are discovered and in an available state, the Available Inventory displays the components as Yes. An
unavailable PowerFlex Gateway is shown as No.
If the credentials are invalid for a node or SVM, or if you have a network connectivity problem, PowerFlex Manager displays
No in the Available Inventory column for the node, and displays an error message to notify you about the problem.
PowerFlex Manager cannot update firmware and software versions for PowerFlex clusters that do not have available
PowerFlex Gateways. If expected PowerFlex Gateways are not shown as available, you can discover the gateways and run
the wizard again.
NOTE: PowerFlex Manager retrieves the hostname value from iDRAC and not the operating system. If the hostname
field is not updated in iDRAC, an incorrect value can be displayed in PowerFlex Manager. Certain operating systems
require extra packages that are installed for iDRAC to update the correct hostname.
15. Click Next.
16. On the Network Mapping page, review the networks that are mapped to port groups and make any required edits.
PowerFlex Manager attempts to select the correct network based on the VLAN ID, subnet, or IP ranges entered in
PowerFlex Manager. If PowerFlex Manager finds only one network for a given network type, it selects the network
automatically. If it finds more than one, you must select the network from the Network drop-down list. The OS Installation
network does not get a VLAN ID.
NOTE: If the OS Install VLAN is not already configured in your environment, add it. This network is required to perform
node expansions. This network is typically added during PowerFlex Manager configuration.
If there are any port groups for which you do not want PowerFlex Manager to manage access, leave those port groups
cleared. If no network is selected for a particular port group, PowerFlex Manager leaves it out of the deployment data and
does not add it to the nodes.
For an existing service that supports NSX-T, PowerFlex Manager shows VDS switches that are sharing uplinks.

17. To import a large number of general-purpose VLANs from vCenter, perform these steps:
a. Click Import Networks on the Network Mapping page.
PowerFlex Manager displays the Import Networks wizard. In the Import Networks wizard, PowerFlex Manager lists
the port groups that are defined on the vCenter as Available Networks. You can see the port groups and the VLAN IDs.
b. Optionally, search for a VLAN name or VLAN ID.
PowerFlex Manager filters the list of available networks to include only those networks that match your search.
c. Click each network that you want to add under Available Networks. If you want to add all the available networks, click
the check box to the left of the Name column.
d. Click the double arrow (>>) to move the networks you chose to Selected Networks.
PowerFlex Manager updates the Selected Networks to show the ones you have chosen.
e. Click Save.
18. Click Next.
19. Review the Summary page and click Finish when you are ready to add the service.
The process of adding an existing service causes no disruption to the underlying hardware resources. It does not shut down
any of the nodes or the vCenter.
For an existing service, the Reference Template field shows Generated Existing Service Template on the Service
Details page. You can distinguish existing services from new services that were deployed with PowerFlex Manager.
When PowerFlex Manager must put a service in lifecycle mode, the Summary page for the Add Existing Service wizard
displays a warning message indicating the reason.
In some situations, an imported configuration might not meet the minimal requirements for lifecycle mode. In this case,
PowerFlex Manager does not allow you to add the service.

Next steps
When you add an existing service, PowerFlex Manager matches the hosts, vCenter, and other items it finds with discovered
resources in the resource list. If you missed a component initially, you can change your resource inventory, and update the
service to reflect these changes. Go back to the resources list, select the component, and mark it as Managed by selecting
Change resource state to Managed. Then, perform an Update Service Details operation on the service to pull in the
missing component.
When you deploy an existing service, PowerFlex Manager reserves any IP addresses from vCenter or the PowerFlex Gateway
that it needs. If you later tear down the service, it releases those IP addresses so that they can be reused.



If you add an existing service that supports NSX-T or NSX-V, PowerFlex Manager displays a banner indicating that the service
supports a limited set of actions. Most service actions are disabled for an NSX-T or NSX-V configuration, except the ability to
update the firmware and software components, remove resources (or the service as a whole), and update service details.
When you add an existing service, PowerFlex Manager checks to see whether there are any vCLS VMs on local storage. If it
finds any, it displays a banner on the Service Details page indicating that it has put the service in lifecycle mode and gives you
the opportunity to migrate the VMs to shared storage.



Chapter 13: Retrieving PowerFlex performance metrics

Retrieving PowerFlex performance metrics using the PowerFlex GUI
Use this procedure to retrieve PowerFlex performance metrics using the PowerFlex GUI.

Prerequisites
Use a standard tool to generate simulated IOPS. A simple way to do this is to load a Linux VM and use the Flexible I/O Tester (fio) to generate IOPS. The following fio command line generates random reads and writes:

fio --name=randrw --rw=randrw --direct=1 --ioengine=libaio --bs=16k --numjobs=8 --rwmixread=90 --size=1G --runtime=600 --group_reporting

Steps
1. To retrieve overall performance metrics:
a. Launch the PowerFlex GUI.
b. In the Dashboard, look at the PERFORMANCE data.
c. The Dashboard displays the following:
● Overall system IOPs
● Overall system bandwidth
● Overall system latency
2. To retrieve volume-specific metrics:
a. Launch the PowerFlex GUI.
b. In the Dashboard, select CONFIGURATION > Volumes.
3. To retrieve SDS-specific metrics:
a. Launch the PowerFlex GUI.
b. In the Dashboard, Select CONFIGURATION > SDSs.

Retrieving PowerFlex performance metrics using a PowerFlex version prior to 3.5
Use this procedure to retrieve PowerFlex performance metrics for a PowerFlex version prior to 3.5.

Prerequisites
Use a standard tool to generate simulated IOPS. A simple way to do this is to load a Linux VM and use the Flexible I/O Tester (fio) to generate IOPS. The following fio command line generates random reads and writes:

fio --name=randrw --rw=randrw --direct=1 --ioengine=libaio --bs=16k --numjobs=8 --rwmixread=90 --size=1G --runtime=600 --group_reporting

Steps
1. To retrieve overall performance metrics:
a. Launch the PowerFlex GUI.



b. In the Dashboard, look at the IO Workload page.
c. The Dashboard displays the following:
● Overall system IOPs
● Overall system bandwidth
● Read/write statistics
● Average I/O size
2. To retrieve volume-specific metrics:
a. Select Frontend > Volumes.
b. Select a volume and click the Property Sheet icon.
c. The volume performance metrics are displayed in the General section of the Volume Properties pane.
3. To retrieve host-specific metrics:
a. Select Frontend > SDCs.
b. Select a host, and click the Property Sheet icon.
c. The host performance metrics are displayed in the General section of the Host SDC Properties pane.



Chapter 14: Performing maintenance activities in a PowerFlex cluster
You place a node in maintenance mode to repair, replace, or upgrade hardware components.
The following describes the available maintenance modes:

Instant maintenance mode: Perform short-term maintenance that lasts less than 30 minutes. The node is immediately and temporarily removed from active participation. PowerFlex Manager does not migrate the data.

Protected maintenance mode: Perform maintenance or updates that require longer than 30 minutes in a safe and protected manner. PowerFlex makes a temporary copy of the data, so that the cluster is fully protected from data loss. Protected maintenance mode applies only to PowerFlex hyperconverged and storage-only nodes. Protected maintenance mode requires that the sum of the spare capacity and the free capacity be greater than the size of the node being put into protected maintenance mode.

Keep the following restrictions in mind when using protected maintenance mode:
● Do not put two nodes from the same protection domain into an instant maintenance mode or protected maintenance mode
simultaneously.
● You cannot mix protected maintenance mode and instant maintenance mode on the same protection domain simultaneously.
● All SDSs in protected maintenance mode concurrently must belong to the same fault set (no inter-protection domain
dependencies for protected maintenance mode).

Maintenance modes
Different types of maintenance are available through PowerFlex.

Types of maintenance

Instant maintenance mode: When a node is placed in instant maintenance mode, the node is immediately and temporarily removed from active participation. This node does not build a new copy of the data on any of the other nodes. Existing data is temporarily unavailable. A rebuild is not triggered when the node goes offline. The system suffers data unavailability until the node under maintenance is recovered and the changes are applied to it. Instant maintenance mode enables you to perform updates quickly. PowerFlex Manager does not migrate the data when the node is placed in instant maintenance mode.

Protected maintenance mode: Protected maintenance mode is designed to avoid the disadvantages of instant maintenance mode. Protected maintenance mode always has duplicate copies of the data and avoids many-to-one rebuilds. When a node is placed in protected maintenance mode, PowerFlex creates a new, temporary copy of the data by leveraging the many-to-many rebalance and leaves the data on the node being maintained in place. This makes for three copies, but only two are available.
● Protected maintenance mode enables you to perform updates that require longer than 30 minutes in a safe and protected manner.
● Protected maintenance mode is a more secure maintenance mode compared to instant maintenance mode.



● During protected maintenance mode, only the changes are tracked for writes that would have affected the SDS under maintenance. When the SDS exits maintenance, protected maintenance mode does not need to rehydrate the SDS with data; it only resyncs the deltas that occurred during maintenance.

Protected maintenance mode requirements


● Protected maintenance mode requires enough spare and free capacity to enter maintenance: at least the size of the node that is removed for maintenance (free + spare - 5% of the SP >= size of the protected maintenance mode nodes).
● Protected maintenance mode uses both the free capacity and the spare capacity, so it makes the best use of the available capacity at the time of entering protected maintenance mode; there is no way to ignore the capacity requirements as with instant maintenance mode. The SDS entering protected maintenance mode can have degraded capacity, as with instant maintenance mode. Other SDSs in the same fault set may also have degraded capacity.

Protected maintenance mode restrictions


● Do not put two nodes from the same PD into instant maintenance mode or protected maintenance mode simultaneously.
● Do not mix protected maintenance mode and instant maintenance mode on the same PD simultaneously.
● Per PD: all SDSs in protected maintenance mode concurrently must belong to the same fault set (no inter-PD dependencies for protected maintenance mode).
● You can take down several SDSs in the same fault set using either instant maintenance mode or protected maintenance mode, but not both.

Entering protected maintenance mode


Use this procedure to enter protected maintenance mode using PowerFlex Manager.

Steps
1. Log in to PowerFlex Manager.
2. On the Services page, select a service, and click View Details in the right pane.
3. Click Enter Service mode under Service Actions.
NOTE: The service must have at least three nodes to enter protected maintenance mode using PowerFlex Manager.

4. Select one or more nodes on the Node Lists page, and click Next.
NOTE: For an environment with fault sets, PowerFlex Manager can put a single node or a full fault set into protected maintenance mode. For an environment without fault sets, PowerFlex Manager requires a minimum of four nodes to use protected maintenance mode.

5. Select Protected Maintenance Mode.


6. Click Enter Service Mode.
7. Verify that the node shows as Service Mode (Protected Maintenance) in PowerFlex Manager.

Exiting protected maintenance mode


Use this procedure to exit protected maintenance mode using PowerFlex Manager.

Steps
1. Log in to PowerFlex Manager.
2. On the Services page, select the service.
3. Click Exit Service Mode.



Chapter 15: Administering the CloudLink Center

Adding and managing CloudLink Center licenses


Perform the following procedures to add CloudLink Center licenses and manage CloudLink Center licenses through PowerFlex
Manager.

License CloudLink Center


Use this procedure to add licenses to CloudLink Center.

About this task


CloudLink license files determine the number of machine instances, CPU sockets, encrypted storage capacity, or physical
machines with self-encrypting drives (SEDs) that your organization can manage using CloudLink Center. License files also define
the CloudLink Center usage duration.
NOTE: CloudLink Center can act as a Key Management Interoperability Protocol (KMIP) server if you upload a KMIP license to it.

Steps
1. Log in to CloudLink Center.
2. Select System > License.
3. Click Upload License.
4. Browse to the license file and click Upload.
NOTE: If the CloudLink environment is managed by PowerFlex Manager, after you update the license, go to the
Resources page, select the CloudLink VMs, and click Run Inventory.

Add the CloudLink Center license in PowerFlex Manager


Use this procedure to add CloudLink Center in PowerFlex Manager.

Steps
1. Log in to PowerFlex Manager.
2. Click Settings > Software Licenses, and click Add.
3. Click Choose File, and browse to the license file.
4. Select Type as CloudLink, and click Save.
5. From the Resources page, select the CloudLink VMs, and click Run Inventory.



Delete expired or unused CloudLink Center licenses from PowerFlex Manager
Use this procedure to delete expired or unused CloudLink Center licenses from PowerFlex Manager.

Steps
1. Log in to PowerFlex Manager.
2. Click Settings > Software Licenses.
3. Select the license you want to delete, and click Delete.
4. From the Resources page, select the CloudLink VMs, and click Run Inventory.

Configure custom syslog message format


Use this procedure to configure the custom syslog message format.

Steps
1. Log in to CloudLink Center.
2. Click Server > Change Syslog Format. The Change Syslog Format dialog box is displayed.
3. From the Syslog Format list, select Custom.
4. Enter the string for the syslog entry, and click Change.

Registering KMIP on CloudLink Center


Prerequisites
Ensure you have KMIP server details and the required KMIP server permission files (key.pem, cert.pem, ca.pem). If these
files are not available, log in to the KMIP server and download the certificate ZIP file.

About this task


This procedure explains how to add CloudLink Center to a KMIP server and create a KMIP keystore.

Steps
1. Log in to the CloudLink Center.
2. Go to System > Keystore > Add.
3. Provide a name and description, and click Next.
4. Select Key Location Type as Local Database.
5. Select the Protector Type as KMIP.
6. Enter the following information:
● KMIP server address
● Username (secadmin)
● Password
● Upload the three certificate files from the ZIP downloaded from the KMIP server
7. Click Test. A success message is displayed indicating that the protector is accessible.
8. Click Add. The KMIP keystore is available under the CloudLink keystore.
● To use this KMIP for a new service, while creating the template in PowerFlex Manager, select CloudLink Center
Settings > KMIP keystore.
● For an existing service, edit the machine group used by the service.
a. Go to CloudLink Center > Agents > Machine Group > Actions > Modify.
b. Change the Keystore to KMIP Keystore and click Modify.
c. Once the Keystore is changed, remove the service from PowerFlex Manager and add the existing service from the
Services page.



Manage a self-encrypting drive (SED) from CloudLink Center
Use this procedure to manage an SED device through CloudLink Center.

About this task


When managing SEDs from CloudLink Center, be aware of the following:
● CloudLink Center can manage encryption keys for self-encrypting drives (SEDs).
● Managing SEDs with CloudLink Center is functional when the CloudLink agent is installed on machines with SEDs.
● When managed by CloudLink Center, SED encryption keys are stored in the current keystore for the machine group they are
in.
● The functionality for managing SEDs requires a separate SED license.
● If the SED cannot retrieve the key from CloudLink Center, the SED remains locked.

Steps
From the CloudLink Center, select Agent > Machines, click Actions and select Manage SED. Ownership of the encryption key
is enabled.

NOTE: This option is only available if an SED license is uploaded and an SED is detected in the physical machine managed by CloudLink Center. The Manage SED option does not change data on an SED; it only takes ownership of the encryption key.

Manage a self-encrypting drive from the command line
As an alternative to CloudLink Center, use the command line to manage an SED.

Steps
1. Log in to the Storage Data Servers (SDS).
2. To manage the SED from the command line, type svm manage [device name].
For example, svm manage /dev/sdb.



Release a self-encrypting drive
Use this procedure to release an SED that is managed by CloudLink.

About this task


This option allows you to release ownership of an SED that is managed by CloudLink. This option is only available if an SED
license is uploaded and an SED is detected in the physical machine managed by CloudLink Center.
When CloudLink releases an SED, the encryption key is released in CloudLink Center.

Steps
1. From CloudLink Center, go to Agents > Machines and select the SDS machine. Click Release SED.
2. From RELEASE SED, use the menu to select the SED drive that you want to release and click Release.
The status of the SED drive changes to Releasing Control.



Once CloudLink releases the control, the SED device status shows as Unmanaged.

NOTE: The Release SED option does not change any data on the SED.

Release management of a self-encrypting drive from the command line
Use this procedure to release an SED using the command line.

Steps
1. Log in to the Storage Data Server (SDS).
2. To release the SED from the command line, type svm release [device name].
For example, svm release /dev/sdb.

Changing the CloudLink secadmin user password


Use this procedure to change the password of the CloudLink secadmin user.

About this task


During predeployment, the administrator completing the CloudLink Center VM deployment sets the password for the CloudLink secadmin user.
During the deployment of PowerFlex Manager, the CloudLink secadmin user password in PowerFlex Manager is set. If the
CloudLink secadmin user password is changed after deployment, you must also change the CloudLink secadmin user password
within PowerFlex Manager to maintain manageability by PowerFlex Manager.

Steps
1. Open a web browser and log in to either CloudLink VM.
2. Log in with secadmin username and password (VMwar3123!!).
3. On the upper right corner, click secadmin, and click Change Password.



4. On the CHANGE PASSWORD screen, type the current password and the new password in the respective fields, and click Change.
5. On the upper right corner click secadmin, and then select Logout.
6. Log in with secadmin username and the new password.
7. Change the CloudLink password in PowerFlex Manager by doing the following steps:
a. In PowerFlex Manager, go to Settings > Credentials management, select the CloudLink credential, click Edit, change
the Password, and click Save. See Credentials management for more information.
8. Test the changes:
a. In the PowerFlex Manager GUI, go to Resources page, select the CloudLink center, and click Run Inventory.
b. To confirm that the process completes with no errors, check Settings > Logs.

Unlocking the CloudLink secadmin user


Use this procedure to unlock the CloudLink secadmin user.

About this task


The CloudLink secadmin user account gets locked after three unsuccessful login attempts (by default).

Steps
1. Log in to the VMware vCenter that manages CloudLink center VMs and launch the CloudLink VM Console.
2. Log in with CloudLink user credentials.
The Summary page displays.
3. Click OK.
4. Type the CloudLink user password on the Re-enter password page.
5. On Update Menu, select Unlock User, and click OK.
The User secadmin has been unlocked message is displayed.
6. Click OK.
7. To test the changes, log in to CloudLink VM IP using the secadmin user, and the correct password.

Setting CloudLink Vault passcodes


During the initial server configuration, the vault passcodes are set.

Steps
1. To set or change the passcode, log in to the CloudLink Center.
2. Go to System > Vault > Actions > Set passcodes.
3. Update passcodes and click Set passcodes.
NOTE: You can change passcodes at any time.

Back up and restore CloudLink Center


Viewing backup information
Use this procedure to view the Backup page information.

Steps
To view the backup information, log in to the CloudLink Center, and click System > Backup. The Backup page lists the
following information:



Backup File Prefix: The prefix used for the backup files.
Current Key ID: The identifier for the current RSA-2048 key pair.
Current Backup File: The name of the current backup file.
Current Backup Time: The date and time that the current backup file was generated.
Backup Schedule: The schedule for generating automatic backups.
Next Backup In: The time remaining before the next automatic backup is generated.

When a backup file is downloaded, the Backup page lists the following additional information:

Last Downloaded File: The name of the backup file that was last downloaded. Only shown when a backup file has been downloaded.
Last Downloaded Time: The date and time of the last backup file download. Only shown when a backup file has been downloaded.
Backup Store: The backup store configuration type. If you have not configured a backup store, the value is Local, which is stored on the local desktop.

You can also use FTP or SFTP servers as backup stores. To change the backup store, click System > Backup > Actions > Change Backup Store.
If you have configured an FTP or SFTP backup store, the following additional information is available:

Host: The remote FTP, SFTP, or FTPS host where you saved the CloudLink Center backups. You can set this value to the host IP address or hostname (if DNS is configured).
Port: The port used to access the backup store.
User: The user with permission to access the backup store.
Directory: The directory in the backup store where backup files are available.

Changing the schedule for automatic backups


CloudLink Center automatically generates a backup file each day at midnight (UTC time).

Steps
To change the schedule for generating automatic backups, click System > Backup > Actions > Change Backup Schedule.

Generating a backup file manually


If you want to preserve CloudLink Center before the next automatic backup, you can generate a backup manually.

Steps
In the CloudLink Center, click System > Backup > Actions > Generate new backup.



Generating a backup key pair
Use this procedure to generate a new backup key pair.

About this task


For example, if the private key for a backup key pair is lost, you can generate a new key pair. You cannot access your backup
files without the associated private key. When you generate a new key pair, CloudLink Center automatically generates a new
backup file to ensure that the current backup can be opened with the private key of the current key pair.
Dell EMC recommends the following practices when you generate a new backup key pair.

Steps
1. Download the private key to the Downloads folder for the current user account. For example, C:\Users\Administrator\Downloads.

NOTE: The previously generated backup key will not open backup files created after a new key is generated.

2. Click System > Backup > Actions > Generate And Download New Key.

Downloading the current backup file


You can download the current backup file at any time.

About this task


The current backup file is either:
● The last backup file that CloudLink Center automatically created.
● The last backup file that you manually generated after the last automatic backup.

Steps
1. Click System > Backup > Actions > Download Backup.
2. In the Download Current Backup dialog box, click Download.
When you download the current backup file, CloudLink Center shows the age of the backup file.



Restoring the CloudLink backup
Restore the CloudLink backup.

Steps
1. Log in to the CloudLink Center.
2. Click System > Backup > Actions > Restore keystores.

3. In the Restore Keystores dialog box, complete the following steps:


a. In the Key box, browse to the private key file.
b. In the Backup box, browse to the backup file.
c. In the Unlock box, type the passcode that was set during the initial configuration of the CloudLink Center.
d. Click Restore.

A Restore Keystores succeeded message is displayed.


NOTE: If the CloudLink backup is not associated with the key pair, a file is corrupted or key mismatch error message is displayed. In such a scenario, see Generating a backup key pair and Downloading the current backup file.



16
Powering off and on the PowerFlex appliance
cluster

Powering off a PowerFlex appliance hyperconverged


cluster
To safely power off the PowerFlex appliance cluster, power off one component at a time in the order specified in this procedure.
This procedure applies to PowerFlex appliance nodes with VMware ESXi.

Prerequisites
Verify that all startup configurations for the network switches are saved.

Steps
1. Launch the PowerFlex GUI and log in to the primary PowerFlex MDM. Verify the PowerFlex cluster is healthy and no rebuild
or rebalances are running by observing the Rebuild and Rebalance widgets on the dashboard.

2. Log in to the VMware vSphere Client of the vCenter that manages the PowerFlex appliance cluster.
a. Expand the clusters.
b. Shut down all customer/application VMs(not SVMs) running on the PowerFlex storage datastores.
CAUTION: Do not shut down the SVMs as this can cause data loss.

3. In PowerFlex GUI:

PowerFlex Versions prior to PowerFlex 3.5


Inactivate PowerFlex Protection Domains (both source and a. Click Backend > Storage and change the view
destination protection domains if asynchronous replication is to By SDSs.
enabled). b. Right-click a protection domain, and select
a. In Configuration, select the Protected Domains and click Inactivate.
More > Inactive. c. Click OK and then type the administrator
b. Click Inactivate in the pop up. password when prompted.
c. Verify the operation is completed successfully and click d. Repeat for each protection domain and verify
Dismiss. that each is deactivated.
d. Click OK and then type the administrator password when e. Exit the PowerFlex GUI.
prompted.
e. Repeat for each protection domain and verify that each is
deactivated.
f. Exit the PowerFlex GUI presentation server.

4. From the VMware vSphere Client of the vCenter that manages the PowerFlex appliance cluster:
a. Shut down all the SVMs.
b. Disable DRS and HA on the PowerFlex appliance cluster and put the nodes into Maintenance Mode.
5. From the VMware vSphere Client of the vCenter that manages the PowerFlex Gateway VM and CloudLink center VM:
a. Shut down the PowerFlex Gateway VM.
b. Shut down both CloudLink Center presentation server VMs.
6. Use iDRAC to do a Graceful Shutdown on the PowerFlex appliance nodes.

Powering off and on the PowerFlex appliance cluster 189


7. Using the appropriate VMware vSphere Client:
a. Shut down the PowerFlex Manager VM.
NOTE: If you shut down the PowerFlex Manager VM while a job (such as a service deployment) is still in progress,
the job will not complete successfully.

8. If required, power off the access switches first and then the management switch.

Powering on a PowerFlex appliance hyperconverged


cluster
To safely power on the PowerFlex appliance, power on one component at a time in the order specified in this procedure.

About this task


This procedure applies to PowerFlex appliance nodes with VMware hypervisors (ESXi).

Prerequisites
Verify that all connections are correct and seated properly.

Steps
1. Power on the network components in the following order:
NOTE: Network components take about 10 minutes to power on.

a. Management switch
b. Access switches
NOTE: Ping the management IP address of the switches to verify power on is complete.

2. Using the appropriate VMware vSphere Client power on these VMs in the following order:
a. PowerFlex Gateway presentation server
b. Both CloudLink Center VMs
c. PowerFlex Manager
3. Power on the PowerFlex appliance nodes and do the following:
a. Use SSH to connect to all network switches.
b. Verify that connected interfaces are not in a not connected/down state, with the command: show interface
status.
c. Use iDRAC to power on all the PowerFlex appliance compute nodes and verify that they are fully booted to the ESXi
screen.
d. Using the VMware vSphere client of the vCenter that manages the PowerFlex appliance cluster, and take each
PowerFlex appliance node out of Maintenance Mode.
i. Power on all SVMs
ii. Enable DRS and HA on the PowerFlex appliance cluster.

e. Log in to PowerFlex.

PowerFlex Versions prior to PowerFlex 3.5


i. Verify that all software-defined storage (SDS) is i. Verify that all software-defined storage (SDS) is online.
online. Verify that all disks are online. Verify that all disks are online.
ii. In Configuration > Protected domain, select ii. Select Backend > Storage > Protection Domain >
the protected domain, click More > Activate and Activate and repeat for each protection domain.
repeat for each protection domain.
iii. Repeat the steps for source and destination, if
asynchronous replication is enabled

190 Powering off and on the PowerFlex appliance cluster


PowerFlex Versions prior to PowerFlex 3.5
iv. Verify the following if asynchronous replication is
enabled
v. Click Protection > SDR. Verify all the SDRs are
healthy.
vi. Click Protection > Journal Capacity. Ensure
journal capacity has already added.
vii. Click Protection > RCGs. Verify that the RCG in
the replication cluster returns to a working state.

f. From the VMware vSphere client that manages the PowerFlex appliance cluster, do the following:
i. Rescan to rediscover PowerFlex storage datastores.
ii. Power on the customer VMs. VMs might be displayed as inaccessible because PowerFlex storage is not available until
all the SVMs complete initialization.

Powering off PowerFlex appliance two-layer cluster


This procedure applies to PowerFlex appliance two-layer cluster with VMware ESXi for compute nodes and the embedded
operating system based on CentOS for storage-only nodes

About this task


To safely power off the PowerFlex appliance two-layer cluster, power off one component at a time in the order specified in this
procedure.

Prerequisites
Verify that all startup configurations for the network switches are saved.

Steps
1. Launch the PowerFlex GUI and log in to the primary PowerFlex MDM. Verify the PowerFlex cluster is healthy and no rebuild
or rebalances are running by noting the Rebuild and the Rebalance widgets on the dashboard.
2. In the VMware vSphere Web Client that manages the PowerFlex appliance cluster compute-only nodes:
a. Expand the clusters and shut down all application VMs running on the PowerFlex storage datastores.
b. Disable DRS and HA on the customer compute cluster.
c. Put the PowerFlex appliance compute nodes into Maintenance Mode.
3. Use iDRAC to do a Graceful Shutdown on the PowerFlex appliance compute nodes.
4. In PowerFlex GUI:

PowerFlex Versions prior to PowerFlex 3.5


Inactivate PowerFlex Protection Domains (both a. Click Backend > Storage and change the view to By SDSs.
source and destination protection domains if b. Right-click on a protection domain and select Inactivate.
asynchronous replication is enabled). c. Click OK and type the administrator password when prompted.
a. In Configuration, select the Protected Domains d. Repeat for each protection domain and verify that each is
and click More > Inactive. deactivated.
b. Click Inactivate in the pop up. e. Exit the PowerFlex GUI.
c. Verify the operation is completed successfully and
click Dismiss.
d. Click OK and then type the administrator
password when prompted.
e. Repeat for each protection domain and verify that
each is deactivated.
f. Exit the PowerFlex GUI presentation server.
a. Click Configuration > Protection Domain.
b. For each protection domain click More >
Inactive.

Powering off and on the PowerFlex appliance cluster 191


PowerFlex Versions prior to PowerFlex 3.5
c. Click OK and type the administrator password
when prompted.
d. Repeat for each protection domain and verify that
each is deactivated.
e. Exit the PowerFlex GUI.

5. SSH to each of the PowerFlex appliance storage only nodes and shutdown the nodes by typing shutdown -h.
6. Use iDRAC to confirm the PowerFlex appliance storage nodes have been powered off.
7. In the VMware vSphere Web Client that manages the PowerFlex Gateway VM:
a. Shut down the PowerFlex Gateway by running the command shutdown -h in the console.
b. Confirm PowerFlex Gateway VM is shut down by observing if vSphere shows the VM as Powered Off.
8. Using the appropriate VMware vSphere web client, shut down both CloudLink center VMs.
9. Using the appropriate VMware vSphere web client, shut down the PowerFlex Manager VM.
NOTE: If you shut down the PowerFlex Manager VM while a job (such as a service deployment) is still in progress, the
job will not complete successfully.

10. Power off the access switches first and then the management switch.

Powering on PowerFlex appliance two-layer cluster


To safely power on the PowerFlex appliance cluster, power on one component at a time in the order specified in this procedure.

About this task


This procedure applies to PowerFlex appliance two-layer cluster with ESXi for compute nodes and the embedded operating
system based on CentOS for storage only nodes.

Prerequisites
Verify that all connections are correct and properly seated.

Steps
1. Power on the network components in the following order:
NOTE: Network components take about 10 minutes to power on.

a. Management switch
b. Access switches
NOTE: Ping the management IP address of the switches to verify power on is complete.

2. Using the appropriate VMware vSphere web client power on these VMs in the following order:
a. PowerFlex Gateway
b. Both CloudLink Center VMs.
c. PowerFlex Manager
3. Power on the PowerFlex appliance nodes by doing the following:
a. Use SSH to connect to all network switches:
● To verify that connected interfaces are not in a "not connected/down" state, use the command: show interface
status
b. Use iDRAC to power on all the PowerFlex appliance storage nodes and verify that they are fully booted to the Linux
prompt.
c. Log in to the PowerFlex GUI.

192 Powering off and on the PowerFlex appliance cluster


PowerFlex Versions prior to PowerFlex 3.5
i. Verify that all software-defined storage (SDS) is online. i. Verify that all software-defined storage (SDS) is
Verify that all disks are online. online by noting SDSs widget on the Dashboard.
ii. In Configuration > Protected domain, select the ii. Verify that all disks are online by looking at Backend >
protected domain, click More > Activate and repeat for Property Sheet > General for each SDS node.
each protection domain. iii. Select Backend/Storage > Protection Domain >
Activate and repeat for each protection domain.
iii. Repeat the steps for source and destination, if
asynchronous replication is enabled iv. Verify that there are no errors, warning, or alerts on
the system.
iv. Verify the following if asynchronous replication is
enabled v. Exit the PowerFlex GUI.
v. Click Protection > SDR. Verify all the SDRs are
healthy.
vi. Click Protection > Journal Capacity. Ensure journal
capacity has already added.
vii. Click Protection > RCGs. Verify that the RCG in the
replication cluster returns to a working state.

d. Use iDRAC to power on all the PowerFlex appliance compute nodes and verify that they are fully booted to the VMware
ESXi console screen.
e. Using the VMware vSphere Web Client of the vCenter that manages the PowerFlex appliance cluster:
i. Take each PowerFlex appliance compute-only node out of maintenance mode.
ii. Enable DRS and HA on the PowerFlex appliance compute-only cluster.
iii. Rescan to rediscover PowerFlex storage datastores.
iv. Power on the customer VMs.

Powering off PowerFlex compute-only nodes with


Windows Server 2016 or 2019
Use this procedure to power off PowerFlex Gateway with Windows Server.

Steps
1. Connect to the Windows Server system from the Remote Desktop with an account set up with an administrator privilege.
2. Power off through any one of the following modes:
a. GUI : Click Start > Power > Shutdown.
b. Command line using PowerShell: Run the Stop-Computer cmdlet.

Powering off PowerFlex compute-only nodes with Red


Hat
Use this procedure to power off PowerFlex compute-only nodes with Red Hat.

Steps
SSH to the PowerFlex appliance Red Hat compute-only nodes and shutdown the nodes by using the command:

shutdown -h

Powering off and on the PowerFlex appliance cluster 193


17
Ports and authentication protocols

PowerFlex Manager ports and protocols


PowerFlex Manager uses the following ports and protocols for data communication:

Port Protocol Port type Direction Use


22 SSH TCP Inbound/outbound I/O module
SSH with root account is disabled by
default

22, 80, 135 N/A TCP/IP Outbound Duplicate IP detection


53 DNS UDP Outbound DNS server
67, 68 DHCP UDP Outbound DHCP server
69 TFTP UDP Inbound Firmware updates
TFTP is used only for operating system
installation (PXE) boot when provisioning
servers

80, 8080 HTTP TCP Inbound/outbound HTTP communication


All traffic is redirected to HTTPs

111 rpcbind TCP Inbound/outbound NFS


123 NTP UDP Outbound Time synchronization
162, 11620 SNMP UDP Inbound SNMP synchronization
443 HTTPs TCP Inbound/outbound Secure HTTP communication
SSL v3, TLS v1.0, and TLS v1.1 are
disabled

443, 4433 WS-MAN TCP Outbound iDRAC and CMC communication


139, 445 CIFS TCP Inbound/outbound Back up to CIFS share
514 rsyslog TCP Outbound Remote syslog server communication
2049 NFS TCP/UDP Inbound/outbound Back up to NFS share
4002, 4003 NFS TCP/UDP Inbound/outbound nlockmgr and mountd
8140 Puppet over TCP Inbound New node provisioning
HTTPs
9443 HTTPS TCP Outbound Secure Remote Services gateway
communication

PowerFlex ports and authentication


For information about the ports and protocols used by PowerFlex components, see the Dell EMC PowerFlex Security
Configuration Guide.

194 Ports and authentication protocols


VMware vSphere ports and protocols
This section contains information for VMware vSphere ports and protocols.

VMware vSphere 7.0


For information about ports and protocols for VMware vCenter Server and VMware ESXi hosts, see VMware Ports and
Protocols.

VMware vSphere 6.7


For information about ports and protocols for VMware vCenter Server and Platform Services Controller, see Required Ports for
vCenter Server and Platform Services Controller or Additional vCenter Server TCP and UPD Ports.
For information about ports and protocols for VMware ESXi hosts, see Incoming and Outgoing Firewall Ports for ESXi Hosts.

VMware vSphere 6.5


For information about ports and protocols for VMware vCenter Server and Platform Services Controller, see Required Ports for
vCenter Server and Platform Services Controller.
For information about ports and protocols for VMware ESXi hosts, see Incoming and Outgoing Firewall Ports for ESXi Hosts.

Red Hat Virtualization Manager and Red Hat


Virtualization Host ports and protocols
For ports and protocols for Red Hat Virtualization Manager and Red Hat Virtualization Host, see the firewall requirements
sections in the Red Hat Virtualization Installation Guide.

CloudLink Center ports and protocols


CloudLink Center uses the following ports and protocols for data communication:

Port Protocol Port type Direction Use


80 HTTP TCP Inbound/outbound CloudLink agent download and cluster
communication
443 HTTPs TCP Inbound/outbound CloudLink Center web access and cluster
communication
1194 Proprietary TCP, UDP Inbound CloudLink agent communication
over TLS 1.2
5696 KMIP TCP Inbound KMIP service
123 NTP UDP Outbound NTP traffic
162 SNMP UDP Outbound SNMP traffic
514 syslog UDP Outbound Remote syslog server communication

Ports and authentication protocols 195


18
Additional documentation
The following information contains documentation resources to complete administrative procedures on PowerFlex appliance, and
general resources:
● Dell EMC PowerFlex
○ Access PowerFlex documentation here: docs.delltechnologies.com/.
● VMware vSphere Web Server
○ Refer to VMware Docs and select the appropriate version for detailed information to complete administrative procedures
for vSphere Server on PowerFlex appliance.
■ Changing the vCenter administrative password
■ Adding or replacing NTP servers in the VMware vCenter Server Appliance configuration
■ Configuring the DNS, IP address, and proxy settings.
■ Joining the VMware vCenter Server Appliance to the Active Directory domain
■ Leaving an Active Directory domain
■ Setting an alarm
■ Migrating VMs
■ Using and migrating vSphere Update Manager
■ Configuring VMware vCenter High availability
● Secure Remote Services
○ Access Secure Remote Services Technical Documentation and Downloads here: support.emc.com/products/
37716_EMC-Secure-Remote-Services-Virtual-Edition.
● Related information
○ VMware Documentation - docs.vmware.com/en/VMware-vSphere/index.html

196 Additional documentation

You might also like