Administration Guide
July 2021
Rev. 7.1
Notes, cautions, and warnings
NOTE: A NOTE indicates important information that helps you make better use of your product.
CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid
the problem.
WARNING: A WARNING indicates a potential for property damage, personal injury, or death.
© 2019 - 2021 Dell Inc. or its subsidiaries. All rights reserved. Dell, EMC, and other trademarks are trademarks of Dell Inc. or its subsidiaries.
Other trademarks may be trademarks of their respective owners.
Contents
Revision history........................................................................................................................................................................ 10
Chapter 1: Introduction................................................................................................................ 12
Contents 3
Using PowerFlex to enable persistent checksum................................................................................................45
Add licenses to PowerFlex and PowerFlex Manager................................................................................................46
Managing volumes, nodes, and network components............................................................................................. 46
Monitoring system health................................................................................................................................................ 47
Upgrading PowerFlex appliance firmware...................................................................................................................48
Mapping a volume using PowerFlex version prior to 3.5 to a Windows PowerFlex compute-only node.... 49
Mapping a volume using Windows PowerFlex compute-only node...................................................................... 49
Enabling and disabling SDC authentication.................................................................................................................50
Preparing for SDC authentication........................................................................................................................... 50
Configuring SDCs to use authentication............................................................................................................... 50
Windows and Linux SDC nodes................................................................................................................................ 51
Enabling SDC authentication ...................................................................................................................................52
Disabling SDC authentication................................................................................................................................... 52
Expanding an existing PowerFlex cluster with SDC authentication enabled................................................53
Remove replication trust for peer system............................................................................................................. 67
Enter SDS in maintenance mode............................................................................................................................. 67
Remove storage data replication from PowerFlex.............................................................................................. 67
Remove a storage data replication RPM............................................................................................................... 68
Clean up network configurations.............................................................................................................................68
Exit SDS in maintenance mode................................................................................................................................ 68
Remove journal capacity............................................................................................................................................69
Remove target volumes from the destination system....................................................................................... 69
Chapter 7: Deploying PowerFlex nodes using PowerFlex Manager............................................... 89
Full network automation.................................................................................................................................................. 89
Full network automation: Deploying a PowerFlex compute-only node with Red Hat Enterprise
Linux or CentOS...................................................................................................................................................... 90
Full network automation: Deploying a PowerFlex storage-only node.............................................................93
Full network automation: Deploying a VMware ESXi PowerFlex hyperconverged node or
PowerFlex compute-only node............................................................................................................................ 98
Adding volumes to a PowerFlex hyperconverged node or PowerFlex compute-only node ................... 103
Partial network automation........................................................................................................................................... 103
Partial network automation: Deploying a PowerFlex compute-only node with Red Hat Enterprise
Linux or CentOS.....................................................................................................................................................104
Partial network automation: Deploying a PowerFlex storage-only node......................................................107
Partial network automation: Deploying a VMware ESXi PowerFlex hyperconverged node or
PowerFlex compute-only node............................................................................................................................ 111
Adding volumes to a PowerFlex hyperconverged node or PowerFlex compute-only node .................... 116
Change the maximum transmission unit (MTU) on the cust_dvswitch....................................................... 133
Change the maximum transmission unit (MTU) for VMware vMotion VMK.............................................. 133
Upgrading the PowerFlex Manager virtual appliance............................................................................................. 134
Upgrade PowerFlex Manager using backup and restore................................................................................. 134
Upgrading the PowerFlex Manager virtual appliance using Secure Remote Services............................. 140
Restarting the PowerFlex Manager virtual appliance....................................................................................... 142
Upgrading components............................................................................................................................................ 142
Adding a new Intelligent Catalog file and OS images to PowerFlex Manager.................................................. 143
Upgrade the PowerFlex presentation server............................................................................................................ 143
Upgrading PowerFlex Gateway.................................................................................................................................... 144
Upgrading Java on the PowerFlex Gateway and PowerFlex GUI presentation server.................................. 144
Update the PowerFlex GUI presentation server...................................................................................................... 145
Update PowerFlex appliance nodes............................................................................................................................ 146
Migrating VMware vSphere Cluster Services (vCLS) VMs.................................................................................. 147
Upgrading Cisco NX-OS 7.x to Cisco NX-OS 9.x.................................................................................................... 147
Upgrading the electronic programmable logic device (EPLD)............................................................................. 149
Modifying the memory size according to the SDR requirements for FG pool-based PowerFlex
systems with replication ..................................................................................................................................... 164
Increasing the vCPU count according to the SDR requirement.................................................................... 165
Setting the vNUMA advanced option...................................................................................................................165
Editing the SVM configuration............................................................................................................................... 165
Powering on the SVM and configuring network interfaces .................................................................................166
Configure the newly added network interface controllers for SVMs........................................................... 166
Add a permanent static route for replication external networks .................................................................. 166
Install SDR RPMs on the SDS nodes (SVMs)...........................................................................................................167
Exit SDS maintenance mode......................................................................................................................................... 167
Verify communication between the source and destination................................................................................. 167
Add journal capacity percentage..................................................................................................................................168
Calculate journal capacity to allocate................................................................................................................... 168
Add allocated journal capacity................................................................................................................................ 168
Adding the Storage Data Replicator to a PowerFlex appliance........................................................................... 169
Create the peer system between the source and destination site .................................................................... 169
Adding the peer system ................................................................................................................................................ 170
Create the replication consistency group.................................................................................................................. 170
Finding the current copy status.............................................................................................................................. 171
Modifying the recovery point objective.................................................................................................................171
Defining the network for replication in PowerFlex Manager................................................................................. 171
Adding an existing service to PowerFlex Manager..................................................................................................172
Generating a backup file manually......................................................................................................................... 186
Generating a backup key pair..................................................................................................................................187
Downloading the current backup file.................................................................................................................... 187
Restoring the CloudLink backup............................................................................................................................ 188
Chapter 16: Powering off and on the PowerFlex appliance cluster.............................................. 189
Powering off a PowerFlex appliance hyperconverged cluster............................................................................. 189
Powering on a PowerFlex appliance hyperconverged cluster.............................................................................. 190
Powering off PowerFlex appliance two-layer cluster..............................................................................................191
Powering on PowerFlex appliance two-layer cluster.............................................................................................. 192
Powering off PowerFlex compute-only nodes with Windows Server 2016 or 2019....................................... 193
Powering off PowerFlex compute-only nodes with Red Hat................................................................................193
Revision history
Date Document revision Description of changes
July 2021 7.1 Updated Upgrade PowerFlex Manager
using backup and restore process.
June 2021 7.0 Added content for
● Administering storage with
asynchronous replication
● Remote replication on PowerFlex
storage-only nodes
● Minimum VMware vCenter
permissions required to support
PowerFlex Manager
● VMware vCLS VM migration
● Enabling replication on existing
PowerFlex hyperconverged nodes
● Dell PowerSwitch S5296F
● Upgrading VMware NSX-T Edge
Gateway nodes
December 2020 6.1 Added content for
● Upgrading VMware vSphere for
patch releases
Updated content for
● Native asynchronous replication
November 2020 6.0 Added content for
● Customer switch port examples
● Persistent checksum for data
integrity
● SDC authentication
● Full and partial network automation
Updated content for
● CloudLink
September 2020 5.1 Updated content for
● PowerFlex Gateway
June 2020 5.0 Added content for
● Storage data replication (SDR)
● Cisco NX-OS upgrade to 9.x
● PowerFlex 3.5
● CloudLink 6.9
● Protected maintenance mode (PMM)
March 2020 4.0 Updated content for
● CloudLink
● Windows compute-only nodes
● Dell EMC Networking
November 2019 3.0 Updated for
● CloudLink support
● Windows Server OS support
● Changes to embedded operating systems
Removed
● OpenManage Enterprise tasks
September 2019 2.0 Updated and added new topics for the September release
August 2019 1.0 Initial release
Chapter 1: Introduction
This guide provides procedures for administering the PowerFlex appliance.
It provides the following information:
● Administering the operating system, network, and storage
● Managing components with PowerFlex Manager
● Monitoring system health
● Monitoring and alerting using Secure Remote Services
● Configuring SNMP trap and syslog forwarding
● Backing up and restoring
● Managing PowerFlex appliance passwords
● Powering on and off
The dvswitch names are examples only and may not match the configured system. Do not change these names, or a data-unavailable or data-loss event may occur.
Depending on when the system was built, it uses an embedded operating system-based jump server or a Windows-based jump
server. The specific procedures in this guide describe using the Windows-based jump server. You can accomplish the same tasks
using the tools available for the embedded operating system-based jump server. If you are using a system with an embedded
operating system-based jump server, refer to Using an embedded operating system-based jump server for more details.
Dell EMC PowerFlex appliance was previously known as Dell EMC VxFlex appliance. Similarly, Dell EMC PowerFlex Manager
was previously known as Dell EMC VxFlex Manager, and Dell EMC PowerFlex was previously known as Dell EMC VxFlex OS.
References in the documentation will be updated over time.
PowerFlex Manager provides the management and orchestration functionality for PowerFlex appliance.
See the Glossary for terms, definitions, and acronyms.
Chapter 2: Administering the network
Perform these procedures to administer the PowerFlex appliance network.
Jump server
The PowerFlex appliance management environment may include a jump server used for routine maintenance and troubleshooting. Remote access is provided using VNC (GUI) and SSH; SSH is always on. The jump server has an integrated configuration for various file sharing services, which can be enabled and disabled as needed. The scripts that enable and disable these services are located on the desktop.
The VM installation is relatively minimal, but includes Xorg and KDE (a graphical desktop environment). A nonroot account (admin) is provided for use. The admin account has full administrator escalation privileges (sudo), which are required for some tasks (the account password is required). All yum repositories are disabled or absent to prevent inadvertent or ad hoc updates from being applied.
NOTE: Most maintenance, management, and orchestration operations are still intended to be performed using PowerFlex
Manager.
OpenSSH
An OpenSSH server listens on the default port (22/tcp). Non-root connections are permitted, and any client capable of handling the presented cipher suites can connect without issue. SSH client selection and configuration are beyond the scope of this guide.
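For example, connecting from any standard OpenSSH client might look like the following sketch; the jump server hostname is a placeholder:

```
# Connect as the non-root admin account (hostname is a placeholder)
ssh admin@jump-server.example.local

# Tasks that need escalation are run with sudo (the admin password is required)
sudo systemctl status sshd
```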
Steps
1. Locate the embedded operating system-based iDRAC tools and installation instructions on the Dell Technologies Support site, where the latest Linux version is available.
2. Run the following command on the embedded operating system-based jump box to create a specific symlink to satisfy SSL
requirements:
Steps
1. Obtain the updated embedded OS image from the IC software repository.
2. Deploy the embedded jump VM and assign a valid IP address with internet connectivity. A valid DNS entry must be defined.
The embedded OS jump VM will replace the existing Windows server.
3. Run df -h to verify that there is enough free space on the /shares partition of the embedded jump VM to download the RPM packages and create the ZIP file. At least 15 GB is recommended.
Steps
1. Create a directory called Centos-RPM in the /shares volume, type: sudo mkdir /shares/Centos-RPM.
2. Copy the repository update ZIP file to the /tmp directory of the embedded operating system VM using WinSCP or similar.
3. Extract the contents of the repository update ZIP file to the /shares/Centos-RPM directory, type: sudo unzip /tmp/
repofilename.zip -d /shares/Centos-RPM.
4. Create a new repository file in the /etc/yum.repos.d directory, type: sudo vi /etc/yum.repos.d/centos.rpm.repo. In this example, the file that is created is /etc/yum.repos.d/centos.rpm.repo.
5. Clean the yum cache, type: sudo yum clean all.
6. Verify access to the new repository, type: sudo yum repolist.
7. Deploy the updates from the repository, type: sudo yum update. When prompted, answer y.
8. When the process is complete, reboot the system, type: sudo reboot.
9. Once the system reboot has completed, verify the kernel version, type: uname -a.
10. Verify the embedded operating system version, type: cat /etc/centos-release.
11. Remove the RPM files, type: sudo rm -f -r /shares/Centos-RPM.
12. Remove the repository index file, type: sudo rm /etc/yum.repos.d/centos.rpm.repo.
13. Clean the yum cache, type: sudo yum clean all.
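As a sketch, the repository file from step 4 might contain entries like the following; the repository ID, name, and gpgcheck setting are assumptions for illustration, while the baseurl matches the /shares/Centos-RPM directory used above:

```ini
# /etc/yum.repos.d/centos.rpm.repo (illustrative content)
[centos-rpm-local]
name=Local CentOS RPM update repository
baseurl=file:///shares/Centos-RPM
enabled=1
gpgcheck=0
```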
Steps
1. Open an SSH session with a VMware ESXi host using PuTTy or a similar SSH client.
2. Log in to the host using root.
3. Type vmkping <ping command> to ping each SDC using the following commands:
4. Repeat from each VMware ESXi host using the following commands:
For example:
NOTE: The following command requires the vmk number to reference the port group. For a standard build, flex-data3-<vlanid> is vmk2 and flex-data4-<vlanid> is vmk3.
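A minimal sketch of such a check, assuming the standard vmk numbering above and placeholder SDC IP addresses (a 9000-byte MTU leaves 8972 bytes of ICMP payload after the 28 bytes of IP and ICMP headers):

```
# Ping an SDC over flex-data3 (vmk2) with don't-fragment set and a jumbo payload
vmkping -I vmk2 -d -s 8972 192.168.152.101

# Repeat over flex-data4 (vmk3)
vmkping -I vmk3 -d -s 8972 192.168.153.101
```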
Steps
1. Open an SSH session with an SDS host using PuTTy or a similar SSH client.
2. Log in to the host using root.
3. Ping each SDS and the PowerFlex Gateway using a 9000-byte packet without fragmentation on the SDS-to-SDS data networks.
4. Repeat for each SDS host.
5. Repeat for the PowerFlex Gateway.
For example:
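A minimal sketch of such a ping on a Linux SDS node, assuming a placeholder target IP address (-M do sets the don't-fragment flag; 8972 bytes of payload plus 28 bytes of headers gives a 9000-byte packet on the wire):

```
ping -c 4 -M do -s 8972 192.168.152.102
```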
Steps
1. From the switch CLI, log in to the switch you want to check.
2. Check each interface for its MTU configuration.
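On a Cisco Nexus switch, for example, the MTU can be read from the interface status; the interface and port-channel numbers here are placeholders:

```
show interface ethernet 1/15 | include MTU
show interface port-channel 37 | include MTU
```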
Steps
1. In VMware vSphere Client, navigate to the VMware ESXi host.
2. Click the Configure tab, and click Networking.
3. Select VMkernel adapters.
4. Select the VMkernel adapter from the table.
Steps
1. Log in to VMware vCenter web interface.
2. On the menu, click Home.
3. From the navigation pane, click Networking.
4. Select the virtual switch that you want to check.
5. Click the Configure tab.
6. In the navigation pane, select Settings > Properties.
7. In the Properties window, under Advanced, verify the MTU setting is set to 9000.
Steps
1. Log in to PowerFlex Manager.
2. From the menu, click Services.
3. Select a service for which you want to add a network and in the right pane, click View Details.
4. Under Resource Action, from the Add Resources list, click Add Network.
The Add Network window is displayed. All used resources and networks are displayed under Resource Name and
Networks.
5. From the Available Networks list, select the network, and click Add.
The selected network is displayed under Network Name. You can define a new network by clicking Define a new network and selecting the check box to configure Static IP Ranges.
6. Click Save.
It may take about 15 minutes for PowerFlex Manager to complete the actions of adding the VLAN to the access switches
and the VMware ESXi cluster.
NOTE: PowerFlex Manager supports scale up to 400 general-purpose LAN networks.
Prerequisites
Ensure that a new VLAN is created on any switches that need access to that VLAN and is added to any management cluster server-facing ports. The VLAN is then added to any northbound trunks to other switches with which it must communicate.
Steps
1. Log in to PowerFlex Manager.
2. On the menu bar, click Services.
3. Select a service for which you want to add a network and in the right pane, click View Details.
4. Under Resource Action, from the Add Resources list, click Add Network.
The Add Network window is displayed. All used resources and networks are displayed under Resource Name and
Networks.
5. Click Add Additional Network to add an additional network:
a. From the Available Networks list, select the network, and click Add.
The selected network is displayed under Network Name. You can define a new network by clicking Define a New
Network.
b. Select Port Group from the Select Port Group list.
c. Click Save.
6. Click Add Additional Static Route to add an additional static route:
a. Click Add New Static Route.
b. Select a Source Network.
The source network must be a PowerFlex data network or a replication network.
Steps
On the command prompt, type the following:
Cisco Nexus

Cisco_Access-A# configure
Cisco_Access-A(config)# vlan 10
Cisco_Access-A(config-vlan)# exit
Cisco_Access-A(config)# interface port-channel 100
Cisco_Access-A(config-if)# switchport trunk allowed vlan add 10
Cisco_Access-A(config-if)# end
Cisco_Access-A# copy running-config startup-config
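After saving the configuration, the change can be verified from the same switch; for example:

```
Cisco_Access-A# show vlan id 10
Cisco_Access-A# show interface port-channel 100 switchport
```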
Steps
1. Create a VM on a PowerFlex compute-only node or PowerFlex hyperconverged node.
2. Assign the newly created distributed port group to the VM.
3. Configure an IP address, mask, and gateway on the VM that corresponds to the new VLAN.
4. Ping the gateway from the VM.
5. After you have successfully pinged the gateway from the VM, delete the VM.
Steps
1. Open an SSH session with the Cisco Nexus switch using PuTTY or a similar SSH client.
2. Log in with admin or other credentials with the required privileges.
3. Enable session logging. If using PuTTY, right-click the title bar and go to Change Settings > Session > Logging.
4. Select All session output.
5. Type a log file name and click Apply.
6. In the switch CLI, type show tech-support.
The following configuration is identical on Switch A and Switch B:

Port channel:
interface port-channel37
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 104,106,150
  spanning-tree port type edge trunk
  spanning-tree bpduguard enable
  spanning-tree guard root
  speed 25000
  mtu 9216
  no lacp suspend-individual
  vpc 37

Ethernet port:
interface Ethernet1/15
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 104,106,150
  spanning-tree port type edge trunk
  spanning-tree bpduguard enable
  spanning-tree guard root
  speed 25000
  mtu 9216
  channel-group 37 mode active
  no shutdown

Ethernet port:
interface Ethernet1/16
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 151,152,153,154
  spanning-tree port type edge trunk
  spanning-tree bpduguard enable
  spanning-tree guard root
  speed 25000
  mtu 9216
  channel-group 38 mode active
  no shutdown
The following configuration is identical on Switch A and Switch B:

Port channel:
interface port-channel37
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 150,151,153,1000
  spanning-tree port type edge trunk
  spanning-tree bpduguard enable
  spanning-tree guard root
  speed 25000
  mtu 9216
  no lacp suspend-individual
  vpc 37

Ethernet port:
interface Ethernet1/15
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 150,151,153,1000
  spanning-tree port type edge trunk
  spanning-tree bpduguard enable
  spanning-tree guard root
  speed 25000
  mtu 9216
  channel-group 37 mode active
  no shutdown

Ethernet port:
interface Ethernet1/16
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 152,154
  spanning-tree port type edge trunk
  spanning-tree bpduguard enable
  spanning-tree guard root
  speed 25000
  mtu 9216
  channel-group 38 mode active
  no shutdown
The following example pertains to PowerFlex storage-only node connectivity with SDS and SDC traffic.
NOTE: In a two-layer deployment, the SDC only data1 (SDC traffic only) network and SDC only data2 (SDC traffic only) network are defined on port 1 along with PowerFlex management. Port 2 will have SDS only data1 (SDS traffic only) and SDS only data2 (SDS traffic only).
Port examples for management are as follows:
The following configuration is identical on Switch A and Switch B:

Port channel:
interface port-channel37
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 150,151,152,1000
  spanning-tree port type edge trunk
  spanning-tree bpduguard enable
  spanning-tree guard root
  speed 25000
  mtu 9216
  no lacp suspend-individual
  vpc 37

Ethernet port:
interface Ethernet1/15
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 150,151,152,1000
  spanning-tree port type edge trunk
  spanning-tree bpduguard enable
  spanning-tree guard root
  speed 25000
  mtu 9216
  channel-group 37 mode active
  no shutdown
The data networks for these ports are used for SDS traffic only. The following table provides port examples for PowerFlex:
Ethernet port (identical on Switch A and Switch B):
interface Ethernet1/16
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 153,154
  spanning-tree port type edge trunk
  spanning-tree bpduguard enable
  spanning-tree guard root
  speed 25000
  mtu 9216
  channel-group 38 mode active
  no shutdown
The following configuration is identical on Switch A and Switch B:

Port channel:
interface port-channel117
  no shutdown
  switchport mode trunk
  switchport trunk allowed vlan 104,105,150
  spanning-tree bpduguard enable
  spanning-tree guard root
  spanning-tree port type edge
  lacp fallback enable
  mtu 9216
  vlt-port-channel 117
  spanning-tree port type edge

Ethernet port:
interface Ethernet1/1/7
  switchport mode trunk
  switchport trunk allowed vlan 104,105,150
  spanning-tree bpduguard enable
  spanning-tree guard root
  spanning-tree port type edge
  no switchport
  mtu 9216
  speed 25000
  flowcontrol receive off
  channel-group 117 mode active
  no shutdown

Ethernet port:
interface Ethernet1/1/8
  switchport mode trunk
  switchport trunk allowed vlan 150,152,154
  spanning-tree bpduguard enable
  spanning-tree guard root
  spanning-tree port type edge
  no switchport
  mtu 9216
  speed 25000
  flowcontrol receive off
  channel-group 118 mode active
  no shutdown
The following configuration is identical on Switch A and Switch B:

Port channel:
interface port-channel117
  no shutdown
  switchport mode trunk
  switchport trunk allowed vlan 150,151,152
  spanning-tree bpduguard enable
  spanning-tree guard root
  spanning-tree port type edge
  lacp fallback enable
  mtu 9216
  vlt-port-channel 117
  spanning-tree port type edge

Ethernet port:
interface Ethernet1/1/7
  switchport mode trunk
  switchport trunk allowed vlan 150,151,152
  spanning-tree bpduguard enable
  spanning-tree guard root
  spanning-tree port type edge
  no switchport
  mtu 9216
  speed 25000
  flowcontrol receive off
  channel-group 117 mode active
  no shutdown

Ethernet port:
interface Ethernet1/1/8
  switchport mode trunk
  switchport trunk allowed vlan 150,153,154
  spanning-tree bpduguard enable
  spanning-tree guard root
  spanning-tree port type edge
  no switchport
  mtu 9216
  speed 25000
  flowcontrol receive off
  channel-group 118 mode active
  no shutdown
The following example pertains to PowerFlex storage-only node connectivity with SDS and SDC traffic only.
NOTE: In a two-layer deployment, the SDC only data1 (SDC traffic only) and SDC only data2 (SDC traffic only) networks are defined on port 1, along with PowerFlex management. Port 2 has the SDS only data1 (SDS traffic only) and SDS only data2 (SDS traffic only) networks.
Port examples for management are as follows:
Switch A and Switch B use identical configuration.

Port channel:
interface port-channel117
no shutdown
switchport mode trunk
switchport trunk allowed vlan 150,151,152
spanning-tree bpduguard enable
spanning-tree guard root
spanning-tree port type edge
lacp fallback enable
mtu 9216
vlt-port-channel 117
spanning-tree port type edge

Ethernet port:
interface Ethernet1/1/7
switchport mode trunk
switchport trunk allowed vlan 150,151,152
spanning-tree bpduguard enable
spanning-tree guard root
spanning-tree port type edge
no switchport
mtu 9216
speed 25000
flowcontrol receive off
channel-group 117 mode active
no shutdown
Switch A and Switch B use identical configuration.

Port channel:
interface port-channel117
no shutdown
switchport mode trunk
switchport trunk allowed vlan 150,153,154
spanning-tree bpduguard enable
spanning-tree guard root
spanning-tree port type edge
lacp fallback enable
mtu 9216
vlt-port-channel 117
spanning-tree port type edge

Ethernet port:
interface Ethernet1/1/7
switchport mode trunk
switchport trunk allowed vlan 150,153,154
spanning-tree bpduguard enable
spanning-tree guard root
spanning-tree port type edge
no switchport
mtu 9216
speed 25000
flowcontrol receive off
channel-group 117 mode active
no shutdown
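The VLAN split described in the preceding two-layer examples can be sanity-checked with a short sketch. The VLAN IDs below are taken from the sample tables (150 for management, 151/152 for the SDC-only data networks, 153/154 for the SDS-only data networks); they are site-specific example values, not fixed PowerFlex assignments.

```python
# Two-layer storage-only example: SDC traffic and SDS traffic are split
# across two node ports. VLAN IDs mirror the sample tables above and are
# illustrative assumptions; substitute your own site values.
port1_vlans = {150, 151, 152}  # PowerFlex management + SDC only data1/data2
port2_vlans = {150, 153, 154}  # management + SDS only data1/data2

# Only the management VLAN should appear on both ports; the SDC and SDS
# data networks must not overlap.
shared = port1_vlans & port2_vlans
assert shared == {150}, f"unexpected shared VLANs: {shared}"
```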
Switch A and Switch B use identical configuration.

Port channel:
interface Port-channel 37
no ip address
mtu 9216
portmode hybrid
switchport
spanning-tree mstp edge-port
spanning-tree rstp edge-port
spanning-tree 0 portfast
spanning-tree pvst edge-port
vlt-peer-lag port-channel 37
no shutdown

LACP:
lacp ungroup member-independent port-channel 37

Ethernet port:
interface twentyFiveGigE 1/35
no ip address
mtu 9216
port-channel-protocol LACP
port-channel 37 mode active
no shutdown
Switch A and Switch B use identical configuration.

Port channel:
interface Port-channel 38
no ip address
mtu 9216
portmode hybrid
switchport
spanning-tree mstp edge-port
spanning-tree rstp edge-port
spanning-tree 0 portfast
spanning-tree pvst edge-port
vlt-peer-lag port-channel 38
no shutdown

LACP:
lacp ungroup member-independent port-channel 38

Ethernet port:
interface twentyFiveGigE 1/35
no ip address
mtu 9216
port-channel-protocol LACP
port-channel 38 mode active
no shutdown

Add VLANs:
interface vlan 151
tagged Port-channel 38
interface vlan 152
tagged Port-channel 38
interface vlan 153
tagged Port-channel 38
interface vlan 154
tagged Port-channel 38
Switch A and Switch B use identical configuration.

Port channel:
interface Port-channel 37
no ip address
mtu 9216
portmode hybrid
switchport
spanning-tree mstp edge-port
spanning-tree rstp edge-port
spanning-tree 0 portfast

LACP:
lacp ungroup member-independent port-channel 37

Ethernet port:
interface twentyFiveGigE 1/35
no ip address
mtu 9216
port-channel-protocol LACP
port-channel 37 mode active
no shutdown

Add VLANs:
interface vlan 150
tagged Port-channel 37
interface vlan 151
tagged Port-channel 37
interface vlan 153
tagged Port-channel 37
interface vlan 1000
tagged Port-channel 37
Switch A and Switch B use identical configuration.

Port channel:
interface Port-channel 37
no ip address
mtu 9216
portmode hybrid
switchport
spanning-tree mstp edge-port
spanning-tree rstp edge-port
spanning-tree 0 portfast
spanning-tree pvst edge-port
vlt-peer-lag port-channel 37
no shutdown

LACP:
lacp ungroup member-independent port-channel 37

Ethernet port:
interface twentyFiveGigE 1/35
no ip address
mtu 9216
port-channel-protocol LACP
port-channel 37 mode active
no shutdown

Add VLANs:
interface vlan 152
tagged Port-channel 37
interface vlan 154
tagged Port-channel 37
The following example pertains to PowerFlex storage-only node connectivity with SDS and SDC traffic only.
Switch A and Switch B use identical configuration.

Port channel:
interface Port-channel 37
no ip address
mtu 9216
portmode hybrid
switchport
spanning-tree mstp edge-port
spanning-tree rstp edge-port
spanning-tree 0 portfast
spanning-tree pvst edge-port
vlt-peer-lag port-channel 37
no shutdown

LACP:
lacp ungroup member-independent port-channel 37

Ethernet port:
interface twentyFiveGigE 1/35
no ip address
mtu 9216
port-channel-protocol LACP
port-channel 37 mode active
no shutdown

Add VLANs:
interface vlan 150
tagged Port-channel 37
interface vlan 151
tagged Port-channel 37
interface vlan 152
tagged Port-channel 37
interface vlan 1000
tagged Port-channel 37
The following table provides port examples for PowerFlex. The data networks for these ports are used for SDS traffic only.
Switch A and Switch B use identical configuration.

Port channel:
interface Port-channel 37
no ip address
mtu 9216
portmode hybrid
switchport
spanning-tree mstp edge-port
spanning-tree rstp edge-port
spanning-tree 0 portfast
spanning-tree pvst edge-port
vlt-peer-lag port-channel 37
no shutdown

LACP:
lacp ungroup member-independent port-channel 37

Ethernet port:
interface twentyFiveGigE 1/35
no ip address
mtu 9216
port-channel-protocol LACP
port-channel 37 mode active
no shutdown

Add VLANs:
interface vlan 153
tagged Port-channel 37
interface vlan 154
tagged Port-channel 37
Switch A and Switch B use identical configuration.

Port channel:
interface Port-Channel104
switchport mode trunk
switchport trunk allowed vlan 105,150,1000
port-channel lacp fallback individual
port-channel lacp fallback timeout 5
mlag 104
spanning-tree portfast
no spanning-tree portfast auto
spanning-tree bpduguard enable

Ethernet port:
interface Ethernet35/4
switchport mode trunk
switchport trunk allowed vlan 105,150,1000
spanning-tree portfast
no spanning-tree portfast auto
spanning-tree bpduguard enable
mtu 9216
speed forced 25gfull
channel-group 104 mode active
Switch A and Switch B use identical configuration.

Port channel:
interface Port-Channel105
switchport mode trunk
switchport trunk allowed vlan 151,152,153,154
port-channel lacp fallback individual
port-channel lacp fallback timeout 5
mlag 104
spanning-tree portfast
no spanning-tree portfast auto
spanning-tree bpduguard enable

Ethernet port:
interface Ethernet35/5
switchport mode trunk
Switch A and Switch B use identical configuration.

Port channel:
interface Port-Channel104
switchport mode trunk
switchport trunk allowed vlan 150,151,153,1000
port-channel lacp fallback individual
port-channel lacp fallback timeout 5
mlag 104
spanning-tree portfast
no spanning-tree portfast auto
spanning-tree bpduguard enable

Ethernet port:
interface Ethernet35/4
switchport mode trunk
switchport trunk allowed vlan 150,151,153,1000
spanning-tree portfast
no spanning-tree portfast auto
spanning-tree bpduguard enable
mtu 9216
speed forced 25gfull
channel-group 104 mode active
Switch A and Switch B use identical configuration.

Port channel:
interface Port-Channel105
switchport mode trunk
switchport trunk allowed vlan 152,154
port-channel lacp fallback individual
port-channel lacp fallback timeout 5
mlag 104
spanning-tree portfast
no spanning-tree portfast auto
spanning-tree bpduguard enable

Ethernet port:
interface Ethernet35/5
switchport mode trunk
switchport trunk allowed vlan 152,154
spanning-tree portfast
Switch A and Switch B use identical configuration.

Port channel:
interface Port-Channel104
switchport mode trunk
switchport trunk allowed vlan 150,151,152,1000
port-channel lacp fallback individual
port-channel lacp fallback timeout 5
mlag 104
spanning-tree portfast
no spanning-tree portfast auto
spanning-tree bpduguard enable

Ethernet port:
interface Ethernet35/4
switchport mode trunk
switchport trunk allowed vlan 150,151,152,1000
spanning-tree portfast
no spanning-tree portfast auto
spanning-tree bpduguard enable
mtu 9216
speed forced 25gfull
channel-group 104 mode active
The following table provides port examples for PowerFlex. The data networks for these ports are used for SDS traffic only.
Switch A and Switch B use identical configuration.

Port channel:
interface Port-Channel105
switchport mode trunk
switchport trunk allowed vlan 153,154
port-channel lacp fallback individual
port-channel lacp fallback timeout 5
mlag 104
spanning-tree portfast
no spanning-tree portfast auto
spanning-tree bpduguard enable

Ethernet port:
interface Ethernet35/5
switchport mode trunk
switchport trunk allowed vlan 153,154
spanning-tree portfast
no spanning-tree portfast auto
spanning-tree bpduguard enable
Steps
1. Log in to PowerFlex Manager to determine the primary MDM.
2. To view the details of a service, select the component and scroll down on the Service Details page. The following information is displayed, based on the resource types in the service:
Physical Nodes: View the following information about the nodes that are part of the service:
● Health
● Asset/Service Tag
● iDRAC Management IP
● Hostname
● PowerFlex Mode
The mode for each node is one of the following:
○ Hyper-converged includes both SDS and SDC components.
○ Storage Only includes only the SDS component.
○ Compute Only includes only the SDC component.
● Associated IPs
● MDM Role
The MDM role is the metadata manager role. The MDM role applies only to those
nodes that are part of a PowerFlex cluster. The MDM role is one of the following:
○ Primary: The MDM in the cluster that controls the SDSs and SDCs. The
primary MDM contains and updates the MDM repository, the database that
stores the SDS configuration, and how data is distributed between the SDSs.
This repository is constantly replicated to the secondary MDMs, so they can
take over with no delay.
Every PowerFlex cluster has one primary MDM.
○ Secondary: An MDM in the cluster that is ready to take over the primary MDM
role if necessary.
○ Tie Breaker: An MDM whose sole role is to help determine which MDM is the
primary.
○ Standby MDM: A standby MDM can be called on to assume the position of a
manager MDM when it is promoted to be a cluster member.
Steps
1. On the menu bar, click Resources.
2. On the Resources page, click the All Resources tab.
3. From the list of resources, select the check box next to the resources that you want to inventory.
4. From the Details pane, click Run Inventory.
Next steps
To view the start time and end time of the resource inventory operation in the PowerFlex Manager logs, go to Settings > Logs.
Steps
1. Log in to PowerFlex Manager.
2. On the Services page, click the Add Resources button and choose Add Volumes.
3. When PowerFlex Manager displays the Add Volume wizard, click Add Existing Volumes or Create New Volumes.
NOTE: The Add Existing Volumes option is only available for a PowerFlex hyperconverged node service.
4. If you select Add Existing Volumes, select the Volume and provide the Datastore Name Template from Add Existing
Volumes page.
5. If you are creating a new volume for a hyperconverged service, provide the following information:
a. Click Add New Volume.
b. In the Volume Name field, select Create New Volume to create a new volume now, or select Auto generate name
when you create multiple volumes.
c. In the New Volume Name field, type the volume name, if you are creating a new volume.
d. In the Datastore Name field, select Create New Datastore to create a new datastore, or select an existing datastore.
If you choose a volume that is mapped to a datastore that was created previously in another hyperconverged or
compute-only service, you need to select the same datastore that was associated with the volume in the other service.
e. In the New Datastore Name field, type the datastore name, if you are creating a new datastore.
f. In the Storage Pool drop-down, choose the storage pool where the volume will reside.
g. Select the Enable Compression check box to take advantage of the PowerFlex NVDIMM compression feature.
h. In the Volume Size (GB) field, select the size in GB. The minimum size is 8 GB and the value you specify must be
divisible by eight.
i. In the Volume Type field, select thick or thin.
A thick volume provides a larger amount of storage in advance, whereas a thin volume provides on-demand storage and
faster setup and startup times.
j. In the New Volume Name field, if you select Auto Generate name, complete the following:
Volume Name Template: Modify the template based on your volume naming convention.
How Many Volumes: Enter the number of volumes to be created.
Datastore Name Template: Modify the template based on your datastore naming convention.
Storage Pool: Choose the storage pool where the volume will reside.
Volume Size (GB): Select the size in GB. The minimum size is 8 GB and the value you specify must be divisible by eight.
Volume Type: Select Thick or Thin.
a. In the Volume Name field, select Create New Volume to create a new volume now.
b. In the New Volume Name field, type the volume name.
c. In the Storage Pool drop-down, choose the storage pool where the volume will reside.
d. Select the Enable Compression check box to take advantage of the PowerFlex NVDIMM compression feature.
e. In the Volume Size (GB) field, select the size in GB. The minimum size is 8 GB and the value you specify must be
divisible by eight.
f. In the Volume Type field, select thick or thin.
A thick volume provides a larger amount of storage in advance, whereas a thin volume provides on-demand storage and
faster setup and startup times.
If you enable compression for the volume, thin is the only option available for Volume Type.
g. In the New Volume Name field, if you select Auto Generate name, complete the following:
Volume Name Template: Modify the template based on your volume naming convention.
How Many Volumes: Enter the number of volumes to be created.
Datastore Name Template: Modify the template based on your datastore naming convention.
Storage Pool: Choose the storage pool where the volume will reside.
Volume Size (GB): Select the size in GB. The minimum size is 8 GB and the value you specify must be divisible by eight.
Volume Type: Select Thick or Thin.
a. In the Volume Name field, select an existing volume. For a compute-only service, you can only select an existing volume
that has not yet been mapped.
b. In the Datastore Name field, select Create New Datastore to create a new datastore, or select an existing datastore.
The Datastore Name field is only available for a hyperconverged or compute-only service, as it applies only to services
with ESXi. If the volume was originally created in a storage-only service, you must select Create New Datastore to
create a new datastore. Alternatively, if the volume was originally created in a hyperconverged service, you must select
the datastore that was already mapped to the selected volume in the other service.
c. In the New Datastore Name field, type the datastore name, if you are creating a new datastore.
6. Optionally, click Add volume again to add another volume. Then, provide the required information for the volume.
7. Click Save.
The service moves to the In Progress state and the new volume icons appear on the Service Details page. After the
deployment completes successfully, the new volumes are displayed and indicated by a check mark in the Storage list on the
Service Details page. The PowerFlex 3.0.1.2 and older GUI shows the new volumes under the storage pool. In PowerFlex
3.5, new volumes are under Configuration > Volumes. For a storage-only service, the volumes are created, but not
mapped. For a compute-only or hyperconverged service, the volumes are mapped to SDCs. In the vSphere client, you can
see the volumes in the storage section and also see the hosts that are mapped to the volumes, once the mappings are in
place.
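The volume rules described above (minimum size 8 GB, size divisible by eight, and thin provisioning required when compression is enabled) can be summarized in a small validation sketch. The helper name is illustrative only and is not part of any PowerFlex API:

```python
def validate_volume(size_gb: int, volume_type: str, compression: bool) -> None:
    """Check a requested volume against the rules described above.

    Illustrative helper only; not a PowerFlex Manager API.
    """
    if size_gb < 8 or size_gb % 8 != 0:
        raise ValueError("Volume size must be at least 8 GB and divisible by eight")
    if compression and volume_type.lower() != "thin":
        raise ValueError("Compressed volumes must be thin provisioned")

validate_volume(16, "thin", compression=True)    # accepted
validate_volume(24, "thick", compression=False)  # accepted
```

For example, requesting a 20 GB volume fails validation because 20 is not divisible by eight, and a thick volume fails whenever compression is enabled.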
Steps
1. Connect the new PowerFlex appliance nodes' network interface cards (NICs) to the access switches and management switch exactly like the existing nodes.
2. Ensure that the newly connected switch ports are not shut down.
3. Set the IP address of the iDRAC management port, username, password, and SNMP settings to what is expected by
PowerFlex Manager.
4. Log in to PowerFlex Manager.
5. In the Services page, click Add Resources and click Add Nodes.
6. In the Duplicate Node wizard:
a. From the Resource to Duplicate list, select a node.
Select a node that is of the same type as the other nodes within the service.
b. In the Number of Instances box, enter the number of node instances that you want to add to the service.
The number of instances is fixed for this action.
c. Click Next.
d. Under PowerFlex Settings, specify the PowerFlex Storage Pool Spare Capacity setting by choosing one of the
following options:
i. Recommended Spare Capacity <n>% sets the spare capacity to 1 divided by the current number of SDSs in the
protection domain plus the number of nodes that you want to duplicate. For example, if you have three SDSs and you
want to add one more node instance, the recommended spare capacity is set to 25 percent, based on the formula
1/4.
ii. Current Spare Capacity <n>% sets the spare capacity to 1 divided by the current number of SDSs in the protection
domain. For example, if you currently have three Storage Data Servers (SDSs) in the protection domain, the current
spare capacity is set to 34 percent, based on the formula 1/3, rounded up.
e. Under OS Settings, set the Host Name Selection to Auto-Generate, Specify at Deployment Time, or Reverse
DNS Lookup.
f. If you choose Specify at Deployment Time, provide a name for the host in the Host Name field. If you choose
Auto-Generate, specify a template for the name in the Host Name Template field.
For an existing service that was not deployed by PowerFlex Manager, the Host Name Selection option is automatically
set to Specify at Deployment Time and you must type the hostname.
g. If you are adding a node to a hyperconverged service, specify the Host Name Selection under SVM OS Settings and
provide details about the hostname, as you did for the OS Settings.
h. In the IP Source box, provide an IP address. For an existing service that was not deployed with PowerFlex Manager,
the default choice is User Entered IP and the IP settings for each network default to Manual Entry. However, you can
change the setting to PowerFlex Manager Selected IP.
Under Hardware Settings, the Target Boot Device option is automatically set to Local Flash Storage for Dell EMC
PowerFlex for an existing hyperconverged or compute-only service that was not deployed by PowerFlex Manager.
i. Under Hardware Settings, in the Node Source box, select Node Pool or Manual Entry.
For an existing service not deployed by PowerFlex Manager, the node source defaults to Manual Entry, but you can
change it to Node Pool.
j. In the Node Pool box, select the node pool. Alternatively, if you chose Manual Entry, select the specific node in the
Choose Node box.
You can view all user-defined node pools and the global pool. Standard users can see only the pools for which they have
permission.
For an existing service not deployed by PowerFlex Manager, the Node Pool defaults to Global.
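The spare-capacity arithmetic in step d can be sketched as follows. The percentage is rounded up to a whole number, which matches the 34 percent result quoted for three SDSs; the function name is illustrative, not a PowerFlex Manager API:

```python
import math

def spare_capacity_pct(current_sds: int, added_nodes: int = 0) -> int:
    """Spare capacity as 1 / (current SDS count + nodes being added),
    expressed as a whole percent, rounded up. Illustrative sketch of the
    formula described above."""
    return math.ceil(100 / (current_sds + added_nodes))

print(spare_capacity_pct(3))     # Current Spare Capacity: 34 (1/3, rounded up)
print(spare_capacity_pct(3, 1))  # Recommended Spare Capacity: 25 (1/4)
```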
Steps
1. Log in to PowerFlex Manager.
2. From the menu, click Services.
3. On the Services page, select a service and click View Details.
4. Click Enter Service Mode on the Service Details page.
5. Select one or more nodes on the Node Lists page and click Next.
You can only put multiple nodes in service mode simultaneously if all the nodes are in the same fault set.
6. Select one of the following options:
● Instant Maintenance Mode enables you to perform short-term maintenance that lasts less than 30 minutes.
● Protected Maintenance Mode enables you to perform long-term maintenance that lasts more than 30 minutes.
● Evacuate Node from PowerFlex enables you to perform long-term maintenance that lasts more than 30 minutes.
NOTE: Evacuate node is only available for PowerFlex versions earlier than 3.5.
7. Click Enter Service Mode.
PowerFlex Manager displays a yellow warning banner at the top of the service page. The Service Mode icon is displayed for
the Overall Service Health and for the Resource Health for the selected node.
8. When you are ready to leave service mode, click Service Actions > Exit Service Mode.
Prerequisites
Before evacuating a node for long-term maintenance work, ensure that you have at least four nodes in the cluster. Also, ensure
that you have sufficient storage space on the remaining nodes to evacuate the data from the node that is placed in service
mode. If you are using protected maintenance mode (PowerFlex 3.5), the sum of the spare capacity and the free capacity must
be greater than the size of the node being put in protected maintenance mode.
Steps
1. On the menu bar, click Services.
2. On the Services page, select a service and click View Details in the right pane.
3. Click Enter Service Mode under Service Actions.
4. Select one or more nodes on the Node Lists page and click Next.
5. Specify the type of maintenance you want to perform by selecting one of the following options:
● Instant Maintenance Mode enables you to perform short-term maintenance that lasts less than 30 minutes. PowerFlex
Manager does not migrate the data.
● Protected Maintenance Mode enables you to perform maintenance that requires longer than 30 minutes in a safe
and protected manner. When you use protected maintenance mode, PowerFlex makes a temporary copy of the data
so that the cluster is fully protected from data loss. Protected maintenance mode applies only to hyperconverged and
storage-only services.
● Evacuate Node from PowerFlex (earlier versions of PowerFlex) enables you to perform long-term maintenance that
lasts more than 30 minutes. PowerFlex Manager migrates the data to other nodes in the cluster. It takes longer to
evacuate a node, but it is safer because there is no risk of a reboot causing data to be unavailable. Evacuation mode
applies only to hyperconverged and storage-only services.
6. Click Finish.
PowerFlex Manager displays a yellow warning banner at the top of the service page. The Service Mode icon displays for the
Deployment State and Overall Service Health, as well as for the Resource Health for the selected nodes.
7. When you are ready to leave service mode, click Service Actions > Exit Service Mode.
Steps
1. See Entering and exiting service mode to put the node in service mode.
2. After PowerFlex Manager shows that the node has entered service mode, turn off the PowerFlex node by using the iDRAC
interface to run a graceful shutdown.
3. Use the iDRAC interface to power on the PowerFlex node.
4. See Entering and exiting service mode to exit the node from service mode.
Resize a volume
After adding volumes to a service, you can resize the volumes.
Steps
1. On the Services page, click the volume component and choose Volume Actions > Resize.
2. Choose the volume that you want to resize:
a. Click Select Volume.
b. Enter a volume or datastore name search string in the Search Text box.
c. Optionally, apply additional search criteria by specifying values for the Size, Type, Compression, and Storage filters.
d. Click Search.
PowerFlex Manager updates the results to show only those volumes that satisfy the search criteria. If the search returns
more than 50 volumes, you must refine the search criteria to return only 50 volumes.
e. Select the row for the volume you want to resize.
3. Specify the new size:
a. In the New Volume Size (GB) field, specify a value that is greater than the current volume size.
b. Optionally, select Resize Datastore to increase the size of the datastore.
If you are resizing a volume for a storage-only service, enter a value in the New Volume Size (GB) field. Specify a value
that is greater than the current volume size. Values must be in multiples of eight, or an error occurs.
If you are resizing a volume for a compute-only service, review the Volume Size (GB) field to see if the volume size is
greater than Current Datastore Size (GB). If it is, PowerFlex Manager expands the datastore size.
4. Click Save.
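The resize constraints above can be expressed as a quick pre-check. This is an illustrative sketch; the function is not part of PowerFlex Manager:

```python
def check_resize(current_gb: int, new_gb: int) -> bool:
    """Validate a requested resize per the rules above: the new size must
    be greater than the current size and a multiple of eight GB.
    Illustrative helper only, not a PowerFlex Manager API."""
    if new_gb <= current_gb:
        raise ValueError("New size must be greater than the current volume size")
    if new_gb % 8 != 0:
        raise ValueError("Values must be in multiples of eight")
    return True

check_resize(16, 24)  # valid: larger than 16 and a multiple of eight
```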
Unmapping a volume
Use this procedure to unmap an existing volume from the PowerFlex cluster using the PowerFlex GUI presentation server.
Steps
1. Log in to the PowerFlex GUI presentation server.
2. Click the Configuration tab.
3. Click Volumes.
4. Select Volume, and click Mapping.
5. Click Unmap.
6. Select the nodes from the shown list and click Unmap.
Steps
1. In the PowerFlex GUI, select Frontend > Volumes.
2. Expand the correct storage pool to see the mapped volumes.
3. Right-click the volume that you want to unmap and select Unmap.
4. Select the nodes from which you want to unmap this volume and click Unmap Volumes.
Removing a volume
Use this procedure to remove a volume.
Prerequisites
PowerFlex Manager does not currently support removing a volume.
Steps
1. Log in to the PowerFlex GUI and click the Configuration tab.
2. Click Volumes, select the volumes that you want to unmap, and click Mapping > Unmap.
Steps
1. In the PowerFlex GUI, select Frontend > Volumes.
2. Expand the correct storage pool to see the mapped volumes.
3. Right-click the volume that you want to delete and select Unmap.
4. Select ALL the nodes to unmap this volume and click Unmap Volumes.
5. Right-click the volume that you want to delete and select Remove > Volume and click OK.
6. Type the MDM password when prompted and click Close.
7. Update the PowerFlex Manager inventory by doing the following steps:
a. In the PowerFlex Manager GUI, go to the Resources page, select the PowerFlex Gateway, and click Run Inventory.
b. To confirm that the process completes with no errors, go to Settings > Logs.
c. In the PowerFlex Manager GUI, go to Services page for the hyperconverged, storage, and compute clusters and then
click Update Service Details.
d. After Update Service Details process completes, confirm that all cluster objects report as healthy (green check mark).
Prerequisites
You will need the following information:
● IP or hostname of the PowerFlex GUI presentation server
● Valid credentials for the PowerFlex cluster
● Names of the protection domains to be worked on
● Names of the storage pools to be modified
Steps
1. Log in to the PowerFlex GUI presentation server with access to the PowerFlex cluster containing the Storage Pool you
want to modify.
2. Expand the Configuration menu in the navigation pane (underneath Dashboard) by clicking the entry.
Prerequisites
You will need the following information:
● IP or hostname of the PowerFlex presentation server
● Valid credentials for the PowerFlex cluster
● Names of the protection domains to be worked on
● Names of the storage pools to be modified
Steps
1. Log in to PowerFlex with access to the PowerFlex cluster containing the Storage Pool you want to modify.
2. Expand the Configuration menu in the navigation pane (underneath Dashboard) by clicking the entry.
3. Select Storage Pools.
4. Select the check box to the left of the Storage Pool you plan to modify.
5. Click More.
6. Select Background Device Scanner.
7. Clear Enable Background Device Scanner.
8. Click Apply.
9. Click Settings.
10. Click General.
11. In the resulting dialog box, leave the box checked for Enable Inflight / Persistent Checksum.
12. Select one or both of the Inflight and Persistent options.
13. Optionally, select Validate on read (this validation may incur a performance penalty).
14. Click Apply.
Steps
1. To add a license for PowerFlex, do the following:
a. Identify and copy the contents of the PowerFlex license file.
b. In the PowerFlex GUI presentation server, click Settings > Licenses.
c. Paste the contents of the license file into the space provided.
2. To add a PowerFlex Manager license:
a. Log in to PowerFlex Manager.
b. On the Licensing page of the Initial Setup wizard, click Choose File to the right of the Upload License field, and
select a valid license file.
Based on the license selected, the following information is displayed:
● Type—Displays the license type. PowerFlex Manager supports two license types:
○ Standard—Full-access license type.
○ Trial—Evaluation license that expires after a specified number of days and only supports a limited number of
resources. The number of days before expiration and the number of resources supported both depend on the
license you choose.
● Total Resources—Displays the maximum number of resources allowed by the license.
● Expiration Date—Displays the expiration date of the license (only shown for a trial license).
c. To activate the license, click Save and Continue.
Steps
Log in to PowerFlex Manager.
The following table describes common tasks for managing system components and what steps to take in PowerFlex Manager to
initiate each.
Steps
Log in to PowerFlex Manager.
The following table describes common tasks for monitoring system health and managing software and firmware compliance and
what steps to take in PowerFlex Manager to initiate each.
Steps
1. Log in to PowerFlex Manager.
2. Click Settings > Compliance & OS Repositories.
3. Select the Compliance Versions tab to load compliance versions and specify a default version for compliance checking.
The +Add button is available in both the Compliance Versions and OS Image Repositories tab.
You cannot make a minimal compliance version the default version for compliance checking, since it only includes server
firmware updates. The default version must include the full set of compliance update capabilities. PowerFlex Manager does
not show any minimal compliance versions in the Default Version dropdown menu.
The Compliance Versions tab displays the following information:
● State —Displays an icon indicating one of the following states:
○ Available—Indicates that the compliance file is downloaded and copied successfully.
○ Downloading—Indicates that the compliance file is being downloaded and provides the percentage complete for the
download operation.
○ Synchronizing—Indicates that the compliance file is being synchronized with the virtual appliance after unpacking.
○ Unpacking—Indicates that the compliance file is being unpacked and provides the percentage complete for the
unpacking operation.
○ Pending—Indicates that the compliance file download process is in progress.
○ Error—Indicates that there is an issue downloading the compliance file.
● Version—Displays the compliance version.
● Source—Displays the share path of the compliance version in a file share.
● File Size—Displays the size of the compliance file in GB.
● Type—Displays Minimal if the compliance file only contains firmware updates, or Full if it contains firmware and
software updates.
● View bundles—Displays details about any bundles added for the compliance version.
● Available Actions—Select one of the following options:
○ Delete
○ Resynchronize
4. Select the OS Image Repositories tab to create operating system image repositories and view the following information:
● State — Displays the following states:
○ Available—Indicates that the operating system image repository is downloaded and copied successfully on the
appliance.
○ Pending—Indicates that the operating system image repository download process is in progress.
○ Error—Indicates that there is an issue downloading the operating system image repository.
● Repositories—Displays the name of the repository.
● Image Type—Displays the operating system type.
● Source Path—Displays the share path of the repository in a file share.
● In Use—Displays the following options:
○ True—Indicates that the operating system image repository is in use.
○ False—Indicates that the operating system image repository is not in use.
● Available Actions—Select one of the following options:
○ Delete
○ Resynchronize
You cannot perform any actions on repositories that are in use. However, you can delete repositories that are in an Available
state but are not in use and not set as the default version.
All the options are available only for repositories in an Error state. The Resynchronize option appears only when you must
perform a backup and restore of a previous image.
If a new compliance version becomes available, the Compliance and OS Repositories page displays a notification banner
at the top of the screen with the text A new compliance version is available for download. View Details. To the far
right of the banner, you should see an Actions menu that gives you the following choices:
Steps
1. Open the PowerFlex GUI, click Front-end, and select Volumes.
2. Right-click the volume, and then select Map.
3. Select the Windows compute-only nodes, and click Map Volumes.
4. Log in to the Windows Server compute-only node and open disk management.
5. Right-click the Windows icon, and then select Disk Management.
6. Rescan the disk by selecting Action > Rescan Disks.
7. Find the disk in the bottom frame, right-click in the left area of the disk, and select Online.
8. Initialize the disk by doing the following steps:
a. Find the disk in the bottom frame, right-click in the right area of the disk, and then select New Simple Volume.
b. In the New Simple Volume Wizard, click Next.
c. Select the default, and click Next.
d. Assign the drive letter, and click Next.
e. Select the default, and click Next.
f. Click Finish.
Steps
1. Log in to the PowerFlex GUI and click the Configuration tab.
2. Click Volumes.
3. Select volume and click Mapping and select Map.
4. Select the required Windows compute-only node and click Map.
5. Select the volume to map and click Apply.
6. Select the Windows compute-only nodes, and click Map Volumes.
7. Log in to the Windows Server compute-only node and open disk management.
8. Right-click the Windows icon, and then select Disk Management.
9. Rescan the disk by selecting Action > Rescan Disks.
10. Find the disk in the bottom frame, right-click in the left area of the disk, and select Online.
11. Initialize the disk by performing the following steps:
a. Find the disk in the bottom frame, right-click in the right area of the disk, and then select New Simple Volume.
b. In the New Simple Volume Wizard, click Next.
Steps
1. Log in to the primary MDM.
2. Authenticate against the PowerFlex cluster using the credentials provided.
3. List and record all connected SDCs (either NAME, GUID, ID, or IP), type: scli --query_all_sdc.
4. For each SDC in your list, use the identifier you recorded to generate and record a CHAP secret, type: scli
--generate_sdc_password --sdc_ip <IP> (or NAME, GUID, or ID) --reason "CHAP setup".
NOTE: This secret is specific to that SDC and cannot be reused for subsequent SDC entries.
NOTE: VMware ESXi hosts must be rebooted for the new parameter to take effect.
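Steps 3 and 4 above can be semi-automated by parsing the recorded query output to collect the SDC identifiers; a sketch, assuming a hypothetical output format (adjust the field pattern to match your scli version):

```shell
# List the SDC IP addresses that each need a CHAP secret generated with
# 'scli --generate_sdc_password'. The query output below is a
# hypothetical sample of 'scli --query_all_sdc' saved to a file.
cat > /tmp/sdc_query.txt <<'EOF'
SDC ID: 6f2a0d1100000000 Name: sdc-node1 IP: 192.168.10.11 State: Connected
SDC ID: 6f2a0d1200000001 Name: sdc-node2 IP: 192.168.10.12 State: Connected
EOF
# Print the token following each "IP:" field.
awk '{for (i = 1; i <= NF; i++) if ($i == "IP:") print $(i+1)}' /tmp/sdc_query.txt
```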
Prerequisites
Ensure you have generated preshared secrets (passwords) for all SDCs to be configured.
Steps
1. SSH to the VMware ESXi host using the provided credentials.
2. List the host's current scini parameters, type: esxcli system module parameters list -m scini | grep Ioctl
Example output:
IoctlMdmPasswordStr string MDM passwords. Each value is <ip>-<password>; multiple passwords are separated by ';'.
For example: 10.20.30.40-AQAAAAAAAACS1pIywyOoC5t;11.22.33.44-tppW0eap4cSjsKIcMax
Maximum length: 1024 characters
3. Using esxcli, configure the driver with the existing and new parameters. To specify multiple IP addresses, use a
semicolon (;) between the entries, as shown in the following example:
NOTE: Note the spaces between the Ioctl parameter fields and the opening/closing quotes. The command is entered on a single
line.
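The parameter string from the example output above can be assembled and applied as follows; the IP addresses and secrets are placeholders, and the esxcli line is shown commented because it must run on the ESXi host itself (and takes effect only after a reboot):

```shell
# Build the IoctlMdmPasswordStr value as '<ip>-<password>' entries joined
# by semicolons. The IPs and secrets below are placeholder values.
PARAMS="IoctlMdmPasswordStr=10.20.30.40-AQAAAAAAAACS1pIywyOoC5t;11.22.33.44-tppW0eap4cSjsKIcMax"
echo "$PARAMS"
# On the ESXi host (requires a reboot to take effect):
# esxcli system module parameters set -m scini -p "$PARAMS"
```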
4. Now the SDC configuration is ready to be applied. On VMware ESXi nodes a reboot is necessary for this to happen. If the
SDC is a hyperconverged node, proceed with step 5. Otherwise, skip to step 8.
5. For hyperconverged nodes, use PowerFlex or the scli tool to place the corresponding SDS into maintenance mode.
6. If the SDS is also the cluster primary MDM, switch the cluster ownership to a secondary MDM and verify cluster state
before proceeding, type: scli --switch_mdm_ownership --mdm_name <secondary MDM name>
7. Once the cluster ownership has been switched (if needed) and the SDS is in maintenance mode, the SVM may be powered
down safely.
8. Place the ESXi host in maintenance mode. If workloads need to be manually migrated to other hosts, have those actions
performed now prior to maintenance mode being engaged.
9. Reboot the ESXi host.
10. Once the host has completed rebooting, remove it from maintenance mode and power on the SVM (if present).
11. Take the SDS out of maintenance mode (if present).
12. Repeat steps 1 through 11 for all VMware ESXi SDC hosts.
Windows: drv_cfg --set_mdm_password --ip <MDM IP> --port 6611 --password <secret>
Linux: /opt/emc/scaleio/sdc/bin/drv_cfg --set_mdm_password --ip <MDM IP> --port 6611 --password <secret> --file /etc/emc/scaleio/drv_cfg.txt
Iterate through the relevant SDCs, using the command examples above along with the recorded information.
Prerequisites
Ensure all SDCs are configured with their appropriate CHAP secret. Any older or unconfigured SDC will be disconnected from
the system when authentication is turned on.
You will need the following information:
● The primary MDM IP address
● Credentials to access the PowerFlex cluster
Steps
1. SSH to the primary MDM address.
2. Log in to the PowerFlex cluster using the provided credentials.
3. Enable the SDC authentication, type: scli --set_sdc_authentication --enable
4. Verify that the SDC authentication and authorization is turned on, and the SDCs are connected with passwords, type: scli
--check_sdc_authentication_status
Example output:
5. If the number of SDCs does not match, or you experience disconnected SDCs, list any disconnected SDCs, type: scli
--query_all_sdc | grep "State: Disconnected", and then disable SDC authentication, type: scli
--set_sdc_authentication --disable
Recheck the disconnected SDCs to ensure they have the proper configuration applied. If necessary, regenerate their shared
secret and reconfigure the SDC. If unable to resolve SDC disconnection, leave the feature disabled and engage Dell EMC
support as needed.
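The disconnected-SDC check in step 5 can be sketched against saved query output; the output format here is a hypothetical sample:

```shell
# Count disconnected SDCs from saved 'scli --query_all_sdc' output before
# deciding whether to disable authentication. The saved format is a
# hypothetical sample; adjust the match to your scli version.
cat > /tmp/sdc_status.txt <<'EOF'
SDC ID: 6f2a0d1100000000 IP: 192.168.10.11 State: Connected
SDC ID: 6f2a0d1200000001 IP: 192.168.10.12 State: Disconnected
EOF
disconnected=$(grep -c "State: Disconnected" /tmp/sdc_status.txt)
echo "$disconnected SDC(s) disconnected"
```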
Steps
1. SSH to the primary MDM address.
2. Log in to the PowerFlex cluster using the provided credentials.
3. Disable the SDC authentication, type: scli --set_sdc_authentication --disable
Once disabled, SDCs will reconnect automatically unless otherwise configured.
Prerequisites
Ensure you have the following information:
● Primary MDM IP address
● Credentials for the PowerFlex cluster
● The IP address of the new cluster members
Ensure SDC authentication is enabled on the PowerFlex cluster.
Steps
1. Install and add the SDCs as per normal procedures (whether using PowerFlex Manager or manual expansion process).
NOTE: New SDCs will show as Disconnected at this point, as they cannot authenticate to the system.
After failover: Reverse / restore is N/A - data is not replicated. By default, access to the volume is allowed through the
original target (system B).
Steps
1. Log in to the PowerFlex GUI presentation server: https://presentation_server_ip:8443.
NOTE: Use the primary MDM IP address and credentials to log in to the PowerFlex cluster.
b. Select the Source Protection Domain, Target System, and Target Protection Domain from the menu and click
Next.
4. On the Add Replication Pairs page:
a. Click the volume from the Source column and click the corresponding size volume from the Target column.
NOTE: Source and destination volumes should be identical.
b. Click Add pair, select the added pair that must be replicated and click Next.
5. On the Review Pairs page:
a. Select the added pair and click Add RCG & Start Replication.
b. Verify that the operation completes successfully and click Dismiss.
The RCG is added to both the source and target systems. Wait for the initial copy to complete before using the replicated
volumes.
Steps
1. Using SCLI, complete the following:
a. Log in to the primary MDM using SSH.
b. Log in to scli to add the peer system, type: scli --login --username admin.
c. Enter the MDM cluster password.
d. Verify the replication status, type: scli --query_all_replication_pairs.
Once the initial copy is complete, the PowerFlex replication system is ready for use.
2. Using the PowerFlex GUI, complete the following:
a. From the PowerFlex GUI, in the left pane, click Protection > RCGs.
b. In the right pane, select the relevant RCG check box.
c. Select the Volume Pairs tab and in the Details pane, verify the initial copy status and progress.
Once initial copy is complete, PowerFlex replication system is ready for use.
Steps
1. In the PowerFlex GUI, in the left pane, click Protection > RCGs.
2. In the right pane, select the relevant RCG check box, and click Modify > Modify RPO.
3. In the Modify RPO for RCG <rcg name> dialog box, enter the new RPO time and click Apply.
4. Verify that the operation completed successfully and click Dismiss.
Steps
1. From the PowerFlex GUI, in the left pane, click Protection > RCGs.
2. In the right pane, select the relevant RCG check box, and click Modify > Add Pair.
3. In the Add Pairs wizard, on the Add Replication Pairs page, select a volume from the source and a volume from the target
and then click Add Pair.
4. Click Next.
5. In the Review Pairs page, verify the selected volumes are the correct volumes and click Add Pairs.
Steps
1. From the PowerFlex GUI, in the left pane, click Protection > RCGs.
2. In the right pane, select the relevant RCG check box, and in the Details pane, in the Volume Pairs tab, click Unpair.
3. In the Remove Pair from RCG <RCG name> dialog box, click Remove Pair.
Steps
1. From the PowerFlex GUI, in the left pane, click Protection > RCGs.
2. In the right pane, select the relevant RCG check box, and click More > Freeze apply.
3. Click Freeze Apply.
4. Verify that the operation completed successfully and click Dismiss.
Steps
1. From the PowerFlex GUI, in the left pane, click Protection > RCGs.
2. In the right pane, select the relevant RCG check box, and click More > Unfreeze apply.
3. Click Unfreeze Apply to resume data transfer from target journal to target volume.
4. Verify that the operation completed successfully and click Dismiss.
Steps
1. From the PowerFlex GUI, in the left pane, click Protection > RCGs.
2. In the right pane, select the relevant RCG check box, and click Modify > Set Target to Inconsistent Mode.
3. In the Set Target to Inconsistent Mode RCG <RCG name> dialog box, click Apply.
4. Verify that the operation completed successfully and click Dismiss.
Steps
1. From the PowerFlex GUI, in the left pane, click Protection > RCGs.
2. In the right pane, select the relevant RCG check box, and click Modify > Set Target to Consistent Mode.
3. In the Set Target to Consistent Mode RCG <RCG name> dialog box, click Apply.
4. Verify that the operation completed successfully and click Dismiss.
Prerequisites
Ensure replication is still running and is in a healthy state.
Before running a test failover, map the target volumes with the appropriate access mode. By default, volumes are mapped with
read_write access. This conflicts with the mapping of target volumes, because PowerFlex sets the remote access mode of the
Replication Consistency Group (RCG) to read_only. Since this is incompatible with the default read_write mapping access
mode offered by the PowerFlex GUI, log in to the target system and manually map all volumes in the RCG using the scli
command.
A test failover operation is only possible after the peers are synchronized.
Steps
1. From the PowerFlex GUI, in the left pane, click Protection > RCGs.
2. In the right pane, select the relevant RCG check box, and click More > Test Failover.
3. In the RCG <RCG name> Test Failover dialog box, click Start Test Failover.
4. In the RCG <RCG name> Test Failover using target volumes dialog box, click Proceed.
5. Verify that the operation completed successfully and click Dismiss.
Steps
1. From the PowerFlex GUI, in the left pane, click Protection > RCGs.
2. In the right pane, select the relevant RCG check box, and click More > Test Failover Stop.
3. Click Approve.
4. Verify that the operation completed successfully and click Dismiss.
Running a failover
Use this procedure to failover the source role to the target system.
Prerequisites
Before performing a failover, ensure you stop the application and unmount the file systems at the source (if the source is
available). Target volumes can only be mapped after performing a failover. Target volumes can also be mapped using scli.
Steps
1. From the PowerFlex GUI, in the left pane, click Protection > RCGs.
2. In the right pane, select the relevant RCG check box, and click More > Failover.
3. In the Failover RCG <RCG name> dialog box, select one of the following options:
● Switchover: (sync and failover)
● Latest PiT: (date and time)
4. Click Apply Failover.
5. In the RCG <RCG name> Sync & Failover dialog box, click Proceed.
6. Verify that the operation completed successfully and click Dismiss.
7. From the top right, click Running Jobs and check the progress of the failover.
Restoring replication
Use this procedure to restore replication when the remote consistency group (RCG) is in failover.
Steps
1. From the PowerFlex GUI, in the left pane, click Protection > RCGs.
2. In the right pane, select the relevant RCG check box, and click More > Restore.
3. In the Restore Replication RCG <RCG name> dialog box, click Apply.
4. Verify that the operation completed successfully and click Dismiss.
Reversing replication
Use this procedure to reverse replication if the remote consistency group (RCG) is in failover or switchover mode.
Prerequisites
This option is available when the RCG is in failover mode, or when the target system is not available. It is recommended to
take a snapshot of the original source before reversing the replication, for backup purposes.
Steps
1. From the PowerFlex GUI, in the left pane, click Protection > RCGs.
2. In the right pane, select the relevant RCG check box, and click More > Reverse.
3. In the Restore Replication RCG <RCG name> dialog box, click Apply.
4. Verify that the operation completed successfully and click Dismiss.
Steps
1. From the PowerFlex GUI, in the left pane, click Protection > RCGs.
2. In the right pane, select the relevant RCG check box, and click More > Create Snapshots.
3. In the Create Snapshots RCG <RCG name> dialog box, click Create Snapshots.
4. Verify that the operation completed successfully and click Dismiss.
Steps
1. From the PowerFlex GUI, in the left pane, click Protection > RCGs.
2. In the right pane, select the relevant RCG check box, and click More > Pause RCG.
3. In the Pause RCG <RCG name> dialog box, click one of the following options:
● Stop data transfer - this option saves all the data in the source journal volume until no capacity remains.
● Track Changes - this option enables manual slim mode, where only metadata is saved in the source journal volumes.
4. Click Pause.
5. Verify that the operation completed successfully and click Dismiss.
Steps
1. From the PowerFlex GUI, in the left pane, click Protection > RCGs.
2. In the right pane, select the relevant RCG check box, and click Initial copy > Pause Initial copy.
3. In the Pause Initial Copy <RCG name> dialog box, click Pause Initial Copy.
Steps
1. From the PowerFlex GUI, in the left pane, click Protection > RCGs.
2. In the right pane, select the relevant RCG check box, and click More > Resume.
3. In the Resume Initial Copy <RCG name> dialog box, click Resume Initial Copy.
4. Verify that the operation completed successfully and click Dismiss.
Steps
1. From the PowerFlex GUI, in the left pane, click Protection > RCGs.
2. In the right pane, select the relevant RCG check box, and click More > Resume.
3. In the Pause RCG <RCG name> dialog box, click one of the following options:
● Stop Data Transfer - this option saves all the data in the source journal volume until no capacity remains.
● Track Changes - this option enables manual slim mode, where only metadata is saved in the source journal volumes.
4. Click Resume RCGs.
5. Verify that the operation completed successfully and click Dismiss.
Steps
1. From the PowerFlex GUI, in the left pane, click Protection > RCGs.
2. In the right pane, select the relevant RCG check box.
3. In the Volumes Pairs tab, click Initial copy > Set Priority.
4. In the Set Priority for Pair <RCG name> dialog box, select Default or High and click Save.
5. Verify that the operation completed successfully and click Dismiss.
Prerequisites
This mapping is only enabled from the target RCG.
Steps
1. From the PowerFlex GUI, in the left pane, click Protection > RCGs.
2. In the right pane, click the relevant RCG check box and click Mapping > Map.
3. In the Map RCG Target Volumes dialog box, click the relevant SDC check box, and click Map.
4. In the Mappings section of the dialog box, select the volume check box and select the access mode.
NOTE: Read Access mode applies to all platforms, except Windows clusters, which require the No Access mode.
5. In the Map RCG Target Volumes dialog box, click Map RCG Target Volumes.
6. Click Apply.
7. Verify that the operation completed successfully and click Dismiss.
Prerequisites
Ensure you perform a storage rescan on your host to update the view of storage devices that are presented to the host.
Steps
1. In the VMware vSphere web client navigator, browse to a host, a cluster, or a data center.
2. From the right-click menu, select Storage > New datastore.
3. Select VMFS as the datastore type.
4. Enter the datastore name and if necessary, select the placement location for the datastore.
7. Click Finish.
8. Click OK.
9. Rescan for new VMFS volumes:
a. In the VMware vSphere client, browse to a host, a cluster, or a data center.
b. From the right-click menu, select Storage > Rescan Storage > Scan for new VMFS Volumes.
c. Click OK.
Steps
1. From the PowerFlex GUI, in the left pane, click Protection > RCGs.
2. In the right pane, click the relevant RCG check box and click Mapping > Unmap.
3. In the Unmap dialog box, click the relevant SDC check box, and click Unmap.
4. Verify that the operation completed successfully and click Dismiss.
Prerequisites
Replication is supported on PowerFlex storage-only nodes with dual CPU. The node should be migrated to an LACP bonding NIC
port design.
Steps
1. Connect to the PowerFlex GUI presentation server through https://<ipaddress>:8443 using MDM.
2. Click the Protection tab in the left pane.
3. Click SDR > Add, and enter the storage data replication name.
4. Choose the protection domain.
5. Enter the IP address to be used, choose its role, and click Add IP. Repeat this for each IP address you are adding, and
click Add SDR.
NOTE: While adding storage data replication, it is recommended to add IP addresses for flex-data1-<vlanid>, flex-data2-
<vlanid>, flex-data3-<vlanid>, and flex-data4-<vlanid>, along with flex-rep1-<vlanid> and flex-rep2-<vlanid>. Choose the
role of Application and Storage for all data IP addresses, and choose the External role for the replication IP addresses.
6. Repeat steps 3 through 5 for all the storage data replicators you are adding.
Prerequisites
NOTE: This procedure can only be completed when the secondary site is active.
Steps
1. Log in to the primary MDM using SSH on both the source and destination.
2. Run scli --login --username admin and provide the MDM cluster password when prompted.
See the following example to extract the certificate on source and destination primary MDM.
● Example for source: scli --extract_root_ca --certificate_file /tmp/Source.crt
● Example for destination: scli --extract_root_ca --certificate_file /tmp/destination.crt
3. Copy the extracted certificate of the source (primary MDM) to the destination (primary MDM) using SCP, and vice versa.
See the following example to add the copied certificate:
● Example for source: scli --add_trusted_ca --certificate_file /tmp/destination.crt --comment
destination_crt
● Example for destination: scli --add_trusted_ca --certificate_file /tmp/source.crt --comment
source_crt
4. Run scli --list_trusted_ca to verify the added certificate.
5. Once all the journal capacity is set, log in to the primary MDM using SSH, and log in to scli using scli --login
--username admin to add the peer system.
NOTE: Do not map the volume that is created on target system to SDC.
Steps
1. Log in to the source site presentation server: <https://presentation_server_IP>:8443.
NOTE: Use the primary MDM IP address and credentials to log in to the PowerFlex cluster.
Steps
1. Log in to the primary MDM using SSH and log in to scli, type: scli --login --username admin, then enter the MDM
cluster password when prompted.
2. Verify the replication status, type: scli --query_all_replication_pairs.
Once the initial copy is complete, the PowerFlex replication system is ready for use.
Steps
1. From https://Presentation_Server_IP:8443 (PowerFlex GUI), in the left pane, click Protection > RCGs.
2. In the right pane, select the relevant RCG check box, and click Modify > Modify RPO.
3. In the Modify RPO for RCG <rcg name> dialog box, enter the updated RPO time and click Apply.
4. Verify that the operation completed successfully and click Dismiss.
Steps
1. Freeze the remote consistency group.
2. Remove the remote consistency group.
3. Remove a peer system.
4. Remove a peer system and certificates.
5. Remove replication trust for peer system.
6. Enter SDS into maintenance mode.
7. Remove the storage data replication from PowerFlex.
8. Remove a storage data replication RPM.
9. Clean up the network configurations.
10. Exit SDS from maintenance mode.
11. Remove the journal capacity.
12. Remove the target volumes from the destination system.
Steps
1. Connect to the PowerFlex GUI presentation server through https://<ipaddress>:8443 using MDM.
2. From the left pane, click Protection > RCGs.
3. In the right pane, select the relevant RCG check box, and click More > Freeze Apply.
4. Verify that the operation completes successfully and click Dismiss.
Steps
1. Connect to the PowerFlex GUI presentation server through https://<ipaddress>:8443 using MDM.
2. From the left pane, click Protection > RCGs.
3. In the right pane, select the relevant RCG, and click More > Remove RCG.
4. Verify that the operation completes successfully, and click Dismiss.
Steps
1. Connect to the PowerFlex GUI presentation server through https://<ipaddress>:8443 using MDM.
2. From the left pane, click Protection > Peer Systems.
3. In the right pane, select the relevant peer system, and click Remove.
4. Verify that the operation completes successfully, and click Dismiss.
Steps
1. Open an SSH session using PuTTY or a similar SSH client.
2. Log in to the primary MDM with admin credentials.
3. In the PowerFlex CLI, type scli --list_trusted_ca to display the list of trusted certificates in the system. Note the
fingerprint details.
4. Type scli --remove_trusted_ca --fingerprint <fingerprint> to remove the certificate.
5. Verify that the following message is received:
The Certificate was successfully removed.
6. Remove the source and target certificates. For example:
rm /tmp/target.crt
scli --list_trusted_ca
scli --remove_trusted_ca --fingerprint 9A:14:00:5F:3F:A0:01:73:D9:8F:69:E3:9C:53:C5:FB:CB:7B:AE:CA
rm /tmp/source.crt
scli --list_trusted_ca
scli --remove_trusted_ca --fingerprint E4:07:A4:BF:A3:2B:6B:DD:93:F4:76:87:C0:8A:8C:6D:31:83:7A:23
7. Verify that the following message is received:
The Certificate was successfully removed.
Steps
1. Connect to the PowerFlex GUI presentation server through https://<ipaddress>:8443 using MDM.
2. In the left pane, click Configuration > SDSs.
3. In the right pane, select the relevant SDS and click More > Enter Maintenance Mode.
4. In the Enter SDS into Maintenance Mode dialog box, select Instant. If maintenance mode takes more than 30 minutes,
select PMM.
5. Click Enter Maintenance Mode.
6. Verify that the operation completes successfully and click Dismiss.
Steps
1. Connect to the PowerFlex GUI presentation server through https://<ipaddress>:8443 using MDM.
2. In the left pane, click Protection > SDRs.
3. In the right pane, select the SDR Name and click More > Remove.
4. Repeat for all SDRs.
Steps
1. SSH to the PowerFlex node.
2. List all installed Dell EMC RPMs on a PowerFlex node by entering the following command: rpm -qa | grep -i emc.
3. Identify the SDR rpm - EMC-ScaleIO-sdr-x.x.xxx.el7.x86_64.rpm.
4. Remove the RPM by entering the following command: rpm -e EMC-ScaleIO-sdr-x.x.xxx.el7.x86_64 (specify the installed package name, without the .rpm extension)
5. Verify that RPM is removed and the service is stopped.
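Steps 2 through 4 can be sketched against a saved package listing; the version string below is a placeholder:

```shell
# Pick out the SDR package from a saved 'rpm -qa' listing and print the
# removal command. rpm -e takes the installed package name, not a .rpm
# file name. The listing below is a placeholder sample.
cat > /tmp/rpm_list.txt <<'EOF'
EMC-ScaleIO-sdr-3.5.1100.el7.x86_64
EMC-ScaleIO-sds-3.5.1100.el7.x86_64
EOF
pkg=$(grep -i 'ScaleIO-sdr' /tmp/rpm_list.txt)
echo "rpm -e $pkg"
```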
Steps
1. Remove the route-bond# files that are associated with the replication network, using the following commands:
cd /etc/sysconfig/network-scripts/
rm route-bond(x).xxx
2. Remove the ifcfg-bond# files that are associated with the replication network, using the following commands:
cd /etc/sysconfig/network-scripts/
rm ifcfg-bond(x).xxx
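Because rm with a glob is unforgiving, it can help to preview the matching files before deleting them. This is a minimal sketch using a simulated directory (bond1.152 is a placeholder bond/VLAN name), so the preview logic is visible without touching /etc.

```shell
# Hedged sketch: list the replication-network bond files before removing them.
# A temporary directory stands in for /etc/sysconfig/network-scripts/ here.
dir=$(mktemp -d)
touch "$dir/route-bond1.152" "$dir/ifcfg-bond1.152" "$dir/ifcfg-bond0"

# route-bond*.* and ifcfg-bond*.* match only the VLAN-suffixed files,
# leaving the base ifcfg-bond0 untouched:
matches=$(ls "$dir"/route-bond*.* "$dir"/ifcfg-bond*.* 2>/dev/null)
printf '%s\n' "$matches"
# After confirming the preview on the node:
#   rm route-bond1.152 ifcfg-bond1.152
rm -rf "$dir"
```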
Steps
1. Connect to the PowerFlex GUI presentation server through https://<ipaddress>:8443 using MDM.
2. In the left pane, click Configuration > SDSs.
3. In the right pane, select the relevant SDS and click More > Exit Maintenance Mode.
4. In the Exit SDS from Maintenance Mode dialog box, select Instant.
5. Click Exit Maintenance Mode.
6. Verify that the operation completes successfully and click Dismiss.
Repeat for each PowerFlex node in the protection domain.
Steps
1. Connect to the PowerFlex GUI presentation server through https://<ipaddress>:8443 using MDM.
2. From the left pane, click Protection > Journal Capacity.
3. In the right pane, select the Protection Domain, and click Remove.
4. Verify that the operation completes successfully and click Dismiss.
Steps
1. Connect to the PowerFlex GUI presentation server through https://<ipaddress>:8443 using MDM.
2. Remove the volumes used as target in the volume pair.
3. From the left pane, click Configuration > Volumes.
4. In the right pane, select the target volumes.
5. Click More > Remove.
6. Select Remove volume with all of its snapshots.
7. Click Remove.
8. Verify that the operation completes successfully and click Dismiss.
Prerequisites
Before you configure the alert connector, ensure:
● The primary MDM in the PowerFlex cluster is valid and up and running.
● Secure Remote Services gateway is configured in the data center and connected to Secure Remote Services.
Steps
1. Log in to PowerFlex Manager (username: admin and password: admin).
2. On the menu bar, click Settings and click Virtual Appliance Management.
3. Click Add in the Alert connector section.
b. Enter the port number in the SRS Gateway Host Port field.
c. Enter the required username in the User ID field.
d. Enter the required password in the Password or NT Token field.
6. For an email configuration, complete the following steps in the Email Server Configuration under Connector Settings:
a. Choose the Server type.
● SMTP
● SMTPS over SSL
● SMTPS STARTTLS
b. Enter an IP address or fully qualified domain name for the email server in the Server IP or FQDN field.
c. Enter the port number for the email server in the Port field.
d. Enter the required username in the User ID field.
e. Enter the required password in the Password field.
f. Enter the email address for the sender in the Sender Address field.
g. Enter one or more email recipient addresses.
7. Click Save.
8. Click Send Test Alert to verify that the alert connector is receiving alerts.
9. Click Test Connection to verify the connection.
When the device is registered for alerting, topology and telemetry reports are automatically sent to Secure Remote Services
weekly, starting at the time that the device was registered.
PowerFlex Manager can receive an SNMPv2 trap and forward it as an SNMPv3 trap.
SNMP trap forwarding configuration supports multiple forwarding destinations. If you provide more than one destination, all
traps coming from all devices are forwarded to all configured destinations in the appropriate format.
PowerFlex Manager stores up to 5 GB of SNMP alerts. Once this threshold is exceeded, PowerFlex Manager automatically
purges the oldest data to free up space.
For SNMPv2 traps to be sent from a device to PowerFlex Manager, you must provide PowerFlex Manager with the community
strings on which the devices are sending the traps. If during resource discovery you selected to have PowerFlex Manager
automatically configure iDRAC nodes to send alerts to PowerFlex Manager, you must enter the community string used in that
credential here.
For a network management system to receive SNMPv2 traps from PowerFlex Manager, you must provide the community
strings to the network management system. This configuration happens outside of PowerFlex Manager.
For a network management system to receive SNMPv3 traps from PowerFlex Manager, you must provide the PowerFlex
Manager engine ID, user details, and security level to the network management system. This configuration happens outside of
PowerFlex Manager.
Prerequisites
PowerFlex Manager and the network management system use access credentials with different security levels to establish
two-way communication. Review the access credentials that you need for each supported version of SNMP. Determine the
security level for each access credential and whether the credential supports encryption.
To configure SNMP communication, you need the access credentials and trap targets for SNMP, as shown in the following
table:
Steps
1. Log in to PowerFlex Manager.
2. On the menu bar, click Settings, and click Virtual Appliance Management.
3. On the Virtual Appliance Management page, in the SNMP Trap Configuration section, click Edit.
4. To configure trap forwarding as SNMPv2, click Add community string. In the Community String box, provide the
community string by which PowerFlex Manager receives traps from devices and by which it forwards traps to destinations.
You can add more than one community string. For example, add more than one if the community string by which PowerFlex
Manager receives traps differs from the community string by which it forwards traps to a remote destination.
5. To configure trap forwarding as SNMPv3, click Add User. Enter the Username, which identifies the ID where traps are
forwarded on the network management system. The username must be at most 16 characters. Select a Security Level:

Security Level  SNMP level  Description                                Authentication password (MD5, at least 8 characters)  Privacy password
Maximum         authPriv    Messages are authenticated and encrypted   Required                                              Required
Note the current engine ID (automatically populated), username, and security details. Provide this information to the remote
network management system so it can receive traps from PowerFlex Manager.
You can add more than one user.
6. In the Trap Forwarding section, click Add Trap Destination to add the forwarding details.
a. In the Target Address (IP) box, enter the IP address of the network management system to which PowerFlex Manager
forwards SNMP traps.
b. Provide the Port for the network management system destination. The default SNMP trap port is 162.
c. Select the SNMP Version for which you are providing destination details.
d. In the Community String/User box, enter either the community string or username, depending on whether you are
configuring an SNMPv2 or SNMPv3 destination. For SNMPv2, if there is more than one community string, select the
appropriate community string for the particular trap destination. For SNMPv3, if there is more than one user-defined,
select the appropriate user for the particular trap destination.
7. Click Save.
The Virtual Appliance Management page displays the configured details as shown below:
Trap Forwarding <destination-ip>(SNMP v2 community string or SNMP v3 user)
NOTE: To configure nodes with PowerFlex Manager SNMP changes, go to Settings > Virtual Appliance
Management, and click Configure nodes for alert connector.
Steps
1. Log in to PowerFlex Manager.
2. On the menu bar, click Settings and click Virtual Appliance Management.
3. On the Virtual Appliance Management page, in the Syslog section, click Edit.
4. Click Add syslog forward.
5. For Host, enter the destination IP address of the remote server to which you want to forward syslogs.
6. Enter the destination Port on which the remote server accepts syslog messages. The default syslog port is 514.
7. Select the network Protocol used to transfer the syslog messages. The default is UDP.
8. Optionally enter the Facility and Severity Level to filter the syslogs that are forwarded. The default is to forward all.
9. Click Save to add the syslog forwarding destination.
The Virtual Appliance Management page displays the configured details as shown below:
Syslog Forwarding <destination-ip>(<Facility><Severity Level>)
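A forwarding destination can be checked end to end by hand-crafting a syslog message. This is a minimal sketch assuming the RFC 3164 message format, in which the PRI field is facility * 8 + severity; the destination address and message text are placeholders.

```shell
# Hedged sketch: build an RFC 3164 test message for a syslog destination.
# DEST/PORT are placeholders for the remote server configured above.
DEST=192.0.2.10
PORT=514
FACILITY=1    # user-level messages
SEVERITY=6    # informational
PRI=$(( FACILITY * 8 + SEVERITY ))   # RFC 3164 priority value
MSG="<$PRI>$(date '+%b %d %H:%M:%S') $(hostname) test: forwarding check"
echo "$MSG"
# bash can emit the datagram directly (UDP needs no listener to send):
#   echo "$MSG" > "/dev/udp/$DEST/$PORT"
```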
Steps
1. Log in to PowerFlex Manager.
2. Click Settings and click Backup and Restore.
3. The Backup and Restore page displays information about the last backup operation that was performed on the PowerFlex
Manager virtual appliance. Information in the Settings and Details section applies to both manual and automatically
scheduled backups and includes the following:
● Last backup date
● Last backup status
● Backup directory path to an NFS or a CIFS share
● Backup directory username
4. The Backup and Restore page also displays information about the status of automatically scheduled backups (enabled or
disabled).
On this page, you can:
● Manually start an immediate backup using the Backup Now option
● Restore an earlier configuration using the Restore Now option
● Edit general backup settings
● Edit automatically scheduled backup settings
Steps
Log in to PowerFlex Manager.
Steps
1. Log in to PowerFlex Manager.
2. On the menu bar, click Services.
3. On the Services page, click the service, and in the right pane of the Service Details page, click View Details.
4. On the Service Details page, in the right pane, click Edit.
5. Specify permissions for the service under Who should have access to the service deployed from this template?.
● Only PowerFlex administrators - The service is accessible only to users with administration rights
Steps
1. Change the PowerFlex Manager access switch password by doing the following:
a. In PowerFlex Manager, go to Settings > Credentials Management, select the access switch credential, click Edit,
change the Password to the <NEW_PASSWORD>, and click Save. See Credentials management for more information.
2. Change the password of the access switches by doing the following:
a. Use an SSH client program like PuTTY to log in to an access switch console.
b. Type the following commands:
Steps
1. Change the PowerFlex Manager VMware vCenter password by completing the following:
a. In PowerFlex Manager, go to Settings > Credentials Management, select the VMware vCenter credential, click Edit,
change the Password to the <NEW_PASSWORD>, and click Save. See Credentials management for more information.
2. Change the VMware vCenter password by completing the following:
a. Log in to the VMware vCenter web interface using the <OLD_PASSWORD>.
b. Click the username in the upper right of the page and select Change password.
c. Type the <OLD_PASSWORD> and the <NEW_PASSWORD> and click OK.
3. Test the changes. Even though the cluster is operating properly, nodes may show a critical error on the Services page in PowerFlex Manager because of the delay between changing the password in PowerFlex Manager and changing it in VMware vCenter. The following steps return the nodes to a healthy state.
a. In the PowerFlex Manager GUI, go to Resources page, select vCenter and click Run Inventory.
b. To confirm that the process completes with no errors, check Settings > Logs.
c. In the PowerFlex Manager GUI, go to Services page for ESXi nodes and click Update Service Details.
d. After Update Service Details completes the process, confirm that all cluster objects report as healthy (green check
mark).
Steps
1. To change the PowerFlex Manager VMware ESXi operating system password, complete the following:
a. In PowerFlex Manager, go to Settings > Credential Management, select the VMware ESXi operating system
credential, click Edit, change the Password to the <NEW_PASSWORD>, and click Save. See Credentials management for
more information.
2. To change the VMware ESXi operating system root password on every hyperconverged or PowerFlex compute-only node,
complete the following:
a. Log in to VMWare ESXi web interface on the PowerFlex node using root and the <OLD_PASSWORD>.
b. In the upper right of the page, click root@<ip address>, and select Change password.
c. Type the <NEW_PASSWORD> twice and click Change password.
7. Create a user credential for the vCenter server that matches the account created in vCenter earlier.
8. Add the vCenter server object to the inventory using those credentials from PowerFlex Manager. For more information on
the PowerFlex Manager credential creation see the PowerFlex Manager online help.
Steps
1. Log in to PowerFlex Manager GUI (admin/admin) from a web browser.
2. In PowerFlex Manager, go to Settings > Credentials Management, select the Windows Compute-Only nodes
credential. See Credentials management for more information.
3. Click Edit and change the password to the <NEW_PASSWORD> and click Save.
4. To change the Windows Server operating system password on every hyperconverged or compute-only node, complete the
following:
5. Log in to the server either directly or by using Remote Desktop.
6. Right-click Computer, and select Manage.
7. Select Configuration.
8. Click Local Users and Groups > Users.
9. Find and right-click the Administrator user.
10. Click Set Password > Proceed.
11. Type and confirm the new password.
12. Test the changes. The PowerFlex nodes may show a critical error. The error is due to the time lag between changing the
password in PowerFlex Manager and changing the password in the Windows operating system. The following steps return
the PowerFlex nodes to a healthy state:
a. In the PowerFlex Manager GUI, go to Resources page, select the Windows CO nodes, and click Run Inventory.
b. To confirm that the process completes with no errors, check Settings > Logs.
c. In the PowerFlex Manager GUI, go to the Services page for the Windows compute-only nodes and click Update Service Details.
d. After the Update Service Details process completes, confirm that all cluster objects report as healthy (green check
mark).
Steps
1. Log in to PowerFlex Manager.
2. Go to the Resources page, select the required node, and click Update Password.
3. In the Update Password wizard, select the component whose password you want to update and click Next.
4. In the Select Credentials page, select the new credential from the menu or create a credential.
5. Click Finish.
6. Click Yes to confirm.
Steps
1. On the menu bar, click Resources.
2. On the All Resources tab, select one or more PowerFlex Gateway components for which you want to change the
passwords.
3. Click Update Password.
PowerFlex Manager displays the Update Password wizard.
4. On the Select Components page, select PowerFlex Password.
5. Click Next.
6. On the Select Credentials page, create a credential with a new password or change to a different credential.
a. Open the PowerFlex ( n ) object under the Type column to see details about each gateway you selected on the
Resources page.
b. To create a credential that has the new password, click the plus sign (+) under the Credentials column.
Specify the Credential Name, as well as the Gateway Admin User Name and Gateway OS User Name for which you
want to change passwords. Enter the new passwords for both users and confirm these passwords.
c. To modify the credential, click the pencil icon for one of the nodes under the Credentials column and select a different
credential.
d. Click Save.
7. Click Finish.
8. Click Yes to confirm.
Results
PowerFlex Manager starts a new job for the password update operation, and a separate job for the device inventory. If
PowerFlex Manager is managing a cluster for any of the selected PowerFlex Gateway components, it updates the credentials
for the Gateway Admin User and Gateway OS User, as well as any related credentials, such as the LIA and lockbox
credentials. If PowerFlex Manager is not managing the cluster, it only updates the credentials for the Gateway Admin User and
Gateway OS User.
Steps
1. On the menu bar, click Resources.
2. On the All Resources tab, select one or more resources of the same type for which you want to change passwords.
For example, you could select one or more iDRAC nodes or you could select one or more PowerFlex Gateway components.
3. Click Update Password.
PowerFlex Manager displays the Update Password wizard.
4. On the Select Components page, select one or more components for which you want to update a password and click
Next.
The component choices vary depending on which resource type you initially selected on the Resources page.
5. On the Select Credentials page, create a credential or change to a different credential having the same username.
6. Click Finish and click Yes to confirm the changes.
Steps
1. On the menu bar, click Resources.
2. On the All Resources tab, select one or more nodes for which you want to change the passwords.
3. Click Update Password.
PowerFlex Manager displays the Update Password wizard.
4. On the Select Components page, specify which passwords you want to update for the selected nodes by clicking one or
more of the following check boxes.
● iDRAC Password
● Node Operating System Password
● SVM Operating System Password
PowerFlex Manager does not support password changes for the Windows operating system.
5. Click Next.
6. On the Select Credentials page, create a credential with a new password or change to a different credential.
a. Open the iDRAC ( n ) object under the Type column to see details about each node you selected on the Resources
page.
b. To create a credential that has the new password, click the plus sign (+) under the Credentials column.
Specify the Credential Name and the User Name for which you want to change the password. Enter the new password
in the Password and Confirm Password fields.
c. To modify the credential, click the pencil icon for the nodes under the Credentials column and select a different
credential.
d. Click Save.
You must perform the same steps for the node operating system and SVM operating system password changes. For a node
operating system credential, only the OS Admin credential type is updated.
7. Click Finish.
8. Click Yes to confirm.
Results
PowerFlex Manager starts a new job for the password update operation, and a separate job for the device inventory. The
node operating system and SVM operating system components are updated only if PowerFlex Manager is managing a cluster with the operating system and SVM. If PowerFlex Manager is not managing a cluster with these components, they are not
displayed and their credentials are not updated. Credential updates for iDRAC are allowed for managed and reserved nodes only.
Unmanaged nodes do not provide the option to update credentials.
Steps
1. To change the PowerFlex Manager embedded operating system password, complete the following:
3. Test the changes: Even though the cluster is operating properly, because of the time between changing the password in
PowerFlex Manager and changing the password in the embedded operating system, nodes may show a critical error on the
Services page in PowerFlex Manager. The following steps return the nodes to the healthy state.
a. In the PowerFlex Manager GUI, go to Resources page, select the embedded operating system nodes, and click Run
Inventory.
b. To confirm that the process completes with no errors, check Settings > Logs.
c. In the PowerFlex Manager, go to Services page for embedded operating system nodes and click Update Service
Details.
d. After the Update Service Details process completes, confirm that all cluster objects report as healthy (green check
mark).
Adding users
Steps
1. If you are signed in as the root user, you can create a user at any time by typing: adduser username.
2. If you are a sudo user, add a new user by typing: sudo adduser username.
3. Give the user a password so that they can log in by typing: passwd username.
NOTE: If you are signed in as a nonroot user with sudo privileges, add sudo before the command.
Steps
To grant sudo privileges, add the user to the wheel group (which gives sudo access to all its members by default) using gpasswd: gpasswd -a username wheel.
Now the new user can run commands with administrative privileges, type sudo ahead of the command that you want to run as
an administrator:
sudo some_command
You are prompted to enter the password of the regular user account that you are signed in as. Once the correct password has
been submitted, the command you entered is performed with root privileges.
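The add-user and sudo steps above can be sketched end to end. The commands are shown as comments because they require root, and jdoe is a placeholder username.

```shell
# Hedged sketch of the account/sudo workflow (run the comments as root):
#   adduser jdoe              # create the account
#   passwd jdoe               # set its login password
#   gpasswd -a jdoe wheel     # wheel membership grants sudo by default
# Afterwards, `groups jdoe` should list wheel; the check is a simple
# substring match on that output (sample output hard-coded here):
sample_groups='jdoe : jdoe wheel'
case "$sample_groups" in
  *wheel*) echo "jdoe has sudo via wheel" ;;
  *)       echo "jdoe is not in wheel" ;;
esac
```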
Deleting users
The choice of deletion method depends on if you are deleting the user and user files or the user account only.
Steps
1. SSH to the server and log in as root.
2. At the command prompt, choose either of the following:
● To delete the user account only, type: userdel username
● To delete the user's home directory along with the user account itself, type: userdel -r username
NOTE: Add sudo ahead of the command if you are signed in as a nonroot user with sudo privileges.
With either command, the user is automatically removed from any groups that they were added to. This includes the
wheel group if they were given sudo privileges. If you later add another user with the same name, they have to be added
to the wheel group again to gain sudo access.
Steps
1. To change the PowerFlex Manager presentation server root password, do the following:
a. In PowerFlex Manager, go to Settings > Credential Management, select the presentation server credential, click Edit,
change the Password to the <NEW_PASSWORD>, and click Save. See Credentials management for more information.
2. To change the presentation server root password, do the following:
a. Use an SSH client program like PuTTY to log in as root to the presentation server using <OLD_PASSWORD>.
b. Change the presentation server root password using passwd command:
Option         Description
-c <comment>   <comment> can be replaced with any string. This option is generally used to specify the full name of a user.
-f days        Number of days after the password expires until the account is disabled. If 0 is specified, the account is disabled immediately after the password expires. If -1 is specified, the account is not disabled after the password expires.
-g group_name  Group name or group number for the user's default (primary) group. The group must exist prior to being specified here.
-r             Create a system account with a UID less than 1000 and without a home directory.
-u uid         User ID for the user, which must be unique and greater than 999.
2. By default, useradd creates a locked user account. To unlock the account, run the following command as root to assign a
password: passwd username.
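The options above can be combined into a single useradd call. This is a hedged sketch with placeholder values (the commands require root on the node); the UID rule from the table is the only part that runs as written.

```shell
# Hedged sketch combining the useradd options documented above
# (all values are placeholders; the commands need root):
#   useradd -c "Jane Doe" -f 30 -g users -u 1001 jdoe
#   passwd jdoe        # assigning a password unlocks the account
# The -u rule from the table: the UID must be unique and greater than 999.
uid=1001
if [ "$uid" -gt 999 ]; then
  echo "uid $uid is valid for a regular user"
fi
```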
Steps
1. SSH to the server, and type: Server1:~# useradd -m -c "<test username>" -s /bin/bash <test>.
Where <test> is the username and /bin/bash is the login shell.
The following table explains what each qualifier is used for:
Qualifier Description
-m This qualifier makes the useradd command create the users home directory.
-s /bin/bash This qualifier specifies which shell the user should use.
Once the password is set, the user can successfully log in to the server.
Deleting users
The command to delete users is userdel and is specified with the -r qualifier which removes the home directory and mail
spool.
Steps
SSH to the server and type: server1:~ # userdel -r <test>
Once you have issued the userdel command, you will notice that the /home/<test> directory is removed. If you only want
to delete the user but leave their home directory intact, you can issue the same command but without the -r qualifier.
Steps
1. On the menu bar, click Settings, and then click Virtual Appliance Management.
2. On the Virtual Appliance Management page, click Reboot Virtual Appliance. A message displays confirming that you
want to restart the virtual appliance.
3. Click Yes to confirm. The system restarts.
4. Once the reboot is complete, click Click to log in and provide your credentials.
Prerequisites
This procedure steps through deploying a service by creating a new template. You can also create the template from a sample template; those steps are not shown here. To create a new template from a clone, do the following:
1. Click Templates > Add a Template to open the Add a Template wizard.
2. Select Clone an existing PowerFlex Manager template.
3. Click ? to access the online help.
4. Follow the instructions on how to add a new template from a sample template.
Steps
1. Log in to PowerFlex Manager.
2. Add OS image to the repository.
NOTE: Skip this step if the OS Image is already added to the repository.
a. Click Settings from menu bar and click Compliance and OS Repositories.
b. Click the OS Image Repositories tab.
c. Click Add to open Add OS Image Repository wizard and enter the following:
Firmware and software compliance Select the latest Intelligent Catalog version from the
list.
5. Click Save.
6. Click Add Node to open the Node wizard and select Full Network Automation.
a. Click Continue.
b. Enter the following details:
Node Details
Component name Enter <Red Hat or CentOS>.
c. Click Continue.
d. Under OS Settings, enter the following settings:
Description Values
Host name selection Select <appropriate host name selection>.
OS image Select <Red Hat or CentOS Image>.
OS credentials Select <OS Credential Name>.
Timezone Select <Time zone>
NTP server Select <NTP Server>
Use node for Dell EMC PowerFlex Select the check box.
PowerFlex role Select Compute Only.
Enable encryption Leave checkbox cleared.
Switch port configuration Select Port Channel (LACP enabled).
Teaming and bonding configuration Select Mode4(IEEE 802.3ad policy).
e. Under Hardware Settings, enter the details within the following table:
f. Under BIOS Settings, enter the details within the following table:
j. Under Port 1, click Choose Networks to open Interface 1 Port 1 Network Configuration window.
k. Select the checkboxes for the following networks:
l. Click >> to add the selected networks to the right column and click Save.
m. Under Port 2, click Choose Networks to open Interface 1 Port 2 Network Configuration window.
n. Select the checkboxes for the following networks:
o. Click >> to add the selected networks to the right column and click Save.
p. Click Add New Interface to create the second interface.
q. Under Interface 2, enter the following details:
r. Under Port 1, click Choose Networks to open Interface 2 Port 1 Network Configuration window.
s. Select the checkboxes for the following networks:
t. Click >> to add the selected networks to the right column and click Save.
u. Under Port 2, click Choose Networks to open Interface 2 Port 2 Network Configuration window.
v. Select the checkboxes for the following networks:
w. Click >> to add the selected networks to the right column and click Save.
x. Click Validate Settings. If there are any errors, correct them and click Close.
y. Click Save to complete the clone creation.
7. Create the clusters.
a. Click Add Cluster to create PowerFlex Cluster.
b. Click Component Name > PowerFlex Cluster.
c. Select Associate All or Associate Selected.
d. Click Continue.
e. Under PowerFlex Settings, enter the details within the following table:
f. Click Save.
8. In the Template Information box, click Publish Template.
9. In the pop-up, click Yes.
10. On the Compute Template page, under Template Information, click Deploy and select the following:
Prerequisites
This procedure shows how to deploy a service by creating a new template. You can also create the template from a sample template; those steps are not shown here. To create a new template from a clone, do the following:
1. Click Templates > Add a Template to open the Add a Template wizard.
2. Select Clone an existing PowerFlex Manager template.
3. Click ? to access the online help.
4. Follow the instructions on how to add a new template from a sample template.
a. Click Settings from menu bar and click Compliance and OS Repositories.
b. Click the OS Image Repositories tab.
c. Click Add to open Add OS Image Repository wizard and enter the following:
Firmware and software compliance Select the latest Intelligent Catalog version from the
list.
Who should have access to the service deployed from this template? Select from the list who should have access to this service template.
5. Click Save.
6. Click Add Node to open the Node wizard and select Full Network Automation.
a. Click Continue.
b. Enter the following details:
Node Details
Component name Enter <Embedded OS Image>.
c. Click Continue.
Description Values
Host name selection Select <appropriate host name selection>.
OS image Select <Embedded OS Image>.
OS credentials Select <OS Credential Name>.
Timezone Select <Time zone>
NTP server Select <NTP Server>
Use node for Dell EMC PowerFlex Click checkbox.
PowerFlex role Select Storage Only.
Enable compression Select the check box (based on your requirement).
Enable encryption Select the check box (based on your requirement).
Enable replication Select the check box (based on your requirement).
Switch port configuration Select Port Channel (LACP enabled)
Teaming and bonding configuration Select Mode4 (IEEE 802.3ad policy)
e. Under SVM OS Settings, enter the details within the following table:
f. Under Hardware Settings, enter the details within the following table:
g. Under BIOS Settings, enter the details within the following table:
k. Under Port 1, click Choose Networks to open Interface 1 Port 1 Network Configuration window.
l. Select the checkboxes for the following networks:
m. Click >> to add the selected networks to the right column and click Save.
n. Under Port 2, click Choose Networks to open Interface 1 Port 2 Network Configuration window.
o. Select the checkboxes for the following networks:
p. Click >> to add the selected networks to the right column and click Save.
q. Click Add New Interface to create the second interface.
r. Under Interface 2, enter the following details:
s. Under Port 1, click Choose Networks to open Interface 2 Port 1 Network Configuration window.
t. Select the checkboxes for the following networks:
u. Click >> to add the selected networks to the right column and click Save.
v. Under Port 2, click Choose Networks to open Interface 2 Port 2 Network Configuration window.
w. Select the checkboxes for the following networks:
x. Click >> to add the selected networks to the right column and click Save.
y. Click Validate Settings. If there are any errors, correct them and click Close.
z. Click Save to complete the clone creation.
7. Create the clusters.
a. Click Add Cluster to create PowerFlex Cluster.
b. Click Component Name > PowerFlex Cluster.
c. Select Associate All or Associate Selected.
f. Click Save.
8. In the Template Information box, click Publish Template.
9. In the pop-up, click Yes.
10. On the Storage Template page, under Template Information, click Deploy and select the following:
Prerequisites
This procedure shows how to deploy a service by creating a new template. You can also create the template from a sample template; those steps are not shown here. To create a new template from a clone, do the following:
1. Click Templates > Add a Template to open the Add a Template wizard.
2. Select Clone an existing PowerFlex Manager template, choose Category and Template to be cloned.
3. In Category, select Sample Template and in Template to be cloned select Compute Only ESXi and click Next.
4. Click ? > Help to access the online help for more information on Category and Sample Templates.
5. Follow the instructions on how to add a new template from a sample template.
Steps
1. Log in to PowerFlex Manager.
2. Add OS image to the repository.
NOTE: Skip this step if the OS Image is already added to the repository.
a. Click Settings from menu bar and click Compliance and OS Repositories.
b. Click the OS Image Repositories tab.
c. Click Add to open Add OS Image Repository wizard and enter the following:
5. Click Save.
6. Click Add Node to open the Node wizard and select Full Network Automation.
a. Click Continue.
b. Enter the following details:
Node Details
Component name Enter <ESXi>.
c. Click Continue.
d. Under OS Settings, enter the following settings:
Description Values
Host name selection Select <appropriate host name selection>.
Host name template (auto-generated) Enter <host name template >
OS image Select <ESXi Image>.
OS credentials Select <OS Credential Name>.
NTP server Enter <NTP server IP address>
e. Under SVM OS Settings, enter the details within the following table:
f. Under Hardware Settings, enter the details within the following table:
g. Under BIOS Settings, enter the details within the following table:
k. Under Port 1, click Choose Networks to open Interface 1 Port 1 Network Configuration window.
l. Select the checkboxes for the following networks:
m. Click >> to add the selected networks to the right column and click Save.
n. Under Port 2, click Choose Networks to open Interface 1 Port 2 Network Configuration window.
o. Select the checkboxes for the following networks:
p. Click >> to add the selected networks to the right column and click Save.
q. Click Add New Interface to create the second interface.
r. Under Interface 2, enter the following details:
s. Under Port 2, click Choose Networks to open Interface 2 Port 1 Network Configuration window.
u. Click >> to add the selected networks to the right column and click Save.
v. Under Port 2, click Choose Networks to open Interface 2 Port 2 Network Configuration window.
w. Select the checkboxes for the following networks:
x. Click >> to add the selected networks to the right column and click Save.
y. In Static Routes, select Enabled.
z. Click Validate Settings. If there are any errors, correct them and click Close.
aa. Click Save to complete the clone creation.
7. Create the clusters.
a. Click Add Cluster to create PowerFlex Cluster.
b. Click Component Name > PowerFlex Cluster.
c. Select Associate All or Associate Selected.
d. Click Continue.
e. Under PowerFlex Settings, enter the details within the following table:
f. Click Save.
8. Create VMware Cluster.
a. Click Add Cluster to create VMware Cluster.
b. Select VMware Cluster for the component name.
c. Select the Associate All option.
d. Click Continue.
e. Under Cluster Settings, enter the details within the following table:
f. Under vSphere VDS Settings, click Configure VDS Settings button to open Configure VDS Settings wizard.
g. Select Existing port group or create new port group.
h. Assuming deployment is standard, select Auto Create All Port Groups.
i. Click Next to VDS Naming page.
j. Enter the details within the following table:
Volume 1 Details
Volume name Select Create New Volume.
New volume name Enter <New Volume Name>.
Storage pool Select Storage Pools.
Volume size (GB) Enter <Size Number>.
Datastore name Select Datastore Name.
New datastore name Enter <New Datastore Name>.
Volume type Select Thick or Thin.
Prerequisites
The procedure steps through how to deploy a service by creating a new template. A sample template can be used to create a
template but does not show the steps here. To create a new template from a clone, do the following:
1. Templates > Add a Template to open Add a Template wizard.
2. Select Clone an existing PowerFlex Manager template.
3. Click ? to access the online help.
4. Follow the instructions on how to add a new template from a sample template.
Steps
1. Log in to PowerFlex Manager.
2. Add OS image to the repository.
NOTE: Skip this step if the OS Image is already added to the repository.
a. Click Settings from menu bar and click Compliance and OS Repositories.
b. Click the OS Image Repositories tab.
c. Click Add to open Add OS Image Repository wizard and enter the following:
Firmware and software compliance Select the latest Intelligent Catalog version from the list.
Who should have access to the service deployed from this template? Select from the list who should have access to this service.
5. Click Save.
6. Click Add Node to open the Node wizard and select Partial Network Automation.
a. Click Continue.
b. Enter the following details:
Node Details
Component name Enter <Red Hat or CentOS>.
c. Click Continue.
d. Under OS Settings, enter the following settings:
Description Values
Host name selection Select <appropriate host name selection>
OS image Select <Red Hat or CentOS Image>
OS credentials Select <OS Credential Name>
Use node for Dell EMC PowerFlex Select the checkbox
PowerFlex role Select Compute Only
Switch port configuration Select Port Channel (LACP enabled)
Teaming and bonding configuration Select Mode 4 (IEEE 802.3ad policy)
e. Under Hardware Settings, enter the details within the following table:
f. Under BIOS Settings, enter the details within the following table:
j. Under Port 1, click Choose Networks to open Interface 1 Port 1 Network Configuration window.
k. Select the checkboxes for the following networks:
l. Click >> to add the selected networks to the right column and click Save.
m. Under Port 2, click Choose Networks to open Interface 1 Port 2 Network Configuration window.
n. Select the checkboxes for the following networks:
o. Click >> to add the selected networks to the right column and click Save.
p. Click Add New Interface to create the second interface.
q. Under Interface 2, enter the following details:
r. Under Port 2, click Choose Networks to open Interface 2 Port 1 Network Configuration window.
s. Select the checkboxes for the following networks:
t. Click >> to add the selected networks to the right column and click Save.
u. Under Port 2, click Choose Networks to open Interface 2 Port 2 Network Configuration window.
v. Select the checkboxes for the following networks:
w. Click >> to add the selected networks to the right column and click Save.
x. Click Validate Settings. If there are any errors, correct them and click Close.
y. Click Save to complete the clone creation.
7. Create the cluster.
a. Click Add Cluster.
b. Click Component Name > PowerFlex Cluster.
c. Select Associate All or Associate Selected.
d. Click Continue.
e. Under PowerFlex Settings, enter the details within the following table:
f. Click Save.
8. In the Template Information box, click Publish Template.
9. In the pop-up, click Yes.
10. On the Compute Template page, under Template Information, click Deploy and select the following:
Prerequisites
The procedure steps through how to deploy a service by creating a new template. A sample template can be used to create a
template but does not show the steps here. To create a new template from a clone, do the following:
Steps
1. Log in to PowerFlex Manager.
2. Add OS image to the repository.
NOTE: Skip this step if the OS Image is already added to the repository.
a. Click Settings from menu bar and click Compliance and OS Repositories.
b. Click the OS Image Repositories tab.
c. Click Add to open Add OS Image Repository wizard and enter the following:
Firmware and software compliance Select the latest Intelligent Catalog version from the list.
Who should have access to the service deployed from this template? Select from the list who should have access to this service.
5. Click Save.
6. Click Add Node to open the Node wizard and select Partial Network Automation.
a. Click Continue.
b. Enter the following details:
Node Details
Component name Enter <embedded os image>.
c. Click Continue.
d. Under OS Settings, enter the following settings:
Description Values
Host name selection Select <appropriate host name selection>
OS image Select <Embedded OS Image>
OS credentials Select <OS Credential Name>
Timezone Select <Time zone>
NTP server Select <NTP Server>
Use Node for Dell EMC PowerFlex Click checkbox
PowerFlex role Select Storage Only
Enable compression Select checkbox.
Enable encryption Select checkbox.
Enable replication Select checkbox.
Switch port configuration Select Port Channel (LACP enabled)
Teaming and bonding configuration Select Mode 4 (IEEE 802.3ad policy)
e. Under SVM OS Settings, enter the details within the following table:
f. Under Hardware Settings, enter the details within the following table:
g. Under BIOS Settings, enter the details within the following table:
k. Under Port 1, click Choose Networks to open Interface 1 Port 1 Network Configuration window.
l. Select the checkboxes for the following networks:
m. Click >> to add the selected networks to the right column and click Save.
n. Under Port 2, click Choose Networks to open Interface 1 Port 2 Network Configuration window.
o. Select the checkboxes for the following networks:
p. Click >> to add the selected networks to the right column and click Save.
q. Click Add New Interface to create the second interface.
r. Under Interface 2, enter the following details:
s. Under Port 2, click Choose Networks to open Interface 2 Port 1 Network Configuration window.
t. Select the checkboxes for the following networks:
u. Click >> to add the selected networks to the right column and click Save.
v. Under Port 2, click Choose Networks to open Interface 2 Port 2 Network Configuration window.
w. Select the checkboxes for the following networks:
x. Click >> to add the selected networks to the right column and click Save.
y. Click Validate Settings. If there are any errors, correct them and click Close.
z. Click Save to complete the clone creation.
7. Create the cluster.
a. Click Add Cluster.
b. Click Component Name > PowerFlex Cluster.
c. Select Associate All or Associate Selected.
d. Click Continue.
e. Under PowerFlex Settings, enter the details within the following table:
f. Click Save.
8. In the Template Information box, click Publish Template.
9. In the pop-up, click Yes.
10. On the Storage Template page, under Template Information, click Deploy and select the following:
Prerequisites
This procedure shows how to deploy a service by creating a new template. A sample template can be used to create a template
but does not show the steps here. To create a new template from a clone, do the following:
1. Templates > Add a Template to open Add a Template wizard.
2. Select Clone an existing PowerFlex Manager template.
3. Click ? to access the online help.
4. Follow the instructions on how to add a new template from a sample template.
Steps
1. Log in to PowerFlex Manager.
a. Click Settings from menu bar and click Compliance and OS Repositories.
b. Click the OS Image Repositories tab.
c. Click Add to open Add OS Image Repository wizard and enter the following:
Firmware and software compliance Select the latest RCM version from the list.
Who should have access to the service deployed from this template? Select from the list who should have access to this service.
5. Click Save.
6. Click Add Node to open the Node wizard and select Partial Network Automation.
a. Click Continue.
b. Enter the following details:
Node Details
Component name Enter <ESXi>.
c. Click Continue.
Description Values
Host name selection Select <appropriate host name selection>
OS image Select <ESXi Image>
OS credentials Select <OS Credential Name>
NTP server Enter <NTP server IP address>
e. Under SVM OS Settings, enter the details within the following table:
f. Under Hardware Settings, enter the details within the following table:
g. Under BIOS Settings, enter the details within the following table:
k. Under Port 1, click Choose Networks to open Interface 1 Port 1 Network Configuration window.
l. Select the checkboxes for the following networks:
m. Click >> to add the selected networks to the right column and click Save.
n. Under Port 2, click Choose Networks to open Interface 1 Port 2 Network Configuration window.
o. Select the checkboxes for the following networks:
p. Click >> to add the selected networks to the right column and click Save.
q. Click Add New Interface to create the second interface.
r. Under Interface 2, enter the following details:
s. Under Port 2, click Choose Networks to open Interface 2 Port 1 Network Configuration window.
t. Select the checkboxes for the following networks:
u. Click >> to add the selected networks to the right column and click Save.
v. Under Port 2, click Choose Networks to open Interface 2 Port 2 Network Configuration window.
w. Select the checkboxes for the following networks:
x. Click >> to add the selected networks to the right column and click Save.
f. Click Save.
8. Create VMware Cluster.
a. Click Add Cluster to create VMware Cluster.
b. Select VMware Cluster for the component name.
c. Select the Associate All option.
d. Click Continue.
e. Under Cluster Settings, enter the details within the following table:
f. Under vSphere VDS Settings, click Configure VDS Settings button to open Configure VDS Settings wizard.
g. Assuming deployment is standard, select Auto Create All Port Groups or Create New Port Groups.
h. Click Next to the VDS Naming page.
i. Enter the details within the following table:
Volume 1 Details
Volume name Select Create New Volume.
New volume name Enter <New Volume Name>.
Storage pool Select Storage Pools.
Volume size (GB) Enter <Size Number>.
Datastore name Select Datastore Name.
New datastore name Enter <New Datastore Name>.
Volume type Select Thick or Thin.
Prerequisites
You must have the following information available before beginning this procedure. The identifiers in brackets (<IDENTIFIER>)
are used in the procedure to represent the required values.
Description Identifier
PowerFlex Management IP address <MGMT IP>
PowerFlex Management VLAN <MGMT VLAN>
PowerFlex Data 1 IP <DATA1 IP>
PowerFlex Data 1 VLAN <DATA1 VLAN>
PowerFlex Data 2 IP <DATA2 IP>
PowerFlex Data 2 VLAN <DATA2 VLAN>
PowerFlex Gateway root password <ROOT PWD>
PowerFlex Gateway admin password <ADMIN PWD>
Default Gateway IP <DEF GW IP>
DNS Server IP <DNS IP>
NTP Server IP <NTP IP>
PowerFlex Gateway Domain <DOMAIN>
PowerFlex Gateway hostname <HOSTNAME>
Primary MDM IP <PRIMARY MDM IP>
Secondary MDM IP 1 <SECONDARY MDM IP 1>
Secondary MDM IP 2 (if 5-node MDM cluster) <SECONDARY MDM IP 2>
Steps
1. Install the PowerFlex Gateway.
a. Install the PowerFlex Gateway OVF and VMDK files.
b. Change the root password.
c. Configure the PowerFlex Gateway network interfaces.
d. Configure the PowerFlex Gateway DNS client.
e. Configure the PowerFlex Gateway NTP client.
f. Install the Java and PowerFlex Gateway RPMs.
Prerequisites
Ensure that a lockbox exists and that it contains MDM credentials.
Enable the SNMP feature in the gatewayUser.properties file.
Steps
1. Use a text editor to open the gatewayUser.properties file, which is located in the following directory on the
PowerFlex installer / PowerFlex Gateway server:
● Linux: /opt/emc/scaleio/gateway/webapps/ROOT/WEB-INF/classes
● Windows: C:\Program Files\EMC\ScaleIO\Gateway\webapps\ROOT\WEB-INF\classes\
2. Locate the parameter features.enable_snmp and edit it as follows:
features.enable_snmp=true
Option Description
snmp.sampling_frequency The MDM sampling period. The default is 30.
snmp.resend_frequency The frequency of resending existing traps. The default is 0, which means that traps for active
alerts are sent every sampling cycle.
5. Save and close the file.
6. Run the following command to restart the PowerFlex Gateway service:
service scaleio-gateway restart
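The edit in steps 1 and 2 can also be made non-interactively. The following sketch applies the same change with sed; it runs against a scratch copy of the file here, and the sample property values are illustrative. On a real gateway, substitute the path given in step 1 and finish with the restart from step 6.

```shell
PROPS=$(mktemp)   # stand-in for gatewayUser.properties (real path in step 1)
cat > "$PROPS" <<'EOF'
features.enable_snmp=false
snmp.sampling_frequency=30
snmp.resend_frequency=0
EOF
# Step 2: enable the SNMP feature flag in place.
sed -i 's/^features\.enable_snmp=.*/features.enable_snmp=true/' "$PROPS"
grep '^features.enable_snmp=' "$PROPS"
rm -f "$PROPS"
# On the real gateway, finish with: service scaleio-gateway restart
```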
Steps
1. Log in to PowerFlex Manager.
2. Click Templates > Sample template > Management PowerFlex Gateway > Clone.
3. In Template Name enter a template name.
4. Select a template category from the Template Category list. To create a template category, select Create New Category
and enter the Category name.
5. In Template Description enter a description for the template.
6. To specify the version to use for compliance, select the version from the Firmware and Software Compliance list or
select Use PowerFlex Manager appliance default catalog.
NOTE: You cannot select a minimal compliance version for a template, since it only includes server firmware updates.
The compliance version for a template must include the full set of compliance update capabilities. PowerFlex Manager
does not show any minimal compliance versions in the Firmware and Software Compliance list.
7. Indicate who should have access to the service deployed from this template by selecting one of the following options:
Steps
1. Download the SVM OVF and VMDK files and save them to a location that is accessible to the vCenter being used to manage
the PowerFlex appliance management environment.
2. Log in to VMware vCenter.
3. Deploy the PowerFlex Gateway OVF and VMDK files.
4. Type a unique name for the PowerFlex Gateway VM name and select a location for the VM.
5. Select a compute resource for the PowerFlex Gateway VM.
6. On the Review details page, click Next.
7. On the Select storage page, complete the following:
a. Select Virtual disk provisioning: Thick Provision Lazy Zeroed.
b. VM Storage policy: Datastore Default.
c. Select a datastore on which to install the VM. Do not install VMs on BOSS cards.
d. Click Next.
8. On the Select networks page:
a. Set VM Networks to <MGMT VLAN>.
b. Click Next.
9. Review details on Ready to complete page and then click Finish.
10. Wait for the PowerFlex Gateway OVF deployment to complete.
11. Right-click the PowerFlex Gateway VM and select Edit Settings. Set the following:
a. Network adapter 1: <MGMT VLAN>
b. Network adapter 2: <DATA1 VLAN>
c. Network adapter 3: <DATA2 VLAN>
d. Network adapter 4: <DATA3 VLAN>
e. Network adapter 5: <DATA4 VLAN>
12. Click OK.
Steps
1. Log in to VMware vCenter.
2. Power on the PowerFlex Gateway VM.
3. Use VMware virtual console to connect to the PowerFlex Gateway VM.
4. Log in using these credentials: User is root, password is admin.
5. Use the Linux passwd command to change the default password to <ROOT PWD>.
6. To log out of the console, type exit.
7. Log in to the root account using the new password <ROOT PWD> to ensure it works.
Steps
1. Find the MAC addresses of the PowerFlex Gateway VM by doing the following:
a. Log in to vCenter.
b. Right-click the PowerFlex Gateway VM and select Edit Settings.
c. Select Network adapter 1 (<MGMT VLAN>) and record the MAC address.
d. Repeat this step for Network adapter 2 (<DATA1 VLAN>), Network adapter 3 (<DATA2 VLAN>), Network adapter 4 (<DATA3 VLAN>), and Network adapter 5 (<DATA4 VLAN>).
e. Use the VMware virtual console to connect to the PowerFlex Gateway VM.
f. At the command prompt, type:
nmtui
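nmtui walks through these settings interactively. For reference, the equivalent non-interactive nmcli commands look roughly like the following sketch; the connection name and addresses are illustrative placeholders (substitute the real connection name and the <MGMT IP>, <DEF GW IP>, and <DNS IP> values from the prerequisites table). The commands are printed rather than executed here, since they are site-specific.

```shell
# Dry-run sketch: the nmtui edits expressed as nmcli commands.
# Connection name and addresses are illustrative placeholders.
CONN="ens192"; MGMT_IP="203.0.113.20/24"; DEF_GW="203.0.113.1"; DNS_IP="203.0.113.2"
for cmd in \
  "nmcli connection modify ${CONN} ipv4.addresses ${MGMT_IP} ipv4.gateway ${DEF_GW} ipv4.dns ${DNS_IP} ipv4.method manual" \
  "nmcli connection up ${CONN}"
do
  echo "DRY RUN: ${cmd}"
done
```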
Steps
1. Edit the chrony.conf file: vi /etc/chrony.conf
2. At about line 7, add the line server <NTP IP> iburst.
3. Save the chrony.conf file and quit the editor: <ESC>:wq!
4. Set the timezone. For example, for the Chicago timezone: timedatectl set-timezone America/Chicago.
Steps
1. Use the VMware virtual console to connect to the PowerFlex Gateway VM.
2. At the command prompt, type:
Steps
1. Use VMware virtual console to connect to PowerFlex Gateway VM.
2. At the command prompt, type: cd /root/install
3. Install the Java RPM by typing the following: rpm -ivh java-1.8.0-openjdk-headless-1.8.0.292.b10-1.el7_9.rpm
4. Install the gateway RPM by typing:
5. Confirm the correct network configuration and the installation of the RPMs by using a web browser to connect to the PowerFlex Gateway (<MGMT IP>).
The PowerFlex Installer login dialog box opens.
6. Close the PowerFlex Installer box without logging in.
7. Dell EMC recommends creating a snapshot of the PowerFlex Gateway to allow recovery in case of a system failure.
Steps
1. Use an SSH client program such as PuTTY to log in to the PowerFlex Gateway console (for example: Login: root, Password: <ROOT PWD>).
2. Modify the gatewayUser.properties file:
a. Enter: cd /opt/emc/scaleio/gateway/webapps/ROOT/WEB-INF/classes.
NOTE: See Determining and switching the PowerFlex Meta Data Manager to find MDM primary and secondary IP
addresses.
b. Enter: vi gatewayUser.properties to edit the file and modify the following. IP addresses should be on the <MGMT IP> network:
● If you have a 3-node MDM cluster: At about line 17: mdm.ip.addresses=<PRIMARY MDM IP>,<SECONDARY MDM IP 1>
● If you have a 5-node MDM cluster: At about line 17: mdm.ip.addresses=<PRIMARY MDM IP>,<SECONDARY MDM IP 1>,<SECONDARY MDM IP 2>
7. Log in to PowerFlex Manager, go to the Resources page, select the PowerFlex Gateway, and then click Run Inventory.
8. Go to Services and verify Overall Service Health.
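The mdm.ip.addresses edit in step 2b can be scripted with sed instead of vi. This sketch sets the 3-node form against a scratch copy of the file; the IP values are illustrative stand-ins for <PRIMARY MDM IP> and <SECONDARY MDM IP 1>.

```shell
PROPS=$(mktemp)   # stand-in for gatewayUser.properties (path in step 2a)
echo 'mdm.ip.addresses=' > "$PROPS"
PRIMARY="198.51.100.11"; SEC1="198.51.100.12"   # substitute real MDM IPs
# Replace the mdm.ip.addresses line with the 3-node cluster value.
sed -i "s/^mdm\.ip\.addresses=.*/mdm.ip.addresses=${PRIMARY},${SEC1}/" "$PROPS"
grep '^mdm.ip.addresses=' "$PROPS"
rm -f "$PROPS"
```

For a 5-node cluster, append ,<SECONDARY MDM IP 2> to the replacement value in the same way.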
Prerequisites
Discover and set the PowerFlex management controller VMware vCenter as Managed in the PowerFlex Manager and select
this VMware vCenter and vSAN datastore for the presentation server template.
Steps
1. Log in to PowerFlex Manager.
2. On the PowerFlex Manager menu bar, click Templates > Sample template > Management - presentation server and
click Clone in the right pane.
3. In the Clone Template dialog box, enter a template name under Template Name.
4. Select a template category from the Template Category list. To create a template category, select Create New Category
and enter the Category name.
5. In the Template Description, enter a description for the template.
6. To specify the version to use for compliance, select the version from the Firmware and Software Compliance list or
choose Use PowerFlex Manager appliance default catalog.
You cannot select a minimal compliance version for a template, since it only includes server firmware updates. The
compliance version for a template must include the full set of compliance update capabilities. PowerFlex Manager does
not show any minimal compliance versions in the firmware and software compliance list.
7. Indicate access rights to the service deployed from this template by selecting one of the following options:
● PowerFlex Manager administrators
● PowerFlex Manager administrators and specific standard and operator users
○ Click Add Users to add one or more standard or operator users to the list, and click Remove Users to remove users from the list.
● PowerFlex Manager administrators and all standard and operator users
8. Click Next.
9. On the Additional Settings page, provide new values for the Network Settings, PowerFlex Presentation Server
Settings, and Cluster Settings.
3. Approve Certificates.
4. Enter the MDM cluster username and password.
NOTE: Unlinking should be done from the presentation server login page.
Steps
1. Log in to the presentation server web UI link (https://Presentation_Server_IP_Address:8443/).
2. Log in to the PowerFlex GUI.
3. Click Settings > Unlink system.
Prerequisites
Ensure the following are completed before you initiate the upgrade process:
● Back up the VMware vSphere infrastructure management components.
● Download the appropriate VMware vSphere vCenter patch and VMware ESXi ISO files from the download repository to the
jump server in the PowerFlex controller node.
● Take a snapshot of the vCSA prior to upgrading. See the VMware KB article for more information.
Steps
1. Take a snapshot of the VMware vSphere Management VMs (PowerFlex Manager appliance, VMware Controller vCenter,
embedded operating system jump server, Secure Remote Services Gateway, PowerFlex Gateway, PowerFlex presentation
server - optional is the CloudLink Center).
a. Check the datastore disk usage to verify that enough disk space is available to create snapshots.
b. Right-click and select Snapshot > Take Snapshot.
c. Enter a name and description, clear the Snapshot the virtual machine's memory checkbox, and click OK.
d. Repeat these steps for each management VM.
2. Log in to the VMware vCenter appliance management port and create a backup.
a. Use the backup utility https://{FQDN}:5480 to create a backup.
3. Using the VMware vSphere client, upload the VMware-vCenter-Server-Appliance-X.x.x.xxxxx-xxxxxxx-patch-FP.iso to the local datastore on the PowerFlex controller node.
4. From the VMware vSphere client, click Storage > Datacenter > PERC-01 > Files.
a. Create a folder named ISO (if not created already).
b. Click the upload icon and upload the required ISO file.
NOTE: This step may fail if the browser finds a certificate that it does not trust. If a failure occurs, upload the ISO
files to an existing folder.
c. Allow time for the upload to complete. You can view the status at the bottom of the screen.
d. From the VMware vSphere client, select the VM to attach the ISO.
e. Go to Hosts and Clusters.
f. On the Summary screen, expand VM Hardware.
7. On the left menu, select Update > Check Updates. Click Check CD ROM. Wait while the system validates the ISO
attached earlier.
8. When complete, select Stage and Install. Click I accept and click Next. Clear Join the VMware Customer... and click
Next. Check I have backed up vCenter... and click Finish.
9. Click OK. To reboot, right-click the VM from vCenter and Power > Restart Guest OS. Allow up to 10 minutes for the VM
to reboot.
NOTE: When rebooting the PowerFlex management controller vCSA, web client connectivity is lost. After the reboot,
log back on to the web client.
10. Log in to the VMware vSphere client again and validate that the SSO domain is running, and disconnect the ISO.
NOTE: The VMware vSphere client may take some time to start, as the vCSA can take up to 15 additional minutes to
start all VMware vCenter services.
11. Verify that you have the correct Intelligent Catalog vCenter version, as follows:
a. Use the vSphere client to log in to the vCenter server.
b. Click Help > About VMware vSphere.
c. A dialog appears with the build number of the VMware vCenter Server. Verify that it matches the requirement.
Prerequisites
The iDRAC firmware upgrade must be done before any other upgrades. Perform the iDRAC firmware upgrade first, then upgrade
the other component firmware.
Steps
1. Log in to the iDRAC web interface by opening a Mozilla Firefox or Google Chrome browser and go to https://<ip-address-of-idrac>.
NOTE: Under Server Information, review the System Host Name and verify that you have connected to the correct
hostname.
2. Select Maintenance > System Update > Manual Update and click Choose File.
3. Go to the Intelligent Catalog folder /shares/xxxxx and select the component update file. The components to update
include:
● iDRAC service module
● Dell BIOS
● Dell BOSS controller
● Dell iDRAC/Lifecycle controller
● Dell Intel X550/X540/i350
● Dell Mellanox ConnectX-4 LX
● Dell PERC H740P Mini RAID controller
4. Click Upload.
5. Select the firmware that you uploaded and click Install Next Reboot.
NOTE: The installation will be in the job queue for the next reboot. Click Job Queue from the prompted information message to monitor the progress of the installation.
Steps
1. Log in to the web UI of the controller VMware ESXi host directly.
2. Go to Virtual Machines.
3. Shut down all the VMs except the jump server running on the controller host.
Steps
1. Use WinSCP to copy the ESXi-X.x.0-xxxxxx.zip patch file to the /vmfs/volumes/PERC-01/ISO folder on the
VMware ESXi server.
2. Using SSH, connect to the VMware ESXi host and check for the uploaded file by typing the following command: cd /vmfs/volumes/PERC-01/ISO
3. For VMware ESXi 7.0 use the following command to install VMware ESXi .zip patches: esxcli software vib update
-d /vmfs/volumes/PERC-01/ISO/VMware-ESXi-7.0<version>-depot.zip
NOTE: Use the same path that was used when transferring the ZIP file with WinSCP.
6. Reboot the VMware ESXi host. Select Power > Reset (Warm Boot).
7. Press F2 to enter system setup.
8. Click System BIOS > Boot Settings and set Boot mode to UEFI.
NOTE: Ensure that the BOSS card is set as the primary boot device from the UEFI Device Path under the Boot tab.
If the BOSS card is not set as the primary boot device, reboot the server and change the UEFI boot sequence from
System BIOS > Boot setting > UEFI BOOT Settings.
9. Click Back > Back > Finish > Yes > Finish > OK > Finish > Yes. The node reboots. Go to Exit maintenance mode.
Steps
1. Log in to the web UI of the controller VMware ESXi host directly.
2. Go to Virtual Machines.
3. Power on all the VMs running on the controller host.
Steps
1. Use WinSCP to upload ISM-Dell-Web-3.4.x-xxxx.VIB-ESX6i-Live_A00.zip to the /vmfs/volumes/DASxx/ISO folder.
2. Use SSH to access the VMware ESXi nodes and type esxcli software vib install -d /vmfs/volumes/DASxx/ISO/ISM-Dell-Web-3.4.x-xxxx.VIB-ESX6i-Live_A00.zip.
Prerequisites
Ensure you have completed the following:
● Take a snapshot of the SVM.
● Check the CPU and clock speed.
Steps
1. Log in to the VMware vCenter with administrator credentials.
2. Right-click the SVM and select Edit Settings.
3. Expand CPU.
4. Select Reservation and enter the value in GHz.
5. Select Shares and select High from the menu.
Steps
1. Click Administration > vCenter Server Extension > vSphere ESX Agent Manager > VMs. The VMs are also visible on
the VMs and templates view.
2. VMs are created under the vCLS folder once the host is added to the cluster.
3. On the VMs and Templates view, click the vCLS folder.
4. Right-click the VM and click Migrate.
5. On the window, click Yes.
6. Click Change storage only.
7. For controller nodes, migrate them to the vSAN datastore.
8. Repeat the above procedure for all the vCLS VMs.
Steps
1. Obtain the updated switch image from the IC software repository.
2. Deploy the existing embedded jump VM and assign a valid IP address with Internet connectivity. A valid DNS entry must be defined.
3. Run df -h to verify that there is enough available free space on the /shares partition of the embedded jump VM to
download the RPM packages and create the ZIP file. At least 15 GB is recommended.
Steps
1. Create a directory in the /shares volume called Centos-RPM, type: sudo mkdir /shares/Centos-RPM.
2. Copy the repository update ZIP file to the /tmp directory of the embedded operating system VM using WinSCP or similar.
3. Extract the contents of the repository update ZIP file to the /shares/Centos-RPM directory, type: sudo unzip /tmp/
repofilename.zip -d /shares/Centos-RPM.
4. Create a new repository file in the /etc/yum.repos.d directory, type: sudo vi /etc/yum.repos.d/centos.rpm.repo. In this example, the file created is /etc/yum.repos.d/centos.rpm.repo.
5. Clean the yum cache, type: sudo yum clean all.
6. Verify access to the new repository, type: sudo yum repolist.
7. Deploy the updates from the repository, type: yum update. When prompted answer (y).
8. When the process is complete, reboot the system, type: reboot.
9. Once the system reboot has completed, verify the kernel version, type: uname -a.
10. Verify the embedded operating system version, type: cat /etc/centos-release.
11. Remove the RPM files, type: sudo rm -f -r /shares/Centos-RPM.
12. Remove the repository index file, type: sudo rm /etc/yum.repos.d/centos.rpm.repo.
13. Clean yum cache, type sudo yum clean all.
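The repository file written in step 4 is a standard yum .repo file. The following sketch shows one plausible layout, written with a heredoc against a scratch file; the section name and baseurl are assumptions based on the /shares/Centos-RPM directory used in steps 1 through 3, and the follow-on yum commands from the procedure are shown as comments.

```shell
REPO=$(mktemp)   # stand-in for /etc/yum.repos.d/centos.rpm.repo
cat > "$REPO" <<'EOF'
[centos-rpm]
name=Local CentOS update repository
baseurl=file:///shares/Centos-RPM
enabled=1
gpgcheck=0
EOF
grep '^baseurl=' "$REPO"
rm -f "$REPO"
# Then, as in steps 5-7: sudo yum clean all && sudo yum repolist && sudo yum update
```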
Prerequisites
Complete the following workflow to upgrade a PowerFlex appliance environment:
● Upgrade PowerFlex Manager.
● Add new compliance file (Intelligent Catalog) and operating system images to PowerFlex Manager.
● In PowerFlex Manager, add the new compatibility management file. If you are using PowerFlex Manager 3.6 or prior, skip this step.
● Upgrade the PowerFlex Gateway.
● Upgrade to a supported version of VMware vCenter.
NOTE: The version depends on the VMware ESXi version available with the Intelligent Catalog. For example, if you are going to upgrade VMware ESXi from 6.5 to 6.7, you should first upgrade your VMware vCenter to 6.7 before starting the upgrade.
● Upgrade the PowerFlex appliance.
● Upgrade the PowerFlex GUI presentation server.
NOTE: IC jumps greater than two are considered high risk. Contact Dell EMC Support before proceeding.
To upgrade to a new IC train, you first upgrade to the end of the IC train on which your system resides, and then upgrade to the
new IC train. Performing these two upgrades keeps the system on an engineered and validated path. This is the safest choice
for system stability and data integrity.
For example, the following diagram shows the multihop upgrade from IC 33_30_00 to IC 37_361_00 and a two-step upgrade, if the customer is upgrading from IC 36_360_00 to IC 37_361_00.
NOTE: If the MTU value is already set to 9000, ignore the Change the maximum transmission unit... tasks.
Switch MTU
Dell PowerSwitch: default/current -, recommended 9216
VMK MTU
vMotion: default/current 1500, recommended 9000
Steps
Log in to the access switch with administrative credentials.
Steps
1. Log in to the VMware vCenter with administrator credentials.
2. Select Networking.
3. Select cust_dvswitch.
4. Right-click and select Edit Settings.
5. Select Advanced and change the MTU value to 9000.
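After raising the MTU, a common end-to-end check (an assumption, not part of this procedure) is vmkping from an ESXi host with the don't-fragment flag and a payload sized to the MTU minus the 20-byte IP and 8-byte ICMP headers:

```shell
#!/bin/sh
# Payload size for an ICMP jumbo-frame test: MTU minus the 20-byte IP header
# and the 8-byte ICMP header. For an MTU of 9000 this gives 8972.
vmkping_payload() {
    echo $(( $1 - 20 - 8 ))
}
# Example (run on an ESXi host; the target IP is hypothetical):
#   vmkping -d -s "$(vmkping_payload 9000)" 192.168.104.1
```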
Steps
1. Click Host and Clusters.
Steps
1. Log in to the Management Controller VMware vCenter.
2. Right-click on the PowerFlex Manager appliance.
3. Click Power > Shutdown Guest OS.
Steps
1. Log in to the Management Controller VMware vCenter.
2. Right-click on the PowerFlex Manager appliance.
3. Select Snapshots > Take snapshot.
4. Uncheck Snapshot the virtual machine's memory and enter a description.
5. Click OK.
Prerequisites
Log in to the Dell Technologies Support site, download the PowerFlex Manager OVA file, and save it to a location that is
accessible to the VMware vSphere Client.
Steps
1. Log in to VMware vSphere Client.
2. Right-click Management ESXi host and select Deploy OVF Template.
The Deploy OVF Template wizard displays.
3. On the Select template page, enter the URL where the OVA is located or select Local file and browse to the location
where the OVA is saved.
Prerequisites
● Ensure you have the information gathered in Back up using PowerFlex Manager (IP address, netmask, gateway, DNS and
domain) from the old PowerFlex Manager to configure the new PowerFlex Manager.
Steps
1. Map the networks:
a. Log in to VMware vSphere Client.
b. Right-click the PowerFlex Manager virtual appliance and select Edit Settings.
c. On the Virtual Hardware tab, click the Network adapter menu for the VMware ESXi management, the OOB
management, and the operating system installation networks and take note of the Port ID and MAC Address values for
each network.
d. Power on PowerFlex Manager.
e. Log in to the PowerFlex Manager virtual appliance through the VM console using the following credentials:
● Username: delladmin
● Password: delladmin
f. Click Agree to accept the License Agreement and click Submit.
g. Log out of the Dell EMC Initial Appliance Configuration UI.
h. Enter ifconfig.
The network connections display.
i. To identify which network connection is mapped to which network, compare the MAC address of each of the three
network connections that are displayed against the MAC addresses of the three networks that you noted in step c.
The networks are mapped as follows:
2. Enter pfxm_init_shell to restart the Dell EMC Initial Appliance Configuration UI and click Network Configuration.
If the UI does not display, enter sudo pfxm_init_shell and enter the username delladmin and your password.
3. To configure the VMware ESXi management network, complete the following:
a. On the Network Connections page, select ens<network_connection> and click Edit the selected connection.
b. On the General tab, ensure the Automatically connect to this network when it is available check box is selected.
c. Click the IPv4 Settings tab and from the Method list, select Manual.
d. In the Addresses pane, click Add, and enter the Address, Netmask, and Gateway.
e. Enter the IP addresses in the DNS servers box.
f. Enter the search domain.
g. Click Save.
4. To configure the OOB management network, which is the dedicated iDRAC network, complete the following:
a. On the Network Connections page, select ens<network_connection> and click Edit the selected connection.
b. On the General tab, ensure the Automatically connect to this network when it is available check box is selected.
c. Click the IPv4 Settings tab and from the Method list, select Manual.
d. In the Addresses window, click Add, and enter the Address and Netmask.
e. Click Routes and ensure the Use this connection only for resources on its network check box is selected.
f. Click Save.
5. To configure the operating system installation network, which is the PXE network, complete the following:
a. On the Network Connections page, select ens<network_connection> and click Edit the selected connection.
b. On the General tab, ensure the Automatically connect to this network when it is available check box is selected.
c. Click the IPv4 Settings tab and from the Method list, select Manual.
d. In the Addresses pane, click Add, and enter the Address and Netmask.
e. Click Routes and ensure the Use this connection only for resources on its network check box is selected and click
OK.
f. Click Save and exit the window.
g. Select Date/Time Properties, click the Time Zone tab, and verify that the system clock uses UTC.
h. Select Time zone > OK.
i. Click Change hostname and enter the hostname.
j. Click Update Hostname > OK.
6. Log out of the PowerFlex Manager virtual appliance.
Next steps
Log in to the PowerFlex Manager UI through your browser using the URL that is displayed in the Dell EMC Initial Appliance
Configuration UI, for example, https://<IP_Address>/ui. Use the following credentials:
● Username: admin
● Password: admin
If you can successfully log in to the PowerFlex Manager UI, PowerFlex Manager deployed successfully.
If you cannot log in to PowerFlex Manager, ensure you are using the correct <IP_Address> by entering ip address in the
command line and searching for the IP address of the PowerFlex Manager virtual appliance. The <IP_Address> should be the
same <IP_Address> that displayed in the Dell EMC Initial Appliance Configuration UI.
Click Cancel to cancel the Setup Wizard.
Map networks
Use this procedure if the networks are not mapped.
Steps
1. Log in to VMware vSphere Client.
Steps
1. On the menu bar, click Settings and click Backup and Restore.
2. On the Backup and Restore page, click Restore Now.
3. Enter a file path name in the backup directory path and file name box that specifies the backup file to be restored. Use one
of the following formats:
● NFS—host:/share/filename.tar.gz
● CIFS—\\host\share\filename.tar.gz
4. Enter the username and password in the Backup Directory User Name and Backup Directory Password fields to log in
to the location where the backup file is stored.
5. Enter the encryption password in the Encryption Password field to access the backup file. This is the password that was
provided when the backup file was created.
6. Click Test Connection > Close and click Restore Now.
7. When a confirmation message is displayed, click Yes or No.
The restore process starts. During the restore, PowerFlex Manager reboots.
NOTE: If you back up a PowerFlex Manager virtual appliance with a working alert connector configuration and restore
that backup onto a different IP address, the alert connector comes up in an error state. The Secure Remote Services
gateway allows communication on only the original IP address. In this case, deregister the alert connector after restoring
the backup, and then re-register it.
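The two backup path formats accepted in step 3 can be sketched as a small validator; the patterns below are inferred from the NFS and CIFS examples above.

```shell
#!/bin/sh
# Illustrative classifier for the backup directory path formats in step 3:
#   NFS  - host:/share/filename.tar.gz
#   CIFS - \\host\share\filename.tar.gz
backup_path_type() {
    case "$1" in
        "\\\\"*) echo "CIFS" ;;     # leading double backslash
        *:/*)    echo "NFS" ;;      # host:/ prefix
        *)       echo "invalid" ;;
    esac
}
```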
Steps
1. To resynchronize the operating system image repository for an operating system image that was uploaded as part of an ISO
file:
a. On the Compliance and OS Repositories page, click the OS Image Repositories tab.
b. From the Available Actions drop-down menu, click Resynchronize for a repository in an Error state.
The Resynchronize OS Repository page is displayed.
c. Enter the user credentials and click Test Connection to test the network connection.
NOTE: You cannot edit the Source Path and Filename.
d. Click Resynchronize.
The repository state changes to Copying state.
2. To resynchronize the compliance bundle on the Compliance Versions tab:
a. On the Compliance and OS Repositories page, click the Compliance Versions tab.
The compliance bundle is in an Error state.
b. From the Available Actions drop-down menu, click Resynchronize for a repository in an Error state.
Steps
1. Log in to PowerFlex Manager.
2. Click Settings and select Virtual Appliance Management.
3. In the Compatibility Management section, click Add.
4. Download the compatibility management file from Dell Technologies Support site to the jump server.
5. Click Upload from Local to use a local file. Then, click Choose File to select the GPG file and click Save.
Steps
1. Log in to the PowerFlex Manager appliance console using the delladmin username. If you do not see the command line
prompt, log out of the shell and log back in. Type sudo su. Enter the delladmin password.
2. Type:
systemctl enable sshd
systemctl start sshd
3. Type exit to log out, and return to the delladmin user.
4. Connect to the PowerFlex Manager management IP address with the SSH client to verify that SSH is enabled.
Post validation
Ensure all the PowerFlex services in the PowerFlex Manager are up and accessible.
Steps
1. Log in to Management Controller VMware vCenter.
2. Right-click on the PowerFlex Manager appliance.
3. Select Snapshots > Manage Snapshots.
4. Select the snapshot and click Delete.
5. On the Delete Snapshot window, click Delete to delete the VM snapshot.
Steps
1. Log in to Management Controller VMware vCenter.
2. Right-click on the PowerFlex Manager appliance.
3. Click Delete from Disk.
4. On the Confirm Delete window, click Delete.
Next steps
Add the compatibility management file.
Prerequisites
Take a backup of the PowerFlex Manager appliance settings.
Steps
1. Log in to PowerFlex Manager.
2. To perform a backup of the appliance, go to Settings > Backup and Restore.
a. If a backup has never been performed, you must configure backup settings before you can click the Backup Now button.
b. For help on how to configure the backup settings, click Edit next to settings and click ? in the Settings and Details
screen. This action takes you to the online help in PowerFlex Manager, which provides information about configuring the
backup settings.
3. On the banner, click View Details on the Actions menu.
Alternatively, on the menu bar, click Settings, and then click Virtual Appliance Management.
4. In the Appliance Upgrade Settings section, you can see the Current Virtual Appliance Version and check the Available
Virtual Appliance Version field to see if a newer version of PowerFlex Manager is available.
5. To the right of the Appliance Upgrade Settings section, click Edit.
6. To update to the latest version using Secure Remote Services, select Update Appliance from configured Dell EMC
Secure Remote Services.
If you indicate that you want to update the virtual appliance using Secure Remote Services, the Repository Path field on
the Virtual Appliance Management page shows Dell EMC Secure Remote Services (SRS). Otherwise, the field
shows the network path that is entered in the Edit Appliance Upgrade Settings dialog.
7. At the top of the Virtual Appliance Management page, click Update Virtual Appliance.
8. On the Update PowerFlex Manager page, verify the following fields are correct:
● PowerFlex Manager version compatible
● Are current Intelligent Catalogs compatible
● Current virtual appliance version
● Available virtual appliance version
● Repository path
9. In the Type UPDATE POWERFLEX MANAGER to confirm field, type UPDATE POWERFLEX MANAGER and click Yes to
update your appliance.
The update process displays messages indicating the progress of the update. Once the update is complete, the system
restarts and you are redirected to the login page.
Next steps
If you are updating PowerFlex Manager from a release prior to 3.3, you must configure iDRAC nodes to automatically send alerts
to PowerFlex Manager:
1. Click Settings.
2. Under Settings, click Credentials Management.
3. On the Credentials Management page, edit the credential for each node and ensure that the correct SNMP community
string is included in the credential.
Select a node and click Edit to review the SNMP v2 community string and make any required changes.
The default community string is public. To use a different value, overwrite this string. The string that you specify must
match the current community string setting on the iDRAC server.
4. Under Settings, click Virtual Appliance Management.
5. In the SNMP Trap Forwarding section, review the iDRAC SNMP community strings.
Steps
1. Log in to PowerFlex Manager.
2. Click Settings and select Virtual Appliance Management.
3. In the Compatibility Management section, click Add.
4. Click Download from Secure Remote Services (Recommended).
5. Click Upload from Local to use a local file. Then, click Choose File to select the GPG file and click Save.
Steps
1. On the menu bar, click Settings, and then click Virtual Appliance Management.
2. On the Virtual Appliance Management page, click Reboot Virtual Appliance. A message displays confirming that you
want to restart the virtual appliance.
3. Click Yes to confirm. The system restarts.
4. Once the reboot is complete, click Click to log in and provide your credentials.
Upgrading components
About this task
If a PowerFlex Manager upgrade added new required fields to components within the template from which a service was
deployed, Confirm Service Settings is displayed on the Services page. Although upgrading components is not mandatory,
certain service or resource functions are not available until the upgrade is complete.
Steps
1. Log in to PowerFlex Manager.
2. On the menu bar, click Settings and select Compliance and OS Repositories.
3. Click the question mark (?) in the upper right corner of the Add Compliance File page and follow the online help.
4. Verify that Make this the default version for compliance checking is selected.
NOTE: The Intelligent Catalog does not contain OS image files. You must load OS files separately by clicking Settings
and selecting Compliance and OS Repositories.
5. Go to dell.com/support and log in using the Service Tag associated with any of the PowerFlex nodes in the PowerFlex
appliance.
6. Go to the Drivers & Download tab to download the Intelligent Catalog and OS image files.
NOTE: To be notified when new software releases are available, click Driver Notifications at the bottom of the
Drivers & Downloads tab.
7. On the Compliance and OS Repositories page, click the OS Image Repositories tab and click Add.
8. In the Add OS Image Repository dialog box, enter the name of the repository, the image type, and the path to the OS
image file.
9. Click Add.
Prerequisites
Ensure that you have the package for the PowerFlex presentation server, and access to the server hosting the PowerFlex
presentation server.
Steps
1. Use SSH/SFTP to copy the PowerFlex presentation server package to the /tmp directory on the PowerFlex presentation
server.
2. Type rpm -Uvh EMC-ScaleIO-mgmt-server-3.5-X.noarch.rpm.
3. After the upgrade is complete, reconnect the MDM to the PowerFlex GUI:
a. Navigate to https://<presentation server IP>:8443.
b. Enter the MDM IP address and click Next.
c. Agree to the certificates.
d. Log in with administrative credentials.
Steps
1. Log in to PowerFlex Manager.
2. On the menu bar, click Resources.
3. On the Resources page, select the checkbox for the PowerFlex Gateway resource and click Update Resources.
4. In the Update Resources wizard, check the Needs Attention section to see whether any of the nodes need to be
reconfigured before upgrade. Select any nodes that you want to reconfigure. To select all nodes, click the button to the left
of SDS Name.
5. Click Next.
6. On the Summary page, select Allow PowerFlex Manager to perform non-disruptive updates now or Schedule
nondisruptive updates to run later.
7. Specify the type of update you want to perform by selecting one of the following options:
● Instant Maintenance Mode enables you to perform updates quickly. PowerFlex Manager does not migrate the data.
● Protected Maintenance Mode (PowerFlex 3.5 and later) enables you to perform updates that require longer than 30
minutes in a safe and protected manner.
NOTE: To verify that the PowerFlex appliance node is ready for a PowerFlex upgrade, check the Needs attention tab.
If a node appears in this tab, select the node and click Finish. This ensures that the SVM has the required CPU and
RAM capacity.
8. If you only selected a subset of the nodes for reconfiguration, confirm the reconfiguration by typing Reconfigure nodes.
Otherwise, confirm the update action by typing Update PowerFlex.
If you reconfigured only a subset of the nodes, you need to restart the wizard later to reconfigure the remaining nodes
before you can complete the upgrade process.
9. If you are updating a PowerFlex Gateway, type Update PowerFlex to confirm that you are ready to proceed with the
update.
10. Click Finish and click Yes to confirm.
NOTE: When you perform this task, PowerFlex Gateway, all MDMs, and all SDS nodes are updated in a rolling,
nondisruptive update. After the update is initiated, you cannot stop this process until it completes.
Prerequisites
● Skip this task if using PowerFlex Manager 3.7. In PowerFlex Manager 3.7, Java gets updated as part of the PowerFlex
Gateway upgrade using PowerFlex Manager.
● In PowerFlex Manager 3.6 or prior, Java must be updated manually using this task.
● Download OpenJDK and its dependency packages from release repository.
Steps
1. For the PowerFlex Gateway, shut down the gateway service, type: # systemctl stop scaleio-gateway.
2. Install or upgrade the OpenJDK dependencies on the PowerFlex Gateway and PowerFlex GUI presentation server.
a. Change directory to /root/install, type: # cd /root/install.
b. List the dependencies, type: # ls.
c. Check for the copied OpenJDK dependency file JavaPackages.tar.gz.
d. Decompress the file, type: # tar -zxf JavaPackages.tar.gz.
e. Install or upgrade the OpenJDK dependency packages, type:
# cd JavaPackages/
# rpm -Uvh *.rpm
3. Remove the existing Oracle Java. Query the Java package information, type: # rpm -qa | grep -i jre.
4. Capture the existing Java version and delete it, type: # rpm -e <java_version>.
5. Install the new OpenJDK, type: # rpm -ivh /root/install/java-1.8.0-openjdk-headless-1.8.0.292.b10-1.el7_9.rpm.
6. Verify the version, type: # java -version.
The version is OpenJDK 64-bit server VM build 25.XXX-b09, mixed mode.
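A quick scripted form of the same verification (a sketch, not part of the product tooling) is to test the java -version output for the OpenJDK marker:

```shell
#!/bin/sh
# Illustrative check of step 6: succeed only if the version string reports
# OpenJDK rather than the removed Oracle JRE.
is_openjdk() {
    echo "$1" | grep -qi "openjdk"
}
# Example usage on the gateway:
#   is_openjdk "$(java -version 2>&1)" || echo "Oracle Java still present"
```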
NOTE: The upgrade will automatically upgrade LockBox and restart the service.
# cd /opt/emc/scaleio/gateway/bin
# ./FOSGWTool.sh --query_esx_credentials
Prerequisites
To discover the resource, complete the following:
Steps
1. Log in to PowerFlex Manager.
2. Click Resources.
3. Select the PowerFlex GUI presentation server and click Update Resources.
4. On the Apply Resource page, choose either:
● Allow PowerFlex Manager to perform firmware and software updates now
PowerFlex Manager applies the firmware updates and reboots this resource immediately. This update could be disruptive if
the resource is in use.
● Schedule firmware and software updates
PowerFlex Manager applies the firmware updates at the selected date and time and reboots this resource. This update
could be disruptive if the resource is in use.
5. Click Apply.
6. On the Confirm page, click Yes.
Steps
1. In PowerFlex Manager web UI, select Services.
2. Select the existing deployment that you are upgrading to view its details.
3. Click Services. On the Services page, select a service. To change the IC, select View Compliance Report.
4. On the Node Compliance Report page, click Change on the Compliance status. Select the RCM on the preferred
compliance file. Confirm the change by typing CHANGE COMPLIANCE FILE and click Save and Close.
NOTE: In earlier versions of PowerFlex Manager, on the Details page, see the Target version/Target IC version at
the top right. To change the IC, click Change Target /Change Target IC. You can set it to the default IC or different
IC.
5. On the Service Details page, in the right pane under Service Actions, click View Compliance Report.
6. From the compliance report, view the firmware or software components, select the specific nodes that are non-compliant,
and click Update Resources.
a. To perform a non-disruptive update right away, select Allow PowerFlex Manager to perform firmware and software
updates now. Select one of the following to specify the type of update:
● Instant Maintenance Mode - provides quick updates. PowerFlex does not migrate the data.
● Protected Maintenance Mode - provides updates that require longer than 30 minutes in a safe and protected
manner.
b. To perform a non-disruptive update at a later time, select Schedule firmware and software updates.
c. To perform a disruptive update right away for a full upgrade, select Allow PowerFlex Manager to perform disruptive
updates now.
7. If you encounter any errors while performing firmware or software updates, you can view the PowerFlex Manager logs for
the service to see where the error might have occurred.
a. On the Service Details page, in the right pane, under Service Actions, click Generate Troubleshooting Bundle.
This creates a compressed file that contains PowerFlex Manager application logs, PowerFlex Gateway logs, iDRAC
lifecycle logs, Dell EMC PowerSwitch switch logs, Cisco Nexus switch logs, and VMware ESXi logs. The logs are for the
current service only.
Alternatively, you can access the logs from a VMware console, or by using SSH to log in to PowerFlex Manager, if you
have SSH enabled.
Steps
1. Log in to PowerFlex Manager and select the Services tab.
2. Select the existing hyperconverged cluster.
3. Go to the Service page and launch the Migrate vCLS wizard.
4. Select the volume and datastore to migrate the vCLS VMs.
NOTE: For example, the volumes could be named powerflex-service-vol-1 and powerflex-service-vol-2, and the datastores
powerflex-esxclustershotname-ds1 and powerflex-esxclustershotname-ds2.
5. Click Finish.
This action creates two volumes and two datastores of 16 GB each, and the vCLS VMs are migrated to the service datastores.
Software
BIOS: version 07.61
NXOS: version 7.0(3)I7(3)
4. Check the contents of the bootflash directory to verify that enough free space is available for the new Cisco NX-OS
software image.
a. To check the free space on the flash, type: dir bootflash.
For example:
b. Delete older firmware files to make additional space, if needed, type: delete bootflash:nxos.7.0.2.I7.6.bin.
NOTE: Do not delete the current running version of the firmware files, as shown in the previous show version output.
The Cisco Nexus 3000 and Cisco Nexus 9000 switches do not provide a confirmation prompt before deleting files.
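A scripted way to pull the free-space figure out of the dir bootflash: output (a sketch; the "NNN bytes free" summary-line format is assumed from typical NX-OS output) is:

```shell
#!/bin/sh
# Illustrative parse of "dir bootflash:" output: extract the number from the
# "... bytes free" summary line so it can be compared against the image size.
free_bytes() {
    echo "$1" | sed -n 's/.*[^0-9]\([0-9][0-9]*\) bytes free.*/\1/p'
}
```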
5. If upgrading a Cisco Nexus 3000 series switch, type the following to compact the current running image file: switch#
install all nxos bootflash:nxos.7.0.3.I7.bin compact
6. Using an SCP, FTP, or TFTP server, copy the firmware file to local storage on the Cisco Nexus switch:
● Use the following TFTP command to copy the image:
copy tftp://XXX.XXX.XXX.XXX/nxos.9.3.3.bin bootflash:
NOTE: If warnings about insufficient space to copy files continue, perform an SCP copy with the compact option to
compact the file as it is copied over. Doing this may encounter a known defect. The workaround for this defect
requires cabling the management port and configuring its IP address on a shared network with the SCP server, allowing
the copy to take place across that management port. Once complete, go to Step 7.
Installer will perform compatibility check first. Please wait. Installer is forced disruptive
Software
BIOS: version 07.66
NXOS: version 9.3(3)
BIOS compile time: 06/11/2019
NXOS image file is: bootflash:///nxos.9.3.3.bin
NXOS compile time: 12/22/2019 2:00:00 [12/22/2019 09:00:37]
4. Check the contents of the bootflash directory to verify that enough free space is available for the image.
a. Check the free space on the flash, type: dir bootflash:.
Example command output:
NOTE: The Cisco Nexus 3172 switch and Cisco Nexus 3132 switch do not require EPLD upgrade.
5. Using an SCP, FTP, or TFTP server, copy the firmware file to local storage on the Cisco Nexus switch:
● Use the following TFTP command to copy the image:
copy tftp://XXX.XXX.XXX.XXX/n9000-epld.9.3.3.img bootflash:
7. Start the upgrade process, type: install epld bootflash:n9000-epld.9.3.3.img module all.
NOTE: After the upgrade, the switch reboot could take 5 to 10 minutes. Use a continuous ping command from the jump
server to validate when the switch is back online.
8. Using SSH, log back in to the switch with username and password.
9. Verify that the switch is running the correct new version, type: switch# show install epld status.
EPLD          Curr Ver    Old Ver
------------------------------------------------------
IO FPGA       0x15        0x9
The Golden (primary backup) copy of the EPLD now needs to be updated.
11. Update the Golden EPLD image, type: install epld bootflash:n9000-epld.9.3.3.img module 1 golden.
NOTE: After the upgrade, the switch reboot could take 5 to 10 minutes. Use a continuous ping command from the jump
server to validate when the switch is back online.
12. Using SSH, log back in to the switch with username and password.
13. Verify that the switch is running the correct new version, type: switch# show version module 1 epld.
Prerequisites
The iDRAC firmware upgrade must be done before any other upgrades. Perform the iDRAC firmware upgrade first, then upgrade
the other component firmware.
Steps
1. Log in to the iDRAC web interface: open a Mozilla Firefox or Google Chrome browser and go to
https://<ip-address-of-idrac>.
NOTE: Under Server Information, review the System Host Name and verify that you have connected to the correct
hostname.
2. Select Maintenance > System Update > Manual Update and click Choose File.
3. Go to the Intelligent Catalog folder /shares/xxxxx and select the component update file. The components to update
include:
● iDRAC service module
● Dell BIOS
● Dell BOSS controller
● Dell iDRAC/Lifecycle controller
● Dell Intel X550/X540/i350
● Dell Mellanox ConnectX-4 LX
● Dell PERC H740P mini raid controller
4. Click Upload.
5. Select the firmware that you uploaded and click Install Next Reboot.
NOTE: The installation will be in the job queue for the next reboot. Click Job Queue from the prompted information
message to monitor the progress for the installation.
Steps
1. From the VMware vSphere Client, click Cluster > Monitor > vSAN > Skyline Health.
2. Ensure the vSAN is healthy.
If the vSAN is not healthy, address the issues before continuing with the upgrade.
Steps
1. Log in to the web UI of the controller VMware ESXi host directly.
2. Select Virtual Machines.
3. Shut down all the VMs except the jump server running on the NSX-T Edge Gateway host.
Prerequisites
Migrate the online VMs before putting the host into maintenance mode.
Steps
1. On the VMware vSphere Client, click Hosts and Clusters.
2. Right-click the host and select Maintenance Mode > Enter Maintenance Mode.
3. Verify Move powered-off and suspended virtual machines to other hosts in the cluster is not selected.
4. Verify Ensure data accessibility is selected.
5. Click OK to put the host into maintenance mode.
Steps
1. Use WinSCP to copy the ESXi-6.x.0-xxxxxx.zip patch file to the /vmfs/volumes/vsanDatastore/ISO folder
on the VMware ESXi server (where XX is unique for each host).
b. To upgrade the VMware ESXi version, type: esxcli software profile update -p ESXi-6.7.0-20200804001-standard-customized -d /vmfs/volumes/vsanDatastore/ISO/Esxi-6.7.0-16713306-3.5.4.0_Dell_14G.zip
When the upgrade completes successfully, the following message displays, followed by the list of upgraded packages:
Update Result
Message: The update completed successfully, but the system needs to be rebooted for
the changes to be effective.
Reboot Required: true
8. Click Back > Back > Finish > Yes > Finish > OK > Finish > Yes. The node reboots. Proceed to the Exit maintenance mode
section.
9. Repeat these steps on all VMware ESXi servers.
Next steps
You must complete the upgrade for all hosts before proceeding to the Distributed Virtual Switch upgrade.
Steps
1. From the VMware vSphere Client Home screen, select Hosts and Clusters.
2. Right-click the host and select Exit Maintenance Mode.
Steps
1. Log in to the VMware NSX-T Edge Gateway host.
2. Select Virtual Machines.
3. Power on all the VMs running on the VMware NSX-T Edge Gateway host.
Steps
1. Use WinSCP to upload ISM-Dell-Web-3.x.x-xxxx.VIB-ESX6i-Live_A00.zip to the /vmfs/volumes/vsanDatastore/ISO folder.
2. Use SSH to access the VMware ESXi nodes and type esxcli software vib install -d /vmfs/volumes/vsanDatastore/ISO/ISM-Dell-Web-3.x.x-xxxx.VIB-ESX6i-Live_A00.zip.
Steps
1. Connect to the VMware vCenter Server using the VMware vSphere Client.
2. Click Networking and select the VMware Distributed Switch you want to upgrade.
3. Right-click the DVswitch and select Settings > Export Configuration.
4. Select configuration to export Distributed switch and all port groups.
5. Enter a description and click OK and Yes.
6. Select the location, enter the file name and click Save.
NOTE: For VMware vSphere 6.7, from the vSphere client HTML5 and Mozilla Firefox, click OK twice. There is no
prompt for filename or save. With Google Chrome, click OK once. There is no prompt for filename or save.
7. To upgrade VMware vSphere Distributed Switch, right-click Distributed Switch > Upgrade > Upgrade Distributed
Switch.
NOTE: There are two Upgrade options available: Upgrade Network I/O Control, and Enhanced LACP Support. The
Network I/O Control upgrade is required. The Enhanced LACP Support option is required only if it is enabled.
Prerequisites
● Verify that you are using the updated version of VMware vCenter Server.
● Verify that you are using the latest version of VMware NSX-T Edge Gateway hosts.
● Verify that the disks are in a healthy state. (In the vSphere Client, navigate to Host and Clusters, highlight your PowerFlex
management controller cluster, click the vSAN tab, and click Physical Disks to verify the object status in the right-hand
column.)
● Verify that your hosts are not in maintenance mode. When upgrading the disk format, do not place the hosts in maintenance
mode. When any member host of a vSAN cluster enters maintenance mode, the member host no longer contributes capacity
to the cluster. The cluster capacity is reduced and the cluster upgrade might fail.
Steps
1. Navigate to the vSAN cluster in the VMware vSphere Client.
2. From Host and Clusters, highlight your PowerFlex management controller cluster and click the Configure tab on the right
hand pane.
3. Under vSAN, select General.
4. Under On-Disk Format Version, click Pre-Check Upgrade.
The upgrade pre-check analyzes the cluster to uncover any issues that might prevent a successful upgrade. Some of the
items checked are host status, disk status, network status, and object status. Upgrade issues appear in the disk pre-check
status text box.
Run the pre-check before initiating the on-disk format upgrade task.
Steps
1. From the VMware vSphere Client, navigate to the vSAN cluster.
2. Navigate to Home > Host and Clusters, and highlight the PowerFlex management controller cluster.
3. Click Monitor > vSAN > Skyline Health.
4. Verify that all the tests have passed.
Prerequisites
The following requirements are needed before proceeding with enabling replication:
● LACP bonded NIC port design
● PowerFlex node with PowerFlex 3.6
● PowerFlex hyperconverged node with a minimum of 2 sockets with 12 cores each
● Journal capacity (sized on delta change rate for each replicated volume)
● Additional external VLANs for replication must be added (flex-rep1-<vlanid>, flex-rep2-<vlanid>) used for Storage Data
Replication (SDR) to SDR communication between source and destination sites for replicating data.
● At least one protection domain (source and destination)
● At least one storage pool (source and destination)
● SDS devices that have been added to the appropriate storage pool (source and destination)
● PowerFlex systems installed at the source and destination sites with communication between them. (MDM to MDM
communication required in addition to external networks)
● At least one identical size volume on both source and destination sites. The volume at the source site must be mapped and
the volume on the destination site is used for replication and must be unmapped.
Workflow
The workflow for removing the existing PowerFlex hyperconverged nodes from PowerFlex Manager and enabling replication on
the existing PowerFlex hyperconverged nodes is as follows:
1. Remove the existing PowerFlex hyperconverged nodes from PowerFlex Manager.
2. Create and configure replication port groups (flex-rep1-<vlanid> and flex-rep2-<vlanid>) in flex_dvswitch.
3. Prepare SVM for replication, as follows:
a. Enter the Storage Data Server (SDS) node (SVMs) into maintenance mode.
b. Add the virtual NICs to the SVMs for Storage Data Replication (SDR) external communication.
c. Modify vCPU, memory, virtual Non-Uniform Memory Access (vNUMA), and CPU reservation settings on SVMs.
4. Power on the SVM and configure the network interfaces.
5. Install the SDR on the SDS nodes (SVMs).
6. Exit SDS maintenance mode.
7. Add journal capacity percentage. The recommended starting value is 10%.
8. Add the Storage Data Replicator to PowerFlex nodes.
9. Create the peer system between the source and destination site.
10. Add the peer system.
11. Create the replication consistency group (RCG).
12. Define network for replication in PowerFlex Manager. Do not define the gateway.
13. Add an existing service to PowerFlex Manager.
Steps
1. Log in to PowerFlex Manager.
2. On the menu bar, click Services.
3. On the Services page, click the service and in the right pane, click View Details.
4. On the Service Details page, in the right pane, under Service Actions, click Remove Service.
5. In the Remove Service dialog box, select Remove Service.
6. Select Leave nodes in PowerFlex Manager inventory and set the state to Managed.
7. Click Remove.
Steps
1. Log in to the VMware vSphere client and select the Networking inventory view.
2. Select Inventory, right-click flex_dvswitch and select New Port Group.
3. Type flex-rep1 and click Next.
4. From the VLAN type menu, select VLAN, and in VLAN ID, enter 161 (as per the Logical Configuration Survey (LCS)).
5. Select Customize default policies configuration under the Advanced option.
6. Click Next > Next > Next.
7. From Teaming and failover tab:
a. Change Load Balancing to Route based on IP hash.
b. Move up the LACP-Lag uplink under Active uplinks.
c. Move down uplink1 and uplink2 under Unused uplinks.
d. Click Next.
8. Click Next > Next > Finish.
9. Repeat Steps 2 through 8 to create the following port group: flex-rep2 (VLAN ID as per the LCS).
Steps
1. Log in to the SDS (SVMs) using PuTTY.
Steps
1. SSH to the primary MDM, then log in to the PowerFlex cluster, using # scli --login --username admin.
2. Query the current value, type: # scli --query_performance_parameters --print_all --tech --all_sds | grep -i SDS_NUMBER_OS_THREADS.
3. Set the value of SDS_NUMBER_OS_THREADS to 10, type: # scli --set_performance_parameters --sds_id <ID> --tech --sds_number_os_threads 10.
NOTE: Do not set the SDS threads globally; set the SDS threads per SDS.
Steps
1. Log in to the SDS (SVMs) using PuTTY.
2. Run # systemctl status NetworkManager to verify that NetworkManager is not running.
The output must show disabled and inactive.
3. If the service is enabled and active, stop and disable it:
# systemctl stop NetworkManager
# systemctl disable NetworkManager
Steps
1. Log in to SDS (SVMs) using PuTTY.
2. Make a note of the MAC addresses of all the interfaces, using: # ifconfig or # ip a.
Example file:
BOOTPROTO=none
ONBOOT=yes
HOTPLUG=yes
TYPE=Ethernet
DEVICE=eth2
IPADDR=192.168.155.46
NETMASK=255.255.254.0
DEFROUTE=no
MTU=9000
PEERDNS=no
NM_CONTROLLED=no
NAME=eth2
HWADDR=00:50:56:80:fd:82
Steps
1. Log in to the SVM using PuTTY.
2. Edit the grub configuration file located in /etc/default/grub, type: # vi /etc/default/grub.
3. From the last line, remove net.ifnames=0 and biosdevname=0, and save the file.
4. Rebuild the grub configuration file, using: # grub2-mkconfig -o /boot/grub2/grub.cfg
Steps
1. Log in to the PowerFlex GUI presentation server: https://<presentation_server_IP>:8443.
2. In the left pane, click Configuration > SDSs.
3. In the right pane, select the relevant SDS and click More > Enter Maintenance Mode.
4. In the Enter SDS into Maintenance Mode dialog box, select Instant (if maintenance mode takes more than 30 minutes,
then select Protected).
5. Click Enter Maintenance Mode.
6. Verify that the operation is completed successfully and click Dismiss.
7. Shut down the appropriate SVM:
a. Log in to VMware vCenter using VMware vSphere Client.
b. Right-click the SVM and select Power > Shut Down Guest OS.
Steps
1. Log in to the VMware vCenter vSphere client and go to Host and Clusters.
2. Right-click the SVM and click Edit Settings.
3. Click Add New Device and select Network Adapter from the list.
4. Select the appropriate port group created for SDR external communication, and click OK.
5. Repeat steps 2 to 4 to create additional NICs.
Steps
1. Right-click the SVM and click Edit Settings.
2. Click the newly added network interface controllers in the Virtual Hardware list and make a note of the MAC address.
Steps
1. Log in to the VMware vCenter vSphere client.
2. Right-click the VM you want to change and select Edit Settings.
3. Under the Virtual Hardware tab, expand Memory and modify the memory size according to the SDR requirement.
4. Click OK.
Steps
1. Log in to VMware vCenter vSphere client.
2. Right-click the virtual machine that you want to change, then select Edit Settings.
3. Under the Virtual Hardware tab, expand CPU and increase the vCPU count according to the SDR requirement.
4. Click OK.
Steps
1. Log in to the production VMware vCenter using vSphere client.
2. Right-click the VM that you want to change and select Edit Settings.
3. Under the Virtual Hardware tab, expand CPU and ensure that the CPU Hot Plug option is unchecked.
Prerequisites
Ensure that the CPU hot plug is disabled. Do the following to disable the CPU hot plug feature before configuring vNUMA
parameter:
1. Log in to the VMware vCenter vSphere client.
2. Right-click the VM that you want to change and select Edit Settings.
3. Under the Virtual Hardware tab, expand CPU and verify that the CPU Hot Plug option is cleared.
Steps
1. Go to the SVM in the VMware vSphere client.
2. Select a data center, folder, cluster, resource pool, or host to find a VM.
3. Click the VMs tab.
4. Right-click the VM and select Edit Settings.
5. Click VM Options and expand Advanced.
6. Under Configuration Parameters, click Edit Configuration.
7. In the dialog box that appears, click Add Configuration Params to enter a new parameter name and its value.
For example, if the SVM for an MG pool has 20 vCPUs, set numa.vcpu.maxPerVirtualNode = 10. If the SVM for an
FG pool has 24 vCPUs, set numa.vcpu.maxPerVirtualNode = 12.
8. Click OK twice.
Ensure the following:
● Under CPU, Shares are set to High.
● 50% of the vCPUs are reserved on the SVM. For example, if the SVM for an MG pool is configured with 20 vCPUs and
CPU speed is 2.8 GHz, set a reservation of 28 GHz (20x2.8/2). If the SVM for an FG pool is configured with 24 vCPUs
and CPU speed is 3 GHz, set a reservation of 36 GHz (24x3/2).
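As a quick check of the reservation rule above, the guide's two examples can be reproduced with a small calculation (the vCPU counts and clock speeds below are the example values, not fixed requirements):

```shell
# Reserve 50% of (vCPU count x core clock, in GHz) on the SVM.
reserve() {
  awk -v n="$1" -v ghz="$2" 'BEGIN { printf "%.0f GHz\n", n * ghz / 2 }'
}
reserve 20 2.8   # MG-pool SVM example -> 28 GHz
reserve 24 3.0   # FG-pool SVM example -> 36 GHz
```

Enter the resulting value in the CPU Reservation field in Edit Settings.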
Prerequisites
Steps
1. Log in to the production VMware vCenter using vSphere client.
2. Right-click the VM you want to change and select Edit Settings.
3. Under the Virtual Hardware tab, expand Memory, modify the memory size according to SDR requirement.
4. Click OK.
Steps
1. Log in to the production VMware vCenter using VMware vSphere client.
2. Right-click the virtual machine that you want to change, then select Edit Settings.
3. Under the Virtual Hardware tab, expand CPU, increase the vCPU count according to SDR requirement.
4. Click OK.
Steps
1. Log in to the production VMware vCenter using vSphere client.
2. Right-click the VM that you want to change and select Edit Settings.
3. Under the Virtual Hardware tab, expand CPU and ensure that the CPU Hot Plug option is unchecked.
Steps
1. Browse to the SVM in the VMware vSphere client.
2. To find a VM, select a data center, folder, cluster, resource pool, or host.
3. Click the VMs tab.
4. Right-click the VM and select Edit Settings.
5. Click VM Options and expand Advanced.
6. Under Configuration Parameters, click Edit Configuration.
7. In the dialog box that appears, click Add Configuration Params to enter a new parameter name and its value.
Example: numa.vcpu.maxPerVirtualNode = 12
8. Click OK > OK.
Ensure the following:
● CPU Shares are set to High.
● 50% of the vCPUs are reserved on the SVM.
For example, if the SVM is configured with 24 vCPUs and the CPU speed is 3 GHz, set a reservation of 36 GHz (24x3/2).
Steps
1. Log in to VMware vCenter using vSphere client.
2. Select the SVM, right-click Power > Power on.
3. Log in to SVM using PuTTY.
4. Create rep1 network interface, type: cp /etc/sysconfig/network-scripts/ifcfg-eth0 /etc/sysconfig/
network-scripts/ifcfg-eth5.
5. Create the rep2 network interface, type: cp /etc/sysconfig/network-scripts/ifcfg-eth0 /etc/sysconfig/network-scripts/ifcfg-eth6.
6. Edit newly created configuration files (ifcfg-eth5, ifcfg-eth6) using the vi editor and modify the entry for IPADDR,
NETMASK, GATEWAY, DEFROUTE, DEVICE, NAME and HWADDR, where:
● DEVICE is the newly created device of eth5 and eth6
● IPADDR is the IP address of the rep1 and rep2 networks
● NETMASK is the subnet mask
● GATEWAY is the gateway for the SDR external communication
● DEFROUTE is set to no
● HWADDR is the MAC address collected in the topic Adding virtual NICs to SVMs
● NAME is the newly created device name for eth5 and eth6
NOTE: Ensure that the MTU value is set to 9000 for the SDR interfaces on both the primary and secondary sites, and on all end-to-end devices. Confirm the existing MTU values with the customer or in the Logical Configuration Survey (LCS), and configure accordingly.
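Step 6 can be scripted rather than edited by hand in vi. The sketch below clones ifcfg-eth0 and patches the replication-specific fields with sed; the IP address and MAC address are placeholders, and WORKDIR stands in for /etc/sysconfig/network-scripts so the sketch can be tried outside an SVM:

```shell
# Clone the base interface file and patch it for the rep1 interface.
# All addresses below are illustrative; use the values recorded for
# your site and the MACs noted when the NICs were added.
WORKDIR="${WORKDIR:-$(mktemp -d)}"

# Stand-in for the existing /etc/sysconfig/network-scripts/ifcfg-eth0.
cat > "$WORKDIR/ifcfg-eth0" <<'EOF'
DEVICE=eth0
NAME=eth0
IPADDR=192.168.150.10
NETMASK=255.255.255.0
DEFROUTE=yes
HWADDR=00:50:56:80:aa:01
EOF

cp "$WORKDIR/ifcfg-eth0" "$WORKDIR/ifcfg-eth5"
sed -i -e 's/^DEVICE=.*/DEVICE=eth5/' \
       -e 's/^NAME=.*/NAME=eth5/' \
       -e 's/^IPADDR=.*/IPADDR=192.168.161.46/' \
       -e 's/^DEFROUTE=.*/DEFROUTE=no/' \
       -e 's/^HWADDR=.*/HWADDR=00:50:56:80:fd:90/' "$WORKDIR/ifcfg-eth5"
grep '^DEVICE=' "$WORKDIR/ifcfg-eth5"
```

Repeat the same substitutions for ifcfg-eth6 with the rep2 values.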
Steps
1. Go to /etc/sysconfig/network-scripts and create a static route file for each replication interface:
# touch /etc/sysconfig/network-scripts/route-eth5
# touch /etc/sysconfig/network-scripts/route-eth6
2. Add the route for each interface, for example:
/etc/sysconfig/network-scripts/route-eth5
10.0.10.0/23 via 10.0.30.1
/etc/sysconfig/network-scripts/route-eth6
10.0.20.0/23 via 10.0.40.1
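The static-route files above can be generated in one short script. NET_SCRIPTS stands in for /etc/sysconfig/network-scripts so the sketch can run outside the SVM; the subnets and gateways are the guide's example values:

```shell
# Write one static-route file per replication interface.
# Replace the destination subnets and gateways with the values
# from your Logical Configuration Survey.
NET_SCRIPTS="${NET_SCRIPTS:-$(mktemp -d)}"
printf '10.0.10.0/23 via 10.0.30.1\n' > "$NET_SCRIPTS/route-eth5"
printf '10.0.20.0/23 via 10.0.40.1\n' > "$NET_SCRIPTS/route-eth6"
cat "$NET_SCRIPTS/route-eth5" "$NET_SCRIPTS/route-eth6"
```

Restart the network service (or bring the interfaces up) for the routes to take effect.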
Steps
1. Use WinSCP or SCP to copy the SDR package to the /tmp folder.
2. SSH to the SVM and run the following to install the SDR package: # rpm -ivh /tmp/EMC-ScaleIO-sdr-3.6-x.xxx.el7.x86_64.rpm
Steps
1. Select the storage pool from which to allocate the journal capacity.
2. Consider the minimum requirement (28 GB multiplied by the number of SDR sessions) and the expected outage time. The minimum outage allowance is one hour, but at least three hours is recommended. The journal capacity is the larger of these two values.
3. Calculate the journal capacity needed per application: maximum application throughput x maximum outage interval.
4. Calculate the percentage of capacity based on the previously calculated needs, as journal capacity is defined as a percentage of storage pool capacity.
For example, an application generates 1 GB/s of writes, and the maximum supported outage is three hours (3 hours x 3600 seconds = 10800 seconds). The journal capacity needed for this application is 1 GB/s x 10800 s = ~10.547 TB. Since journal capacity is expressed as a percentage of the storage pool capacity, divide 10.547 TB by the usable storage pool capacity, which is 200 TB: 100 x 10.547 TB / 200 TB = 5.27%; round this up to 6%.
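The sizing example above can be reproduced with a short calculation; the inputs are the example's values (1 GB/s of writes, a three-hour outage, a 200 TB usable pool) and should be replaced with your own:

```shell
# Journal sizing: throughput x outage window, as a % of pool capacity.
write_gbps=1            # sustained application writes, GB/s
outage_s=$((3 * 3600))  # supported outage window, seconds
pool_tb=200             # usable storage pool capacity, TB

summary=$(awk -v r="$write_gbps" -v s="$outage_s" -v p="$pool_tb" 'BEGIN {
  tb  = r * s / 1024          # journal need in TB (1024 GB per TB)
  pct = 100 * tb / p          # as a percentage of pool capacity
  printf "journal %.3f TB = %.2f%% of pool (round up)", tb, pct
}')
echo "$summary"
```

For the example inputs this prints journal 10.547 TB = 5.27% of pool, matching the worked example; configure 6% after rounding up.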
Steps
1. In the left pane, click Replication > Journal Capacity.
Prerequisites
The IP address of the node must be configured for SDR. The SDR communicates with several components:
● SDC (application)
● SDS (storage)
● Remote SDR (external)
Steps
1. In the left pane, click Protection > SDRs.
2. In the right pane, click Add.
3. In the Add SDR dialog box, enter the connection information of the SDR:
a. Enter the SDR name.
b. Update the SDR Port, if required (default is 11088).
c. Select the relevant Protected Domain.
d. Enter the IP Address of the MDM that is configured for SDR.
e. Select Role External for the SDR to SDR external communication.
f. Select Role Application and Storage for the SDR to SDC and SDR to SDS communication.
g. Click ADD SDR to initiate a connection with the peer system.
4. Verify that the operation completed successfully and click Dismiss.
5. Modify the IP address role if required:
a. From the PowerFlex GUI, in the left pane, click Protection > SDRs.
b. In the right pane, select the relevant SDR check box, and click Modify > Modify IP Role.
c. In the <SDR name> Modify IPs Role dialog box, select the relevant role for the IP address.
d. Click Apply.
e. Verify that the operation completed successfully and click Dismiss.
6. Repeat both tasks Adding journal capacity and Adding Storage Data Replicator (SDR) to PowerFlex system for source and
destination PowerFlex appliances.
Steps
1. Log in to the primary MDM using SSH on both the source and destination to extract and add the MDM certificates.
2. Type # scli --login --username admin and, after the password prompt, enter the MDM cluster password.
3. Extract the certificate on the source and destination primary MDM, type:
● For the source: #scli --extract_root_ca --certificate_file /tmp/source.crt
● For the destination: # scli --extract_root_ca --certificate_file /tmp/destination.crt
4. Copy the extracted certificate of the source (primary MDM) to the destination (primary MDM) using SCP, and vice versa.
Steps
1. Type # scli --login --username admin and, after the password prompt, enter the MDM cluster password.
NOTE: From the output, obtain the System ID. It is used in the following steps to add a peer system on the primary site.
2. Add the peer system to the primary site, type: # scli --add_replication_peer_system --peer_system_ip <remote system MDM management IPs> --peer_system_id <system ID of remote site> --peer_system_name <remote site name>
3. Add the peer system to the remote site, type: # scli --add_replication_peer_system --peer_system_ip <primary system MDM management IPs> --peer_system_id <system ID of primary site> --peer_system_name <primary site name>
NOTE:
● For a 3-node cluster, you need two comma-separated IP addresses (primary, secondary).
● For a 5-node cluster, you need three comma-separated IP addresses (primary, secondary1, secondary2).
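The comma-separated list that --peer_system_ip expects can be assembled as follows; the addresses are placeholders, not real MDM management IPs:

```shell
# Join the remote MDM management IPs with commas for --peer_system_ip.
# Two addresses for a 3-node cluster, three for a 5-node cluster.
MDM_IPS=(10.0.50.11 10.0.50.12)                 # placeholder values
peer_system_ip=$(IFS=','; echo "${MDM_IPS[*]}")
echo "$peer_system_ip"
```

Pass the resulting string directly to scli --add_replication_peer_system.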
NOTE: Do not map the volume that is created on target system to SDC.
Steps
1. Log in to the source site presentation server: https://<presentation_server_IP>:8443.
NOTE: Use the primary MDM IP address and credentials to log in to the PowerFlex cluster.
Steps
1. Log in to the primary MDM using SSH and log in to scli, type: # scli --login --username admin, then enter the MDM cluster password at the prompt.
2. Verify the replication status, type: # scli --query_all_replication_pairs.
Once the initial copy is complete, the PowerFlex replication system is ready for use.
Steps
1. From https://Presentation_Server_IP:8443 (PowerFlex GUI), in the left pane, click Protection > RCGs.
2. In the right pane, select the relevant RCG check box, and click Modify > Modify RPO.
3. In the Modify RPO for RCG <rcg name> dialog box, enter the updated RPO time and click Apply.
4. Verify that the operation completed successfully and click Dismiss.
Steps
1. On the menu bar, click Settings > Networks.
The Networks page opens.
2. Click Define.
c. Optionally, in the Primary DNS and Secondary DNS fields, enter the IP addresses of primary DNS and secondary DNS.
d. Optionally, in the DNS Suffix field, enter the DNS suffix to append for hostname resolution.
e. To add an IP address range, click Add IP Address Range. In the row, specify a starting and ending IP address for the
range.
Repeat this step to add IP address ranges as required. For example, you can use one range for the flex-rep1 network and a second range for the flex-rep2 network.
7. Click Save.
Prerequisites
Ensure the following conditions are met before you add an existing service:
● The vCenter, PowerFlex Gateway, CloudLink Center, and hosts must be discovered in the resource list.
● The PowerFlex Gateway must be in the service.
Steps
1. On the menu bar, click Services and then click + Add Existing Service.
2. On the Add Existing Service page, enter a service name in the Name field.
3. Enter a description in the Description field.
4. Select the Type for the service.
The choices are Hyperconverged, Compute Only, and Storage Only.
PowerFlex Manager checks to see whether there are any vCLS VMs on local storage. If it finds any, it puts the service in
lifecycle mode and gives you the opportunity to migrate these to shared storage.
5. To specify the compliance version to use for compliance, select the version from the Firmware and Software Compliance
list or choose Use PowerFlex Manager appliance default catalog.
You cannot specify a minimal compliance version when you add an existing service, since it only includes server firmware
updates. The compliance version for an existing service must include the full set of compliance update capabilities.
PowerFlex Manager does not show any minimal compliance versions in the Firmware and Software Compliance list.
NOTE: Changing the compliance version might update the firmware level on nodes for this service. Firmware on shared
devices is maintained by the global default firmware repository.
6. Specify the service permissions under Who should have access to the service deployed from this template? by
performing one of the following actions:
● To restrict access to administrators, select the Only PowerFlex Manager Administrators option.
● To grant access to administrators and specific standard users, select the PowerFlex Manager Administrators and
Specific Standard and Operator Users option, and perform the following tasks:
a. Click Add User(s) to add one or more standard or operator users to the list.
b. To delete a standard or operator user from the list, select the user and click Remove User(s).
In the Number of Instances box, provide the number of component instances that you want to include in the template.
9. On the Cluster Information page, enter a name for the cluster component in the Component Name field.
10. Select values for the cluster settings:
For a hyperconverged or compute-only service, select values for these cluster settings:
a. Target Virtual Machine Manager—Select the vCenter name where the cluster is available.
b. Data Center Name—Select the data center name where the cluster is available.
NOTE: Ensure that the selected vCenter has unique cluster names if there are multiple clusters in the vCenter.
c. Cluster Name—Select the name of the cluster you want to discover.
d. OS Image—Select the image or choose Use Compliance File ESXi image if you want to use the image provided with
the target compliance version. PowerFlex Manager filters the operating system image choices to show only ESXi images
for a hyperconverged or compute-only service.
For a storage-only service, select values for these cluster settings:
a. Target PowerFlex Gateway—Select the gateway where the cluster is available.
b. Protection Domain—Select the name of the protection domain in PowerFlex.
c. OS Image—Select the image or choose Use Compliance File Linux image if you want to use the image provided with
the target compliance version. PowerFlex Manager filters the operating system image choices to show only Linux images
for a storage-only service.
11. Click Next.
12. On OS Credentials page, select the OS credential that you want to use for each node and SVM.
You can select one credential for all nodes (or SVMs), or choose credentials for each item separately. You can create the
operating system credentials on the Credentials Management page under Settings.
PowerFlex Manager validates the credentials for the nodes and SVMs before it creates the service. This validation makes it
possible for PowerFlex Manager to run a full inventory on all nodes and SVMs before creating the service. The process of
running the inventory can take five to ten seconds to complete.
To import a VMware NSX-T or NSX-V configuration, PowerFlex Manager must have the operating system inventory to
recognize that NSX VIBs are on the node. Without the inventory, it is unable to tell if a node has NSX-T or NSX-V.
PowerFlex Manager runs the inventory on all nodes and SVMs for which the credentials are valid. The service uses any
nodes and SVMs for which it has a successful inventory. For example, if you have four nodes, and one node has an invalid
operating system password, PowerFlex Manager adds the three nodes for which the credentials are valid and ignores the
one with the invalid password.
17. To import a large number of general-purpose VLANs from vCenter, perform these steps:
a. Click Import Networks on the Network Mapping page.
PowerFlex Manager displays the Import Networks wizard. In the Import Networks wizard, PowerFlex Manager lists
the port groups that are defined on the vCenter as Available Networks. You can see the port groups and the VLAN IDs.
b. Optionally, search for a VLAN name or VLAN ID.
PowerFlex Manager filters the list of available networks to include only those networks that match your search.
c. Click each network that you want to add under Available Networks. If you want to add all the available networks, click
the check box to the left of the Name column.
d. Click the double arrow (>>) to move the networks you chose to Selected Networks.
PowerFlex Manager updates the Selected Networks to show the ones you have chosen.
e. Click Save.
18. Click Next.
19. Review the Summary page and click Finish when you are ready to add the service.
The process of adding an existing service causes no disruption to the underlying hardware resources. It does not shut down
any of the nodes or the vCenter.
For an existing service, the Reference Template field shows Generated Existing Service Template on the Service
Details page. You can distinguish existing services from new services that were deployed with PowerFlex Manager.
When PowerFlex Manager must put a service in lifecycle mode, the Summary page for the Add Existing Service wizard
displays a warning message indicating the reason.
In some situations, an imported configuration might not meet the minimal requirements for lifecycle mode. In this case,
PowerFlex Manager does not allow you to add the service.
Next steps
When you add an existing service, PowerFlex Manager matches the hosts, vCenter, and other items it finds with discovered
resources in the resource list. If you missed a component initially, you can change your resource inventory, and update the
service to reflect these changes. Go back to the resources list, select the component, and mark it as Managed by selecting
Change resource state to Managed. Then, perform an Update Service Details operation on the service to pull in the
missing component.
When you deploy an existing service, PowerFlex Manager reserves any IP addresses from vCenter or the PowerFlex Gateway
that it needs. If you later tear down the service, it releases those IP addresses so that they can be reused.
Prerequisites
Use a standard tool to generate simulated IOPS. A simple way to do this is to load a Linux VM and use the flexible I/O tester (fio) to generate random read and write IOPS.
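The exact fio command line is not reproduced in this copy of the guide. The following is one plausible random read/write mix; every parameter (block size, queue depth, run time, target file) is an assumption to adjust for your environment:

```shell
# Hypothetical fio invocation: 70/30 random read/write mix at 8 KB
# blocks against a scratch file. None of these values come from the
# guide -- tune them to the workload you want to simulate.
FIO_CMD="fio --name=randrw --rw=randrw --rwmixread=70 --bs=8k \
--ioengine=libaio --direct=1 --iodepth=32 --numjobs=4 --size=10g \
--runtime=300 --time_based --filename=/tmp/fio.test"
echo "$FIO_CMD"
```

Run the command on the Linux VM while watching the PowerFlex GUI dashboard described in the steps below.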
Steps
1. To retrieve overall performance metrics:
a. Launch the PowerFlex GUI.
b. In the Dashboard, look at the PERFORMANCE data.
c. The Dashboard displays the following:
● Overall system IOPs
● Overall system bandwidth
● Overall system latency
2. To retrieve volume-specific metrics:
a. Launch the PowerFlex GUI.
b. In the Dashboard, select CONFIGURATION > Volumes.
3. To retrieve SDS-specific metrics:
a. Launch the PowerFlex GUI.
b. In the Dashboard, Select CONFIGURATION > SDSs.
Prerequisites
Use a standard tool to generate simulated IOPS. A simple way to do this is to load a Linux VM and use the flexible I/O tester (fio) to generate random read and write IOPS.
Steps
1. To retrieve overall performance metrics:
a. Launch the PowerFlex GUI.
Mode Description
Instant maintenance mode: Perform short-term maintenance that lasts less than 30 minutes. The node is immediately and temporarily removed from active participation. PowerFlex Manager does not migrate the data.
Protected maintenance mode: Perform maintenance or updates that require longer than 30 minutes in a safe and protected manner. PowerFlex makes a temporary copy of the data, so that the cluster is fully protected from data loss. Protected maintenance mode applies only to PowerFlex hyperconverged and storage-only nodes. It requires that the sum of the spare capacity and the free capacity is greater than the size of the node being put into protected maintenance mode.
Keep the following restrictions in mind when using protected maintenance mode:
● Do not put two nodes from the same protection domain into an instant maintenance mode or protected maintenance mode
simultaneously.
● You cannot mix protected maintenance mode and instant maintenance mode on the same protection domain simultaneously.
● All SDSs in protected maintenance mode concurrently must belong to the same fault set (no inter-protection domain
dependencies for protected maintenance mode).
Maintenance modes
Different types of maintenance are available through PowerFlex.
Types of maintenance
Instant maintenance mode: When a node is placed in instant maintenance mode, the node is immediately and temporarily removed from active participation. This node does not build a new copy of the data on any of the other nodes. Existing data is temporarily unavailable, and a rebuild is not triggered when the node goes offline. The system suffers data unavailability until the node under maintenance is recovered and the changes are applied to it. Instant maintenance mode enables you to perform updates quickly. PowerFlex Manager does not migrate the data when the node is placed in instant maintenance mode.
Protected maintenance mode: Protected maintenance mode is designed to avoid the disadvantages of instant maintenance mode. It always keeps duplicate copies of the data and avoids many-to-one rebuilds. When a node is placed in protected maintenance mode, PowerFlex creates a new, temporary copy of the data by leveraging the many-to-many rebalance, and leaves the data on the node being maintained in place. This makes three copies, although only two are available.
● Protected maintenance mode enables you to perform updates that require longer than 30 minutes in a safe and protected manner.
● Protected maintenance mode is a more secure maintenance mode compared with instant maintenance mode.
Steps
1. Log in to PowerFlex Manager.
2. On the Services page, select a service, and click View Details in the right pane.
3. Click Enter Service mode under Service Actions.
NOTE: The service must have at least three nodes to enter protected maintenance mode using PowerFlex Manager.
4. Select one or more nodes on the Node Lists page, and click Next.
NOTE: For an environment with fault sets, PowerFlex Manager can put a single node or a full fault set into protected maintenance mode. For an environment without fault sets, PowerFlex Manager requires a minimum of four nodes to use protected maintenance mode.
Steps
1. Log in to PowerFlex Manager.
2. On the Services page, select the service.
3. Click Exit Service Mode.
Steps
1. Log in to CloudLink Center.
2. Select System > License.
3. Click Upload License.
4. Browse to the license file and click Upload.
NOTE: If the CloudLink environment is managed by PowerFlex Manager, after you update the license, go to the
Resources page, select the CloudLink VMs, and click Run Inventory.
Steps
1. Log in to PowerFlex Manager.
2. Click Settings > Software Licenses, and click Add.
3. Click Choose File, and browse the license file.
4. Select Type as CloudLink, and click Save.
5. From Resource, select the CloudLink VMs, and click Run inventory.
Steps
1. Log in to PowerFlex Manager.
2. Click Settings > Software Licenses.
3. Select the license you want to delete, and click Delete.
4. From Resource, select the CloudLink VMs, and click Run inventory.
Steps
1. Log in to CloudLink Center.
2. Click Server > Change Syslog Format. The Change Syslog Format dialog box is displayed.
3. From the Syslog Format list, select Custom.
4. Enter the string for the syslog entry, and click Change.
Steps
1. Log in to the CloudLink Center.
2. Go to System > Keystore > Add.
3. Provide any name and description and click Next.
4. Select Key Location Type as Local Database.
5. Select the Protector Type as KMIP.
6. Enter the following information:
● KMIP server address
● Username (secadmin)
● Password
● Upload the three ZIP files downloaded from the KMIP server
7. Click Test. A message is displayed confirming that the protector is accessible.
8. Click Add. The KMIP keystore is now available under the CloudLink keystore.
● To use this KMIP for a new service, while creating the template in PowerFlex Manager, select CloudLink Center
Settings > KMIP keystore.
● For an existing service, edit the machine group used by the service.
a. Go to CloudLink Center > Agents > Machine Group > Actions > Modify.
b. Change the Keystore to KMIP Keystore and click Modify.
c. Once the Keystore is changed, remove the service from PowerFlex Manager and add the existing service from the
Services page.
Steps
From the CloudLink Center, select Agent > Machines, click Actions, and select Manage SED. Ownership of the encryption key is enabled.
NOTE: This option is only available if an SED license is uploaded and an SED is detected in the physical machine managed by CloudLink Center. The Manage SED option does not change data on an SED; it only takes ownership of the encryption key.
Steps
1. Log in to the Storage Data Servers (SDS).
2. To manage the SED from the command line, type svm manage [device name].
For example, svm manage /dev/sdb.
Steps
1. From CloudLink Center, go to Agents > Machines and select SDS Machine. Click Release SED.
2. From RELEASE SED, use the menu to select the SED drive that you want to release and click Release.
NOTE: The Release SED option does not change any data on the SED.
Steps
1. Log in to the Storage Data Server (SDS).
2. To release the SED from the command line, type svm release [device name].
For example, svm release /dev/sdb.
Steps
1. Open a web browser and log in to either CloudLink VM.
2. Log in with secadmin username and password (VMwar3123!!).
3. On the upper right corner, click secadmin, and click Change Password.
Steps
1. Log in to the VMware vCenter that manages CloudLink center VMs and launch the CloudLink VM Console.
2. Log in with CloudLink user credentials.
The Summary page displays.
3. Click OK.
4. Type the CloudLink user password on the Re-enter password page.
5. On Update Menu, select Unlock User, and click OK.
The User secadmin has been unlocked message is displayed.
6. Click OK.
7. To test the changes, log in to CloudLink VM IP using the secadmin user, and the correct password.
Steps
1. To set or change the passcode, log in to the CloudLink Center.
2. Go to System > Vault > Actions > Set passcodes.
3. Update passcodes and click Set passcodes.
NOTE: You can change passcodes at any time.
Steps
To view the backup information, log in to the CloudLink Center, and click System > Backup. The Backup page lists the
following information:
When a backup file is downloaded, the Backup page lists the following additional information:
Terminology Information
Last Downloaded File The name of the backup file that was last downloaded. Only shown when a backup file has been downloaded.
Last Downloaded Time The date and time of the last backup file download. Only shown when a backup file has been downloaded.
Backup Store The backup store configuration type. If you have not configured a backup store, the value is Local, and backups are stored on the local desktop.
You can also use FTP or SFTP servers as backup stores. To change the backup store, click System > Backup > Actions > Change Backup Store.
If you have configured an FTP or SFTP backup store, the following additional information is available:
Terminology Information
Host The remote FTP, SFTP, or FTPS host where you saved the CloudLink
Center backups. You can set this value to the host IP address or
hostname (if DNS is configured).
Port The port used to access the backup store.
User The user with permission to access the backup store.
Directory The directory in the backup store where backup files are available.
Steps
To change the schedule for generating automatic backups, click System > Backup > Actions > Change Backup Schedule.
Steps
In the CloudLink Center, click System > Backup > Actions > Generate new backup.
Steps
1. Download the private key to the Downloads folder for the current user account. For example, C:
\Users\Administrator\Downloads.
NOTE: After a new key is generated, the previously generated backup key cannot open backup files created from that point on.
2. Click System > Backup > Actions > Generate And Download New Key.
Steps
1. Click System > Backup > Actions > Download Backup.
2. In the Download Current Backup dialog box, click Download.
When you download the current backup file, CloudLink Center shows the age of the backup file.
Steps
1. Log in to the CloudLink Center.
2. Click System > Backup > Actions > Restore keystores.
Prerequisites
Verify that all startup configurations for the network switches are saved.
Steps
1. Launch the PowerFlex GUI and log in to the primary PowerFlex MDM. Verify the PowerFlex cluster is healthy and no rebuild
or rebalances are running by observing the Rebuild and Rebalance widgets on the dashboard.
2. Log in to the VMware vSphere Client of the vCenter that manages the PowerFlex appliance cluster.
a. Expand the clusters.
b. Shut down all customer/application VMs (not SVMs) running on the PowerFlex storage datastores.
CAUTION: Do not shut down the SVMs as this can cause data loss.
3. In PowerFlex GUI:
4. From the VMware vSphere Client of the vCenter that manages the PowerFlex appliance cluster:
a. Shut down all the SVMs.
b. Disable DRS and HA on the PowerFlex appliance cluster and put the nodes into Maintenance Mode.
5. From the VMware vSphere Client of the vCenter that manages the PowerFlex Gateway VM and CloudLink center VM:
a. Shut down the PowerFlex Gateway VM.
b. Shut down both CloudLink Center presentation server VMs.
6. Use iDRAC to do a Graceful Shutdown on the PowerFlex appliance nodes.
8. If required, power off the access switches first and then the management switch.
Prerequisites
Verify that all connections are correct and seated properly.
Steps
1. Power on the network components in the following order:
NOTE: Network components take about 10 minutes to power on.
a. Management switch
b. Access switches
NOTE: Ping the management IP address of the switches to verify power on is complete.
2. Using the appropriate VMware vSphere Client, power on these VMs in the following order:
a. PowerFlex Gateway presentation server
b. Both CloudLink Center VMs
c. PowerFlex Manager
3. Power on the PowerFlex appliance nodes and do the following:
a. Use SSH to connect to all network switches.
b. Verify that connected interfaces are not in a not connected/down state, with the command: show interface
status.
c. Use iDRAC to power on all the PowerFlex appliance compute nodes and verify that they are fully booted to the ESXi
screen.
d. Using the VMware vSphere Client of the vCenter that manages the PowerFlex appliance cluster, take each
PowerFlex appliance node out of Maintenance Mode.
i. Power on all SVMs.
ii. Enable DRS and HA on the PowerFlex appliance cluster.
e. Log in to PowerFlex.
f. From the VMware vSphere client that manages the PowerFlex appliance cluster, do the following:
i. Rescan to rediscover PowerFlex storage datastores.
ii. Power on the customer VMs. VMs might be displayed as inaccessible because PowerFlex storage is not available until
all the SVMs complete initialization.
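The NOTE about pinging the switch management IP addresses before proceeding can be automated with a small loop. The following is a minimal sketch, assuming the switches answer ICMP and Linux iputils ping options (-c, -W); the IP addresses you pass in are placeholders for your own management and access switch addresses:

```shell
#!/bin/sh
# Hedged sketch: poll each switch management IP by ping until it answers,
# then report it as up. Assumes Linux iputils ping (-c count, -W timeout).

wait_for_switches() {
    for ip in "$@"; do
        # Retry every 10 seconds until the switch answers a single ping.
        until ping -c 1 -W 2 "$ip" >/dev/null 2>&1; do
            echo "waiting for $ip..."
            sleep 10
        done
        echo "$ip is up"
    done
}

# Example (placeholder addresses):
# wait_for_switches 192.0.2.1 192.0.2.2 192.0.2.3
```

Run this from a management host before powering on the VMs, so the script blocks until the switches are reachable.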
Prerequisites
Verify that all startup configurations for the network switches are saved.
Steps
1. Launch the PowerFlex GUI and log in to the primary PowerFlex MDM. Verify the PowerFlex cluster is healthy and no rebuild
or rebalances are running by noting the Rebuild and the Rebalance widgets on the dashboard.
2. In the VMware vSphere Web Client that manages the PowerFlex appliance cluster compute-only nodes:
a. Expand the clusters and shut down all application VMs running on the PowerFlex storage datastores.
b. Disable DRS and HA on the customer compute cluster.
c. Put the PowerFlex appliance compute nodes into Maintenance Mode.
3. Use iDRAC to do a Graceful Shutdown on the PowerFlex appliance compute nodes.
4. In PowerFlex GUI:
5. SSH to each of the PowerFlex appliance storage-only nodes and shut down the nodes by typing shutdown -h.
6. Use iDRAC to confirm the PowerFlex appliance storage nodes have been powered off.
7. In the VMware vSphere Web Client that manages the PowerFlex Gateway VM:
a. Shut down the PowerFlex Gateway by running the command shutdown -h in the console.
b. Confirm PowerFlex Gateway VM is shut down by observing if vSphere shows the VM as Powered Off.
8. Using the appropriate VMware vSphere web client, shut down both CloudLink center VMs.
9. Using the appropriate VMware vSphere web client, shut down the PowerFlex Manager VM.
NOTE: If you shut down the PowerFlex Manager VM while a job (such as a service deployment) is still in progress, the
job will not complete successfully.
10. Power off the access switches first and then the management switch.
Prerequisites
Verify that all connections are correct and properly seated.
Steps
1. Power on the network components in the following order:
NOTE: Network components take about 10 minutes to power on.
a. Management switch
b. Access switches
NOTE: Ping the management IP address of the switches to verify power on is complete.
2. Using the appropriate VMware vSphere web client, power on these VMs in the following order:
a. PowerFlex Gateway
b. Both CloudLink Center VMs
c. PowerFlex Manager
3. Power on the PowerFlex appliance nodes by doing the following:
a. Use SSH to connect to all network switches:
● To verify that connected interfaces are not in a "not connected/down" state, use the command: show interface
status
b. Use iDRAC to power on all the PowerFlex appliance storage nodes and verify that they are fully booted to the Linux
prompt.
c. Log in to the PowerFlex GUI.
d. Use iDRAC to power on all the PowerFlex appliance compute nodes and verify that they are fully booted to the VMware
ESXi console screen.
e. Using the VMware vSphere Web Client of the vCenter that manages the PowerFlex appliance cluster:
i. Take each PowerFlex appliance compute-only node out of maintenance mode.
ii. Enable DRS and HA on the PowerFlex appliance compute-only cluster.
iii. Rescan to rediscover PowerFlex storage datastores.
iv. Power on the customer VMs.
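The interface check in step 3a can be scripted instead of inspected by eye. The following is a minimal sketch that filters "show interface status" output for links that are not up; the exact column layout and status keywords vary by switch OS, so the grep pattern is an assumption to adjust for your switches:

```shell
#!/bin/sh
# Hedged sketch: print only the lines of "show interface status" output whose
# status suggests the link is not up. Status keywords vary by switch OS.

check_down_interfaces() {
    # Reads "show interface status" output on stdin; prints interfaces
    # reported as notconnect, not connected, or down.
    grep -Ei 'notconnect|not connected|[[:space:]]down'
}

# Example (hypothetical switch hostname):
# ssh admin@switch 'show interface status' | check_down_interfaces
```

An empty result means no connected interface is reporting a down state, so it is safe to continue with step 3b.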
Steps
1. Connect to the Windows Server system from Remote Desktop with an account that has administrator privileges.
2. Power off using one of the following methods:
a. GUI: Click Start > Power > Shutdown.
b. Command line using PowerShell: Run the Stop-Computer cmdlet.
Steps
SSH to the PowerFlex appliance Red Hat compute-only nodes and shut down the nodes by using the command:
shutdown -h
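The per-node SSH shutdown above can be wrapped in a small loop when there are many compute-only nodes. This is a minimal sketch, assuming root SSH access to each node; the IP addresses are placeholders, and the DRY_RUN flag is a convenience added here so the commands can be previewed before running them for real:

```shell
#!/bin/sh
# Hedged sketch: gracefully shut down a list of Red Hat compute-only nodes
# over SSH. Node addresses and the root user are placeholders.

shutdown_nodes() {
    for node in "$@"; do
        if [ "${DRY_RUN:-0}" = "1" ]; then
            # Preview mode: print the command instead of executing it.
            echo "ssh root@$node shutdown -h"
        else
            ssh "root@$node" shutdown -h
        fi
    done
}

# Example (placeholder addresses):
# DRY_RUN=1; shutdown_nodes 192.0.2.11 192.0.2.12
```

After the loop completes, use iDRAC to confirm the nodes have powered off, as in the storage-node procedure.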