October 2022
Rev. 2.2
Internal Use - Confidential
NOTE: A NOTE indicates important information that helps you make better use of your product.
CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid
the problem.
WARNING: A WARNING indicates a potential for property damage, personal injury, or death.
© 2021 - 2022 Dell Inc. or its subsidiaries. All rights reserved. Dell Technologies, Dell, and other trademarks are trademarks of Dell Inc. or its
subsidiaries. Other trademarks may be trademarks of their respective owners.
Contents

Revision history

Chapter 1: Introduction
    LACP bonding NIC port design VLAN names

Part II: Converting a PowerFlex controller node with a PERC H755 to a PowerFlex management controller 2.0

Part III: Adding a PowerFlex controller node with a PERC H755 to a PowerFlex management controller 2.0
    Chapter 15: Enable PCI passthrough for PERC H755 on the PowerFlex management controller
    Chapter 16: Install the PowerFlex storage data client (SDC) on the PowerFlex management controller
    Chapter 17: Configure PowerFlex storage data client (SDC) on the PowerFlex management controller
    Chapter 21: Migrate the PowerFlex controller node to the PowerFlex management controller 2.0
    Chapter 23: Add PowerFlex management controller service to PowerFlex Manager
    Chapter 24: Update the PowerFlex management controller 2.0 service details

Part IV: Adding a PowerFlex management node to a PowerFlex management controller 1.0 with VMware vSAN
    Chapter 37: Migrate VMware vCenter server appliance 7.0 from PERC-01 datastore to vSAN datastore
    Chapter 39: Enable VMware vCSA high availability on PowerFlex management controller vCSA
    Verify newly added SVMs or storage-only nodes machine status in CloudLink Center
    Chapter 45: Configuring the hyperconverged or compute-only transport nodes
        Configure VMware NSX-T overlay distributed virtual port group
        Convert trunk access to LACP-enabled switch ports for cust_dvswitch
    Chapter 46: Add a Layer 3 routing between an external SDC and SDS
    Chapter 51: Configuring the hyperconverged or compute-only transport nodes
        Configure VMware NSX-T overlay distributed virtual port group
        Convert trunk access to LACP-enabled switch ports for cust_dvswitch
    Chapter 52: Add a Layer 3 routing between an external SDC and SDS
    Chapter 64: Updating the storage data client parameters (VMware ESXi 6.x)
Revision history

Date           Document revision   Description of changes
October 2022   2.2                 Updated the PowerFlex data network requirements.
May 2022       2.1                 Updated Install and configure the SDC. Added support for VMware vSphere Client 7.0 U3c.
1
Introduction
This guide contains information for expanding compute, network, storage, and management components of PowerFlex appliance
after installation at the customer site.
The information in this guide is for PowerFlex appliance based on the following expansion scenarios:
● Dell PowerEdge R650, R750, or R6525 servers that are expanding with PowerEdge R650, R750, or R6525 servers.
● Dell PowerEdge R650, R750, or R6525 servers that are expanding with PowerEdge R640, R740xd, or R840 servers.
● Dell PowerEdge R640, R740xd, or R840 servers that are expanding with PowerEdge R640, R740xd, or R840 servers,
including those servers with VMware NSX-T.
Depending on when the system was built, it will have one of the following PowerFlex management controllers:

Controller                            Description
PowerFlex management controller 2.0   R650-based PowerFlex management controller that uses PowerFlex storage and a VMware ESXi hypervisor
PowerFlex management controller 1.0   R640-based PowerFlex management controller that uses vSAN storage and a VMware ESXi hypervisor
The PowerFlex R650 controller node with PowerFlex can have either of the following RAID controllers:
● PERC H755: PowerFlex Manager puts a PowerFlex management controller 2.0 with PERC H755 service in lifecycle mode.
If you are adding a PowerFlex controller node to a PowerFlex management controller 2.0 with PowerFlex, delete the RAID
and convert the physical disks to non-RAID disks. See Adding a PowerFlex controller node with a PERC H755 to a PowerFlex
management controller 2.0 for more information.
● HBA355: PowerFlex Manager puts a PowerFlex management controller 2.0 with HBA355 service in managed mode. Use
PowerFlex Manager to add a PowerFlex controller node with HBA355i to a PowerFlex management controller 2.0. See
Adding a PowerFlex R650/R750/R6525 node to a PowerFlex Manager service in managed mode for more information.
This guide provides instructions for:
● Performing the initial expansion procedures
● Converting a PowerFlex controller to a controller based on PowerFlex
● Adding a PowerFlex controller node to a PowerFlex management controller 2.0
● Adding a PowerFlex management node to a PowerFlex management controller 1.0 with vSAN
● Adding a PowerFlex R650/R750/R6525 node to a PowerFlex Manager service in managed mode
● Adding a PowerFlex R640/R740xd/R840 node to a PowerFlex Manager service in managed mode
● Adding a PowerFlex R650/R750/R6525 node to a PowerFlex Manager service in lifecycle mode
● Adding a PowerFlex R640/R740xd/R840 node to a PowerFlex Manager service in lifecycle mode
● Adding a VMware NSX-T Edge node
● Completing the expansion
There are UI changes for the VMware vSphere Client 7.0 U3c update:
● On the Home screen, there is no Menu button. The new menu is next to vSphere Client.
● From the Add DVSwitch menu, the New Host option is no longer available.
This guide might contain language that is not consistent with Dell Technologies' current guidelines. Dell Technologies plans to revise the language in future releases.
The target audience for this document is Dell Technologies sales engineers, field consultants, and advanced services specialists.
VLANs flex-vmotion-<vlanid> and flex-vsan-<vlanid> are only required for PowerFlex management controller 1.0.
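As a hedged illustration, these names are a fixed prefix plus the VLAN ID. The sketch below uses the controller 1.0 example VLANs from this guide (106 and 113); which example ID maps to which network is an assumption:

```shell
# Sketch: building the VLAN-suffixed port-group names used in this guide.
# Which example VLAN ID belongs to which network is an assumption here;
# substitute the VLAN IDs of your own build.
vmotion_vlan=106
vsan_vlan=113

vmotion_pg="flex-vmotion-${vmotion_vlan}"
vsan_pg="flex-vsan-${vsan_vlan}"

echo "$vmotion_pg"
echo "$vsan_pg"
```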
I
Performing the initial expansion procedures
Use this section to perform the initial procedures common to both expansion scenarios.
Before adding a PowerFlex node, you must complete the following initial set of expansion procedures:
● Preparing to expand a PowerFlex appliance
● Configuring the network
After you complete the initial procedures in this section, see the following sections depending on the expansion scenario:
● Converting a PowerFlex controller node with a PERC H755 to a PowerFlex management controller 2.0
● Adding a PowerFlex controller node with a PERC H755 to a PowerFlex management controller 2.0
● Adding a PowerFlex management node to a PowerFlex management controller 1.0 with VMware vSAN
● Adding a PowerFlex R650/R750/R6525 node to a PowerFlex Manager service in managed mode
● Adding a PowerFlex R640/R740xd/R840 node to a PowerFlex Manager service in managed mode
● Adding a PowerFlex R650/R750/R6525 node in lifecycle mode
● Adding a PowerFlex R640/R740xd/R840 node to a PowerFlex Manager service in lifecycle mode
2
Preparing to expand a PowerFlex appliance
This section contains the steps you take to prepare a PowerFlex appliance.
Before adding a PowerFlex node, make a note of the IP addresses, FQDN, and the type of PowerFlex node for an expansion.
This information is used in multiple locations throughout the document.
Network requirements
Specific networks are required for a PowerFlex appliance deployment. Each network requires enough IP addresses allocated for the deployment and future expansion. If the access switches are supported by PowerFlex Manager, PowerFlex Manager configures the switch ports. Manually configure the switch ports for the PowerFlex management controller and for services discovered in lifecycle mode.
The column definitions are:
● To allow for a takeover of the PowerFlex management controller 2.0, the following VLANs are configured as general-purpose LAN in PowerFlex Manager: 101, 103, 151, 152, 153, 154.
NOTE: These VLANs will also need to be configured as their default network types.
● Example VLANs: Lists the VLAN numbers that are used in the PowerFlex hyperconverged node example.
NOTE: VLANs 140 through 143 are only required for PowerFlex management controller 2.0. VLANs 106 and 113 are only
required for PowerFlex management controller 1.0.
● Networks or VLANs: Names the network or VLAN defined by PowerFlex Manager.
● Description: Describes each network or VLAN.
● Where configured: Indicates which resources have interfaces that are configured on the network or VLAN. The resource
definitions are:
○ PowerFlex node: PowerFlex hyperconverged node
○ PowerFlex Manager: Deploys and manages the PowerFlex appliance
○ PowerFlex gateway: Provides installation services and REST API for the PowerFlex appliance cluster.
○ Access switches: PowerFlex Manager configures the node facing ports of these switches. You configure the other ports
on the switch (management, uplinks, interconnects, and the switch ports for the PowerFlex management node).
NOTE: If PowerFlex Manager does not support the access switches, the Partial networking template must be used, and the customer must configure the switches before deploying the service. For more information, see the Customer switch port configuration examples section of the Dell EMC PowerFlex Appliance Administration Guide.
○ Embedded operating system-based jump VM: The server used to manage PowerFlex appliance.
○ CloudLink Center: Provides key management and encryption for PowerFlex.
○ PowerFlex GUI presentation server: Provides web GUI interface for configuring and managing the PowerFlex cluster.
The following table lists VLAN descriptions:
NOTE:
● A minimum of two logical data networks are supported. Optionally, you can configure four logical data networks.
● Make sure that each network has a unique VLAN ID and there are no shared VLANs.
Software requirements
Software requirements must be met before you deploy a PowerFlex appliance.
For all the examples in this document, the following conventions are used:
● The third octet of each example IP address matches the VLAN ID of the interface.
● All networks in the example have a subnet mask of 255.255.255.0.
● Use the same password when possible. For example: P@ssw0rd!.
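The addressing convention above can be sketched in one line; a minimal example (the VLAN ID and host octet are example values in the style this guide uses, not prescribed ones):

```shell
# Sketch: the example addressing convention, where the third octet of the
# address equals the VLAN ID and all networks use a /24 subnet mask.
vlan=101   # example VLAN ID
host=46    # example host octet
addr="192.168.${vlan}.${host}/24"

echo "$addr"
```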
The following table lists the virtual machine sizing guidelines:
NOTE: For PowerFlex management controller 2.0, verify the capacity before adding VMs to the general volume. If there is not enough capacity, expand the volume before proceeding. For more information on expanding a volume, see the Dell EMC PowerFlex Appliance Administration Guide.
The following PowerFlex R6525 node example is from the manufacturing documentation kit:
Related information
Configure the host
The following information describes the cabling requirements for PowerFlex management node:
PowerFlex R750 node with NVMe and GPU (SW) Dual CPU
Slot 0 (OCP) CX5
Slot 1 Empty
Slot 2 Empty
Slot 3 GPU (A10/T4:SW)
Slot 4 Empty
Slot 5 Empty
Slot 6 CX5
Slot 7 Empty
Slot 8 Empty
Slot layout
PowerFlex R750 node with NVMe and 2 GPUs (DW) Dual CPU
Slot 0 (OCP) CX5
Slot 1 Empty
Slot 2 GPU2 (DW)
Slot 3 Empty
Slot 4 Empty
Slot 5 Empty
Slot 6 CX5
Slot 7 GPU1 (DW)
Slot 8 Empty
Slot layout
PowerFlex R750 node with NVMe and 2 GPUs (DW) Dual CPU
Slot 0 (OCP) CX5
Slot 1 Empty
Slot 2 Empty
Slot 3 Empty
Slot 4 Empty
Slot 5 Empty
Slot 6 CX5
Slot 7 GPU1 (DW)
Slot 8 Empty
See the standard as-built documents for a standard factory build; these contain complete documentation for adding the hardware and cabling, and cable labels.
In a default PowerFlex setup, two data networks are standard. Four data networks are only required for specific customer requirements, for example, high performance or the use of trunk ports.
The following PowerFlex R740xd node example is from the manufacturing documentation kit:
In the example, PowerFlex R840 compute-only node 1-1-00:01 indicates the following:
● PowerFlex R840 compute-only node 1-1 is the first PowerFlex R840 compute-only node.
● 00:01 is slot:port, which is slot 0 port 1 on the PowerFlex R840 node.
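The slot:port labels in these examples can be split mechanically; a small sketch using POSIX parameter expansion:

```shell
# Sketch: splitting a <slot>:<port> cabling label such as "00:01"
# (slot 0, port 1) into its two fields.
label="00:01"
slot=${label%%:*}   # text before the colon: "00"
port=${label##*:}   # text after the colon: "01"

# Drop one leading zero for display (fields are two digits in these examples).
msg="slot ${slot#0} port ${port#0}"
echo "$msg"
```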
Related information
Configure the host
Slot layout

PowerFlex R640 management controller node   Single CPU small   Dual CPU large
Slot 0                                      X710 / i350        X710 / i350
Slot 1                                      X710               X710
Slot 2                                      X550               X550
Slot 3                                      Empty              Empty
Description            Slot:Port   VMW label        DVSwitch (non-bonded and static bonding NIC)   DVSwitch (LACP bonding NIC)
Trunk 1 to switch A    00:01       vmnic0           vDS0                                           FE_dvswitch
Trunk 1 to switch B    01:01       vmnic4           vDS0                                           FE_dvswitch
Trunk 2 to switch A    01:02       vmnic5           vDS1                                           BE_dvswitch
Trunk 2 to switch B    00:02       vmnic1           vDS1                                           BE_dvswitch
To management switch   02:01       vmnic6           vDS2                                           oob_dvswitch
iDRAC - OOB network    M0          Not applicable   Not applicable                                 Not applicable
Slot layout

PowerFlex R640 management controller node with BOSS card   Single CPU small
Slot 0                                                     X710 / i350
Slot 1                                                     BOSS
Slot 2                                                     X550
Slot 3                                                     Empty

Slot matrix for PowerFlex R640 management controller node with BOSS card - small
NOTE: For non-bonded NIC port design, configure the ports 00:02 and 01:02 as access instead of trunk ports.
Description            Slot:Port   VMW label        DVSwitch (non-bonded and static bonding NIC)   DVSwitch (LACP bonding NIC)
Trunk 1 to switch A    00:01       vmnic0           vDS0                                           FE_dvswitch
Trunk 1 to switch B    00:03       vmnic2           vDS0                                           FE_dvswitch
Trunk 2 to switch A    00:04       vmnic3           vDS1                                           BE_dvswitch
Trunk 2 to switch B    00:02       vmnic1           vDS1                                           BE_dvswitch
To management switch   02:01       vmnic4           vDS2                                           oob_dvswitch
iDRAC - OOB network    M0          Not applicable   Not applicable                                 Not applicable
Slot layout

PowerFlex R640 management controller node with BOSS card   Dual CPU large
Slot 0                                                     X710 / i350
Slot 1                                                     BOSS
Slot 2                                                     X710
Slot 3                                                     X550

Slot matrix for PowerFlex R640 management controller node with BOSS card - large
Description            Slot:Port   VMW label        DVSwitch (non-bonded and static bonding NIC)   DVSwitch (LACP bonding NIC)
Trunk 1 to switch A    00:01       vmnic0           vDS0                                           FE_dvswitch
Trunk 1 to switch B    02:01       vmnic4           vDS0                                           FE_dvswitch
Trunk 2 to switch A    02:02       vmnic5           vDS1                                           BE_dvswitch
Trunk 2 to switch B    00:02       vmnic1           vDS1                                           BE_dvswitch
To management switch   03:01       vmnic6           vDS2                                           oob_dvswitch
iDRAC - OOB network    M0          Not applicable   Not applicable                                 Not applicable
Slot layout

Logical network

Data networks are configured with port channel vPC or VLT. Two logical data networks are created in flex_dvswitch: data1 and data2.
    Trunk 1 to switch B   03:01   vmnic6           cust_dvswitch
    Trunk 2 to switch A   03:02   vmnic7           flex_dvswitch (data)
    Trunk 2 to switch B   02:02   vmnic5           flex_dvswitch (data)
    iDRAC - OOB network   M0      Not applicable   Not applicable

LACP bonding NIC port design: Data networks are configured with port channel vPC or VLT (LACP enabled). Four logical data networks are created in flex_dvswitch: data1, data2, data3 (if required), data4 (if required), rep1, and rep2. There are no changes in physical connectivity from node to switch.
    Trunk 1 to switch A   02:01   vmnic4           cust_dvswitch
    Trunk 1 to switch B   03:01   vmnic6           cust_dvswitch
    Trunk 2 to switch A   03:02   vmnic7           flex_dvswitch (data)
    Trunk 2 to switch B   02:02   vmnic5           flex_dvswitch (data)
    iDRAC - OOB network   M0      Not applicable   Not applicable

Slot layout

Logical network

Two logical data networks are created in flex_dvswitch: data1 and data2.
    Trunk 2 to switch A   03:02   vmnic7           flex_dvswitch (data)
    Trunk 2 to switch B   02:02   vmnic5           flex_dvswitch (data)
    iDRAC - OOB network   M0      Not applicable   Not applicable

LACP bonding NIC port design: Data networks are configured with port channel vPC or VLT (LACP enabled). Four logical data networks are created in flex_dvswitch: data1, data2, data3 (if required), and data4 (if required). There are no changes in physical connectivity from node to switch.
    Trunk 1 to switch A   02:01   vmnic4           cust_dvswitch
    Trunk 1 to switch B   03:01   vmnic6           cust_dvswitch
    Trunk 2 to switch A   03:02   vmnic7           flex_dvswitch (data)
    Trunk 2 to switch B   02:02   vmnic5           flex_dvswitch (data)
    iDRAC - OOB network   M0      Not applicable   Not applicable
Slot layout

Logical network

LACP bonding NIC port design: Data networks are configured with port channel vPC or VLT (LACP enabled). There are four logical data networks: data1 and data3 (if required) are a part of bond0, and data2 and data4 (if required) are a part of bond1. There are no changes in physical connectivity from node to switch.
    Trunk 2 to switch A   02:02   p2p2             bond1 (data2, data4, rep2)
    Trunk 2 to switch B   00:02   em2              bond1 (data2, data4, rep2)
    iDRAC - OOB network   M0      Not applicable   Not applicable
    NOTE: rep1 and rep2 virtual interfaces are used only if the native asynchronous replication is enabled.
Slot layout

Logical network

LACP bonding NIC port design: Data networks are configured with port channel vPC or VLT (LACP enabled). There are two logical data networks: data1 is a part of bond0 and data2 is a part of bond1.
    Trunk 1 to switch A   02:01   p2p1             bond0 (management, data1)
    Trunk 1 to switch B   03:01   p3p1             bond0 (management, data1)
    Trunk 2 to switch A   03:02   p3p2             bond1 (data2)
    Trunk 2 to switch B   02:02   p2p2             bond1 (data2)
    iDRAC - OOB network   M0      Not applicable   Not applicable

LACP bonding NIC port design: Data networks are configured with port channel vPC or VLT (LACP enabled). There are four logical data networks: data1 and data3 (if required) are a part of bond0, and data2 and data4 (if required) are a part of bond1. There are no changes in physical connectivity from node to switch.
    Trunk 1 to switch A   02:01   p2p1             bond0 (management, data1, data3, rep1)
    Trunk 1 to switch B   03:01   p3p1             bond0 (management, data1, data3, rep1)
    Trunk 2 to switch A   03:02   p3p2             bond1 (data2, data4, rep2)
    Trunk 2 to switch B   02:02   p2p2             bond1 (data2, data4, rep2)
    iDRAC - OOB network   M0      Not applicable   Not applicable
    NOTE: rep1 and rep2 virtual interfaces are used only if the native asynchronous replication is enabled.
Slot layout

Logical network

    Trunk 2 to switch B   00:02   em2              bond1 (data2, data4, rep2)
    iDRAC - OOB network   M0      Not applicable   Not applicable
    NOTE: rep1 and rep2 virtual interfaces are used only if the native asynchronous replication is enabled.
Slot layout

Logical network

Data networks are configured with port channel vPC or VLT (LACP enabled). There are two logical data networks: data1 is a part of bond0 and data2 is a part of bond1.
    Trunk 1 to switch B   02:01   p2p1             bond0 (management, data1)
    Trunk 2 to switch A   02:02   p2p2             bond1 (data2)
    Trunk 2 to switch B   01:02   p1p2             bond1 (data2)
    iDRAC - OOB network   M0      Not applicable   Not applicable

LACP bonding NIC port design: Data networks are configured with port channel vPC or VLT (LACP enabled). There are four logical data networks: data1 and data3 (if required) are a part of bond0, and data2 and data4 (if required) are a part of bond1. There are no changes in physical connectivity from node to switch.
    Trunk 1 to switch A   01:01   p1p1             bond0 (management, data1, data3, rep1)
    Trunk 1 to switch B   02:01   p2p1             bond0 (management, data1, data3, rep1)
    Trunk 2 to switch A   02:02   p2p2             bond1 (data2, data4, rep2)
    Trunk 2 to switch B   01:02   p1p2             bond1 (data2, data4, rep2)
    iDRAC - OOB network   M0      Not applicable   Not applicable
    NOTE: rep1 and rep2 virtual interfaces are used only if the native asynchronous replication is enabled.
Slot layout

Logical network

Data networks are access ports.
    Trunk to switch B     08:01   p8p1             bond0
    Data 1 to switch A    08:02   p8p2             Not applicable
    Data 2 to switch B    01:02   p1p2             Not applicable
    iDRAC - OOB network   M0      Not applicable   Not applicable

LACP bonding NIC port design: Data networks are configured with port channel vPC or VLT (LACP enabled). There are two logical data networks: data1 is a part of bond0, and data2 is a part of bond1.
    Trunk 1 to switch A   01:01   p1p1             bond0 (management, data1)
    Trunk 1 to switch B   08:01   p8p1             bond0 (management, data1)
    Trunk 2 to switch A   08:02   p8p2             bond1 (data2)
    Trunk 2 to switch B   01:02   p1p2             bond1 (data2)
    iDRAC - OOB network   M0      Not applicable   Not applicable

LACP bonding NIC port design: Data networks are configured with port channel vPC or VLT (LACP enabled). There are four logical data networks: data1 and data3 (if required) are a part of bond0, and data2 and data4 (if required) are a part of bond1. There are no changes in physical connectivity from node to switch.
    Trunk 1 to switch A   01:01   p1p1             bond0 (management, data1, data3, rep1)
    Trunk 1 to switch B   08:01   p8p1             bond0 (management, data1, data3, rep1)
    Trunk 2 to switch A   08:02   p8p2             bond1 (data2, data4, rep2)
    Trunk 2 to switch B   01:02   p1p2             bond1 (data2, data4, rep2)
    iDRAC - OOB network   M0      Not applicable   Not applicable
    NOTE: rep1 and rep2 virtual interfaces are used only if the native asynchronous replication is enabled.
Slot layout
Logical network for PowerFlex R840 ESXi-based compute-only nodes with GPU
Logical network for PowerFlex R840 Windows-based compute-only nodes with GPU
Slot layout
Logical network for PowerFlex R840 ESXi-based compute-only nodes - GPU capable
Logical network for PowerFlex R840 Windows-based compute-only nodes - GPU capable
Slot layout
Logical network for PowerFlex R840 ESXi-based compute-only nodes without GPU
Logical network for PowerFlex R840 Windows-based compute-only nodes without GPU
Slot layout
VMware NSX-T Edge node SSD Dual CPU
Slot 0 (rNDC) CX4-LX
Slot 1 BOSS
Slot 2 CX4-LX
Slot 3 CX4-LX
Logical network
Logical network description   Slot:Port   Logical port   DVSwitch
LACP bonding NIC port design: The management network is configured with port channel, vPC, or VLT (LACP enabled).
    Management to switch A   00:01   vmnic0   dvswitch0
    Management to switch B   02:01   vmnic2   dvswitch0
Aggregation switch A: ports 1-29 through 1-32. Aggregation switch B: ports 1-29 through 1-32.
The access switches provide two management and two transport traffic links. The following port map shows two VMware
NSX-T Edge nodes with four links connected to the access switches.
Access switch A: ports 1-31 and 1-32. Access switch B: ports 1-31 and 1-32.
Connecting the VMware NSX-T Edge nodes to the leaf and spine
topology
Use the following options if the PowerFlex appliance based on PowerFlex R640 nodes is using a leaf and spine topology.
Option 1 (default): Connecting all six VMware NSX-T Edge nodes directly to the
VMware NSX-T Edge node border leaf switches
The following port maps show the connections mapped to the border leaf switches. In the following examples, the PowerFlex
R640 node 1G-00:01 indicates the following:
● PowerFlex R640 node 1G is the first new PowerFlex R640 node used for VMware NSX-T Edge node.
● 00:01 is slot:port which is slot 0 port 1 on the PowerFlex R640 node.
Consider the following while connecting the VMware NSX-T Edge nodes to the leaf and spine topology:
● The port numbers can vary from build to build.
● The assumption is that the two VMware NSX-T Edge nodes (1E-1F) already exist (not shown in the table) and you are adding two more edge nodes (1G-1H).
● Each VMware NSX-T Edge node requires three physical ports per switch. Because the network adapter in each VMware NSX-T Edge node is 25G, a maximum of four connections per port is allowed.
The border leaf switches provide all six connections for the VMware NSX-T Edge nodes. The following port map shows the two
VMware NSX-T Edge nodes have all six links connected to the border leaf switches.
Border leaf switches A and B: ports 1-30 and 1-32.
Option 2: Connecting the VMware NSX-T Edge nodes to both border leaf and leaf
switches
The following port maps show the VMware NSX-T Edge node connections mapped to both the border leaf and leaf switches. In
the following examples, PowerFlex R640 node 1G-00:01 indicates the following:
● PowerFlex R640 node 1G is the first new PowerFlex R640 node used for VMware NSX-T Edge node.
● 00:01 is slot:port which is slot 0 port 1 on the PowerFlex R640 node.
Consider the following while connecting the VMware NSX-T Edge nodes to both border leaf and leaf switches:
● The port numbers can vary from build to build.
● The assumption is that the two VMware NSX-T Edge nodes (1E-1F) already exist (not shown in the table) and you are adding two more edge nodes (1G-1H).
● Each VMware NSX-T Edge node requires one physical port per switch. Because the network adapter in each VMware NSX-T Edge node is 25G, a maximum of four connections per port is allowed.
The border leaf switches provide only two VMware NSX-T edge external traffic links. The following port map shows two
VMware NSX-T Edge nodes with two links connected to the border leaf switches.
The leaf switches provide two management and two transport traffic links. The following port map shows two VMware NSX-T
Edge nodes with four links connected to the leaf switches.
Leaf switch A: ports 1-31 and 1-32. Leaf switch B: ports 1-31 and 1-32.
Slot matrix for PowerFlex R640 management controller node - small or large

Slot matrix for PowerFlex R640 hyperconverged nodes with 10* NVMe

Slot matrix for PowerFlex R740xd hyperconverged nodes with two GPUs

Slot layout
PowerFlex R740xd hyperconverged nodes with NVMe and two GPUs - Dual CPU
rNDC (CPU 1) Mellanox CX-4 25 GB
Slot 1 (CPU 1) GPU1 (DW/SW)
Slot 2 (CPU 1) Not applicable
Slot 3 (CPU 1) NVMe bridge
Slot 4 (CPU 2) NVMe bridge
Slot 5 (CPU 2) Mellanox CX-4 25 GB
Slot 6 (CPU 1) BOSS
Slot 7 (CPU 2) blocked
Slot 8 (CPU 2) GPU2 (DW)
Slot matrix for PowerFlex R740xd hyperconverged nodes with NVMe and two GPUs
NOTE: PowerFlex R640/R740xd/R840 with three GPUs, 100 GB PowerFlex nodes, and six ports will be updated after the
SPM matrix update. The slot matrix might change after the SPM addendum is complete.
Slot layout
PowerFlex R840 hyperconverged nodes with NVMe and GPU Dual CPU
rNDC (CPU1) Mellanox CX-4 25 GB
Slot 1 (NA) NA
Slot 2 (CPU1) GPU1 (DW)
Slot 3 (CPU1) BOSS
Slot 4 (CPU2) Mellanox CX-4 25 GB
Slot 5 (NA) NA
Slot 6 (CPU2) GPU2 (DW)
Slot matrix for PowerFlex R840 hyperconverged nodes with NVMe and GPU
Slot matrix for PowerFlex R640 storage-only nodes with 10* NVMe

Slot matrix for PowerFlex R740xd compute-only nodes without hard drives
NOTE:
● A minimum of two logical data networks are supported. Optionally, you can configure four logical data networks.
● VLANs 161 and 162 are used to support native asynchronous replication.
Prerequisites
For console operations, ensure that you have a crash cart. A crash cart enables a keyboard, video, and mouse (KVM) connection to the node.
Steps
1. Connect the KVM to the node.
2. During boot, to access the Main Menu, press F2.
3. From System Setup Main Menu, select the iDRAC Settings menu option. To configure the network settings, do the
following:
a. From the iDRAC Settings pane, select Network.
b. From the iDRAC Settings-Network pane, verify the following parameter values:
● Enable NIC = Enabled
● NIC Selection = Dedicated
c. From the IPv4 Settings pane, configure the IPv4 parameter values for the iDRAC port as follows:
● Enable IPv4 = Enabled
● Enable DHCP = Disabled
● Static IP Address = <ip address> (select an IP address for each node from the range 192.168.101.21 to 192.168.101.24)
● Static Gateway = 192.168.101.254
● Static Subnet Mask = 255.255.255.0
● Static Preferred DNS Server = 192.168.200.101
4. After configuring the parameters, click Back to display the iDRAC Settings pane.
5. From the iDRAC Settings pane, select User Configuration and configure the following:
a. User Name = root
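Once the iDRAC is reachable, equivalent settings can also be checked or applied with racadm. The sketch below is a dry run that only prints the commands it would issue; the attribute names assume iDRAC9-era racadm groups, so verify them against your iDRAC firmware before use:

```shell
# Dry run: print the racadm commands that mirror the iDRAC Settings steps
# above. IDRAC_IP and PASS are placeholders; attribute names assume iDRAC9.
IDRAC_IP="192.168.101.21"
PASS="yyyyy"

cmds=""
for setting in \
    "iDRAC.IPv4.DHCPEnable Disabled" \
    "iDRAC.IPv4.Address ${IDRAC_IP}" \
    "iDRAC.IPv4.Netmask 255.255.255.0" \
    "iDRAC.IPv4.Gateway 192.168.101.254" \
    "iDRAC.IPv4.DNS1 192.168.200.101" \
    "iDRAC.NIC.Selection Dedicated"
do
    cmds="${cmds}racadm -r ${IDRAC_IP} -u root -p ${PASS} set ${setting}
"
done
printf '%s' "$cmds"
```

Remove the echoing wrapper and run the printed commands directly only after confirming the attribute names on your firmware with `racadm get iDRAC.IPv4`.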
Related information
Discover resources
Prerequisites
Ensure that iDRAC command line tools (including racadm) are installed on the Windows-based jump server. Download the
specific versions and installation instructions of the tools for Windows from the Dell Technologies Support site.
NOTE: Disabling IPMI is only required if the PowerFlex nodes are older, or have had it enabled during deployment.
Steps
1. For a single PowerFlex node:
a. From the jump server, open a PowerShell session.
b. Type racadm -r x.x.x.x -u root -p yyyyy set iDRAC.IPMILan.Enable Disabled
where x.x.x.x is the IP address of the iDRAC node and yyyyy is the iDRAC password.
2. For multiple PowerFlex nodes:
a. From the jump server, at the root of the C: drive, create a folder named ipmi.
b. From the File Explorer, go to View and select the File Name extensions check box.
c. Open a notepad file, and paste this text into the file: powershell -noprofile -executionpolicy bypass
-file ".\disableIPMI.ps1"
d. Save the file and rename it to runme.cmd in C:\ipmi.
e. Open a notepad file, and paste this text into the file: import-csv $pwd\hosts.csv -Header:"Hosts"
| Select-Object -ExpandProperty hosts | % {racadm -r $_ -u root -p XXXXXX set
idrac.ipmilan.enable disabled}
where XXXXXX is the customer iDRAC password.
f. Save the file and rename it to disableIPMI.ps1 in C:\ipmi.
g. Open a notepad file, list all of the iDRAC IP addresses that you want to include, one per line, and save the file as hosts.csv in C:\ipmi.
Prerequisites
Ensure that the iDRAC command-line tools are installed on the embedded operating system-based jump server.
Steps
1. For a single PowerFlex node:
a. From the jump server, open a terminal session.
b. Type racadm -r x.x.x.x -u root -p yyyyy set iDRAC.IPMILan.Enable 0 where x.x.x.x is the IP
address of the iDRAC node and yyyyy is the iDRAC password.
2. For multiple PowerFlex nodes:
a. From the jump server, open a terminal window.
b. Edit the idracs text file and enter IP addresses for each iDRAC, one per line.
c. Save the file.
d. At the command-line interface, type while read line; do echo "$line" ; racadm -r $line -u root -p
yyyyy set iDRAC.IPMILan.Enable 0; done < idracs where yyyyy is the iDRAC password.
The following output displays the IP address for each iDRAC and the output from the racadm command:
Steps
1. On the Dell Technologies Support site, to see the SHA2 hash value, hover over the question mark (?) next to the File
Description.
2. In the Windows file manager, right-click the downloaded file and select CRC SHA > SHA-256. The CRC SHA option is
available only if the 7-Zip application is installed.
The SHA-256 value is calculated.
3. The SHA2 value that is shown on the Dell Technologies Support site and the SHA-256 value that is generated by Microsoft
Windows must match. If the values do not match, the file is corrupted. Download the file again.
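On a Linux jump host, the same comparison can be scripted. The following is a minimal sketch assuming coreutils is available; the function name is ours, and the file name and expected hash are placeholders for the downloaded file and the SHA2 value copied from the support site.

```shell
# Compare a downloaded file against the SHA2 value from the support site.
# sha256sum -c reads "HASH  FILE" lines and reports OK or FAILED.
verify_sha256() {
  file=$1       # downloaded file
  expected=$2   # SHA2 value copied from the support site
  echo "$expected  $file" | sha256sum -c -
}
```

For example, verify_sha256 download.zip <hash-from-support-site>; a non-zero exit status means the values do not match and the file must be downloaded again.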
3
Configuring the network
This section covers the network configuration examples of physical switches and virtual networking.
NOTE: The physical switch configuration in this section is used as a reference for the customer to configure the switches.
Related information
Create the distributed port groups for the BE_dvSwitch
Configure the Cisco Nexus access and aggregation switches
Configure the Dell access switches
Configure a port channel with LACP bonding NIC or individual trunk
Configure access switch ports for PowerFlex nodes
VLAN mapping
Configuration data
This section provides the port channel and individual trunk configuration data for full network automation (FNA) or partial
network automation (PNA).
NOTE: If the Cisco Nexus switches contain vlan dot1Q tag native in running-config, the PXE boot fails.
Prerequisites
See Configuring the network for information on the interface type.
Steps
1. Configure port channels:
If the interface type is... Run the following commands from the switch CLI...
Port channel with LACP
interface <interface number>
description "Connected to <connectivity info>"
channel-group <channel-group> mode <mode>
no shutdown
Port channel
interface <interface number>
description "Connected to <connectivity info>"
channel-group <channel-group> mode <mode>
no shutdown
Trunk
interface <interface number>
switchport mode trunk
switchport trunk allowed vlan <vlan-list>
spanning-tree port type edge
spanning-tree bpduguard enable
spanning-tree guard root
mtu 9216
channel-group <channel-grp-number> mode active
speed <speed>
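Filled in, a trunk member interface from the table above might look like the following. This is a hypothetical example only; the interface number, description, channel group, VLAN list, and speed are illustrative values to be replaced with the customer's configuration data.

```
interface Ethernet1/49
  description "Connected to PowerFlex node 1, port 1"
  switchport mode trunk
  switchport trunk allowed vlan 105-106,150
  spanning-tree port type edge
  spanning-tree bpduguard enable
  spanning-tree guard root
  mtu 9216
  channel-group 101 mode active
  speed 25000
```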
Related information
Configuring the network
Prerequisites
See Configuring the network for information on the interface type.
Steps
1. Configure port channels:
If the interface type is... Run the following commands from the switch CLI...
Port channel with LACP
interface <interface number>
description "Connected to <connectivity info>"
channel-group <channel-group> mode <mode>
no shutdown
Port channel
interface <interface number>
description "Connected to <connectivity info>"
channel-group <channel-group> mode <mode>
no shutdown
Trunk
interface <interface number>
description "Connected to <connectivity info>"
switchport mode trunk
switchport trunk allowed vlan <vlan-list>
spanning-tree port type edge
spanning-tree bpduguard enable
spanning-tree guard root
mtu 9216
channel-group <channel-group-number> mode active
speed <speed>
Related information
Configuring the network
Steps
1. Log in to the VMware vSphere Client.
2. Select the host and click Configure on the right pane.
3. Under the Networking tab, select the VMkernel adapter.
4. Click Add Networking.
5. Select Connection type as VMkernel network adapter and click Next.
6. Select Target device as Existing network and click Browse to select the appropriate port group.
7. On the port properties, select Enable services, select the appropriate service, and click Next.
For example, for vMotion, select vMotion. The MTU for pfmc-vmotion=1500. For any other networks, retain the default
service.
8. In IPV4 Settings, select Use static IPV4 settings, provide the appropriate IP address and subnet details, and click Next.
9. Verify the details on Ready to Complete and click Finish.
10. Repeat steps 2 through 9 to create the VMkernel adapters for the following port groups:
● flex-data1-<vlanid>
● flex-data2-<vlanid>
● flex-data3-<vlanid> (if required)
● flex-data4-<vlanid> (if required)
● flex-vmotion-<vlanid>
Related information
Assign VMware vSphere licenses
Steps
1. Log in to the VMware vSphere Client.
2. Select the host and click Configure on the right pane.
3. Under the Networking tab, select the VMkernel adapter.
4. Click Add Networking.
5. Select Connection type as VMkernel network adapter and click Next.
6. Select Target device as Existing network and click Browse to select the appropriate port group.
7. On the port properties, select Enable services, select the appropriate service, and click Next.
For example, for vMotion, select vMotion. The MTU for pfmc-vmotion=1500. For any other networks, retain the default
service.
8. In IPV4 Settings, select Use static IPV4 settings, provide the appropriate IP address and subnet details, and click Next.
9. Verify the details on Ready to Complete and click Finish.
10. Repeat steps 2 through 9 to create the VMkernel adapters for the following port groups:
● pfmc-vmotion-<vlanid>
● pfmc-sds-data1-<vlanid>
● pfmc-sds-data2-<vlanid>
Related information
Assign VMware vSphere licenses
Modify the failover order for the FE_dvSwitch
Add PowerFlex controller nodes to an existing dvSwitch
Steps
1. Log in to the VMware vSphere client and select Networking inventory.
2. Select Inventory, right-click the dvSwitch, and select Configure.
3. In Settings, select LACP.
4. Click New and enter FE-LAG or BE-LAG as the name.
The default number of ports is 2.
5. Select the mode as Active.
6. Select the load balancing option. See Configuration data for more information.
7. Click OK to create the LAG.
Repeat these steps to create a LAG on additional dvSwitches.
Steps
1. Select the dvSwitch.
2. Click Configure and from Settings, select LACP.
3. Click Migrating network traffic to LAGs.
4. Click Manage Distributed Port Groups, click Teaming and Failover, and click Next.
5. Select all port groups, and click Next.
6. Select LAG and move it to Standby Uplinks.
7. Click Finish.
Prerequisites
See Configuration data for naming information of the dvSwitches.
Steps
1. Select the dvSwitch.
NOTE: If you are not using LACP, right-click and skip to step 4.
Prerequisites
See Configuration data for naming information of the dvSwitches and load balancing options.
Steps
1. Select the dvSwitch.
2. Click Configure and from Settings, select LACP.
3. Click Migrating network traffic to LAGs.
4. Click Manage Distributed Port Groups, click Teaming and Failover, and click Next.
5. Select all port groups, and click Next.
6. Select a load balancing option.
7. Select LAG and move it to Active Uplinks.
8. Move Uplink1 and Uplink2 to Unused Uplinks and click Next.
9. Click Finish.
Prerequisites
See Configuration data for naming information of the dvSwitches and load balancing options.
Steps
1. Select the dvSwitch.
2. Right-click the dvSwitch, select Distributed Portgroup > Manage distributed portgroups.
3. Select Teaming and failover, select all the port groups, and click Next.
4. Select a load balancing option.
Steps
1. Log in to VMware vSphere client.
2. From Home, click Networking and expand the data center.
3. Right-click the data center and perform the following:
a. Click Distributed Switch > New Distributed Switch.
b. Update the name to oob_dvswitch and click Next.
c. On the Select Version page, select 7.0.0 - ESXi 7.0 and later, and click Next.
d. Under Edit Settings, select 1 for Number of uplinks.
e. Select Enabled from Network I/O Control.
f. Clear the Create default port group option.
g. Click Next.
h. On the Ready to complete page, click Finish.
Steps
1. Log in to the VMware vSphere client.
2. Click Networking and select oob_dvswitch.
3. Right-click it and select Add and Manage Hosts.
4. Select Add Hosts and click Next.
5. Click New Host, select the host in maintenance mode, and click OK.
6. Click Next.
7. Select vmnic4 and click Assign Uplink.
8. Select Uplink 1, and click OK.
9. Click Next > Next > Next.
10. Click Finish.
Steps
1. Log in to VMware vSphere Client.
2. On Menu, click Host and Cluster.
3. Select Host.
4. Click Configure > Networking > Virtual Switches.
5. For Standard Switch: vSwitch0, click ... > Remove.
6. On the Remove Standard Switch window, click Yes.
Steps
1. Start an SSH session to the PowerFlex node using PuTTY.
2. Log in as root.
3. In the PowerFlex CLI, type esxcli network ip route ipv4 add -g <gateway> -n <destination subnet
in CIDR>.
Steps
1. Start an SSH session on the SVM of the PowerFlex hyperconverged nodes using PuTTY.
2. Log in as root.
3. Change to the network scripts directory: cd /etc/sysconfig/network-scripts/.
4. In the PowerFlex CLI, type echo "<destination subnet> via <gateway> dev <SIO Interface>" > route-<SIO Interface>.
Steps
1. Log in as root from the virtual console.
2. Type nmtui to set up the networking.
3. Click Edit a connection.
4. Perform the following to configure the bond interface:
a. Click Add and select Bond.
b. Set Profile name and Device to bond<X>.
c. Set Mode to <Mode>. See Configuring the network for more information.
d. Set IPv4 Configuration to Disabled.
e. Set IPv6 Configuration to Ignore.
f. Set Automatically Connect.
g. Set Available to all users.
h. Click OK.
i. Repeat these steps for each additional bond interface.
5. Configure VLANs on the bond interface:
a. Click Add and select VLAN. Press Tab to view the VLANs window.
b. Set Profile name and Device to bond <X>.VLAN#, where VLAN# is the VLAN ID.
c. Set IPv4 Configuration to Manual. Press Tab to view the configuration.
d. Select Add and set the IP for each VLAN using the CIDR notation.
If the IP is 192.168.150.155 and the network mask is 255.255.255.0, then enter 192.168.150.155/24.
e. Set the Gateway to the default gateway of each VLAN. Do not add a gateway for the data networks, because they are private VLANs.
f. Set DNS server to customer DNS servers.
g. Set IPv6 Configuration to Ignore.
h. Set Automatically Connect.
i. Set Available to all users.
j. Click OK.
k. Repeat these steps for additional data and replication VLANs.
l. Click Back > Quit.
6. Configure the physical interface as a secondary for the bond:
a. Edit the interface configuration file with vi, for example: vi ifcfg-em1 (the file name varies with the interface name).
b. Change BOOTPROTO to none.
c. Delete the lines from DEFROUTE to IPV6_ADDR_GEN_MODE.
d. Change ONBOOT to yes.
e. Add the lines MASTER=bond0 and SLAVE=yes.
f. Save the network configuration file.
g. Type systemctl disable NetworkManager to disable NetworkManager.
h. Type systemctl status firewalld to check if firewalld is enabled.
i. Type systemctl enable firewalld to enable firewalld. To enable firewalld on all the SDS components, see the
Enabling firewall service on PowerFlex storage-only nodes and SVMs KB article.
j. Type systemctl restart network to restart the network.
7. Verify the settings:
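One way to perform this verification from the console is sketched below, assuming a Linux host with iproute2 and a bond named bond0 (the bond and interface names are placeholders for this environment):

```shell
# Show link and address state for all interfaces, including the bond and
# its VLAN sub-interfaces
ip -br link show
ip -br addr show
# Inspect the bond itself (mode, active members, per-member link status)
if [ -r /proc/net/bonding/bond0 ]; then
  cat /proc/net/bonding/bond0
fi
```

A final check is to ping the gateway and peer data IP addresses from each VLAN sub-interface.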
Related information
Configuring the network
Steps
1. Log in to PowerFlex storage-only node as root.
2. Edit the ifcfg-p*p* files to change NM_CONTROLLED=no to NM_CONTROLLED=yes.
3. Remove the GATEWAYDEV line from /etc/sysconfig/network.
4. Type systemctl restart network to restart the network service.
5. Set the data networks:
a. Select Edit a connection.
b. Select Add and select VLAN.
c. Set Profile name to the <interface name>.<vlan id>.
d. Set Device to the <interface name>.<vlan id>.
e. Confirm that the automatically populated parent and VLAN ID values match.
f. Set MTU to the appropriate value.
g. Set IPv4 Configuration to Manual.
h. Highlight Show next to IPv4 Configuration and press Enter to see the remaining values.
i. Select Add next to the address. Enter the appropriate IP address of the VLAN and the CIDR notation for the subnet
prefix.
For example, if the IP is 192.168.150.155 and the network mask is 255.255.255.0, then enter 192.168.150.155/24.
j. Select Never use this network for default route.
k. Set IPv6 configuration to Ignore.
l. Select OK and press Enter.
m. Repeat these steps for the remaining interfaces with appropriate interface names and VLAN numbers.
6. Set up the management network shared on the data interface:
Steps
1. Log in as root from the iDRAC interface (virtual console of the host).
2. Type systemctl status NetworkManager to check the status of Network Manager.
3. Depending on the active status in the output, perform either of the following:
● If the active status is inactive (dead), type systemctl enable NetworkManager --now.
● If the active status is active (running), continue to the next step.
4. Type nmtui to set up the networking.
5. Click Edit a connection.
6. Highlight the <interface name> <mgmt vlan id> interface and press Tab.
7. Select Delete and confirm to delete.
8. Create a bond interface for management:
a. Select Edit a connection.
b. Select Add and choose Bond.
c. Set the Profile Name and Device to bond0.
d. Move to Add next to Secondary and press Enter.
e. Select Ethernet and select Create.
f. Set Profile name to the interface name bond.
g. Set Device to the interface name.
h. Select Show next to Ethernet.
i. Set MTU to the appropriate value.
j. Select OK.
k. Repeat these steps to set up the remaining three interfaces.
9. Go to Mode and select the specific mode:
● For mode0, select Round robin.
● For mode1, select Active backup.
● For mode6, select Adaptive load balancing (ALB).
10. Go to IPv4 configuration and highlight Automatic. Press Enter and select Disabled.
11. Go to IPv6 configuration and highlight Automatic. Press Enter and select Ignore.
12. Click OK and press Enter.
13. Create a VLAN sub-interface on bond:
a. In the nmtui interface, select Add (create sub-interfaces for management and optional replication VLANs).
b. In the pop-up, go to VLAN.
c. Set the Profile Name and Device to bond0.<vlan id>. This populates the parent and VLAN ID fields. Confirm that
the details match.
d. Set MTU to the appropriate value.
e. Set IPv4 Configuration to Manual.
f. Highlight Show next to IPv4 Configuration and press Enter to view the remaining values.
g. Select Add next to the address. Enter the appropriate IP address of the VLAN and the CIDR notation for the subnet
prefix.
For example, if the IP is 192.168.150.155 and the network mask is 255.255.255.0, then enter 192.168.150.155/24.
h. Set the Gateway to the default gateway of each VLAN. This applies only to the management VLAN sub-interface.
i. Go to the DNS servers, and select Add. This applies only to the management VLAN sub-interface.
j. Set the first DNS server. To add more than one DNS server, select Add.
k. Select Require IPv4 addressing for this connection.
l. Go to IPv6 configuration and set to Ignore.
m. Select OK.
n. Repeat step 13 for each VLAN.
o. Select Back.
p. Select Quit.
14. Restart the network:
a. Type systemctl restart network to restart the network.
b. Confirm the connectivity on all interfaces and IP addresses and select OK.
c. Select Back.
d. Select Quit.
15. Type systemctl stop NetworkManager to stop Network Manager.
16. Type systemctl restart network to restart the network.
17. Confirm the connectivity on all interfaces and IP addresses.
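The steps above enter addresses in CIDR notation. Converting a dotted-decimal netmask to its prefix length can be sketched with a small helper; this function is ours (not part of PowerFlex) and assumes a contiguous, standard netmask.

```shell
# Convert a dotted-decimal netmask to a CIDR prefix length by counting
# the set bits in each octet. Assumes a contiguous (standard) netmask.
mask2cidr() {
  local IFS=. octet bits=0
  for octet in $1; do
    while [ "$octet" -gt 0 ]; do
      bits=$((bits + octet % 2))
      octet=$((octet / 2))
    done
  done
  echo "$bits"
}

mask2cidr 255.255.255.0   # prints 24, so 192.168.150.155/24
```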
Steps
1. Start an SSH session to the PowerFlex storage-only node using PuTTY.
2. Log in as root.
3. Type cd /etc/sysconfig/network-scripts/.
4. In the PowerFlex CLI, type echo "<destination subnet> via <gateway> dev <SIO Interface>" > route-<SIO Interface>.
II
Converting a PowerFlex controller node with
a PERC H755 to a PowerFlex management
controller 2.0
Use the procedures in this section to convert a standalone PowerFlex R650 controller node with a PERC H755 to a PowerFlex
management controller 2.0.
Before converting a PowerFlex controller node to a PowerFlex management controller 2.0, ensure that the following
prerequisites are met:
● Back up the PowerFlex controller node.
● Ensure that the latest Intelligent Catalog is available.
● See Cabling the PowerFlex R650/R750/R6525 nodes for cabling information for the PowerFlex management node.
● See Configuring the network to configure the management node switches.
4
Configuring the new PowerFlex controller
node
Steps
1. In the web browser, enter https://<ip-address-of-idrac>.
2. From the iDRAC dashboard, click Maintenance > System Update > Manual Update.
3. Click Choose File. Browse to the release appropriate Intelligent Catalog folder and select the appropriate files.
Required firmware:
● Dell iDRAC or Lifecycle Controller firmware
● Dell BIOS firmware
● Dell BOSS Controller firmware
● Dell Intel X550 or X540 or i350 firmware
● Dell Mellanox ConnectX-5 EN firmware
● PERC H755P controller firmware
4. Click Upload.
5. Click Install and Reboot.
Steps
1. Launch the virtual console, select Boot from the menu, and select BIOS setup from Boot Controls to enter the system
BIOS.
2. Power cycle the server and enter the BIOS setup.
3. From the menu, click Power > Reset System (Warm Boot).
4. From System Setup main menu, select Device Settings.
5. Select AHCI Controller in Slot x: BOSS-x Configuration Utility.
6. Select Create RAID Configuration.
7. Select both the devices and click Next.
8. Enter VD_R1_1 for name and retain the default values.
9. Click Yes to create the virtual disk and then click OK to apply the new configuration.
10. Click Next > OK.
11. Select VD_R1_1 that was created and click Back > Finish > Yes > OK.
12. Select System BIOS.
13. Select Boot Settings and enter the following settings:
● Boot Mode: UEFI
● Boot Sequence Retry: Enabled
● Hard Disk Failover: Disabled
Steps
1. Connect to the iDRAC web interface.
2. Click Storage > Overview > Physical Disks.
3. Confirm the SSD Name is NonRAID Solid State Disk 0:1:x. If not, proceed to step 4.
4. Click Storage > Overview > Controllers.
5. In the Actions list for the PERC H755 Front (Embedded), select Reset configuration > OK > Apply now.
6. Click Job Queue.
Wait for the task to complete.
7. Select Storage > Overview > Physical Disks.
8. In the Actions menu for each SSD, select Convert to Non-Raid (ensure not to select SSD 0 and SSD 1), and click OK.
9. Select Apply Later.
10. Repeat for all SSD drives.
11. Click Storage > Overview > Tasks.
12. Under Pending Operations actions, select PERC H755 FRONT (Embedded).
13. Select Apply Now.
14. Click Job Queue.
Wait for the task to complete.
15. Click Storage > Overview > Physical Disks.
16. Confirm that the SSD name is NonRAID Solid State Disk 0:1:x.
Steps
1. Log in to iDRAC and perform the following steps:
a. Connect to the iDRAC interface and launch a virtual remote console from Dashboard and click Launch Virtual Console.
b. Select Virtual Media > Connect Virtual Media > Map CD/DVD.
c. Click Choose File and browse to the folder where the ISO file is saved, select it, and click Open.
d. Click Map Device > Close.
e. Click Boot > Virtual CD/DVD/ISO.
f. Click Yes to confirm boot action.
g. Click Power > Reset System (warm boot).
h. Click Yes.
2. Perform the following steps to install VMware ESXi:
a. On the VMware ESXi installer screen, press Enter to continue.
b. Press F11 to accept the license agreement.
c. Under Local, select ATA DELLBOSS VD as the installation location. If prompted, press Enter.
d. Select US Default as the keyboard layout and press Enter.
e. At the prompt, type the root password, and press Enter.
f. At the Confirm Install screen, press F11.
g. In Virtual Console, click Virtual Media > Disconnect Virtual Media.
h. Click Yes to un-map all devices.
i. Press Enter to reboot the PowerFlex management controller when the installation completes.
Steps
1. Press F2 to customize the system.
2. Enter the root password and press Enter.
3. Go to DCUI and select Troubleshooting Options.
4. Select Enable SSH.
5. Select Enable ESXi Shell.
6. Press ESC to exit from troubleshooting mode options.
7. Go to Direct Console User Interface (DCUI) > Configure Management Network.
8. Set Network Adapter to VMNIC2.
9. Set the ESXi Management VLAN ID to the required VLAN value.
10. Set the IPv4 ADDRESS, SUBNET MASK, and DEFAULT GATEWAY.
11. Select IPV6 Configuration > Disable IPV6 and press Enter.
12. Go to DNS Configuration and set the customer provided value.
13. Go to Custom DNS Suffixes and set the customer provided value.
14. Press ESC to exit the network configuration and press Y to apply the changes.
15. Type Y to commit the changes. The node restarts.
16. Verify the host connectivity by pinging the IP address from the jump server using the command prompt.
Prerequisites
Download the latest supported version from Dell iDRAC Service Module.
Steps
1. Copy ISM-Dell-Web-X.X.X-XXXX.VIB-ESX7i-Live_AXX.zip to the /vmfs/volumes/<datastore>/ folder on
the PowerFlex management node running VMware ESXi.
2. SSH to the new appliance management host running VMware ESXi.
3. To install VMware vSphere 7.x Dell iDRAC service module, type esxcli software vib install -d /vmfs/
volumes/<datastore>/ISM-Dell-Web-X.X.X-XXXX.VIB-ESX7i-Live_AXX.zip.
Steps
1. Log in to VMware ESXi host client as root.
Steps
1. Log in to the VMware ESXi host client as root.
2. On the left pane, click Storage.
3. Click Datastores.
4. Right-click datastore1 and click Rename.
5. Enter PFMC_DS<last_ip_octet>.
6. Click Save.
Steps
1. Log in to the vCSA.
2. Right-click the cluster and click Add Host to add multiple hosts.
3. Enter FQDN of host.
4. Enter root username and password and click Next.
5. Select the certificate and click OK for certificate alert.
6. Verify the Host Summary and click Next.
7. Verify the summary and click Finish.
If the PowerFlex node is in maintenance mode, right-click the VMware ESXi host, and click Maintenance Mode > Exit
Maintenance Mode.
Steps
1. Log in to the VMware vSphere Client.
2. Browse to the existing cluster.
3. Right-click the cluster that you want to configure and click Settings.
4. Under Services, click vSphere DRS, and click Edit.
5. Select Turn ON vSphere DRS and expand DRS Automation.
6. Select Fully Automated for Automation Level and select priority 1 and 2 for Migration Threshold.
7. Set Power Management to Off and click OK.
8. To enable vSphere HA, click Services > vSphere Availability, and click Edit.
9. Select Turn ON VMware vSphere HA and click OK.
Steps
1. Log in to the VMware vSphere Client.
2. Select BE_dvSwitch.
3. Click Configure and in Settings, select LACP.
4. Click Migrating network traffic to LAGs > Add and Manage Hosts.
5. Select Add Hosts and click Next.
6. Click New Hosts, select the host, and click OK > Next.
7. Select vmnic3 and click Assign Uplink.
8. Select LAG-BE-0 and select Apply this uplink assignment to the rest of the hosts, and click OK.
9. Select vmnic7 and click Assign Uplink.
10. Select LAG-BE-1 and select Apply this uplink assignment to the rest of the hosts, and click OK.
11. Click Next > Next > Next > Finish.
Steps
1. Log in to the VMware vSphere Client.
2. Click Network.
3. Select FE_dvSwitch.
4. Click Configure and in Settings, select LACP.
5. Click Migrating network traffic to LAGs > Add and Manage Hosts.
6. Select Add Hosts and click Next.
7. Click New Hosts, select the host, and click OK > Next.
8. Select vmnic2 and click Assign Uplink. Select Apply this uplink assignment to the rest of the hosts.
9. Select LAG-FE-0 and click OK.
10. Select vmnic6 and click Assign Uplink. Select Apply this uplink assignment to the rest of the hosts.
11. Select LAG-FE-1 and click OK.
12. In Manage VMkernel Adapter, select vmk0, and click Assign port group.
13. Select flex-node-mgmt-<vlanid>. Select Apply this uplink assignment to the rest of the hosts.
14. Click OK.
15. Click Next > Next > Finish.
Steps
1. Log in to the VMware vSphere Client and click Networking.
2. Right-click FE_dvSwitch and select Distributed Port Group > New Distributed Port Group.
3. Enter flex-install-<vlanid> and click Next.
NOTE: For partial network deployment using PowerFlex Manager, this step is not required.
4. Leave the port related options (port binding, allocation, and number of ports) as the default values.
5. Select VLAN as the VLAN type.
6. Set the VLAN ID to the appropriate VLAN number and click Next.
7. In the Ready to complete screen, verify the details and click Finish.
8. Repeat steps 2 to 7 to create the following port groups:
● flex-node-mgmt-<vlanid>
● flex-stor-mgmt-<vlanid>
Steps
1. Log in to the VMware vSphere Client and click Networking.
2. Right-click BE_dvSwitch and select Distributed Port Group > New Distributed Port Group.
3. Enter flex-data1-<vlanid> and click Next.
4. Leave the port-related options (Port binding, Port allocation, and # of ports) as the default values.
5. Select VLAN as the VLAN type.
6. Set the VLAN ID to the appropriate VLAN number.
7. Clear the Customize default policies configuration and click Next > Finish.
8. Repeat steps 2 through 7 for each additional port group.
Related information
Configuring the network
Steps
1. Log in to the VMware vSphere Client.
2. Click Networking and select BE_dvSwitch.
3. Click Configure, and from Settings, select LACP.
4. Click Migrating network traffic to LAGs.
5. Click Manage Distributed Port Groups, click Teaming and Failover, and click Next.
Steps
1. Log in to the VMware vSphere Client.
2. Click Networking and select FE_dvSwitch.
3. Click Configure, and from Settings, select LACP.
4. Click Migrating network traffic to LAGs.
5. Click Manage Distributed Port Groups, click Teaming and Failover, and click Next.
6. Select All port groups and click Next.
7. Select LAG-FE and move it to Active Uplinks.
8. Move Uplink1 and Uplink2 to Unused Uplinks and click Next.
9. Click Finish.
Related information
Add VMkernel adapter to the PowerFlex controller node hosts
Steps
1. Log in to the VMware ESXi host.
2. Select Manage > Hardware > PCI Devices.
3. Select Broadcom / LSI PERC H755 Front Device > Toggle passthrough.
4. A reboot is required; defer it until after the SDC installation.
NOTE: Ignore the VMware popup warning: Failed to configure passthrough devices.
Steps
1. Copy the SDC file to the local datastore on the VMware ESXi server.
2. Use SSH to log in to each VMware ESXi hosts as root.
3. Type the following command to install the storage data client (SDC): esxcli software component apply -d /
vmfs/volumes/PFMC_DS<last ip octet>/sdc.zip.
Steps
1. To configure the SDC, generate one UUID per server (https://www.guidgenerator.com/online-guid-generator.aspx).
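As an alternative to the online generator, a Linux jump host can produce the UUID locally; a minimal sketch, assuming a Linux kernel:

```shell
# Generate a random (version 4) UUID from the Linux kernel as an
# alternative to the online generator
cat /proc/sys/kernel/random/uuid
```

Where the util-linux package is installed, the uuidgen command produces the same kind of value.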
Steps
1. Log in to the VMware vCSA.
2. Select Hosts and Clusters.
3. Right-click the ESXi host and select Deploy OVF Template.
4. Select Local file > Upload file > Browse to the SVM OVA template.
5. Click Open > Next.
6. Enter pfmc-<svm-ip-address> for VM name.
7. Click Next.
8. Identify the cluster and select the node that you are deploying. Verify that there are no compatibility warnings and click
Next.
9. Click Next.
10. Review details and click Next.
11. Select Local datastore Thin Provision and Disable Storage DRS for this VM and click Next.
12. Select pfmc-sds-mgmt-<vlanid> for VM network and click Next.
13. Click Finish.
Steps
1. To configure the SVM, right-click each SVM, and click Settings and perform the following steps:
a. Set CPU to 12 CPUs with 12 cores per socket.
b. Select Reservation and enter the GHz value: 17.4.
c. Set Memory to 18 GB and check Reserve all guest memory (all locked).
d. Set Network Adapter 1 to the pfmc-sds-mgmt-<vlanid>.
e. Set Network Adapter 2 to the pfmc-sds-data1-<vlanid>.
f. Set Network Adapter 3 to the pfmc-sds-data2-<vlanid>.
g. Click Add New Device and select PCI Device (new single PowerFlex management controller).
h. Enable Toggle DirectPath IO.
i. Select PCI Device = PERC H755 Front BroadCom/LSI.
j. Click OK.
2. Configure the new PowerFlex controller node:
a. Click Add New Device and select PCI Device (new single PowerFlex controller node).
b. Enable DirectPath IO.
c. Select PCI Device = PERC H755 Front BroadCom/LSI.
d. Click OK.
3. Create an additional hard disk on the standalone PowerFlex controller node only:
a. Click Add New Device and select the hard disk.
b. Assign 1 TB for the new hard disk.
c. Click OK.
4. Power on the SVM and open a console.
5. Log in using the following credentials:
● Username: root
● Password: admin
6. To change the root password, type passwd and enter the new SVM root password twice.
7. Type nmtui, select Set system hostname, press Enter, and create the hostname.
Steps
1. From the nmtui, select Edit Connection.
2. Select Wired connection 1 to modify connection for pfmc-sds-mgmt-<vlanid>.
IPv4 configuration: Select Automatic, change it to Manual, and then select Show.
Steps
1. From the nmtui, select Edit Connection.
2. Select Wired connection 2 to modify connection for pfmc-sds-data1-<vlanid>.
IPv4 configuration: Select Automatic, change it to Manual, and then select Show.
Addresses: Select Add and enter the IP address of this interface (pfmc-sds-data1_ip).
Steps
1. From the nmtui, select Edit Connection.
IPv4 configuration: Select Automatic, change it to Manual, and then select Show.
Addresses: Select Add and enter the IP address of this interface (pfmc-sds-data2_ip).
Steps
1. On all PowerFlex controller nodes perform the following:
a. Install LIA on all the PowerFlex management controllers by typing the following:
TOKEN=<TOKEN-PASSWORD> rpm -ivh /root/install/EMC-ScaleIO-lia-x.x-x.el7.x86_64.rpm
Where <TOKEN-PASSWORD> is a password used for LIA. The LIA password must be identical in all LIAs within the same
system.
The password must be between 6 and 31 ASCII-printable characters with no blank spaces. It must include at least three
of the following groups: [a-z], [A-Z], [0-9], special characters (!@#$ …).
NOTE: If you use special characters on a Linux-based server, you must escape them when issuing the command.
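A candidate LIA password can be sanity-checked against these rules before running the installer. The following is a sketch only; the function name is ours, not part of PowerFlex.

```shell
# Check a candidate LIA password against the documented rules:
# 6-31 printable ASCII characters, no spaces, and at least three of the
# four groups [a-z], [A-Z], [0-9], and special characters.
check_lia_password() {
  pw=$1
  groups=0
  len=${#pw}
  [ "$len" -ge 6 ] && [ "$len" -le 31 ] || return 1
  case $pw in *' '*) return 1 ;; esac              # no blank spaces
  case $pw in *[a-z]*) groups=$((groups + 1)) ;; esac
  case $pw in *[A-Z]*) groups=$((groups + 1)) ;; esac
  case $pw in *[0-9]*) groups=$((groups + 1)) ;; esac
  case $pw in *[!a-zA-Z0-9]*) groups=$((groups + 1)) ;; esac
  [ "$groups" -ge 3 ]
}
```

A zero exit status means the password satisfies the rules; remember to escape any special characters when passing the real password on a Linux command line.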
Prerequisites
This verification uses an MTU of 8972 to verify jumbo frames between SVMs.
Steps
1. Log in to the VMware vCSA.
2. Right-click the SVM.
3. On the VM summary page, select Launch Web Console.
4. Log in to SVM as root.
5. Verify connectivity between SVM, type ping -M do -s 8972 [destination IP].
6. Confirm connectivity for all interfaces to all SVMs.
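The 8972-byte payload used above is not arbitrary; it is the 9000-byte jumbo MTU minus the IPv4 and ICMP headers:

```shell
# Payload size for a don't-fragment jumbo-frame ping:
# 9000-byte MTU minus the 20-byte IPv4 header and the 8-byte ICMP header
mtu=9000
payload=$((mtu - 20 - 8))
echo "$payload"   # prints 8972
```

Because ping -M do sets the don't-fragment bit, the ping fails if any hop in the path has an MTU below 9000.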
Steps
1. Run the following command on the SVM to create the MDM cluster: scli --create_mdm_cluster --master_mdm_ip
<pfmc-sds-data1-ip,pfmc-sds-data2-ip> --master_mdm_management_ip <pfmc-sds-mgmt-ip> --
cluster_virtual_ip <pfmc-data1-vip,pfmc-data2-vip> --master_mdm_virtual_ip_interface
eth1,eth2 --master_mdm_name <mdm-name (hostname of svm)> --accept_license --
approve_certificate
See PowerFlex management node cabling for more information.
2. To log in to MDM, type scli --login --username admin (default password is admin).
3. To change default password, type scli --set_password.
4. Log in to the PowerFlex cluster with new password: scli --login --username admin.
5. To add a secondary MDM to the cluster, type scli --add_standby_mdm --mdm_role manager --
new_mdm_ip <pfmc-sds-data1-ip,pfmc-sds-data2-ip> --new_mdm_management_ip <pfmc-sds-mgmt>
--new_mdm_virtual_ip_interface eth1,eth2 --new_mdm_name <mdm-name>
6. To add Tiebreaker MDM to the cluster, type scli --add_standby_mdm --mdm_role tb --new_mdm_ip <pfmc-
sds-data1-ip,pfmc-sds-data2-ip> --new_mdm_name <mdm-name>
Steps
1. Log in to the MDM, type: scli --login --username admin
2. Create the protection domain, type: scli --add_protection_domain --protection_domain_name PFMC
Steps
1. Run the following command to log in to the MDM: scli --login --username admin.
2. To create the storage pool, type scli --add_storage_pool --protection_domain_name PFMC --
dont_use_rmcache --media_type SSD --data_layout medium_granularity --storage_pool_name
PFMC-Pool.
Add SDSs
Use this procedure to add SDSs.
Steps
1. Log in to the MDM: scli --login --username admin.
2. To add SDSs, type scli --add_sds --sds_ip <pfmc-sds-data1-ip,pfmc-sds-data2-ip> --
protection_domain_name PFMC --storage_pool_name PFMC-Pool --disable_rmcache --sds_name
PFMC-SDS-<last ip octet>.
3. Repeat for each PowerFlex management controller.
Set the spare capacity for the medium granularity storage pool
Use this procedure to set the spare capacity for the medium granularity storage pool.
Steps
1. Log in to the primary MDM, type: scli --login --username admin.
2. To modify the capacity pool, type scli --modify_spare_policy --protection_domain_name PFMC --
storage_pool_name PFMC-Pool --spare_percentage <percentage>.
NOTE: The spare percentage is 1/n rounded up, where n is the number of nodes in the cluster. For example, a three-node
cluster's spare percentage is 34%.
3. Type Y to proceed.
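The 1/n rule from the note above can be sketched as a small shell calculation (rounding up, so a full node's worth of capacity stays reserved):

```shell
# Sketch: spare percentage = 100/n rounded up, n = number of nodes.
spare_pct() {
  n="$1"
  echo $(( (100 + n - 1) / n ))   # integer ceiling of 100/n
}

spare_pct 3   # 34
spare_pct 4   # 25
```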
Steps
1. Log in as root to each of the PowerFlex SVMs.
2. To identify all available disks on the SVM, type lsblk.
3. Repeat for all PowerFlex management controller node SVMs.
Steps
1. Log in to the MDM: scli --login --username admin.
2. To add SDS storage devices, type scli --add_sds_device --sds_name <sds name> --storage_pool_name
<storage pool name> --device_path /dev/sd(x).
3. Repeat for all devices and for all PowerFlex management controller SVMs.
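Steps 2 and 3 can be sketched as a loop that turns a device list into scli commands. The device names and SDS name below are hypothetical stand-ins for real `lsblk` output, and the commands are printed rather than executed:

```shell
# Sketch: expand a device list into scli --add_sds_device commands.
# DEVICES stands in for `lsblk -dn -o NAME` output; SDS_NAME is an
# assumed example name. sda (the OS/boot disk) is skipped.
SDS_NAME="PFMC-SDS-10"
POOL="PFMC-Pool"
DEVICES="sda sdb sdc sdd"

for d in $DEVICES; do
  [ "$d" = "sda" ] && continue   # never add the boot/OS device
  echo "scli --add_sds_device --sds_name $SDS_NAME --storage_pool_name $POOL --device_path /dev/$d"
done
```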
Create datastores
Use this procedure to create datastores and add volumes.
Prerequisites
For volume sizes, see PowerFlex management controller and virtual machine details.
Steps
1. Log in to the MDM: scli --login --username admin.
2. To create the vcsa datastore, type scli --add_volume --protection_domain_name PFMC --
storage_pool_name PFMC-Pool --size_gb 3500 --volume_name vcsa --thin_provisioned --
dont_use_rmcache.
3. To create the general datastore, type scli --add_volume --protection_domain_name PFMC --
storage_pool_name PFMC-Pool --size_gb 1600 --volume_name general --thin_provisioned --
dont_use_rmcache.
4. To create the PowerFlex Manager datastore, type scli --add_volume --protection_domain_name PFMC --
storage_pool_name PFMC-Pool --size_gb 1000 --volume_name PFMC-pfxm --thin_provisioned --
dont_use_rmcache.
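The three volumes above can also be created in one loop. This sketch prints the scli commands (dry run) from name:size pairs taken from the steps above:

```shell
# Sketch: the vcsa, general, and PFMC-pfxm volumes as name:size_gb
# pairs, expanded into scli commands (printed, not executed).
VOLUMES="vcsa:3500 general:1600 PFMC-pfxm:1000"

for v in $VOLUMES; do
  name=${v%%:*}   # text before the colon
  size=${v##*:}   # text after the colon
  echo "scli --add_volume --protection_domain_name PFMC --storage_pool_name PFMC-Pool --size_gb $size --volume_name $name --thin_provisioned --dont_use_rmcache"
done
```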
Steps
1. Log in to the VMware ESXi host as root.
2. To identify the datastores by their EUI (extended unique identifier), type esxcli storage core device list |
grep -i emc.
3. To increase the queue length, type esxcli storage core device set -d <DEVICE_ID> -O <Outstanding
IOs>.
4. Set the queue length to 256.
Example:
esxcli storage core device set -d eui.16bb852c56d3b93e3888003b00000000 -O 256
Steps
1. Log in to the primary MDM: scli --login --username admin.
2. To identify the device path attached to the PowerFlex controller node, type scli --query_device_latency_meters
--sds_id <sds id>.
3. To remove the storage disk device, type scli --remove_sds_device --sds_id <sds id> --device_path
<device path>.
Steps
1. Log in to the VMware vCSA HTML client.
2. Click Storage.
3. Select the PERC-01 datastore.
4. Click the VMs tab to identify existing VMs that need to be migrated to the PowerFlex storage.
5. Right-click the VM to migrate and select Migrate.
6. Select Change both compute resource and storage and click Next.
7. For a compute resource, select one of the PowerFlex controller nodes and click Next.
8. For storage, select the datastore and click Next.
9. For networks, verify that the destination networks are correct and click Next.
10. Retain the default for vMotion priority and click Next.
Wait for vMotion to finish.
11. Repeat these steps for all VMs that need to be migrated.
Steps
1. Right-click PowerFlex SVM > Power > Shut Down Guest OS and click Yes.
2. Right-click PowerFlex SVM and select Edit Settings.
3. Select the 1 TB hard drive and click OK.
4. Right-click PowerFlex SVM > Migrate and select Change Storage Only.
5. For storage, select the local datastore and click Next.
6. Verify the summary and click Finish.
Wait for vMotion to finish.
Steps
1. Log in to the VMware vCenter Server Appliance (vCSA).
2. Click Storage.
3. Right-click the PERC-01 datastore and select Unmount Datastore > Host and click OK.
4. Right-click the PERC-01 datastore and select Delete Datastore. Click Yes to confirm.
5
Convert a standalone PowerFlex controller node to non-raid mode
Steps
1. Connect to the iDRAC web interface.
2. Click Storage > Overview > Physical Disks.
3. Confirm the SSD Name is NonRAID Solid State Disk 0:1:x. If not, proceed to step 4.
4. Click Storage > Overview > Controllers.
5. In the Actions list for the PERC H755 Front (Embedded), select Reset configuration > OK > Apply now.
6. Click Job Queue.
Wait for task to complete.
7. Select Storage > Overview > Physical Disks.
8. In the Actions menu for each SSD, select Convert to Non-Raid (ensure not to select SSD 0 and SSD 1), and click OK.
9. Select Apply Later.
10. Repeat for all SSD drives.
11. Click Storage > Overview > Tasks.
12. Under Pending Operations actions, select PERC H755 FRONT (Embedded).
13. Select Apply Now.
14. Click Job Queue.
Wait for task to complete.
15. Click Storage > Overview > Physical Disks.
16. Confirm that the SSD name is NonRAID Solid State Disk 0:1:x.
Steps
1. Log in to the VMware ESXi host.
2. Select Manage > Hardware > PCI Devices.
3. Select Broadcom / LSI PERC H755 Front Device > Toggle passthrough.
4. A reboot is required; defer it until after the SDC installation.
NOTE: Ignore the VMware popup warning: Failed to configure passthrough devices.
Steps
1. Copy the SDC file to the local datastore on the VMware ESXi server.
2. Use SSH to log in to each VMware ESXi host as root.
3. Type the following command to install the storage data client (SDC): esxcli software component apply -d /
vmfs/volumes/PFMC_DS<last ip octet>/sdc.zip.
Steps
1. To configure the SDC, generate one UUID per server (https://www.guidgenerator.com/online-guid-generator.aspx).
Steps
1. Log in to the VMware vCSA.
2. Select Host and Clusters.
3. Right-click the single PowerFlex Management SVM and select Edit Settings.
4. Click Add New Device and select PCI Device.
5. Select DirectPath IO.
6. Select PCI Device = PERC H755 Front BroadCom/LSI.
7. Click OK.
8. Power on the SVM.
Steps
1. Log in as root to each of the PowerFlex SVMs.
2. To identify all available disks on the SVM, type lsblk.
3. Repeat for all PowerFlex management controller node SVMs.
Steps
1. Log in to the MDM: scli --login --username admin.
2. To add SDS storage devices, type scli --add_sds_device --sds_name <sds name> --storage_pool_name
<storage pool name> --device_path /dev/sd(x).
3. Repeat for all devices and for all PowerFlex management controller SVMs.
Steps
1. Log in as root to the primary MDM: scli --login --username admin.
2. To verify cluster status (cluster mode is 1_node), type scli --query_cluster.
Output should be similar to the following: Cluster: Mode: 1_node.
3. To convert a single node cluster to a three node cluster, type scli --switch_cluster_mode --cluster_mode
3_node --add_slave_mdm_name <standby-mdm-name> --add_tb_name <tiebreaker-mdm-name>.
4. On the three node cluster, type scli --query_cluster.
Output should be similar to the following: Cluster: Mode: 3_node.
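If you script this check, the mode can be parsed out of captured scli --query_cluster output. The sample text below is a trimmed, hypothetical approximation of that output:

```shell
# Sketch: extract the cluster mode from saved `scli --query_cluster`
# output. OUTPUT is a hypothetical, trimmed sample.
OUTPUT="Cluster:
    Mode: 3_node, State: Normal"

MODE=$(printf '%s\n' "$OUTPUT" | sed -n 's/.*Mode: \([0-9]_node\).*/\1/p')
echo "$MODE"   # 3_node
```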
Steps
1. From the PowerFlex Manager menu, click Templates.
2. On the Templates page, click Add a Template.
3. In the Add a Template wizard, click Clone an existing PowerFlex Manager template.
4. For Category, select Sample Templates. For Template to be Cloned, select Management - PowerFlex Gateway. Click
Next.
5. On the Template Information page, provide the template name, template category, template description, firmware and
software compliance, and who should have access to the service deployed from this template. Click Next.
6. On the Additional Settings page, perform the following:
a. Under Network Settings, select PowerFlex Management Network and PowerFlex Data Networks.
b. Under PowerFlex Gateway Settings, select the gateway credential or create a new credential with root and admin
users.
c. Under Cluster Settings, select Management vCenter.
7. Click Finish.
8. Select the VMware cluster and click Edit > Continue. Select the data center name from Data Center Name and a new
cluster name from Cluster Name list and port groups.
NOTE: Deploying PowerFlex gateway from PowerFlex Manager requires a management vCenter, a datacenter, a cluster,
and dvSwitch port groups for the PowerFlex management and PowerFlex data networks.
9. Expand vSphere Network Settings and select the appropriate port groups.
10. Click Save.
11. Select the VM and click Edit > Continue. Under VM Settings, select the datastore and click Save.
12. Click Publish template and click Deploy.
13. In the Deploy Service wizard, click Yes, and do the following:
a. Provide the service name, service description and click Next.
b. Under PowerFlex Gateway Settings, provide the hostname and IP address and click Next.
c. Select Deploy Now or Deploy Later to schedule the deployment and click Next.
d. Review the details in Summary page and click Finish.
After successful deployment, the PowerFlex gateway is discovered automatically.
Prerequisites
Ensure the PowerFlex management controller 2.0 Gateway is deployed and configured before proceeding.
Steps
1. Log in to the primary MDM (capture the system id on login): scli --login --username admin.
2. To identify the virtual IP addresses, type scli --query_cluster.
3. Log in to the PowerFlex gateway as root.
4. Modify the gatewayUser.properties file:
a. Type cd /opt/emc/scaleio/gateway/webapps/ROOT/WEB-INF/classes
b. Type vi gatewayUser.properties to edit the file and modify the following fields.
IP addresses should be on <MGMTIP> network:
● mdm.ip.addresses=<PRIMARY MDM IP>,<SECONDARY MDM IP 1>,<DATA 1 VIP>,<DATA 2 VIP>
● notification_method=none
● bypass_certificate_check=true
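If you prefer a non-interactive edit over vi, the same three changes can be applied with sed. This sketch creates a stand-in file under /tmp; on the gateway the real file lives at the path shown in step 4a, and the IP addresses below are placeholders:

```shell
# Sketch: apply the gatewayUser.properties edits with sed instead of vi.
# A stand-in file is created here so the sketch is self-contained.
F=/tmp/gatewayUser.properties
printf '%s\n' \
  'mdm.ip.addresses=' \
  'notification_method=email' \
  'bypass_certificate_check=false' > "$F"

# Placeholder IPs: primary MDM, secondary MDM, data1 VIP, data2 VIP.
sed -i \
  -e 's|^mdm.ip.addresses=.*|mdm.ip.addresses=192.168.105.11,192.168.105.12,192.168.151.5,192.168.152.5|' \
  -e 's|^notification_method=.*|notification_method=none|' \
  -e 's|^bypass_certificate_check=.*|bypass_certificate_check=true|' "$F"

cat "$F"
```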
Steps
1. From the jump server, browse to the PowerFlex Gateway IP address.
2. Log in as admin.
3. Navigate to the Maintain tab.
4. Complete the following fields:
● Primary MDM IP address
● MDM username
● MDM password
● LIA password
5. Click Retrieve to view the system configuration.
6. Click Cancel.
7. Click OK to accept the pop up message.
8. Populate the fields again if they are empty.
9. Click Test REST configuration.
10. Click Connect to MDM.
11. In the pop up window, type the user name and password.
Steps
1. Using the administrator credentials, log in to the jump server.
2. Copy the PowerFlex license to the primary MDM.
3. Log in to the primary MDM.
4. Run the following command to apply PowerFlex license: scli --mdm_ip <primary mdm ip> --set_license
--license_file <path to license file>
Prerequisites
Ensure the following conditions are met before you add an existing service:
● The VMware vCenter, PowerFlex gateway, switches, and hosts are discovered in the resource list.
● The PowerFlex gateway must be in the service.
NOTE: For PowerFlex management controller 2.0 with a PERC H755, the service will be in lifecycle mode.
Steps
1. In PowerFlex Manager, click Services > + Add Existing Service > Next.
2. On the Service Information page, enter a service name in the Name field.
3. Enter a description in the Description field.
4. Select Hyperconverged for the Type.
5. Select the Intelligent Catalog version applicable from the Firmware and Software Compliance.
6. Click Next.
7. Choose one of the following network automation types:
● Full network automation (FNA)
● Partial network automation (PNA)
NOTE: If you choose PNA, PowerFlex Manager skips the switch configuration step, which is normally performed
for a service with FNA. PNA allows you to work with unsupported switches. However, it also requires more manual
configuration before a deployment can proceed successfully. If you choose to use PNA, you give up the error handling
and network automation features that are available with a full network configuration that includes supported switches.
8. (Optional) In the Number of Instances field, provide the number of component instances that you want to include in the
template.
9. On the Cluster Information page, enter a name for the cluster component in the Component Name field.
10. Select values for the cluster settings:
III
Adding a PowerFlex controller node with a PERC H755 to a PowerFlex management controller 2.0
Use the procedures in this section to add a PowerFlex R650 controller node with a PERC H755 to a PowerFlex management
controller 2.0 with PowerFlex.
Before adding a PowerFlex controller node to a PowerFlex management controller 2.0, ensure that the following prerequisites
are met:
● Back up the PowerFlex management node.
● Convert the PowerFlex R650 controller node with a PERC H755 to a PowerFlex management controller 2.0. See Converting
a PowerFlex controller node with a PERC H755 to a PowerFlex management controller 2.0 for more information.
● Latest intelligent catalog is available.
● See Cabling the PowerFlex R650/R750/R6525 nodes for cabling information on PowerFlex management node.
● See Configuring the network to configure the management node switches.
6
Discovering the new resource
Whether expanding an existing service or adding a service, the first step is to discover the new resource.
Steps
1. Connect the new NICs of the PowerFlex appliance node to the access switches and the out-of-band management switch
exactly like the existing nodes in the same protection domain. For more details, see Cabling the PowerFlex R650/R750/
R6525 nodes.
2. Ensure that the newly connected switch ports are not shut down.
3. Set the IP address of the iDRAC management port, username, and password for the new PowerFlex appliance nodes.
4. Log in to PowerFlex Manager.
5. Discover the new PowerFlex appliance nodes in the PowerFlex Manager resources. For more details, see Discover resources
in the next section.
Discover resources
Use this procedure to discover and allow PowerFlex Manager access to resources in the environment. Provide the management
IP address and credential for each discoverable resource.
Prerequisites
Verify that the iDRAC network settings are configured. See Configure iDRAC network settings for more information.
NOTE: For partial network deployments, you do not need to discover the switches. The switches need to be pre-
configured. For sample configurations for Dell PowerSwitch, Cisco Nexus, and Arista switches, see the Dell EMC PowerFlex
Appliance Administration Guide.
The following are the specific details for completing the Discovery wizard steps:
** This is optional. For a new CloudLink Center deployment, the CloudLink Center is discovered automatically.
Prerequisites
● Configure the iDRAC network settings.
● Gather the IP addresses and credentials that are associated with the resources.
NOTE: PowerFlex Manager also allows you to use the name-based searches to discover a range of nodes that were
assigned the IP addresses through DHCP to iDRAC. For more information about this feature, see Dell EMC PowerFlex
Manager Online Help.
Steps
1. On the PowerFlex Manager Getting Started page, click Discover Resources.
2. On the Welcome page of the Discovery Wizard, read the instructions and click Next.
3. On the Identify Resources page, click Add Resource Type. From the Resource Type list, select the resource that you
want to discover.
4. Enter the management IP address of the resource in the IP/Hostname Range field. To discover a resource in an IP range,
provide a starting and ending IP address.
5. In the Resource State list, select Managed or Unmanaged.
6. For PowerFlex node, to discover resources into a selected node pool instead of the global pool (default), select the node
pool from the Discover into Node Pool list.
7. Select the appropriate credential from the Credentials list. See the table above for details.
8. For PowerFlex node, if you want PowerFlex Manager to automatically reconfigure the iDRAC IP addresses of the nodes it
discovers, select the Reconfigure discovered nodes with new management IP and credentials check box. This option
is not selected by default, because it is faster to discover the nodes if you bypass the reconfiguration.
NOTE: iDRAC can also be discovered by using the hostname.
NOTE: For the Resource Type, you can use a range with hostname or IP address, provided the hostname has a valid
DNS entry.
9. For PowerFlex node, select the Auto configure nodes to send alerts to PowerFlex Manager check box to have
PowerFlex Manager automatically configure nodes to send alerts to PowerFlex Manager.
10. Click Next to start discovery. On the Discovered Resources page, select the resources from which you want to collect
inventory data and click Finish. The discovered resources are listed on the Resources page.
Related information
Configure iDRAC network settings
7
Upgrade the firmware
Use this procedure to upgrade the firmware.
Steps
1. In the web browser, enter https://<ip-address-of-idrac>.
2. From the iDRAC dashboard, click Maintenance > System Update > Manual Update.
3. Click Choose File. Browse to the release appropriate Intelligent Catalog folder and select the appropriate files.
Required firmware:
● Dell iDRAC or Lifecycle Controller firmware
● Dell BIOS firmware
● Dell BOSS Controller firmware
● Dell Intel X550 or X540 or i350 firmware
● Dell Mellanox ConnectX-5 EN firmware
● PERC H755P controller firmware
4. Click Upload.
5. Click Install and Reboot.
8
Configure BOSS card
Use this procedure only if the BOSS card RAID1 is not configured.
Steps
1. Launch the virtual console, select Boot from the menu, and select BIOS setup from Boot Controls to enter the system
BIOS.
2. Power cycle the server and enter the BIOS setup.
3. From the menu, click Power > Reset System (Warm Boot).
4. From System Setup main menu, select Device Settings.
5. Select AHCI Controller in Slot x: BOSS-x Configuration Utility.
6. Select Create RAID Configuration.
7. Select both the devices and click Next.
8. Enter VD_R1_1 for name and retain the default values.
9. Click Yes to create the virtual disk and then click OK to apply the new configuration.
10. Click Next > OK.
11. Select VD_R1_1 that was created and click Back > Finish > Yes > OK.
12. Select System BIOS.
13. Select Boot Settings and enter the following settings:
● Boot Mode: UEFI
● Boot Sequence Retry: Enabled
● Hard Disk Failover: Disabled
● Generic USB Boot: Disabled
● Hard-disk Drive Placement: Disabled
● Clean all Sysprep order and variables: None
14. Click Back > Finish > Finish and click Yes to reboot the node.
15. Boot the node into BIOS mode by pressing F2 during boot.
16. Select System BIOS > Boot Settings > UEFI Settings.
17. Select UEFI Boot Sequence to change the order.
18. Click AHCI Controller in Slot 1: EFI Fixed Disk Boot Device 1 and select + to move to the top.
19. Click Back > Back > Finish to reboot the node again.
9
Convert physical disks to non-raid disks
Use this procedure to convert the physical disks on the PowerFlex management controller 2.0 to non-raid disks.
Steps
1. Connect to the iDRAC web interface.
2. Click Storage > Overview > Physical Disks.
3. Confirm the SSD Name is NonRAID Solid State Disk 0:1:x. If not, proceed to step 4.
4. Click Storage > Overview > Controllers.
5. In the Actions list for the PERC H755 Front (Embedded), select Reset configuration > OK > Apply now.
6. Click Job Queue.
Wait for task to complete.
7. Select Storage > Overview > Physical Disks.
8. In the Actions menu for each SSD, select Convert to Non-Raid (ensure not to select SSD 0 and SSD 1), and click OK.
9. Select Apply Later.
10. Repeat for all SSD drives.
11. Click Storage > Overview > Tasks.
12. Under Pending Operations actions, select PERC H755 FRONT (Embedded).
13. Select Apply Now.
14. Click Job Queue.
Wait for task to complete.
15. Click Storage > Overview > Physical Disks.
16. Confirm that the SSD name is NonRAID Solid State Disk 0:1:x.
10
Install VMware ESXi
Use this procedure to install VMware ESXi on the PowerFlex management controller.
Steps
1. Log in to iDRAC and perform the following steps:
a. Connect to the iDRAC interface and launch a virtual remote console from Dashboard and click Launch Virtual Console.
b. Select Virtual Media > Connect Virtual Media > Map CD/DVD.
c. Click Choose File and browse to the folder where the ISO file is saved, select it, and click Open.
d. Click Map Device > Close.
e. Click Boot > Virtual CD/DVD/ISO.
f. Click Yes to confirm boot action.
g. Click Power > Reset System (warm boot).
h. Click Yes.
2. Perform the following steps to install VMware ESXi:
a. On the VMware ESXi installer screen, press Enter to continue.
b. Press F11 to accept the license agreement.
c. Under Local, select ATA DELLBOSS VD as the installation location. If prompted, press Enter.
d. Select US Default as the keyboard layout and press Enter.
e. At the prompt, type the root password, and press Enter.
f. At the Confirm Install screen, press F11.
g. In Virtual Console, click Virtual Media > Disconnect Virtual Media.
h. Click Yes to un-map all devices.
i. Press Enter to reboot the PowerFlex management controller when the installation completes.
11
Configure VMware ESXi
Use this procedure to configure VMware ESXi on the PowerFlex node.
Steps
1. Press F2 to customize the system.
2. Enter the root password and press Enter.
3. Go to DCUI and select Troubleshooting Options.
4. Select Enable SSH.
5. Select Enable ESXi Shell.
6. Press ESC to exit from troubleshooting mode options.
7. Go to Direct Console User Interface (DCUI) > Configure Management Network.
8. Set Network Adapter to VMNIC2.
9. Set the ESXi Management VLAN ID to the required VLAN value.
10. Set the IPv4 ADDRESS, SUBNET MASK, and DEFAULT GATEWAY.
11. Select IPV6 Configuration > Disable IPV6 and press Enter.
12. Go to DNS Configuration and set the customer provided value.
13. Go to Custom DNS Suffixes and set the customer provided value.
14. Press ESC to exit the network configuration and press Y to apply the changes.
15. Type Y to commit the changes and the node restarts.
16. Verify the host connectivity by pinging the IP address from the jump server using the command prompt.
12
Install Dell Integrated Service Module
Use this procedure to install Dell Integrated Service Module (ISM).
Prerequisites
Download the latest supported version from Dell iDRAC Service Module.
Steps
1. Copy ISM-Dell-Web-X.X.X-XXXX.VIB-ESX7i-Live_AXX.zip to the /vmfs/volumes/<datastore>/ folder on
the PowerFlex management node running VMware ESXi.
2. SSH to the new appliance management host running VMware ESXi.
3. To install VMware vSphere 7.x Dell iDRAC service module, type esxcli software vib install -d /vmfs/
volumes/<datastore>/ISM-Dell-Web-X.X.X-XXXX.VIB-ESX7i-Live_AXX.zip.
13
Configure NTP on the host
Use this procedure to configure the NTP on the host.
Steps
1. Log in to VMware ESXi host client as root.
2. In the left pane, click Manage.
3. Click System and Time & Date.
4. Click Edit NTP Settings.
5. Select Use Network Time Protocol (enable NTP client).
6. Select Start and Stop with host from the drop-down list.
7. Enter NTP IP Addresses.
8. Click Save.
9. Click Services > ntpd.
10. Click Start.
14
Rename the BOSS datastore
Use this procedure to rename the BOSS datastore.
Steps
1. Log in to the VMware ESXi host client as root.
2. On the left pane, click Storage.
3. Click Datastores.
4. Right-click datastore1 and click Rename.
5. Enter PFMC_DS<last_ip_octet>.
6. Click Save.
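The <last_ip_octet> suffix can be derived from the host's management IP with shell parameter expansion (the address below is a placeholder):

```shell
# Sketch: build the PFMC_DS<last_ip_octet> datastore name from a
# placeholder management IP.
MGMT_IP=192.168.105.45
DS_NAME="PFMC_DS${MGMT_IP##*.}"   # strip everything through the last dot
echo "$DS_NAME"   # PFMC_DS45
```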
15
Enable PCI passthrough for PERC H755 on the PowerFlex management controller
Use this procedure to enable PCI passthrough on the PowerFlex management controller.
Steps
1. Log in to the VMware ESXi host.
2. Select Manage > Hardware > PCI Devices.
3. Select Broadcom / LSI PERC H755 Front Device > Toggle passthrough.
4. A reboot is required; defer it until after the SDC installation.
NOTE: Ignore the VMware popup warning: Failed to configure passthrough devices.
16
Install the PowerFlex storage data client (SDC) on the PowerFlex management controller
Use this procedure to install the PowerFlex storage data client (SDC) on the PowerFlex management controller.
Steps
1. Copy the SDC file to the local datastore on the VMware ESXi server.
2. Use SSH to log in to each VMware ESXi host as root.
3. Type the following command to install the storage data client (SDC): esxcli software component apply -d /
vmfs/volumes/PFMC_DS<last ip octet>/sdc.zip.
17
Configure PowerFlex storage data client (SDC) on the PowerFlex management controller
Use this procedure to manually configure the SDC on the PowerFlex management controller 2.0.
Steps
1. To configure the SDC, generate one UUID per server (https://www.guidgenerator.com/online-guid-generator.aspx).
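The UUID can also be generated locally instead of from the website; either approach produces a standard RFC 4122 UUID:

```shell
# Generate one UUID per server locally. /proc/sys/kernel/random/uuid is
# Linux-specific; uuidgen is used as a fallback where available.
UUID=$( (cat /proc/sys/kernel/random/uuid || uuidgen) 2>/dev/null )
echo "$UUID"
```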
18
Create a staging cluster and add a host
Use this procedure to create a staging cluster and add a host.
Steps
1. Create a staging cluster:
a. Log in to the VMware vSphere Client.
b. Right-click Datacenter and click New Cluster.
c. Enter the cluster name as Staging and click Next.
d. Verify the Summary and click Finish.
2. Add the host:
a. Click vCenter > Hosts and Clusters.
b. Right-click the staging cluster and click Add Host.
c. Enter FQDN of host.
d. Enter root username and password and click Next.
e. In the Security Alert dialog box, select the host and click OK.
f. Verify the summary and click Finish.
19
Assign VMware vSphere licenses
Use this procedure to assign VMware vSphere licenses to the new host.
Steps
1. Log in to the vCSA and from the menu, click Administration > Licensing > Licenses.
2. Click Assets, select the newly added VMware ESXi host, and click Assign License.
3. Select the license and click OK.
Related information
Add VMkernel adapter to the PowerFlex hyperconverged or ESXi-based compute-only node hosts
Add VMkernel adapter to the PowerFlex controller node hosts
20
Add PowerFlex controller nodes to an existing dvSwitch
Use this procedure to add PowerFlex controller nodes to an existing distributed virtual switch.
Steps
1. Log in to the VMware vSphere Client.
2. Click Home and select Networking.
3. Right-click the oob_dvswitch and select Add and Manage Hosts.
4. In the wizard, select Add Hosts and click Next:
a. Click +New Hosts, select the installed node, and click OK.
b. Click Next.
c. Select the Manage Physical Adapters check box and click Next. Clear the remaining check boxes.
d. Set VMNIC4 as the Network Adapters.
e. Click Assign uplink.
f. Select Uplink1 and click OK.
g. Click Next > Next.
5. Right-click the fe_dvswitch and select Add and Manage Hosts:
a. Select Add hosts and click Next.
b. Click +New Hosts, select the installed node, and click OK.
c. Click Next.
d. Select the Manage Physical Adapters and Manage VMkernel Adapters check boxes and click Next. Clear the
remaining check boxes.
e. Select VMNIC2 and VMNIC6 as the Network Adapters.
f. Click Assign Uplink.
g. Select Uplink1 or LAG_FE-0 and click OK.
h. After assigning the VMNICs, click Next.
i. Select VMK0 and select Assign Port Group.
j. Select the port group for the VMK0 Network Label and click OK.
k. Click Next > Next.
l. Click Finish.
6. Right-click the be_dvswitch and select Add and Manage Hosts:
a. Select Add hosts and click Next.
b. Click +New Hosts, select the installed node, and click OK.
c. Click Next.
d. Select the Manage Physical Adapters and Manage VMkernel Adapters check boxes and click Next. Clear the
remaining check boxes.
e. Select VMNIC3 and VMNIC7 as the Network Adapters.
f. Click Assign Uplink.
g. Select Uplink1 or LAG_BE-0 and click OK.
h. Click Assign Uplink and select Uplink2 or LAG_BE-1, and click OK.
i. After assigning the VMNICs, click Next > Next.
j. Click Finish.
7. Click Home and select Hosts and Clusters.
Related information
Add VMkernel adapter to the PowerFlex controller node hosts
21
Migrate the PowerFlex controller node to the PowerFlex management controller 2.0
Use this procedure to migrate the PowerFlex R650 controller node to the PowerFlex management controller 2.0.
Steps
1. Log in to the PowerFlex controller node VMware vSphere Client.
2. Click Hosts and Clusters.
3. Expand both the Staging cluster and the PowerFlex management controller 2.0 cluster. This is the cluster containing the
current controller hosts.
4. Click the host in the Staging cluster and drag it into the PowerFlex management controller 2.0 cluster. Accept the defaults
to queries, if prompted.
5. Click Finish.
6. Right-click Staging Cluster, and click Delete > Yes.
22
Manually deploy the SVM
Use this procedure to manually deploy the SVM of the selected Intelligent Catalog.
Steps
1. Log in to the VMware vCSA.
2. Select Hosts and Clusters.
3. Right-click the ESXi host and select Deploy OVF Template.
4. Select Local file > Upload file > Browse to the SVM OVA template.
5. Click Open > Next.
6. Enter pfmc-<svm-ip-address> for VM name.
7. Click Next.
8. Identify the cluster and select the node that you are deploying. Verify that there are no compatibility warnings and click
Next.
9. Click Next.
10. Review details and click Next.
11. Select Local datastore Thin Provision and Disable Storage DRS for this VM and click Next.
12. Select pfmc-sds-mgmt-<vlanid> for VM network and click Next.
13. Click Finish.
Steps
1. Right-click each SVM, and click Settings.
a. Set CPU to 12 CPUs with 12 cores per socket.
b. Select Reservation and enter the GHz value: 17.4.
c. Set Memory to 18 GB and check Reserve all guest memory (all locked).
d. Set Network Adapter 1 to the pfmc-sds-mgmt-<vlanid>.
e. Set Network Adapter 2 to the pfmc-sds-data1-<vlanid>.
f. Set Network Adapter 3 to the pfmc-sds-data2-<vlanid>.
g. Click Add New Device and select PCI Device (new single PowerFlex management controller).
h. Enable Toggle DirectPath IO.
i. PCI Device = PERC H755 Front BroadCom / LSI
j. Click OK.
2. Power on the SVM and open a console.
Steps
1. From the nmtui, select Edit Connection.
2. Select Wired connection 1 to modify the connection for pfmc-sds-mgmt-<vlanid>.
   ● IPv4 configuration: Select Automatic, change it to Manual, then select Show.
   ● Addresses: Select Add and enter the IP address of this interface (pfmc-sds-mgmt_ip) with the subnet mask (Example: 100.65.140.10/24).
Steps
1. From the nmtui, select Edit Connection.
2. Select Wired connection 2 to modify the connection for pfmc-sds-data1-<vlanid>.
   ● IPv4 configuration: Select Automatic, change it to Manual, then select Show.
   ● Addresses: Select Add and enter the IP address of this interface (pfmc-sds-data1_ip).
Steps
1. From the nmtui, select Edit Connection.
2. Select Wired connection 3 to modify the connection for pfmc-sds-data2-<vlanid>.
   ● IPv4 configuration: Select Automatic, change it to Manual, then select Show.
   ● Addresses: Select Add and enter the IP address of this interface (pfmc-sds-data2_ip).
Prerequisites
● Identify new nodes to use as MDM or tiebreaker.
● Identify the management IP address, data1 IP address, and data2 IP address (log in to each new node or SVM and run the ip addr command).
● Gather virtual interfaces for the nodes being used for the new MDM or tiebreaker, and note the interface of data1 and data2.
For example, for a PowerFlex storage-only node, the interface is bond0.152 and bond1.160. If it is an SVM, it is eth3 and
eth4.
● Identify the primary MDM.
Steps
1. SSH to each new node or SVM and assign the proper role (MDM or tiebreaker) to each.
2. Transfer the MDM, LIA, and SDS packages to the newly identified MDM cluster nodes.
NOTE: The following steps contain sample versions of PowerFlex files as examples only. Use the appropriate PowerFlex
files for your deployment.
10. Add a new tiebreaker by entering scli --add_standby_mdm --mdm_role tb --new_mdm_ip <new tb data1,data2 IPs> --new_mdm_name <new tb name>
11. Enter scli --query_cluster to find the IDs of the newly added standby MDM and standby tiebreaker.
12. To switch to a five-node cluster, enter scli --switch_cluster_mode --cluster_mode 5_node --add_slave_mdm_id <Standby MDM ID> --add_tb_id <Standby tiebreaker ID>
13. Repeat steps 1 through 9 to add standby MDM and tiebreakers on other PowerFlex nodes.
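The scli sequence from steps 10 through 12 can be sketched as follows. The IPs, names, and IDs are invented placeholders; in practice you substitute the values returned by scli --query_cluster, and the commands are echoed here as a dry run.

```shell
# Placeholder values -- substitute the IPs and IDs from your own cluster.
TB_IPS="192.168.152.15,192.168.160.15"   # data1,data2 IPs of the new tiebreaker (example)
TB_NAME="TB-2"                           # example name
STANDBY_MDM_ID="0x1"                     # from 'scli --query_cluster'
STANDBY_TB_ID="0x2"                      # from 'scli --query_cluster'

# Dry run of the command sequence; remove 'echo' to run on the primary MDM.
cmds=$(cat <<EOF
scli --add_standby_mdm --mdm_role tb --new_mdm_ip $TB_IPS --new_mdm_name $TB_NAME
scli --query_cluster
scli --switch_cluster_mode --cluster_mode 5_node --add_slave_mdm_id $STANDBY_MDM_ID --add_tb_id $STANDBY_TB_ID
EOF
)
echo "$cmds"
```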
Prerequisites
This verification uses an MTU of 8972 to verify jumbo frames between SVMs.
Steps
1. Log in to the VMware vCSA.
2. Right-click the SVM.
3. On the VM summary page, select Launch Web Console.
4. Log in to SVM as root.
5. To verify connectivity between SVMs, type ping -M do -s 8972 [destination IP].
6. Confirm connectivity for all interfaces to all SVMs.
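The per-interface checks can be looped from each SVM. The addresses below are examples only, and the ping commands are echoed as a dry run; the -M do flag forbids fragmentation, so an 8972-byte payload plus 28 bytes of headers exercises a full 9000-byte jumbo frame.

```shell
# SVM data-network IPs to verify (examples -- use your pfmc-sds-data addresses).
SVM_IPS="192.168.152.10 192.168.152.11 192.168.152.12"

# 8972 bytes of payload + 28 bytes of ICMP/IP headers = a 9000-byte frame.
cmds=$(for ip in $SVM_IPS; do
  echo ping -M do -s 8972 -c 2 "$ip"   # dry run; remove 'echo' to send the pings
done)
echo "$cmds"
```

If any ping reports "Frag needed", a switch or interface along the path is not configured for jumbo frames.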
Add SDSs
Use this procedure to add SDSs.
Steps
1. Log in to the MDM: scli --login --username admin.
2. To add SDSs, type scli --add_sds --sds_ip <pfmc-sds-data1-ip,pfmc-sds-data2-ip> --
protection_domain_name PFMC --storage_pool_name PFMC-Pool --disable_rmcache --sds_name
PFMC-SDS-<last ip octet>.
3. Repeat for each PowerFlex management controller.
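Since the SDS name is derived from the last octet of the data IP, the repetition over controllers can be scripted. The addresses below are examples; the protection domain and pool names come from step 2, and the commands are echoed as a dry run.

```shell
# Example data1,data2 IP pairs, one per PowerFlex management controller.
NODES="100.65.152.10,100.65.160.10 100.65.152.11,100.65.160.11"

cmds=$(for ips in $NODES; do
  last_octet=${ips##*.}                  # name suffix from the last octet of data2
  echo scli --add_sds --sds_ip "$ips" \
       --protection_domain_name PFMC --storage_pool_name PFMC-Pool \
       --disable_rmcache --sds_name "PFMC-SDS-${last_octet}"
done)
echo "$cmds"
```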
Steps
1. Log in as root to each of the PowerFlex SVMs.
2. To identify all available disks on the SVM, type lsblk.
3. Repeat for all PowerFlex management controller node SVMs.
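To pick out whole-disk candidates from the lsblk output, a filter like the one below can help. The listing here is an embedded sample with invented names and sizes so the sketch is self-contained; on a real SVM you would pipe lsblk -d -n -o NAME,SIZE,TYPE into the same awk filter.

```shell
# Sample 'lsblk -d -n -o NAME,SIZE,TYPE' output (invented values).
sample='sda  446.6G disk
sdb    1.8T disk
sdc    1.8T disk
sr0   1024M rom'

# Keep only TYPE == disk entries and print candidate device paths.
candidates=$(printf '%s\n' "$sample" | awk '$3 == "disk" {print "/dev/" $1}')
echo "$candidates"
```

The boot device (typically /dev/sda) must be excluded before handing devices to scli in the next procedure.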
Steps
1. Log in to the MDM: scli --login --username admin.
2. To add SDS storage devices, type scli --add_sds_device --sds_name <sds name> --storage_pool_name
<storage pool name> --device_path /dev/sd(x).
3. Repeat for all devices and for all PowerFlex management controller SVMs.
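The per-device repetition can be looped. The SDS name and device paths below are examples (they must match the SDS added earlier and the disks identified with lsblk), and the commands are echoed as a dry run.

```shell
# Example values -- use your own SDS name and the devices found with lsblk.
SDS_NAME="PFMC-SDS-10"
DEVICES="/dev/sdb /dev/sdc /dev/sdd"

cmds=$(for dev in $DEVICES; do
  echo scli --add_sds_device --sds_name "$SDS_NAME" \
       --storage_pool_name PFMC-Pool --device_path "$dev"
done)
echo "$cmds"
```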
Steps
1. Log in to the primary MDM, type: scli --login --username admin.
2. To modify the capacity pool, type scli --modify_spare_policy --protection_domain_name PFMC --
storage_pool_name PFMC-Pool --spare_percentage <percentage>.
NOTE: Spare percentage is 1/n rounded up to a whole percent (where n is the number of nodes in the cluster). For example, a three-node cluster's spare percentage is 34%.
3. Type Y to proceed.
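The 1/n rule in the note above amounts to a ceiling of 100/n. A minimal arithmetic sketch:

```shell
# Spare policy rule of thumb: reserve 1/n of capacity, rounded up to a whole percent.
nodes=3
spare=$(( (100 + nodes - 1) / nodes ))   # integer ceiling of 100/n
echo "Spare percentage for a ${nodes}-node cluster: ${spare}%"
```

For three nodes this yields 34%, matching the note; for four nodes it yields 25%.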
Steps
1. Using the administrator credentials, log in to the jump server.
2. Copy the PowerFlex license to the primary MDM.
3. Log in to the primary MDM.
4. Run the following command to apply PowerFlex license: scli --mdm_ip <primary mdm ip> --set_license
--license_file <path to license file>
Chapter 23: Add PowerFlex management controller service to PowerFlex Manager
Use this procedure to add the PowerFlex management controller to PowerFlex Manager.
Prerequisites
Ensure the following conditions are met before you add an existing service:
● The VMware vCenter, PowerFlex gateway, switches, and hosts are discovered in the resource list.
● The PowerFlex gateway must be in the service.
NOTE: For PowerFlex management controller 2.0 with a PERC H755, the service will be in lifecycle mode.
Steps
1. In PowerFlex Manager, click Services > + Add Existing Service > Next.
2. On the Service Information page, enter a service name in the Name field.
3. Enter a description in the Description field.
4. Select Hyperconverged for the Type.
5. Select the Intelligent Catalog version applicable from the Firmware and Software Compliance.
6. Click Next.
7. Choose one of the following network automation types:
● Full network automation (FNA)
● Partial network automation (PNA)
NOTE: If you choose PNA, PowerFlex Manager skips the switch configuration step, which is normally performed
for a service with FNA. PNA allows you to work with unsupported switches. However, it also requires more manual
configuration before a deployment can proceed successfully. If you choose to use PNA, you give up the error handling
and network automation features that are available with a full network configuration that includes supported switches.
8. (Optional) In the Number of Instances field, provide the number of component instances that you want to include in the
template.
9. On the Cluster Information page, enter a name for the cluster component in the Component Name field.
10. Select values for the cluster settings:
14. On the Network Mapping page, review the networks that are mapped to port groups and make any required edits and click
Next.
15. Review the Summary page and click Finish when the service is ready to be added.
16. Automatically migrate the vCLS VMs:
a. For storage pools, select PFMC-POOL.
b. Type MIGRATE VCLS VIRTUAL MACHINES.
c. Click Confirm.
Chapter 24: Update the PowerFlex management controller 2.0 service details
Use this procedure to update the PowerFlex management controller 2.0 service details.
Prerequisites
Ensure the following conditions are met before you add an existing service:
● The VMware vCenter, PowerFlex gateway, switches, and hosts are discovered in the resource list.
● The PowerFlex gateway must be in the service.
Before you update the details for a service, ensure that you run inventory for the VMware vCenter and the PowerFlex
management controller 2.0 Gateway.
1. Log in to PowerFlex Manager.
2. Select the Resources page.
3. Click the PowerFlex management controller 2.0 Gateway and Management vCenter, and click Run Inventory. Wait for the
inventory to finish.
Steps
1. On the menu bar, click Services.
2. On the Services page, click the PowerFlex management controller 2.0 Gateway and Management vCenter, and in the right
pane, click View Details.
3. On the Services Details page, in the right pane, under Services Actions, click Update Service Details.
4. Review the OS Credentials page and click Next.
PowerFlex Manager shows all nodes and credentials, regardless of whether they are in the service. This enables you to
update the username and password for a node if it has changed.
5. Review the Inventory Summary page and click Next.
6. Review the Summary page and click Finish.
Part IV: Adding a PowerFlex management node to a PowerFlex management controller 1.0 with VMware vSAN
Use the procedures in this section to add a PowerFlex management node to a PowerFlex management controller 1.0 with
VMware vSAN.
Before adding a PowerFlex management node, you must complete the initial set of expansion procedures that are common to all
expansion scenarios covered in Performing the initial expansion procedures.
After adding a PowerFlex management node, see Completing the expansion.
Chapter 25: Hardware requirement
Node chassis type: PowerFlex R640 node
Role: Controller
Data drive configuration: SSD (1.92 TB)
Boot: BOSS
Data drives: H740 controller, 5x SSD
Network:
● rNDC: Dual port Mellanox CX4-LX (connected through 1x10 GB to access switches and operating at 10 G).
● PCIe: Dual port Mellanox CX4-LX (connected through 1x10 GB to access switches and operating at 10 G).
● PCIe: Intel X550 dual port 1 GbE (baseT) connected to OOBM switch.
Description: Dual CPU, 192 GB RAM (small controller)
Chapter 26: Upgrade the firmware
Use this procedure to upgrade the firmware.
Steps
1. In the web browser, enter https://<ip-address-of-idrac>.
2. From the iDRAC dashboard, click Maintenance > System Update > Manual Update.
3. Click Choose File. Browse to the release appropriate Intelligent Catalog folder and select the appropriate file.
Required firmware:
● Dell iDRAC or Lifecycle Controller firmware
● Dell BIOS firmware
● Dell BOSS Controller firmware
● Dell Intel X550 or X540 or i350 firmware
● Dell Mellanox ConnectX-5 EN firmware
● PERC H740P controller firmware
4. Click Upload.
5. Click Install and Reboot.
Chapter 27: Configure enhanced HBA mode
Use this procedure to enable enhanced HBA mode on the PowerFlex management node with Dell PERC H740P mini RAID
controller cards.
Steps
1. Log in to BIOS setup.
2. Go to the device settings and click Integrated RAID Controller1:Dell <PERC H740P Mini> Configuration Utility.
3. Click Main menu > Controller management > Advanced Controller management > Manage controller Mode.
4. Click Switch to Enhanced HBA Controller Mode.
5. Select Confirm > Yes and click OK.
6. Click Back > Back > Back > Finish > Finish and click Yes to exit from the BIOS and restart.
Chapter 28: Configure BOSS card
Use this procedure only if the BOSS card RAID1 is not configured.
Steps
1. Launch the virtual console, select Boot from the menu, and select BIOS setup from Boot Controls to enter the system
BIOS.
2. Power cycle the server and enter the BIOS setup.
3. From the menu, click Power > Reset System (Warm Boot).
4. From System Setup main menu, select Device Settings.
5. Select AHCI Controller in SlotX: BOSS-X Configuration Utility.
6. Select Create RAID Configuration.
7. Select both the devices and click Next.
8. Enter VD_R1_1 for name and retain the default values.
9. Click Yes to create the virtual disk and then click OK to apply the new configuration.
10. Click Next > OK.
11. Select VD_R1_1 that was created and click Back > Finish > Yes > OK.
12. Select System BIOS.
13. Select Boot Settings and enter the following settings:
● Boot Mode: UEFI
● Boot Sequence Retry: Enabled
● Hard Disk Failover: Disabled
● Generic USB Boot: Disabled
● Hard-disk Drive Placement: Disabled
● Clean all Sysprep order and variables: None
14. Click Back > Finish > Finish and click Yes to reboot the node.
15. Boot the node into BIOS mode by pressing F2 during boot.
16. Select System BIOS > Boot Settings > UEFI Settings.
17. Select UEFI Boot Sequence to change the order.
18. Click AHCI Controller in Slot 1: EFI Fixed Disk Boot Device 1 and select + to move to the top.
19. Click Back > Back > Finish to reboot the node again.
Chapter 29: Install VMware ESXi
Use this procedure to install VMware ESXi on the PowerFlex management controller.
Steps
1. Log in to iDRAC and perform the following steps:
a. Connect to the iDRAC interface and launch a virtual remote console from Dashboard and click Launch Virtual Console.
b. Select Virtual Media > Connect Virtual Media > Map CD/DVD.
c. Click Choose File and browse to the folder where the ISO file is saved, select it, and click Open.
d. Click Map Device > Close.
e. Click Boot > Virtual CD/DVD/ISO.
f. Click Yes to confirm boot action.
g. Click Power > Reset System (warm boot).
h. Click Yes.
2. Perform the following steps to install VMware ESXi:
a. On the VMware ESXi installer screen, press Enter to continue.
b. Press F11 to accept the license agreement.
c. Under Local, select ATA DELLBOSS VD as the installation location. If prompted, click Enter.
d. Select US Default as the keyboard layout and press Enter.
e. At the prompt, type the root password, and press Enter.
f. At the Confirm Install screen, press F11.
g. In Virtual Console, click Virtual Media > Disconnect Virtual Media.
h. Click Yes to un-map all devices.
i. Press Enter to reboot the PowerFlex management controller when the installation completes.
Chapter 30: Configure VMware ESXi
Use this procedure to configure VMware ESXi on the PowerFlex node.
Steps
1. Press F2 to customize the system.
2. Enter the root password and press Enter.
3. Go to Direct Console User Interface (DCUI) > Configure Management Network.
4. Set Network Adapter to VMNIC2 and VMNIC0.
5. Set the ESXi Management VLAN ID to the required VLAN value.
6. Set the IPv4 ADDRESS, SUBNET MASK, and DEFAULT GATEWAY.
7. Select IPV6 Configuration > Disable IPV6 and reboot.
8. Press Esc to exit the network configuration and press Y to apply the changes.
9. Go to DNS Configuration and set the customer provided value.
10. Go to Custom DNS Suffixes and set the customer provided value.
11. Go to DCUI and select Troubleshooting Options.
12. Select Enable SSH.
13. Select Enable ESXi Shell.
14. Press Esc to exit from troubleshooting mode options.
15. Go to DCUI IPv6 Configuration.
16. Select Configure Management Network > IPv6 Configuration.
17. Disable IPv6.
18. Press ESC to return to the DCUI.
19. Type Y to commit the changes and the node restarts.
20. Verify the host connectivity by pinging the IP address from the jump server using the command prompt.
Chapter 31: Modify the existing VM network
Use this procedure to modify the existing VM network.
Steps
1. Log in to VMware ESXi host client as root.
2. On the left pane, click Networking.
3. Right-click VM Network and click Edit Settings.
4. Change VLAN ID to flex-node-mgmt-<vlanid> and click Save.
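At this stage the VM Network port group is still on a standard vSwitch, so the same VLAN change can also be made from the ESXi shell with esxcli. The VLAN ID below is an example, and the command is echoed as a dry run.

```shell
VLAN_ID=105   # example flex-node-mgmt VLAN ID; use your site's value

# Dry run; remove 'echo' to change the port group VLAN on the ESXi host.
cmd="esxcli network vswitch standard portgroup set --portgroup-name \"VM Network\" --vlan-id $VLAN_ID"
echo "$cmd"
```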
Chapter 32: Configure NTP on the host
Use this procedure to configure the NTP on the host.
Steps
1. Log in to VMware ESXi host client as root.
2. In the left pane, click Manage.
3. Click System and Time & Date.
4. Click Edit NTP Settings.
5. Select Use Network Time Protocol (enable NTP client).
6. Select Start and Stop with host from the drop-down list.
7. Enter NTP IP Addresses.
8. Click Save.
9. Click Services > ntpd.
10. Click Start.
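On recent ESXi releases (7.0 U1 and later), the same NTP configuration can be applied from the ESXi shell with esxcli instead of the host client. The server address is an example, and the commands are echoed as a dry run.

```shell
NTP_SERVER="100.65.10.250"   # example; use your site's NTP server address

# Dry run of the esxcli calls; remove 'echo' to run them on the ESXi host.
cmd="esxcli system ntp set --server $NTP_SERVER --enabled true"
echo "$cmd"
echo "esxcli system ntp get"   # verify the applied settings
```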
Chapter 33: Create a data center and add a host
Use this procedure to create a data center and then add a host to the data center.
Steps
Create a data center:
1. Log in to the VMware vSphere Client.
2. Right-click vCenter and click New Datacenter.
3. Enter data center name as PowerFlex Management and click OK.
Add a host to the data center:
NOTE: From vCSA 7.0 Ux onward, the vCLS VMs are deployed on the local datastore when the node is added to the cluster. These VMs are deployed automatically by VMware vCenter. When you add the host cluster, they are used for managing the HA and DRS services on the cluster.
4. Right-click Datacenter and click New Cluster.
5. Enter the cluster name as PowerFlex Management Cluster and retain the default for DRS, HA, and vSAN. Click OK.
6. Right-click the cluster and click Add Host to add multiple hosts.
7. Enter FQDN of host.
8. Enter root username and password and click Next.
9. Select the certificate and click OK for certificate alert.
10. Verify the Host Summary and click Next.
11. For VMware ESXi prior to version 7.0 Ux, perform:
a. Select the valid license and click Next.
b. Select Disabled and click Next > Next.
12. Verify the summary and click Finish.
NOTE: If the node goes into maintenance mode, right-click the VMware ESXi host and click Maintenance Mode > Exit
Maintenance Mode.
Chapter 34: Add hosts to an existing dvswitch
Use this procedure to add hosts to an existing dvswitch on the PowerFlex management controller.
Steps
1. Log in to VMware vCenter with administrator credentials.
2. Right-click an existing fe_dvswitch.
3. Click Add and Manage Hosts.
4. Click +New Hosts, select the newly added hosts, and click OK.
5. Click Assign Uplink and select lag1-0 for the appropriate VMNIC.
6. Click Assign Uplink and select lag1-1 for the appropriate VMNIC.
7. Click Next and assign port group.
8. Select the mgmt vlan port group and click OK.
9. Retain the default values and click Finish.
10. Repeat step 1 to 9 for be_dvswitch.
11. Repeat these steps for the remaining hosts.
Chapter 35: Add VMkernel adapters to the hosts
Use this procedure to add VMkernel adapters to the hosts.
Steps
1. Log in to the VMware vSphere Client.
2. Select the host and click Configure in the right pane.
3. Under Networking tab, select the VMkernel adapter.
4. Click Add.
5. Select Connection type as VMkernel network adapter and click Next.
6. Select Target device as Existing network and click Browse to select the appropriate port group.
7. In the port properties, select Enable services, select the appropriate service, and click Next.
For example, for vMotion, select vMotion and for vSAN, select vSAN. For any other networks, retain the default service.
8. In IPV4 Settings, select Use static IPV4 settings, provide the appropriate IP address and subnet details, and click Next.
9. Verify the details on Ready to Complete and click Finish.
10. Repeat the steps 2 through 9 to create the VMkernel adapters for the following port groups:
● flex-vmotion-<vlanid>
● flex-vsan-<vlanid>
● flex-vcsa-ha-<vlanid>
Chapter 36: Configure vSAN on the management cluster
Use this procedure to configure vSAN on the management cluster.
Steps
1. Right-click the management cluster, select Configure, and click vSAN.
2. Select single site cluster.
3. Leave all the values to default and click Next.
4. Claim the disks from all nodes.
5. Select Claim for Capacity tier and Cache tier.
6. Retain all the values to default and click Next.
7. Click Finish.
Chapter 37: Migrate VMware vCenter server appliance 7.0 from PERC-01 datastore to vSAN datastore
Use this procedure to migrate VMware vCenter server appliance 7.0 from PERC-01 datastore to vSAN datastore.
Steps
1. Log in to VMware vCenter using the admin credentials.
2. Right-click the VM and select Migrate.
3. Select Change both compute and storage and click Next.
4. Select the new node and click Next.
5. Select the vSAN datastore and click Next.
6. Retain the default values and click Next.
7. Click Finish.
8. Repeat steps 2 through 7 for the remaining VMs. Verify all VMs are migrated to vSAN datastore.
Chapter 38: Claim disks from PowerFlex management node
Use this procedure to claim disks from the PowerFlex management node.
Steps
1. Log in to the VMware vCenter using admin credentials.
2. Place the node in maintenance mode.
3. Select the datastore and click Delete to delete the RAID datastore from the PowerFlex management node.
4. Restart the node and log in to BIOS.
5. Delete the virtual disk from the host and restart.
6. Log in to BIOS and configure the enhanced HBA controller mode:
a. Go to the device settings and click Integrated RAID Controller1:Dell <PERC H740P Mini> Configuration Utility.
b. Click Main menu > Controller management > Advanced Controller management > Manage controller Mode.
c. Click Switch to Enhanced HBA Controller Mode.
d. Select Confirm > Yes and click OK.
e. Click Back > Back > Back > Finish > Finish and click Yes to exit from the BIOS and restart.
7. Restart the node.
8. Select the cluster and vSAN.
9. Claim the disks and select Claim for Capacity tier and Cache tier.
Chapter 39: Enable VMware vCSA high availability on PowerFlex management controller vCSA
Use this procedure to enable VMware vCSA high availability on the PowerFlex management controller vCSA.
Prerequisites
Valid credentials are required for the PowerFlex management controller vCSA.
Steps
1. Go to the vCenter instance from Host and Cluster view in the VMware vSphere Client.
2. From the Configure tab, click Settings > vCenter HA.
3. Click Set Up vCenter HA.
4. On the Resource Settings page, perform the following steps:
a. For an active node, click Browse and select the vCSA HA VLAN from the list.
b. For a passive node, perform the following steps:
i. Click Edit.
ii. Select the datacenter.
iii. Click Next.
iv. Select the host and click Next.
v. Select the vSAN datastore and click Next.
vi. Select the network for management and vCenter HA and click Next.
vii. Verify the details on the Summary page and click Finish.
c. For a witness node, perform the following steps:
i. Select the datacenter and click Next.
ii. Select the host and click Next.
iii. Select the vSAN datastore and click Next.
iv. Select the network for vCenter HA and click Next.
v. Verify the details on the Summary page and click Finish.
5. On the IP settings page, enter the vCenter HA IP addresses for active, passive, and witness nodes and click Finish.
6. Wait for the task to complete and verify that the vCenter HA status appears as Mode:Enabled and State:Healthy.
Chapter 40: Migrate vCLS VMs
Use the following procedure to migrate the vSphere Cluster Services (vCLS) VMs manually on the controller setup.
Steps
1. Log in to VMware vCSA HTML client using the credentials provided in the Workbook.
2. Go to VMs and templates inventory or Administration > vCenter Server Extensions > vSphere ESX Agent Manager
> VMs to view the VMs.
The VMs are in the vCLS folder after the host is added to the cluster.
3. Right-click the VM and click Migrate.
4. In the Migrate dialog box, click Yes.
5. On the Select a migration type page, select Change storage only and click Next.
6. On the Select storage page, select vSAN datastore for controller nodes.
7. Click Next > Finish.
8. Repeat these steps to migrate all the vCLS VMs.
Part V: Adding a PowerFlex R650/R750/R6525 node to a PowerFlex Manager service in managed mode
Expanding a PowerFlex appliance requires the use of an existing PowerFlex gateway. Use the procedures in this section to add a
PowerFlex R650/R750/R6525 node to the PowerFlex Manager services discovered in managed mode.
There are four types of PowerFlex appliance environment expansions:
● Expanding a PowerFlex management controller 2.0 with HBA355i. The PowerFlex management controller 2.0 requires a
separate PowerFlex Gateway.
● Expanding an existing PowerFlex Manager service. This option also expands the existing protection domain if the service is
hyperconverged, storage-only, or compute-only.
● Creating a new PowerFlex Manager service. If the service is hyperconverged or storage-only, this option lets you either
expand to the existing protection domain or create a protection domain.
● Expanding a service in PowerFlex Manager that is discovered in a lifecycle mode. See Adding a PowerFlex R640/R740xd/R840 node to a PowerFlex Manager service in lifecycle mode for more information.
Before adding a PowerFlex node in managed mode, complete the initial set of expansion procedures that are common to all
expansion scenarios covered in Performing the initial expansion procedures.
The PowerFlex R650 controller node with PowerFlex can have either of the following RAID controllers:
● PERC H755: PowerFlex Manager puts a PowerFlex management controller 2.0 with PERC H755 service in lifecycle mode.
If you are adding a PowerFlex controller node to a PowerFlex management controller 2.0 with PowerFlex, delete the RAID
and convert the physical disks to non-RAID disks. Use the manual expansion procedures in Converting a PowerFlex controller
node with a PERC H755 to a PowerFlex management controller 2.0 and Adding a PowerFlex controller node with a PERC
H755 to a PowerFlex management controller 2.0.
● HBA355: PowerFlex Manager puts a PowerFlex management controller 2.0 with HBA355 service in managed mode. Use
PowerFlex Manager expansion procedures in this section to add a PowerFlex controller node with HBA355 to a PowerFlex
management controller 2.0.
NOTE: From PowerFlex Manager 3.8 onwards, deployment and management of Windows-based PowerFlex compute-only nodes is not supported. To deploy these nodes manually, see Deploying Windows-based PowerFlex compute-only nodes manually.
After adding a PowerFlex node in managed mode, see Completing the expansion.
Chapter 41: Discovering the new resource
Whether expanding an existing service or adding a service, the first step is to discover the new resource.
Steps
1. Connect the new NICs of the PowerFlex appliance node to the access switches and the out-of-band management switch exactly like the existing nodes in the same protection domain. For more details, see Cabling the PowerFlex R650/R750/R6525 nodes.
2. Ensure that the newly connected switch ports are not shut down.
3. Set the IP address of the iDRAC management port, username, and password for the new PowerFlex appliance nodes.
4. Log in to PowerFlex Manager.
5. Discover the new PowerFlex appliance nodes in the PowerFlex Manager resources. For more details, see Discover resources
in the next section.
Discover resources
Use this procedure to discover and allow PowerFlex Manager access to resources in the environment. Provide the management
IP address and credential for each discoverable resource.
Prerequisites
Verify that the iDRAC network settings are configured. See Configure iDRAC network settings for more information.
NOTE: For partial network deployments, you do not need to discover the switches. The switches need to be preconfigured. For sample configurations for Dell PowerSwitch, Cisco Nexus, and Arista switches, see the Dell EMC PowerFlex Appliance Administration Guide.
The following are the specific details for completing the Discovery wizard steps:
NOTE: Discovering a CloudLink Center is optional. For a new CloudLink Center deployment, the CloudLink Center is discovered automatically.
Prerequisites
● Configure the iDRAC network settings.
● Gather the IP addresses and credentials that are associated with the resources.
NOTE: PowerFlex Manager also allows you to use name-based searches to discover a range of nodes that were assigned IP addresses through DHCP to iDRAC. For more information about this feature, see Dell EMC PowerFlex Manager Online Help.
Steps
1. On the PowerFlex Manager Getting Started page, click Discover Resources.
2. On the Welcome page of the Discovery Wizard, read the instructions and click Next.
3. On the Identify Resources page, click Add Resource Type. From the Resource Type list, select the resource that you
want to discover.
4. Enter the management IP address of the resource in the IP/Hostname Range field. To discover a resource in an IP range,
provide a starting and ending IP address.
5. In the Resource State list, select Managed or Unmanaged.
6. For PowerFlex node, to discover resources into a selected node pool instead of the global pool (default), select the node
pool from the Discover into Node Pool list.
7. Select the appropriate credential from the Credentials list. See the table above for details.
8. For PowerFlex node, if you want PowerFlex Manager to automatically reconfigure the iDRAC IP addresses of the nodes it
discovers, select the Reconfigure discovered nodes with new management IP and credentials check box. This option
is not selected by default, because it is faster to discover the nodes if you bypass the reconfiguration.
NOTE: iDRAC can also be discovered using a hostname.
NOTE: For the Resource Type, you can use a range with hostname or IP address, provided the hostname has a valid
DNS entry.
9. For PowerFlex node, select the Auto configure nodes to send alerts to PowerFlex Manager check box to have
PowerFlex Manager automatically configure nodes to send alerts to PowerFlex Manager.
10. Click Next to start discovery. On the Discovered Resources page, select the resources from which you want to collect
inventory data and click Finish. The discovered resources are listed on the Resources page.
Related information
Configure iDRAC network settings
Chapter 42: Sanitize the NVDIMM
Use this procedure to sanitize the NVDIMM (if present) on the newly added node before expansion of the node.
Prerequisites
● Ensure that the NVDIMM version is same as the other nodes in the cluster.
To verify the NVDIMM version, perform the following:
1. Log in to the iDRAC.
2. Go to System > Inventory.
3. Expand the Firmware inventory and look for entries describing DIMMs.
The following describes what steps to take in the iDRAC or system BIOS, depending on the NVDIMM firmware version:
● Lower than the firmware running in production: Upgrade the NVDIMM firmware in iDRAC.
NOTE: Sanitizing the NVDIMM for a firmware upgrade is not required.
● Perform the following steps to upload the NVDIMM firmware:
1. From the iDRAC Web Browser section, click Maintenance > System Update.
2. Click Choose File. Browse to the appropriate intelligent catalog release version folder and select the NVDIMM firmware
file, and click Upload.
3. Click Install and Reboot or Install Next Reboot.
The Updating Job Queue message displays.
4. Click the Job Queue page to monitor the progress of the install.
Steps
1. Reboot the server.
2. Press F2 immediately to enter System Setup.
3. Go to System BIOS > Memory Settings > Persistent Memory > NVDIMM-N Persistent Memory.
System BIOS displays the NVDIMM information for the system.
4. Select the NVDIMMs installed on the node.
5. Find the Sanitize NVDIMM setting in the list and select the Enabled option.
A warning appears that NVDIMM data will be erased if changes are saved when exiting BIOS.
6. Click OK.
7. Click Back > Back > Back to exit to System BIOS Settings, and then click Finish > Yes > OK.
8. Click Finish, and then at the prompt click OK.
The system reboots.
Chapter 43: Expanding a PowerFlex appliance service
Use this section to expand an existing service in a PowerFlex appliance environment.
Steps
1. Log in to PowerFlex Manager.
2. On the menu bar, click Settings and select Virtual Appliance Management.
3. In the Compatibility Management section, click Add/Edit.
4. If you are using Secure Remote Services, click Download from Secure Remote Services (Recommended).
5. If you are not using Secure Remote Services, download the compatibility file from Dell Technologies Support site to the jump
server.
6. Click Upload from Local to use a local file. Then, click Choose File to select the GPG file and click Save.
Steps
1. On the menu bar, click Services.
2. Select the service to which you are adding a PowerFlex node to view its details.
3. On the Service Details page, click Add Resources. Select Add Nodes.
4. In the Duplicate Node wizard:
a. From the Resource to Duplicate list, select an existing node that will be duplicated on the additional node.
b. In the Number of Instances box, set the number of instances to 1 and click Next.
c. Under PowerFlex Settings, specify the PowerFlex Storage Pool Spare Capacity setting. For replication-enabled
services, verify and set the journal capacity depending on the requirement.
d. Under OS Settings, set the Host Name Selection. If you select Specify at Deployment, provide a name for the host
in the Host Name field. If you select Auto-Generate, specify a template for the name in the Host Name Template
field.
e. If you are adding a node to a hyperconverged service, specify the Host Name Selection under SVM OS Settings and
provide details about the hostname, as you did for the OS Settings.
Based on the component type, the required settings and properties are displayed automatically and can be edited as
permitted for a node expansion.
Steps
1. Using the administrator credentials, log in to the jump server.
2. Copy the PowerFlex license to the primary MDM.
3. Log in to the primary MDM.
4. Run the following command to apply PowerFlex license: scli --mdm_ip <primary mdm ip> --set_license
--license_file <path to license file>
Steps
1. Log in to CloudLink Center using secadmin credentials.
2. Click Agents > Machines.
3. Ensure that the status of newly added machines is in Connected state.
Chapter 44: Expanding a PowerFlex appliance with a new service
You can expand a PowerFlex appliance environment with a new service, either by cloning a template or by editing an existing template.
Ensure the new PowerFlex appliance nodes are discovered. See the following sections for details about each step mentioned
here.
● Clone a template. You can also edit an existing template. The template that you edit depends on the expansion requirements.
● Deploy a service using the newly created and published template.
● Add a volume.
● After expanding the PowerFlex hyperconverged node or PowerFlex storage-only node, redistribute the MDM cluster.
Redistributing the MDM cluster is not applicable for a PowerFlex compute-only node.
Cloning a template
The clone feature allows you to copy an existing template into a new template. A cloned template contains the components that
existed in the original template. You can edit it to add additional components or modify the cloned components.
Prerequisites
A minimum of two logical data networks is required. Optionally, you can configure four logical data networks. Verify the number of logical data networks configured in the existing setup, and configure the logical data networks accordingly while creating the template.
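The prerequisite above reduces to a one-line check. A minimal sketch, with an illustrative function name:

```python
def valid_logical_data_network_count(count: int) -> bool:
    """A template supports exactly two or four logical data networks."""
    return count in (2, 4)

assert valid_logical_data_network_count(2)      # minimum supported
assert valid_logical_data_network_count(4)      # optional maximum
assert not valid_logical_data_network_count(3)  # anything else is invalid
```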
Steps
1. Log in to PowerFlex Manager.
2. From the PowerFlex Manager menu bar, click Templates > My Templates.
3. Select a template, and then click Clone in the right pane.
4. In the Clone Template dialog box, enter a template name in the Template Name field.
5. Select a template category from the Template Category list. To create a template category, select Create New Category.
6. In the Template Description box, enter a description for the template.
7. To specify the version to use for compliance, select the version from the Firmware and Software Compliance list or
choose Use PowerFlex Manager appliance default catalog.
You cannot select a minimal compliance version for a template, because it only includes server firmware updates. The
compliance version for a template must include the full set of compliance update capabilities. PowerFlex Manager does not
show any minimal compliance versions in the Firmware and Software Compliance list.
8. Indicate Who should have access to the service deployed from this template by selecting one of the following options:
● Grant access to Only PowerFlex Manager Administrators.
● Grant access to PowerFlex Manager Administrators and Specific Standard and Operator Users. Click Add User(s) to add one or more standard or operator users to the list. Click Remove User(s) to remove users from the list.
● Grant access to PowerFlex Manager Administrators and All Standard and Operator Users.
9. Click Next.
10. On the Additional Settings page, provide new values for the Network Settings, OS Settings, Cluster Settings,
PowerFlex Gateway Settings, and Node Pool Settings.
If you clone a template that has a Target CloudLink Center setting, the cloned template shows this setting in the Original
Target CloudLink Center field. Change this setting by selecting a new target for the cloned template in the Select New
Target CloudLink Center setting.
When defining a template, you choose a single CloudLink Center as the target for the deployed service. If the CloudLink
Center for the service shuts down, PowerFlex Manager loses communication with the CloudLink Center. If the CloudLink
Center is part of a cluster, PowerFlex Manager moves to another CloudLink Center when you update the service details.
Steps
1. Log in to PowerFlex Manager.
2. On the menu bar, click Settings and select Virtual Appliance Management.
3. In the Compatibility Management section, click Add/Edit.
4. If you are using Secure Remote Services, click Download from Secure Remote Services (Recommended).
5. If you are not using Secure Remote Services, download the compatibility file from Dell Technologies Support site to the jump
server.
6. Click Upload from Local to use a local file. Then, click Choose File to select the GPG file and click Save.
Deploy a service
Use this procedure to deploy a service. You cannot deploy a service using a template that is in draft state. Publish the template
before using it to deploy a service.
Steps
1. On the menu bar, click Services > Deploy New Service.
2. On the Deploy Service page, perform the following steps:
a. From the Select Published Template list, select the previously defined and published hyperconverged template to
deploy the service.
b. Enter the Service Name and Service Description that identify the service.
c. To specify the version to use for compliance, select the version from the Firmware and Software Compliance list or
select Use PowerFlex Manager appliance default catalog.
PowerFlex Manager checks the VMware vCenter version to determine if it matches the VMware ESXi version for the
selected compliance version. If the VMware ESXi version is greater than the vCenter version, PowerFlex Manager blocks
the service deployment and displays an error. PowerFlex Manager instructs you to upgrade vCenter first or use a
different compliance version that is compatible with the installed vCenter version.
NOTE: Changing the firmware repository might update the firmware level on nodes for this service. The global
default firmware repository maintains the firmware on the shared devices.
d. Select one of the options from Who should have access to the service deployed from this template? drop-down
list.
NOTE: For a PowerFlex hyperconverged or storage-only node deployment, if you want to use CloudLink encryption,
perform the following:
i. Verify that CloudLink Center is deployed.
ii. In the template, under Node settings, select Enable Encryption (Software Encryption/Self Encrypting
Drive).
iii. Under PowerFlex Cluster settings, select CloudLink Center.
3. Click Next.
4. On the Deployment Settings page, configure the required settings. You can override any of the cluster settings that are
specified in the template.
If you are deploying a service with CloudLink, ensure that the correct CloudLink Center is displayed under the CloudLink
Center Settings.
5. To configure the PowerFlex Settings for a hyperconverged or storage-only service that has replication enabled in the template, specify the Journal Capacity. The default journal capacity is 10% of the overall capacity, but you can customize it as required.
6. To configure the PowerFlex Settings, select one of the following options for PowerFlex MDM Virtual IP Source:
● PowerFlex Manager Selected IP instructs PowerFlex Manager to select the virtual IP addresses.
● User Entered IP enables you to specify the IP address manually for each PowerFlex data network that is part of the
node definition in the service template.
NOTE: If you are using a PowerFlex Manager version prior to 3.8, verify that the correct disk type (NVMe, SSD, or HDD) is selected. From the Deployment Settings page, select PowerFlex Setting > Storage Pool disk type and ensure that the correct disk type (NVMe or SSD) is selected.
8. To configure Hardware Settings, select the node source from the Node Source list.
● If you select Node Pool, you can view all user-defined node pools and the global pool. Standard users can see only the
pools for which they have permission. Select the Retry On Failure option to ensure that PowerFlex Manager selects
another node from the node pool for deployment if any node fails. Each node can be retried up to five times.
● If you select Manual Entry, the Choose Node list is displayed. Select the node by its Service Tag for deployment from
the list.
9. Click Next.
10. On the Schedule Deployment page, select one of the following options and click Next:
● Deploy Now—Select this option to deploy the service immediately.
● Deploy Later—Select this option and enter the date and time to deploy the service.
11. Review the Summary page.
The Summary page gives you a preview of what the service will look like after the deployment.
12. Click Finish when you are ready to begin the deployment. For more information, see PowerFlex Manager online help.
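The journal-capacity default described in step 5 (10% of overall capacity, customizable) can be sketched as follows; the function name and GB units are illustrative assumptions:

```python
def journal_capacity_gb(total_capacity_gb: float, percent: float = 10.0) -> float:
    """Return the journal capacity: 10% of overall capacity by default,
    or a custom percentage if the deployment requires it."""
    return total_capacity_gb * percent / 100.0

print(journal_capacity_gb(2000))              # default 10% -> 200.0
print(journal_capacity_gb(2000, percent=15))  # customized  -> 300.0
```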
Steps
1. On the Services page, open the service that was deployed earlier.
2. Under Resource Actions, click Add Resources > Add Volume > Create New Volume > Next.
3. Click Add New Volume.
4. Enter the following values:
Volume 1
● Volume Name: Create New Volume …
● New Volume Name: Volume1
● Datastore Name: Create New Datastore
● New Datastore: Datastore1
● Storage Pool: Select the storage pool from the drop-down.
● Enable Compression: Select this check box (if compression is enabled for the deployment).
● Volume Size: 8 (or any multiple of 8)
● Volume Type: Thick
Volume 2
● Volume Name: Create New Volume …
● New Volume Name: Volume2
● Datastore Name: Create New Datastore
● New Datastore: Datastore2
● Storage Pool: Do not change this option.
● Enable Compression: Select this check box (if compression is enabled for the deployment).
● Volume Size: 8 (or any multiple of 8)
● Volume Type: Thick
NOTE: For PowerFlex version 3.5 and later, new Medium Granularity storage pools have the Persistent Checksum feature enabled by default. Existing Medium Granularity storage pools do not have this feature enabled during an upgrade. See the Dell EMC PowerFlex Appliance Administration Guide for the manual procedures to disable (or enable) this feature.
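The Volume Size rule in the table above (8, or any multiple of 8) can be sketched as a small validator; the helper name is illustrative:

```python
def round_up_volume_size_gb(requested_gb: int) -> int:
    """Round a requested size up to the next multiple of 8 GB,
    matching the volume-size rule for PowerFlex volumes."""
    return ((requested_gb + 7) // 8) * 8

print(round_up_volume_size_gb(8))   # 8  (already a multiple of 8)
print(round_up_volume_size_gb(10))  # 16 (rounded up)
```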
The SDS thread count is set to 8 by default. When expanding a PowerFlex storage-only node with a 16-core CPU with a new service, set the SDS thread count to 12. The MD cache is enabled by default on a PowerFlex storage-only node with an FG pool. The MD cache can be calculated using the following formula:
Steps
1. Access the wizard from the Services page or the Resource page. Click Services or Resources from the menu bar to
access the wizard.
a. Select the service or resource with the PowerFlex gateway containing the MDMs.
b. Click View Details.
c. Click Reconfigure MDM Roles. The MDM Reconfiguration page displays.
2. Review the current MDM configuration for the cluster.
3. For each MDM role that you want to reassign, use Select New Node for MDM Role to select the new hostname or IP
address. You can reassign multiple roles at a time.
4. Click Next. The Summary page displays.
5. Type Change MDM Roles to confirm the changes.
6. Click Finish.
Related information
Redistribute the MDM cluster
Chapter 45: Configuring the hyperconverged or compute-only transport nodes
This section describes how to configure the hyperconverged or compute-only nodes as part of preparing the PowerFlex appliance for NSX-T. Before you configure the VMware ESXi hosts as NSX-T transport nodes, you must add the transport distributed port groups and convert the distributed switch from LACP to individual trunks, as covered in this section.
NOTE: If you configure VMware NSX-T on PowerFlex hyperconverged or compute-only nodes and add them to PowerFlex
Manager, the services will be in lifecycle mode. If you need to perform an expansion on such a node, see Adding a
PowerFlex R640/R740xd/R840 node to a PowerFlex Manager service in lifecycle mode to add the PowerFlex node.
Contact VMware Support to configure VMware NSX-T on a new PowerFlex node and see Add PowerFlex nodes to a
service to update the service details.
Prerequisites
Ensure that the VMware vSphere vCenter Server and the VMware vSphere Client are accessible.
Steps
1. Log in to the VMware vSphere Client.
2. Click Networking.
3. Expand the PowerFlex Customer-Datacenter.
4. Right-click cust_dvswitch.
5. Click Distributed Port Group > New Distributed Port Group.
6. Update the name to pfmc-nsx-transport-121 and click Next.
7. Select the default Port binding.
8. Select the default Port allocation.
9. Select the default # of ports (default is 8).
10. Select the default VLAN as VLAN Type.
11. Set the VLAN ID to 121.
12. Clear the Customize default policies configuration check box and click Next.
13. Click Finish.
14. Right-click pfmc-nsx-transport-121 and click Edit Settings....
15. Click Teaming and failover.
16. Verify that Uplink1 and Uplink2 are moved to Active.
17. Click OK.
Prerequisites
Both Cisco Nexus access switch ports for the compute VMware ESXi hosts are configured as trunks. These ports are configured as LACP-enabled after the physical adapter is removed from each ESXi host.
WARNING: As the VMK0 (ESXi management) is not configured on cust_dvswitch, both the vmnics are first
migrated to the LAGs simultaneously and then the port channel is configured. Data connectivity to PowerFlex is
lost until the port channels are brought online with both vmnic interfaces connected to LAGs.
Steps
1. Log in to the VMware vSphere Client.
2. Look at VMware vCenter and physical switches to ensure that both ports across all hosts are up.
3. For each compute VMware ESXi host, record the physical switch ports to which vmnic5 (switch-B) and vmnic7 (switch-A) are connected.
a. Click Home, then select Hosts and Clusters and expand the compute cluster.
b. Select the first compute ESXi host in left pane, and then select Configure tab in right pane.
c. Select Virtual switches under Networking.
d. Expand cust_dvswitch.
e. Expand Uplink1, click the ellipsis (…) for vmnic7, and select View Settings.
f. Click LLDP tab.
g. Record the Port ID (switch port) and System Name (switch).
h. Repeat step 3 for vmnic5 on Uplink 2.
4. Configure LAG (LACP) on cust_dvswitch within VMware vCenter Server:
a. Click Home, then select Networking.
b. Expand the compute cluster and click cust_dvswitch > Configure > LACP.
c. Click +New to open the wizard.
d. Verify that the name is lag1.
e. Verify that the number of ports is 2.
f. Verify that the mode is Active.
g. Change Load Balancing mode to Source and destination IP address, TCP/UDP port.
h. Click OK.
5. Migrate vmnic5 to lag1-0 and vmnic7 to lag1-1 on cust_dvswitch for the compute VMware ESXi host as follows:
a. Click Home, then select Networking and expand the PowerFlex data center.
b. Right-click cust_dvswitch and select Manage host networking to open the wizard.
c. Select Add hosts... and click Next.
d. Click Attached hosts..., select all the compute ESXi hosts, and click OK.
e. Click Next.
f. For each ESXi host, select vmnic5 and click Assign uplink.
g. Click lag1-0 and click OK.
h. For each ESXi host, select vmnic7 and click Assign uplink.
i. Click lag1-1 and click OK.
j. Click Next > Next > Next > Finish.
6. Create port-channel (LACP) on switch-A for compute VMware ESXi host.
The following switch configuration is an example of a single compute VMware ESXi host.
a. Using PuTTY or a similar SSH client, open an SSH session to switch-A.
b. Create a port channel on switch-A for each compute VMware ESXi host as follows:
interface port-channel40
description to flex-compute-esxi-host01
switchport
switchport mode trunk
switchport trunk allowed vlan 151,152,153,154
spanning-tree port type edge trunk
spanning-tree bpduguard enable
speed 25000
no lacp suspend-individual
vpc 40
7. Configure channel-group (LACP) on switch-A access port (vmnic5) for each compute VMware ESXi host.
The following switch port configuration is an example of a single compute VMware ESXi host.
a. Using PuTTY or a similar SSH client, open an SSH session to switch-A.
b. Create port on switch-A as follows:
int e1/1/1
description to flex-compute-esxi-host01 – vmnic5
switchport
switchport mode trunk
switchport trunk allowed vlan 151,152,153,154
spanning-tree port type edge trunk
spanning-tree bpduguard enable
spanning-tree guard root
speed 25000
channel-group 40 mode active
8. Create port-channel (LACP) on switch-B for each compute VMware ESXi host.
The following switch configuration is an example of a single compute VMware ESXi host.
a. Using PuTTY or a similar SSH client, open an SSH session to switch-B.
b. Create a port channel on switch-B for each compute VMware ESXi host as follows:
interface port-channel40
description to flex-compute-esxi-host01
switchport
switchport mode trunk
switchport trunk allowed vlan 151,152,153,154
spanning-tree port type edge trunk
spanning-tree bpduguard enable
speed 25000
no lacp suspend-individual
vpc 40
9. Configure channel-group (LACP) on switch-B access port (vmnic7) for each compute VMware ESXi host.
The following switch port configuration is an example of a single compute VMware ESXi host.
a. Using PuTTY or a similar SSH client, open an SSH session to switch-B.
b. Create port on switch-B as follows:
int e1/1/1
description to flex-compute-esxi-host01 – vmnic7
switchport
switchport mode trunk
switchport trunk allowed vlan 151,152,153,154
spanning-tree port type edge trunk
spanning-tree bpduguard enable
spanning-tree guard root
speed 25000
channel-group 40 mode active
10. Update teaming and policy to route based on physical NIC load for each port group within cust_dvswitch:
a. Click Home and select Networking.
Chapter 46: Add Layer 3 routing between an external SDC and SDS
Use this procedure to enable external SDC-to-SDS communication and configure the PowerFlex node for external SDC reachability.
Steps
1. In the template, from Node > Network settings, select the required VLANs to enable an external SDC communication on
the SDS data interfaces.
2. From Node > Static routes, select Enabled.
3. Click Add New Static Route.
4. Select the source and destination VLANs, and manually enter the gateway IP address of the SDS data network VLAN.
5. Repeat these steps for each data VLAN.
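The per-VLAN steps above can be sketched as data: one static-route entry per SDS data VLAN, each pointing at that VLAN's gateway. The VLAN names, network, and field names below are hypothetical, not a PowerFlex API:

```python
def build_static_routes(data_vlan_gateways: dict, external_sdc_network: str) -> list:
    """Return one static-route entry per data VLAN toward the external
    SDC network (field names are illustrative only)."""
    return [
        {"source_vlan": vlan, "destination": external_sdc_network, "gateway": gateway}
        for vlan, gateway in sorted(data_vlan_gateways.items())
    ]

routes = build_static_routes(
    {"flex-data1-151": "192.168.151.1", "flex-data2-152": "192.168.152.1"},
    "10.10.10.0/24",
)
for route in routes:
    print(route)
```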
Part VI: Adding a PowerFlex R640/R740xd/R840 node to a PowerFlex Manager service in managed mode
Expanding a PowerFlex appliance requires the use of an existing PowerFlex gateway. Use the procedures in this section to add a
PowerFlex R640/R740xd/R840 node to the PowerFlex Manager services discovered in managed mode.
There are three types of PowerFlex appliance environment expansions:
● Expanding an existing PowerFlex Manager service. This option also expands the existing protection domain if the service is
hyperconverged, storage-only, or compute-only.
● Creating a new PowerFlex Manager service. If the service is hyperconverged or storage-only, this option lets you either
expand to the existing protection domain or create a protection domain.
● Expanding a service in PowerFlex Manager that is discovered in a lifecycle mode. See Adding a PowerFlex R640/R740xd/
R840 node to a PowerFlex Manager service in lifecycle mode for more information.
Before adding a PowerFlex node in managed mode, complete the initial set of expansion procedures that are common to all
expansion scenarios covered in Performing the initial expansion procedures.
After adding a PowerFlex node in managed mode, see Completing the expansion.
Related information
Install the nvme-cli tool and iDRAC Service Module (iSM)
Chapter 47: Discovering the new resource
Whether expanding an existing service or adding a service, the first step is to discover the new resource.
Steps
1. Connect the new NICs of the PowerFlex appliance node to the access switches and the out-of-band management switch
exactly like the existing nodes in the same protection domain. For more details, see Cabling the PowerFlex R640/R740xd/
R840 nodes.
2. Ensure that the newly connected switch ports are not shut down.
3. Set the IP address of the iDRAC management port, username, and password for the new PowerFlex appliance nodes.
4. Log in to PowerFlex Manager.
5. Discover the new PowerFlex appliance nodes in the PowerFlex Manager resources. For more details, see Discover resources.
Discover resources
Use this procedure to discover and allow PowerFlex Manager access to resources in the environment. Provide the management
IP address and credential for each discoverable resource.
Prerequisites
Verify that the iDRAC network settings are configured. See Configure iDRAC network settings for more information.
NOTE: For partial network deployments, you do not need to discover the switches. The switches need to be pre-
configured. For sample configurations for Dell PowerSwitch, Cisco Nexus, and Arista switches, see the Dell EMC PowerFlex
Appliance Administration Guide.
The following are the specific details for completing the Discovery wizard steps:
** This is optional. For a new CloudLink Center deployment, the CloudLink Center is discovered automatically.
Prerequisites
● Configure the iDRAC network settings.
● Gather the IP addresses and credentials that are associated with the resources.
NOTE: PowerFlex Manager also allows you to use name-based searches to discover a range of nodes whose iDRAC IP addresses were assigned through DHCP. For more information about this feature, see the Dell EMC PowerFlex Manager Online Help.
Steps
1. On the PowerFlex Manager Getting Started page, click Discover Resources.
2. On the Welcome page of the Discovery Wizard, read the instructions and click Next.
3. On the Identify Resources page, click Add Resource Type. From the Resource Type list, select the resource that you
want to discover.
4. Enter the management IP address of the resource in the IP/Hostname Range field. To discover a resource in an IP range,
provide a starting and ending IP address.
5. In the Resource State list, select Managed or Unmanaged.
6. For a PowerFlex node, to discover resources into a selected node pool instead of the global pool (default), select the node pool from the Discover into Node Pool list.
7. Select the appropriate credential from the Credentials list. See the table above for details.
8. For a PowerFlex node, if you want PowerFlex Manager to automatically reconfigure the iDRAC IP addresses of the nodes it discovers, select the Reconfigure discovered nodes with new management IP and credentials check box. This option is not selected by default, because it is faster to discover the nodes if you bypass the reconfiguration.
NOTE: iDRAC can also be discovered using a hostname.
NOTE: For the Resource Type, you can use a range with hostname or IP address, provided the hostname has a valid
DNS entry.
9. For a PowerFlex node, select the Auto configure nodes to send alerts to PowerFlex Manager check box to have PowerFlex Manager automatically configure the nodes to send alerts to it.
10. Click Next to start discovery. On the Discovered Resources page, select the resources from which you want to collect
inventory data and click Finish. The discovered resources are listed on the Resources page.
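Conceptually, the IP/Hostname Range field in step 4 enumerates every address between the starting and ending IP. A minimal sketch using Python's standard ipaddress module (the function name is illustrative):

```python
import ipaddress

def expand_ip_range(start: str, end: str) -> list:
    """List every IPv4 address from start to end, inclusive."""
    first = int(ipaddress.IPv4Address(start))
    last = int(ipaddress.IPv4Address(end))
    return [str(ipaddress.IPv4Address(n)) for n in range(first, last + 1)]

print(expand_ip_range("192.168.105.11", "192.168.105.13"))
# ['192.168.105.11', '192.168.105.12', '192.168.105.13']
```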
Related information
Configure iDRAC network settings
Chapter 48: Sanitize the NVDIMM
Use this procedure to sanitize the NVDIMM (if present) on the newly added node before expansion of the node.
Prerequisites
● Ensure that the NVDIMM version is the same as on the other nodes in the cluster.
To verify the NVDIMM version, perform the following:
1. Log in to the iDRAC.
2. Go to System > Inventory.
3. Expand the Firmware inventory and look for entries describing DIMMs.
Take the following action in the iDRAC or system BIOS, depending on the NVDIMM firmware:
● Lower than the firmware running in production: Upgrade the NVDIMM firmware in iDRAC.
NOTE: Sanitizing the NVDIMM for a firmware upgrade is not required.
● Perform the following steps to upload the NVDIMM firmware:
1. From the iDRAC web interface, click Maintenance > System Update.
2. Click Choose File. Browse to the appropriate intelligent catalog release version folder, select the NVDIMM firmware file, and click Upload.
3. Click Install and Reboot or Install Next Reboot.
The Updating Job Queue message displays.
4. Go to the Job Queue page to monitor the progress of the installation.
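The firmware comparison in the prerequisite can be sketched as a dotted-version check. This assumes a simple numeric dotted format, which may not match every NVDIMM firmware version string:

```python
def needs_nvdimm_upgrade(node_firmware: str, production_firmware: str) -> bool:
    """True when the new node's NVDIMM firmware is lower than the
    firmware running in production (numeric dotted versions assumed)."""
    def as_tuple(version: str):
        return tuple(int(part) for part in version.split("."))
    return as_tuple(node_firmware) < as_tuple(production_firmware)

print(needs_nvdimm_upgrade("9324.10", "9330.12"))  # True: upgrade in iDRAC
print(needs_nvdimm_upgrade("9330.12", "9330.12"))  # False: versions match
```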
Steps
1. Reboot the server.
2. Press F2 immediately to enter System Setup.
3. Go to System BIOS > Memory Settings > Persistent Memory > NVDIMM-N Persistent Memory.
System BIOS displays the NVDIMM information for the system.
4. Select the NVDIMMs installed on the node.
5. Find the Sanitize NVDIMM setting in the list and select the Enabled option.
A warning appears that NVDIMM data will be erased if changes are saved when exiting BIOS.
6. Click OK.
7. Click Back > Back > Back to exit to System BIOS Settings, and then click Finish > Yes > OK.
8. Click Finish, and then at the prompt click OK.
The system reboots.
Chapter 49: Expanding a PowerFlex appliance service
Use this section to expand an existing service in a PowerFlex appliance environment.
Steps
1. Log in to PowerFlex Manager.
2. On the menu bar, click Settings and select Virtual Appliance Management.
3. In the Compatibility Management section, click Add/Edit.
4. If you are using Secure Remote Services, click Download from Secure Remote Services (Recommended).
5. If you are not using Secure Remote Services, download the compatibility file from Dell Technologies Support site to the jump
server.
6. Click Upload from Local to use a local file. Then, click Choose File to select the GPG file and click Save.
Steps
1. On the menu bar, click Services.
2. Select the service to which you are adding a PowerFlex node to view its details.
3. On the Service Details page, click Add Resources. Select Add Nodes.
4. In the Duplicate Node wizard:
a. From the Resource to Duplicate list, select an existing node that will be duplicated on the additional node.
b. In the Number of Instances box, set the number of instances to 1 and click Next.
c. Under PowerFlex Settings, specify the PowerFlex Storage Pool Spare Capacity setting. For replication-enabled
services, verify and set the journal capacity depending on the requirement.
d. Under OS Settings, set the Host Name Selection. If you select Specify at Deployment, provide a name for the host
in the Host Name field. If you select Auto-Generate, specify a template for the name in the Host Name Template
field.
e. If you are adding a node to a hyperconverged service, specify the Host Name Selection under SVM OS Settings and
provide details about the hostname, as you did for the OS Settings.
f. In the IP Source box, enter an IP address.
g. Under Hardware Settings, in the Node Source box, select Node Pool or Manual Entry.
h. In the Node Pool box, select the node pool. Alternatively, if you select Manual Entry, select the specific node in the
Choose Node box.
i. Under PowerFlex Settings, specify the Fault Set for a node:
NOTE: If the PowerFlex configuration includes fault sets, contact Dell support for assistance. Do not go to the
procedure until you have received guidance from a support representative.
● PowerFlex Manager Selected Fault Set instructs PowerFlex Manager to select the fault set name based on the
template settings.
● fault-set-name enables you to select one of the fault sets in an existing protection domain.
You can add nodes within a fault set, but PowerFlex Manager does not allow you to add a new fault set within the same
service. To add a new fault set, you need to deploy a separate service with settings for the fault set you want to create.
j. Click Next.
k. Review the Summary page and click Finish.
If the node you are adding has a different type of disk than the base deployment, PowerFlex Manager displays a banner
at the top of the Summary page to inform you of the different disk types. You can still complete the node expansion.
However, your service may have sub-optimal performance.
Based on the component type, the required settings and properties are displayed automatically and can be edited as
permitted for a node expansion.
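The fault-set rule above (expand only into an existing fault set; a new fault set requires a separate service) can be sketched as a validation helper. The function and fallback string are illustrative assumptions:

```python
from typing import List, Optional

def resolve_fault_set(requested: Optional[str], existing: List[str]) -> str:
    """Select a fault set for node expansion. A missing request falls back
    to the PowerFlex Manager Selected Fault Set; an unknown name is
    rejected because new fault sets need a separate service deployment."""
    if requested is None:
        return "PowerFlex Manager Selected Fault Set"
    if requested not in existing:
        raise ValueError(
            f"Fault set '{requested}' does not exist; deploy a new service to create it."
        )
    return requested

print(resolve_fault_set("fault-set-1", ["fault-set-1", "fault-set-2"]))
```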
Steps
1. Log in to CloudLink Center using secadmin credentials.
2. Click Agents > Machines.
3. Ensure that the status of newly added machines is in Connected state.
4. Ensure the following depending on the drives:
● Self-encrypting drives (SEDs): The Status of the Devices and SED appear as Encrypted HW and Managed, respectively.
● Non-SEDs: The Status of the Devices for each newly added machine appears as Encrypted.
Chapter 50: Expanding a PowerFlex appliance with a new service
You can expand a PowerFlex appliance environment with a service. You can expand a service through cloning a template or
editing an existing template.
Ensure the new PowerFlex appliance nodes are discovered. See the following sections for details about each step mentioned
here.
● Clone a template. You can also edit an existing template. The template that you edit depends on the expansion requirements.
● Deploy a service using the newly created and published template.
● Add a volume.
● After expanding the PowerFlex hyperconverged node or PowerFlex storage-only node, redistribute the MDM cluster.
Redistributing the MDM cluster is not applicable for a PowerFlex compute-only node.
Cloning a template
The clone feature allows you to copy an existing template into a new template. A cloned template contains the components that
existed in the original template. You can edit it to add additional components or modify the cloned components.
Prerequisites
A minimum of two logical data networks is required. Optionally, you can configure four logical data networks. Verify the number of logical data networks configured in the existing setup, and configure the logical data networks accordingly while creating the template.
Steps
1. Log in to PowerFlex Manager.
2. From the PowerFlex Manager menu bar, click Templates > My Templates.
3. Select a template, and then click Clone in the right pane.
4. In the Clone Template dialog box, enter a template name in the Template Name field.
5. Select a template category from the Template Category list. To create a template category, select Create New Category.
6. In the Template Description box, enter a description for the template.
7. To specify the version to use for compliance, select the version from the Firmware and Software Compliance list or
choose Use PowerFlex Manager appliance default catalog.
You cannot select a minimal compliance version for a template, because it only includes server firmware updates. The
compliance version for a template must include the full set of compliance update capabilities. PowerFlex Manager does not
show any minimal compliance versions in the Firmware and Software Compliance list.
8. Indicate Who should have access to the service deployed from this template by selecting one of the following options:
● Grant access to Only PowerFlex Manager Administrators.
● Grant access to PowerFlex Manager Administrators and Specific Standard and Operator Users. Click Add User(s) to add one or more standard or operator users to the list. Click Remove User(s) to remove users from the list.
● Grant access to PowerFlex Manager Administrators and All Standard and Operator Users.
9. Click Next.
10. On the Additional Settings page, provide new values for the Network Settings, OS Settings, Cluster Settings,
PowerFlex Gateway Settings, and Node Pool Settings.
If you clone a template that has a Target CloudLink Center setting, the cloned template shows this setting in the Original
Target CloudLink Center field. Change this setting by selecting a new target for the cloned template in the Select New
Target CloudLink Center setting.
When defining a template, you choose a single CloudLink Center as the target for the deployed service. If the CloudLink
Center for the service shuts down, PowerFlex Manager loses communication with the CloudLink Center. If the CloudLink
Center is part of a cluster, PowerFlex Manager moves to another CloudLink Center when you update the service details.
Steps
1. Log in to PowerFlex Manager.
2. On the menu bar, click Settings and select Virtual Appliance Management.
3. In the Compatibility Management section, click Add/Edit.
4. If you are using Secure Remote Services, click Download from Secure Remote Services (Recommended).
5. If you are not using Secure Remote Services, download the compatibility file from Dell Technologies Support site to the jump
server.
6. Click Upload from Local to use a local file. Then, click Choose File to select the GPG file and click Save.
Deploy a service
Use this procedure to deploy a service. You cannot deploy a service using a template that is in draft state. Publish the template
before using it to deploy a service.
Steps
1. On the menu bar, click Services > Deploy New Service.
2. On the Deploy Service page, perform the following steps:
a. From the Select Published Template list, select the previously defined and published hyperconverged template to
deploy the service.
b. Enter the Service Name and Service Description that identify the service.
c. To specify the version to use for compliance, select the version from the Firmware and Software Compliance list or
select Use PowerFlex Manager appliance default catalog.
PowerFlex Manager checks the VMware vCenter version to determine if it matches the VMware ESXi version for the
selected compliance version. If the VMware ESXi version is greater than the vCenter version, PowerFlex Manager blocks
the service deployment and displays an error. PowerFlex Manager instructs you to upgrade vCenter first or use a
different compliance version that is compatible with the installed vCenter version.
NOTE: Changing the firmware repository might update the firmware level on nodes for this service. The global
default firmware repository maintains the firmware on the shared devices.
d. Select one of the options from the Who should have access to the service deployed from this template? drop-down list.
NOTE: For a PowerFlex hyperconverged or storage-only node deployment, if you want to use CloudLink encryption,
perform the following:
i. Verify that CloudLink Center is deployed.
ii. In the template, under Node settings, select Enable Encryption (Software Encryption/Self Encrypting
Drive).
iii. Under PowerFlex Cluster settings, select CloudLink Center.
3. Click Next.
4. On the Deployment Settings page, configure the required settings. You can override any of the cluster settings that are
specified in the template.
If you are deploying a service with CloudLink, ensure that the correct CloudLink Center is displayed under the CloudLink
Center Settings.
5. To configure the PowerFlex Settings for a hyperconverged or storage-only service that has replication enabled in the
template, specify the Journal Capacity. The default journal capacity is 10% of the overall capacity, but you can customize
it to suit your requirements.
6. To configure the PowerFlex Settings, select one of the following options for PowerFlex MDM Virtual IP Source:
● PowerFlex Manager Selected IP instructs PowerFlex Manager to select the virtual IP addresses.
● User Entered IP enables you to specify the IP address manually for each PowerFlex data network that is part of the
node definition in the service template.
NOTE: Verify that the correct disk type (NVMe, SSD, or HDD) is selected. From the Deployment Settings page,
select PowerFlex Setting > Storage Pool disk type and confirm the selection.
8. To configure Hardware Settings, select the node source from the Node Source list.
● If you select Node Pool, you can view all user-defined node pools and the global pool. Standard users can see only the
pools for which they have permission. Select the Retry On Failure option to ensure that PowerFlex Manager selects
another node from the node pool for deployment if any node fails. Each node can be retried up to five times.
● If you select Manual Entry, the Choose Node list is displayed. Select the node by its Service Tag for deployment from
the list.
9. Click Next.
10. On the Schedule Deployment page, select one of the following options and click Next:
● Deploy Now—Select this option to deploy the service immediately.
● Deploy Later—Select this option and enter the date and time to deploy the service.
11. Review the Summary page.
The Summary page gives you a preview of what the service will look like after the deployment.
12. Click Finish when you are ready to begin the deployment. For more information, see PowerFlex Manager online help.
Steps
1. On the Services page, open the service that was deployed earlier.
2. Under Resource Actions, click Add Resources > Add Volume > Add Existing Volumes > Next.
3. Click Select Volumes.
4. In the search text box, enter the volume names that are created in the PowerFlex storage-only node service and click
Search.
5. Select the volumes. Click >> to move the volumes.
6. Click ADD.
Use the following field values:
Volume 1:
● Volume Name: Create New Volume …
● New Volume Name: Volume1
● Datastore Name: Create New Datastore
● New Datastore: Datastore1
● Storage Pool: Select the storage pool from the drop-down
● Enable Compression: Select this check box (if compression is enabled for deployment)
● Volume Size: 8 (or any multiple of 8)
● Volume Type: Thick
Volume 2:
● Volume Name: Create New Volume …
● New Volume Name: Volume2
● Datastore Name: Create New Datastore
● New Datastore: Datastore2
● Storage Pool: Do not change this option.
● Enable Compression: Select this check box (if compression is enabled for deployment)
● Volume Size: 8 (or any multiple of 8)
● Volume Type: Thick
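PowerFlex provisions volume capacity in 8 GB increments, which is why the table specifies a volume size of 8 or a multiple of 8. The check can be sketched as a small shell helper; the function name is illustrative and not part of any Dell tooling:

```shell
# Check that a requested volume size (in GB) is a valid PowerFlex size,
# that is, a positive multiple of 8 GB.
is_valid_volume_size() {
  local size_gb="$1"
  [ "$size_gb" -gt 0 ] && [ $((size_gb % 8)) -eq 0 ]
}

for size in 8 16 100 24; do
  if is_valid_volume_size "$size"; then
    echo "${size} GB: ok"
  else
    echo "${size} GB: not a multiple of 8 GB"
  fi
done
```

Running such a check before filling in the Volume Size field avoids a deployment error later.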
Steps
1. Access the wizard from the Services page or the Resource page. Click Services or Resources from the menu bar to
access the wizard.
a. Select the service or resource with the PowerFlex gateway containing the MDMs.
b. Click View Details.
c. Click Reconfigure MDM Roles. The MDM Reconfiguration page displays.
2. Review the current MDM configuration for the cluster.
3. For each MDM role that you want to reassign, use Select New Node for MDM Role to select the new hostname or IP
address. You can reassign multiple roles at a time.
4. Click Next. The Summary page displays.
5. Type Change MDM Roles to confirm the changes.
6. Click Finish.
Related information
Redistribute the MDM cluster
Chapter 51: Configuring the hyperconverged or compute-only transport nodes
This section describes how to configure the hyperconverged or compute-only nodes as part of preparing the PowerFlex
appliance for NSX-T. Before you configure the VMware ESXi hosts as NSX-T transport nodes, you must add the transport
distributed port groups and convert the distributed switch from LACP to individual trunks, as covered in this section.
NOTE: If you configure VMware NSX-T on PowerFlex hyperconverged or compute-only nodes and add them to PowerFlex
Manager, the services will be in lifecycle mode. If you need to perform an expansion on such a node, see Adding a
PowerFlex R640/R740xd/R840 node to a PowerFlex Manager service in lifecycle mode to add the PowerFlex node.
Contact VMware Support to configure VMware NSX-T on a new PowerFlex node and see Add PowerFlex nodes to a
service to update the service details.
Prerequisites
Ensure that the VMware vSphere vCenter Server and the VMware vSphere Client are accessible.
Steps
1. Log in to the VMware vSphere Client.
2. Click Networking.
3. Expand the PowerFlex Customer-Datacenter.
4. Right-click cust_dvswitch.
5. Click Distributed Port Group > New Distributed Port Group.
6. Update the name to pfmc-nsx-transport-121 and click Next.
7. Select the default Port binding.
8. Select the default Port allocation.
9. Select the default # of ports (default is 8).
10. Select the default VLAN as VLAN Type.
11. Set the VLAN ID to 121.
12. Clear the Customize default policies configuration check box and click Next.
13. Click Finish.
14. Right-click pfmc-nsx-transport-121 and click Edit Settings.
15. Click Teaming and failover.
16. Verify that Uplink1 and Uplink2 are moved to Active.
17. Click OK.
Prerequisites
Both Cisco Nexus access switch ports for the compute VMware ESXi hosts are configured as trunk ports. These ports will be
configured as LACP-enabled after the physical adapter is removed from each ESXi host.
WARNING: As the VMK0 (ESXi management) is not configured on cust_dvswitch, both the vmnics are first
migrated to the LAGs simultaneously and then the port channel is configured. Data connectivity to PowerFlex is
lost until the port channels are brought online with both vmnic interfaces connected to LAGs.
Steps
1. Log in to the VMware vSphere Client.
2. Look at VMware vCenter and physical switches to ensure that both ports across all hosts are up.
3. For each compute VMware ESXi host, record the physical switch ports to which vmnic5 (switch-B) and vmnic7 (switch-A)
are connected.
a. Click Home, then select Hosts and Clusters and expand the compute cluster.
b. Select the first compute ESXi host in left pane, and then select Configure tab in right pane.
c. Select Virtual switches under Networking.
d. Expand cust_dvswitch.
e. Expand Uplink1, click the ellipsis (…) for vmnic7, and select View Settings.
f. Click the LLDP tab.
g. Record the Port ID (switch port) and System Name (switch).
h. Repeat step 3 for vmnic5 on Uplink 2.
4. Configure LAG (LACP) on cust_dvswitch within VMware vCenter Server:
a. Click Home, then select Networking.
b. Expand the compute cluster and click cust_dvswitch > Configure > LACP.
c. Click +New to open wizard.
d. Verify that the name is lag1.
e. Verify that the number of ports is 2.
f. Verify that the mode is Active.
g. Change Load Balancing mode to Source and destination IP address, TCP/UDP port.
h. Click OK.
5. Migrate vmnic5 to lag1-0 and vmnic7 to lag1-1 on cust_dvswitch for the compute VMware ESXi host as follows:
a. Click Home, then select Networking and expand the PowerFlex data center.
b. Right-click cust_dvswitch and select Manage host networking to open wizard.
c. Select Add hosts... and click Next.
d. Click Attached hosts..., select all the compute ESXi hosts, and click OK.
e. Click Next.
f. For each ESXi host, select vmnic5 and click Assign uplink.
g. Click lag1-0 and click OK.
h. For each ESXi host, select vmnic7 and click Assign uplink.
i. Click lag1-1 and click OK.
j. Click Next > Next > Next > Finish.
6. Create port-channel (LACP) on switch-A for compute VMware ESXi host.
The following switch configuration is an example of a single compute VMware ESXi host.
a. Open an SSH session to switch-A using PuTTY or a similar SSH client.
b. Create a port channel on switch-A for each compute VMware ESXi host as follows:
interface port-channel40
description to flex-compute-esxi-host01
switchport
switchport mode trunk
switchport trunk allowed vlan 151,152,153,154
spanning-tree port type edge trunk
spanning-tree bpduguard enable
speed 25000
no lacp suspend-individual
vpc 40
7. Configure channel-group (LACP) on switch-A access port (vmnic5) for each compute VMware ESXi host.
The following switch port configuration is an example of a single compute VMware ESXi host.
a. Open an SSH session to switch-A using PuTTY or a similar SSH client.
b. Create port on switch-A as follows:
int e1/1/1
description to flex-compute-esxi-host01 – vmnic5
switchport
switchport mode trunk
switchport trunk allowed vlan 151,152,153,154
spanning-tree port type edge trunk
spanning-tree bpduguard enable
spanning-tree guard root
speed 25000
channel-group 40 mode active
8. Create port-channel (LACP) on switch-B for each compute VMware ESXi host.
The following switch configuration is an example of a single compute VMware ESXi host.
a. Open an SSH session to switch-B using PuTTY or a similar SSH client.
b. Create a port channel on switch-B for each compute VMware ESXi host as follows:
interface port-channel40
description to flex-compute-esxi-host01
switchport
switchport mode trunk
switchport trunk allowed vlan 151,152,153,154
spanning-tree port type edge trunk
spanning-tree bpduguard enable
speed 25000
no lacp suspend-individual
vpc 40
9. Configure channel-group (LACP) on switch-B access port (vmnic7) for each compute VMware ESXi host.
The following switch port configuration is an example of a single compute VMware ESXi host.
a. Open an SSH session to switch-B using PuTTY or a similar SSH client.
b. Create port on switch-B as follows:
int e1/1/1
description to flex-compute-esxi-host01 – vmnic7
switchport
switchport mode trunk
switchport trunk allowed vlan 151,152,153,154
spanning-tree port type edge trunk
spanning-tree bpduguard enable
spanning-tree guard root
speed 25000
channel-group 40 mode active
10. Update teaming and policy to route based on physical NIC load for each port group within cust_dvswitch:
a. Click Home and select Networking.
Chapter 52: Add Layer 3 routing between an external SDC and SDS
Use this procedure to enable external SDC-to-SDS communication and configure the PowerFlex node for external SDC
reachability.
Steps
1. In the template, from Node > Network settings, select the required VLANs to enable external SDC communication on
the SDS data interfaces.
2. From Node > Static routes, select Enabled.
3. Click Add New Static Route.
4. Select the source and destination VLANs, and manually enter the gateway IP address of the SDS data network VLAN.
5. Repeat these steps for each data VLAN.
Part VII: Adding a PowerFlex R650/R750/R6525 node in lifecycle mode
Use the procedures in this section to add a PowerFlex R650/R750/R6525 node for the PowerFlex Manager services discovered
in lifecycle mode.
Before adding a PowerFlex node in lifecycle mode, you must complete the initial set of expansion procedures that are common
to all expansion scenarios covered in Performing the initial expansion procedures.
The PowerFlex controller node can have either of the following RAID controllers:
● If you are using HBA355, see Adding a PowerFlex R650/R750/R6525 node to a PowerFlex Manager service in managed
mode for expansion using PowerFlex Manager.
● If you are using PERC H755, see Converting a PowerFlex controller node with a PERC H755 to a PowerFlex management
controller 2.0 and Adding a PowerFlex controller node with a PERC H755 to a PowerFlex management controller 2.0 for
manual expansion.
After adding a PowerFlex node in lifecycle mode, see Completing the expansion.
Chapter 53: Performing a PowerFlex storage-only node expansion
Perform the manual expansion procedure to add a PowerFlex R650/R750 storage-only node to PowerFlex Manager services
that are discovered in lifecycle mode.
Before adding a PowerFlex node, you must complete the following initial set of expansion procedures:
● Preparing to expand a PowerFlex appliance
● Configuring the network
Discover resources
Use this procedure to discover and grant PowerFlex Manager access to resources in the environment. Provide the management
IP address and credential for each discoverable resource.
Prerequisites
Verify that the iDRAC network settings are configured. See Configure iDRAC network settings for more information.
NOTE: For partial network deployments, you do not need to discover the switches. The switches must be preconfigured.
For sample configurations for Dell PowerSwitch, Cisco Nexus, and Arista switches, see Dell EMC PowerFlex Appliance
Administration Guide.
The following are the specific details for completing the Discovery wizard steps:
** This is optional. For a new CloudLink Center deployment, the CloudLink Center is discovered automatically.
Prerequisites
● Configure the iDRAC network settings.
● Gather the IP addresses and credentials that are associated with the resources.
NOTE: PowerFlex Manager also allows you to use name-based searches to discover a range of nodes whose iDRACs were
assigned IP addresses through DHCP. For more information about this feature, see Dell EMC PowerFlex
Manager Online Help.
Steps
1. On the PowerFlex Manager Getting Started page, click Discover Resources.
2. On the Welcome page of the Discovery Wizard, read the instructions and click Next.
3. On the Identify Resources page, click Add Resource Type. From the Resource Type list, select the resource that you
want to discover.
4. Enter the management IP address of the resource in the IP/Hostname Range field. To discover a resource in an IP range,
provide a starting and ending IP address.
5. In the Resource State list, select Managed or Unmanaged.
6. For PowerFlex node, to discover resources into a selected node pool instead of the global pool (default), select the node
pool from the Discover into Node Pool list.
7. Select the appropriate credential from the Credentials list. See the table above for details.
8. For PowerFlex node, if you want PowerFlex Manager to automatically reconfigure the iDRAC IP addresses of the nodes it
discovers, select the Reconfigure discovered nodes with new management IP and credentials check box. This option
is not selected by default, because it is faster to discover the nodes if you bypass the reconfiguration.
NOTE: iDRAC can also be discovered using the hostname.
NOTE: For the Resource Type Node, you can use a range with hostname or IP address, provided the hostname has a
valid DNS entry.
9. For PowerFlex node, select the Auto configure nodes to send alerts to PowerFlex Manager check box to have
PowerFlex Manager automatically configure nodes to send alerts to PowerFlex Manager.
10. Click Next to start discovery. On the Discovered Resources page, select the resources from which you want to collect
inventory data and click Finish. The discovered resources are listed on the Resources page.
Prerequisites
● Verify that the customer Red Hat Enterprise Linux or the embedded operating system ISO is available and is located in the
Intelligent Catalog code directory.
● Ensure that the following are installed for specific operating systems:
Steps
1. From the iDRAC web interface, launch the virtual console.
2. Click Connect Virtual Media.
3. Under Map CD/DVD, click Browse for the appropriate ISO.
4. Click Map Device > Close.
5. From the Boot menu, select Virtual CD/DVD/ISO, and click Yes.
6. From the Power menu, select Reset System (warm boot) or Power On System if the machine is off.
7. Set the boot option to UEFI.
a. Press F2 to enter system setup.
b. Under System BIOS > Boot setting, select UEFI as the boot mode.
NOTE: Ensure that the BOSS card is set as the primary boot device from the boot sequence settings. If the BOSS
card is not the primary boot device, reboot the server and change the UEFI boot sequence from System BIOS >
Boot settings > UEFI BOOT settings.
c. Click Back > Back > Finish > Yes > Finish > OK > Yes.
8. Select Install Red Hat Enterprise Linux/CentOS 7.x from the menu.
NOTE: Wait until all configuration checks pass and the screen for language selection is displayed.
Related information
Cabling the PowerFlex R650/R750/R6525 nodes
Install the nvme-cli tool and iDRAC Service Module (iSM)
Use this procedure to install dependency packages for Red Hat Enterprise Linux or embedded operating system.
Steps
1. Copy the Red Hat Enterprise Linux or embedded operating system 7.x image to the /tmp folder of the PowerFlex storage-
only node using SCP or WinSCP.
2. Use PuTTY to log in to the PowerFlex storage-only node.
3. Run # cat /etc/*-release to identify the installed operating system.
4. Type # mount -o loop /tmp/<os.iso> /mnt to mount the ISO image at the /mnt mount point.
5. Change directory to /etc/yum.repos.d
6. Type # touch <os.repo> to create a repository file.
7. Edit the file using a vi command and add the following lines:
[repository]
name=os.repo
baseurl=file:///mnt
enabled=1
gpgcheck=0
8. Type # yum repolist to test that you can use yum to access the directory.
9. Install the dependency packages per the installed operating system. To install dependency packages, enter:
# gunzip OM-iSM-Dell-Web-LX-340-1471_A00.tar.gz
# tar -xvf OM-iSM-Dell-Web-LX-340-1471_A00.tar
NOTE: If dcismeng.service is not running, type systemctl start dcismeng.service to start the
service.
i. Type # ip a | grep idrac to verify that the link-local IP address (169.254.0.2) is automatically configured on the idrac
interface of the PowerFlex storage-only node after successful installation of iSM.
j. Type # ping 169.254.0.1 to verify that the PowerFlex storage-only node operating system can communicate with iDRAC
(the default link-local IP address for iDRAC is 169.254.0.1).
11. Type # yum install nvme-cli to install the nvme-cli package.
12. Type # nvme list to ensure that the disk firmware version matches the Intelligent Catalog values.
If the disk firmware version does not match the Intelligent Catalog values, see Related information for information on
upgrading the firmware.
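Steps 4 through 8 above mount the operating system ISO and register it as a local yum repository. The sequence can be sketched as follows; <os.iso> remains a placeholder, and this illustration writes the .repo file to a scratch directory instead of /etc/yum.repos.d so that it can be run without root or a mounted ISO:

```shell
# Sketch of steps 4-8: describe a mounted ISO in a yum .repo file.
# On a real node the file belongs in /etc/yum.repos.d and you would first
# run: mount -o loop /tmp/<os.iso> /mnt
repo_dir=$(mktemp -d)    # stand-in for /etc/yum.repos.d
cat > "${repo_dir}/os.repo" <<'EOF'
[repository]
name=os.repo
baseurl=file:///mnt
enabled=1
gpgcheck=0
EOF

# On the node, verify the repository is reachable with: yum repolist
cat "${repo_dir}/os.repo"
```

With gpgcheck=0, packages from the local ISO install without signature verification, which is why the repository is removed once the dependency packages are installed.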
Related information
Adding a PowerFlex R640/R740xd/R840 node to a PowerFlex Manager service in managed mode
Upgrade the disk firmware for NVMe drives
Steps
1. Go to Dell EMC Support, and download the Dell Express Flash NVMe PCIe SSD firmware as per Intelligent Catalog.
2. Log in to the PowerFlex storage-only node.
3. Create a folder in the /tmp directory named diskfw.
4. Use WinSCP to copy the downloaded backplane package to the /tmp/diskfw folder.
5. Change to the /tmp/diskfw/ directory: cd /tmp/diskfw/
6. Change the access permissions of the file using the following command:
NOTE: Package name may differ from the following example depending on the Intelligent Catalog version.
chmod +x Express-Flash-PCIe-SSD_Firmware_R37D0_LN64_1.1.1_A02_01.BIN
Related information
Install the nvme-cli tool and iDRAC Service Module (iSM)
Steps
1. From the management jump server VM, extract all required Red Hat files from the
VxFlex_OS_3.x.x_xxx_Complete_Software/ VxFlex_OS_3.x.x_xxx_RHEL_OEL7 package to the Red Hat
node root folder.
2. Use WinSCP to copy the following Red Hat files from the jump host folder to the /tmp folder on the Red Hat Enterprise
Linux node:
● EMC-ScaleIO-sds-3.x-x.xxx.el7.x86_64.rpm
● EMC-ScaleIO-sdr-3.x-x.xxx.el7.x86_64.rpm
● EMC-ScaleIO-mdm-3.x-x.xxx.el7.x86_64.rpm
● EMC-ScaleIO-lia-3.x-x.xxx.el7.x86_64.rpm
From the appropriate Intelligent Catalog folder, copy the PERC CLI perccli-7.x-xxx.xxxx.rpm rpm package.
NOTE: Verify that the PowerFlex version you install is the same as the version on other Red Hat Enterprise Linux
servers.
3. Use PuTTY and connect to the PowerFlex management IP address of the new node.
4. Go to /tmp, and install the LIA software (use the admin password for the token value).
5. Type # rpm -ivh /tmp/EMC-ScaleIO-sds-3.x-x.xxx.el7.x86_64.rpm to install the storage data server (SDS)
software.
6. To enable replication, type # rpm -ivh /tmp/EMC-ScaleIO-sdr-3.x-x.xxx.el7.x86_64.rpm to install the storage
data replication (SDR) software.
7. Type rpm -ivh /tmp/perccli-7.x-xxx.xxxx.rpm to install the PERC CLI.
8. Reboot the PowerFlex storage-only node by typing reboot.
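The install order above matters: LIA first, then SDS, then SDR (only when replication is enabled), then the PERC CLI. A dry-run sketch that only prints the rpm commands; the version fields keep the same x placeholders as the file list, so nothing here is installed:

```shell
# Dry-run sketch: print the rpm install commands in the required order
# (LIA, then SDS, then SDR, then PERC CLI). Version fields are the same
# x placeholders used in the file list above; no package is installed.
packages="
EMC-ScaleIO-lia-3.x-x.xxx.el7.x86_64.rpm
EMC-ScaleIO-sds-3.x-x.xxx.el7.x86_64.rpm
EMC-ScaleIO-sdr-3.x-x.xxx.el7.x86_64.rpm
perccli-7.x-xxx.xxxx.rpm
"
for pkg in $packages; do
  echo "rpm -ivh /tmp/${pkg}"
done
```

On a real node, substitute the exact package versions from the Intelligent Catalog and run each command in this order before rebooting.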
Prerequisites
Confirm that the PowerFlex system is functional and that no rebuild or rebalance operations are running. For PowerFlex 3.5 or later, use the
PowerFlex GUI presentation server to add a PowerFlex storage-only node to PowerFlex.
Steps
1. If you are using PowerFlex GUI presentation server:
a. Log in to the PowerFlex GUI presentation server.
b. Click Configuration > SDSs.
c. Click Add.
d. Enter the SDS Name.
e. Select the Protection Domain and SDS Port.
f. Enter the IP address for data1, data2, data3 (if required), and data4 (if required).
g. Select SDS and SDC as the communication roles for all the IP addresses that are added.
h. Click SDS.
2. If you are using a PowerFlex version prior to 3.5:
If adding PowerFlex storage-only nodes with NVDIMMs to a new protection domain, see Create an NVDIMM protection
domain. Dell recommends that a minimum of six PowerFlex storage-only nodes be in a protection domain.
Prerequisites
Skip this procedure if NVDIMM is not available in the PowerFlex nodes.
Steps
1. Log in to the jump server.
2. SSH to primary MDM.
3. Log in with administrator credentials.
scli --login --username admin --password 'admin_password'
4. Type scli --version to verify the PowerFlex version.
Sample output:
DellEMC ScaleIO Version: R3_x.x.xxx
Steps
1. Log in to the jump server.
2. SSH to the PowerFlex storage-only node.
3. Enter either of the following commands to verify the operating system version:
● cat /etc/*-release
● rpm -qa | grep release
For example:
[root@sio-mgmt-26 ~]# rpm -qa | grep release
centos-release-7-7.1908.0.el7.centos.x86_64
Steps
1. Log in to the jump server.
2. SSH to the PowerFlex storage-only node.
3. Type yum list installed ndctl ndctl-libs daxctl-libs libpmem libpmemblk
Sample output:
4. If the RPMs are not installed, type yum install -y <rpm> to install the RPMs.
Steps
1. Log in to the jump server.
2. SSH to the PowerFlex storage-only node.
3. List the NVDIMM regions by typing the following command:
ndctl list -R
4. Type ndctl destroy-namespace all -f to destroy all default namespaces. If this fails to reclaim space and you
have already sanitized the NVDIMMs, type ndctl start-scrub to scrub the NVDIMMs.
5. For each discovered region, type ndctl create-namespace -r region[x] -m raw -f (where x corresponds to the
region number) to re-create the namespace.
For example, type: ndctl create-namespace -r region0 -m raw -f
Steps
1. SSH to the PowerFlex storage-only node.
2. For each NVDIMM, type (starting with namespace0.0):
ndctl create-namespace -f -e namespace[x].0 --mode=devdax --align=4k --no-autolabel
{"dev":"namespace0.0","mode":"devdax","map":"dev","size":"15.75 GiB (16.91 GB)",
"uuid":"348d510e-dc70-4855-a6ca-6379046896d5",
"raw_uuid":"4ca5cda2-ebd4-4894-aa4e-0cfc823745e2",
"daxregion":{"id":0,"size":"15.75 GiB (16.91 GB)","align":4096,
"devices":[{"chardev":"dax0.0","size":"15.75 GiB (16.91 GB)"}]},
"numa_node":0}
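The command above is repeated once per NVDIMM namespace. A dry-run loop that only prints the per-namespace commands; the namespace count of 2 is an assumption for illustration — substitute the number reported for your node:

```shell
# Dry-run sketch: build the devdax conversion command for each NVDIMM
# namespace, starting with namespace0.0. A count of 2 is assumed here;
# nothing is executed against real NVDIMMs.
namespace_count=2
cmds=""
x=0
while [ "$x" -lt "$namespace_count" ]; do
  cmds="${cmds}ndctl create-namespace -f -e namespace${x}.0 --mode=devdax --align=4k --no-autolabel
"
  x=$((x + 1))
done
printf '%s' "$cmds"
```

The 4 KiB alignment matches the command shown in the step above; each converted namespace surfaces as a /dev/daxX.0 character device used later when adding devices to the acceleration pool.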
Steps
1. Log in to the PowerFlex GUI presentation server as an administrative user.
2. Click Configuration > Acceleration Pool.
3. Note the acceleration pool name. The name is required while creating a compression storage pool.
Steps
1. Log in to the PowerFlex GUI as an administrative user.
2. Select Backend > Storage.
3. Filter By Storage Pools.
4. Expand the SDSs in the protection domains. Under the Acceleration Type column, identify the protection domain with Fine
Granularity Layout. This is a protection domain that has been configured with NVDIMM accelerated devices.
5. The acceleration pool name (in this example, AP1) is listed under the column Accelerated On. This is needed when creating
a compression storage pool.
Steps
1. If using PowerFlex GUI presentation server:
a. Log in to the PowerFlex GUI presentation server.
b. Click Dashboard > Configuration > Protection Domains, and click ADD.
c. In the Add Protection Domain window, enter the name of the protection domain.
d. Click ADD PROTECTION DOMAIN.
2. If using a PowerFlex version prior to 3.5:
a. Log in to the PowerFlex GUI.
b. Click Backend > Storage.
c. Right-click PowerFlex System, and click + Add Protection Domain.
d. Enter the protection domain name, and click OK.
Steps
1. If using PowerFlex GUI presentation server:
a. Log in to the PowerFlex GUI presentation server.
b. Click Dashboard > Configuration > Acceleration Pools, and ADD.
c. Enter the acceleration pool in the Name field of the Add Acceleration pool window.
d. Select NVDIMM as the Pool Type, and select Protection Domain from the drop-down list.
e. In the Add Devices section, select the Add Devices to All SDSs check box only if the devices must be added
on all SDSs. If not, leave it unchecked.
g. In the Path and Device Name fields, enter the device path and device name respectively. Select the appropriate SDS
from the drop-down menu. Click Add Devices.
h. Repeat the previous step to add devices.
i. Click Add Acceleration Pool.
2. If using a PowerFlex version prior to 3.5:
a. Log in to the PowerFlex GUI.
b. Click Backend > Devices.
c. Right-click Protection Domain, and click + Add > Add Acceleration Pool.
d. Enter the acceleration pool name.
e. Select NVDIMM.
f. Click OK. DAX devices are added later.
g. Click OK > Close.
Steps
1. If using PowerFlex GUI presentation server:
a. Log in to the PowerFlex GUI presentation server.
b. Click Dashboard > Configuration > Storage Pools, and Add.
c. Enter the storage pool name in the Name field of the Add Storage pool window.
d. Select Protection Domain from the drop-down list.
e. Select SSD as the Media Type from the drop-down, and select FINE for Data Layout Granularity.
f. Select the Acceleration Pool from the drop-down menu, and click Add Storage Pool.
2. If using a PowerFlex version prior to 3.5:
a. Log in to the PowerFlex GUI.
b. Select Backend > Storage.
c. Right-click Protection Domain, and click + Add > Add Storage Pool.
d. Add the new storage pool details:
● Name: Provide name
● Media Type: SSD
● Data Layout: Fine Granularity
● Acceleration Pool: Acceleration pool that was created previously
● Fine Granularity: Enable Compression
e. Click OK > Close
Steps
1. Log in to the primary MDM using SSH.
2. For each SDS with NVDIMM, type the following to add NVDIMM devices to the acceleration pool:
ndctl create-namespace -f -e namespace[x].0 --mode=devdax --align=4k --no-autolabel
scli --add_sds_device --sds_name <SDS_NAME> --device_path /dev/dax0.0 --acceleration_pool_name <ACCP_NAME> --force_device_takeover
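The same command pair is issued once per SDS that has NVDIMMs. A dry-run sketch that only prints the scli call for each SDS; the SDS names are examples, and the pool name AP1 is taken from the earlier acceleration pool example:

```shell
# Dry-run sketch: print the scli command that adds each SDS's DAX device
# to the acceleration pool. The SDS names here are examples only; AP1 is
# the pool name from the earlier example. Nothing is sent to a real MDM.
accp_name="AP1"
cmds=""
for sds_name in sds-01 sds-02 sds-03; do
  cmds="${cmds}scli --add_sds_device --sds_name ${sds_name} --device_path /dev/dax0.0 --acceleration_pool_name ${accp_name} --force_device_takeover
"
done
printf '%s' "$cmds"
```

Run the printed commands on the primary MDM after logging in with scli --login.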
Steps
1. SSH to the PowerFlex storage-only node.
2. Type lsblk to get the disk devices.
Sample output:
e. Click Advanced.
● Select Force Device Takeover.
Steps
1. If using PowerFlex GUI presentation server:
a. Log in to the PowerFlex GUI presentation server.
b. Click Dashboard > Configuration > Volumes, and ADD.
c. In the ADD Volume window, enter the name in the Volume name field.
d. Select THIN or THICK as the Provisioning option.
e. Enter the size in the Size field. Select the Storage Pool from the drop-down menu.
f. Click Add Volume.
2. If using a PowerFlex version prior to 3.5:
a. Log in to the PowerFlex GUI.
b. Click Frontend > Volumes.
c. Right-click Storage Pool, and click Add Volume.
d. Add the volume details:
● Name: Volume name
● Size: Required volume size
● Enable compression
e. Click OK > Close.
f. Right-click the volume, and select Map.
g. Map to all hosts.
h. Click OK.
Steps
1. If using PowerFlex GUI presentation server to enable zero padding on a storage pool:
a. Connect to the PowerFlex GUI presentation server through https://<ipaddress>:8443 using the MDM credentials.
b. Click Storage Pools from the left pane, and select the storage pool.
c. Select Settings from the drop-down menu.
d. Click Modify > General Settings from the drop-down menu.
e. Click Enable Zero Padding Policy > Apply.
NOTE: After the first device is added to a specific pool, you cannot modify the zero padding policy. A fine granularity (FG)
pool is always zero padded. By default, zero padding is disabled only for a medium granularity (MG) pool.
2. If using a PowerFlex version prior to 3.5 to enable zero padding on a storage pool:
a. Select Backend > Storage, and right-click Select By Storage Pools from the drop-down menu.
b. Right-click the storage pool, and click Modify zero padding policy.
c. Select Enable Zero Padding Policy, and click OK > Close.
NOTE: Zero padding cannot be enabled when devices are available in the storage pools.
3. If CloudLink is enabled, see one of the following procedures, depending on the devices:
● Encrypt PowerFlex hyperconverged (SVM) or storage-only devices (SED drives)
● Encrypt PowerFlex hyperconverged (SVM) or storage-only devices (non-SED drives)
If CloudLink is disabled, use PuTTY to access the Red Hat Enterprise Linux or embedded operating system node.
When adding NVMe drives, keep a separate storage pool for the PowerFlex storage-only node.
e. Repeat these steps on all the SDSs where you want to add the devices.
f. Ensure that all the rebuild and rebalance activities complete successfully.
g. Verify the space capacity after adding the new node.
6. If you are using a PowerFlex version prior to 3.5:
a. Connect to the PowerFlex GUI.
b. Click Backend.
c. Locate the newly added PowerFlex SDS, right-click, and select Add Device.
d. Type /dev/nvmeXXn1 where X is the value from step 3.
e. Select the Storage Pool.
NOTE: If the existing protection domain has Red Hat Enterprise Linux nodes, replace or expand with Red Hat Enterprise Linux. If
the existing protection domain has embedded operating system nodes, replace or expand with the embedded operating system.
Prerequisites
Replication is supported on PowerFlex storage-only nodes with dual CPU. The node should be migrated to an LACP bonding NIC
port design.
Steps
1. Connect to the PowerFlex GUI presentation server through https://<ipaddress>:8443 using the MDM credentials.
2. Click the Protection tab in the left pane.
NOTE: In the PowerFlex GUI version 3.5 or prior, this tab is Replication.
3. Click SDR > Add, and enter the storage data replication name.
4. Choose the protection domain.
5. Enter the IP address to be used and click Add IP. Repeat this for each IP address and click Add SDR.
NOTE: While adding storage data replication, Dell recommends adding IP addresses for flex-data1-<vlanid>, flex-data2-
<vlanid>, flex-data3-<vlanid> (if required), and flex-data4-<vlanid> (if required) along with flex-rep1-<vlanid> and
flex-rep2-<vlanid>. Choose the Application and Storage roles for all data IP addresses, and choose the External role
for the replication IP addresses.
6. Repeat steps 3 through 5 for all the storage data replicators you are adding. If you are expanding a replication-enabled
PowerFlex cluster, skip steps 7 through 11.
7. Click Protection > Journal Capacity > Add, and provide the capacity percentage as 10%, which is the default. You can
customize it if needed.
8. Extract and add the MDM certificate:
NOTE: You can perform steps 8 through 13 only when the Secondary Site is up and running.
a. Log in to the primary MDM, by using the SSH on source and destination.
b. Type scli --login --username admin. Provide the MDM cluster password when prompted.
c. See the following example and run the command to extract the certificate on source and destination primary MDM.
Example for source: scli --extract_root_ca --certificate_file /tmp/Source.crt
Example for destination: scli --extract_root_ca --certificate_file /tmp/destination.crt
d. Copy the extracted certificate of the source (primary MDM) to the destination (primary MDM) using SCP, and vice versa.
e. Add the copied certificate, as in the following examples:
Example for source: scli --add_trusted_ca --certificate_file /tmp/destination.crt --comment
destination_crt
Example for destination: scli --add_trusted_ca --certificate_file /tmp/source.crt --comment
source_crt
f. Type scli --list_trusted_ca to verify the added certificate.
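The certificate exchange in steps 8a through 8f can be condensed into a per-site script. This sketch wraps each command in a `run` function that only echoes it (a dry run); drop the echo to execute for real. The peer hostname and certificate paths are assumptions.

```shell
#!/usr/bin/env bash
# Dry run of the MDM certificate exchange (step 8). 'run' echoes each
# command instead of executing it. Hostnames/paths are placeholders.
run() { echo "+ $*"; }

LOCAL_CRT=/tmp/source.crt          # certificate extracted on this primary MDM
PEER=dest-mdm.example.local        # peer-site primary MDM (assumption)
PEER_CRT=/tmp/destination.crt      # certificate copied over from the peer

run scli --extract_root_ca --certificate_file "$LOCAL_CRT"
run scp "$LOCAL_CRT" "root@${PEER}:${LOCAL_CRT}"   # send ours to the peer
run scli --add_trusted_ca --certificate_file "$PEER_CRT" --comment destination_crt
run scli --list_trusted_ca                         # verify the added certificate
```

Run the mirror-image sequence on the peer site so each primary MDM trusts the other's root CA.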
9. Create the remote consistency group (RCG).
Connect to the PowerFlex GUI presentation server through https://<ipaddress>:8443.
NOTE: Use the primary MDM IP and credentials to log in to the PowerFlex cluster.
10. Click the Protection tab from the left pane. If you are using a PowerFlex version 3.5 or prior, click the Replication tab.
11. Choose RCG (Remote Consistency Group), and click ADD.
12. On the General tab:
a. Enter the RCG name and RPO.
b. Select the Source Protection Domain from the drop-down list.
c. Select the target system and Target protection domain from the drop-down list, and click Next.
d. Under the Pair tab, select the source and destination volumes.
NOTE: The source and destination volumes must be identical in size and provisioning type. Do not map the volume
on the destination site of a volume pair. Retain the read-only permission. Do not create a pair containing a
destination volume that is mapped to the SDCs with a read_write permission.
e. Click Add pair, select the added pair to be replicated, and click Next.
f. On the Review Pairs tab, select the added pair, click Add RCG, and start replication as required.
Steps
1. Update the inventory for vCenter (vCSA), switches, gateway VM, and nodes:
a. Click Resources on the home screen.
b. Select the vCenter, switches (applicable only for full networking), gatewayVM, and newly added nodes.
c. Click Run Inventory.
d. Click Close.
e. Wait for the job in progress to complete.
2. Update Services details:
a. Click Services.
b. Choose the service on which the new node is expanded and click View Details.
c. On the Services details screen, choose Update Service Details.
d. Choose the credentials for the node and SVM and click Next.
e. On the Inventory Summary page, verify that the newly added nodes appear under Physical Node, and click Next.
f. On the Summary page, verify the details and click Finish.
Performing a PowerFlex hyperconverged node expansion
Perform the manual expansion procedure to add a PowerFlex R650/R750 hyperconverged node to PowerFlex Manager services
that are discovered in a lifecycle mode.
Before adding a PowerFlex node, you must complete the following initial set of expansion procedures:
● Preparing to expand a PowerFlex appliance
● Configuring the network
Type systemctl status firewalld to verify that firewalld is enabled. If it is disabled, see the Enabling firewall service on
PowerFlex storage-only nodes and SVMs KB article to enable firewalld on all SDS components.
Discover resources
Use this procedure to discover and allow PowerFlex Manager access to resources in the environment. Provide the management
IP address and credential for each discoverable resource.
Prerequisites
Verify that the iDRAC network settings are configured. See Configure iDRAC network settings for more information.
NOTE: For partial network deployments, you do not need to discover the switches, but the switches must be
preconfigured. For sample configurations for Dell PowerSwitch, Cisco Nexus, and Arista switches, see the Dell EMC PowerFlex
Appliance Administration Guide.
The following are the specific details for completing the Discovery wizard steps:
** This is optional. For a new CloudLink Center deployment, the CloudLink Center is discovered automatically.
Prerequisites
● Configure the iDRAC network settings.
● Gather the IP addresses and credentials that are associated with the resources.
NOTE: PowerFlex Manager also allows you to use name-based searches to discover a range of nodes whose iDRACs
were assigned IP addresses through DHCP. For more information about this feature, see Dell EMC PowerFlex
Manager Online Help.
Steps
1. On the PowerFlex Manager Getting Started page, click Discover Resources.
2. On the Welcome page of the Discovery Wizard, read the instructions and click Next.
3. On the Identify Resources page, click Add Resource Type. From the Resource Type list, select the resource that you
want to discover.
4. Enter the management IP address of the resource in the IP/Hostname Range field. To discover a resource in an IP range,
provide a starting and ending IP address.
5. In the Resource State list, select Managed or Unmanaged.
6. For a PowerFlex node, to discover resources into a selected node pool instead of the global pool (default), select the node
pool from the Discover into Node Pool list.
7. Select the appropriate credential from the Credentials list. See the table above for details.
8. For a PowerFlex node, if you want PowerFlex Manager to automatically reconfigure the iDRAC IP addresses of the nodes it
discovers, select the Reconfigure discovered nodes with new management IP and credentials check box. This option
is not selected by default, because discovery is faster if you bypass the reconfiguration.
NOTE: iDRAC can also be discovered using a hostname.
NOTE: For any resource type, you can use a range with a hostname or IP address, provided the hostname has a valid
DNS entry.
9. For a PowerFlex node, select the Auto configure nodes to send alerts to PowerFlex Manager check box to have
nodes automatically configured to send alerts to PowerFlex Manager.
10. Click Next to start discovery. On the Discovered Resources page, select the resources from which you want to collect
inventory data and click Finish. The discovered resources are listed on the Resources page.
Related information
Configure iDRAC network settings
Steps
1. If you are using PowerFlex presentation server:
a. Log in to the PowerFlex presentation server.
b. Click Settings.
c. Copy the license and click Update License.
2. If you are using a version prior to PowerFlex 3.5:
a. Log in to the PowerFlex GUI and click Preferences > About. Note the current capacity available under the associated
PowerFlex license.
b. If the available capacity is sufficient for the planned expansion, proceed with the expansion
process.
c. If the planned expansion exceeds the available capacity, engage the customer account team to obtain an updated
license with additional capacity. Once the updated license is available, click Preferences >
System Settings > License > Update License. Verify that the updated capacity is available by selecting Preferences
> About.
Prerequisites
Use the SED-based license for SED drives, and the capacity-based license for non-SED drives.
Steps
1. Log in to the CloudLink Center web console.
2. Click System > License.
3. Check the limit and verify that there is enough capacity for the expansion.
Prerequisites
Verify that the customer VMware ESXi ISO is available in the Intelligent Catalog code directory.
Steps
1. Log in to the iDRAC:
a. Connect to the iDRAC interface, and launch a virtual remote console on the Dashboard.
b. Select Connect Virtual Media.
c. Under Map CD/DVD, click Choose File > Browse, browse to the Intelligent Catalog folder where the ISO file is saved,
select the file, and click Open.
d. Click Map Device.
e. Click Boot > Virtual CD/DVD/ISO.
f. Click Power > Reset System (warm boot).
NOTE: If the system is powered off, map the ISO image, set Next Boot to Virtual CD/DVD/ISO, and power on the
server. It boots from the ISO image; a reset is not required.
h. Under System BIOS > Boot setting, select UEFI as the boot mode.
NOTE: Ensure that the BOSS card is set as the primary boot device from the boot sequence settings. If the BOSS
card is not the primary boot device, reboot the server and change the UEFI boot sequence from System BIOS >
Boot settings > UEFI BOOT settings.
i. Click Back > Finish > Yes > Finish > OK > Finish > Yes.
2. Install VMware ESXi:
a. On the VMware ESXi installer screen, press Enter to continue.
b. Press F11 to accept the license agreement.
c. Select US Default as the keyboard layout.
d. At the Confirm Install screen, press F11.
e. When the installation is complete, remove the installation media before rebooting.
f. Press Enter to reboot the node.
NOTE: Set the first boot device to the drive on which you installed VMware ESXi.
Prerequisites
Ensure that you have access to the customer vCenter.
Steps
1. From the vSphere Client home page, go to Home > Hosts and Clusters.
2. Select a data center.
3. Right-click the data center and select New Cluster.
4. Enter a name for the cluster.
5. Select vSphere DRS and vSphere HA cluster features.
6. Click OK.
7. Select the existing cluster or newly created cluster.
8. From the Configure tab, click Configuration > Quickstart.
9. Click Add in the Add hosts card.
10. On the Add hosts page, in the New hosts tab, add the hosts that are not part of the vCenter Server inventory by entering
the IP address, or hostname and credentials.
11. (Optional) Select the Use the same credentials for all hosts option to reuse the credentials for all added hosts.
12. Click Next.
13. The Host Summary page lists all the hosts to be added to the cluster with related warnings. Review the details and click
Next.
14. On the Ready to complete page, review the IP addresses or FQDN of the added hosts and click Finish.
15. Add the new licenses:
a. Click Menu > Administration.
b. In the Administration section, click Licensing.
c. Click Licenses.
d. From the Licenses tab, click Add.
e. Enter or paste the license keys for VMware vSphere and vCenter, one per line, and click Next.
A license key is a 25-character string of letters and digits in the format XXXXX-XXXXX-XXXXX-XXXXX-XXXXX.
You can enter a list of keys in one operation; a new license is created for every license key you enter.
f. On the Edit license names page, optionally rename the new licenses as appropriate and click Next.
g. On the Ready to complete page, review the new licenses and click Finish.
Steps
1. Log in to VMware vCSA HTML Client using the credentials.
2. Go to VMs and templates inventory or Administration > vCenter Server Extensions > vSphere ESX Agent Manager
> VMs to view the VMs.
The VMs are in the vCLS folder once the host is added to the cluster.
3. Right-click the VM and click Migrate.
4. In the Migrate dialog box, click Yes.
5. On the Select a migration type page, select Change storage only and click Next.
6. On the Select storage page, select the PowerFlex volumes for hyperconverged or ESXi-based compute-only node which
will be mapped after the PowerFlex deployment.
NOTE: The volume names are powerflex-service-vol-1 and powerflex-service-vol-2. The datastore names are
powerflex-<esxclustershortname>-ds1 and powerflex-<esxclustershortname>-ds2. If these volumes or datastores are
not present, create them to migrate the vCLS VMs.
Prerequisites
VMware ESXi must be installed with hosts added to the VMware vCenter.
Steps
1. Log in to the VMware vSphere Client.
2. Click Hosts and Clusters.
3. Locate and select the VMware ESXi host.
4. Select Datastores.
5. Right-click the datastore name, and select Rename.
6. Name the datastore using the DASXX convention, with XX being the node number.
Prerequisites
Apply all VMware ESXi updates before installing or loading hardware drivers.
NOTE: This procedure is required only if the ISO drivers are not at the proper Intelligent Catalog level.
Steps
1. Log in to the VMware vSphere Client.
2. Click Hosts and Clusters.
3. Locate and select the VMware ESXi host that you installed.
4. Select Datastores.
5. Right-click the datastore name and select Browse Files.
6. Select the Upload icon (to upload file to the datastore).
7. Browse to the Intelligent Catalog folder or downloaded current solution Intelligent Catalog files.
8. Select the VMware ESXi patch .zip files according to the current solution Intelligent Catalog and node type and click OK to
upload.
9. Select the driver and vib files according to the current Intelligent Catalog and node type and click OK to upload.
10. Click Hosts and Clusters.
11. Locate the VMware ESXi host, right-click, and select Enter Maintenance Mode.
12. Open an SSH session with the VMware ESXi host using PuTTy or a similar SSH client.
13. Log in as root.
14. Type cd /vmfs/volumes/DASXX where XX is the name of the local datastore that is assigned to the VMware ESXi
server.
15. To display the contents of the directory, type ls.
16. If the directory contains vib files, type esxcli software vib install -v /vmfs/volumes/DASXX/
patchname.vib to install the vib. These vib files can be individual drivers that are absent from the larger patch bundle
and must be installed separately.
17. Perform either of the following depending on the VMware ESXi version:
a. For VMware ESXi 7.0, type esxcli software vib update -d /vmfs/volumes/DASXXX/VMware-
ESXi-7.0<version>-depot.zip.
b. For VMware ESXi 6.x, type esxcli software vib install -d /vmfs/volumes/DASXXX/<ESXI-patch-
file>.zip
18. Type reboot to reboot the host.
19. Once the host completes rebooting, open an SSH session with the VMware ESXi host and type esxcli software vib
list | grep net-i to verify that the correct drivers are loaded.
20. Select the host and click Exit Maintenance Mode.
21. Update the test plan and host tracker with the results.
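Step 17's choice between vib update (ESXi 7.0) and vib install (ESXi 6.x) can be encoded in a small helper. This dry-run sketch only prints the esxcli command for a given major version; the depot and zip file names are placeholders.

```shell
#!/usr/bin/env bash
# Dry run: print the esxcli command for step 17 based on the ESXi
# major version. Depot/zip paths below are placeholders.
vib_cmd() {                            # vib_cmd <esxi_major> <zip_path>
  case "$1" in
    7) echo "esxcli software vib update -d $2" ;;    # ESXi 7.0: update from depot
    6) echo "esxcli software vib install -d $2" ;;   # ESXi 6.x: install patch zip
    *) echo "unsupported ESXi version: $1" >&2; return 1 ;;
  esac
}

vib_cmd 7 /vmfs/volumes/DASXX/VMware-ESXi-7.0-depot.zip
vib_cmd 6 /vmfs/volumes/DASXX/ESXi-patch-file.zip
```

Paste the printed command into the SSH session on the host while it is in maintenance mode.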
Prerequisites
● Type show running-configuration interface port-channel <portchannel number> to back up the
switch port configuration and verify that the port channel for the impacted host is updated to MTU 9216. If the MTU
value is already set to 9216, skip this procedure.
● Back up the dvswitch configuration:
○ Click Menu, and from the drop-down, click Networking.
○ Click the impacted dvswitch and click the Configure tab.
○ From the Properties page, verify the MTU value. If the MTU value is set to 9000, skip this procedure.
● See the following table for recommended MTU values:
Steps
1. Change the MTU to 9216 or jumbo on physical switch port (Dell EMC PowerSwitch and Cisco Nexus):
a. Dell:
interface port-channel31
description Downlink-Port-Channel-to-Ganga-r840-nvme-01
no shutdown
switchport mode trunk
switchport trunk allowed vlan 103,106,113,151,152,153,154
mtu 9216
vlt-port-channel 31
spanning-tree port type edge
b. Cisco:
interface port-channel31
description Downlink-Port-Channel-to-Ganga-r840-nvme-01
no shutdown
switchport mode trunk
switchport trunk allowed vlan 103,106,113,151,152,153,154
mtu 9216
vpc 31
spanning-tree port type edge
Prerequisites
Gather the IP addresses of the primary and secondary MDMs.
Steps
1. Open Direct Console User Interface (DCUI) or use SSH to log in to the new hosts.
2. At the command-line interface, run the following commands to ping each of the primary and secondary MDM IP
addresses.
If a ping test fails, remediate before continuing.
NOTE: A minimum of two logical data networks are supported. Optionally, you can configure four logical data networks.
Run the following commands for LACP bonding NIC port design. x is the VMkernel adapter number in vmkx.
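For the LACP bonding NIC port design, the checks are typically vmkping probes with a jumbo-sized payload through each data VMkernel adapter. This dry-run sketch only prints the commands; the vmk numbers and MDM IP addresses are examples to replace with your own.

```shell
#!/usr/bin/env bash
# Dry run: print vmkping checks for each data VMkernel adapter against
# each MDM IP. A payload of 8972 bytes validates a 9000-byte MTU
# (9000 minus 28 bytes of IP/ICMP headers). Values are examples.
VMKS="vmk1 vmk2"                          # data-network VMkernel adapters
MDM_IPS="192.168.151.10 192.168.152.10"   # primary/secondary MDM data IPs

print_ping_cmds() {
  local vmk ip
  for vmk in $VMKS; do
    for ip in $MDM_IPS; do
      echo "vmkping -I ${vmk} -d -s 8972 ${ip}"   # -d: do not fragment
    done
  done
}
print_ping_cmds
```

Run the printed commands on the ESXi host; any fragmentation failure points at an MTU mismatch along the path.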
NOTE: After several host restarts, check the access switches for error or disabled states by running the following
commands:
3. Optional: If errors appear in the counters of any interfaces, type the following and check the counters again.
Output from a Cisco Nexus switch:
4. Optional: If there are still errors on the counter, perform the following to see if the errors are old and irrelevant or new and
relevant.
a. Optional: Type # show interface | inc flapped.
Sample output:
Last link flapped 1d02h
b. Type # show logging logfile | inc failure.
Sample output:
Dec 12 12:34:50.151 access-a %ETHPORT-5-IF_DOWN_LINK_FAILURE: Interface Ethernet1/4/3
is down (Link failure)
5. Optional: Check and reset physical connections, bounce and reset ports, and clear counters until errors stop occurring.
Do not activate new nodes until all errors are resolved and no new errors appear.
Steps
1. In the VMware vSphere Client, select the new ESXi hosts.
2. Click Configure > Hardware > PCI Devices.
3. Click Configure PassThrough.
The Edit PCI Device Availability window opens.
4. From the PCI Device drop-down menu, select the Avago (LSI Logic) Dell HBA330 Mini check box and click OK.
5. Right-click the VMware ESXi host and select Maintenance Mode.
6. Right-click the VMware ESXi host and select Reboot to reboot the host.
Steps
1. Use SSH to log in to the host.
2. Run the following command to generate a list of NVMe devices:
t10.NVMe____Dell_Express_Flash_PM1725a_800GB_SFF____0A0FB071EB382500
t10.NVMe____Dell_Express_Flash_PM1725a_800GB_SFF____1C0FB071EB382500
t10.NVMe____Dell_Express_Flash_PM1725a_800GB_SFF____3906B071EB382500
3. Run the following command for each NVMe device, incrementing the disk number for each:
vmkfstools -z /vmfs/devices/disks/
t10.NVMe____Dell_Express_Flash_PM1725a_800GB_SFF____0A0FB071EB382500 /vmfs/volumes/
DASxx/<svm_name>/<svm_name>-nvme_disk0.vmdk
vmkfstools -z /vmfs/devices/disks/
t10.NVMe____Dell_Express_Flash_PM1725a_800GB_SFF____0A0FB071EB382500 /vmfs/volumes/
DASxx/<svm_name>/<svm_name>-nvme_disk1.vmdk
vmkfstools -z /vmfs/devices/disks/
t10.NVMe____Dell_Express_Flash_PM1725a_800GB_SFF____1C0FB071EB382500 /vmfs/volumes/
DASxx/<svm_name>/<svm_name>-nvme_disk2.vmdk
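The repetitive vmkfstools invocations above can be generated from a device list so the disk index increments automatically. This bash sketch only prints the commands (a dry run); the datastore and SVM names are placeholders.

```shell
#!/usr/bin/env bash
# Dry run: emit one vmkfstools RDM-mapping command per NVMe device,
# incrementing the vmdk disk number. DAS/SVM names are placeholders.
DAS=DASxx
SVM=svm_name

emit_rdm_cmds() {
  local i=0 dev
  for dev in "$@"; do
    echo "vmkfstools -z /vmfs/devices/disks/${dev}" \
         "/vmfs/volumes/${DAS}/${SVM}/${SVM}-nvme_disk${i}.vmdk"
    i=$((i + 1))
  done
}

# Feed it the t10.NVMe device names collected in step 2.
emit_rdm_cmds \
  t10.NVMe____Dell_Express_Flash_PM1725a_800GB_SFF____0A0FB071EB382500 \
  t10.NVMe____Dell_Express_Flash_PM1725a_800GB_SFF____1C0FB071EB382500
```

Review the printed commands, then run them on the ESXi host to create the RDM pointer vmdks.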
Steps
1. Copy the SDC file to the local datastore on the VMware vSphere ESXi server.
2. Use SSH to log in to the host and type esxcli software vib install -d /vmfs/volumes/datastore1/
sdc-3.6.xxxxx.xx-esx7.x.zip -n scaleio-sdc-esx7.x.
3. Reboot the PowerFlex node.
4. To configure the SDC, generate a new UUID:
NOTE: If the PowerFlex cluster is using an SDC authentication, the newly added SDC reports as disconnected when
added to the system. See Configure an authentication enabled SDC for more information.
A minimum of two logical data networks are supported. Optionally, you can configure four logical data networks. Verify the
number of VIPs configured in an existing setup.
7. Reboot the PowerFlex node.
Steps
1. If using PowerFlex presentation server:
a. Log in to the PowerFlex presentation server.
b. Go to Configuration > SDCs.
c. Select the SDC, click Modify > Rename, and rename the new host to the standard naming convention.
For example, ESX-10.234.91.84
2. If using a version prior to PowerFlex 3.5:
a. Log in to the PowerFlex GUI.
b. Click Frontend > SDCs and rename the new host to the standard naming convention.
For example, ESX-10.234.91.84
Steps
1. Using the table, calculate the required RAM capacity.
MG capacity (TiB) | Required MG RAM capacity (GiB) | Additional services memory | Total RAM required in the SVM without CloudLink (GiB) | Total RAM required in the SVM with CloudLink (GiB)
2. Alternatively, you can calculate RAM capacity using the following formula:
NOTE: The calculation is in binary MiB, GiB, and TiB.
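Based on the MG formula used in this section (RAM_capacity_in_GiB = 5 + (210 * Total_drive_capacity_in_TiB)/1024), the required RAM can be computed with a one-line awk helper; the 38.4 TiB example capacity is an assumption.

```shell
#!/usr/bin/env bash
# Compute required MG RAM capacity (GiB) from total drive capacity
# (TiB): RAM = 5 + (210 * TiB) / 1024, per this guide's formula.
mg_ram_gib() {
  awk -v tib="$1" 'BEGIN { printf "%.1f\n", 5 + (210 * tib) / 1024 }'
}

mg_ram_gib 38.4   # example: a node with 38.4 TiB of drives
```

Round the result up to the next supported SVM memory size before editing the VM settings.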
3. Open the PowerFlex GUI using the PowerFlex management IP address and the relevant PowerFlex username and password.
4. Select the Storage Data Server (SDS) from the Backend where you want to update the RAM size.
5. Right-click the SDS, select Configure IP addresses, and note the flex-data1-<vlanid> and flex-data2-<vlanid> IP addresses
associated with this SDS. A window displays the IP addresses used on that SDS for data communication. Use
these IP addresses to verify that you power off the correct PowerFlex VM.
6. Right-click the SDS, select Enter Maintenance Mode, and click OK.
7. Wait for the GUI to display a green check mark, then click Close.
8. In the PowerFlex GUI, click Backend, and right-click the SVM and verify the checkbox is deselected for Configure RAM
Read Cache.
9. Power off the SVM.
10. In VMware vCenter, open Edit Settings and modify the RAM size based on the table or formula in step 1. The SVM should
be set to 8 or 12 vCPU, configured as 8 or 12 sockets, 8 or 12 cores (for CloudLink, an additional 4 threads).
11. Power on the SVM.
12. From the PowerFlex GUI backend, right-click the SDS and select Exit Maintenance Mode and click OK.
13. Wait for the rebuild and rebalance to complete.
14. Repeat steps 6 through 13 for the remaining SDSs.
SDS with Fine Granularity pool vCPU total: 10 (SDS) + 2 (MDM/TB) + 2 (CloudLink) = 14 vCPU
RAM_capacity_in_GiB = 5 + (210 * Total_drive_capacity_in_TiB)/1024
NOTE: The physical core requirement is 2 sockets with 14 cores each (the vCPU count
cannot exceed physical cores).
Prerequisites
● Enter the SDS node (SVM) into maintenance mode and power off the SVM.
● If you are putting the primary MDM into maintenance mode, switch the primary cluster role to secondary (switch back to
the original node once completed). Perform this activity on only one SDS at a time.
● Placing multiple SDSs into maintenance mode at the same time risks data loss.
● Ensure that the node has enough CPU cores in each socket.
Steps
1. Log in to the PowerFlex GUI presentation server, https://Presentation_Server_IP:8443.
2. Click Configuration > SDSs.
3. In the right pane, select the SDS and click More > Enter Maintenance Mode.
4. In the Enter SDS into Maintenance Mode dialog box, select Instant.
If maintenance mode takes more than 30 minutes, select PMM.
Steps
1. Log in to the VMware vSphere Client and do the following:
a. Right-click the ESXi host, and select Deploy OVF Template.
b. Click Choose Files and browse the SVM OVA template.
c. Click Next.
2. Go to hosts and templates/EMC PowerFlex, right-click PowerFlex SVM Template, and select New VM from This
Template.
3. Enter a name similar to svm-<hostname>-<SVM IP ADDRESS>, select a datacenter and folder, and click Next.
4. Identify the cluster and select the node that you are deploying. Verify that there are no compatibility warnings and click
Next. Review the details and click Next.
5. Select the local datastore DASXX, and click Next.
6. Leave Customize hardware checked and click Next.
a. Set CPU with 12 cores per socket.
b. Set Memory to 16 GB and check Reserve all guest memory (All locked).
NOTE: The number of vCPUs and the memory size may change based on your system configuration. Check the existing
SVMs and update the CPU and memory settings accordingly.
7. Click Next > Finish and wait for the cloning process to complete.
8. Right-click the new SVM, and select Edit Settings and do the following:
This is applicable only for SSD. For NVMe, see Add NVMe devices as RDMs.
a. From the New PCI device drop-down menu, click DirectPath IO.
b. From the PCI Device drop-down menu, expand Select Hardware, and select Avago (LSI Logic) Dell HBA330 Mini.
c. Click OK.
9. Prepare for asynchronous replication:
NOTE: If replication is enabled, follow these steps; otherwise skip to step 11.
a. Select the Network Adapter with the SDR port group from the list, expand the network adapter, and record the
details from the MAC Address field.
b. Modify the vCPU, Memory, vNUMA, and CPU reservation settings on SVMs:
The following requirements are for reference:
● 12 GB additional memory is required for SDR.
For example, if you have 24 GB memory existing in SVM, add 12 GB to enable replication. In this case, 24+12=36 GB.
● Additional 8*vCPUs required for SDR:
○ vCPU total for MG Pool based system: 8 (SDS) + 8 (SDR) + 2 (MDM/TB) + 2 (CloudLink) = 20 vCPUs
○ vCPU total for an FG pool based system: 10 (SDS) + 10 (SDR) + 2 (MDM/TB) + 2 (CloudLink) = 24 vCPUs
● Per SVM, set numa.vcpu.maxPerVirtualNode to half the vCPU value assigned to the SVM.
For example, if the SVM has 20 vCPU, set numa.vcpu.maxPerVirtualNode to 10.
Related information
Add the new SDS to PowerFlex
Steps
1. Log in to the SDS (SVMs) using PuTTy.
2. Append the line numa_memory_affinity=0 to the SDS configuration file /opt/emc/scaleio/sds/cfg/conf.txt,
type: # echo "numa_memory_affinity=0" >> /opt/emc/scaleio/sds/cfg/conf.txt.
3. Run # cat /opt/emc/scaleio/sds/cfg/conf.txt to verify that the line is appended.
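The append-and-verify pattern in steps 2 and 3 can be rehearsed safely first. This runnable sketch performs the same operation against a temporary file; on the SVM the target is /opt/emc/scaleio/sds/cfg/conf.txt.

```shell
#!/usr/bin/env bash
# Append numa_memory_affinity=0 to an SDS conf file and verify it.
# A temp file stands in for /opt/emc/scaleio/sds/cfg/conf.txt here.
set -eu
CONF="$(mktemp)"
# Quote the value so the shell cannot treat a leading '#' as a comment.
echo "numa_memory_affinity=0" >> "$CONF"
grep -qx 'numa_memory_affinity=0' "$CONF" && echo "line appended"
rm -f "$CONF"
```

Swap in the real conf.txt path on each SVM, then restart the SDS service per your normal change process.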
Steps
1. Use SSH to log in to the primary MDM. Log in to PowerFlex cluster using #scli --login --username admin.
2. To query the current value, type, #scli --query_performance_parameters --print_all --tech --
all_sds|grep -i SDS_NUMBER_OS_THREADS.
3. To set the value of SDS_number_OS_threads to 10, type # scli --set_performance_parameters --sds_id
<ID> --tech --sds_number_os_threads 10.
NOTE: Do not set the SDS threads globally; set them per SDS.
Steps
1. Log in to the SDS (SVMs) using PuTTy.
2. Run # systemctl status NetworkManager to verify that NetworkManager is not running.
The output should show that NetworkManager is disabled and inactive.
3. If NetworkManager is enabled and active, run # systemctl stop NetworkManager and # systemctl disable NetworkManager to stop and disable the service.
Steps
1. Log in to SDS (SVMs) using PuTTY.
2. Note the MAC addresses of all the interfaces, type, #ifconfig or #ip a.
3. Edit all the interface configuration files (ifcfg-eth0, ifcfg-eth1, ifcfg-eth2, ifcfg-eth3, ifcfg-eth4) and update NAME,
DEVICE, and HWADDR to ensure that the correct MAC address and name are assigned.
NOTE: If any entries already have the correct values, you can leave them as-is.
● Use the vi editor to update the file: # vi /etc/sysconfig/network-scripts/ifcfg-ethX
or
● Append the line using the following command:
Example file:
BOOTPROTO=none
ONBOOT=yes
HOTPLUG=yes
TYPE=Ethernet
DEVICE=eth2
IPADDR=192.168.155.46
NETMASK=255.255.254.0
DEFROUTE=no
MTU=9000
PEERDNS=no
NM_CONTROLLED=no
NAME=eth2
HWADDR=00:50:56:80:fd:82
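The NAME/DEVICE/HWADDR edits in step 3 can be applied with sed instead of vi. This sketch operates on a scratch copy of an ifcfg-style file; on the SVM the files live under /etc/sysconfig/network-scripts/, and the interface name and MAC below are placeholders.

```shell
#!/usr/bin/env bash
# Rewrite NAME, DEVICE, and HWADDR in an ifcfg-style file in place.
# A temp copy is used here; adapt the path/MAC for the real SVM files.
set -eu
fix_ifcfg() {                       # fix_ifcfg <file> <ifname> <mac>
  local f="$1" ifname="$2" mac="$3"
  sed -i -e "s/^NAME=.*/NAME=${ifname}/" \
         -e "s/^DEVICE=.*/DEVICE=${ifname}/" \
         -e "s/^HWADDR=.*/HWADDR=${mac}/" "$f"
}

F="$(mktemp)"
printf 'DEVICE=eth0\nNAME=eth0\nHWADDR=00:00:00:00:00:00\nMTU=9000\n' > "$F"
fix_ifcfg "$F" eth2 00:50:56:80:fd:82   # MAC recorded from the vNIC
grep '^HWADDR=' "$F"                    # show the rewritten line
rm -f "$F"
```

Take the MAC for each interface from ip a (step 2) so the mapping survives a reboot.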
Steps
1. Log in to the SVM using PuTTY.
2. Edit the grub file located in /etc/default/grub, type: # vi /etc/default/grub.
3. From the last line, remove net.ifnames=0 and biosdevname=0, and save the file.
4. Rebuild the GRUB configuration file, using: # grub2-mkconfig -o /boot/grub2/grub.cfg
Steps
1. Log in to VMware vCenter using VMware vSphere Client.
2. Select the SVM, right-click, and select Power > Shut Down Guest OS. Ensure that you shut down the correct SVM.
Steps
1. Log in to the production VMware vCenter using VMware vSphere Client.
2. Right-click the VM that you want to change and select Edit Settings.
3. Under the Virtual Hardware tab, expand CPU, and clear the CPU Hot Plug check box.
Steps
1. Browse to the SVM in the VMware vSphere Client.
2. To find a VM, select a data center, folder, cluster, resource pool, or host.
3. Click the VMs tab.
4. Right-click the VM and select Edit Settings.
5. Click VM Options and expand Advanced.
6. Under Configuration Parameters, click Edit Configuration.
7. In the dialog box that appears, click Add Configuration Params.
8. Enter a new parameter name and its value depending on the pool:
● If the SVM for an MG pool has 20 vCPU, set numa.vcpu.maxPerVirtualNode to 10.
● If the SVM for an FG pool has 24 vCPU, set numa.vcpu.maxPerVirtualNode to 12.
9. Click OK > OK.
10. Ensure the following:
● CPU shares are set to high.
● 50% of the vCPU capacity is reserved on the SVM.
For example:
● If the SVM for an MG pool is configured with 20 vCPUs and CPU speed is 2.8 GHz, set a reservation of 28 GHz
(20*2.8/2).
● If the SVM is configured with 24 vCPUs and CPU speed is 3 GHz, set a reservation of 36 GHz (24*3/2).
11. Find the CPU and clock speed:
a. Log in to VMware vCenter.
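The arithmetic in step 10 (reservation = vCPUs × clock speed ÷ 2) and the numa.vcpu.maxPerVirtualNode value from step 8 (half the vCPUs) can be checked with a small helper; the vCPU counts and clock speeds below are the examples from this section.

```shell
#!/usr/bin/env bash
# CPU reservation (GHz) = vCPUs * clock_GHz / 2 (50% reserved);
# numa.vcpu.maxPerVirtualNode = vCPUs / 2.
cpu_reservation_ghz() { awk -v v="$1" -v ghz="$2" 'BEGIN { printf "%g\n", v * ghz / 2 }'; }
numa_max_per_node()   { echo $(( $1 / 2 )); }

cpu_reservation_ghz 20 2.8   # MG pool SVM: 20 vCPU at 2.8 GHz
cpu_reservation_ghz 24 3     # FG pool SVM: 24 vCPU at 3 GHz
numa_max_per_node 20         # numa.vcpu.maxPerVirtualNode for 20 vCPU
```

Enter the resulting GHz value in the SVM's CPU reservation field in vCenter.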
Modify the memory size according to the SDR requirements for MG Pool
Use this procedure to add additional memory required for SDR if replication is enabled.
Steps
1. Log in to the production VMware vCenter using vSphere client.
2. Right-click the VM you want to change and select Edit Settings.
3. Under the Virtual Hardware tab, expand Memory and modify the memory size according to the SDR requirements.
4. Click OK.
Steps
1. Log in to the production VMware vCenter using VMware vSphere client.
2. Right-click the virtual machine that requires changes and select Edit Settings.
3. Under the Virtual Hardware tab, expand CPU and increase the vCPU count according to the SDR requirements.
4. Click OK.
Modify the memory size according to the SDR requirements for FG Pool
Use this procedure to add additional memory required for SDR if replication is enabled.
Steps
1. Log in to the production VMware vCenter using VMware vSphere Client.
2. Right-click the VM that requires changes and select Edit Settings.
3. Under the Virtual Hardware tab, expand Memory and modify the memory size according to the SDR requirements.
4. Click OK.
Steps
1. Log in to the production VMware vCenter using VMware vSphere Client.
2. Right-click the virtual machine that requires changes and select Edit Settings.
3. Under the Virtual Hardware tab, expand CPU and increase the vCPU count according to the SDR requirements.
4. Click OK.
Steps
1. Log in to the production VMware vCenter using VMware vSphere Client and navigate to Host and Clusters.
2. Right-click the SVM and click Edit Settings.
3. Click Add new device and select Network Adapter from the list.
4. Select the appropriate port group created for SDR external communication and click OK.
5. Repeat steps 2 through 4 to create the second NIC.
Steps
1. Log in to VMware vCenter using vSphere client.
2. Select the SVM, right-click Power > Power on.
3. Log in to SVM using PuTTY.
4. Create the rep1 network interface, type: cp /etc/sysconfig/network-scripts/ifcfg-eth0 /etc/sysconfig/
network-scripts/ifcfg-eth5.
5. Create the rep2 network interface, type: cp /etc/sysconfig/network-scripts/ifcfg-eth0 /etc/sysconfig/
network-scripts/ifcfg-eth6.
6. Edit the newly created configuration files (ifcfg-eth5, ifcfg-eth6) using the vi editor and modify the entries for IPADDR,
NETMASK, GATEWAY, DEFROUTE, DEVICE, NAME, and HWADDR, where:
● DEVICE is the newly created device, eth5 or eth6
● IPADDR is the IP address on the rep1 or rep2 network
● NETMASK is the subnet mask
● GATEWAY is the gateway for SDR external communication
● DEFROUTE is changed to no
● HWADDR is the MAC address collected in the topic Adding virtual NICs to SVMs
● NAME is the newly created device name, eth5 or eth6
NOTE: Ensure that the MTU value is set to 9000 for the SDR interfaces on both the primary and secondary sites, as well
as on all end-to-end devices. Confirm the existing MTU values with the customer before configuring.
Steps
1. Go to /etc/sysconfig/network-scripts and create a route file for each SDR interface:
#touch /etc/sysconfig/network-scripts/route-eth5
#touch /etc/sysconfig/network-scripts/route-eth6
2. Add the static routes to the files. For example:
/etc/sysconfig/network-scripts/route-eth5:
10.0.10.0/23 via 10.0.30.1
/etc/sysconfig/network-scripts/route-eth6:
10.0.20.0/23 via 10.0.40.1
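The route files above can also be generated programmatically. This runnable sketch writes them into a scratch directory; on the SVM, point DIR at /etc/sysconfig/network-scripts, and treat the subnets and gateways as examples.

```shell
#!/usr/bin/env bash
# Create static-route files for the SDR interfaces. DIR is a temp
# directory here; on the SVM use /etc/sysconfig/network-scripts.
set -eu
DIR="$(mktemp -d)"

write_route() {                       # write_route <iface> <subnet> <gateway>
  echo "$2 via $3" > "${DIR}/route-$1"
}

write_route eth5 10.0.10.0/23 10.0.30.1   # rep1 network route
write_route eth6 10.0.20.0/23 10.0.40.1   # rep2 network route
cat "${DIR}/route-eth5" "${DIR}/route-eth6"
rm -rf "$DIR"
```

Restart the network service (or bounce the interfaces) afterward so the routes take effect.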
Steps
1. Use WinSCP or SCP to copy the SDR package to the /tmp folder.
2. Use SSH to log in to the SVM and run the following command to install the SDR package: #rpm -ivh /tmp/EMC-ScaleIO-sdr-3.6-
x.xxx.el7.x86_64.rpm.
Prerequisites
The IP address of the node must be configured for SDR. The SDR communicates with several components:
● SDC (application)
● SDS (storage)
● Remote SDR (external)
Steps
1. In the left pane, click Protection > SDRs.
2. In the right pane, click Add.
3. In the Add SDR dialog box, enter the connection information of the SDR:
Steps
1. Log in to all the SVMs and PowerFlex nodes at the source and destination sites.
2. Ping the following IP addresses from each SVM and PowerFlex node at the source site:
● Management IP addresses of the primary and secondary MDMs
● External IP addresses configured for SDR-SDR communication
3. Ping the following IP addresses from each SVM and PowerFlex node at the destination site:
● Management IP addresses of the primary and secondary MDMs
● External IP addresses configured for SDR-SDR communication
Steps
1. If you are using a PowerFlex presentation server:
a. Log in to the PowerFlex presentation server.
b. Click Configuration > SDS and click Add.
c. On the Add SDS page, enter the SDS name and select the Protection Domain.
d. Under Add IP, enter the data IP address and click Add SDS.
e. Locate the newly added PowerFlex SDS, right-click and select Add Device.
f. Choose Storage device from the drop-down menu.
g. Locate the newly added PowerFlex SDS, right-click and select Add Device, and choose Acceleration Device from the
drop-down menu.
CAUTION: If the deployment fails for SSD or NVMe with NVDIMM, it can be due to one of the following reasons. Click View Logs, and see Configuring the NVDIMM for a new PowerFlex hyperconverged node for the node configuration table and the steps to add SDS and NVDIMM to the FG pool.
● The following error appears if the required NVDIMM size and the RAM size assigned to the SVM do not match the node configuration table.
VMWARE_CANNOT_RETRIEVE_VM_MOR_ID
● If the deployment fails to add the device and SDS to the PowerFlex GUI, manually add the SDS and NVDIMM to the FG pool.
2. If you are using a PowerFlex version prior to 3.5:
a. Log in to the PowerFlex GUI, and click Backend > Storage.
b. Right-click the new protection domain, and select +Add > Add SDS.
c. Enter a name.
For example, 10.234.92.84-ESX.
d. Add the following addresses in the IP addresses field and click OK:
● flex-data1-<vlanid>
● flex-data2-<vlanid>
● flex-data3-<vlanid> (if required)
● flex-data4-<vlanid> (if required)
e. Add New Devices from the lsblk output from the previous step.
f. Select the storage pool destination and media type.
g. Click OK and wait for the green check box to appear and click Close.
Related information
Add drives to PowerFlex
Prepare the SVMs for replication
Steps
1. If you are using PowerFlex GUI presentation server to enable zero padding on a storage pool:
a. Connect to the PowerFlex GUI presentation server through https://<ipaddress>:8443 using MDM.
b. Click Storage Pools from the left pane, and select the storage pool.
c. Click Settings from the drop-down menu.
d. Click Modify > General Settings from the drop-down menu.
e. Click Enable Zero Padding Policy > Apply.
NOTE: After the first device is added to a specific pool, you cannot modify the zero padding policy. FG pool is always
zero padded. By default, zero padding is disabled only for MG pool.
2. If you are using a PowerFlex version prior to 3.5 to enable zero padding on a storage pool:
a. Select Backend > Storage, and right-click Select By Storage Pools from the drop-down menu.
b. Right-click the storage pool, and click Modify zero padding policy.
c. Select Enable Zero Padding Policy, and click OK > Close.
NOTE: Zero padding cannot be enabled when devices are available in the storage pools.
3. Do one of the following, depending on whether CloudLink is enabled:
● If CloudLink is enabled, see one of the following procedures, depending on the devices:
○ Encrypt PowerFlex hyperconverged (SVM) or storage-only devices (SED drives)
○ Encrypt PowerFlex hyperconverged (SVM) or storage-only devices (non-SED drives)
● If CloudLink is disabled, use PuTTY to access the Red Hat Enterprise Linux or embedded operating system node.
When adding NVMe drives, keep a separate storage pool for the PowerFlex storage-only node.
e. Repeat steps 5a to 5d on all the SDSs where you want to add the devices.
f. Ensure that all the rebuild and balance activities are successfully completed.
g. Verify the space capacity after adding the new node.
6. If you are using a PowerFlex version prior to 3.5:
a. Connect to the PowerFlex GUI.
b. Click Backend.
c. Locate the newly added PowerFlex SDS, right-click, and select Add Device.
d. Type /dev/nvmeXn1, where X is the value from step 3.
e. Select the Storage Pool, as identified in the Workbook.
NOTE: If the existing protection domain has Red Hat Enterprise Linux nodes, replace or expand with Red Hat
Enterprise Linux. If the existing protection domain has embedded operating system nodes, replace or expand with
embedded operating system.
Related information
Add the new SDS to PowerFlex
Prerequisites
● Ensure that the NVDIMM firmware on the new node is the same version as on the existing nodes in the cluster.
● If the NVDIMM firmware is higher than the Intelligent Catalog version, you must manually downgrade the NVDIMM firmware.
● The VMware ESXi host and the VMware vCenter server are using version 6.7 or higher.
● The VM version of your SVM is version 14 or higher.
● The firmware of the NVDIMM is version 9324 or higher.
● The VMware ESXi host recognizes the NVDIMM.
Steps
1. Log in to the VMware vCenter.
2. Select the VMware ESXi host.
3. Go to the Summary tab.
4. In the VM Hardware section, verify that the required amount of persistent memory is listed.
Add NVDIMM
Use this procedure to add an NVDIMM.
Steps
1. Using the PowerFlex GUI, perform the following to enter the target SDS into maintenance mode:
NOTE: For the new PowerFlex nodes with NVMe or SSD, remove the SDS or device if it is added to the GUI before
placing the SDS into maintenance mode. Skip this step if the SDS is not added to the GUI.
NOTE: If the capacity does not match the configuration table, use the following formula to calculate the NVDIMM or RAM capacity for Fine Granularity. The calculation is in binary MiB, GiB, and TiB. Round the RAM size up to the next GiB. For example, if the output of the equation is 16.75 GiB, round it up to 17 GiB.
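The round-up rule in the NOTE can be expressed as a small helper; a sketch follows (awk-based ceiling, taking the GiB value as a decimal):

```shell
# Round a GiB value up to the next whole GiB (16.75 -> 17, 16 -> 16).
round_up_gib() {
  awk -v v="$1" 'BEGIN { printf "%d\n", (v == int(v)) ? v : int(v) + 1 }'
}

round_up_gib 16.75   # 17
round_up_gib 16      # 16
```

Use the rounded value when sizing the SVM memory in Edit Settings in the steps that follow.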
5. In Edit Settings, change the Memory size as per the node configuration table, and select the Reserve all guest memory (All locked) check box.
6. Right-click the SVM and choose Edit Settings. Set the SVM to 8 or 12 vCPUs, configured as 8 or 12 sockets, 8 or 12 cores (for CloudLink, 4 additional threads).
7. Use VMware vCenter to turn on the SVM.
8. Using the PowerFlex GUI, remove the SDS from maintenance mode.
9. Create a namespace on the NVDIMM:
a. Connect to the SVM using SSH and type # ndctl create-namespace -f -e namespace0.0 --mode=dax
--align=4K.
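Before continuing, the namespace can be verified on the SVM. A sketch using standard ndctl commands follows; the exact JSON output format varies by ndctl version:

```shell
# Sketch: confirm the DAX namespace created in step 9a.
ndctl list -N -n namespace0.0   # expect "mode":"devdax" with 4096 alignment
ls -l /dev/dax0.0               # the device path used by the acceleration pool
```

If /dev/dax0.0 is missing, re-run the create-namespace command before adding the device to the acceleration pool.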
10. Perform steps 3 to 8 for every PowerFlex node with NVDIMM.
11. Create an acceleration pool for the NVDIMM devices:
a. Connect using SSH to the primary MDM, type #scli --add_acceleration_pool --
protection_domain_name <PD_NAME> --media_type NVRAM --acceleration_pool_name
<ACCP_NAME> in the SCLI to create the acceleration pool.
NOTE: Use this step only when you want to add the new PowerFlex node to the new acceleration pool. Otherwise,
skip this step and go to the step to add SSD or NVMe device.
b. For each SDS with NVDIMM, type #scli --add_sds_device --sds_name <SDS_NAME> --
device_path /dev/dax0.0 --acceleration_pool_name <ACCP_NAME> --force_device_takeover to
add the NVDIMM devices to the acceleration pool:
NOTE: Use this step only when you want to add the new acceleration device to a new acceleration pool. Otherwise,
skip this step and go to the step to add SSD or NVMe device.
12. Create a storage pool for SSD devices accelerated by an NVDIMM acceleration pool with Fine Granularity data layout:
a. Connect using SSH to the primary MDM and enter #scli --add_storage_pool --protection_domain_name <PD_NAME> --storage_pool_name <SP_NAME> --media_type SSD --compression_method normal --fgl_acceleration_pool_name <ACCP_NAME> --fgl_profile high_performance --data_layout fine_granularity.
NOTE: Use this step only when you want to add the new PowerFlex node to a new storage pool. Otherwise, skip this
step and go to the step to add SSD or NVMe device.
13. Add the SSD or NVMe device to the existing Fine Granularity storage pool using the PowerFlex GUI.
14. Set the spare capacity for the fine granularity storage pool.
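Step 14 can also be done from SCLI. The following is a sketch only: the flag names are taken from typical scli usage and should be verified against your PowerFlex version with scli --help, and the percentage is a placeholder (spare capacity is commonly sized to at least 1/N of the pool for an N-node protection domain):

```shell
# Sketch: set the spare percentage on the FG storage pool from the primary MDM.
# Verify flag names with: scli --help --modify_spare_policy
scli --modify_spare_policy --protection_domain_name <PD_NAME> \
     --storage_pool_name <SP_NAME> --spare_percentage 17
```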
When finished, if you are not extending the MDM cluster, see Completing the expansion.
Extend the MDM cluster from three to five nodes using SCLI
Use this procedure to extend the MDM cluster using SCLI.
It is critical that the MDM cluster is distributed across access switches and physical cabinets to ensure maximum resiliency and
availability of the cluster. The location of the MDM components should be checked and validated during every engagement,
and adjusted if found noncompliant with the published guidelines. If an expansion includes adding physical cabinets and access
switches, you should relocate the MDM cluster components. See MDM cluster component layouts for more information.
When adding new MDM or tiebreaker nodes to a cluster, first place the PowerFlex storage-only nodes (if available), followed by
the PowerFlex hyperconverged nodes.
Prerequisites
● Identify new nodes to use as MDM or tiebreaker.
● Identify the management IP address, data1 IP address, and data2 IP address (log in to each new node or SVM and run the IP
addr command).
● Gather virtual interfaces for the nodes being used for the new MDM or tiebreaker, and note the interface of data1 and data2.
For example, for a PowerFlex storage-only node, the interface is bond0.152 and bond1.160. If it is an SVM, it is eth3 and
eth4.
● Identify the primary MDM.
Steps
1. SSH to each new node or SVM and assign the proper role (MDM or tiebreaker) to each.
2. Transfer the MDM and LIA packages to the newly identified MDM cluster nodes.
NOTE: The following steps contain sample versions of PowerFlex files as examples only. Use the appropriate PowerFlex
files for your deployment.
10. Enter scli --query_cluster to find the IDs for the newly added standby MDM and standby tiebreaker.
11. To switch to a five-node cluster, enter scli --switch_cluster_mode --cluster_mode 5_node --add_slave_mdm_id <Standby MDM ID> --add_tb_id <Standby tiebreaker ID>.
12. Repeat steps 1 through 9 to add Standby MDM and tiebreakers on other PowerFlex nodes.
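Steps 10 and 11 above, together with the standby additions they depend on, can be sketched as one SCLI sequence run on the primary MDM. Names and IPs below are placeholders:

```shell
# Add a standby MDM and a standby tiebreaker, then switch to 5-node mode.
scli --add_standby_mdm --mdm_role manager \
     --new_mdm_ip <data1_ip>,<data2_ip> --new_mdm_name <new_mdm_name>
scli --add_standby_mdm --mdm_role tb \
     --new_mdm_ip <data1_ip>,<data2_ip> --new_mdm_name <new_tb_name>

scli --query_cluster          # note the IDs of the new standby MDM and TB

scli --switch_cluster_mode --cluster_mode 5_node \
     --add_slave_mdm_id <standby_mdm_id> --add_tb_id <standby_tb_id>
```

Run scli --query_cluster again afterward to confirm the cluster reports 5_node mode with all members online.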
Prerequisites
● Identify new nodes to use as MDM or tiebreaker.
● Identify the management IP address, data1 IP address, and data2 IP address (log in to each new node or SVM and enter the
IP addr command).
● Gather virtual interfaces for the nodes being used for the new MDM or tiebreaker, and note the interface of data1 and data2.
For example, for a PowerFlex storage-only node, the interface is bond0.152 and bond1.160. If it is an SVM, it is eth3 and
eth4.
● Identify the primary MDM.
Steps
1. SSH to each new node or SVM and assign the proper role (MDM or tiebreaker) to each.
2. Transfer the MDM and LIA packages to the newly identified MDM cluster nodes.
NOTE: The following steps contain sample versions of PowerFlex files as examples only. Use the appropriate PowerFlex
files for your deployment.
8. Add a new standby tiebreaker by entering scli --add_standby_mdm --mdm_role tb --new_mdm_ip <new tb data1,data2 IPs> --new_mdm_name <new tb name>.
9. Repeat Steps 7 and 8 for each new MDM and tiebreaker that you are adding to the cluster.
10. Enter scli --query_cluster to find the ID for the current MDM and tiebreaker. Note the IDs of the MDM and tiebreaker being replaced.
11. To replace the MDM, enter scli --replace_cluster_mdm --add_slave_mdm_id <mdm id to add> --
remove_slave_mdm_id <mdm id to remove>.
Repeat this step for each MDM.
12. To replace the tiebreaker, enter scli --replace_cluster_mdm --add_tb_id <tb id to add> --
remove_tb_id <tb id to remove>.
Repeat this step for each tiebreaker.
13. Enter scli --query_cluster to find the IDs for MDMs and tiebreakers being removed.
14. To remove the old MDM, enter scli --remove_standby_mdm --remove_mdm_id <mdm id to remove>, using the IDs from the previous step.
NOTE: This step might not be necessary if this MDM remains in service as a standby. See MDM cluster component
layouts for more information.
15. To remove the old tiebreaker, enter scli --remove_standby_mdm --remove_mdm_id <mdm id to remove>.
NOTE: This step might not be necessary if this tiebreaker remains in service as a standby. See MDM cluster component
layouts for more information.
Steps
1. Update the inventory for vCenter (vCSA), switches, gateway VM, and nodes:
a. Click Resources on the home screen.
b. Select the vCenter, switches (applicable only for full networking), gatewayVM, and newly added nodes.
c. Click Run Inventory.
d. Click Close.
e. Wait for the job in progress to complete.
2. Update Services details:
a. Click Services.
b. Choose the service on which the new node is expanded and click View Details.
c. On the Services details screen, choose Update Service Details.
d. Choose the credentials for the node and SVM and click Next.
e. On the Inventory Summary page, verify that the newly added nodes appear under Physical Node, and click Next.
f. On the Summary page, verify the details and click Finish.
Prerequisites
Ensure that the VMware vSphere vCenter Server and the VMware vSphere Client are accessible.
Steps
1. Log in to the VMware vSphere Client.
2. Click Networking.
3. Expand the PowerFlex Customer-Datacenter.
4. Right-click cust_dvswitch.
5. Click Distributed Port Group > New Distributed Port Group.
6. Update the name to pfmc-nsx-transport-121 and click Next.
7. Select the default Port binding.
8. Select the default Port allocation.
9. Select the default # of ports (default is 8).
10. Select the default VLAN as VLAN Type.
11. Set the VLAN ID to 121.
12. Clear the Customize default policies configuration check box and click Next.
13. Click Finish.
14. Right-click pfmc-nsx-transport-121 and click Edit Settings...
15. Click Teaming and failover.
16. Verify that Uplink1 and Uplink2 are moved to Active.
17. Click OK.
Prerequisites
Both Cisco Nexus access switch ports for the compute VMware ESXi hosts are configured as trunk access. These ports will be
configured as LACP enabled after the physical adapter is removed from each ESXi host.
WARNING: As the VMK0 (ESXi management) is not configured on cust_dvswitch, both the vmnics are first
migrated to the LAGs simultaneously and then the port channel is configured. Data connectivity to PowerFlex is
lost until the port channels are brought online with both vmnic interfaces connected to LAGs.
Steps
1. Log in to the VMware vSphere Client.
2. Look at VMware vCenter and physical switches to ensure that both ports across all hosts are up.
3. For each compute VMware ESXi host, record the physical switch ports to which vmnic5 (switch-B) and vmnic7 (switch-A) are connected.
a. Click Home, then select Hosts and Clusters and expand the compute cluster.
b. Select the first compute ESXi host in left pane, and then select Configure tab in right pane.
c. Select Virtual switches under Networking.
d. Expand cust_dvswitch.
e. Expand Uplink1, click the ellipsis (…) for vmnic7, and select View Settings.
f. Click the LLDP tab.
g. Record the Port ID (switch port) and System Name (switch).
h. Repeat step 3 for vmnic5 on Uplink 2.
4. Configure LAG (LACP) on cust_dvswitch within VMware vCenter Server:
a. Click Home, then select Networking.
b. Expand the compute cluster and click cust_dvswitch > Configure > LACP.
c. Click +New to open wizard.
d. Verify that the name is lag1.
e. Verify that the number of ports is 2.
f. Verify that the mode is Active.
g. Change the Load Balancing mode to Source and destination IP address, TCP/UDP port.
h. Click OK.
5. Migrate vmnic5 to lag1-0 and vmnic7 to lag1-1 on cust_dvswitch for the compute VMware ESXi host as follows:
a. Click Home, then select Networking and expand the PowerFlex data center.
b. Right-click cust_dvswitch and select Manage host networking to open wizard.
c. Select Add hosts... and click Next.
d. Click Attached hosts..., select all the compute ESXi hosts, and click OK.
e. Click Next.
f. For each ESXi host, select vmnic5 and click Assign uplink.
g. Click lag1-0 and click OK.
h. For each ESXi host, select vmnic7 and click Assign uplink.
i. Click lag1-1 and click OK.
j. Click Next > Next > Next > Finish.
6. Create port-channel (LACP) on switch-A for compute VMware ESXi host.
The following switch configuration is an example of a single compute VMware ESXi host.
a. Open an SSH session to switch-A using PuTTY or a similar SSH client.
b. Create a port channel on switch-A for each compute VMware ESXi host as follows:
interface port-channel40
description to flex-compute-esxi-host01
switchport
switchport mode trunk
switchport trunk allowed vlan 151,152,153,154
spanning-tree port type edge trunk
spanning-tree bpduguard enable
speed 25000
no lacp suspend-individual
vpc 40
7. Configure channel-group (LACP) on switch-A access port (vmnic5) for each compute VMware ESXi host.
The following switch port configuration is an example of a single compute VMware ESXi host.
a. Open an SSH session to switch-A using PuTTY or a similar SSH client.
b. Create port on switch-A as follows:
int e1/1/1
description to flex-compute-esxi-host01 – vmnic5
switchport
switchport mode trunk
switchport trunk allowed vlan 151,152,153,154
spanning-tree port type edge trunk
spanning-tree bpduguard enable
spanning-tree guard root
speed 25000
channel-group 40 mode active
8. Create port-channel (LACP) on switch-B for each compute VMware ESXi host.
The following switch configuration is an example of a single compute VMware ESXi host.
a. Open an SSH session to switch-B using PuTTY or a similar SSH client.
b. Create a port channel on switch-B for each compute VMware ESXi host as follows:
interface port-channel40
description to flex-compute-esxi-host01
switchport
switchport mode trunk
switchport trunk allowed vlan 151,152,153,154
spanning-tree port type edge trunk
spanning-tree bpduguard enable
speed 25000
no lacp suspend-individual
vpc 40
9. Configure channel-group (LACP) on switch-B access port (vmnic7) for each compute VMware ESXi host.
The following switch port configuration is an example of a single compute VMware ESXi host.
a. Open an SSH session to switch-B using PuTTY or a similar SSH client.
b. Create port on switch-B as follows:
int e1/1/1
description to flex-compute-esxi-host01 – vmnic7
switchport
switchport mode trunk
switchport trunk allowed vlan 151,152,153,154
spanning-tree port type edge trunk
spanning-tree bpduguard enable
spanning-tree guard root
speed 25000
channel-group 40 mode active
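Once the channel-groups are configured on both switches, the port channel and vPC state can be confirmed on each switch. A sketch using common Cisco NX-OS show commands:

```shell
show port-channel summary   # port-channel40 should list both members as (P)
show vpc 40                 # vPC 40 should be up on both peers
show lacp neighbor          # LACP partner information from the ESXi host
```

Both member interfaces must show as bundled (P) before data connectivity through the LAG is restored.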
10. Update teaming and policy to route based on physical NIC load for each port group within cust_dvswitch:
a. Click Home and select Networking.
b. Expand cust_dvswitch to have all port groups in view.
Prerequisites
NOTE: Before adding a VMware NSX-T service using PowerFlex Manager, either the customer or VMware services must
add the new PowerFlex node to NSX-T Data Center using NSX-T UI.
Consider the following:
● Before adding this service using Update Service Details in PowerFlex Manager, verify that NSX-T Data Center is configured on the PowerFlex hyperconverged or compute-only nodes.
● If the transport nodes (PowerFlex cluster) are configured with NSX-T, you cannot replace the field units using PowerFlex Manager. You must add the node manually by following either of these procedures, depending on the node type:
○ Performing a PowerFlex hyperconverged node expansion
○ Performing a PowerFlex compute-only node expansion
Steps
1. Log in to PowerFlex Manager.
2. If NSX-T Data Center 3.0 or higher is deployed and is using VDS (not N-VDS), then add the transport network:
a. From Getting Started, click Define Networks.
b. Click + Define and do the following:
● Configure Static IP Address Ranges: Select the Configure Static IP Address Ranges check box, and type the starting and ending IP addresses of the transport network IP pool.
c. Click Next.
d. On the Network Information page, select Full Network Automation, and click Next.
e. On the Cluster Information page, enter the following details:
f. Click Next.
g. On the OS Credentials page, select the OS credentials for each node, and click Next.
h. On the Inventory Summary page, review the summary and click Next.
i. On the Networking Mapping page, verify that the networks are aligned with the correct dvSwitch.
j. On the Summary page, review the summary and click Finish.
4. Verify PowerFlex Manager recognizes NSX-T is configured on the nodes:
a. Click Services.
b. Select the hyperconverged or compute-only service.
c. Verify that a banner appears under the Service Details tab, indicating that NSX-T is configured on a node and is preventing some features from being used. If you do not see this banner, verify that you selected the correct service and that NSX-T is configured on the hyperconverged or compute-only nodes.
55
Encrypting PowerFlex hyperconverged (SVM) or storage-only node devices (SED or non-SED drives)
Prerequisites
NOTE: This procedure is not applicable for PowerFlex storage-only nodes with NVMe drives.
● If you want to add the SVM into a specific machine group, use the -G [group_code] argument with the preceding
command.
where -G group_code specifies the registration code for the machine group to which you want to assign the machine.
NOTE: To obtain the registration code of the machine group, log in to the CloudLink Center using a web browser.
Steps
1. Open a browser, and provide the CloudLink Center IP address.
2. In the Username box, enter secadmin.
3. In the Password box, enter the secadmin password.
4. Click Agents > Machines.
5. Ensure that the hostname of the new SVM or PowerFlex storage-only node is listed, and is in Connected state.
286 Encrypting PowerFlex hyperconverged (SVM) or storage-only node devices (SED or non-SED drives)
Internal Use - Confidential
6. If the SDS has devices that are added to PowerFlex, remove the devices. Otherwise, skip this step.
NOTE: If the device shows taking control, run #svm status until the device status shows as managed. It is a known issue that the CLI status of SED drives shows as unencrypted, whereas the CloudLink Center UI shows the device status as Encrypted HW.
NOTE: There are no /dev/mapper devices for SEDs. Use the device name listed in the svm status output. It is recommended to add self-encrypting drives (SEDs) to their own storage pools.
f. Once all SED drives are Managed, add the encrypted devices to the PowerFlex SDS.
8. Ensure that rebalance is running and progressing before continuing to another SDS.
Related information
Verify the CloudLink license
Prerequisites
Ensure that the following prerequisites are met:
● If you are using PowerFlex presentation server, see Modifying the vCPU, memory, vNUMA, and CPU reservation settings on
SVMs for the CPU settings.
● If you are using a PowerFlex version prior to 3.6, the Storage VM (SVM) vCPU is set to 12 (one socket and twelve cores), and RAM is set to 16 GB (applicable for MG pool enabled systems only). If you have an FG pool enabled system, change the RAM size based on the node configuration table specified in Add NVDIMM.
● SSH to the SVM or the PowerFlex storage-only node on which you plan to have the encrypted devices.
● Download and install the CloudLink Agent by entering:
curl -O http://cloudlink_ip/cloudlink/securevm && sh securevm -S cloudlink_ip
● If you want to add the SVM into a specific machine group, use the -G [group_code] argument with the preceding
command.
where -G group_code specifies the registration code for the machine group to which you want to assign the machine.
NOTE: To obtain the registration code of the machine group, log in to the CloudLink Center using a web browser.
Steps
1. Open a browser, and enter the CloudLink Center IP address.
2. In the Username box, enter secadmin.
3. In the Password box, enter the secadmin password.
The CloudLink Center home page is displayed.
4. Click Agents > Machines.
5. Ensure that the hostname of the new SVM or PowerFlex storage-only node is listed, and is in Connected state.
NOTE: Ensure that Storage Data Server (SDS) is installed before CloudLink Agent is installed.
6. In the /opt/emc/extra/pre_run.sh file, add sleep 60 before the last line if it does not already exist.
7. If the SDS has devices that are added to PowerFlex, remove the devices. Otherwise, skip this step.
b. For SSD drives, enter svm encrypt /dev/sdX for each drive you want to encrypt.
where X is the device letter.
c. For NVMe drives, enter svm encrypt /dev/nvmexxx for each drive you want to encrypt.
d. Enter #svm status to view the status of the devices.
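Encryption runs in the background, so it can help to poll rather than re-run the status command by hand. A small sketch (the grep is a plain-text match on the svm status output and may need adjusting for your CloudLink version):

```shell
# Sketch: wait until CloudLink reports at least one device as managed.
until svm status | grep -qiw 'managed'; do
  sleep 10
done
svm status
```

Check the full svm status output at the end to confirm every encrypted device, not just the first, shows as managed.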
iv. Enter the new device path and name in the Path and Name fields of the Add Storage Device to SDS window.
v. Select the Storage Pool and Media Type you recorded in the drive information table.
vi. Click Add Device.
vii. Repeat steps c and d to add all the devices, and click Add Devices.
If you are using a PowerFlex version prior to 3.5:
i. Log in to the PowerFlex GUI.
ii. Click Backend.
iii. Locate the PowerFlex SDS, right-click, and select Add Device.
iv. In Add device to SDS, enter the Path and select the Storage Pool for each device.
● If the PowerFlex storage-only node has only SSD disks, the path is /dev/mapper/svm_sdX, where X is the device you have encrypted.
● If the PowerFlex storage-only node has NVMe disks, the path is /dev/mapper/svm_nvmeXnX, where X is the device you have encrypted.
9. Ensure that rebalance is running and progressing before continuing to another SDS.
Related information
Verify the CloudLink license
Verify newly added SVMs or storage-only nodes machine status in CloudLink Center
56
Performing a PowerFlex compute-only node
expansion
Perform the manual expansion procedure to add a PowerFlex R650/R750/R6525 compute-only node to PowerFlex Manager
services that are discovered in a lifecycle mode.
Before adding a PowerFlex node, you must complete the following initial set of expansion procedures:
● Preparing to expand a PowerFlex appliance
● Configuring the network
Discover resources
Perform this step to discover and allow PowerFlex Manager access to resources in the environment. Provide the management
IP address and credential for each discoverable resource.
Prerequisites
Verify that the iDRAC network settings are configured. See Configure iDRAC network settings for more information.
NOTE: For partial network deployments, you do not need to discover the switches, but the switches must be preconfigured. For sample configurations for Dell, Cisco, and Arista switches, see the Dell EMC PowerFlex Appliance Administration Guide.
The following are the specific details for completing the Discovery wizard steps:
** This is optional. For a new CloudLink Center deployment, the CloudLink Center is discovered automatically.
Prerequisites
● Configure the iDRAC network settings.
● Gather the IP addresses and credentials that are associated with the resources.
NOTE: PowerFlex Manager also allows you to use the name-based searches to discover a range of nodes that were
assigned the IP addresses through DHCP to iDRAC. For more information about this feature, see Dell EMC PowerFlex
Manager Online Help.
Steps
1. On the PowerFlex Manager Getting Started page, click Discover Resources.
2. On the Welcome page of the Discovery Wizard, read the instructions and click Next.
3. On the Identify Resources page, click Add Resource Type. From the Resource Type list, select the resource that you
want to discover.
4. Enter the management IP address of the resource in the IP/Hostname Range field. To discover a resource in an IP range,
provide a starting and ending IP address.
5. In the Resource State list, select Managed or Unmanaged.
6. For PowerFlex node, to discover resources into a selected node pool instead of the global pool (default), select the node
pool from the Discover into Node Pool list.
7. Select the appropriate credential from the Credentials list. See the table above for details.
8. For PowerFlex node, if you want PowerFlex Manager to automatically reconfigure the iDRAC IP addresses of the nodes it
discovers, select the Reconfigure discovered nodes with new management IP and credentials check box. This option
is not selected by default, because it is faster to discover the nodes if you bypass the reconfiguration.
NOTE: iDRAC can also be discovered using a hostname.
NOTE: For the Resource Type Node, you can use a range with hostname or IP address, provided the hostname has a
valid DNS entry.
9. For PowerFlex node, select the Auto configure nodes to send alerts to PowerFlex Manager check box to have
PowerFlex Manager automatically configure nodes to send alerts to PowerFlex Manager.
10. Click Next to start discovery. On the Discovered Resources page, select the resources from which you want to collect
inventory data and click Finish. The discovered resources are listed on the Resources page.
Prerequisites
Verify the customer VMware ESXi ISO is available and is located in the Intelligent Catalog code directory.
Steps
1. Log in to the iDRAC:
a. Connect to the iDRAC interface and launch a virtual remote console by clicking Dashboard > Virtual Console and click
Launch Virtual Console.
b. Select Connect Virtual Media.
c. Under Map CD/DVD, click Choose File > Browse and browse to the folder where the ISO file is saved, select it, and
click Open.
d. Click Map Device.
e. Click Menu > Boot > Virtual CD/DVD/ISO.
f. Click Power > Reset System (warm boot).
2. Set the boot option to UEFI.
a. Press F2 to enter system setup.
b. Under System BIOS > Boot setting, select UEFI as the boot mode.
NOTE: Ensure that the BOSS card is set as the primary boot device from the boot sequence settings. If the BOSS
card is not set as the primary boot device, reboot the server and change the UEFI boot sequence from System
BIOS > Boot settings > UEFI BOOT settings.
c. Click Back > Back > Finish > Yes > Finish > OK > Finish > Yes.
3. Install VMware ESXi:
a. On the VMware ESXi installer screen, press Enter to continue.
b. Press F11 to accept the license agreement.
c. Under Local, select DELLBOSS VD as the install location, and press Enter if prompted to do so.
d. Select US Default as the keyboard layout.
e. When prompted, type the root password and press Enter.
f. At the Confirm Install screen, press F11.
g. When the installation is complete, remove the installation media before rebooting.
h. Press Enter to reboot the node.
NOTE: Set the first boot device to be the drive on which you installed VMware ESXi in Step 3.
e. See the VMware ESXi Management VLAN ID field in the Workbook for the required VLAN value.
f. Set IPv4 ADDRESS, SUBNET MASK, and DEFAULT GATEWAY configuration to the values defined in the Workbook.
g. Go to DNS Configuration. See the Workbook for the required DNS value.
h. Go to Custom DNS suffix. See the Workbook (local VXRC DNS).
i. Go to DCUI Troubleshooting Options.
j. Select Enable ESXi Shell and Enable SSH.
k. Press <Alt>-F1
l. Log in as root.
m. To enable the VMware ESXi host to work on the port channel, type:
n. Type vim-cmd hostsvc/datastore/rename datastore1 DASXX to rename the datastore, where XX is the
server number.
o. Type exit to log off.
p. Press <Alt>-F2 to return to the DCUI.
q. Select Disable ESXi Shell.
r. Go to DCUI IPv6 Configuration.
s. Disable IPv6.
t. Press ESC to return to the DCUI.
u. Type Y to commit the changes; the node restarts.
v. Verify host connectivity by pinging the IP address from the jump server, using the command prompt.
Prerequisites
Ensure that you have access to the customer vCenter.
Steps
1. From the vSphere Client home page, go to Home > Hosts and Clusters.
2. Select a data center.
3. Right-click the data center and select New Cluster.
4. Enter a name for the cluster.
5. Select vSphere DRS and vSphere HA cluster features.
6. Click OK.
7. Select the existing cluster or newly created cluster.
8. From the Configure tab, click Configuration > Quickstart.
9. Click Add in the Add hosts card.
10. On the Add hosts page, in the New hosts tab, add the hosts that are not part of the vCenter Server inventory by entering
the IP address, or hostname and credentials.
11. (Optional) Select the Use the same credentials for all hosts option to reuse the credentials for all added hosts.
12. Click Next.
13. The Host Summary page lists all the hosts to be added to the cluster with related warnings. Review the details and click
Next.
14. On the Ready to complete page, review the IP addresses or FQDN of the added hosts and click Finish.
15. Add the new licenses:
a. Click Menu > Administration
b. In the Administration section, click Licensing.
c. Click Licenses.
d. From the Licenses tab, click Add.
e. Enter or paste the license keys for VMware vSphere and vCenter per line. Click Next.
The license key is a 25-character string of letters and digits in the format XXXXX-XXXXX-XXXXX-XXXXX-XXXXX.
You can enter a list of keys in one operation. A new license is created for each license key you enter.
f. On the Edit license names page, rename the new licenses as appropriate and click Next.
g. Optionally, provide an identifying name for each license. Click Next.
h. On the Ready to complete page, review the new licenses and click Finish.
Steps
1. Copy the SDC file to the local datastore on the VMware vSphere ESXi server.
2. Use SSH on the host and type esxcli software vib install -d /vmfs/volumes/datastore1/
sdc-3.x.xxxxx.xx-esx7.x.zip -n scaleio-sdc-esx7.x.
3. Reboot the PowerFlex node.
4. To configure the SDC, generate a new UUID:
NOTE: If the PowerFlex cluster uses SDC authentication, the newly added SDC reports as disconnected when
added to the system. See Configure an authentication enabled SDC for more information.
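The UUID generation and SDC configuration commands are not shown in this extract. A hedged sketch of the commonly documented scini module configuration on ESXi follows; the GUID and MDM IP addresses are placeholders, not values from this document:

```shell
# Hypothetical sketch: configure the SDC (scini) module with a new GUID and the
# MDM IP addresses. Replace the GUID and IPs with values from the Workbook.
esxcli system module parameters set -m scini \
  -p "IoctlIniGuidStr=12345678-90ab-cdef-1234-567890abcdef IoctlMdmIPStr=192.168.150.10,192.168.151.10"
# Verify that the SDC package is present after the reboot
esxcli software vib list | grep -i sdc
```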
A minimum of two logical data networks are supported. Optionally, you can configure four logical data networks. Verify the
number of VIPs configured in the existing setup.
7. Reboot the PowerFlex node.
Steps
1. If using PowerFlex GUI presentation server:
a. Log in to the PowerFlex GUI presentation server.
b. Go to Configuration > SDCs
c. Select the SDC, click Modify > Rename, and rename the new host to the standard naming convention.
For example, ESX-10.234.91.84
2. If using a PowerFlex version prior to 3.5:
a. From the PowerFlex GUI, click Frontend > SDCs and rename the new host to the standard naming convention.
For example, ESX-10.234.91.84
Prerequisites
VMware ESXi must be installed with hosts added to the VMware vCenter.
Steps
1. Log in to the VMware vSphere Client.
2. Click Hosts and Clusters.
3. Locate and select the VMware ESXi host.
4. Select Datastores.
5. Right-click the datastore name, and select Rename.
6. Name the datastore using the DASXX convention, with XX being the node number.
Prerequisites
Apply all VMware ESXi updates before installing or loading hardware drivers.
NOTE: This procedure is required only if the ISO drivers are not at the proper Intelligent Catalog level.
Steps
1. Log in to the VMware vSphere Client.
2. Click Hosts and Clusters.
3. Locate and select the VMware ESXi host that you installed.
4. Select Datastores.
5. Right-click the datastore name and select Browse Files.
6. Select the Upload icon (to upload file to the datastore).
7. Browse to the Intelligent Catalog folder or downloaded current solution Intelligent Catalog files.
8. Select the VMware ESXi patch .zip files according to the current solution Intelligent Catalog and node type and click OK to
upload.
9. Select the driver and vib files according to the current Intelligent Catalog and node type and click OK to upload.
10. Click Hosts and Clusters.
11. Locate the VMware ESXi host, right-click, and select Enter Maintenance Mode.
12. Open an SSH session with the VMware ESXi host using PuTTY or a similar SSH client.
13. Log in as root.
14. Type cd /vmfs/volumes/DASXX, where DASXX is the name of the local datastore that is assigned to the VMware ESXi
server.
15. To display the contents of the directory, type ls.
16. If the directory contains vib files, type esxcli software vib install -v /vmfs/volumes/DASXX/
patchname.vib to install each vib. These vib files can be individual drivers that are absent from the larger patch bundle
and must be installed separately.
17. Perform either of the following depending on the VMware ESXi version:
a. For VMware ESXi 7.0, type esxcli software vib update -d /vmfs/volumes/DASXX/VMware-
ESXi-7.0<version>-depot.zip.
b. For VMware ESXi 6.x, type esxcli software vib install -d /vmfs/volumes/DASXX/<ESXi-patch-
file>.zip
18. Type reboot to reboot the host.
19. Once the host completes rebooting, open an SSH session with the VMware ESXi host, and type esxcli software vib
list | grep net-i to verify that the correct drivers are loaded.
20. Select the host and click Exit Maintenance Mode.
21. Update the test plan and host tracker with the results.
The dvswitch names are examples only and may not match the configured system. Do not change these names, or a data
unavailability or data loss event may occur.
You can select multiple hosts and apply settings in template mode.
NOTE: If the ESXi host participates in NSX, do not migrate management and vMotion VMkernels to the VDS.
Prerequisites
Gather the IP addresses of the primary and secondary MDMs.
Steps
1. Open Direct Console User Interface (DCUI) or use SSH to log in to the new hosts.
2. At the command-line interface, run the following commands to ping each of the primary and secondary MDM IP
addresses.
If the ping test fails, remediate before continuing.
NOTE: A minimum of two logical data networks are supported. Optionally, you can configure four logical data networks.
Run the following commands for LACP bonding NIC port design. x is the VMkernel adapter number in vmkx.
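The ping commands themselves are elided in this extract. A hedged sketch using vmkping against each MDM IP follows; the VMkernel numbers and addresses are hypothetical, and the -d -s 8972 options additionally validate a 9000-byte MTU path:

```shell
# Hypothetical example: ping the primary and secondary MDM IPs from each data
# VMkernel adapter. vmk2/vmk3 and the IP addresses are placeholders.
vmkping -I vmk2 -d -s 8972 192.168.150.10
vmkping -I vmk3 -d -s 8972 192.168.151.10
```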
NOTE: After several host restarts, check the access switches for error or disabled states by running the following
commands:
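The switch commands are not included in this extract; as a hedged sketch, typical Cisco NX-OS checks for error or disabled port states look like the following:

```shell
# Hypothetical Cisco NX-OS checks (run on each access switch)
show interface status err-disabled
show interface counters errors
```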
3. Optional: If errors appear in the counters of any interfaces, type the following and check the counters again.
Output from a Cisco Nexus switch:
4. Optional: If there are still errors on the counter, perform the following to see if the errors are old and irrelevant or new and
relevant.
a. Optional: Type # show interface | inc flapped.
Sample output:
Last link flapped 1d02h
b. Type # show logging logfile | inc failure.
Sample output:
Dec 12 12:34:50.151 access-a %ETHPORT-5-IF_DOWN_LINK_FAILURE: Interface Ethernet1/4/3
is down (Link failure)
5. Optional: Check and reset physical connections, bounce and reset ports, and clear counters until errors stop occurring.
Do not activate new nodes until all errors are resolved and no new errors appear.
Steps
1. Log in to VMware vCSA HTML Client using the credentials.
2. Go to VMs and templates inventory or Administration > vCenter Server Extensions > vSphere ESX Agent Manager
> VMs to view the VMs.
The VMs are in the vCLS folder once the host is added to the cluster.
3. Right-click the VM and click Migrate.
4. In the Migrate dialog box, click Yes.
5. On the Select a migration type page, select Change storage only and click Next.
6. On the Select storage page, select the PowerFlex volumes for hyperconverged or ESXi-based compute-only node which
will be mapped after the PowerFlex deployment.
NOTE: The volume name is powerflex-service-vol-1 and powerflex-service-vol-2. The datastore name is
powerflex-esxclustershotname-ds1 and powerflex-esxclustershotname-ds2. If these volumes or datastore are
not present, create the volumes or datastores to migrate the vCLS VMs.
Steps
1. Update the inventory for vCenter (vCSA), switches, gateway VM, and nodes:
a. Click Resources on the home screen.
b. Select the vCenter, switches (applicable only for full networking), gateway VM, and newly added nodes.
c. Click Run Inventory.
d. Click Close.
e. Wait for the job in progress to complete.
2. Update Services details:
a. Click Services
b. Choose the services on which a new node is expanded and click View Details.
c. On the Services details screen, choose Update Service Details.
d. Choose the credentials for the node and SVM and click Next.
e. On the Inventory Summary page, verify that the newly added nodes appear under Physical Node, and click Next.
f. On the Summary page, verify the details and click Finish.
Prerequisites
Ensure that the VMware vSphere vCenter Server and the VMware vSphere Client are accessible.
Steps
1. Log in to the VMware vSphere Client.
2. Click Networking.
3. Expand the PowerFlex Customer-Datacenter.
4. Right-click cust_dvswitch.
5. Click Distributed Port Group > New Distributed Port Group.
6. Update the name to pfmc-nsx-transport-121 and click Next.
7. Select the default Port binding.
8. Select the default Port allocation.
9. Select the default # of ports (default is 8).
10. Select the default VLAN as VLAN Type.
11. Set the VLAN ID to 121.
12. Clear the Customize default policies configuration check box and click Next.
13. Click Finish.
14. Right-click the pfmc-nsx-transport-121 and click Edit Settings....
15. Click Teaming and failover.
16. Verify that Uplink1 and Uplink2 are moved to Active.
17. Click OK.
Prerequisites
Both Cisco access switch ports for the compute VMware ESXi hosts are configured with LACP. These ports are configured
as trunk ports after removing the physical adapter from each ESXi host.
Steps
1. Log in to the VMware vSphere Client.
2. Look at vCenter and physical switches to ensure that both ports across all hosts are up.
3. For each compute ESXi host, record the physical switch port to which vmnic4 (switch-B) and vmnic6 (switch-A) connect.
a. Click Home, then select Hosts and Clusters and expand the compute cluster.
b. Select the first compute ESXi host in the left pane, and then select Configure tab in the right pane.
c. Select Virtual switches under Networking.
d. Expand cust_dvswitch.
e. Expand lag-1, click the ellipsis (…) for vmnic4, and select View Settings.
f. Click LLDP tab.
g. Record the port ID (switch port) and system name (switch).
h. Repeat step 3 for vmnic6 on lag-1.
4. Repeat steps 2 and 3 for each additional compute ESXi host.
5. Create a management distributed port group for cust_dvswitch as follows:
a. Right-click cust_dvswitch (Workbook default name).
b. Click Distributed Port Group > New Distributed Port Group.
c. Update the name to pfcc-node-mgmt-105-new and click Next.
d. Select the default Port binding.
e. Select the default Port allocation.
f. Select the default # of ports (default is 8).
g. Select the default VLAN as VLAN Type.
h. Set the VLAN ID to 105.
i. Clear the Customize default policies configuration and click Next.
j. Click Finish.
k. Right-click the pfcc-node-mgmt-105-new and click Edit Settings...
l. Click Teaming and failover.
m. Verify that Uplink1 and Uplink2 are listed under active and LAG is unused.
n. Click OK.
6. Remove channel-group from the port interface (vmnic6) on Switch-B for each compute ESXi host as follows:
NOTE: This step must be done before removing the physical NICs from the VDS. Otherwise, only one physical NIC gets
removed successfully. The other physical NIC fails to remove from the LAG because both ports are bonded to a port
channel.
config t
interface ethernet 1/x
no channel-group
c. Repeat steps 6a and 6b for each switch port for the remaining compute ESXi hosts.
7. Migrate vmnic6 to Uplink2 and VMK0 to pfcc-node-mgmt-105-new on cust_dvswitch for each compute ESXi host as
follows:
a. Click Home, then select Networking and expand the PowerFlex data center.
b. Right-click cust_dvswitch and select Add and Manage hosts to open the wizard.
● Select Manage host networking and click Next.
● Click Attached hosts..., select all the compute ESXi hosts, and click OK.
● Click Next.
● For each ESXi host, select vmnic6 and click Assign uplink.
● Select Uplink2 and click OK.
● Click Next.
● Select vmk0 (esxi-management) and click Assign port group.
● Select pfcc-node-mgmt-105-new and click OK.
● Click Next > Next > Next > Finish.
8. Remove channel-group from the port interface (vmnic4) on Switch-A for each compute ESXi host as follows:
config t
interface ethernet 1/x
no channel-group
c. Repeat steps 8a and 8b for each switch port for the remaining compute ESXi hosts.
9. Add vmnic4 to Uplink1 on cust_dvswitch for each compute ESXi host as follows:
a. Click Home, then select Networking and expand the PowerFlex data center.
b. Right-click cust_dvswitch and select Add and Manage Hosts to open the wizard.
● Select Manage host networking and click Next.
● Click Attached hosts..., select all the compute ESXi hosts, and click OK.
● Click Next.
● For each ESXi host, select vmnic4 and click Assign uplink.
● Select Uplink1 and click OK.
● Click Next > Next > Next > Finish.
10. Delete the pfcc-node-mgmt-105 port group on cust_dvswitch:
a. Click Home, select Networking, and expand the PowerFlex data center.
b. Expand cust_dvswitch to view the distributed port groups.
c. Right-click pfcc-node-mgmt-105 and click Delete.
d. Click Yes to confirm deletion of the distributed port group.
11. Rename the pfcc-node-mgmt-105-new port group on cust_dvswitch:
a. Click Home, select Networking, and expand the PowerFlex data center.
b. Expand cust_dvswitch to view the distributed port groups.
c. Right-click pfcc-node-mgmt-105-new and click Rename.
d. Enter pfcc-node-mgmt-105 and click OK.
12. Update the teaming and failover policy for port group pfcc-vmotion-106:
a. Click Home, select Networking, and expand the PowerFlex compute data center.
b. Expand cust_dvswitch to view the distributed port groups.
c. Right-click pfcc-vmotion-106 and click Edit Settings....
d. Click Teaming and failover.
e. Move both Uplink1 and Uplink2 to be Active and lag1 to Unused.
f. Change Load Balancing mode to Route based on originating virtual port.
g. Repeat steps 12c through 12f for the remaining port groups on cust_dvswitch.
Prerequisites
NOTE: Before adding a VMware NSX-T service using PowerFlex Manager, either the customer or VMware services must
add the new PowerFlex node to NSX-T Data Center using NSX-T UI.
Consider the following:
● Before adding this service or updating service details in PowerFlex Manager, verify that the NSX-T Data Center is configured
on the PowerFlex hyperconverged or compute-only nodes.
● If the transport nodes (PowerFlex cluster) are configured with NSX-T, you cannot replace the field units using PowerFlex
Manager. You must add the node manually by following either of these procedures, depending on the node type:
○ Performing a PowerFlex hyperconverged node expansion
○ Performing a PowerFlex compute-only node expansion
Steps
1. Log in to PowerFlex Manager.
2. If NSX-T Data Center 3.0 or higher is deployed and is using VDS (not N-VDS), then add the transport network:
Configure Static IP Address Ranges: Select the Configure Static IP Address Ranges check box, and type
the starting and ending IP addresses of the transport network IP pool.
c. Click Next.
d. On the Network Information page, select Full Network Automation, and click Next.
e. On the Cluster Information page, enter the following details:
f. Click Next.
g. On the OS Credentials page, select the OS credentials for each node, and click Next.
h. On the Inventory Summary page, review the summary and click Next.
i. On the Networking Mapping page, verify that the networks are aligned with the correct dvSwitch.
j. On the Summary page, review the summary and click Finish.
4. Verify that PowerFlex Manager recognizes that NSX-T is configured on the nodes:
a. Click Services.
b. Select the hyperconverged or compute-only service.
c. Verify that a banner appears under the Service Details tab, indicating that NSX-T is configured on a node and is
preventing some features from being used. If you do not see this banner, verify that you selected the correct
service and that NSX-T is configured on the hyperconverged or compute-only nodes.
Prerequisites
● Ensure that the required information is captured in the Workbook and stored in VAST.
● Prepare the servers by updating all servers to the correct Intelligent Catalog firmware releases and configuring BIOS
settings.
● Ensure that the iDRAC network is configured.
● Ensure that the Windows operating system ISO is downloaded to the jump host.
NOTE: As of PowerFlex Manager 3.8, the deployment of Windows compute-only nodes is not supported. To manually install
Windows compute-only nodes with an LACP bonding NIC port design without PowerFlex Manager, complete the steps in the
following sections.
Related information
Configuring the network
Steps
1. Connect to the iDRAC, and launch a virtual remote console.
2. Click Menu > Virtual Media > Connect Virtual Media > Map Device > Map CD/DVD.
3. Click Choose File and browse and select the customer provided Windows Server 2016 or 2019 DVD ISO and click Open.
4. Click Map Device.
5. Click Close.
6. Click Boot and select Virtual CD/DVD/ISO. Click Yes.
7. Click Power > Reset System (warm boot) to reboot the server.
The host boots from the attached Windows Server 2016 or 2019 virtual media.
Steps
1. Select the desired values for the Windows Setup page, and click Next.
NOTE: The default values are US-based settings.
a. Download the DELL EMC Server Update Utility, Windows 64 bit Format, v.x.x.x.iso file from the Dell
Technologies Support site.
b. Map the driver CD/DVD/ISO through iDRAC, if the installation requires it.
c. Connect to the server as the administrator.
d. Open and run the mapped disk with elevated permission.
e. Select Install, and click Next.
f. Select I accept the license terms and click Next.
g. Select the check box beside the device drives, and click Next.
h. Click Install, and Finish.
i. Close the window to exit.
Steps
1. Open the iDRAC console and log in to Windows Server 2016 or 2019 using admin credentials.
2. Press Windows+R and enter ncpa.cpl.
3. Select the appropriate management NIC.
4. Perform the following for the Management Network:
a. Select Properties.
b. Click Configure....
c. Click the Advanced tab, and select the VLAN ID option from the Property column.
d. Enter the VLAN ID in the Value column.
e. Click OK and exit.
f. Right-click the appropriate NIC, and click Properties, select Internet Protocol Version 4 (TCP/IPv4) and assign
static IP address of the server.
5. Open the PowerShell console, and perform the following procedures:
Management network (optional, if the IPs are not assigned manually as specified in step 4):
NOTE: Assign the IP address according to the Workbook.
a. To map the VLAN to the interface, type Add-NetLbfoTeamNic -Team "flex-node-mgmt-<105>".
b. To assign the IP address to the interface, type New-NetIPAddress -InterfaceAlias
'flex-node-mgmt-<105>' -IPAddress 'IP' -PrefixLength 'Prefix number'
-DefaultGateway 'Gateway IP'.
Data network:
NOTE: Assign the IP address according to the Workbook.
a. To create the Data1 network, select NIC2 and type New-NetIPAddress -InterfaceAlias
'Interface name' -IPAddress 'IP' -PrefixLength 'prefix'.
b. To create the Data2 network, select Slot4 Port2 and type New-NetIPAddress -InterfaceAlias
'Interface name' -IPAddress 'IP' -PrefixLength 'prefix'.
Where Interface name is the NIC assigned for data1 or data2, and IP is the data1 or data2 IP address.
The prefix is in CIDR notation. For example, if the network mask is 255.255.255.0, then the CIDR notation
(prefix) is 24.
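The netmask-to-prefix conversion described above can be sketched in shell; the mask value is the example from the text:

```shell
# Count the set bits in a dotted-quad netmask to obtain the CIDR prefix length.
mask="255.255.255.0"
prefix=0
IFS=. read -r a b c d <<< "$mask"
for octet in "$a" "$b" "$c" "$d"; do
  while [ "$octet" -gt 0 ]; do
    prefix=$(( prefix + (octet & 1) ))   # add the lowest bit
    octet=$(( octet >> 1 ))              # shift to the next bit
  done
done
echo "$prefix"
```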
6. Applicable for an LACP NIC port bonding design: Modify Team0 settings and create a VLAN:
Edit Team0 settings:
a. Open the Server Manager, and click Local Server > NIC teaming.
b. In the NIC teaming window, click Tasks > New Team.
c. Enter the name Team0 and select the appropriate network adapters.
d. Expand Additional properties, and modify as follows:
● Teaming mode: LACP
● Load balancing mode: Dynamic
● Standby adapter: None (all adapters active)
e. Click OK to save the changes.
f. Select Team0 from the Teams list.
g. From Adapters and Interfaces, click the Team Interfaces tab.
Create a VLAN in Team0:
a. Click Tasks and click Add Interface.
b. In the New team interface dialog box, type the name General Purpose LAN.
c. Assign the VLAN ID (200) to the new interface in the VLAN field, and click OK.
d. From the network management console, right-click the newly created network interface, select
Properties, select Internet Protocol Version 4 (TCP/IPv4), and click Properties.
e. Select the Assign the static IP address check box.
7. Remove the IPs from the data1 and data2 network adapters.
8. Create Team1 and VLAN:
9. Repeat step 8 for data2, data3 (if required), and data4 (if required).
A minimum of two logical data networks are supported. Optionally, you can configure four logical data networks. Verify the
number of logical data networks configured in an existing setup and configure the logical data networks accordingly.
Steps
1. Windows Server 2016 or 2019:
a. Press Windows key+R on your keyboard, type control and click OK.
The All Control Panel Items window opens.
b. Click System and Security > Windows Firewall.
c. Click Turn Windows Defender Firewall on or off.
d. Turn off Windows Firewall for both private and public network settings, and click OK.
2. Windows PowerShell:
a. Click Start, type Windows PowerShell.
b. Right-click Windows PowerShell, click More > Run as Administrator.
c. In the Windows PowerShell console, type Set-NetFirewallProfile -Profile Domain,Public,Private
-Enabled False.
Steps
1. Click Start > Server Manager.
2. In Server Manager, on the Manage menu, click Add Roles and Features.
3. On the Before you begin page, click Next.
4. On the Select installation type page, select Role-based or feature-based installation, and click Next.
5. On the Select destination server page, click Select a server from the server pool, and click Next.
6. On the Select server roles page, select Hyper-V.
An Add Roles and Features Wizard page opens, prompting you to add features to Hyper-V.
7. Click Add Features. On the Features page, click Next.
8. Retain the default selections/locations on the following pages, and click Next:
● Create Virtual Switches
● Virtual Machine Migration
● Default stores
9. On the Confirm installation selections page, verify your selections, and click Restart the destination server
automatically if required, and click Install.
10. Click Yes to confirm automatic restart.
Steps
1. Click Start, type Windows PowerShell.
2. Right-click Windows PowerShell, and select Run as Administrator.
Steps
1. Go to Start > Run.
2. Enter SystemPropertiesRemote.exe and click OK.
3. Select Allow remote connections to this computer.
4. Click Apply > OK.
Steps
1. Download the EMC-ScaleIO-sdc*.msi and LIA software.
2. Double-click EMC-ScaleIO LIA setup.
3. Accept the terms in the license agreement, and click Install.
4. Click Finish.
5. Configure the Windows-based compute-only node depending on the MDM VIP availability:
● If you know the MDM VIPs before installing the SDC component:
a. Type msiexec /i <SDC_PATH>.msi MDM_IP=<LIST_VIP_MDM_IPS>, where <SDC_PATH> is the path where
the SDC installation package is located. The <LIST_VIP_MDM_IPS> is a comma-separated list of the MDM IP
addresses or the virtual IP address of the MDM.
b. Accept the terms in the license agreement, and click Install.
c. Click Finish.
d. Permit the Windows server reboot to load the SDC driver on the server.
● If you do not know the MDM VIPs before installing the SDC component:
a. Click EMC-ScaleIO SDC setup.
b. Accept the terms in the license agreement, and click Install.
c. Click Finish.
d. Type C:\Program Files\EMC\scaleio\sdc\bin>drv_cfg.exe --add_mdm --ip <VIPs_MDMs> to
configure the node in PowerFlex.
● Applicable only if the existing network is an LACP bonding NIC:
a. Add all MDM VIPs by running C:\Program
Files\EMC\scaleio\sdc\bin>drv_cfg.exe --mod_mdm_ip --ip <existing MDM VIP>
--new_mdm_ip <all 4 MDM VIPs>.
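After any of these paths, the SDC's MDM configuration can be confirmed with the query option of the same utility (a hedged sketch; run from the same bin directory):

```shell
# Hypothetical verification: list the MDMs currently configured in the SDC driver
drv_cfg.exe --query_mdms
```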
Steps
1. Log in to the presentation server at https://<presentation serverip>:8443.
2. In the left pane, click SDCs.
3. In the right pane, select the Windows host.
4. Select the Windows host, click Mapping, and then select Map from the drop-down list.
5. Click Apply. Once the mapping is complete, click Dismiss.
6. To open the disk management console, perform the following steps:
a. Press Windows+R.
b. Enter diskmgmt.msc and press Enter.
7. Rescan the disk and set the disks online:
a. Click Action > Rescan Disks.
b. Right-click each Offline disk, and click Online.
8. Right-click each disk and select Initialize disk.
After initialization, the disks appear online.
9. Right-click Unallocated and select New Simple Volume.
10. Select default and click Next.
11. Assign the drive letter.
12. Select default and click Next.
13. Click Finish.
Steps
1. Open the PowerFlex GUI, click Frontend, and select SDC.
2. Windows-based compute-only nodes are listed as SDCs if configured correctly.
3. Click Frontend again, and select Volumes. Right-click the volume, and click Map.
4. Select the Windows-based compute-only nodes, and then click Map.
5. Log in to the Windows Server compute-only node.
6. To open the disk management console, perform the following steps:
a. Press Windows+R.
b. Enter diskmgmt.msc and press Enter.
7. Rescan the disk and set the disks online:
a. Click Action > Rescan Disks.
b. Right-click each Offline disk, and click Online.
8. Right-click each disk and select Initialize disk.
Steps
1. Using the administrator credentials, log in to the target Windows Server 2016.
2. When the main desktop view appears, click Start and type Run.
3. Type slui 3 and press Enter.
4. Enter the customer provided Product key and click Next.
If the key is valid, Windows Server 2016 is successfully activated.
If the key is invalid, verify that the Product key entered is correct and try the procedure again.
NOTE: If the key is still invalid, try activating without an Internet connection.
Steps
1. Using the administrator credentials, log in to the target Windows Server VM (jump server).
2. When the main desktop view appears, click Start and select Command Prompt (Admin) from the option list.
3. At the command prompt, use the slmgr command to change the current product key to the newly entered key.
4. At the command prompt, use the slui command to initiate the phone activation wizard. For example:
C:\Windows\System32> slui 4.
5. From the drop-down menu, select the geographic location that you are calling and click Next.
6. Call the displayed number, and follow the automated prompts.
After the process is completed, the system provides a confirmation ID.
7. Click Enter Confirmation ID and enter the codes that are provided. Click Activate Windows.
Successful activation can be validated using the slmgr command.
Part VIII: Adding a PowerFlex R640/R740xd/R840 node to a PowerFlex Manager service in lifecycle mode
Use the procedures in this section to add a PowerFlex R640/R740xd/R840 node for the PowerFlex Manager services
discovered in lifecycle mode.
Before adding a PowerFlex node in lifecycle mode, you must complete the initial set of expansion procedures that are common
to all expansion scenarios covered in Performing the initial expansion procedures.
After adding a PowerFlex node in lifecycle mode, see Completing the expansion.
Chapter 57: Performing a PowerFlex storage-only node expansion
Perform the manual expansion procedure to add a PowerFlex storage-only node to PowerFlex Manager services that are
discovered in lifecycle mode.
See Cabling the PowerFlex R640/R740xd/R840 nodes for cabling information.
Discover resources
Use this procedure to discover and grant PowerFlex Manager access to resources in the environment. Provide the management
IP address and credential for each discoverable resource.
Prerequisites
Verify that the iDRAC network settings are configured. See Configure iDRAC network settings for more information.
NOTE: For partial network deployments, you do not need to discover the switches, but the switches must be pre-configured.
For sample configurations for Dell PowerSwitch, Cisco Nexus, and Arista switches, see the Dell EMC PowerFlex Appliance
Administration Guide.
The following are the specific details for completing the Discovery wizard steps:
** This is optional. For a new CloudLink Center deployment, the CloudLink Center is discovered automatically.
Prerequisites
● Configure the iDRAC network settings.
● Gather the IP addresses and credentials that are associated with the resources.
NOTE: PowerFlex Manager also allows you to use name-based searches to discover a range of nodes whose iDRACs were
assigned IP addresses through DHCP. For more information about this feature, see the Dell EMC PowerFlex
Manager Online Help.
Steps
1. On the PowerFlex Manager Getting Started page, click Discover Resources.
2. On the Welcome page of the Discovery Wizard, read the instructions and click Next.
3. On the Identify Resources page, click Add Resource Type. From the Resource Type list, select the resource that you
want to discover.
4. Enter the management IP address of the resource in the IP/Hostname Range field. To discover a resource in an IP range,
provide a starting and ending IP address.
5. In the Resource State list, select Managed or Unmanaged.
6. For a PowerFlex node, to discover resources into a selected node pool instead of the global pool (default), select the node
pool from the Discover into Node Pool list.
7. Select the appropriate credential from the Credentials list. See the table above for details.
8. For a PowerFlex node, if you want PowerFlex Manager to automatically reconfigure the iDRAC IP addresses of the nodes it
discovers, select the Reconfigure discovered nodes with new management IP and credentials check box. This option
is not selected by default because it is faster to discover the nodes if you bypass the reconfiguration.
NOTE: iDRAC can also be discovered using the hostname.
NOTE: For the Resource Type Node, you can use a range with hostname or IP address, provided the hostname has a
valid DNS entry.
9. For a PowerFlex node, select the Auto configure nodes to send alerts to PowerFlex Manager check box to have
PowerFlex Manager automatically configure nodes to send alerts to PowerFlex Manager.
10. Click Next to start discovery. On the Discovered Resources page, select the resources from which you want to collect
inventory data and click Finish. The discovered resources are listed on the Resources page.
Prerequisites
● Verify that the customer Red Hat Enterprise Linux or the embedded operating system ISO is available and is located in the
Intelligent Catalog code directory.
● Ensure that the following are installed for specific operating systems:
Steps
1. From the iDRAC web interface, launch the virtual console.
2. Click Connect Virtual Media.
3. Under Map CD/DVD, click Browse for the appropriate ISO.
4. Click Map Device > Close.
5. From the Boot menu, select Virtual CD/DVD/ISO, and click Yes.
6. From the Power menu, select Reset System (warm boot) or Power On System if the machine is off.
7. Set the boot option to UEFI.
a. Press F2 to enter system setup.
b. Under System BIOS > Boot setting, select UEFI as the boot mode.
NOTE: Ensure that the BOSS card is set as the primary boot device from the boot sequence settings. If the BOSS
card is not the primary boot device, reboot the server and change the UEFI boot sequence from System BIOS >
Boot settings > UEFI BOOT settings.
c. Click Back > Back > Finish > Yes > Finish > OK > Yes.
8. Select Install Red Hat Enterprise Linux/CentOS 7.x from the menu.
NOTE: Wait until all configuration checks pass and the screen for language selection is displayed.
Prerequisites
While performing this procedure, if you see VLANs other than the following listed for services discovered in lifecycle mode,
assign and match the VLAN details with the existing setup. The VLAN names are examples only and may not match the
configured system.
See Cabling the PowerFlex nodes for information on cabling the PowerFlex nodes.
Steps
1. Log in as root from the virtual console.
2. Type nmtui to set up the networking.
3. See Cabling the PowerFlex nodes for information on cabling the PowerFlex R640 and R740xd nodes with SSD or with NVMe
drives.
4. Perform the following to set up flex-data1-<vlanid> at NetworkManager TUI for the non-bonded NIC:
NOTE: Skip this step in case of static bonding NIC or LACP bonding NIC port design.
a. Click Edit a connection, press Tab for OK, and press Enter.
b. See Cabling the PowerFlex nodes for cabling information on the PowerFlex R640 and R740xd nodes with or without
NVMe drives.
c. Select Ethernet, and press Tab for Show.
d. Press the Enter key, and enter 9000 for MTU.
e. Set the IPv4 Configuration to Manual, and click Show to open the configuration.
f. Press Tab to Add, and set the IP address for flex-data1-<vlanid> using CIDR notation.
For example, if the IP address is 192.168.160.155 and the network mask is 255.255.248.0, the line should read
192.168.160.155/21.
g. Set Never use this network for default route by using the Space Bar.
h. Set IPv6 Configuration to Ignore.
i. Set Automatically Connect.
j. Set Available to all users.
k. Select OK.
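The netmask-to-prefix conversion used for the CIDR notation in this step can be sketched as a small shell function that counts the set bits in each octet (illustrative only, not part of the procedure):

```shell
# Convert a dotted-decimal netmask to a CIDR prefix length by
# counting the set bits in each octet.
mask2prefix() {
  local IFS=.
  local octet bits=0
  for octet in $1; do
    while [ "$octet" -gt 0 ]; do
      bits=$((bits + octet % 2))
      octet=$((octet / 2))
    done
  done
  echo "$bits"
}

mask2prefix 255.255.248.0   # prints 21, so the address reads 192.168.160.155/21
```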
6. To set up bond0 and bond1 for the non-bonded NIC, static bonding NIC, and LACP bonding NIC port design:
a. Select Add and choose Bond.
b. Set Profile name and Device to bond0.
c. Set Mode to 802.3ad.
d. Set IPv4 Configuration to Disabled.
e. Set IPv6 Configuration to Ignore.
f. Set Automatically Connect.
g. Set Available to all users.
h. Click OK.
i. Repeat these steps to set up bond1, changing the profile name and device to bond1. Skip bond1 creation for a
non-bonded NIC.
7. To set up VLANs on bond0 and bond1:
For example:
● flex-stor-mgmt-<vlanid>(bond0.VLAN) (non-bonded, static bonding, and LACP bonding NIC port design)
● flex-data1-<vlanid>(bond0.VLAN) (static bonding NIC and LACP bonding NIC)
a. vi ifcfg-em1
b. Change BOOTPROTO to none.
c. Delete the lines from DEFROUTE to IPV6_ADDR_GEN_MODE.
d. Change ONBOOT to yes.
e. Add the lines MASTER=bond0 and SLAVE=yes.
f. Save the file.
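After substeps a through e, the slave interface file reduces to a configuration like the following sketch (lines such as TYPE and HWADDR are kept from the original file and vary per server; NM_CONTROLLED=no mirrors the later bond examples in this guide):

```
TYPE=Ethernet
DEVICE=em1
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes
NM_CONTROLLED=no
```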
11. Edit the network configuration file using the vi command.
NOTE: The following example is for PowerFlex R640 nodes with SSD. Use the same steps for other PowerFlex
storage-only nodes. See Cabling the PowerFlex nodes for cabling information, and edit the networking based on the
respective node type.
a. vi ifcfg-p2p1
b. Change BOOTPROTO to none.
c. Delete the lines from DEFROUTE to IPV6_ADDR_GEN_MODE.
d. Change ONBOOT to yes.
e. Add the lines MASTER=bond0 and SLAVE=yes.
f. Save the file.
12. Edit the network configuration file using the vi command.
NOTE: The following example is for PowerFlex R640 nodes with SSD. Use the same steps for other PowerFlex
storage-only nodes. See Cabling the PowerFlex nodes for cabling information and edit the networking based on the
respective node type.
a. vi ifcfg-em2
b. Change BOOTPROTO to none.
c. Delete the lines from DEFROUTE to IPV6_ADDR_GEN_MODE.
d. Change ONBOOT to yes.
e. Add the lines MASTER=bond1 and SLAVE=yes.
f. Save the file.
13. Edit the network configuration file using the vi command.
NOTE: The following example is for PowerFlex R640 nodes with SSD. Use the same steps for other PowerFlex
storage-only nodes. See Cabling the PowerFlex nodes and edit the networking based on the respective node type.
a. vi ifcfg-p2p2
b. Change BOOTPROTO to none.
c. Delete the lines from DEFROUTE to IPV6_ADDR_GEN_MODE.
d. Change ONBOOT to yes.
e. Add the lines MASTER=bond1 and SLAVE=yes.
f. Save the file.
14. Type systemctl disable NetworkManager to disable the NetworkManager.
15. Type systemctl status firewalld to check if firewalld is enabled.
16. Type systemctl enable firewalld to enable firewalld. To enable firewalld on all the SDS components, see the
Enabling firewall service on PowerFlex storage-only nodes and SVMs KB article.
17. Type systemctl restart network to restart the network.
18. To check the settings, type ip addr show.
19. Type cat /proc/net/bonding/bond0 | grep -B 2 -A 2 MII to confirm that the port channel is active.
20. To check the MTU, type grep MTU /etc/sysconfig/network-scripts/ifcfg*
21. Verify that the MTU settings for flex-data1-<vlanid>, flex-data2-<vlanid>, flex-data3-<vlanid> (if required), and
flex-data4-<vlanid> (if required) are set to 9000.
For flex-rep1-<vlanid> and flex-rep2-<vlanid>, set the MTU to 1500.
22. To check connectivity, ping the default gateway and the MDM Data IP address.
23. Create a static route for the replication VLANs, used only to enable replication between the primary and remote sites:
a. Log in to the node using SSH.
b. Run cd /etc/sysconfig/network-scripts/.
c. Create the file using the vi command: route-bond1.<vlanid for rep1>.
NOTE: For example, 192.168.161.0/24 via 192.168.163.1 dev bond1.163.
d. Create the file using the vi command: route-bond1.<vlanid for rep2>.
NOTE: For example, 192.168.162.0/24 via 192.168.164.1 dev bond1.164.
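In RHEL route-<interface> files, each static route is a single line in <destination>/<prefix> via <gateway> dev <device> form. Instantiated with the example values from the notes above, the rep1 file would look like this (the VLAN ID 163 is an example only):

```
# /etc/sysconfig/network-scripts/route-bond1.163
192.168.161.0/24 via 192.168.163.1 dev bond1.163
```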
Related information
Cabling the PowerFlex R640/R740xd/R840 nodes
Install the nvme-cli tool and iDRAC Service Module (iSM)
Use this procedure to install dependency packages for Red Hat Enterprise Linux or embedded operating system.
NOTE: If PowerFlex Manager is installed, do not use this procedure to install the dependency packages. See Related
information for information on expanding a PowerFlex node using PowerFlex Manager.
Steps
1. Copy the Red Hat Enterprise Linux or embedded operating system 7.x image to the /tmp folder of the PowerFlex storage-
only node using SCP or WINSCP.
2. Use PuTTY to log in to the PowerFlex storage-only node.
3. Run # cat /etc/*-release to identify the installed operating system.
4. Type # mount -o loop /tmp/<os.iso> /mnt to mount the iso image at the /mnt mount point.
5. Change directory to /etc/yum.repos.d
6. Type # touch <os.repo> to create a repository file.
7. Edit the file using a vi command and add the following lines:
[repository]
name=os.repo
baseurl=file:///mnt
enabled=1
gpgcheck=0
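Steps 6 and 7 can be scripted as follows (a sketch; run in a scratch directory when testing — on the node, the file belongs in /etc/yum.repos.d and the ISO must already be mounted on /mnt):

```shell
# Sketch of steps 6-7: create the yum repository file that points
# at the ISO mounted on /mnt.
cd "$(mktemp -d)"
cat > os.repo <<'EOF'
[repository]
name=os.repo
baseurl=file:///mnt
enabled=1
gpgcheck=0
EOF
grep baseurl os.repo   # prints baseurl=file:///mnt
```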
8. Type # yum repolist to test that you can use yum to access the repository.
9. Install the dependency packages per the installed operating system. To install dependency packages, enter:
# gunzip OM-iSM-Dell-Web-LX-340-1471_A00.tar.gz
# tar -xvf OM-iSM-Dell-Web-LX-340-1471_A00.tar
NOTE: If dcismeng.service is not running, type systemctl start dcismeng.service to start the
service.
i. Type # ip a | grep idrac to verify that the link-local IP address (169.254.0.2) is automatically configured on the
idrac interface of the PowerFlex storage-only node after a successful installation of iSM.
j. Type # ping 169.254.0.1 to verify that the PowerFlex storage-only node operating system can communicate with
iDRAC (the default link-local IP address for iDRAC is 169.254.0.1).
11. Type # yum install nvme-cli to install the nvme-cli package.
12. Type # nvme list to ensure that the disk firmware version matches the Intelligent Catalog values.
If the disk firmware version does not match the Intelligent Catalog values, see Related information for information on
upgrading the firmware.
Related information
Adding a PowerFlex R640/R740xd/R840 node to a PowerFlex Manager service in managed mode
Upgrade the disk firmware for NVMe drives
Steps
1. Go to Dell EMC Support, and download the Dell Express Flash NVMe PCIe SSD firmware as per Intelligent Catalog.
2. Log in to the PowerFlex storage-only node.
3. Create a folder in the /tmp directory named diskfw.
4. Use WinSCP to copy the downloaded backplane package to the /tmp/diskfw folder.
5. Change directory to /tmp/diskfw/.
6. Change the access permissions of the file using the following command:
NOTE: Package name may differ from the following example depending on the Intelligent Catalog version.
chmod +x Express-Flash-PCIe-SSD_Firmware_R37D0_LN64_1.1.1_A02_01.BIN
Related information
Install the nvme-cli tool and iDRAC Service Module (iSM)
Prerequisites
A minimum of two logical data networks are supported. Optionally, you can configure four logical data networks.
NOTE: This procedure is applicable only for an LACP bonding NIC port design.
Steps
1. Log in to the PowerFlex GUI presentation server and place the PowerFlex node into instant maintenance mode. Migrate one
PowerFlex storage-only node at a time.
a. Log in to the node using PuTTY.
b. Go to /etc/sysconfig/network-scripts/.
c. Open ifcfg-em2 in vi, delete the IP address and netmask, and save the file.
d. Open ifcfg-p2p2 in vi, delete the IP address and netmask, and save the file. Interface names may vary depending on the server.
e. Run vi ifcfg-bond1 to create bond1, insert the following lines, and save the file:
BOOTPROTO=none
ONBOOT=yes
HOTPLUG=yes
TYPE=Bond
DEVICE=bond1
MTU=9000
BONDING_OPTS="miimon=100 mode=802.3ad xmit_hash_policy=layer2+3 lacp_rate=fast"
PEERDNS=no
NM_CONTROLLED=no
DEVICE=em2
HWADDR=24:8A:07:5B:17:68
MASTER=bond1
SLAVE=yes
ONBOOT=yes
HOTPLUG=yes
TYPE=Ethernet
MTU=9000
NM_CONTROLLED=no
BOOTPROTO=none
ONBOOT=yes
HOTPLUG=yes
TYPE=Bond
VLAN=yes
DEVICE=bond0.151
IPADDR=192.168.151.50
NETMASK=255.255.255.0
MTU=9000
BONDING_OPTS="miimon=100 mode=802.3ad xmit_hash_policy=layer2+3 lacp_rate=fast"
PEERDNS=no
NM_CONTROLLED=no
j. Repeat step i to create ifcfg-bond0.153, insert the respective lines, and save the file.
k. Type vi ifcfg-bond1.152, insert the following lines, and save the file:
BOOTPROTO=none
ONBOOT=yes
HOTPLUG=yes
TYPE=Bond
VLAN=yes
DEVICE=bond1.152
IPADDR=192.168.152.50
NETMASK=255.255.255.0
MTU=9000
BONDING_OPTS="miimon=100 mode=802.3ad xmit_hash_policy=layer2+3 lacp_rate=fast"
PEERDNS=no
NM_CONTROLLED=no
l. Repeat step k to create ifcfg-bond1.154, insert the respective lines, and save the file.
2. Type systemctl restart network to restart network services.
3. Type ip addr and check that the newly added bonds appear in the output.
4. Remove the node from the instant maintenance mode.
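Beyond ip addr, the LACP state can be confirmed by parsing /proc/net/bonding/bond1 and checking that no slave link is down. The sketch below runs the check against captured sample output (the sample text is an assumption; on the node, pipe cat /proc/net/bonding/bond1 into the same grep):

```shell
# Count slave links reporting "MII Status: down" in bonding status
# output. The sample stands in for /proc/net/bonding/bond1.
bond_status='Bonding Mode: IEEE 802.3ad Dynamic link aggregation
MII Status: up
Slave Interface: em2
MII Status: up
Slave Interface: p2p2
MII Status: up'
# grep -c exits nonzero when the count is 0, hence the || true.
down=$(printf '%s\n' "$bond_status" | grep -c 'MII Status: down' || true)
echo "slave links down: $down"   # prints slave links down: 0
```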
Related information
Cabling the PowerFlex R640/R740xd/R840 nodes
Prerequisites
A minimum of two logical data networks are supported. Optionally, you can configure four logical data networks.
NOTE: This procedure is applicable only for an LACP bonding NIC port design.
Steps
1. Log in to the PowerFlex GUI presentation server, and place the PowerFlex node into instant maintenance mode. Migrate one
PowerFlex storage-only node at a time.
a. Log in to the node using PuTTY.
b. Go to /etc/sysconfig/network-scripts/.
c. Type vi ifcfg-bond0.153 and insert the following lines:
BOOTPROTO=none
ONBOOT=yes
HOTPLUG=yes
TYPE=Bond
VLAN=yes
DEVICE=bond0.153
IPADDR=192.168.153.190
NETMASK=255.255.255.0
MTU=9000
BONDING_OPTS="miimon=100 mode=802.3ad xmit_hash_policy=layer2+3 lacp_rate=fast"
PEERDNS=no
NM_CONTROLLED=no
BOOTPROTO=none
ONBOOT=yes
HOTPLUG=yes
TYPE=Bond
VLAN=yes
DEVICE=bond1.154
IPADDR=192.168.154.190
NETMASK=255.255.255.0
MTU=9000
BONDING_OPTS="miimon=100 mode=802.3ad xmit_hash_policy=layer2+3 lacp_rate=fast"
PEERDNS=no
NM_CONTROLLED=no
Steps
1. From the management jump server VM, extract all required Red Hat files from the
VxFlex_OS_3.x.x_xxx_Complete_Software/VxFlex_OS_3.x.x_xxx_RHEL_OEL7 package to the Red Hat
node root folder.
2. Use WinSCP to copy the following Red Hat files from the jump host folder to the /tmp folder on the Red Hat Enterprise
Linux node:
● EMC-ScaleIO-sds-3.x-x.xxx.el7.x86_64.rpm
● EMC-ScaleIO-sdr-3.x-x.xxx.el7.x86_64.rpm
● EMC-ScaleIO-mdm-3.x-x.xxx.el7.x86_64.rpm
● EMC-ScaleIO-lia-3.x-x.xxx.el7.x86_64.rpm
From the appropriate Intelligent Catalog folder, copy the PERC CLI perccli-7.x-xxx.xxxx.rpm rpm package.
NOTE: Verify that the PowerFlex version you install is the same as the version on other Red Hat Enterprise Linux
servers.
3. Use PuTTY and connect to the PowerFlex management IP address of the new node.
4. Go to /tmp, and install the LIA software (use the admin password for the token value).
5. Type # rpm -ivh /tmp/EMC-ScaleIO-sds-3.x-x.xxx.el7.x86_64.rpm to install the storage data server (SDS)
software.
6. To enable replication, type # rpm -ivh /tmp/EMC-ScaleIO-sdr-3.x-x.xxx.el7.x86_64.rpm to install the storage
data replication (SDR) software.
7. Type rpm -ivh /tmp/perccli-7.x-xxx.xxxx.rpm to install the PERC CLI.
8. Reboot the PowerFlex storage-only node by typing reboot.
Prerequisites
Confirm that the PowerFlex system is functional and no rebuild or rebalances are running. For PowerFlex 3.5 or later, use the
PowerFlex GUI presentation server to add a PowerFlex storage-only node to PowerFlex.
Steps
1. If you are using PowerFlex GUI presentation server:
a. Log in to the PowerFlex GUI presentation server.
b. Click Configuration > SDSs.
c. Click Add.
d. Enter the SDS Name.
e. Select the Protection Domain and SDS Port.
f. Enter the IP address for data1, data2, data3 (if required), and data4 (if required).
g. Select SDS and SDC as the appropriate communication roles for all the IP addresses that are added.
h. Click SDS.
2. If you are using a PowerFlex version prior to 3.5:
If adding PowerFlex storage-only nodes with NVDIMMs to a new protection domain, see Create an NVDIMM protection
domain. Dell recommends that a minimum of six PowerFlex storage-only nodes be in a protection domain.
Prerequisites
Skip this procedure if NVDIMM is not available in the PowerFlex nodes.
Steps
1. Log in to the jump server.
2. SSH to primary MDM.
3. Log in with administrator credentials.
scli --login --username admin --password 'admin_password'
4. Type scli --version to verify the PowerFlex version.
Sample output:
DellEMC ScaleIO Version: R3_x.x.xxx
Steps
1. Log in to the jump server.
2. SSH to the PowerFlex storage-only node.
3. Enter the following command to verify the operating system version:
cat /etc/*-release
Steps
1. Log in to the jump server.
2. SSH to the PowerFlex storage-only node.
3. Type yum list installed ndctl ndctl-libs daxctl-libs libpmem libpmemblk
Sample output:
4. If the RPMs are not installed, type yum install -y <rpm> to install the RPMs.
Steps
1. Log in to the jump server.
Steps
1. SSH to the PowerFlex storage-only node.
2. For each NVDIMM, type (starting with namespace0.0):
ndctl create-namespace -f -e namespace[x].0 --mode=devdax --align=4k --no-autolabel
{"dev":"namespace0.0","mode":"devdax","map":"dev","size":"15.75 GiB
(16.91 GB)","uuid":"348d510e-dc70-4855-a6ca-6379046896d5","raw_uuid":
"4ca5cda2-ebd4-4894-aa4e-0cfc823745e2","daxregion":{"id":0,"size":"15.75 GiB (16.91
GB)","align":4096,"devices":[{"chardev":"dax0.0","size":"15.75 GiB (16.91 GB)"}]},
"numa_node":0}
Steps
1. Log in to the PowerFlex GUI presentation server as an administrative user.
2. Click Configuration > Acceleration Pool.
3. Note the acceleration pool name. The name is required while creating a compression storage pool.
Steps
1. Log in to the PowerFlex GUI as an administrative user.
2. Select Backend > Storage.
3. Filter By Storage Pools.
4. Expand the SDSs in the protection domains. Under the Acceleration Type column, identify the protection domain with Fine
Granularity Layout. This is a protection domain that has been configured with NVDIMM accelerated devices.
5. The acceleration pool name (in this example, AP1) is listed under the column Accelerated On. This is needed when creating
a compression storage pool.
Steps
1. If using PowerFlex GUI presentation server:
a. Log in to the PowerFlex GUI presentation server.
b. Click Dashboard > Configuration > Protection Domains, and ADD.
c. In the Add Protection Domain window, enter the name of the protection domain.
d. Click ADD PROTECTION DOMAIN.
2. If using a PowerFlex version prior to 3.5:
a. Log in to the PowerFlex GUI.
b. Click Backend > Storage.
c. Right-click PowerFlex System, and click + Add Protection Domain.
d. Enter the protection domain name, and click OK.
Steps
1. If using PowerFlex GUI presentation server:
a. Log in to the PowerFlex GUI presentation server.
b. Click Dashboard > Configuration > Acceleration Pools, and ADD.
c. Enter the acceleration pool in the Name field of the Add Acceleration pool window.
d. Select NVDIMM as the Pool Type, and select Protection Domain from the drop-down list.
e. In the Add Devices section, select the Add Devices to All SDSs check box only if the devices need to be added
on all SDSs. Otherwise, leave it unchecked.
g. In the Path and Device Name fields, enter the device path and device name respectively. Select the appropriate SDS
from the drop-down menu. Click Add Devices.
Steps
1. If using PowerFlex GUI presentation server:
a. Log in to the PowerFlex GUI presentation server.
b. Click Dashboard > Configuration > Storage Pools, and Add.
c. Enter the storage pool name in the Name field of the Add Storage pool window.
d. Select Protection Domain from the drop-down list.
e. Select SSD as the Media Type from the drop-down, and select FINE for Data Layout Granularity.
f. Select the Acceleration Pool from the drop-down menu, and click Add Storage Pool.
2. If using a PowerFlex version prior to 3.5:
a. Log in to the PowerFlex GUI.
b. Select Backend > Storage.
c. Right-click Protection Domain, and click + Add > Add Storage Pool.
d. Add the new storage pool details:
● Name: Provide name
● Media Type: SSD
● Data Layout: Fine Granularity
● Acceleration Pool: Acceleration pool that was created previously
● Fine Granularity: Enable Compression
e. Click OK > Close.
Steps
1. Log in to the primary MDM using SSH.
2. For each SDS with NVDIMM, type the following to add NVDIMM devices to the acceleration pool:
ndctl create-namespace -f -e namespace[x].0 --mode=devdax --align=4k --no-autolabel
scli --add_sds_device --sds_name <SDS_NAME> --device_path /dev/dax0.0 --acceleration_pool_name <ACCP_NAME> --force_device_takeover
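For several NVDIMMs, the command pair repeats with the namespace and dax indexes incremented. The sketch below only builds and prints the commands for two devices, so it can be reviewed before running; the SDS name sds-1 and acceleration pool AP1 are placeholder assumptions:

```shell
# Sketch: generate the per-NVDIMM command pairs from this step.
# sds-1 and AP1 are placeholders, not values from the procedure.
cmds=$(for x in 0 1; do
  echo "ndctl create-namespace -f -e namespace${x}.0 --mode=devdax --align=4k --no-autolabel"
  echo "scli --add_sds_device --sds_name sds-1 --device_path /dev/dax${x}.0 --acceleration_pool_name AP1 --force_device_takeover"
done)
printf '%s\n' "$cmds"
```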
Steps
1. SSH to the PowerFlex storage-only node.
2. Type lsblk to get the disk devices.
Sample output:
e. Click Advanced.
● Select Force Device Takeover.
Steps
1. If using PowerFlex GUI presentation server:
a. Log in to the PowerFlex GUI presentation server.
b. Click Dashboard > Configuration > Volumes, and ADD.
c. In the ADD Volume window, enter the name in the Volume name field.
d. Select THIN or THICK as the Provisioning option.
e. Enter the size in the Size field. Select the Storage Pool from the drop-down menu.
f. Click Add Volume.
2. If using a PowerFlex version prior to 3.5:
a. Log in to the PowerFlex GUI.
b. Click Frontend > Volumes.
c. Right-click Storage Pool, and click Add Volume.
d. Add the volume details:
● Name: Volume name
● Size: Required volume size
● Enable compression
e. Click OK > Close.
f. Right-click the volume, and select Map.
g. Map to all hosts.
h. Click OK.
Steps
1. If using PowerFlex GUI presentation server to enable zero padding on a storage pool:
a. Connect to the PowerFlex GUI presentation server through https://<ipaddress>:8443 using the MDM credentials.
b. Click Storage Pools from the left pane, and select the storage pool.
c. Select Settings from the drop-down menu.
d. Click Modify > General Settings from the drop-down menu.
e. Click Enable Zero Padding Policy > Apply.
NOTE: After the first device is added to a specific pool, you cannot modify the zero padding policy. A fine granularity (FG)
pool is always zero padded. By default, zero padding is disabled only for a medium granularity (MG) pool.
2. If using a PowerFlex version prior to 3.5 to enable zero padding on a storage pool:
a. Select Backend > Storage, and right-click Select By Storage Pools from the drop-down menu.
b. Right-click the storage pool, and click Modify zero padding policy.
c. Select Enable Zero Padding Policy, and click OK > Close.
NOTE: Zero padding cannot be enabled when devices are available in the storage pools.
3. If CloudLink is enabled, see one of the following procedures depending on the devices:
● Encrypt PowerFlex hyperconverged (SVM) or storage-only devices (SED drives)
● Encrypt PowerFlex hyperconverged (SVM) or storage-only devices (non-SED drives)
When adding NVMe drives, keep a separate storage pool for the PowerFlex storage-only node.
e. Repeat these steps on all the SDSs where you want to add the devices.
f. Ensure all the rebuild and balance activities are successfully completed.
g. Verify the space capacity after adding the new node.
Steps
1. To manually configure the PowerFlex node for external SDC reachability:
a. Log in to the PowerFlex node using ssh <ip address>.
b. Configure one static route per interface for each external network.
echo "<destination subnet> via <gateway> dev <SIO Interface>" > route-<SIO Interface>
2. To manually configure the SDS for PowerFlex node reachability:
a. Log in to the PowerFlex node using ssh <ip address>.
b. Configure one static route per interface for each PowerFlex data network.
esxcli network ip route ipv4 add -g <gateway> -n <destination subnet in CIDR>
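Step 1b instantiated with placeholder values looks like the following (the subnet, gateway, and interface are examples, not values from the Workbook; on the node, the file belongs in /etc/sysconfig/network-scripts):

```shell
# Sketch of step 1b: write one static route per interface for an
# external SDC network. Run here in a scratch directory.
cd "$(mktemp -d)"
echo "192.168.100.0/24 via 192.168.152.1 dev bond1.152" > route-bond1.152
cat route-bond1.152
```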
Prerequisites
Replication is supported on PowerFlex storage-only nodes with dual CPU. The node should be migrated to an LACP bonding NIC
port design.
Steps
1. Connect to the PowerFlex GUI presentation server through https://<ipaddress>:8443 using the MDM credentials.
2. Click the Protection tab in the left pane.
NOTE: In the PowerFlex GUI version 3.5 or prior, this tab is Replication.
3. Click SDR > Add, and enter the storage data replication name.
4. Choose the protection domain.
5. Enter the IP address to be used and click Add IP. Repeat this for each IP address and click Add SDR.
NOTE: While adding storage data replication, Dell recommends adding IP addresses for flex-data1-<vlanid>, flex-data2-
<vlanid>, flex-data3-<vlanid> (if required), and flex-data4-<vlanid> (if required), along with flex-rep1-<vlanid> and
flex-rep2-<vlanid>. Choose the Application and Storage roles for all data IP addresses, and choose the External role
for the replication IP addresses.
6. Repeat steps 3 through 5 for each storage data replicator you are adding. If you are expanding a replication-enabled
PowerFlex cluster, skip steps 7 through 11.
7. Click Protection > Journal Capacity > Add, and provide the capacity percentage as 10%, which is the default. You can
customize it if needed.
8. Extract and add the MDM certificate:
NOTE: You can perform steps 8 through 13 only when the Secondary Site is up and running.
a. Log in to the primary MDM, by using the SSH on source and destination.
b. Type scli --login --username admin. Provide the MDM cluster password when prompted.
c. See the following example and run the command to extract the certificate on source and destination primary MDM.
Example for source: scli --extract_root_ca --certificate_file /tmp/Source.crt
Example for destination: scli --extract_root_ca --certificate_file /tmp/destination.crt
d. Copy the extracted certificate of the source (primary MDM) to the destination (primary MDM) using SCP, and conversely.
e. See the following example to add the copied certificate:
Example for source: scli --add_trusted_ca --certificate_file /tmp/destination.crt --comment
destination_crt
Example for destination: scli --add_trusted_ca --certificate_file /tmp/source.crt --comment
source_crt
f. Type scli --list_trusted_ca to verify the added certificate.
9. Create the remote consistency group (RCG).
Connect to the PowerFlex GUI presentation server through https://<ipaddress>:8443.
NOTE: Use the primary MDM IP and credentials to log in to the PowerFlex cluster.
10. Click the Protection tab from the left pane. If you are using a PowerFlex version 3.5 or prior, click the Replication tab.
11. Choose RCG (Remote Consistency Group), and click ADD.
12. On the General tab:
a. Enter the RCG name and RPO.
b. Select the Source Protection Domain from the drop-down list.
c. Select the target system and Target protection domain from the drop-down list, and click Next.
d. Under the Pair tab, select the source and destination volumes.
NOTE: The source and destination volumes must be identical in size and provisioning type. Do not map the volume
on the destination site of a volume pair. Retain the read-only permission. Do not create a pair containing a
destination volume that is mapped to the SDCs with a read_write permission.
e. Click Add pair, select the added pair to be replicated, and click Next.
f. In the Review Pairs tab, select the added pair, select Add RCG, and start replication according to the requirement.
Steps
1. Update the inventory for vCenter (vCSA), switches, gateway VM, and nodes:
a. Click Resources on the home screen.
b. Select the vCenter, switches (applicable only for full networking), gateway VM, and newly added nodes.
c. Click Run Inventory.
d. Click Close.
e. Wait for the job in progress to complete.
Performing a PowerFlex hyperconverged node expansion
Perform the manual expansion procedure to add a PowerFlex hyperconverged node to PowerFlex Manager services that are
discovered in a lifecycle mode.
See Cabling the PowerFlex R640/R740xd/R840 nodes for cabling information.
Type systemctl status firewalld to verify if firewalld is enabled. If disabled, see the Enabling firewall service on
PowerFlex storage-only nodes and SVMs KB article to enable firewalld on all SDS components.
Discover resources
Use this procedure to discover and allow PowerFlex Manager access to resources in the environment. Provide the management
IP address and credential for each discoverable resource.
Prerequisites
Verify that the iDRAC network settings are configured. See Configure iDRAC network settings for more information.
NOTE: For partial network deployments, you do not need to discover the switches. The switches need to be
pre-configured. For sample configurations for Dell PowerSwitch, Cisco Nexus, and Arista switches, see the Dell EMC PowerFlex
Appliance Administration Guide.
The following are the specific details for completing the Discovery wizard steps:
** This is optional. For a new CloudLink Center deployment, the CloudLink Center is discovered automatically.
Prerequisites
● Configure the iDRAC network settings.
● Gather the IP addresses and credentials that are associated with the resources.
NOTE: PowerFlex Manager also allows you to use name-based searches to discover a range of nodes whose iDRAC
IP addresses were assigned through DHCP. For more information about this feature, see Dell EMC PowerFlex
Manager Online Help.
Steps
1. On the PowerFlex Manager Getting Started page, click Discover Resources.
2. On the Welcome page of the Discovery Wizard, read the instructions and click Next.
3. On the Identify Resources page, click Add Resource Type. From the Resource Type list, select the resource that you
want to discover.
4. Enter the management IP address of the resource in the IP/Hostname Range field. To discover a resource in an IP range,
provide a starting and ending IP address.
5. In the Resource State list, select Managed or Unmanaged.
6. For a PowerFlex node, to discover resources into a selected node pool instead of the global pool (default), select the node
pool from the Discover into Node Pool list.
7. Select the appropriate credential from the Credentials list. See the table above for details.
8. For a PowerFlex node, if you want PowerFlex Manager to automatically reconfigure the iDRAC IP addresses of the nodes it
discovers, select the Reconfigure discovered nodes with new management IP and credentials check box. This option
is not selected by default, because it is faster to discover the nodes if you bypass the reconfiguration.
NOTE: iDRAC can also be discovered using the hostname.
NOTE: For the Resource Type, you can use a range with hostname or IP address, provided the hostname has a valid
DNS entry.
9. For a PowerFlex node, select the Auto configure nodes to send alerts to PowerFlex Manager check box to have
the nodes automatically configured to send alerts to PowerFlex Manager.
10. Click Next to start discovery. On the Discovered Resources page, select the resources from which you want to collect
inventory data and click Finish. The discovered resources are listed on the Resources page.
Related information
Configure iDRAC network settings
Steps
1. If you are using PowerFlex presentation server:
a. Log in to the PowerFlex presentation server.
b. Click Settings.
c. Copy the license and click Update License.
2. If you are using a version prior to PowerFlex 3.5:
a. Log in to the PowerFlex GUI and click Preferences > About. Note the current capacity available with the associated
PowerFlex license.
b. If the available capacity is sufficient for the planned expansion, proceed with the expansion process.
c. If the planned expansion exceeds the available capacity, engage the customer account team to obtain an updated
license with additional capacity. Once the updated license is available, click Preferences > System Settings >
License > Update License. Verify that the updated capacity is available by selecting Preferences > About.
Prerequisites
Use the SED-based license for self-encrypting (SED) drives, and the capacity license for non-SED drives.
Steps
1. Log in to the CloudLink Center web console.
2. Click System > License.
3. Check the limit and verify that there is enough capacity for the expansion.
Prerequisites
Verify that the customer VMware ESXi ISO is available and located in the Intelligent Catalog code directory.
Steps
1. Log in to the iDRAC:
a. Connect to the iDRAC interface, and launch a virtual remote console on the Dashboard.
b. Select Connect Virtual Media.
c. Under Map CD/DVD, click Choose File > Browse, browse to the Intelligent Catalog folder where the ISO file is saved,
select the file, and click Open.
d. Click Map Device.
e. Click Boot > Virtual CD/DVD/ISO.
f. Click Power > Reset System (warm boot).
NOTE: If the system is powered off, map the ISO image, set Next Boot to Virtual CD/DVD/ISO, and power on
the server. It boots with the ISO image. A reset is not required.
h. Under System BIOS > Boot setting, select UEFI as the boot mode.
NOTE: Ensure that the BOSS card is set as the primary boot device from the boot sequence settings. If the BOSS
card is not the primary boot device, reboot the server and change the UEFI boot sequence from System BIOS >
Boot settings > UEFI BOOT settings.
i. Click Back > Finish > Yes > Finish > OK > Finish > Yes.
2. Install VMware ESXi:
a. On the VMware ESXi installer screen, press Enter to continue.
b. Press F11 to accept the license agreement.
c. Under Local, select DELLBOSS VD as the install location, and press Enter if prompted to do so.
d. Select US Default as the keyboard layout.
e. When prompted, type the root password and press Enter.
f. At the Confirm Install screen, press F11.
g. When the installation is complete, remove the installation media before rebooting.
h. Press Enter to reboot the node.
NOTE: Set the first boot device to be the drive on which you installed VMware ESXi in Step 3.
f. See the ESXi Management VLAN ID field in the Workbook for the required VLAN value.
g. Choose Set static IPv4 address and network configuration. Set IPv4 ADDRESS, SUBNET MASK, and DEFAULT
GATEWAY configuration to the values defined in the Workbook.
h. Choose Use the following DNS server addresses and hostname. Go to DNS Configuration. See the Workbook for
the required DNS value.
i. Go to Custom DNS Suffixes. See the Workbook (local PFMC DNS).
j. Press Esc to exit.
k. Go to Troubleshooting Options.
l. Select Enable ESXi Shell and Enable SSH.
m. Press <Alt>-F1
n. Log in as root.
o. To enable the VMware ESXi host to work on the port channel, type:
Prerequisites
Ensure that you have access to the customer vCenter.
Steps
1. From the vSphere Client home page, go to Home > Hosts and Clusters.
2. Select a data center.
3. Right-click the data center and select New Cluster.
4. Enter a name for the cluster.
5. Select vSphere DRS and vSphere HA cluster features.
6. Click OK.
7. Select the existing cluster or newly created cluster.
8. From the Configure tab, click Configuration > Quickstart.
9. Click Add in the Add hosts card.
10. On the Add hosts page, in the New hosts tab, add the hosts that are not part of the vCenter Server inventory by entering
the IP address, or hostname and credentials.
11. (Optional) Select the Use the same credentials for all hosts option to reuse the credentials for all added hosts.
12. Click Next.
13. The Host Summary page lists all the hosts to be added to the cluster with related warnings. Review the details and click
Next.
14. On the Ready to complete page, review the IP addresses or FQDN of the added hosts and click Finish.
15. Add the new licenses:
a. Click Menu > Administration.
b. In the Administration section, click Licensing.
c. Click Licenses.
d. From the Licenses tab, click Add.
e. Enter or paste the license keys for VMware vSphere and vCenter, one per line. Click Next.
The license key is a 25-character string of letters and digits in the format XXXXX-XXXXX-XXXXX-XXXXX-XXXXX.
You can enter a list of keys in one operation. A new license is created for every license key you enter.
f. On the Edit license names page, rename the new licenses as appropriate and click Next.
g. Optionally, provide an identifying name for each license. Click Next.
h. On the Ready to complete page, review the new licenses and click Finish.
Steps
1. Log in to VMware vCSA HTML Client using the credentials.
2. Go to VMs and templates inventory or Administration > vCenter Server Extensions > vSphere ESX Agent Manager
> VMs to view the VMs.
The VMs are in the vCLS folder once the host is added to the cluster.
3. Right-click the VM and click Migrate.
4. In the Migrate dialog box, click Yes.
5. On the Select a migration type page, select Change storage only and click Next.
6. On the Select storage page, select the PowerFlex volumes for hyperconverged or ESXi-based compute-only node which
will be mapped after the PowerFlex deployment.
NOTE: The volume names are powerflex-service-vol-1 and powerflex-service-vol-2. The datastore names are
powerflex-esxclustershortname-ds1 and powerflex-esxclustershortname-ds2. If these volumes or datastores are
not present, create them before migrating the vCLS VMs.
Prerequisites
VMware ESXi must be installed with hosts added to the VMware vCenter.
Steps
1. Log in to the VMware vSphere Client.
2. Click Hosts and Clusters.
3. Locate and select the VMware ESXi host.
4. Select Datastores.
5. Right-click the datastore name, and select Rename.
6. Name the datastore using the DASXX convention, with XX being the node number.
Prerequisites
Apply all VMware ESXi updates before installing or loading hardware drivers.
NOTE: This procedure is required only if the ISO drivers are not at the proper Intelligent Catalog level.
Steps
1. Log in to the VMware vSphere Client.
2. Click Hosts and Clusters.
3. Locate and select the VMware ESXi host that you installed.
4. Select Datastores.
5. Right-click the datastore name and select Browse Files.
6. Select the Upload icon (to upload file to the datastore).
7. Browse to the Intelligent Catalog folder or downloaded current solution Intelligent Catalog files.
8. Select the VMware ESXi patch .zip files according to the current solution Intelligent Catalog and node type and click OK to
upload.
9. Select the driver and vib files according to the current Intelligent Catalog and node type and click OK to upload.
10. Click Hosts and Clusters.
11. Locate the VMware ESXi host, right-click, and select Enter Maintenance Mode.
12. Open an SSH session with the VMware ESXi host using PuTTy or a similar SSH client.
13. Log in as root.
14. Type cd /vmfs/volumes/DASXX where XX is the name of the local datastore that is assigned to the VMware ESXi
server.
15. To display the contents of the directory, type ls.
16. If the directory contains vib files, type esxcli software vib install -v /vmfs/volumes/DASXX/
patchname.vib to install each vib. These vib files can be individual drivers that are absent from the larger patch bundle
and must be installed separately.
17. Perform either of the following depending on the VMware ESXi version:
a. For VMware ESXi 7.0, type esxcli software vib update -d /vmfs/volumes/DASXX/VMware-
ESXi-7.0<version>-depot.zip.
b. For VMware ESXi 6.x, type esxcli software vib install -d /vmfs/volumes/DASXX/<ESXI-patch-
file>.zip.
18. Type reboot to reboot the host.
19. Once the host completes rebooting, open an SSH session with the VMware ESXi host, and type esxcli software vib
list | grep net-i to verify that the correct drivers are loaded.
20. Select the host and click Exit Maintenance Mode.
21. Update the test plan and host tracker with the results.
Prerequisites
● Type show running-configuration interface port-channel <portchannel number> to back up the
switch port configuration and verify whether the port channel for the impacted host is updated to MTU 9216. If the MTU
value is set to 9000, skip this procedure.
● Back up the dvswitch configuration:
○ Click Menu, and from the drop-down, click Networking.
○ Click the impacted dvswitch and click the Configure tab.
○ From the Properties page, verify the MTU value. If the MTU value is set to 9000, skip this procedure.
● See the following table for recommended MTU values:
Steps
1. Change the MTU to 9216 or jumbo on physical switch port (Dell EMC PowerSwitch and Cisco Nexus):
a. Dell EMC PowerSwitch:
interface port-channel31
description Downlink-Port-Channel-to-Ganga-r840-nvme-01
no shutdown
switchport mode trunk
switchport trunk allowed vlan 103,106,113,151,152,153,154
mtu 9216
vlt-port-channel 31
spanning-tree port type edge
b. Cisco Nexus:
interface port-channel31
description Downlink-Port-Channel-to-Ganga-r840-nvme-01
no shutdown
switchport mode trunk
switchport trunk allowed vlan 103,106,113,151,152,153,154
mtu 9216
vpc 31
spanning-tree port type edge
Prerequisites
Gather the IP addresses of the primary and secondary MDMs.
Steps
1. Open Direct Console User Interface (DCUI) or use SSH to log in to the new hosts.
2. At the command-line interface, run the following commands to ping each of the primary and secondary MDM IP
addresses.
If the ping test fails, you must remediate before continuing.
NOTE: A minimum of two logical data networks are supported. Optionally, you can configure four logical data networks.
Run the following commands for LACP bonding NIC port design. x is the VMkernel adapter number in vmkx.
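The ping commands themselves are not included in this extract; a sketch using ESXi's vmkping, with the adapter number and MDM IPs as placeholders:

```shell
# Sketch: verify jumbo-frame reachability from each data-network VMkernel adapter
# to the MDM data IPs. -I selects the VMkernel adapter, -s 8972 tests an MTU-9000
# payload, and -d disallows fragmentation.
vmkping -I vmkx -s 8972 -d <primary_MDM_data_IP>
vmkping -I vmkx -s 8972 -d <secondary_MDM_data_IP>
```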
NOTE: After several host restarts, check the access switches for error or disabled states by running the following
commands:
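The switch commands were omitted from this extract; on Cisco Nexus, a typical check is the following sketch (standard NX-OS commands, shown as an assumption):

```text
show interface status | include err-disabled
show interface counters errors
show port-channel summary
```

For step 3, clear counters interface port-channel <n> resets the error counters before re-checking.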
3. Optional: If errors appear in the counters of any interfaces, type the following and check the counters again.
Output from a Cisco Nexus switch:
4. Optional: If there are still errors on the counter, perform the following to see if the errors are old and irrelevant or new and
relevant.
a. Optional: Type # show interface | inc flapped.
Sample output:
Last link flapped 1d02h
b. Type # show logging logfile | inc failure.
Sample output:
Dec 12 12:34:50.151 access-a %ETHPORT-5-IF_DOWN_LINK_FAILURE: Interface Ethernet1/4/3
is down (Link failure)
5. Optional: Check and reset physical connections, bounce and reset ports, and clear counters until errors stop occurring.
Do not activate new nodes until all errors are resolved and no new errors appear.
Steps
1. In the VMware vSphere Client, select the new ESXi hosts.
2. Click Configure > Hardware > PCI Devices.
3. Click Configure PassThrough.
The Edit PCI Device Availability window opens.
4. From the PCI Device drop-down menu, select the Avago (LSI Logic) Dell HBA330 Mini check box and click OK.
5. Right-click the VMware ESXi host and select Maintenance Mode.
6. Right-click the VMware ESXi host and select Reboot to reboot the host.
Steps
1. Use SSH to log in to the host.
2. Run the following command to generate a list of NVMe devices:
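The listing command is omitted from this extract; one assumed way to produce the device list from the ESXi shell:

```shell
# Sketch: list NVMe disk device identifiers on the host.
ls /vmfs/devices/disks/ | grep -i nvme
```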
t10.NVMe____Dell_Express_Flash_PM1725a_800GB_SFF____0A0FB071EB382500
t10.NVMe____Dell_Express_Flash_PM1725a_800GB_SFF____1C0FB071EB382500
t10.NVMe____Dell_Express_Flash_PM1725a_800GB_SFF____3906B071EB382500
3. Run the following command for each NVMe device, incrementing the disk number for each:
vmkfstools -z /vmfs/devices/disks/t10.NVMe____Dell_Express_Flash_PM1725a_800GB_SFF____0A0FB071EB382500 /vmfs/volumes/DASxx/<svm_name>/<svm_name>-nvme_disk0.vmdk
vmkfstools -z /vmfs/devices/disks/t10.NVMe____Dell_Express_Flash_PM1725a_800GB_SFF____1C0FB071EB382500 /vmfs/volumes/DASxx/<svm_name>/<svm_name>-nvme_disk1.vmdk
vmkfstools -z /vmfs/devices/disks/t10.NVMe____Dell_Express_Flash_PM1725a_800GB_SFF____3906B071EB382500 /vmfs/volumes/DASxx/<svm_name>/<svm_name>-nvme_disk2.vmdk
Steps
1. Copy the SDC file to the local datastore on the VMware vSphere ESXi server.
2. Use SSH on the host and type esxcli software vib install -d /vmfs/volumes/datastore1/
sdc-3.6.xxxxx.xx-esx7.x.zip -n scaleio-sdc-esx7.x.
3. Reboot the PowerFlex node.
4. To configure the SDC, generate a new UUID:
NOTE: If the PowerFlex cluster is using an SDC authentication, the newly added SDC reports as disconnected when
added to the system. See Configure an authentication enabled SDC for more information.
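A sketch of the UUID generation and module configuration (assumptions: ESXi's bundled Python interpreter is available and the scini module was installed by the SDC vib; the MDM IPs are placeholders):

```shell
# Generate a unique GUID for this SDC.
uuid=$(python -c 'import uuid; print(uuid.uuid4())')
# Assign the GUID and the MDM data IPs to the scini kernel module, then reboot the node.
esxcli system module parameters set -m scini -p "IoctlIniGuidStr=${uuid} IoctlMdmIPStr=<MDM_data1_IP>,<MDM_data2_IP>"
```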
A minimum of two logical data networks are supported. Optionally, you can configure four logical data networks. Verify the
number of VIPs configured in an existing setup.
7. Reboot the PowerFlex node.
Steps
1. If using PowerFlex presentation server:
a. Log in to the PowerFlex presentation server.
b. Go to Configuration > SDCs.
c. Select the SDC and click Modify > Rename and rename the new host to standard.
For example, ESX-10.234.91.84
2. If using a version prior to PowerFlex 3.5:
a. Log in to the PowerFlex GUI.
b. Click Frontend > SDCs and rename the new host to standard.
For example, ESX-10.234.91.84
Steps
1. Using the table, calculate the required RAM capacity. The table columns are: MG capacity (TiB); Required MG RAM
capacity (GiB); Additional services memory; Total RAM required in the SVM without CloudLink (GiB); Total RAM required
in the SVM with CloudLink (GiB).
2. Alternatively, you can calculate RAM capacity using the following formula:
NOTE: The calculation is in binary MiB, GiB, and TiB.
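As a sketch, the Fine Granularity sizing formula given later in this section (RAM_capacity_in_GiB = 5 + (210 * Total_drive_capacity_in_TiB)/1024, rounded up to the next GiB) can be evaluated in the shell; awk here is just a calculator:

```shell
# Compute the required SVM RAM (GiB) from the total drive capacity (TiB),
# rounding up to the next whole GiB. The 5 GiB base and the 210 factor come
# from the Fine Granularity formula in this section.
tib=10
awk -v t="$tib" 'BEGIN { r = 5 + (210 * t) / 1024; printf "%d\n", (r == int(r)) ? r : int(r) + 1 }'
```

For 10 TiB of drives this yields 8 GiB; always round up, as the note above describes.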
3. Open the PowerFlex GUI using the PowerFlex management IP address and the relevant PowerFlex username and password.
4. Select the Storage Data Server (SDS) from the Backend where you want to update the RAM size.
5. Right-click the SDS, select Configure IP addresses, and note the flex-data1-<vlanid> and flex-data2-<vlanid> IP addresses
associated with this SDS. A window appears displaying the IP addresses used on that SDS for data communication. Use
these IP addresses to verify that you powered off the correct PowerFlex VM.
6. Right-click the SDS, select Enter Maintenance Mode, and click OK.
7. Wait for the GUI to display a green check mark, and then click Close.
8. In the PowerFlex GUI, click Backend, and right-click the SVM and verify the checkbox is deselected for Configure RAM
Read Cache.
9. Power off the SVM.
10. In VMware vCenter, open Edit Settings and modify the RAM size based on the table or formula in step 1. The SVM should
be set to 8 or 12 vCPU, configured at 8 or 12 sockets and 8 or 12 cores (for CloudLink, an additional 4 threads).
11. Power on the SVM.
12. From the PowerFlex GUI backend, right-click the SDS and select Exit Maintenance Mode and click OK.
13. Wait for the rebuild and rebalance to complete.
14. Repeat steps 6 through 13 for the remaining SDSs.
SDS with Fine Granularity pool vCPU total: 10 (SDS) + 2 (MDM/TB) + 2 (CloudLink) = 14 vCPU
RAM_capacity_in_GiB = 5 + (210 * Total_drive_capacity_in_TiB)/1024
NOTE: Physical core requirement: 2 sockets with 14 cores each (the vCPU count cannot exceed physical cores).
Prerequisites
● Enter the SDS node (SVM) into maintenance mode and power off the SVM.
● If you are putting the primary MDM into maintenance mode, switch the primary cluster role to secondary first (switch back
to the original node once completed). Perform this activity on only one SDS at a time.
● Placing multiple SDSs into maintenance mode at the same time risks data loss.
● Ensure that the node has enough CPU cores in each socket.
Steps
1. Log in to the PowerFlex GUI presentation server, https://Presentation_Server_IP:8443.
2. Click Configuration > SDSs.
3. In the right pane, select the SDS and click More > Enter Maintenance Mode.
4. In the Enter SDS into Maintenance Mode dialog box, select Instant.
If maintenance mode takes more than 30 minutes, select PMM.
5. Click Enter Maintenance Mode.
6. Verify the operation completed successfully and click Dismiss.
7. Shut down the SVM.
Steps
1. Log in to the VMware vSphere Client and do the following:
a. Right-click the ESXi host, and select Deploy OVF Template.
b. Click Choose Files and browse the SVM OVA template.
c. Click Next.
2. Go to hosts and templates > EMC PowerFlex, right-click PowerFlex SVM Template, and select New VM from This
Template.
3. Enter a name similar to svm-<hostname>-<SVM IP ADDRESS>, select a datacenter and folder, and click Next.
4. Identify the cluster and select the node that you are deploying. Verify that there are no compatibility warnings and click
Next. Review the details and click Next.
5. Select the local datastore DASXX, and click Next.
6. Leave Customize hardware checked and click Next.
a. Set CPU with 12 cores per socket.
b. Set Memory to 16 GB and check Reserve all guest memory (All locked).
NOTE: The number of vCPUs and the memory size may change based on your system configuration. Check the existing
SVMs and update the CPU and memory settings accordingly.
7. Click Next > Finish and wait for the cloning process to complete.
8. Right-click the new SVM, and select Edit Settings and do the following:
This is applicable only for SSD. For NVMe, see Add NVMe devices as RDMs.
a. From the New PCI device drop-down menu, click DirectPath IO.
b. From the PCI Device drop-down menu, expand Select Hardware, and select Avago (LSI Logic) Dell HBA330 Mini.
c. Click OK.
9. Prepare for asynchronous replication:
NOTE: If replication is enabled, follow the steps below; otherwise, skip to step 11.
Related information
Add the new SDS to PowerFlex
Steps
1. Log in to the SDS (SVMs) using PuTTy.
2. Append the line numa_memory_affinity=0 to the SDS configuration file /opt/emc/scaleio/sds/cfg/conf.txt,
type: # echo numa_memory_affinity=0 >> /opt/emc/scaleio/sds/cfg/conf.txt.
3. Run #cat /opt/emc/scaleio/sds/cfg/conf.txt to verify that the line is appended.
Steps
1. Use SSH to log in to the primary MDM. Log in to the PowerFlex cluster using # scli --login --username admin.
2. To query the current value, type # scli --query_performance_parameters --print_all --tech --
all_sds | grep -i SDS_NUMBER_OS_THREADS.
3. To set the value of SDS_number_OS_threads to 10, type # scli --set_performance_parameters --sds_id
<ID> --tech --sds_number_os_threads 10.
NOTE: Do not set the SDS threads globally, set the SDS threads per SDS.
Steps
1. Log in to the SDS (SVMs) using PuTTy.
2. Run # systemctl status NetworkManager to ensure that Network Manager is not running.
Output shows Network Manager is disabled and inactive.
3. If Network Manager is enabled and active, run the following command to stop and disable the service:
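The omitted command is presumably the standard systemd sequence (a sketch):

```shell
# Stop NetworkManager now and prevent it from starting at boot.
systemctl stop NetworkManager
systemctl disable NetworkManager
# Re-check: the status should report "inactive (dead)" and "disabled".
systemctl status NetworkManager
```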
Steps
1. Log in to SDS (SVMs) using PuTTY.
2. Note the MAC addresses of all the interfaces, type # ifconfig or # ip a.
3. Edit all the interface configuration files (ifcfg-eth0, ifcfg-eth1, ifcfg-eth2, ifcfg-eth3, ifcfg-eth4) and update the NAME,
DEVICE, and HWADDR entries to ensure the correct MAC address and name are assigned.
NOTE: If an entry is already present with the correct value, you can leave it unchanged.
● Use the vi editor to update the file # vi /etc/sysconfig/network-scripts/ifcfg-ethX
or
● Append the line using the following command:
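As a sketch of the append alternative (the command itself was omitted here; the MAC address and interface are taken from the example file below):

```shell
# Append the MAC address entry to the interface configuration file.
echo 'HWADDR=00:50:56:80:fd:82' >> /etc/sysconfig/network-scripts/ifcfg-eth2
```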
Example file:
BOOTPROTO=none
ONBOOT=yes
HOTPLUG=yes
TYPE=Ethernet
DEVICE=eth2
IPADDR=192.168.155.46
NETMASK=255.255.254.0
DEFROUTE=no
MTU=9000
PEERDNS=no
NM_CONTROLLED=no
NAME=eth2
HWADDR=00:50:56:80:fd:82
Steps
1. Log in to the SVM using PuTTY.
2. Edit the grub file located in /etc/default/grub, type: # vi /etc/default/grub.
3. From the last line, remove net.ifnames=0 and biosdevname=0, and save the file.
4. Rebuild the GRUB configuration file, using: # grub2-mkconfig -o /boot/grub2/grub.cfg
Steps
1. Log in to VMware vCenter using VMware vSphere Client.
2. Select the SVM, right-click, and select Power > Shut Down Guest OS. Ensure you shut down the correct SVM.
Steps
1. Log in to the production VMware vCenter using VMware vSphere Client.
2. Right-click the VM that you want to change and select Edit Settings.
3. Under the Virtual Hardware tab, expand CPU, and clear the CPU Hot Plug check box.
Steps
1. Browse to the SVM in the VMware vSphere Client.
2. To find a VM, select a data center, folder, cluster, resource pool, or host.
3. Click the VMs tab.
4. Right-click the VM and select Edit Settings.
5. Click VM Options and expand Advanced.
6. Under Configuration Parameters, click Edit Configuration.
7. In the dialog box that appears, click Add Configuration Params.
8. Enter a new parameter name and its value depending on the pool:
● If the SVM for an MG pool has 20 vCPU, set numa.vcpu.maxPerVirtualNode to 10.
● If the SVM for an FG pool has 24 vCPU, set numa.vcpu.maxPerVirtualNode to 12.
9. Click OK > OK.
10. Ensure the following:
● CPU shares are set to high.
● 50% of the vCPU capacity is reserved on the SVM.
For example:
● If the SVM for an MG pool is configured with 20 vCPUs and CPU speed is 2.8 GHz, set a reservation of 28 GHz
(20*2.8/2).
● If the SVM is configured with 24 vCPUs and CPU speed is 3 GHz, set a reservation of 36 GHz (24*3/2).
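The reservation arithmetic in the examples above (vCPU count × per-core clock speed ÷ 2) can be checked quickly in the shell; awk here is just a calculator:

```shell
# 50% CPU reservation in GHz = vCPU count * per-core clock speed (GHz) / 2
awk -v vcpu=20 -v ghz=2.8 'BEGIN { printf "%.0f\n", vcpu * ghz / 2 }'   # MG pool example
awk -v vcpu=24 -v ghz=3.0 'BEGIN { printf "%.0f\n", vcpu * ghz / 2 }'   # FG pool example
```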
11. Find the CPU and clock speed:
a. Log in to VMware vCenter.
Modify the memory size according to the SDR requirements for MG Pool
Use this procedure to add additional memory required for SDR if replication is enabled.
Steps
1. Log in to the production VMware vCenter using vSphere client.
2. Right-click the VM you want to change and select Edit Settings.
3. Under the Virtual Hardware tab, expand Memory, modify the memory size according to SDR requirement.
4. Click OK.
Steps
1. Log in to the production VMware vCenter using VMware vSphere client.
2. Right-click the virtual machine that requires changes and select Edit Settings.
3. Under the Virtual Hardware tab, expand CPU, increase the vCPU count according to SDR requirement.
4. Click OK.
Modify the memory size according to the SDR requirements for FG Pool
Use this procedure to add additional memory required for SDR if replication is enabled.
Steps
1. Log in to the production VMware vCenter using VMware vSphere Client.
2. Right-click the VM that requires changes and select Edit Settings.
3. Under the Virtual Hardware tab, expand Memory, modify the memory size according to SDR requirement.
4. Click OK.
Steps
1. Log in to the production VMware vCenter using VMware vSphere Client.
2. Right-click the virtual machine that requires changes and select Edit Settings.
3. Under the Virtual Hardware tab, expand CPU, and increase the vCPU count according to SDR requirement.
4. Click OK.
Steps
1. Log in to the production VMware vCenter using VMware vSphere Client and navigate to Host and Clusters.
2. Right-click the SVM and click Edit Settings.
3. Click Add new device and select Network Adapter from the list.
4. Select the appropriate port group created for SDR external communication and click OK.
5. Repeat steps 2 through 4 to create the second NIC.
Steps
1. Log in to VMware vCenter using vSphere client.
2. Select the SVM, right-click Power > Power on.
3. Log in to SVM using PuTTY.
4. Create the rep1 network interface, type: cp /etc/sysconfig/network-scripts/ifcfg-eth0 /etc/sysconfig/
network-scripts/ifcfg-eth5.
5. Create the rep2 network interface, type: cp /etc/sysconfig/network-scripts/ifcfg-eth0 /etc/sysconfig/
network-scripts/ifcfg-eth6.
6. Edit newly created configuration files (ifcfg-eth5, ifcfg-eth6) using the vi editor and modify the entry for IPADDR,
NETMASK, GATEWAY, DEFROUTE, DEVICE, NAME and HWADDR, where:
● DEVICE is the newly created device of eth5 and eth6
● IPADDR is the IP address of the rep1 and rep2 networks
● NETMASK is the subnet mask
● GATEWAY is the gateway for the SDR external communication
● DEFROUTE change to no
● HWADDR=MAC address collected from the topic Adding virtual NICs to SVMs
● NAME=newly created device name for eth5 and eth6
NOTE: Ensure that the MTU value is set to 9000 for the SDR interfaces on both the primary and secondary sites, and on all
end-to-end devices. Confirm the existing MTU values with the customer and configure accordingly.
Steps
1. Go to /etc/sysconfig/network-scripts and create a route-<interface> file for each replication interface:
# touch /etc/sysconfig/network-scripts/route-eth5
# touch /etc/sysconfig/network-scripts/route-eth6
Example contents of /etc/sysconfig/network-scripts/route-eth5:
10.0.10.0/23 via 10.0.30.1
Example contents of /etc/sysconfig/network-scripts/route-eth6:
10.0.20.0/23 via 10.0.40.1
Steps
1. Use WinSCP or SCP to copy the SDR package to the tmp folder.
2. Use SSH to connect to the SVM and run the following to install the SDR package: # rpm -ivh /tmp/EMC-ScaleIO-
sdr-3.6-x.xxx.el7.x86_64.rpm.
Prerequisites
The IP address of the node must be configured for SDR. The SDR communicates with several components:
● SDC (application)
● SDS (storage)
● Remote SDR (external)
Steps
1. In the left pane, click Protection > SDRs.
2. In the right pane, click Add.
3. In the Add SDR dialog box, enter the connection information of the SDR:
Steps
1. Log in to all the SVMs and PowerFlex nodes in source and destination sites.
2. Ping the following IP addresses from each of the SVM and PowerFlex nodes in source site:
● Management IP addresses of the primary and secondary MDMs
● External IP addresses configured for SDR-SDR communication
3. Ping the following IP addresses from each of the SVM and PowerFlex nodes in destination site:
● Management IP addresses of the primary and secondary MDMs
● External IP addresses configured for SDR-SDR communication
Steps
1. If you are using a PowerFlex presentation server:
a. Log in to the PowerFlex presentation server.
b. Click Configuration > SDSs and click Add.
c. On the Add SDS page, enter the SDS name and select the Protection Domain.
d. Under Add IP, enter the data IP address and click Add SDS.
e. Locate the newly added PowerFlex SDS, right-click and select Add Device.
f. Choose Storage device from the drop-down menu.
g. Locate the newly added PowerFlex SDS, right-click and select Add Device, and choose Acceleration Device from the
drop-down menu.
CAUTION: If the deployment fails for SSD or NVMe with NVDIMM, it can be due to one of the following
reasons. Click View Logs and see Configuring the NVDIMM for a new PowerFlex hyperconverged node for
the node configuration table and the steps to add the SDS and NVDIMM to the FG pool.
● The following error appears if the required NVDIMM size and the RAM size assigned to the SVM do not match the node
configuration table:
VMWARE_CANNOT_RETRIEVE_VM_MOR_ID
● If the deployment fails to add the device and SDS to the PowerFlex GUI, manually add the SDS and NVDIMM to the
FG pool.
2. If you are using a PowerFlex version prior to 3.5:
a. Log in to the PowerFlex GUI, and click Backend > Storage.
b. Right-click the new protection domain, and select +Add > Add SDS.
c. Enter a name.
For example, 10.234.92.84-ESX.
d. Add the following addresses in the IP addresses field and click OK:
● flex-data1-<vlanid>
● flex-data2-<vlanid>
● flex-data3-<vlanid> (if required)
● flex-data4-<vlanid> (if required)
e. Add New Devices from the lsblk output from the previous step.
f. Select the storage pool destination and media type.
g. Click OK and wait for the green check box to appear and click Close.
Related information
Add drives to PowerFlex
Prepare the SVMs for replication
Steps
1. If you are using PowerFlex GUI presentation server to enable zero padding on a storage pool:
a. Connect to the PowerFlex GUI presentation server through https://<ipaddress>:8443 using the MDM credentials.
b. Click Storage Pools from the left pane, and select the storage pool.
c. Click Settings from the drop-down menu.
d. Click Modify > General Settings from the drop-down menu.
e. Click Enable Zero Padding Policy > Apply.
NOTE: After the first device is added to a specific pool, you cannot modify the zero padding policy. FG pool is always
zero padded. By default, zero padding is disabled only for MG pool.
2. If you are using a PowerFlex version prior to 3.5 to enable zero padding on a storage pool:
a. Select Backend > Storage, and right-click Select By Storage Pools from the drop-down menu.
b. Right-click the storage pool, and click Modify zero padding policy.
c. Select Enable Zero Padding Policy, and click OK > Close.
NOTE: Zero padding cannot be enabled when devices are available in the storage pools.
3. Do one of the following, depending on whether CloudLink is enabled:
● If CloudLink is enabled, see one of the following procedures depending on the devices:
○ Encrypt PowerFlex hyperconverged (SVM) or storage-only devices (SED drives)
○ Encrypt PowerFlex hyperconverged (SVM) or storage-only devices (non-SED drives)
● If CloudLink is disabled, use PuTTY to access the Red Hat Enterprise Linux or embedded operating system node.
When adding NVMe drives, keep a separate storage pool for the PowerFlex storage-only node.
e. Repeat steps 5a to 5d on all the SDSs where you want to add the devices.
f. Ensure that all the rebuild and rebalance activities are successfully completed.
g. Verify the space capacity after adding the new node.
6. If you are using a PowerFlex version prior to 3.5:
a. Connect to the PowerFlex GUI.
b. Click Backend.
c. Locate the newly added PowerFlex SDS, right-click, and select Add Device.
d. Type /dev/nvmeXXn1, where XX is the value from step 3.
e. Select the Storage Pool, as identified in the Workbook.
NOTE: If the existing protection domain has Red Hat Enterprise Linux nodes, replace or expand with Red Hat
Enterprise Linux. If the existing protection domain has embedded operating system nodes, replace or expand with
embedded operating system.
Related information
Add the new SDS to PowerFlex
Prerequisites
● Ensure that the NVDIMM firmware on the new node is the same version as on the existing nodes in the cluster.
● If the NVDIMM firmware is higher than the Intelligent Catalog version, you must manually downgrade the NVDIMM firmware.
● The VMware ESXi host and the VMware vCenter server are using version 6.7 or higher.
● The VM version of your SVM is version 14 or higher.
● The firmware of the NVDIMM is version 9324 or higher.
● The VMware ESXi host recognizes the NVDIMM.
Steps
1. Log in to the VMware vCenter.
2. Select the VMware ESXi host.
3. Go to the Summary tab.
4. In the VM Hardware section, verify that the required amount of persistent memory is listed.
Add NVDIMM
Use this procedure to add an NVDIMM.
Steps
1. Using the PowerFlex GUI, perform the following to enter the target SDS into maintenance mode:
NOTE: For the new PowerFlex nodes with NVMe or SSD, remove the SDS or device if it is added to the GUI before
placing the SDS into maintenance mode. Skip this step if the SDS is not added to the GUI.
NOTE: If the capacity does not match the configuration table, use the following formula to calculate the
NVDIMM or RAM capacity for Fine Granularity. The calculation is in binary MiB, GiB, and TiB. Round the RAM size up to
the next GiB. For example, if the output of the equation is 16.75 GiB, round it up to 17 GiB.
5. In Edit Settings, change the memory size as per the node configuration table, and select the Reserve all guest memory
(All locked) check box.
6. Right-click the SVM and choose Edit Settings. Set the SVM to 8 or 12 vCPU, configured at 8 or 12 sockets and 8 or 12
cores (for CloudLink, an additional 4 threads).
7. Use VMware vCenter to turn on the SVM.
8. Using the PowerFlex GUI, remove the SDS from maintenance mode.
9. Create a namespace on the NVDIMM:
a. Connect to the SVM using SSH and type # ndctl create-namespace -f -e namespace0.0 --mode=dax
--align=4K.
10. Perform steps 3 to 8 for every PowerFlex node with NVDIMM.
11. Create an acceleration pool for the NVDIMM devices:
a. Connect using SSH to the primary MDM, type #scli --add_acceleration_pool --
protection_domain_name <PD_NAME> --media_type NVRAM --acceleration_pool_name
<ACCP_NAME> in the SCLI to create the acceleration pool.
NOTE: Use this step only when you want to add the new PowerFlex node to the new acceleration pool. Otherwise,
skip this step and go to the step to add SSD or NVMe device.
b. For each SDS with NVDIMM, type #scli --add_sds_device --sds_name <SDS_NAME> --
device_path /dev/dax0.0 --acceleration_pool_name <ACCP_NAME> --force_device_takeover to
add the NVDIMM devices to the acceleration pool:
NOTE: Use this step only when you want to add the new acceleration device to a new acceleration pool. Otherwise,
skip this step and go to the step to add SSD or NVMe device.
12. Create a storage pool for SSD devices accelerated by an NVDIMM acceleration pool with Fine Granularity data layout:
a. Connect using SSH to the primary MDM and enter #scli --add_storage_pool --protection_domain_name
<PD_NAME> --storage_pool_name <SP_NAME> --media_type SSD --compression_method normal
--fgl_acceleration_pool_name <ACCP_NAME> --fgl_profile high_performance --data_layout
fine_granularity.
NOTE: Use this step only when you want to add the new PowerFlex node to a new storage pool. Otherwise, skip this
step and go to the step to add SSD or NVMe device.
13. Add the SSD or NVMe device to the existing Fine Granularity storage pool using the PowerFlex GUI.
14. Set the spare capacity for the fine granularity storage pool.
When finished, if you are not extending the MDM cluster, see Completing the expansion.
Extend the MDM cluster from three to five nodes using SCLI
Use this procedure to extend the MDM cluster using SCLI.
It is critical that the MDM cluster is distributed across access switches and physical cabinets to ensure maximum resiliency and
availability of the cluster. The location of the MDM components should be checked and validated during every engagement,
and adjusted if found noncompliant with the published guidelines. If an expansion includes adding physical cabinets and access
switches, you should relocate the MDM cluster components. See MDM cluster component layouts for more information.
When adding new MDM or tiebreaker nodes to a cluster, first place the PowerFlex storage-only nodes (if available), followed by
the PowerFlex hyperconverged nodes.
Prerequisites
● Identify new nodes to use as MDM or tiebreaker.
● Identify the management IP address, data1 IP address, and data2 IP address (log in to each new node or SVM and run the
ip addr command).
● Gather virtual interfaces for the nodes being used for the new MDM or tiebreaker, and note the interface of data1 and data2.
For example, for a PowerFlex storage-only node, the interface is bond0.152 and bond1.160. If it is an SVM, it is eth3 and
eth4.
● Identify the primary MDM.
Steps
1. SSH to each new node or SVM and assign the proper role (MDM or tiebreaker) to each.
2. Transfer the MDM and LIA packages to the newly identified MDM cluster nodes.
NOTE: The following steps contain sample versions of PowerFlex files as examples only. Use the appropriate PowerFlex
files for your deployment.
10. Enter scli --query_cluster to find the ID for the newly added standby MDM and the standby tiebreaker.
11. To switch to a five-node cluster, enter scli --switch_cluster_mode --cluster_mode 5_node --
add_slave_mdm_id <Standby MDM ID> --add_tb_id <Standby tiebreaker ID>.
12. Repeat steps 1 to 9 to add Standby MDM and tiebreakers on other PowerFlex nodes.
Prerequisites
● Identify new nodes to use as MDM or tiebreaker.
● Identify the management IP address, data1 IP address, and data2 IP address (log in to each new node or SVM and enter the
ip addr command).
● Gather virtual interfaces for the nodes being used for the new MDM or tiebreaker, and note the interface of data1 and data2.
For example, for a PowerFlex storage-only node, the interface is bond0.152 and bond1.160. If it is an SVM, it is eth3 and
eth4.
● Identify the primary MDM.
Steps
1. SSH to each new node or SVM and assign the proper role (MDM or tiebreaker) to each.
2. Transfer the MDM and LIA packages to the newly identified MDM cluster nodes.
NOTE: The following steps contain sample versions of PowerFlex files as examples only. Use the appropriate PowerFlex
files for your deployment.
8. Add a new standby tiebreaker by entering scli --add_standby_mdm --mdm_role tb --new_mdm_ip <new tb
data1,data2 IPs> --new_mdm_name <new tb name>.
9. Repeat Steps 7 and 8 for each new MDM and tiebreaker that you are adding to the cluster.
10. Enter scli --query_cluster to find the ID for the current MDM and tiebreaker. Note the IDs of the MDM and
tiebreaker being replaced.
11. To replace the MDM, enter scli --replace_cluster_mdm --add_slave_mdm_id <mdm id to add> --
remove_slave_mdm_id <mdm id to remove>.
Repeat this step for each MDM.
12. To replace the tiebreaker, enter scli --replace_cluster_mdm --add_tb_id <tb id to add> --
remove_tb_id <tb id to remove>.
Repeat this step for each tiebreaker.
13. Enter scli --query_cluster to find the IDs for the MDMs and tiebreakers being removed.
14. To remove the old MDM, enter scli --remove_standby_mdm --remove_mdm_id <mdm id to remove>.
NOTE: This step might not be necessary if this MDM remains in service as a standby. See MDM cluster component
layouts for more information.
15. To remove the old tiebreaker, enter scli --remove_standby_mdm --remove_mdm_id <mdm id to remove>.
NOTE: This step might not be necessary if this tiebreaker remains in service as a standby. See MDM cluster component
layouts for more information.
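The replace-then-remove sequence in steps 10 through 15 can be sketched as an ordered command plan; all IDs are hypothetical placeholders read from scli --query_cluster:

```shell
# Sketch: replace a cluster MDM and tiebreaker with new standbys, then remove
# the old standby members. IDs are placeholders, not real values.
OLD_MDM_ID="0x0a01"; NEW_MDM_ID="0x0a02"
OLD_TB_ID="0x0b01";  NEW_TB_ID="0x0b02"

PLAN="scli --replace_cluster_mdm --add_slave_mdm_id ${NEW_MDM_ID} --remove_slave_mdm_id ${OLD_MDM_ID}
scli --replace_cluster_mdm --add_tb_id ${NEW_TB_ID} --remove_tb_id ${OLD_TB_ID}
scli --remove_standby_mdm --remove_mdm_id ${OLD_MDM_ID}
scli --remove_standby_mdm --remove_mdm_id ${OLD_TB_ID}"
# Print the plan; skip the remove steps if the old members stay as standbys.
printf '%s\n' "$PLAN"
```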
Related information
Redistribute the MDM cluster using PowerFlex Manager
Steps
1. Update the inventory for vCenter (vCSA), switches, gateway VM, and nodes:
a. Click Resources on the home screen.
b. Select the vCenter, switches (applicable only for full networking), gateway VM, and newly added nodes.
c. Click Run Inventory.
d. Click Close.
e. Wait for the job in progress to complete.
2. Update Services details:
a. Click Services.
b. Choose the service on which the new node is expanded and click View Details.
c. On the Services details screen, choose Update Service Details.
d. Choose the credentials for the node and SVM and click Next.
e. On the Inventory Summary page, verify that the newly added nodes appear under Physical Node, and click Next.
f. On the Summary page, verify the details and click Finish.
Prerequisites
Ensure that the VMware vSphere vCenter Server and the VMware vSphere Client are accessible.
Steps
1. Log in to the VMware vSphere Client.
2. Click Networking.
3. Expand the PowerFlex Customer-Datacenter.
4. Right-click cust_dvswitch.
5. Click Distributed Port Group > New Distributed Port Group.
6. Update the name to pfmc-nsx-transport-121 and click Next.
7. Select the default Port binding.
8. Select the default Port allocation.
9. Keep the default number of ports (8).
10. Select VLAN as the VLAN type.
11. Set the VLAN ID to 121.
12. Clear the Customize default policies configuration check box and click Next.
13. Click Finish.
14. Right-click pfmc-nsx-transport-121 and click Edit Settings.
15. Click Teaming and failover.
16. Verify that Uplink1 and Uplink2 are moved to Active.
17. Click OK.
Prerequisites
Both Cisco Nexus access switch ports for the compute VMware ESXi hosts are configured as trunk ports. These ports are
configured as LACP-enabled after the physical adapter is removed from each ESXi host.
WARNING: As the VMK0 (ESXi management) is not configured on cust_dvswitch, both the vmnics are first
migrated to the LAGs simultaneously and then the port channel is configured. Data connectivity to PowerFlex is
lost until the port channels are brought online with both vmnic interfaces connected to LAGs.
Steps
1. Log in to the VMware vSphere Client.
2. Look at VMware vCenter and physical switches to ensure that both ports across all hosts are up.
3. For each compute VMware ESXi host, record the physical switch ports to which vmnic5 (switch-B) and vmnic7 (switch-A)
are connected.
a. Click Home, then select Hosts and Clusters and expand the compute cluster.
b. Select the first compute ESXi host in the left pane, and then select the Configure tab in the right pane.
interface port-channel40
description to flex-compute-esxi-host01
switchport
switchport mode trunk
switchport trunk allowed vlan 151,152,153,154
spanning-tree port type edge trunk
spanning-tree bpduguard enable
speed 25000
no lacp suspend-individual
vpc 40
7. Configure channel-group (LACP) on switch-A access port (vmnic5) for each compute VMware ESXi host.
The following switch port configuration is an example of a single compute VMware ESXi host.
a. Open an SSH session to switch-A using PuTTY or a similar SSH client.
b. Create the port on switch-A as follows:
int e1/1/1
description to flex-compute-esxi-host01 - vmnic5
switchport
switchport mode trunk
switchport trunk allowed vlan 151,152,153,154
spanning-tree port type edge trunk
spanning-tree bpduguard enable
spanning-tree guard root
speed 25000
channel-group 40 mode active
a. Open an SSH session to switch-B using PuTTY or a similar SSH client.
b. Create a port channel on switch-B for each compute VMware ESXi host as follows:
interface port-channel40
description to flex-compute-esxi-host01
switchport
switchport mode trunk
switchport trunk allowed vlan 151,152,153,154
spanning-tree port type edge trunk
spanning-tree bpduguard enable
speed 25000
no lacp suspend-individual
vpc 40
9. Configure channel-group (LACP) on switch-B access port (vmnic7) for each compute VMware ESXi host.
The following switch port configuration is an example of a single compute VMware ESXi host.
a. Open an SSH session to switch-B using PuTTY or a similar SSH client.
b. Create the port on switch-B as follows:
int e1/1/1
description to flex-compute-esxi-host01 - vmnic7
switchport
switchport mode trunk
switchport trunk allowed vlan 151,152,153,154
spanning-tree port type edge trunk
spanning-tree bpduguard enable
spanning-tree guard root
speed 25000
channel-group 40 mode active
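Because the same stanzas repeat for every compute host (only the description and port-channel/vPC number change), the configuration above can be templated. The helper function and host values below are illustrative assumptions, not part of the official procedure:

```shell
# Sketch: emit the repeated Nexus port-channel stanza for a given host.
# Host name and channel number are examples only.
gen_port_channel() {
  host="$1"; pc="$2"
  printf 'interface port-channel%s\n' "$pc"
  printf '  description to %s\n' "$host"
  printf '  switchport mode trunk\n'
  printf '  switchport trunk allowed vlan 151,152,153,154\n'
  printf '  spanning-tree port type edge trunk\n'
  printf '  no lacp suspend-individual\n'
  printf '  vpc %s\n' "$pc"
}

CFG="$(gen_port_channel flex-compute-esxi-host01 40)"
echo "$CFG"
```

Paste the generated stanza into the switch configuration session for each host, incrementing the channel number per host.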
10. Update the teaming and failover policy to route based on IP hash for each port group within cust_dvswitch:
a. Click Home and select Networking.
b. Expand cust_dvswitch to view all port groups.
c. Right-click flex-data-01 and select Edit Settings.
d. Click Teaming and failover.
e. Change the Load Balancing mode to Route based on IP hash.
f. Repeat steps 10b to 10e for each remaining port group.
Prerequisites
NOTE: Before adding a VMware NSX-T service using PowerFlex Manager, either the customer or VMware services must
add the new PowerFlex node to NSX-T Data Center using the NSX-T UI.
Consider the following:
● Before updating the service details in PowerFlex Manager, verify that the NSX-T Data Center is configured
on the PowerFlex hyperconverged or compute-only nodes.
● If the transport nodes (PowerFlex cluster) are configured with NSX-T, you cannot replace the field units using PowerFlex
Manager. You must add the node manually by following either of these procedures, depending on the node type:
○ Performing a PowerFlex hyperconverged node expansion
○ Performing a PowerFlex compute-only node expansion
Steps
1. Log in to PowerFlex Manager.
2. If NSX-T Data Center 3.0 or higher is deployed and is using VDS (not N-VDS), then add the transport network:
a. From Getting Started, click Define Networks.
b. Click + Define and do the following:
Configure Static IP Address Ranges: Select the Configure Static IP Address Ranges check box, and type
the starting and ending IP addresses of the transport network IP pool.
c. Click Next.
d. On the Network Information page, select Full Network Automation, and click Next.
e. On the Cluster Information page, enter the following details:
f. Click Next.
g. On the OS Credentials page, select the OS credentials for each node, and click Next.
h. On the Inventory Summary page, review the summary and click Next.
i. On the Networking Mapping page, verify that the networks are aligned with the correct dvSwitch.
j. On the Summary page, review the summary and click Finish.
4. Verify that PowerFlex Manager recognizes that NSX-T is configured on the nodes:
a. Click Services.
b. Select the hyperconverged or compute-only service.
c. Verify that a banner appears under the Service Details tab, indicating that NSX-T is configured on a node and is
preventing some features from being used. If you do not see this banner, verify that you selected the correct
service and that NSX-T is configured on the hyperconverged or compute-only nodes.
Chapter 59: Encrypting PowerFlex hyperconverged (SVM) or storage-only node devices (SED or non-SED drives)
Prerequisites
NOTE: This procedure is not applicable for PowerFlex storage-only nodes with NVMe drives.
● If you want to add the SVM into a specific machine group, use the -G [group_code] argument with the preceding
command.
where -G group_code specifies the registration code for the machine group to which you want to assign the machine.
NOTE: To obtain the registration code of the machine group, log in to the CloudLink Center using a web browser.
Steps
1. Open a browser, and provide the CloudLink Center IP address.
2. In the Username box, enter secadmin.
3. In the Password box, enter the secadmin password.
4. Click Agents > Machines.
5. Ensure that the hostname of the new SVM or PowerFlex storage-only node is listed, and is in Connected state.
6. If the SDS has devices that are added to PowerFlex, remove the devices. Otherwise, skip this step.
NOTE: If the device shows taking control, run svm status until the device status shows as managed. It
is a known issue that the CLI status of SED drives shows as unencrypted, whereas the CloudLink Center UI shows the
device status as Encrypted HW.
NOTE: There are no /dev/mapper devices for SEDs. Use the device name listed in the svm status output. It
is recommended to add self-encrypting drives (SEDs) to their own storage pools.
f. Once all SED drives are Managed, add the encrypted devices to the PowerFlex SDS.
8. Ensure that rebalance is running and progressing before continuing to another SDS.
Related information
Verify the CloudLink license
Prerequisites
Ensure that the following prerequisites are met:
● If you are using PowerFlex presentation server, see Modifying the vCPU, memory, vNUMA, and CPU reservation settings on
SVMs for the CPU settings.
● If you are using a PowerFlex version prior to 3.6, the Storage VM (SVM) vCPU is set to 12 (one socket and twelve cores),
and RAM is set to 16 GB (applicable for an MG pool enabled system only). If you have an FG pool enabled system, change the
RAM size based on the node configuration table specified in Add NVDIMM.
● SSH to the SVM or the PowerFlex storage-only node on which you plan to have the encrypted devices.
● Download and install the CloudLink Agent by entering:
curl -O http://cloudlink_ip/cloudlink/securevm && sh securevm -S cloudlink_ip
● If you want to add the SVM into a specific machine group, use the -G [group_code] argument with the preceding
command.
where -G group_code specifies the registration code for the machine group to which you want to assign the machine.
NOTE: To obtain the registration code of the machine group, log in to the CloudLink Center using a web browser.
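The agent download and registration described above can be sketched as follows; the CloudLink IP and group code are placeholder values for your deployment:

```shell
# Sketch: compose the CloudLink agent install command, optionally joining a
# specific machine group. Both values below are placeholders.
CLOUDLINK_IP="192.168.100.50"
GROUP_CODE="EXAMPLE-GROUP-CODE"   # registration code from CloudLink Center

# Without -G, the machine registers into the default group.
CMD="curl -O http://${CLOUDLINK_IP}/cloudlink/securevm && sh securevm -S ${CLOUDLINK_IP} -G ${GROUP_CODE}"
echo "$CMD"
```

Run the printed command on the SVM or PowerFlex storage-only node itself.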
Steps
1. Open a browser, and enter the CloudLink Center IP address.
2. In the Username box, enter secadmin.
3. In the Password box, enter the secadmin password.
The CloudLink Center home page is displayed.
4. Click Agents > Machines.
5. Ensure that the hostname of the new SVM or PowerFlex storage-only node is listed, and is in Connected state.
NOTE: Ensure that Storage Data Server (SDS) is installed before CloudLink Agent is installed.
In the /opt/emc/extra/pre_run.sh file (opened with vi, for example), add sleep 60 before the last line if it does not already exist.
7. If the SDS has devices that are added to PowerFlex, remove the devices. Otherwise, skip this step.
b. For SSD drives, enter svm encrypt /dev/sdX for each drive you want to encrypt.
where X is the device letter.
c. For NVMe drives, enter svm encrypt /dev/nvmexxx for each drive you want to encrypt.
d. Enter svm status to view the status of the devices.
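Steps b through d above can be sketched as a loop; the device list is a placeholder, and the svm commands must run on the SVM or storage-only node itself:

```shell
# Sketch: queue `svm encrypt` for a placeholder list of SSD devices, then
# finish with a status check. Substitute your real device names.
DEVICES="/dev/sdb /dev/sdc /dev/sdd"

PLAN=""
for dev in $DEVICES; do
  PLAN="${PLAN}svm encrypt ${dev}
"
done
PLAN="${PLAN}svm status"
printf '%s\n' "$PLAN"
```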
iv. Enter the new device path and name in the Path and Name fields of the Add Storage Device to SDS window.
v. Select the Storage Pool and Media Type you recorded in the drive information table.
vi. Click Add Device.
vii. Repeat steps c and d to add all the devices, and click Add Devices.
PowerFlex version prior to 3.5:
i. Log in to the PowerFlex GUI.
ii. Click Backend.
iii. Locate the PowerFlex SDS, right-click, and select Add Device.
iv. In Add device to SDS, enter the Path and select the Storage Pool for each device.
● If the PowerFlex storage-only node has only SSD disks, the path is /dev/mapper/svm_sdX, where X is the device you have encrypted.
● If the PowerFlex storage-only node has NVMe disks, the path is /dev/mapper/svm_nvmeXnX, where X is the device you have encrypted.
9. Ensure that rebalance is running and progressing before continuing to another SDS.
Related information
Verify the CloudLink license
Verify newly added SVMs or storage-only nodes machine status in CloudLink Center
Chapter 60: Performing a PowerFlex compute-only node expansion
Perform the manual expansion procedure to add a PowerFlex compute-only node to PowerFlex Manager services that are
discovered in a lifecycle mode.
See Cabling the PowerFlex R640/R740xd/R840 nodes for cabling information.
Discover resources
Use this procedure to discover and allow PowerFlex Manager access to resources in the environment. Provide the management
IP address and credential for each discoverable resource.
Prerequisites
Verify that the iDRAC network settings are configured. See Configure iDRAC network settings for more information.
NOTE: For partial network deployments, you do not need to discover the switches. The switches must be preconfigured.
For sample configurations for Dell PowerSwitch, Cisco Nexus, and Arista switches, see the Dell EMC PowerFlex
Appliance Administration Guide.
The following are the specific details for completing the Discovery wizard steps:
** This is optional. For a new CloudLink Center deployment, the CloudLink Center is discovered automatically.
Prerequisites
● Configure the iDRAC network settings.
● Gather the IP addresses and credentials that are associated with the resources.
NOTE: PowerFlex Manager also allows you to use name-based searches to discover a range of nodes whose iDRACs
were assigned IP addresses through DHCP. For more information about this feature, see the Dell EMC PowerFlex
Manager Online Help.
Steps
1. On the PowerFlex Manager Getting Started page, click Discover Resources.
2. On the Welcome page of the Discovery Wizard, read the instructions and click Next.
3. On the Identify Resources page, click Add Resource Type. From the Resource Type list, select the resource that you
want to discover.
4. Enter the management IP address of the resource in the IP/Hostname Range field. To discover a resource in an IP range,
provide a starting and ending IP address.
5. In the Resource State list, select Managed or Unmanaged.
6. For a PowerFlex node, to discover resources into a selected node pool instead of the global pool (default), select the node
pool from the Discover into Node Pool list.
7. Select the appropriate credential from the Credentials list. See the table above for details.
8. For a PowerFlex node, if you want PowerFlex Manager to automatically reconfigure the iDRAC IP addresses of the nodes it
discovers, select the Reconfigure discovered nodes with new management IP and credentials check box. This option
is not selected by default, because it is faster to discover the nodes if you bypass the reconfiguration.
NOTE: iDRAC can also be discovered using its hostname.
NOTE: For the Resource Type, you can use a range with hostname or IP address, provided the hostname has a valid
DNS entry.
9. For a PowerFlex node, select the Auto configure nodes to send alerts to PowerFlex Manager check box to have
PowerFlex Manager automatically configure nodes to send alerts to it.
10. Click Next to start discovery. On the Discovered Resources page, select the resources from which you want to collect
inventory data and click Finish. The discovered resources are listed on the Resources page.
Related information
Configure iDRAC network settings
Prerequisites
Verify that the customer VMware ESXi ISO is available in the Intelligent Catalog code directory.
Steps
1. Log in to the iDRAC:
a. Connect to the iDRAC interface and launch a virtual remote console by clicking Dashboard > Virtual Console and click
Launch Virtual Console.
b. Select Connect Virtual Media.
c. Under Map CD/DVD, click Choose File > Browse and browse to the folder where the ISO file is saved, select it, and
click Open.
d. Click Map Device.
e. Click Menu > Boot > Virtual CD/DVD/ISO.
f. Click Power > Reset System (warm boot).
2. Set the boot option to UEFI.
a. Press F2 to enter system setup.
b. Under System BIOS > Boot setting, select UEFI as the boot mode.
NOTE: Ensure that the BOSS card is set as the primary boot device from the boot sequence settings. If the BOSS
card is not set as the primary boot device, reboot the server and change the UEFI boot sequence from System
BIOS > Boot settings > UEFI BOOT settings.
c. Click Back > Back > Finish > Yes > Finish > OK > Finish > Yes.
3. Install VMware ESXi:
a. On the VMware ESXi installer screen, press Enter to continue.
b. Press F11 to accept the license agreement.
c. Under Local, select DELLBOSS VD as the install location, and press Enter if prompted to do so.
d. Select US Default as the keyboard layout.
e. When prompted, type the root password and press Enter.
f. At the Confirm Install screen, press F11.
g. When the installation is complete, remove the installation media before rebooting.
h. Press Enter to reboot the node.
NOTE: Set the first boot device to be the drive on which you installed VMware ESXi in Step 3.
e. See the VMware ESXi Management VLAN ID field in the Workbook for the required VLAN value.
f. Set IPv4 ADDRESS, SUBNET MASK, and DEFAULT GATEWAY configuration to the values defined in the Workbook.
g. Go to DNS Configuration. See the Workbook for the required DNS value.
h. Go to Custom DNS suffix. See the Workbook (local VXRC DNS).
i. Go to DCUI Troubleshooting Options.
j. Select Enable ESXi Shell and Enable SSH.
k. Press <Alt>-F1.
l. Log in as root.
m. To enable the VMware ESXi host to work on the port channel, type:
n. Type vim-cmd hostsvc/datastore/rename datastore1 DASXX to rename the datastore, where XX is the
server number.
o. Type exit to log off.
p. Press <Alt>-F2 to return to the DCUI.
q. Select Disable ESXi Shell.
r. Go to DCUI IPv6 Configuration.
s. Disable IPv6.
t. Press ESC to return to the DCUI.
u. Type Y to commit the changes. The node restarts.
v. Verify host connectivity by pinging the IP address from the jump server, using the command prompt.
Prerequisites
Ensure that you have access to the customer vCenter.
Steps
1. From the vSphere Client home page, go to Home > Hosts and Clusters.
2. Select a data center.
3. Right-click the data center and select New Cluster.
4. Enter a name for the cluster.
5. Select vSphere DRS and vSphere HA cluster features.
6. Click OK.
7. Select the existing cluster or newly created cluster.
8. From the Configure tab, click Configuration > Quickstart.
9. Click Add in the Add hosts card.
10. On the Add hosts page, in the New hosts tab, add the hosts that are not part of the vCenter Server inventory by entering
the IP address or hostname, and the credentials.
11. (Optional) Select the Use the same credentials for all hosts option to reuse the credentials for all added hosts.
12. Click Next.
13. The Host Summary page lists all the hosts to be added to the cluster with related warnings. Review the details and click
Next.
14. On the Ready to complete page, review the IP addresses or FQDN of the added hosts and click Finish.
15. Add the new licenses:
a. Click Menu > Administration
b. In the Administration section, click Licensing.
c. Click Licenses.
d. From the Licenses tab, click Add.
e. Enter or paste the license keys for VMware vSphere and vCenter, one per line, and click Next.
The license key is a 25-character string of letters and digits in the format XXXXX-XXXXX-XXXXX-XXXXX-XXXXX.
You can enter a list of keys in one operation. A new license is created for every license key you enter.
f. On the Edit license names page, rename the new licenses as appropriate and click Next.
g. Optionally, provide an identifying name for each license. Click Next.
h. On the Ready to complete page, review the new licenses and click Finish.
Steps
1. Copy the SDC file to the local datastore on the VMware vSphere ESXi server.
2. Use SSH on the host and type esxcli software vib install -d /vmfs/volumes/datastore1/
sdc-3.x.xxxxx.xx-esx7.x.zip -n scaleio-sdc-esx7.x.
3. Reboot the PowerFlex node.
4. To configure the SDC, generate a new UUID:
NOTE: If the PowerFlex cluster is using an SDC authentication, the newly added SDC reports as disconnected when
added to the system. See Configure an authentication enabled SDC for more information.
A minimum of two logical data networks are supported. Optionally, you can configure four logical data networks. Verify the
number of VIPs configured in the existing setup.
7. Reboot the PowerFlex node.
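The SDC install flow in steps 1 through 3 can be condensed into a hedged command plan; the datastore path and package name reuse the sample values from the text, and the final grep-based check is an assumption based on common esxcli usage:

```shell
# Sketch: install the PowerFlex SDC vib on ESXi and verify it after reboot.
# File names are the sample values from this guide; use your catalog files.
DATASTORE="/vmfs/volumes/datastore1"
SDC_ZIP="sdc-3.x.xxxxx.xx-esx7.x.zip"

PLAN="esxcli software vib install -d ${DATASTORE}/${SDC_ZIP} -n scaleio-sdc-esx7.x
reboot
esxcli software vib list | grep scaleio"
printf '%s\n' "$PLAN"
```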
Steps
1. If using PowerFlex GUI presentation server:
a. Log in to the PowerFlex GUI presentation server.
b. Go to Configuration > SDCs
c. Select the SDC and click Modify > Rename, and rename the new host to the standard naming convention.
For example, ESX-10.234.91.84
2. If using a PowerFlex version prior to 3.5:
a. From the PowerFlex GUI, click Frontend > SDCs and rename the new host to the standard naming convention.
For example, ESX-10.234.91.84
Prerequisites
VMware ESXi must be installed with hosts added to the VMware vCenter.
Steps
1. Log in to the VMware vSphere Client.
2. Click Hosts and Clusters.
3. Locate and select the VMware ESXi host.
4. Select Datastores.
5. Right-click the datastore name, and select Rename.
6. Name the datastore using the DASXX convention, with XX being the node number.
Prerequisites
Apply all VMware ESXi updates before installing or loading hardware drivers.
NOTE: This procedure is required only if the ISO drivers are not at the proper Intelligent Catalog level.
Steps
1. Log in to the VMware vSphere Client.
2. Click Hosts and Clusters.
3. Locate and select the VMware ESXi host that you installed.
4. Select Datastores.
5. Right-click the datastore name and select Browse Files.
6. Select the Upload icon (to upload file to the datastore).
7. Browse to the Intelligent Catalog folder or downloaded current solution Intelligent Catalog files.
8. Select the VMware ESXi patch .zip files according to the current solution Intelligent Catalog and node type and click OK to
upload.
9. Select the driver and vib files according to the current Intelligent Catalog and node type and click OK to upload.
10. Click Hosts and Clusters.
11. Locate the VMware ESXi host, right-click, and select Enter Maintenance Mode.
12. Open an SSH session with the VMware ESXi host using PuTTy or a similar SSH client.
13. Log in as root.
14. Type cd /vmfs/volumes/DASXX, where DASXX is the name of the local datastore that is assigned to the VMware ESXi
server.
15. To display the contents of the directory, type ls.
16. If the directory contains vib files, type esxcli software vib install -v /vmfs/volumes/DASXX/
patchname.vib to install the vib. These vib files can be individual drivers that are absent in the larger patch cluster
and must be installed separately.
17. Perform either of the following depending on the VMware ESXi version:
a. For VMware ESXi 7.0, type esxcli software vib update -d /vmfs/volumes/DASXX/VMware-
ESXi-7.0<version>-depot.zip.
b. For VMware ESXi 6.x, type esxcli software vib install -d /vmfs/volumes/DASXX/<ESXI-patch-
file>.zip.
18. Type reboot to reboot the host.
19. Once the host completes rebooting, open an SSH session with the VMware ESXi host, and type esxcli software vib
list | grep net-i to verify that the correct drivers are loaded.
20. Select the host and click Exit Maintenance Mode.
21. Update the test plan and host tracker with the results.
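Steps 14 through 19 can be condensed into a hedged command plan; DASXX and the file names are the guide's own placeholders:

```shell
# Sketch: ESXi patch/driver update sequence from the local datastore.
# DASXX and the file names are placeholders from this guide.
DS="/vmfs/volumes/DASXX"

PLAN="cd ${DS}
esxcli software vib install -v ${DS}/patchname.vib
esxcli software vib update -d ${DS}/VMware-ESXi-7.0-depot.zip
reboot
esxcli software vib list | grep net-i"
printf '%s\n' "$PLAN"
```

Run the commands one at a time with the host in maintenance mode, as described in the steps above.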
Steps
1. Log in to the VMware vSphere Client.
2. Click Home and select Networking.
3. Right-click the appropriate dvswitch and select Add and Manage Hosts:
NOTE: If the ESXi host participates in NSX, skip this step to keep management on the standard switch.
● For non-bonded NIC port design, right-click dvswitch0 and select Add and Manage Hosts.
● For static bonding and LACP bonding NIC port design, right-click cust_dvswitch and select Add and Manage Hosts.
The cust_dvswitch consists of all the management networks, for example, flex-node-mgmt-<vlanid> and flex-vmotion-
<vlanid>.
a. Select Add hosts and click Next.
b. Click +New Hosts, select the installed node and click OK.
c. Click Next.
d. From Manage Physical Adapters, select the VMNICs and click Next.
e. Select vmnic4 and click Assign Uplink.
f. Select lag-0 and click OK.
g. Select vmnic6 and click Assign Uplink.
h. Select lag-1, click OK and click Next.
i. From Manage VMkernel adapters, select vmk0.
j. Click Assign port group.
k. Select the hypervisor management and click OK.
l. Click Next > Next.
m. Click Finish.
4. Click Home. Select Hosts and Clusters and select the new host.
5. Click the Configure tab, then under Networking, select VMkernel adapters.
NOTE: If the VMware ESXi host participates in NSX, skip this step to keep vMotion on the standard switch.
8. Right-click the appropriate dvswitch and click Add and Manage Hosts.
● For non-bonded NIC port design, right-click dvswitch1, and select Add and Manage Hosts.
● For static bonding NIC port design, right-click flex_dvswitch, select Add and Manage Hosts, and add flex-data1-
<vlanid> and flex-data2-<vlanid>.
● For an LACP bonding NIC port design, right-click flex_dvswitch, select Add and Manage Hosts, and add flex-data1-
<vlanid>, flex-data2-<vlanid>, flex-data3-<vlanid>, and flex-data4-<vlanid>. A minimum of two logical data networks are
supported. Optionally, you can configure four logical data networks.
9. Select Add hosts, and click Next.
a. Click +New hosts, select the installed node, and click OK.
b. Click Next.
c. From Manage physical adapters, select vmnic5 and vmnic7 and click Assign Uplink.
d. Select lag-1, and click OK.
e. Click Next.
f. Select the host and click +New adapter.
g. From Select an existing network, click Browse. Select flex-data1-<vlanid>, and click OK. Click Next > Next.
h. Select Use static IPv4 settings, and type the ESXi PowerFlex Data1 Kernel IP Address and ESXi
PowerFlex Data1 Kernel Subnet Mask values that are recorded in the Logical Configuration Survey in the IPv4
address and Subnet Mask fields, respectively.
i. Click Next, then click Finish.
j. Select vmk2, and click Edit adapter.
k. Select NIC settings.
l. Set the MTU to 9000 and click OK.
m. Click Next > Next > Finish.
10. For LACP bonding NIC port design: Repeat steps 8 and 9 for flex-data3-<vlanid> and flex-data4-<vlanid>. A minimum of two
logical data networks are supported. Optionally, you can configure four logical data networks.
11. Click Home > Networking. For non-bonded NIC port design, right-click dvswitch2 and click Add and Manage Hosts.
12. Select Add Hosts, and click Next.
a. Click +New hosts, select the installed node, and click OK.
b. Click Next.
c. Select Manage physical adapters, then Manage VMkernel adapters and click Next.
d. Select vmnic1 and click Assign Uplink.
e. Select lag-0 and click OK.
f. Click Next.
g. Select the host and click +New adapter.
h. From Select an existing network, click Browse.
i. Select flex-data2-<vlanid> and click OK.
j. Click Next > Next.
k. Select Use static IPv4 settings and type the ESXi PowerFlex Data 2 Kernel IP Address and ESXi PowerFlex Data
2 Kernel Subnet Mask.
l. Click Next > Finish.
m. Select vmkx, and click Edit adapter.
n. Select NIC Settings.
o. Set the MTU to 1500 and click OK.
p. Click Next > Next > Finish.
13. Repeat steps 11 and 12 to add flex-data3-<vlanid> and flex-data4-<vlanid> in an LACP bonding NIC port design. A minimum
of two logical data networks are supported. Optionally, you can configure four logical data networks.
Prerequisites
Gather the IP addresses of the primary and secondary MDMs.
Steps
1. Open Direct Console User Interface (DCUI) or use SSH to log in to the new hosts.
2. At the command-line interface, run the following commands to ping each of the primary MDM and secondary MDM IP
addresses.
If the ping test fails, you must remediate before continuing.
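The ping tests in step 2 can be sketched with vmkping (available in the ESXi shell); the vmk adapters and MDM IPs below are placeholders, and -d with -s 8972 additionally validates a 9000-byte MTU path:

```shell
# Sketch: build jumbo-frame ping tests from each data VMkernel adapter to
# each MDM data IP. Adapter names and IPs are placeholders.
VMKS="vmk1 vmk2"
MDM_IPS="192.168.152.10 192.168.160.10"   # example primary/secondary MDM data IPs

PLAN=""
for vmk in $VMKS; do
  for ip in $MDM_IPS; do
    # -d: do not fragment; -s 8972: 9000-byte MTU minus IP/ICMP headers
    PLAN="${PLAN}vmkping -I ${vmk} -d -s 8972 ${ip}
"
  done
done
printf '%s' "$PLAN"
```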
NOTE: A minimum of two logical data networks are supported. Optionally, you can configure four logical data networks.
Run the following commands for LACP bonding NIC port design. x is the VMkernel adapter number in vmkx.
NOTE: After several host restarts, check the access switches for error or disabled states by running the following
commands:
3. Optional: If errors appear in the counters of any interfaces, type the following and check the counters again.
Output from a Cisco Nexus switch:
4. Optional: If there are still errors on the counter, perform the following to see if the errors are old and irrelevant or new and
relevant.
a. Optional: Type # show interface | inc flapped.
Sample output:
Last link flapped 1d02h
b. Type # show logging logfile | inc failure.
Sample output:
Dec 12 12:34:50.151 access-a %ETHPORT-5-IF_DOWN_LINK_FAILURE: Interface Ethernet1/4/3
is down (Link failure)
5. Optional: Check and reset physical connections, bounce and reset ports, and clear counters until errors stop occurring.
Do not activate new nodes until all errors are resolved and no new errors appear.
Steps
1. Log in to VMware vCSA HTML Client using the credentials.
2. Go to VMs and templates inventory or Administration > vCenter Server Extensions > vSphere ESX Agent Manager
> VMs to view the VMs.
The VMs are in the vCLS folder once the host is added to the cluster.
3. Right-click the VM and click Migrate.
4. In the Migrate dialog box, click Yes.
5. On the Select a migration type page, select Change storage only and click Next.
6. On the Select storage page, select the PowerFlex volumes for the hyperconverged or ESXi-based compute-only node that
will be mapped after the PowerFlex deployment.
NOTE: The volume names are powerflex-service-vol-1 and powerflex-service-vol-2. The datastore names are
powerflex-esxclustershortname-ds1 and powerflex-esxclustershortname-ds2. If these volumes or datastores are
not present, create them to migrate the vCLS VMs.
Steps
1. Update the inventory for vCenter (vCSA), switches, gateway VM, and nodes:
a. Click Resources on the home screen.
b. Select the vCenter, switches (applicable only for full networking), gateway VM, and newly added nodes.
c. Click Run Inventory.
d. Click Close.
e. Wait for the job in progress to complete.
2. Update Services details:
a. Click Services.
b. Choose the service on which the new node is expanded and click View Details.
c. On the Services details screen, choose Update Service Details.
d. Choose the credentials for the node and SVM and click Next.
e. On the Inventory Summary page, verify that the newly added nodes appear under Physical Node, and click Next.
f. On the Summary page, verify the details and click Finish.
Prerequisites
Ensure that the VMware vSphere vCenter Server and the VMware vSphere Client are accessible.
Steps
1. Log in to the VMware vSphere Client.
2. Click Networking.
Prerequisites
Both Cisco Nexus access switch ports for the compute VMware ESXi hosts are configured as trunk ports. These ports are
configured as LACP-enabled after the physical adapter is removed from each ESXi host.
WARNING: As the VMK0 (ESXi management) is not configured on cust_dvswitch, both the vmnics are first
migrated to the LAGs simultaneously and then the port channel is configured. Data connectivity to PowerFlex is
lost until the port channels are brought online with both vmnic interfaces connected to LAGs.
Steps
1. Log in to the VMware vSphere Client.
2. Look at VMware vCenter and physical switches to ensure that both ports across all hosts are up.
3. For each compute VMware ESXi host, record the physical switch ports to which vmnic5 (switch-B) and vmnic7 (switch-A)
are connected.
a. Click Home, then select Hosts and Clusters and expand the compute cluster.
b. Select the first compute ESXi host in left pane, and then select Configure tab in right pane.
c. Select Virtual switches under Networking.
d. Expand cust_dvswitch.
e. Expand Uplink1, click the ellipsis (…) for vmnic7, and select View Settings.
f. Click LLDP tab.
g. Record the Port ID (switch port) and System Name (switch).
h. Repeat steps 3e through 3g for vmnic5 on Uplink 2.
4. Configure LAG (LACP) on cust_dvswitch within VMware vCenter Server:
a. Click Home, then select Networking.
b. Expand the compute cluster and click cust_dvswitch > Configure > LACP.
c. Click +New to open wizard.
d. Verify that the name is lag1.
e. Verify that the number of ports is 2.
interface port-channel40
description to flex-compute-esxi-host01
switchport
switchport mode trunk
switchport trunk allowed vlan 151,152,153,154
spanning-tree port type edge trunk
spanning-tree bpduguard enable
speed 25000
no lacp suspend-individual
vpc 40
7. Configure channel-group (LACP) on switch-A access port (vmnic5) for each compute VMware ESXi host.
The following switch port configuration is an example of a single compute VMware ESXi host.
a. Open an SSH session to switch-A using PuTTY or a similar SSH client.
b. Create the port on switch-A as follows:
int e1/1/1
description to flex-compute-esxi-host01 – vmnic5
switchport
switchport mode trunk
switchport trunk allowed vlan 151,152,153,154
spanning-tree port type edge trunk
spanning-tree bpduguard enable
spanning-tree guard root
speed 25000
channel-group 40 mode active
interface port-channel40
description to flex-compute-esxi-host01
switchport
switchport mode trunk
switchport trunk allowed vlan 151,152,153,154
spanning-tree port type edge trunk
spanning-tree bpduguard enable
speed 25000
no lacp suspend-individual
vpc 40
9. Configure channel-group (LACP) on switch-B access port (vmnic7) for each compute VMware ESXi host.
The following switch port configuration is an example of a single compute VMware ESXi host.
a. Open an SSH session to switch-B using PuTTY or a similar SSH client.
b. Create the port on switch-B as follows:
int e1/1/1
description to flex-compute-esxi-host01 – vmnic7
switchport
switchport mode trunk
switchport trunk allowed vlan 151,152,153,154
spanning-tree port type edge trunk
spanning-tree bpduguard enable
spanning-tree guard root
speed 25000
channel-group 40 mode active
10. Update the teaming and failover policy to route based on IP hash for each port group within cust_dvswitch:
a. Click Home and select Networking.
b. Expand cust_dvswitch to view all port groups.
c. Right-click flex-data-01 and select Edit Settings.
d. Click Teaming and failover.
e. Change Load Balancing mode to Route based on IP hash.
f. Repeat steps 10b to 10e for each remaining port group.
Prerequisites
NOTE: Before adding a VMware NSX-T service using PowerFlex Manager, either the customer or VMware services must
add the new PowerFlex node to NSX-T Data Center using the NSX-T UI.
Consider the following:
● Before adding this service or updating service details in PowerFlex Manager, verify that NSX-T Data Center is configured
on the PowerFlex hyperconverged or compute-only nodes.
● If the transport nodes (PowerFlex cluster) are configured with NSX-T, you cannot replace field units using PowerFlex
Manager. You must add the node manually by following either of these procedures, depending on the node type:
○ Performing a PowerFlex hyperconverged node expansion
○ Performing a PowerFlex compute-only node expansion
Steps
1. Log in to PowerFlex Manager.
2. If NSX-T Data Center 3.0 or higher is deployed and is using VDS (not N-VDS), then add the transport network:
a. From Getting Started, click Define Networks.
b. Click + Define and do the following:
Configure Static IP Address Ranges: Select the Configure Static IP Address Ranges check box, and type
the starting and ending IP addresses of the transport network IP pool.
3. From Getting Started, click Add Existing Service and do the following:
a. On the Welcome page, click Next.
b. On the Service Information page, enter the following details:
c. Click Next.
d. On the Network Information page, select Full Network Automation, and click Next.
e. On the Cluster Information page, enter the following details:
f. Click Next.
g. On the OS Credentials page, select the OS credentials for each node, and click Next.
h. On the Inventory Summary page, review the summary and click Next.
i. On the Network Mapping page, verify that the networks are aligned with the correct dvSwitch.
j. On the Summary page, review the summary and click Finish.
4. Verify PowerFlex Manager recognizes NSX-T is configured on the nodes:
a. Click Services.
b. Select the hyperconverged or compute-only service.
c. Verify that a banner appears under the Service Details tab stating that NSX-T is configured on a node and is
preventing some features from being used. If you do not see this banner, verify that you selected the correct
service and that NSX-T is configured on the hyperconverged or compute-only nodes.
Prerequisites
● Ensure that the required information is captured in the Workbook and stored in VAST.
● Prepare the servers by updating all servers to the correct Intelligent Catalog firmware releases and configuring BIOS
settings.
● Ensure that the iDRAC network is configured.
● Ensure that the Windows operating system ISO is downloaded to jump host.
NOTE: As of PowerFlex Manager 3.8, the deployment of Windows compute-only nodes is not supported. To manually install
Windows compute-only nodes with the LACP bonding NIC port design without PowerFlex Manager, complete the steps in the
following sections.
Steps
1. To configure the aggregation switch out-of-band management (mgmt0) connection to the management switch, type:
interface <interface>
switchport access vlan 101
no shutdown
b. The port channel interface must be either vPC configured for Cisco Nexus switches or VLT configured for Dell EMC
PowerSwitch switches. To verify the status, type show vpc <VPC number> on Cisco Nexus switches, or
show vlt <VLT domain id> on Dell EMC PowerSwitch switches.
3. Type switch# copy running-config startup-config to save the configuration on all switches.
Steps
1. Connect to the iDRAC, and launch a virtual remote console.
2. Click Menu > Virtual Media > Connect Virtual Media, and then click Map CD/DVD.
3. Click Choose File and browse and select the customer provided Windows Server 2016 or 2019 DVD ISO and click Open.
4. Click Map Device.
5. Click Close.
6. Click Boot and select Virtual CD/DVD/ISO. Click Yes.
7. Click Power > Reset System (warm boot) to reboot the server.
The host boots from the attached Windows Server 2016 or 2019 virtual media.
Steps
1. Select the desired values for the Windows Setup page, and click Next.
NOTE: The default values are US-based settings.
a. Download the Dell EMC Server Update Utility, Windows 64 bit Format, v.x.x.x ISO file from the Dell
Technologies Support site.
b. Map the driver CD/DVD/ISO through iDRAC, if the installation requires it.
c. Connect to the server as the administrator.
d. Open and run the mapped disk with elevated permission.
e. Select Install, and click Next.
f. Select I accept the license terms and click Next.
g. Select the check box beside the device drives, and click Next.
h. Click Install, and Finish.
i. Close the window to exit.
Steps
1. Open iDRAC console and log in to the Windows Server 2016 or 2019 using admin credentials.
2. Press Windows+R and enter ncpa.cpl.
3. Select the appropriate management NIC.
4. Perform the following for the Management Network:
a. Select Properties.
b. Click Configure....
c. Click the Advanced tab, and select the VLAN ID option from the Property column.
d. Enter the VLAN ID in the Value column.
e. Click OK and exit.
f. Right-click the appropriate NIC, click Properties, select Internet Protocol Version 4 (TCP/IPv4), and assign the
static IP address of the server.
5. Open the PowerShell console, and perform the following procedures:
Management network (optional, if the IPs are not assigned manually as specified in Step 4):
a. Type Add-NetLbfoTeamNic -Team "flex-node-mgmt-<105>" to map the VLAN to the interface.
NOTE: Assign the IP address according to the Workbook.
b. Type New-NetIPAddress -InterfaceAlias 'flex-node-mgmt-<105>' -IPAddress 'IP' -PrefixLength 'Prefix number' -DefaultGateway 'Gateway IP' to assign the IP address to the interface.
Data network:
NOTE: Assign the IP address according to the Workbook.
a. Type New-NetIPAddress -InterfaceAlias 'Interface name' -IPAddress 'IP' -PrefixLength 'prefix' (selecting NIC2) to create the Data1 network.
b. Type New-NetIPAddress -InterfaceAlias 'Interface name' -IPAddress 'IP' -PrefixLength 'prefix' (selecting Slot4 Port2) to create the Data2 network.
Where Interface name is the NIC assigned for data1 or data2, and IP is the data1 or data2 IP address.
The prefix is the CIDR notation. For example, if the network mask is 255.255.255.0, then the CIDR notation
(prefix) is 24.
6. Applicable for an LACP NIC port bonding design: Modify Team0 settings and create a VLAN:
Edit Team0 settings:
a. Open Server Manager, and click Local Server > NIC Teaming.
b. In the NIC Teaming window, click Tasks > New Team.
c. Enter the name Team0 and select the appropriate network adapters.
d. Expand Additional properties, and modify as follows:
● Teaming mode: LACP
● Load balancing mode: Dynamic
● Standby adapter: None (all adapters active)
e. Click OK to save the changes.
f. Select Team0 from the Teams list.
g. From Adapters and Interfaces, click the Team Interfaces tab.
Create a VLAN in Team0:
a. Click Tasks and click Add Interface.
b. In the New team interface dialog box, type the name General Purpose LAN.
c. Assign VLAN ID (200) to the new interface in the VLAN field, and click OK.
d. From the network management console, right-click the newly created network interface controller, select
Properties > Internet Protocol Version 4 (TCP/IPv4), and click Properties.
e. Select the Assign the static IP address check box.
7. Remove the IPs from the data1 and data2 network adapters.
8. Create Team1 and VLAN:
9. Repeat step 8 for data2, data3 (if required), and data4 (if required).
A minimum of two logical data networks are supported. Optionally, you can configure four logical data networks. Verify the
number of logical data networks configured in an existing setup and configure the logical data networks accordingly.
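The prefix arithmetic in step 5 (converting a dotted-decimal network mask to CIDR notation) can be checked with a short script. This is an illustrative helper only, not part of the Dell tooling:

```python
import ipaddress

def netmask_to_prefix(netmask: str) -> int:
    """Return the CIDR prefix length for a dotted-decimal network mask."""
    # ipaddress accepts a netmask in place of a prefix length.
    return ipaddress.IPv4Network(f"0.0.0.0/{netmask}").prefixlen

# Example from the Workbook guidance above:
print(netmask_to_prefix("255.255.255.0"))  # prints 24
```

The resulting number is what the procedure calls 'prefix' in the New-NetIPAddress -PrefixLength parameter.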
Steps
1. Windows Server 2016 or 2019:
a. Press Windows key+R on your keyboard, type control and click OK.
The All Control Panel Items window opens.
b. Click System and Security > Windows Firewall.
c. Click Turn Windows Defender Firewall on or off.
d. Turn off Windows Firewall for both private and public network settings, and click OK.
2. Windows PowerShell:
a. Click Start, type Windows PowerShell.
b. Right-click Windows PowerShell, click More > Run as Administrator.
c. Type Set-NetFirewallProfile -Profile Domain,Public,Private -Enabled False in the Windows
PowerShell console.
Steps
1. Click Start > Server Manager.
2. In Server Manager, on the Manage menu, click Add Roles and Features.
3. On the Before you begin page, click Next.
4. On the Select installation type page, select Role-based or feature-based installation, and click Next.
5. On the Select destination server page, click Select a server from the server pool, and click Next.
6. On the Select server roles page, select Hyper-V.
An Add Roles and Features Wizard page opens, prompting you to add features to Hyper-V.
7. Click Add Features. On the Features page, click Next.
8. Retain the default selections/locations on the following pages, and click Next:
● Create Virtual Switches
● Virtual Machine Migration
● Default stores
9. On the Confirm installation selections page, verify your selections, and click Restart the destination server
automatically if required, and click Install.
10. Click Yes to confirm automatic restart.
Steps
1. Click Start, type Windows PowerShell.
2. Right-click Windows PowerShell, and select Run as Administrator.
Steps
1. Go to Start > Run.
2. Enter SystemPropertiesRemote.exe and click OK.
3. Select Allow remote connection to this computer.
4. Click Apply > OK.
Steps
1. Download the EMC-ScaleIO-sdc*.msi and LIA software.
2. Double-click EMC-ScaleIO LIA setup.
3. Accept the terms in the license agreement, and click Install.
4. Click Finish.
5. Configure the Windows-based compute-only node depending on the MDM VIP availability:
● If you know the MDM VIPs before installing the SDC component:
a. Type msiexec /i <SDC_PATH>.msi MDM_IP=<LIST_VIP_MDM_IPS>, where <SDC_PATH> is the path where
the SDC installation package is located. The <LIST_VIP_MDM_IPS> is a comma-separated list of the MDM IP
addresses or the virtual IP address of the MDM.
b. Accept the terms in the license agreement, and click Install.
c. Click Finish.
d. Permit the Windows server reboot to load the SDC driver on the server.
● If you do not know the MDM VIPs before installing the SDC component:
a. Click EMC-ScaleIO SDC setup.
b. Accept the terms in the license agreement, and click Install.
c. Click Finish.
d. Type C:\Program Files\EMC\scaleio\sdc\bin>drv_cfg.exe --add_mdm --ip <VIPs_MDMs> to
configure the node in PowerFlex.
● Applicable only if the existing network is an LACP bonding NIC:
a. Add all MDM VIPs by running C:\Program Files\EMC\scaleio\sdc\bin>drv_cfg.exe --mod_mdm_ip
--ip <existing MDM VIP> --new_mdm_ip <all 4 MDM VIPs>.
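The comma-separated MDM IP list passed to the msiexec MDM_IP property (and to drv_cfg.exe) can be assembled programmatically. A minimal sketch; the IP addresses and the package file name below are placeholders, not values from a real deployment:

```python
# Hypothetical MDM virtual IPs recorded in the Workbook (placeholders).
mdm_vips = ["192.168.150.10", "192.168.151.10", "192.168.152.10", "192.168.153.10"]

# msiexec expects the list as a single comma-separated MDM_IP value.
mdm_ip_arg = ",".join(mdm_vips)
install_cmd = f"msiexec /i EMC-ScaleIO-sdc.msi MDM_IP={mdm_ip_arg}"

print(install_cmd)
```

The same joined string works for the --ip and --new_mdm_ip arguments of drv_cfg.exe.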
Steps
1. Log in to the presentation server at https://<presentation serverip>:8443.
2. In the left pane, click SDCs.
3. In the right pane, select the Windows host.
4. Select the Windows host, click Mapping, and then select Map from the drop-down list.
5. To open the disk management console, perform the following steps:
a. Press Windows+R.
b. Enter diskmgmt.msc and press Enter.
6. Rescan the disk and set the disks online:
a. Click Action > Rescan Disks.
b. Right-click each Offline disk, and click Online.
7. Right-click each disk and select Initialize disk.
After initialization, the disks appear online.
8. Right-click Unallocated and select New Simple Volume.
9. Select default and click Next.
10. Assign the drive letter.
11. Select default and click Next.
12. Click Finish.
Steps
1. Open the PowerFlex GUI, click Frontend, and select SDC.
2. Windows-based compute-only nodes are listed as SDCs if configured correctly.
3. Click Frontend again, and select Volumes. Right-click the volume, and click Map.
4. Select the Windows-based compute-only nodes, and then click Map.
5. Log in to the Windows Server compute-only node.
6. To open the disk management console, perform the following steps:
a. Press Windows+R.
b. Enter diskmgmt.msc and press Enter.
7. Rescan the disk and set the disks online:
a. Click Action > Rescan Disks.
b. Right-click each Offline disk, and click Online.
8. Right-click each disk and select Initialize disk.
After initialization, the disks appear online.
Steps
1. Using the administrator credentials, log in to the target Windows Server 2016.
2. When the main desktop view appears, click Start and type Run.
3. Type slui 3 and press Enter.
4. Enter the customer provided Product key and click Next.
If the key is valid, Windows Server 2016 is successfully activated.
If the key is invalid, verify that the Product key entered is correct and try the procedure again.
NOTE: If the key is still invalid, try activating without an Internet connection.
Steps
1. Using the administrator credentials, log in to the target Windows Server VM (jump server).
2. When the main desktop view appears, click Start and select Command Prompt (Admin) from the option list.
3. At the command prompt, use the slmgr command to change the current product key to the newly entered key.
4. At the command prompt, use the slui command to initiate the phone activation wizard. For example:
C:\Windows\System32> slui 4.
5. From the drop-down menu, select the geographic location that you are calling and click Next.
6. Call the displayed number, and follow the automated prompts.
After the process is completed, the system provides a confirmation ID.
7. Click Enter Confirmation ID and enter the codes that are provided. Click Activate Windows.
Successful activation can be validated using the slmgr command.
Part IX: Adding VMware NSX-T Edge nodes
Use this section to add additional VMware NSX-T Edge nodes to expand an existing VMware NSX-T environment.
This section covers the following procedures:
● Configure the PERC Mini Controller for data protection on the VMware NSX-T Edge node (RAID1+0 local storage only)
● Install and configure the VMware ESXi
● Add VMware ESXi host to the VMware vCenter server
● Add the new VMware ESXi local datastore to VMware vSphere host and rename the operating system datastore (RAID1+0
local storage only)
● Claim local disk drives to the vSAN cluster and rename the operating system datastore (vSAN only)
● Configure NTP and scratch partition settings
● Add and configure the VMware NSX-T Edge node to edge_dvswitch0 and edge_dvswitch1
● Patch and install drivers for VMware ESXi and updating the VMware settings
Before adding a VMware NSX-T Edge node, complete the initial set of expansion procedures that are common to all expansion
scenarios, see Performing the initial expansion procedures.
NOTE: For an NSX-T configured transport cluster (hyperconverged service), PowerFlex Manager does not support the
addition or removal of nodes. You must remove the NSX-T compute service, perform the operation manually, and then add
the service back.
After adding a VMware NSX-T Edge node, see Completing the expansion.
Steps
1. Log in to the VMware vSphere Client.
2. Click Hosts and Clusters.
3. Select the edge cluster and click Configure > vSAN > Services.
4. If vSAN is turned off, proceed to configure the PERC Mini Controller on the VMware NSX-T Edge nodes.
Prerequisites
Consider the following:
● Before configuring the PERC Mini Controller for data protection, you must have a configured and reachable iDRAC.
● Verify if vSAN is turned off.
Steps
1. Launch virtual console and from the Boot option in the menu, select BIOS Setup to enter system BIOS.
2. Power cycle the server and wait for the Boot option to appear.
Prerequisites
Verify that the customer VMware ESXi ISO is available and is located in the Intelligent Catalog code directory.
Steps
1. Configure the iDRAC:
a. Connect to the iDRAC interface, and from the Dashboard, click Launch Virtual Console.
b. Select Connect Virtual Media > Map CD/DVD.
c. Browse to the folder where the ISO file is saved, select it, and click Open.
d. Click Map Device.
e. Click Boot > Virtual CD/DVD/ISO.
f. Click Yes to confirm the boot action.
g. Click Power > Reset System (warm boot).
h. Click Yes to confirm power action.
2. Install VMware ESXi:
a. On the VMware ESXi installer screen, press Enter to continue.
b. Press F11 to accept the license agreement.
c. Under Local, select DELLBOSS VD as the installation location and press Enter.
d. Select US Default as the keyboard layout and press Enter to continue.
e. At the prompt, type the customer-provided root password or use the default password VMwar3!!. Press Enter.
f. When the Confirm Install screen is displayed, press F11.
g. Press Enter to reboot the node.
3. Configure the host:
a. Press F2 to access the System Customization menu.
b. Enter the password for the root user.
c. Go to Direct Console User Interface (DCUI) > Configure Management Network.
d. Set the following options under Configure Management Network:
● Network Adapters: Select vmnic0 and vmnic2.
● VLAN: See Workbook for VLAN. The standard VLAN is 105.
● IPv4 Configuration: Set static IPv4 address and network configuration. See Workbook for the IPv4 address,
subnet mask, and the default gateway.
● DNS Configuration: See Workbook for the primary DNS server and alternate DNS server.
○ Custom DNS Suffixes: See Workbook.
● IPv6 Configuration: Disable IPv6.
e. Press ESC to return to DCUI.
f. Type Y to commit the changes and the node restarts.
4. Use the command line to set the IP hash:
a. From the DCUI, press F2 to customize the system.
b. Enter the password for the root user.
c. Select Troubleshooting Options and press Enter.
d. From the Troubleshooting Mode Options menu, enable the following:
● ESXi Shell
● Enable SSH
e. Press Enter to enable the service.
f. Press <Alt>+F1 and log in.
g. To enable the VMware ESXi host to work on the port channel, type the following commands:
esxcli network vswitch standard policy failover set -v vSwitch0 -l iphash
esxcli network vswitch standard portgroup policy failover set -p "Management Network"
-l iphash
Prerequisites
Ensure that you have access to the VMware vSphere Client and the VMware NSX-T Edge cluster is already created.
Steps
1. Log in to the VMware vSphere Client.
2. Click the home icon at the top of the screen and select Hosts and Clusters.
3. Expand PowerFlex Customer-Datacenter and select the PFNC cluster.
4. Right-click PFNC and click Add Host....
5. In the Host name or IP address field, type the FQDN of the new host and click Next.
6. Type the root username and password for the host and click Next.
7. At the Host Summary screen, click Next.
8. Verify the summary for accuracy and click Next.
9. Click Finish to add the host to the cluster.
10. Verify that the VMware ESXi edge node is added.
11. Right-click the VMware ESXi edge node and select Exit Maintenance Mode.
Prerequisites
Ensure that you have access to the VMware vSphere Client.
Steps
1. Log in to the VMware vSphere Client.
2. Click the home icon at the top of the screen and select Hosts and Clusters.
3. Expand PowerFlex Customer-Datacenter and select PFNC cluster.
4. Rename the local operating system datastore to BOSS card:
a. Select an NSX-T Edge ESXi host.
b. Click Datastores.
c. Right-click the smaller size datastore (OS) and click Rename.
d. To name the datastore, type PFNC-<nsx-t edge host short name>-DASOS.
5. Right-click the third NSX Edge ESXi server and select Storage > New Datastore to open the wizard. Perform the
following:
a. Verify that VMFS is selected and click Next.
b. Name the datastore using PFNC_DAS01.
c. Click the LUN that has disks created in RAID 10.
d. Click Next > Finish.
6. Repeat steps 1 through 5 for the remaining VMware NSX-T Edge nodes.
Prerequisites
● Ensure that you have access to the VMware vCenter Client.
● Ensure that vSAN is enabled for the VMware NSX-T Edge vSphere cluster.
Steps
1. Log in to the VMware vSphere Client.
2. Click the home icon at the top of the screen and select Hosts and Clusters.
3. Expand PowerFlex Customer-Datacenter and select PFNC cluster.
4. Rename the local operating system datastore for BOSS card:
Prerequisites
Ensure that VMware ESXi is installed on the NSX-T Edge hosts and that the hosts are added to the VMware vCenter Server.
Steps
1. Log in to the VMware vSphere Client.
2. Click the home icon at the top of the screen and select Hosts and Clusters.
3. Expand Datacenter > PFNC.
4. Configure NTP on VMware ESXi NSX-T Edge host as follows:
a. Select a VMware ESXi NSX-T Edge host.
b. Click Configure > System > Time Configuration and click Edit from Network Time Protocol.
c. Select the Enable check box.
d. Enter the NTP servers as recorded in the Workbook. Set the NTP service startup policy as Start and stop with host,
and select Start NTP service.
e. Click OK.
f. Repeat for each controller host.
5. Configure a scratch partition for each NSX-T Edge host as follows:
a. Click Host > Datastores.
b. Select the local PFNC-<controller host name>-DASXX datastores.
c. Click New Folder to create a folder with naming convention locker_<hostname> (use a short hostname and not the
FQDN) and click OK.
d. Select the NSX-T Edge host and click Configure > System > Advanced System Settings.
e. Locate ScratchConfig.ConfiguredScratchLocation and edit the path with the following information:
Prerequisites
Ensure you have access to the management VMware vSphere Client.
Steps
1. Log in to the VMware vSphere Client.
2. Click Networking.
3. Expand PowerFlex Customer - Datacenter.
4. Right-click edge_dvswitch0 and select Add and Manage Hosts.
5. Select Add hosts and click Next.
6. Click +New Hosts, select the installed VMware NSX-T Edge nodes to add, and click OK.
7. Click Next to manage the physical adapters.
8. In Manage Physical Network Adapters, perform the following:
a. Select vmnic0 and click Assign Uplink.
b. Select lag1-0 and click OK.
c. Select vmnic2 and click Assign Uplink.
d. Select lag1-1 and click OK.
e. Click Next.
9. In Manage VMkernel network adapters, perform the following:
a. Select vmk0 and click Assign portgroup.
b. Select pfnc-node-mgmt-105 and click OK.
c. Click Next twice.
10. In the Ready to Complete screen, review the details, and click Finish.
11. If vSAN is required, perform the following steps to create and configure the vSAN VMkernel adapters:
NOTE: The vMotion VMkernel network adapter is not configured by default. Availability depends on the NSX-T Edge
Gateway VM service level.
Prerequisites
Ensure that you have access to the management VMware vSphere Client.
Steps
1. Log in to the VMware vSphere Client.
2. Click Networking.
3. Expand PowerFlex Customer - Datacenter.
4. Right-click edge_dvswitch1 and select Add and Manage Hosts.
5. Select Add hosts and click Next.
6. Click +New Hosts, select the installed VMware NSX-T Edge nodes to add, and click OK.
7. Click Next.
8. In Manage Physical Network Adapters, perform the following:
a. Select vmnic1 and click Assign Uplink.
b. Select Uplink 1 and click OK.
c. Select vmnic3 and click Assign Uplink.
d. Select Uplink 2 and click OK.
e. Select vmnic4 and click Assign Uplink.
f. Select Uplink 3 and click OK.
g. Select vmnic5 and click Assign Uplink.
h. Select Uplink 4 and click OK.
i. Click Next.
9. Click Next > Next.
10. In the Ready to Complete screen, review the details, and click Finish.
Prerequisites
Apply all VMware ESXi updates before installing or loading hardware drivers.
NOTE: This procedure is required only if the ISO drivers are not at the proper Intelligent Catalog level.
Steps
1. Log in to the VMware vSphere Client.
2. Click Hosts and Clusters.
3. Locate and select the VMware ESXi NSX-T Edge host that you installed.
4. Select Datastores.
5. Right-click the datastore name and select Browse Files.
6. Select the Upload icon (to upload file to the datastore).
7. Browse to the Intelligent Catalog folder or downloaded current solution Intelligent Catalog files.
8. Select the VMware ESXi patch .zip files according to the current solution Intelligent Catalog and node type and click OK to
upload.
9. Select the driver and vib files according to the current Intelligent Catalog and node type and click OK to upload.
10. Click Hosts and Clusters.
11. Locate the VMware ESXi host, right-click, and select Enter Maintenance Mode.
12. Open an SSH session with the VMware ESXi host using PuTTY or a similar SSH client.
13. Log in as root.
14. Type cd /vmfs/volumes/PFNC-<controller host name>-DASXX where XX is the name of the local datastore
that is assigned to the VMware ESXi server.
15. To display the contents of the directory, type ls.
16. If the directory contains vib files, type esxcli software vib install -v /vmfs/volumes/PFNC-<controller
host name>-DASXX/patchname.vib to install the vib. These vib files can be individual drivers that are absent in the
larger patch cluster and must be installed separately.
17. Perform either of the following depending on the VMware ESXi version:
a. For VMware ESXi 7.0, type esxcli software vib update -d /vmfs/volumes/DAS<name>/VMware-
ESXi-7.0<version>-depot.zip.
b. For VMware ESXi 6.x, type esxcli software vib install -d /vmfs/volumes/DASXX/<ESXI-patch-
file>.zip
18. Type reboot to reboot the host.
19. Once the host completes rebooting, open an SSH session with the VMware ESXi host, and type esxcli software vib
list | grep net-i to verify that the correct drivers are loaded.
20. Select the host and click Exit Maintenance Mode.
21. Update the test plan and host tracker with the results.
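The driver check in step 19 can also be scripted. The sketch below assumes the whitespace-separated layout of esxcli software vib list output and filters for names containing net-i; the sample output is illustrative, not captured from a real host:

```python
# Illustrative `esxcli software vib list` output (not from a real host).
sample_vib_list = """\
Name           Version                        Vendor  Acceptance Level  Install Date
net-i40en      1.10.6-1OEM.700.1.0.15843807   INT     VMwareCertified   2022-10-01
net-ixgben     1.8.9-1OEM.700.1.0.15843807    INT     VMwareCertified   2022-10-01
lsi-mr3        7.716.03.00-1OEM.700.1.0       BCM     VMwareCertified   2022-10-01
"""

def net_drivers(vib_list_output: str) -> dict:
    """Map vib names containing 'net-i' to their versions."""
    drivers = {}
    for line in vib_list_output.splitlines()[1:]:  # skip the header row
        fields = line.split()
        if fields and "net-i" in fields[0]:
            drivers[fields[0]] = fields[1]
    return drivers

print(net_drivers(sample_vib_list))
```

Comparing the returned versions against the Intelligent Catalog confirms the correct drivers loaded.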
Part X: Completing the expansion
Use this section to complete the expansion of a PowerFlex appliance.
This section covers the following procedures to complete the expansion of a PowerFlex appliance:
● Update VMware ESXi settings
● Updating the rebuild and rebalance settings
● Performance tuning
● Updating the Storage Data Client parameters
● Post-installation tasks
Chapter 61: Update VMware ESXi settings
Update the VMware settings for NTP, SNMP, system log, shell warnings, advanced settings, and power management settings
on VMware ESXi hosts, according to the Workbook.
Prerequisites
Verify that VMware ESXi is installed and hosts are added to VMware vCenter.
Steps
1. Obtain the NTP server IP addresses from the Workbook or follow these steps to obtain the IP addresses from VMware
vCenter:
a. Log in to the VMware vSphere Client.
b. Select one of the VMware ESXi PowerFlex nodes from the existing cluster and click Configure.
c. Select and expand the System option.
d. Select Time Configuration.
e. Record the NTP server IP address.
2. Select each of the new expansion PowerFlex nodes and click Configure.
3. Select System.
4. Select Time Configuration.
5. Click Edit to edit Network Time Protocol.
6. Select Use Network Time Protocol to enable the NTP client.
7. Enter the IP address of the NTP servers in NTP Servers.
8. Click NTP Service Startup Policy > Start and Stop with Host.
9. Click Start NTP Service and click OK.
10. Suppress the SSH warning and complete the following tasks:
a. Select each of the new expansion PowerFlex nodes and click Configure.
b. Click Advanced System Settings > Edit.
c. Filter for SSH and change the value of UserVars.SuppressShellWarning to 1. Click OK.
11. Run the commands in the VMware ESXi shell (Alt-F1 from the DCUI) or through a remote SSH session to the VMware ESXi
host, as follows:
a. Obtain SNMP details such as CommunityString and IPaddressOfTarget by running the following command on one of
the existing VMware ESXi nodes in the cluster.
b. Record the information from Step 11a and use it for Step 11c.
c. To configure SNMP, configure the IP address of the target, according to the Workbook:
d. To configure the system log, configure the log host IP address according to the Workbook:
Chapter 62: Updating the rebuild and rebalance settings
Steps
1. Select the following options to set the network throttling:
a. Log in to PowerFlex GUI presentation server using the primary MDM IP address.
b. From the Configuration tab, click Protection domain and select the protection domain. Click Modify and choose
the Network Throttling option from the menu. From the pop-up window, verify that Unlimited is selected for all the
parameters.
c. Click Apply.
2. Select the following options to set the I/O priority:
a. Log in to the PowerFlex GUI presentation server using the primary MDM IP address.
b. From the Configuration tab, click Storage Pool and select the storage pool. Click Modify and choose the I/O Priority
option from the menu to view the current policy settings.
c. Before an RCM upgrade, set the following policies:
e. Click Apply.
Chapter 63: Performance tuning
Performance tuning for storage VMs (SVM)
After the expansion process, the storage VMs (SVM) must be tuned for better performance.
Steps
1. Using PuTTy, connect to the SVM.
2. Validate and set jumbo frames on each data interface of the SVM:
a. Type cat /etc/sysconfig/network-scripts/ifcfg-eth1 to check that MTU for eth1 is set.
b. Type cat /etc/sysconfig/network-scripts/ifcfg-eth2 to check that MTU for eth2 is set.
c. For LACP bonding NIC port design:
● Type cat /etc/sysconfig/network-scripts/ifcfg-eth3 to check that MTU for eth3 is set.
● Type cat /etc/sysconfig/network-scripts/ifcfg-eth4 to check that MTU for eth4 is set.
NOTE: A minimum of two logical data networks are supported. Optionally, you can configure four logical data
networks.
3. Optional: If the file does not contain MTU=9000, type the following commands:
echo 'MTU=9000' >> /etc/sysconfig/network-scripts/ifcfg-eth1
echo 'MTU=9000' >> /etc/sysconfig/network-scripts/ifcfg-eth2
4. Apply the settings to the SVM. For PowerFlex versions prior to 3.5 on PowerFlex R640/R740xd/R840 nodes:
a. Log in to the PowerFlex GUI.
b. Click Backend, and locate the SDS.
c. Right-click the SDS, and select Maintenance Mode.
d. Click OK > Close.
e. Return to the PuTTY session on the SDS and type reboot.
Steps
1. Using PuTTY, connect to the PowerFlex storage-only nodes.
2. Validate and set jumbo frames on each PowerFlex data interface.
a. Verify that MTU for data1 is set by entering:
cat /etc/sysconfig/network-scripts/ifcfg-p2p2
cat /etc/sysconfig/network-scripts/ifcfg-p1p2
cat /etc/sysconfig/network-scripts/ifcfg-p1p1
cat /etc/sysconfig/network-scripts/ifcfg-p2p1
4. Set the I/O scheduler of all flash devices on the embedded operating system node (caching, all-SSD, or hybrid configurations
only).
a. Enter:
chmod +x /etc/rc.d/rc.local
b. Enter lsblk -do NAME,TYPE,ROTA to gather the list of disk devices in the system.
c. Note each NAME line that contains a 0 in the ROTA column and disk in the TYPE column (these are the flash devices).
d. Edit the file /etc/rc.local, and insert a line for each device that is recorded in Step 4c.
For example:
vi /etc/rc.local
echo "noop" > /sys/block/sda/queue/scheduler
echo "noop" > /sys/block/sdb/queue/scheduler
echo "noop" > /sys/block/sdc/queue/scheduler
[…]
echo "noop" > /sys/block/sdz/queue/scheduler
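The per-device lines above can be generated from the lsblk output in Step 4b rather than typed by hand. A minimal sketch, assuming the NAME,TYPE,ROTA column layout shown there (the flash_disks helper is hypothetical):

```shell
# Print the names of non-rotational disks (TYPE=disk, ROTA=0) from
# `lsblk -do NAME,TYPE,ROTA` output piped on stdin, skipping the header line
flash_disks() {
  awk 'NR > 1 && $2 == "disk" && $3 == 0 { print $1 }'
}

# Generate one rc.local line per flash device:
# lsblk -do NAME,TYPE,ROTA | flash_disks | while read -r dev; do
#   echo "echo \"noop\" > /sys/block/$dev/queue/scheduler"
# done
```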
Tuning PowerFlex
Use this procedure to apply certain global performance settings after the PowerFlex nodes are added and tuned successfully.
Steps
1. Using PuTTY, connect to the primary MDM.
2. Enter:
Updating the storage data client parameters
(VMware ESXi 6.x)
To complete the expansion process for VMware ESXi 6.x, update the storage data client (SDC) parameters.
Steps
1. Log in to the VMware vSphere Client.
2. Click Home, and click Inventories > VxFlex OS.
3. Click Advanced Tasks > Update SDC Parameters, and follow the on-screen instructions to complete the procedure.
4. Verify that the SDC parameters are updated by typing the following on each VMware ESXi host:
cat /etc/vmware/esx.conf | grep scini | grep -i mdm
Post-installation tasks
Use these procedures after the installation process.
PowerFlex Manager can receive an SNMPv2 trap and forward it as an SNMPv3 trap.
SNMP trap forwarding configuration supports multiple forwarding destinations. If you provide more than one destination, all
traps coming from all devices are forwarded to all configured destinations in the appropriate format.
PowerFlex Manager stores up to 5 GB of SNMP alerts. Once this threshold is exceeded, PowerFlex Manager automatically
purges the oldest data to free up space.
For SNMPv2 traps to be sent from a device to PowerFlex Manager, you must provide PowerFlex Manager with the community
strings on which the devices are sending the traps. If during resource discovery you selected to have PowerFlex Manager
automatically configure iDRAC nodes to send alerts to PowerFlex Manager, you must enter the community string used in that
credential here.
For a network management system to receive SNMPv2 traps from PowerFlex Manager, you must provide the community
strings to the network management system. This configuration happens outside of PowerFlex Manager.
For a network management system to receive SNMPv3 traps from PowerFlex Manager, you must provide the PowerFlex
Manager engine ID, user details, and security level to the network management system. This configuration happens outside of
PowerFlex Manager.
Prerequisites
PowerFlex Manager and the network management system use access credentials with different security levels to establish
two-way communication. Review the access credentials that you need for each supported version of SNMP. Determine the
security level for each access credential and whether the credential supports encryption.
To configure SNMP communication, you need the access credentials and trap targets for SNMP, as shown in the following
table:
Steps
1. Log in to PowerFlex Manager.
2. On the menu bar, click Settings, and click Virtual Appliance Management.
3. On the Virtual Appliance Management page, in the SNMP Trap Configuration section, click Edit.
4. To configure trap forwarding as SNMPv2, click Add community string. In the Community String box, provide the
community string by which PowerFlex Manager receives traps from devices and by which it forwards traps to destinations.
You can add more than one community string. For example, add more than one if the community string by which PowerFlex
Manager receives traps differs from the community string by which it forwards traps to a remote destination.
NOTE: An SNMPv2 community string that is configured in the credentials during discovery of the iDRAC or through
management is also displayed here. You can create a new community string or use the existing one.
5. To configure trap forwarding as SNMPv3, click Add User. Enter the Username, which identifies the ID where traps are
forwarded on the network management system. The username must be at most 16 characters. Select a Security Level:
For example, the Maximum security level (authPriv) authenticates and encrypts messages; it requires both an authentication password (MD5, at least 8 characters) and a privacy password.
Note the current engine ID (automatically populated), username, and security details. Provide this information to the remote
network management system so it can receive traps from PowerFlex Manager.
You can add more than one user.
6. In the Trap Forwarding section, click Add Trap Destination to add the forwarding details.
a. In the Target Address (IP) box, enter the IP address of the network management system to which PowerFlex Manager
forwards SNMP traps.
b. Provide the Port for the network management system destination. The default SNMP trap port is 162.
c. Select the SNMP Version for which you are providing destination details.
d. In the Community String/User box, enter either the community string or username, depending on whether you are
configuring an SNMPv2 or SNMPv3 destination. For SNMPv2, if there is more than one community string, select the
appropriate community string for the particular trap destination. For SNMPv3, if there is more than one user defined,
select the appropriate user for the particular trap destination.
7. Click Save.
The Virtual Appliance Management page displays the configured details as shown below:
Trap Forwarding: <destination-ip> (SNMP v2 community string or SNMP v3 user)
NOTE: To configure nodes with PowerFlex Manager SNMP changes, go to Settings > Virtual Appliance
Management, and click Configure nodes for alert connector.
Steps
1. Log in to PowerFlex Manager.
2. From the menu bar, click Settings and click Virtual Appliance Management.
3. From the Syslog Forwarding section, click Edit.
4. Click Add syslog forward.
5. For Host, enter the destination IP address of the remote server to which you want to forward syslogs.
6. Enter the destination Port where the remote server is accepting syslog messages.
7. Select the network Protocol used to transfer the syslog messages. The default is TCP.
8. Optionally enter the Facility and Severity Level to filter the syslogs that are forwarded. The default is to forward all.
9. Click Save to add the syslog forwarding destination.
Steps
1. If using PowerFlex GUI presentation server, perform the following steps to set the performance profile for SDS:
a. Go to Configuration > SDS.
b. Select the SDS and click Modify > Modify Performance Profile.
c. Verify that the setting is High.
2. If using PowerFlex GUI presentation server, perform the following steps to set the performance profile for SDC:
a. Go to Configuration > SDC.
b. Select the SDC and click Modify > Modify Performance Profile.
c. Verify that the setting is High.
3. If using a PowerFlex version prior to 3.5:
a. Click Backend > Storage.
b. Right-click the new PD, and click Set Performance Profile for all SDSs.
c. Verify that the setting is High:
● If it is set to Default, click High and click OK.
● If it is set to High, click Cancel.
d. Click Frontend > SDCs.
e. Right-click the PowerFlex system, and click Set Performance Profile for all SDCs.
f. Verify that the setting is High.
g. Right-click the PowerFlex system, and click Set Performance Profile for all MDMs.
h. Verify that the setting is High.
i. Update the test plan and host tracker with the results.
Prerequisites
● Ensure you have the following:
○ Primary MDM IP address
○ Credentials to access the PowerFlex cluster
○ The IP addresses of the new cluster members
● Ensure you have installed and added the SDCs using PowerFlex Manager or manually.
NOTE: The SDC status is displayed as Disconnected because it cannot authenticate to the system.
Steps
1. Use SSH to log in to the primary MDM.
2. Log in to the PowerFlex cluster using the SCLI tool.
3. Generate and record a new Challenge-Handshake Authentication Protocol (CHAP) secret for the replacement node SDC
using scli --generate_sdc_password --sdc_IP <IP of SDC> --reason "CHAP setup - expansion".
4. Log in to the SDC host.
5. List the current scini parameters of the host.
For example:
6. Using esxcli, configure the driver with the existing and new parameters. To specify multiple IP addresses, use a semi-colon
(;) between the entries, as shown in the following example:
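Steps 5 and 6 can be sketched as follows. The GUID and IP addresses below are illustrative placeholders; carry forward the existing parameter values recorded in Step 5 and append the new MDM IP addresses for your system:

```shell
# Step 5: list the current scini module parameters on the VMware ESXi host
esxcli system module parameters list -m scini

# Step 6: set the driver parameters, keeping the existing values and
# appending the new MDM IP addresses separated by semicolons
# (illustrative GUID and IPs shown)
esxcli system module parameters set -m scini \
  -p "IoctlIniGuidStr=<existing-guid> IoctlMdmIPStr=192.168.150.1;192.168.151.1"
```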
Prerequisites
Only one IP address is required for the command to identify the MDM to modify.
Steps
1. Press Windows+R.
2. To open the command line interface, type cmd.
3. For Windows, type drv_cfg --set_mdm_password --ip <MDM IP> in the drv_cfg utility. For example:
drv_cfg --set_mdm_password --ip <MDM IP> --port 6611 --password <secret>
4. For Linux, type /opt/emc/scaleio/sdc/bin/drv_cfg --set_mdm_password --ip <MDM IP>. For example:
/opt/emc/scaleio/sdc/bin/drv_cfg --set_mdm_password --ip <MDM IP> --port 6611 --password
<secret> --file /etc/emc/scaleio/drv_cfg.txt
Steps
1. Go to Dell Download Center.
2. Click Internal Use Only: Misc Reports and Internal Documentation > Internal Use Only > Dell EMC Systems
Configuration Reporter.
3. Download the most recent version of Dell EMC SCR - Sprint bits.zip.
4. Extract Dell EMC SCR - Sprint bits.zip. If you are using the jump VM, download to the D: drive.
5. Extract vcesystems-configuration-reporter-bundle.zip.
6. Double-click dell-hci-systems-configuration-reporter.bat. SCR opens a command window and runs in the
default browser.
Steps
1. Click Manage Collections.
2. Click Create New Profile > Manually Create Profile.
3. Enter the collection name and the PowerFlex appliance serial number.
4. Click + Component to add PowerFlex appliance components. These components include:
● storage VM (SVM)
● PowerFlex storage array
● Dell EMC PowerFlex nodes
● VMware vCenter Server
● Cisco Nexus 3K network switches
● Optional: Cisco Nexus 9K switches
5. Select storage VM (SVM) and enter the IP address range and credentials for all SVMs, including PowerFlex storage-only
nodes.
6. Select PowerFlex storage array and enter PowerFlex gateway information.
7. Select Dell PowerEdge Server Units, and enter iDRAC IP range along with login credentials and SNMP string.
8. Select VMware vCenter.
9. If you are using PowerFlex R640/R740xd/R840 nodes, enter VXMA/Controller VC IP and credentials.
10. If you are using PowerFlex R650/R750/R6525 nodes, enter PFMCController VC IP and credentials.
11. Repeat for customer or production VC. If the vCenter is an enterprise vCenter, SCR runs for a long time to collect all details.
Use the filter feature to limit the amount of data collected.
12. Click Choose under the vCenter filter box. SCR queries vCenter using the credentials entered and provides a list of the data
centers and clusters. Retrieved individual clusters can be selected or cleared to filter the data collected.
13. Select Nexus 3K Network switches, enter the management switch IP address and credentials, and select Switch Role >
Management.
14. Perform the following steps depending on the Cisco Nexus switch series and switch type (optional):
NOTE: If there are multiple switches of the same type and role, then assign a different Role Group Tag to each pair of
redundant switches.
Steps
1. Perform one of the following:
● Select at the system level to collect data for all components.
● Individually select the components for which you want to collect data.
2. Optional: If PowerFlex appliance uses Cisco Nexus aggregation switches to aggregate multiple sets of Cisco Nexus access
switches and you chose to create a new profile for the switches, select both of the systems you created before.
3. Click Start Multi-Collection. You must have at least one component selected. Data collection begins, which you can
monitor on the screen and in the command prompt.
When collection completes, you can see a green check mark against the components that completed collection, and a red
check mark against the components that failed collection.
4. Optional: Investigate any failed components for proper credentials and IP addresses. You can confirm credentials by logging
directly into the device.
5. Optional: If a component fails, and if you modify the IP or credentials, click Start Multi-Collection to run the collection
again.
6. Optional: Repeat as necessary until data is collected for all components.
Steps
1. Click Review Collected Data > Export Data Collection.
2. Select CRG as the Type of report and select Assessment Report and click Download. SCR initiates reporting, which you
can monitor on the screen and in the command prompt.
3. Copy the resulting .xlsx file from the Downloads folder, or upload it to a temporary FTP site for later retrieval. FTP
accounts are available from ftpaccreq.emc.com.
4. The SCR output spreadsheet lists the components on which the data is collected. Ensure that the data is collected for all
components as per the test plan.
5. Select one of the following to exit the SCR:
● Exit Only: Choose this option to exit SCR. This is the normal mode of operation.
● Exit and Purge Data: Choose this option if you do not want to save the collected data. This option is used only under rare
conditions.
● Exit, Purge Data, and Configuration: Choose this option to delete both the collected data and the profile you
created.
Prerequisites
Ensure that iDRAC command line tools are installed on the system jump server.
Steps
1. If you are using iDRAC version 5.0.10 or higher, for multiple PowerFlex nodes:
a. From the jump server, at the root of the C: drive, create a folder named ipmi.
b. From File Explorer, go to View and select the File name extensions check box.
c. Open a notepad file, and paste this text into the file:
powershell -noprofile -executionpolicy bypass -file ".\disableIPMI.ps1"
d. Save the file, and rename it runme.cmd in C:\ipmi.
e. Open a notepad file, and paste this text into the file:
import-csv $pwd\hosts.csv -Header:"Hosts" | Select-Object -ExpandProperty hosts | % {racadm -r $_ -u root -p XXXXXX set iDRAC.IPMILan.Enable Disabled}
where XXXXXX is the customer password that must be changed.
f. Save the file, and rename it disableIPMI.ps1 in C:\ipmi.
g. Open a notepad file, and list all of the iDRAC IP addresses that need to be included, one per line.
2. If you are using an iDRAC version earlier than 5.0.10, for multiple PowerFlex nodes:
a. From the jump server, at the root of the C: drive, create a folder named ipmi.
b. From File Explorer, go to View and select the File name extensions check box.
c. Open a notepad file, and paste this text into the file:
powershell -noprofile -executionpolicy bypass -file ".\disableIPMI.ps1"
d. Save the file, and rename it runme.cmd in C:\ipmi.
e. Open a notepad file, and paste this text into the file:
import-csv $pwd\hosts.csv -Header:"Hosts" | Select-Object -ExpandProperty hosts | % {racadm -r $_ -u root -p XXXXXX config -g cfgIpmiLan -o cfgIpmiLanEnable 0}
where XXXXXX is the customer password that must be changed.
f. Save the file, and rename it disableIPMI.ps1 in C:\ipmi.
g. Open a notepad file, and list all of the iDRAC IP addresses that need to be included, one per line.
Prerequisites
Ensure that iDRAC command line tools are installed on the embedded operating system-based jump server.
Steps
1. If you are using iDRAC version 5.0.10 or higher, for multiple PowerFlex nodes:
a. From the jump server, open a terminal window.
b. Edit the idracs.txt file and enter the IP addresses, one per line for each iDRAC.
c. Save idracs.txt.
d. Type:
while read line ; do echo "$line" ; racadm -r $line -u root -p yyyyy set iDRAC.IPMILan.Enable Disabled ; done < idracs.txt
where yyyyy is the iDRAC password.
2. If you are using an iDRAC version earlier than 5.0.10, for multiple PowerFlex nodes:
a. From the jump server, open a terminal window.
b. Edit the idracs.txt file and enter the IP addresses, one per line for each iDRAC.
c. Save idracs.txt.
d. Type:
while read line ; do echo "$line" ; racadm -r $line -u root -p yyyyy config -g cfgIpmiLan -o cfgIpmiLanEnable 0 ; done < idracs.txt
where yyyyy is the iDRAC password.
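The loop above can be checked before pointing it at live systems with a dry-run variant that only prints the commands. The helper function below is hypothetical, and the password placeholder is intentional:

```shell
# Print (do not execute) the racadm command that would be run for each
# iDRAC IP address listed, one per line, in the given file
print_ipmi_disable_cmds() {
  while read -r ip; do
    echo "racadm -r $ip -u root -p <password> set iDRAC.IPMILan.Enable Disabled"
  done < "$1"
}

# Example usage:
# print_ipmi_disable_cmds idracs.txt
```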