Dell EMC PowerFlex Appliance


Expansion Guide

October 2022
Rev. 2.2

Notes, cautions, and warnings

NOTE: A NOTE indicates important information that helps you make better use of your product.

CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid
the problem.

WARNING: A WARNING indicates a potential for property damage, personal injury, or death.

© 2021 - 2022 Dell Inc. or its subsidiaries. All rights reserved. Dell Technologies, Dell, and other trademarks are trademarks of Dell Inc. or its
subsidiaries. Other trademarks may be trademarks of their respective owners.

Contents
Revision history........................................................................................................................................................................ 14

Chapter 1: Introduction................................................................................................................15
LACP bonding NIC port design VLAN names..............................................................................................................16

Part I: Performing the initial expansion procedures......................................................................17

Chapter 2: Preparing to expand a PowerFlex appliance............................................................ 18


Network requirements................................................................................................................................................ 18
Software requirements............................................................................................................................................... 19
PowerFlex management controller datastore and virtual machine details.....................................................21
Cabling the PowerFlex R650/R750/R6525 nodes............................................................................................. 22
Cabling requirements for PowerFlex controller node................................................................................... 23
Cabling requirements for PowerFlex hyperconverged nodes..................................................................... 24
Cabling requirements for PowerFlex storage-only nodes............................................................................ 37
Cabling requirements for PowerFlex compute-only nodes......................................................................... 42
Cabling the PowerFlex R640/R740xd/R840 nodes........................................................................................... 60
Cabling requirements for PowerFlex management controller server slot layout................................... 62
Cabling requirements for PowerFlex hyperconverged nodes.....................................................................64
Cabling requirements for PowerFlex storage-only nodes............................................................................ 74
Cabling requirements for PowerFlex compute-only nodes..........................................................................79
Cabling requirements for VMware NSX-T Edge nodes..................................................................................... 86
Connecting the VMware NSX-T Edge nodes....................................................................................................... 87
Connecting the VMware NSX-T Edge nodes to the aggregation and access topology...................... 87
Connecting the VMware NSX-T Edge nodes to the leaf and spine topology........................................ 89
PowerFlex R640/R740xd/R840 nodes rNDC slot layout.................................................................................. 91
PowerFlex management controller rNDC slot layout.................................................................................... 91
PowerFlex hyperconverged node rNDC slot layout...................................................................................... 92
PowerFlex storage-only node rNDC slot layout.............................................................................................99
PowerFlex compute-only node slot layout.....................................................................................................102
Ports and security configuration data.................................................................................................................. 104
Customer network and router requirements...................................................................................................... 105
Access switch configuration requirements......................................................................................................... 106
Configure iDRAC network settings........................................................................................................................107
Disable IPMI using a Windows-based jump server............................................................................................ 108
Disable IPMI using an embedded operating system-based jump server...................................................... 109
Download files from the Dell Technologies Support site.................................................................................. 110

Chapter 3: Configuring the network....................................................................................... 111


VLAN mapping............................................................................................................................................................. 111
VLAN mapping for access and aggregation switches.................................................................................. 111
VLAN mapping for leaf-spine switches........................................................................................................... 112
Configuration data......................................................................................................................................................114
Port-channel with LACP for full network automation or partial network automation.........................114
Port-channel for full network automation...................................................................................................... 115

Individual trunk for full network automation or partial network automation..........................................116


Configure the Cisco Nexus access and aggregation switches........................................................................ 117
Configure the Dell access switches....................................................................................................................... 118
Configuring the PowerFlex management controller 2.0, hyperconverged, and ESXi-based
compute-only node network............................................................................................................................... 120
Add VMkernel adapter to the PowerFlex hyperconverged or ESXi-based compute-only node
hosts................................................................................................................................................................... 120
Add VMkernel adapter to the PowerFlex controller node hosts.............................................................. 120
Create LAG on dvSwitches................................................................................................................................ 121
Assign LAG as a standby uplink for the dvSwitch........................................................................................ 121
Add hosts to dvSwitches.................................................................................................................................... 121
Assign LAG as an active uplink for the dvSwitch.........................................................................................122
Set load balancing for dvSwitch.......................................................................................................................123
Create the distributed switch (oob_dvswitch) for the PowerFlex management node network..... 123
Delete the standard switch (vSwitch0)......................................................................................................... 124
Add layer 3 routing on PowerFlex compute-only nodes or PowerFlex hyperconverged nodes
from SDC to layer 3 internal SDS......................................................... 124
Add layer 3 routing on the PowerFlex hyperconverged storage for internal SDS to SDC................ 124
Configuring the PowerFlex storage-only node network.................................................................................. 125
Configure a port channel with LACP bonding NIC or individual trunk.................................................... 125
Configure individual trunk with per NIC VLAN............................................................................................. 126
Configure a high availability bonded management network...................................................................... 127
Add a layer 3 routing on PowerFlex storage-only node for an internal SDS to SDC.......................... 128

Part II: Converting a PowerFlex controller node with a PERC H755 to a PowerFlex
management controller 2.0..................................................................................................... 129

Chapter 4: Configuring the new PowerFlex controller node................................................... 130


Upgrade the firmware...............................................................................................................................................130
Configure BOSS card................................................................................................................................................130
Convert physical disks to non-RAID disks...............................................................131
Install VMware ESXi ..................................................................................................................................................131
Configure VMware ESXi...........................................................................................................................................132
Install Dell Integrated Service Module.................................................................................................................. 132
Configure NTP on the host..................................................................................................................................... 132
Rename the BOSS datastore.................................................................................................................................. 133
Add a new PowerFlex management controller 2.0 to VMware vCenter......................................................133
Enable HA and DRS on an existing cluster ......................................................................................................... 133
Migrate interfaces to the BE_dvSwitch.............................................................................................................. 134
Migrate interfaces to the FE_dvSwitch...............................................................................................................134
Create the distributed port group for the FE_dvSwitch................................................................................. 135
Create the distributed port groups for the BE_dvSwitch............................................................................... 135
Modify the failover order for the BE_dvSwitch.................................................................................................135
Modify the failover order for the FE_dvSwitch................................................................................................. 136
Enable PCI passthrough for PERC H755 on the PowerFlex management controller ..............................136
Install the PowerFlex storage data client (SDC) on the PowerFlex management controller ................ 136
Configure PowerFlex storage data client (SDC) on the PowerFlex management controller..................137
Manually deploy the SVM........................................................................................................................................ 137
Configure the SVM..............................................................................................................................................138
Configure the pfmc-sds-mgmt-<vlanid> networking interface................................................................138

Configure the pfmc-sds-data1-<vlanid> networking interface................................................................ 139


Configure the pfmc-sds-data2-<vlanid> networking interface................................................................139
Install required PowerFlex packages............................................................................................................... 140
Verify connectivity between the PowerFlex storage VMs ........................................................................141
Manually deploy PowerFlex on the PowerFlex management controller nodes........................................... 141
Create MDM cluster.............................................................................................................................................141
Add protection domain........................................................................................................................................ 141
Add storage pool.................................................................................................................................................. 142
Add SDSs............................................................................................................................................................... 142
Set the spare capacity for the medium granularity storage pool............................................................. 142
Identify the disks on each of the PowerFlex management controller nodes ....................................... 142
Add storage devices............................................................................................................................................ 142
Create datastores................................................................................................................................................ 143
Add PowerFlex storage to new PowerFlex management controller nodes ..........................................143
Create VMFS datastores for PowerFlex management controller nodes .............................................. 143
Optimize VMware ESXi...................................................................................................................................... 144
Remove storage disk device on the PowerFlex controller node.................................................................... 144
Migrate all VMs ................................................................................................................................................... 144
Migrate PowerFlex SVM on PERC-01 datastore......................................................................................... 145
Delete PERC datastore.......................................................................................................................................145

Chapter 5: Convert a standalone PowerFlex controller node to non-RAID mode.......................146


Convert physical disks to non-RAID disks..............................................................146
Enable PCI passthrough for PERC H755 on the PowerFlex management controller ..............................146
Install the PowerFlex storage data client (SDC) on the PowerFlex management controller ................ 147
Configure PowerFlex storage data client (SDC) on the PowerFlex management controller..................147
Configure the SVM.................................................................................................................................................... 147
Identify the disks on each of the PowerFlex management controller nodes ............................................. 148
Add storage devices..................................................................................................................................................148
Convert a PowerFlex single cluster to a three node cluster ..........................................................................148
Deploy the PowerFlex gateway.............................................................................................................................. 148
Attach PowerFlex management controller 2.0 to the PowerFlex gateway.................................................149
Confirming the PowerFlex Gateway is functional..............................................................................................150
License PowerFlex on PowerFlex management controller cluster................................................................ 150
Add PowerFlex management controller service to PowerFlex Manager...................................................... 151

Part III: Adding a PowerFlex controller node with a PERC H755 to a PowerFlex management
controller 2.0..........................................................................................................................153

Chapter 6: Discovering the new resource...............................................................................154


Discover resources.................................................................................................................................................... 154

Chapter 7: Upgrade the firmware.......................................................................................... 156

Chapter 8: Configure BOSS card........................................................................................... 157

Chapter 9: Convert physical disks to non-RAID disks.............................................. 158

Chapter 10: Install VMware ESXi ...........................................................................................159

Chapter 11: Configure VMware ESXi...................................................................................... 160

Chapter 12: Install Dell Integrated Service Module................................................................. 161

Chapter 13: Configure NTP on the host..................................................................................162

Chapter 14: Rename the BOSS datastore............................................................................... 163

Chapter 15: Enable PCI passthrough for PERC H755 on the PowerFlex management
controller .......................................................................................................................... 164

Chapter 16: Install the PowerFlex storage data client (SDC) on the PowerFlex management
controller .......................................................................................................................... 165

Chapter 17: Configure PowerFlex storage data client (SDC) on the PowerFlex management
controller........................................................................................................................... 166

Chapter 18: Create a staging cluster and add a host...............................................................167

Chapter 19: Assign VMware vSphere licenses........................................................................ 168

Chapter 20: Add PowerFlex controller nodes to an existing dvSwitch.................................... 169

Chapter 21: Migrate the PowerFlex controller node to the PowerFlex management
controller 2.0...................................................................................................................... 171

Chapter 22: Manually deploy the SVM....................................................................................172


Configure the SVM.................................................................................................................................................... 172
Configure the pfmc-sds-mgmt-<vlanid> networking interface......................................................................173
Configure the pfmc-sds-data1-<vlanid> networking interface.......................................................................173
Configure the pfmc-sds-data2-<vlanid> networking interface......................................................................174
Extend the MDM cluster from three to five nodes using SCLI...................................................................... 175
Verify connectivity between the PowerFlex storage VMs ............................................................................. 176
Add SDSs..................................................................................................................................................................... 176
Identify the disks on each of the PowerFlex management controller nodes ............................................. 176
Add storage devices.................................................................................................................................................. 176
Add PowerFlex storage to new PowerFlex management controller nodes ................................................ 177
Set the spare capacity for the medium granularity storage pool................................................................... 177
License PowerFlex on PowerFlex management controller cluster.................................................................177

Chapter 23: Add PowerFlex management controller service to PowerFlex Manager............... 178

Chapter 24: Update the PowerFlex management controller 2.0 service details...................... 180

Part IV: Adding a PowerFlex management node to a PowerFlex management controller 1.0
with VMware vSAN..................................................................................................................181

Chapter 25: Hardware requirement........................................................................................182

Chapter 26: Upgrade the firmware........................................................................................ 183

Chapter 27: Configure enhanced HBA mode...........................................................................184

Chapter 28: Configure BOSS card......................................................................................... 185

Chapter 29: Install VMware ESXi .......................................................................................... 186

Chapter 30: Configure VMware ESXi......................................................................................187

Chapter 31: Modify the existing VM network......................................................................... 188

Chapter 32: Configure NTP on the host................................................................................. 189

Chapter 33: Create a data center and add a host................................................................... 190

Chapter 34: Add hosts to an existing dvSwitch....................................................... 191

Chapter 35: Add VMkernel adapter to the hosts.................................................................... 192

Chapter 36: Configure vSAN on management cluster.............................................................193

Chapter 37: Migrate VMware vCenter server appliance 7.0 from PERC-01 datastore to
vSAN datastore.................................................................................................................. 194

Chapter 38: Claim disks from PowerFlex management node.................................................. 195

Chapter 39: Enable VMware vCSA high availability on PowerFlex management controller
vCSA ................................................................................................................................. 196

Chapter 40: Migrate vCLS VMs..............................................................................................197

Part V: Adding a PowerFlex R650/R750/R6525 node to a PowerFlex Manager service in managed mode........................198

Chapter 41: Discovering the new resource............................................................................. 199


Discover resources.................................................................................................................................................... 199

Chapter 42: Sanitize the NVDIMM......................................................................................... 201

Chapter 43: Expanding a PowerFlex appliance service...........................................................202


Adding a compatibility management file..............................................................................................................202
Add PowerFlex nodes to a service....................................................................................................................... 202
License PowerFlex on PowerFlex management controller cluster............................................................... 203

Verify newly added SVMs or storage-only nodes machine status in CloudLink Center.......................... 203

Chapter 44: Expanding a PowerFlex appliance with a new service......................................... 205


Cloning a template....................................................................................................................................................205
Adding a compatibility management file..............................................................................................................206
Deploy a service........................................................................................................................................................ 206
Add volumes with PowerFlex Manager............................................................................................................... 208
Redistribute the MDM cluster using PowerFlex Manager.............................................................................. 209

Chapter 45: Configuring the hyperconverged or compute-only transport nodes ................... 210
Configure VMware NSX-T overlay distributed virtual port group................................................................. 210
Convert trunk access to LACP-enabled switch ports for cust_dvswitch....................................................211

Chapter 46: Add a Layer 3 routing between an external SDC and SDS.................................... 214

Part VI: Adding a PowerFlex R640/R740xd/R840 node to a PowerFlex Manager service in managed mode......................215

Chapter 47: Discovering the new resource............................................................................. 216


Discover resources.................................................................................................................................................... 216

Chapter 48: Sanitize the NVDIMM......................................................................................... 218

Chapter 49: Expanding a PowerFlex appliance service........................................................... 219


Adding a compatibility management file...............................................................................................................219
Add PowerFlex nodes to a service........................................................................................................................ 219
Verify newly added SVMs or storage-only nodes machine status in CloudLink Center.......................... 220

Chapter 50: Expanding a PowerFlex appliance with a new service..........................................221


Cloning a template.....................................................................................................................................................221
Adding a compatibility management file.............................................................................................................. 222
Deploy a service.........................................................................................................................................................222
Add volumes with PowerFlex Manager............................................................................................................... 223
Redistribute the MDM cluster using PowerFlex Manager.............................................................................. 225

Chapter 51: Configuring the hyperconverged or compute-only transport nodes ................... 226
Configure VMware NSX-T overlay distributed virtual port group................................................................ 226
Convert trunk access to LACP-enabled switch ports for cust_dvswitch.................................................. 227

Chapter 52: Add a Layer 3 routing between an external SDC and SDS................................... 230

Part VII: Adding a PowerFlex R650/R750/R6525 node in lifecycle mode.................................... 231

Chapter 53: Performing a PowerFlex storage-only node expansion........................................232


Discover resources................................................................................................................................................... 232
Install the operating system................................................................................................................................... 233
Configure the host....................................................................................................................................................234
Install the nvme-cli tool and iDRAC Service Module (iSM)..................................................... 235

Upgrade the disk firmware for NVMe drives................................................................................................236


Install PowerFlex components on PowerFlex storage-only nodes................................................................ 237
Add a new PowerFlex storage-only node without NVDIMM to PowerFlex................................................ 237
Configuring NVDIMM devices for PowerFlex storage-only nodes............................................................... 238
Verify the PowerFlex version........................................................................................................................... 238
Verify the operating system version...............................................................................................................239
Verify necessary RPMs......................................................................................................................................239
Activate NVDIMM regions................................................................................................................................ 240
Create namespaces and DAX devices............................................................................................................240
Identify NVDIMM acceleration pool in a protection domain..................................................................... 240
Identify NVDIMM acceleration pool in a protection domain using a PowerFlex version prior to
3.5........................................................................................................................................................................241
PowerFlex CLI (SCLI) keys for acceleration pools...................................................................................... 241
Create an NVDIMM protection domain.......................................................................................................... 241
Create an acceleration pool.............................................................................................................................. 242
Create a storage pool.........................................................................................................................................242
Add DAX devices to the acceleration pool....................................................................................................243
Add SDS to NVDIMM protection domain...................................................................................................... 243
Create a compressed volume...........................................................................................................................245
Add drives to PowerFlex......................................................................................................................................... 245
Add storage data replication to PowerFlex........................................................................................................ 248
Add a PowerFlex node to a PowerFlex Manager service in lifecycle mode................................................249

Chapter 54: Performing a PowerFlex hyperconverged node expansion.................................. 250


Discover resources................................................................................................................................................... 250
Expanding compute and storage capacity.......................................................................................................... 252
Verify the CloudLink license............................................................................................................................. 252
Install VMware ESXi................................................................................................................................................. 252
Create a new VMware ESXi cluster to add PowerFlex nodes....................................................................... 254
Migrating vCLS VMs................................................................................................................................................ 254
Renaming the VMware ESXi local datastore......................................................................................................255
Patch and install drivers for VMware ESXi.........................................................................................................255
Add PowerFlex nodes to Distributed Virtual Switches.................................................................................... 256
Change the MTU value............................................................................................................................................256
Validating network configurations........................................................................................................................ 258
Add the new host to PowerFlex............................................................................................................................259
Configure the direct path I/O.......................................................................................................................... 259
Add NVMe devices as RDMs........................................................................................................................... 259
Install and configure the SDC.......................................................................................................................... 260
Rename the SDCs............................................................................................................................................... 260
Calculate RAM capacity for medium granularity SDS................................................................................. 261
Change memory and CPU settings on the SVM......................................................................................... 262
Manually deploy the SVM..................................................................................................................................263
Add the new SDS to PowerFlex....................................................................................................................... 271
Add drives to PowerFlex....................................................................................................................................272
Configuring the NVDIMM for a new PowerFlex hyperconverged node...................................................... 274
Verify that the VMware ESXi host recognizes the NVDIMM...................................................................274
Add NVDIMM ...................................................................................................................................................... 274
Extend the MDM cluster from three to five nodes...........................................................................................277
Extend the MDM cluster from three to five nodes using SCLI................................................................ 277

Redistribute the MDM cluster..........................................................................................................................278


MDM cluster component layouts.................................................................................................................... 279
Add a PowerFlex node to a PowerFlex Manager service in lifecycle mode................................................280
Configuring the NSX-T Ready tasks..................................................................................................................... 281
Configure VMware NSX-T overlay distributed virtual port group........................................................... 281
Convert trunk access to LACP-enabled switch ports for cust_dvswitch............................................ 282
Add the VMware NSX-T service using PowerFlex Manager....................................................................284

Chapter 55: Encrypting PowerFlex hyperconverged (SVM) or storage-only node devices (SED or non-SED drives)............ 286
Encrypt PowerFlex hyperconverged (SVM) or storage-only devices (SED drives)................................ 286
Encrypt PowerFlex hyperconverged (SVM) or storage-only devices (non-SED drives)........................289

Chapter 56: Performing a PowerFlex compute-only node expansion......................................293


Discover resources................................................................................................................................................... 293
Install VMware ESXi to expand compute capacity........................................................................................... 294
Create a new VMware ESXi cluster to add PowerFlex nodes....................................................................... 296
Install and configure the SDC.................................................................................................................................297
Rename the SDCs..................................................................................................................................................... 297
Renaming the VMware ESXi local datastore......................................................................................................297
Patch and install drivers for VMware ESXi.........................................................................................................298
Add PowerFlex nodes to Distributed Virtual Switches.................................................................................... 298
Validating network configurations........................................................................................................................ 299
Migrating vCLS VMs................................................................................................................................................ 300
Add a PowerFlex node to a PowerFlex Manager service in lifecycle mode............................................... 300
Configuring the NSX-T Ready tasks..................................................................................................................... 301
Configure VMware NSX-T overlay distributed virtual port group........................................................... 301
Convert trunk access to LACP-enabled switch ports for cust_dvswitch............................................. 301
Add the VMware NSX-T service using PowerFlex Manager....................................................................303
Deploying Windows-based PowerFlex compute-only nodes manually.........................................................305
Installing Windows compute-only node with LACP bonding NIC port design .....................................305
Install and configure a Windows-based compute-only node to PowerFlex........................................... 310
Map a volume to a Windows-based compute-only node using PowerFlex............................................ 311
Mapping a volume to a Windows-based compute-only node using a PowerFlex version prior to
3.5 ....................................................................................................................................................................... 311
Licensing Windows Server 2016 compute-only nodes................................................................................312
Activating a license without an Internet connection...................................................................................312

Part VIII: Adding a PowerFlex R640/R740xd/R840 node to a PowerFlex Manager service in lifecycle mode...................313

Chapter 57: Performing a PowerFlex storage-only node expansion........................................ 314


Discover resources.................................................................................................................................................... 314
Install the operating system.................................................................................................................................... 315
Configure the host.....................................................................................................................................................316
Install the nvme-cli tool and iDRAC Service Module (iSM)......................................................319
Upgrade the disk firmware for NVMe drives................................................................................................ 321
Migrate PowerFlex storage-only nodes from a non-bonded to an LACP bonding NIC port design.........321
Migrate PowerFlex storage-only nodes from a static to an LACP bonding NIC port design........... 323
Install PowerFlex components on PowerFlex storage-only nodes................................................................324

Add a new PowerFlex storage-only node without NVDIMM to PowerFlex................................................324


Configuring NVDIMM devices for PowerFlex storage-only nodes............................................................... 325
Verify the PowerFlex version........................................................................................................................... 325
Verify the operating system version...............................................................................................................326
Verify necessary RPMs......................................................................................................................................326
Activate NVDIMM regions................................................................................................................................ 326
Create namespaces and DAX devices............................................................................................................ 327
Identify NVDIMM acceleration pool in a protection domain..................................................................... 327
Identify NVDIMM acceleration pool in a protection domain using a PowerFlex version prior to
3.5....................................................................................................................................................................... 327
PowerFlex CLI (SCLI) keys for acceleration pools......................................................................................328
Create an NVDIMM protection domain......................................................................................................... 328
Create an acceleration pool.............................................................................................................................. 328
Create a storage pool.........................................................................................................................................329
Add DAX devices to the acceleration pool....................................................................................................329
Add SDS to NVDIMM protection domain......................................................................................................330
Create a compressed volume........................................................................................................................... 332
Add drives to PowerFlex......................................................................................................................................... 332
Add a Layer 3 routing between an external SDC and SDS............................................................................. 334
Add storage data replication to PowerFlex.........................................................................................................334
Add a PowerFlex node to a PowerFlex Manager service in lifecycle mode................................................335

Chapter 58: Performing a PowerFlex hyperconverged node expansion.................................. 337


Discover resources....................................................................................................................................................337
Expanding compute and storage capacity.......................................................................................................... 339
Verify the CloudLink license............................................................................................................................. 339
Install VMware ESXi................................................................................................................................................. 339
Create a new VMware ESXi cluster to add PowerFlex nodes........................................................................ 341
Migrating vCLS VMs................................................................................................................................................. 341
Renaming the VMware ESXi local datastore......................................................................................................342
Patch and install drivers for VMware ESXi.........................................................................................................342
Add PowerFlex nodes to Distributed Virtual Switches.................................................................................... 343
Change the MTU value............................................................................................................................................ 343
Validating network configurations........................................................................................................................ 345
Add the new host to PowerFlex............................................................................................................................346
Configure the direct path I/O.......................................................................................................................... 346
Add NVMe devices as RDMs............................................................................................................................346
Install and configure the SDC...........................................................................................................................347
Rename the SDCs............................................................................................................................................... 347
Calculate RAM capacity for medium granularity SDS................................................................................ 348
Change memory and CPU settings on the SVM......................................................................................... 349
Manually deploy the SVM................................................................................................................................. 350
Add the new SDS to PowerFlex...................................................................................................................... 358
Add drives to PowerFlex................................................................................................................................... 359
Configuring the NVDIMM for a new PowerFlex hyperconverged node.......................................................361
Verify that the VMware ESXi host recognizes the NVDIMM................................................................... 361
Add NVDIMM .......................................................................................................................................................361
Extend the MDM cluster from three to five nodes.......................................................................................... 364
Extend the MDM cluster from three to five nodes using SCLI............................................................... 364
Redistribute the MDM cluster......................................................................................................................... 365

MDM cluster component layouts.................................................................................................................... 366


Add a PowerFlex node to a PowerFlex Manager service in lifecycle mode................................................368
Configuring the NSX-T Ready tasks.................................................................................................................... 368
Configure VMware NSX-T overlay distributed virtual port group.......................................................... 369
Convert trunk access to LACP-enabled switch ports for cust_dvswitch............................................ 369
Add the VMware NSX-T service using PowerFlex Manager.....................................................................371

Chapter 59: Encrypting PowerFlex hyperconverged (SVM) or storage-only node devices (SED or non-SED drives)............374
Encrypt PowerFlex hyperconverged (SVM) or storage-only devices (SED drives).................................374
Encrypt PowerFlex hyperconverged (SVM) or storage-only devices (non-SED drives)........................ 377

Chapter 60: Performing a PowerFlex compute-only node expansion...................................... 381


Discover resources.................................................................................................................................................... 381
Install VMware ESXi to expand compute capacity........................................................................................... 382
Create a new VMware ESXi cluster to add PowerFlex nodes....................................................................... 384
Install and configure the SDC................................................................................................................................ 385
Rename the SDCs.....................................................................................................................................................385
Renaming the VMware ESXi local datastore......................................................................................................385
Patch and install drivers for VMware ESXi.........................................................................................................386
Add the PowerFlex node to Distributed Virtual Switches for a PowerFlex compute-only node
expansion................................................................................................................................................................ 387
Validating network configurations........................................................................................................................ 388
Migrating vCLS VMs................................................................................................................................................ 389
Add a PowerFlex node to a PowerFlex Manager service in lifecycle mode............................................... 390
Configuring the NSX-T Ready tasks.................................................................................................................... 390
Configure VMware NSX-T overlay distributed virtual port group.......................................................... 390
Convert trunk access to LACP-enabled switch ports for cust_dvswitch............................................. 391
Add the VMware NSX-T service using PowerFlex Manager....................................................................393
Deploying Windows-based PowerFlex compute-only nodes manually.........................................................395
Installing Windows compute-only node with LACP bonding NIC port design .....................................395
Install and configure a Windows-based compute-only node to PowerFlex........................................... 401
Map a volume to a Windows-based compute-only node using PowerFlex...........................................402
Mapping a volume to a Windows-based compute-only node using a PowerFlex version prior to
3.5 ......................................................................................................................................................................402
Licensing Windows Server 2016 compute-only nodes...............................................................................403
Activating a license without an Internet connection.................................................................................. 403

Part IX: Adding VMware NSX-T Edge nodes...............................................................................404


Verify if vSAN is configured on an existing VMware vSphere edge cluster.................................................... 404
Configure the PERC Mini Controller on the VMware NSX-T Edge nodes....................................................... 404
Install and configure VMware ESXi............................................................................................................................ 405
Add a VMware ESXi host to the VMware vCenter ............................................................................................... 406
Add the new VMware ESXi local datastore and rename the operating system datastore (RAID local
storage only)................................................................................................................................................................ 407
Claim local disk drives to the vSAN and rename the OS datastore (vSAN storage option)........................ 407
Configure NTP and scratch partition settings.........................................................................................................408
Add the VMware NSX-T Edge node to edge_dvswitch0..................................................................................... 409
Add the VMware NSX-T Edge node to edge_dvswitch1.......................................................................................410
Patch and install drivers for VMware ESXi............................................................................................................... 410


Part X: Completing the expansion.............................................................................................. 412

Chapter 61: Update VMware ESXi settings.............................................................................413

Chapter 62: Updating the rebuild and rebalance settings.......................................................415


Set rebuild and rebalance settings........................................................................................................................ 415
Set rebuild and rebalance settings using PowerFlex versions prior to 3.5.................................................. 415

Chapter 63: Performance tuning............................................................................................417


Performance tuning for storage VMs (SVM)..................................................................................................... 417
Performance tuning for PowerFlex storage-only nodes.................................................................................. 418
Tuning PowerFlex ..................................................................................................................................................... 419

Chapter 64: Updating the storage data client parameters (VMware ESXi 6.x)....................... 420

Chapter 65: Post-installation tasks....................................................................................... 421


Configure SNMP trap forwarding..........................................................................................................................421
Configure the node for syslog forwarding.......................................................................................................... 423
Verifying PowerFlex performance profiles..........................................................................................................423
Configure an authentication enabled SDC.......................................................................................................... 424
Add a Windows or Linux authenticated SDC..................................................................................................... 425
Updating System Configuration Reporter...........................................................................................................425
Updating and running System Configuration Reporter..............................................................................425
Configuring System Configuration Reporter................................................................................................ 426
Collecting System Configuration Reporter data..........................................................................................426
Creating a Configuration Reference Guide................................................................................................... 427
Getting technical support for System Configuration Reporter................................................................427
Disabling IPMI for PowerFlex nodes..................................................................................................................... 428
Disabling IPMI for PowerFlex nodes using an embedded operating system-based jump server...........429


Revision history
Date Document revision Description of changes
October 2022 2.2 Updated the PowerFlex data network requirements.
May 2022 2.1 Updated Install and configure the SDC
Added support for VMware vSphere Client 7.0 U3c.

November 2021 2.0 Added support for:


● PowerFlex management controller 2.0, an R650-based
controller that uses PowerFlex storage and a VMware ESXi
hypervisor
● PowerFlex R650 hyperconverged, storage-only, and
compute-only nodes
● PowerFlex R750 hyperconverged, storage-only, and
compute-only nodes
● PowerFlex R6525 compute-only nodes
● Additional network configurations, such as trunk, port
channel, port channel with LACP, and individual links (for
storage only)
● PowerFlex Manager 3.8
June 2021 1.0 Initial release


1
Introduction
This guide contains information for expanding the compute, network, storage, and management components of a PowerFlex appliance after installation at the customer site.
The information in this guide applies to a PowerFlex appliance in the following expansion scenarios:
● Dell PowerEdge R650, R750, or R6525 servers that are expanding with PowerEdge R650, R750, or R6525 servers.
● Dell PowerEdge R650, R750, or R6525 servers that are expanding with PowerEdge R640, R740xd, or R840 servers.
● Dell PowerEdge R640, R740xd, or R840 servers that are expanding with PowerEdge R640, R740xd, or R840 servers,
including those servers with VMware NSX-T.
Depending on when the system was built, it will have one of the following PowerFlex management controllers:

Controller Description
PowerFlex management controller 2.0 R650-based PowerFlex management controller that uses PowerFlex storage and a
VMware ESXi hypervisor
PowerFlex management controller 1.0 R640-based PowerFlex management controller that uses VSAN storage and a
VMware ESXi hypervisor

The PowerFlex R650 controller node with PowerFlex can have either of the following RAID controllers:
● PERC H755: PowerFlex Manager puts a PowerFlex management controller 2.0 with PERC H755 service in lifecycle mode.
If you are adding a PowerFlex controller node to a PowerFlex management controller 2.0 with PowerFlex, delete the RAID
and convert the physical disks to non-RAID disks. See Adding a PowerFlex controller node with a PERC H755 to a PowerFlex
management controller 2.0 for more information.
● HBA355: PowerFlex Manager puts a PowerFlex management controller 2.0 with HBA355 service in managed mode. Use
PowerFlex Manager to add a PowerFlex controller node with HBA355i to a PowerFlex management controller 2.0. See
Adding a PowerFlex R650/R750/R6525 node to a PowerFlex Manager service in managed mode for more information.
This guide provides instructions for:
● Performing the initial expansion procedures
● Converting a PowerFlex controller to a controller based on PowerFlex
● Adding a PowerFlex controller node to a PowerFlex management controller 2.0
● Adding a PowerFlex management node to a PowerFlex management controller 1.0 with vSAN
● Adding a PowerFlex R650/R750/R6525 node to a PowerFlex Manager service in managed mode
● Adding a PowerFlex R640/R740xd/R840 node to a PowerFlex Manager service in managed mode
● Adding a PowerFlex R650/R750/R6525 node to a PowerFlex Manager service in lifecycle mode
● Adding a PowerFlex R640/R740xd/R840 node to a PowerFlex Manager service in lifecycle mode
● Adding a VMware NSX-T Edge node
● Completing the expansion
There are UI changes in the VMware vSphere Client 7.0 U3c update:
● On the Home screen, there is no Menu button. The new menu is next to vSphere Client.
● In the Add DVSwitch menu, the New Host option is no longer available.
This guide might contain language that is not consistent with Dell Technologies' current guidelines. Dell Technologies plans to update this guide in subsequent releases to revise the language accordingly.
The target audience for this document is Dell Technologies sales engineers, field consultants, and advanced services specialists.


LACP bonding NIC port design VLAN names


Depending on when the system was built, your VLAN names might differ from what is in the documentation. The following table
lists the VLAN names used with the current LACP bonding NIC port design along with the former VLAN names.
If the PowerFlex hyperconverged or compute-only nodes participate in NSX-T, DVSwitch0 is not configured as a port channel or with LACP bonding. However, the port group VLAN names are still listed in the following table under the LACP bonding NIC port design.
NOTE: VLANs pfmc-<name>-<name>-<vlanid> are only required for PowerFlex management controller 2.0.

VLANs flex-vmotion-<vlanid> and flex-vsan-<vlanid> are only required for PowerFlex management controller 1.0.

LACP bonding NIC port design VLAN name Former name


flex-oob-mgmt-<vlanid> con-mgmt-<vlanid>
flex-vcsa-ha-<vlanid> vcsa-ha-<vlanid>
flex-install-<vlanid> flexmgr-install-<vlanid>
flex-node-mgmt-<vlanid> hv-mgmt-<vlanid>
flex-vmotion-<vlanid> vm-migration-<vlanid>
flex-vsan-<vlanid> stor-mgmt-<vlanid>
pfmc-sds-mgmt-<vlanid> Not applicable
pfmc-sds-data1-<vlanid> Not applicable
pfmc-sds-data2-<vlanid> Not applicable
pfmc-vmotion-<vlanid> Not applicable
flex-stor-mgmt-<vlanid> fos-mgmt-<vlanid>
flex-rep1-<vlanid> Not applicable
flex-rep2-<vlanid> Not applicable
flex-data1-<vlanid> fos-data1-<vlanid>
flex-data2-<vlanid> fos-data2-<vlanid>
flex-data3-<vlanid> (if required) fos-data3-<vlanid>
flex-data4-<vlanid> (if required) fos-data4-<vlanid>
flex-tenant1-data1-<vlanid> Not applicable
flex-tenant1-data2-<vlanid> Not applicable
flex-tenant1-data3-<vlanid> Not applicable
flex-tenant1-data4-<vlanid> Not applicable
flex-tenant2-data1-<vlanid> Not applicable
flex-tenant2-data2-<vlanid> Not applicable
flex-tenant2-data3-<vlanid> Not applicable
flex-tenant2-data4-<vlanid> Not applicable
temp-dns-<vlanid> temp-dns-<vlanid>
data-prot-<vlanid> data-prot-<vlanid>
nsx-vsan-<vlanid> Not applicable
nsx-transport-<vlanid> Not applicable
nsx-edge1-<vlanid> Not applicable
nsx-edge2-<vlanid> Not applicable
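
For example, with the sample VLAN IDs used later in this guide, the PowerFlex data 1 port group on a system that uses the current LACP bonding NIC port design is named flex-data1-151, while the same port group on an older system might still appear as fos-data1-151. The VLAN ID 151 is taken from the example VLAN numbering in this guide; substitute your own VLAN IDs.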


I
Performing the initial expansion procedures
Use this section to perform the initial procedures common to both expansion scenarios.
Before adding a PowerFlex node, you must complete the following initial set of expansion procedures:
● Preparing to expand a PowerFlex appliance
● Configuring the network
After you complete the initial procedures in this section, see the following sections depending on the expansion scenario:
● Converting a PowerFlex controller node with a PERC H755 to a PowerFlex management controller 2.0
● Adding a PowerFlex controller node with a PERC H755 to a PowerFlex management controller 2.0
● Adding a PowerFlex management node to a PowerFlex management controller 1.0 with VMware vSAN
● Adding a PowerFlex R650/R750/R6525 node to a PowerFlex Manager service in managed mode
● Adding a PowerFlex R640/R740xd/R840 node to a PowerFlex Manager service in managed mode
● Adding a PowerFlex R650/R750/R6525 node in lifecycle mode
● Adding a PowerFlex R640/R740xd/R840 node to a PowerFlex Manager service in lifecycle mode


2
Preparing to expand a PowerFlex appliance
This section contains the steps you take to prepare a PowerFlex appliance.
Before adding a PowerFlex node, make a note of the IP addresses, FQDN, and the type of PowerFlex node for an expansion.
This information is used in multiple locations throughout the document.

Network requirements
Specific networks are required for a PowerFlex appliance deployment. Each network requires enough IP addresses allocated for the deployment and future expansion. If the access switches are supported by PowerFlex Manager, PowerFlex Manager configures the node-facing switch ports. Manually configure the switch ports for the PowerFlex management controller and for services discovered in lifecycle mode.
The column definitions are:
● To allow for a takeover of the PowerFlex management controller 2.0, the following VLANs are configured as general-purpose
LAN in PowerFlex Manager: 101,103,151,152,153,154.
NOTE: These VLANs will also need to be configured as their default network types.
● Example VLANs: Lists the VLAN numbers that are used in the PowerFlex hyperconverged node example.
NOTE: VLANs 140 through 143 are only required for PowerFlex management controller 2.0. VLANs 106 and 113 are only
required for PowerFlex management controller 1.0.
● Networks or VLANs: Names the network and or VLAN defined by PowerFlex Manager.
● Description: Describes each network or VLAN.
● Where configured: Indicates which resources have interfaces that are configured on the network or VLAN. The resource
definitions are:
○ PowerFlex node: PowerFlex hyperconverged node
○ PowerFlex Manager: Deploys and manages the PowerFlex appliance
○ PowerFlex gateway: Provides installation services and REST API for the PowerFlex appliance cluster.
○ Access switches: PowerFlex Manager configures the node facing ports of these switches. You configure the other ports
on the switch (management, uplinks, interconnects, and the switch ports for the PowerFlex management node).
NOTE: If PowerFlex Manager does not support the access switches, the Partial networking template must be used, and the customer must configure the switches before deploying the service. For more information, see the Customer switch port configuration examples section of the Dell EMC PowerFlex Appliance Administration Guide.
○ Embedded operating system-based jump VM: The server used to manage the PowerFlex appliance.
○ CloudLink Center: Provides key management and encryption for PowerFlex.
○ PowerFlex GUI presentation server: Provides web GUI interface for configuring and managing the PowerFlex cluster.
The following table lists VLAN descriptions:

Default VLANs | Network or VLAN | Description | Where configured
101 | Hardware management | For connection to node iDRAC interfaces and access switch management ports | PowerFlex appliance nodes, PowerFlex Manager, access switches, jump server
104 | Operating system installation | For PowerFlex Manager to node communication during full networking deployment | PowerFlex appliance nodes, PowerFlex Manager
105 | Management | Management VMs | PowerFlex Manager, Secure Remote Services gateway, vCenter, CloudLink Center, PowerFlex appliance management environment VMKernel, embedded operating system-based jump VM, and PowerFlex appliance nodes
106 | PowerFlex management controller 1.0 hypervisor migration | For VM migration | PowerFlex appliance nodes, PowerFlex management environment (with PowerFlex management controller 1.0 only)
140 | PowerFlex management controller 2.0 PowerFlex gateway management | For SDS-to-PowerFlex gateway communication | PowerFlex management controller 2.0, PowerFlex management controller 2.0 PowerFlex gateway
141, 142 | PowerFlex management controller 2.0 Data 1 and Data 2 | For SDS-to-SDS, SDS-to-SDC, and SDS-to-PFMC-PowerFlex gateway communication | PowerFlex management controller 2.0, PowerFlex management controller 2.0 PowerFlex gateway
143 | PowerFlex management controller 2.0 hypervisor migration | For VM migration | PowerFlex management environment (with PowerFlex management controller 2.0)
150 | PowerFlex management | For SDS-to-PowerFlex gateway communication | PowerFlex appliance nodes, PowerFlex gateway, PowerFlex GUI presentation server
151, 152, 153, 154 | PowerFlex data 1, PowerFlex data 2, PowerFlex data 3 (if required), PowerFlex data 4 (if required) | For SDS-to-SDS, SDS-to-SDC, and SDS-to-PowerFlex gateway communication | PowerFlex appliance nodes, PowerFlex gateway
161, 162 | PowerFlex replication 1, PowerFlex replication 2 | For SDR-SDR external communication | PowerFlex appliance nodes

NOTE:
● A minimum of two logical data networks are supported. Optionally, you can configure four logical data networks.
● Make sure that each network has a unique VLAN ID and there are no shared VLANs.
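
If you must configure node-facing switch ports manually (for example, for the PowerFlex management controller or for a service discovered in lifecycle mode), the following Cisco NX-OS interface configuration is a minimal illustrative sketch only. The interface number, the allowed VLAN list (taken from the example VLAN IDs above), and the MTU are assumptions; always use the values defined for your deployment and the Customer switch port configuration examples in the Dell EMC PowerFlex Appliance Administration Guide.

  interface Ethernet1/10
    description PowerFlex node trunk (example only)
    switchport mode trunk
    switchport trunk allowed vlan 105,150-154,161-162
    spanning-tree port type edge trunk
    mtu 9216
    no shutdown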

Software requirements
Software requirements must be met before you deploy a PowerFlex appliance.
For all the examples in this document, the following conventions are used:
● The third octets of the example IP addresses match the VLAN of the interface.
● All networks in the example have a subnet mask of 255.255.255.0.
● Use the same password when possible. For example: P@ssw0rd!.
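For example, following these conventions, a hypothetical PowerFlex node might use 192.168.151.21/24 on PowerFlex data 1 (VLAN 151) and 192.168.152.21/24 on PowerFlex data 2 (VLAN 152); the host octet (.21) is illustrative only.
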
The following table lists the virtual machine sizing guidelines:

Application RAM (GB) Number of vCPUs Hard disk (GB)


PowerFlex Manager 32 8 300
PowerFlex gateway 8 minimum 2 16
Secure Remote Services 4 2 16

Preparing to expand a PowerFlex appliance 19


Internal Use - Confidential

Application RAM (GB) Number of vCPUs Hard disk (GB)


CloudLink 6 4 64
VMware vCenter Server 32 16 1065-1765
Appliance
Embedded operating system- 8 2 320
based jump server
PowerFlex GUI presentation 6 2 16
server

The following table lists suggested VLAN and IP address configurations:

Software Credentials required License required Example IP addresses VLANs


PowerFlex GUI presentation Yes (primary MDM No 192.168.150.222 150
server credentials)
PowerFlex gateway for non- Yes No 192.168.151.102 151
bonded and static bonding NIC
port design 192.168.152.102 152

PowerFlex gateway for LACP Yes No 192.168.150.102 150


bonding NIC port design
192.168.151.102 151
192.168.152.102 152
192.168.153.102 153
192.168.154.102 154

PowerFlex management Yes No 192.168.140.102 140


controller 2.0 PowerFlex gateway
192.168.141.102 141
192.168.142.102 142

PowerFlex Manager Yes Yes 192.168.101.110 101


192.168.104.110 104
192.168.105.110 105

Secure Remote Services or Yes No 192.168.105.114 105


alerting gateway
PowerFlex nodes (SDC, SVM or Yes 192.168.151.0/24 151
PowerFlex storage-only node) for
non-bonded and static bonding 192.168.152.0/24 152
NIC port design
PowerFlex nodes (SDC, SVM or Yes 192.168.150.0/24 150
PowerFlex storage-only node) for
LACP bonding NIC port design 192.168.151.0/24 151
192.168.152.0/24 152
192.168.153.0/24 153
192.168.154.0/24 154

PowerFlex nodes (SVM or Yes 192.168.161.0/24 161


PowerFlex storage-only node)
with native asynchronous 192.168.162.0/24 162
replication enabled
VMware vCenter Yes Yes 192.168.105.105 105
CloudLink Center Yes Yes 192.168.105.120 105
192.168.105.121



PowerFlex node iDRAC Yes Yes 192.168.101.0/24 101
PowerFlex appliance node Yes Yes 192.168.105.0/24 105
VMware ESXi
192.168.106.0/24 106

Access switch Yes Yes 192.168.101.45 101

192.168.101.46

Embedded operating system- Yes No 192.168.101.125 101


based jump VM
192.168.105.125 105

PowerFlex management node Yes Yes 192.168.105.17 105


VMware ESXi (optional)
PowerFlex management node Yes Yes 192.168.113.0/24 113
VMware vSAN ESXi
PowerFlex management node Yes No 192.168.103.0/24 103
VMware vCSA HA vCenter
PowerFlex management node Yes No 192.168.106.0/24 106
VMware vMotion ESXi
PowerFlex management cluster Yes Yes 192.168.103.17 103
(optional) based on VMware
vSAN ESXi 192.168.105.17 105
192.168.106.17 106
192.168.113.17 113

PowerFlex management node Yes Yes 192.168.140.17 140


based on PowerFlex
192.168.141.17 141
for PowerFlex management
controller 2.0 192.168.142.17 142
192.168.143.17 143

PowerFlex management cluster Yes Yes 192.168.103.17 103


(optional) with vSAN
192.168.105.17 105
192.168.106.17 106
192.168.113.17 113

DNS server (customer provided) No No Customer provided N/A


NTP server (customer provided) No No Customer provided N/A
Customer router No No 192.168.<VLAN>.254

PowerFlex management controller datastore and virtual machine details
The following table explains which datastores to use:

Controller type | Volume name | Size (GB) | VMs | Domain name | Storage pool
PowerFlex management controller 1.0 | vsan_datastore | All available capacity | All | N/A | N/A

PowerFlex management controller 2.0 | vcsa | 3500 | pfmc_vcsa | PFMC | PFMC-pool
PowerFlex management controller 2.0 | general | 1600 | Management VMs, for example: Management gateway, Customer gateway, Presentation server, CloudLink, and additional VMs | PFMC | PFMC-pool
PowerFlex management controller 2.0 | pfxm | 1000 | PowerFlex Manager | PFMC | PFMC-pool

NOTE: For PowerFlex management controller 2.0, verify the capacity before adding additional VMs to the general volume.
If there is not enough capacity, expand the volume before proceeding. For more information on expanding a volume, see
Dell EMC PowerFlex Appliance Administration Guide.

Cabling the PowerFlex R650/R750/R6525 nodes


The following information describes the cabling and ports for PowerFlex R650/R750/R6525 nodes.
Use this information with the expansion documentation that is provided by MFGS-DOCC-00-A01 to cable the expansion nodes. MFGS-DOCC-00-A01 provides all the cables, labels, elevations, and port maps. Download the Expansion Service VCE_SKU MFGS-DOCC-00-A01 from Unified Dashboard. This number refers to a service provided by the Manufacturing Technical Program Management (TPM) team. When a customer orders this service, they receive a set of build documents similar to the standard as-built documents for a factory build. These documents include complete instructions for adding the hardware, the cabling, and the cable labels.
In a default PowerFlex setup, two data networks are standard. Four data networks are only required for specific customer requirements, for example, high performance or the use of trunk ports.
The following PowerFlex R750 node example is from the manufacturing documentation kit:

Cisco Nexus 93180YC-FX 1A


1-1 1-3 1-5 1-7
PowerFlex R750 node PowerFlex R750 node PowerFlex R650 node 10- PowerFlex R650 node 10-
1-1-00:01 1-2-00:01 Drive 1-3-03:01 Drive 1-4-03:01
PowerFlex R750 node PowerFlex R750 node PowerFlex R650 node 10- PowerFlex R650 node 10-
1-1-06:02 1-2-06:02 Drive 1-3-00:02 Drive 1-4-00:02
1-2 1-4 1-6 1-8

Cisco Nexus 93180YC-FX 1B


1-1 1-3 1-5 1-7
PowerFlex R750 node PowerFlex R750 node PowerFlex R650 node 10- PowerFlex R650 node 10-
1-1-06:01 1-2-06:01 Drive 1-3-00:01 Drive 1-4-00:01
PowerFlex R750 node PowerFlex R750 node PowerFlex R650 node 10- PowerFlex R650 node 10-
1-1-00:02 1-2-00:02 Drive 1-3-03:02 Drive 1-4-03:02
1-2 1-4 1-6 1-8

In the example, PowerFlex R750 node 1-1-06:01 indicates the following:


● PowerFlex R750 node 1-1 is the first PowerFlex R750 node.
● 06:01 is slot:port, which is slot 6 port 1 on the PowerFlex R750 node.


The following PowerFlex R6525 node example is from the manufacturing documentation kit:

Cisco Nexus 93180YC-FX 1A


1-1 1-3 1-5
PowerFlex R6525 node 1-1-00:01 Not applicable Not applicable
PowerFlex R6525 node 1-1-01:02 PowerFlex R6525 node 1-2-06:02 Not applicable
1-2 1-4 1-6

In the example, PowerFlex R6525 node 1-1-00:01 indicates the following:


● PowerFlex R6525 node 1-1 is the first PowerFlex R6525 compute-only node.
● 00:01 is slot:port, which is slot 0 port 1 on the PowerFlex R6525 compute-only node.

Cisco Nexus 93180YC-FX 1B


1-1 1-3 1-5
PowerFlex R6525 node 1-1-01:01 Not applicable Not applicable
PowerFlex R6525 node 1-1-00:02 Not applicable Not applicable
1-2 1-4 1-6

In the example, PowerFlex R6525 node 1-1-01:01 indicates the following:


● PowerFlex R6525 node 1-1 is the first PowerFlex R6525 compute-only node.
● 01:01 is slot:port, which is slot 1 port 1 on the PowerFlex R6525 compute-only node.
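
After the expansion nodes are cabled, you can cross-check the physical connections against the port maps from the access switch CLI. The following Cisco NX-OS commands are an illustrative example only (they assume LLDP or CDP is enabled on the connected ports) and do not replace the MFGS-DOCC-00-A01 build documentation:

  show interface status
  show lldp neighbors
  show cdp neighbors

The first command confirms link state and speed on the node-facing ports; the neighbor commands show which server or iDRAC port is seen on each switch port.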

Related information
Configure the host

Cabling requirements for PowerFlex controller node


The two PowerFlex data network connections are on different NICs and are connected to each access switch. The two
management network connections are on different NICs and are connected to each access switch. Redundancy and throughput
are the main considerations when cabling the PowerFlex management node.
NOTE: The PowerFlex appliance iDRAC NICs are connected to separate switches. This is shown in the figure as the
out-of-band management switch.

The following information describes the cabling requirements for PowerFlex management node:
Slot layout

PowerFlex R650 node Dual CPU


Slot 0 (OCP) CX5
Slot 1 CX5
Slot 2 Empty
Slot 3 CX5

ESXi interface dvSwitch Switch Speed Port


vmnic2 FE_dvSwitch Access switch A 25 GB 00:01



vmnic3 BE_dvSwitch Access switch B 25 GB 00:02
vmnic6 FE_dvSwitch Access switch B 25 GB 03:01
vmnic7 BE_dvSwitch Access switch A 25 GB 03:02
vmnic4 oob_dvSwitch Out-of-band 10 GB 01:01
management switch
iDRAC N/A iDRAC out-of-band 1 GB M0
network
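
To confirm that the cabled ports enumerate as the vmnic numbers shown above, you can list the physical NICs from the ESXi shell of the PowerFlex controller node. This is an optional, illustrative check; the vmnic numbering in the table is authoritative for this configuration:

  esxcli network nic list
  esxcli network nic get -n vmnic2

The first command shows each vmnic with its PCI address, link state, and speed; the second shows details for a single uplink (vmnic2 is used here only as an example).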

Cabling requirements for PowerFlex hyperconverged nodes


The following information describes the cabling requirements for PowerFlex hyperconverged nodes.

PowerFlex R650 hyperconverged nodes


PowerFlex R650 hyperconverged slot layout and logical network information.

PowerFlex R650 nodes with NVMe/SAS/SATA (4 x 25 GB)

Slot layout

PowerFlex R650 node Dual CPU


Slot 0 (OCP) CX5
Slot 1 Empty
Slot 2 Empty
Slot 3 CX5

Logical network

Description Slot:Port VMW label dvSwitch


Trunk 1 to switch A 00:01 vmnic2 cust_dvswitch
(management, data1,
data3,rep1)

Trunk 1 to switch B 03:01 vmnic4 cust_dvswitch


(management, data1,
data3,rep1)

Trunk 2 to switch A 03:02 vmnic5 flex_dvswitch


(data2, data4,rep2)

Trunk 2 to switch B 00:02 vmnic3 flex_dvswitch


(data2, data4,rep2)

iDRAC - OOB network M0 Not applicable Not applicable
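
As a hedged example, after a hyperconverged node has been added to the distributed switches you can confirm from the ESXi shell that the uplinks match the table above (for this 4 x 25 GB configuration, vmnic2 and vmnic4 on cust_dvswitch, and vmnic3 and vmnic5 on flex_dvswitch):

  esxcli network vswitch dvs vmware list

The command lists each distributed switch with its uplinks and MTU.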


PowerFlex R650 nodes NVMe/SAS/SATA with 2 GPUs (4 x 25 GB)

Slot layout

PowerFlex R650 nodes 8* with NVMe Dual CPU


Slot 0 (OCP) CX5
Slot 1 GPU2 (T4: SW)
Slot 2 GPU1 (T4: SW)
Slot 3 CX5

Logical network

Description Slot:Port VMW label dvSwitch


Trunk 1 to switch A 00:01 vmnic2 cust_dvswitch
(management, data1,
data3,rep1)

Trunk 1 to switch B 03:01 vmnic4 cust_dvswitch


(management, data1,
data3,rep1)

Trunk 2 to switch A 03:02 vmnic5 flex_dvswitch


(data2, data4,rep2)

Trunk 2 to switch B 00:02 vmnic3 flex_dvswitch


(data2, data4,rep2)

iDRAC - OOB network M0 Not applicable Not applicable

PowerFlex R650 nodes NVMe/SAS/SATA (6 x 25 GB)

Slot layout

PowerFlex R650 nodes 10* with NVMe Dual CPU


Slot 0 (OCP) CX5
Slot 1 CX5
Slot 2 Empty
Slot 3 CX5

Logical network

Description Slot:Port VMW label dvSwitch


Trunk 1 to switch A 00:01 vmnic2 cust_dvswitch


(management, data1,
data3, rep1)

Trunk 2 to switch B 00:02 vmnic3 flex_dvswitch


(data2, data4, rep2)

Customer access A 01:01 vmnic4 Customer fabric


Trunk 1 to switch B 01:02 vmnic5 cust_dvswitch
(management, data1,
data3, rep1)

Customer access B 03:01 vmnic6 Customer fabric


Trunk 2 to switch A 03:02 vmnic7 flex_dvswitch
(data2, data4, rep2)

iDRAC - OOB network M0 Not applicable Not applicable

PowerFlex R650 nodes NVMe/SAS/SATA with 1 GPU (6 x 25 GB)

Slot layout

PowerFlex R650 nodes 10* with NVMe Dual CPU


Slot 0 (OCP) CX5
Slot 1 CX5
Slot 2 GPU1 (T4:SW)
Slot 3 CX5

Logical network

Description Slot:Port VMW label dvSwitch


Trunk 1 to switch A 00:01 vmnic2 cust_dvswitch
(management, data1,
data3, rep1)

Trunk 2 to switch B 00:02 vmnic3 flex_dvswitch


(data2, data4,rep2)

Customer access A 01:01 vmnic4 Customer fabric


Trunk 1 to switch B 01:02 vmnic5 cust_dvswitch
(management, data1,
data3, rep1)

Customer access B 03:01 vmnic6 Customer fabric


Trunk 2 to switch A 03:02 vmnic7 flex_dvswitch
(data2, data4,rep2)

iDRAC - OOB network M0 Not applicable Not applicable


PowerFlex R650 nodes NVMe/SAS/SATA (4 x 100 GB)

Slot layout

PowerFlex R650 nodes 10* with NVMe Dual CPU


Slot 0 (OCP) Empty
Slot 1 CX6-DX
Slot 2 CX6-DX
Slot 3 Empty

Logical network

Description Slot:Port VMW label dvSwitch


Trunk 1 to switch A 01:01 vmnic2 cust_dvswitch
(management, data1,
data3,rep1)

Trunk 1 to switch B 02:01 vmnic4 cust_dvswitch


(management, data1,
data3,rep1)

Trunk 2 to switch A 01:02 vmnic3 flex_dvswitch


(data2, data4,rep2)

Trunk 2 to switch B 02:02 vmnic5 flex_dvswitch


(data2, data4,rep2)

iDRAC - OOB Network M0 Not applicable Not applicable

PowerFlex R750 hyperconverged nodes


PowerFlex R750 hyperconverged slot layout and logical network information.

PowerFlex R750 nodes with SAS/SATA (4 x 25 GB)

Slot layout

PowerFlex R750 nodes Dual CPU/GPU


Slot 0 (OCP) CX5

Slot 1 Empty
Slot 2 Empty
Slot 3 Empty
Slot 4 Empty
Slot 5 CX5
Slot 6 Empty
Slot 7 Empty
Slot 8 Empty

Logical network

Description Slot:Port VMW label dvSwitch


Trunk 1 to switch A 00:01 vmnic2 cust_dvswitch
Trunk 1 to switch B 05:01 vmnic4 cust_dvswitch
Trunk 2 to switch A 05:02 vmnic5 flex_dvswitch
Trunk 2 to switch B 00:02 vmnic3 flex_dvswitch
iDRAC - OOB network M0 Not applicable Not applicable

PowerFlex R750 nodes with NVMe (4 x 25 GB)

Slot layout

PowerFlex R750 node with NVMe Dual CPU


Slot 0 (OCP) CX5
Slot 1 Empty
Slot 3 Empty
Slot 4 Empty
Slot 5 Empty
Slot 6 CX5
Slot 7 Empty
Slot 8 Empty

Logical network

Description Slot:Port VMW label dvSwitch


Trunk 1 to switch A 00:01 vmnic2 cust_dvswitch



Trunk 1 to switch B 06:01 vmnic4 cust_dvswitch
Trunk 2 to switch A 01:02 vmnic1 flex_dvswitch
(data)
Trunk 2 to switch B 06:02 vmnic3 flex_dvswitch
(data)
iDRAC - OOB network M0 Not applicable Not applicable

PowerFlex R750 nodes with NVMe and GPU (SW)

Slot layout

PowerFlex R750 node with NVMe and GPU (SW) Dual CPU
Slot 0 (OCP) CX5
Slot 1 Empty
Slot 2 Empty
Slot 3 GPU (A10/T4:SW)
Slot 4 Empty
Slot 5 Empty
Slot 6 CX5
Slot 7 Empty
Slot 8 Empty

Logical network

Description Slot:Port VMW label dvSwitch


Trunk 1 to switch A 00:01 vmnic2 cust_dvswitch
(management, data1,
data3, rep1)
Trunk 1 to switch B 06:01 vmnic4 cust_dvswitch
(management, data1,
data3, rep1)
Trunk 2 to switch A 06:02 vmnic5 flex_dvswitch
(data2, data4, rep2)
Trunk 2 to switch B 00:02 vmnic3 flex_dvswitch
(data2, data4, rep2)
iDRAC - OOB network M0 Not applicable Not applicable


PowerFlex R750 nodes with NVMe and 2 GPUs (DW)

Slot layout

PowerFlex R750 node with NVMe and 2 GPUs (DW) Dual CPU
Slot 0 (OCP) CX5
Slot 1 Empty
Slot 2 GPU2 (DW)
Slot 3 Empty
Slot 4 Empty
Slot 5 Empty
Slot 6 CX5
Slot 7 GPU1 (DW)
Slot 8 Empty

Logical network

Description Slot:Port VMW label dvSwitch


Trunk 1 to switch A 00:01 vmnic2 cust_dvswitch
(management, data1,
data3, rep1)
Trunk 1 to switch B 06:01 vmnic4 cust_dvswitch
(management, data1,
data3, rep1)
Trunk 2 to switch A 06:02 vmnic5 flex_dvswitch
(data2, data4, rep2)
Trunk 2 to switch B 00:02 vmnic3 flex_dvswitch
(data2, data4, rep2)
iDRAC - OOB network M0 Not applicable Not applicable

PowerFlex R750 nodes with NVMe and 1 GPU (DW)

Slot layout


PowerFlex R750 node with NVMe and 1 GPU (DW) Dual CPU
Slot 0 (OCP) CX5
Slot 1 Empty
Slot 2 Empty
Slot 3 Empty
Slot 4 Empty
Slot 5 Empty
Slot 6 CX5
Slot 7 GPU1 (DW)
Slot 8 Empty

Logical network

Description Slot:Port VMW label dvSwitch


Trunk 1 to switch A 00:01 vmnic2 cust_dvswitch
(management, data1,
data3, rep1)
Trunk 1 to switch B 06:01 vmnic4 cust_dvswitch
(management, data1,
data3, rep1)
Trunk 2 to switch A 06:02 vmnic5 flex_dvswitch
(data2, data4, rep2)
Trunk 2 to switch B 00:02 vmnic3 flex_dvswitch
(data2, data4, rep2)
iDRAC - OOB network M0 Not applicable Not applicable

PowerFlex R750 nodes with NVMe/SAS/SATA and 2 GPUs (SW)

Slot layout

PowerFlex R750 node with NVMe/SAS/SATA Dual CPU


Slot 0 (OCP) CX5
Slot 1 Empty
Slot 2 Empty
Slot 3 GPU2 (SW)
Slot 4 Empty
Slot 5 CX5
Slot 6 GPU1 (SW)



Slot 7 Empty
Slot 8 Empty

Logical network

Description Slot:Port VMW label dvSwitch


Trunk 1 to switch A 00:01 vmnic2 cust_dvswitch
(management, data1,
data3, rep1)
Trunk 1 to switch B 05:01 vmnic4 cust_dvswitch
(management, data1,
data3, rep1)
Trunk 2 to switch A 05:02 vmnic5 flex_dvswitch
(data2, data4, rep2)
Trunk 2 to switch B 00:02 vmnic3 flex_dvswitch
(data2, data4, rep2)
iDRAC - OOB network M0 Not applicable Not applicable

PowerFlex R750 nodes with NVMe (6 x 25 GB)

Slot layout

PowerFlex R750 node with NVMe Dual CPU


Slot 0 (OCP) CX5
Slot 1 Empty
Slot 2 Empty
Slot 3 CX5
Slot 4 Empty
Slot 5 Empty
Slot 6 CX5
Slot 7 GPU1 (SW)
Slot 8 Empty

Logical network

Description Slot:Port VMW label dvSwitch


iDRAC - OOB network M0 Not applicable Not applicable
Trunk 1 to switch A 00:01 vmnic2 cust_dvswitch
(management, data1,
data3, rep1)



Trunk 2 to switch B 00:02 vmnic3 flex_dvswitch
(data2, data4, rep2)
Customer access A 03:01 vmnic4 Customer fabric
Trunk 1 to switch B 03:02 vmnic5 cust_dvswitch
(management, data1,
data3, rep1)
Customer access B 06:01 vmnic6 Customer fabric
Trunk 2 to switch A 06:02 vmnic7 flex_dvswitch
(data2, data4, rep2)

PowerFlex R750 nodes with SAS/SATA (6 x 25 GB)

Slot layout

PowerFlex R750 node with SAS/SATA Dual CPU


Slot 0 (OCP) CX5
Slot 1 CX5
Slot 2 Empty
Slot 3 Empty
Slot 4 Empty
Slot 5 CX5
Slot 6 Empty
Slot 7 GPU1 (SW)
Slot 8 Empty

Logical network

Description Slot:Port VMW label dvSwitch


Trunk 1 to switch A 00:01 vmnic2 cust_dvswitch
(management, data1,
data3, rep1)
Trunk 2 to switch B 00:02 vmnic3 flex_dvswitch
(data2, data4, rep2)
Customer access A 01:01 vmnic4 Customer fabric
Trunk 1 to switch B 01:02 vmnic5 cust_dvswitch
(management, data1,
data3, rep1)
Customer access B 06:01 vmnic6 Customer fabric



Trunk 2 to switch A 06:02 vmnic7 flex_dvswitch
(data2, data4, rep2)
iDRAC - OOB network M0 Not applicable Not applicable

PowerFlex R750 nodes with SAS/SATA/NVMe - 6 x 25 GB and 2 GPU (SW)

Slot layout

PowerFlex R750 node with SAS/SATA/NVMe Dual CPU


Slot 0 (OCP) CX5
Slot 1 Empty
Slot 2 GPU2 (SW)
Slot 3 CX5
Slot 4 Empty
Slot 5 Empty
Slot 6 CX5
Slot 7 GPU1 (SW)
Slot 8 Empty

Logical network

Description Slot:Port VMW label dvSwitch


iDRAC - OOB network M0 Not applicable Not applicable
Trunk 1 to switch A 00:01 vmnic2 cust_dvswitch
(management, data1,
data3, rep1)
Trunk 2 to switch B 00:02 vmnic3 flex_dvswitch
(data2, data4, rep2)
Customer access A 03:01 vmnic4 Customer fabric
Trunk 1 to switch B 03:02 vmnic5 cust_dvswitch
(management, data1,
data3, rep1)
Customer access B 06:01 vmnic6 Customer fabric
Trunk 2 to switch A 06:02 vmnic7 flex_dvswitch
(data2, data4, rep2)


PowerFlex R750 nodes with SAS/SATA/NVMe - 6 x 25 GB and 1 GPU (SW)

Slot layout

PowerFlex R750 node with SAS/SATA/NVMe Dual CPU


Slot 0 (OCP) CX5
Slot 1 Empty
Slot 2 Empty
Slot 3 CX5
Slot 4 Empty
Slot 5 Empty
Slot 6 CX5
Slot 7 GPU1 (SW)
Slot 8 Empty

Logical network

Description Slot:Port VMW label dvSwitch


iDRAC - OOB network M0 Not applicable Not applicable
Trunk 1 to switch A 00:01 vmnic2 cust_dvswitch
(management, data1,
data3, rep1)
Trunk 2 to switch B 00:02 vmnic3 flex_dvswitch
(data2, data4, rep2)
Customer access A 03:01 vmnic4 Customer fabric
Trunk 1 to switch B 03:02 vmnic5 cust_dvswitch
(management, data1,
data3, rep1)
Customer access B 06:01 vmnic6 Customer fabric
Trunk 2 to switch A 06:02 vmnic7 flex_dvswitch
(data2, data4, rep2)


PowerFlex R750 nodes with NVMe/SAS/SATA (4 x 100 GB)

Slot layout

PowerFlex R750 node with NVMe/SAS/SATA Dual CPU


Slot 0 (OCP) Empty
Slot 1 Empty
Slot 2 Empty
Slot 3 CX6-DX
Slot 4 Empty
Slot 5 Empty
Slot 6 CX6-DX
Slot 7 Empty
Slot 8 Empty

Logical network

Description Slot:Port VMW label dvSwitch


Trunk 1 to switch A 02:01 vmnic2 cust_dvswitch
(management, data1,
data3, rep1)
Trunk 1 to switch B 06:01 vmnic4 cust_dvswitch
(management, data1,
data3, rep1)
Trunk 2 to switch A 06:02 vmnic5 flex_dvswitch
(data2, data4, rep2)
Trunk 2 to switch B 02:02 vmnic3 flex_dvswitch
(data2, data4, rep2)
iDRAC - OOB network M0 Not applicable Not applicable

PowerFlex R750 nodes with SAS/SATA 4 x 100 GB and 2 GPUs (DW)

Slot layout


PowerFlex R750 node with SAS/SATA Dual CPU


Slot 0 (OCP) Empty
Slot 1 Empty
Slot 2 GPU2
Slot 3 CX6-DX
Slot 4 Empty
Slot 5 Empty
Slot 6 CX6-DX
Slot 7 GPU1
Slot 8 Empty

Logical network

Description Slot:Port VMW label dvSwitch


Trunk 1 to switch A 02:01 vmnic2 cust_dvswitch
(management, data1,
data3, rep1)
Trunk 1 to switch B 06:01 vmnic4 cust_dvswitch
(management, data1,
data3, rep1)
Trunk 2 to switch A 06:02 vmnic5 flex_dvswitch
(data2, data4, rep2)
Trunk 2 to switch B 02:02 vmnic3 flex_dvswitch
(data2, data4, rep2)
iDRAC - OOB network M0 Not applicable Not applicable

Cabling requirements for PowerFlex storage-only nodes


The following information describes the cabling requirements for PowerFlex storage-only nodes.

PowerFlex R650 storage-only nodes


PowerFlex R650 storage-only node slot layout and logical network configurations.

PowerFlex R650 NVMe/SAS/SATA 4 x 25 GB (single CPU)

Slot layout

PowerFlex R650 nodes Single CPU


Slot 0 (OCP) CX5
Slot 1 CX5
Slot 2 Empty
Slot 3 Empty


Logical network

Description Slot:Port VMW label dvSwitch for LACP bonding NIC
Trunk 1 to switch A 00:01 em3 bond0
Trunk 1 to switch B 01:01 p1p1 bond0
Trunk 2 to switch A 01:02 p1p2 bond1
Trunk 2 to switch B 00:02 em4 bond1
iDRAC - oob network M0 Not applicable Not applicable
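
On PowerFlex storage-only nodes, the bond interfaces are normally created by the deployment tooling rather than by hand. The following commands are only a minimal sketch of what an 802.3ad (LACP) bond0 over the two trunk 1 ports in the table above (em3 and p1p1) could look like, assuming the embedded operating system uses NetworkManager; the bonding options and connection names shown here are assumptions:

  nmcli connection add type bond ifname bond0 con-name bond0 bond.options "mode=802.3ad,miimon=100,xmit_hash_policy=layer3+4"
  nmcli connection add type bond-slave ifname em3 con-name bond0-em3 master bond0
  nmcli connection add type bond-slave ifname p1p1 con-name bond0-p1p1 master bond0

VLAN sub-interfaces for the PowerFlex management, data, and replication networks are then layered on top of bond0 and bond1 according to the logical network tables in this section.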

PowerFlex R650 NVMe/SAS/SATA 4 x 25 GB

Slot layout

PowerFlex R650 nodes Dual CPU


Slot 0 (OCP) CX5
Slot 1 Empty
Slot 2 Empty
Slot 3 CX5

Logical network

Description Slot:Port VMW label dvSwitch for LACP bonding NIC
Trunk 1 to switch A 00:01 em3 bond0
Trunk 1 to switch B 03:01 p3p1 bond0
Trunk 2 to switch A 03:02 p3p2 bond1
Trunk 2 to switch B 00:02 em4 bond1
iDRAC - oob network M0 Not applicable Not applicable

PowerFlex R650 with NVMe/SAS/SATA 4 x 100 GB

Slot layout

PowerFlex R650 nodes CPU


Slot 0 (OCP) Empty
Slot 1 CX6-DX
Slot 2 CX6-DX
Slot 3 Empty


Logical network

Description Slot:Port NIC label dvSwitch for LACP bonding NIC


Trunk 1 to switch A 01:01 p1p1 bond0
Trunk 1 to switch B 02:01 p2p1 bond0
Trunk 2 to switch A 02:02 p2p2 bond1
Trunk 2 to switch B 01:02 p1p2 bond1
iDRAC - oob network M0 Not applicable Not applicable

PowerFlex R750 storage-only nodes


PowerFlex R750 storage-only node slot layout and logical network configurations.

PowerFlex R750 with NVMe 4 x 25 GB

Slot layout

PowerFlex R750 nodes with NVMe Dual CPU


Slot 0 (OCP) CX5
Slot 1 Empty
Slot 2 Empty
Slot 3 Empty
Slot 4 Empty
Slot 5 Empty
Slot 6 CX5
Slot 7 Empty
Slot 8 Empty

Logical network

Description Slot:Port Device Bond


Trunk 1 to switch A 00:01 em3 bond0
(mgmt,data1,data3, rep1)
Trunk 1 to switch B 06:01 p6p1 bond0
(mgmt,data1,data3, rep1)
Trunk 2 to switch A (data2, 06:02 p6p2 bond1
data4, rep2)
Trunk 2 to switch B (data2, 00:02 em4 bond1
data4, rep2)



iDRAC - oob network M0 Not applicable Not applicable

PowerFlex R750 with SATA/SSD 4 x 25 GB

Slot layout

PowerFlex R750 nodes Dual CPU


Slot 0 (OCP) CX5
Slot 1 Empty
Slot 2 Empty
Slot 3 Empty
Slot 4 Empty
Slot 5 Empty
Slot 6 CX5
Slot 7 Empty
Slot 8 Empty

Logical network

Description Slot:Port Device Bond


Trunk 1 to switch A 00:01 em3 bond0
(mgmt,data1,data3, rep1)
Trunk 1 to switch B 05:01 p5p1 bond0
(mgmt,data1,data3, rep1)
Trunk 2 to switch A (data2, 05:02 p5p2 bond1
data4, rep2)
Trunk 2 to switch B (data2, 00:02 em4 bond1
data4, rep2)
iDRAC - oob network M0 Not applicable Not applicable


PowerFlex R750 with NVMe/SAS/SATA 4 x 100 GB

Slot layout

PowerFlex R750 nodes Dual CPU


Slot 0 (OCP) CX5
Slot 1 Empty
Slot 2 Empty
Slot 3 CX5
Slot 4 Empty
Slot 5 Empty
Slot 6 CX5
Slot 7 Empty
Slot 8 Empty

Logical network

Description Slot:Port Device Bond


Trunk 1 to switch A 00:01 em3 bond0
(mgmt,data1,data3, rep1)
Trunk 1 to switch B 05:01 p5p1 bond0
(mgmt,data1,data3, rep1)
Trunk 2 to switch A (data2, 05:02 p5p2 bond1
data4, rep2)
Trunk 2 to switch B (data2, 00:02 em4 bond1
data4, rep2)
iDRAC - oob network M0 Not applicable Not applicable


Cabling requirements for PowerFlex compute-only nodes


The following information describes the cabling requirements for PowerFlex compute-only nodes.

PowerFlex R650 compute-only nodes


PowerFlex R650 compute-only node slot layout and logical network configurations.

PowerFlex R650 with single CPU

Slot layout

PowerFlex R650 nodes with single CPU Single CPU


Slot 0 (OCP) CX5
Slot 1 CX5
Slot 2 Empty
Slot 3 Empty

Logical network

Description Slot:Port VMW label dvSwitch for LACP bonding NIC
Trunk 1 to switch A 00:01 vmnic2 cust_dvswitch
Trunk 1 to switch B 01:01 vmnic4 cust_dvswitch
Trunk 2 to switch A 01:02 vmnic5 flex_dvswitch
Trunk 2 to switch B 00:02 vmnic3 flex_dvswitch
iDRAC - oob network M0 Not applicable Not applicable

PowerFlex R650 with dual CPU

Slot layout

PowerFlex R650 nodes with dual CPU Dual CPU


Slot 0 (OCP) CX5
Slot 1 Empty
Slot 2 Empty
Slot 3 CX5

Logical network


Description Slot:Port NIC label VMW label


Trunk 1 to switch A 00:01 vmnic2 cust_dvswitch
Trunk 1 to switch B 03:01 vmnic4 cust_dvswitch
Trunk 2 to switch A 03:02 vmnic5 flex_dvswitch
Trunk 2 to switch B 00:02 vmnic3 flex_dvswitch
iDRAC - oob network M0 Not applicable Not applicable

PowerFlex R650 with dual CPU 6 x 25 GB

Slot layout

PowerFlex R650 nodes with dual CPU 6x25 GB Dual CPU


Slot 0 (OCP) CX5
Slot 1 CX5
Slot 2 Empty
Slot 3 CX5

Logical network

Description Slot:Port NIC label VMW label


Trunk 1 to switch A 00:01 vmnic2 cust_dvswitch
Trunk 2 to switch B 00:02 vmnic3 flex_dvswitch
Customer access A 01:01 vmnic4 Customer fabric
Trunk 1 to switch B 01:02 vmnic5 cust_dvswitch
Customer access B 03:01 vmnic6 Customer fabric
Trunk 2 to switch A 03:02 vmnic7 flex_dvswitch
iDRAC - oob network M0 Not applicable Not applicable

PowerFlex R650 with 6 x 25 GB and GPU

Slot layout

PowerFlex R650 nodes Dual CPU


Slot 0 (OCP) CX5
Slot 1 CX5
Slot 2 GPU1 (T4:SW)
Slot 3 CX5


Logical network

Description Slot:Port NIC label VMW label


Trunk 1 to switch A 00:01 vmnic2 cust_dvswitch
Trunk 2 to switch B 00:02 vmnic3 flex_dvswitch
Customer access A 01:01 vmnic4 Customer fabric
Trunk 1 to switch B 01:02 vmnic5 cust_dvswitch
Customer access B 03:01 vmnic6 Customer fabric
Trunk 2 to switch A 03:02 vmnic7 flex_dvswitch
iDRAC - oob network M0 Not applicable Not applicable

PowerFlex R650 4 x 100 GB

Slot layout

PowerFlex R650 nodes Dual CPU


Slot 0 (OCP) Empty
Slot 1 CX6-DX
Slot 2 CX6-DX
Slot 3 Empty

Logical network

Description Slot:Port NIC label VMW label


Trunk 1 to switch A 01:01 vmnic2 cust_dvswitch
Trunk 1 to switch B 02:01 vmnic4 cust_dvswitch
Trunk 2 to switch A (data) 01:02 vmnic3 flex_dvswitch
Trunk 2 to switch B (data) 02:02 vmnic5 flex_dvswitch
iDRAC - oob network M0 Not applicable Not applicable

PowerFlex R650 4 x 100 GB with GPU

Slot layout

PowerFlex R650 nodes Dual CPU


Slot 0 (OCP) Empty
Slot 1 CX6-DX
Slot 2 GPU1



Slot 3 CX6-DX

Logical network

Description Slot:Port NIC label VMW label


Trunk 1 to switch A 01:01 vmnic2 cust_dvswitch
Trunk 1 to switch B 02:01 vmnic4 cust_dvswitch
Trunk 2 to switch A (data) 01:02 vmnic3 flex_dvswitch
Trunk 2 to switch B (data) 02:02 vmnic5 flex_dvswitch
iDRAC - oob network M0 Not applicable Not applicable

PowerFlex R750 compute-only nodes


PowerFlex R750 compute-only node slot layout and logical network configurations.

PowerFlex R750 (4 x 25 GB)

Slot layout

PowerFlex R750 nodes Dual CPU


Slot 0 (OCP) CX5
Slot 1 Empty
Slot 2 Empty
Slot 3 Empty
Slot 4 Empty
Slot 5 Empty
Slot 6 CX5
Slot 7 Empty
Slot 8 Empty

Logical network

Description Slot:Port VMW label dvSwitch


Trunk 1 to switch A 00:01 vmnic2 cust_dvswitch
Trunk 1 to switch B 06:01 vmnic4 cust_dvswitch
Trunk 2 to switch A (data) 01:02 vmnic1 flex_dvswitch
Trunk 2 to switch B (data) 06:02 vmnic3 flex_dvswitch



iDRAC - OOB management M0 Not applicable Not applicable

PowerFlex R750 (4 x 25 GB) and 1 GPU (SW) A10

Slot layout

PowerFlex R750 nodes Dual CPU


Slot 0 (OCP) CX5
Slot 1 Empty
Slot 2 CX5
Slot 3 Empty
Slot 4 Empty
Slot 5 Empty
Slot 6 Empty
Slot 7 GPU1
Slot 8 Empty

Logical network

Description Slot:Port VMW label dvSwitch


Trunk 1 to switch A 00:01 vmnic2 cust_dvswitch
Trunk 1 to switch B 02:01 vmnic4 cust_dvswitch
Trunk 2 to switch A (data) 02:02 vmnic5 flex_dvswitch
Trunk 2 to switch B (data) 00:02 vmnic3 flex_dvswitch
iDRAC - OOB management M0 Not applicable Not applicable

PowerFlex R750 (4 x 25 GB) and 1 GPU (SW) T4

Slot layout


PowerFlex R750 nodes Dual CPU


Slot 0 (OCP) CX5
Slot 1 Empty
Slot 2 Empty
Slot 3 GPU (A10/T4:SW)
Slot 4 Empty
Slot 5 Empty
Slot 6 CX5
Slot 7 Empty
Slot 8 Empty

Logical network

Description Slot:Port VMW label dvSwitch


Trunk 1 to switch A 00:01 vmnic2 cust_dvswitch
(mgmt,data1,data3, rep1)
Trunk 1 to switch B 06:01 vmnic4 cust_dvswitch
(mgmt,data1,data3, rep1)
Trunk 2 to switch A (data2, 06:02 vmnic5 flex_dvswitch
data4, rep2)
Trunk 2 to switch B (data2, 00:02 vmnic3 flex_dvswitch
data4, rep2)
iDRAC - OOB management M0 Not applicable Not applicable

PowerFlex R750 (4 x 25 GB) and 2 GPUs (SW)

Slot layout

PowerFlex R750 nodes Dual CPU


Slot 0 (OCP) CX5
Slot 1 Empty
Slot 2 Empty
Slot 3 GPU2 (SW)
Slot 4 Empty
Slot 5 Empty
Slot 6 CX5
Slot 7 GPU1 (SW)
Slot 8 Empty


Logical network

Description Slot:Port VMW label dvSwitch


Trunk 1 to switch A 00:01 vmnic2 cust_dvswitch
(mgmt,data1,data3, rep1)
Trunk 1 to switch B 06:01 vmnic4 cust_dvswitch
(mgmt,data1,data3, rep1)
Trunk 2 to switch A (data2, 06:02 vmnic5 flex_dvswitch
data4, rep2)
Trunk 2 to switch B (data2, 00:02 vmnic3 flex_dvswitch
data4, rep2)
iDRAC - OOB management M0 Not applicable Not applicable

PowerFlex R750 (4 x 25 GB) and 1 GPU (DW)

Slot layout

PowerFlex R750 nodes Dual CPU


Slot 0 (OCP) CX5
Slot 1 Empty
Slot 2 Empty
Slot 3 Empty
Slot 4 Empty
Slot 5 Empty
Slot 6 CX5
Slot 7 GPU1 (DW)
Slot 8 Empty

Logical network

Description Slot:Port VMW label dvSwitch


Trunk 1 to switch A 00:01 vmnic2 cust_dvswitch
(mgmt,data1,data3, rep1)
Trunk 1 to switch B 06:01 vmnic4 cust_dvswitch
(mgmt,data1,data3, rep1)
Trunk 2 to switch A (data2, 06:02 vmnic5 flex_dvswitch
data4, rep2)
Trunk 2 to switch B (data2, 00:02 vmnic3 flex_dvswitch
data4, rep2)
iDRAC - OOB management M0 Not applicable Not applicable


PowerFlex R750 (4 x 25 GB) and 2 GPUs (DW)

Slot layout

PowerFlex R750 nodes Dual CPU


Slot 0 (OCP) CX5
Slot 1 Empty
Slot 2 GPU2 (DW)
Slot 3 Empty
Slot 4 Empty
Slot 5 Empty
Slot 6 CX5
Slot 7 GPU1 (DW)
Slot 8 Empty

Logical network

Description Slot:Port VMW label dvSwitch


Trunk 1 to switch A (mgmt, 00:01 vmnic2 cust_dvswitch
data1, data3, rep1)
Trunk 1 to switch B (mgmt, 06:01 vmnic4 cust_dvswitch
data1, data3, rep1)
Trunk 2 to switch A (data2, 06:02 vmnic5 flex_dvswitch
data4, rep2)
Trunk 2 to switch B (data2, 00:02 vmnic3 flex_dvswitch
data4, rep2)
iDRAC - OOB management M0 Not applicable Not applicable

PowerFlex R750 (6 x 25 GB)

Slot layout


PowerFlex R750 nodes Dual CPU


Slot 0 (OCP) CX5
Slot 1 Empty
Slot 2 Empty
Slot 3 CX5
Slot 4 Empty
Slot 5 Empty
Slot 6 CX5
Slot 7 Empty
Slot 8 Empty

Logical network

Description Slot:Port VMW label dvSwitch


Trunk 1 to switch A 00:01 vmnic2 cust_dvswitch
Trunk 2 to switch B 00:02 vmnic3 flex_dvswitch
Customer access A 03:01 vmnic4 Customer fabric
Trunk 1 to switch B 03:02 vmnic5 cust_dvswitch
Customer access B 06:01 vmnic6 Customer fabric
Trunk 2 to switch A 06:02 vmnic7 flex_dvswitch
iDRAC - OOB management M0 Not applicable Not applicable

PowerFlex R750 with a single GPU (6 x 25 GB)

Slot layout

PowerFlex R750 nodes Dual CPU


Slot 0 (OCP) CX5
Slot 1 Empty
Slot 2 Empty
Slot 3 CX5
Slot 4 Empty
Slot 5 Empty
Slot 6 CX5
Slot 7 GPU1 (SW)
Slot 8 Empty


Logical network

Description Slot:Port VMW label dvSwitch


Trunk 1 to switch A 00:01 vmnic2 cust_dvswitch
Trunk 2 to switch B 00:02 vmnic3 flex_dvswitch
Customer access A 03:01 vmnic4 Customer fabric
Trunk 1 to switch B 03:02 vmnic5 cust_dvswitch
Customer access B 06:01 vmnic6 Customer fabric
Trunk 2 to switch A 06:02 vmnic7 flex_dvswitch
iDRAC - oob network M0 Not applicable Not applicable

PowerFlex R750 with 2 GPUs (6 x 25 GB)

Slot layout

PowerFlex R750 nodes Dual CPU


Slot 0 (OCP) CX5
Slot 1 Empty
Slot 2 GPU2 (SW)
Slot 3 CX5
Slot 4 Empty
Slot 5 Empty
Slot 6 CX5
Slot 7 GPU1 (SW)
Slot 8 Empty

Logical network

Description Slot:Port VMW label dvSwitch


Trunk 1 to switch A 00:01 vmnic2 cust_dvswitch
Trunk 2 to switch B 00:02 vmnic3 flex_dvswitch
Customer access A 03:01 vmnic4 Customer fabric
Trunk 1 to switch B 03:02 vmnic5 cust_dvswitch
Customer access B 06:01 vmnic6 Customer fabric
Trunk 2 to switch A 06:02 vmnic7 flex_dvswitch
iDRAC - oob network M0 Not applicable Not applicable


PowerFlex R750 (6 x 25 GB) and 1 GPU (DW)

Slot layout

PowerFlex R750 nodes Dual CPU


Slot 0 (OCP) CX5
Slot 1 Empty
Slot 2 Empty
Slot 3 CX5
Slot 4 Empty
Slot 5 Empty
Slot 6 CX5
Slot 7 GPU1
Slot 8 Empty

Logical network

Description Slot:Port VMW label dvSwitch


Trunk 1 to switch A 00:01 vmnic2 cust_dvswitch
Trunk 2 to switch B 00:02 vmnic3 flex_dvswitch
Customer access A 03:01 vmnic4 Customer fabric
Trunk 1 to switch B 03:02 vmnic5 cust_dvswitch
Customer access B 06:01 vmnic6 Customer fabric
Trunk 2 to switch A 06:02 vmnic7 flex_dvswitch
iDRAC - oob network M0 Not applicable Not applicable

PowerFlex R750 (6 x 25 GB) and 2 GPUs

Slot layout


PowerFlex R750 nodes Dual CPU


Slot 0 (OCP) CX5
Slot 1 Empty
Slot 2 GPU2
Slot 3 CX5
Slot 4 Empty
Slot 5 Empty
Slot 6 CX5
Slot 7 GPU1
Slot 8 Empty

Logical network

Description Slot:Port VMW label dvSwitch


Trunk 1 to switch A 00:01 vmnic2 cust_dvswitch
Trunk 2 to switch B 00:02 vmnic3 flex_dvswitch
Customer access A 03:01 vmnic4 Customer fabric
Trunk 1 to switch B 03:02 vmnic5 cust_dvswitch
Customer access B 06:01 vmnic6 Customer fabric
Trunk 2 to switch A 06:02 vmnic7 flex_dvswitch
iDRAC - oob network M0 Not applicable Not applicable

PowerFlex R750 (4 x 100 GB)

Slot layout

PowerFlex R750 nodes Dual CPU


Slot 0 (OCP) Empty
Slot 1 Empty
Slot 2 Empty
Slot 3 CX6-DX
Slot 4 Empty
Slot 5 Empty
Slot 6 CX6-DX
Slot 7 Empty
Slot 8 Empty


Logical network

Description Slot:Port VMW label dvSwitch


Trunk 1 to switch 03:01 vmnic2 cust_dvswitch
A (mgmt,data1,data3,
rep1)
Trunk 1 to switch 06:01 vmnic4 cust_dvswitch
B (mgmt,data1,data3,
rep1)
Trunk 2 to switch A 06:02 vmnic5 flex_dvswitch
(data2, data4, rep2)
Trunk 2 to switch B 03:02 vmnic3 flex_dvswitch
(data2, data4, rep2)
iDRAC - OOB network M0 Not applicable Not applicable

PowerFlex R750 (4 x 100 GB) and 1 GPU (DW)

Slot layout

PowerFlex R750 nodes Dual CPU


Slot 0 (OCP) Empty
Slot 1 Empty
Slot 2 CX5
Slot 3 CX6-DX
Slot 4 Empty
Slot 5 Empty
Slot 6 CX6-DX
Slot 7 GPU1
Slot 8 Empty

Logical network

Description Slot:Port VMW label dvSwitch


Trunk 1 to switch 03:01 vmnic2 cust_dvswitch
A (mgmt,data1,data3,
rep1)
Trunk 1 to switch 06:01 vmnic4 cust_dvswitch
B (mgmt,data1,data3,
rep1)
Trunk 2 to switch A 06:02 vmnic5 flex_dvswitch
(data2, data4, rep2)



Trunk 2 to switch B 03:02 vmnic3 flex_dvswitch
(data2, data4, rep2)
iDRAC - OOB network M0 Not applicable Not applicable

PowerFlex R750 (4 x 100 GB) and 2 GPUs (DW)

Slot layout

PowerFlex R750 nodes Dual CPU


Slot 0 (OCP) Empty
Slot 1 Empty
Slot 2 GPU2
Slot 3 CX6-DX
Slot 4 Empty
Slot 5 Empty
Slot 6 CX6-DX
Slot 7 GPU1
Slot 8 Empty

Logical network

Description Slot:Port VMW label dvSwitch


Trunk 1 to switch 03:01 vmnic2 cust_dvswitch
A (mgmt,data1,data3,
rep1)
Trunk 1 to switch 06:01 vmnic4 cust_dvswitch
B (mgmt,data1,data3,
rep1)
Trunk 2 to switch A 06:02 vmnic5 flex_dvswitch
(data2, data4, rep2)
Trunk 2 to switch B 03:02 vmnic3 flex_dvswitch
(data2, data4, rep2)
iDRAC - OOB network M0 Not applicable Not applicable


PowerFlex R750 (4 x 100 GB) and 1 GPU (SW)

Slot layout

PowerFlex R750 nodes Dual CPU


Slot 0 (OCP) Empty
Slot 1 Empty
Slot 2 Empty
Slot 3 CX6-DX
Slot 4 Empty
Slot 5 Empty
Slot 6 CX6-DX
Slot 7 GPU1
Slot 8 Empty

Logical network

Description Slot:Port VMW label dvSwitch


Trunk 1 to switch 03:01 vmnic2 cust_dvswitch
A (mgmt,data1,data3,
rep1)
Trunk 1 to switch 06:01 vmnic4 cust_dvswitch
B (mgmt,data1,data3,
rep1)
Trunk 2 to switch A 06:02 vmnic5 flex_dvswitch
(data2, data4, rep2)
Trunk 2 to switch B 03:02 vmnic3 flex_dvswitch
(data2, data4, rep2)
iDRAC - OOB network M0 Not applicable Not applicable

PowerFlex R750 (4 x 100 GB) and 2 GPUs (SW)

Slot layout


PowerFlex R750 nodes Dual CPU


Slot 0 (OCP) Empty
Slot 1 Empty
Slot 2 GPU2
Slot 3 CX6-DX
Slot 4 Empty
Slot 5 Empty
Slot 6 CX6-DX
Slot 7 GPU1
Slot 8 Empty

Logical network

Description Slot:Port VMW label dvSwitch


Trunk 1 to switch 03:01 vmnic2 cust_dvswitch
A (mgmt,data1,data3,
rep1)
Trunk 1 to switch 06:01 vmnic4 cust_dvswitch
B (mgmt,data1,data3,
rep1)
Trunk 2 to switch A 06:02 vmnic5 flex_dvswitch
(data2, data4, rep2)
Trunk 2 to switch B 03:02 vmnic3 flex_dvswitch
(data2, data4, rep2)
iDRAC - OOB network M0 Not applicable Not applicable

PowerFlex R6525 compute-only nodes


PowerFlex AMD-based compute-only node slot layout and logical network configurations.

PowerFlex R6525 with a single CPU

Slot layout

PowerFlex R6525 nodes Single CPU


Slot 0 (OCP) CX5
Slot 1 CX5
Slot 2 Empty
Slot 3 Empty

Logical network


Description Slot:Port VMW label dvSwitch for LACP bonding NIC
Trunk 1 to switch A 00:01 vmnic2 cust_dvswitch
Trunk 1 to switch B 01:01 vmnic4 cust_dvswitch
Trunk 2 to switch A 01:02 vmnic5 flex_dvswitch
Trunk 2 to switch B 00:02 vmnic3 flex_dvswitch
iDRAC - oob network M0 Not applicable Not applicable

PowerFlex R6525 with dual CPUs

Slot layout

PowerFlex R6525 nodes Dual CPU


Slot 0 (OCP) CX5
Slot 1 Empty
Slot 2 Empty
Slot 3 CX5

Logical network

Description Slot:Port VMW label dvSwitch for LACP bonding NIC
Trunk 1 to switch A 00:01 vmnic2 cust_dvswitch
Trunk 1 to switch B 03:01 vmnic4 cust_dvswitch
Trunk 2 to switch A 03:02 vmnic5 flex_dvswitch
Trunk 2 to switch B 00:02 vmnic3 flex_dvswitch
iDRAC - oob network M0 Not applicable Not applicable

PowerFlex R6525 with dual CPUs 6 x 25 GB

Slot layout

PowerFlex R6525 nodes Dual CPU


Slot 0 (OCP) CX5
Slot 1 CX5
Slot 2 Empty
Slot 3 CX5

Logical network


Description            Slot:Port   VMW label   dvSwitch for LACP bonding NIC
Trunk 1 to switch A 00:01 vmnic2 cust_dvswitch
Trunk 2 to switch B 00:02 vmnic3 flex_dvswitch
Customer access A 01:01 vmnic4 Customer fabric
Trunk 1 to switch B 01:02 vmnic5 cust_dvswitch
Customer access B 03:01 vmnic6 Customer fabric
Trunk 2 to switch A 03:02 vmnic7 flex_dvswitch
iDRAC - oob network M0 Not applicable Not applicable

PowerFlex R6525 with dual CPU 6 x 25 GB and GPU

Slot layout

PowerFlex R6525 nodes Dual CPU


Slot 0 (OCP) CX5
Slot 1 CX5
Slot 2 GPU1
Slot 3 CX5

Logical network

Description            Slot:Port   VMW label   dvSwitch for LACP bonding NIC
Trunk 1 to switch A 00:01 vmnic2 cust_dvswitch
Trunk 2 to switch B 00:02 vmnic3 flex_dvswitch
Customer access A 01:01 vmnic4 Customer fabric
Trunk 1 to switch B 01:02 vmnic5 cust_dvswitch
Customer access B 03:01 vmnic6 Customer fabric
Trunk 2 to switch A 03:02 vmnic7 flex_dvswitch
iDRAC - oob network M0 Not applicable Not applicable

PowerFlex R6525 with dual CPU 4 x 100 GB

Slot layout

PowerFlex R6525 nodes Dual CPU


Slot 0 (OCP) Empty


Slot 1 CX6-DX
Slot 2 CX6-DX
Slot 3 Empty

Logical network

Description            Slot:Port   VMW label   dvSwitch for LACP bonding NIC
Trunk 1 to switch A 01:01 vmnic2 cust_dvswitch
Trunk 1 to switch B 02:01 vmnic4 cust_dvswitch
Trunk 2 to switch A 02:02 vmnic5 flex_dvswitch
Trunk 2 to switch B 01:02 vmnic3 flex_dvswitch
iDRAC - oob network M0 Not applicable Not applicable

PowerFlex R6525 with dual CPU 4 x 25 GB and 2 GPUs

Slot layout

PowerFlex R6525 nodes Dual CPU


Slot 0 (OCP) CX5
Slot 1 GPU2
Slot 2 GPU1
Slot 3 CX5

Logical network

Description            Slot:Port   VMW label   dvSwitch for LACP bonding NIC
Trunk 1 to switch A 00:01 vmnic2 cust_dvswitch
Trunk 1 to switch B 03:01 vmnic4 cust_dvswitch
Trunk 2 to switch A 03:02 vmnic5 flex_dvswitch
Trunk 2 to switch B 00:02 vmnic3 flex_dvswitch
iDRAC - oob network M0 Not applicable Not applicable

Cabling the PowerFlex R640/R740xd/R840 nodes


The following information describes the cabling and ports for PowerFlex R640/R740xd/R840 nodes.
Use this information with the expansion documentation that is provided by MFGS-DOCC-00-A01 to cable the expansion nodes.
MFGS-DOCC-00-A01 provides all the cables, labels, elevations, and port maps. Download the Expansion Service VCE_SKU
MFGS-DOCC-00-A01 from Unified Dashboard. This number refers to a service that is provided by our Manufacturing Technical
Program Management (TPM) team. When a customer orders this service, they receive a set of build documents similar to the
standard as-built documents for a factory build. The kit contains complete documentation for adding the hardware and cabling,
including the cable labels.
In a default PowerFlex setup, two data networks are standard. Four data networks are required only for specific customer
requirements, for example, high performance or the use of trunk ports.
The following PowerFlex R740xd node example is from the manufacturing documentation kit:

Cisco Nexus 93180YC-FX 1A
● Port 1-1: PowerFlex R740xd node 1-1-01:01
● Port 1-2: PowerFlex R740xd node 1-1-02:02
● Port 1-3: PowerFlex R740xd node 1-2-01:01
● Port 1-4: PowerFlex R740xd node 1-2-02:02
● Port 1-5: PowerFlex R640 node 10-Drive 1-3-02:01
● Port 1-6: PowerFlex R640 node 10-Drive 1-3-01:02
● Port 1-7: PowerFlex R640 node 10-Drive 1-4-02:01
● Port 1-8: PowerFlex R640 node 10-Drive 1-4-01:02

Cisco Nexus 93180YC-FX 1B
● Port 1-1: PowerFlex R740xd node 1-1-02:01
● Port 1-2: PowerFlex R740xd node 1-1-01:02
● Port 1-3: PowerFlex R740xd node 1-2-02:01
● Port 1-4: PowerFlex R740xd node 1-2-01:02
● Port 1-5: PowerFlex R640 node 10-Drive 1-3-01:01
● Port 1-6: PowerFlex R640 node 10-Drive 1-3-02:02
● Port 1-7: PowerFlex R640 node 10-Drive 1-4-01:01
● Port 1-8: PowerFlex R640 node 10-Drive 1-4-02:02

In the example, PowerFlex R740xd node 1-1-01:01 indicates the following:
● PowerFlex R740xd node 1-1 is the first PowerFlex R740xd node.
● 01:01 is slot:port, which is slot 1 port 1 on the PowerFlex R740xd node.
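The same node-slot:port notation is used throughout the port maps and cabling tables in this guide. The following minimal Python sketch (illustrative only, not part of the Dell documentation kit; the function name is an assumption) splits such a label into its parts:

    def parse_port_label(label):
        """Split a cabling label such as 'PowerFlex R740xd node 1-1-01:01'.

        The trailing token has the form <node id>-<slot>:<port>; for example,
        1-1-01:01 means node 1-1, slot 01, port 01.
        """
        token = label.split()[-1]                  # for example '1-1-01:01'
        node_id, slot_port = token.rsplit("-", 1)  # '1-1' and '01:01'
        slot, port = slot_port.split(":")
        return {"node": node_id, "slot": int(slot), "port": int(port)}

    print(parse_port_label("PowerFlex R740xd node 1-1-01:01"))
    # {'node': '1-1', 'slot': 1, 'port': 1}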
The following PowerFlex R840 node example is from the manufacturing documentation kit:

Cisco Nexus 93180YC-FX 1A
● Port 1-1: PowerFlex R840 compute-only node 1-1-00:01
● Port 1-2: PowerFlex R840 compute-only node 1-1-04:02
● Port 1-3: PowerFlex R840 hyperconverged node 1-2-04:01
● Port 1-4: PowerFlex R840 hyperconverged node 1-2-06:02
● Port 1-5: PowerFlex R840 hyperconverged node with GPU 1-1-00:01
● Port 1-6: PowerFlex R840 hyperconverged node with GPU 1-1-04:02

Cisco Nexus 93180YC-FX 1B
● Port 1-1: PowerFlex R840 compute-only node 1-1-04:01
● Port 1-2: PowerFlex R840 compute-only node 1-1-00:02
● Port 1-3: PowerFlex R840 hyperconverged node 1-2-06:01
● Port 1-4: PowerFlex R840 hyperconverged node 1-2-04:02
● Port 1-5: PowerFlex R840 hyperconverged node with GPU 1-1-04:01
● Port 1-6: PowerFlex R840 hyperconverged node with GPU 1-1-00:02

In the example, PowerFlex R840 compute-only node 1-1-00:01 indicates the following:
● PowerFlex R840 compute-only node 1-1 is the first PowerFlex R840 compute-only node.
● 00:01 is slot:port, which is slot 0 port 1 on the PowerFlex R840 node.

Related information
Configure the host


Cabling requirements for PowerFlex management controller server slot layout
The following information describes the cabling requirements for the PowerFlex management controller.

PowerFlex management controller - small or large

Slot layout
PowerFlex R640 management controller node   Single CPU small   Dual CPU large
Slot 0 X710 / i350 X710 / i350
Slot 1 X710 X710
Slot 2 X550 X550
Slot 3 Empty Empty

Slot matrix for PowerFlex R640 management controller node - small

Description             Slot:Port   VMW label   DVSwitch for non-bonded and static bonding NIC   DVSwitch for LACP bonding NIC
Trunk 1 to switch A 00:01 vmnic0 vDS0 FE_dvswitch
Trunk 1 to switch B 01:01 vmnic4 vDS0 FE_dvswitch
Trunk 2 to switch A 01:02 vmnic5 vDS1 BE_dvswitch
Trunk 2 to switch B 00:02 vmnic1 vDS1 BE_dvswitch
To management switch 02:01 vmnic6 vDS2 oob_dvswitch
iDRAC - OOB network M0 Not applicable Not applicable Not applicable

Slot matrix for PowerFlex R640 management controller node - large


Description             Slot:Port   VMW label   DVSwitch for non-bonded and static bonding NIC   DVSwitch for LACP bonding NIC
Trunk 1 to switch A 00:01 vmnic0 vDS0 FE_dvswitch
Trunk 1 to switch B 02:01 vmnic4 vDS0 FE_dvswitch
Trunk 2 to switch A 02:02 vmnic5 vDS1 BE_dvswitch
Trunk 2 to switch B 00:02 vmnic1 vDS1 BE_dvswitch
To management switch 03:01 vmnic6 vDS2 oob_dvswitch
iDRAC - OOB network M0 Not applicable Not applicable Not applicable


PowerFlex management controller with BOSS card - small

Slot layout
PowerFlex R640 management controller node with BOSS card   Single CPU small
Slot 0 X710 / i350
Slot 1 BOSS
Slot 2 X550
Slot 3 Empty

Slot matrix for PowerFlex R640 management controller node with BOSS card - small
NOTE: For non-bonded NIC port design, configure the ports 00:02 and 01:02 as access instead of trunk ports.

Description             Slot:Port   VMW label   DVSwitch for non-bonded and static bonding NIC   DVSwitch for LACP bonding NIC
Trunk 1 to switch A 00:01 vmnic0 vDS0 FE_dvswitch
Trunk 1 to switch B 00:03 vmnic2 vDS0 FE_dvswitch
Trunk 2 to switch A 00:04 vmnic3 vDS1 BE_dvswitch
Trunk 2 to switch B 00:02 vmnic1 vDS1 BE_dvswitch
To management switch 02:01 vmnic4 vDS2 oob_dvswitch
iDRAC - OOB network M0 Not applicable Not applicable Not applicable

PowerFlex management controller with BOSS card - large

Slot layout
PowerFlex R640 management controller node with BOSS card   Dual CPU large
Slot 0 X710 / i350
Slot 1 BOSS
Slot 2 X710
Slot 3 X550


Slot matrix for PowerFlex R640 management controller node with BOSS card - large
Description             Slot:Port   VMW label   DVSwitch for non-bonded and static bonding NIC   DVSwitch for LACP bonding NIC
Trunk 1 to switch A 00:01 vmnic0 vDS0 FE_dvswitch
Trunk 1 to switch B 02:01 vmnic4 vDS0 FE_dvswitch
Trunk 2 to switch A 02:02 vmnic5 vDS1 BE_dvswitch
Trunk 2 to switch B 00:02 vmnic1 vDS1 BE_dvswitch
To management switch 03:01 vmnic6 vDS2 oob_dvswitch
iDRAC - OOB network M0 Not applicable Not applicable Not applicable

Cabling requirements for PowerFlex hyperconverged nodes


The following information describes the cabling requirements for PowerFlex hyperconverged nodes.

PowerFlex R640 hyperconverged nodes

PowerFlex R640 nodes with SSD

Slot layout

PowerFlex R640 node with SSD Dual CPU


Slot 0 (rNDC) X710 / i350
Slot 1 BOSS
Slot 2 CX4-LX
Slot 3 CX4-LX


Logical network (columns: Description, Slot:Port, VMW label, DVSwitch)

Non-bonded NIC port design. Data networks are access ports.
  Trunk to switch A            02:01   vmnic4   dvswitch0
  Trunk to switch B            03:01   vmnic6   dvswitch0
  Data 1 to switch A           03:02   vmnic7   dvswitch1
  Data 2 to switch B           02:02   vmnic5   dvswitch2
  iDRAC - OOB network          M0      Not applicable   Not applicable

Static bonding NIC port design. Data networks are configured with port channel vPC or VLT. Two logical data networks are created in flex_dvswitch: data1 and data2.
  Trunk 1 to switch A          02:01   vmnic4   cust_dvswitch
  Trunk 1 to switch B          03:01   vmnic6   cust_dvswitch
  Trunk 2 to switch A (data)   03:02   vmnic7   flex_dvswitch
  Trunk 2 to switch B (data)   02:02   vmnic5   flex_dvswitch
  iDRAC - OOB network          M0      Not applicable   Not applicable

LACP bonding NIC port design. Data networks are configured with port channel vPC or VLT (LACP enabled). Four logical data networks are created in flex_dvswitch: data1, data2, data3 (if required), and data4 (if required), plus rep1 and rep2. There are no changes in physical connectivity from node to switch.
  Trunk 1 to switch A          02:01   vmnic4   cust_dvswitch
  Trunk 1 to switch B          03:01   vmnic6   cust_dvswitch
  Trunk 2 to switch A (data)   03:02   vmnic7   flex_dvswitch
  Trunk 2 to switch B (data)   02:02   vmnic5   flex_dvswitch
  iDRAC - OOB network          M0      Not applicable   Not applicable
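In the static and LACP bonding designs above, each distributed switch keeps one uplink on switch A and one on switch B. The following minimal Python sketch (illustrative only, not a supported Dell tool; the dictionary layout and function name are assumptions) encodes the bonded rows from the table and checks that rule:

    # Slot:port -> (physical switch, vmnic, dvSwitch) for the LACP bonding design
    # on a PowerFlex R640 hyperconverged node with SSD, copied from the table above.
    CABLING = {
        "02:01": ("switch A", "vmnic4", "cust_dvswitch"),
        "03:01": ("switch B", "vmnic6", "cust_dvswitch"),
        "03:02": ("switch A", "vmnic7", "flex_dvswitch"),
        "02:02": ("switch B", "vmnic5", "flex_dvswitch"),
    }

    def uplink_fabrics(cabling):
        """Return the set of physical switches reached by each dvSwitch."""
        fabrics = {}
        for switch, _vmnic, dvswitch in cabling.values():
            fabrics.setdefault(dvswitch, set()).add(switch)
        return fabrics

    for dvswitch, switches in uplink_fabrics(CABLING).items():
        assert switches == {"switch A", "switch B"}, f"{dvswitch} is not redundant"
        print(dvswitch, "->", sorted(switches))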

PowerFlex R640 nodes 8* with NVMe

Slot layout

PowerFlex R640 nodes 8* with NVMe Dual CPU


Slot 0 (rNDC) X710 / i350
Slot 1 BOSS
Slot 2 CX4-LX
Slot 3 CX4-LX

Logical network

Network logical version Description Slot:Port VMW label DVSwitch


Non-bonded NIC port design Trunk to switch A 02:01 vmnic4 dvswitch0
Data networks are access ports. Trunk to switch B 03:01 vmnic6 dvswitch0
Data 1 to switch A 03:02 vmnic7 dvswitch1
Data 2 to switch B 02:02 vmnic5 dvswitch2
iDRAC - OOB M0 Not applicable Not applicable
network
Static bonding NIC port design Trunk 1 to switch A 02:01 vmnic4 cust_dvswitch
Data networks are configured with Trunk 1 to switch B 03:01 vmnic6 cust_dvswitch
port channel vPC or VLT. Two

logical data networks are created in Trunk 2 to switch A 03:02 vmnic7 flex_dvswitch
flex_dvswitch: data1 and data2. (data)
Trunk 2 to switch B 02:02 vmnic5 flex_dvswitch
(data)
iDRAC - OOB M0 Not applicable Not applicable
network
LACP bonding NIC port design Trunk 1 to switch A 02:01 vmnic4 cust_dvswitch
Data networks are configured with Trunk 1 to switch B 03:01 vmnic6 cust_dvswitch
port channel vPC or VLT (LACP
enabled). Four logical data networks Trunk 2 to switch A 03:02 vmnic7 flex_dvswitch
are created in flex_dvswitch: data1, (data)
data2, data3 (if required), and data4
Trunk 2 to switch B 02:02 vmnic5 flex_dvswitch
(if required). There are no changes
(data)
in physical connectivity from node to
switch. iDRAC - OOB M0 Not applicable Not applicable
network

PowerFlex R640 nodes 10* with NVMe

Slot layout

PowerFlex R640 nodes 10* with NVMe Dual CPU


Slot 0 (rNDC) CX4-LX
Slot 1 NVMe Bridge
Slot 2 BOSS
Slot 3 CX4-LX

Logical network

Network logical version Description Slot:Port VMW label DVSwitch


Non-bonded NIC port design Trunk to switch A 00:01 vmnic0 dvswitch0
Data networks are access ports. Trunk to switch B 03:01 vmnic2 dvswitch0
Data 1 to switch A 03:02 vmnic3 dvswitch1
Data 2 to switch B 00:02 vmnic1 dvswitch2
iDRAC - OOB M0 Not applicable Not applicable
network
Static bonding NIC port design Trunk 1 to switch A 00:01 vmnic0 cust_dvswitch
Data networks are configured with Trunk 1 to switch B 03:01 vmnic2 cust_dvswitch
port channel vPC or VLT. Two
logical data networks are created in Trunk 2 to switch A 03:02 vmnic3 flex_dvswitch
flex_dvswitch: data1 and data2. (data)

Trunk 2 to switch B 00:02 vmnic1 flex_dvswitch
(data)
iDRAC - OOB M0 Not applicable Not applicable
network
LACP bonding NIC port design Trunk 1 to switch A 00:01 vmnic0 cust_dvswitch
Data networks are configured with Trunk 1 to switch B 03:01 vmnic2 cust_dvswitch
port channel vPC or VLT (LACP
enabled). Four logical data networks Trunk 2 to switch A 03:02 vmnic3 flex_dvswitch
are created in flex_dvswitch: data1, (data)
data2, data3 (if required), and data4
Trunk 2 to switch B 00:02 vmnic1 flex_dvswitch
(if required). There are no changes
(data)
in physical connectivity from node to
switch. iDRAC - OOB M0 Not applicable Not applicable
network

PowerFlex R740xd hyperconverged nodes

PowerFlex R740xd nodes with SSD

Slot layout

PowerFlex R740xd node with SSD Dual CPU/GPU


Slot 0 (rNDC) X710 / i350
Slot 1/2 GPU (optional)
Slot 3 CX4-LX
Slot 4 CX4-LX
Slot 5 BOSS
Slot 7/8 GPU (optional)

Logical network

Network logical version Description Slot:Port VMW label DVSwitch


Non-bonded NIC port design Trunk to switch A 03:01 vmnic4 dvswitch0
Data networks are access ports Trunk to switch B 04:01 vmnic6 dvswitch0
Data 1 to switch A 04:02 vmnic7 dvswitch1
Data 2 to switch B 03:02 vmnic5 dvswitch2

iDRAC - OOB M0 Not applicable Not applicable
network
Static bonding NIC port design Trunk 1 to switch A 03:01 vmnic4 cust_dvswitch
Data networks are configured with Trunk 1 to switch B 04:01 vmnic6 cust_dvswitch
port channel vPC or VLT. Two logical
data networks are created in vDS1: Trunk 2 to switch A 04:02 vmnic7 flex_dvswitch
data1 and data2. (data)
Trunk 2 to switch B 03:02 vmnic5 flex_dvswitch
(data)
iDRAC - OOB M0 Not applicable Not applicable
network
LACP bonding NIC port design Trunk 1 to switch A 03:01 vmnic4 cust_dvswitch
Data networks are configured with Trunk 1 to switch B 04:01 vmnic6 cust_dvswitch
port channel vPC or VLT (LACP
enabled). Four logical data networks Trunk 2 to switch A 04:02 vmnic7 flex_dvswitch
are created in vDS1: data1, data2, (data)
data3 (if required), and data4 (if
Trunk 2 to switch B 03:02 vmnic5 flex_dvswitch
required). There are no changes in
(data)
physical connectivity from node to
switch. iDRAC - OOB M0 Not applicable Not applicable
network

PowerFlex R740xd nodes with NVMe

Slot layout

PowerFlex R740xd node with NVMe Dual CPU/GPU


Slot 0 (rNDC) CX4-LX
Slot 1 CX4-LX
Slot 3 NVMe Bridge
Slot 4 NVMe Bridge
Slot 7 BOSS

Logical network

Network logical version Description Slot:Port VMW label DVSwitch


Non-bonded NIC port design Trunk to switch A 00:01 vmnic0 dvswitch0
Data networks are access ports. Trunk to switch B 01:01 vmnic2 dvswitch0
Data 1 to switch A 01:02 vmnic3 dvswitch1

Data 2 to switch B 00:02 vmnic1 dvswitch2
iDRAC - OOB M0 Not applicable Not applicable
network
Static bonding NIC port design Trunk 1 to switch A 00:01 vmnic0 cust_dvswitch
Data networks are configured with Trunk 1 to switch B 01:01 vmnic2 cust_dvswitch
port channel vPC or VLT. Two logical
data networks are created in vDS1: Trunk 2 to switch A 01:02 vmnic3 flex_dvswitch
data1 and data2. (data)
Trunk 2 to switch B 00:02 vmnic1 flex_dvswitch
(data)
iDRAC - OOB M0 Not applicable Not applicable
network
LACP bonding NIC port design Trunk 1 to switch A 00:01 vmnic0 cust_dvswitch
Data networks are configured with Trunk 1 to switch B 01:01 vmnic2 cust_dvswitch
port channel vPC or VLT (LACP
enabled). Four logical data networks Trunk 2 to switch A 01:02 vmnic3 flex_dvswitch
are created in vDS1: data1, data2, (data)
data3 (if required), and data4 (if
Trunk 2 to switch B 00:02 vmnic1 flex_dvswitch
required). There are no changes in
(data)
physical connectivity from node to
switch. iDRAC - OOB M0 Not applicable Not applicable
network

PowerFlex R840 hyperconverged nodes

PowerFlex R840 nodes with SSD and GPU

Slot layout

PowerFlex R840 node with SSD GPU


Slot 0 (rNDC) CX4-LX
Slot 1 Empty
Slot 2 GPU (Double width)
Slot 3 HBA330
Slot 4 CX4-LX
Slot 5 Empty
Slot 6 BOSS


Logical network

Network logical version Description Slot:Port VMW label DVSwitch


Non-bonded NIC port design Trunk to switch A 00:01 vmnic0 dvswitch0
Data networks are access ports. Trunk to switch B 04:01 vmnic2 dvswitch0
Data 1 to switch A 04:02 vmnic3 dvswitch1
Data 2 to switch B 00:02 vmnic1 dvswitch2
iDRAC - OOB M0 Not applicable Not applicable
network
Static bonding NIC port design Trunk 1 to switch A 00:01 vmnic0 cust_dvswitch
Data networks are configured with Trunk 1 to switch B 04:01 vmnic2 cust_dvswitch
port channel vPC or VLT. Two
logical data networks are created in Trunk 2 to switch A 04:02 vmnic3 flex_dvswitch
flex_dvswitch; data1 and data2. (data)
Trunk 2 to switch B 00:02 vmnic1 flex_dvswitch
(data)
iDRAC - OOB M0 Not applicable Not applicable
network
LACP bonding NIC port design Trunk 1 to switch A 00:01 vmnic0 cust_dvswitch
Data networks are configured with Trunk 1 to switch B 04:01 vmnic2 cust_dvswitch
port channel vPC or VLT (LACP
enabled). Four logical data networks Trunk 2 to switch A 04:02 vmnic3 flex_dvswitch
are created in flex_dvswitch: data1, (data)
data2, data3 (if required), and data4
Trunk 2 to switch B 00:02 vmnic1 flex_dvswitch
(if required). There are no changes
(data)
in physical connectivity from node to
switch. iDRAC - OOB M0 Not applicable Not applicable
network

PowerFlex R840 node with SSD and without GPU

Slot layout

PowerFlex R840 node with SSD Without GPU


Slot 0 (rNDC) CX4-LX
Slot 1 HBA330
Slot 2 Empty
Slot 3 BOSS
Slot 4 CX4-LX
Slot 5 Empty

Slot 6 Empty

Logical network

Network logical version Description Slot:Port VMW label DVSwitch


Non-bonded NIC port design Trunk to switch A 00:01 vmnic0 dvswitch0
Data networks are access ports. Trunk to switch B 04:01 vmnic2 dvswitch0
Data 1 to switch A 04:02 vmnic3 dvswitch1
Data 2 to switch B 00:02 vmnic1 dvswitch2
iDRAC - OOB M0 Not applicable Not applicable
network
Static bonding NIC port design Trunk 1 to switch A 00:01 vmnic0 cust_dvswitch
Data networks are configured with Trunk 1 to switch B 04:01 vmnic2 cust_dvswitch
port channel vPC or VLT. Two
logical data networks are created in Trunk 2 to switch A 04:02 vmnic3 flex_dvswitch
flex_dvswitch; data1 and data2. (data)
Trunk 2 to switch B 00:02 vmnic1 flex_dvswitch
(data)
iDRAC - OOB M0 Not applicable Not applicable
network
LACP bonding NIC port design Trunk 1 to switch A 00:01 vmnic0 cust_dvswitch
Data networks are configured with Trunk 1 to switch B 04:01 vmnic2 cust_dvswitch
port channel vPC or VLT (LACP
enabled). Four logical data networks Trunk 2 to switch A 04:02 vmnic3 flex_dvswitch
are created in flex_dvswitch; data1, (data)
data2, data3 (if required), and data4
Trunk 2 to switch B 00:02 vmnic1 flex_dvswitch
(if required). There are no changes
(data)
in physical connectivity from node to
switch. iDRAC - OOB M0 Not applicable Not applicable
network

PowerFlex R840 nodes with NVMe and GPU

Slot layout

PowerFlex R840 node with NVMe GPU


Slot 0 (rNDC) CX4-LX
Slot 1 Empty

Slot 2 GPU (double width)
Slot 3 Empty
Slot 4 CX4-LX
Slot 5 Empty
Slot 6 BOSS

Logical network

Network logical version Description Slot:Port VMW label DVSwitch


Non-bonded NIC port design Trunk to switch A 00:01 vmnic0 dvswitch0
Data networks are access ports. Trunk to switch B 04:01 vmnic2 dvswitch0
Data 1 to switch A 04:02 vmnic3 dvswitch1
Data 2 to switch B 00:02 vmnic1 dvswitch2
iDRAC - OOB M0 Not applicable Not applicable
network
Static bonding NIC port design Trunk 1 to switch A 00:01 vmnic0 cust_dvswitch
Data networks are configured with Trunk 1 to switch B 04:01 vmnic2 cust_dvswitch
port channel vPC or VLT. Two
logical data networks are created in Trunk 2 to switch A 04:02 vmnic3 flex_dvswitch
flex_dvswitch; data1 and data2. (data)
Trunk 2 to switch B 00:02 vmnic1 flex_dvswitch
(data)
iDRAC - OOB M0 Not applicable Not applicable
network
LACP bonding NIC port design Trunk 1 to switch A 00:01 vmnic0 cust_dvswitch
Data networks are configured with Trunk 1 to switch B 04:01 vmnic2 cust_dvswitch
port channel vPC or VLT (LACP
enabled). Four logical data networks Trunk 2 to switch A 04:02 vmnic3 flex_dvswitch
are created in flex_dvswitch: data1, (data)
data2, data3 (if required), and data4
Trunk 2 to switch B 00:02 vmnic1 flex_dvswitch
(if required). There are no changes
(data)
in physical connectivity from node to
switch. iDRAC - OOB M0 Not applicable Not applicable
network

PowerFlex R840 nodes with NVMe and no GPU


Slot layout

PowerFlex R840 node with NVMe No GPU


Slot 0 (rNDC) CX4-LX
Slot 1 Empty
Slot 2 Empty
Slot 3 BOSS
Slot 4 CX4-LX
Slot 5 Empty
Slot 6 Empty

Logical network

Network logical version Description Slot:Port VMW label DVSwitch


Non-bonded NIC port design Trunk to switch A 00:01 vmnic0 dvswitch0
Data networks are access ports. Trunk to switch B 04:01 vmnic2 dvswitch0
Data 1 to switch A 04:02 vmnic3 dvswitch1
Data 2 to switch B 00:02 vmnic1 dvswitch2
iDRAC - OOB M0 Not applicable Not applicable
network
Static bonding NIC port design Trunk 1 to switch A 00:01 vmnic0 cust_dvswitch
Data networks are configured with Trunk 1 to switch B 04:01 vmnic2 cust_dvswitch
port channel vPC or VLT. Two
logical data networks are created in Trunk 2 to switch A 04:02 vmnic3 flex_dvswitch
flex_dvswitch; data1 and data2. (data)
Trunk 2 to switch B 00:02 vmnic1 flex_dvswitch
(data)
iDRAC - OOB M0 Not applicable Not applicable
network
LACP bonding NIC port design Trunk 1 to switch A 00:01 vmnic0 cust_dvswitch
Data networks are configured with Trunk 1 to switch B 04:01 vmnic2 cust_dvswitch
port channel vPC or VLT (LACP
enabled). Four logical data networks Trunk 2 to switch A 04:02 vmnic3 flex_dvswitch
are created in flex_dvswitch; data1, (data)
data2, data3 (if required), and data4
Trunk 2 to switch B 00:02 vmnic1 flex_dvswitch
(if required). There are no changes
(data)
in physical connectivity from node to
switch. iDRAC - OOB M0 Not applicable Not applicable
network


Cabling requirements for PowerFlex storage-only nodes


The following information describes the cabling requirements for PowerFlex storage-only nodes.

PowerFlex R640 storage-only nodes

PowerFlex R640 nodes with SSD

Slot layout

PowerFlex R640 node with SSD Dual CPU


Slot 0 (rNDC) CX4-LX
Slot 1 BOSS
Slot 2 CX4-LX
Slot 3 Empty

Logical network (columns: Description, Slot:Port, Device name, Bond)

Non-bonded NIC port design. Data networks are access ports.
  Trunk to switch A            00:01   em1    bond0
  Trunk to switch B            02:01   p2p1   bond0
  Data 1 to switch A           02:02   p2p2   Not applicable
  Data 2 to switch B           00:02   em2    Not applicable
  iDRAC - OOB network          M0      Not applicable   Not applicable

LACP bonding NIC port design (two data networks). Data networks are configured with port channel vPC or VLT (LACP enabled). There are two logical data networks: data1 is a part of bond0 and data2 is a part of bond1.
  Trunk 1 to switch A (management, data1)   00:01   em1    bond0
  Trunk 1 to switch B (management, data1)   02:01   p2p1   bond0
  Trunk 2 to switch A (data2)               02:02   p2p2   bond1
  Trunk 2 to switch B (data2)               00:02   em2    bond1
  iDRAC - OOB network                       M0      Not applicable   Not applicable

LACP bonding NIC port design (four data networks). Data networks are configured with port channel vPC or VLT (LACP enabled). There are four logical data networks: data1 and data3 (if required) are a part of bond0, and data2 and data4 (if required) are a part of bond1. There are no changes in physical connectivity from node to switch.
NOTE: The rep1 and rep2 virtual interfaces are used only if the native asynchronous replication is enabled.
  Trunk 1 to switch A (management, data1, data3, rep1)   00:01   em1    bond0
  Trunk 1 to switch B (management, data1, data3, rep1)   02:01   p2p1   bond0
  Trunk 2 to switch A (data2, data4, rep2)               02:02   p2p2   bond1
  Trunk 2 to switch B (data2, data4, rep2)               00:02   em2    bond1
  iDRAC - OOB network                                    M0      Not applicable   Not applicable
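For the LACP designs above, the bond membership can be summarized as follows. This minimal Python sketch is illustrative only (it is not a supported tool and does not configure anything); the device names em1, em2, p2p1, and p2p2 are the ones listed in the table, and rep1/rep2 are added only when native asynchronous replication is enabled:

    def bond_layout(replication_enabled=False):
        """Expected bond membership for a PowerFlex R640 storage-only node with SSD
        under the LACP bonding NIC port design (four data network variant)."""
        bond0_networks = ["management", "data1", "data3"]   # data3 only if required
        bond1_networks = ["data2", "data4"]                 # data4 only if required
        if replication_enabled:
            # rep1/rep2 are used only when native asynchronous replication is enabled.
            bond0_networks.append("rep1")
            bond1_networks.append("rep2")
        return {
            "bond0": {"members": ["em1", "p2p1"], "networks": bond0_networks},
            "bond1": {"members": ["em2", "p2p2"], "networks": bond1_networks},
        }

    print(bond_layout(replication_enabled=True))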

PowerFlex R640 nodes 8 with NVMe

Slot layout

PowerFlex R640 nodes with NVMe Dual CPU


Slot 0 (rNDC) X710/i350
Slot 1 BOSS
Slot 2 CX4-LX
Slot 3 CX4-LX

Logical network

Network logical version Description Slot:Port Device name DVSwitch


Non-bonded NIC port design Trunk to switch A 02:01 p2p1 bond0
Data networks are access ports. Trunk to switch B 03:01 p3p1 bond0
Data 1 to switch A 03:02 p3p2 Not applicable
Data 2 to switch B 02:02 p2p2 Not applicable
iDRAC - OOB M0 Not applicable Not applicable
network

LACP bonding NIC port design Trunk 1 to switch A 02:01 p2p1 bond0
(management, data1)
Data networks are configured with
port channel vPC or VLT (LACP Trunk 1 to switch B 03:01 p3p1 bond0
enabled). There are two logical data (management, data1)
networks: data1 is a part of bond0 and
Trunk 2 to switch A 03:02 p3p2 bond1
data2 is a part of bond1.
(data2)
Trunk 2 to switch B 02:02 p2p2 bond1
(data2)
iDRAC - OOB M0 Not applicable Not applicable
network
LACP bonding NIC port design Trunk 1 to switch A 02:01 p2p1 bond0
(management, data1,
Data networks are configured with
data3, rep1)
port channel vPC or VLT (LACP


enabled) . There are four logical data Trunk 1 to switch B 03:01 p3p1 bond0
networks: data1 and data3 (if required) (management, data1,
are a part of bond0, data2 and data4 data3, rep1)
(if required) are a part of bond1. There
Trunk 2 to switch A 03:02 p3p2 bond1
are no changes in physical connectivity
(data2, data4, rep2)
from node to switch.
Trunk 2 to switch B 02:02 p2p2 bond1
NOTE: rep1 and rep2 virtual
(data2, data4, rep2)
interfaces are used only if the
native asynchronous replication is iDRAC - OOB M0 Not applicable Not applicable
enabled. network

PowerFlex R640 nodes 10 with NVMe

Slot layout

PowerFlex R640 nodes with NVMe Dual CPU


Slot 0 (rNDC) CX4-LX
Slot 1 NVMe Bridge
Slot 2 BOSS
Slot 3 CX4-LX

Logical network

Network logical version Description Slot:Port Device name DVSwitch


Non-bonded NIC port design Trunk to switch A 00:01 em1 bond0
Data networks are access ports. Trunk to switch B 03:01 p3p1 bond0
Data 1 to switch A 03:02 p3p2 Not applicable
Data 2 to switch B 00:02 em2 Not applicable
iDRAC - OOB M0 Not applicable Not applicable
network
LACP bonding NIC port design Trunk 1 to switch A 00:01 em1 bond0
(management, data1)
Data networks are configured with
port channel vPC or VLT (LACP Trunk 1 to switch B 03:01 p3p1 bond0
enabled). There are two logical data (management, data1)
networks: data1 is a part of bond0 and
data2 is a part of bond1. Trunk 2 to switch A 03:02 p3p2 bond1
(data2)
Trunk 2 to switch B 00:02 em2 bond1
(data2)
iDRAC - OOB M0 Not applicable Not applicable
network



LACP bonding NIC port design Trunk 1 to switch A 00:01 em1 bond0
(management, data1,
Data networks are configured with
data3, rep1)
port channel vPC or VLT (LACP
enabled). There are four logical data Trunk 1 to switch B 03:01 p3p1 bond0
networks: data1 and data3 (if required) (management, data1,
are a part of bond0, data2 and data4 data3, rep1)
(if required) are a part of bond1. There
are no changes in physical connectivity Trunk 2 to switch A 03:02 p3p2 bond1
from node to switch. (data2, data4, rep2)

NOTE: rep1 and rep2 virtual Trunk 2 to switch B 00:02 em2 bond1
interfaces are used only if the (data2, data4, rep2)
native asynchronous replication is iDRAC - OOB M0 Not applicable Not applicable
enabled. network

PowerFlex R740xd storage-only nodes

PowerFlex R740xd nodes with SSD

Slot layout

PowerFlex R740xd node with SSD Dual CPU


Slot 0 (rNDC) X710/i350
Slot 1 CX4-LX
Slot 2 CX4-LX
Slot 3 BOSS

Logical network

Network logical version Description Slot:Port Device name DVSwitch


Non-bonded NIC port design Trunk to switch A 01:01 p1p1 bond0
Data networks are access ports. Trunk to switch B 02:01 p2p1 bond0
Data 1 to switch A 02:02 p2p2 Not applicable
Data 2 to switch B 01:02 p1p2 Not applicable
iDRAC - OOB M0 Not applicable Not applicable
network
LACP bonding NIC port design Trunk 1 to switch A 01:01 p1p1 bond0
(management, data1)


Data networks are configured with Trunk 1 to switch B 02:01 p2p1 bond0
port channel vPC or VLT (LACP (management, data1)
enabled). There are two logical data
Trunk 2 to switch A 02:02 p2p2 bond1
networks: data1 is a part of bond0 and
(data2)
data2 is a part of bond1.
Trunk 2 to switch B 01:02 p1p2 bond1
(data2)
iDRAC - OOB M0 Not applicable Not applicable
network
LACP bonding NIC port design Trunk 1 to switch A 01:01 p1p1 bond0
(management, data1,
Data networks are configured with
data3, rep1)
port channel vPC or VLT (LACP
enabled). There are four logical data Trunk 1 to switch B 02:01 p2p1 bond0
networks: data1 and data3 (if required) (management, data1,
are a part of bond0, data2 and data4 data3, rep1)
(if required) are a part of bond1. There
are no changes in physical connectivity Trunk 2 to switch A 02:02 p2p2 bond1
from node to switch. (data2, data4, rep2)
NOTE: rep1 and rep2 virtual Trunk 2 to switch B 01:02 p1p2 bond1
interfaces are used only if (data2, data4, rep2)
native asynchronous replication is
enabled. iDRAC - OOB M0 Not applicable Not applicable
network

PowerFlex R740xd nodes with NVMe

Slot layout

PowerFlex R740xd nodes with NVMe Dual CPU


Slot 0 (rNDC) X710/i350
Slot 1 CX4-LX
Slot 3 NVMe Bridge
Slot 4 NVMe Bridge
Slot 7 BOSS
Slot 8 NVMe Bridge

Logical network

Network logical version Description Slot:Port Device name DVSwitch


Non-bonded NIC port design Trunk to switch A 01:01 p1p1 bond0


Data networks are access ports. Trunk to switch B 08:01 p8p1 bond0
Data 1 to switch A 08:02 p8p2 Not applicable
Data 2 to switch B 01:02 p1p2 Not applicable
iDRAC - OOB M0 Not applicable Not applicable
network
LACP bonding NIC port design Trunk 1 to switch A 01:01 p1p1 bond0
(management, data1)
Data networks are configured with
port channel vPC or VLT (LACP Trunk 1 to switch B 08:01 p8p1 bond0
enabled). There are two logical data (management, data1)
networks: data1 is a part of bond0, and
data2 is a part of bond1. Trunk 2 to switch A 08:02 p8p2 bond1
(data2)
Trunk 2 to switch B 01:02 p1p2 bond1
(data2)
iDRAC - OOB M0 Not applicable Not applicable
network
LACP bonding NIC port design Trunk 1 to switch A 01:01 p1p1 bond0
(management, data1,
Data networks are configured with
data3, rep1)
port channel vPC or VLT (LACP
enabled). There are four logical data Trunk 1 to switch B 08:01 p8p1 bond0
networks: data1 and data3 (if required) (management, data1,
are a part of bond0, and data2 and data3, rep1)
data4 (if required) are a part of bond1.
There are no changes in physical Trunk 2 to switch A 08:02 p8p2 bond1
connectivity from node to switch. (data2, data4, rep2)
NOTE: rep1 and rep2 virtual Trunk 2 to switch B 01:02 p1p2 bond1
interfaces are used only if (data2, data4, rep2)
native asynchronous replication is
enabled. iDRAC - OOB M0 Not applicable Not applicable
network

Cabling requirements for PowerFlex compute-only nodes


The following information describes the cabling requirements for PowerFlex compute-only nodes.

PowerFlex R640 compute-only nodes

PowerFlex R640 nodes with SSD

Slot layout

PowerFlex R640 node SSD Dual CPU


Slot 0 (rNDC) X710/i350
Slot 1 BOSS

Slot 2 CX4-LX
Slot 3 CX4-LX

Logical network for PowerFlex R640 nodes with SSD (columns: Description, Slot:Port, VMW label, DVSwitch)

Non-bonded NIC port design. Data networks are access ports.
  Trunk to switch A            02:01   vmnic4   dvswitch0
  Trunk to switch B            03:01   vmnic6   dvswitch0
  Data 1 to switch A           03:02   vmnic7   dvswitch1
  Data 2 to switch B           02:02   vmnic5   dvswitch2
  iDRAC - OOB network          M0      n/a      n/a

Static bonding NIC port design. Data networks are configured with port channel vPC or VLT. Two logical data networks are created in flex_dvswitch: data1 and data2.
  Trunk 1 to switch A          02:01   vmnic4   cust_dvswitch
  Trunk 1 to switch B          03:01   vmnic6   cust_dvswitch
  Trunk 2 to switch A (data)   03:02   vmnic7   flex_dvswitch
  Trunk 2 to switch B (data)   02:02   vmnic5   flex_dvswitch
  iDRAC - OOB network          M0      n/a      n/a

LACP bonding NIC port design. The data networks are configured with port channel vPC or VLT (LACP enabled). Four logical data networks are created in flex_dvswitch: data1, data2, data3 (if required), and data4 (if required). There are no changes in physical connectivity from node to switch.
  Trunk 1 to switch A          02:01   vmnic4   cust_dvswitch
  Trunk 1 to switch B          03:01   vmnic6   cust_dvswitch
  Trunk 2 to switch A (data)   03:02   vmnic7   flex_dvswitch
  Trunk 2 to switch B (data)   02:02   vmnic5   flex_dvswitch
  iDRAC - OOB network          M0      n/a      n/a

Logical network for PowerFlex R640 Windows-based compute-only nodes (columns: Description, Slot:Port, Logical port, NIC teaming)

Non-bonded NIC port design. Data networks are access ports.
  Trunk to switch A            02:01   Slot2Port1   Team0
  Trunk to switch B            03:01   Slot3Port1   Team0
  Data 1 to switch A           03:02   Slot3Port2   n/a
  Data 2 to switch B           02:02   Slot2Port2   n/a
  iDRAC - OOB network          M0      n/a          n/a

Static bonding NIC port design. Data networks are configured with port channel vPC or VLT. Two logical data networks are created in Team1: data1 and data2.
  Trunk 1 to switch A          02:01   Slot2Port1   Team0
  Trunk 1 to switch B          03:01   Slot3Port1   Team0
  Trunk 2 to switch A (data)   03:02   Slot3Port2   Team1
  Trunk 2 to switch B (data)   02:02   Slot2Port2   Team1
  iDRAC - OOB network          M0      n/a          n/a

LACP bonding NIC port design. The data networks are configured with port channel vPC or VLT (LACP enabled). Four logical data networks are created in Team1: data1, data2, data3 (if required), and data4 (if required). There are no changes in physical connectivity from node to switch.
  Trunk 1 to switch A          02:01   Slot2Port1   Team0
  Trunk 1 to switch B          03:01   Slot3Port1   Team0
  Trunk 2 to switch A (data)   03:02   Slot3Port2   Team1
  Trunk 2 to switch B (data)   02:02   Slot2Port2   Team1
  iDRAC - OOB network          M0      n/a          n/a
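On Windows-based compute-only nodes, the pair of NIC teams plays the role that the cust_dvswitch and flex_dvswitch uplink pairs play on ESXi-based nodes. The following minimal Python sketch (illustrative only, not a supported tool; the function name is an assumption) records which logical ports from the table above belong to each team under each NIC port design:

    def team_plan(design):
        """Return the NIC team membership for a given NIC port design.
        design is 'non-bonded', 'static', or 'lacp'."""
        trunk_ports = ["Slot2Port1", "Slot3Port1"]   # Trunk 1 to switch A / switch B
        data_ports = ["Slot3Port2", "Slot2Port2"]    # Trunk 2 (data) to switch A / switch B
        if design == "non-bonded":
            # The data ports stay standalone access ports; only the trunks are teamed.
            return {"Team0": trunk_ports, "standalone": data_ports}
        # The static and LACP designs also team the data ports as Team1.
        return {"Team0": trunk_ports, "Team1": data_ports}

    print(team_plan("lacp"))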

PowerFlex R840 compute-only nodes

PowerFlex R840 nodes GPU

Slot layout

PowerFlex R840 nodes with SSD GPU (1A+2A)


Slot 0 (rNDC) CX4-LX
Slot 1 Empty
Slot 2 GPU (double width)
Slot 3 HBA330
Slot 4 CX4-LX
Slot 5 Empty
Slot 6 BOSS

Logical network for PowerFlex R840 ESXi-based compute-only nodes with GPU

Network logical version Description Slot:Port VMW label DVSwitch


Non-bonded NIC port design Trunk to switch A 00:01 vmnic0 dvswitch0
Data networks are access ports. Trunk to switch B 04:01 vmnic2 dvswitch0
Data 1 to switch A 04:02 vmnic3 dvswitch1


Data 2 to switch B 00:02 vmnic1 dvswitch2
iDRAC - OOB M0 Not applicable Not applicable
network
Static bonding NIC port design Trunk 1 to switch A 00:01 vmnic0 cust_dvswitch
Data networks are configured with Trunk 1 to switch B 04:01 vmnic2 cust_dvswitch
port channel vPC or VLT. Two
logical data networks are created in Trunk 2 to switch A 04:02 vmnic3 flex_dvswitch
flex_dvswitch; data1 and data2. (data)
Trunk 2 to switch B 00:02 vmnic1 flex_dvswitch
(data)
iDRAC - OOB M0 Not applicable Not applicable
network
LACP bonding NIC port design Trunk 1 to switch A 00:01 vmnic0 cust_dvswitch
Data networks are configured with Trunk 1 to switch B 04:01 vmnic2 cust_dvswitch
port channel vPC or VLT (LACP
enabled). Four logical data networks Trunk 2 to switch A 04:02 vmnic3 flex_dvswitch
are created in flex_dvswitch: data1, (data)
data2, data3 (if required), and data4
Trunk 2 to switch B 00:02 vmnic1 flex_dvswitch
(if required). There are no changes
(data)
in physical connectivity from node to
switch. iDRAC - OOB M0 Not applicable Not applicable
network

Logical network for PowerFlex R840 Windows-based compute-only nodes with GPU

Network logical version Description Slot:Port Logical port NIC teaming


Non-bonded NIC port design Trunk to switch A 00:01 NIC1 Team0
Data networks are access ports. Trunk to switch B 04:01 Slot4Port1 Team0
Data 1 to switch A 04:02 Slot4Port2 Not applicable
Data 2 to switch B 00:02 NIC2 Not applicable
iDRAC - OOB M0 Not applicable Not applicable
network
Static bonding NIC port design Trunk 1 to switch A 00:01 NIC1 Team0
Data networks are configured with Trunk 1 to switch B 04:01 Slot4Port1 Team0
port channel vPC or VLT. Two logical
data networks are created in Team1: Trunk 2 to switch A 04:02 Slot4Port2 Team1
data1 and data2. (data)
Trunk 2 to switch B 00:02 NIC2 Team1
(data)
iDRAC - OOB M0 Not applicable Not applicable
network
LACP bonding NIC port design Trunk 1 to switch A 00:01 NIC1 Team0
The data networks are configured Trunk 1 to switch B 04:01 Slot4Port1 Team0
with port channel vPC or VLT (LACP
enabled). Four logical data networks Trunk 2 to switch A 04:02 Slot4Port2 Team1
are created in Team1: data1, data2, (data)
data3 (if required) and data4 (if
Trunk 2 to switch B 00:02 NIC2 Team1
required). There are no changes in
(data)
physical connectivity from node to
switch.


iDRAC - OOB M0 Not applicable Not applicable
network

PowerFlex R840 nodes - GPU capable

Slot layout

PowerFlex R840 nodes SSD GPU capable (1A+2A)


Slot 0 (rNDC) CX4-LX
Slot 1 Empty
Slot 2 BOSS
Slot 3 HBA330
Slot 4 CX4-LX
Slot 5 Empty
Slot 6 Empty

Logical network for PowerFlex R840 ESXi-based compute-only nodes - GPU capable

Network logical version Description Slot:Port VMW label DVSwitch


Non-bonded NIC port design Trunk to switch A 00:01 vmnic0 dvswitch0
Data networks are access ports. Trunk to switch B 04:01 vmnic2 dvswitch0
Data 1 to switch A 04:02 vmnic3 dvswitch1
Data 2 to switch B 00:02 vmnic1 dvswitch2
iDRAC - OOB M0 Not applicable Not applicable
network
Static bonding NIC port design Trunk 1 to switch A 00:01 vmnic0 cust_dvswitch
Data networks are configured with Trunk 1 to switch B 04:01 vmnic2 cust_dvswitch
port channel vPC or VLT. Two
logical data networks are created in Trunk 2 to switch A 04:02 vmnic3 flex_dvswitch
flex_dvswitch; data1 and data2. (data)
Trunk 2 to switch B 00:02 vmnic1 flex_dvswitch
(data)
iDRAC - OOB M0 Not applicable Not applicable
network


LACP bonding NIC port design Trunk 1 to switch A 00:01 vmnic0 cust_dvswitch
Data networks are configured with Trunk 1 to switch B 04:01 vmnic2 cust_dvswitch
port channel vPC or VLT (LACP
enabled). Four logical data networks Trunk 2 to switch A 04:02 vmnic3 flex_dvswitch
are created in flex_dvswitch: data1, (data)
data2, data3 (if required), and data4
Trunk 2 to switch B 00:02 vmnic1 flex_dvswitch
(if required). There are no changes
(data)
in physical connectivity from node to
switch. iDRAC - OOB M0 Not applicable Not applicable
network

Logical network for PowerFlex R840 Windows-based compute-only nodes - GPU capable

Network logical version Description Slot:Port Logical port NIC teaming


Non-bonded NIC port design Trunk to switch A 00:01 NIC1 Team0
Data networks are access ports. Trunk to switch B 04:01 Slot4Port1 Team0
Data 1 to switch A 04:02 Slot4Port2 Not applicable
Data 2 to switch B 00:02 NIC2 Not applicable
iDRAC - OOB M0 Not applicable Not applicable
network
Static bonding NIC port design Trunk 1 to switch A 00:01 NIC1 Team0
Data networks are configured with Trunk 1 to switch B 04:01 Slot4Port1 Team0
port channel vPC or VLT. Two logical
data networks are created in Team1: Trunk 2 to switch A 04:02 Slot4Port2 Team1
data1 and data2. (data)
Trunk 2 to switch B 00:02 NIC2 Team1
(data)
iDRAC - OOB M0 Not applicable Not applicable
network
LACP bonding NIC port design Trunk 1 to switch A 00:01 NIC1 Team0
The data networks are configured Trunk 1 to switch B 04:01 Slot4Port1 Team0
with port channel vPC or VLT (LACP
enabled). Four logical data networks Trunk 2 to switch A 04:02 Slot4Port2 Team1
are created in Team1: data1, data2, (data)
data3 (if required), and data4 (if
Trunk 2 to switch B 00:02 NIC2 Team1
required). There are no changes in
(data)
physical connectivity from node to
switch. iDRAC - OOB M0 Not applicable Not applicable
network

PowerFlex R840 nodes without GPU


Slot layout

PowerFlex R840 node No GPU (1B+2B)


Slot 0 (rNDC) CX4-LX
Slot 1 Empty
Slot 2 BOSS
Slot 3 HBA330
Slot 4 CX4-LX
Slot 5 Empty
Slot 6 Empty

Logical network for PowerFlex R840 ESXi-based compute-only nodes without GPU

Network logical version Description Slot:Port VMW label DVSwitch


Non-bonded NIC port design Trunk to switch A 00:01 vmnic0 dvswitch0
Data networks are access ports. Trunk to switch B 04:01 vmnic2 dvswitch0
Data 1 to switch A 04:02 vmnic3 dvswitch1
Data 2 to switch B 00:02 vmnic1 dvswitch2
iDRAC - OOB M0 Not applicable Not applicable
network
Static bonding NIC port design Trunk 1 to switch A 00:01 vmnic0 cust_dvswitch
Data networks are configured with Trunk 1 to switch B 04:01 vmnic2 cust_dvswitch
port channel vPC or VLT. Two
logical data networks are created in Trunk 2 to switch A 04:02 vmnic3 flex_dvswitch
flex_dvswitch; data1 and data2. (data)
Trunk 2 to switch B 00:02 vmnic1 flex_dvswitch
(data)
iDRAC - OOB M0 Not applicable Not applicable
network
LACP bonding NIC port design Trunk 1 to switch A 00:01 vmnic0 cust_dvswitch
Data networks are configured with Trunk 1 to switch B 04:01 vmnic2 cust_dvswitch
port channel vPC or VLT (LACP
enabled). Four logical data networks Trunk 2 to switch A 04:02 vmnic3 flex_dvswitch
are created in flex_dvswitch; data1, (data)
data2, data3 (if required), and data4
Trunk 2 to switch B 00:02 vmnic1 flex_dvswitch
(if required). There are no changes
(data)
in physical connectivity from node to
switch. iDRAC - OOB M0 Not applicable Not applicable
network

Logical network for PowerFlex R840 Windows-based compute-only nodes without GPU

Network logical version Description Slot:Port Logical port NIC teaming


Non-bonded NIC port design Trunk to switch A 00:01 NIC1 Team0
Data networks are access ports. Trunk to switch B 04:01 Slot4Port1 Team0
Data 1 to switch A 04:02 Slot4Port2 Not applicable
Data 2 to switch B 00:02 NIC2 Not applicable


iDRAC - OOB M0 Not applicable Not applicable
network
Static bonding NIC port design Trunk 1 to switch A 00:01 NIC1 Team0
Data networks are configured with Trunk 1 to switch B 04:01 Slot4Port1 Team0
port channel vPC or VLT. Two logical
data networks are created in vDS1: Trunk 2 to switch A 04:02 Slot4Port2 Team1
data1 and data2. (data)
Trunk 2 to switch B 00:02 NIC2 Team1
(data)
iDRAC - OOB M0 Not applicable Not applicable
network
LACP bonding NIC port design Trunk 1 to switch A 00:01 NIC1 Team0
The data networks are configured Trunk 1 to switch B 04:01 Slot4Port1 Team0
with port channel vPC or VLT (LACP
enabled). Four logical data networks Trunk 2 to switch A 04:02 Slot4Port2 Team1
are created in vDS1: data1, data2, (data)
data3 (if required), and data4 (if
Trunk 2 to switch B 00:02 NIC2 Team1
required). There are no changes in
(data)
physical connectivity from node to
switch. iDRAC - OOB M0 Not applicable Not applicable
network

Cabling requirements for VMware NSX-T Edge nodes


The following information describes the cabling requirements for VMware NSX-T Edge nodes.

Slot layout
VMware NSX-T Edge node SSD Dual CPU
Slot 0 (rNDC) CX4-LX
Slot 1 BOSS
Slot 2 CX4-LX
Slot 3 CX4-LX

Logical network (columns: Description, Slot:Port, Logical port, DVSwitch)

LACP bonding NIC port design. The management network is configured with port channel or vPC or VLT (LACP enabled).
  Management to switch A    00:01   vmnic0   dvswitch0
  Management to switch B    02:01   vmnic2   dvswitch0

Non-bonded NIC port design. Both transport and external networks are access ports.
  Transport to switch B     00:02   vmnic1   dvswitch1
  Transport to switch A     03:01   vmnic4   dvswitch1
  External 01 to switch B   02:02   vmnic3   dvswitch1
  External 02 to switch A   03:02   vmnic5   dvswitch1
  iDRAC - OOB network       M0      Not applicable   Not applicable

Connecting the VMware NSX-T Edge nodes


The following information describes the connection between VMware NSX-T Edge nodes and the aggregation or border leaf
switches, depending on the network topology used within an environment. If the local disk (RAID10) is utilized, by default a
minimum of two physical edge servers are deployed. If vSAN is utilized, by default a minimum of four physical edge servers are
deployed.
Each VMware NSX-T Edge node is cabled in the same way regardless of the number of VMware NSX-T Edge nodes installed.
By default, each VMware NSX-T Edge node has all six of its connections connected to either the aggregation switches in an
aggregation and access topology or the border leaf switches in a leaf and spine topology. If there is a port capacity or a cable
distance constraint, the two management and two transport connections are relocated to the access switches. The two
external edge connections must always remain on either the aggregation or border leaf switches.
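The placement rule described above can be summarized as follows. This minimal Python sketch is illustrative only (not a Dell tool; the function and link names are assumptions); the links stand for the two management, two transport, and two external connections of a single VMware NSX-T Edge node:

    def edge_link_placement(port_capacity_constraint=False):
        """Return where each of the six VMware NSX-T Edge node links terminates."""
        links = ["management A", "management B", "transport A", "transport B",
                 "external 01", "external 02"]
        placement = {link: "aggregation or border leaf switch" for link in links}
        if port_capacity_constraint:
            # Management and transport links move to the access (or leaf) switches;
            # the two external links always stay on the aggregation or border leaf switches.
            for link in ("management A", "management B", "transport A", "transport B"):
                placement[link] = "access or leaf switch"
        return placement

    print(edge_link_placement(port_capacity_constraint=True))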
The following sections describe the cabling information for each network topology.

Connecting the VMware NSX-T Edge nodes to the aggregation and access topology
Use the following options if the PowerFlex appliance based on PowerFlex R640 nodes is using an aggregation and access
topology.
The port numbers within the port maps are provided for example purposes only. Port numbers vary for each build based on the
number of devices connecting to the switches.

Option 1 (default): Connecting all six connections to the aggregation switches


The following port maps show the VMware NSX-T Edge node connections mapped to the aggregation switches. In the following
examples, the PowerFlex R640 node 1G-00:01 indicates the following:
● PowerFlex R640 node 1G is the first new PowerFlex R640 node used for VMware NSX-T Edge node.
● 00:01 is slot:port which is slot 0 port 1 on the PowerFlex R640 node.
Consider the following while connecting all six connections to the aggregation switches:
○ The port numbers can vary for each build.
○ The assumption is that the two VMware NSX-T Edge nodes (1E-1F) already exist (not shown in the table) and you are
adding two more VMware NSX-T Edge nodes (1G-1H).
○ Each VMware NSX-T Edge node requires three physical ports per switch. Because the network adapter in each VMware NSX-T
Edge node is 25G, a maximum of four connections per port is allowed.
The aggregation switches provide all six connections for the VMware NSX-T Edge nodes. The following port map shows the
two VMware NSX-T Edge nodes have all six links connected to the aggregation switches.

Aggregation switch A
1-29 1-31

R640 10-Drive 1G-03:01 R640 10-Drive 1G-00:01


R640 10-Drive 1H-03:01 R640 10-Drive 1H-00:01

Not applicable R640 10-Drive 1G-02:02


R640 10-Drive 1H-02:02


Aggregation switch A
1-30 1-32

Aggregation switch B
1-29 1-31

R640 10-Drive 1G-00:02 R640 10-Drive 1G-02:01


R640 10-Drive 1H-00:02 R640 10-Drive 1H-02:01

Not applicable R640 10-Drive 1G-03:02


R640 10-Drive 1H-03:02

1-30 1-32

Option 2: Connecting nodes to both aggregation and access switches


The following port maps show the VMware NSX-T Edge node connections mapped to both the aggregation and access
switches. In the following examples, PowerFlex R640 node 1G-00:01 indicates the following:
● PowerFlex R640 node 1G is the first new PowerFlex R640 node used for VMware NSX-T Edge node.
● 00:01 is slot:port which is slot 0 port 1 on the PowerFlex R640 node.
Consider the following while connecting nodes to both aggregation and access switches:
● The port numbers can vary for each build.
● The assumption is that the two VMware NSX-T Edge nodes (1E-1F) already exist (not shown in the table) and you are
adding two more VMware NSX-T Edge nodes (1G-1H).
● Each VMware NSX-T Edge node requires one physical port per switch. Because the network adapter in each VMware NSX-T Edge
node is 25G, a maximum of four connections per port is allowed.
The aggregation switches provide only the two VMware NSX-T edge external traffic links. The following port map shows two
links connected on the aggregation switches.

Aggregation switch A
1-31

R640 10-Drive 1G-02:02


R640 10-Drive 1H-02:02

Aggregation switch B
1-31

R640 10-Drive 1G-03:02


R640 10-Drive 1H-03:02

The access switches provide two management and two transport traffic links. The following port map shows two VMware
NSX-T Edge nodes with four links connected to the access switches.

Access switch A
1-31

R640 10-Drive 1G-00:01


R640 10-Drive 1H-00:01

R640 10-Drive 1G-03:01


R640 10-Drive 1H-03:01


Access switch A
1-32

Access switch B
1-31

R640 10-Drive 1G-02:01


R640 10-Drive 1H-02:01

R640 10-Drive 1G-00:02


R640 10-Drive 1H-00:02

1-32

Connecting the VMware NSX-T Edge nodes to the leaf and spine topology
Use the following options if the PowerFlex appliance based on PowerFlex R640 nodes is using a leaf and spine topology.

Option 1 (default): Connecting all six VMware NSX-T Edge node connections directly to the border leaf switches
The following port maps show the connections mapped to the border leaf switches. In the following examples, the PowerFlex
R640 node 1G-00:01 indicates the following:
● PowerFlex R640 node 1G is the first new PowerFlex R640 node used for VMware NSX-T Edge node.
● 00:01 is slot:port which is slot 0 port 1 on the PowerFlex R640 node.
Consider the following while connecting the VMware NSX-T Edge nodes to the leaf and spine topology:
● The port numbers can vary for each build.
● The assumption is that the two VMware NSX-T Edge nodes (1E-1F) already exist (not shown in the table) and you are
adding two more edge nodes (1G-1H).
● Each VMware NSX-T Edge node requires three physical ports per switch. Because the network adapter in each VMware NSX-T
Edge node is 25G, a maximum of four connections per port is allowed.
The border leaf switches provide all six connections for the VMware NSX-T Edge nodes. The following port map shows the two
VMware NSX-T Edge nodes have all six links connected to the border leaf switches.

Border leaf switch A


1-29 1-31

R640 10-Drive 1G-03:01 R640 10-Drive 1G-00:01


R640 10-Drive 1H-03:01 R640 10-Drive 1H-00:01

Not applicable R640 10-Drive 1G-02:02


R640 10-Drive 1H-02:02

1-30 1-32

Border leaf switch B


1-29 1-31

R640 10-Drive 1G-00:02 R640 10-Drive 1G-02:01


R640 10-Drive 1H-00:02 R640 10-Drive 1H-02:01


Border leaf switch B

Not applicable R640 10-Drive 1G-03:02


R640 10-Drive 1H-03:02

1-30 1-32

Option 2: Connecting the VMware NSX-T Edge nodes to both border leaf and leaf switches
The following port maps show the VMware NSX-T Edge node connections mapped to both the border leaf and leaf switches. In
the following examples, PowerFlex R640 node 1G-00:01 indicates the following:
● PowerFlex R640 node 1G is the first new PowerFlex R640 node used for VMware NSX-T Edge node.
● 00:01 is slot:port which is slot 0 port 1 on the PowerFlex R640 node.
Consider the following while connecting the VMware NSX-T Edge nodes to both border leaf and leaf switches:
● The port numbers can vary for each build.
● The assumption is that the two VMware NSX-T Edge nodes (1E-1F) already exist (not shown in the table) and you are
adding two more edge nodes (1G-1H).
● Each VMware NSX-T Edge node requires one physical port per switch. Because the network adapter in each VMware NSX-T Edge
node is 25G, a maximum of four connections per port is allowed.
The border leaf switches provide only two VMware NSX-T edge external traffic links. The following port map shows two
VMware NSX-T Edge nodes with two links connected to the border leaf switches.

Border leaf switch A


1-31

R640 10-Drive 1G-02:02


R640 10-Drive 1H-02:02

Border leaf switch B


1-31

R640 10-Drive 1G-03:02


R640 10-Drive 1H-03:02

The leaf switches provide two management and two transport traffic links. The following port map shows two VMware NSX-T
Edge nodes with four links connected to the leaf switches.

Leaf switch A
1-31

R640 10-Drive 1G-00:01


R640 10-Drive 1H-00:01

R640 10-Drive 1G-03:01


R640 10-Drive 1H-03:01

1-32

Leaf switch B
1-31

R640 10-Drive 1G-02:01


R640 10-Drive 1H-02:01


Leaf switch B

R640 10-Drive 1G-00:02


R640 10-Drive 1H-00:02

1-32

PowerFlex R640/R740xd/R840 nodes rNDC slot layout

The following information describes the network daughter card (rNDC) slot layout for the PowerFlex R640/R740xd/R840 nodes.

PowerFlex management controller rNDC slot layout


This section describes the slot layout information for the PowerFlex management controller.

PowerFlex management controller with BOSS card - small or large

Slot layout

PowerFlex R640 management controller node   Single CPU - small   Dual CPU - large
Slot 0 (rNDC) Mellanox CX-4 25 GB Mellanox CX-4 25 GB
Slot 1 BOSS BOSS
Slot 2 Mellanox CX-4 25 GB Mellanox CX-4 25 GB
Slot 3 Intel X550-10GbT Intel X550-10GbT

Slot matrix for PowerFlex R640 management controller node - small or large

Model: PowerFlex R640 management controller node
  Description           Slot:Port   Logical port   DvSwitch
  Trunk 1 to Switch A   0:01        vmnic0         fe_dvswitch
  Trunk 2 to Switch B   0:02        vmnic1         be_dvswitch
  Trunk 1 to Switch B   2:01        vmnic2         fe_dvswitch
  Trunk 2 to Switch A   2:02        vmnic3         be_dvswitch
  OOB network           3:01        vmnic4         oob-dvswitch


PowerFlex hyperconverged node rNDC slot layout


This section describes the slot layout for PowerFlex hyperconverged nodes.

PowerFlex R640 hyperconverged nodes


PowerFlex R640 node with SSD

Slot layout

PowerFlex R640 hyperconverged nodes with SSD Dual CPU


Slot 0 (rNDC) Mellanox CX-4 25 GB
Slot 1 BOSS
Slot 2 Empty
Slot 3 Mellanox CX-4 25 GB

Slot matrix for PowerFlex R640 hyperconverged nodes with SSD

Model: PowerFlex R640 hyperconverged nodes with SSD
  Description           Slot:Port   Logical port     Dvswitch
  Trunk 1 to Switch A   0:01        vmnic0           cust_dvswitch
  Trunk 2 to Switch B   0:02        vmnic1           flex_dvswitch
  Trunk 1 to Switch B   3:01        vmnic2           cust_dvswitch
  Trunk 2 to Switch A   3:02        vmnic3           flex_dvswitch
  iDRAC - OOB network   M0          Not applicable   Not applicable

PowerFlex R640 hyperconverged nodes with 8* NVMe

Slot layout

PowerFlex R640 hyperconverged nodes with 8* NVMe Dual CPU


Slot 0 (rNDC) Mellanox CX-4 25 GB
Slot 1 BOSS
Slot 2 Empty
Slot 3 Mellanox CX-4 25 GB

Slot matrix for PowerFlex R640 hyperconverged nodes with 8* NVMe

Model Description Slot:Port Logical port Dvswitch


PowerFlex R640 Trunk 1 to Switch A 0:01 vmnic0 cust_dvswitch
hyperconverged nodes
with 8* NVMe Trunk 2 to Switch B 0:02 vmnic1 flex_dvswitch
Trunk 1 to Switch B 3:01 vmnic2 cust_dvswitch
Trunk 2 to Switch A 3:02 vmnic3 flex_dvswitch
iDRAC - OOB network M0 Not applicable Not applicable

PowerFlex R640 hyperconverged nodes with 10* NVMe

Slot layout

PowerFlex R640 hyperconverged nodes with 10* NVMe Dual CPU


Slot 0 (rNDC) Mellanox CX-4 25 GB
Slot 1 NVMe bridge
Slot 2 BOSS
Slot 3 Mellanox CX-4 25 GB

Slot matrix for PowerFlex R640 hyperconverged nodes with 10* NVMe

Model Description Slot:Port Logical port Dvswitch


PowerFlex R640 Trunk 1 to Switch A 0:01 vmnic0 cust_dvswitch
hyperconverged nodes
with 10* NVMe Trunk 2 to Switch B 0:02 vmnic1 flex_dvswitch
Trunk 1 to Switch B 3:01 vmnic2 cust_dvswitch
Trunk 2 to Switch A 3:02 vmnic3 flex_dvswitch
iDRAC - OOB network M0 Not applicable Not applicable

PowerFlex R740xd hyperconverged nodes


PowerFlex R740xd with SSD

Slot layout

PowerFlex R740xd hyperconverged nodes with SSD Dual CPU


rNDC (CPU 1) Mellanox CX-4 25 GB
Slot 1 (CPU 1) open (x8)
Slot 2 (CPU 1) Not applicable
Slot 3 (CPU 1) BOSS
Slot 4 (CPU 2) open (x16)

Slot 5 (CPU 2) Mellanox CX-4 25 GB
Slot 6 (CPU 1) HBA330
Slot 7 (CPU 2) open (x8)
Slot 8 (CPU 2) open (x16)

Slot matrix for PowerFlex R740xd hyperconverged nodes with SSD

Model Description Slot:Port Logical port Dvswitch


PowerFlex R740xd Trunk 1 to Switch A 0:01 vmnic0 cust_dvswitch
hyperconverged nodes
with SSD and 25 G Trunk 2 to Switch B 0:02 vmnic1 flex_dvswitch
Trunk 1 to Switch B 5:01 vmnic2 cust_dvswitch
Trunk 2 to Switch A 5:02 vmnic3 flex_dvswitch
iDRAC-OOB network M0 Not applicable Not applicable

PowerFlex R740xd hyperconverged nodes with NVMe

Slot layout

PowerFlex R740xd hyperconverged nodes with NVMe Dual CPU


rNDC (CPU 1) Mellanox CX-4 25 GB
Slot 1 (CPU 1) open (x16)
Slot 2 (CPU 1) Not applicable
Slot 3 (CPU 1) NVMe bridge
Slot 4 (CPU 2) NVMe bridge
Slot 5 (CPU 2) Mellanox CX-4 25 GB
Slot 6 (CPU 1) BOSS
Slot 7 (CPU 2) open (x8)
Slot 8 (CPU 2) open (x16)

Slot matrix for PowerFlex R740xd hyperconverged nodes with NVMe

Model Description Slot:Port Logical port Dvswitch


PowerFlex R740xd Trunk 1 to Switch A 0:01 vmnic0 cust_dvswitch
hyperconverged nodes
with NVMe-25G-4/24 Trunk 2 to Switch B 0:02 vmnic1 flex_dvswitch
Trunk 1 to Switch B 5:01 vmnic2 cust_dvswitch
Trunk 2 to Switch A 5:02 vmnic3 flex_dvswitch
iDRAC-OOB network M0 Not applicable Not applicable


PowerFlex R740xd hyperconverged nodes with two GPUs

Slot layout

PowerFlex R740xd hyperconverged nodes with two GPUs Dual CPU


rNDC (CPU 1) Mellanox CX-4 25 GB
Slot 1 (CPU 1) GPU1 (DW)
Slot 2 (CPU 1) Not applicable
Slot 3 (CPU 1) BOSS
Slot 4 (CPU 2) open (x16)
Slot 5 (CPU 2) Mellanox CX-4 25 GB
Slot 6 (CPU 1) HBA330
Slot 7 (CPU 2) blocked
Slot 8 (CPU 2) GPU2 (DW)

Slot matrix for PowerFlex R740xd hyperconverged nodes with two GPUs

Model Description Slot:Port Logical port Dvswitch


PowerFlex R740xd Trunk 1 to Switch A 0:01 vmnic0 cust_dvswitch
hyperconverged nodes
with SSD, 25G and two Trunk 2 to Switch B 0:02 vmnic1 flex_dvswitch
GPUs Trunk 1 to Switch B 5:01 vmnic2 cust_dvswitch
Trunk 2 to Switch A 5:02 vmnic3 flex_dvswitch
iDRAC-OOB network M0 Not applicable Not applicable

PowerFlex R740xd hyperconverged nodes with NVMe and two GPUs

Slot layout

PowerFlex R740xd hyperconverged nodes with NVMe and two GPUs    Dual CPU
rNDC (CPU 1) Mellanox CX-4 25 GB
Slot 1 (CPU 1) GPU1 (DW/SW)
Slot 2 (CPU 1) Not applicable
Slot 3 (CPU 1) NVMe bridge
Slot 4 (CPU 2) NVMe bridge
Slot 5 (CPU 2) Mellanox CX-4 25 GB
Slot 6 (CPU 1) BOSS
Slot 7 (CPU 2) blocked
Slot 8 (CPU 2) GPU2 (DW)

Slot matrix for PowerFlex R740xd hyperconverged nodes with NVMe and two GPUs

Model Description Slot:Port Logical port Dvswitch


PowerFlex R740xd Trunk 1 to Switch A 0:01 vmnic0 cust_dvswitch
hyperconverged nodes
with NVMe, 25 G, and Trunk 2 to Switch B 0:02 vmnic1 flex_dvswitch
two GPUs Trunk 1 to Switch B 5:01 vmnic2 cust_dvswitch
Trunk 2 to Switch A 5:02 vmnic3 flex_dvswitch
iDRAC - OOB network M0 Not applicable Not applicable

NOTE: PowerFlex R640/R740xd/R840 with three GPUs, 100 GB PowerFlex nodes, and six ports will be updated after the
SPM matrix update. The slot matrix might change after the SPM addendum is complete.

PowerFlex R840 hyperconverged nodes


PowerFlex R840 hyperconverged nodes with SSD

Slot layout

PowerFlex R840 hyperconverged nodes with SSD Dual CPU


rNDC (CPU1) Mellanox CX-4 25 GB
Slot 1 (NA) Not applicable
Slot 2 (CPU1) open (x16)
Slot 3 (CPU1) HBA330
Slot 4 (CPU2) Mellanox CX-4 25 GB
Slot 5 (NA) Not applicable
Slot 6 (CPU2) BOSS

Slot matrix for PowerFlex R840 hyperconverged nodes with SSD


Model Description Slot:Port Logical port Dvswitch


PowerFlex R840 Trunk 1 to Switch A 0:01 vmnic0 cust_dvswitch
hyperconverged nodes
with SSD Trunk 2 to Switch B 0:02 vmnic1 flex_dvswitch
Trunk 1 to Switch B 4:01 vmnic2 cust_dvswitch
Trunk 2 to Switch A 4:02 vmnic3 flex_dvswitch
iDRAC - OOB network M0 Not applicable Not applicable

PowerFlex R840 hyperconverged nodes with NVMe

Slot layout

PowerFlex R840 hyperconverged nodes with NVMe Dual CPU


rNDC (CPU1) Mellanox CX-4 25 GB
Slot 1 (NA) Not applicable
Slot 2 (CPU1) open (x16)
Slot 3 (CPU1) open (x16)
Slot 4 (CPU2) Mellanox CX-4 25 GB
Slot 5 (NA) Not applicable
Slot 6 (CPU2) BOSS

Slot matrix for PowerFlex R840 hyperconverged nodes with NVMe

Model Description Slot:Port Logical port Dvswitch


PowerFlex R840 Trunk 1 to Switch A 0:01 vmnic0 cust_dvswitch
hyperconverged nodes
with NVMe Trunk 2 to Switch B 0:02 vmnic1 flex_dvswitch
Trunk 1 to Switch B 4:01 vmnic2 cust_dvswitch
Trunk 2 to Switch A 4:02 vmnic3 flex_dvswitch
iDRAC - OOB network M0 Not applicable Not applicable

PowerFlex R840 hyperconverged nodes with GPU

Slot layout


PowerFlex R840 hyperconverged nodes with GPU Dual CPU


rNDC (CPU1) Mellanox CX-4 25 GB
Slot 1 (NA) Not applicable
Slot 2 (CPU1) GPU1 (DW)
Slot 3 (CPU1) HBA330
Slot 4 (CPU2) Mellanox CX-4 25 GB
Slot 5 (NA) Not applicable
Slot 6 (CPU2) BOSS

Slot matrix for PowerFlex R840 hyperconverged nodes with GPU

Model Description Slot:Port Logical port Dvswitch


PowerFlex R840 Trunk 1 to Switch A 0:01 vmnic0 cust_dvswitch
hyperconverged nodes
with GPU Trunk 2 to Switch B 0:02 vmnic1 flex_dvswitch
Trunk 1 to Switch B 4:01 vmnic2 cust_dvswitch
Trunk 2 to Switch A 4:02 vmnic3 flex_dvswitch
iDRAC-OOB network M0 Not applicable Not applicable

PowerFlex R840 hyperconverged nodes with NVMe and GPU

Slot layout

PowerFlex R840 hyperconverged nodes with NVMe and GPU Dual CPU
rNDC (CPU1) Mellanox CX-4 25 GB
Slot 1 (NA) NA
Slot 2 (CPU1) GPU1 (DW)
Slot 3 (CPU1) BOSS
Slot 4 (CPU2) Mellanox CX-4 25 GB
Slot 5 (NA) NA
Slot 6 (CPU2) GPU2 (DW)

Slot matrix for PowerFlex R840 hyperconverged nodes with NVMe and GPU

Model Description Slot:Port Logical port Dvswitch


PowerFlex R840 Trunk 1 to Switch A 0:01 vmnic0 cust_dvswitch
hyperconverged nodes
with NVMe and GPU Trunk 2 to Switch B 0:02 vmnic1 flex_dvswitch
Trunk 1 to Switch B 4:01 vmnic2 cust_dvswitch
Trunk 2 to Switch A 4:02 vmnic3 flex_dvswitch

iDRAC-OOB network M0 Not applicable Not applicable

PowerFlex storage-only node rNDC slot layout


This section describes the slot layout for PowerFlex storage-only nodes.

PowerFlex R640 storage-only nodes


PowerFlex R640 storage-only nodes with SSD

Slot layout

PowerFlex R640 storage-only nodes with SSD Dual CPU


rNDC (CPU1) Mellanox CX-4 25 GB
Slot 1 (CPU1) BOSS
Slot 2 (CPU1) open (x16)
Slot 3 (CPU2) Mellanox CX-4 25 GB

Slot matrix for PowerFlex R640 storage-only nodes with SSD

Model Description Slot:Port Logical port Dvswitch


PowerFlex storage-only Trunk 1 to Switch A 0:01 em1 bond0
node
Trunk 2 to Switch B 0:02 em2 bond1
Trunk 1 to Switch B 3:01 p3p1 bond0
Trunk 2 to Switch A 3:02 p3p2 bond1
iDRAC - OOB network M0 Not applicable Not applicable

PowerFlex R640 storage-only nodes with 8* NVMe

Slot layout

PowerFlex R640 storage-only nodes with 8* NVMe Dual CPU


rNDC (CPU1) Mellanox CX-4 25 GB
Slot 1 (CPU1) BOSS
Slot 2 (CPU1) open (x16)
Slot 3 (CPU2) Mellanox CX-4 25 GB

Slot matrix for PowerFlex R640 storage-only nodes with 8* NVMe


Model Description Slot:Port Logical port Dvswitch


PowerFlex storage-only Trunk 1 to Switch A 0:01 em1 bond0
node with 8* NVMe
Trunk 2 to Switch B 0:02 em2 bond1
Trunk 1 to Switch B 3:01 p3p1 bond0
Trunk 2 to Switch A 3:02 p3p2 bond1
iDRAC - OOB network M0 Not applicable Not applicable

PowerFlex R640 storage-only nodes with 10* NVMe

Slot layout

PowerFlex R640 with 10*NVMe Dual CPU


rNDC (CPU1) Mellanox CX-4 25 GB
Slot 1(CPU1) NVMe bridge
Slot 2(CPU1) BOSS
Slot 3(CPU2) Mellanox CX-4 25 GB

Slot matrix for PowerFlex R640 storage-only nodes with 10* NVMe

Model Description Slot:Port Logical port Dvswitch


PowerFlex storage-only Trunk 1 to Switch A 0:01 em1 bond0
node with 10* NVMe
Trunk 2 to Switch B 0:02 em2 bond1
Trunk 1 to Switch B 3:01 p3p1 bond0
Trunk 2 to Switch A 3:02 p3p2 bond1
iDRAC - OOB network M0 Not applicable Not applicable

PowerFlex R740xd storage-only nodes


PowerFlex R740xd storage-only nodes with SSD

Slot layout

PowerFlex R740xd storage-only nodes with SSD Dual CPU


rNDC (CPU1) Mellanox CX-4 25 GB
Slot 1 (CPU1) open (x8)
Slot 2 (CPU1) NA



Slot 3 (CPU1) BOSS
Slot 4 (CPU2) open (x16)
Slot 5 (CPU2) Mellanox CX-4 25 GB
Slot 6 (CPU1) HBA330
Slot 7 (CPU2) open (x8)
Slot 8 (CPU2) open (x16)

Slot matrix for PowerFlex R740xd storage-only nodes with SSD

Model Description Slot:Port Logical port Dvswitch


PowerFlex storage-only Trunk 1 to Switch A 0:01 em1 bond0
node with SSD
Trunk 2 to Switch B 0:02 em2 bond1
Trunk 1 to Switch B 5:01 p5p1 bond0
Trunk 2 to Switch A 5:02 p5p2 bond1
iDRAC - OOB network M0 Not applicable Not applicable

PowerFlex R740xd storage-only nodes with NVMe

Slot layout

PowerFlex R740xd storage-only nodes with NVMe Dual CPU


rNDC (CPU1) Mellanox CX-4 25 GB
Slot 1 (CPU1) open (x16)
Slot 2 (CPU1) NA
Slot 3 (CPU1) NVMe bridge
Slot 4 (CPU2) NVMe bridge
Slot 5 (CPU2) Mellanox CX-4 25 GB
Slot 6 (CPU1) BOSS
Slot 7 (CPU2) open (x8)
Slot 8 (CPU2) open (x16)

Slot matrix for PowerFlex R740xd storage-only nodes with NVMe

Model Description Slot:Port Logical port Dvswitch


PowerFlex storage-only Trunk 1 to Switch A 0:01 em1 bond0
node with NVMe
Trunk 2 to Switch B 0:02 em2 bond1
Trunk 1 to Switch B 5:01 p5p1 bond0



Trunk 2 to Switch A 5:02 p5p2 bond1
iDRAC - OOB network M0 Not applicable Not applicable

PowerFlex compute-only node slot layout


This section describes the slot layout for PowerFlex compute-only nodes.

PowerFlex R640 compute-only nodes


PowerFlex R640 compute-only nodes with SSD

Slot layout

PowerFlex R640 compute-only nodes with SSD Dual CPU


rNDC (CPU1) Mellanox CX-4 25 GB
Slot 1 (CPU1) BOSS
Slot 2 (CPU1) open (x16)
Slot 3 (CPU2) Mellanox CX-4 25 GB

Slot matrix for PowerFlex R640 nodes with SSD

Model Description Slot:Port Logical port Dvswitch


PowerFlex compute- Trunk 1 to Switch A 0:01 vmnic0 cust_dvswitch
only nodes with SSD
Trunk 2 to Switch B 0:02 vmnic1 flex_dvswitch
Trunk 1 to Switch B 3:01 vmnic2 cust_dvswitch
Trunk 2 to Switch A 3:02 vmnic3 flex_dvswitch
iDRAC - OOB network M0 Not applicable Not applicable

PowerFlex R740xd compute-only nodes


PowerFlex R740xd compute-only nodes without hard drives

Slot layout

PowerFlex R740xd compute-only nodes without hard drives    Dual CPU
rNDC (CPU1) Mellanox CX-4 25 GB
Slot 1 (CPU1) GPU1 (DW)
Slot 2 (CPU1) NA
Slot 3 (CPU1) BOSS
Slot 4 (CPU2) open (x16)
Slot 5 (CPU2) Mellanox CX-4 25 GB
Slot 6 (CPU1) HBA330
Slot 7 (CPU2) blocked
Slot 8 (CPU2) GPU2 (DW)

Slot matrix for PowerFlex R740xd compute-only nodes without hard drives

Model Description Slot:Port Logical port Dvswitch


PowerFlex compute- Trunk 1 to Switch A 0:01 vmnic0 cust_dvswitch
only nodes -25G
without hard drives Trunk 2 to Switch B 0:02 vmnic1 flex_dvswitch
Trunk 1 to Switch B 5:01 vmnic2 cust_dvswitch
Trunk 2 to Switch A 5:02 vmnic3 flex_dvswitch
iDRAC - OOB network M0 Not applicable Not applicable

PowerFlex R840 compute-only nodes


PowerFlex R840 compute-only nodes with 25G

Slot layout

PowerFlex R840 compute-only nodes with 25G Dual CPU


rNDC (CPU1) M/Q-25G
Slot 1 (NA) Not applicable
Slot 2 (CPU1) open (x16)
Slot 3 (CPU1) HBA330
Slot 4 (CPU2) M/Q-25G
Slot 5 (NA) Not applicable
Slot 6 (CPU2) BOSS

Slot matrix for PowerFlex R840 compute-only nodes with 25G


Model Description Slot:Port Logical port dvswitch


PowerFlex compute- Trunk 1 to Switch A 0:01 vmnic0 cust_dvswitch
only nodes
Trunk 2 to Switch B 0:02 vmnic1 flex_dvswitch
Trunk 1 to Switch B 4:01 vmnic2 cust_dvswitch
Trunk 2 to Switch A 4:02 vmnic3 flex_dvswitch
iDRAC - OOB network M0 Not applicable Not applicable

PowerFlex R840 compute-only nodes with 25G and 1 GPU

Slot layout

PowerFlex R840 compute-only nodes with 1 GPU Dual CPU


rNDC (CPU1) M/Q-25G
Slot 1 (NA) NA
Slot 2 (CPU1) GPU1 (DW)
Slot 3 (CPU1) HBA330
Slot 4 (CPU2) M/Q-25G
Slot 5 (NA) NA
Slot 6 (CPU2) BOSS

Slot matrix

Model Description Slot:Port Logical port Dvswitch


PowerFlex R840 Trunk 1 to Switch A 0:01 vmnic0 cust_dvswitch
compute-only nodes
with 1 GPU Trunk 2 to Switch B 0:02 vmnic1 flex_dvswitch
Trunk 1 to Switch B 4:01 vmnic2 cust_dvswitch
Trunk 2 to Switch A 4:02 vmnic3 flex_dvswitch
iDRAC - OOB network M0 Not applicable Not applicable

Ports and security configuration data


PowerFlex ports
For information about the ports and protocols used by PowerFlex components, see the Dell EMC PowerFlex Security
Configuration Guide. You can find this guide at the Dell Technologies Technical Resource Center.


PowerFlex Manager ports


Port TCP Service
20 Yes FTP
21 Yes FTP
22 Yes SSH
80 Yes HTTP
443 Yes HTTPS

Jump server ports


Port TCP Service
22 Yes SSH
5901 or 5902 Yes VNC

CloudLink Center ports


See the Network port information for CloudLink Center for information about the ports used by CloudLink Center.

Customer network and router requirements


Configure the customer network for routing and layer-2 access for the various networks before PowerFlex Manager deploys the
PowerFlex appliance cluster.
The pre-deployment customer network requirements are as follows:
● Redundant connections to access switches using VLT or VPC (virtual link trunking or virtual port channel).
● MTU=9216 on all ports or link aggregation interfaces carrying PowerFlex data VLANs.
● MTU=9216 as default on VMware vMotion and vSAN interfaces.
● Layer-2 connectivity of PowerFlex data and PowerFlex management VLANs from access switches to PowerFlex gateway.
For example, you must configure data VLANs on access switch uplinks and through customer network to PowerFlex
gateway.
● Layer-2 connectivity of operating system installation VLAN from access switches to PowerFlex Manager. For example, you
must configure the installation VLAN on access switch uplinks and through customer network to PowerFlex Manager.
● VLANs and routes as described below. The router for any routed network is 192.168.<VLAN>.254. For example,
for VLAN 200, the router is 192.168.200.254.
The following table lists customer network pre-deployment VLAN configuration options. A minimum of two logical data networks
are supported. Optionally, you can configure four logical data networks. Verify the number of logical data networks configured in
an existing setup and configure the logical data network VLANs accordingly.
If you see VLANs other than the following listed for services discovered in lifecycle mode, assign and match the VLAN details
with an existing setup. The VLAN names are for example only and may not match the configured system.

Default VLAN    Network or VLANs                                                      Properties

105             Hypervisor management                                                 Layer-3 connectivity. Management VMs require DNS and NTP. PowerFlex Manager requires routing between VLANs 101, 105, and 150 to access the hypervisor management.
106             Hypervisor migration                                                  Layer-2 connectivity.
140             PowerFlex management controller 2.0 PowerFlex management              Layer-3 connectivity on uplink.
141             PowerFlex management controller 2.0 PowerFlex data 1                  Layer-2 connectivity, MTU=9216.
142             PowerFlex management controller 2.0 PowerFlex data 2                  Layer-2 connectivity, MTU=9216.
143             PowerFlex management controller 2.0 PowerFlex hypervisor migration    Layer-2 connectivity.
150             PowerFlex management                                                  Layer-3 connectivity on uplink. PowerFlex Manager requires routing between VLANs 105 and 150 to access the hypervisor management.
151             PowerFlex Data 1                                                      Layer-2 connectivity, MTU=9216.
152             PowerFlex Data 2                                                      Layer-2 connectivity, MTU=9216.
153             PowerFlex Data 3                                                      Layer-2 connectivity, MTU=9216.
154             PowerFlex Data 4                                                      Layer-2 connectivity, MTU=9216.
161             Replication VLAN 1                                                    Routable to the replication peer system.
162             Replication VLAN 2                                                    Routable to the replication peer system.

NOTE:
● A minimum of two logical data networks are supported. Optionally, you can configure four logical data networks.
● VLANs 161 and 162 are used to support native asynchronous replication.

Access switch configuration requirements


Configure the management, uplink, and VLT or VPC interconnect ports before PowerFlex Manager deploys the PowerFlex
appliance cluster. The pre-deployment access switch requirements are as follows:
● Management interfaces IP addresses configured (192.168.101.45, 192.168.101.46).
● Switches and interconnect link aggregation interfaces configured to support VLT or VPC.
● MTU=9216 on redundant uplinks with link aggregation (VLT or VPC) to customer data center network.
● MTU=9216 on VLT or VPC interconnect link aggregation interfaces.
● LLDP enabled on switch ports that are connected to PowerFlex appliance node ports.
● SNMP enabled, community string set (‘public’) and trap destination set to PowerFlex Manager (192.168.105.110).
● All uplink, VLT or VPC, and PowerFlex appliance connected ports are not shut down.
● For access switch configurations for both Dell and Cisco switches, see Configuring the network.
● Dell recommends that only the PowerFlex appliance (including the PowerFlex management node, if available) be connected
to the access switches.
If you see VLANs other than the following listed for services discovered in lifecycle mode, assign and match the VLAN details
with an existing setup. The VLAN names are for example only and may not match the configured system. A minimum of two
logical data networks are supported. Optionally, you can configure four logical data networks. Verify the number of logical data
networks configured in an existing setup and configure the logical data network VLANs accordingly.
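As a reference, the LLDP and SNMP requirements listed above map to switch configuration similar to the following sketch on a Cisco Nexus access switch. The community string and trap destination shown are the example values from the list above; substitute the values used in your environment:

feature lldp
snmp-server community public ro
snmp-server host 192.168.105.110 traps version 2c public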
The following table lists access switch pre-deployment VLAN configuration options.


Default VLAN Network or VLANs Properties


101 Access switch management Connected to hardware management network
104 Operating system installation Layer-2 connectivity
105 Hypervisor management Layer-3 connectivity on uplinks
106 Hypervisor migration Layer-2 connectivity
140 PowerFlex management controller 2.0 PowerFlex management    Layer-3 connectivity on uplinks
141 PowerFlex management controller 2.0 VMware vMotion Layer 2 connectivity, MTU=9216
142 PowerFlex management controller 2.0 PowerFlex Data 1 Layer 2 connectivity, MTU=9216
143 PowerFlex management controller 2.0 PowerFlex Data 2 Layer 2 connectivity, MTU=9216
150 PowerFlex management Layer-3 connectivity on uplinks
151 PowerFlex Data 1 Layer 2 connectivity, MTU=9216
152 PowerFlex Data 2 Layer 2 connectivity, MTU=9216
153 PowerFlex Data 3 Layer 2 connectivity, MTU=9216
154 PowerFlex Data 4 Layer 2 connectivity, MTU=9216
161 PowerFlex replication 1 Layer-3 connectivity on uplinks
162 PowerFlex replication 2 Layer-3 connectivity on uplinks

Configure iDRAC network settings


Configure the iDRAC network settings on each of the PowerFlex appliance nodes. The iDRAC configuration steps here also
apply to the optional PowerFlex management node.

Prerequisites
For console operations, ensure that you have a crash cart. A crash cart enables a keyboard, mouse, and monitor (KVM)
connection to the node.

Steps
1. Connect the KVM to the node.
2. During boot, to access the Main Menu, press F2.
3. From System Setup Main Menu, select the iDRAC Settings menu option. To configure the network settings, do the
following:
a. From the iDRAC Settings pane, select Network.
b. From the iDRAC Settings-Network pane, verify the following parameter values:
● Enable NIC = Enabled
● NIC Selection = Dedicated
c. From the IPv4 Settings pane, configure the IPv4 parameter values for the iDRAC port as follows:
● Enable IPv4 = Enabled
● Enable DHCP = Disabled
● Static IP Address = <ip address > # select the IP address from this range for each node (192.168.101.21 to
192.168.101.24)
● Static Gateway = 192.168.101.254
● Static Subnet Mask = 255.255.255.0
● Static Preferred DNS Server = 192.168.200.101
4. After configuring the parameters, click Back to display the iDRAC Settings pane.
5. From the iDRAC Settings pane, select User Configuration and configure the following:
a. User Name = root


b. LAN User privilege = Administrator


c. Change Password: the default password is calvin. In this field, enter a new password.
d. In the Re-enter password dialog box, type the password again and press Enter twice.
e. Click Back.
6. From the iDRAC Settings pane, click Finish > Yes. Click OK to return to the System Setup Main Menu pane.
7. To exit the BIOS and apply all settings, select Finish.
8. Reboot the node and confirm iDRAC settings by accessing the iDRAC using the web interface.
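If a node already has a reachable iDRAC IP address (for example, a factory default or DHCP-assigned address), the same network settings can also be applied remotely with racadm. The following is a minimal sketch only; the target static IP address is an example from the range listed above, and the attribute names assume a current iDRAC9 firmware:

racadm -r <current-idrac-ip> -u root -p <password> set iDRAC.NIC.Selection Dedicated
racadm -r <current-idrac-ip> -u root -p <password> set iDRAC.IPv4.DHCPEnable 0
racadm -r <current-idrac-ip> -u root -p <password> set iDRAC.IPv4.Address 192.168.101.21
racadm -r <current-idrac-ip> -u root -p <password> set iDRAC.IPv4.Netmask 255.255.255.0
racadm -r <current-idrac-ip> -u root -p <password> set iDRAC.IPv4.Gateway 192.168.101.254
racadm -r <current-idrac-ip> -u root -p <password> set iDRAC.IPv4.DNS1 192.168.200.101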

Related information
Discover resources

Disable IPMI using a Windows-based jump server


Perform this procedure to disable IPMI for PowerFlex node or PowerFlex management node using a Windows-based jump server
or environment.

Prerequisites
Ensure that iDRAC command line tools (including racadm) are installed on the Windows-based jump server. Download the
specific versions and installation instructions of the tools for Windows from the Dell Technologies Support site.
NOTE: Disabling IPMI is only required if the PowerFlex nodes are older, or have had it enabled during deployment.

NOTE: This procedure cannot be performed without racadm.

Steps
1. For a single PowerFlex node:
a. From the jump server, open a PowerShell session.
b. Type racadm -r x.x.x.x -u root -p yyyyy set iDRAC.IPMILan.Enable Disabled
where x.x.x.x is the IP address of the iDRAC node and yyyyy is the iDRAC password.
2. For multiple PowerFlex nodes:
a. From the jump server, at the root of the C: drive, create a folder named ipmi.
b. From the File Explorer, go to View and select the File Name extensions check box.
c. Open a notepad file, and paste this text into the file: powershell -noprofile -executionpolicy bypass
-file ".\disableIPMI.ps1"
d. Save the file and rename it to runme.cmd in C:\ipmi.
e. Open a notepad file, and paste this text into the file: import-csv $pwd\hosts.csv -Header:"Hosts"
| Select-Object -ExpandProperty hosts | % {racadm -r $_ -u root -p XXXXXX set
idrac.ipmilan.enable disabled}
where XXXXXX is the iDRAC password for your environment.
f. Save the file and rename it to disableIPMI.ps1 in C:\ipmi.
g. Open a notepad file and list all of the iDRAC IP addresses that you want to include, one per line.

h. Save the file and rename it hosts.csv in C:\ipmi.


i. Open a PowerShell session and go to C:\ipmi.
j. Type .\runme.cmd.
The script processes each iDRAC IP address listed in hosts.csv and displays the racadm confirmation for each node.
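For reference, the two assembled helper files resemble the following sketch; replace XXXXXX with the iDRAC password for your environment. The hosts.csv file contains only the iDRAC IP addresses, one per line (for example, 192.168.101.21).

REM C:\ipmi\runme.cmd
powershell -noprofile -executionpolicy bypass -file ".\disableIPMI.ps1"

# C:\ipmi\disableIPMI.ps1
Import-Csv $pwd\hosts.csv -Header:"Hosts" |
  Select-Object -ExpandProperty Hosts |
  ForEach-Object { racadm -r $_ -u root -p XXXXXX set idrac.ipmilan.enable disabled }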


Disable IPMI using an embedded operating system-based jump server
Perform this procedure to disable IPMI for PowerFlex node using an embedded operating system-based jump server.

Prerequisites
Ensure that the iDRAC command-line tools are installed on the embedded operating system-based jump server.

Steps
1. For a single PowerFlex node:
a. From the jump server, open a terminal session.
b. Type racadm -r x.x.x.x -u root -p yyyyy set iDRAC.IPMILan.Enable 0 where x.x.x.x is the IP
address of the iDRAC node and yyyyy is the iDRAC password.
2. For multiple PowerFlex nodes:
a. From the jump server, open a terminal window.
b. Edit the idracs text file and enter IP addresses for each iDRAC, one per line.
c. Save the file.
d. At the command line interface, type while read line; do echo "$line"; racadm -r $line -u root -p
yyyyy set iDRAC.IPMILan.Enable 0; done < idracs where yyyyy is the iDRAC password.
The output displays the IP address for each iDRAC followed by the output from the racadm command.


3. For a single node using the iDRAC UI settings:


a. Log on to the iDRAC interface.
b. Click iDRAC Settings > Connectivity > Network.
c. Navigate to IPMI Settings.
You can view the current status of IPMI.
d. Select Disabled in the Enable IPMI Over LAN drop-down list.
e. Click Apply.
f. Repeat for the iDRAC of each server.

Download files from the Dell Technologies Support site
Ensure the integrity of files from the Dell Technologies Support site.

About this task


The Dell Technologies Support site contains files that are required by PowerFlex appliance for deployment. See Dell
Technologies Support site.
Compare the SHA-256 value of each downloaded file with the SHA2 value that is published for that file on the Dell
Technologies Support site.

Steps
1. On the Dell Technologies Support site, to see the SHA2 hash value, hover over the question mark (?) next to the File
Description.
2. In the Windows file manager, right-click the downloaded file and select CRC SHA > SHA-256. The CRC SHA option is
available only if the 7-Zip application is installed.
The SHA-256 value is calculated.
3. The SHA2 value that is shown on the Dell Technologies Support site and the SHA-256 value that is generated by Microsoft
Windows must match. If the values do not match, the file is corrupted. Download the file again.
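If 7-Zip is not available, the SHA-256 value can also be calculated with the certutil tool that is included with Windows; the file path below is a placeholder:

certutil -hashfile C:\Downloads\<filename> SHA256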


3
Configuring the network
This section covers the network configuration examples of physical switches and virtual networking.
NOTE: The physical switch configuration in this section is used as a reference for the customer to configure the switches.

Related information
Create the distributed port groups for the BE_dvSwitch
Configure the Cisco Nexus access and aggregation switches
Configure the Dell access switches
Configure a port channel with LACP bonding NIC or individual trunk
Configure access switch ports for PowerFlex nodes

VLAN mapping

VLAN mapping for access and aggregation switches

VLAN L2/L3 MTU Default VLAN number


flex-oob-mgmt-<vlanid> L3 1500 101
flex-vcsa-ha-<vlanid> L2 1500 103
flex-install-<vlanid> L2 1500 104
flex-node-mgmt-<vlanid> L3 1500/9000 105
flex-vmotion-<vlanid> L2 9000/1500 106 - only for PowerFlex
management controller 1.0
flex-vsan-<vlanid> L2 9000 113 - only for PowerFlex
management controller 1.0
flex-stor-mgmt-<vlanid> L3 1500 150
flex-data1-<vlanid> L2/L3 9000/1500 151 - L3 If external SDC
to SDS communication is
enabled
flex-data2-<vlanid> L2/L3 9000/1500 152 - L3 If external SDC
to SDS communication is
enabled
flex-data3-<vlanid> (if L2/L3 9000/1500 153 - L3 If external SDC
required) to SDS communication is
enabled
flex-data4-<vlanid> (if L2/L3 9000/1500 154 - L3 If external SDC
required) to SDS communication is
enabled
flex-rep1-<vlanid> L3 1500 161 - only if data replication is
enabled
flex-rep2-<vlanid> L3 1500 162 - only if data replication is
enabled



pfmc-sds-mgmt-<vlanid> L3 1500 140 - only for PowerFlex
management controller 2.0
pfmc-sds-data1-<vlanid> L2 9000 141 - only for PowerFlex
management controller 2.0
pfmc-sds-data2-<vlanid> L2 9000 142 - only for PowerFlex
management controller 2.0
pfmc-vmotion-<vlanid> L2 1500 143 - only for PowerFlex
management controller 2.0
nsx-transport-<vlanid> L2 9000 121
nsx-vsan-<vlanid> L2 9000 116
nsx-edge1-<vlanid> L3 1500 122 (nsx-edge1-ext-link1,nsx-
edge2-ext-link1)
nsx-edge2-<vlanid> L3 1500 123 (nsx-edge1-ext-link2,nsx-
edge2-ext-link2)

VLAN mapping for leaf-spine switches

Name    L2/L3    Default VLAN    VxLAN    VRF name (for routed network)
flex-oob-mgmt-<vlanid> L3 101 10101 FLEX_Management_VRF
flex-vcsa-ha-<vlanid> L2 103 10103 FLEX_Management_VRF
flex-install-<vlanid> L2 104 10104 FLEX_Management_VRF
flex-node-mgmt-<vlanid> L3 105 10105 FLEX_Management_VRF
flex-node-mgmt-<vlanid> L2 106 - PowerFlex management 10106 FLEX_Management_VRF
controller 1.0 only
flex-vsan-<vlanid> L2 113 - PowerFlex management 10113 FLEX_Management_VRF
controller 1.0 only
flex-stor-mgmt-<vlanid> L3 150 10150 FLEX _Management_VRF
flex-data1-<vlanid> L2 151 10151 FLEX_Management_VRF
flex-data2-<vlanid> L2 152 10152 FLEX_Management_VRF
flex-data3-<vlanid> (if L2 153 10153 FLEX_Management_VRF
required)
flex-data4-<vlanid> (if L2 154 10154 FLEX_Management_VRF
required)
flex-data1-<vlanid> L3 151 - For L3 external SDS 10151 FLEX_SDS_VRF
flex-data2-<vlanid> L3 152 - For L3 external SDS 10152 FLEX_SDS_VRF
flex-data3-<vlanid> (if L3 153 - For L3 external SDS 10153 FLEX_SDS_VRF
required)
flex-data4-<vlanid> (if L3 154 - For L3 external SDS 10154 FLEX_SDS_VRF
required)

flex-tenant1-data1-<vlanid> L3 171 - For multi-tenant SDC 10151 FLEX_<Tenant1>_SDC_VRF

flex-tenant1-data2-<vlanid> L3 172 - For multi-tenant SDC 10152 FLEX_<Tenant1>_SDC_VRF


flex-tenant1-data3-<vlanid> L3 173 - For multi-tenant SDC 10153 FLEX_<Tenant1>_SDC_VRF

flex-tenant1-data4-<vlanid> L3 174 - For multi-tenant SDC 10154 FLEX_<Tenant1>_SDC_VRF

flex-tenant2-data1-<vlanid> L3 181 - For multi-tenant SDC 10171 FLEX_<Tenant2>_SDC_VRF

flex-tenant2-data2-<vlanid> L3 182 - For multi-tenant SDC 10172 FLEX_<Tenant2>_SDC_VRF

flex-tenant2-data3-<vlanid> L3 183 - For multi-tenant SDC 10173 FLEX_<Tenant2>_SDC_VRF

flex-tenant2-data4-<vlanid> L3 184 - For multi-tenant SDC 10174 FLEX_<Tenant2>_SDC_VRF

flex-rep1-<vlanid> L3 161 - For data replication 10161 FLEX_REP_VRF


flex-rep2-<vlanid> L3 162 - For data replication 10162 FLEX_REP_VRF
pfmc-sds-mgmt-<vlanid> L3 140 - PowerFlex management 10140 FLEX_Management_VRF
controller 2.0 only
pfmc-sds-data1-<vlanid> L2 141 - PowerFlex management 10141 FLEX_Management_VRF
controller 2.0 only
pfmc-sds-data2-<vlanid> L2 142 - PowerFlex management 10142 FLEX_Management_VRF
controller 2.0 only
pfmc-vmotion-<vlanid> L3 143 - PowerFlex management 10143 FLEX_Management_VRF
controller 2.0 only
nsx-transport-<vlanid> L2 121 10121 FLEX_NSX_VRF
nsx-edge1-<vlanid> L3 122 10122
nsx-edge2-<vlanid> L3 123 10123
temp-dns-<vlanid> L3 999 10999 FLEX_Management_VRF
FLEX_MGMT_VRF-<vlanid> 1231 101231 FLEX_Management_VRF
FLEX_REP_VRF-<vlanid> 1232 101232 FLEX_REP_VRF
FLEX_SDS_VRF-<vlanid> 1233 101233 FLEX_SDS_VRF
FLEX_<tenant1>_SDC_VRF- 1234 101234 FLEX_<tenant1>_SDC_VRF
<vlanid>
FLEX_<tenant2>_SDC_VRF- 1235 101235 FLEX_<tenant2>_SDC_VRF
<vlanid>


Configuration data
This section provides the port channel and individual trunk configuration data for full network automation (FNA) or partial
network automation (PNA).

Port-channel with LACP for full network automation or partial network automation
All nodes are connected to access and leaf pair switches.

Node    vSwitch    Port-channel/Interface    Speed (GB)    Mode    Required VLANs    Node LB mode
PowerFlex FE_DvSwitch 91,92,93,94 10/25 Active 104,105,150 LAG-Active-Src
management and dest IP and
controller 1.0 TCP/UDP
PowerFlex BE_DvSwitch 81,82,83,84 10/25 Active 103,106,113,151-1 LAG-Active-Src
management 54 (153 and 154 and dest IP and
controller 1.0 optional) TCP/UDP
PowerFlex FE_DvSwitch 91,92,93,94 10/25 Active 104,105,140,150 LAG-Active-Src
management and dest IP and
controller 2.0 TCP/UDP
PowerFlex BE_DvSwitch 81,82,83,84 10/25 Active 103,141,142,143,1 LAG-Active-Src
management 51-154 (153 and and dest IP and
controller 2.0 154 optional) TCP/UDP
PowerFlex oob_DvSwitch Access 1/10 NA 101 NA
management
controller 1.0 or
PowerFlex
management
controller 2.0
PowerFlex Cust_DvSwitch 2,4,6 10/25/100 Active 104-106 LAG-Active-Src
compute-only and dest IP and
node (VMware TCP/UDP
ESXi)
PowerFlex Flex-DvSwitch 1,3,5 10/25/100 Active 151-154 (153 and LAG-Active-Src
compute-only 154 optional) and dest IP and
node (VMware TCP/UDP
ESXi)
PowerFlex Bond0 2,4,6 10/25/100 Active 104 - 105 LAG-Active-Src
compute-only and dest IP and
node (Linux) TCP/UDP
PowerFlex Bond1 1,3,5 10/25/100 Active 151 - 154 (153, Mode 4
compute-only 154 optional)
node (Linux)
PowerFlex Cust_DvSwitch 2,4,6 10/25/100 Active 104-106,150 LAG-Active-Src
hyperconverged and dest IP and
node TCP/UDP
PowerFlex Flex-DvSwitch 1,3,5 10/25/100 Active 151-154 (153 and LAG-Active-Src
hyperconverged 154 optional) and dest IP and
node TCP/UDP

PowerFlex Bond0 2,4,6 10/25/100 Active 150,151,153,161 Mode 4
storage-only (153 optional)
node
PowerFlex Bond1 1,3,5 10/25/100 Active 152,154,162 (154 Mode 4
storage-only optional)
node
Aggregation    1900    100    Active    All VLANs as specified in the VLAN mapping section

Port-channel for full network automation


Node    vSwitch    Port-channel/Interface    Speed (GB)    Mode    Required VLANs    Node LB mode
PowerFlex FE_dvSwitch 91,92,93,94 10/25 ON 104,105,150 Route based on
management IP hash
controller 1.0
PowerFlex BE_dvSwitch 81,82,83,84 10/25 ON 103,106,113,151-1 Route based on
management 54 (153 and 154 IP hash
controller 1.0 optional)
PowerFlex FE_dvSwitch 91,92,93,94 10/25 Active 104,105,140,150 Route based on
management IP hash
controller 2.0
PowerFlex BE_dvSwitch 81,82,83,84 10/25 Active 103,141,142,143,1 Route based on
management 51-154 (153 and IP hash
controller 2.0 154 optional)
PowerFlex oob_dvSwitch Access 1/10 NA 101 NA
management
controller 1.0 or
PowerFlex
management
controller 2.0
PowerFlex Cust_dvSwitch 2,4,6 10/25/100 ON 104-106 Route based on
compute-only IP hash
nodes
PowerFlex Flex_dvSwitch 1,3,5 10/25/100 ON 151-154 (153 and Route based on
compute-only 154 optional) IP hash
nodes
PowerFlex Cust_dvSwitch 2,4,6 10/25/100 ON 104-106,150 Route based on
hyperconverged IP hash
nodes
PowerFlex Flex_dvSwitch 1,3,5 10/25/100 ON 151-154 (153 and Route based on
hyperconverged 154 optional) IP hash
nodes
PowerFlex NA NA NA NA NA NA
storage-only
nodes
PowerFlex NA NA NA NA NA NA
storage-only
nodes

Aggregation    1900    100    ON    All VLANs (not for leaf-spine)

Individual trunk for full network automation or partial network automation
Node    vSwitch    Port-channel/Interface    Speed (GB)    Required VLANs    Node LB mode
PowerFlex FE_dvSwitch Trunk 10/25 104,105,150 ● originating
management virtual port
controller 1.0 (recommended)
● physical NIC
load
● Source MAC
hash
PowerFlex BE_dvSwitch Trunk 10/25 103,106,113,151-154 ● originating
management (153 and 154 virtual port
controller 1.0 optional) (recommended)
● physical NIC
load
● Source MAC
hash
PowerFlex FE_dvSwitch Trunk 10/25 104,105,140,150 ● originating
management virtual port
controller 2.0 (recommended)
● physical NIC
load
● Source MAC
hash
PowerFlex BE_dvSwitch Trunk 10/25 103,141,142,143,151- ● originating
management 154 (153 and 154 virtual port
controller 2.0 optional) (recommended)
● physical NIC
load
● Source MAC
hash
PowerFlex oob_dvSwitch Access 1/10 101 NA
management
controller 1.0
or PowerFlex
management
controller 2.0
PowerFlex Cust_dvSwitch Trunk 10/25/100 104-106 ● originating
compute-only virtual port
nodes (recommended)
● physical NIC
load
● Source MAC
hash
PowerFlex Flex_dvSwitch Trunk 10/25/100 151-154 (153 and ● originating
compute-only 154 optional) virtual port
nodes (recommended)

● physical NIC
load
● Source MAC
hash
PowerFlex Cust_dvSwitch Trunk 10/25/100 104-106,150 ● originating
hyperconverged virtual port
nodes (recommended)
● physical NIC
load
● Source MAC
hash
PowerFlex Flex_dvSwitch Trunk 10/25/100 151-154 (153 and ● originating
hyperconverged 154 optional) virtual port
nodes (recommended)
● physical NIC
load
● Source MAC
hash
PowerFlex storage- Bond0 Trunk 10/25/100 150,151,153,161 (153 ● Mode0-RR
only nodes (Option optional) ● Mode1- Active
1) backup
● Mode6-
Adaptive LB
(recommended)
PowerFlex storage- Bond1 Trunk 10/25/100 152,154,162 (154 ● Mode0-RR
only nodes (Option optional) ● Mode1- Active
1) backup
● Mode6-
Adaptive LB
(recommended)
PowerFlex storage- Per NIC VLAN Trunk 10/25/100 151,152,153,154 ● Mode0-RR
only nodes (Option Bonded(150,161,162 ● Mode1- Active
2) ) backup
● Mode6-
Adaptive LB
(recommended)
Aggregation    1900    100    All VLANs as specified in the VLAN mapping section

Configure the Cisco Nexus access and aggregation switches
Use this task to configure the Cisco Nexus access and aggregation switches with LACP bonding NIC port design.

About this task


This procedure is applicable for the PowerFlex controller nodes, hyperconverged, storage-only, and compute-only nodes.

NOTE: If the Cisco Nexus switches contain vlan dot1Q tag native in running-config, the PXE boot fails.


Prerequisites
See Configuring the network for information on the interface type.

Steps
1. Configure port channels:

interface port-channel <port-channel number>
Description "Port Channel to <connectivity info>"
switchport trunk allowed vlan add <vlan list>
spanning-tree port type edge trunk
spanning-tree bpduguard enable
spanning-tree guard root
switchport mode trunk
no lacp suspend-individual
mtu 9216
lacp vpc-convergence <This command is applicable only for LACP bonding NIC port design>
speed <speed>
vpc <vpc number same as port-channel number>

2. Repeat this configuration for the remaining PowerFlex nodes as applicable.


3. Configure the interface depending on the interface type:

If the interface type is... Run the following command using command prompt...
Port channel with LACP
interface <interface number>
Description “Connected to <connectivity info>"
channel-group <channel-group> mode <mode>
no shutdown

Port channel
interface <interface number>
Description “Connected to <connectivity info>"
channel-group <channel-group> mode <mode>
no shutdown

Trunk
interface <interface number>
switchport mode trunk
switchport trunk allowed vlan <vlan-list>
spanning-tree port type edge
spanning-tree bpduguard enable
spanning-tree guard root
mtu 9216
channel-group <channel-grp-number> mode active
speed <speed>
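The following is a worked example of the port-channel and member-interface configuration for a single PowerFlex node with the LACP bonding NIC port design. The port-channel number, interface, VLAN list, and speed are illustrative values only; use the values planned for your build:

interface port-channel 21
  description "Port Channel to PowerFlex node 1G"
  switchport mode trunk
  switchport trunk allowed vlan add 104-106,150-154
  spanning-tree port type edge trunk
  spanning-tree bpduguard enable
  spanning-tree guard root
  no lacp suspend-individual
  mtu 9216
  lacp vpc-convergence
  speed 25000
  vpc 21

interface Ethernet1/21
  description "Connected to PowerFlex node 1G slot 0 port 1"
  channel-group 21 mode active
  no shutdown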

Related information
Configuring the network

Configure the Dell access switches


Use this procedure to configure the Dell access switches with LACP bonding NIC port design.

About this task


This procedure is applicable for the PowerFlex controller nodes, hyperconverged, storage-only, and compute-only nodes.
NOTE: If the Cisco Nexus switches contain vlan dot1Q tag native in running-config, the PXE boot fails.

Prerequisites
See Configuring the network for information on the interface type.


Steps
1. Configure port channels:

interface port-channel <port-channel number>
Description "Port Channel to <node info>"
switchport trunk allowed vlan <vlan list>
spanning-tree port type edge
spanning-tree bpduguard enable
spanning-tree guard root
switchport mode trunk
lacp fallback enable <This command is applicable only for LACP bonding NIC port design>
speed <speed>
vlt-port-channel <vlt number same as port-channel number>

2. Repeat this configuration for the remaining PowerFlex nodes as applicable.


3. Configure the interface depending on the interface type:

If the interface type is... Run the following command using command prompt...
Port channel with LACP
interface <interface number>
Description “Connected to <connectivity info>"
channel-group <channel-group> mode <mode>
no shutdown

Port channel
interface <interface number>
Description “Connected to <connectivity info>"
channel-group <channel-group> mode <mode>
no shutdown

Trunk
interface <interface number>
Description “Connected to <connectivity info>"
switchport mode trunk
switchport trunk allowed vlan <vlan-list>
spanning-tree port type edge
spanning-tree bpduguard enable
spanning-tree guard root
mtu 9216
channel-group <channel-group-number> mode active
speed <speed>
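The following is a worked example for a single PowerFlex node on a Dell access switch with the LACP bonding NIC port design. The port-channel number, interface, and VLAN list are illustrative values only; apply the speed and MTU values from the template above as required for your build:

interface port-channel 21
 description "Port Channel to PowerFlex node 1G"
 switchport mode trunk
 switchport trunk allowed vlan 104-106,150-154
 spanning-tree port type edge
 spanning-tree bpduguard enable
 spanning-tree guard root
 lacp fallback enable
 vlt-port-channel 21
 no shutdown

interface ethernet 1/1/21
 description "Connected to PowerFlex node 1G slot 0 port 1"
 channel-group 21 mode active
 no shutdown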

Related information
Configuring the network


Configuring the PowerFlex management controller 2.0, hyperconverged, and ESXi-based compute-only node network

Use this section to configure the PowerFlex management controller 2.0, hyperconverged, and ESXi-based compute-only node network.

Add VMkernel adapter to the PowerFlex hyperconverged or ESXi-based compute-only node hosts
Use this procedure to add VMkernel adapter to the PowerFlex hyperconverged or ESXi-based compute-only node hosts.

Steps
1. Log in to the VMware vSphere Client.
2. Select the host and click Configure on the right pane.
3. Under Networking tab, select the VMkernel adapter.
4. Click Add Networking.
5. Select Connection type as VMkernel network adapter and click Next.
6. Select Target device as Existing network and click Browse to select the appropriate port group.
7. On the port properties, select Enable services, select the appropriate service, and click Next.
For example, for vMotion, select vMotion. The MTU for pfmc-vmotion=1500. For any other networks, retain the default
service.
8. In IPV4 Settings, select Use static IPV4 settings, provide the appropriate IP address and subnet details, and click Next.
9. Verify the details on Ready to Complete and click Finish.
10. Repeat the steps 2 through 9 to create the VMkernel adapters for the following port groups:
● flex-data1-<vlanid>
● flex-data2-<vlanid>
● flex-data3-<vlanid> (if required)
● flex-data4-<vlanid> (if required)
● flex-vmotion-<vlanid>
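After the VMkernel adapters are created, you can optionally confirm the IP configuration and jumbo-frame connectivity from the ESXi shell of the host. The VMkernel interface name and peer IP address below are examples only:

esxcli network ip interface ipv4 get
vmkping -I vmk2 -d -s 8972 192.168.151.12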

Related information
Assign VMware vSphere licenses

Add VMkernel adapter to the PowerFlex controller node hosts


Use this procedure to add VMkernel adapter to the PowerFlex controller node hosts.

Steps
1. Log in to the VMware vSphere Client.
2. Select the host and click Configure on the right pane.
3. Under Networking tab, select the VMkernel adapter.
4. Click Add Networking.
5. Select Connection type as VMkernel network adapter and click Next.
6. Select Target device as Existing network and click Browse to select the appropriate port group.
7. On the port properties, select Enable services, select the appropriate service, and click Next.
For example, for vMotion, select vMotion. The MTU for pfmc-vmotion=1500. For any other networks, retain the default
service.


8. In IPV4 Settings, select Use static IPV4 settings, provide the appropriate IP address and subnet details, and click Next.
9. Verify the details on Ready to Complete and click Finish.
10. Repeat the steps 2 through 9 to create the VMkernel adapters for the following port groups:
● pfmc-vmotion-<vlanid>
● pfmc-sds-data1-<vlanid>
● pfmc-sds-data2-<vlanid>

Related information
Assign VMware vSphere licenses
Modify the failover order for the FE_dvSwitch
Add PowerFlex controller nodes to an existing dvSwitch

Create LAG on dvSwitches


Use this procedure to create Link Aggregation Group (LAG) on the new dvSwitches.

Steps
1. Log in to the VMware vSphere client and select Networking inventory.
2. Select Inventory, right-click the dvswitch, and select Configure.
3. In Settings, select LACP.
4. Click New, type name as FE-LAG or BE-LAG.
The default number of ports is 2.
5. Select mode as active.
6. Select the load balancing option. See Configuration data for more information.
7. Click OK to create LAG.
Repeat steps 1 through 6 to create LAG on additional dvswitches.

Assign LAG as a standby uplink for the dvSwitch


Use this procedure to assign LAG as a standby uplink for the dvSwitch.

Steps
1. Select the dvSwitch.
2. Click Configure and from Settings, select LACP.
3. Click Migrating network traffic to LAGs.
4. Click Manage Distributed Port Groups, click Teaming and Failover, and click Next.
5. Select all port groups, and click Next.
6. Select LAG and move it to Standby Uplinks.
7. Click Finish.

Add hosts to dvSwitches


Use this procedure to add a host with one vmnic and migrate the VM networking to the dvSwitch port-groups.

Prerequisites
See Configuration data for naming information of the dvSwitches.

Steps
1. Select the dvSwitch.


NOTE: If you are not using LACP, right-click and skip to step 4.

2. Click Configure and in Settings select LACP.


3. Click Migrating network traffic to LAGs.
4. Click Add and Manage Hosts.
5. Click Add Hosts and click Next.
6. Click New Host, select the host in maintenance mode, and click OK.
7. Click Next.
8. Select <vmnicX> on <dvSwitch> and click Assign Uplink.
9. Select LAG-0 for an LACP bonding NIC port design or Uplink1, click OK, and click Next.
10. Assign the respective port groups for VMkernel adapters.
11. Click OK > Next.
12. On Migrating VM networking, select all the VMs and assign to corresponding portgroup.
13. Click Next > Finish.
14. Add a second vmnic to the dvSwitch:
a. Select the dvSwitch.
NOTE: If you are not using LACP, right-click and skip to Step d.

b. On the right-hand pane, click Configure and in Settings select LACP.


c. Click Migrating network traffic to LAGs.
d. Click Add and Manage Hosts.
e. Click Add Hosts and click Next.
f. Click Attached Hosts, select the server in maintenance mode, and click Next.
g. Click Next.
h. Select <vmnicX> on <dvSwitch> and click Assign Uplink.
i. Select LAG-1 for an LACP bonding NIC port design or Uplink2, and click OK.
j. Click Next > Next > Next > Finish.

Assign LAG as an active uplink for the dvSwitch


Use this procedure to assign LAG as an active uplink for the dvSwitch.

Prerequisites
See Configuration data for naming information of the dvSwitches and loadbalancing options.

Steps
1. Select the dvSwitch.
2. Click Configure and from Settings, select LACP.
3. Click Migrating network traffic to LAGs.
4. Click Manage Distributed Port Groups, click Teaming and Failover, and click Next.
5. Select all port groups, and click Next.
6. Select a load balancing option.
7. Select LAG and move it to Active Uplinks.
8. Move Uplink1 and Uplink2 to Unused Uplinks and click Next.
9. Click Finish.
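Optionally, confirm that the LAG is formed and negotiating correctly from the ESXi shell of each host:

esxcli network vswitch dvs vmware lacp config get
esxcli network vswitch dvs vmware lacp status get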


Set load balancing for dvSwitch


Use this procedure for setting the load balancing for a dvSwitch without LAG.

Prerequisites
See Configuration data for naming information of the dvSwitches and loadbalancing options.

Steps
1. Select the dvSwitch.
2. Right-click the dvSwitch, select Distributed Portgroup > Manage distributed portgroups.
3. Select teaming and failover and select all the port groups, and click Next.
4. Select load balancing.

Create the distributed switch (oob_dvswitch) for the PowerFlex management node network

Use this procedure to create virtual distributed switches on the PowerFlex management node hosts.

Steps
1. Log in to VMware vSphere client.
2. From Home, click Networking and expand the data center.
3. Right-click the data center and perform the following:
a. Click Distributed Switch > New Distributed Switch.
b. Update the name to oob_dvswitch and click Next.
c. On the Select Version page, select 7.0.0 - ESXi 7.0 and later, and click Next.
d. Under Edit Settings, select 1 for Number of uplinks.
e. Select Enabled from Network I/O Control.
f. Clear the Create default port group option.
g. Click Next.
h. On the Ready to complete page, click Finish.

Add a host to oob_dvswitch


Use this procedure to add a host to oob_dvswitch.

Steps
1. Log in to the VMware vSphere client.
2. Click Networking and select oob_dvswitch.
3. Right-click oob_dvswitch and select Add and Manage Hosts.
4. Select Add Hosts and click Next.
5. Click New Host, select the host in maintenance mode, and click OK.
6. Click Next.
7. Select vmnic4 and click Assign Uplink.
8. Select Uplink 1, and click OK.
9. Click Next > Next > Next.
10. Click Finish.


Delete the standard switch (vSwitch0)


Use this procedure to delete the standard switch (vSwitch0).

Steps
1. Log in to VMware vSphere Client.
2. On Menu, click Host and Cluster.
3. Select Host.
4. Click Configure > Networking > Virtual Switches.
5. Right-click Standard Switch: vSwitch0 and click ...> Remove.
6. On the Remove Standard Switch window, click Yes.
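Alternatively, the standard switch can be removed from the ESXi shell of the host, assuming no VMkernel adapters or virtual machines remain on vSwitch0:

esxcli network vswitch standard remove --vswitch-name=vSwitch0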

Add layer 3 routing on PowerFlex compute-only nodes or PowerFlex hyperconverged nodes between the SDC and a layer 3 internal SDS

Use this procedure to configure PowerFlex hyperconverged nodes or PowerFlex compute-only nodes for internal SDS to SDC communication.

Steps
1. Start an SSH session to the PowerFlex node using PuTTY.
2. Log in as root.
3. In the PowerFlex CLI, type esxcli network ip route ipv4 add -g <gateway> -n <destination subnet
in CIDR>.
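For example, to reach an SDS data subnet of 192.168.171.0/24 through a local data network gateway of 192.168.151.254 (both values are examples only):

esxcli network ip route ipv4 add -g 192.168.151.254 -n 192.168.171.0/24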

Add layer 3 routing on the PowerFlex hyperconverged storage for internal SDS to SDC

Use this procedure to enable external SDC to SDS communication and configure the PowerFlex hyperconverged nodes for external SDC reachability.

Steps
1. Start an SSH session on the SVM of the PowerFlex hyperconverged nodes using PuTTY.
2. Log in as root.
3. Change to the network scripts directory: cd /etc/sysconfig/network-scripts/
4. In the PowerFlex CLI, type echo "<destination subnet> via <gateway> dev <SIO interface>" > route-<SIO interface>.
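For example, assuming the SDS data interface inside the SVM is eth3, the external SDC subnet is 192.168.171.0/24, and the local data gateway is 192.168.151.254 (all example values):

echo "192.168.171.0/24 via 192.168.151.254 dev eth3" > route-eth3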


Configuring the PowerFlex storage-only node network


Use this section to configure the PowerFlex storage-only node network.

Configure a port channel with LACP bonding NIC or individual trunk

Use this procedure to configure the port channel with LACP bonding NIC port design or individual trunk using NetworkManager TUI.

Steps
1. Log in as root from the virtual console.
2. Type nmtui to set up the networking.
3. Click Edit a connection.
4. Perform the following to configure the bond interface:
a. Click Add and select Bond.
b. Set Profile name and Device to bond <X>.
c. Set Mode to <Mode>. See Configuring the network for more information.
d. Set IPv4 Configuration to Disabled.
e. Set IPv6 Configuration to Ignore.
f. Set Automatically Connect.
g. Set Available to all users.
h. Click OK.
i. Repeat these steps for additional bond interface.
5. Configure VLANs on the bond interface:
a. Click Add and select VLAN. Press Tab to view the VLANs window.
b. Set Profile name and Device to bond <X>.VLAN#, where VLAN# is the VLAN ID.
c. Set IPv4 Configuration to Manual. Press Tab to view the configuration.
d. Select Add and set the IP for each VLAN using the CIDR notation.
If the IP is 192.168.150.155 and the network mask is 255.255.255.0, then enter 192.168.150.155/24.
e. Set the Gateway to the default gateway of each VLAN. Since data networks are private VLANs, do not add a gateway.
f. Set DNS server to customer DNS servers.
g. Set IPv6 Configuration to Ignore.
h. Set Automatically Connect.
i. Set Available to all users.
j. Click OK.
k. Repeat these steps for additional data and replication VLANs.
l. Click Back > Quit.
6. Configure the physical interface as secondary for bond:
a. Edit the network configuration file for the physical interface using the vi command.
b. For example, type vi ifcfg-em1.
c. Change BOOTPROTO to none.
d. Delete the lines from DEFROUTE to IPV6_ADDR_GEN_MODE.
e. Change ONBOOT to yes.
f. Save the network configuration file.
g. Add MASTER=bond0 and SLAVE=yes to the file.
h. Save the file.
i. Type systemctl disable NetworkManager to disable the NetworkManager.
j. Type systemctl status firewalld to check if firewalld is enabled.
k. Type systemctl enable firewalld to enable firewalld. To enable firewalld on all the SDS components, see the
Enabling firewall service on PowerFlex storage-only nodes and SVMs KB article.
l. Type systemctl restart network to restart the network.
7. Verify the settings:


a. Type ip addr show.


b. Type cat /proc/net/bonding/bond0 | grep -B 2 -A 2 MII to confirm that the port channel is active.
c. To verify the MTU, type grep MTU /etc/sysconfig/network-scripts/ifcfg*.
8. Create a static route for replication VLAN, used only to enable replication between primary and remote site:
a. Log in to the node using SSH.
b. Run cd /etc/sysconfig/network-scripts/.
c. Create the file route-bond<x>.<vlanid for rep1> using the vi command.
For example: 192.168.161.0/24 via 192.168.163.1 dev bond1.163.
d. Create the file route-bond<x>.<vlanid for rep2> using the vi command.
For example: 192.168.162.0/24 via 192.168.164.1 dev bond1.164.
9. Configure the port channel with LACP bonding NIC:
a. Type vi ifcfg-bond1 to create bond<x>.
b. Type the following line in the configuration file and save it:
BONDING_OPTS="miimon=100 mode=802.3ad xmit_hash_policy=layer2+3 lacp_rate=fast".
c. Repeat these steps for additional bond interfaces.
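For reference, the resulting configuration files typically resemble the following sketch for one bond, one member interface, and one data VLAN sub-interface. Interface names, VLAN IDs, IP addresses, and MTU values are examples only, and the BONDING_OPTS line applies only to the LACP bonding NIC port design:

# /etc/sysconfig/network-scripts/ifcfg-bond1
NAME=bond1
DEVICE=bond1
TYPE=Bond
BONDING_MASTER=yes
BONDING_OPTS="miimon=100 mode=802.3ad xmit_hash_policy=layer2+3 lacp_rate=fast"
BOOTPROTO=none
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-em1 (member interface)
NAME=em1
DEVICE=em1
TYPE=Ethernet
BOOTPROTO=none
ONBOOT=yes
MASTER=bond1
SLAVE=yes

# /etc/sysconfig/network-scripts/ifcfg-bond1.152 (data VLAN sub-interface)
NAME=bond1.152
DEVICE=bond1.152
VLAN=yes
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.152.15
PREFIX=24
MTU=9000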

Related information
Configuring the network

Configure individual trunk with per NIC VLAN


Use this procedure to configure data and management networks.

About this task


This configuration is only for proof of concept purposes. Before making the system production ready, the management VLAN
must be shared over all the data NICs; that task is described in the next section.
This procedure documents how to set up a PowerFlex storage-only node with the new performance layout, where an individual
PowerFlex data VLAN is set up per network interface and the management interface is shared on one of the interfaces.
This procedure assumes that the switches are already set up with trunks allowing the management VLAN on each port and the
data VLANs on their individual ports.

Steps
1. Log in to PowerFlex storage-only node as root.
2. Edit the ifcfg-p*p* files to change NM_CONTROLLED=no to NM_CONTROLLED=yes.
3. Remove the GATEWAYDEV line from /etc/sysconfig/network.
4. Type systemctl restart network to restart the network service.
5. Set the data networks:
a. Select Edit a connection.
b. Select Add and select VLAN.
c. Set Profile name to the <interface name>.<vlan id>.
d. Set Device to the <interface name>.<vlan id>.
e. Confirm that the automatically populated parent and VLAN ID values match.
f. Set MTU to the appropriate value.
g. Set IPv4 Configuration to Manual.
h. Highlight Show next to IPv4 Configuration and press Enter to see the remaining values.
i. Select Add next to the address. Enter the appropriate IP address of the VLAN and the CIDR notation for the subnet
prefix.
For example, if the IP is 192.168.150.155 and the network mask is 255.255.255.0, then enter 192.168.150.155/24.
j. Select Never use this network for default route.
k. Set IPv6 configuration to Ignore.
l. Select OK and press Enter.
m. Repeat these steps for the remaining interfaces with appropriate interface names and VLAN numbers.
6. Set up the management network shared on the data interface:

a. Select Add and select VLAN.
b. Set Profile name to the <interface name>.<vlan id>.
c. Set Device to the <interface name>.<vlan id>.
d. Confirm that the automatically populated parent and VLAN ID values match.
e. Set MTU to the appropriate value.
f. Set IPv4 Configuration to Manual.
g. Highlight Show next to IPv4 Configuration and press Enter to see the remaining values.
h. Select Add next to the address. Enter the appropriate IP address of the VLAN and the CIDR notation for the subnet
prefix.
For example, if the IP is 192.168.150.155 and the network mask is 255.255.255.0, then enter 192.168.150.155/24.
i. Go to Gateway and select the default gateway.
j. Go to DNS servers and select Add.
k. Set the DNS server.
l. Select Require IPv4 addressing for this connection.
m. Set IPv6 configuration to Ignore.
n. Select OK and press Enter.
o. Repeat these steps for the remaining interfaces with appropriate interface names and VLAN numbers.
7. Select Back.
8. Select Quit.
9. Type systemctl restart network to restart the network service.
10. Confirm the connectivity on all interfaces and IP addresses.
11. To disable NetworkManager, enter the following command: systemctl disable NetworkManager.
12. To stop NetworkManager, enter the following command: systemctl stop NetworkManager.
13. Add the NM_CONTROLLED=no lines back to the ifcfg-p* files.
14. Add GATEWAYDEV=bond0.<mgmt vlan id> to the /etc/sysconfig/network file.
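For reference, the following is a minimal sketch of one per-NIC data VLAN interface file produced by this procedure, assuming a physical interface p1p1, a data VLAN ID of 151, an IP address of 192.168.151.155/24, and an MTU of 9000 (all hypothetical values):

/etc/sysconfig/network-scripts/ifcfg-p1p1.151:
NAME=p1p1.151
DEVICE=p1p1.151
VLAN=yes
BOOTPROTO=none
ONBOOT=yes
MTU=9000
IPADDR=192.168.151.155
PREFIX=24
DEFROUTE=no
IPV6INIT=no
NM_CONTROLLED=no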

Configure a high availability bonded management network


Use this procedure to convert a non-high availability management network into a high availability bonded configuration.

About this task


Use this procedure after the switches are configured with a single trunk allowing management.

Steps
1. Log in as root from the iDRAC interface (virtual console of the host).
2. Type systemctl status NetworkManager to check the status of Network Manager.
3. Depending on the active status in the output, perform either of the following:
● If the active status is inactive (dead), type systemctl enable NetworkManager --now.
● If the active status is active (running), continue to the next step.
4. Type nmtui to set up the networking.
5. Click Edit a connection.
6. Highlight the <interface name> <mgmt vlan id> interface and press Tab.
7. Select Delete and confirm to delete.
8. Create a bond interface for management:
a. Select Edit a connection.
b. Select Add and choose Bond.
c. Set the Profile Name and Device to bond0.
d. Move to Add next to Secondary and press Enter.
e. Select Ethernet and select Create.
f. Set Profile name to the interface name bond.
g. Set Device to the interface name.
h. Select Show next to Ethernet.
i. Set MTU to the appropriate value.

j. Select OK.
k. Repeat these steps to set up the remaining three interfaces.
9. Go to Mode and select the specific mode:
● For mode0, select Round robin.
● For mode1, select Active backup.
● For mode6, select Adaptive load balancing (ALB).
10. Go to IPv4 configuration and highlight Automatic. Press Enter and select Disabled.
11. Go to IPv6 configuration and highlight Automatic. Press Enter and select Ignore.
12. Click OK and press Enter.
13. Create a VLAN sub-interface on bond:
a. In the nmtui interface, select Add (create sub-interfaces for management and optional replication VLANs).
b. In the pop-up, go to VLAN.
c. Set the Profile Name and Device to bond0.<vlan id>. This populates the parent and VLAN ID fields. Confirm that
the details match.
d. Set MTU to the appropriate value.
e. Set IPv4 Configuration to Manual.
f. Highlight Show next to IPv4 Configuration and press Enter to view the remaining values.
g. Select Add next to the address. Enter the appropriate IP address of the VLAN and the CIDR notation for the subnet
prefix.
For example, if the IP is 192.168.150.155 and the network mask is 255.255.255.0, then enter 192.168.150.155/24.
h. Set the Gateway to the default gateway of each VLAN. This is applicable only for management VLAN sub-interface.
i. Go to the DNS servers, and select Add. This is applicable only for management VLAN sub-interface.
j. Set the first DNS server. To add more than one DNS server, select Add.
k. Select Require IPv4 addressing for this connection.
l. Go to IPv6 configuration and set to Ignore.
m. Select OK.
n. Repeat step 13 for each VLAN.
o. Select Back.
p. Select Quit.
14. Restart the network:
a. Type systemctl restart network to restart the network.
b. Confirm the connectivity on all interfaces and IP addresses and select OK.
c. Select Back.
d. Select Quit.
15. Type systemctl stop NetworkManager to stop Network Manager.
16. Type systemctl restart network to restart the network.
17. Confirm the connectivity on all interfaces and IP addresses.
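As a final check, the following commands (a sketch, assuming the bond is bond0 and the management sub-interface is bond0.<mgmt vlan id>) confirm that the bond and its VLAN sub-interface are up, that LACP negotiated, and that the management gateway is reachable:

ip addr show bond0.<mgmt vlan id>
cat /proc/net/bonding/bond0 | grep -B 2 -A 2 MII
ping -c 3 <default gateway IP>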

Add a layer 3 routing on PowerFlex storage-only node for an internal SDS to SDC
Use this procedure to add a layer 3 route on a PowerFlex storage-only node for internal SDS to SDC traffic.

Steps
1. Start an SSH session to the PowerFlex storage-only node using PuTTY.
2. Log in as root.
3. Type cd /etc/sysconfig/network-scripts/.
4. At the command prompt, type echo "<destination subnet> via <gateway> dev <SIO Interface>" > route-<SIO Interface>.
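For example, assuming a hypothetical destination subnet of 192.168.170.0/24, a gateway of 192.168.152.1, and a PowerFlex data interface of p1p1.152, the command would be:

echo "192.168.170.0/24 via 192.168.152.1 dev p1p1.152" > route-p1p1.152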

II
Converting a PowerFlex controller node with a PERC H755 to a PowerFlex management controller 2.0
Use the procedures in this section to convert a standalone PowerFlex R650 controller node with a PERC H755 to a PowerFlex
management controller 2.0.
Before converting a PowerFlex controller node to a PowerFlex management controller 2.0, ensure that the following
prerequisites are met:
● Back up the PowerFlex controller node.
● Latest Intelligent Catalog is available.
● See Cabling the PowerFlex R650/R750/R6525 nodes for cabling information on PowerFlex management node.
● See Configuring the network to configure the management node switches.

4
Configuring the new PowerFlex controller node

Upgrade the firmware


Use this procedure to upgrade the firmware.

Steps
1. In the web browser, enter https://<ip-address-of-idrac>.
2. From the iDRAC dashboard, click Maintenance > System Update > Manual Update.
3. Click Choose File. Browse to the release appropriate Intelligent Catalog folder and select the appropriate files.
Required firmware:
● Dell iDRAC or Lifecycle Controller firmware
● Dell BIOS firmware
● Dell BOSS Controller firmware
● Dell Intel X550 or X540 or i350 firmware
● Dell Mellanox ConnectX-5 EN firmware
● PERC H755P controller firmware
4. Click Upload.
5. Click Install and Reboot.

Configure BOSS card


Use this procedure only if the BOSS card RAID1 is not configured.

Steps
1. Launch the virtual console, select Boot from the menu, and select BIOS setup from Boot Controls to enter the system
BIOS.
2. Power cycle the server and enter the BIOS setup.
3. From the menu, click Power > Reset System (Warm Boot).
4. From System Setup main menu, select Device Settings.
5. Select AHCI Controller in Slot x: BOSS-x Configuration Utility.
6. Select Create RAID Configuration.
7. Select both the devices and click Next.
8. Enter VD_R1_1 for name and retain the default values.
9. Click Yes to create the virtual disk and then click OK to apply the new configuration.
10. Click Next > OK.
11. Select VD_R1_1 that was created and click Back > Finish > Yes > OK.
12. Select System BIOS.
13. Select Boot Settings and enter the following settings:
● Boot Mode: UEFI
● Boot Sequence Retry: Enabled
● Hard Disk Failover: Disabled

● Generic USB Boot: Disabled
● Hard-disk Drive Placement: Disabled
● Clean all Sysprep order and variables: None
14. Click Back > Finish > Finish and click Yes to reboot the node.
15. Boot the node into BIOS mode by pressing F2 during boot.
16. Select System BIOS > Boot Settings > UEFI Settings.
17. Select UEFI Boot Sequence to change the order.
18. Click AHCI Controller in Slot 1: EFI Fixed Disk Boot Device 1 and select + to move to the top.
19. Click Back > Back > Finish to reboot the node again.

Convert physical disks to non-raid disks


Use this procedure to convert the physical disks on the PowerFlex management controller 2.0 to non-raid disks.

Steps
1. Connect to the iDRAC web interface.
2. Click Storage > Overview > Physical Disks.
3. Confirm the SSD Name is NonRAID Solid State Disk 0:1:x. If not, proceed to step 4.
4. Click Storage > Overview > Controllers.
5. In the Actions list for the PERC H755 Front (Embedded), select Reset configuration > OK > Apply now.
6. Click Job Queue.
Wait for task to complete.
7. Select Storage > Overview > Physical Disks.
8. In the Actions menu for each SSD, select Convert to Non-Raid (ensure not to select SSD 0 and SSD 1), and click OK.
9. Select Apply Later.
10. Repeat for all SSD drives.
11. Click Storage > Overview > Tasks.
12. Under Pending Operations actions, select PERC H755 FRONT (Embedded).
13. Select Apply Now.
14. Click Job Queue.
Wait for task to complete.
15. Click Storage > Overview > Physical Disks.
16. Confirm that the SSD name is NonRAID Solid State Disk 0:1:x.

Install VMware ESXi


Use this procedure to install VMware ESXi on the PowerFlex management controller.

Steps
1. Log in to iDRAC and perform the following steps:
a. Connect to the iDRAC interface and launch a virtual remote console from Dashboard and click Launch Virtual Console.
b. Select Virtual Media > Connect Virtual Media > Map CD/DVD.
c. Click Choose File and browse to the folder where the ISO file is saved, select it, and click Open.
d. Click Map Device > Close.
e. Click Boot > Virtual CD/DVD/ISO.
f. Click Yes to confirm boot action.
g. Click Power > Reset System (warm boot).
h. Click Yes.
2. Perform the following steps to install VMware ESXi:
a. On the VMware ESXi installer screen, press Enter to continue.
b. Press F11 to accept the license agreement.

c. Under Local, select ATA DELLBOSS VD as the installation location. If prompted, click Enter.
d. Select US Default as the keyboard layout and press Enter.
e. At the prompt, type the root password, and press Enter.
f. At the Confirm Install screen, press F11.
g. In Virtual Console, click Virtual Media > Disconnect Virtual Media.
h. Click Yes to un-map all devices.
i. Press Enter to reboot the PowerFlex management controller when the installation completes.

Configure VMware ESXi


Use this procedure to configure VMware ESXi on the PowerFlex node.

Steps
1. Press F2 to customize the system.
2. Enter the root password and press Enter.
3. Go to DCUI and select Troubleshooting Options.
4. Select Enable SSH.
5. Select Enable ESXi Shell.
6. Press ESC to exit from troubleshooting mode options.
7. Go to Direct Console User Interface (DCUI) > Configure Management Network.
8. Set Network Adapter to VMNIC2.
9. Set the ESXi Management VLAN ID to the required VLAN value.
10. Set the IPv4 ADDRESS, SUBNET MASK, and DEFAULT GATEWAY.
11. Select IPV6 Configuration > Disable IPV6 and press Enter.
12. Go to DNS Configuration and set the customer provided value.
13. Go to Custom DNS Suffixes and set the customer provided value.
14. Press ESC to exit the network configuration and press Y to apply the changes.
15. Type Y to commit the changes and the node restarts.
16. Verify the host connectivity by pinging the IP address from the jump server using the command prompt.

Install Dell Integrated Service Module


Use this procedure to install Dell Integrated Service Module (ISM).

Prerequisites
Download the latest supported version from Dell iDRAC Service Module.

Steps
1. Copy ISM-Dell-Web-X.X.X-XXXX.VIB-ESX7i-Live_AXX.zip to the /vmfs/volumes/<datastore>/ folder on
the PowerFlex management node running VMware ESXi.
2. SSH to the new appliance management host running VMware ESXi.
3. To install VMware vSphere 7.x Dell iDRAC service module, type esxcli software vib install -d /vmfs/
volumes/<datastore>/ISM-Dell-Web-X.X.X-XXXX.VIB-ESX7i-Live_AXX.zip.
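To confirm that the module is installed, a quick check (a sketch; the exact VIB name depends on the ISM release you downloaded) is:

esxcli software vib list | grep -i ism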

Configure NTP on the host


Use this procedure to configure the NTP on the host.

Steps
1. Log in to VMware ESXi host client as root.

2. In the left pane, click Manage.
3. Click System and Time & Date.
4. Click Edit NTP Settings.
5. Select Use Network Time Protocol (enable NTP client).
6. Select Start and Stop with host from the drop-down list.
7. Enter NTP IP Addresses.
8. Click Save.
9. Click Services > ntpd.
10. Click Start.

Rename the BOSS datastore


Use this procedure to rename the BOSS datastore.

Steps
1. Log in to VMware ESXi host client as a root.
2. On the left pane, click Storage.
3. Click Datastores.
4. Right-click datastore1 and click Rename.
5. Enter PFMC_DS<last_ip_octet>.
6. Click Save.

Add a new PowerFlex management controller 2.0 to VMware vCenter
Use this procedure to add a new PowerFlex management controller 2.0 to VMware vCenter.

Steps
1. Log in to the vCSA.
2. Right-click the cluster and click Add Host to add multiple hosts.
3. Enter FQDN of host.
4. Enter root username and password and click Next.
5. Select the certificate and click OK for certificate alert.
6. Verify the Host Summary and click Next.
7. Verify the summary and click Finish.
If the PowerFlex node is in maintenance mode, right-click the VMware ESXi host, and click Maintenance Mode > Exit
Maintenance Mode.

Enable HA and DRS on an existing cluster


Use this procedure to enable HA and DRS on an existing cluster.

Steps
1. Log in to the VMware vSphere Client.
2. Browse to the existing cluster.
3. Right-click the cluster that you want to configure and click Settings.
4. Under Services, click vSphere DRS, and click Edit.
5. Select Turn ON vSphere DRS and expand DRS Automation.

6. Select Fully Automated for Automation Level and select priority 1 and 2 for Migration Threshold.
7. Set Power Management to Off and click OK.
8. To enable vSphere HA, click Services > vSphere Availability, and click Edit.
9. Select Turn ON VMware vSphere HA and click OK.

Migrate interfaces to the BE_dvSwitch


Use this procedure to migrate the interfaces to the BE_dvSwitch.

Steps
1. Log in to the VMware vSphere Client.
2. Select BE_dvSwitch.
3. Click Configure and in Settings, select LACP.
4. Click Migrating network traffic to LAGs > Add and Manage Hosts.
5. Select Add Hosts and click Next.
6. Click New Hosts, select the host, and click OK > Next.
7. Select vmnic3 and click Assign Uplink.
8. Select LAG-BE-0 and select Apply this uplink assignment to the rest of the hosts, and click OK.
9. Select vmnic7 and click Assign Uplink.
10. Select LAG-BE-1 and select Apply this uplink assignment to the rest of the hosts, and click OK.
11. Click Next > Next > Next > Finish.

Migrate interfaces to the FE_dvSwitch


Use this procedure to migrate the interfaces to the FE_dvSwitch.

Steps
1. Log in to the VMware vSphere Client.
2. Click Network.
3. Select FE_dvSwitch.
4. Click Configure and in Settings, select LACP.
5. Click Migrating network traffic to LAGs > Add and Manage Hosts.
6. Select Add Hosts and click Next.
7. Click New Hosts, select the host, and click OK > Next.
8. Select vmnic2 and click Assign Uplink. Select Apply this uplink assignment to the rest of the hosts.
9. Select LAG-FE-0 and click OK.
10. Select vmnic6 and click Assign Uplink. Select Apply this uplink assignment to the rest of the hosts.
11. Select LAG-FE-1 and click OK.
12. In Manage VMkernel Adapter, select vmk0, and click Assign port group.
13. Select flex-node-mgmt-<vlanid>. Select Apply this uplink assignment to the rest of the hosts.
14. Click OK.
15. Click Next > Next > Finish.

Create the distributed port group for the FE_dvSwitch
Create a distributed port group for the PowerFlex management node network.

Steps
1. Log in to the VMware vSphere Client and click Networking.
2. Right-click FE_dvSwitch and select Distributed Port Group > New Distributed Port Group.
3. Enter flex-install-<vlanid> and click Next.
NOTE: For partial network deployment using PowerFlex Manager, this step is not required.

4. Leave the port related options (port binding, allocation, and number of ports) as the default values.
5. Select VLAN as the VLAN type.
6. Set the VLAN ID to the appropriate VLAN number and click Next.
7. In the Ready to complete screen, verify the details and click Finish.
8. Repeat steps 2 to 7 to create the following port groups:
● flex-node-mgmt-<vlanid>
● flex-stor-mgmt-<vlanid>

Create the distributed port groups for the BE_dvSwitch
Create a distributed port group for the PowerFlex management node network.

Steps
1. Log in to the VMware vSphere Client and click Networking.
2. Right-click BE_dvSwitch and select Distributed Port Group > New Distributed Port Group.
3. Enter flex-data1-<vlanid> and click Next.
4. Leave the port-related options (Port binding, Port allocation, and # of ports) as the default values.
5. Select VLAN as the VLAN type.
6. Set the VLAN ID to the appropriate VLAN number.
7. Clear the Customize default policies configuration and click Next > Finish.
8. Repeat steps 2 through 7 for each additional port group.

Related information
Configuring the network

Modify the failover order for the BE_dvSwitch


Use this procedure to modify the failover order for the BE_dvSwitch.

Steps
1. Log in to the VMware vSphere Client.
2. Click Networking and select BE_dvSwitch.
3. Click Configure, and from Settings, select LACP.
4. Click Migrating network traffic to LAGs.
5. Click Manage Distributed Port Groups, click Teaming and Failover, and click Next.

6. Select All port groups and click Next.
7. Select LAG-BE and move it to Active Uplinks.
8. Move Uplink1 and Uplink2 to Unused Uplinks and click Next.
9. Click Finish.

Modify the failover order for the FE_dvSwitch


Use this procedure to modify the failover order for the FE_dvSwitch.

Steps
1. Log in to the VMware vSphere Client.
2. Click Networking and select FE_dvSwitch.
3. Click Configure, and from Settings, select LACP.
4. Click Migrating network traffic to LAGs.
5. Click Manage Distributed Port Groups, click Teaming and Failover, and click Next.
6. Select All port groups and click Next.
7. Select LAG-FE and move it to Active Uplinks.
8. Move Uplink1 and Uplink2 to Unused Uplinks and click Next.
9. Click Finish.

Related information
Add VMkernel adapter to the PowerFlex controller node hosts

Enable PCI passthrough for PERC H755 on the PowerFlex management controller
Use this procedure to enable PCI passthrough on the PowerFlex management controller.

Steps
1. Log in to the VMware ESXi host.
2. Select Manage > Hardware > PCI Devices.
3. Select Broadcom / LSI PERC H755 Front Device > Toggle passthrough.
4. A reboot is required; defer it until after the SDC is installed.
NOTE: Ignore the VMware popup warning: Failed to configure passthrough devices.

Install the PowerFlex storage data client (SDC) on the PowerFlex management controller
Use this procedure to install the PowerFlex storage data client (SDC) on the PowerFlex management controller.

Steps
1. Copy the SDC file to the local datastore on the VMware ESXi server.
2. Use SSH to log in to each VMware ESXi hosts as root.
3. Type the following command to install the storage data client (SDC): esxcli software component apply -d /
vmfs/volumes/PFMC_DS<last ip octet>/sdc.zip.
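Before rebooting, the following check (a sketch; the component name can vary by SDC release) confirms that the SDC component is present:

esxcli software component list | grep -i sdc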

Configure PowerFlex storage data client (SDC) on the PowerFlex management controller
Use this procedure to manually configure the SDC on the PowerFlex management controller 2.0.

Steps
1. To configure the SDC, generate 1 UUID per server (https://www.guidgenerator.com/online-guid-generator.aspx).

NOTE: Use the default UUID.

2. Use SSH to log in to the VMware ESXi host as root.
3. Substitute the new UUID in the following command with the pfmc-data1-vip and pfmc-data2-vip:
esxcli system module parameters set -m scini -p "IoctlIniGuidStr=<guid> IoctlMdmIPStr=pfmc-data1-vip,pfmc-data2-vip"
See PowerFlex management node cabling for more information.


4. Type the following command to verify scini configuration: esxcli system module parameters list -m scini |
head
5. Reboot the PowerFlex management controller 2.0.

Manually deploy the SVM


Use this procedure to manually deploy the SVM of the selected Intelligent Catalog.

About this task


Deploy the PowerFlex SVM on the PowerFlex controller nodes.
NOTE: Manually deploy the PowerFlex SVM on each of the PowerFlex controller nodes. The SVM on the PowerFlex
management controller 2.0 node is installed in the local storage. The SVM on the standalone PowerFlex management
controller 2.0 node is installed in the PERC-01 storage.

Steps
1. Log in to the VMware vCSA.
2. Select Hosts and Clusters.
3. Right-click the ESXi host > Select Deploy OVF Template.
4. Select Local file > Upload file > Browse to the SVM OVA template.
5. Click Open > Next.
6. Enter pfmc-<svm-ip-address> for VM name.
7. Click Next.
8. Identify the cluster and select the node that you are deploying. Verify that there are no compatibility warnings and click Next.
9. Click Next.
10. Review details and click Next.
11. Select Local datastore Thin Provision and Disable Storage DRS for this VM and click Next.
12. Select pfmc-sds-mgmt-<vlanid> for VM network and click Next.
13. Click Finish.

Configure the SVM


Use this procedure to configure the SVM.

About this task


The PowerFlex controller node must have one hard disk to enable PowerFlex installation. Ensure that PCI passthrough for DirectPath I/O (DPIO) is enabled on the PowerFlex management controller 2.0.

Steps
1. To configure the SVM, right-click each SVM, and click Settings and perform the following steps:
a. Set CPU to 12 CPUs with 12 cores per socket.
b. Select Reservation and enter the GHz value: 17.4.
c. Set Memory to 18 GB and check Reserve all guest memory (all locked).
d. Set Network Adapter 1 to the pfmc-sds-mgmt-<vlanid>.
e. Set Network Adapter 2 to the pfmc-sds-data1-<vlanid>.
f. Set Network Adapter 3 to the pfmc-sds-data2-<vlanid>.
g. Click Add New Device and select PCI Device (new single PowerFlex management controller).
h. Enable Toggle DirectPath IO.
i. Select PCI Device = PERC H755 Front BroadCom/LSI.
j. Click OK.
2. Configure the new PowerFlex controller node:
a. Click Add New Device and select PCI Device (new single PowerFlex controller node).
b. Enable DirectPath IO.
c. Select PCI Device = PERC H755 Front BroadCom/LSI.
d. Click OK.
3. Create an additional hard disk on the standalone PowerFlex controller node only:
a. Click Add New Device and select the hard disk.
b. Assign 1 TB for the new hard disk.
c. Click OK.
4. Power on the SVM and open a console.
5. Log in using the following credentials:
● Username: root
● Password: admin
6. To change the root password type passwd and enter the new SVM root password twice.
7. Type nmtui, select Set system hostname, press Enter, and create the hostname.

Configure the pfmc-sds-mgmt-<vlanid> networking interface


Use this procedure to configure the networking interface.

Steps
1. From the nmtui, select Edit Connection.
2. Select Wired connection 1 to modify connection for pfmc-sds-mgmt-<vlanid>.

Network interface settings Description


Profile name Set to eth0.
Ethernet a. Select Show.
b. Leave the cloned MAC address line blank.
c. Set MTU to 1500.

IPv4 configuration Select Automatic and change to Manual then select Show.

Addresses Select Add and enter the IP address of this interface
(pfmc-sds-mgmt_ip) with the subnet mask (Example:
100.65.140.10/24).

Gateway Enter the gateway IP address.


DNS Server Enter DNS server IP address.
Search domains a. Enter the domain.
b. Select Require IPv4 addressing for this connection.
IPv6 configuration Select Automatic and change to Ignore.

3. Select Automatically connect.


4. Select Available to all users.
5. To exit the screen, select OK.

Configure the pfmc-sds-data1-<vlanid> networking interface


Use this procedure to configure the networking interface.

Steps
1. From the nmtui, select Edit Connection.
2. Select Wired connection 2 to modify connection for pfmc-sds-data1-<vlanid>.

Network interface settings Description


Profile name Set to eth1.
Ethernet a. Select Show.
b. Leave the cloned MAC address line blank.
c. Set MTU to 9000.

IPv4 configuration Select Automatic and change to Manual then select Show.
Addresses Select Add and enter the IP address of this interface
(pfmc-sds-data1_ip).

Gateway Leave blank.


DNS Server Leave blank.
Search domains Leave blank.

3. Select Never use this network for the default route.


4. Select Require IPv4 addressing for this connection.
5. For IPv6 configuration, select Automatic and change to Ignore.
6. Select Automatically connect.
7. Select Available to all users.
8. Select OK.
9. To exit nmtui, select Quit.

Configure the pfmc-sds-data2-<vlanid> networking interface


Use this procedure to configure the networking interface.

Steps
1. From the nmtui, select Edit Connection.

2. Select Wired connection 3 to modify connection for pfmc-sds-data2-<vlanid>.

Network interface settings Description


Profile name Set to eth2.
Ethernet a. Select Show.
b. Leave the cloned MAC address line blank.
c. Set MTU to 9000.

IPv4 configuration Select Automatic and change to Manual then select Show.
Addresses Select Add and enter the IP address of this interface
(pfmc-sds-data2_ip).

Gateway Leave blank.


DNS Server Leave blank.
Search domains Leave blank.

3. Select Never use this network for the default route.


4. Select Require IPv4 addressing for this connection.
5. For IPv6 configuration, select Automatic and change to Ignore.
6. Select Automatically connect.
7. Select Available to all users.
8. To exit the screen, select OK.

Install required PowerFlex packages


Use this procedure to install the required PowerFlex packages.

Steps
1. On all PowerFlex controller nodes perform the following:
a. Install LIA on all the PowerFlex management controllers by typing the following:
TOKEN=<TOKEN-PASSWORD> rpm -ivh /root/install/EMC-ScaleIO-lia-x.x-x.el7.x86_64.rpm
Where <TOKEN-PASSWORD> is a password used for LIA. The LIA password must be identical in all LIAs within the same
system.
The password must be between 6 and 31 ASCII-printable characters with no blank spaces. It must include at least three
of the following groups: [a-z], [A-Z], [0-9], special chars (!@#$ …).

NOTE: If you use special characters on a Linux-based server, you must escape them when issuing the command.

b. Install the SDS on all PowerFlex management controllers by typing:


rpm -ivh /root/install/EMC-ScaleIO-sds-x.xxx.xxx.el7.x86_64.rpm
2. On the MDM PowerFlex controller nodes, perform the following:
a. Install MDM on the SVM1 and SVM2 by running the following command:
MDM_ROLE_IS_MANAGER=1 rpm -ivh /root/install/EMC-ScaleIO-mdm-x.x-
xxxx.xxx.el7.x86_64.rpm
3. On the TieBreaker PowerFlex controller nodes, perform the following:
a. Install MDM on SVM3 by running the following command:
MDM_ROLE_IS_MANAGER=0 rpm -ivh /root/install/EMC-ScaleIO-mdm-x.x-
xxxx.xxx.el7.x86_64.rpm
b. To reboot, type reboot.
4. Reboot all SVMs.

Verify connectivity between the PowerFlex storage VMs


Use this procedure to verify connectivity between PowerFlex storage VMs.

Prerequisites
This verification uses a ping payload size of 8972 bytes to verify jumbo frames (MTU 9000) between SVMs.

Steps
1. Log in to the VMware vCSA.
2. Right-click the SVM.
3. On the VM summary page, select Launch Web Console.
4. Log in to SVM as root.
5. To verify connectivity between SVMs, type ping -M do -s 8972 <destination IP>.
6. Confirm connectivity for all interfaces to all SVMs.

Manually deploy PowerFlex on the PowerFlex management controller nodes
Create MDM cluster
Use this procedure to create the MDM cluster.

Steps
1. Run the following command on SVM to create the MDM cluster: scli --create_mdm_cluster --master_mdm_ip <pfmc-sds-data1-ip,pfmc-sds-data2-ip> --master_mdm_management_ip <pfmc-sds-mgmt-ip> --cluster_virtual_ip <pfmc-data1-vip,pfmc-data2-vip> --master_mdm_virtual_ip_interface eth1,eth2 --master_mdm_name <mdm-name(hostname of svm)> --accept_license --approve_certificate
A worked example is shown after these steps. See PowerFlex management node cabling for more information.
2. To log in to MDM, type scli --login --username admin (default password is admin).
3. To change default password, type scli --set_password.
4. Log in to the PowerFlex cluster with new password: scli --login --username admin.
5. To add a secondary MDM to the cluster, type scli --add_standby_mdm --mdm_role manager --new_mdm_ip <pfmc-sds-data1-ip,pfmc-sds-data2-ip> --new_mdm_management_ip <pfmc-sds-mgmt-ip> --new_mdm_virtual_ip_interface eth1,eth2 --new_mdm_name <mdm-name>
6. To add Tiebreaker MDM to the cluster, type scli --add_standby_mdm --mdm_role tb --new_mdm_ip <pfmc-
sds-data1-ip,pfmc-sds-data2-ip> --new_mdm_name <mdm-name>
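The following is a worked example of the cluster creation command in step 1, using hypothetical addresses (10.234.91.11 and 10.234.92.11 for the data IP addresses, 100.65.140.11 for the management IP address, 10.234.91.10 and 10.234.92.10 for the virtual IP addresses) and an SVM hostname of pfmc-svm-01; substitute the values for your environment:

scli --create_mdm_cluster --master_mdm_ip 10.234.91.11,10.234.92.11 --master_mdm_management_ip 100.65.140.11 --cluster_virtual_ip 10.234.91.10,10.234.92.10 --master_mdm_virtual_ip_interface eth1,eth2 --master_mdm_name pfmc-svm-01 --accept_license --approve_certificate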

Add protection domain


Use this procedure to add a protection domain.

Steps
1. Log in to the MDM, type: scli --login --username admin
2. Create the protection domain, type: scli --add_protection_domain --protection_domain_name PFMC

Add storage pool


Use this procedure to add storage pools.

Steps
1. Run the following command to log in to the MDM: scli --login --username admin.
2. To create the storage pool, type scli --add_storage_pool --protection_domain_name PFMC --
dont_use_rmcache --media_type SSD --data_layout medium_granularity --storage_pool_name
PFMC-Pool.

Add SDSs
Use this procedure to add SDSs.

Steps
1. Log in to the MDM: scli --login --username admin.
2. To add SDSs, type scli --add_sds --sds_ip <pfmc-sds-data1-ip,pfcm-sds-data2-ip> --
protection_domain_name PFMC --storage_pool_name PFMC-Pool --disable_rmcache --sds_name
PFMC-SDS-<last ip octet>.
3. Repeat for each PowerFlex management controller.

Set the spare capacity for the medium granularity storage pool
Use this procedure to set the spare capacity for the medium granularity storage pool.

Steps
1. Log in to the primary MDM, type: scli --login --username admin.
2. To modify the capacity pool, type scli --modify_spare_policy --protection_domain_name PFMC --
storage_pool_name PFMC-Pool --spare_percentage <percentage>.
NOTE: Spare percentage is 1/n (where n is the number of nodes in the cluster). For example, a three-node cluster's spare percentage is 34%, as shown in the example after these steps.

3. Type Y to proceed.
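For example, for a three-node PowerFlex management controller cluster, the command is:

scli --modify_spare_policy --protection_domain_name PFMC --storage_pool_name PFMC-Pool --spare_percentage 34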

Identify the disks on each of the PowerFlex management controller nodes
Use this procedure to identify the disks on each of the PowerFlex management controller nodes.

Steps
1. Log in as root to each of the PowerFlex SVMs.
2. To identify all available disks on the SVM, type lsblk.
3. Repeat for all PowerFlex management controller node SVMs.

Add storage devices


Use this procedure to add storage devices.

Steps
1. Log in to the MDM: scli --login --username admin.

2. To add SDS storage devices, type scli --add_sds_device --sds_name <sds name> --storage_pool_name
<storage pool name> --device_path /dev/sd(x).
3. Repeat for all devices and for all PowerFlex management controller SVMs.
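For example, assuming an SDS named PFMC-SDS-11 and a device that lsblk reported as /dev/sdb (hypothetical values), the command would be:

scli --add_sds_device --sds_name PFMC-SDS-11 --storage_pool_name PFMC-Pool --device_path /dev/sdb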

Create datastores
Use this procedure to create datastores and add volumes.

Prerequisites
For volume sizes, see PowerFlex management controller and virtual machine details.

Steps
1. Log in to the MDM: scli --login --username admin.
2. To create the vcsa datastore, type scli --add_volume --protection_domain_name PFMC --storage_pool_name PFMC-Pool --size_gb 3500 --volume_name vcsa --thin_provisioned --dont_use_rmcache.
3. To create the general datastore, type scli --add_volume --protection_domain_name PFMC --
storage_pool_name PFMC-Pool --size_gb 1600 --volume_name general --thin_provisioned --
dont_use_rmcache.
4. To create the PowerFlex Manager datastore, type scli --add_volume --protection_domain_name PFMC --
storage_pool_name PFMC-Pool --size_gb 1000 --volume_name PFMC-pfxm --thin_provisioned --
dont_use_rmcache.

Add PowerFlex storage to new PowerFlex management controller nodes
Steps
1. Log in to the MDM: scli --login --username admin.
2. To query all storage data clients (SDC) to capture the SDC IDs, type scli --query_all_sdc.
3. To query all the volumes to capture volume names, type scli --query_all_volumes.
4. To map volumes to SDCs, type scli --map_volume_to_sdc --volume_name <volume name> --sdc_id <sdc
id> --allow_multi_map.
5. Repeat steps 2 through 4 for all volumes and SDCs.
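For example, to map the vcsa volume to an SDC whose ID was returned by the query in step 2 (the ID shown here is hypothetical), the command would be:

scli --map_volume_to_sdc --volume_name vcsa --sdc_id 6e3b19d200000001 --allow_multi_map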

Create VMFS datastores for PowerFlex management controller nodes
Steps
1. Log in to the VMware vCSA.
2. Right-click the new PowerFlex management controller node in Host and Clusters view.
3. Select Storage > New Datastore
4. Select VMFS and click Next.
5. Name the datastore and select the available LUN and click Next.
6. Select VMFS 6 and click Next.
7. For partition configuration: leave the defaults as is and click Next.
8. Click Finish to start creating the datastore.
9. Repeat for all additional volumes created in the PowerFlex cluster.

Optimize VMware ESXi


Use this procedure to improve input/output concurrency, and increase the per device queue length value on a per datastore
basis.

Steps
1. Log in to the VMware ESXi host as root.
2. To identify the datastores by their EUI (extended unique identifier), type esxcli storage core device list | grep -i emc.
3. To increase the queue length, type esxcli storage core device set -d <DEVICE_ID> -O <Outstanding
IOs>.
4. Set the queue length (Outstanding IOs) to 256.
Example:
esxcli storage core device set -d eui.16bb852c56d3b93e3888003b00000000 -O 256

5. Repeat for all datastores.

Remove storage disk device on the PowerFlex controller node
Use this procedure to remove storage disk device on the PowerFlex controller node.

Steps
1. Log in to the primary MDM: scli --login --username admin.

2. To identify the device path attached to the PowerFlex controller node, type scli --query_device_latency_meters
--sds_id <sds id>.
3. To remove the storage disk device, type scli --remove_sds_device --sds_id <sds id> --device_path
<device path>.

Migrate all VMs


Use this procedure to migrate all VMs to the pfmc-vcsa datastore using storage-only migration.

Steps
1. Log in to the VMware vCSA HTML client.
2. Click Storage.
3. Select the PERC-01 datastore.
4. Click the VMs tab to identify existing VMs that need to be migrated to the PowerFlex storage.
5. Right-click the VM to migrate and select Migrate.
6. Select Change both compute resource and storage and click Next.
7. For a compute resource, select one of the PowerFlex controller nodes and click Next.
8. For storage, select the datastore and click Next.
9. For networks, verify that the destination networks are correct and click Next.
10. Retain the default for vMotion priority and click Next.
Wait for vMotion to finish.
11. Repeat these steps for all VMs that need to be migrated.

Migrate PowerFlex SVM on PERC-01 datastore


Use this procedure to migrate the PowerFlex SVM on PERC-01 datastore.

Steps
1. Right-click PowerFlex SVM > Power > Shut Down Guest OS and click Yes.
2. Right-click PowerFlex SVM and select Edit Settings.
3. Select the 1 TB hard drive and click OK.
4. Right-click PowerFlex SVM > Migrate and select Change Storage Only.
5. For storage, select the local datastore and click Next.
6. Verify the summary and click Finish.
Wait for vMotion to finish.

Delete PERC datastore


Use this procedure to delete the PERC datastore.

Steps
1. Log in to the VMware vCenter Server Appliance (vCSA).
2. Click Storage.
3. Right-click the PERC-01 datastore and select Unmount Datastore > Host and click OK.
4. Right-click the PERC-01 datastore and select Delete Datastore. Click Yes to confirm.

Configuring the new PowerFlex controller node 145


Internal Use - Confidential

5
Convert a standalone PowerFlex controller node to non-raid mode

Convert physical disks to non-raid disks


Use this procedure to convert the physical disks on the PowerFlex management controller 2.0 to non-raid disks.

Steps
1. Connect to the iDRAC web interface.
2. Click Storage > Overview > Physical Disks.
3. Confirm the SSD Name is NonRAID Solid State Disk 0:1:x. If not, proceed to step 4.
4. Click Storage > Overview > Controllers.
5. In the Actions list for the PERC H755 Front (Embedded), select Reset configuration > OK > Apply now.
6. Click Job Queue.
Wait for task to complete.
7. Select Storage > Overview > Physical Disks.
8. In the Actions menu for each SSD, select Convert to Non-Raid (ensure not to select SSD 0 and SSD 1), and click OK.
9. Select Apply Later.
10. Repeat for all SSD drives.
11. Click Storage > Overview > Tasks.
12. Under Pending Operations actions, select PERC H755 FRONT (Embedded).
13. Select Apply Now.
14. Click Job Queue.
Wait for task to complete.
15. Click Storage > Overview > Physical Disks.
16. Confirm that the SSD name is NonRAID Solid State Disk 0:1:x.

Enable PCI passthrough for PERC H755 on the PowerFlex management controller
Use this procedure to enable PCI passthrough on the PowerFlex management controller.

Steps
1. Log in to the VMware ESXi host.
2. Select Manage > Hardware > PCI Devices.
3. Select Broadcom / LSI PERC H755 Front Device > Toggle passthrough.
4. A reboot is required; defer it until after the SDC is installed.
NOTE: Ignore the VMware popup warning: Failed to configure passthrough devices.

146 Convert a standalone PowerFlex controller node to non-raid mode


Internal Use - Confidential

Install the PowerFlex storage data client (SDC) on the PowerFlex management controller
Use this procedure to install the PowerFlex storage data client (SDC) on the PowerFlex management controller.

Steps
1. Copy the SDC file to the local datastore on the VMware ESXi server.
2. Use SSH to log in to each VMware ESXi hosts as root.
3. Type the following command to install the storage data client (SDC): esxcli software component apply -d /
vmfs/volumes/PFMC_DS<last ip octet>/sdc.zip.

Configure PowerFlex storage data client (SDC) on the PowerFlex management controller
Use this procedure to manually configure the SDC on the PowerFlex management controller 2.0.

Steps
1. To configure the SDC, generate 1 UUID per server (https://www.guidgenerator.com/online-guid-generator.aspx).

NOTE: Use the default UUID.

2. Use SSH to log in to the VMware ESXi host as root.
3. Substitute the new UUID in the following command with the pfmc-data1-vip and pfmc-data2-vip:
esxcli system module parameters set -m scini -p "IoctlIniGuidStr=<guid> IoctlMdmIPStr=pfmc-data1-vip,pfmc-data2-vip"
See PowerFlex management node cabling for more information.


4. Type the following command to verify scini configuration: esxcli system module parameters list -m scini |
head
5. Reboot the PowerFlex management controller 2.0.

Configure the SVM


Use this procedure to configure the SVM.

About this task


The PowerFlex controller node must have one hard disk to enable PowerFlex installation. Ensure that PCI passthrough is enabled
on the PowerFlex management controller 2.0.

Steps
1. Log in to the VMware vCSA.
2. Select Host and Clusters.
3. Right-click the single PowerFlex Management SVM and select Edit Settings.
4. Click Add New Device and select PCI Device.
5. Select DirectPath IO.
6. Select PCI Device = PERC H755 Front BroadCom/LSI.
7. Click OK.
8. Power on the SVM.


Identify the disks on each of the PowerFlex management controller nodes
Use this procedure to identify the disks on each of the PowerFlex management controller nodes.

Steps
1. Log in as root to each of the PowerFlex SVMs.
2. To identify all available disks on the SVM, type lsblk.
3. Repeat for all PowerFlex management controller node SVMs.

Add storage devices


Use this procedure to add storage devices.

Steps
1. Log in to the MDM: scli --login --username admin.
2. To add SDS storage devices, type scli --add_sds_device --sds_name <sds name> --storage_pool_name
<storage pool name> --device_path /dev/sd(x).
3. Repeat for all devices and for all PowerFlex management controller SVMs.

Convert a PowerFlex single cluster to a three node cluster
Use this procedure to convert the PowerFlex cluster from a single node to a three node cluster.

Steps
1. Log in as root to the primary MDM: scli --login --username admin.
2. To verify cluster status (cluster mode is 1_node), type scli --query_cluster.
Output should be similar to the following: Cluster: Mode: 1_node.
3. To convert a single node cluster to a three node cluster, type scli --switch_cluster_mode --cluster_mode
3_node --add_slave_mdm_name <standby-mdm-name> --add_tb_name <tiebreaker-mdm-name>.
4. On the three node cluster, type scli --query_cluster.
Output should be similar to the following: Cluster: Mode: 3_node.
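For example, the command in step 3 with hypothetical SVM hostnames pfmc-svm-02 (standby MDM) and pfmc-svm-03 (tiebreaker) would be:

scli --switch_cluster_mode --cluster_mode 3_node --add_slave_mdm_name pfmc-svm-02 --add_tb_name pfmc-svm-03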

Deploy the PowerFlex gateway


Perform this procedure to deploy the PowerFlex gateway.

About this task


NOTE: PowerFlex management controller 2.0 requires two gateways to be installed: one management gateway and one customer gateway.
After the gateway deployment completes, if necessary, see knowledge base article 541865 to resolve the following error: The root directory filled up due to large localhost_access.log files.
For choosing the appropriate volumes and datastores during the PowerFlex gateway installation, see PowerFlex management controller datastore and virtual machine details for more information.

Steps
1. From the PowerFlex Manager menu, click Templates.
2. On the Templates page, click Add a Template.
3. In the Add a Template wizard, click Clone an existing PowerFlex Manager template.
4. For Category, select Sample Templates. For Template to be Cloned, select Management - PowerFlex Gateway. Click
Next.
5. On the Template Information page, provide the template name, template category, template description, firmware and
software compliance, and who should have access to the service deployed from this template. Click Next.
6. On the Additional Settings page, perform the following:
a. Under Network Settings, select PowerFlex Management Network and PowerFlex Data Networks.
b. Under PowerFlex Gateway Settings, select the gateway credential or create a new credential with root and admin
users.
c. Under Cluster Settings, select Management vCenter.
7. Click Finish.
8. Select the VMware cluster and click Edit > Continue. Select the data center name from the Data Center Name list, the new cluster name from the Cluster Name list, and the port groups.
NOTE: Deploying PowerFlex gateway from PowerFlex Manager requires a management vCenter, a datacenter, a cluster,
and dvSwitch port groups for the PowerFlex management and PowerFlex data networks.

9. Expand vSphere Network Settings and select the appropriate port groups.
10. Click Save.
11. Select the VM and click Edit > Continue. Under VM Settings, select the datastore and click Save.
12. Click Publish template and click Deploy.
13. In the Deploy Service wizard, click Yes, and do the following:
a. Provide the service name, service description and click Next.
b. Under PowerFlex Gateway Settings, provide the hostname and IP address and click Next.
c. Select Deploy Now or Deploy Later to schedule the deployment and click Next.
d. Review the details in Summary page and click Finish.
After successful deployment, the PowerFlex gateway is discovered automatically.

Attach PowerFlex management controller 2.0 to the PowerFlex gateway
After deploying PowerFlex Manager, restore the PowerFlex gateway configuration.

Prerequisites
Ensure the PowerFlex management controller 2.0 Gateway is deployed and configured before proceeding.

Steps
1. Log in to the primary MDM (capture the system id on login): scli --login --username admin.
2. To identify the virtual IP addresses, type scli --query_cluster.
3. Log in to the PowerFlex gateway as root.
4. Modify the gatewayUser.properties file:
a. Type cd /opt/emc/scaleio/gateway/webapps/ROOT/WEB-INF/classes
b. Type vi gatewayUser.properties to edit the file and modify the following fields (a sample of the modified file is shown after these steps). The IP addresses should be on the <MGMTIP> network:
● mdm.ip.addresses=<PRIMARY MDM IP>,<SECONDARY MDM IP 1>,<DATA 1 VIP>,<DATA 2 VIP>

● system.id=SYSTEM ID of PowerFlex Cluster

● notification_method=none

● bypass_certificate_check=true

5. Define the administrative password <ADMIN PWD>.
a. Format the password as a quoted text string (single quote, text string, single quote).
6. Create PowerFlex gateway lockbox credentials: /opt/emc/scaleio/gateway/bin/FOSGWTool.sh --
change_lb_passphrase --new_passphrase <ADMIN PWD>.
7. Create the PowerFlex gateway MDM credentials: /opt/emc/scaleio/gateway/bin/FOSGWTool.sh --
set_mdm_credentials --mdm_user admin --mdm_password <ADMIN PWD>.
8. Create the PowerFlex gateway LIA password: /opt/emc/scaleio/gateway/bin/FOSGWTool.sh --
set_lia_password --lia_password <ADMIN PWD>.
9. Restart the PowerFlex gateway service: service scaleio-gateway restart.
10. Log in to PowerFlex Manager, Select Resources, select the PowerFlex management controller 2.0 Gateway, and click Run
Inventory.
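The following is a sketch of the modified fields in gatewayUser.properties, using hypothetical management IP addresses, virtual IP addresses, and system ID; all other lines in the file are left unchanged:

mdm.ip.addresses=100.65.140.11,100.65.140.12,10.234.91.10,10.234.92.10
system.id=54c1f3a22f3a9e0f
notification_method=none
bypass_certificate_check=true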

Confirming the PowerFlex Gateway is functional


Before configuring PowerFlex, verify the gateway is working properly.

Steps
1. From the jump server, browse to the PowerFlex Gateway IP address.
2. Log in as admin.
3. Navigate to the Maintain tab.
4. Complete the following fields:
● Primary MDM IP address
● MDM username
● MDM password
● LIA password
5. Click Retrieve to view the system configuration.
6. Click Cancel.
7. Click OK to accept the pop-up message.
8. Populate the fields again if they are empty.
9. Click Test REST configuration.
10. Click Connect to MDM.
11. In the pop-up window, type the username and password.

License PowerFlex on PowerFlex management controller cluster
PowerFlex requires a valid license for production environments and provides support entitlements.

About this task


Verify PowerFlex licensing at the beginning of the implementation phase. The implementation is affected by any issues with the
license file.

Steps
1. Using the administrator credentials, log in to the jump server.
2. Copy the PowerFlex license to the primary MDM.
3. Log in to the primary MDM.
4. Run the following command to apply PowerFlex license: scli --mdm_ip <primary mdm ip> --set_license
--license_file <path to license file>

Add PowerFlex management controller service to PowerFlex Manager
Use this procedure to add the PowerFlex management controller to PowerFlex Manager.

Prerequisites
Ensure the following conditions are met before you add an existing service:
● The VMware vCenter, PowerFlex gateway, switches, and hosts are discovered in the resource list.
● The PowerFlex gateway must be in the service.

NOTE: For PowerFlex management controller 2.0 with a PERC H755, the service will be in lifecycle mode.

Steps
1. In PowerFlex Manager, click Services > + Add Existing Service > Next.
2. On the Service Information page, enter a service name in the Name field.
3. Enter a description in the Description field.
4. Select Hyperconverged for the Type.
5. Select the Intelligent Catalog version applicable from the Firmware and Software Compliance.
6. Click Next.
7. Choose one of the following network automation types:
● Full network automation (FNA)
● Partial network automation (PNA)
NOTE: If you choose PNA, PowerFlex Manager skips the switch configuration step, which is normally performed
for a service with FNA. PNA allows you to work with unsupported switches. However, it also requires more manual
configuration before a deployment can proceed successfully. If you choose to use PNA, you give up the error handling
and network automation features that are available with a full network configuration that includes supported switches.

8. (Optional) In the Number of Instances field, provide the number of component instances that you want to include in the
template.
9. On the Cluster Information page, enter a name for the cluster component in the Component Name field.
10. Select values for the cluster settings:

Cluster settings Description


Target Virtual Machine Manager Select the VMware vCenter name where the cluster is
available.
Data Center Name Select the data center name where the cluster is available.
Cluster Name Select the name of the cluster you want to discover.
Target PowerFlex Gateway Select the name of the gateway you want to discover.
Target Protection Domain Select the name of the protection domain you want to
discover.
OS Image Choose your ESXi image.

11. Click Next.


12. On OS Credentials page, select the OS credential that you want to use for each node and SVM and click Next.
13. Review the inventory on the Inventory Summary page and click Next.
14. On the Network Mapping page, review the networks that are mapped to port groups and make any required edits and click
Next.
15. Review the Summary page and click Finish when the service is ready to be added.
16. Automatically migrate the vCLS VMs:
a. For storage pools, select PFMC-POOL.

b. Type MIGRATE VCLS VIRTUAL MACHINES.
c. Click Confirm.


III
Adding a PowerFlex controller node with a PERC H755 to a PowerFlex management controller 2.0
Use the procedures in this section to add a PowerFlex R650 controller node with a PERC H755 to a PowerFlex management
controller 2.0 with PowerFlex.
Before adding a PowerFlex controller node to a PowerFlex management controller 2.0, ensure that the following prerequisites
are met:
● Back up the PowerFlex management node.
● Convert the PowerFlex R650 controller node with a PERC H755 to a PowerFlex management controller 2.0. See Converting
a PowerFlex controller node with a PERC H755 to a PowerFlex management controller 2.0 for more information.
● Latest intelligent catalog is available.
● See Cabling the PowerFlex R650/R750/R6525 nodes for cabling information on PowerFlex management node.
● See Configuring the network to configure the management node switches.


6
Discovering the new resource
Whether expanding an existing service or adding a service, the first step is to discover the new resource.

Steps
1. Connect the new NICs of the PowerFlex appliance node to the access switches and the out-of-band management switch
exactly like the existing nodes in the same protection domain. For more details, see Cabling the PowerFlex R650/R750/
R6525 nodes.
2. Ensure that the newly connected switch ports are not shut down.
3. Set the IP address of the iDRAC management port, username, and password for the new PowerFlex appliance nodes.
4. Log in to PowerFlex Manager.
5. Discover the new PowerFlex appliance nodes in the PowerFlex Manager resources. For more details, see Discover resources
in the next section.

Discover resources
Use this procedure to discover and allow PowerFlex Manager access to resources in the environment. Provide the management
IP address and credential for each discoverable resource.

Prerequisites
Verify that the iDRAC network settings are configured. See Configure iDRAC network settings for more information.

About this task


Dell EMC recommends using separate operating system credentials for SVM and VMware ESXi. For information about creating
or updating credentials in PowerFlex Manager, click Settings > Credentials Management and access the online help.
During node discovery, you can configure iDRAC nodes to automatically send alerts to PowerFlex Manager. You can also change
the IP address of the iDRAC nodes and discover them. See Reconfigure the discovered nodes with new management IP and
credentials in the Dell EMC PowerFlex Appliance Administration Guide. If the PowerFlex nodes are not configured for alert
connector, Secure Remote Services does not receive critical or error alerts for those resources.
The following table describes how to configure resources in managed mode:

Resource type: PowerFlex nodes
Resource state: Managed
Example: PowerEdge iDRAC management IP address
If you want to perform firmware updates or deployments on a discovered node, change the default state to managed. Perform firmware or catalog updates from the Services page or the Resources page.

NOTE: For partial network deployments, you do not need to discover the switches. The switches need to be pre-
configured. For sample configurations for Dell PowerSwitch, Cisco Nexus, and Arista switches, see the Dell EMC PowerFlex
Appliance Administration Guide.
The following are the specific details for completing the Discovery wizard steps:

● Node: IP address range 192.168.101.21-192.168.101.24; resource state Managed; discover into node pool: PowerFlex node pool; credential name: PowerFlex appliance iDRAC default; username: root; password: calvin; SNMPv2 community string: customer provided.
● Switch: IP address range 192.168.101.45-192.168.101.46; resource state Managed; node pool: N/A; credential name: access switches; username: admin; password: admin; SNMPv2 community string: public.
● VM Manager: IP address 192.168.105.105; resource state Managed; node pool: N/A; credential name: vCenter; username: administrator@vsphere.local; password: P@ssw0rd!; SNMPv2 community string: N/A.
● Element Manager**: IP addresses 192.168.105.120-192.168.105.121; resource state Managed; node pool: N/A; credential name: CloudLink; username: secadmin; password: Secadmin!!; SNMPv2 community string: N/A.
** This is optional. For a new CloudLink Center deployment, the CloudLink Center is discovered automatically.

Prerequisites
● Configure the iDRAC network settings.
● Gather the IP addresses and credentials that are associated with the resources.
NOTE: PowerFlex Manager also allows you to use the name-based searches to discover a range of nodes that were
assigned the IP addresses through DHCP to iDRAC. For more information about this feature, see Dell EMC PowerFlex
Manager Online Help.

Steps
1. On the PowerFlex Manager Getting Started page, click Discover Resources.
2. On the Welcome page of the Discovery Wizard, read the instructions and click Next.
3. On the Identify Resources page, click Add Resource Type. From the Resource Type list, select the resource that you
want to discover.
4. Enter the management IP address of the resource in the IP/Hostname Range field. To discover a resource in an IP range,
provide a starting and ending IP address.
5. In the Resource State list, select Managed or Unmanaged.
6. For PowerFlex node, to discover resources into a selected node pool instead of the global pool (default), select the node
pool from the Discover into Node Pool list.
7. Select the appropriate credential from the Credentials list. See the table above for details.
8. For PowerFlex node, if you want PowerFlex Manager to automatically reconfigure the iDRAC IP addresses of the nodes it
discovers, select the Reconfigure discovered nodes with new management IP and credentials check box. This option
is not selected by default, because it is faster to discover the nodes if you bypass the reconfiguration.
NOTE: iDRAC can be discovered using hostname also.

NOTE: For the Resource Type, you can use a range with hostname or IP address, provided the hostname has a valid
DNS entry.

9. For PowerFlex node, select the Auto configure nodes to send alerts to PowerFlex Manager check box to have
PowerFlex Manager automatically configure nodes to send alerts to PowerFlex Manager.
10. Click Next to start discovery. On the Discovered Resources page, select the resources from which you want to collect
inventory data and click Finish. The discovered resources are listed on the Resources page.

Related information
Configure iDRAC network settings


7
Upgrade the firmware
Use this procedure to upgrade the firmware.

Steps
1. In the web browser, enter https://<ip-address-of-idrac>.
2. From the iDRAC dashboard, click Maintenance > System Update > Manual Update.
3. Click Choose File. Browse to the release appropriate Intelligent Catalog folder and select the appropriate files.
Required firmware:
● Dell iDRAC or Lifecycle Controller firmware
● Dell BIOS firmware
● Dell BOSS Controller firmware
● Dell Intel X550 or X540 or i350 firmware
● Dell Mellanox ConnectX-5 EN firmware
● PERC H755P controller firmware
4. Click Upload.
5. Click Install and Reboot.


8
Configure BOSS card
Use this procedure only if the BOSS card RAID1 is not configured.

Steps
1. Launch the virtual console, select Boot from the menu, and select BIOS setup from Boot Controls to enter the system
BIOS.
2. Power cycle the server and enter the BIOS setup.
3. From the menu, click Power > Reset System (Warm Boot).
4. From System Setup main menu, select Device Settings.
5. Select AHCI Controller in Slot x: BOSS-x Configuration Utility.
6. Select Create RAID Configuration.
7. Select both the devices and click Next.
8. Enter VD_R1_1 for name and retain the default values.
9. Click Yes to create the virtual disk and then click OK to apply the new configuration.
10. Click Next > OK.
11. Select VD_R1_1 that was created and click Back > Finish > Yes > OK.
12. Select System BIOS.
13. Select Boot Settings and enter the following settings:
● Boot Mode: UEFI
● Boot Sequence Retry: Enabled
● Hard Disk Failover: Disabled
● Generic USB Boot: Disabled
● Hard-disk Drive Placement: Disabled
● Clean all Sysprep order and variables: None
14. Click Back > Finish > Finish and click Yes to reboot the node.
15. Boot the node into BIOS mode by pressing F2 during boot.
16. Select System BIOS > Boot Settings > UEFI Settings.
17. Select UEFI Boot Sequence to change the order.
18. Click AHCI Controller in Slot 1: EFI Fixed Disk Boot Device 1 and select + to move to the top.
19. Click Back > Back > Finish to reboot the node again.


9
Convert physical disks to non-raid disks
Use this procedure to convert the physical disks on the PowerFlex management controller 2.0 to non-raid disks.

Steps
1. Connect to the iDRAC web interface.
2. Click Storage > Overview > Physical Disks.
3. Confirm the SSD Name is NonRAID Solid State Disk 0:1:x. If not, proceed to step 4.
4. Click Storage > Overview > Controllers.
5. In the Actions list for the PERC H755 Front (Embedded), select Reset configuration > OK > Apply now.
6. Click Job Queue.
Wait for task to complete.
7. Select Storage > Overview > Physical Disks.
8. In the Actions menu for each SSD, select Convert to Non-Raid (ensure not to select SSD 0 and SSD 1), and click OK.
9. Select Apply Later.
10. Repeat for all SSD drives.
11. Click Storage > Overview > Tasks.
12. Under Pending Operations actions, select PERC H755 FRONT (Embedded).
13. Select Apply Now.
14. Click Job Queue.
Wait for task to complete.
15. Click Storage > Overview > Physical Disks.
16. Confirm that the SSD name is NonRAID Solid State Disk 0:1:x.


10
Install VMware ESXi
Use this procedure to install VMware ESXi on the PowerFlex management controller.

Steps
1. Log in to iDRAC and perform the following steps:
a. Connect to the iDRAC interface and launch a virtual remote console from Dashboard and click Launch Virtual Console.
b. Select Virtual Media > Connect Virtual Media > Map CD/DVD.
c. Click Choose File and browse to the folder where the ISO file is saved, select it, and click Open.
d. Click Map Device > Close.
e. Click Boot > Virtual CD/DVD/ISO.
f. Click Yes to confirm boot action.
g. Click Power > Reset System (warm boot).
h. Click Yes.
2. Perform the following steps to install VMware ESXi:
a. On the VMware ESXi installer screen, press Enter to continue.
b. Press F11 to accept the license agreement.
c. Under Local, select ATA DELLBOSS VD as the installation location. If prompted, press Enter.
d. Select US Default as the keyboard layout and press Enter.
e. At the prompt, type the root password, and press Enter.
f. At the Confirm Install screen, press F11.
g. In Virtual Console, click Virtual Media > Disconnect Virtual Media.
h. Click Yes to un-map all devices.
i. Press Enter to reboot the PowerFlex management controller when the installation completes.


11
Configure VMware ESXi
Use this procedure to configure VMware ESXi on the PowerFlex node.

Steps
1. Press F2 to customize the system.
2. Enter the root password and press Enter.
3. Go to DCUI and select Troubleshooting Options.
4. Select Enable SSH.
5. Select Enable ESXi Shell.
6. Press ESC to exit from troubleshooting mode options.
7. Go to Direct Console User Interface (DCUI) > Configure Management Network.
8. Set Network Adapter to VMNIC2.
9. Set the ESXi Management VLAN ID to the required VLAN value.
10. Set the IPv4 ADDRESS, SUBNET MASK, and DEFAULT GATEWAY.
11. Select IPV6 Configuration > Disable IPV6 and press Enter.
12. Go to DNS Configuration and set the customer provided value.
13. Go to Custom DNS Suffixes and set the customer provided value.
14. Press ESC to exit the network configuration and press Y to apply the changes.
15. Type Y to commit the changes and the node restarts.
16. Verify the host connectivity by pinging the IP address from the jump server using the command prompt.


12
Install Dell Integrated Service Module
Use this procedure to install Dell Integrated Service Module (ISM).

Prerequisites
Download the latest supported version from Dell iDRAC Service Module.

Steps
1. Copy ISM-Dell-Web-X.X.X-XXXX.VIB-ESX7i-Live_AXX.zip to the /vmfs/volumes/<datastore>/ folder on
the PowerFlex management node running VMware ESXi.
2. SSH to the new appliance management host running VMware ESXi.
3. To install VMware vSphere 7.x Dell iDRAC service module, type esxcli software vib install -d /vmfs/
volumes/<datastore>/ISM-Dell-Web-X.X.X-XXXX.VIB-ESX7i-Live_AXX.zip.
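For example, assuming the ISM bundle ISM-Dell-Web-3.5.1-1234.VIB-ESX7i-Live_A00.zip was copied to a datastore named PFMC_DS21 (both names are placeholders; use the file from your Intelligent Catalog and your actual datastore name), the install and a quick verification would look similar to the following:

# Install the iDRAC Service Module live bundle (no reboot required)
esxcli software vib install -d /vmfs/volumes/PFMC_DS21/ISM-Dell-Web-3.5.1-1234.VIB-ESX7i-Live_A00.zip
# Confirm that the ISM VIB is now listed
esxcli software vib list | grep -i ism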


13
Configure NTP on the host
Use this procedure to configure the NTP on the host.

Steps
1. Log in to VMware ESXi host client as root.
2. In the left pane, click Manage.
3. Click System and Time & Date.
4. Click Edit NTP Settings.
5. Select Use Network Time Protocol (enable NTP client).
6. Select Start and Stop with host from the drop-down list.
7. Enter NTP IP Addresses.
8. Click Save.
9. Click Services > ntpd.
10. Click Start.


14
Rename the BOSS datastore
Use this procedure to rename the BOSS datastore.

Steps
1. Log in to VMware ESXi host client as root.
2. On the left pane, click Storage.
3. Click Datastores.
4. Right-click datastore1 and click Rename.
5. Enter PFMC_DS<last_ip_octet>.
6. Click Save.


15
Enable PCI passthrough for PERC H755 on
the PowerFlex management controller
Use this procedure to enable PCI passthrough on the PowerFlex management controller.

Steps
1. Log in to the VMware ESXi host.
2. Select Manage > Hardware > PCI Devices.
3. Select Broadcom / LSI PERC H755 Front Device > Toggle passthrough.
4. A reboot is required; defer it until after the SDC installation.
NOTE: Ignore the VMware popup warning: Failed to configure passthrough devices.


16
Install the PowerFlex storage data client
(SDC) on the PowerFlex management
controller
Use this procedure to install the PowerFlex storage data client (SDC) on the PowerFlex management controller.

Steps
1. Copy the SDC file to the local datastore on the VMware ESXi server.
2. Use SSH to log in to each VMware ESXi host as root.
3. Type the following command to install the storage data client (SDC): esxcli software component apply -d /
vmfs/volumes/PFMC_DS<last ip octet>/sdc.zip.
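For example, if the BOSS datastore was renamed PFMC_DS21 earlier in this guide (the datastore name is a placeholder for your environment), the install and a quick verification would look similar to:

# Install the SDC software component and confirm it is present
esxcli software component apply -d /vmfs/volumes/PFMC_DS21/sdc.zip
esxcli software component list | grep -i sdc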


17
Configure PowerFlex storage data client
(SDC) on the PowerFlex management
controller
Use this procedure to manually configure the SDC on the PowerFlex management controller 2.0.

Steps
1. To configure the SDC, generate 1 UUID per server (https://www.guidgenerator.com/online-guid-generator.aspx).

NOTE: Use the default UUID.

2. Use SSH to log in to the VMware ESXi host as root.


3. Substitute the new UUID, the pfmc-data1-vip, and the pfmc-data2-vip in the following command:

esxcli system module parameters set -m scini -p "IoctlIniGuidStr=<guid> IoctlMdmIPStr=pfmc-data1-vip,pfmc-data2-vip"

See PowerFlex management node cabling, for more information.

4. Type the following command to verify the scini configuration: esxcli system module parameters list -m scini | head
5. Reboot the PowerFlex management controller 2.0.
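As an illustration of steps 3 and 4 only, with a placeholder GUID and placeholder data VIP addresses (substitute the values for your environment):

esxcli system module parameters set -m scini -p "IoctlIniGuidStr=12345678-90ab-cdef-1234-567890abcdef IoctlMdmIPStr=192.168.152.50,192.168.160.50"
# Confirm the parameters were applied
esxcli system module parameters list -m scini | head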


18
Create a staging cluster and add a host
Use this procedure to create a staging cluster and add a host.

Steps
1. Create a staging cluster:
a. Log in to the VMware vSphere Client.
b. Right-click Datacenter and click New Cluster.
c. Enter the cluster name as Staging and click Next.
d. Verify the Summary and click Finish.
2. Add the host:
a. Click vCenter > Hosts and Clusters.
b. Right-click the staging cluster and click Add Host.
c. Enter FQDN of host.
d. Enter root username and password and click Next.
e. In the Security Alert dialog box, select the host and click OK.
f. Verify the summary and click Finish.


19
Assign VMware vSphere licenses
Use this procedure to assign VMware vSphere licenses to the new host.

Steps
1. Log in to the vCSA and from the menu, click Administration > Licensing > Licenses.
2. Click Assets, select the newly added VMware ESXi host, and click Assign License.
3. Select the license and click OK.

Related information
Add VMkernel adapter to the PowerFlex hyperconverged or ESXi-based compute-only node hosts
Add VMkernel adapter to the PowerFlex controller node hosts


20
Add PowerFlex controller nodes to an
existing dvSwitch
Use this procedure to add PowerFlex controller nodes to an existing distributed virtual switch.

About this task


The dvswitch names are for example only and may not match the configured system. Do not change these names; otherwise, a data unavailable or data loss event may occur.

Steps
1. Log in to the VMware vSphere Client.
2. Click Home and select Networking.
3. Right-click the oob_dvswitch and select Add and Manage Hosts.
4. In the wizard, select Add Hosts and click Next:
a. Click +New Hosts, select the installed node, and click OK.
b. Click Next.
c. Select the Manage Physical Adapters check box and click Next. Clear the remaining check boxes.
d. Select VMNIC4 as the Network Adapter.
e. Click Assign uplink.
f. Select Uplink1 and click OK.
g. Click Next > Next.
5. Right-click the fe_dvswitch and select Add and Manage Hosts:
a. Select Add hosts and click Next.
b. Click +New Hosts, select the installed node, and click OK.
c. Click Next.
d. Select the Manage Physical Adapters and Manage VMkernel Adapters check boxes and click Next. Clear the
remaining check boxes.
e. Select VMNIC2 and VMNIC6 as the Network Adapters.
f. Click Assign Uplink.
g. Select Uplink1 or LAG_FE-0 and click OK.
h. After assigning the VMNICs, click Next.
i. Select VMK0 and select Assign Port Group.
j. Select the port group for the VMK0 Network Label and click OK.
k. Click Next > Next.
l. Click Finish.
6. Right-click the be_dvswitch and select Add and Manage Hosts:
a. Select Add hosts and click Next.
b. Click +New Hosts, select the installed node, and click OK.
c. Click Next.
d. Select the Manage Physical Adapters and Manage VMkernel Adapters check boxes and click Next. Clear the
remaining check boxes.
e. Select VMNIC3 and VMNIC7 as the Network Adapters.
f. Click Assign Uplink.
g. Select Uplink1 or LAG_BE-0 and click OK.
h. Click Assign Uplink and select Uplink2 or LAG_BE-1, and click OK.
i. After assigning the VMNICs, click Next > Next.
j. Click Finish.
7. Click Home and select Hosts and Clusters:


a. Select the new VMware ESXi host.


b. Select Configure > Networking > Virtual Switches.
c. Double-click vSwitch0 and click X to remove it.
d. Click Yes to confirm.
e. Click the refresh icon and verify that vSwitch0 is removed.
f. Repeat this step for any other vSwitch (not dvswitch).

Related information
Add VMkernel adapter to the PowerFlex controller node hosts


21
Migrate the PowerFlex controller node to the
PowerFlex management controller 2.0
Use this procedure to migrate the PowerFlex R650 controller node to the PowerFlex management controller 2.0.

About this task


After the server is updated and is part of the networking stack for the controller cluster, move the host to the appropriate
location.

Steps
1. Log in to the PowerFlex controller node VMware vSphere Client.
2. Click Hosts and Clusters.
3. Expand both the Staging cluster and the PowerFlex management controller 2.0 cluster. This is the cluster containing the
current controller hosts.
4. Click the host in the Staging cluster and drag it into the PowerFlex management controller 2.0 cluster. Accept the defaults
to queries, if prompted.
5. Click Finish.
6. Right-click Staging Cluster, and click Delete > Yes.


22
Manually deploy the SVM
Use this procedure to manually deploy the SVM of the selected Intelligent Catalog.

About this task


Deploy the PowerFlex SVM on the PowerFlex controller nodes.
NOTE: Manually deploy the PowerFlex SVM on each of the PowerFlex controller nodes. The SVM on the PowerFlex
management controller 2.0 node is installed in the local storage. The SVM on the standalone PowerFlex management
controller 2.0 node is installed in the PERC-01 storage.

Steps
1. Log in to the VMware vCSA.
2. Select Hosts and Clusters.
3. Right-click the ESXi host > Select Deploy OVF Template.
4. Select Local file > Upload file > Browse to the SVM OVA template.
5. Click Open > Next.
6. Enter pfmc-<svm-ip-address> for VM name.
7. Click Next.
8. Identify the cluster and select the node that you are deploying. Verify that there are no compatibility warnings and click Next.
9. Click Next.
10. Review details and click Next.
11. Select Local datastore Thin Provision and Disable Storage DRS for this VM and click Next.
12. Select pfmc-sds-mgmt-<vlanid> for VM network and click Next.
13. Click Finish.

Configure the SVM


Use this procedure to configure the SVM.

About this task


The PowerFlex controller node must have one hard disk to enable PowerFlex installation. Ensure that PCI passthrough is enabled
on the PowerFlex management controller 2.0.

Steps
1. Right-click each SVM, and click Settings.
a. Set CPU to 12 CPUs with 12 cores per socket.
b. Select Reservation and enter the GHz value: 17.4.
c. Set Memory to 18 GB and check Reserve all guest memory (all locked).
d. Set Network Adapter 1 to the pfmc-sds-mgmt-<vlanid>.
e. Set Network Adapter 2 to the pfmc-sds-data1-<vlanid>.
f. Set Network Adapter 3 to the pfmc-sds-data2-<vlanid>.
g. Click Add New Device and select PCI Device (new single PowerFlex management controller).
h. Enable Toggle DirectPath IO.
i. Set PCI Device to PERC H755 Front Broadcom / LSI.
j. Click OK.
2. Power on the SVM and open a console.


3. Log in using the following credentials:


● Username: root
● Password: admin
4. To change the root password, type passwd and enter the new SVM root password twice.
5. Type nmtui, select Set system hostname, press Enter, and create the hostname.

Configure the pfmc-sds-mgmt-<vlanid> networking interface

Use this procedure to configure the networking interface.

Steps
1. From the nmtui, select Edit Connection.
2. Select Wired connection 1 to modify connection for pfmc-sds-mgmt-<vlanid>.

Network interface settings Description


Profile name Set to eth0.
Ethernet a. Select Show.
b. Leave the cloned MAC address line blank.
c. Set MTU to 1500.

IPv4 configuration Select Automatic and change to Manual then select Show.
Addresses Select Add and enter the IP address of this interface
(pfmc-sds-mgmt_ip) with the subnet mask (Example:
100.65.140.10/24).

Gateway Enter the gateway IP address.


DNS Server Enter DNS server IP address.
Search domains a. Enter the domain.
b. Select Require IPv4 addressing for this connection.
IPv6 configuration Select Automatic and change to Ignore.

3. Select Automatically connect.


4. Select Available to all users.
5. To exit the screen, select OK.

Configure the pfmc-sds-data1-<vlanid> networking interface

Use this procedure to configure the networking interface.

Steps
1. From the nmtui, select Edit Connection.
2. Select Wired connection 2 to modify connection for pfmc-sds-data1-<vlanid>.

Network interface settings Description


Profile name Set to eth1.
Ethernet a. Select Show.


Network interface settings Description


b. Leave the cloned MAC address line blank.
c. Set MTU to 9000.

IPv4 configuration Select Automatic and change to Manual then select Show.
Addresses Select Add and enter the IP address of this interface
(pfmc-sds-data1_ip).

Gateway Leave blank.


DNS Server Leave blank.
Search domains Leave blank.

3. Select Never use this network for the default route.


4. Select Require IPv4 addressing for this connection.
5. For IPv6 configuration, select Automatic and change to Ignore.
6. Select Automatically connect.
7. Select Available to all users.
8. Select OK.
9. To exit nmtui, select Quit.

Configure the pfmc-sds-data2-<vlanid> networking interface

Use this procedure to configure the networking interface.

Steps
1. From the nmtui, select Edit Connection.
2. Select Wired connection 3 to modify connection for pfmc-sds-data2-<vlanid>.

Network interface settings Description


Profile name Set to eth2.
Ethernet a. Select Show.
b. Leave the cloned MAC address line blank.
c. Set MTU to 9000.

IPv4 configuration Select Automatic and change to Manual then select Show.
Addresses Select Add and enter the IP address of this interface
(pfmc-sds-data2_ip).

Gateway Leave blank.


DNS Server Leave blank.
Search domains Leave blank.

3. Select Never use this network for the default route.


4. Select Require IPv4 addressing for this connection.
5. For IPv6 configuration, select Automatic and change to Ignore.
6. Select Automatically connect.
7. Select Available to all users.
8. To exit the screen, select OK.


Extend the MDM cluster from three to five nodes using SCLI

Use this procedure to extend the MDM cluster using SCLI.
It is critical that the MDM cluster is distributed across access switches and physical cabinets to ensure maximum resiliency and
availability of the cluster. The location of the MDM components should be checked and validated during every engagement,
and adjusted if found noncompliant with the published guidelines. If an expansion includes adding physical cabinets and access
switches, you should relocate the MDM cluster components. See MDM cluster component layouts for more information.
When adding new MDM or tiebreaker nodes to a cluster, first place the PowerFlex R640/R740xd storage-only nodes (if
available), followed by the PowerFlex R640/R740xd/R840 hyperconverged nodes.

Prerequisites
● Identify new nodes to use as MDM or tiebreaker.
● Identify the management IP address, data1 IP address, and data2 IP address (log in to each new node or SVM and run the ip addr command).
● Gather virtual interfaces for the nodes being used for the new MDM or tiebreaker, and note the interface of data1 and data2.
For example, for a PowerFlex storage-only node, the interface is bond0.152 and bond1.160. If it is an SVM, it is eth3 and
eth4.
● Identify the primary MDM.

Steps
1. SSH to each new node or SVM and assign the proper role (MDM or tiebreaker) to each.
2. Transfer the MDM, LIA, and SDS packages to the newly identified MDM cluster nodes.
NOTE: The following steps contain sample versions of PowerFlex files as examples only. Use the appropriate PowerFlex
files for your deployment.

3. To install the LIA, enter TOKEN=<flexos password> rpm -ivh /root/install/EMC-ScaleIO-lia-3.x-


x.xxx.el7.x86_64.rpm.
4. To install the MDM service:
● For the MDM role, enter MDM_ROLE_IS_MANAGER=1 rpm -ivh /root/install/EMC-ScaleIO-mdm-3.x-
x.xxx.el7.x86_64.rpm
● For the tiebreaker role, enter MDM_ROLE_IS_MANAGER=0 rpm -ivh /root/install/EMC-ScaleIO-mdm-3.x-x.xxx.el7.x86_64.rpm.
5. To install the SDS, enter rpm -ivh /root/install/EMC-ScaleIO-sds-3.x-x.xxx.el7.x86_64.rpm.
6. Open an SSH terminal to the primary MDM and log in to the operating system.
7. Log in to PowerFlex by entering scli --login --username admin --password <powerflex password>.
8. Enter scli --query_cluster to query the cluster. Verify that it is in three node cluster mode.
9. Add a new MDM by entering scli --add_standby_mdm --mdm_role manager --new_mdm_ip <new mdm data1,data2 IPs> --new_mdm_management_ip <mdm management IP> --new_mdm_virtual_ip_interfaces <list both interfaces, comma separated> --new_mdm_name <new mdm name>.
10. Add a new tiebreaker by entering scli --add_standby_mdm --mdm_role tb --new_mdm_ip <new tb data1,data2 IPs> --new_mdm_name <new tb name>.
11. Enter scli --query_cluster to find the IDs of the newly added standby MDM and standby tiebreaker.
12. To switch to a five node cluster, enter scli --switch_cluster_mode --cluster_mode 5_node --add_slave_mdm_id <Standby MDM ID> --add_tb_id <Standby tiebreaker ID>.

13. Repeat steps 1 through 9 to add standby MDM and tiebreakers on other PowerFlex nodes.
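The following sequence illustrates steps 9 through 12 with sample names and IP addresses; every value shown is a placeholder and must be replaced with the data1/data2 IP addresses, management IP address, interfaces, and names gathered in the prerequisites:

# Add a standby manager MDM and a standby tiebreaker
scli --add_standby_mdm --mdm_role manager --new_mdm_ip 192.168.152.26,192.168.160.26 --new_mdm_management_ip 192.168.105.26 --new_mdm_virtual_ip_interfaces bond0.152,bond1.160 --new_mdm_name MDM-26
scli --add_standby_mdm --mdm_role tb --new_mdm_ip 192.168.152.27,192.168.160.27 --new_mdm_name TB-27
# Capture the new standby MDM and tiebreaker IDs, then switch to a five node cluster
scli --query_cluster
scli --switch_cluster_mode --cluster_mode 5_node --add_slave_mdm_id <Standby MDM ID> --add_tb_id <Standby tiebreaker ID>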


Verify connectivity between the PowerFlex storage VMs

Use this procedure to verify connectivity between PowerFlex storage VMs.

Prerequisites
This verification uses an MTU of 8972 to verify jumbo frames between SVMs.

Steps
1. Log in to the VMware vCSA.
2. Right-click the SVM.
3. On the VM summary page, select Launch Web Console.
4. Log in to SVM as root.
5. To verify connectivity between SVMs, type ping -M do -s 8972 [destination IP].
6. Confirm connectivity for all interfaces to all SVMs.
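For example, to check jumbo-frame connectivity to the data1 interface of another SVM at a placeholder address:

# 8972-byte payload + 8-byte ICMP header + 20-byte IP header = 9000-byte frame; -M do prevents fragmentation
ping -M do -s 8972 -c 4 192.168.152.11

Repeat against each SVM and interface as described in step 6.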

Add SDSs
Use this procedure to add SDSs.

Steps
1. Log in to the MDM: scli --login --username admin.
2. To add SDSs, type scli --add_sds --sds_ip <pfmc-sds-data1-ip,pfmc-sds-data2-ip> --protection_domain_name PFMC --storage_pool_name PFMC-Pool --disable_rmcache --sds_name PFMC-SDS-<last ip octet>.
3. Repeat for each PowerFlex management controller.
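For example, for a controller node whose data1 and data2 addresses end in .11 (the IP addresses below are placeholders; the protection domain, storage pool, and naming convention follow the command in step 2):

scli --add_sds --sds_ip 192.168.152.11,192.168.160.11 --protection_domain_name PFMC --storage_pool_name PFMC-Pool --disable_rmcache --sds_name PFMC-SDS-11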

Identify the disks on each of the PowerFlex management controller nodes

Use this procedure to identify the disks on each of the PowerFlex management controller nodes.

Steps
1. Log in as root to each of the PowerFlex SVMs.
2. To identify all available disks on the SVM, type lsblk.
3. Repeat for all PowerFlex management controller node SVMs.

Add storage devices


Use this procedure to add storage devices.

Steps
1. Log in to the MDM: scli --login --username admin.
2. To add SDS storage devices, type scli --add_sds_device --sds_name <sds name> --storage_pool_name
<storage pool name> --device_path /dev/sd(x).
3. Repeat for all devices and for all PowerFlex management controller SVMs.
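For example, if lsblk on the SVM PFMC-SDS-11 reported the data devices /dev/sdb and /dev/sdc (the SDS name and device paths are placeholders for your environment), the commands would be:

scli --add_sds_device --sds_name PFMC-SDS-11 --storage_pool_name PFMC-Pool --device_path /dev/sdb
scli --add_sds_device --sds_name PFMC-SDS-11 --storage_pool_name PFMC-Pool --device_path /dev/sdc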


Add PowerFlex storage to new PowerFlex management controller nodes

Steps
1. Log in to the MDM: scli --login --username admin.
2. To query all storage data clients (SDC) to capture the SDC IDs, type scli --query_all_sdc.
3. To query all the volumes to capture volume names, type scli --query_all_volumes.
4. To map volumes to SDCs, type scli --map_volume_to_sdc --volume_name <volume name> --sdc_id <sdc
id> --allow_multi_map.
5. Repeat steps 2 through 4 for all volumes and SDCs.
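A sketch of the query-and-map sequence, using a placeholder volume name and a placeholder SDC ID taken from the query output:

# List SDC IDs and volume names, then map one volume to one SDC
scli --query_all_sdc
scli --query_all_volumes
scli --map_volume_to_sdc --volume_name pfmc-volume-01 --sdc_id 6f1b2c3d00000001 --allow_multi_map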

Set the spare capacity for the medium granularity storage pool

Use this procedure to set the spare capacity for the medium granularity storage pool.

Steps
1. Log in to the primary MDM, type: scli --login --username admin.
2. To modify the capacity pool, type scli --modify_spare_policy --protection_domain_name PFMC --
storage_pool_name PFMC-Pool --spare_percentage <percentage>.
NOTE: Spare percentage is 1/n (where n is the number of nodes in the cluster). For example, a three-node cluster's spare percentage is 34%.

3. Type Y to proceed.
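For example, a four-node PowerFlex management controller cluster would use 1/4 = 25 percent spare capacity (the protection domain and storage pool names below match the command in step 2):

scli --modify_spare_policy --protection_domain_name PFMC --storage_pool_name PFMC-Pool --spare_percentage 25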

License PowerFlex on PowerFlex management controller cluster

PowerFlex requires a valid license for production environments and provides support entitlements.

About this task


Verify PowerFlex licensing at the beginning of the implementation phase. The implementation is affected by any issues with the
license file.

Steps
1. Using the administrator credentials, log in to the jump server.
2. Copy the PowerFlex license to the primary MDM.
3. Log in to the primary MDM.
4. Run the following command to apply PowerFlex license: scli --mdm_ip <primary mdm ip> --set_license
--license_file <path to license file>
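For example, assuming the license file was copied to /root/powerflex.lic on the primary MDM and using a placeholder primary MDM IP address:

scli --mdm_ip 192.168.152.50 --set_license --license_file /root/powerflex.lic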


23
Add PowerFlex management controller
service to PowerFlex Manager
Use this procedure to add the PowerFlex management controller to PowerFlex Manager.

Prerequisites
Ensure the following conditions are met before you add an existing service:
● The VMware vCenter, PowerFlex gateway, switches, and hosts are discovered in the resource list.
● The PowerFlex gateway must be in the service.

NOTE: For PowerFlex management controller 2.0 with a PERC H755, the service will be in lifecycle mode.

Steps
1. In PowerFlex Manager, click Services > + Add Existing Service > Next.
2. On the Service Information page, enter a service name in the Name field.
3. Enter a description in the Description field.
4. Select Hyperconverged for the Type.
5. Select the Intelligent Catalog version applicable from the Firmware and Software Compliance.
6. Click Next.
7. Choose one of the following network automation types:
● Full network automation (FNA)
● Partial network automation (PNA)
NOTE: If you choose PNA, PowerFlex Manager skips the switch configuration step, which is normally performed
for a service with FNA. PNA allows you to work with unsupported switches. However, it also requires more manual
configuration before a deployment can proceed successfully. If you choose to use PNA, you give up the error handling
and network automation features that are available with a full network configuration that includes supported switches.

8. (Optional) In the Number of Instances field, provide the number of component instances that you want to include in the
template.
9. On the Cluster Information page, enter a name for the cluster component in the Component Name field.
10. Select values for the cluster settings:

Cluster settings Description


Target Virtual Machine Manager Select the VMware vCenter name where the cluster is
available.
Data Center Name Select the data center name where the cluster is available.
Cluster Name Select the name of the cluster you want to discover.
Target PowerFlex Gateway Select the name of the PowerFlex gateway you want to discover.
Target Protection Domain Select the name of the protection domain you want to
discover.
OS Image Choose your ESXi image.

11. Click Next.


12. On OS Credentials page, select the OS credential that you want to use for each node and SVM and click Next.
13. Review the inventory on the Inventory Summary page and click Next.


14. On the Network Mapping page, review the networks that are mapped to port groups and make any required edits and click
Next.
15. Review the Summary page and click Finish when the service is ready to be added.
16. Automatically migrate the vCLS VMs:
a. For storage pools, select PFMC-POOL.
b. Type MIGRATE VCLS VIRTUAL MACHINES.
c. Click Confirm.


24
Update the PowerFlex management
controller 2.0 service details
Use this procedure to update the PowerFlex management controller 2.0 service details.

Prerequisites
Ensure the following conditions are met before you update the service details:
● The VMware vCenter, PowerFlex gateway, switches, and hosts are discovered in the resource list.
● The PowerFlex gateway must be in the service.

Before you update the details for a service, ensure that you run inventory for the VMware vCenter and the PowerFlex
management controller 2.0 Gateway.
1. Log in to PowerFlex Manager.
2. Select the Resources page.
3. Click the PowerFlex management controller 2.0 Gateway and Management vCenter, and click Run Inventory. Wait for the
inventory to finish.

Steps
1. On the menu bar, click Services.
2. On the Services page, click the PowerFlex management controller 2.0 Gateway and Management vCenter, and in the right
pane, click View Details.
3. On the Services Details page, in the right pane, under Services Actions, click Update Service Details.
4. Review the OS Credentials page and click Next.
PowerFlex Manager shows all nodes and credentials, regardless of whether they are in the service. This enables you to
update the username and password for a node if it has changed.
5. Review the Inventory Summary page and click Next.
6. Review the Summary page and click Finish.


IV
Adding a PowerFlex management node to a
PowerFlex management controller 1.0 with
VMware vSAN
Use the procedures in this section to add a PowerFlex management node to a PowerFlex management controller 1.0 with
VMware vSAN.
Before adding a PowerFlex management node, you must complete the initial set of expansion procedures that are common to all
expansion scenarios covered in Performing the initial expansion procedures.
After adding a PowerFlex management node, see Completing the expansion.


25
Hardware requirement
Node chassis type: PowerFlex R640 node
Role: Controller
Data drives: 5 x SSD (1.92 TB)
Boot: BOSS
Data drives configuration: H740 controller
Network:
● rNDC: Dual port Mellanox CX4-LX (connected through 1x10 GB to access switches and operating at 10 G).
● PCIe: Dual port Mellanox CX4-LX (connected through 1x10 GB to access switches and operating at 10 G).
● PCIe: Intel X550 dual port 1 GbE (baseT) connected to the OOBM switch.
Description: Dual CPU - 192 GB RAM (small controller)


26
Upgrade the firmware
Use this procedure to upgrade the firmware.

Steps
1. In the web browser, enter https://<ip-address-of-idrac>.
2. From the iDRAC dashboard, click Maintenance > System Update > Manual Update.
3. Click Choose File. Browse to the release appropriate Intelligent Catalog folder and select the appropriate file.
Required firmware:
● Dell iDRAC or Lifecycle Controller firmware
● Dell BIOS firmware
● Dell BOSS Controller firmware
● Dell Intel X550 or X540 or i350 firmware
● Dell Mellanox ConnectX-5 EN firmware
● PERC H740P controller firmware
4. Click Upload.
5. Click Install and Reboot.


27
Configure enhanced HBA mode
Use this procedure to enable enhanced HBA mode on the PowerFlex management node with Dell PERC H740P mini RAID
controller cards.

Steps
1. Log in to BIOS setup.
2. Go to the device settings and click Integrated RAID Controller1:Dell <PERC H740P Mini> Configuration Utility.
3. Click Main menu > Controller management > Advanced Controller management > Manage controller Mode.
4. Click Switch to Enhanced HBA Controller Mode.
5. Select Confirm > Yes and click OK.
6. Click Back > Back > Back > Finish > Finish and click Yes to exit from the BIOS and restart.


28
Configure BOSS card
Use this procedure only if the BOSS card RAID1 is not configured.

About this task


PowerFlex Manager configures the BIOS on production hosts.

Steps
1. Launch the virtual console, select Boot from the menu, and select BIOS setup from Boot Controls to enter the system
BIOS.
2. Power cycle the server and enter the BIOS setup.
3. From the menu, click Power > Reset System (Warm Boot).
4. From System Setup main menu, select Device Settings.
5. Select AHCI Controller in SlotX: BOSS-X Configuration Utility.
6. Select Create RAID Configuration.
7. Select both the devices and click Next.
8. Enter VD_R1_1 for name and retain the default values.
9. Click Yes to create the virtual disk and then click OK to apply the new configuration.
10. Click Next > OK.
11. Select VD_R1_1 that was created and click Back > Finish > Yes > OK.
12. Select System BIOS.
13. Select Boot Settings and enter the following settings:
● Boot Mode: UEFI
● Boot Sequence Retry: Enabled
● Hard Disk Failover: Disabled
● Generic USB Boot: Disabled
● Hard-disk Drive Placement: Disabled
● Clean all Sysprep order and variables: None
14. Click Back > Finish > Finish and click Yes to reboot the node.
15. Boot the node into BIOS mode by pressing F2 during boot.
16. Select System BIOS > Boot Settings > UEFI Settings.
17. Select UEFI Boot Sequence to change the order.
18. Click AHCI Controller in Slot 1: EFI Fixed Disk Boot Device 1 and select + to move to the top.
19. Click Back > Back > Finish to reboot the node again.


29
Install VMware ESXi
Use this procedure to install VMware ESXi on the PowerFlex management controller.

Steps
1. Log in to iDRAC and perform the following steps:
a. Connect to the iDRAC interface and launch a virtual remote console from Dashboard and click Launch Virtual Console.
b. Select Virtual Media > Connect Virtual Media > Map CD/DVD.
c. Click Choose File and browse to the folder where the ISO file is saved, select it, and click Open.
d. Click Map Device > Close.
e. Click Boot > Virtual CD/DVD/ISO.
f. Click Yes to confirm boot action.
g. Click Power > Reset System (warm boot).
h. Click Yes.
2. Perform the following steps to install VMware ESXi:
a. On the VMware ESXi installer screen, press Enter to continue.
b. Press F11 to accept the license agreement.
c. Under Local, select ATA DELLBOSS VD as the installation location. If prompted, press Enter.
d. Select US Default as the keyboard layout and press Enter.
e. At the prompt, type the root password, and press Enter.
f. At the Confirm Install screen, press F11.
g. In Virtual Console, click Virtual Media > Disconnect Virtual Media.
h. Click Yes to un-map all devices.
i. Press Enter to reboot the PowerFlex management controller when the installation completes.


30
Configure VMware ESXi
Use this procedure to configure VMware ESXi on the PowerFlex node.

Steps
1. Press F2 to customize the system.
2. Enter the root password and press Enter.
3. Go to Direct Console User Interface (DCUI) > Configure Management Network.
4. Set Network Adapter to VMNIC2 and VMNIC0.
5. Set the ESXi Management VLAN ID to the required VLAN value.
6. Set the IPv4 ADDRESS, SUBNET MASK, and DEFAULT GATEWAY.
7. Select IPV6 Configuration > Disable IPV6 and reboot.
8. Press Esc to exit the network configuration and press Y to apply the changes.
9. Go to DNS Configuration and set the customer provided value.
10. Go to Custom DNS Suffixes and set the customer provided value.
11. Go to DCUI and select Troubleshooting Options.
12. Select Enable SSH.
13. Select Enable ESXi Shell.
14. Press Esc to exit from troubleshooting mode options.
15. Go to DCUI IPv6 Configuration.
16. Select Configure Management Network > IPv6 Configuration.
17. Disable IPv6.
18. Press ESC to return to the DCUI.
19. Type Y to commit the changes and the node restarts.
20. Verify the host connectivity by pinging the IP address from the jump server using the command prompt.


31
Modify the existing VM network
Use this procedure to modify the existing VM network.

Steps
1. Log in to VMware ESXi host client as root.
2. On the left pane, click Networking.
3. Right-click VM Network and click Edit Settings.
4. Change VLAN ID to flex-node-mgmt-<vlanid> and click Save.


32
Configure NTP on the host
Use this procedure to configure the NTP on the host.

Steps
1. Log in to VMware ESXi host client as root.
2. In the left pane, click Manage.
3. Click System and Time & Date.
4. Click Edit NTP Settings.
5. Select Use Network Time Protocol (enable NTP client).
6. Select Start and Stop with host from the drop-down list.
7. Enter NTP IP Addresses.
8. Click Save.
9. Click Services > ntpd.
10. Click Start.


33
Create a data center and add a host
Use this procedure to create a data center and then add a host to the data center.

Steps
Create a data center:
1. Log in to the VMware vSphere Client.
2. Right-click vCenter and click New Datacenter.
3. Enter data center name as PowerFlex Management and click OK.
Add a host to the data center:
NOTE: The vCLS VMs are deployed on the local datastore when the node is added to the cluster from vCSA 7.0Ux. These VMs are deployed automatically by VMware vCenter. When you add the host to the cluster, they are used for managing the HA and DRS services on the cluster.
4. Right-click Datacenter and click New Cluster.
5. Enter the cluster name as PowerFlex Management Cluster and retain the default for DRS, HA, and vSAN. Click OK.
6. Right-click the cluster and click Add Host to add multiple hosts.
7. Enter FQDN of host.
8. Enter root username and password and click Next.
9. Select the certificate and click OK for certificate alert.
10. Verify the Host Summary and click Next.
11. For VMware ESXi prior to version 7.0 Ux, perform:
a. Select the valid license and click Next.
b. Select Disabled and click Next > Next.
12. Verify the summary and click Finish.
NOTE: If the node goes into maintenance mode, right-click the VMware ESXi host and click Maintenance Mode > Exit
Maintenance Mode.


34
Add hosts to an existing dvswitch
Use this procedure to add hosts to an existing dvswitch on the PowerFlex management controller.

Steps
1. Log in to VMware vCenter with administrator credentials.
2. Right-click an existing fe_dvswitch.
3. Click Add and Manage Hosts.
4. Click +New Hosts, select the newly added hosts, and click OK.
5. Click Assign Uplink and select lag1-0 for the appropriate VMNIC.
6. Click Assign Uplink and select lag1-1 for the appropriate VMNIC.
7. Click Next and assign port group.
8. Select the mgmt vlan port group and click OK.
9. Retain the default values and click Finish.
10. Repeat step 1 to 9 for be_dvswitch.
11. Repeat these steps for the remaining hosts.


35
Add VMkernel adapter to the hosts
Use this procedure to add VMkernel adapter to the hosts.

Steps
1. Log in to the VMware vSphere Client.
2. Select the host and click Configure in the right pane.
3. Under Networking tab, select the VMkernel adapter.
4. Click Add.
5. Select Connection type as VMkernel network adapter and click Next.
6. Select Target device as Existing network and click Browse to select the appropriate port group.
7. In the port properties, select Enable services, select the appropriate service, and click Next.
For example, for vMotion, select vMotion and for vSAN, select vSAN. For any other networks, retain the default service.
8. In IPV4 Settings, select Use static IPV4 settings, provide the appropriate IP address and subnet details, and click Next.
9. Verify the details on Ready to Complete and click Finish.
10. Repeat the steps 2 through 9 to create the VMkernel adapters for the following port groups:
● flex-vmotion-<vlanid>
● flex-vsan-<vlanid>
● flex-vcsa-ha-<vlanid>


36
Configure vSAN on management cluster
Use this procedure to configure the vSAN on management cluster.

Steps
1. Right-click the management cluster, select Configure, and click vSAN.
2. Select single site cluster.
3. Leave all the values at their defaults and click Next.
4. Claim the disks from all nodes.
5. Select Claim for Capacity tier and Cache tier.
6. Retain the default values and click Next.
7. Click Finish.


37
Migrate VMware vCenter server appliance
7.0 from PERC-01 datastore to vSAN
datastore
Use this procedure to migrate VMware vCenter server appliance 7.0 from PERC-01 datastore to vSAN datastore.

Steps
1. Log in to VMware vCenter using the admin credentials.
2. Right-click the VM and select Migrate.
3. Select Change both compute and storage and click Next.
4. Select the new node and click Next.
5. Select the vSAN datastore and click Next.
6. Retain the default values and click Next.
7. Click Finish.
8. Repeat steps 2 through 7 for the remaining VMs. Verify all VMs are migrated to vSAN datastore.


38
Claim disks from PowerFlex management
node
Use this procedure to claim disks from the PowerFlex management node.

Steps
1. Log in to the VMware vCenter using admin credentials.
2. Place the node in maintenance mode.
3. Select the datastore and click Delete to delete the RAID datastore from the PowerFlex management node.
4. Restart the node and log in to BIOS.
5. Delete the virtual disk from the host and restart.
6. Log in to BIOS and configure the enhanced HBA controller mode:
a. Go to the device settings and click Integrated RAID Controller1:Dell <PERC H740P Mini> Configuration Utility.
b. Click Main menu > Controller management > Advanced Controller management > Manage controller Mode.
c. Click Switch to Enhanced HBA Controller Mode.
d. Select Confirm > Yes and click OK.
e. Click Back > Back > Back > Finish > Finish and click Yes to exit from the BIOS and restart.
7. Restart the node.
8. Select the cluster and vSAN.
9. Claim the disks and select Claim for Capacity tier and Cache tier.


39
Enable VMware vCSA high availability on
PowerFlex management controller vCSA
Use this procedure to enable VMware vCSA high availability on the PowerFlex management controller vCSA.

Prerequisites
Valid credentials are required for the PowerFlex management controller vCSA.

Steps
1. Go to the vCenter instance from Host and Cluster view in the VMware vSphere Client.
2. From the Configure tab, click Settings > vCenter HA.
3. Click Set Up vCenter HA.
4. On the Resource Settings page, perform the following steps:
a. For an active node, click Browse and select the vCSA HA VLAN from the list.
b. For a passive node, perform the following steps:
i. Click Edit.
ii. Select the datacenter.
iii. Click Next.
iv. Select the host and click Next.
v. Select the vSAN datastore and click Next.
vi. Select the network for management and vCenter HA and click Next.
vii. Verify the details on the Summary page and click Finish.
c. For a witness node, perform the following steps:
i. Select the datacenter and click Next.
ii. Select the host and click Next.
iii. Select the vSAN datastore and click Next.
iv. Select the network for vCenter HA and click Next.
v. Verify the details on the Summary page and click Finish.
5. On the IP settings page, enter the vCenter HA IP addresses for active, passive, and witness nodes and click Finish.
6. Wait for the task to complete and verify that the vCenter HA status appears as Mode:Enabled and State:Healthy.


40
Migrate vCLS VMs
Use the following procedure to migrate the vSphere Cluster Services (vCLS) VMs manually on the controller setup.

About this task


For VMware vSphere 7.0Ux and above, the vCenter Server Appliance (vCSA) deploys the vCLS VM to manage the HA and
vSphere Distributed Resource Scheduler (DRS) services on the host when it is added to the cluster. This task helps to migrate
the vCLS VMs to vSAN datastore on the controller environment.
See PFxM - vSphere Clustered Services - vCLS VMs for more information.

Steps
1. Log in to VMware vCSA HTML client using the credentials provided in the Workbook.
2. Go to VMs and templates inventory or Administration > vCenter Server Extensions > vSphere ESX Agent Manager
> VMs to view the VMs.
The VMs are in the vCLS folder after the host is added to the cluster.
3. Right-click the VM and click Migrate.
4. In the Migrate dialog box, click Yes.
5. On the Select a migration type page, select Change storage only and click Next.
6. On the Select storage page, select vSAN datastore for controller nodes.
7. Click Next > Finish.
8. Repeat these steps to migrate all the vCLS VMs.


V
Adding a PowerFlex R650/R750/R6525 node
to a PowerFlex Manager service in managed
mode
Expanding a PowerFlex appliance requires the use of an existing PowerFlex gateway. Use the procedures in this section to add a
PowerFlex R650/R750/R6525 node to the PowerFlex Manager services discovered in managed mode.
There are four types of PowerFlex appliance environment expansions:
● Expanding a PowerFlex management controller 2.0 with HBA355i. The PowerFlex management controller 2.0 requires a
separate PowerFlex Gateway.
● Expanding an existing PowerFlex Manager service. This option also expands the existing protection domain if the service is
hyperconverged, storage-only, or compute-only.
● Creating a new PowerFlex Manager service. If the service is hyperconverged or storage-only, this option lets you either
expand to the existing protection domain or create a protection domain.
● Expanding a service in PowerFlex Manager that is discovered in a lifecycle mode. See Adding a PowerFlex R640/R740xd/
R840 node to a PowerFlex Manager service in lifecycle mode for more information.
Before adding a PowerFlex node in managed mode, complete the initial set of expansion procedures that are common to all
expansion scenarios covered in Performing the initial expansion procedures.
The PowerFlex R650 controller node with PowerFlex can have either of the following RAID controllers:
● PERC H755: PowerFlex Manager puts a PowerFlex management controller 2.0 with PERC H755 service in lifecycle mode.
If you are adding a PowerFlex controller node to a PowerFlex management controller 2.0 with PowerFlex, delete the RAID
and convert the physical disks to non-RAID disks. Use the manual expansion procedures in Converting a PowerFlex controller
node with a PERC H755 to a PowerFlex management controller 2.0 and Adding a PowerFlex controller node with a PERC
H755 to a PowerFlex management controller 2.0.
● HBA355: PowerFlex Manager puts a PowerFlex management controller 2.0 with HBA355 service in managed mode. Use
PowerFlex Manager expansion procedures in this section to add a PowerFlex controller node with HBA355 to a PowerFlex
management controller 2.0.
NOTE: PowerFlex Manager 3.8 onwards, deployment and management of Windows-based PowerFlex compute-only nodes
is not supported. See Deploying Windows-based PowerFlex compute-only nodes manually to deploy the Windows-based
PowerFlex compute-only nodes manually.
After adding a PowerFlex node in managed mode, see Completing the expansion.


41
Discovering the new resource
Whether expanding an existing service or adding a service, the first step is to discover the new resource.

Steps
1. Connect the new NICs of the PowerFlex appliance node to the access switches and the out-of-band management switch
exactly like the existing nodes in the same protection domain. For more details, see Cabling the PowerFlex R650/R750/
R6525 nodes.
2. Ensure that the newly connected switch ports are not shut down.
3. Set the IP address of the iDRAC management port, username, and password for the new PowerFlex appliance nodes.
4. Log in to PowerFlex Manager.
5. Discover the new PowerFlex appliance nodes in the PowerFlex Manager resources. For more details, see Discover resources
in the next section.

Discover resources
Use this procedure to discover and allow PowerFlex Manager access to resources in the environment. Provide the management
IP address and credential for each discoverable resource.

Prerequisites
Verify that the iDRAC network settings are configured. See Configure iDRAC network settings for more information.

About this task


Dell EMC recommends using separate operating system credentials for SVM and VMware ESXi. For information about creating
or updating credentials in PowerFlex Manager, click Settings > Credentials Management and access the online help.
During node discovery, you can configure iDRAC nodes to automatically send alerts to PowerFlex Manager. You can also change
the IP address of the iDRAC nodes and discover them. See Reconfigure the discovered nodes with new management IP and
credentials in the Dell EMC PowerFlex Appliance Administration Guide. If the PowerFlex nodes are not configured for alert
connector, Secure Remote Services does not receive critical or error alerts for those resources.
The following table describes how to configure resources in managed mode:

● Resource type: PowerFlex nodes
● Resource state: Managed. If you want to perform firmware updates or deployments on a discovered node, change the default state to managed. Perform firmware or catalog updates from the Services page or the Resources page.
● Example: PowerEdge iDRAC management IP address

NOTE: For partial network deployments, you do not need to discover the switches. The switches need to be pre-
configured. For sample configurations for Dell PowerSwitch, Cisco Nexus, and Arista switches, see the Dell EMC PowerFlex
Appliance Administration Guide.
The following are the specific details for completing the Discovery wizard steps:


● Node: IP address range 192.168.101.21-192.168.101.24; resource state: Managed; discover into node pool: PowerFlex node pool; credentials name: PowerFlex appliance iDRAC default; username: root; password: calvin; SNMPv2 community string: customer provided.
● Switch: IP address range 192.168.101.45-192.168.101.46; resource state: Managed; discover into node pool: N/A; credentials name: access switches; username: admin; password: admin; SNMPv2 community string: public.
● VM Manager: IP address 192.168.105.105; resource state: Managed; discover into node pool: N/A; credentials name: vCenter; username: administrator@vsphere.local; password: P@ssw0rd!; SNMPv2 community string: N/A.
● Element Manager**: IP addresses 192.168.105.120 and 192.168.105.121; resource state: Managed; discover into node pool: N/A; credentials name: CloudLink; username: secadmin; password: Secadmin!!; SNMPv2 community string: N/A.

** This is optional. For a new CloudLink Center deployment, the CloudLink Center is discovered automatically.

Prerequisites
● Configure the iDRAC network settings.
● Gather the IP addresses and credentials that are associated with the resources.
NOTE: PowerFlex Manager also allows you to use name-based searches to discover a range of nodes whose iDRAC IP addresses were assigned through DHCP. For more information about this feature, see the Dell EMC PowerFlex Manager Online Help.

Steps
1. On the PowerFlex Manager Getting Started page, click Discover Resources.
2. On the Welcome page of the Discovery Wizard, read the instructions and click Next.
3. On the Identify Resources page, click Add Resource Type. From the Resource Type list, select the resource that you
want to discover.
4. Enter the management IP address of the resource in the IP/Hostname Range field. To discover a resource in an IP range,
provide a starting and ending IP address.
5. In the Resource State list, select Managed or Unmanaged.
6. For PowerFlex node, to discover resources into a selected node pool instead of the global pool (default), select the node
pool from the Discover into Node Pool list.
7. Select the appropriate credential from the Credentials list. See the table above for details.
8. For PowerFlex node, if you want PowerFlex Manager to automatically reconfigure the iDRAC IP addresses of the nodes it
discovers, select the Reconfigure discovered nodes with new management IP and credentials check box. This option
is not selected by default, because it is faster to discover the nodes if you bypass the reconfiguration.
NOTE: iDRAC can also be discovered using a hostname.

NOTE: For the Resource Type, you can use a range with hostname or IP address, provided the hostname has a valid
DNS entry.

9. For PowerFlex node, select the Auto configure nodes to send alerts to PowerFlex Manager check box to have
PowerFlex Manager automatically configure nodes to send alerts to PowerFlex Manager.
10. Click Next to start discovery. On the Discovered Resources page, select the resources from which you want to collect
inventory data and click Finish. The discovered resources are listed on the Resources page.

Related information
Configure iDRAC network settings


42
Sanitize the NVDIMM
Use this procedure to sanitize the NVDIMM (if present) on the newly added node before expansion of the node.

Prerequisites
● Ensure that the NVDIMM version is the same as on the other nodes in the cluster.
To verify the NVDIMM version, perform the following:
1. Log in to the iDRAC.
2. Go to System > Inventory.
3. Expand the Firmware inventory and look for entries describing DIMMs.
The following table describes what steps to take in the iDRAC or system BIOS depending on the NVDIMM firmware:

NVDIMM firmware on the newly shipped node Action


Higher than the firmware in production 1. Downgrade the NVDIMM firmware in iDRAC.
2. Sanitize the NVDIMM before adding the node to the
cluster.
For more information, see the following knowledge base
articles:
○ Steps to take if NVDIMM firmware on newly shipped
nodes have higher firmware than RCM and/or intelligent
catalog
○ How to remove NVDIMMs from existing cluster in order
to upgrade or downgrade firmware

Lower than the firmware running in production ○ Upgrade the NVDIMM firmware in iDRAC.
NOTE: Sanitizing the NVDIMM for a firmware
upgrade is not required.
● Perform the following steps to upload the NVDIMM firmware:
1. From the iDRAC Web Browser section, click Maintenance > System Update.
2. Click Choose File. Browse to the appropriate intelligent catalog release version folder and select the NVDIMM firmware
file, and click Upload.
3. Click Install and Reboot or Install Next Reboot.
The Updating Job Queue message displays.
4. Click the Job Queue page to monitor the progress of the install.

Steps
1. Reboot the server.
2. Press F2 immediately to enter System Setup.
3. Go to System BIOS > Memory Settings > Persistent Memory > NVDIMM-N Persistent Memory.
System BIOS displays the NVDIMM information for the system.
4. Select the NVDIMMs installed on the node.
5. Find the Sanitize NVDIMM setting in the list and select the Enabled option.
A warning appears that NVDIMM data will be erased if changes are saved when exiting BIOS.
6. Click OK.
7. Click Back > Back > Back to exit to System BIOS Settings, and then click Finish > Yes > OK.
8. Click Finish, and then at the prompt click OK.
The system reboots.


43
Expanding a PowerFlex appliance service
Use this section to expand an existing service in a PowerFlex appliance environment.

Adding a compatibility management file


Use this procedure to add or edit a compatibility management file to PowerFlex Manager.

About this task


This procedure can be performed using PowerFlex Manager 3.7.x or higher. When you attempt an upgrade, PowerFlex Manager
warns you if the current version of the software is incompatible with the target version or if the Intelligent Catalog version
currently loaded on the virtual appliance is incompatible with the target compliance version. To determine the valid path,
PowerFlex Manager uses information provided in the compatibility matrix file.

Steps
1. Log in to PowerFlex Manager.
2. On the menu bar, click Settings and select Virtual Appliance Management.
3. In the Compatibility Management section, click Add/Edit.
4. If you are using Secure Remote Services, click Download from Secure Remote Services (Recommended).
5. If you are not using Secure Remote Services, download the compatibility file from Dell Technologies Support site to the jump
server.
6. Click Upload from Local to use a local file. Then, click Choose File to select the GPG file and click Save.

Add PowerFlex nodes to a service


After the additional PowerFlex node is discovered, you must add it to a service for it to be managed within PowerFlex Manager.

About this task


If you are adding PowerFlex storage-only nodes without bonding, perform the manual configuration after the expansion. See
Configure a high availability bonded management network for more information.

Steps
1. On the menu bar, click Services.
2. Select the service to which you are adding a PowerFlex node to view its details.
3. On the Service Details page, click Add Resources. Select Add Nodes.
4. In the Duplicate Node wizard:
a. From the Resource to Duplicate list, select an existing node that will be duplicated on the additional node.
b. In the Number of Instances box, set the number of instances to 1 and click Next.

c. Under PowerFlex Settings, specify the PowerFlex Storage Pool Spare Capacity setting. For replication-enabled
services, verify and set the journal capacity depending on the requirement.
d. Under OS Settings, set the Host Name Selection. If you select Specify at Deployment, provide a name for the host
in the Host Name field. If you select Auto-Generate, specify a template for the name in the Host Name Template
field.
e. If you are adding a node to a hyperconverged service, specify the Host Name Selection under SVM OS Settings and
provide details about the hostname, as you did for the OS Settings.


f. In the IP Source box, enter an IP address.


g. Under Hardware Settings, in the Node Source box, select Node Pool or Manual Entry.
h. In the Node Pool box, select the node pool. Alternatively, if you select Manual Entry, select the specific node in the
Choose Node box.
i. Under PowerFlex Settings, specify the Fault Set for a node:
NOTE: If the PowerFlex configuration includes fault sets, contact Dell support for assistance. Do not proceed with the
procedure until you have received guidance from a support representative.
● PowerFlex Manager Selected Fault Set instructs PowerFlex Manager to select the fault set name based on the
template settings.
● fault-set-name enables you to select one of the fault sets in an existing protection domain.
You can add nodes within a fault set, but PowerFlex Manager does not allow you to add a new fault set within the same
service. To add a new fault set, you need to deploy a separate service with settings for the fault set you want to create.
j. Click Next.
k. Review the Summary page and click Finish.
If the node you are adding has a different type of disk than the base deployment, PowerFlex Manager displays a banner
at the top of the Summary page to inform you of the different disk types. You can still complete the node expansion.
However, your service may have sub-optimal performance.

Based on the component type, the required settings and properties are displayed automatically and can be edited as
permitted for a node expansion.

5. Click Save to deploy the PowerFlex node.


When the job is complete, ensure that the node is deployed successfully.

License PowerFlex on PowerFlex management controller cluster
PowerFlex requires a valid license for production environments and provides support entitlements.

About this task


Verify PowerFlex licensing at the beginning of the implementation phase, because any issues with the license file affect the
implementation.

Steps
1. Using the administrator credentials, log in to the jump server.
2. Copy the PowerFlex license to the primary MDM.
3. Log in to the primary MDM.
4. Run the following command to apply the PowerFlex license:

   scli --mdm_ip <primary mdm ip> --set_license --license_file <path to license file>
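After the license is applied, you can optionally confirm it from the same primary MDM session. This is a standard PowerFlex CLI query, shown here only as an optional check (it assumes you are still logged in to the MDM with scli):

   scli --query_license

The output should reflect the installed license and its entitlements.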

Verify newly added SVMs or storage-only nodes machine status in CloudLink Center
Use this procedure to verify the machine status of newly added SVMs or PowerFlex storage-only nodes in CloudLink Center.

Steps
1. Log in to CloudLink Center using secadmin credentials.
2. Click Agents > Machines.
3. Ensure that the status of newly added machines is in Connected state.


4. Ensure the following, depending on the drive type:
● Self-encrypting drives (SEDs): The Status of the Devices and SED appear as Encrypted HW and Managed,
respectively.
● Non-SEDs: The Status of the Devices for each newly added machine appears as Encrypted.


44
Expanding a PowerFlex appliance with a new
service
You can expand a PowerFlex appliance environment with a new service, either by cloning a template or by editing an existing
template.
Ensure the new PowerFlex appliance nodes are discovered. See the following sections for details about each step mentioned
here.
● Clone a template. You can also edit an existing template. The template that you edit depends on the expansion requirements.
● Deploy a service using the newly created and published template.
● Add a volume.
● After expanding the PowerFlex hyperconverged node or PowerFlex storage-only node, redistribute the MDM cluster.
Redistributing the MDM cluster is not applicable for a PowerFlex compute-only node.

Cloning a template
The clone feature allows you to copy an existing template into a new template. A cloned template contains the components that
existed in the original template. You can edit it to add additional components or modify the cloned components.

Prerequisites
A minimum of two logical data networks are supported. Optionally, you can configure four logical data networks. Verify the
number of logical data networks configured in an existing setup. Accordingly, configure the logical data networks while creating
a template.

Steps
1. Log in to PowerFlex Manager.
2. From the PowerFlex Manager menu bar, click Templates > My Templates.
3. Select a template, and then click Clone in the right pane.
4. In the Clone Template dialog box, enter a template name in the Template Name field.
5. Select a template category from the Template Category list. To create a template category, select Create New Category.
6. In the Template Description box, enter a description for the template.
7. To specify the version to use for compliance, select the version from the Firmware and Software Compliance list or
choose Use PowerFlex Manager appliance default catalog.
You cannot select a minimal compliance version for a template, because it only includes server firmware updates. The
compliance version for a template must include the full set of compliance update capabilities. PowerFlex Manager does not
show any minimal compliance versions in the Firmware and Software Compliance list.

8. Indicate Who should have access to the service deployed from this template by selecting one of the following options:
● Grant access to Only PowerFlex Manager Administrators.
● Grant access to PowerFlex Manager Administrators and Specific Standard and Operator Users. Click Add
User(s) to add one or more standard and or operator users to the list. Click Remove User(s) to remove users from the
list.
● Grant access to PowerFlex Manager Administrators and All Standard and Operator Users.
9. Click Next.
10. On the Additional Settings page, provide new values for the Network Settings, OS Settings, Cluster Settings,
PowerFlex Gateway Settings, and Node Pool Settings.
If you clone a template that has a Target CloudLink Center setting, the cloned template shows this setting in the Original
Target CloudLink Center field. Change this setting by selecting a new target for the cloned template in the Select New
Target CloudLink Center setting.


When defining a template, you choose a single CloudLink Center as the target for the deployed service. If the CloudLink
Center for the service shuts down, PowerFlex Manager loses communication with the CloudLink Center. If the CloudLink
Center is part of a cluster, PowerFlex Manager moves to another CloudLink Center when you update the service details.

11. Click Finish.


12. Edit the newly cloned template.
13. Select the VMware cluster and click Edit.
14. Verify the number of node instances to expand and click Continue.
15. Verify the Cluster Settings.
16. Configure the virtual distributed switch (VDS) settings:
a. Under vSphere VDS Settings, click Configure VDS Settings.
b. Click Use Existing Port Groups, because this procedure uses an existing VDS port group configuration, and click Next.
c. Verify the VDS switch names and click Next.
d. Select the Port Group Name for each VLAN on both the VDS (cust_dvswitch and flex_dvswitch) and click Next.
e. On the Advanced Networking page, choose the MTU size. The default value for Hypervisor Management network is
1500. The default value for vMotion network is 9000.
f. Click Next.
g. Verify all the details on the Summary, click Finish, and confirm.
h. Click View All Settings and verify the configuration details for VMware cluster, PowerFlex cluster, and Hyperconverged.
i. Click Publish Template.

Adding a compatibility management file


Use this procedure to add or edit a compatibility management file to PowerFlex Manager.

About this task


This procedure can be performed using PowerFlex Manager 3.7.x or higher. When you attempt an upgrade, PowerFlex Manager
warns you if the current version of the software is incompatible with the target version or if the Intelligent Catalog version
currently loaded on the virtual appliance is incompatible with the target compliance version. To determine the valid path,
PowerFlex Manager uses information provided in the compatibility matrix file.

Steps
1. Log in to PowerFlex Manager.
2. On the menu bar, click Settings and select Virtual Appliance Management.
3. In the Compatibility Management section, click Add/Edit.
4. If you are using Secure Remote Services, click Download from Secure Remote Services (Recommended).
5. If you are not using Secure Remote Services, download the compatibility file from Dell Technologies Support site to the jump
server.
6. Click Upload from Local to use a local file. Then, click Choose File to select the GPG file and click Save.

Deploy a service
Use this procedure to deploy a service. You cannot deploy a service using a template that is in draft state. Publish the template
before using it to deploy a service.

Steps
1. On the menu bar, click Services > Deploy New Service.
2. On the Deploy Service page, perform the following steps:
a. From the Select Published Template list, select the previously defined and published hyperconverged template to
deploy the service.
b. Enter the Service Name and Service Description that identifies the service.
c. To specify the version to use for compliance, select the version from the Firmware and Software Compliance list or
select Use PowerFlex Manager appliance default catalog.


PowerFlex Manager checks the VMware vCenter version to determine if it matches the VMware ESXi version for the
selected compliance version. If the VMware ESXi version is greater than the vCenter version, PowerFlex Manager blocks
the service deployment and displays an error. PowerFlex Manager instructs you to upgrade vCenter first or use a
different compliance version that is compatible with the installed vCenter version.
NOTE: Changing the firmware repository might update the firmware level on nodes for this service. The global
default firmware repository maintains the firmware on the shared devices.

d. Select one of the options from Who should have access to the service deployed from this template? drop-down
list.
NOTE: For a PowerFlex hyperconverged or storage-only node deployment, if you want to use CloudLink encryption,
perform the following:
i. Verify that CloudLink Center is deployed.
ii. In the template, under Node settings, select Enable Encryption (Software Encryption/Self Encrypting
Drive).
iii. Under PowerFlex Cluster settings, select CloudLink Center.

3. Click Next.
4. On the Deployment Settings page, configure the required settings. You can override any of the cluster settings that are
specified in the template.
If you are deploying a service with CloudLink, ensure that the correct CloudLink Center is displayed under the CloudLink
Center Settings.
5. To configure the PowerFlex Settings for a hyperconverged or storage-only service that has replication-enabled in the
template, specify the Journal Capacity. The default journal capacity is 10% of the overall capacity, but you can customize
it as required.
6. To configure the PowerFlex Settings, select one of the following options for PowerFlex MDM Virtual IP Source:
● PowerFlex Manager Selected IP instructs PowerFlex Manager to select the virtual IP addresses.
● User Entered IP enables you to specify the IP address manually for each PowerFlex data network that is part of the
node definition in the service template.
NOTE: If you are using a PowerFlex Manager version prior to 3.8, verify that the correct disk type (NVMe, SSD, or
HDD) is selected. From the Deployment Settings page, select PowerFlex Setting > Storage Pool disk type. Ensure
that you select the correct disk type: (NVMe or SSD).

7. To configure OS Settings, select an IP Source.


To manually enter the IP address, select User Entered IP. From the IP Source list, select Manual Entry. Then enter the IP
address in the Static IP Address box.

8. To configure Hardware Settings, select the node source from the Node Source list.
● If you select Node Pool, you can view all user-defined node pools and the global pool. Standard users can see only the
pools for which they have permission. Select the Retry On Failure option to ensure that PowerFlex Manager selects
another node from the node pool for deployment if any node fails. Each node can be retried up to five times.
● If you select Manual Entry, the Choose Node list is displayed. Select the node by its Service Tag for deployment from
the list.
9. Click Next.
10. On the Schedule Deployment page, select one of the following options and click Next:
● Deploy Now—Select this option to deploy the service immediately.
● Deploy Later—Select this option and enter the date and time to deploy the service.
11. Review the Summary page.
The Summary page gives you a preview of what the service will look like after the deployment.
12. Click Finish when you are ready to begin the deployment. For more information, see PowerFlex Manager online help.


Add volumes with PowerFlex Manager


Use this procedure to add volumes using PowerFlex Manager.

About this task


After the hyperconverged deployment is complete, PowerFlex Manager automatically creates two 16 GB thin-provisioned
volumes named powerflex-service-vol-1 and powerflex-service-vol-2, and two datastores named
powerflex-esxclustershotname-ds1 and powerflex-esxclustershotname-ds2, which are mapped to the vCSA. The vCLS VMs
are migrated to the service datastores to maintain the HA cluster. The example in this procedure is for adding new volumes
to a hyperconverged service. For compute-only nodes, add existing volumes. See PowerFlex Manager Online Help for more
information.
To create additional volumes, complete the following procedure for a hyperconverged deployment:

Steps
1. On the Services page, open the service that was deployed earlier.
2. Under Resource Actions, click Add Resources > Add Volume > Create New Volume > Next.
3. Click Add New Volume.
4. Enter the following values:

Field Value
Volume 1
Volume Name Create New Volume …
New Volume Name Volume1
Datastore Name Create New Datastore
New Datastore Datastore1
Storage Pool Select the storage pool from the drop-down
Enable Compression Select this check box (if compression is enabled for
deployment)
Volume Size 8 (or any multiple of 8 )
Volume Type Thick
Volume 2
Volume Name Create New Volume …
New Volume Name Volume2
Datastore Name Create New Datastore
New Datastore Datastore2
Storage Pool Do not change this option.
Enable Compression Select this check box ( if Compression is enabled for
deployment)
Volume Size 8 (or any multiple of 8)
Volume Type Thick

5. Click Next > Finish.


6. Review the summary and click Finish.
7. You can monitor the volume creation progress on the Jobs page or the Recent Activity section of the Resources page.
8. After the volumes are added, the datastores are associated with the cluster in the VMware vSphere being used to manage
the cluster.
In the example used in this document, the VMware vSphere instance is 192.168.105.105.
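If you want to confirm the result outside of PowerFlex Manager, the volumes can also be listed from the primary MDM using the standard PowerFlex CLI. This is an optional check and assumes you are logged in to the primary MDM with scli:

   scli --query_all_volumes

The newly created volumes should appear in the output along with their storage pools.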


NOTE: For PowerFlex version 3.5 and later, newly created medium granularity storage pools have the Persistent
Checksum feature enabled by default. Existing medium granularity storage pools do not have this feature enabled
during an upgrade. See the Dell EMC PowerFlex Appliance Administration Guide for manual procedures for
disabling (or enabling) this feature.
The SDS thread count is set to 8 by default. When expanding with a new service using PowerFlex storage-only nodes with
16-core CPUs, set the SDS thread count to 12. The metadata (MD) cache is enabled by default on a PowerFlex storage-only
node with a fine granularity (FG) pool. The MD cache size can be calculated using the following formula:

FGMC_RAM_in_GiB = (Total_drive_capacity_in_TiB / 2) * 4 * Compression_factor * Percent_of_metadata_to_cache
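For example, with purely illustrative values of 8 TiB total drive capacity, a compression factor of 1, and 50 percent of metadata cached, the formula above works out as follows:

   FGMC_RAM_in_GiB = (8 / 2) * 4 * 1 * 0.5 = 8 GiB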

Redistribute the MDM cluster using PowerFlex Manager
Use this procedure to redistribute the MDM across clusters to ensure maximum cluster resiliency and availability.

About this task


It is critical that the MDM cluster is distributed across access switches and physical cabinets to ensure maximum resiliency and
availability of the cluster. The location of the MDM components should be checked and validated during every engagement and
adjusted if found non-compliant with the published guidelines.
When adding new MDM or tiebreaker nodes to a cluster, first place the MDM components on the PowerFlex storage-only nodes
(if available), then the PowerFlex hyperconverged nodes. This procedure is not applicable for PowerFlex compute-only nodes.
Use PowerFlex Manager to change the MDM role for a node in a PowerFlex cluster. When adding a node to a cluster, you might
want to switch the MDM role from one of the existing nodes to the new node.
You can launch the wizard for reconfiguring MDM roles from the Services page or from the Resources page. The nodes that
are listed and the operations available are the same regardless of where you launch the wizard.

Steps
1. Access the wizard from the Services page or the Resources page. Click Services or Resources from the menu bar to
access the wizard.
a. Select the service or resource with the PowerFlex gateway containing the MDMs.
b. Click View Details.
c. Click Reconfigure MDM Roles. The MDM Reconfiguration page displays.
2. Review the current MDM configuration for the cluster.
3. For each MDM role that you want to reassign, use Select New Node for MDM Role to select the new hostname or IP
address. You can reassign multiple roles at a time.
4. Click Next. The Summary page displays.
5. Type Change MDM Roles to confirm the changes.
6. Click Finish.
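After the role changes complete, you can optionally confirm the new MDM layout from the primary MDM using the standard PowerFlex CLI. This is shown only as an optional check outside of the PowerFlex Manager workflow, and the admin username is an assumption that may differ in your environment:

   scli --login --username admin
   scli --query_cluster

The output lists the primary, secondary, and tiebreaker members so that you can verify they are now spread across cabinets and access switches as intended.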

Related information
Redistribute the MDM cluster


45
Configuring the hyperconverged or compute-only transport nodes
This section describes how to configure the hyperconverged or compute-only nodes as part of preparing the PowerFlex
appliance for NSX-T. Before you configure the VMware ESXi hosts as NSX-T transport nodes, you must add the transport
distributed port groups and convert the distributed switch from LACP to individual trunks covered in this section.
NOTE: If you configure VMware NSX-T on PowerFlex hyperconverged or compute-only nodes and add them to PowerFlex
Manager, the services will be in lifecycle mode. If you need to perform an expansion on such a node, see Adding a
PowerFlex R640/R740xd/R840 node to a PowerFlex Manager service in lifecycle mode to add the PowerFlex node.
Contact VMware Support to configure VMware NSX-T on a new PowerFlex node and see Add PowerFlex nodes to a
service to update the service details.

Configure VMware NSX-T overlay distributed virtual port group
Use this procedure to create and configure the VMware NSX-T overlay distributed virtual port group on cust_dvswitch.

Prerequisites
Ensure that the VMware vSphere vCenter Server and the VMware vSphere Client are accessible.

Steps
1. Log in to the VMware vSphere Client.
2. Click Networking.
3. Expand the PowerFlex Customer-Datacenter.
4. Right-click cust_dvswitch.
5. Click Distributed Port Group > New Distributed Port Group.
6. Update the name to pfmc-nsx-transport-121 and click Next.
7. Select the default Port binding.
8. Select the default Port allocation.
9. Select the default # of ports (default is 8).
10. Select the default VLAN as VLAN Type.
11. Set the VLAN ID to 121.
12. Clear the Customize default policies configuration check box and click Next.
13. Click Finish.
14. Right-click pfmc-nsx-transport-121 and click Edit Settings...
15. Click Teaming and failover.
16. Verify that Uplink1 and Uplink2 are moved to Active.
17. Click OK.


Convert trunk access to LACP-enabled switch ports for cust_dvswitch
Use this procedure to convert the physical NICs from trunk to LACP without losing network connectivity. Use this option only if
cust_dvswitch is configured as trunk. LACP is the default configuration for cust_dvswitch.

Prerequisites
Both Cisco Nexus access switch ports for the compute VMware ESXi hosts are configured as trunk access. These ports will be
configured as LACP enabled after the physical adapter is removed from each ESXi host.
WARNING: As the VMK0 (ESXi management) is not configured on cust_dvswitch, both the vmnics are first
migrated to the LAGs simultaneously and then the port channel is configured. Data connectivity to PowerFlex is
lost until the port channels are brought online with both vmnic interfaces connected to LAGs.

About this task


This procedure includes reconfiguring one port at a time for LACP without requiring any migration of the VMkernel network
adapters.

Steps
1. Log in to the VMware vSphere Client.
2. Look at VMware vCenter and physical switches to ensure that both ports across all hosts are up.
3. For each compute VMware ESXi host, record the physical switch ports to which vmnic5 (switch-B) and vmnic7 (switch-A)
are connected.
a. Click Home, then select Hosts and Clusters and expand the compute cluster.
b. Select the first compute ESXi host in left pane, and then select Configure tab in right pane.
c. Select Virtual switches under Networking.
d. Expand cust_dvswitch.
e. Expand Uplink1, click the ellipsis (…) for vmnic7, and select View Settings.
f. Click LLDP tab.
g. Record the Port ID (switch port) and System Name (switch).
h. Repeat substeps e through g for vmnic5 on Uplink 2.
4. Configure LAG (LACP) on cust_dvswitch within VMware vCenter Server:
a. Click Home, then select Networking.
b. Expand the compute cluster and click cust_dvswitch > Configure > LACP.
c. Click +New to open wizard.
d. Verify that the name is lag1.
e. Verify that the number of ports is 2.
f. Verify that the mode is Active.
g. Change Load Balancing mode to Source and destination IP address, TCP/UDP port.
h. Click OK.
5. Migrate vmnic5 to lag1-0 and vmnic7 to lag1-1 on cust_dvswitch for the compute VMware ESXi host as follows:
a. Click Home, then select Networking and expand the PowerFlex data center.
b. Right-click cust_dvswitch and select Manage host networking to open wizard.
c. Select Add hosts... and click Next.
d. Click Attached hosts..., select all the compute ESXi hosts, and click OK.
e. Click Next.
f. For each ESXi host, select vmnic5 and click Assign uplink.
g. Click lag1-0 and click OK.
h. For each ESXi host, select vmnic7 and click Assign uplink.
i. Click lag1-1 and click OK.
j. Click Next > Next > Next > Finish.
6. Create port-channel (LACP) on switch-A for compute VMware ESXi host.
The following switch configuration is an example of a single compute VMware ESXi host.


a. Open an SSH session to switch-A using PuTTY or a similar SSH client.
b. Create a port channel on switch-A for each compute VMware ESXi host as follows:

interface port-channel40
Description to flex-compute-esxi-host01
switchport
switchport mode trunk
switchport trunk allowed vlan 151,152,153,154
spanning-tree port type edge trunk
spanning-tree bpduguard enable
speed 25000
no lacp suspend-individual
vpc 40

7. Configure channel-group (LACP) on switch-A access port (vmnic5) for each compute VMware ESXi host.
The following switch port configuration is an example of a single compute VMware ESXi host.
a. Open an SSH session to switch-A using PuTTY or a similar SSH client.
b. Create port on switch-A as follows:

int e1/1/1
description to flex-compute-esxi-host01 – vmnic5
switchport
switchport mode trunk
switchport trunk allowed vlan 151,152,153,154
spanning-tree port type edge trunk
spanning-tree bpduguard enable
spanning-tree guard root
speed 25000
channel-group 40 mode active

8. Create port-channel (LACP) on switch-B for compute VMware ESXi host.


The following switch configuration is an example of a single compute VMware ESXi host.
a. Open an SSH session to switch-B using PuTTY or a similar SSH client.
b. Create a port channel on switch-B for each compute VMware ESXi host as follows:

interface port-channel40
Description to flex-compute-esxi-host01
switchport
switchport mode trunk
switchport trunk allowed vlan 151,152,153,154
spanning-tree port type edge trunk
spanning-tree bpduguard enable
speed 25000
no lacp suspend-individual
vpc 40

9. Configure channel-group (LACP) on switch-B access port (vmnic7) for each compute VMware ESXi host (an optional switch-side verification sketch follows these steps).
The following switch port configuration is an example of a single compute VMware ESXi host.
a. Open an SSH session to switch-B using PuTTY or a similar SSH client.
b. Create port on switch-B as follows:

int e1/1/1
description to flex-compute-esxi-host01 – vmnic7
switchport
switchport mode trunk
switchport trunk allowed vlan 151,152,153,154
spanning-tree port type edge trunk
spanning-tree bpduguard enable
spanning-tree guard root
speed 25000
channel-group 40 mode active

10. Update teaming and policy to route based on physical NIC load for each port group within cust_dvswitch:
a. Click Home and select Networking.


b. Expand cust_dvswitch to have all port group in view.


c. Right-click flex-data-01 and select Edit Settings.
d. Click Teaming and failover.
e. Change Load Balancing mode to Route based on IP hash.
f. Repeat steps 10b to 10e for each remaining port group.
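After the port channels are configured on both access switches and the uplinks are assigned to the LAG, you can optionally confirm from each switch that the port channel and vPC are up. These are standard Cisco NX-OS show commands; port-channel 40 and vPC 40 are the example numbers used in the configurations above:

   show port-channel summary
   show vpc brief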


46
Add Layer 3 routing between an external SDC and SDS
Use this procedure to enable external SDC to SDS communication and configure the PowerFlex node for external SDC
reachability.

Steps
1. In the template, from Node > Network settings, select the required VLANs to enable external SDC communication on
the SDS data interfaces.
2. From Node > Static routes, select Enabled.
3. Click Add New Static Route.
4. Select the source and destination VLANs, and manually enter the gateway IP address of the SDS data network VLAN.
5. Repeat these steps for each data VLAN.
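PowerFlex Manager pushes the resulting static routes to the data interfaces of the node. As a purely illustrative sketch of the equivalent route on a Linux-based storage node, assuming a hypothetical external SDC subnet of 192.168.161.0/24, an SDS data VLAN gateway of 192.168.151.1, and a data interface named bond0.151 (all placeholder values, not values from this document):

   ip route add 192.168.161.0/24 via 192.168.151.1 dev bond0.151

The template workflow above is the supported way to configure these routes; the command is shown only to illustrate the effect on the node.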


VI
Adding a PowerFlex R640/R740xd/R840
node to a PowerFlex Manager service in
managed mode
Expanding a PowerFlex appliance requires the use of an existing PowerFlex gateway. Use the procedures in this section to add a
PowerFlex R640/R740xd/R840 node to the PowerFlex Manager services discovered in managed mode.
There are three types of PowerFlex appliance environment expansions:
● Expanding an existing PowerFlex Manager service. This option also expands the existing protection domain if the service is
hyperconverged, storage-only, or compute-only.
● Creating a new PowerFlex Manager service. If the service is hyperconverged or storage-only, this option lets you either
expand to the existing protection domain or create a protection domain.
● Expanding a service in PowerFlex Manager that is discovered in a lifecycle mode. See Adding a PowerFlex R640/R740xd/
R840 node to a PowerFlex Manager service in lifecycle mode for more information.
Before adding a PowerFlex node in managed mode, complete the initial set of expansion procedures that are common to all
expansion scenarios covered in Performing the initial expansion procedures.
After adding a PowerFlex node in managed mode, see Completing the expansion.

Related information
Install the nvme-cli tool and iDRAC Service Module (iSM)


47
Discovering the new resource
Whether expanding an existing service or adding a service, the first step is to discover the new resource.

Steps
1. Connect the new NICs of the PowerFlex appliance node to the access switches and the out-of-band management switch
exactly like the existing nodes in the same protection domain. For more details, see Cabling the PowerFlex R640/R740xd/
R840 nodes.
2. Ensure that the newly connected switch ports are not shut down.
3. Set the IP address of the iDRAC management port, username, and password for the new PowerFlex appliance nodes (an optional racadm sketch follows this list).
4. Log in to PowerFlex Manager.
5. Discover the new PowerFlex appliance nodes in the PowerFlex Manager resources. For more details, see Discover resources.
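If you prefer to set the iDRAC network parameters from the command line rather than through the iDRAC web interface, remote racadm can be used. The following is a minimal sketch only; it assumes the iDRAC is reachable at its current (factory or DHCP-assigned) address, the credentials are placeholders, and 192.168.101.21 is simply the first example address from the discovery table later in this chapter:

   racadm -r <current-idrac-ip> -u root -p <current-password> set iDRAC.IPv4.Address 192.168.101.21
   racadm -r <current-idrac-ip> -u root -p <current-password> set iDRAC.IPv4.Netmask <netmask>
   racadm -r <current-idrac-ip> -u root -p <current-password> set iDRAC.IPv4.Gateway <gateway-ip>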

Discover resources
Use this procedure to discover and allow PowerFlex Manager access to resources in the environment. Provide the management
IP address and credential for each discoverable resource.

Prerequisites
Verify that the iDRAC network settings are configured. See Configure iDRAC network settings for more information.

About this task


Dell EMC recommends using separate operating system credentials for SVM and VMware ESXi. For information about creating
or updating credentials in PowerFlex Manager, click Settings > Credentials Management and access the online help.
During node discovery, you can configure iDRAC nodes to automatically send alerts to PowerFlex Manager. You can also change
the IP address of the iDRAC nodes and discover them. See Reconfigure the discovered nodes with new management IP and
credentials in the Dell EMC PowerFlex Appliance Administration Guide. If the PowerFlex nodes are not configured for alert
connector, Secure Remote Services does not receive critical or error alerts for those resources.
The following table describes how to configure resources in managed mode:

Resource type     Resource state    Example

PowerFlex nodes   Managed           PowerEdge iDRAC management IP address

If you want to perform firmware updates or deployments on a discovered node, change the default state to managed.
Perform firmware or catalog updates from the Services page or the Resources page.

NOTE: For partial network deployments, you do not need to discover the switches. The switches need to be pre-
configured. For sample configurations for Dell PowerSwitch, Cisco Nexus, and Arista switches, see the Dell EMC PowerFlex
Appliance Administration Guide.
The following are the specific details for completing the Discovery wizard steps:


Resource type: Node
● IP address range: 192.168.101.21-192.168.101.24
● Resource state: Managed
● Discover into node pool: PowerFlex node pool
● Credentials name: PowerFlex appliance iDRAC default
● Credentials username: root
● Credentials password: calvin
● SNMPv2 community string: customer provided

Resource type: Switch
● IP address range: 192.168.101.45-192.168.101.46
● Resource state: Managed
● Discover into node pool: N/A
● Credentials name: access switches
● Credentials username: admin
● Credentials password: admin
● SNMPv2 community string: public

Resource type: VM Manager
● IP address range: 192.168.105.105
● Resource state: Managed
● Discover into node pool: N/A
● Credentials name: vCenter
● Credentials username: administrator@vsphere.local
● Credentials password: P@ssw0rd!
● SNMPv2 community string: N/A

Resource type: Element Manager**
● IP address range: 192.168.105.120, 192.168.105.121
● Resource state: Managed
● Discover into node pool: N/A
● Credentials name: CloudLink
● Credentials username: secadmin
● Credentials password: Secadmin!!
● SNMPv2 community string: N/A

** This is optional. For a new CloudLink Center deployment, the CloudLink Center is discovered automatically.

Prerequisites
● Configure the iDRAC network settings.
● Gather the IP addresses and credentials that are associated with the resources.
NOTE: PowerFlex Manager also allows you to use the name-based searches to discover a range of nodes that were
assigned the IP addresses through DHCP to iDRAC. For more information about this feature, see Dell EMC PowerFlex
Manager Online Help.

Steps
1. On the PowerFlex Manager Getting Started page, click Discover Resources.
2. On the Welcome page of the Discovery Wizard, read the instructions and click Next.
3. On the Identify Resources page, click Add Resource Type. From the Resource Type list, select the resource that you
want to discover.
4. Enter the management IP address of the resource in the IP/Hostname Range field. To discover a resource in an IP range,
provide a starting and ending IP address.
5. In the Resource State list, select Managed or Unmanaged.
6. For PowerFlex node, to discover resources into a selected node pool instead of the global pool (default), select the node
pool from the Discover into Node Pool list.
7. Select the appropriate credential from the Credentials list. See the table above for details.
8. For PowerFlex node, if you want PowerFlex Manager to automatically reconfigure the iDRAC IP addresses of the nodes it
discovers, select the Reconfigure discovered nodes with new management IP and credentials check box. This option
is not selected by default, because it is faster to discover the nodes if you bypass the reconfiguration.
NOTE: The iDRAC can also be discovered using its hostname.

NOTE: For the Resource Type, you can use a range with hostname or IP address, provided the hostname has a valid
DNS entry.

9. For PowerFlex node, select the Auto configure nodes to send alerts to PowerFlex Manager check box to have
PowerFlex Manager automatically configure nodes to send alerts to PowerFlex Manager.
10. Click Next to start discovery. On the Discovered Resources page, select the resources from which you want to collect
inventory data and click Finish. The discovered resources are listed on the Resources page.

Related information
Configure iDRAC network settings


48
Sanitize the NVDIMM
Use this procedure to sanitize the NVDIMM (if present) on the newly added node before expansion of the node.

Prerequisites
● Ensure that the NVDIMM firmware version is the same as on the other nodes in the cluster.
To verify the NVDIMM firmware version, perform the following:
1. Log in to the iDRAC.
2. Go to System > Inventory.
3. Expand the Firmware inventory and look for entries describing DIMMs.
The following table describes what steps to take in the iDRAC or system BIOS depending on the NVDIMM firmware:

NVDIMM firmware on the newly shipped node          Action

Higher than the firmware in production             1. Downgrade the NVDIMM firmware in iDRAC.
                                                   2. Sanitize the NVDIMM before adding the node to the cluster.
                                                   For more information, see the following knowledge base articles:
                                                   ○ Steps to take if NVDIMM firmware on newly shipped nodes have higher firmware than RCM and/or intelligent catalog
                                                   ○ How to remove NVDIMMs from existing cluster in order to upgrade or downgrade firmware

Lower than the firmware running in production      Upgrade the NVDIMM firmware in iDRAC.
                                                   NOTE: Sanitizing the NVDIMM for a firmware upgrade is not required.
● Perform the following steps to upload the NVDIMM firmware:
1. From the iDRAC Web Browser section, click Maintenance > System Update.
2. Click Choose File. Browse to the appropriate intelligent catalog release version folder and select the NVDIMM firmware
file, and click Upload.
3. Click Install and Reboot or Install Next Reboot.
The Updating Job Queue message displays.
4. Click the Job Queue page to monitor the progress of the install.

Steps
1. Reboot the server.
2. Press F2 immediately to enter System Setup.
3. Go to System BIOS > Memory Settings > Persistent Memory > NVDIMM-N Persistent Memory.
System BIOS displays the NVDIMM information for the system.
4. Select the NVDIMMs installed on the node.
5. Find the Sanitize NVDIMM setting in the list and select the Enabled option.
A warning appears that NVDIMM data will be erased if changes are saved when exiting BIOS.
6. Click OK.
7. Click Back > Back > Back to exit to System BIOS Settings, and then click Finish > Yes > OK.
8. Click Finish, and then at the prompt click OK.
The system reboots.


49
Expanding a PowerFlex appliance service
Use this section to expand an existing service in a PowerFlex appliance environment.

Adding a compatibility management file


Use this procedure to add or edit a compatibility management file to PowerFlex Manager.

About this task


This procedure can be performed using PowerFlex Manager 3.7.x or higher. When you attempt an upgrade, PowerFlex Manager
warns you if the current version of the software is incompatible with the target version or if the Intelligent Catalog version
currently loaded on the virtual appliance is incompatible with the target compliance version. To determine the valid path,
PowerFlex Manager uses information provided in the compatibility matrix file.

Steps
1. Log in to PowerFlex Manager.
2. On the menu bar, click Settings and select Virtual Appliance Management.
3. In the Compatibility Management section, click Add/Edit.
4. If you are using Secure Remote Services, click Download from Secure Remote Services (Recommended).
5. If you are not using Secure Remote Services, download the compatibility file from Dell Technologies Support site to the jump
server.
6. Click Upload from Local to use a local file. Then, click Choose File to select the GPG file and click Save.

Add PowerFlex nodes to a service


After the additional PowerFlex node is discovered, you must add it to a service for it to be managed within PowerFlex Manager.

Steps
1. On the menu bar, click Services.
2. Select the service to which you are adding a PowerFlex node to view its details.
3. On the Service Details page, click Add Resources. Select Add Nodes.
4. In the Duplicate Node wizard:
a. From the Resource to Duplicate list, select an existing node that will be duplicated on the additional node.
b. In the Number of Instances box, set the number of instances to 1 and click Next.

c. Under PowerFlex Settings, specify the PowerFlex Storage Pool Spare Capacity setting. For replication-enabled
services, verify and set the journal capacity depending on the requirement.
d. Under OS Settings, set the Host Name Selection. If you select Specify at Deployment, provide a name for the host
in the Host Name field. If you select Auto-Generate, specify a template for the name in the Host Name Template
field.
e. If you are adding a node to a hyperconverged service, specify the Host Name Selection under SVM OS Settings and
provide details about the hostname, as you did for the OS Settings.
f. In the IP Source box, enter an IP address.
g. Under Hardware Settings, in the Node Source box, select Node Pool or Manual Entry.


h. In the Node Pool box, select the node pool. Alternatively, if you select Manual Entry, select the specific node in the
Choose Node box.
i. Under PowerFlex Settings, specify the Fault Set for a node:
NOTE: If the PowerFlex configuration includes fault sets, contact Dell support for assistance. Do not proceed with the
procedure until you have received guidance from a support representative.
● PowerFlex Manager Selected Fault Set instructs PowerFlex Manager to select the fault set name based on the
template settings.
● fault-set-name enables you to select one of the fault sets in an existing protection domain.
You can add nodes within a fault set, but PowerFlex Manager does not allow you to add a new fault set within the same
service. To add a new fault set, you need to deploy a separate service with settings for the fault set you want to create.
j. Click Next.
k. Review the Summary page and click Finish.
If the node you are adding has a different type of disk than the base deployment, PowerFlex Manager displays a banner
at the top of the Summary page to inform you of the different disk types. You can still complete the node expansion.
However, your service may have sub-optimal performance.

Based on the component type, the required settings and properties are displayed automatically and can be edited as
permitted for a node expansion.

5. Click Save to deploy the PowerFlex node.


When the job is complete, ensure that the node is deployed successfully.

Verify newly added SVMs or storage-only nodes machine status in CloudLink Center
Use this procedure to verify the machine status of newly added SVMs or PowerFlex storage-only nodes in CloudLink Center.

Steps
1. Log in to CloudLink Center using secadmin credentials.
2. Click Agents > Machines.
3. Ensure that the status of newly added machines is in Connected state.
4. Ensure the following, depending on the drive type:
● Self-encrypting drives (SEDs): The Status of the Devices and SED appear as Encrypted HW and Managed,
respectively.
● Non-SEDs: The Status of the Devices for each newly added machine appears as Encrypted.


50
Expanding a PowerFlex appliance with a new
service
You can expand a PowerFlex appliance environment with a new service, either by cloning a template or by editing an existing
template.
Ensure the new PowerFlex appliance nodes are discovered. See the following sections for details about each step mentioned
here.
● Clone a template. You can also edit an existing template. The template that you edit depends on the expansion requirements.
● Deploy a service using the newly created and published template.
● Add a volume.
● After expanding the PowerFlex hyperconverged node or PowerFlex storage-only node, redistribute the MDM cluster.
Redistributing the MDM cluster is not applicable for a PowerFlex compute-only node.

Cloning a template
The clone feature allows you to copy an existing template into a new template. A cloned template contains the components that
existed in the original template. You can edit it to add additional components or modify the cloned components.

Prerequisites
A minimum of two logical data networks are supported. Optionally, you can configure four logical data networks. Verify the
number of logical data networks configured in an existing setup. Accordingly, configure the logical data networks while creating
a template.

Steps
1. Log in to PowerFlex Manager.
2. From the PowerFlex Manager menu bar, click Templates > My Templates.
3. Select a template, and then click Clone in the right pane.
4. In the Clone Template dialog box, enter a template name in the Template Name field.
5. Select a template category from the Template Category list. To create a template category, select Create New Category.
6. In the Template Description box, enter a description for the template.
7. To specify the version to use for compliance, select the version from the Firmware and Software Compliance list or
choose Use PowerFlex Manager appliance default catalog.
You cannot select a minimal compliance version for a template, because it only includes server firmware updates. The
compliance version for a template must include the full set of compliance update capabilities. PowerFlex Manager does not
show any minimal compliance versions in the Firmware and Software Compliance list.

8. Indicate Who should have access to the service deployed from this template by selecting one of the following options:
● Grant access to Only PowerFlex Manager Administrators.
● Grant access to PowerFlex Manager Administrators and Specific Standard and Operator Users. Click Add
User(s) to add one or more standard and or operator users to the list. Click Remove User(s) to remove users from the
list.
● Grant access to PowerFlex Manager Administrators and All Standard and Operator Users.
9. Click Next.
10. On the Additional Settings page, provide new values for the Network Settings, OS Settings, Cluster Settings,
PowerFlex Gateway Settings, and Node Pool Settings.
If you clone a template that has a Target CloudLink Center setting, the cloned template shows this setting in the Original
Target CloudLink Center field. Change this setting by selecting a new target for the cloned template in the Select New
Target CloudLink Center setting.


When defining a template, you choose a single CloudLink Center as the target for the deployed service. If the CloudLink
Center for the service shuts down, PowerFlex Manager loses communication with the CloudLink Center. If the CloudLink
Center is part of a cluster, PowerFlex Manager moves to another CloudLink Center when you update the service details.

11. Click Finish.


12. Edit the newly cloned template.
13. Select the VMware cluster and click Edit.
14. Verify the number of node instances to expand and click Continue.
15. Verify the Cluster Settings.
16. Configure the virtual distributed switch (VDS) settings:
a. Under vSphere VDS Settings, click Configure VDS Settings.
b. Click Use Existing Port Groups, because this procedure uses an existing VDS port group configuration, and click Next.
c. Verify the VDS switch names and click Next.
d. Select the Port Group Name for each VLAN on both the VDS (cust_dvswitch and flex_dvswitch) and click Next.
e. On the Advanced Networking page, choose the MTU size. The default value for Hypervisor Management network is
1500. The default value for vMotion network is 9000.
f. Click Next.
g. Verify all the details on the Summary, click Finish, and confirm.
h. Click View All Settings and verify the configuration details for VMware cluster, PowerFlex cluster, and Hyperconverged.
i. Click Publish Template.

Adding a compatibility management file


Use this procedure to add or edit a compatibility management file to PowerFlex Manager.

About this task


This procedure can be performed using PowerFlex Manager 3.7.x or higher. When you attempt an upgrade, PowerFlex Manager
warns you if the current version of the software is incompatible with the target version or if the Intelligent Catalog version
currently loaded on the virtual appliance is incompatible with the target compliance version. To determine the valid path,
PowerFlex Manager uses information provided in the compatibility matrix file.

Steps
1. Log in to PowerFlex Manager.
2. On the menu bar, click Settings and select Virtual Appliance Management.
3. In the Compatibility Management section, click Add/Edit.
4. If you are using Secure Remote Services, click Download from Secure Remote Services (Recommended).
5. If you are not using Secure Remote Services, download the compatibility file from Dell Technologies Support site to the jump
server.
6. Click Upload from Local to use a local file. Then, click Choose File to select the GPG file and click Save.

Deploy a service
Use this procedure to deploy a service. You cannot deploy a service using a template that is in draft state. Publish the template
before using it to deploy a service.

Steps
1. On the menu bar, click Services > Deploy New Service.
2. On the Deploy Service page, perform the following steps:
a. From the Select Published Template list, select the previously defined and published hyperconverged template to
deploy the service.
b. Enter the Service Name and Service Description that identifies the service.
c. To specify the version to use for compliance, select the version from the Firmware and Software Compliance list or
select Use PowerFlex Manager appliance default catalog.


PowerFlex Manager checks the VMware vCenter version to determine if it matches the VMware ESXi version for the
selected compliance version. If the VMware ESXi version is greater than the vCenter version, PowerFlex Manager blocks
the service deployment and displays an error. PowerFlex Manager instructs you to upgrade vCenter first or use a
different compliance version that is compatible with the installed vCenter version.
NOTE: Changing the firmware repository might update the firmware level on nodes for this service. The global
default firmware repository maintains the firmware on the shared devices.

d. Select one of the options from Who should have access to the service deployed from this template? drop-down
list.
NOTE: For a PowerFlex hyperconverged or storage-only node deployment, if you want to use CloudLink encryption,
perform the following:
i. Verify that CloudLink Center is deployed.
ii. In the template, under Node settings, select Enable Encryption (Software Encryption/Self Encrypting
Drive).
iii. Under PowerFlex Cluster settings, select CloudLink Center.

3. Click Next.
4. On the Deployment Settings page, configure the required settings. You can override any of the cluster settings that are
specified in the template.
If you are deploying a service with CloudLink, ensure that the correct CloudLink Center is displayed under the CloudLink
Center Settings.
5. To configure the PowerFlex Settings for a hyperconverged or storage-only service that has replication-enabled in the
template, specify the Journal Capacity. The default journal capacity is 10% of the overall capacity, but you can customize
it as required.
6. To configure the PowerFlex Settings, select one of the following options for PowerFlex MDM Virtual IP Source:
● PowerFlex Manager Selected IP instructs PowerFlex Manager to select the virtual IP addresses.
● User Entered IP enables you to specify the IP address manually for each PowerFlex data network that is part of the
node definition in the service template.
NOTE: Verify that the correct disk type (NVMe, SSD, or HDD) is selected. From the Deployment Settings page,
select PowerFlex Setting > Storage Pool disk type. Ensure that you select the correct disk type: (NVMe or SSD).

7. To configure OS Settings, select an IP Source.


To manually enter the IP address, select User Entered IP. From the IP Source list, select Manual Entry. Then enter the IP
address in the Static IP Address box.

8. To configure Hardware Settings, select the node source from the Node Source list.
● If you select Node Pool, you can view all user-defined node pools and the global pool. Standard users can see only the
pools for which they have permission. Select the Retry On Failure option to ensure that PowerFlex Manager selects
another node from the node pool for deployment if any node fails. Each node can be retried up to five times.
● If you select Manual Entry, the Choose Node list is displayed. Select the node by its Service Tag for deployment from
the list.
9. Click Next.
10. On the Schedule Deployment page, select one of the following options and click Next:
● Deploy Now—Select this option to deploy the service immediately.
● Deploy Later—Select this option and enter the date and time to deploy the service.
11. Review the Summary page.
The Summary page gives you a preview of what the service will look like after the deployment.
12. Click Finish when you are ready to begin the deployment. For more information, see PowerFlex Manager online help.

Add volumes with PowerFlex Manager


Use this procedure to add volumes using PowerFlex Manager.

About this task


After the hyperconverged deployment is complete, PowerFlex Manager automatically creates two 16 GB thin-provisioned
volumes named powerflex-service-vol-1 and powerflex-service-vol-2, and two datastores named
powerflex-esxclustershotname-ds1 and powerflex-esxclustershotname-ds2, which are mapped to the vCSA. The vCLS VMs
are migrated to the service datastores to maintain the HA cluster.
To create additional volumes, complete the following procedure for a hyperconverged deployment:

Steps
1. On the Services page, open the service that was deployed earlier.
2. Under Resource Actions, click Add Resources > Add Volume > Add Existing Volumes > Next.
3. Click Select Volumes.
4. In the search text box, enter the volume names that are created in the PowerFlex storage-only node service and click
Search.
5. Select the volumes. Click >> to move the volumes.
6. Click ADD.

Field Value
Volume 1
Volume Name Create New Volume …
New Volume Name Volume1
Datastore Name Create New Datastore
New Datastore Datastore1
Storage Pool Select the storage pool from the drop-down
Enable Compression Select this check box (if compression is enabled for
deployment)
Volume Size 8 (or any multiple of 8 )
Volume Type Thick
Volume 2
Volume Name Create New Volume …
New Volume Name Volume2
Datastore Name Create New Datastore
New Datastore Datastore2
Storage Pool Do not change this option.
Enable Compression Select this check box ( if Compression is enabled for
deployment)
Volume Size 8 (or any multiple of 8)
Volume Type Thick

7. Click Next > Finish.


8. Review the summary and click Finish.
9. You can monitor the volume creation progress on the Jobs page or the Recent Activity section of the Resources page.
10. After the volumes are added, the datastores are associated with the cluster in the VMware vSphere being used to manage
the cluster.
In the example used in this document, the VMware vSphere instance is 192.168.105.105.
NOTE: For PowerFlex version 3.5 and later, newly created medium granularity storage pools have the Persistent
Checksum feature enabled by default. Existing medium granularity storage pools do not have this feature enabled
during an upgrade. See the Dell EMC PowerFlex Appliance Administration Guide for manual procedures for
disabling (or enabling) this feature.


Redistribute the MDM cluster using PowerFlex Manager
Use this procedure to redistribute the MDM across clusters to ensure maximum cluster resiliency and availability.

About this task


It is critical that the MDM cluster is distributed across access switches and physical cabinets to ensure maximum resiliency and
availability of the cluster. The location of the MDM components should be checked and validated during every engagement and
adjusted if found non-compliant with the published guidelines.
When adding new MDM or tiebreaker nodes to a cluster, first place the MDM components on the PowerFlex storage-only nodes
(if available), then the PowerFlex hyperconverged nodes. This procedure is not applicable for PowerFlex compute-only nodes.
Use PowerFlex Manager to change the MDM role for a node in a PowerFlex cluster. When adding a node to a cluster, you might
want to switch the MDM role from one of the existing nodes to the new node.
You can launch the wizard for reconfiguring MDM roles from the Services page or from the Resources page. The nodes that
are listed and the operations available are the same regardless of where you launch the wizard.

Steps
1. Access the wizard from the Services page or the Resources page. Click Services or Resources from the menu bar to
access the wizard.
a. Select the service or resource with the PowerFlex gateway containing the MDMs.
b. Click View Details.
c. Click Reconfigure MDM Roles. The MDM Reconfiguration page displays.
2. Review the current MDM configuration for the cluster.
3. For each MDM role that you want to reassign, use Select New Node for MDM Role to select the new hostname or IP
address. You can reassign multiple roles at a time.
4. Click Next. The Summary page displays.
5. Type Change MDM Roles to confirm the changes.
6. Click Finish.
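If you want to double-check the resulting MDM layout outside of PowerFlex Manager, the following is a minimal verification sketch; it assumes SSH access to the primary MDM and the admin credentials referenced elsewhere in this guide.

# Log in to the PowerFlex CLI on the primary MDM (the password is prompted).
scli --login --username admin
# Display the MDM cluster members, their roles, and the tiebreakers to confirm the new layout.
scli --query_cluster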

Related information
Redistribute the MDM cluster


51
Configuring the hyperconverged or compute-
only transport nodes
This section describes how to configure the hyperconverged or compute-only nodes as part of preparing the PowerFlex
appliance for NSX-T. Before you configure the VMware ESXi hosts as NSX-T transport nodes, you must add the transport
distributed port groups and convert the distributed switch from trunk to LACP, as covered in this section.
NOTE: If you configure VMware NSX-T on PowerFlex hyperconverged or compute-only nodes and add them to PowerFlex
Manager, the services will be in lifecycle mode. If you need to perform an expansion on such a node, see Adding a
PowerFlex R640/R740xd/R840 node to a PowerFlex Manager service in lifecycle mode to add the PowerFlex node.
Contact VMware Support to configure VMware NSX-T on a new PowerFlex node and see Add PowerFlex nodes to a
service to update the service details.

Configure VMware NSX-T overlay distributed virtual port group
Use this procedure to create and configure the VMware NSX-T overlay distributed virtual port group on cust_dvswitch.

Prerequisites
Ensure that the VMware vSphere vCenter Server and the VMware vSphere Client are accessible.

Steps
1. Log in to the VMware vSphere Client.
2. Click Networking.
3. Expand the PowerFlex Customer-Datacenter.
4. Right-click cust_dvswitch.
5. Click Distributed Port Group > New Distributed Port Group.
6. Update the name to pfmc-nsx-transport-121 and click Next.
7. Select the default Port binding.
8. Select the default Port allocation.
9. Select the default # of ports (default is 8).
10. Select the default VLAN as VLAN Type.
11. Set the VLAN ID to 121.
12. Clear the Customize default policies configuration check box and click Next.
13. Click Finish.
14. Right-click pfmc-nsx-transport-121 and click Edit Settings.
15. Click Teaming and failover.
16. Verify that Uplink1 and Uplink2 are moved to Active.
17. Click OK.


Convert trunk access to LACP-enabled switch ports for cust_dvswitch
Use this procedure to convert the physical NICs from trunk to LACP without losing network connectivity. Use this option only if
cust_dvswitch is configured as trunk. LACP is the default configuration for cust_dvswitch.

Prerequisites
Both Cisco Nexus access switch ports for the compute VMware ESXi hosts are configured as trunk access. These ports will be
configured as LACP enabled after the physical adapter is removed from each ESXi host.
WARNING: As the VMK0 (ESXi management) is not configured on cust_dvswitch, both the vmnics are first
migrated to the LAGs simultaneously and then the port channel is configured. Data connectivity to PowerFlex is
lost until the port channels are brought online with both vmnic interfaces connected to LAGs.

About this task


This procedure reconfigures the uplinks for LACP without requiring any migration of the VMkernel network adapters.

Steps
1. Log in to the VMware vSphere Client.
2. Look at VMware vCenter and physical switches to ensure that both ports across all hosts are up.
3. For each compute VMware ESXi host, record the physical switch ports to which vmnic5 (switch-B) and vmnic7 (switch-A) are connected.
a. Click Home, then select Hosts and Clusters and expand the compute cluster.
b. Select the first compute ESXi host in left pane, and then select Configure tab in right pane.
c. Select Virtual switches under Networking.
d. Expand cust_dvswitch.
e. Expand Uplink1, click the ellipsis (…) for vmnic7, and select View Settings.
f. Click the LLDP tab.
g. Record the Port ID (switch port) and System Name (switch).
h. Repeat steps 3e through 3g for vmnic5 on Uplink 2.
4. Configure LAG (LACP) on cust_dvswitch within VMware vCenter Server:
a. Click Home, then select Networking.
b. Expand the compute cluster and click cust_dvswitch > Configure > LACP.
c. Click +New to open wizard.
d. Verify that the name is lag1.
e. Verify that the number of ports is 2.
f. Verify that the mode is Active.
g. Change the Load Balancing mode to Source and destination IP address, TCP/UDP port.
h. Click OK.
5. Migrate vmnic5 to lag1-0 and vmnic7 to lag1-1 on cust_dvswitch for the compute VMware ESXi host as follows:
a. Click Home, then select Networking and expand the PowerFlex data center.
b. Right-click cust_dvswitch and select Manage host networking to open wizard.
c. Select Add hosts... and click Next.
d. Click Attached hosts..., select all the compute ESXi hosts, and click OK.
e. Click Next.
f. For each ESXi host, select vmnic5 and click Assign uplink.
g. Click lag1-0 and click OK.
h. For each ESXi host, select vmnic7 and click Assign uplink.
i. Click lag1-1 and click OK.
j. Click Next > Next > Next > Finish.
6. Create port-channel (LACP) on switch-A for compute VMware ESXi host.
The following switch configuration is an example of a single compute VMware ESXi host.


a. Open an SSH session to switch-A using PuTTY or a similar SSH client.
b. Create a port channel on switch-A for each compute VMware ESXi host as follows:

interface port-channel40
description to flex-compute-esxi-host01
switchport
switchport mode trunk
switchport trunk allowed vlan 151,152,153,154
spanning-tree port type edge trunk
spanning-tree bpduguard enable
speed 25000
no lacp suspend-individual
vpc 40

7. Configure channel-group (LACP) on switch-A access port (vmnic5) for each compute VMware ESXi host.
The following switch port configuration is an example of a single compute VMware ESXi host.
a. Open an SSH session to switch-A using PuTTY or a similar SSH client.
b. Create port on switch-A as follows:

int e1/1/1
description to flex-compute-esxi-host01 – vmnic5
switchport
switchport mode trunk
switchport trunk allowed vlan 151,152,153,154
spanning-tree port type edge trunk
spanning-tree bpduguard enable
spanning-tree guard root
speed 25000
channel-group 40 mode active

8. Create port-channel (LACP) on switch-B for compute VMware ESXi host.


The following switch configuration is an example of a single compute VMware ESXi host.
a. Open an SSH session to switch-B using PuTTY or a similar SSH client.
b. Create a port channel on switch-B for each compute VMware ESXi host as follows:

interface port-channel40
description to flex-compute-esxi-host01
switchport
switchport mode trunk
switchport trunk allowed vlan 151,152,153,154
spanning-tree port type edge trunk
spanning-tree bpduguard enable
speed 25000
no lacp suspend-individual
vpc 40

9. Configure channel-group (LACP) on switch-B access port (vmnic7) for each compute VMware ESXi host.
The following switch port configuration is an example of a single compute VMware ESXi host.
a. Open an SSH session to switch-B using PuTTY or a similar SSH client.
b. Create port on switch-B as follows:

int e1/1/1
description to flex-compute-esxi-host01 – vmnic7
switchport
switchport mode trunk
switchport trunk allowed vlan 151,152,153,154
spanning-tree port type edge trunk
spanning-tree bpduguard enable
spanning-tree guard root
speed 25000
channel-group 40 mode active

10. Update the teaming and failover policy to route based on IP hash for each port group within cust_dvswitch:
a. Click Home and select Networking.


b. Expand cust_dvswitch so that all port groups are in view.


c. Right-click flex-data-01 and select Edit Settings.
d. Click Teaming and failover.
e. Change Load Balancing mode to Route based on IP hash.
f. Repeat steps 10c through 10e for each remaining port group.
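To confirm that the conversion completed cleanly, the following verification sketch can be used; port-channel 40 and cust_dvswitch follow the examples above. On switch-A and switch-B, confirm that port-channel40 is up and that both member ports show as bundled:

show port-channel summary
show vpc

On each compute VMware ESXi host (from an SSH session), confirm that the LACP LAG on cust_dvswitch is up with both uplinks bundled:

esxcli network vswitch dvs vmware lacp status get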


52
Add Layer 3 routing between an external SDC and SDS
Use this procedure to enable external SDC to SDS communication and to configure the PowerFlex node for external SDC reachability.

Steps
1. In the template, from Node > Network settings, select the required VLANs to enable an external SDC communication on
the SDS data interfaces.
2. From Node > Static routes, select Enabled.
3. Click Add New Static Route.
4. Select the source and destination VLANs, and manually enter the gateway IP address of the SDS data network VLAN.
5. Repeat these steps for each data VLAN.
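After the static routes are deployed, you can verify them from the operating system of a PowerFlex node; this is a minimal sketch in which the interface name (bond0.151) and the external SDC address (192.168.210.21) are placeholders only. Substitute the values from the Workbook.

# Confirm that the static route toward the external SDC network is present.
ip route show
# Confirm reachability of the external SDC from the SDS data interface (placeholder values shown).
ping -c 3 -I bond0.151 192.168.210.21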


VII
Adding a PowerFlex R650/R750/R6525 node
in lifecycle mode
Use the procedures in this section to add a PowerFlex R650/R750/R6525 node to the PowerFlex Manager services discovered in lifecycle mode.
Before adding a PowerFlex node in lifecycle mode, you must complete the initial set of expansion procedures that are common
to all expansion scenarios covered in Performing the initial expansion procedures.
The PowerFlex controller node can have either of the following RAID controllers:
● If you are using HBA355, see Adding a PowerFlex R650/R750/R6525 node to a PowerFlex Manager service in managed
mode for expansion using PowerFlex Manager.
● If you are using PERC H755, see Converting a PowerFlex controller node with a PERC H755 to a PowerFlex management
controller 2.0 and Adding a PowerFlex controller node with a PERC H755 to a PowerFlex management controller 2.0 for
manual expansion.
After adding a PowerFlex node in lifecycle mode, see Completing the expansion.


53
Performing a PowerFlex storage-only node
expansion
Perform the manual expansion procedure to add a PowerFlex R650/R750 storage-only node to PowerFlex Manager services that are discovered in lifecycle mode.
Before adding a PowerFlex node, you must complete the following initial set of expansion procedures:
● Preparing to expand a PowerFlex appliance
● Configuring the network

Discover resources
Use this procedure to discover and grant PowerFlex Manager access to resources in the environment. Provide the management
IP address and credential for each discoverable resource.

Prerequisites
Verify that the iDRAC network settings are configured. See Configure iDRAC network settings for more information.

About this task


During node discovery, you can configure iDRAC nodes to automatically send alerts to PowerFlex Manager. You can also change
the IP address of the iDRAC nodes and discover them. See Reconfigure the discovered nodes with new management IP and
credentials in the Dell EMC PowerFlex Appliance Administration Guide. If the PowerFlex nodes are not configured for alert
connector, Secure Remote Services does not receive critical or error alerts for those resources.
The following table describes how to configure resources in managed mode:

Resource type Resource state Example


PowerFlex nodes Managed PowerEdge iDRAC management IP address
If you want to perform firmware updates or deployments on a
discovered node, change the default state to managed.
Perform firmware or catalog updates from the Services page,
or the Resources page.

NOTE: For partial network deployments, you do not need to discover the switches. The switches need to be pre-configured.
For sample configurations for Dell PowerSwitch, Cisco Nexus, and Arista switches, see Dell EMC PowerFlex Appliance
Administration Guide.
The following are the specific details for completing the Discovery wizard steps:

Resource type | IP address range | Resource state | Discover into node pool | Credentials: name | Credentials: Username | Credentials: password | Credentials: SNMPv2 community string
Node | 192.168.101.21-192.168.101.24 | Managed | Storage-only pool | PowerFlex appliance iDRAC default | root | calvin | customer provided
Switch | 192.168.101.45-192.168.101.46 | Managed | NA | access switches | admin | admin | public
VM Manager | 192.168.105.105 | Managed | NA | vCenter | administrator@vsphere.local | P@ssw0rd! | NA
Element Manager** | 192.168.105.120, 192.168.105.121 | Managed | NA | CloudLink | secadmin | Secadmin!! | NA

** This is optional. For a new CloudLink Center deployment, the CloudLink Center is discovered automatically.

Prerequisites
● Configure the iDRAC network settings.
● Gather the IP addresses and credentials that are associated with the resources.
NOTE: PowerFlex Manager also allows you to use name-based searches to discover a range of nodes whose iDRAC IP addresses were assigned through DHCP. For more information about this feature, see the Dell EMC PowerFlex Manager Online Help.

Steps
1. On the PowerFlex Manager Getting Started page, click Discover Resources.
2. On the Welcome page of the Discovery Wizard, read the instructions and click Next.
3. On the Identify Resources page, click Add Resource Type. From the Resource Type list, select the resource that you
want to discover.
4. Enter the management IP address of the resource in the IP/Hostname Range field. To discover a resource in an IP range,
provide a starting and ending IP address.
5. In the Resource State list, select Managed or Unmanaged.
6. For PowerFlex node, to discover resources into a selected node pool instead of the global pool (default), select the node
pool from the Discover into Node Pool list.
7. Select the appropriate credential from the Credentials list. See the table above for details.
8. For PowerFlex node, if you want PowerFlex Manager to automatically reconfigure the iDRAC IP addresses of the nodes it
discovers, select the Reconfigure discovered nodes with new management IP and credentials check box. This option
is not selected by default, because it is faster to discover the nodes if you bypass the reconfiguration.
NOTE: iDRAC can be discovered using hostname also.

NOTE: For the Resource Type Node, you can use a range with hostname or IP address, provided the hostname has a
valid DNS entry.

9. For PowerFlex node, select the Auto configure nodes to send alerts to PowerFlex Manager check box to have
PowerFlex Manager automatically configure nodes to send alerts to PowerFlex Manager.
10. Click Next to start discovery. On the Discovered Resources page, select the resources from which you want to collect
inventory data and click Finish. The discovered resources are listed on the Resources page.

Install the operating system


Use this procedure to install Red Hat Enterprise Linux or the embedded operating system 7.x from the iDRAC web interface.

Prerequisites
● Verify that the customer Red Hat Enterprise Linux or the embedded operating system ISO is available and is located in the
Intelligent Catalog code directory.
● Ensure that the following are installed for specific operating systems:


Operating system Requirement

Red Hat Enterprise Linux or embedded operating system 7.x ○ hwinfo


○ net-tools
○ pciutils
○ ethtool

Steps
1. From the iDRAC web interface, launch the virtual console.
2. Click Connect Virtual Media.
3. Under Map CD/DVD, click Browse for the appropriate ISO.
4. Click Map Device > Close.
5. From the Boot menu, select Virtual CD/DVD/ISO, and click Yes.
6. From the Power menu, select Reset System (warm boot) or Power On System if the machine is off.
7. Set the boot option to UEFI.
a. Press F2 to enter system setup.
b. Under System BIOS > Boot setting, select UEFI as the boot mode.
NOTE: Ensure that the BOSS card is set as the primary boot device from the boot sequence settings. If the BOSS
card is not the primary boot device, reboot the server and change the UEFI boot sequence from System BIOS >
Boot settings > UEFI BOOT settings.

c. Click Back > Back > Finish > Yes > Finish > OK > Yes.
8. Select Install Red Hat Enterprise Linux/CentOS 7.x from the menu.
NOTE: Wait until all configuration checks pass and the screen for language selection is displayed.

9. Select Continue from the language selection screen.


10. From the Software Selection menu, choose Minimal Install and click Done at the top of the screen.
11. From KDUMP clear the Enable kdump option and click Done.
12. From Network & Hostname set the hostname at the bottom of the screen and click Done.
13. From Installation Destination, select DELLBOSS VD and click Done.
a. Under Partitioning, select the radio button for I will configure partitioning and click Done.
b. Click the link Click here to create them automatically.
c. Partitions now display under the new Red Hat Enterprise Linux/embedded operating system 7.x installation.
d. Delete the /home partition by selecting it and clicking the - button at the bottom.
e. Click Done at the top of the screen. From the Summary of Changes dialog box, select Accept Changes.
14. Click Begin Installation.
15. Select Root Password, set the root password, click Done.
16. Click Finish configuration. Wait for installation to complete and click Reboot.
17. Wait for the system to come back online.

Configure the host


Configure the host after installing the embedded operating system. Images and output are provided for example purposes only.
While performing this procedure, if you see VLANs other than the following listed for services discovered in lifecycle mode,
assign and match the VLAN details with an existing setup. The VLAN names are for example only and may not match the
configured system. See Configuring the network for more information.
See Cabling the PowerFlex R650/R750/R6525 nodes for information on cabling the PowerFlex nodes.

Related information
Cabling the PowerFlex R650/R750/R6525 nodes


Install the nvme-cli tool and iDRAC Service Module (iSM)
Use this procedure to install dependency packages for Red Hat Enterprise Linux or embedded operating system.

About this task


A minimal install of Red Hat Enterprise Linux or the embedded operating system 7.x includes most dependency packages, except usbutils, net-tools, and redhat-lsb-core. Install these packages manually using the following procedure.
NOTE: If PowerFlex Manager is installed, do not use this procedure to install the dependency packages. See Related
information for information on expanding a PowerFlex node using PowerFlex Manager.

Steps
1. Copy the Red Hat Enterprise Linux or embedded operating system 7.x image to the /tmp folder of the PowerFlex storage-
only node using SCP or WINSCP.
2. Use PuTTY to log in to the PowerFlex storage-only node.
3. Run #cat /etc/*-release to identify the installed operating system.
4. Type # mount -o loop /tmp/<os.iso> /mnt to mount the iso image at the /mnt mount point.
5. Change directory to /etc/yum.repos.d
6. Type # touch <os.repo> to create a repository file.
7. Edit the file using a vi command and add the following lines:

[repository]
name=os.repo
baseurl=file:///mnt
enabled=1
gpgcheck=0

8. Type #yum repolist to test that you can use yum to access the directory.
9. Install the dependency packages per the installed operating system. To install dependency packages, enter:

# yum install usbutils


# yum install net-tools
# yum install redhat-lsb-core

10. Install the iDRAC Service Module, as follows:


a. Go to Dell EMC Support, and download the iSM package as per Intelligent Catalog.
b. Log in to the PowerFlex storage-only node.
c. Create a folder that is named ism in the /tmp directory. Type cd /tmp and create a folder that is named ism.
d. Use WinSCP to copy the iSM package to the /tmp/ism folder.
e. Gunzip and untar the file using following commands:
NOTE: Package name may differ from the following example depending on the Intelligent Catalog version.

# gunzip OM-iSM-Dell-Web-LX-340-1471_A00.tar.gz
# tar -xvf OM-iSM-Dell-Web-LX-340-1471_A00.tar

f. Change directory as per the installed operating system.


g. Type # rpm -ivh dcism-3.x.x-xxxx.el7.x86_64.rpm to install the package.
h. Type # systemctl status dcismeng.service to verify that dcism service is running on the PowerFlex storage-
only node.


NOTE: If dcismeng.service is not running, type systemctl start dcismeng.service to start the
service.
i. Type # ip a |grep idrac to verify link local IP address (169.254.0.2) is automatically configured to the interface
idrac on the PowerFlex storage-only node after successful installation of iSM.

j. Type # ping 169.254.0.1 to verify that the PowerFlex storage-only node operating system can communicate with the iDRAC (the default link local IP address for the iDRAC is 169.254.0.1).
11. Type # yum install nvme-cli to install the nvme-cli package.
12. Type # nvme list to ensure that the disk firmware version matches the Intelligent Catalog values.

If the disk firmware version does not match the Intelligent Catalog values, see Related information for information on
upgrading the firmware.

Related information
Adding a PowerFlex R640/R740xd/R840 node to a PowerFlex Manager service in managed mode
Upgrade the disk firmware for NVMe drives

Upgrade the disk firmware for NVMe drives


Use this procedure to upgrade firmware for Dell Express Flash NVMe PCIe SSDs.

Steps
1. Go to Dell EMC Support, and download the Dell Express Flash NVMe PCIe SSD firmware as per Intelligent Catalog.
2. Log in to the PowerFlex storage-only node.
3. Create a folder in the /tmp directory named diskfw.
4. Use WinSCP to copy the downloaded backplane package to the /tmp/diskfw folder.
5. Change directory to cd /tmp/diskfw/.
6. Change the access permissions of the file using the following command:
NOTE: Package name may differ from the following example depending on the Intelligent Catalog version.

chmod +x Express-Flash-PCIe-SSD_Firmware_R37D0_LN64_1.1.1_A02_01.BIN

7. Enter ./Express-Flash-PCIe-SSD_Firmware_R37D0_LN64_1.1.1_A02_01.BIN to run the package.


8. Follow the instructions that are provided for updating the firmware.
9. When prompted to upgrade, type Y and press Enter. Do the same when prompted for reboot.

Related information
Install the nvme-cli tool and iDRAC Service Module (iSM)


Install PowerFlex components on PowerFlex storage-


only nodes
Use this procedure to install PowerFlex components on PowerFlex storage-only nodes with or without NVMe drives.

Steps
1. From the management jump server VM, extract all required Red Hat files from the
VxFlex_OS_3.x.x_xxx_Complete_Software/ VxFlex_OS_3.x.x_xxx_RHEL_OEL7 package to the Red Hat
node root folder.
2. Use WinSCP to copy the following Red Hat files from the jump host folder to the /tmp folder on the Red Hat Enterprise
Linux node:
● EMC-ScaleIO-sds-3.x-x.xxx.el7.x86_64.rpm
● EMC-ScaleIO-sdr-3.x-x.xxx.el7.x86_64.rpm
● EMC-ScaleIO-mdm-3.x-x.xxx.el7.x86_64.rpm
● EMC-ScaleIO-lia-3.x-x.xxx.el7.x86_64.rpm
From the appropriate Intelligent Catalog folder, copy the PERC CLI perccli-7.x-xxx.xxxx.rpm rpm package.
NOTE: Verify that the PowerFlex version you install is the same as the version on other Red Hat Enterprise Linux
servers.
3. Use PuTTY and connect to the PowerFlex management IP address of the new node.
4. Go to /tmp, and install the LIA software (use the admin password for the token value).

TOKEN=<admin password> rpm -ivh /tmp/EMC-ScaleIO-lia-3.x-x.xxx.el7.x86_64.rpm

5. Type #rpm -ivh /tmp/EMC-ScaleIO-sds-3.x-x.xxx.el7.x86_64.rpm to install the storage data server (SDS)
software.
6. To enable replication, type rpm -ivh /tmp/EMC-ScaleIO-sdr-3.x-x.xxx.el7.x86_64.rpm to install the storage data replication (SDR) software.
7. Type rpm -ivh /tmp/perccli-7.x-xxx.xxxx.rpm to install the PERC CLI.
8. Reboot the PowerFlex storage-only node by typing reboot.
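After the node comes back up, you can optionally confirm that the packages installed cleanly; this is a quick check only, and the package versions vary with the PowerFlex build in your Intelligent Catalog.

# List the PowerFlex (ScaleIO) packages that were installed (LIA, SDS, SDR, MDM).
rpm -qa | grep -i scaleio
# Confirm that the PERC CLI package is present.
rpm -qa | grep -i perccli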

Add a new PowerFlex storage-only node without


NVDIMM to PowerFlex
Use this procedure to add a PowerFlex storage-only node without NVDIMM.

Prerequisites
Confirm that the PowerFlex system is functional and no rebuild or rebalances are running. For PowerFlex 3.5 or later, use the
PowerFlex GUI presentation server to add a PowerFlex storage-only node to PowerFlex.

Steps
1. If you are using PowerFlex GUI presentation server:
a. Log in to the PowerFlex GUI presentation server.
b. Click Configuration > SDSs.
c. Click Add.
d. Enter the SDS Name.
e. Select the Protection Domain and SDS Port.
f. Enter the IP address for data1, data2, data3 (if required), and data4 (if required).
g. Select SDS and SDC, as the appropriate communication roles for all the IP addresses that are added.
h. Click SDS.
2. If you are using a PowerFlex version prior to 3.5:


a. Log in to the PowerFlex GUI, and connect to the primary MDM.


b. Click the SDS pane.
c. Click Add and enter the SDS IP addresses to be added.
d. Choose the SDS and SDC connection, click ADD IP after adding each data IP address for a particular SDS.
e. Click ADD SDS.
f. Type the name of the PowerFlex node.
g. Click the + icon next to the IP address and type in the PowerFlex Data1 IP Address, PowerFlex Data 2 IP Address,
PowerFlex Data 3 IP Address, and PowerFlex Data 4 IP Address as recorded earlier. Do not add the management IP
address to the list. The data 3 (if required) and data 4 (if required) IP addresses are only applicable for the LACP bonding
NIC port design. A minimum of two logical data networks are supported. Optionally, you can configure four logical data
networks.
NOTE: There should be no more than two IP addresses listed and ensure both Communication Roles are selected
for both IP addresses.

h. Click OK and verify that the SDS was successfully added.


i. Click Close.
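If you prefer the command line over the PowerFlex GUI, the following sketch adds the same SDS from the primary MDM. The flag names are assumptions based on the SCLI and should be confirmed with scli --help for your PowerFlex version; the SDS name, IP addresses, and protection domain name are placeholders.

scli --login --username admin
# Add the new SDS with its data IP addresses (SDS and SDC roles) to the target protection domain (placeholder names).
scli --add_sds --sds_name pfsonode-05 --sds_ip 192.168.151.25,192.168.152.25 --sds_ip_role all --protection_domain_name PD-1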

Configuring NVDIMM devices for PowerFlex storage-only nodes
● Verify that Red Hat Enterprise Linux or the embedded operating system is installed and that the network is configured.
● Verify that the PowerFlex storage-only node meets the requirements in Configuring storage-only nodes with NVDIMMs.
● Depending on the drive type, use the following to install PowerFlex:
○ SSD - Installs PowerFlex components on PowerFlex storage-only nodes without NVMe drives.
○ NVMe drives - Installs PowerFlex components on NVMe storage nodes.
NOTE: If adding a PowerFlex storage-only node with NVDIMMs to an existing protection domain, see Identify NVDIMM
acceleration pool in a protection domain to ensure a protection domain with other NVDIMM accelerated devices.

If adding PowerFlex storage-only nodes with NVDIMMs to a new protection domain, see Create an NVDIMM protection
domain. Dell recommends that a minimum of six PowerFlex storage-only nodes be in a protection domain.

Verify the PowerFlex version


Use this procedure to verify the installed PowerFlex version.

Prerequisites
Skip this procedure if NVDIMM is not available in the PowerFlex nodes.

Steps
1. Log in to the jump server.
2. SSH to primary MDM.
3. Log in with administrator credentials.
scli --login --username admin --password 'admin_password'
4. Type scli --version to verify the PowerFlex version.
Sample output:
DellEMC ScaleIO Version: R3_x.x.xxx


Verify the operating system version


Use this procedure to verify that Red Hat Enterprise Linux or embedded operating system 7.x is installed.

Steps
1. Log in to the jump server.
2. SSH to the PowerFlex storage-only node.
3. Enter either of the following commands to verify the operating system version:
● cat /etc/*-release
● rpm -qa | grep release. For example, [root@sio-mgmt-26 ~]# rpm -qa | grep release centos-release-7-7.1908.0.el7.centos.x86_64

Verify necessary RPMs


Use this procedure to verify necessary RPMs are installed.

About this task


The following RPMs must be installed:
● ndctl
● ndctl-libs
● daxctl-libs
● libpmem
● libpmemblk
The sample output in this procedure is for reference purposes only. The versions may vary depending on the Intelligent Catalog used.

Steps
1. Log in to the jump server.
2. SSH to the PowerFlex storage-only node.
3. Type yum list installed ndctl ndctl-libs daxctl-libs libpmem libpmemblk
Sample output:

[root@eagles-r640-nvso-235 ~]# yum list installed ndctl ndctl-libs daxctl-libs


libpmem libpmemblk
Failed to set locale, defaulting to C
Loaded plugins: product-id, search-disabled-repos, subscription-manager
This system is not registered with an entitlement server. You can use subscription-
manager to register.
Repodata is over 2 weeks old. Install yum-cron? Or run: yum makecache fast
Installed Packages
daxctl-libs.x86_64 62-1.el7
@razor-internal
libpmem.x86_64 1.4-3.el7
@razor-internal
libpmemblk.x86_64 1.4-3.el7
@razor-internal
ndctl.x86_64 62-1.el7
@razor-internal
ndctl-libs.x86_64 62-1.el7
@razor-internal

4. If the RPMs are not installed, type yum install -y <rpm> to install the RPMs.


Activate NVDIMM regions


Use this procedure to activate NVDIMM regions.

Steps
1. Log in to the jump server.
2. SSH to the PowerFlex storage-only node.
3. List the NVDIMM regions by typing the following command:
ndctl list -R
4. Type ndctl destroy-namespace all -f to destroy all default namespaces. If this fails to reclaim space and you have already sanitized the NVDIMMs, type ndctl start-scrub to scrub the NVDIMMs.
5. For each discovered region, type ndctl create-namespace -r region[x] -m raw -f (x corresponds to the
region number) to re-create the namespace.
For example, type:

ndctl create-namespace -r region0 -m raw -f


6. Type ndctl list -N to verify the namespaces (the number of namespaces should match number of NVDIMMs in the
PowerFlex appliance).

Create namespaces and DAX devices


Use this procedure to create namespaces, and configure each namespace as a devdax device.

Steps
1. SSH to the PowerFlex storage-only node.
2. For each NVDIMM, type (starting with namespace0.0):
ndctl create-namespace -f -e namespace[x].0 --mode=devdax --align=4k --no-autolabel

For example, ndctl create-namespace -f -e namespace0.0 --mode=devdax --align=4k --no-


autolabel

{"dev":"namespace0.0","mode":"devdax","map":"dev","size":"15.75 GiB
(16.91 GB)","uuid":"348d510e-dc70-4855-a6ca-6379046896d5","raw_uuid":
"4ca5cda2-ebd4-4894-aa4e-0cfc823745e2","daxregion":{"id":0,"size":"15.75 GiB (16.91
GB)","align":4096,"devices":[{"chardev":"dax0.0","size":"15.75 GiB (16.91 GB)"}]},
"numa_node":0}

3. Repeat for each NVDIMM namespace.
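To confirm the result, each NVDIMM should now expose a devdax character device; the following standard commands verify this.

# List all namespaces; each should report the mode as devdax.
ndctl list -N
# The corresponding character devices appear as /dev/daxX.Y, one per NVDIMM.
ls -l /dev/dax*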

Identify NVDIMM acceleration pool in a protection domain


Use this procedure to identify a protection domain that is configured with NVDIMM acceleration pool using the PowerFlex GUI.
NVDIMM acceleration pools are required for compression.

Steps
1. Log in to the PowerFlex GUI presentation server as an administrative user.
2. Click Configuration > Acceleration Pool.
3. Note the acceleration pool name. The name is required while creating a compression storage pool.


Identify NVDIMM acceleration pool in a protection domain using a PowerFlex version prior to 3.5
Use this procedure to identify a protection domain that is configured with NVDIMM acceleration pool using a PowerFlex version
prior to 3.5. NVDIMM acceleration pools are required for compression.

Steps
1. Log in to the PowerFlex GUI as an administrative user.
2. Select Backend > Storage.
3. Filter By Storage Pools.
4. Expand the SDSs in the protection domains. Under the Acceleration Type column, identify the protection domain with Fine
Granularity Layout. This is a protection domain that has been configured with NVDIMM accelerated devices.
5. The acceleration pool name (in this example, AP1) is listed under the column Accelerated On. This is needed when creating
a compression storage pool.

PowerFlex CLI (SCLI) keys for acceleration pools


This lists the PowerFlex CLI (SCLI) keys to use for acceleration pools and data compression.
All scli commands are performed on the Primary MDM.
Key:
● [PD] = Protection Domain
● [APNAME] = Acceleration Pool Name
● [SPNAME] = Storage Pool Name
● [VOLNAME] = Volume Name
● [SDSNAME] = SDS Name
● [SDS-IPs] = PowerFlex Data IP addresses
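As an illustration only, the keys above slot into the SCLI commands used in the remainder of this chapter; for example, the device-add command shown later in Add DAX devices to the acceleration pool takes the following form (substitute real names for the bracketed keys).

# Log in on the primary MDM before running any scli command.
scli --login --username admin
# Add a DAX device on SDS [SDSNAME] to acceleration pool [APNAME].
scli --add_sds_device --sds_name [SDSNAME] --device_path /dev/dax0.0 --acceleration_pool_name [APNAME] --force_device_takeover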

Create an NVDIMM protection domain


Use this procedure to create a protection domain.

About this task


Use this procedure only if you are expanding a PowerFlex node to a new protection domain.

Steps
1. If using PowerFlex GUI presentation server:
a. Log in to the PowerFlex GUI presentation server.
b. Click Dashboard > Configuration > Protection Domains, and click ADD.
c. In the Add Protection Domain window, enter the name of the protection domain.
d. Click ADD PROTECTION DOMAIN.
2. If using a PowerFlex version prior to 3.5:
a. Log in to the PowerFlex GUI.
b. Click Backend > Storage.
c. Right-click PowerFlex System, and click + Add Protection Domain.
d. Enter the protection domain name, and click OK.


Create an acceleration pool


Use this procedure to create an acceleration pool that is required for a compression storage pool.

Steps
1. If using PowerFlex GUI presentation server:
a. Log in to the PowerFlex GUI presentation server.
b. Click Dashboard > Configuration > Acceleration Pools, and ADD.
c. Enter the acceleration pool name in the Name field of the Add Acceleration Pool window.
d. Select NVDIMM as the Pool Type, and select the Protection Domain from the drop-down list.
e. In the Add Devices section, select the Add Devices to All SDSs check box only if the devices need to be added on all SDSs. If not, leave it unchecked.
f. In the Path and Device Name fields, enter the device path and device name, respectively. Select the appropriate SDS from the drop-down menu. Click Add Devices.
g. Repeat the previous step to add more devices.
h. Click Add Acceleration Pool.
2. If using a PowerFlex version prior to 3.5:
a. Log in to the PowerFlex GUI.
b. Click Backend > Devices.
c. Right-click Protection Domain, and click + Add > Add Acceleration Pool.
d. Enter the acceleration pool name.
e. Select NVDIMM.
f. Click OK. DAX devices are added later.
g. Click OK > Close.

Create a storage pool


Use this procedure to create a storage pool.

Steps
1. If using PowerFlex GUI presentation server:
a. Log in to the PowerFlex GUI presentation server.
b. Click Dashboard > Configuration > Storage Pools, and Add.
c. Enter the storage pool name in the Name field of the Add Storage pool window.
d. Select Protection Domain from the drop-down list.
e. Select SSD as the Media Type from the drop-down, and select FINE for Data Layout Granularity.
f. Select the Acceleration Pool from the drop-down menu, and click Add Storage Pool.
2. If using a PowerFlex version prior to 3.5:
a. Log in to the PowerFlex GUI.
b. Select Backend > Storage.
c. Right-click Protection Domain, and click + Add > Add Storage Pool.
d. Add the new storage pool details:
● Name: Provide name
● Media Type: SSD
● Data Layout: Fine Granularity
● Acceleration Pool: Acceleration pool that was created previously
● Fine Granularity: Enable Compression
e. Click OK > Close


Add DAX devices to the acceleration pool


Use this procedure to add DAX devices to the acceleration pool.

Steps
1. Log in to the primary MDM using SSH.
2. For each SDS with NVDIMM, type the following to add NVDIMM devices to the acceleration pool:
ndctl create-namespace -f -e namespace[x].0 --mode=devdax --align=4k --no-autolabel
scli --add_sds_device --sds_name <SDS_NAME> --device_path /dev/dax0.0 --acceleration_pool_name <ACCP_NAME> --force_device_takeover

3. Repeat for the remaining DAX devices.

Add SDS to NVDIMM protection domain


Use this procedure to add SDSs to an NVDIMM protection pool.

About this task


You can also add an SDS to an NVDIMM protection pool using the PowerFlex GUI.

Steps
1. SSH to the PowerFlex storage-only node.
2. Type lsblk to get the disk devices.
Sample output:

3. Type ls -lia /dev/da* to get the DAX devices.


Sample output:


4. If using PowerFlex GUI presentation server:


a. Log in to the PowerFlex GUI presentation server.
b. Click Configuration > SDSs
c. Select the newly added SDSs by clicking on the check box.
d. Click ADD DEVICE > Acceleration Device.
e. In the Path and Name fields, enter the device path and name respectively. Select the acceleration pool you recorded in
the Drive information table.
f. Click ADD Device.
g. Expand the ADVANCED (OPTIONAL) section, and select yes for Select Force Device Take Over.
h. Click ADD DEVICES.
5. If using a PowerFlex version prior to 3.5:
a. Log in to the PowerFlex GUI.
b. Select Backend > Storage.
c. Right-click the Protection Domain, and select Add > Add SDS.
d. Add SDS to the NVDIMM protection domain:
● Name: SDS Name
● IP address
○ Data 1 IP address with SDC and SDS enabled
○ Data 2 IP address with SDC and SDS enabled
○ Applicable for an LACP bonding NIC port design:
■ Data 3 IP address (if required) with SDC and SDS enabled
■ Data 4 IP address (if required) with SDC and SDS enabled
● Add devices.
● Add acceleration devices.

e. Click Advanced.
● Select Force Device Takeover.


f. Click OK > Close.


6. Repeat for all PowerFlex storage-only nodes.
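Once all nodes are added, a quick verification from the primary MDM confirms that every SDS and its devices are present; this sketch uses standard SCLI query commands.

scli --login --username admin
# List every SDS in the system with its protection domain, IP addresses, and state.
scli --query_all_sds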

Create a compressed volume


Use this procedure to create a compressed volume.

Steps
1. If using PowerFlex GUI presentation server:
a. Log in to the PowerFlex GUI presentation server.
b. Click Dashboard > Configuration > Volumes, and ADD.
c. In the ADD Volume window, enter the name in the Volume name field.
d. Select THIN or THICK as the Provisioning option.
e. Enter the size in the Size field. Select the Storage Pool from the drop-down menu.
f. Click Add Volume.
2. If using a PowerFlex version prior to 3.5:
a. Log in to the PowerFlex GUI.
b. Click Frontend > Volumes.
c. Right-click Storage Pool, and click Add Volume.
d. Add the volume details:
● Name: Volume name
● Size: Required volume size
● Enable compression
e. Click OK > Close.
f. Right-click the volume, and select Map.
g. Map to all hosts.
h. Click OK.
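For reference, an equivalent command-line sketch using the keys from PowerFlex CLI (SCLI) keys for acceleration pools is shown below; the flag names are assumptions to confirm with scli --help, and the 16 GB size and SDC IP address are placeholders only.

# Create a thin 16 GB volume in the fine-granularity (compression-enabled) storage pool.
scli --add_volume --protection_domain_name [PD] --storage_pool_name [SPNAME] --volume_name [VOLNAME] --size_gb 16 --thin_provisioned
# Map the volume to an SDC (repeat for each SDC that requires access; the SDC IP shown is a placeholder).
scli --map_volume_to_sdc --volume_name [VOLNAME] --sdc_ip 192.168.152.21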

Add drives to PowerFlex


Use this procedure to add SSD or NVMe drives to PowerFlex.

Steps
1. If using PowerFlex GUI presentation server to enable zero padding on a storage pool:
a. Connect to the PowerFlex GUI presentation server through https://<ipaddress>:8443 using MDM.


b. Click Storage Pools from the left pane, and select the storage pool.
c. Select Settings from the drop-down menu.
d. Click Modify > General Settings from the drop-down menu.
e. Click Enable Zero Padding Policy > Apply.
NOTE: After the first device is added to a specific pool, you cannot modify the zero padding policy. FG pool is always
zero padded. By default, zero padding is disabled only for MG pool.

2. If using a PowerFlex version prior to 3.5 to enable zero padding on a storage pool:
a. Select Backend > Storage, and right-click Select By Storage Pools from the drop-down menu.
b. Right-click the storage pool, and click Modify zero padding policy.
c. Select Enable Zero Padding Policy, and click OK > Close.
NOTE: Zero padding cannot be enabled when devices are available in the storage pools.

3. Do one of the following, depending on whether CloudLink is enabled:
● If CloudLink is enabled, see one of the following procedures, depending on the devices:
○ Encrypt PowerFlex hyperconverged (SVM) or storage-only devices (SED drives)
○ Encrypt PowerFlex hyperconverged (SVM) or storage-only devices (non-SED drives)
● If CloudLink is disabled, use PuTTY to access the Red Hat Enterprise Linux or embedded operating system node.
When adding NVMe drives, keep a separate storage pool for the PowerFlex storage-only node.

4. Note the devices by typing lsblk -p or nvme list.


5. If you are using PowerFlex GUI presentation server:


a. Connect to the PowerFlex GUI presentation server through https://<ipaddress>:8443 using MDM.
b. Click Configuration > SDSs.
c. Locate the newly added PowerFlex SDS, right-click, select Add Device, and choose Storage device from the drop-
down menu.
d. Type /dev/nvmeXXn1, where XX is the value from step 4. Provide the storage pool, verify the device type, and click Add Device. Add all the required devices in this way, and click Add Devices.
NOTE: If the devices are not getting added, select Advanced Settings > Advanced Takeover from the Add Device Storage page.

e. Repeat these steps on all the SDS, where you want to add the devices.
f. Ensure all the rebuild and balance activities are successfully completed.
g. Verify the space capacity after adding the new node.
6. If you are using a PowerFlex version prior to 3.5:
a. Connect to the PowerFlex GUI.
b. Click Backend.
c. Locate the newly added PowerFlex SDS, right-click, and select Add Device.
d. Type /dev/nvmeXXn1, where XX is the value from step 4.
e. Select the Storage Pool.


NOTE: If the existing PD has Red Hat Enterprise Linux nodes, replace or expand with Red Hat Enterprise Linux. If
the existing PD has embedded operating system nodes, replace or expand with embedded operating system.

f. Repeat these steps for each device.


g. Click OK > Close. A rebalance of the PowerFlex storage-only node begins.
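The same device addition can be scripted from the primary MDM; this sketch reuses the add_sds_device command shown earlier for DAX devices, with a storage pool instead of an acceleration pool. The device path and bracketed names are placeholders, and the flags should be confirmed with scli --help for your version.

# Add an NVMe (or SSD) device on the new SDS to the target storage pool.
scli --add_sds_device --sds_name [SDSNAME] --device_path /dev/nvme0n1 --storage_pool_name [SPNAME]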

Add storage data replication to PowerFlex


Use this procedure to add storage data replication to PowerFlex.

About this task


This procedure is required when not using PowerFlex Manager.

Prerequisites
Replication is supported on PowerFlex storage-only nodes with dual CPU. The node should be migrated to an LACP bonding NIC
port design.

Steps
1. Connect to the PowerFlex GUI presentation server through https://<ipaddress>:8443 using MDM.
2. Click the Protection tab in the left pane.
NOTE: In the PowerFlex GUI version 3.5 or prior, this tab is Replication.

3. Click SDR > Add, and enter the storage data replication name.
4. Choose the protection domain.
5. Enter the IP address to be used and click Add IP. Repeat this for each IP address and click Add SDR.
NOTE: While adding storage data replication, Dell recommends adding IP addresses for flex-data1-<vlanid>, flex-data2-<vlanid>, flex-data3-<vlanid> (if required), and flex-data4-<vlanid> (if required), along with flex-rep1-<vlanid> and flex-rep2-<vlanid>. Choose the Application and Storage roles for all data IP addresses, and choose the External role for the replication IP addresses.

6. Repeat steps 3 through 5 for each storage data replicator you are adding. If you are expanding a replication-enabled PowerFlex cluster, skip steps 7 through 11.
7. Click Protection > Journal Capacity > Add, and provide the capacity percentage as 10%, which is the default. You can
customize if needed.
8. Extract and add the MDM certificate:
NOTE: You can perform steps 8 through 13 only when the Secondary Site is up and running.

a. Log in to the primary MDM, by using the SSH on source and destination.
b. Type scli --login --username admin. Provide the MDM cluster password when prompted.
c. See the following example and run the command to extract the certificate on source and destination primary MDM.
Example for source: scli --extract_root_ca --certificate_file /tmp/Source.crt
Example for destination: scli --extract_root_ca --certificate_file /tmp/destination.crt
d. Copy the extracted certificate of the source (primary MDM) to the destination (primary MDM) using SCP, and conversely.
e. See the following example to add the copied certificate:
Example for source: scli --add_trusted_ca --certificate_file /tmp/destination.crt --comment
destination_crt
Example for destination: scli --add_trusted_ca --certificate_file /tmp/source.crt --comment
source_crt
f. Type scli --list_trusted_ca to verify the added certificate.
9. Create the remote consistency group (RCG).
Connect to the PowerFlex GUI presentation server through https://<ipaddress>:8443.
NOTE: Use the primary MDM IP and credentials to log in to the PowerFlex cluster.


10. Click the Protection tab from the left pane. If you are using a PowerFlex version 3.5 or prior, click the Replication tab.
11. Choose RCG (Remote Consistency Group), and click ADD.
12. On the General tab:
a. Enter the RCG name and RPO.
b. Select the Source Protection Domain from the drop-down list.
c. Select the target system and Target protection domain from the drop-down list, and click Next.
d. Under the Pair tab, select the source and destination volumes.
NOTE: The source and destination volumes must be identical in size and provisioning type. Do not map the volume
on the destination site of a volume pair. Retain the read-only permission. Do not create a pair containing a
destination volume that is mapped to the SDCs with a read_write permission.

e. Click Add pair, select the added pair to be replicated, and click Next.
f. In the Review Pairs tab, select the added pair, and select Add RCG, and start replication according to the requirement.

Add a PowerFlex node to a PowerFlex Manager service in lifecycle mode
Use this procedure to update the inventory and service details in PowerFlex Manager for a new PowerFlex node.

Steps
1. Update the inventory for vCenter (vCSA), switches, gateway VM, and nodes:
a. Click Resources on the home screen.
b. Select the vCenter, switches (applicable only for full networking), gateway VM, and newly added nodes.
c. Click Run Inventory.
d. Click Close.
e. Wait for the job in progress to complete.
2. Update Services details:
a. Click Services.
b. Choose the service on which a new node is expanded and click View Details.
c. On the Services details screen, choose Update Service Details.
d. Choose the credentials for the node and SVM and click Next.
e. On the Inventory Summary, verify that the newly added nodes are reflected under Physical Node, and click Next.
f. On the Summary page, verify the details and click Finish.


54
Performing a PowerFlex hyperconverged
node expansion
Perform the manual expansion procedure to add a PowerFlex R650/R750 hyperconverged node to PowerFlex Manager services that are discovered in lifecycle mode.
Before adding a PowerFlex node, you must complete the following initial set of expansion procedures:
● Preparing to expand a PowerFlex appliance
● Configuring the network
Type systemctl status firewalld to verify if firewalld is enabled. If disabled, see the Enabling firewall service on
PowerFlex storage-only nodes and SVMs KB article to enable firewalld on all SDS components.

Discover resources
Use this procedure to discover and allow PowerFlex Manager access to resources in the environment. Provide the management
IP address and credential for each discoverable resource.

Prerequisites
Verify that the iDRAC network settings are configured. See Configure iDRAC network settings for more information.

About this task


Dell EMC recommends using separate operating system credentials for SVM and VMware ESXi. For information about creating
or updating credentials in PowerFlex Manager, click Settings > Credentials Management and access the online help.
During node discovery, you can configure iDRAC nodes to automatically send alerts to PowerFlex Manager. You can also change
the IP address of the iDRAC nodes and discover them. See Reconfigure the discovered nodes with new management IP and
credentials in the Dell EMC PowerFlex Appliance Administration Guide. If the PowerFlex nodes are not configured for alert
connector, Secure Remote Services does not receive critical or error alerts for those resources.
The following table describes how to configure resources in managed mode:

Resource type Resource state Example


PowerFlex nodes Managed PowerEdge iDRAC management IP address
If you want to perform firmware updates or deployments on a
discovered node, change the default state to managed.
Perform firmware or catalog updates from the Services page,
or the Resources page.

NOTE: For partial network deployments, you do not need to discover the switches. The switches need to be pre-
configured. For sample configurations for Dell PowerSwitch, Cisco Nexus, and Arista switches, see the Dell EMC PowerFlex
Appliance Administration Guide.
The following are the specific details for completing the Discovery wizard steps:


Resource type | IP address range | Resource state | Discover into node pool | Credentials: name | Credentials: Username | Credentials: password | Credentials: SNMPv2 community string
Node | 192.168.101.21-192.168.101.24 | Managed | PowerFlex node pool | PowerFlex appliance iDRAC default | root | calvin | customer provided
Switch | 192.168.101.45-192.168.101.46 | Managed | N/A | access switches | admin | admin | public
VM Manager | 192.168.105.105 | Managed | N/A | vCenter | administrator@vsphere.local | P@ssw0rd! | N/A
Element Manager** | 192.168.105.120, 192.168.105.121 | Managed | N/A | CloudLink | secadmin | Secadmin!! | N/A

** This is optional. For a new CloudLink Center deployment, the CloudLink Center is discovered automatically.

Prerequisites
● Configure the iDRAC network settings.
● Gather the IP addresses and credentials that are associated with the resources.
NOTE: PowerFlex Manager also allows you to use name-based searches to discover a range of nodes whose iDRAC IP addresses were assigned through DHCP. For more information about this feature, see the Dell EMC PowerFlex Manager Online Help.

Steps
1. On the PowerFlex Manager Getting Started page, click Discover Resources.
2. On the Welcome page of the Discovery Wizard, read the instructions and click Next.
3. On the Identify Resources page, click Add Resource Type. From the Resource Type list, select the resource that you
want to discover.
4. Enter the management IP address of the resource in the IP/Hostname Range field. To discover a resource in an IP range,
provide a starting and ending IP address.
5. In the Resource State list, select Managed or Unmanaged.
6. For PowerFlex node, to discover resources into a selected node pool instead of the global pool (default), select the node
pool from the Discover into Node Pool list.
7. Select the appropriate credential from the Credentials list. See the table above for details.
8. For PowerFlex node, if you want PowerFlex Manager to automatically reconfigure the iDRAC IP addresses of the nodes it
discovers, select the Reconfigure discovered nodes with new management IP and credentials check box. This option
is not selected by default, because it is faster to discover the nodes if you bypass the reconfiguration.
NOTE: iDRAC can be discovered using hostname also.

NOTE: For the Resource Type, you can use a range with hostname or IP address, provided the hostname has a valid
DNS entry.

9. For PowerFlex node, select the Auto configure nodes to send alerts to PowerFlex Manager check box to have
PowerFlex Manager automatically configure nodes to send alerts to PowerFlex Manager.
10. Click Next to start discovery. On the Discovered Resources page, select the resources from which you want to collect
inventory data and click Finish. The discovered resources are listed on the Resources page.

Related information
Configure iDRAC network settings


Expanding compute and storage capacity


Use this procedure to determine if sufficient capacity is available with the associated PowerFlex license before expanding your
system.

Steps
1. If you are using PowerFlex presentation server:
a. Log in to the PowerFlex presentation server.
b. Click Settings.
c. Copy the license and click Update License.
2. If you are using a version prior to PowerFlex 3.5:
a. Log in to the PowerFlex GUI and click Preferences > About. Note the current capacity available with the associated
PowerFlex license.
b. If the planned expansion does not exceed the available licensed capacity, proceed with the expansion process.
c. If the planned expansion exceeds the available licensed capacity, obtain an updated license with additional capacity. Engage the customer account team to obtain an updated license. Once an updated license is available, click Preferences > System Settings > License > Update License. Verify that the updated capacity is available by selecting Preferences > About.

Verify the CloudLink license


Use this procedure to verify that the customer has sufficient license to expand the nodes.

Prerequisites
Use the self-SED based license for SED drives, and capacity license for non-SED drives.

Steps
1. Log in to the CloudLink Center web console.
2. Click System > License.
3. Check the limit and verify that there is enough capacity for the expansion.

Install VMware ESXi


Use this procedure to install VMware ESXi on the PowerFlex node.

Prerequisites
Verify the customer VMware ESXi ISO is available and is located in the Intelligent Catalog code directory.

Steps
1. Log in to the iDRAC:
a. Connect to the iDRAC interface, and launch a virtual remote console on the Dashboard.
b. Select Connect Virtual Media.
c. Under Map CD/DVD, click Choose File > Browse, browse to the folder where the ISO file is saved in the Intelligent Catalog folder, select it, and click Open.
d. Click Map Device.
e. Click Boot > Virtual CD/DVD/ISO.
f. Click Power > Reset System (warm boot).
NOTE: If the system is powered off, you must map the ISO image, change Next Boot to Virtual CD/DVD/ISO, and power on the server. It boots with the ISO image, and a reset is not required.

g. Press F2 to enter system setup mode.


h. Under System BIOS > Boot setting, select UEFI as the boot mode.
NOTE: Ensure that the BOSS card is set as the primary boot device from the boot sequence settings. If the BOSS
card is not the primary boot device, reboot the server and change the UEFI boot sequence from System BIOS >
Boot settings > UEFI BOOT settings.

i. Click Back > Finish > Yes > Finish > OK > Finish > Yes.
2. Install VMware ESXi:
a. On the VMware ESXi installer screen, press Enter to continue.
b. Press F11 to accept the license agreement.
c. Select US Default as the keyboard layout.
d. At the Confirm Install screen, press F11.
e. When the installation is complete, remove the installation media before rebooting.
f. Press Enter to reboot the node.
NOTE: Set the first boot device to be the drive on which you installed VMware ESXi in Step 3.

3. Configure the host settings in Direct Console User Interface (DCUI):


a. Press F2 to customize the system.
b. Provide the password for the root user and press Enter.
c. Press F2 to enter the DCUI.
d. Go to Configure Management Network.
e. Set Network Adapters to VMNIC0 and VMNIC2.
f. See the ESXi Management VLAN ID field in the Workbook for the required VLAN value.
g. Choose Set static IPv4 address and network configuration. Set IPv4 ADDRESS, SUBNET MASK, and DEFAULT
GATEWAY configuration to the values defined in the Workbook.
h. Choose Use the following DNS server addresses and hostname. Go to DNS Configuration. See the Workbook for
the required DNS value.
i. Go to Custom DNS Suffixes. See the Workbook (local PFMC DNS).
j. Press Esc to exit.
k. Go to Troubleshooting Options.
l. Select Enable ESXi Shell and Enable SSH.
m. Press <Alt>-F1
n. Log in as root.
o. To enable the VMware ESXi host to work on the port channel, type:

esxcli network vswitch standard policy failover set -v vSwitch0 -l iphash


esxcli network vswitch standard portgroup policy failover set -p "Management
Network" -l iphash

p. Type exit to log off.


q. Press <Alt>-F2. Press Esc to return to the DCUI.
r. Select Troubleshooting Options > Disable ESXi Shell.
s. Go to DCUI IPv6 Configuration > IPv6 Configuration.
t. Disable IPv6.
u. Press ESC to return to the DCUI.
v. Type Y to commit the changes and the node restarts.
w. Verify host connectivity by pinging the IP address from the jump server, using the command prompt.
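As a minimal check (the address is a placeholder), ping the new host's ESXi management IP address from the jump server command prompt:

ping <ESXi management IP address>

A successful reply indicates that the host management network is reachable from the jump server.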


Create a new VMware ESXi cluster to add PowerFlex nodes
After installing VMware ESXi, use this procedure to create a new cluster, enable high-availability and DRS, and add a host to the
cluster.

About this task


If you are adding the host to a new cluster, follow the entire procedure. To add the host on an existing cluster, skip steps 1
through 6.

Prerequisites
Ensure that you have access to the customer vCenter.

Steps
1. From the vSphere Client home page, go to Home > Hosts and Clusters.
2. Select a data center.
3. Right-click the data center and select New Cluster.
4. Enter a name for the cluster.
5. Select vSphere DRS and vSphere HA cluster features.
6. Click OK.
7. Select the existing cluster or newly created cluster.
8. From the Configure tab, click Configuration > Quickstart.
9. Click Add in the Add hosts card.
10. On the Add hosts page, in the New hosts tab, add the hosts that are not part of the vCenter Server inventory by entering
the IP address, or hostname and credentials.
11. (Optional) Select the Use the same credentials for all hosts option to reuse the credentials for all added hosts.
12. Click Next.
13. The Host Summary page lists all the hosts to be added to the cluster with related warnings. Review the details and click
Next.
14. On the Ready to complete page, review the IP addresses or FQDN of the added hosts and click Finish.
15. Add the new licenses:
a. Click Menu > Administration.
b. In the Administration section, click Licensing.
c. Click Licenses.
d. From the Licenses tab, click Add.
e. Enter or paste the license keys for VMware vSphere and vCenter, one per line. Click Next.
Each license key is a 25-character string of letters and digits in the format XXXXX-XXXXX-XXXXX-XXXXX-XXXXX.
You can enter a list of keys in one operation. A new license is created for every license key you enter.
f. On the Edit license names page, rename the new licenses as appropriate and click Next.
g. Optionally, provide an identifying name for each license. Click Next.
h. On the Ready to complete page, review the new licenses and click Finish.

Migrating vCLS VMs


Use this procedure to migrate the vSphere Cluster Services (vCLS) VMs manually to the service datastore.

About this task


VMware vSphere 7.0Ux or ESXi 7.0Ux creates vCLS VMs when the vCenter Server Appliance (vCSA) is upgraded. This task helps
you migrate the vCLS VMs to the service datastore.


Steps
1. Log in to VMware vCSA HTML Client using the credentials.
2. Go to VMs and templates inventory or Administration > vCenter Server Extensions > vSphere ESX Agent Manager
> VMs to view the VMs.
The VMs are in the vCLS folder once the host is added to the cluster.
3. Right-click the VM and click Migrate.
4. In the Migrate dialog box, click Yes.
5. On the Select a migration type page, select Change storage only and click Next.
6. On the Select storage page, select the PowerFlex volumes for hyperconverged or ESXi-based compute-only node which
will be mapped after the PowerFlex deployment.
NOTE: The volume names are powerflex-service-vol-1 and powerflex-service-vol-2. The datastore names are
powerflex-esxclustershortname-ds1 and powerflex-esxclustershortname-ds2. If these volumes or datastores are
not present, create the volumes or datastores to migrate the vCLS VMs.

7. Click Next > Finish.


8. Repeat the above steps to migrate all the vCLS VMs.

Renaming the VMware ESXi local datastore


Use this procedure to rename local datastores using the proper naming conventions.

Prerequisites
VMware ESXi must be installed with hosts added to the VMware vCenter.

Steps
1. Log in to the VMware vSphere Client.
2. Click Hosts and Clusters.
3. Locate and select the VMware ESXi host.
4. Select Datastores.
5. Right-click the datastore name, and select Rename.
6. Name the datastore using the DASXX convention, with XX being the node number.

Patch and install drivers for VMware ESXi


Use this procedure if VMware ESXi drivers differ from the current Intelligent Catalog. Patch and install the VMware ESXi drivers
using the VMware vSphere Client.

Prerequisites
Apply all VMware ESXi updates before installing or loading hardware drivers.
NOTE: This procedure is required only if the ISO drivers are not at the proper Intelligent Catalog level.

Steps
1. Log in to the VMware vSphere Client.
2. Click Hosts and Clusters.
3. Locate and select the VMware ESXi host that you installed.
4. Select Datastores.
5. Right-click the datastore name and select Browse Files.
6. Select the Upload icon (to upload file to the datastore).
7. Browse to the Intelligent Catalog folder or downloaded current solution Intelligent Catalog files.


8. Select the VMware ESXi patch .zip files according to the current solution Intelligent Catalog and node type and click OK to
upload.
9. Select the driver and vib files according to the current Intelligent Catalog and node type and click OK to upload.
10. Click Hosts and Clusters.
11. Locate the VMware ESXi host, right-click, and select Enter Maintenance Mode.
12. Open an SSH session with the VMware ESXi host using PuTTy or a similar SSH client.
13. Log in as root.
14. Type cd /vmfs/volumes/DASXX, where DASXX is the name of the local datastore that is assigned to the VMware ESXi
server.
15. To display the contents of the directory, type ls.
16. If the directory contains vib files, type esxcli software vib install -v /vmfs/volumes/DASXX/
patchname.vib to install the vib. These vib files can be individual drivers that are absent in the larger patch bundle
and must be installed separately.
17. Perform either of the following depending on the VMware ESXi version:
a. For VMware ESXi 7.0, type esxcli software vib update -d /vmfs/volumes/DASXXX/VMware-
ESXi-7.0<version>-depot.zip.
b. For VMware ESXi 6.x, type esxcli software vib install -d /vmfs/volumes/DASXXX/<ESXI-patch-
file>.zip
18. Type reboot to reboot the host.
19. Once the host completes rebooting, open an SSH session with the VMware ESXi host, and type esxcli software vib
list |grep net-i to verify that the correct drivers loaded.
20. Select the host and click Exit Maintenance Mode.
21. Update the test plan and host tracker with the results.

Add PowerFlex nodes to Distributed Virtual Switches


Use the VMware vSphere Client to apply settings and add nodes to switches.
You can select multiple hosts and apply settings in template mode. In an LACP bonding NIC port design, dvswitch0 is referred
to as cust_dvswitch and dvswitch1 as flex_dvswitch.
CAUTION: If NSX is configured on the ESXi host, do not migrate mgmt and vmotion VMkernels to the VDS.

See Configuring the network for more information.

Change the MTU value


Use this procedure to change the MTU values to 9000 for the VMkernel port group and the dvswitch.

Prerequisites
● Type show running-configuration interface port-channel <portchannel number> to back up the
switch port configuration and verify that the port channel for the impacted host is updated to MTU 9216. If the MTU value
is already set to 9216, skip this procedure.
● Back up the dvswitch configuration:
○ Click Menu, and from the drop-down, click Networking.
○ Click the impacted dvswitch and click the Configure tab.
○ From the Properties page, verify the MTU value. If the MTU value is set to 9000, skip this procedure.
● See the following table for recommended MTU values:

Switch Default MTU Recommended MTU


Dell EMC PowerSwitch 9216 9216
Cisco Nexus 9216 9216
cust_dvswitch 9000 9000


VMkernel Default MTU Recommended MTU


vMotion 9000 1500 or 9000
Management 1500 1500 or 9000

Steps
1. Change the MTU to 9216 or jumbo on physical switch port (Dell EMC PowerSwitch and Cisco Nexus):
a. Dell:

interface port-channel31
 description Downlink-Port-Channel-to-Ganga-r840-nvme-01
 no shutdown
 switchport mode trunk
 switchport trunk allowed vlan 103,106,113,151,152,153,154
 mtu 9216
 vlt-port-channel 31
 spanning-tree port type edge

b. Cisco:

interface port-channel31
 description Downlink-Port-Channel-to-Ganga-r840-nvme-01
 no shutdown
 switchport mode trunk
 switchport trunk allowed vlan 103,106,113,151,152,153,154
 mtu 9216
 vpc 31
 spanning-tree port type edge

2. Change MTU to 9000 on cust_dvswitch:


a. Log in to VMware vCenter using administrator credentials.
b. Click Networking > cust_dvswitch.
c. Right-click and select Edit Settings.
d. Click Advanced and change the MTU value to 9000.
e. Repeat these steps for the remaining switch.
3. Change MTU to 9000 for vMotion VMkernel:
a. Click Hosts and Clusters.
b. Click the node and click Configure.
c. Click VMkernel adapters under Networking.
d. Select the vMotion VMK and click Edit.
e. In the Port Properties tab, change the MTU to 9000.
f. Repeat for the remaining nodes.
4. Change MTU to 9000 for the PowerFlex gateway data interfaces:
a. Start a PuTTy session.
b. Connect to the PowerFlex gateway with root credentials.
c. Type ip addr sh and verify the details of the interfaces.
d. Type cd /etc/sysconfig/network-scripts.
e. Type vi flex-data1.
f. Select the MTU=1500 line and change the value to 9000. If this line is not visible, append the line MTU=9000.
g. Save and exit.
h. Repeat these steps for all other data network interfaces.
i. Restart the gateway VM.
5. To verify that the MTU change took effect, ping any of the data IP addresses with an 8972-byte payload, for example ping -s 8972 <data IP address>.
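From the PowerFlex gateway or any other Linux host, a jumbo-frame check might look like the following (a minimal sketch; the address is a placeholder). The -M do option prevents fragmentation, so an oversized packet fails instead of being silently fragmented:

ping -M do -s 8972 <flex-data1 IP address of another node>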


Validating network configurations


Test network connectivity between hosts and Metadata Managers (MDMs) before installing PowerFlex on new nodes.

Prerequisites
Gather the IP addresses of the primary and secondary MDMs.

Steps
1. Open Direct Console User Interface (DCUI) or use SSH to log in to the new hosts.
2. At the command-line interface, run the following commands to ping each of the primary MDM and secondary MDM IP
addresses.
If a ping test fails, you must remediate the issue before continuing.
NOTE: A minimum of two logical data networks are supported. Optionally, you can configure four logical data networks.

vmkping -I vmk0 <Mgmt ip address of primary MDM or secondary MDM>


vmkping -I vmk2 <Data1 address of primary MDM or secondary MDM> -s 8972 -d
vmkping -I vmk3 <Data2 address of primary MDM or secondary MDM> -s 8972 -d

Run the following commands for LACP bonding NIC port design. x is the VMkernel adapter number in vmkx.

vmkping -I vmkx <Data3 address of primary MDM or secondary MDM> -s 8972 -d


vmkping -I vmkx <Data4 address of primary MDM or secondary MDM> -s 8972 -d

NOTE: After several host restarts, check the access switches for error or disabled states by running the following
commands:

# show interface brief


# show interface | inc CRC
# show interface counters error

3. Optional: If errors appear in the counters of any interfaces, type the following and check the counters again.
The following example is for a Cisco Nexus switch:

# clear counters interface ethernet X/X

4. Optional: If there are still errors on the counter, perform the following to see if the errors are old and irrelevant or new and
relevant.
a. Optional: Type # show interface | inc flapped.

Sample output:
Last link flapped 1d02h
b. Type # show logging logfile | inc failure.

Sample output:
Dec 12 12:34:50.151 access-a %ETHPORT-5-IF_DOWN_LINK_FAILURE: Interface Ethernet1/4/3
is down (Link failure)
5. Optional: Check and reset physical connections, bounce and reset ports, and clear counters until errors stop occurring.
Do not activate new nodes until all errors are resolved and no new errors appear.


Add the new host to PowerFlex


Configure the direct path I/O
Use this procedure to configure DirectPath I/O passthrough for the SSD controller.

Steps
1. In the VMware vSphere Client, select the new ESXi hosts.
2. Click Configure > Hardware > PCI Devices.
3. Click Configure PassThrough.
The Edit PCI Device Availability window opens.
4. From the PCI Device drop-down menu, select the Avago (LSI Logic) Dell HBA330 Mini check box and click OK.
5. Right-click the VMware ESXi host and select Maintenance Mode.
6. Right-click the VMware ESXi host and select Reboot to reboot the host.

Add NVMe devices as RDMs


Use this procedure to add NVMe devices as RDMs.

Steps
1. Use SSH to log in to the host.
2. Run the following command to generate a list of NVMe devices:

ls /vmfs/devices/disks/ |grep NVMe |grep -v ':'

Here is the example of the output:

t10.NVMe____Dell_Express_Flash_PM1725a_800GB_SFF____0A0FB071EB382500

t10.NVMe____Dell_Express_Flash_PM1725a_800GB_SFF____1C0FB071EB382500

t10.NVMe____Dell_Express_Flash_PM1725a_800GB_SFF____3906B071EB382500

3. Run the following command for each NVMe device, incrementing the disk number for each:

vmkfstools -z /vmfs/devices/disks/t10.NVMe____Dell_Express_Flash_PM1725a_800GB_SFF____0A0FB071EB382500 /vmfs/volumes/DASxx/<svm_name>/<svm_name>-nvme_disk0.vmdk
vmkfstools -z /vmfs/devices/disks/t10.NVMe____Dell_Express_Flash_PM1725a_800GB_SFF____1C0FB071EB382500 /vmfs/volumes/DASxx/<svm_name>/<svm_name>-nvme_disk1.vmdk
vmkfstools -z /vmfs/devices/disks/t10.NVMe____Dell_Express_Flash_PM1725a_800GB_SFF____3906B071EB382500 /vmfs/volumes/DASxx/<svm_name>/<svm_name>-nvme_disk2.vmdk
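As an optional check (the path elements are placeholders), list the RDM pointer files that were created before attaching them to the SVM:

ls -lh /vmfs/volumes/DASxx/<svm_name>/*.vmdk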

4. Log in to the VMware vSphere Client and go to Hosts and Clusters.


5. Right-click the SVM and click Edit Settings.
6. Click Add new device > Existing Hard Disk.
7. Under datastores, select the local DASXX, the directory of the SVM, and then select <svm_name>-nvme_disk0.vmdk.
8. Click OK.
9. Repeat the steps for each NVMe device, and then click OK.
10. Right-click the SVM, select Compatibility, and then select Upgrade VM Compatibility.
11. Click Yes and select VMware ESXi 7.x.
12. Click OK.


13. Power on the SVM.


14. Use SSH to log in to the SVM and run lsblk.
15. Note the drive names of the NVMe devices (for example, sdb and sdc). Exclude the 16 GB boot device and sr0 from the
list; the noted devices are added to PowerFlex later.

Install and configure the SDC


After adding the PowerFlex node with VMware ESXi to the cluster, install the storage data client (SDC) to continue the expansion
process.

Steps
1. Copy the SDC file to the local datastore on the VMware vSphere ESXi server.
2. Use SSH on the host and type esxcli software vib install -d /vmfs/volumes/datastore1/
sdc-3.6.xxxxx.xx-esx7.x.zip -n scaleio-sdc-esx7.x.
3. Reboot the PowerFlex node.
4. To configure the SDC, generate a new UUID:
NOTE: If the PowerFlex cluster is using an SDC authentication, the newly added SDC reports as disconnected when
added to the system. See Configure an authentication enabled SDC for more information.

a. Use SSH to connect to the primary MDM.


b. Type uuidgen. A new UUID string is generated.
6607f734-da8c-4eec-8ea1-837c3a6007bf
5. Use SSH to connect to the new PowerFlex node.
6. Substitute the new UUID in the following code:

esxcli system module parameters set -m scini -p "IoctlIniGuidStr=6607f734-da8c-4eec-8ea1-837c3a6007bf IoctlMdmIPStr=VIP1,VIP2,VIP3,VIP4"

A minimum of two logical data networks are supported. Optionally, you can configure four logical data networks. Verify the
number of VIPs configured in an existing setup.
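Before rebooting, you can optionally confirm that the scini module parameters were applied (a minimal sketch; the output should show the GUID and VIP values you set in the previous step):

esxcli system module parameters list -m scini | grep Ioctl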
7. Reboot the PowerFlex node.

Rename the SDCs


Use this procedure to rename the SDCs.

Steps
1. If using PowerFlex presentation server:
a. Log in to the PowerFlex presentation server.
b. Go to Configuration > SDCs.
c. Select the SDC, click Modify > Rename, and rename the new host according to the standard naming convention.
For example, ESX-10.234.91.84
2. If using a version prior to PowerFlex 3.5:
a. Log in to the PowerFlex GUI.
b. Click Frontend > SDCs and rename the new host according to the standard naming convention.
For example, ESX-10.234.91.84


Calculate RAM capacity for medium granularity SDS


Use the formula provided in this procedure to calculate the required capacity for RAM.

About this task


See the following table to verify SVM memory size on the Medium Granularity (MG) SDS. Ensure that you have modified the
memory allocation before starting the upgrade to PowerFlex 3.5.x, based on the table or formula.

Steps
1. Using the table, calculate the required RAM capacity.

MG capacity (TiB) | Required MG RAM capacity (GiB) | Total RAM required in the SVM without CloudLink (GiB) | Total RAM required in the SVM with CloudLink (GiB)
9.3 | 7 | 17 | 21
19.92 | 10 | 19 | 23
38.84 | 13 | 23 | 27
22.5 | 10 | 20 | 24
46.08 | 15 | 25 | 29
92.16 | 24 | 34 | 38

Additional services memory (applies to every row): MDM: 6.5 GiB, LIA: 350 MiB, OS base: 1 GiB, buffer: 2 GiB, CloudLink: 4 GiB.
Total additional services memory: 9.85 GiB without CloudLink, 13.85 GiB with CloudLink.

2. Alternatively, you can calculate RAM capacity using the following formula:
NOTE: The calculation is in binary MiB, GiB, and TiB.

RAM_capacity_in_GiB = 5 + (210 * total_drive_capacity_in_TiB) / 1024


NOTE: Round up the RAM size to the next GiB, for example, if the output of the calculation is 16.75 GiB, round this up
to 17 GiB.
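For example, for 38.84 TiB of MG capacity: 5 + (210 * 38.84) / 1024 = 12.97 GiB, which rounds up to 13 GiB and matches the table above.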

3. Open the PowerFlex GUI using the PowerFlex management IP address and the relevant PowerFlex username and password.
4. Select the Storage Data Server (SDS) from the Backend where you want to update the RAM size.
5. Right-click the SDS, select Configure IP addresses, and note the flex-data1-<vlanid> and flex-data2-<vlanid> IP addresses
associated with this SDS. A window appears displaying the IP addresses used on that SDS for data communication. Use
these IP addresses to verify that you powered off the correct PowerFlex VM.
6. Right-click the SDS, select Enter Maintenance Mode, and click OK.
7. Wait for the GUI to display a green check mark, click Close.
8. In the PowerFlex GUI, click Backend, and right-click the SVM and verify the checkbox is deselected for Configure RAM
Read Cache.
9. Power off the SVM.
10. In VMware vCenter, open Edit Settings and modify the RAM size based on the table or formula in step 1. The SVM should
be set as 8 or 12vCPU, configured at 8 or 12 Socket, 8 or 12 Core (for CloudLink, additional 4 threads).
11. Power on the SVM.
12. From the PowerFlex GUI backend, right-click the SDS and select Exit Maintenance Mode and click OK.
13. Wait for the rebuild and rebalance to complete.
14. Repeat steps 6 through 13 for the remaining SDSs.


Change memory and CPU settings on the SVM


Use this procedure to change the memory and CPU settings on the SVM to update PowerFlex hyperconverged nodes to
PowerFlex 3.6.x.x.

About this task


To update memory and CPU settings:

Component: SDS with Medium Granularity pool
● vCPUs: 8 (SDS) + 2 (MDM/TB) + 2 (CloudLink) = 12 vCPU total.
NOTE: The physical core requirement is 2 sockets with 12 cores each (vCPU cannot exceed physical cores).
● Memory: RAM_capacity_in_GiB = 10 + ((100 * number_of_drives) + (550 * Total_drive_capacity_in_TiB))/1024

Component: SDS with Fine Granularity pool
● vCPUs: 10 (SDS) + 2 (MDM/TB) + 2 (CloudLink) = 14 vCPU total.
NOTE: The physical core requirement is 2 sockets with 14 cores each (vCPU cannot exceed physical cores). Set the SDS thread count to 10.
● Memory: RAM_capacity_in_GiB = 5 + (210 * Total_drive_capacity_in_TiB)/1024

Component: Fine Granularity MD cache
● vCPUs: Not applicable
● Memory: FG_MD_Cache = (Total_drive_capacity_in_TiB/2) * 4 * Compression_Factor * Percent_of_Metadata_to_Cache
It is recommended to allocate 2% for the compression factor and 2% for metadata to cache.
NOTE: For PowerFlex hyperconverged nodes, 2 GB is allocated by default without considering the formula.

Additional memory requirements (vCPUs not applicable):
● MDM/TB: 6.5 GiB additional memory
● LIA: 350 MiB additional memory
● Operating system: 2 GiB additional memory
● CloudLink: 4 GiB additional memory
● Spare: 2 GiB additional memory

Prerequisites
● Enter SDS node (SVMs) into maintenance mode and power off the SVM.
● Switch the primary cluster role to secondary if you are putting the primary MDM into maintenance mode (switch back to the
original node once completed). Perform this activity on only one SDS at a time.
● If you place multiple SDSs into maintenance mode at the same time, there is a risk of data loss.
● Ensure that the node has enough CPU cores in each socket.

Steps
1. Log in to the PowerFlex GUI presentation server, https://Presentation_Server_IP:8443.
2. Click Configuration > SDSs.
3. In the right pane, select the SDS and click More > Enter Maintenance Mode.
4. In the Enter SDS into Maintenance Mode dialog box, select Instant.
If maintenance mode takes more than 30 minutes, select PMM.


5. Click Enter Maintenance Mode.


6. Verify the operation completed successfully and click Dismiss.
7. Shut down the SVM.
a. Log in to VMware vCenter using the VMware vSphere Client.
b. Select the SVM, right-click Power > Shut Down Guest OS.

Manually deploy the SVM


Use this procedure to manually deploy the SVM.

Steps
1. Log in to the VMware vSphere Client and do the following:
a. Right-click the ESXi host, and select Deploy OVF Template.
b. Click Choose Files and browse the SVM OVA template.
c. Click Next.
2. Go to Hosts and Templates/EMC PowerFlex, right-click PowerFlex SVM Template, and select New VM from This
Template.
3. Enter a name similar to svm-<hostname>-<SVM IP ADDRESS>, select a datacenter and folder, and click Next.
4. Identify the cluster and select the node that you are deploying. Verify that there are no compatibility warnings and click
Next. Review the details and click Next.
5. Select the local datastore DASXX, and click Next.
6. Leave Customize hardware checked and click Next.
a. Set CPU with 12 cores per socket.
b. Set Memory to 16 GB and check Reserve all guest memory (All locked).
NOTE: The number of vCPUs and the memory size may change based on your system configuration. Check the existing
SVMs and update the CPU and memory settings accordingly.

c. Set Network Adapter 1 to the flex-stor-mgmt-<vlanid>.


d. Set Network Adapter 2 to the flex-data1-<vlanid>.
e. Set Network Adapter 3 to the flex-data2-<vlanid>.
f. For an LACP bonding NIC port design:
i. Set Network Adapter 4 to the flex-data3-<vlanid> (if required).
ii. Set Network Adapter 5 to the flex-data4-<vlanid> (if required).
A minimum of two logical data networks are supported. Optionally, you can configure four logical data networks. Verify
the number of logical data networks configured in an existing setup and configure the new SVM accordingly. If only two
logical data networks are configured on an existing setup, download and deploy the three NIC OVA templates.

7. Click Next > Finish and wait for the cloning process to complete.
8. Right-click the new SVM, and select Edit Settings and do the following:
This is applicable only for SSD. For NVMe, see Add NVMe devices as RDMs.
a. From the New PCI device drop-down menu, click DirectPath IO.
b. From the PCI Device drop-down menu, expand Select Hardware, and select Avago (LSI Logic) Dell HBA330 Mini.
c. Click OK.
9. Prepare for asynchronous replication:
NOTE: If replication is enabled, follow the steps below; otherwise, skip to step 11.

a. Add virtual NICs:


i. Log in to the production vCenter using VMware vSphere Client and navigate to Host and Clusters.
ii. Right-click the SVM and click Edit Settings.
iii. Click ADD NEW DEVICE and select Network Adapter from the list.
iv. Select the appropriate port group created for SDR external communication, click OK.
v. Repeat steps ii to iv for creating a second NIC.
vi. Record the MAC address of newly added network adapters from vCenter:
● Right-click the SVM and click Edit Settings.


● Select Network Adapter from the list with SDR port group.
● Expand the network adapter and record the details from the MAC Address field.
b. Modify the vCPU, Memory, vNUMA, and CPU reservation settings on SVMs:
The following requirements are for reference:
● 12 GB additional memory is required for SDR.
For example, if you have 24 GB memory existing in SVM, add 12 GB to enable replication. In this case, 24+12=36 GB.
● Additional 8*vCPUs required for SDR:
○ vCPU total for MG Pool based system: 8 (SDS) + 8 (SDR) + 2 (MDM/TB) + 2 (CloudLink) = 20 vCPUs
○ vCPU total for FG Pool based system
○ vCPU total: 10 (SDS) + 10 (SDR) + 2 (MDM/TB) + 2 (CloudLink) = 24 vCPU
● Per SVM, set numa.vcpu.maxPerVirtualNode to half the vCPU value assigned to the SVM.
For example, if the SVM has 20 vCPU, set numa.vcpu.maxPerVirtualNode to 10.

i. Browse to the SVM in the VMware vSphere Client.


ii. Find a virtual machine, select a data center, folder, cluster, resource pool, or host.
iii. Click the VMs tab, right-click the virtual machine and select Edit Settings.
iv. Click VM Options > Advanced.
v. Under Configuration Parameters, click the Edit Configuration.
10. Power on the new SVM and open a console.
11. For versions prior to PowerFlex 3.x, run systemctl status firewalld to check if the firewall is enabled.
If the service is inactive or disabled, see the KB article Enabling firewall service on PowerFlex storage-only nodes and SVMs
to enable the service and required ports for each PowerFlex component.

12. Log in using the following credentials:


● Username: root
● Password: admin
13. To change the root password type passwd and enter the new SVM root password twice.
14. Type nmtui, select Set system hostname, press Enter, and create the hostname. For example, ScaleIO-10-234-92-84.
15. Select Edit a Connection.
16. Set eth0 interface for flex-stor-mgmt-<vlanid>.
17. Set eth1 interface for flex-data1-<vlanid>.
18. Set eth2 interface for flex-data2-<vlanid>.
19. If the existing network uses an LACP bonding NIC port design, configure the eth3 interface for flex-data3-<vlanid> and the eth4 interface for flex-data4-<vlanid> in the same way.
20. If native asynchronous replication is enabled, perform the following:
● Set eth5 interface for flex-rep1-<vlanid>
● Set eth6 interface for flex-rep2-<vlanid>
21. Exit nmtui back to the command prompt.
22. Type systemctl restart network to restart the network service.
23. Type TOKEN=<TOKEN-PASSWORD> rpm -i /root/install/EMC-ScaleIO-lia-3.x-x.el7.x86_64.rpm to
install the LIA.
24. Type rpm -i /root/install/EMC-ScaleIO-sds-3.x-xel7.x86_64.rpm to install the SDS.
25. Type lsblk from the SVM console to list available storage to add to the SDS.
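As a minimal sketch, the following listing shows full device paths and sizes, which makes it easier to record the devices that are added to the SDS in a later procedure (the column selection is optional):

lsblk -p -o NAME,SIZE,TYPE,MOUNTPOINT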

Prepare the SVMs for replication


Use the following procedures to prepare the SVMs for replication. These tasks are applicable only if native asynchronous
replication is configured. If native asynchronous replication is not configured, skip to Add the new SDS to PowerFlex.

Related information
Add the new SDS to PowerFlex


Setting the SDS NUMA


Use this procedure to allow the SDS to use the memory from the other NUMA.

Steps
1. Log in to the SDS (SVMs) using PuTTy.
2. Append the line numa_memory_affinity=0 to the SDS configuration file /opt/emc/scaleio/sds/cfg/conf.txt,
type: # echo numa_memory_affinity=0 >> /opt/emc/scaleio/sds/cfg/conf.txt.
3. Run #cat /opt/emc/scaleio/sds/cfg/conf.txt to verify that the line is appended.

Enable replication on PowerFlex nodes with FG pool


Use this procedure to enable replication on PowerFlex nodes with FG pool.

About this task


If the PowerFlex node has an FG pool and you want to enable replication, set the SDS thread count to ten from the default of eight.

Steps
1. Use SSH to log in to the primary MDM. Log in to PowerFlex cluster using #scli --login --username admin.
2. To query the current value, type, #scli --query_performance_parameters --print_all --tech --
all_sds|grep -i SDS_NUMBER_OS_THREADS.
3. To set the value of SDS_number_OS_threads to 10, type # scli --set_performance_parameters --sds_id
<ID> --tech --sds_number_os_threads 10.

NOTE: Do not set the SDS threads globally, set the SDS threads per SDS.

Verify and disable Network Manager


Use this procedure to verify that Network Manager is not running.

Steps
1. Log in to the SDS (SVMs) using PuTTy.
2. Run # systemctl status NetworkManager to ensure that Network Manager is not running.
Output shows Network Manager is disabled and inactive.
3. If Network Manager is enabled and active, run the following command to stop and disable the service:

# systemctl stop NetworkManager


# systemctl disable NetworkManager

Updating the network configuration


Use this procedure to update the network configuration file for all the network interfaces.

Steps
1. Log in to SDS (SVMs) using PuTTY.
2. Note the MAC addresses of all the interfaces, type, #ifconfig or #ip a.


3. Edit all the interface configuration files (ifcfg-eth0, ifcfg-eth1, ifcfg-eth2, ifcfg-eth3, ifcfg-eth4) and update the NAME,
DEVICE, and HWADDR entries to ensure that the correct MAC address and name are assigned to each interface.
NOTE: If an entry already exists with the correct value, you can leave it unchanged.
● Use the vi editor to update the file # vi /etc/sysconfig/network-scripts/ifcfg-ethX
or
● Append the line using the following command:

# echo NAME=ethX >> /etc/sysconfig/network-scripts/ifcfg-ethX


# echo HWADDR=xx:xx:xx:xx:xx:xx >> /etc/sysconfig/network-scripts/ifcfg-ethX

Example file:

BOOTPROTO=none
ONBOOT=yes
HOTPLUG=yes
TYPE=Ethernet
DEVICE=eth2
IPADDR=192.168.155.46
NETMASK=255.255.254.0
DEFROUTE=no
MTU=9000
PEERDNS=no
NM_CONTROLLED=no
NAME=eth2
HWADDR=00:50:56:80:fd:82

Update network interface configuration files


Use this procedure to ensure the network interface configuration file updates correctly.

About this task


Remove net.ifnames=0 and biosdevname=0 from the /etc/default/grub file to avoid the interface name issue when you
add virtual NICs to SVM for SDR communication.

Steps
1. Log in to the SVM using PuTTY.
2. Edit the grub file located in /etc/default/grub, type: # vi /etc/default/grub.
3. From the last line, remove net.ifnames=0 and biosdevname=0, and save the file.
4. Rebuild the GRUB configuration file, using: # grub2-mkconfig -o /boot/grub2/grub.cfg
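As an optional check (a minimal sketch), confirm that the parameters were removed; the command should return no output:

grep -E 'net.ifnames|biosdevname' /etc/default/grub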


Shutting down the SVM


Use this procedure to shut down the SVM.

Steps
1. Log in to VMware vCenter using VMware vSphere Client.
2. Select the SVM, right-click Power > Shut-down Guest OS. Ensure you shut down the correct SVM.

Modifying the vCPU, memory, vNUMA, and CPU reservation settings on SVMs
When you enable replication on PowerFlex hyperconverged nodes, there are specific memory and CPU settings that must be
updated.

Setting the vNUMA advanced option


Use this procedure to set numa.vcpu.maxPerVirtualNode.

About this task


Ensure the CPU hot plug feature is disabled. If this feature is enabled, disable it before configuring the vNUMA parameter.

Steps
1. Log in to the production VMware vCenter using VMware vSphere Client.
2. Right-click the VM that you want to change and select Edit Settings.
3. Under the Virtual Hardware tab, expand CPU, and clear the CPU Hot Plug check box.

Editing the SVM configuration


Use this procedure to set the SVM numa.vcpu.maxPerVirtualNode to half the vCPU value assigned to the SVM.

Steps
1. Browse to the SVM in the VMware vSphere Client.
2. To find a VM, select a data center, folder, cluster, resource pool, or host.
3. Click the VMs tab.
4. Right-click the VM and select Edit Settings.
5. Click VM Options and expand Advanced.
6. Under Configuration Parameters, click Edit Configuration.
7. In the dialog box that appears, click Add Configuration Params.
8. Enter a new parameter name and its value depending on the pool:
● If the SVM for an MG pool has 20 vCPU, set numa.vcpu.maxPerVirtualNode to 10.
● If the SVM for an FG pool has 24 vCPU, set numa.vcpu.maxPerVirtualNode to 12.
9. Click OK > OK.
10. Ensure the following:
● CPU shares are set to high.
● 50% of the vCPU reserved on the SVM.
For example:
● If the SVM for an MG pool is configured with 20 vCPUs and CPU speed is 2.8 GHz, set a reservation of 28 GHz
(20*2.8/2).
● If the SVM is configured with 24 vCPUs and CPU speed is 3 GHz, set a reservation of 36 GHz (24*3/2).
11. Find the CPU and clock speed:
a. Log in to VMware vCenter.


b. Click Host and Cluster.


c. Expand the Cluster and select Physical node.
d. Find the details against Processor Type under the Summary tab.
12. Right-click the VM you want to change and select Edit Settings.
13. Under the Virtual Hardware tab, expand CPU, verify Reservation and Shares.

Modify the memory size according to the SDR requirements for MG Pool
Use this procedure to add additional memory required for SDR if replication is enabled.

About this task


NOTE: 12 GB of additional memory is required for SDR. For example, if you have 24 GB memory existing in the SVM, add 12
GB for enabling replication so it would be 24+12 = 36 GB.

Steps
1. Log in to the production VMware vCenter using vSphere client.
2. Right-click the VM you want to change and select Edit Settings.
3. Under the Virtual Hardware tab, expand Memory, modify the memory size according to SDR requirement.
4. Click OK.

Increase the vCPU count


Use this procedure to increase the vCPU count according to the SDR requirement.

About this task


The physical core requirement is two sockets with ten cores each (vCPUs per NUMA domain cannot exceed physical cores).
vCPU total: 8 (SDS) + 8 (SDR) + 2 (MDM/TB) + 2(CloudLink) = 20 vCPUs

Steps
1. Log in to the production VMware vCenter using VMware vSphere client.
2. Right-click the virtual machine that requires changes and select Edit Settings.
3. Under the Virtual Hardware tab, expand CPU, increase the vCPU count according to SDR requirement.
4. Click OK.

Modify the memory size according to the SDR requirements for FG Pool
Use this procedure to add additional memory required for SDR if replication is enabled.

About this task


NOTE: 12 GB of additional memory is required for SDR. For example, if you have 32 GB memory existing in the SVM, add 12
GB for enabling replication so it would be 32+12 = 44 GB.

Steps
1. Log in to the production VMware vCenter using VMware vSphere Client.
2. Right-click the VM that requires changes and select Edit Settings.
3. Under the Virtual Hardware tab, expand Memory, modify the memory size according to SDR requirement.
4. Click OK.


Increase the vCPU count


Use this procedure to increase the vCPU count according to the SDR requirement.

About this task


The physical core requirement is two sockets with ten cores each (vCPUs per NUMA domain cannot exceed physical cores).
vCPU total: 10 (SDS) + 10 (SDR) + 2 (MDM/TB) + 2 (CloudLink) = 24 vCPUs

Steps
1. Log in to the production VMware vCenter using VMware vSphere Client.
2. Right-click the virtual machine that requires changes and select Edit Settings.
3. Under the Virtual Hardware tab, expand CPU, and increase the vCPU count according to SDR requirement.
4. Click OK.

Adding virtual NICS to SVMs


Use this procedure to add two more NICs to each SVM for SDR external communication.

Steps
1. Log in to the production VMware vCenter using VMware vSphere Client and navigate to Host and Clusters.
2. Right-click the SVM and click Edit Setting.
3. Click Add new device and select Network Adapter from the list.
4. Select the appropriate port group created for SDR external communication and click OK.
5. Repeat steps 2 through 4 to create the second NIC.

Power on the SVM and configure network interfaces


Use these procedures to power on the SVMs and create interface configuration files for the newly added network adapters.

Configuring newly added network interface controllers for the SVMs

Steps
1. Log in to VMware vCenter using vSphere client.
2. Select the SVM, right-click Power > Power on.
3. Log in to SVM using PuTTY.
4. Create rep1 network interface, type: cp /etc/sysconfig/network-scripts/ifcfg-eth0 /etc/sysconfig/
network-scripts/ifcfg-eth5.
5. Create rep2 network interface, type: cp /etc/sysconfig/network-scripts/ifcfg-eth0 /etc/sysconfig/
network-scripts/ifcfg-eth6.
6. Edit newly created configuration files (ifcfg-eth5, ifcfg-eth6) using the vi editor and modify the entry for IPADDR,
NETMASK, GATEWAY, DEFROUTE, DEVICE, NAME and HWADDR, where:
● DEVICE is the newly created device of eth5 and eth6
● IPADDR is the IP address of the rep1 and rep2 networks
● NETMASK is the subnet mask
● GATEWAY is the gateway for the SDR external communication
● DEFROUTE change to no
● HWADDR=MAC address collected from the topic Adding virtual NICs to SVMs
● NAME=newly created device name for eth5 and eth6
NOTE: Ensure that the MTU value is set to 9000 for SDR interfaces on both primary and secondary site and also end to
end devices. Confirm with the customer about their existing MTU values and configure it.
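A minimal sample ifcfg-eth5 is shown below; the addresses are placeholders that must be replaced with the values from the Workbook, and the MAC address is the one recorded from vCenter:

BOOTPROTO=none
ONBOOT=yes
HOTPLUG=yes
TYPE=Ethernet
DEVICE=eth5
NAME=eth5
IPADDR=<rep1 IP address>
NETMASK=<rep1 subnet mask>
GATEWAY=<rep1 gateway>
DEFROUTE=no
MTU=9000
PEERDNS=no
NM_CONTROLLED=no
HWADDR=<MAC address recorded from vCenter>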


Adding a permanent static route for replication external networks


Use this procedure to create a permanent route.

Steps
1. Go to /etc/sysconfig/network-scripts, create a route-<interface> file for each replication interface, and type:

#touch /etc/sysconfig/network-scripts/route-eth5
#touch /etc/sysconfig/network-scripts/route-eth6

2. Edit each file and add the appropriate network information.


For example, 10.0.10.0/23 via 10.0.30.1, where 10.0.10.0/23 is the network address and prefix length of the
remote or destination network. The IP address 10.0.30.1 is the gateway address leading to the remote network.
Sample file

/etc/sysconfig/network-scripts/route-eth5
10.0.10.0/23 via 10.0.30.1
/etc/sysconfig/network-scripts/route-eth6
10.0.20.0/23 via 10.0.40.1

3. Reboot the SVM, type: #reboot.


4. Ensure all the changes are persistent after reboot.
5. After SVM is rebooted, ensure all the interfaces are configured properly, type: #ifconfig or #ip a.
6. Verify the new route added to the system, type: #netstat -rn.

Installing SDR RPMs on the SDS nodes (SVMs)


About this task
The SDR RPM must be installed on all SVMs at both the source and destination sites, but only if both sites have PowerFlex
hyperconverged nodes. The storage data replicator (SDR) is responsible for processing all I/Os of replicated volumes. All
application I/Os of replicated volumes are processed by the source SDRs. At the source, application I/Os are sent by the SDC
to the SDR. The I/Os are then sent to the target SDRs and stored in their journals, and the target SDRs apply the journaled I/Os
to the target volumes. A minimum of two SDRs are deployed at both the source and target systems to maintain high availability.
If one SDR fails, the MDM directs the SDC to send the I/Os to an available SDR.

Steps
1. Use WinSCP or SCP to copy the SDR package to the tmp folder.
2. Use SSH to connect to the SVM and run the following command to install the SDR package:
# rpm -ivh /tmp/EMC-ScaleIO-sdr-3.6-x.xxx.el7.x86_64.rpm

Add the storage data replicator to PowerFlex nodes


Use this procedure to add the SDR to PowerFlex nodes.

Prerequisites
The IP address of the node must be configured for SDR. The SDR communicates with several components:
● SDC (application)
● SDS (storage)
● Remote SDR (external)

Steps
1. In the left pane, click Protection > SDRs.
2. In the right pane, click Add.
3. In the Add SDR dialog box, enter the connection information of the SDR:


a. Enter the SDR name.


b. Update the SDR Port, if required (default is 11088).
c. Select the relevant Protected Domain.
d. Enter the IP Address of the MDM that is configured for SDR.
e. Select Role External for the SDR to SDR external communication.
f. Select Role Application and Storage for the SDR to SDC and SDR to SDS communication.
g. Click ADD SDR to initiate a connection with the peer system.
4. Verify that the operation completed successfully and click Dismiss.
5. Modify the IP address role if required:
a. From the PowerFlex GUI, in the left pane, click Protection > SDRs.
b. In the right pane, select the relevant SDR check box, and click Modify > Modify IP Role.
c. In the <SDR name> Modify IPs Role dialog box, select the relevant role for the IP address.
d. Click Apply.
e. Verify that the operation completed successfully and click Dismiss.
6. Repeat both tasks Add journal capacity and Add Storage Data Replicator (SDR) to PowerFlex nodes for source and
destination.

Verify communication between the source and destination


Use this procedure to verify communication between the source and destination sites.

Steps
1. Log in to all the SVMs and PowerFlex nodes in source and destination sites.
2. Ping the following IP addresses from each of the SVM and PowerFlex nodes in source site:
● Management IP addresses of the primary and secondary MDMs
● External IP addresses configured for SDR-SDR communication
3. Ping the following IP addresses from each of the SVM and PowerFlex nodes in destination site:
● Management IP addresses of the primary and secondary MDMs
● External IP addresses configured for SDR-SDR communication
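A minimal sketch of these checks from one node (all addresses are placeholders); use the larger payload against the SDR external addresses to confirm that jumbo frames pass end to end:

ping -c 4 <MDM management IP address>
ping -c 4 -M do -s 8972 <remote SDR external IP address>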

Add the new SDS to PowerFlex


Use this procedure to add new SDS to PowerFlex.

Steps
1. If you are using a PowerFlex presentation server:
a. Log in to the PowerFlex presentation server.
b. Click Configuration > SDS and click Add.
c. On the Add SDS page, enter the SDS name and select the Protection Domain.
d. Under Add IP, enter the data IP address and click Add SDS.
e. Locate the newly added PowerFlex SDS, right-click and select Add Device.
f. Choose Storage device from the drop-down menu.
g. Locate the newly added PowerFlex SDS, right-click and select Add Device, and choose Acceleration Device from the
drop-down menu.
CAUTION: If the deployment fails for SSD or NVMe with NVDIMM, it can be due to one of the
following reasons. Click View Logs and see Configuring the NVDIMM for a new PowerFlex hyperconverged
node for the node configuration table and steps to add the SDS and NVDIMM to the FG pool.
● The following error appears if the required NVDIMM size and the RAM size assigned to the SVM do not match the node
configuration table.

VMWARE_CANNOT_RETRIEVE_VM_MOR_ID


● If the deployment fails to add the device and SDS to the PowerFlex GUI, you should manually add the SDS and NVDIMM
to FG pool.
2. If you are using a PowerFlex version prior to 3.5:
a. Log in to the PowerFlex GUI, and click Backend > Storage.
b. Right-click the new protection domain, and select +Add > Add SDS.
c. Enter a name.
For example, 10.234.92.84-ESX.
d. Add the following addresses in the IP addresses field and click OK:
● flex-data1-<vlanid>
● flex-data2-<vlanid>
● flex-data3-<vlanid> (if required)
● flex-data4-<vlanid> (if required)
e. Add New Devices from the lsblk output from the previous step.
f. Select the storage pool destination and media type.
g. Click OK and wait for the green check box to appear and click Close.

Related information
Add drives to PowerFlex
Prepare the SVMs for replication

Add drives to PowerFlex


Use this procedure to add SSD or NVMe drives to PowerFlex.

Steps
1. If you are using PowerFlex GUI presentation server to enable zero padding on a storage pool:
a. Connect to the PowerFlex GUI presentation server through https://<ipaddress>:8443 using MDM.
b. Click Storage Pools from the left pane, and select the storage pool.
c. Click Settings from the drop-down menu.
d. Click Modify > General Settings from the drop-down menu.
e. Click Enable Zero Padding Policy > Apply.
NOTE: After the first device is added to a specific pool, you cannot modify the zero padding policy. FG pool is always
zero padded. By default, zero padding is disabled only for MG pool.

2. If you are using a PowerFlex version prior to 3.5 to enable zero padding on a storage pool:
a. Select Backend > Storage, and right-click Select By Storage Pools from the drop-down menu.
b. Right-click the storage pool, and click Modify zero padding policy.
c. Select Enable Zero Padding Policy, and click OK > Close.
NOTE: Zero padding cannot be enabled when devices are available in the storage pools.

3.
If CloudLink is... Do this...
Enabled See one of the following procedures depending on the
devices:
● Encrypt PowerFlex hyperconverged (SVM) or storage-
only devices (SED drives)
● Encrypt PowerFlex hyperconverged (SVM) or storage-
only devices (non-SED drives)
Disabled Use PuTTY to access the Red Hat Enterprise Linux or an
embedded operating system node.

When adding NVMe drives, keep a separate storage pool for the PowerFlex storage-only node.

4. Note the devices by typing lsblk -p or nvme list.


5. If you are using PowerFlex GUI presentation server:


a. Connect to the PowerFlex GUI presentation server through https://<ipaddress>:8443 using MDM.
b. Click Configuration > SDSs.
c. Locate the newly added PowerFlex SDS, right-click, select Add Device, and choose Storage device from the drop-
down menu.
d. Type /dev/nvmeXXn1, where XX is the value noted from the lsblk or nvme list output. Provide the storage pool,
verify the device type, and click Add Device. Add all the required devices in the same way, and click Add Devices.
NOTE: If the devices are not getting added, ensure that you select Advanced Settings > Advanced Takeover on the
Add Device Storage page.

e. Repeat steps 5a to 5d on all the SDSs where you want to add the devices.
f. Ensure that all the rebuild and balance activities are successfully completed.
g. Verify the space capacity after adding the new node.
6. If you are using a PowerFlex version prior to 3.5:
a. Connect to the PowerFlex GUI.
b. Click Backend.
c. Locate the newly added PowerFlex SDS, right-click, and select Add Device.
d. Type /dev/nvmeXXn1, where XX is the value noted from the lsblk or nvme list output.
e. Select the Storage Pool, as identified in the Workbook.


NOTE: If the existing protection domain has Red Hat Enterprise Linux nodes, replace or expand with Red Hat
Enterprise Linux. If the existing protection domain has embedded operating system nodes, replace or expand with
embedded operating system.

f. Repeat steps 6a to 6e for each device.


g. Click OK > Close.
A rebalance of the PowerFlex storage-only node begins.

Related information
Add the new SDS to PowerFlex

Configuring the NVDIMM for a new PowerFlex hyperconverged node
Verify that the VMware ESXi host recognizes the NVDIMM
Use this procedure to ensure that the NVDIMM is recognized.

Prerequisites
● Ensure the NVDIMM firmware on the new node is the same version as on the existing nodes in the cluster.
● If NVDIMM firmware is higher than the Intelligent Catalog version, you must manually downgrade NVDIMM firmware.
● The VMware ESXi host and the VMware vCenter server are using version 6.7 or higher.
● The VM version of your SVM is version 14 or higher.
● The firmware of the NVDIMM is version 9324 or higher.
● The VMware ESXi host recognizes the NVDIMM.

Steps
1. Log in to the VMware vCenter.
2. Select the VMware ESXi host.
3. Go to the Summary tab.
4. In the VM Hardware section, verify that the required amount of persistent memory is listed.

Add NVDIMM
Use this procedure to add an NVDIMM.

Steps
1. Using the PowerFlex GUI, perform the following to enter the target SDS into maintenance mode:

If you are using a... Do this...


PowerFlex GUI presentation server a. Log in to PowerFlex GUI presentation server.
b. From Configuration, select the SDS.
c. Click MORE, select Enter maintenance mode >
Protected, and click Enter maintenance mode.
PowerFlex version prior to 3.5 a. Log in to the PowerFlex GUI.
b. Select the SDS from the Backend tab of the VMware
host.
c. Right-click the SDS, select Enter maintenance mode,
and click OK.


NOTE: For the new PowerFlex nodes with NVMe or SSD, remove the SDS or device if it is added to the GUI before
placing the SDS into maintenance mode. Skip this step if the SDS is not added to the GUI.

2. Use VMware vCenter to shut down the SVM.


3. Add the NVDIMM device to the SVM:
a. Edit the SVM settings.
b. Add an NVDIMM device.
c. Set the required NVDIMM device size.
d. Click OK.
4. Increase the RAM size according to the following capacity configuration table:

FG capacity | NVDIMM capacity required | NVDIMM capacity delivered (minimum 16 GB units, must add in pairs) | Required FG RAM capacity | Total RAM required in the SVM (no CloudLink) | Total RAM required in the SVM (with CloudLink)
9.3 TiB | 8 GB | 2 x 16 GB = 32 GB | 17 GiB | 27 GiB | 31 GiB
19.92 TiB | 15 GB | 2 x 16 GB = 32 GB | 22 GiB | 32 GiB | 36 GiB
38.84 TiB | 28 GB | 2 x 16 GB = 32 GB | 32 GiB | 42 GiB | 46 GiB
22.5 TiB | 18 GB | 2 x 16 GB = 32 GB | 25 GiB | 35 GiB | 39 GiB
46.08 TiB | 34 GB | 4 x 16 GB = 64 GB | 38 GiB | 48 GiB | 52 GiB
92.16 TiB | 66 GB | 6 x 16 GB = 96 GB | 62 GiB | 72 GiB | 76 GiB

Additional services memory (applies to every row): MDM: 6.5 GiB, LIA: 350 MiB, OS base: 1 GiB, buffer: 2 GiB, CloudLink: 4 GiB.
Total additional services memory: 9.85 GiB without CloudLink, 13.85 GiB with CloudLink.

NOTE: If the capacity does not match the configuration table, use the following formula to calculate the
NVDIMM or RAM capacity for Fine Granularity. The calculation is in binary MiB, GiB, and TiB. Round up the RAM size to
the next GiB. For example, if the output of the equation is 16.75 GiB, round it up to 17 GiB.

NVDIMM_capacity_in_GiB = ((100*Number_of_drives) + (700*Capacity_in_TiB))/1024

RAM_capacity_in_GiB = 10 + ((100*Number_of_drives) + (550*Capacity_in_TiB))/1024
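As an illustration only, assume a node with 10 drives and 38.84 TiB of FG capacity:

NVDIMM_capacity_in_GiB = ((100*10) + (700*38.84))/1024 = 27.53, rounded up to 28 GB
RAM_capacity_in_GiB = 10 + ((100*10) + (550*38.84))/1024 = 31.84, rounded up to 32 GiB

These values match the 38.84 TiB row in the configuration table.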

5. In Edit Settings, change the Memory size as per the node configuration table, and select the Reserve all guest memory
(All locked) check box.

6. Right-click the SVM and choose Edit Settings. Set the SVM to 8 or 12 vCPU, configured at 8 or 12 sockets, 8 or 12 cores
(for CloudLink, 4 additional threads).
7. Use VMware vCenter to turn on the SVM.


8. Using the PowerFlex GUI, remove the SDS from maintenance mode.
9. Create a namespace on the NVDIMM:
a. Connect to the SVM using SSH and type # ndctl create-namespace -f -e namespace0.0 --mode=dax
--align=4K.
10. Perform steps 3 to 8 for every PowerFlex node with NVDIMM.
11. Create an acceleration pool for the NVDIMM devices:
a. Connect using SSH to the primary MDM, type #scli --add_acceleration_pool --
protection_domain_name <PD_NAME> --media_type NVRAM --acceleration_pool_name
<ACCP_NAME> in the SCLI to create the acceleration pool.
NOTE: Use this step only when you want to add the new PowerFlex node to the new acceleration pool. Otherwise,
skip this step and go to the step to add SSD or NVMe device.

b. For each SDS with NVDIMM, type #scli --add_sds_device --sds_name <SDS_NAME> --
device_path /dev/dax0.0 --acceleration_pool_name <ACCP_NAME> --force_device_takeover to
add the NVDIMM devices to the acceleration pool:

NOTE: Use this step only when you want to add the new acceleration device to a new acceleration pool. Otherwise,
skip this step and go to the step to add SSD or NVMe device.

12. Create a storage pool for SSD devices accelerated by an NVDIMM acceleration pool with Fine Granularity data layout:
a. Connect using SSH to the primary MDM and enter #scli --add_storage_pool --protection_domain_name
<PD_NAME> --storage_pool_name <SP_NAME> --media_type SSD --compression_method normal
--fgl_acceleration_pool_name <ACCP_NAME> --fgl_profile high_performance --data_layout
fine_granularity.
NOTE: Use this step only when you want to add the new PowerFlex node to a new storage pool. Otherwise, skip this
step and go to the step to add SSD or NVMe device.

13. Add the SSD or NVMe device to the existing Fine Granularity storage pool using the PowerFlex GUI.

If you are using a... Do this...


PowerFlex GUI presentation server a. Log in to the PowerFlex GUI.
b. Click Dashboard > Configuration > SDSs
c. Click Add device > Acceleration Device.
d. In the Add acceleration device to SDS dialog box,
enter the device path in the Path field, and device name
in the Name field.
e. Select the Acceleration Pool, you recorded in the drive
information table.
f. Click Add device.
g. Expand ADVANCED (OPTIONAL).
h. Select Force Device Take Over as YES.
i. Click Add devices.
PowerFlex version prior to 3.5 a. Connect to the PowerFlex GUI.
b. Right-click the newly added SDS.
c. Click Add device and choose the storage pool or
acceleration device that was created in the previous
steps and acceleration pool. Expand Advance Settings,
and choose Force Device Take Over.

14. Set the spare capacity for the fine granularity storage pool.
When finished, if you are not extending the MDM cluster, see Completing the expansion.


Extend the MDM cluster from three to five nodes


If the customer is scaling their environment, you should extend the MDM cluster from three to five nodes.

Extend the MDM cluster from three to five nodes using SCLI
Use this procedure to extend the MDM cluster using SCLI.
It is critical that the MDM cluster is distributed across access switches and physical cabinets to ensure maximum resiliency and
availability of the cluster. The location of the MDM components should be checked and validated during every engagement,
and adjusted if found noncompliant with the published guidelines. If an expansion includes adding physical cabinets and access
switches, you should relocate the MDM cluster components. See MDM cluster component layouts for more information.
When adding new MDM or tiebreaker nodes to a cluster, first place the PowerFlex storage-only nodes (if available), followed by
the PowerFlex hyperconverged nodes.

Prerequisites
● Identify new nodes to use as MDM or tiebreaker.
● Identify the management IP address, data1 IP address, and data2 IP address (log in to each new node or SVM and run the IP
addr command).
● Gather virtual interfaces for the nodes being used for the new MDM or tiebreaker, and note the interface of data1 and data2.
For example, for a PowerFlex storage-only node, the interface is bond0.152 and bond1.160. If it is an SVM, it is eth3 and
eth4.
● Identify the primary MDM.

Steps
1. SSH to each new node or SVM and assign the proper role (MDM or tiebreaker) to each.
2. Transfer the MDM and LIA packages to the newly identified MDM cluster nodes.
NOTE: The following steps contain sample versions of PowerFlex files as examples only. Use the appropriate PowerFlex
files for your deployment.

3. To install the LIA, enter TOKEN=<flexos password> rpm -ivh EMC-ScaleIO-lia-3.x-


x.xxx.el7.x86_64.rpm.
4. To install the MDM service:
● For the MDM role, enter MDM_ROLE_IS_MANAGER=1 rpm -ivh EMC-ScaleIO-mdm-3.x-
x.xxx.el7.x86_64.rpm
● For the tiebreaker role, enter MDM_ROLE_IS_MANAGER=0 rpm-ivh EMC-ScaleIO-mdm-3.x-
x.xxx.el7.x86_64.rpm.
5. Open an SSH terminal to the primary MDM and log in to the operating system.
6. Log in to PowerFlex by entering scli --login --username admin --password <powerflex password>.
7. Enter scli --query_cluster to query the cluster. Verify that it is in three-node cluster mode.
8. Add a new MDM by entering scli --add_standby_mdm --mdm_role manager --new_mdm_ip
<new MDM data1,data2 IPs> --new_mdm_management_ip <MDM management IP> --
new_mdm_virtual_ip_interfaces <list both interfaces, comma separated> --new_mdm_name <new
MDM name>. An example with placeholder values is shown after step 12.

9. Add a new tiebreaker by entering scli --add_standby_mdm --mdm_role tb --new_mdm_ip <new tiebreaker
data1,data2 IPs> --new_mdm_name <new tiebreaker name>.

10. Enter scli --query_cluster to find the ID for the newly added Standby MDM and the Standby TB.
11. To switch to a five-node cluster, enter scli --switch_cluster_mode --cluster_mode 5_node --
add_slave_mdm_id <Standby MDM ID> --add_tb_id <Standby tiebreaker ID>.

12. Repeat steps 1 through 9 to add Standby MDM and tiebreakers on other PowerFlex nodes.
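The following is an example of the add_standby_mdm command from step 8 with illustrative placeholder values only; the IP addresses, interface names, and MDM name are not taken from any real system, and on an SVM the data interfaces are typically eth3 and eth4, as noted in the prerequisites:

scli --add_standby_mdm --mdm_role manager --new_mdm_ip 192.168.152.46,192.168.160.46 --new_mdm_management_ip 10.234.92.84 --new_mdm_virtual_ip_interfaces eth3,eth4 --new_mdm_name svm-mdm-84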


Redistribute the MDM cluster


Use this procedure to redistribute the MDM cluster manually.
It is critical that the MDM cluster is distributed across access switches and physical cabinets to ensure maximum resiliency and
availability of the cluster. The location of the MDM components should be checked and validated during every engagement,
and adjusted if found noncompliant with the published guidelines. If an expansion includes adding physical cabinets and access
switches, you should relocate the MDM cluster components. See MDM cluster component layouts for more information.
When adding new MDM or tiebreaker nodes to a cluster, first place the PowerFlex storage-only nodes (if available), followed by
the PowerFlex hyperconverged nodes.

Prerequisites
● Identify new nodes to use as MDM or tiebreaker.
● Identify the management IP address, data1 IP address, and data2 IP address (log in to each new node or SVM and enter the
IP addr command).
● Gather virtual interfaces for the nodes being used for the new MDM or tiebreaker, and note the interface of data1 and data2.
For example, for a PowerFlex storage-only node, the interface is bond0.152 and bond1.160. If it is an SVM, it is eth3 and
eth4.
● Identify the primary MDM.

Steps
1. SSH to each new node or SVM and assign the proper role (MDM or tiebreaker) to each.
2. Transfer the MDM and LIA packages to the newly identified MDM cluster nodes.
NOTE: The following steps contain sample versions of PowerFlex files as examples only. Use the appropriate PowerFlex
files for your deployment.

3. To install the LIA, enter TOKEN=<flexos password> rpm -ivh EMC-ScaleIO-lia-3.x-x.xxx.el7.x86_64.rpm.
4. To install the MDM service:
● For the MDM role, enter MDM_ROLE_IS_MANAGER=1 rpm -ivh EMC-ScaleIO-mdm-3.x-
x.xxx.el7.x86_64.rpm
● For the tiebreaker role, enter MDM_ROLE_IS_MANAGER=0 rpm -ivh EMC-ScaleIO-mdm-3.x-x.xxx.el7.x86_64.rpm.
5. Open an SSH terminal to the primary MDM and log in to the operating system.
6. Log in to PowerFlex by entering scli --login --username admin --password <powerflex password>.
7. Add a new standby MDM by entering scli --add_standby_mdm --mdm_role manager --new_mdm_ip <new MDM data1,data2 IPs> --new_mdm_management_ip <MDM management IP> --new_mdm_virtual_ip_interfaces <list both interfaces, comma separated> --new_mdm_name <new MDM name>.

8. Add a new standby tiebreaker by entering scli --add_standby_mdm --mdm_role tb --new_mdm_ip <new TB data1,data2 IPs> --new_mdm_name <new TB name>.

9. Repeat Steps 7 and 8 for each new MDM and tiebreaker that you are adding to the cluster.
10. Enter scli --query_cluster to find the ID for the current MDM and tiebreaker. Note the IDs of the MDM and
tiebreaker being replaced.
11. To replace the MDM, enter scli --replace_cluster_mdm --add_slave_mdm_id <mdm id to add> --
remove_slave_mdm_id <mdm id to remove>.
Repeat this step for each MDM.
12. To replace the tiebreaker, enter scli --replace_cluster_mdm --add_tb_id <tb id to add> --
remove_tb_id <tb id to remove>.
Repeat this step for each tiebreaker.
13. Enter scli --query_cluster to find the IDs for the MDMs and tiebreakers being removed.
14. Using IDs to remove the old MDM, enter scli --remove_standby_mdm --remove_mdm_id <mdm id to
remove>.


NOTE: This step might not be necessary if this MDM remains in service as a standby. See MDM cluster component
layouts for more information.

15. To remove the old tiebreaker, enter scli --remove_standby_mdm --remove_mdm_id <mdm id to remove>.

NOTE: This step might not be necessary if this tiebreaker remains in service as a standby. See MDM cluster component
layouts for more information.

16. Repeat steps 1 through 12, as needed.
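As a hedged sketch of the replacement pass (the IDs are placeholders taken from the scli --query_cluster output; repeat for each MDM and tiebreaker being relocated):

scli --query_cluster
scli --replace_cluster_mdm --add_slave_mdm_id <new standby MDM ID> --remove_slave_mdm_id <old MDM ID>
scli --replace_cluster_mdm --add_tb_id <new standby tiebreaker ID> --remove_tb_id <old tiebreaker ID>
scli --remove_standby_mdm --remove_mdm_id <old MDM ID>
scli --query_cluster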

MDM cluster component layouts


This topic provides examples of layouts for MDM components in a PowerFlex appliance with two to five cabinets.
The Metadata Manager (MDM) cluster contains the following components:
● Primary MDM 1
● Secondary MDM 2 and 3
● Tiebreaker 1 and 2
When a PowerFlex appliance contains multiple cabinets, distribute the MDM components to maximize resiliency.
Distribute the primary MDM, secondary MDMs, and tiebreakers across physical cabinets and access switch pairs to ensure
maximum availability of the cluster. When introducing new or standby MDM components into the cluster, make sure you adhere
to the MDM redistribution methodology and select your hosts appropriately, so the cluster remains properly distributed across
the physical cabinets and access switch pairs.
The following illustrations provide examples of MDM component layouts for two to five cabinets:
● MDM cluster component layout for a two-cabinet PowerFlex appliance

● MDM cluster component layout for a three-cabinet PowerFlex appliance


● MDM cluster component layout for a four-cabinet PowerFlex appliance

● MDM cluster component layout for a five-cabinet PowerFlex appliance

Add a PowerFlex node to a PowerFlex Manager service in lifecycle mode
Use this procedure to update the inventory and service details in PowerFlex Manager for a new PowerFlex node.

Steps
1. Update the inventory for vCenter (vCSA), switches, gateway VM, and nodes:
a. Click Resources on the home screen.


b. Select the vCenter, switches (applicable only for full networking), gateway VM, and newly added nodes.
c. Click Run Inventory.
d. Click Close.
e. Wait for the job in progress to complete.
2. Update the service details:
a. Click Services.
b. Choose the service on which the new node is expanded and click View Details.
c. On the Services details screen, choose Update Service Details.
d. Choose the credentials for the node and SVM and click Next.
e. On the Inventory Summary page, verify that the newly added nodes are listed under Physical Node, and click Next.
f. On the Summary page, verify the details and click Finish.

Configuring the NSX-T Ready tasks


This section describes how to configure the PowerFlex nodes as part of preparing the PowerFlex appliance for NSX-T. Before
you configure the ESXi hosts as NSX-T transport nodes, you must add the transport distributed port groups and convert the
distributed switch from LACP to individual trunks.
NOTE: If you configure VMware NSX-T on PowerFlex hyperconverged or compute-only nodes and add them to PowerFlex
Manager, the services will be in lifecycle mode. If you need to perform an expansion on such a node, see Adding a
PowerFlex R640/R740xd/R840 node to a PowerFlex Manager service in lifecycle mode to add the PowerFlex node.
Contact VMware Support to configure VMware NSX-T on a new PowerFlex node and see Add PowerFlex nodes to a
service to update the service details.

Configure VMware NSX-T overlay distributed virtual port group


Use this procedure to create and configure the VMware NSX-T overlay distributed virtual port group on cust_dvswitch.

Prerequisites
Ensure that the VMware vSphere vCenter Server and the VMware vSphere Client are accessible.

Steps
1. Log in to the VMware vSphere Client.
2. Click Networking.
3. Expand the PowerFlex Customer-Datacenter.
4. Right-click cust_dvswitch.
5. Click Distributed Port Group > New Distributed Port Group.
6. Update the name to pfmc-nsx-transport-121 and click Next.
7. Select the default Port binding.
8. Select the default Port allocation.
9. Select the default # of ports (default is 8).
10. Select the default VLAN as VLAN Type.
11. Set the VLAN ID to 121.
12. Clear the Customize default policies configuration check box and click Next.
13. Click Finish.
14. Right-click the pfmc-nsx-transport-121 port group and click Edit Settings....
15. Click Teaming and failover.
16. Verify that Uplink1 and Uplink2 are moved to Active.
17. Click OK.


Convert trunk access to LACP-enabled switch ports for cust_dvswitch
Use this procedure to convert the physical NICs from trunk to LACP without losing network connectivity. Use this option only if
cust_dvswitch is configured as trunk. LACP is the default configuration for cust_dvswitch.

Prerequisites
Both Cisco Nexus access switch ports for the compute VMware ESXi hosts are configured as trunk access. These ports will be
configured as LACP enabled after the physical adapter is removed from each ESXi host.
WARNING: As the VMK0 (ESXi management) is not configured on cust_dvswitch, both the vmnics are first
migrated to the LAGs simultaneously and then the port channel is configured. Data connectivity to PowerFlex is
lost until the port channels are brought online with both vmnic interfaces connected to LAGs.

About this task


This procedure includes reconfiguring one port at a time as an LACP without requiring any migration of the VMkernel network
adapters.

Steps
1. Log in to the VMware vSphere Client.
2. Look at VMware vCenter and physical switches to ensure that both ports across all hosts are up.
3. For each compute VMware ESXi host, record the physical switch ports to which vmnic5 (switch-B) and vmnic7 (switch-A) are connected.
a. Click Home, then select Hosts and Clusters and expand the compute cluster.
b. Select the first compute ESXi host in left pane, and then select Configure tab in right pane.
c. Select Virtual switches under Networking.
d. Expand cust_dvswitch.
e. Expand Uplink1, click the ellipsis (…) for vmnic7, and select View Settings.
f. Click LLDP tab.
g. Record the Port ID (switch port) and System Name (switch).
h. Repeat step 3 for vmnic5 on Uplink 2.
4. Configure LAG (LACP) on cust_dvswitch within VMware vCenter Server:
a. Click Home, then select Networking.
b. Expand the compute cluster and click cust_dvswitch > Configure > LACP.
c. Click +New to open wizard.
d. Verify that the name is lag1.
e. Verify that the number of ports is 2.
f. Verify that the mode is Active.
g. Change Load Balancing mode to Source and destination IP address, TCP/UDP port.
h. Click OK.
5. Migrate vmnic5 to lag1-0 and vmnic7 to lag1-1 on cust_dvswitch for the compute VMware ESXi host as follows:
a. Click Home, then select Networking and expand the PowerFlex data center.
b. Right-click cust_dvswitch and select Manage host networking to open wizard.
c. Select Add hosts... and click Next.
d. Click Attached hosts..., select all the compute ESXi hosts, and click OK.
e. Click Next.
f. For each ESXi host, select vmnic5 and click Assign uplink.
g. Click lag1-0 and click OK.
h. For each ESXi host, select vmnic7 and click Assign uplink.
i. Click lag1-1 and click OK.
j. Click Next > Next > Next > Finish.
6. Create port-channel (LACP) on switch-A for compute VMware ESXi host.
The following switch configuration is an example of a single compute VMware ESXi host.
a. Open an SSH session to switch-A using PuTTY or a similar SSH client.


b. Create a port channel on switch-A for each compute VMware ESXi host as follows:

interface port-channel40
description to flex-compute-esxi-host01
switchport
switchport mode trunk
switchport trunk allowed vlan 151,152,153,154
spanning-tree port type edge trunk
spanning-tree bpduguard enable
speed 25000
no lacp suspend-individual
vpc 40

7. Configure channel-group (LACP) on switch-A access port (vmnic5) for each compute VMware ESXi host.
The following switch port configuration is an example of a single compute VMware ESXi host.
a. Open an SSH session to switch-A using PuTTY or a similar SSH client.
b. Create port on switch-A as follows:

int e1/1/1
description to flex-compute-esxi-host01 – vmnic5
switchport
switchport mode trunk
switchport trunk allowed vlan 151,152,153,154
spanning-tree port type edge trunk
spanning-tree bpduguard enable
spanning-tree guard root
speed 25000
channel-group 40 mode active

8. Create port-channel (LACP) on switch-B for compute VMware ESXi host.


The following switch configuration is an example of a single compute VMware ESXi host.
a. Open an SSH session to switch-B using PuTTY or a similar SSH client.
b. Create a port channel on switch-B for each compute VMware ESXi host as follows:

interface port-channel40
description to flex-compute-esxi-host01
switchport
switchport mode trunk
switchport trunk allowed vlan 151,152,153,154
spanning-tree port type edge trunk
spanning-tree bpduguard enable
speed 25000
no lacp suspend-individual
vpc 40

9. Configure channel-group (LACP) on switch-B access port (vmnic7) for each compute VMware ESXi host.
The following switch port configuration is an example of a single compute VMware ESXi host.
a. Open an SSH session to switch-B using PuTTY or a similar SSH client.
b. Create port on switch-B as follows:

int e1/1/1
description to flex-compute-esxi-host01 – vmnic7
switchport
switchport mode trunk
switchport trunk allowed vlan 151,152,153,154
spanning-tree port type edge trunk
spanning-tree bpduguard enable
spanning-tree guard root
speed 25000
channel-group 40 mode active

10. Update the teaming and failover policy to route based on IP hash for each port group within cust_dvswitch:
a. Click Home and select Networking.
b. Expand cust_dvswitch so that all port groups are in view.


c. Right-click flex-data-01 and select Edit Settings.


d. Click Teaming and failover.
e. Change Load Balancing mode to Route based on IP hash.
f. Repeat steps 10b to 10e for each remaining port group.

Add the VMware NSX-T service using PowerFlex Manager


Use this procedure only if the PowerFlex nodes are added to the NSX-T environment.

Prerequisites
NOTE: Before adding a VMware NSX-T service using PowerFlex Manager, either the customer or VMware services must
add the new PowerFlex node to NSX-T Data Center using NSX-T UI.
Consider the following:
● Before adding this service or updating the service details in PowerFlex Manager, verify that NSX-T Data Center is configured
on the PowerFlex hyperconverged or compute-only nodes.
● If the transport nodes (PowerFlex cluster) are configured with NSX-T, you cannot replace the field units using PowerFlex
Manager. You must add the node manually by following either of these procedures depending on the node type:
○ Performing a PowerFlex hyperconverged node expansion
○ Performing a PowerFlex compute-only node expansion

Steps
1. Log in to PowerFlex Manager.
2. If NSX-T Data Center 3.0 or higher is deployed and is using VDS (not N-VDS), then add the transport network:
a. From Getting Started, click Define Networks.
b. Click + Define and do the following:

NSX-T information Values


Name Type NSX-T Transport

Description Type Used for east-west traffic

Network Type Select General Purpose LAN


VLAN ID Type 121. See the Workbook.

Configure Static IP Address Ranges Select the Configure Static IP Address Ranges check box. Type
the starting and ending IP address of the transport network IP pool

c. Click Save > Close.


3. From Getting Started, click Add Existing Service and do the following:
a. On the Welcome page, click Next.
b. On the Service Information page, enter the following details:

Service Information Details


Name Type NSX-T Service

Description Type Transport Nodes

Type Type Hyperconverged or Compute-only

Firmware and software compliance Select the Intelligent Catalog version


Who should have access to the service deployed Leave as default
from this template?

c. Click Next.
d. On the Network Information page, select Full Network Automation, and click Next.
e. On the Cluster Information page, enter the following details:


Cluster Information Details


Target Virtual Machine Manager Select vCSA name
Data Center Name Select data center name
Cluster Name Select cluster name
Target PowerFlex gateway Select PowerFlex gateway name
Target Protection Domain Select PD-1
OS Image Select the ESXi image

f. Click Next.
g. On the OS Credentials page, select the OS credentials for each node, and click Next.
h. On the Inventory Summary page, review the summary and click Next.
i. On the Networking Mapping page, verify that the networks are aligned with the correct dvSwitch.
j. On the Summary page, review the summary and click Finish.
4. Verify PowerFlex Manager recognizes NSX-T is configured on the nodes:
a. Click Services.
b. Select the hyperconverged or compute-only service.
c. Verify that a banner appears under the Service Details tab, stating that NSX-T is configured on a node and is
preventing some features from being used. If you do not see this banner, verify that you selected the correct service
and that NSX-T is configured on the hyperconverged or compute-only nodes.


55
Encrypting PowerFlex hyperconverged (SVM) or storage-only node devices (SED or non-SED drives)

Encrypt PowerFlex hyperconverged (SVM) or storage-only devices (SED drives)
Use this procedure to encrypt PowerFlex hyperconverged (SVM) or storage-only devices (SED drives).

Prerequisites

NOTE: This procedure is not applicable for PowerFlex storage-only nodes with NVMe drives.

Ensure that the following prerequisites are met:


● If you are using PowerFlex presentation server, see Editing the SVM configuration for the CPU settings.
● If you are using PowerFlex versions prior to 3.6, verify the Storage VM (SVM) and CPU settings:
1. Log in to the customer VMware vCenter.
2. Right-click SVM > Edit Settings.
3. Verify the SVM vCPU is set to 12 (one socket and twelve cores), and RAM is set to 16 GB (applicable for MG pool
enabled system only). If you have an FG pool enabled system, change the RAM size based on the node configuration
table specified in Add NVDIMM.
● SSH to the SVM or the PowerFlex storage-only node on which you plan to have the encrypted devices.
● Download and install the CloudLink agent by entering the following command:

curl -O http://cloudlink_ip/cloudlink/securevm && sh securevm -S cloudlink_ip

where cloudlink_ip is the IP address of one of the CloudLink Center VMs.


NOTE: The preceding command downloads and installs the CloudLink agent on SVMs or PowerFlex storage-only nodes,
and then adds the machine (SVM or PowerFlex storage-only nodes) into the default machine group.

● If you want to add the SVM into a specific machine group, use the -G [group_code] argument with the preceding
command, as shown in the sketch after the following note, where -G group_code specifies the registration code for the
machine group to which you want to assign the machine.

NOTE: To obtain the registration code of the machine group, log in to the CloudLink Center using a web browser.
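For example, a hedged sketch with placeholder values (192.168.105.120 for the CloudLink Center IP address and X1Y2-Z3A4 for the machine group registration code):

curl -O http://192.168.105.120/cloudlink/securevm && sh securevm -S 192.168.105.120 -G X1Y2-Z3A4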

Steps
1. Open a browser, and provide the CloudLink Center IP address.
2. In the Username box, enter secadmin.
3. In the Password box, enter the secadmin password.
4. Click Agents > Machines.
5. Ensure that the hostname of the new SVM or PowerFlex storage-only node is listed, and is in Connected state.


6. If the SDS has devices that are added to PowerFlex, remove the devices. Otherwise, skip this step.

If you are using a... Do this...


PowerFlex GUI presentation server a. Log in to the PowerFlex GUI presentation server.
b. Click Dashboard > Configuration > Devices.
c. Enter the SDS name in the search box, and select the
device by clicking on the check box.
d. Click More, and then select Remove.
e. Click Remove.
PowerFlex version prior to 3.5 a. Log in to the PowerFlex GUI.
b. Click Backend.
c. Locate the required SDS.
d. Right-click each device, and then click Remove.
NOTE: Repeat this step for each of the devices
added to PowerFlex. It might take some time to
remove all the devices.

7. From the SSH session:


a. Enter svm status to view the encrypted devices.
b. Enter svm manage /dev/sdX to take control of the encrypted devices, where X is the device letter.
c. Enter svm status to view the status of the devices.


NOTE: If the device shows taking control, run svm status until the device status shows as managed. It
is a known issue that the CLI status of SED drives shows as unencrypted, whereas the CloudLink Center UI shows the
device status as Encrypted HW.

d. Log in to CloudLink Center.


e. Click Agents > Machines. Ensure that the newly added machines are in the Connected state. Select each newly added
machine and verify that the device status is Encrypted HW and the SED status is Managed.

NOTE: There are no /dev/mapper devices for SEDs. Use the device name listed in the svm status command. It
is recommended to add self-encrypted drives (SEDs) to their own storage pools.

f. Once all SED drives are Managed, add the encrypted devices to the PowerFlex SDS.

If you are using a... Do this...


PowerFlex GUI presentation server a. Log in to the PowerFlex GUI presentation server.
b. Click Dashboard > Configuration > SDSs.
c. Select the SDS to add the disk.
d. Click Add Device > Storage Device.
e. In the Add storage device to SDS dialog box, enter the
device path in the Path field, and device name in the
Name field.
f. Select the storage pool and media type you recorded in
the drive information table.
g. Click Add Devices.
PowerFlex version prior to 3.5 a. Log in to the PowerFlex GUI.
b. Click Backend.
c. Locate the PowerFlex SDS, right-click, and then click
Add Device.
d. In Add device to SDS, enter the Path and select the
Storage Pool for each device.

If the PowerFlex storage-only node has only SSD disks,
then the path is /dev/mapper/svm_sdX where X is
the device you have managed.
e. Repeat the substeps a-d for all SDS nodes. Wait for
PowerFlex rebalance to finish.

8. Ensure that rebalance is running and progressing before continuing to another SDS.
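Condensed into a hedged sketch, the svm commands from step 7 look like the following; /dev/sdc is a placeholder for an SED device reported by svm status:

svm status              # list devices and their current state
svm manage /dev/sdc     # take control of the self-encrypting drive
svm status              # repeat until the SED shows as managed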

Related information
Verify the CloudLink license

Encrypt PowerFlex hyperconverged (SVM) or storage-only devices (non-SED drives)
Use this procedure to encrypt PowerFlex hyperconverged (SVM) or storage-only devices.

Prerequisites
Ensure that the following prerequisites are met:
● If you are using PowerFlex presentation server, see Modifying the vCPU, memory, vNUMA, and CPU reservation settings on
SVMs for the CPU settings.
● If you are using PowerFlex versions prior to 3.6, the Storage VM (SVM) vCPU is set to 12 (one socket and twelve cores),
and RAM is set to 16 GB (applicable for MG pool enabled system only). If you have an FG pool enabled system, change the
RAM size based on the node configuration table specified in Add NVDIMM.
● SSH to the SVM or the PowerFlex storage-only node on which you plan to have the encrypted devices.
● Download and install the CloudLink Agent by entering:
curl -O http://cloudlink_ip/cloudlink/securevm && sh securevm -S cloudlink_ip

where cloudlink_ip is the IP address of one of the CloudLink Center VMs.


NOTE: The preceding command downloads and installs the CloudLink agent on SVMs or PowerFlex storage-only nodes,
and then adds the machine (SVM or storage-only node) into the default machine group.

● If you want to add the SVM into a specific machine group, use the -G [group_code] argument with the preceding
command.
where -G group_code specifies the registration code for the machine group to which you want to assign the machine.

NOTE: To obtain the registration code of the machine group, log in to the CloudLink Center using a web browser.

Steps
1. Open a browser, and enter the CloudLink Center IP address.
2. In the Username box, enter secadmin.
3. In the Password box, enter the secadmin password.
The CloudLink Center home page is displayed.
4. Click Agents > Machines.
5. Ensure that the hostname of the new SVM or PowerFlex storage-only node is listed, and is in Connected state.


6. On the SVM or PowerFlex storage-only node, edit the /opt/emc/extra/pre_run.sh file.

NOTE: Ensure that the Storage Data Server (SDS) is installed before the CloudLink agent is installed.

In the /opt/emc/extra/pre_run.sh file, add sleep 60 before the last line if it does not already exist.
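One hedged way to make this edit non-interactively, assuming GNU sed on the SVM or storage-only node (back up the file first and confirm the result):

cp /opt/emc/extra/pre_run.sh /opt/emc/extra/pre_run.sh.bak                                        # keep a backup
grep -q 'sleep 60' /opt/emc/extra/pre_run.sh || sed -i '$i sleep 60' /opt/emc/extra/pre_run.sh    # insert before the last line
tail -n 3 /opt/emc/extra/pre_run.sh                                                               # confirm the placement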

7. If the SDS has devices that are added to PowerFlex, remove the devices. Otherwise, skip this step.

If you are using a... Do this...


PowerFlex GUI presentation server a. Log in to the PowerFlex GUI presentation server.
b. Click Dashboard > Configuration > Devices.
c. Type the SDS name in the search box, and select the device.
d. Click More, and select Remove.
e. Click Remove.
PowerFlex version prior to 3.5 a. Log in to the PowerFlex GUI.
b. Click Backend.
c. Locate the required SDS.
d. Right-click each device and click Remove.
NOTE: Repeat this step for each of the devices
added to PowerFlex. It might take some time to
remove all the devices.

8. From the SSH session:


a. Enter svm status to view the unencrypted devices.
b. For SSD drives, enter svm encrypt /dev/sdX for each drive you want to encrypt, where X is the device letter.
c. For NVMe drives, enter svm encrypt /dev/nvmexxx for each drive you want to encrypt.
d. Enter svm status to view the status of the devices.


e. Add the encrypted devices to the PowerFlex SDS.

If you are using a... Do this...


PowerFlex GUI presentation server i. Log in to the PowerFlex GUI presentation server at
https://<IPaddress>:8443 using the MDM.
ii. Click Configuration > SDS.
iii. Select the appropriate PowerFlex SDS, click Add
Device, and choose Storage device from the drop-
down menu.
● If the PowerFlex storage-only node has only SSD
disks, then the path is /dev/mapper/svm_sdX
where X is the device you have encrypted.
● If the PowerFlex storage-only node has
NVMe disks, then the path is /dev/mapper/
svm_nvmeXnX where X is the device you have
encrypted.

iv. Enter the new device path and name in the Path
and Name fields of the Add Storage Device to SDS
window.
v. Select the Storage Pool, Media Type you recorded in
the drive information table.
vi. Click Add Device.
vii. Repeat the preceding substeps to add all the devices, and click
Add Devices.
PowerFlex version prior to 3.5 i. Log in to the PowerFlex GUI.
ii. Click Backend.
iii. Locate the PowerFlex SDS, right-click, and select Add
Device.
iv. In Add device to SDS, enter the Path and select the
Storage Pool for each device.
● If the PowerFlex storage-only node has only SSD
disks, then the path is /dev/mapper/svm_sdX
where X is the device you have encrypted.
● If the PowerFlex storage-only node has
NVMe disks, then the path is /dev/mapper/
svm_nvmeXnX where X is the device you have
encrypted.

v. Repeat these steps for all SDS nodes. Wait for the PowerFlex rebalance to finish.

9. Ensure that rebalance is running and progressing before continuing to another SDS.
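Condensed into a hedged sketch, the svm commands from step 8 look like the following; the device names are placeholders taken from the svm status output:

svm status                  # list unencrypted devices
svm encrypt /dev/sdc        # SSD example
svm encrypt /dev/nvme0n1    # NVMe example
svm status                  # encrypted devices appear under /dev/mapper/svm_*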

Related information
Verify the CloudLink license


Verify newly added SVMs or storage-only nodes machine status in CloudLink Center


56
Performing a PowerFlex compute-only node expansion
Perform the manual expansion procedure to add a PowerFlex R650/R750/R6525 compute-only node to PowerFlex Manager
services that are discovered in lifecycle mode.
Before adding a PowerFlex node, you must complete the following initial set of expansion procedures:
● Preparing to expand a PowerFlex appliance
● Configuring the network

Discover resources
Perform this step to discover and allow PowerFlex Manager access to resources in the environment. Provide the management
IP address and credential for each discoverable resource.

Prerequisites
Verify that the iDRAC network settings are configured. See Configure iDRAC network settings for more information.

About this task


Dell EMC recommends using separate operating system credentials for SVM and VMware ESXi. For information about creating
or updating credentials in PowerFlex Manager, click Settings > Credentials Management and access the online help.
During node discovery, you can configure iDRAC nodes to automatically send alerts to PowerFlex Manager. You can also change
the IP address of the iDRAC nodes and discover them. See Reconfigure the discovered nodes with new management IP and
credentials in the Dell EMC PowerFlex Appliance Administration Guide. If the PowerFlex nodes are not configured for alert
connector, Secure Remote Services does not receive critical or error alerts for those resources.
The following table describes how to configure resources in managed mode:

Resource type Resource state Example


PowerFlex nodes Managed PowerEdge iDRAC management IP address
If you want to perform firmware updates or deployments on a
discovered node, change the default state to managed.
Perform firmware or catalog updates from the Services page,
or the Resources page.

NOTE: For partial network deployments, you do not need to discover the switches. The switches need to be pre-
configured. For sample configurations for Dell, Cisco, and Arista switches, see the Dell EMC PowerFlex Appliance
Administration Guide.
The following are the specific details for completing the Discovery wizard steps:

Node
  IP address range: 192.168.101.21-192.168.101.24
  Resource state: Managed
  Discover into node pool: Compute-only pool
  Credentials (name): PowerFlex appliance iDRAC default
  Credentials (username): root
  Credentials (password): calvin
  Credentials (SNMPv2 community string): customer provided

Switch
  IP address range: 192.168.101.45-192.168.101.46
  Resource state: Managed
  Discover into node pool: NA
  Credentials (name): access switches
  Credentials (username): admin
  Credentials (password): admin
  Credentials (SNMPv2 community string): public

VM Manager
  IP address range: 192.168.105.105
  Resource state: Managed
  Discover into node pool: NA
  Credentials (name): vCenter
  Credentials (username): administrator@vsphere.local
  Credentials (password): P@ssw0rd!
  Credentials (SNMPv2 community string): NA

Element Manager**
  IP address range: 192.168.105.120, 192.168.105.121
  Resource state: Managed
  Discover into node pool: NA
  Credentials (name): CloudLink
  Credentials (username): secadmin
  Credentials (password): Secadmin!!
  Credentials (SNMPv2 community string): NA

** This is optional. For a new CloudLink Center deployment, the CloudLink Center is discovered automatically.

Prerequisites
● Configure the iDRAC network settings.
● Gather the IP addresses and credentials that are associated with the resources.
NOTE: PowerFlex Manager also allows you to use the name-based searches to discover a range of nodes that were
assigned the IP addresses through DHCP to iDRAC. For more information about this feature, see Dell EMC PowerFlex
Manager Online Help.

Steps
1. On the PowerFlex Manager Getting Started page, click Discover Resources.
2. On the Welcome page of the Discovery Wizard, read the instructions and click Next.
3. On the Identify Resources page, click Add Resource Type. From the Resource Type list, select the resource that you
want to discover.
4. Enter the management IP address of the resource in the IP/Hostname Range field. To discover a resource in an IP range,
provide a starting and ending IP address.
5. In the Resource State list, select Managed or Unmanaged.
6. For PowerFlex node, to discover resources into a selected node pool instead of the global pool (default), select the node
pool from the Discover into Node Pool list.
7. Select the appropriate credential from the Credentials list. See the table above for details.
8. For PowerFlex node, if you want PowerFlex Manager to automatically reconfigure the iDRAC IP addresses of the nodes it
discovers, select the Reconfigure discovered nodes with new management IP and credentials check box. This option
is not selected by default, because it is faster to discover the nodes if you bypass the reconfiguration.
NOTE: iDRAC can also be discovered using the hostname.

NOTE: For the Resource Type Node, you can use a range with hostname or IP address, provided the hostname has a
valid DNS entry.

9. For PowerFlex node, select the Auto configure nodes to send alerts to PowerFlex Manager check box to have
PowerFlex Manager automatically configure nodes to send alerts to PowerFlex Manager.
10. Click Next to start discovery. On the Discovered Resources page, select the resources from which you want to collect
inventory data and click Finish. The discovered resources are listed on the Resources page.

Install VMware ESXi to expand compute capacity


Use this procedure to install VMware ESXi on the PowerFlex nodes to expand the compute capacity.

About this task


The network adapters specified in this procedure are for representation purposes only. See Cabling the PowerFlex R640/
R740xd/R840 nodes for the logical network associated with the PowerFlex node.


Prerequisites
Verify the customer VMware ESXi ISO is available and is located in the Intelligent Catalog code directory.

Steps
1. Log in to the iDRAC:
a. Connect to the iDRAC interface and launch a virtual remote console by clicking Dashboard > Virtual Console and click
Launch Virtual Console.
b. Select Connect Virtual Media.
c. Under Map CD/DVD, click Choose File > Browse and browse to the folder where the ISO file is saved, select it, and
click Open.
d. Click Map Device.
e. Click Menu > Boot > Virtual CD/DVD/ISO.
f. Click Power > Reset System (warm boot).
2. Set the boot option to UEFI.
a. Press F2 to enter system setup.
b. Under System BIOS > Boot setting, select UEFI as the boot mode.
NOTE: Ensure that the BOSS card is set as the primary boot device from the boot sequence settings. If the BOSS
card is not set as the primary boot device, reboot the server and change the UEFI boot sequence from System
BIOS > Boot settings > UEFI BOOT settings.

c. Click Back > Back > Finish > Yes > Finish > OK > Finish > Yes.
3. Install VMware ESXi:
a. On the VMware ESXi installer screen, press Enter to continue.
b. Press F11 to accept the license agreement.
c. Under Local, select DELLBOSS VD as the install location, and press Enter if prompted to do so.
d. Select US Default as the keyboard layout.
e. When prompted, type the root password and press Enter.
f. At the Confirm Install screen, press F11.
g. When the installation is complete, delete the installation media before rebooting.
h. Press Enter to reboot the node.
NOTE: Set the first boot device to be the drive on which you installed VMware ESXi in Step 3.

4. Configure the host:


a. Press F2 to customize the system.
b. Provide the password for the root user and press Enter.
c. Go to Direct Console User Interface (DCUI) > Configure Management Network.
d. Set Network Adapters to the following:
● VMNIC0
● VMNIC1
● VMNIC2
● VMNIC3
The network adapters specified are for representation purpose only. See Cabling the PowerFlex R650/R750/R6525
nodes and Configuring the network.

e. See the VMware ESXi Management VLAN ID field in the Workbook for the required VLAN value.
f. Set IPv4 ADDRESS, SUBNET MASK, and DEFAULT GATEWAY configuration to the values defined in the Workbook.
g. Go to DNS Configuration. See the Workbook for the required DNS value.
h. Go to Custom DNS suffix. See the Workbook (local VXRC DNS).
i. Go to DCUI Troubleshooting Options.
j. Select Enable ESXi Shell and Enable SSH.
k. Press <Alt>-F1
l. Log in as root.
m. To enable the VMware ESXi host to work on the port channel, type:

esxcli network vswitch standard policy failover set -v vSwitch0 -l iphash


n. Type vim-cmd hostsvc/datastore/rename datastore1 DASXX to rename the datastore, where XX is the
server number.
o. Type exit to log off.
p. Press <Alt>-F2 to return to the DCUI.
q. Select Disable ESXi Shell.
r. Go to DCUI IPv6 Configuration.
s. Disable IPv6.
t. Press ESC to return to the DCUI.
u. Type Y to commit the changes and the node restarts.
v. Verify host connectivity by pinging the IP address from the jump server, using the command prompt.
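A hedged verification sketch from the ESXi shell (the gateway address is a placeholder) to confirm the host configuration before proceeding:

esxcli network vswitch standard policy failover get -v vSwitch0   # load balancing should report iphash
esxcli network ip interface ipv4 get                              # confirm the vmk0 address and netmask
vmkping 192.168.105.1                                             # ping the management gateway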

Create a new VMware ESXi cluster to add PowerFlex nodes
After installing VMware ESXi, use this procedure to create a new cluster, enable high-availability and DRS, and add a host to the
cluster.

About this task


If you are adding the host to a new cluster, follow the entire procedure. To add the host on an existing cluster, skip steps 1
through 6.

Prerequisites
Ensure that you have access to the customer vCenter.

Steps
1. From the vSphere Client home page, go to Home > Hosts and Clusters.
2. Select a data center.
3. Right-click the data center and select New Cluster.
4. Enter a name for the cluster.
5. Select vSphere DRS and vSphere HA cluster features.
6. Click OK.
7. Select the existing cluster or newly created cluster.
8. From the Configure tab, click Configuration > Quickstart.
9. Click Add in the Add hosts card.
10. On the Add hosts page, in the New hosts tab, add the hosts that are not part of the vCenter Server inventory by entering
the IP address, or hostname and credentials.
11. (Optional) Select the Use the same credentials for all hosts option to reuse the credentials for all added hosts.
12. Click Next.
13. The Host Summary page lists all the hosts to be added to the cluster with related warnings. Review the details and click
Next.
14. On the Ready to complete page, review the IP addresses or FQDN of the added hosts and click Finish.
15. Add the new licenses:
a. Click Menu > Administration
b. In the Administration section, click Licensing.
c. Click Licenses.
d. From the Licenses tab, click Add.
e. Enter or paste the license keys for VMware vSphere and vCenter, one key per line, and click Next.
Each license key is a 25-character string of letters and digits in the format XXXXX-XXXXX-XXXXX-XXXXX-XXXXX.
You can enter a list of keys in one operation; a new license is created for every license key you enter.
f. On the Edit license names page, rename the new licenses as appropriate and click Next.
g. Optionally, provide an identifying name for each license. Click Next.
h. On the Ready to complete page, review the new licenses and click Finish.


Install and configure the SDC


After adding the PowerFlex nodes with VMware ESXi to the cluster, install the storage data client (SDC) to continue the expansion
process.

Steps
1. Copy the SDC file to the local datastore on the VMware vSphere ESXi server.
2. Use SSH on the host and type esxcli software vib install -d /vmfs/volumes/datastore1/
sdc-3.x.xxxxx.xx-esx7.x.zip -n scaleio-sdc-esx7.x.
3. Reboot the PowerFlex node.
4. To configure the SDC, generate a new UUID:
NOTE: If the PowerFlex cluster is using an SDC authentication, the newly added SDC reports as disconnected when
added to the system. See Configure an authentication enabled SDC for more information.

a. Use SSH to connect to the primary MDM.


b. Type uuidgen. A new UUID string is generated.
6607f734-da8c-4eec-8ea1-837c3a6007bf
5. Use SSH to connect to the new PowerFlex node.
6. Substitute the new UUID in the following command:

esxcli system module parameters set -m scini -p "IoctlIniGuidStr=6607f734-da8c-4eec-8ea1-837c3a6007bf IoctlMdmIPStr=VIP1,VIP2,VIP3,VIP4"

A minimum of two logical data networks are supported. Optionally, you can configure four logical data networks. Verify the
number of VIPs configured in the existing setup.
7. Reboot the PowerFlex node.
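After the reboot, a hedged verification sketch from the ESXi shell (using the package and module names from the preceding steps):

esxcli software vib list | grep -i sdc                       # confirm the SDC vib is installed
esxcli system module parameters list -m scini | grep Ioctl   # confirm the GUID and MDM VIPs are set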

Rename the SDCs


Use this procedure to rename the SDCs.

Steps
1. If using PowerFlex GUI presentation server:
a. Log in to the PowerFlex GUI presentation server.
b. Go to Configuration > SDCs.
c. Select the SDC, click Modify > Rename, and rename the new host to the standard naming convention.
For example, ESX-10.234.91.84
2. If using a PowerFlex version prior to 3.5:
a. From the PowerFlex GUI, click Frontend > SDCs and rename the new host to the standard naming convention.
For example, ESX-10.234.91.84
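Alternatively, the rename can be scripted from the primary MDM. This is a hedged SCLI sketch with an example IP address and name; verify the option names against your PowerFlex release:

scli --login --username admin --password <powerflex password>
scli --rename_sdc --sdc_ip 10.234.91.84 --new_name ESX-10.234.91.84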

Renaming the VMware ESXi local datastore


Use this procedure to rename local datastores using the proper naming conventions.

Prerequisites
VMware ESXi must be installed with hosts added to the VMware vCenter.

Steps
1. Log in to the VMware vSphere Client.
2. Click Hosts and Clusters.
3. Locate and select the VMware ESXi host.


4. Select Datastores.
5. Right-click the datastore name, and select Rename.
6. Name the datastore using the DASXX convention, with XX being the node number.

Patch and install drivers for VMware ESXi


Use this procedure if VMware ESXi drivers differ from the current Intelligent Catalog. Patch and install the VMware ESXi drivers
using the VMware vSphere Client.

Prerequisites
Apply all VMware ESXi updates before installing or loading hardware drivers.
NOTE: This procedure is required only if the ISO drivers are not at the proper Intelligent Catalog level.

Steps
1. Log in to the VMware vSphere Client.
2. Click Hosts and Clusters.
3. Locate and select the VMware ESXi host that you installed.
4. Select Datastores.
5. Right-click the datastore name and select Browse Files.
6. Select the Upload icon (to upload file to the datastore).
7. Browse to the Intelligent Catalog folder or downloaded current solution Intelligent Catalog files.
8. Select the VMware ESXi patch .zip files according to the current solution Intelligent Catalog and node type and click OK to
upload.
9. Select the driver and vib files according to the current Intelligent Catalog and node type and click OK to upload.
10. Click Hosts and Clusters.
11. Locate the VMware ESXi host, right-click, and select Enter Maintenance Mode.
12. Open an SSH session with the VMware ESXi host using PuTTy or a similar SSH client.
13. Log in as root.
14. Type cd /vmfs/volumes/DASXX where XX is the name of the local datastore that is assigned to the VMware ESXi
server.
15. To display the contents of the directory, type ls.
16. If the directory contains vib files, type esxcli software vib install -v /vmfs/volumes/DASXX/
patchname.vib to install the vib. These vib files can be individual drivers that are absent in the larger patch cluster
and must be installed separately.
17. Perform either of the following depending on the VMware ESXi version:
a. For VMware ESXi 7.0, type esxcli software vib update -d /vmfs/volumes/DASXXX/VMware-
ESXi-7.0<version>-depot.zip.
b. For VMware ESXi 6.x, type esxcli software vib install -d /vmfs/volumes/DASXXX/<ESXI-patch-
file>.zip
18. Type reboot to reboot the host.
19. Once the host completes rebooting, open an SSH session with the VMware ESXi host, and type esxcli software vib
list | grep net-i to verify that the correct drivers are loaded.
20. Select the host and click Exit Maintenance Mode.
21. Update the test plan and host tracker with the results.
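As a hedged sketch (the driver names shown are common examples only and vary by node type), the installed versions can be compared against the Intelligent Catalog with:

esxcli software vib list | grep -E 'i40en|icen|lpfc|bnxt'   # network and storage driver vibs
esxcli software profile get | head -n 5                     # image profile and build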

Add PowerFlex nodes to Distributed Virtual Switches


Use the VMware vSphere Client to apply settings and add nodes to switches for a PowerFlex compute-only node expansion.
The network adapters specified in this procedure are for representation purposes only. See Cabling the PowerFlex R650/R750/
R6525 nodes for the logical network associated with the PowerFlex node. For PowerFlex compute-only nodes in a dual network
environment, see the cabling requirements and adjust the steps accordingly.


The dvswitch names are for example only and may not match the configured system. Do not change these names or a data
unavailable or data lost event may occur.
You can select multiple hosts and apply settings in template mode.
NOTE: If the ESXi host participates in NSX, do not migrate management and vMotion VMkernels to the VDS.

See Configuring the network for more information.

Validating network configurations


Test network connectivity between hosts and Metadata Managers (MDMs) before installing PowerFlex on new nodes.

Prerequisites
Gather the IP addresses of the primary and secondary MDMs.

Steps
1. Open Direct Console User Interface (DCUI) or use SSH to log in to the new hosts.
2. At the command-line interface, run the following commands to ping each of the primary MDM and secondary MDM IP
addresses.
If ping test fails, you must remediate before continuing.
NOTE: A minimum of two logical data networks are supported. Optionally, you can configure four logical data networks.

vmkping -I vmk0 <Mgmt ip address of primary MDM or secondary MDM>
vmkping -I vmk2 <Data1 address of primary MDM or secondary MDM> -s 8972 -d
vmkping -I vmk3 <Data2 address of primary MDM or secondary MDM> -s 8972 -d

Run the following commands for LACP bonding NIC port design. x is the VMkernel adapter number in vmkx.

vmkping -I vmkx <Data3 address of primary MDM or secondary MDM> -s 8972 -d
vmkping -I vmkx <Data4 address of primary MDM or secondary MDM> -s 8972 -d
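When several MDM addresses must be tested, a hedged loop from the ESXi shell saves repetition; the data IP addresses below are placeholders:

for ip in 192.168.152.10 192.168.152.11 192.168.152.12; do
  vmkping -I vmk2 -s 8972 -d $ip || echo "jumbo-frame ping to $ip failed"
done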

NOTE: After several host restarts, check the access switches for error or disabled states by running the following
commands:

# show interface brief


# show interface | inc CRC
# show interface counters error

3. Optional: If errors appear in the counters of any interfaces, type the following and check the counters again.
Output from a Cisco Nexus switch:

# clear counters interface ethernet X/X

4. Optional: If there are still errors on the counter, perform the following to see if the errors are old and irrelevant or new and
relevant.
a. Optional: Type # show interface | inc flapped.

Sample output:
Last link flapped 1d02h
b. Type # show logging logfile | inc failure.

Sample output:
Dec 12 12:34:50.151 access-a %ETHPORT-5-IF_DOWN_LINK_FAILURE: Interface Ethernet1/4/3
is down (Link failure)
5. Optional: Check and reset physical connections, bounce and reset ports, and clear counters until errors stop occurring.
Do not activate new nodes until all errors are resolved and no new errors appear.


Migrating vCLS VMs


Use this procedure to migrate the vSphere Cluster Services (vCLS) VMs manually to the service datastore.

About this task


VMware vSphere 7.0Ux or ESXi 7.0Ux creates vCLS VMs when the vCenter Server Appliance (vCSA) is upgraded. This task helps
you migrate the vCLS VMs to the service datastore.

Steps
1. Log in to VMware vCSA HTML Client using the credentials.
2. Go to VMs and templates inventory or Administration > vCenter Server Extensions > vSphere ESX Agent Manager
> VMs to view the VMs.
The VMs are in the vCLS folder once the host is added to the cluster.
3. Right-click the VM and click Migrate.
4. In the Migrate dialog box, click Yes.
5. On the Select a migration type page, select Change storage only and click Next.
6. On the Select storage page, select the PowerFlex volumes for the hyperconverged or ESXi-based compute-only nodes that
are mapped after the PowerFlex deployment.
NOTE: The volume names are powerflex-service-vol-1 and powerflex-service-vol-2. The datastore names are
powerflex-esxclustershotname-ds1 and powerflex-esxclustershotname-ds2. If these volumes or datastores are
not present, create them before migrating the vCLS VMs.

7. Click Next > Finish.


8. Repeat the above steps to migrate all the vCLS VMs.

Add a PowerFlex node to a PowerFlex Manager service in lifecycle mode
Use this procedure to update the inventory and service details in PowerFlex Manager for a new PowerFlex node.

Steps
1. Update the inventory for vCenter (vCSA), switches, gateway VM, and nodes:
a. Click Resources on the home screen.
b. Select the vCenter, switches (applicable only for full networking), gateway VM, and newly added nodes.
c. Click Run Inventory.
d. Click Close.
e. Wait for the job in progress to complete.
2. Update the service details:
a. Click Services.
b. Choose the service on which the new node is expanded and click View Details.
c. On the Services details screen, choose Update Service Details.
d. Choose the credentials for the node and SVM and click Next.
e. On the Inventory Summary page, verify that the newly added nodes are listed under Physical Node, and click Next.
f. On the Summary page, verify the details and click Finish.


Configuring the NSX-T Ready tasks


This section describes how to configure the PowerFlex nodes as part of preparing the PowerFlex appliance for NSX-T. Before
you configure the ESXi hosts as NSX-T transport nodes, you must add the transport distributed port groups and convert the
distributed switch from LACP to individual trunks.
NOTE: If you configure VMware NSX-T on PowerFlex hyperconverged or compute-only nodes and add them to PowerFlex
Manager, the services will be in lifecycle mode. If you need to perform an expansion on such a node, see Adding a
PowerFlex R640/R740xd/R840 node to a PowerFlex Manager service in lifecycle mode to add the PowerFlex node.
Contact VMware Support to configure VMware NSX-T on a new PowerFlex node and see Add PowerFlex nodes to a
service to update the service details.

Configure VMware NSX-T overlay distributed virtual port group


Use this procedure to create and configure the VMware NSX-T overlay distributed virtual port group on cust_dvswitch.

Prerequisites
Ensure that the VMware vSphere vCenter Server and the VMware vSphere Client are accessible.

Steps
1. Log in to the VMware vSphere Client.
2. Click Networking.
3. Expand the PowerFlex Customer-Datacenter.
4. Right-click cust_dvswitch.
5. Click Distributed Port Group > New Distributed Port Group.
6. Update the name to pfmc-nsx-transport-121 and click Next.
7. Select the default Port binding.
8. Select the default Port allocation.
9. Select the default # of ports (default is 8).
10. Select the default VLAN as VLAN Type.
11. Set the VLAN ID to 121.
12. Clear the Customize default policies configuration check box and click Next.
13. Click Finish.
14. Right-click the pfmc-nsx-transport-121 port group and click Edit Settings....
15. Click Teaming and failover.
16. Verify that Uplink1 and Uplink2 are moved to Active.
17. Click OK.

Convert LACP-enabled switch ports to trunk access for cust_dvswitch
Use this procedure to convert the physical NICs from LACP to trunk without losing network connectivity. Use this option only if
cust_dvswitch is configured as LACP. It is recommended to avoid using LACP on the NICs that carry the transport traffic type.

Prerequisites
Both Cisco access switch ports for the compute VMware ESXi hosts are configured with LACP. These ports will be configured
as trunk access after removing the physical adapter from each ESXi host.

About this task


This procedure includes reconfiguring one port at a time as a trunk and migrating the VMK0 to a new distributed port group.


Steps
1. Log in to the VMware vSphere Client.
2. Look at vCenter and physical switches to ensure that both ports across all hosts are up.
3. For each compute ESXi host, record the physical switch port to which vmnic4 (switch-B) and vmnic6 (switch-A) connect.
a. Click Home, then select Hosts and Clusters and expand the compute cluster.
b. Select the first compute ESXi host in the left pane, and then select Configure tab in the right pane.
c. Select Virtual switches under Networking.
d. Expand cust_dvswitch.
e. Expand lag-1, click the ellipsis (…) for vmnic4, and select View Settings.
f. Click LLDP tab.
g. Record the port ID (switch port) and system name (switch).
h. Repeat step 3 for vmnic6 on lag1-1.
4. Repeat steps 2 and 3 for each additional compute ESXi host.
5. Create a management distributed port group for cust_dvswitch as follows:
a. Right-click cust_dvswitch (Workbook default name).
b. Click Distributed Port Group > New Distributed Port Group.
c. Update the name to pfcc-node-mgmt-105-new and click Next.
d. Select the default Port binding.
e. Select the default Port allocation.
f. Select the default # of ports (default is 8).
g. Select the default VLAN as VLAN Type.
h. Set the VLAN ID to 105.
i. Clear the Customize default policies configuration and click Next.
j. Click Finish.
k. Right-click the pfcc-node-mgmt-105-new and click Edit Settings...
l. Click Teaming and failover.
m. Verify that Uplink1 and Uplink2 are listed under active and LAG is unused.
n. Click OK.
6. Remove channel-group from the port interface (vmnic6) on Switch-B for each compute ESXi host as follows:
NOTE: This step must be done before removing the physical NICs from the VDS. Otherwise, only one physical NIC gets
removed successfully. The other physical NIC fails to remove from the LAG because both ports are bonded to a port
channel.

a. SSH to Switch-B switch.


b. Enter the following switch commands to configure trunk access for the ESXi host:

config t
interface ethernet 1/x
no channel-group

c. Repeat steps 6a and 6b for each switch port for the remaining compute ESXi hosts.
7. Migrate vmnic6 to Uplink2 and VMK0 to pfcc-node-mgmt-105-new on cust_dvswitch for each compute ESXi host as
follows:
a. Click Home, then select Networking and expand the PowerFlex data center.
b. Right-click cust_dvswitch and select Add and Manage hosts to open the wizard.
● Select Manage host networking and click Next.
● Click Attached hosts..., select all the compute ESXi hosts, and click OK.
● Click Next.
● For each ESXi host, select vmnic6 and click Assign uplink.
● Select Uplink2 and click OK.
● Click Next.
● Select vmk0 (esxi-management) and click Assign port group.
● Select pfcc-node-mgmt-105-new and click OK.
● Click Next > Next > Next > Finish.
8. Remove channel-group from the port interface (vmnic4) on Switch-A for each compute ESXi host as follows:


a. SSH to Switch-A switch.


b. Enter the following switch commands to configure trunk access for the ESXi host:

config t
interface ethernet 1/x
no channel-group

c. Repeat steps 8a and 8b for each switch port for the remaining compute ESXi hosts.
9. Add vmnic4 to Uplink1 on cust_dvswitch for each compute ESXi host as follows:
a. Click Home, then select Networking and expand the PowerFlex data center.
b. Right-click cust_dvswitch and select Add and Manage Hosts to open the wizard.
● Select Manage host networking and click Next.
● Click Attached hosts..., select all the compute ESXi hosts, and click OK.
● Click Next.
● For each ESXi host, select vmnic4 and click Assign uplink.
● Select Uplink1 and click OK.
● Click Next > Next > Next > Finish.
10. Delete the pfcc-node-mgmt-105 port group on cust_dvswitch:
a. Click Home, select Networking, and expand the PowerFlex data center.
b. Expand cust_dvswitch to view the distributed port groups.
c. Right-click pfcc-node-mgmt-105 and click Delete.
d. Click Yes to confirm deletion of the distributed port group.
11. Rename the pfcc-node-mgmt-105-new port group on cust_dvswitch:
a. Click Home, select Networking, and expand the PowerFlex data center.
b. Expand cust_dvswitch to view the distributed port groups.
c. Right-click pfcc-node-mgmt-105-new and click Rename.
d. Enter pfcc-node-mgmt-105 and click OK.
12. Update the teaming and failover policy to route based on originating virtual port for port group pfcc-vmotion-106:
a. Click Home, select Networking, and expand the PowerFlex compute data center.
b. Expand cust_dvswitch to view the distributed port groups.
c. Right-click pfcc-vmotion-106 and click Edit Settings....
d. Click Teaming and failover.
e. Move both Uplink1 and Uplink2 to be Active and lag1 to Unused.
f. Change Load Balancing mode to Route based on originating virtual port.
g. Repeat steps 12c through 12f for the remaining port groups on cust_dvswitch.

Add the VMware NSX-T service using PowerFlex Manager


Use this procedure only if the PowerFlex nodes are added to the NSX-T environment.

Prerequisites
NOTE: Before adding a VMware NSX-T service using PowerFlex Manager, either the customer or VMware services must
add the new PowerFlex node to NSX-T Data Center using NSX-T UI.
Consider the following:
● Before adding this service or updating the service details in PowerFlex Manager, verify that NSX-T Data Center is configured
on the PowerFlex hyperconverged or compute-only nodes.
● If the transport nodes (PowerFlex cluster) are configured with NSX-T, you cannot replace the field units using PowerFlex
Manager. You must add the node manually by following either of these procedures depending on the node type:
○ Performing a PowerFlex hyperconverged node expansion
○ Performing a PowerFlex compute-only node expansion

Steps
1. Log in to PowerFlex Manager.
2. If NSX-T Data Center 3.0 or higher is deployed and is using VDS (not N-VDS), then add the transport network:


a. From Getting Started, click Define Networks.


b. Click + Define and do the following:

NSX-T information Values


Name Type NSX-T Transport

Description Type Used for east-west traffic

Network Type Select General Purpose LAN


VLAN ID Type 121. See the Workbook.

Configure Static IP Address Ranges Select the Configure Static IP Address Ranges check box. Type
the starting and ending IP address of the transport network IP pool

c. Click Save > Close.


3. From Getting Started, click Add Existing Service and do the following:
a. On the Welcome page, click Next.
b. On the Service Information page, enter the following details:

Service Information Details


Name Type NSX-T Service

Description Type Transport Nodes

Type Type Hyperconverged or Compute-only

Firmware and software compliance Select the Intelligent Catalog version


Who should have access to the service deployed Leave as default
from this template?

c. Click Next.
d. On the Network Information page, select Full Network Automation, and click Next.
e. On the Cluster Information page, enter the following details:

Cluster Information Details


Target Virtual Machine Manager Select vCSA name
Data Center Name Select data center name
Cluster Name Select cluster name
Target PowerFlex gateway Select PowerFlex gateway name
Target Protection Domain Select PD-1
OS Image Select the ESXi image

f. Click Next.
g. On the OS Credentials page, select the OS credentials for each node, and click Next.
h. On the Inventory Summary page, review the summary and click Next.
i. On the Networking Mapping page, verify that the networks are aligned with the correct dvSwitch.
j. On the Summary page, review the summary and click Finish.
4. Verify PowerFlex Manager recognizes NSX-T is configured on the nodes:
a. Click Services.
b. Select the hyperconverged or compute-only service.
c. Verify that a banner appears under the Service Details tab, notifying you that NSX-T is configured on a node and is
preventing some features from being used. If you do not see this banner, verify that you selected the correct service and
that NSX-T is configured on the hyperconverged or compute-only nodes.


Deploying Windows-based PowerFlex compute-only nodes manually


You can manually deploy Windows-based PowerFlex compute-only nodes.
NOTE: As of PowerFlex Manager release 3.8 and later, deployment and management of Windows compute-only nodes is
not supported. Expansions and deployments must be performed manually.

Installing Windows compute-only node with LACP bonding NIC port design


Use the procedures in this section to install Windows Server 2016 or 2019 on the PowerFlex compute-only node.

Prerequisites
● Ensure that the required information is captured in the Workbook and stored in VAST.
● Prepare the servers by updating all servers to the correct Intelligent Catalog firmware releases and configuring BIOS
settings.
● Ensure that the iDRAC network is configured.
● Ensure that the Windows operating system ISO is downloaded to jump host.
NOTE: As of PowerFlex Manager 3.8, the deployment of Windows compute-only nodes is not supported. To manually install
Windows compute-only nodes with an LACP bonding NIC port design without PowerFlex Manager, complete the steps in the
following sections.

Configure access switch ports for PowerFlex nodes


See Configuring the network to configure the access switch ports for PowerFlex nodes.

Related information
Configuring the network


Mount the Windows Server 2016 or 2019 ISO


Use this procedure to mount the Windows Server ISO.

Steps
1. Connect to the iDRAC, and launch a virtual remote console.
2. Click Menu > Virtual Media > Connect Virtual Media > Map Device > Map CD/DVD.
3. Click Choose File and browse and select the customer provided Windows Server 2016 or 2019 DVD ISO and click Open.
4. Click Map Device.
5. Click Close.
6. Click Boot and select Virtual CD/DVD/ISO. Click Yes.
7. Click Power > Reset System (warm boot) to reboot the server.
The host boots from the attached Windows Server 2016 or 2019 virtual media.

Install the Windows Server 2016 or 2019 on a PowerFlex compute-only node


Use this procedure to install Windows Server on a PowerFlex compute-only node.

Steps
1. Select the desired values for the Windows Setup page, and click Next.
NOTE: The default values are US-based settings.

2. Click Install now.


3. Enter the product key, and click Next.
4. Select the operating system version with Desktop Experience (For example, Windows Server 2019 Datacenter (Desktop
Experience)), and click Next.
5. Select the check box next to the license terms, and click Next.
6. Select the Custom option, and click Next.
7. To install the operating system, select the available drive with a minimum of 60 GB space on the bootable disk and click
Next.
NOTE: Wait until the operating system installation is complete.

8. Enter the password according to the standard password policy.


9. Click Finish.
10. Install or upgrade the network driver using these steps:
NOTE: Use this procedure if the driver is not updated or discovered by Windows automatically.

a. Download DELL EMC Server Update Utility, Windows 64 bit Format, v.x.x.x.iso file from Dell
Technologies Support site.
b. Map the driver CD/DVD/ISO through iDRAC, if the installation requires it.
c. Connect to the server as the administrator.
d. Open and run the mapped disk with elevated permission.
e. Select Install, and click Next.
f. Select I accept the license terms and click Next.
g. Select the check box beside the device drives, and click Next.
h. Click Install, and Finish.
i. Close the window to exit.


Configure the network


Use this procedure to configure the network from the network management console.

Steps
1. Open iDRAC console and log in to the Windows Server 2016 or 2019 using admin credentials.
2. Press Windows+R and enter ncpa.cpl.
3. Select the appropriate management NIC.
4. Perform the following for the Management Network:
a. Select Properties.
b. Click Configure....
c. Click the Advanced tab, and select the VLAN ID option from the Property column.
d. Enter the VLAN ID in the Value column.
e. Click OK and exit.
f. Right-click the appropriate NIC, and click Properties, select Internet Protocol Version 4 (TCP/IPv4) and assign
static IP address of the server.
5. Open the PowerShell console, and perform the following procedures:

To create the...                Do this...

Team                            a. Type New-NetLbfoTeam -Name "Team name" -TeamMembers "NIC1","NIC2".
                                   For example, New-NetLbfoTeam -Name "flex-node-mgmt-<105>" -TeamMembers
                                   "NIC1","Slot4port1" (select the appropriate NICs with 25G ports).
                                b. Enter Y to confirm.

Management network, if the IPs  NOTE: Assign the IP address according to the Workbook.
are not assigned manually as    a. Type Add-NetLbfoTeamNic -Team "flex-node-mgmt-<105>" -VlanID <vlanid> to map
specified in step 4 (optional)     the VLAN to the team interface.
                                b. Type New-NetIPAddress -InterfaceAlias 'flex-node-mgmt-<105>' -IPAddress 'IP'
                                   -PrefixLength 'Prefix number' -DefaultGateway 'Gateway IP' to assign the IP
                                   address to the interface.

Data network                    NOTE: Assign the IP address according to the Workbook.
                                a. Type New-NetIPAddress -InterfaceAlias 'Interface name' -IPAddress 'IP'
                                   -PrefixLength 'prefix' (select NIC2) to create the Data1 network.
                                b. Type New-NetIPAddress -InterfaceAlias 'Interface name' -IPAddress 'IP'
                                   -PrefixLength 'prefix' (select Slot4 Port2) to create the Data2 network.
                                Here, Interface name is the NIC assigned for data1 or data2, and IP is the data1
                                or data2 IP. The prefix is the CIDR notation; for example, if the network mask is
                                255.255.255.0, the CIDR notation (prefix) is 24.

6. Applicable for an LACP NIC port bonding design: Modify Team0 settings and create a VLAN:


To...                        Do this...

Edit Team0 settings          a. Open the Server Manager, and click Local Server > NIC teaming.
                             b. In the NIC teaming window, click Tasks > New Team.
                             c. Enter the name as Team0 and select the appropriate network adapters.
                             d. Expand the Additional properties, and modify as follows:
                                ● Teaming mode: LACP
                                ● Load balancing mode: Dynamic
                                ● Standby adapter: None (all adapters active)
                             e. Click OK to save the changes.
                             f. Select Team0 from the Teams list.
                             g. From Adapters and Interfaces, click the Team Interfaces tab.

Create a VLAN in Team0       a. Click Tasks and click Add Interface.
                             b. In the New team interface dialog box, type the name as General Purpose LAN.
                             c. Assign VLAN ID (200) to the new interface in the VLAN field, and click OK.
                             d. From the network management console, right-click the newly created network
                                interface controller, select Properties, select Internet Protocol Version 4
                                (TCP/IPv4), and click Properties.
                             e. Select the Assign the static IP address check box.

7. Remove the IPs from the data1 and data2 network adapters.
8. Create Team1 and VLAN:

To create a...               Do this...

Team and assign the name     a. Open the Server Manager, and click Local Server > NIC teaming.
as Team1                     b. In the NIC teaming window, click Tasks > New Team.
                             c. Enter the name as Team1, and select the appropriate network adapters.
                             d. Expand the Additional properties, and modify as follows:
                                ● Teaming mode: LACP
                                ● Load balancing mode: Dynamic
                                ● Standby adapter: None (all adapters active)

VLAN in Team1                a. Select the NIC team Team1 in the Teams list box, and select the Team
                                Interfaces tab in the Adapters and Interfaces list box.
                             b. Click Tasks, and click Add Interface.
                             c. In the New team interface dialog box, type the name as flex-data1-<vlanid>.
                             d. Assign VLAN ID (151) to the new interface in the VLAN field, and click OK.
                             e. From the network management console, right-click the newly created network
                                interface controller, select Properties, select Internet Protocol Version 4
                                (TCP/IPv4), and click Properties.

9. Repeat step 8 for data2, data3 (if required), and data4 (if required).
A minimum of two logical data networks are supported. Optionally, you can configure four logical data networks. Verify the
number of logical data networks configured in an existing setup and configure the logical data networks accordingly.


Disable Windows Firewall


Use this procedure to disable Windows Firewall through the Windows Server 2016 or 2019 or Microsoft PowerShell.

Steps
1. Windows Server 2016 or 2019:
a. Press Windows key+R on your keyboard, type control and click OK.
The All Control Panel Items window opens.
b. Click System and Security > Windows Firewall.
c. Click Turn Windows Defender Firewall on or off.
d. Turn off Windows Firewall for both private and public network settings, and click OK.
2. Windows PowerShell:
a. Click Start, type Windows PowerShell.
b. Right-click Windows PowerShell, click More > Run as Administrator.
c. Type Set-NetFirewallProfile -profile Domain, Public, Private -enabled false in the Windows
PowerShell console.

Enable the Hyper-V role through Windows Server 2016 or 2019


Use this procedure to enable the Hyper-V role through Windows Server 2016 or 2019.

About this task


This is an optional procedure and is recommended only when you want to enable the Hyper-V role on a specified server.

Steps
1. Click Start > Server Manager.
2. In Server Manager, on the Manage menu, click Add Roles and Features.
3. On the Before you begin page, click Next.
4. On the Select installation type page, select Role-based or feature-based installation, and click Next.
5. On the Select destination server page, click Select a server from the server pool, and click Next.
6. On the Select server roles page, select Hyper-V.
An Add Roles and Features Wizard page opens, prompting you to add features to Hyper-V.
7. Click Add Features. On the Features page, click Next.
8. Retain the default selections/locations on the following pages, and click Next:
● Create Virtual Switches
● Virtual Machine Migration
● Default stores
9. On the Confirm installation selections page, verify your selections, select Restart the destination server
automatically if required, and click Install.
10. Click Yes to confirm automatic restart.

Enable the Hyper-V role through Windows PowerShell


Enable the Hyper-V role using Windows PowerShell.

About this task


This is an optional procedure and is recommended only when you want to enable the Hyper-V role on a specified server.

Steps
1. Click Start, type Windows PowerShell.
2. Right-click Windows PowerShell, and select Run as Administrator.


3. Type Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart in the Windows


PowerShell console.

Enable Remote Desktop access


Use this procedure to enable Remote Desktop access.

Steps
1. Go to Start > Run.
2. Enter SystemPropertiesRemote.exe and click OK.
3. Select Allow remote connection to this computer.
4. Click Apply > OK.

Install and configure a Windows-based compute-only node to PowerFlex


Use this procedure to install and configure a Windows-based compute-only node to PowerFlex.

About this task


For Windows PowerFlex compute-only nodes, only firmware upgrades are supported.

Steps
1. Download the EMC-ScaleIO-sdc*.msi and LIA software.
2. Double-click EMC-ScaleIO LIA setup.
3. Accept the terms in the license agreement, and click Install.
4. Click Finish.
5. Configure the Windows-based compute-only node depending on the MDM VIP availability:
● If you know the MDM VIPs before installing the SDC component:
a. Type msiexec /i <SDC_PATH>.msi MDM_IP=<LIST_VIP_MDM_IPS>, where <SDC_PATH> is the path where
the SDC installation package is located. The <LIST_VIP_MDM_IPS> is a comma-separated list of the MDM IP
addresses or the virtual IP address of the MDM.
b. Accept the terms in the license agreement, and click Install.
c. Click Finish.
d. Permit the Windows server reboot to load the SDC driver on the server.
● If you do not know the MDM VIPs before installing the SDC component:
a. Click EMC-ScaleIO SDC setup.
b. Accept the terms in the license agreement, and click Install.
c. Click Finish.
d. Type C:\Program Files\EMC\scaleio\sdc\bin>drv_cfg.exe --add_mdm --ip <VIPs_MDMs> to
configure the node in PowerFlex.
● Applicable only if the existing network is an LACP bonding NIC port design:
a. Add all MDM VIPs by running C:\Program
Files\EMC\scaleio\sdc\bin>drv_cfg.exe --mod_mdm_ip --ip <existing MDM VIP> --new_mdm_ip <all 4 MDM VIPs>.


Map a volume to a Windows-based compute-only node using PowerFlex


Use this procedure to map a volume to a Windows-based compute-only node using PowerFlex.

About this task


If you are using a PowerFlex version prior to 3.5, see Mapping a volume to a Windows-based compute-only node using a
PowerFlex version prior to 3.5 .

Steps
1. Log in to the presentation server at https://<presentation serverip>:8443.
2. In the left pane, click SDCs.
3. In the right pane, select the Windows host.
4. Select the Windows host, click Mapping, and then select Map from the drop-down list.
5. Click Apply. Once the mapping is complete, click Dismiss.
6. To open the disk management console, perform the following steps:
a. Press Windows+R.
b. Enter diskmgmt.msc and press Enter.
7. Rescan the disk and set the disks online:
a. Click Action > Rescan Disks.
b. Right-click each Offline disk, and click Online.
8. Right-click each disk and select Initialize disk.
After initialization, the disks appear online.
9. Right-click Unallocated and select New Simple Volume.
10. Select default and click Next.
11. Assign the drive letter.
12. Select default and click Next.
13. Click Finish.

Mapping a volume to a Windows-based compute-only node using a PowerFlex version prior to 3.5


Use this procedure to map an existing volume to a Windows-based compute-only node using a PowerFlex version prior to 3.5.

About this task


For Windows PowerFlex compute-only nodes, only firmware upgrades are supported.

Steps
1. Open the PowerFlex GUI, click Frontend, and select SDC.
2. Windows-based compute-only nodes are listed as SDCs if configured correctly.
3. Click Frontend again, and select Volumes. Right-click the volume, and click Map.
4. Select the Windows-based compute-only nodes, and then click Map.
5. Log in to the Windows Server compute-only node.
6. To open the disk management console, perform the following steps:
a. Press Windows+R.
b. Enter diskmgmt.msc and press Enter.
7. Rescan the disk and set the disks online:
a. Click Action > Rescan Disks.
b. Right-click each Offline disk, and click Online.
8. Right-click each disk and select Initialize disk.


After initialization, the disks appear online.


9. Right-click Unallocated and select New Simple Volume.
10. Select default and click Next.
11. Assign the drive letter.
12. Select default and click Next.
13. Click Finish.

Licensing Windows Server 2016 compute-only nodes


Use this procedure to activate the licenses for Windows Server 2016 compute-only nodes.

About this task


Without Internet connectivity, phone activation might be required.

Steps
1. Using the administrator credentials, log in to the target Windows Server 2016.
2. When the main desktop view appears, click Start and type Run.
3. Type slui 3 and press Enter.
4. Enter the customer provided Product key and click Next.
If the key is valid, Windows Server 2016 is successfully activated.
If the key is invalid, verify that the Product key entered is correct and try the procedure again.
NOTE: If the key is still invalid, try activating without an Internet connection.

Activating a license without an Internet connection


Use this procedure if you cannot activate the license using an Internet connection.

Steps
1. Using the administrator credentials, log in to the target Windows Server VM (jump server).
2. When the main desktop view appears, click Start and select Command Prompt (Admin) from the option list.
3. At the command prompt, use the slmgr command to change the current product key to the newly entered key.

C:\Windows\System32> slmgr /ipk XXXXX-XXXXX-XXXXX-XXXXX-XXXXX

4. At the command prompt, use the slui command to initiate the phone activation wizard. For example: C:\Windows\System32> slui 4.
5. From the drop-down menu, select the geographic location that you are calling and click Next.
6. Call the displayed number, and follow the automated prompts.
After the process is completed, the system provides a confirmation ID.

7. Click Enter Confirmation ID and enter the codes that are provided. Click Activate Windows.
Successful activation can be validated using the slmgr command.

C:\Windows\System32> slmgr /dlv

8. Repeat this process for each Windows VM.


Part VIII: Adding a PowerFlex R640/R740xd/R840 node to a PowerFlex Manager service in lifecycle mode
Use the procedures in this section to add a PowerFlex R640/R740xd/R840 node for the PowerFlex Manager services
discovered in lifecycle mode.
Before adding a PowerFlex node in lifecycle mode, you must complete the initial set of expansion procedures that are common
to all expansion scenarios covered in Performing the initial expansion procedures.
After adding a PowerFlex node in lifecycle mode, see Completing the expansion.


Chapter 57: Performing a PowerFlex storage-only node expansion
Perform the manual expansion procedure to add a PowerFlex storage-only node to PowerFlex Manager services that are
discovered in lifecycle mode.
See Cabling the PowerFlex R640/R740xd/R840 nodes for cabling information.

Discover resources
Use this procedure to discover and grant PowerFlex Manager access to resources in the environment. Provide the management
IP address and credential for each discoverable resource.

Prerequisites
Verify that the iDRAC network settings are configured. See Configure iDRAC network settings for more information.

About this task


During node discovery, you can configure iDRAC nodes to automatically send alerts to PowerFlex Manager. You can also change
the IP address of the iDRAC nodes and discover them. See Reconfigure the discovered nodes with new management IP and
credentials in the Dell EMC PowerFlex Appliance Administration Guide. If the PowerFlex nodes are not configured for alert
connector, Secure Remote Services does not receive critical or error alerts for those resources.
The following table describes how to configure resources in managed mode:

Resource type        Resource state        Example
PowerFlex nodes      Managed               PowerEdge iDRAC management IP address

If you want to perform firmware updates or deployments on a discovered node, change the default state to managed.
Perform firmware or catalog updates from the Services page or the Resources page.

NOTE: For partial network deployments, you do not need to discover the switches, but the switches must be preconfigured.
For sample configurations for Dell PowerSwitch, Cisco Nexus, and Arista switches, see the Dell EMC PowerFlex Appliance
Administration Guide.
The following are the specific details for completing the Discovery wizard steps:

● Node: IP address range 192.168.101.21-192.168.101.24; resource state Managed; discover into node pool Storage-only pool;
credentials name PowerFlex appliance iDRAC default; username root; password calvin; SNMPv2 community string customer provided.
● Switch: IP address range 192.168.101.45-192.168.101.46; resource state Managed; discover into node pool NA;
credentials name access switches; username admin; password admin; SNMPv2 community string public.
● VM Manager: IP address 192.168.105.105; resource state Managed; discover into node pool NA;
credentials name vCenter; username administrator@vsphere.local; password P@ssw0rd!; SNMPv2 community string NA.
● Element Manager**: IP addresses 192.168.105.120 and 192.168.105.121; resource state Managed; discover into node pool NA;
credentials name CloudLink; username secadmin; password Secadmin!!; SNMPv2 community string NA.

** This is optional. For a new CloudLink Center deployment, the CloudLink Center is discovered automatically.

Prerequisites
● Configure the iDRAC network settings.
● Gather the IP addresses and credentials that are associated with the resources.
NOTE: PowerFlex Manager also allows you to use name-based searches to discover a range of nodes whose iDRACs were
assigned IP addresses through DHCP. For more information about this feature, see the Dell EMC PowerFlex
Manager Online Help.

Steps
1. On the PowerFlex Manager Getting Started page, click Discover Resources.
2. On the Welcome page of the Discovery Wizard, read the instructions and click Next.
3. On the Identify Resources page, click Add Resource Type. From the Resource Type list, select the resource that you
want to discover.
4. Enter the management IP address of the resource in the IP/Hostname Range field. To discover a resource in an IP range,
provide a starting and ending IP address.
5. In the Resource State list, select Managed or Unmanaged.
6. For PowerFlex node, to discover resources into a selected node pool instead of the global pool (default), select the node
pool from the Discover into Node Pool list.
7. Select the appropriate credential from the Credentials list. See the table above for details.
8. For PowerFlex node, if you want PowerFlex Manager to automatically reconfigure the iDRAC IP addresses of the nodes it
discovers, select the Reconfigure discovered nodes with new management IP and credentials check box. This option
is not selected by default, because it is faster to discover the nodes if you bypass the reconfiguration.
NOTE: iDRAC can also be discovered using a hostname.

NOTE: For the Resource Type Node, you can use a range with hostname or IP address, provided the hostname has a
valid DNS entry.

9. For PowerFlex node, select the Auto configure nodes to send alerts to PowerFlex Manager check box to have
PowerFlex Manager automatically configure nodes to send alerts to PowerFlex Manager.
10. Click Next to start discovery. On the Discovered Resources page, select the resources from which you want to collect
inventory data and click Finish. The discovered resources are listed on the Resources page.

Install the operating system


Use this procedure to install Red Hat Enterprise Linux or the embedded operating system 7.x from the iDRAC web interface.

Prerequisites
● Verify that the customer Red Hat Enterprise Linux or the embedded operating system ISO is available and is located in the
Intelligent Catalog code directory.
● Ensure that the following are installed for specific operating systems:

Operating system                                               Requirement
Red Hat Enterprise Linux or embedded operating system 7.x      ○ hwinfo
                                                               ○ net-tools
                                                               ○ pciutils
                                                               ○ ethtool

Steps
1. From the iDRAC web interface, launch the virtual console.
2. Click Connect Virtual Media.
3. Under Map CD/DVD, click Browse for the appropriate ISO.
4. Click Map Device > Close.
5. From the Boot menu, select Virtual CD/DVD/ISO, and click Yes.
6. From the Power menu, select Reset System (warm boot) or Power On System if the machine is off.
7. Set the boot option to UEFI.
a. Press F2 to enter system setup.
b. Under System BIOS > Boot setting, select UEFI as the boot mode.
NOTE: Ensure that the BOSS card is set as the primary boot device from the boot sequence settings. If the BOSS
card is not the primary boot device, reboot the server and change the UEFI boot sequence from System BIOS >
Boot settings > UEFI BOOT settings.

c. Click Back > Back > Finish > Yes > Finish > OK > Yes.
8. Select Install Red Hat Enterprise Linux/CentOS 7.x from the menu.
NOTE: Wait until all configuration checks pass and the screen for language selection is displayed.

9. Select Continue from the language selection screen.


10. From the Software Selection menu, choose Minimal Install and click Done at the top of the screen.
11. From KDUMP clear the Enable kdump option and click Done.
12. From Network & Hostname set the hostname at the bottom of the screen and click Done.
13. From Installation Destination, select DELLBOSS VD and click Done.
a. Under Partitioning, select the radio button for I will configure partitioning and click Done.
b. Click the link Click here to create them automatically.
c. Partitions now display under the new Red Hat Enterprise Linux/embedded operating system 7.x installation.
d. Delete the /home partition by selecting it and clicking the - button at the bottom.
e. Click Done at the top of the screen. From the Summary of Changes dialog box, select Accept Changes.
14. Click Begin Installation.
15. Select Root Password, set the root password, click Done.
16. Click Finish configuration. Wait for installation to complete and click Reboot.
17. Wait for the system to come back online.

Configure the host


Configure the host after installing the embedded operating system. Images and output are provided for example purposes only.

Prerequisites
While performing this procedure, if you see VLANs other than those listed for services discovered in lifecycle mode,
assign and match the VLAN details to the existing setup. The VLAN names are examples only and may not match the
configured system.
See Cabling the PowerFlex nodes for information on cabling the PowerFlex nodes.

Steps
1. Log in as root from the virtual console.
2. Type nmtui to set up the networking.
3. See Cabling the PowerFlex nodes for information on cabling the PowerFlex R640 and R740xd nodes with SSD or with NVMe
drives.


4. Perform the following to set up flex-data1-<vlanid> at NetworkManager TUI for the non-bonded NIC:
NOTE: Skip this step in case of static bonding NIC or LACP bonding NIC port design.

a. Click Edit a connection.


b. See Cabling the PowerFlex nodes for cabling information, to choose the network interface depending on the availability
of NVMe drives.
c. Press Tab to Show and enter 9000 for MTU.
d. Set the IPv4 Configuration to Manual, and click Show.
e. Press Tab to Add, and set the IP for flex-data1-<vlanid> using CIDR notation.
Example: If the IP address is 192.168.152.155 and the network mask is 255.255.248.0, then the line should read
192.168.152.155/21.

f. Set Never use this network for default route.


g. Set IPv6 Configuration to Ignore.
h. Set Automatically Connect.
i. Set Available to all users.
j. Select OK.
5. To set up flex-data2-<vlanid> for the non-bonded NIC port design:
NOTE: Skip this step in case of static bonding NIC or LACP bonding NIC port design.

a. Click Edit a connection, press Tab for OK, and press Enter.
b. See Cabling the PowerFlex nodes for cabling information on the PowerFlex R640 and R740xd nodes with or without
NVMe drives.
c. Select Ethernet, and press Tab for Show.
d. Press the Enter key, and enter 9000 for MTU.
e. Set the IPv4 Configuration to Manual, and click Show to open the configuration.
f. Press Tab to Add, and set the IP for flex-data2-<vlanid> using CIDR notation.
For example, if the IP is 192.168.160.155 and the network mask is 255.255.248.0, then the line should read
192.168.160.155/21.
g. Set Never use this network for default route by using the Space Bar.
h. Set IPv6 Configuration to Ignore.
i. Set Automatically Connect.
j. Set Available to all users.
k. Select OK.
6. To set up bond0 and bond1 for the non-bonded NIC, static bonding NIC, and LACP bonding NIC port design:
a. Select Add and choose Bond.
b. Set Profile name and Device to bond0.
c. Set Mode to 802.3ad.
d. Set IPv4 Configuration to Disabled.
e. Set IPv6 Configuration to Ignore.
f. Set Automatically Connect.
g. Set Available to all users.
h. Click OK.
i. Repeat these steps to set up bond1 and to change the profile and device name as bond1. Skip bond1 creation in case of
a non-bonded NIC.
7. To set up VLANs on bond0 and bond1:
For example:
● flex-stor-mgmt-<vlanid>(bond0.VLAN) (non-bonded, static bonding, and LACP bonding NIC port design)
● flex-data1-<vlanid>(bond0.VLAN) (static bonding NIC and LACP bonding NIC)

NOTE: See step 4 to set up flex-data1-<vlanid> interface for non-bonded NIC.


● flex-data3-<vlanid> (bond0.VLAN) (LACP bonding NIC) (if required)
● flex-rep1-<vlanid> (bond0.VLAN) (LACP bonding NIC only for replication enabled system)
● flex-data2-<vlanid> (bond1.VLAN) (static bonding and LACP bonding NIC)


NOTE: See step 5 to set up flex-data2-<vlanid> interface for non-bonded NIC.

● flex-data4-<vlanid> (bond1.VLAN) (LACP bonding NIC) (if required)


● flex-rep2-<vlanid> (bond1.VLAN) (LACP bonding NIC only for replication enabled system)
A minimum of two logical data networks are supported. Optionally, you can configure four logical data networks. Verify the
number of logical data networks configured in an existing setup. Accordingly, configure the logical data networks for newly
added PowerFlex nodes.

a. Click Add, and select VLAN.


You might have to press Tab to see it.
b. Set the profile name and device to bond0.VLAN# or bond1.VLAN#, where VLAN# is the <vlanid>.
c. Set IPv4 Configuration to Manual, press Tab right to Show to open configuration.
d. Select Add and set the IP for each VLAN using the CIDR notation.
Example: If the IP is 192.168.150.155 and the network mask is 255.255.255.0, then the line should read
192.168.150.155/24.
e. Set the Gateway to the default gateway of each VLAN.
NOTE: As the data networks are private VLANs, adding a gateway may not be required.

f. Set DNS server to customer DNS servers.


g. Select IPv6 Configuration > Ignore.
h. Select Automatically Connect.
i. Select Available to all users.
j. Click OK.
k. Repeat these steps to add data and replication VLANs.
l. For data and replication VLANs, select the following check boxes:
● Never use this network for default route
● Ignore automatically obtained DNS parameters
8. Complete the setup of the bond interface by selecting Back > Quit.
9. Change the directory to /etc/sysconfig/network-scripts at the command line.
10. Edit the network configuration file using the #vi command.
NOTE: The following example is for PowerFlex R640 nodes with SSD. Use the same steps for other PowerFlex
storage-only nodes. See Cabling the PowerFlex nodes for cabling information and edit the networking based on the
respective node type.

a. vi ifcfg-em1
b. Change BOOTPROTO to none.
c. Delete the lines from DEFROUTE to IPV6_ADDR_GEN_MODE.
d. Change ONBOOT to yes.
e. Type MASTER=bond0 SLAVE=yes.
f. Save the file. (A sample of the edited file is shown after these steps.)
11. Edit the network configuration file using the #vi command.
NOTE: The following example is for PowerFlex R640 nodes with SSD. Use the same steps for other PowerFlex
storage-only nodes. See Cabling the PowerFlex nodes for cabling information, and edit the networking based on the
respective node type.

a. vi ifcfg-p2p1
b. Change BOOTPROTO to none.
c. Delete the lines from DEFROUTE to IPV6_ADDR_GEN_MODE.
d. Change ONBOOT to yes.
e. Type MASTER=bond0 SLAVE=yes
f. Save the file.
12. Edit the network configuration file using the #vi command.


NOTE: The following example is for PowerFlex R640 nodes with SSD. Use the same steps for other PowerFlex
storage-only nodes. See Cabling the PowerFlex nodes for cabling information and edit the networking based on the
respective node type.

a. vi ifcfg-em2
b. Change BOOTPROTO to none.
c. Delete the lines from DEFROUTE to IPV6_ADDR_GEN_MODE.
d. Change ONBOOT to yes.
e. Type MASTER=bond1 SLAVE=yes.
f. Save the file.
13. Edit the network configuration file using the #vi command.
NOTE: The following example is for PowerFlex R640 nodes with SSD. Use the same steps for other PowerFlex
storage-only nodes. See Cabling the PowerFlex nodes and edit the networking based on the respective node type.

a. vi ifcfg-p2p2
b. Change BOOTPROTO to none.
c. Delete the lines from DEFROUTE to IPV6_ADDR_GEN_MODE.
d. Change ONBOOT to yes.
e. Type MASTER=bond1 SLAVE=yes.
f. Save the file.
14. Type systemctl disable NetworkManager to disable the NetworkManager.
15. Type systemctl status firewalld to check if firewalld is enabled.
16. Type systemctl enable firewalld to enable firewalld. To enable firewalld on all the SDS components, see the
Enabling firewall service on PowerFlex storage-only nodes and SVMs KB article.
17. Type systemctl restart network to restart the network.
18. To check the settings, type ip addr show.
19. Type cat /proc/net/bonding/bond0 | grep -B 2 -A 2 MII to confirm that the port channel is active.
20. To check the MTU, type grep MTU /etc/sysconfig/network-scripts/ifcfg*
21. Verify MTU flex-data1-<vlanid>, flex-data2-<vlanid>, flex-data3-<vlanid> (if required), and flex-data4-<vlanid> (if required)
settings are set to 9000.
For flex-rep1-<vlanid> and flex-rep2-<vlanid>, set MTU to 1500.
22. To check connectivity, ping the default gateway and the MDM Data IP address.
23. Create a static route for each replication VLAN, used only to enable replication between the primary and remote site:
a. Log in to the node using SSH.
b. Run cd /etc/sysconfig/network-scripts/.
c. Create the file route-bond0.<vlanid for rep1> using the vi command.
NOTE: For example, the file contains 192.168.161.0/24 via 192.168.163.1 dev bond0.163.
d. Create the file route-bond1.<vlanid for rep2> using the vi command.
NOTE: For example, the file contains 192.168.162.0/24 via 192.168.164.1 dev bond1.164.
A sample route file is shown after these steps.
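
The following is a minimal sketch of the files edited in steps 10 through 13 and step 23, shown with illustrative interface names, VLAN IDs, and addresses; use the values from the Workbook and the existing nodes. Keys such as MTU and NM_CONTROLLED are assumptions based on the bonded-interface examples later in this chapter and may differ on your node.

# /etc/sysconfig/network-scripts/ifcfg-em1 (bond0 member interface; illustrative)
DEVICE=em1
TYPE=Ethernet
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes
MTU=9000
NM_CONTROLLED=no

# /etc/sysconfig/network-scripts/route-bond0.163 (replication static route; illustrative)
192.168.161.0/24 via 192.168.163.1 dev bond0.163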

Related information
Cabling the PowerFlex R640/R740xd/R840 nodes

Install the nvme-cli tool and iDRAC Service Module (iSM)
Use this procedure to install dependency packages for Red Hat Enterprise Linux or embedded operating system.

About this task


Dependency packages for Red Hat Enterprise Linux or embedded operating system are installed with a minimal install of Red Hat
Enterprise Linux or embedded operating system 7.x, except the usbutils, net-tools, and redhat-lsb-core packages. Install these
packages manually, using the following procedure.


NOTE: If PowerFlex Manager is installed, do not use this procedure to install the dependency packages. See Related
information for information on expanding a PowerFlex node using PowerFlex Manager.

Steps
1. Copy the Red Hat Enterprise Linux or embedded operating system 7.x image to the /tmp folder of the PowerFlex storage-
only node using SCP or WINSCP.
2. Use PuTTY to log in to the PowerFlex storage-only node.
3. Run #cat /etc/*-release to identify the installed operating system.
4. Type # mount -o loop /tmp/<os.iso> /mnt to mount the iso image at the /mnt mount point.
5. Change directory to /etc/yum.repos.d
6. Type # touch <os.repo> to create a repository file.
7. Edit the file using a vi command and add the following lines:

[repository]
name=os.repo
baseurl=file:///mnt
enabled=1
gpgcheck=0

8. Type #yum repolist to test that you can use yum to access the directory.
9. Install the dependency packages per the installed operating system. To install dependency packages, enter:

# yum install usbutils


# yum install net-tools
# yum install redhat-lsb-core

10. Install the iDRAC Service Module, as follows:


a. Go to Dell EMC Support, and download the iSM package as per Intelligent Catalog.
b. Log in to the PowerFlex storage-only node.
c. Create a folder named ism in the /tmp directory (for example, type cd /tmp, then mkdir ism).
d. Use WinSCP to copy the iSM package to the /tmp/ism folder.
e. Gunzip and untar the file using following commands:
NOTE: Package name may differ from the following example depending on the Intelligent Catalog version.

# gunzip OM-iSM-Dell-Web-LX-340-1471_A00.tar.gz
# tar -xvf OM-iSM-Dell-Web-LX-340-1471_A00.tar

f. Change directory as per the installed operating system.


g. Type # rpm -ivh dcism-3.x.x-xxxx.el7.x86_64.rpm to install the package.
h. Type # systemctl status dcismeng.service to verify that dcism service is running on the PowerFlex storage-
only node.

NOTE: If dcismeng.service is not running, type systemctl start dcismeng.service to start the
service.
i. Type # ip a |grep idrac to verify that the link-local IP address (169.254.0.2) is automatically configured on the
idrac interface of the PowerFlex storage-only node after iSM is installed successfully.


j. Type # ping 169.254.0.1 to verify that the PowerFlex storage-only node operating system can communicate with iDRAC
(the default link-local IP address for iDRAC is 169.254.0.1).
11. Type # yum install nvme-cli to install the nvme-cli package.
12. Type # nvme list to ensure that the disk firmware version matches the Intelligent Catalog values.

If the disk firmware version does not match the Intelligent Catalog values, see Related information for information on
upgrading the firmware.

Related information
Adding a PowerFlex R640/R740xd/R840 node to a PowerFlex Manager service in managed mode
Upgrade the disk firmware for NVMe drives

Upgrade the disk firmware for NVMe drives


Use this procedure to upgrade firmware for Dell Express Flash NVMe PCIe SSDs.

Steps
1. Go to Dell EMC Support, and download the Dell Express Flash NVMe PCIe SSD firmware as per Intelligent Catalog.
2. Log in to the PowerFlex storage-only node.
3. Create a folder in the /tmp directory named diskfw.
4. Use WinSCP to copy the downloaded backplane package to the /tmp/diskfw folder.
5. Change directory to cd /tmp/diskfw/.
6. Change the access permissions of the file using the following command:
NOTE: Package name may differ from the following example depending on the Intelligent Catalog version.

chmod +x Express-Flash-PCIe-SSD_Firmware_R37D0_LN64_1.1.1_A02_01.BIN

7. Enter ./Express-Flash-PCIe-SSD_Firmware_R37D0_LN64_1.1.1_A02_01.BIN to run the package.


8. Follow the instructions that are provided for updating the firmware.
9. When prompted to upgrade, type Y and press Enter. Do the same when prompted for reboot.
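
Optionally, after the node reboots, you can rerun the listing used earlier in this chapter to confirm that the drives now report the Intelligent Catalog firmware version; a minimal check:

# nvme list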

Related information
Install the nvme-cli tool and iDRAC Service Module (iSM)

Migrate PowerFlex storage-only nodes from a non-bonded to an LACP bonding NIC port design


Use this procedure to migrate PowerFlex storage-only nodes from a non-bonded to an LACP bonding NIC port design.

Prerequisites
A minimum of two logical data networks are supported. Optionally, you can configure four logical data networks.

NOTE: This procedure is applicable only for an LACP bonding NIC port design.


Steps
1. Log in to PowerFlex GUI presentation server and place PowerFlex node into instant maintenance mode. Migrate one
PowerFlex storage-only node at a time.
a. Log in to the node using PuTTY.
b. Go to /etc/sysconfig/network-scripts/.
c. Open ifcfg-em2 with vi, delete the IP address and netmask entries, and save the file.
d. Open ifcfg-p2p2 with vi, delete the IP address and netmask entries, and save the file. Interface names may vary depending on the server.
e. Run vi ifcfg-bond1 to create bond1, insert the following lines, and save the file.

BOOTPROTO=none
ONBOOT=yes
HOTPLUG=yes
TYPE=Bond
DEVICE=bond1
MTU=9000
BONDING_OPTS="miimon=100 mode=802.3ad xmit_hash_policy=layer2+3 lacp_rate=fast"
PEERDNS=no
NM_CONTROLLED=no

f. Type vi /etc/sysconfig/network-scripts/ifcfg-em2 and modify MASTER, SLAVE, and ONBOOT as follows:

DEVICE=em2
HWADDR=24:8A:07:5B:17:68
MASTER=bond1
SLAVE=yes
ONBOOT=yes
HOTPLUG=yes
TYPE=Ethernet
MTU=9000
NM_CONTROLLED=no

g. Save and exit.


h. Repeat steps f and g for ifcfg-p2p2.
i. Type vi ifcfg-bond0.151 and insert the following lines, and save the file.

BOOTPROTO=none
ONBOOT=yes
HOTPLUG=yes
TYPE=Bond
VLAN=yes
DEVICE=bond0.151
IPADDR=192.168.151.50
NETMASK=255.255.255.0
MTU=9000
BONDING_OPTS="miimon=100 mode=802.3ad xmit_hash_policy=layer2+3 lacp_rate=fast"
PEERDNS=no
NM_CONTROLLED=no

j. Repeat step i to create ifcfg-bond0.153, insert the respective lines, and save the file.
k. Type vi ifcfg-bond1.152, insert the following lines, and save the file.

BOOTPROTO=none
ONBOOT=yes
HOTPLUG=yes
TYPE=Bond
VLAN=yes
DEVICE=bond1.152
IPADDR=192.168.152.50
NETMASK=255.255.255.0
MTU=9000
BONDING_OPTS="miimon=100 mode=802.3ad xmit_hash_policy=layer2+3 lacp_rate=fast"
PEERDNS=no
NM_CONTROLLED=no

l. Repeat step k to create ifcfg-bond1.154, insert the respective lines, and save the file.
2. Type systemctl restart network to restart network services.


3. Type ip addr and check that the newly added bonds and VLAN interfaces appear in the output (see the verification sketch after these steps).
4. Remove the node from the instant maintenance mode.
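
For the spot-check in step 3, the following is a minimal verification sketch, assuming the example bond and VLAN names used above (bond1, bond0.151, bond1.152, and so on); adjust the names to match your environment:

ip addr show bond1
ip addr show bond1.152
cat /proc/net/bonding/bond0 | grep -B 2 -A 2 MII
cat /proc/net/bonding/bond1 | grep -B 2 -A 2 MII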

Related information
Cabling the PowerFlex R640/R740xd/R840 nodes

Migrate PowerFlex storage-only nodes from a static to an LACP bonding NIC port design


Use this procedure to migrate PowerFlex storage-only nodes from a static bonding to an LACP bonding NIC port design.

Prerequisites
A minimum of two logical data networks are supported. Optionally, you can configure four logical data networks.

NOTE: This procedure is applicable only for an LACP bonding NIC port design.

Steps
1. Log in to PowerFlex GUI presentation server, and place the PowerFlex node into instant maintenance mode. Migrate one
PowerFlex storage-only node at a time.
a. Log in to the node using PuTTY.
b. Go to /etc/sysconfig/network-scripts/.
c. Type vi ifcfg-bond0.153 and insert the following lines:

BOOTPROTO=none
ONBOOT=yes
HOTPLUG=yes
TYPE=Bond
VLAN=yes
DEVICE=bond0.153
IPADDR=192.168.153.190
NETMASK=255.255.255.0
MTU=9000
BONDING_OPTS="miimon=100 mode=802.3ad xmit_hash_policy=layer2+3 lacp_rate=fast"
PEERDNS=no
NM_CONTROLLED=no

d. Save the file and exit.


e. Type vi ifcfg-bond1.154, insert the following lines, and save the file.

BOOTPROTO=none
ONBOOT=yes
HOTPLUG=yes
TYPE=Bond
VLAN=yes
DEVICE=bond1.154
IPADDR=192.168.154.190
NETMASK=255.255.255.0
MTU=9000
BONDING_OPTS="miimon=100 mode=802.3ad xmit_hash_policy=layer2+3 lacp_rate=fast"
PEERDNS=no
NM_CONTROLLED=no

f. Save the file and exit.


2. Type systemctl restart network to restart network services.
3. Type ip addr and check if the newly added bonds are shown in results.
4. Remove the node from the instant maintenance mode.


Install PowerFlex components on PowerFlex storage-only nodes


Use this procedure to install PowerFlex components on PowerFlex storage-only nodes with or without NVMe drives.

Steps
1. From the management jump server VM, extract all required Red Hat files from the
VxFlex_OS_3.x.x_xxx_Complete_Software/ VxFlex_OS_3.x.x_xxx_RHEL_OEL7 package to the Red Hat
node root folder.
2. Use WinSCP to copy the following Red Hat files from the jump host folder to the /tmp folder on the Red Hat Enterprise
Linux node:
● EMC-ScaleIO-sds-3.x-x.xxx.el7.x86_64.rpm
● EMC-ScaleIO-sdr-3.x-x.xxx.el7.x86_64.rpm
● EMC-ScaleIO-mdm-3.x-x.xxx.el7.x86_64.rpm
● EMC-ScaleIO-lia-3.x-x.xxx.el7.x86_64.rpm
From the appropriate Intelligent Catalog folder, copy the PERC CLI perccli-7.x-xxx.xxxx.rpm rpm package.
NOTE: Verify that the PowerFlex version you install is the same as the version on other Red Hat Enterprise Linux
servers.
3. Use PuTTY and connect to the PowerFlex management IP address of the new node.
4. Go to /tmp, and install the LIA software (use the admin password for the token value).

TOKEN=<admin password> rpm -ivh /tmp/EMC-ScaleIO-lia-3.x-x.xxx.el7.x86_64.rpm

5. Type #rpm -ivh /tmp/EMC-ScaleIO-sds-3.x-x.xxx.el7.x86_64.rpm to install the storage data server (SDS)
software.
6. To enable replication, type rpm -ivh /tmp/EMC-ScaleIO-sdr-3.x-x.xxx.el7.x86_64.rpm to install the storage
data replication (SDR) software.
7. Type rpm -ivh /tmp/perccli-7.x-xxx.xxxx.rpm to install the PERC CLI.
8. Reboot the PowerFlex storage-only node by typing reboot. A quick check to confirm the packages are installed is shown after these steps.
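
A minimal sketch of that optional package check (run it before or after the reboot), assuming the package names used in this procedure; exact versions vary by Intelligent Catalog:

# rpm -qa | grep -i -E 'scaleio|perccli'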

Add a new PowerFlex storage-only node without NVDIMM to PowerFlex


Use this procedure to add a PowerFlex storage-only node without NVDIMM.

Prerequisites
Confirm that the PowerFlex system is functional and no rebuild or rebalances are running. For PowerFlex 3.5 or later, use the
PowerFlex GUI presentation server to add a PowerFlex storage-only node to PowerFlex.

Steps
1. If you are using PowerFlex GUI presentation server:
a. Log in to the PowerFlex GUI presentation server.
b. Click Configuration > SDSs.
c. Click Add.
d. Enter the SDS Name.
e. Select the Protection Domain and SDS Port.
f. Enter the IP address for data1, data2, data3 (if required), and data4 (if required).
g. Select SDS and SDC, as the appropriate communication roles for all the IP addresses that are added.
h. Click SDS.
2. If you are using a PowerFlex version prior to 3.5:


a. Log in to the PowerFlex GUI, and connect to the primary MDM.


b. Click the SDS pane.
c. Click Add and enter the SDS IP addresses to be added.
d. Choose the SDS and SDC connection, click ADD IP after adding each data IP address for a particular SDS.
e. Click ADD SDS.
f. Type the name of the PowerFlex node.
g. Click the + icon next to the IP address and type in the PowerFlex Data1 IP Address, PowerFlex Data 2 IP Address,
PowerFlex Data 3 IP Address, and PowerFlex Data 4 IP Address as recorded earlier. Do not add the management IP
address to the list. The data 3 (if required) and data 4 (if required) IP addresses are only applicable for the LACP bonding
NIC port design. A minimum of two logical data networks are supported. Optionally, you can configure four logical data
networks.
NOTE: There should be no more than two IP addresses listed and ensure both Communication Roles are selected
for both IP addresses.

h. Click OK and verify that the SDS was successfully added.


i. Click Close.
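
If you prefer the command line over the GUI, the SDS can also be added with scli from the primary MDM after logging in (see Verify the PowerFlex version for the scli --login syntax). The flags below are an assumption based on common PowerFlex scli usage and are not taken from this guide; confirm the exact syntax for your installed release with scli --help before running it:

scli --add_sds --sds_name <SDS_NAME> --protection_domain_name <PD_NAME> --sds_ip <data1_IP>,<data2_IP>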

Configuring NVDIMM devices for PowerFlex storage-only nodes


● Verify that Red Hat Enterprise Linux or the embedded operating system is installed and that the network is configured.
● Verify that the PowerFlex storage-only node meets the requirements described in Configuring storage-only nodes with NVDIMMs.
● Depending on the drive type, use the following to install PowerFlex:
○ SSD - Install PowerFlex components on PowerFlex storage-only nodes without NVMe drives.
○ NVMe drives - Install PowerFlex components on NVMe storage nodes.
NOTE: If adding a PowerFlex storage-only node with NVDIMMs to an existing protection domain, see Identify NVDIMM
acceleration pool in a protection domain to ensure a protection domain with other NVDIMM accelerated devices.

If adding PowerFlex storage-only nodes with NVDIMMs to a new protection domain, see Create an NVDIMM protection
domain. Dell recommends that a minimum of six PowerFlex storage-only nodes be in a protection domain.

Verify the PowerFlex version


Use this procedure to verify the installed PowerFlex version.

Prerequisites
Skip this procedure if NVDIMM is not available in the PowerFlex nodes.

Steps
1. Log in to the jump server.
2. SSH to primary MDM.
3. Log in with administrator credentials.
scli --login --username admin --password 'admin_password'
4. Type scli --version to verify the PowerFlex version.
Sample output:
DellEMC ScaleIO Version: R3_x.x.xxx


Verify the operating system version


Use this procedure to verify that Red Hat Enterprise Linux or embedded operating system 7.x is installed.

Steps
1. Log in to the jump server.
2. SSH to the PowerFlex storage-only node.
3. Enter the following command to verify the operating system version:
cat /etc/*-release

Verify necessary RPMs


Use this procedure to verify necessary RPMs are installed.

About this task


The following RPMs must be installed:
● ndctl
● ndctl-libs
● daxctl-libs
● libpmem
● libpmemblk
The sample output in this procedure is for reference purpose only. The versions may vary depending on the intelligent catalog
used.

Steps
1. Log in to the jump server.
2. SSH to the PowerFlex storage-only node.
3. Type yum list installed ndctl ndctl-libs daxctl-libs libpmem libpmemblk
Sample output:

[root@eagles-r640-nvso-235 ~]# yum list installed ndctl ndctl-libs daxctl-libs


libpmem libpmemblk
Failed to set locale, defaulting to C
Loaded plugins: product-id, search-disabled-repos, subscription-manager
This system is not registered with an entitlement server. You can use subscription-
manager to register.
Repodata is over 2 weeks old. Install yum-cron? Or run: yum makecache fast
Installed Packages
daxctl-libs.x86_64 62-1.el7
@razor-internal
libpmem.x86_64 1.4-3.el7
@razor-internal
libpmemblk.x86_64 1.4-3.el7
@razor-internal
ndctl.x86_64 62-1.el7
@razor-internal
ndctl-libs.x86_64 62-1.el7
@razor-internal

4. If the RPMs are not installed, type yum install -y <rpm> to install the RPMs.

Activate NVDIMM regions


Use this procedure to activate NVDIMM regions.

Steps
1. Log in to the jump server.


2. SSH to the PowerFlex storage-only node.


3. List the NVDIMM regions by typing the following command:
ndctl list -R
4. Type ndctl destroy-namespace all -f to destroy all default namespaces. If this fails to reclaim space and you
have already sanitized the NVDIMMs, type ndctl start-scrub to scrub the NVDIMMs.
5. For each discovered region, type ndctl create-namespace -r region[x] -m raw -f (x corresponds to the
region number) to re-create the namespace.
For example, type:

ndctl create-namespace -r region0 -m raw -f


6. Type ndctl list -N to verify the namespaces (the number of namespaces should match the number of NVDIMMs in the
PowerFlex appliance).

Create namespaces and DAX devices


Use this procedure to create namespaces, and configure each namespace as a devdax device.

Steps
1. SSH to the PowerFlex storage-only node.
2. For each NVDIMM, type (starting with namespace0.0):
ndctl create-namespace -f -e namespace[x].0 --mode=devdax --align=4k --no-autolabel

For example, ndctl create-namespace -f -e namespace0.0 --mode=devdax --align=4k --no-


autolabel

{"dev":"namespace0.0","mode":"devdax","map":"dev","size":"15.75 GiB
(16.91 GB)","uuid":"348d510e-dc70-4855-a6ca-6379046896d5","raw_uuid":
"4ca5cda2-ebd4-4894-aa4e-0cfc823745e2","daxregion":{"id":0,"size":"15.75 GiB (16.91
GB)","align":4096,"devices":[{"chardev":"dax0.0","size":"15.75 GiB (16.91 GB)"}]},
"numa_node":0}

3. Repeat for each NVDIMM namespace.

Identify NVDIMM acceleration pool in a protection domain


Use this procedure to identify a protection domain that is configured with NVDIMM acceleration pool using the PowerFlex GUI.
NVDIMM acceleration pools are required for compression.

Steps
1. Log in to the PowerFlex GUI presentation server as an administrative user.
2. Click Configuration > Acceleration Pool.
3. Note the acceleration pool name. The name is required while creating a compression storage pool.

Identify NVDIMM acceleration pool in a protection domain using a PowerFlex version prior to 3.5


Use this procedure to identify a protection domain that is configured with NVDIMM acceleration pool using a PowerFlex version
prior to 3.5. NVDIMM acceleration pools are required for compression.

Steps
1. Log in to the PowerFlex GUI as an administrative user.
2. Select Backend > Storage.
3. Filter By Storage Pools.


4. Expand the SDSs in the protection domains. Under the Acceleration Type column, identify the protection domain with Fine
Granularity Layout. This is a protection domain that has been configured with NVDIMM accelerated devices.
5. The acceleration pool name (in this example, AP1) is listed under the column Accelerated On. This is needed when creating
a compression storage pool.

PowerFlex CLI (SCLI) keys for acceleration pools


This lists the PowerFlex CLI (SCLI) keys to use for acceleration pools and data compression.
All scli commands are performed on the Primary MDM.
Key:
● [PD] = Protection Domain
● [APNAME] = Acceleration Pool Name
● [SPNAME] = Storage Pool Name
● [VOLNAME] = Volume Name
● [SDSNAME] = SDS Name
● [SDS-IPs] = PowerFlex Data IP addresses
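
As an illustration of how these keys substitute into a command, the following reuses the device-add command shown later in this chapter (Add DAX devices to the acceleration pool); it is an example of key substitution only, not an additional required step:

scli --add_sds_device --sds_name [SDSNAME] --device_path /dev/dax0.0 --acceleration_pool_name [APNAME] --force_device_takeover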

Create an NVDIMM protection domain


Use this procedure to create a protection domain.

About this task


Use this procedure only if you are expanding a PowerFlex node to a new protection domain.

Steps
1. If using PowerFlex GUI presentation server:
a. Log in to the PowerFlex GUI presentation server.
b. Click Dashboard > Configuration > Protection Domains, and click ADD.
c. In the Add Protection Domain window, enter the name of the protection domain.
d. Click ADD PROTECTION DOMAIN.
2. If using a PowerFlex version prior to 3.5:
a. Log in to the PowerFlex GUI.
b. Click Backend > Storage.
c. Right-click PowerFlex System, and click + Add Protection Domain.
d. Enter the protection domain name, and click OK.

Create an acceleration pool


Use this procedure to create an acceleration pool that is required for a compression storage pool.

Steps
1. If using PowerFlex GUI presentation server:
a. Log in to the PowerFlex GUI presentation server.
b. Click Dashboard > Configuration > Acceleration Pools, and ADD.
c. Enter the acceleration pool in the Name field of the Add Acceleration pool window.
d. Select NVDIMM as the Pool Type, and select Protection Domain from the drop-down list.
e. In the Add Devices section, select the Add Devices to All SDSs check box only if the devices need to be added
on all SDSs. If not, leave it unchecked.
f. In the Path and Device Name fields, enter the device path and device name respectively. Select the appropriate SDS
from the drop-down menu. Click Add Devices.


g. Repeat the previous step to add more devices.


h. Click Add Acceleration Pool.
2. If using a PowerFlex version prior to 3.5:
a. Log in to the PowerFlex GUI.
b. Click Backend > Devices.
c. Right-click Protection Domain, and click + Add > Add Acceleration Pool.
d. Enter the acceleration pool name.
e. Select NVDIMM.
f. Click OK. DAX devices are added later.
g. Click OK > Close.

Create a storage pool


Use this procedure to create a storage pool.

Steps
1. If using PowerFlex GUI presentation server:
a. Log in to the PowerFlex GUI presentation server.
b. Click Dashboard > Configuration > Storage Pools, and Add.
c. Enter the storage pool name in the Name field of the Add Storage pool window.
d. Select Protection Domain from the drop-down list.
e. Select SSD as the Media Type from the drop-down, and select FINE for Data Layout Granularity.
f. Select the Acceleration Pool from the drop-down menu, and click Add Storage Pool.
2. If using a PowerFlex version prior to 3.5:
a. Log in to the PowerFlex GUI.
b. Select Backend > Storage.
c. Right-click Protection Domain, and click + Add > Add Storage Pool.
d. Add the new storage pool details:
● Name: Provide name
● Media Type: SSD
● Data Layout: Fine Granularity
● Acceleration Pool: Acceleration pool that was created previously
● Fine Granularity: Enable Compression
e. Click OK > Close

Add DAX devices to the acceleration pool


Use this procedure to add DAX devices to the acceleration pool.

Steps
1. Log in to the primary MDM using SSH.
2. For each SDS with NVDIMM, type the following to add NVDIMM devices to the acceleration pool:
ndctl create-namespace -f -e namespace[x].0 --mode=devdax --align=4k --no-autolabel
scli --add_sds_device --sds_name <SDS_NAME> --device_path /dev/dax0.0 --acceleration_pool_name <ACCP_NAME> --force_device_takeover

3. Repeat for the remaining DAX devices.


Add SDS to NVDIMM protection domain


Use this procedure to add SDSs to an NVDIMM protection domain.

About this task


You can also add an SDS to an NVDIMM protection domain using the PowerFlex GUI.

Steps
1. SSH to the PowerFlex storage-only node.
2. Type lsblk to get the disk devices.
Sample output:

3. Type ls -lia /dev/da* to get the DAX devices.


Sample output:

4. If using PowerFlex GUI presentation server:


a. Log in to the PowerFlex GUI presentation server.
b. Click Configuration > SDSs
c. Select the newly added SDSs by clicking on the check box.
d. Click ADD DEVICE > Acceleration Device.
e. In the Path and Name fields, enter the device path and name respectively. Select the acceleration pool you recorded in
the Drive information table.
f. Click ADD Device.
g. Expand the ADVANCED (OPTIONAL) section, and select yes for Select Force Device Take Over.
h. Click ADD DEVICES.
5. If using a PowerFlex version prior to 3.5:


a. Log in to the PowerFlex GUI.


b. Select Backend > Storage.
c. Right-click Protection Domain, and select Add > Add SDS.
d. Add SDS to the NVDIMM protection domain:
● Name: SDS Name
● IP address
○ Data 1 IP address with SDC and SDS enabled
○ Data 2 IP address with SDC and SDS enabled
○ Applicable for an LACP bonding NIC port design:
■ Data 3 IP address (if required) with SDC and SDS enabled
■ Data 4 IP address (if required) with SDC and SDS enabled
● Add devices.
● Add acceleration devices.

e. Click Advanced.
● Select Force Device Takeover.

f. Click OK > Close.


6. Repeat for all PowerFlex storage-only nodes.
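
The same addition can be scripted from the primary MDM with the PowerFlex CLI (a hedged sketch using the keys from the SCLI keys section; the data IP addresses are comma-separated):

scli --add_sds --sds_name [SDSNAME] --sds_ip [SDS-IPs] --protection_domain_name [PD]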


Create a compressed volume


Use this procedure to create a compressed volume.

Steps
1. If using PowerFlex GUI presentation server:
a. Log in to the PowerFlex GUI presentation server.
b. Click Dashboard > Configuration > Volumes, and ADD.
c. In the ADD Volume window, enter the name in the Volume name field.
d. Select THIN or THICK as the Provisioning option.
e. Enter the size in the Size field. Select the Storage Pool from the drop-down menu.
f. Click Add Volume.
2. If using a PowerFlex version prior to 3.5:
a. Log in to the PowerFlex GUI.
b. Click Frontend > Volumes.
c. Right-click Storage Pool, and click Add Volume.
d. Add the volume details:
● Name: Volume name
● Size: Required volume size
● Enable compression
e. Click OK > Close.
f. Right-click the volume, and select Map.
g. Map to all hosts.
h. Click OK.
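
From the primary MDM, an equivalent volume creation and mapping can also be done with the PowerFlex CLI (a hedged sketch; the 1024 GB size and the SDC IP address are illustrative values only):

scli --add_volume --protection_domain_name [PD] --storage_pool_name [SPNAME] --volume_name [VOLNAME] --size_gb 1024 --thin_provisioned
scli --map_volume_to_sdc --volume_name [VOLNAME] --sdc_ip <SDC IP address>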

Add drives to PowerFlex


Use this procedure to add SSD or NVMe drives to PowerFlex.

Steps
1. If using PowerFlex GUI presentation server to enable zero padding on a storage pool:
a. Connect to the PowerFlex GUI presentation server through https://<ipaddress>:8443 using the MDM credentials.
b. Click Storage Pools from the left pane, and select the storage pool.
c. Select Settings from the drop-down menu.
d. Click Modify > General Settings from the drop-down menu.
e. Click Enable Zero Padding Policy > Apply.
NOTE: After the first device is added to a specific pool, you cannot modify the zero padding policy. FG pool is always
zero padded. By default, zero padding is disabled only for MG pool.

2. If using a PowerFlex version prior to 3.5 to enable zero padding on a storage pool:
a. Select Backend > Storage, and right-click Select By Storage Pools from the drop-down menu.
b. Right-click the storage pool, and click Modify zero padding policy.
c. Select Enable Zero Padding Policy, and click OK > Close.
NOTE: Zero padding cannot be enabled when devices are available in the storage pools.

3.
   If CloudLink is...    Do this...
   Enabled               See one of the following procedures depending on the devices:
                         ● Encrypt PowerFlex hyperconverged (SVM) or storage-only devices (SED drives)
                         ● Encrypt PowerFlex hyperconverged (SVM) or storage-only devices (non-SED drives)
   Disabled              Use PuTTY to access the Red Hat Enterprise Linux or an embedded operating system node.

   When adding NVMe drives, keep a separate storage pool for the PowerFlex storage-only node.

4. Note the devices by typing lsblk -p or nvme list.

5. If you are using PowerFlex GUI presentation server:


a. Connect to the PowerFlex GUI presentation server through https://<ipaddress>:8443 using the MDM credentials.
b. Click Configuration > SDSs.
c. Locate the newly added PowerFlex SDS, right-click, select Add Device, and choose Storage device from the drop-
down menu.
d. Type /dev/nvmeXXn1 where XX is the value from step 4. Provide the storage pool, verify the device type, and click Add
Device. Add all the required devices in the same way, and click Add Devices.
NOTE: If the devices are not getting added, select Advanced Settings > Advanced Takeover from the Add
Device Storage page.

e. Repeat these steps on all the SDSs where you want to add the devices.
f. Ensure all the rebuild and balance activities are successfully completed.
g. Verify the space capacity after adding the new node.


6. If you are using a PowerFlex version prior to 3.5:


a. Connect to the PowerFlex GUI.
b. Click Backend.
c. Locate the newly added PowerFlex SDS, right-click, and select Add Device.
d. Type /dev/nvmeXXn1 where XX is the value from step 4.
e. Select the Storage Pool.
NOTE: If the existing PD has Red Hat Enterprise Linux nodes, replace or expand with Red Hat Enterprise Linux. If
the existing PD has embedded operating system nodes, replace or expand with embedded operating system.

f. Repeat these steps for each device.


g. Click OK > Close. A rebalance of the PowerFlex storage-only node begins.
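
To monitor the rebuild and rebalance progress from the command line, the following is a hedged example run from the primary MDM; the output includes capacity, rebuild, and rebalance information:

scli --login --username admin
scli --query_all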

Add Layer 3 routing between an external SDC and SDS

Use this procedure to enable external SDC to SDS communication and configure the PowerFlex node for external SDC
reachability.

Steps
1. To manually configure the PowerFlex node for an external SDC reachability:
a. Log in to the PowerFlex node using ssh <ip address>.
b. Configure one static route per interface for each external network.
echo "<destination subnet> via <gateway> dev <SIO Interface>">route-<SIO Interface>
2. To manually configure the external SDC (VMware ESXi host) for SDS reachability:
a. Log in to the VMware ESXi host using ssh <ip address>.
b. Configure one static route for each PowerFlex data network.
esxcli network ip route ipv4 add -g <gateway> -n <destination subnet in CIDR>
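
The following is a hedged example with illustrative values only (192.168.211.0/24 as the external SDC subnet, 192.168.151.0/24 as a PowerFlex data subnet, and gateways to match); substitute the addresses from the Workbook:

Example for the PowerFlex storage-only node: echo "192.168.211.0/24 via 192.168.151.1 dev eth1" > route-eth1
Example for the external SDC (VMware ESXi): esxcli network ip route ipv4 add -g 192.168.211.1 -n 192.168.151.0/24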

Add storage data replication to PowerFlex


Use this procedure to add storage data replication to PowerFlex.

About this task


This procedure is required when not using PowerFlex Manager.

Prerequisites
Replication is supported on PowerFlex storage-only nodes with dual CPU. The node should be migrated to an LACP bonding NIC
port design.

Steps
1. Connect to the PowerFlex GUI presentation server through https://<ipaddress>:8443 using the MDM credentials.
2. Click the Protection tab in the left pane.
NOTE: In the PowerFlex GUI version 3.5 or prior, this tab is Replication.

3. Click SDR > Add, and enter the storage data replication name.
4. Choose the protection domain.
5. Enter the IP address to be used and click Add IP. Repeat this for each IP address and click Add SDR.
NOTE: While adding storage data replication, Dell recommends adding IP addresses for flex-data1-<vlanid>, flex-data2-
<vlanid>, flex-data3-<vlanid> (if required), and flex-data4-<vlanid> (if required) along with flex-rep1-<vlanid> and
flex-rep2-<vlanid>. Choose the role of Application and Storage for all data IP addresses and choose the role of External
for the replication IP addresses.

6. Repeat steps 3 through 5 for each storage data replicator you are adding. If you are expanding a replication-enabled
PowerFlex cluster, skip steps 7 through 11.
7. Click Protection > Journal Capacity > Add, and provide the capacity percentage as 10%, which is the default. You can
customize if needed.
8. Extract and add the MDM certificate:
NOTE: You can perform steps 8 through 13 only when the Secondary Site is up and running.

a. Log in to the primary MDM, by using the SSH on source and destination.
b. Type scli command scli --login --username admin. Provide the MDM cluster password, when prompted.
c. See the following example and run the command to extract the certificate on source and destination primary MDM.
Example for source: scli --extract_root_ca --certificate_file /tmp/Source.crt
Example for destination: scli --extract_root_ca --certificate_file /tmp/destination.crt
d. Copy the extracted certificate of the source (primary MDM) to the destination (primary MDM) using SCP, and vice versa.
e. See the following example to add the copied certificate:
Example for source: scli --add_trusted_ca --certificate_file /tmp/destination.crt --comment
destination_crt
Example for destination: scli --add_trusted_ca --certificate_file /tmp/source.crt --comment
source_crt
f. Type scli --list_trusted_ca to verify the added certificate.
9. Create the remote consistency group (RCG).
Connect to the PowerFlex GUI presentation server through https://<ipaddress>:8443.
NOTE: Use the primary MDM IP and credentials to log in to the PowerFlex cluster.
10. Click the Protection tab from the left pane. If you are using a PowerFlex version 3.5 or prior, click the Replication tab.
11. Choose RCG (Remote Consistency Group), and click ADD.
12. On the General tab:
a. Enter the RCG name and RPO.
b. Select the Source Protection Domain from the drop-down list.
c. Select the target system and Target protection domain from the drop-down list, and click Next.
d. Under the Pair tab, select the source and destination volumes.
NOTE: The source and destination volumes must be identical in size and provisioning type. Do not map the volume
on the destination site of a volume pair. Retain the read-only permission. Do not create a pair containing a
destination volume that is mapped to the SDCs with a read_write permission.

e. Click Add pair, select the added pair to be replicated, and click Next.
f. In the Review Pairs tab, select the added pair, click Add RCG, and start replication as required.

Add a PowerFlex node to a PowerFlex Manager service in lifecycle mode

Use this procedure to update the inventory and service details in PowerFlex Manager for a new PowerFlex node.

Steps
1. Update the inventory for vCenter (vCSA), switches, gateway VM, and nodes:
a. Click Resources on the home screen.
b. Select the vCenter, switches (applicable only for full networking), gateway VM, and newly added nodes.
c. Click Run Inventory.
d. Click Close.
e. Wait for the job in progress to complete.


2. Update Services details:


a. Click Services.
b. Choose the service on which the new node is expanded and click View Details.
c. On the Services details screen, choose Update Service Details.
d. Choose the credentials for the node and SVM and click Next.
e. On the Inventory Summary, verify that the newly added nodes appear under Physical Node, and click Next.
f. On the Summary page, verify the details and click Finish.


58
Performing a PowerFlex hyperconverged
node expansion
Perform the manual expansion procedure to add a PowerFlex hyperconverged node to PowerFlex Manager services that are
discovered in a lifecycle mode.
See Cabling the PowerFlex R640/R740xd/R840 nodes for cabling information.
Type systemctl status firewalld to verify if firewalld is enabled. If disabled, see the Enabling firewall service on
PowerFlex storage-only nodes and SVMs KB article to enable firewalld on all SDS components.

Discover resources
Use this procedure to discover and allow PowerFlex Manager access to resources in the environment. Provide the management
IP address and credential for each discoverable resource.

Prerequisites
Verify that the iDRAC network settings are configured. See Configure iDRAC network settings for more information.

About this task


Dell EMC recommends using separate operating system credentials for SVM and VMware ESXi. For information about creating
or updating credentials in PowerFlex Manager, click Settings > Credentials Management and access the online help.
During node discovery, you can configure iDRAC nodes to automatically send alerts to PowerFlex Manager. You can also change
the IP address of the iDRAC nodes and discover them. See Reconfigure the discovered nodes with new management IP and
credentials in the Dell EMC PowerFlex Appliance Administration Guide. If the PowerFlex nodes are not configured for alert
connector, Secure Remote Services does not receive critical or error alerts for those resources.
The following table describes how to configure resources in managed mode:

Resource type       Resource state    Example

PowerFlex nodes     Managed           PowerEdge iDRAC management IP address

If you want to perform firmware updates or deployments on a discovered node, change the default state to managed.
Perform firmware or catalog updates from the Services page or the Resources page.

NOTE: For partial network deployments, you do not need to discover the switches. The switches need to be pre-
configured. For sample configurations for Dell PowerSwitch, Cisco Nexus, and Arista switches, see the Dell EMC PowerFlex
Appliance Administration Guide.
The following are the specific details for completing the Discovery wizard steps:

Resource    IP address range    Resource   Discover into     Credentials:     Credentials:       Credentials:   Credentials:
type                            state      node pool         name             Username           password       SNMPv2
                                                                                                                 community
                                                                                                                 string
Node        192.168.101.21-     Managed    PowerFlex node    PowerFlex        root               calvin         customer
            192.168.101.24                 pool              appliance                                          provided
                                                             iDRAC default
Switch      192.168.101.45-     Managed    N/A               access           admin              admin          public
            192.168.101.46                                   switches
VM          192.168.105.105     Managed    N/A               vCenter          administrator      P@ssw0rd!      N/A
Manager                                                                       @vsphere.local
Element     192.168.105.120     Managed    N/A               CloudLink        secadmin           Secadmin!!     N/A
Manager**   192.168.105.121

** This is optional. For a new CloudLink Center deployment, the CloudLink Center is discovered automatically.

Prerequisites
● Configure the iDRAC network settings.
● Gather the IP addresses and credentials that are associated with the resources.
NOTE: PowerFlex Manager also allows you to use the name-based searches to discover a range of nodes that were
assigned the IP addresses through DHCP to iDRAC. For more information about this feature, see Dell EMC PowerFlex
Manager Online Help.

Steps
1. On the PowerFlex Manager Getting Started page, click Discover Resources.
2. On the Welcome page of the Discovery Wizard, read the instructions and click Next.
3. On the Identify Resources page, click Add Resource Type. From the Resource Type list, select the resource that you
want to discover.
4. Enter the management IP address of the resource in the IP/Hostname Range field. To discover a resource in an IP range,
provide a starting and ending IP address.
5. In the Resource State list, select Managed or Unmanaged.
6. For PowerFlex node, to discover resources into a selected node pool instead of the global pool (default), select the node
pool from the Discover into Node Pool list.
7. Select the appropriate credential from the Credentials list. See the table above for details.
8. For PowerFlex node, if you want PowerFlex Manager to automatically reconfigure the iDRAC IP addresses of the nodes it
discovers, select the Reconfigure discovered nodes with new management IP and credentials check box. This option
is not selected by default, because it is faster to discover the nodes if you bypass the reconfiguration.
NOTE: iDRAC can also be discovered using the hostname.

NOTE: For the Resource Type, you can use a range with hostname or IP address, provided the hostname has a valid
DNS entry.

9. For PowerFlex node, select the Auto configure nodes to send alerts to PowerFlex Manager check box to have
PowerFlex Manager automatically configure nodes to send alerts to PowerFlex Manager.
10. Click Next to start discovery. On the Discovered Resources page, select the resources from which you want to collect
inventory data and click Finish. The discovered resources are listed on the Resources page.

Related information
Configure iDRAC network settings


Expanding compute and storage capacity


Use this procedure to determine if sufficient capacity is available with the associated PowerFlex license before expanding your
system.

Steps
1. If you are using PowerFlex presentation server:
a. Log in to the PowerFlex presentation server.
b. Click Settings.
c. Copy the license and click Update License.
2. If you are using a version prior to PowerFlex 3.5:
a. Log in to the PowerFlex GUI and click Preferences > About. Note the current capacity available with the associated
PowerFlex license.
b. If the available capacity is sufficient for the planned expansion, proceed with the expansion process.
c. If the planned expansion exceeds the available capacity, obtain an updated license with additional capacity. Engage
the customer account team to obtain an updated license. Once the updated license is available, click Preferences >
System Settings > License > Update License. Verify that the updated capacity is available by selecting Preferences
> About.

Verify the CloudLink license


Use this procedure to verify that the customer has sufficient license to expand the nodes.

Prerequisites
Use the SED-based license for SED drives, and the capacity license for non-SED drives.

Steps
1. Log in to the CloudLink Center web console.
2. Click System > License.
3. Check the limit and verify that there is enough capacity for the expansion.

Install VMware ESXi


Use this procedure to install VMware ESXi on the PowerFlex node.

Prerequisites
Verify the customer VMware ESXi ISO is available and is located in the Intelligent Catalog code directory.

Steps
1. Log in to the iDRAC:
a. Connect to the iDRAC interface, and launch a virtual remote console on the Dashboard.
b. Select Connect Virtual Media.
c. Under Map CD/DVD, click Choose File > Browse and browse to the folder where the ISO file is saved in the Intelligent
Catalog folder, select it, and click Open.
d. Click Map Device.
e. Click Boot > Virtual CD/DVD/ISO.
f. Click Power > Reset System (warm boot).
NOTE: If the system is powered off, you must map the ISO image, change Next Boot to Virtual CD/DVD/ISO, and
power on the server. It boots from the ISO image. A reset is not required.

g. Press F2 to enter system setup mode.


h. Under System BIOS > Boot setting, select UEFI as the boot mode.
NOTE: Ensure that the BOSS card is set as the primary boot device from the boot sequence settings. If the BOSS
card is not the primary boot device, reboot the server and change the UEFI boot sequence from System BIOS >
Boot settings > UEFI BOOT settings.

i. Click Back > Finish > Yes > Finish > OK > Finish > Yes.
2. Install VMware ESXi:
a. On the VMware ESXi installer screen, press Enter to continue.
b. Press F11 to accept the license agreement.
c. Under Local, select DELLBOSS VD as the install location, and click Enter if prompted to do so.
d. Select US Default as the keyboard layout.
e. When prompted, type the root password and press Enter.
f. At the Confirm Install screen, press F11.
g. When the installation is complete, remove the installation media before rebooting.
h. Press Enter to reboot the node.
NOTE: Set the first boot device to be the drive on which you installed VMware ESXi in Step 3.

3. Configure the host settings in Direct Console User Interface (DCUI):


a. Press F2 to customize the system.
b. Provide the password for the root user and press Enter.
c. Press F2 to enter the DCUI.
d. Go to Configure Management Network.
e. Set Network Adapters to VMNIC4 and VMNIC6.
NOTE: For PowerFlex R840 hyperconverged nodes (GPU or No GPU), select VMNIC0 and VMNIC2.

f. See the ESXi Management VLAN ID field in the Workbook for the required VLAN value.
g. Choose Set static IPv4 address and network configuration. Set IPv4 ADDRESS, SUBNET MASK, and DEFAULT
GATEWAY configuration to the values defined in the Workbook.
h. Choose Use the following DNS server addresses and hostname. Go to DNS Configuration. See the Workbook for
the required DNS value.
i. Go to Custom DNS Suffixes. See the Workbook (local PFMC DNS).
j. Press Esc to exit.
k. Go to Troubleshooting Options.
l. Select Enable ESXi Shell and Enable SSH.
m. Press <Alt>-F1
n. Log in as root.
o. To enable the VMware ESXi host to work on the port channel, type:

esxcli network vswitch standard policy failover set -v vSwitch0 -l iphash


esxcli network vswitch standard portgroup policy failover set -p "Management
Network" -l iphash

p. Type exit to log off.


q. Press <Alt>-F2. Press Esc to return to the DCUI.
r. Select Troubleshooting Options > Disable ESXi Shell.
s. Go to DCUI IPv6 Configuration > IPv6 Configuration.
t. Disable IPv6.
u. Press ESC to return to the DCUI.
v. Type Y to commit the changes and the node restarts.
w. Verify host connectivity by pinging the IP address from the jump server, using the command prompt.
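
Optionally, you can confirm from an SSH session on the host that the IP hash policy was applied (a hedged check; the expected Load Balancing value is iphash):

esxcli network vswitch standard policy failover get -v vSwitch0
esxcli network vswitch standard portgroup policy failover get -p "Management Network"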


Create a new VMware ESXi cluster to add PowerFlex nodes

After installing VMware ESXi, use this procedure to create a new cluster, enable high-availability and DRS, and add a host to the
cluster.

About this task


If you are adding the host to a new cluster, follow the entire procedure. To add the host on an existing cluster, skip steps 1
through 6.

Prerequisites
Ensure that you have access to the customer vCenter.

Steps
1. From the vSphere Client home page, go to Home > Hosts and Clusters.
2. Select a data center.
3. Right-click the data center and select New Cluster.
4. Enter a name for the cluster.
5. Select vSphere DRS and vSphere HA cluster features.
6. Click OK.
7. Select the existing cluster or newly created cluster.
8. From the Configure tab, click Configuration > Quickstart.
9. Click Add in the Add hosts card.
10. On the Add hosts page, in the New hosts tab, add the hosts that are not part of the vCenter Server inventory by entering
the IP address, or hostname and credentials.
11. (Optional) Select the Use the same credentials for all hosts option to reuse the credentials for all added hosts.
12. Click Next.
13. The Host Summary page lists all the hosts to be added to the cluster with related warnings. Review the details and click
Next.
14. On the Ready to complete page, review the IP addresses or FQDN of the added hosts and click Finish.
15. Add the new licenses:
a. Click Menu > Administration.
b. In the Administration section, click Licensing.
c. Click Licenses.
d. From the Licenses tab, click Add.
e. Enter or paste the license keys for VMware vSphere and vCenter, one per line. Click Next.
A license key is a 25-character string of letters and digits in the format XXXXX-XXXXX-XXXXX-XXXXX-XXXXX.
You can enter a list of keys in one operation. A new license is created for every license key you enter.
f. On the Edit license names page, rename the new licenses as appropriate and click Next.
g. Optionally, provide an identifying name for each license. Click Next.
h. On the Ready to complete page, review the new licenses and click Finish.

Migrating vCLS VMs


Use this procedure to migrate the vSphere Cluster Services (vCLS) VMs manually to the service datastore.

About this task


VMware vSphere 7.0Ux or ESXi 7.0Ux creates vCLS VMs when the vCenter Server Appliance (vCSA) is upgraded. This task helps
to migrate the vCLS VMs to the service datastore.


Steps
1. Log in to VMware vCSA HTML Client using the credentials.
2. Go to VMs and templates inventory or Administration > vCenter Server Extensions > vSphere ESX Agent Manager
> VMs to view the VMs.
The VMs are in the vCLS folder once the host is added to the cluster.
3. Right-click the VM and click Migrate.
4. In the Migrate dialog box, click Yes.
5. On the Select a migration type page, select Change storage only and click Next.
6. On the Select storage page, select the PowerFlex volumes for hyperconverged or ESXi-based compute-only node which
will be mapped after the PowerFlex deployment.
NOTE: The volume names are powerflex-service-vol-1 and powerflex-service-vol-2. The datastore names are
powerflex-esxclustershortname-ds1 and powerflex-esxclustershortname-ds2. If these volumes or datastores are
not present, create the volumes or datastores to migrate the vCLS VMs.

7. Click Next > Finish.


8. Repeat the above steps to migrate all the vCLS VMs.

Renaming the VMware ESXi local datastore


Use this procedure to rename local datastores using the proper naming conventions.

Prerequisites
VMware ESXi must be installed with hosts added to the VMware vCenter.

Steps
1. Log in to the VMware vSphere Client.
2. Click Hosts and Clusters.
3. Locate and select the VMware ESXi host.
4. Select Datastores.
5. Right-click the datastore name, and select Rename.
6. Name the datastore using the DASXX convention, with XX being the node number.

Patch and install drivers for VMware ESXi


Use this procedure if VMware ESXi drivers differ from the current Intelligent Catalog. Patch and install the VMware ESXi drivers
using the VMware vSphere Client.

Prerequisites
Apply all VMware ESXi updates before installing or loading hardware drivers.
NOTE: This procedure is required only if the ISO drivers are not at the proper Intelligent Catalog level.

Steps
1. Log in to the VMware vSphere Client.
2. Click Hosts and Clusters.
3. Locate and select the VMware ESXi host that you installed.
4. Select Datastores.
5. Right-click the datastore name and select Browse Files.
6. Select the Upload icon (to upload file to the datastore).
7. Browse to the Intelligent Catalog folder or downloaded current solution Intelligent Catalog files.


8. Select the VMware ESXi patch .zip files according to the current solution Intelligent Catalog and node type and click OK to
upload.
9. Select the driver and vib files according to the current Intelligent Catalog and node type and click OK to upload.
10. Click Hosts and Clusters.
11. Locate the VMware ESXi host, right-click, and select Enter Maintenance Mode.
12. Open an SSH session with the VMware ESXi host using PuTTy or a similar SSH client.
13. Log in as root.
14. Type cd /vmfs/volumes/DASXX where XX is the name of the local datastore that is assigned to the VMware ESXi
server.
15. To display the contents of the directory, type ls.
16. If the directory contains vib files, type esxcli software vib install -v /vmfs/volumes/DASXX/
patchname.vib to install the vib. These vib files can be individual drivers that are absent from the larger patch cluster
and must be installed separately.
17. Perform either of the following depending on the VMware ESXi version:
a. For VMware ESXi 7.0, type esxcli software vib update -d /vmfs/volumes/DASXXX/VMware-
ESXi-7.0<version>-depot.zip.
b. For VMware ESXi 6.x, type esxcli software vib install -d /vmfs/volumes/DASXXX/<ESXI-patch-
file>.zip
18. Type reboot to reboot the host.
19. Once the host completes rebooting, open an SSH session with the VMware ESXi host, and type esxcli software vib
list |grep net-i to verify that the correct drivers loaded.
20. Select the host and click Exit Maintenance Mode.
21. Update the test plan and host tracker with the results.

Add PowerFlex nodes to Distributed Virtual Switches


Use the VMware vSphere Client to apply settings and add nodes to switches.
You can select multiple hosts and apply settings in template mode. In an LACP bonding NIC port design, dvswitch0 is referred
to as cust_dvswitch and dvswitch1 as flex_dvswitch.
CAUTION: If NSX is configured on the ESXi host, do not migrate mgmt and vmotion VMkernels to the VDS.

See Configuring the network for more information.

Change the MTU value


Use this procedure to change the MTU values to 9000 for the VMkernel port group and the dvswitch.

Prerequisites
● Type show running-configuration interface port-channel <portchannel number> to back up the
switch port configuration and verify that the port channel for the impacted host is updated to MTU 9216. If the MTU value
is already set to 9216, skip this step.
● Back up the dvswitch configuration:
○ Click Menu, and from the drop-down, click Networking.
○ Click the impacted dvswitch and click the Configure tab.
○ From the Properties page, verify the MTU value. If the MTU value is set to 9000, skip this procedure.
● See the following table for recommended MTU values:

Switch Default MTU Recommended MTU


Dell EMC PowerSwitch 9216 9216
Cisco Nexus 9216 9216
cust_dvswitch 9000 9000


VMkernel Default MTU Recommended MTU


vMotion 9000 1500 or 9000
Management 1500 1500 or 9000

Steps
1. Change the MTU to 9216 or jumbo on physical switch port (Dell EMC PowerSwitch and Cisco Nexus):
a. Dell EMC PowerSwitch:

interface port-channel31
description Downlink-Port-Channel-to-Ganga-r840-nvme-01
no shutdown
switchport mode trunk
switchport trunk allowed vlan 103,106,113,151,152,153,154
mtu 9216
vlt-port-channel 31
spanning-tree port type edge

b. Cisco Nexus:

interface port-channel31
description Downlink-Port-Channel-to-Ganga-r840-nvme-01
no shutdown
switchport mode trunk
switchport trunk allowed vlan 103,106,113,151,152,153,154
mtu 9216
vpc 31
spanning-tree port type edge

2. Change MTU to 9000 on cust_dvswitch:


a. Log in to VMware vCenter using administrator credentials.
b. Click Networking > cust_dvswitch.
c. Right-click and select Edit Settings.
d. Click Advanced and change the MTU value to 9000.
e. Repeat these steps for the remaining switch.
3. Change MTU to 9000 for vMotion VMkernel:
a. Click Hosts and Clusters.
b. Click the node and click Configure.
c. Click VMkernel adapters under Networking.
d. Select the vMotion VMK and click Edit.
e. In the Port Properties tab, change the MTU to 9000.
f. Repeat for the remaining nodes.
4. Change MTU to 9000 for the PowerFlex gateway data interfaces:
a. Start a PuTTy session.
b. Connect to the PowerFlex gateway with root credentials.
c. Type ip addr sh and verify the details of the interfaces.
d. Type cd /etc/sysconfig/network-scripts.
e. Type vi flex-data1.
f. Select the MTU=1500 line and change the value to 9000. If this line is not visible, append the line MTU=9000.
g. Save and exit.
h. Repeat these steps for all other data network interfaces.
i. Restart the gateway VM.
5. To verify that the MTU is changed, type ping -s 8972 <data IP address> for any of the data IP addresses.
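
For example, from the PowerFlex gateway VM a jumbo-frame path can be checked with a do-not-fragment ping (a hedged example; 192.168.152.21 is an illustrative data IP address):

ping -s 8972 -M do 192.168.152.21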


Validating network configurations


Test network connectivity between hosts and Metadata Managers (MDMs) before installing PowerFlex on new nodes.

Prerequisites
Gather the IP addresses of the primary and secondary MDMs.

Steps
1. Open Direct Console User Interface (DCUI) or use SSH to log in to the new hosts.
2. At the command-line interface, run the following commands to ping each of the primary MDM and secondary MDM IP
addresses.
If ping test fails, you must remediate before continuing.
NOTE: A minimum of two logical data networks are supported. Optionally, you can configure four logical data networks.

vmkping -I vmk0 <Mgmt ip address of primary MDM or secondary MDM>

vmkping -I vmk2 <Data1 address of primary MDM or secondary MDM> -s 8972 -d
vmkping -I vmk3 <Data2 address of primary MDM or secondary MDM> -s 8972 -d

Run the following commands for LACP bonding NIC port design. x is the VMkernel adapter number in vmkx.

vmkping -I vmkx <Data3 address of primary MDM or secondary MDM> -s 8972 -d

vmkping -I vmkx <Data4 address of primary MDM or secondary MDM> -s 8972 -d

NOTE: After several host restarts, check the access switches for error or disabled states by running the following
commands:

# show interface brief


# show interface | inc CRC
# show interface counters error

3. Optional: If errors appear in the counters of any interfaces, type the following and check the counters again.
Output from a Cisco Nexus switch:

# clear counters interface ethernet X/X

4. Optional: If there are still errors on the counter, perform the following to see if the errors are old and irrelevant or new and
relevant.
a. Optional: Type # show interface | inc flapped.

Sample output:
Last link flapped 1d02h
b. Type # show logging logfile | inc failure.

Sample output:
Dec 12 12:34:50.151 access-a %ETHPORT-5-IF_DOWN_LINK_FAILURE: Interface Ethernet1/4/3
is down (Link failure)
5. Optional: Check and reset physical connections, bounce and reset ports, and clear counters until errors stop occurring.
Do not activate new nodes until all errors are resolved and no new errors appear.


Add the new host to PowerFlex


Configure the direct path I/O
Use this procedure to configure DirectPath I/O pass-through for SSD devices.

Steps
1. In the VMware vSphere Client, select the new ESXi hosts.
2. Click Configure > Hardware > PCI Devices.
3. Click Configure PassThrough.
The Edit PCI Device Availability window opens.
4. From the PCI Device drop-down menu, select the Avago (LSI Logic) Dell HBA330 Mini check box and click OK.
5. Right-click the VMware ESXi host and select Maintenance Mode.
6. Right-click the VMware ESXi host and select Reboot to reboot the host.

Add NVMe devices as RDMs


Use this procedure to add NVMe devices as RDMs.

Steps
1. Use SSH to log in to the host.
2. Run the following command to generate a list of NVMe devices:

ls /vmfs/devices/disks/ |grep NVMe |grep -v ':'

Here is the example of the output:

t10.NVMe____Dell_Express_Flash_PM1725a_800GB_SFF____0A0FB071EB382500

t10.NVMe____Dell_Express_Flash_PM1725a_800GB_SFF____1C0FB071EB382500

t10.NVMe____Dell_Express_Flash_PM1725a_800GB_SFF____3906B071EB382500

3. Run the following command for each NVMe device, incrementing the disk number for each:

vmkfstools -z /vmfs/devices/disks/
t10.NVMe____Dell_Express_Flash_PM1725a_800GB_SFF____0A0FB071EB382500 /vmfs/volumes/
DASxx/<svm_name>/<svm_name>-nvme_disk0.vmdk
vmkfstools -z /vmfs/devices/disks/
t10.NVMe____Dell_Express_Flash_PM1725a_800GB_SFF____1C0FB071EB382500 /vmfs/volumes/
DASxx/<svm_name>/<svm_name>-nvme_disk1.vmdk
vmkfstools -z /vmfs/devices/disks/
t10.NVMe____Dell_Express_Flash_PM1725a_800GB_SFF____3906B071EB382500 /vmfs/volumes/
DASxx/<svm_name>/<svm_name>-nvme_disk2.vmdk

4. Log in to the VMware vSphere Client and go to Hosts and Clusters.


5. Right-click the SVM and click Edit Settings.
6. Click Add new device > Existing Hard Disk.
7. Under datastores, select the local DASXX, the directory of the SVM, and then select <svm_name>-nvme_disk0.vmdk.
8. Click OK.
9. Repeat the steps for each NVMe device, and then click OK.
10. Right-click the SVM and select compatibility and select upgrade VM compatibility.
11. Click Yes and select VMware ESXi 7.x.
12. Click OK.


13. Power on the SVM.


14. Use SSH to log in to the SVM and run lsblk.
15. Note the drive names of the NVMe devices (for example sdb, sdc). Exclude the 16 GB boot device and sr0 from the list as
they will be added later.

Install and configure the SDC


After adding hosts to the PowerFlex node with VMware ESXi, install the storage data client (SDC) to continue the expansion
process.

Steps
1. Copy the SDC file to the local datastore on the VMware vSphere ESXi server.
2. Use SSH on the host and type esxcli software vib install -d /vmfs/volumes/datastore1/
sdc-3.6.xxxxx.xx-esx7.x.zip -n scaleio-sdc-esx7.x.
3. Reboot the PowerFlex node.
4. To configure the SDC, generate a new UUID:
NOTE: If the PowerFlex cluster is using an SDC authentication, the newly added SDC reports as disconnected when
added to the system. See Configure an authentication enabled SDC for more information.

a. Use SSH to connect to the primary MDM.


b. Type uuidgen. A new UUID string is generated.
6607f734-da8c-4eec-8ea1-837c3a6007bf
5. Use SSH to connect to the new PowerFlex node.
6. Substitute the new UUID in the following code:

esxcli system module parameters set -m scini -p "IoctlIniGuidStr=6607f734-da8c-4eec-8ea1-837c3a6007bf IoctlMdmIPStr=VIP1,VIP2,VIP3,VIP4"

A minimum of two logical data networks are supported. Optionally, you can configure four logical data networks. Verify the
number of VIPs configured in an existing setup.
7. Reboot the PowerFlex node.
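
After the reboot, the scini module parameters can optionally be confirmed from an SSH session on the node (a hedged example; the output should show the GUID and MDM virtual IP addresses that were set):

esxcli system module parameters list -m scini | grep -i ioctl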

Rename the SDCs


Use this procedure to rename the SDCs.

Steps
1. If using PowerFlex presentation server:
a. Log in to the PowerFlex presentation server.
b. Go to Configuration > SDCs.
c. Select the SDC, click Modify > Rename, and rename the new host to the standard naming convention.
For example, ESX-10.234.91.84
2. If using a version prior to PowerFlex 3.5:
a. Log in to the PowerFlex GUI.
b. Click Frontend > SDCs and rename the new host to the standard naming convention.
For example, ESX-10.234.91.84


Calculate RAM capacity for medium granularity SDS


Use the formula provided in this procedure to calculate the required capacity for RAM.

About this task


See the following table to verify SVM memory size on the Medium Granularity (MG) SDS. Ensure that you have modified the
memory allocation before starting the upgrade to PowerFlex 3.5.x, based on the table or formula.

Steps
1. Using the table, calculate the required RAM capacity.

MG capacity (TiB)    Required MG RAM      Total RAM required in the SVM     Total RAM required in the SVM
                     capacity (GiB)       (without CloudLink) (GiB)         (with CloudLink) (GiB)

9.3                  7                    17                                21
19.92                10                   19                                23
38.84                13                   23                                27
22.5                 10                   20                                24
46.08                15                   25                                29
92.16                24                   34                                38

Additional services memory (applies to every row): MDM: 6.5 GiB, LIA: 350 MiB, OS Base: 1 GiB, Buffer: 2 GiB,
CloudLink: 4 GiB. Total additional services memory: 9.85 GiB without CloudLink, or 13.85 GiB with CloudLink.

2. Alternatively, you can calculate RAM capacity using the following formula:
NOTE: The calculation is in binary MiB, GiB, and TiB.

RAM_capacity_in_GiB = 5 + (210 * total_drive_capacity_in_TiB) / 1024


NOTE: Round up the RAM size to the next GiB, for example, if the output of the calculation is 16.75 GiB, round this up
to 17 GiB.
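
For example, for 38.84 TiB of MG capacity: 5 + (210 * 38.84) / 1024 = 5 + 7.97 = 12.97 GiB, which rounds up to 13 GiB and matches the table above.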

3. Open the PowerFlex GUI using the PowerFlex management IP address and the relevant PowerFlex username and password.
4. Select the Storage Data Server (SDS) from the Backend where you want to update the RAM size.
5. Right-click the SDS, select Configure IP addresses, and note the flex-data1-<vlanid> and flex-data2-<vlanid> IP addresses
associated with this SDS. A window appears displaying the IP addresses used on that SDS for data communication. Use
these IP addresses to verify that you powered off the correct PowerFlex VM.
6. Right-click the SDS, select Enter Maintenance Mode, and click OK.
7. Wait for the GUI to display a green check mark, click Close.
8. In the PowerFlex GUI, click Backend, and right-click the SVM and verify the checkbox is deselected for Configure RAM
Read Cache.
9. Power off the SVM.
10. In VMware vCenter, open Edit Settings and modify the RAM size based on the table or formula in step 1. The SVM should
be set to 8 or 12 vCPUs, configured at 8 or 12 sockets, 8 or 12 cores (for CloudLink, an additional 4 threads).
11. Power on the SVM.
12. From the PowerFlex GUI backend, right-click the SDS and select Exit Maintenance Mode and click OK.
13. Wait for the rebuild and rebalance to complete.
14. Repeat steps 6 through 13 for the remaining SDSs.


Change memory and CPU settings on the SVM


Use this procedure to change the memory and CPU settings on the SVM to update PowerFlex hyperconverged nodes to
PowerFlex 3.6.x.x.

About this task


To update memory and CPU settings:

Component                           vCPUs                                       Memory

SDS with Medium Granularity pool    vCPU total: 8 (SDS) + 2 (MDM/TB) +          RAM_capacity_in_GiB = 10 + (100 *
                                    2 (CloudLink) = 12 vCPU                     number of drives) + (550 *
                                    NOTE: Physical core requirement is 2        Total_drive_capacity_in_TiB)/1024
                                    sockets with 12 cores each (vCPU
                                    cannot exceed physical cores).

SDS with Fine Granularity pool      vCPU total: 10 (SDS) + 2 (MDM/TB) +         RAM_capacity_in_GiB = 5 + (210 *
                                    2 (CloudLink) = 14 vCPU                     Total_drive_capacity_in_TiB)/1024
                                    NOTE: Physical core requirement is 2
                                    sockets with 14 cores each (vCPU
                                    cannot exceed physical cores).
                                    NOTE: Set the SDS thread count to 10.

Fine Granularity MD cache           Not applicable                              FG_MD_Cache =
                                                                                (Total_drive_capacity_in_TiB/2) *
                                                                                4 * Compression_Factor *
                                                                                Percent_of_Metadata_to_Cache
                                                                                NOTE: It is recommended to allocate
                                                                                2% for the compression factor and
                                                                                2% for metadata to cache.

MDM/TB                              Not applicable                              6.5 GiB additional memory
LIA                                 Not applicable                              350 MiB additional memory
Operating system                    Not applicable                              2 GiB additional memory
CloudLink                           Not applicable                              4 GiB additional memory
Spare                               Not applicable                              2 GiB additional memory

Prerequisites
● Enter SDS node (SVMs) into maintenance mode and power off the SVM.
● Switch the primary cluster role to secondary if you are putting the primary MDM into maintenance mode (change back to
the original node once completed). Perform this activity on only one SDS at a time.
● If you place multiple SDSs into maintenance mode at the same time, there is a chance of data loss.
● Ensure that the node has enough CPU cores in each socket.

Steps
1. Log in to the PowerFlex GUI presentation server, https://Presentation_Server_IP:8443.
2. Click Configuration > SDSs.
3. In the right pane, select the SDS and click More > Enter Maintenance Mode.
4. In the Enter SDS into Maintenance Mode dialog box, select Instant.
If maintenance mode takes more than 30 minutes, select PMM.
5. Click Enter Maintenance Mode.
6. Verify the operation completed successfully and click Dismiss.
7. Shut down the SVM.


a. Log in to VMware vCenter using the VMware vSphere Client.


b. Select the SVM, right-click Power > Shut Down Guest OS.

Manually deploy the SVM


Use this procedure to manually deploy the SVM.

Steps
1. Log in to the VMware vSphere Client and do the following:
a. Right-click the ESXi host, and select Deploy OVF Template.
b. Click Choose Files and browse the SVM OVA template.
c. Click Next.
2. Go to hosts and templates/EMC PowerFlex and right-click PowerFlex SVM Template, and select the new VM from this
template.
3. Enter a name similar to svm-<hostname>-<SVM IP ADDRESS>, select a datacenter and folder, and click Next.
4. Identify the cluster and select the node that you are deploying. Verify that there are no compatibility warnings and click
Next. Review the details and click Next.
5. Select the local datastore DASXX, and click Next.
6. Leave Customize hardware checked and click Next.
a. Set CPU with 12 cores per socket.
b. Set Memory to 16 GB and check Reserve all guest memory (All locked).
NOTE: The number of vCPUs and the memory size may vary based on your system configuration. Check the existing
SVM and update the CPU and memory settings accordingly.

c. Set Network Adapter 1 to the flex-stor-mgmt-<vlanid>.


d. Set Network Adapter 2 to the flex-data1-<vlanid>.
e. Set Network Adapter 3 to the flex-data2-<vlanid>.
f. For an LACP bonding NIC port design:
i. Set Network Adapter 4 to the flex-data3-<vlanid> (if required).
ii. Set Network Adapter 5 to the flex-data4-<vlanid> (if required).
A minimum of two logical data networks are supported. Optionally, you can configure four logical data networks. Verify
the number of logical data networks configured in an existing setup and configure the new SVM accordingly. If only two
logical data networks are configured on an existing setup, download and deploy the three NIC OVA templates.

7. Click Next > Finish and wait for the cloning process to complete.
8. Right-click the new SVM, and select Edit Settings and do the following:
This is applicable only for SSD. For NVMe, see Add NVMe devices as RDMs.
a. From the New PCI device drop-down menu, click DirectPath IO.
b. From the PCI Device drop-down menu, expand Select Hardware, and select Avago (LSI Logic) Dell HBA330 Mini.
c. Click OK.
9. Prepare for asynchronous replication:
NOTE: If replication is enabled, follow the steps below; otherwise, skip this step.

a. Add virtual NICs:


i. Log in to the production vCenter using VMware vSphere Client and navigate to Host and Clusters.
ii. Right-click the SVM and click Edit Settings.
iii. Click ADD NEW DEVICE and select Network Adapter from the list.
iv. Select the appropriate port group created for SDR external communication, click OK.
v. Repeat steps ii to iv for creating a second NIC.
vi. Record the MAC address of newly added network adapters from vCenter:
● Right-click the SVM and click Edit Settings.
● Select Network Adapter from the list with SDR port group.
● Expand the network adapter and record the details from the MAC Address field.
b. Modify the vCPU, Memory, vNUMA, and CPU reservation settings on SVMs:


The following requirements are for reference:


● 12 GB additional memory is required for SDR.
For example, if you have 24 GB memory existing in SVM, add 12 GB to enable replication. In this case, 24+12=36 GB.
● Additional vCPUs are required for SDR:
○ vCPU total for an MG pool based system: 8 (SDS) + 8 (SDR) + 2 (MDM/TB) + 2 (CloudLink) = 20 vCPUs
○ vCPU total for an FG pool based system: 10 (SDS) + 10 (SDR) + 2 (MDM/TB) + 2 (CloudLink) = 24 vCPUs
● Per SVM, set numa.vcpu.maxPerVirtualNode to half the vCPU value assigned to the SVM.
For example, if the SVM has 20 vCPU, set numa.vcpu.maxPerVirtualNode to 10.

i. Browse to the SVM in the VMware vSphere Client.


ii. Find a virtual machine, select a data center, folder, cluster, resource pool, or host.
iii. Click the VMs tab, right-click the virtual machine and select Edit Settings.
iv. Click VM Options > Advanced.
v. Under Configuration Parameters, click the Edit Configuration.
10. Power on the new SVM and open a console.
11. For versions prior to PowerFlex 3.x, run systemctl status firewalld to check if the firewall is enabled.
If the service is inactive or disabled, see the KB article Enabling firewall service on PowerFlex storage-only nodes and SVMs
to enable the service and required ports for each PowerFlex component.

12. Log in using the following credentials:


● Username: root
● Password: admin
13. To change the root password type passwd and enter the new SVM root password twice.
14. Type nmtui, select Set system hostname, press Enter, and create the hostname. For example, ScaleIO-10-234-92-84.
15. Select Edit a Connection.
16. Set eth0 interface for flex-stor-mgmt-<vlanid>.
17. Set eth1 interface for flex-data1-<vlanid>.
18. Set eth2 interface for flex-data2-<vlanid>.
19. If the existing network uses an LACP bonding NIC port design, set the eth3 interface for flex-data3-<vlanid> and the eth4 interface for flex-data4-<vlanid>.
20. If native asynchronous replication is enabled, perform the following:
● Set eth5 interface for flex-rep1-<vlanid>
● Set eth6 interface for flex-rep2-<vlanid>
21. Exit nmtui back to the command prompt.
22. Type systemctl restart network to restart the network.
23. Type TOKEN=<TOKEN-PASSWORD> rpm -i /root/install/EMC-ScaleIO-lia-3.x-x.el7.x86_64.rpm to
install the LIA.
24. Type rpm -i /root/install/EMC-ScaleIO-sds-3.x-xel7.x86_64.rpm to install the SDS.
25. Type lsblk from the SVM console to list available storage to add to the SDS.

Prepare the SVMs for replication


Use the following procedures to prepare the SVMs for replication. These tasks are applicable only if native asynchronous
replication is configured. If native asynchronous replication is not configured, skip to Add the new SDS to PowerFlex.

Related information
Add the new SDS to PowerFlex


Setting the SDS NUMA


Use this procedure to allow the SDS to use the memory from the other NUMA.

Steps
1. Log in to the SDS (SVM) using PuTTY.
2. Append the line numa_memory_affinity=0 to the SDS configuration file /opt/emc/scaleio/sds/cfg/conf.txt,
type: # echo numa_memory_affinity=0 >> /opt/emc/scaleio/sds/cfg/conf.txt.
3. Run # cat /opt/emc/scaleio/sds/cfg/conf.txt to verify that the line is appended.

Enable replication on PowerFlex nodes with FG pool


Use this procedure to enable replication on PowerFlex nodes with FG pool.

About this task


If the PowerFlex node has FG Pool and if you want to enable replication, set the SDS thread count to ten from default of eight.

Steps
1. Use SSH to log in to the primary MDM. Log in to PowerFlex cluster using #scli --login --username admin.
2. To query the current value, type, #scli --query_performance_parameters --print_all --tech --
all_sds|grep -i SDS_NUMBER_OS_THREADS.
3. To set the value of SDS_number_OS_threads to 10, type # scli --set_performance_parameters --sds_id
<ID> --tech --sds_number_os_threads 10.

NOTE: Do not set the SDS threads globally, set the SDS threads per SDS.

Verify and disable Network Manager


Use this procedure to verify that Network Manager is not running.

Steps
1. Log in to the SDS (SVMs) using PuTTy.
2. Run # systemctl status NetworkManager to ensure that Network Manager is not running.
Output shows Network Manager is disabled and inactive.
3. If Network Manager is enabled and active, run the following command to stop and disable the service:

# systemctl stop NetworkManager


# systemctl disable NetworkManager

Updating the network configuration


Use this procedure to update the network configuration file for all the network interfaces.

Steps
1. Log in to SDS (SVMs) using PuTTY.
2. Note the MAC addresses of all the interfaces by typing # ifconfig or # ip a.


3. Edit all the interface configuration files (ifcfg-eth0, ifcfg-eth1, ifcfg-eth2, ifcfg-eth3, ifcfg-eth4) and update the NAME,
DEVICE and HWADDR to ensure correct MAC address and NAME gets assigned.
NOTE: If any of the entries are already there with correct value, then you can ignore such values.
● Use the vi editor to update the file # vi /etc/sysconfig/network-scripts/ifcfg-ethX
or
● Append the line using the following command:

# echo NAME=ethX >> /etc/sysconfig/network-scripts/ifcfg-ethX


# echo HWADDR=xx:xx:xx:xx:xx:xx >> /etc/sysconfig/network-scripts/ifcfg-ethX

Example file:

BOOTPROTO=none
ONBOOT=yes
HOTPLUG=yes
TYPE=Ethernet
DEVICE=eth2
IPADDR=192.168.155.46
NETMASK=255.255.254.0
DEFROUTE=no
MTU=9000
PEERDNS=no
NM_CONTROLLED=no
NAME=eth2
HWADDR=00:50:56:80:fd:82

Update network interface configuration files


Use this procedure to ensure the network interface configuration file updates correctly.

About this task


Remove net.ifnames=0 and biosdevname=0 from the /etc/default/grub file to avoid the interface name issue when you
add virtual NICs to SVM for SDR communication.

Steps
1. Log in to the SVM using PuTTY.
2. Edit the grub file located in /etc/default/grub, type: # vi /etc/default/grub.
3. From the last line, remove net.ifnames=0 and biosdevname=0, and save the file.
4. Rebuild the GRUB configuration file, using: # grub2-mkconfig -o /boot/grub2/grub.cfg


Shutting down the SVM


Use this procedure to shut down the SVM.

Steps
1. Log in to VMware vCenter using VMware vSphere Client.
2. Select the SVM, right-click Power > Shut-down Guest OS. Ensure you shut down the correct SVM.

Modifying the vCPU, memory, vNUMA, and CPU reservation settings on SVMs

When you enable replication on PowerFlex hyperconverged nodes, there are specific memory and CPU settings that must be
updated.

Setting the vNUMA advanced option


Use this procedure to set numa.vcpu.maxPerVirtualNode.

About this task


Ensure the CPU hot plug feature is disabled. If this feature is enabled, disable it before configuring the vNUMA parameter.

Steps
1. Log in to the production VMware vCenter using VMware vSphere Client.
2. Right-click the VM that you want to change and select Edit Settings.
3. Under the Virtual Hardware tab, expand CPU, and clear the CPU Hot Plug check box.

Editing the SVM configuration


Use this procedure to set the SVM numa.vcpu.maxPerVirtualNode to half the vCPU value assigned to the SVM.

Steps
1. Browse to the SVM in the VMware VMware vSphere Client.
2. To find a VM, select a data center, folder, cluster, resource pool, or host.
3. Click the VMs tab.
4. Right-click the VM and select Edit Settings.
5. Click VM Options and expand Advanced.
6. Under Configuration Parameters, click Edit Configuration.
7. In the dialog box that appears, click Add Configuration Params.
8. Enter a new parameter name and its value depending on the pool:
● If the SVM for an MG pool has 20 vCPU, set numa.vcpu.maxPerVirtualNode to 10.
● If the SVM for an FG pool has 24 vCPU, set numa.vcpu.maxPerVirtualNode to 12.
9. Click OK > OK.
10. Ensure the following:
● CPU shares are set to high.
● 50% of the vCPU reserved on the SVM.
For example:
● If the SVM for an MG pool is configured with 20 vCPUs and CPU speed is 2.8 GHz, set a reservation of 28 GHz
(20*2.8/2).
● If the SVM is configured with 24 vCPUs and CPU speed is 3 GHz, set a reservation of 36 GHz (24*3/2).
11. Find the CPU and clock speed:
a. Log in to VMware vCenter.


b. Click Host and Cluster.


c. Expand the Cluster and select Physical node.
d. Find the details against Processor Type under the Summary tab.
12. Right-click the VM you want to change and select Edit Settings.
13. Under the Virtual Hardware tab, expand CPU, verify Reservation and Shares.

Modify the memory size according to the SDR requirements for MG Pool
Use this procedure to add additional memory required for SDR if replication is enabled.

About this task


NOTE: 12 GB of additional memory is required for SDR. For example, if you have 24 GB memory existing in the SVM, add 12
GB for enabling replication so it would be 24+12 = 36 GB.

Steps
1. Log in to the production VMware vCenter using vSphere client.
2. Right-click the VM you want to change and select Edit Settings.
3. Under the Virtual Hardware tab, expand Memory, modify the memory size according to SDR requirement.
4. Click OK.

Increase the vCPU count


Use this procedure to increase the vCPU count according to the SDR requirement.

About this task


The physical core requirement is two sockets with ten cores each (the vCPU count per NUMA domain cannot exceed the physical core count).
vCPU total: 8 (SDS) + 8 (SDR) + 2 (MDM/TB) + 2 (CloudLink) = 20 vCPUs

Steps
1. Log in to the production VMware vCenter using VMware vSphere client.
2. Right-click the virtual machine that requires changes and select Edit Settings.
3. Under the Virtual Hardware tab, expand CPU, increase the vCPU count according to SDR requirement.
4. Click OK.

Modify the memory size according to the SDR requirements for FG Pool
Use this procedure to add additional memory required for SDR if replication is enabled.

About this task


NOTE: 12 GB of additional memory is required for SDR. For example, if you have 32 GB memory existing in the SVM, add 12
GB for enabling replication so it would be 32+12 = 44 GB.

Steps
1. Log in to the production VMware vCenter using VMware vSphere Client.
2. Right-click the VM that requires changes and select Edit Settings.
3. Under the Virtual Hardware tab, expand Memory, modify the memory size according to SDR requirement.
4. Click OK.


Increase the vCPU count


Use this procedure to increase the vCPU count according to the SDR requirement.

About this task


The physical core requirement is two sockets with ten cores each (the vCPU count per NUMA domain cannot exceed the physical core count).
vCPU total: 10 (SDS) + 10 (SDR) + 2 (MDM/TB) + 2 (CloudLink) = 24 vCPUs

Steps
1. Log in to the production VMware vCenter using VMware vSphere Client.
2. Right-click the virtual machine that requires changes and select Edit Settings.
3. Under the Virtual Hardware tab, expand CPU, and increase the vCPU count according to SDR requirement.
4. Click OK.

Adding virtual NICs to SVMs


Use this procedure to add two more NICs to each SVM for SDR external communication.

Steps
1. Log in to the production VMware vCenter using VMware vSphere Client and navigate to Host and Clusters.
2. Right-click the SVM and click Edit Settings.
3. Click Add new device and select Network Adapter from the list.
4. Select the appropriate port group created for SDR external communication and click OK. Note the MAC address assigned to the new network adapter; it is required later for the HWADDR entry.
5. Repeat steps 2 through 4 to create the second NIC.

Power on the SVM and configure network interfaces


Use these procedures to power on the SVMs and create interface configuration files for the newly added network adapters.

Configuring newly added network interface controllers for the SVMs

Steps
1. Log in to VMware vCenter using vSphere client.
2. Select the SVM, right-click Power > Power on.
3. Log in to SVM using PuTTY.
4. Create rep1 network interface, type: cp /etc/sysconfig/network-scripts/ifcfg-eth0 /etc/sysconfig/
network-scripts/ifcfg-eth5.
5. Create the rep2 network interface, type: cp /etc/sysconfig/network-scripts/ifcfg-eth0 /etc/sysconfig/network-scripts/ifcfg-eth6.
6. Edit newly created configuration files (ifcfg-eth5, ifcfg-eth6) using the vi editor and modify the entry for IPADDR,
NETMASK, GATEWAY, DEFROUTE, DEVICE, NAME and HWADDR, where:
● DEVICE is the newly created device of eth5 and eth6
● IPADDR is the IP address of the rep1 and rep2 networks
● NETMASK is the subnet mask
● GATEWAY is the gateway for the SDR external communication
● DEFROUTE change to no
● HWADDR=MAC address collected from the topic Adding virtual NICs to SVMs
● NAME=newly created device name for eth5 and eth6
NOTE: Ensure that the MTU value is set to 9000 for the SDR interfaces at both the primary and secondary sites, and on all devices in the end-to-end path. Confirm the existing MTU values with the customer and configure the interfaces accordingly. A sample configuration file follows these steps.
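
For example, a completed ifcfg-eth5 file for the rep1 network might look like the following. The IP address, netmask, gateway, and MAC address are illustrative only; use the values from the Workbook and the MAC address recorded when the network adapter was added:

BOOTPROTO=none
ONBOOT=yes
HOTPLUG=yes
TYPE=Ethernet
DEVICE=eth5
NAME=eth5
IPADDR=10.0.30.46
NETMASK=255.255.254.0
GATEWAY=10.0.30.1
DEFROUTE=no
MTU=9000
PEERDNS=no
NM_CONTROLLED=no
HWADDR=00:50:56:xx:xx:xx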


Adding a permanent static route for replication external networks


Use this procedure to create a permanent route.

Steps
1. Go to /etc/sysconfig/network-scripts and create a route-<interface> file for each replication interface, type:

#touch /etc/sysconfig/network-scripts/route-eth5
#touch /etc/sysconfig/network-scripts/route-eth6

2. Edit each file and add the appropriate network information.


For example, 10.0.10.0/23 via 10.0.30.1, where 10.0.10.0/23 is the network address and prefix length of the
remote or destination network. The IP address 10.0.30.1 is the gateway address leading to the remote network.
Sample file

/etc/sysconfig/network-scripts/route-eth5
10.0.10.0/23 via 10.0.30.1
/etc/sysconfig/network-scripts/route-eth6
10.0.20.0/23 via 10.0.40.1

3. Reboot the SVM, type: #reboot.


4. Ensure that all the changes persist after the reboot.
5. After the SVM is rebooted, ensure that all the interfaces are configured properly, type: #ifconfig or #ip a.
6. Verify that the new routes were added to the system, type: #netstat -rn.

Installing SDR RPMs on the SDS nodes (SVMs)


About this task
The SDR RPM must be installed on all SVMs at both the source and destination sites, but only if both sites have PowerFlex
hyperconverged nodes. The Storage Data Replicator (SDR) is responsible for processing all I/O of replicated volumes. All
application I/O of replicated volumes is processed by the source SDRs. At the source, application I/O is sent by the SDC
to the SDR. The I/Os are then sent to the target SDRs and stored in their journals. The target SDRs apply the I/Os from their
journals to the target volumes. A minimum of two SDRs are deployed at both the source and target systems to maintain high
availability. If one SDR fails, the MDM directs the SDC to send the I/Os to an available SDR.

Steps
1. Use WinSCP or SCP to copy the SDR package to the tmp folder.
2. SSH to the SVM and run the following command to install the SDR package: #rpm -ivh /tmp/EMC-ScaleIO-sdr-3.6-x.xxx.el7.x86_64.rpm.
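
Optionally, verify that the package is installed on each SVM before continuing; the version reported matches the package you copied:

#rpm -qa | grep -i EMC-ScaleIO-sdr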

Add the storage data replicator to PowerFlex nodes


Use this procedure to add the SDR to PowerFlex nodes.

Prerequisites
The IP address of the node must be configured for SDR. The SDR communicates with several components:
● SDC (application)
● SDS (storage)
● Remote SDR (external)

Steps
1. In the left pane, click Protection > SDRs.
2. In the right pane, click Add.
3. In the Add SDR dialog box, enter the connection information of the SDR:


a. Enter the SDR name.


b. Update the SDR Port, if required (default is 11088).
c. Select the relevant Protected Domain.
d. Enter the IP Address of the MDM that is configured for SDR.
e. Select Role External for the SDR to SDR external communication.
f. Select Role Application and Storage for the SDR to SDC and SDR to SDS communication.
g. Click ADD SDR to initiate a connection with the peer system.
4. Verify that the operation completed successfully and click Dismiss.
5. Modify the IP address role if required:
a. From the PowerFlex GUI, in the left pane, click Protection > SDRs.
b. In the right pane, select the relevant SDR check box, and click Modify > Modify IP Role.
c. In the <SDR name> Modify IPs Role dialog box, select the relevant role for the IP address.
d. Click Apply.
e. Verify that the operation completed successfully and click Dismiss.
6. Repeat both tasks Add journal capacity and Add Storage Data Replicator (SDR) to PowerFlex nodes for source and
destination.

Verify communication between the source and destination


Use this procedure to verify communication between the source and destination sites.

Steps
1. Log in to all the SVMs and PowerFlex nodes in source and destination sites.
2. Ping the following IP addresses from each of the SVM and PowerFlex nodes in source site:
● Management IP addresses of the primary and secondary MDMs
● External IP addresses configured for SDR-SDR communication
3. Ping the following IP addresses from each of the SVM and PowerFlex nodes in destination site:
● Management IP addresses of the primary and secondary MDMs
● External IP addresses configured for SDR-SDR communication
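
Optionally, because the SDR external interfaces are configured with an MTU of 9000, you can also confirm that jumbo frames pass end to end by sending a non-fragmented 8972-byte payload (9000 bytes minus 28 bytes of IP and ICMP headers). The address shown is a placeholder:

#ping -M do -s 8972 <remote SDR external IP address>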

Add the new SDS to PowerFlex


Use this procedure to add new SDS to PowerFlex.

Steps
1. If you are using a PowerFlex presentation server:
a. Log in to the PowerFlex presentation server.
b. Click Configuration > SDS and click Add.
c. On the Add SDS page, enter the SDS name and select the Protection Domain.
d. Under Add IP, enter the data IP address and click Add SDS.
e. Locate the newly added PowerFlex SDS, right-click and select Add Device.
f. Choose Storage device from the drop-down menu.
g. Locate the newly added PowerFlex SDS, right-click and select Add Device, and choose Acceleration Device from the
drop-down menu.
CAUTION: If the deployment fails for SSD or NVMe with NVDIMM, it can be due to any one of the
following reasons. Click View Logs and see Configuring the NVDIMM for a new PowerFlex hyperconverged
node for the node configuration table and the steps to add the SDS and NVDIMM to the FG pool.
● The following error appears if the required NVDIMM size and the RAM size of the SVM do not match the node
configuration table.

VMWARE_CANNOT_RETRIEVE_VM_MOR_ID


● If the deployment fails to add the device and SDS to the PowerFlex GUI, you should manually add the SDS and NVDIMM
to FG pool.
2. If you are using a PowerFlex version prior to 3.5:
a. Log in to the PowerFlex GUI, and click Backend > Storage.
b. Right-click the new protection domain, and select +Add > Add SDS.
c. Enter a name.
For example, 10.234.92.84-ESX.
d. Add the following addresses in the IP addresses field and click OK:
● flex-data1-<vlanid>
● flex-data2-<vlanid>
● flex-data3-<vlanid> (if required)
● flex-data4-<vlanid> (if required)
e. Add New Devices from the lsblk output from the previous step.
f. Select the storage pool destination and media type.
g. Click OK and wait for the green check box to appear and click Close.

Related information
Add drives to PowerFlex
Prepare the SVMs for replication

Add drives to PowerFlex


Use this procedure to add SSD or NVMe drives to PowerFlex.

Steps
1. If you are using PowerFlex GUI presentation server to enable zero padding on a storage pool:
a. Connect to the PowerFlex GUI presentation server through https://<ipaddress>:8443 using MDM.
b. Click Storage Pools from the left pane, and select the storage pool.
c. Click Settings from the drop-down menu.
d. Click Modify > General Settings from the drop-down menu.
e. Click Enable Zero Padding Policy > Apply.
NOTE: After the first device is added to a specific pool, you cannot modify the zero padding policy. FG pool is always
zero padded. By default, zero padding is disabled only for MG pool.

2. If you are using a PowerFlex version prior to 3.5 to enable zero padding on a storage pool:
a. Select Backend > Storage, and right-click Select By Storage Pools from the drop-down menu.
b. Right-click the storage pool, and click Modify zero padding policy.
c. Select Enable Zero Padding Policy, and click OK > Close.
NOTE: Zero padding cannot be enabled when devices are available in the storage pools.

3.
If CloudLink is... | Do this...
Enabled | See one of the following procedures, depending on the devices:
  ● Encrypt PowerFlex hyperconverged (SVM) or storage-only devices (SED drives)
  ● Encrypt PowerFlex hyperconverged (SVM) or storage-only devices (non-SED drives)
Disabled | Use PuTTY to access the Red Hat Enterprise Linux or embedded operating system node.

When adding NVMe drives, keep a separate storage pool for the PowerFlex storage-only node.

4. Note the devices by typing lsblk -p or nvme list.


5. If you are using PowerFlex GUI presentation server:


a. Connect to the PowerFlex GUI presentation server through https://<ipaddress>:8443 using MDM.
b. Click Configuration > SDSs.
c. Locate the newly added PowerFlex SDS, right-click, select Add Device, and choose Storage device from the drop-
down menu.
d. Type /dev/nvmeXXn1, where XX is the value from step 4. Provide the storage pool, verify the device type, and click Add
Device. Add all the required devices in the same way, and click Add Devices.
NOTE: If the devices are not added, ensure that you select Advanced Settings > Advanced Takeover on the
Add Device Storage page.

e. Repeat steps 5a to 5d on all the SDSs where you want to add the devices.
f. Ensure that all the rebuild and balance activities are successfully completed.
g. Verify the space capacity after adding the new node.
6. If you are using a PowerFlex version prior to 3.5:
a. Connect to the PowerFlex GUI.
b. Click Backend.
c. Locate the newly added PowerFlex SDS, right-click, and select Add Device.
d. Type /dev/nvmeXXn1, where XX is the value from step 4.
e. Select the Storage Pool, as identified in the Workbook.


NOTE: If the existing protection domain has Red Hat Enterprise Linux nodes, replace or expand with Red Hat
Enterprise Linux. If the existing protection domain has embedded operating system nodes, replace or expand with
embedded operating system.

f. Repeat steps 6a to 6e for each device.


g. Click OK > Close.
A rebalance of the PowerFlex storage-only node begins.

Related information
Add the new SDS to PowerFlex

Configuring the NVDIMM for a new PowerFlex hyperconverged node
Verify that the VMware ESXi host recognizes the NVDIMM
Use this procedure to ensure that the NVDIMM is recognized.

Prerequisites
● Ensure that the NVDIMM firmware on the new node is the same version as on the existing systems in the cluster.
● If NVDIMM firmware is higher than the Intelligent Catalog version, you must manually downgrade NVDIMM firmware.
● The VMware ESXi host and the VMware vCenter server are using version 6.7 or higher.
● The VM version of your SVM is version 14 or higher.
● The firmware of the NVDIMM is version 9324 or higher.
● The VMware ESXi host recognizes the NVDIMM.

Steps
1. Log in to the VMware vCenter.
2. Select the VMware ESXi host.
3. Go to the Summary tab.
4. In the VM Hardware section, verify that the required amount of persistent memory is listed.

Add NVDIMM
Use this procedure to add an NVDIMM.

Steps
1. Using the PowerFlex GUI, perform the following to enter the target SDS into maintenance mode:

If you are using a... Do this...


PowerFlex GUI presentation server a. Log in to PowerFlex GUI presentation server.
b. From Configuration, select the SDS.
c. Click MORE, select Enter maintenance mode >
Protected, and click Enter maintenance mode.
PowerFlex version prior to 3.5 a. Log in to the PowerFlex GUI.
b. Select the SDS from the Backend tab of the VMware
host.
c. Right-click the SDS, select Enter maintenance mode,
and click OK.


NOTE: For the new PowerFlex nodes with NVMe or SSD, remove the SDS or device if it is added to the GUI before
placing the SDS into maintenance mode. Skip this step if the SDS is not added to the GUI.

2. Use VMware vCenter to shut down the SVM.


3. Add the NVDIMM device to the SVM:
a. Edit the SVM settings.
b. Add an NVDIMM device.
c. Set the required NVDIMM device size.
d. Click OK.
4. Increase the RAM size according to the following capacity configuration table:

NVDIMM capacity is delivered in 16 GB units and must be added in pairs. The additional services memory (MDM: 6.5 GiB, LIA: 350 MiB, OS base: 1 GiB, buffer: 2 GiB, CloudLink: 4 GiB) totals 9.85 GiB without CloudLink or 13.85 GiB with CloudLink, and is already included in the totals below.

FG capacity | NVDIMM capacity required | NVDIMM capacity delivered | Required FG RAM capacity | Total RAM required in the SVM (no CloudLink) | Total RAM required in the SVM (with CloudLink)
9.3 TiB | 8 GB | 2 x 16 GB = 32 GB | 17 GiB | 27 GiB | 31 GiB
19.92 TiB | 15 GB | 2 x 16 GB = 32 GB | 22 GiB | 32 GiB | 36 GiB
38.84 TiB | 28 GB | 2 x 16 GB = 32 GB | 32 GiB | 42 GiB | 46 GiB
22.5 TiB | 18 GB | 2 x 16 GB = 32 GB | 25 GiB | 35 GiB | 39 GiB
46.08 TiB | 34 GB | 4 x 16 GB = 64 GB | 38 GiB | 48 GiB | 52 GiB
92.16 TiB | 66 GB | 6 x 16 GB = 96 GB | 62 GiB | 72 GiB | 76 GiB

NOTE: If the capacity does not match the configuration table, use the following formulas to calculate the NVDIMM and RAM capacity for Fine Granularity. The calculation is in binary MiB, GiB, and TiB. Round the RAM size up to the next GiB. For example, if the output of the equation is 16.75 GiB, round it up to 17 GiB. A worked example follows these steps.

NVDIMM_capacity_in_GiB = ((100*Number_of_drives) + (700*Capacity_in_TiB))/1024

RAM_capacity_in_GiB = 10 + ((100*Number_of_drives) + (550*Capacity_in_TiB))/1024

5. In Edit Settings, change the Memory size as per the node configuration table, and select the Reserve all guest memory
(All locked) check box.

6. Right-click the SVM and choose Edit Settings. Set the SVM to 8 or 12 vCPUs (8 or 12 sockets, 8 or 12 cores); for CloudLink,
an additional 4 threads are required.
7. Use VMware vCenter to turn on the SVM.


8. Using the PowerFlex GUI, remove the SDS from maintenance mode.
9. Create a namespace on the NVDIMM:
a. Connect to the SVM using SSH and type # ndctl create-namespace -f -e namespace0.0 --mode=dax
--align=4K.
10. Perform steps 3 to 8 for every PowerFlex node with NVDIMM.
11. Create an acceleration pool for the NVDIMM devices:
a. Connect using SSH to the primary MDM, type #scli --add_acceleration_pool --
protection_domain_name <PD_NAME> --media_type NVRAM --acceleration_pool_name
<ACCP_NAME> in the SCLI to create the acceleration pool.
NOTE: Use this step only when you want to add the new PowerFlex node to the new acceleration pool. Otherwise,
skip this step and go to the step to add SSD or NVMe device.

b. For each SDS with NVDIMM, type #scli --add_sds_device --sds_name <SDS_NAME> --
device_path /dev/dax0.0 --acceleration_pool_name <ACCP_NAME> --force_device_takeover to
add the NVDIMM devices to the acceleration pool:

NOTE: Use this step only when you want to add the new acceleration device to a new acceleration pool. Otherwise,
skip this step and go to the step to add SSD or NVMe device.

12. Create a storage pool for SSD devices accelerated by an NVDIMM acceleration pool with Fine Granularity data layout:
a. Connect using SSH to the primary MDM and enter #scli --add_storage_pool --protection_domain_name
<PD_NAME> --storage_pool_name <SP_NAME> --media_type SSD --compression_method normal
--fgl_acceleration_pool_name <ACCP_NAME> --fgl_profile high_performance --data_layout
fine_granularity.
NOTE: Use this step only when you want to add the new PowerFlex node to a new storage pool. Otherwise, skip this
step and go to the step to add SSD or NVMe device.

13. Add the SSD or NVMe device to the existing Fine Granularity storage pool using the PowerFlex GUI.

If you are using a... Do this...


PowerFlex GUI presentation server a. Log in to the PowerFlex GUI.
b. Click Dashboard > Configuration > SDSs
c. Click Add device > Acceleration Device.
d. In the Add acceleration device to SDS dialog box,
enter the device path in the Path field, and device name
in the Name field.
e. Select the Acceleration Pool, you recorded in the drive
information table.
f. Click Add device.
g. Expand ADVANCED (OPTIONAL).
h. Select Force Device Take Over as YES.
i. Click Add devices.
PowerFlex version prior to 3.5 a. Connect to the PowerFlex GUI.
b. Right-click the newly added SDS.
c. Click Add device and choose the storage pool or
acceleration device that was created in the previous
steps and acceleration pool. Expand Advance Settings,
and choose Force Device Take Over.

14. Set the spare capacity for the fine granularity storage pool.
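
As a worked example of the NVDIMM and RAM capacity formulas in step 4, assume (for illustration only) a node with 10 drives providing 19.92 TiB of FG capacity:

NVDIMM_capacity_in_GiB = ((100*10) + (700*19.92))/1024 = 14944/1024 ≈ 14.6 GiB, so 15 GB of NVDIMM capacity is required
RAM_capacity_in_GiB = 10 + ((100*10) + (550*19.92))/1024 = 10 + 11956/1024 ≈ 21.7 GiB, rounded up to 22 GiB of FG RAM
Total SVM RAM = 22 GiB + 9.85 GiB ≈ 32 GiB without CloudLink, or 22 GiB + 13.85 GiB ≈ 36 GiB with CloudLink

These results match the 19.92 TiB row in the capacity configuration table.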
When finished, if you are not extending the MDM cluster, see Completing the expansion.


Extend the MDM cluster from three to five nodes


If the customer is scaling their environment, you should extend the MDM cluster from three to five nodes.

Extend the MDM cluster from three to five nodes using SCLI
Use this procedure to extend the MDM cluster using SCLI.
It is critical that the MDM cluster is distributed across access switches and physical cabinets to ensure maximum resiliency and
availability of the cluster. The location of the MDM components should be checked and validated during every engagement,
and adjusted if found noncompliant with the published guidelines. If an expansion includes adding physical cabinets and access
switches, you should relocate the MDM cluster components. See MDM cluster component layouts for more information.
When adding new MDM or tiebreaker nodes to a cluster, first place the PowerFlex storage-only nodes (if available), followed by
the PowerFlex hyperconverged nodes.

Prerequisites
● Identify new nodes to use as MDM or tiebreaker.
● Identify the management IP address, data1 IP address, and data2 IP address (log in to each new node or SVM and run the ip addr command).
● Gather virtual interfaces for the nodes being used for the new MDM or tiebreaker, and note the interface of data1 and data2.
For example, for a PowerFlex storage-only node, the interface is bond0.152 and bond1.160. If it is an SVM, it is eth3 and
eth4.
● Identify the primary MDM.

Steps
1. SSH to each new node or SVM and assign the proper role (MDM or tiebreaker) to each.
2. Transfer the MDM and LIA packages to the newly identified MDM cluster nodes.
NOTE: The following steps contain sample versions of PowerFlex files as examples only. Use the appropriate PowerFlex
files for your deployment.

3. To install the LIA, enter TOKEN=<flexos password> rpm -ivh EMC-ScaleIO-lia-3.x-x.xxx.el7.x86_64.rpm.
4. To install the MDM service:
● For the MDM role, enter MDM_ROLE_IS_MANAGER=1 rpm -ivh EMC-ScaleIO-mdm-3.x-
x.xxx.el7.x86_64.rpm
● For the tiebreaker role, enter MDM_ROLE_IS_MANAGER=0 rpm -ivh EMC-ScaleIO-mdm-3.x-x.xxx.el7.x86_64.rpm.
5. Open an SSH terminal to the primary MDM and log in to the operating system.
6. Log in to PowerFlex by entering scli --login --username admin --password <powerflex password>.
7. Enter scli --query_cluster to query the cluster. Verify that it is in three node cluster mode.
8. Add a new MDM by entering scli --add_standby_mdm --mdm_role manager --new_mdm_ip <new MDM data1,data2 IPs> --new_mdm_management_ip <MDM management IP> --new_mdm_virtual_ip_interfaces <list both interfaces, comma separated> --new_mdm_name <new MDM name>. (Sample commands follow these steps.)

9. Add a new tiebreaker by entering scli --add_standby_mdm --mdm_role tb --new_mdm_ip <new TB data1,data2 IPs> --new_mdm_name <new TB name>.

10. Enter scli --query_cluster to find the ID for the newly added Standby MDM and the Standby TB.
11. To switch to five node cluster, enter scli --switch_cluster_mode --cluster_mode 5_node --
add_slave_mdm_id <Standby MDM ID> --add_tb_id <Standby tiebreaker ID>

12. Repeat steps 1 to 9 to add Standby MDM and tiebreakers on other PowerFlex nodes.
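
For example, the commands in steps 8 through 11 might look like the following. The IP addresses, interface names, and MDM names are hypothetical; replace them with the values gathered in the prerequisites, and use the IDs returned by scli --query_cluster:

scli --add_standby_mdm --mdm_role manager --new_mdm_ip 192.168.152.25,192.168.160.25 --new_mdm_management_ip 192.168.101.25 --new_mdm_virtual_ip_interfaces bond0.152,bond1.160 --new_mdm_name MDM-04
scli --add_standby_mdm --mdm_role tb --new_mdm_ip 192.168.152.26,192.168.160.26 --new_mdm_name TB-03
scli --query_cluster
scli --switch_cluster_mode --cluster_mode 5_node --add_slave_mdm_id <Standby MDM ID> --add_tb_id <Standby tiebreaker ID>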


Redistribute the MDM cluster


Use this procedure to redistribute the MDM cluster manually.
It is critical that the MDM cluster is distributed across access switches and physical cabinets to ensure maximum resiliency and
availability of the cluster. The location of the MDM components should be checked and validated during every engagement,
and adjusted if found noncompliant with the published guidelines. If an expansion includes adding physical cabinets and access
switches, you should relocate the MDM cluster components. See MDM cluster component layouts for more information.
When adding new MDM or tiebreaker nodes to a cluster, first place the PowerFlex storage-only nodes (if available), followed by
the PowerFlex hyperconverged nodes.

Prerequisites
● Identify new nodes to use as MDM or tiebreaker.
● Identify the management IP address, data1 IP address, and data2 IP address (log in to each new node or SVM and run the ip addr command).
● Gather virtual interfaces for the nodes being used for the new MDM or tiebreaker, and note the interface of data1 and data2.
For example, for a PowerFlex storage-only node, the interface is bond0.152 and bond1.160. If it is an SVM, it is eth3 and
eth4.
● Identify the primary MDM.

Steps
1. SSH to each new node or SVM and assign the proper role (MDM or tiebreaker) to each.
2. Transfer the MDM and LIA packages to the newly identified MDM cluster nodes.
NOTE: The following steps contain sample versions of PowerFlex files as examples only. Use the appropriate PowerFlex
files for your deployment.

3. To install the LIA, enter TOKEN=<flexos password> rpm -ivh EMC-ScaleIO-lia-3.x-x.xxx.el7.x86_64.rpm.
4. To install the MDM service:
● For the MDM role, enter MDM_ROLE_IS_MANAGER=1 rpm -ivh EMC-ScaleIO-mdm-3.x-
x.xxx.el7.x86_64.rpm
● For the tiebreaker role, enter MDM_ROLE_IS_MANAGER=0 rpm -ivh EMC-ScaleIO-mdm-3.x-x.xxx.el7.x86_64.rpm.
5. Open an SSH terminal to the primary MDM and log in to the operating system.
6. Log in to PowerFlex by entering scli --login --username admin --password <powerflex password>.
7. Add a new standby MDM by entering scli --add_standby_mdm --mdm_role manager --new_mdm_ip <new MDM data1,data2 IPs> --new_mdm_management_ip <MDM management IP> --new_mdm_virtual_ip_interfaces <list both interfaces, comma separated> --new_mdm_name <new MDM name>.

8. Add a new standby tiebreaker by entering scli --add_standby_mdm --mdm_role tb --new_mdm_ip <new TB data1,data2 IPs> --new_mdm_name <new TB name>.

9. Repeat Steps 7 and 8 for each new MDM and tiebreaker that you are adding to the cluster.
10. Enter scli --query_cluster to find the ID for the current MDM and tiebreaker. Note the IDs of the MDM and tiebreaker being replaced.
11. To replace the MDM, enter scli --replace_cluster_mdm --add_slave_mdm_id <mdm id to add> --
remove_slave_mdm_id <mdm id to remove>.
Repeat this step for each MDM.
12. To replace the tiebreaker, enter scli --replace_cluster_mdm --add_tb_id <tb id to add> --
remove_tb_id <tb id to remove>.
Repeat this step for each tiebreaker.
13. Enter scli --query_cluster to find the IDs for the MDMs and tiebreakers being removed.
14. Using IDs to remove the old MDM, enter scli --remove_standby_mdm --remove_mdm_id <mdm id to
remove>.


NOTE: This step might not be necessary if this MDM remains in service as a standby. See MDM cluster component
layouts for more information.

15. To remove the old tiebreaker, enter scli --remove_standby_mdm --remove_mdm_id <mdm id to remove>.

NOTE: This step might not be necessary if this tiebreaker remains in service as a standby. See MDM cluster component
layouts for more information.

16. Repeat steps 1 to 12, as needed.

Related information
Redistribute the MDM cluster using PowerFlex Manager

MDM cluster component layouts


This topic provides examples of layouts for MDM components in a PowerFlex appliance with two to five cabinets.
The Metadata Manager (MDM) cluster contains the following components:
● Primary MDM
● Secondary MDM 2 and 3
● Tiebreaker 1 and 2
When a PowerFlex appliance contains multiple cabinets, distribute the MDM components to maximize resiliency.
Distribute the primary MDM, secondary MDMs, and tiebreakers across physical cabinets and access switch pairs to ensure
maximum availability of the cluster. When introducing new or standby MDM components into the cluster, make sure you adhere
to the MDM redistribution methodology and select your hosts appropriately, so the cluster remains properly distributed across
the physical cabinets and access switch pairs.
The following illustrations provide examples of MDM component layouts for two to five cabinets:
● MDM cluster component layout for a two-cabinet PowerFlex appliance

● MDM cluster component layout for a three-cabinet PowerFlex appliance


● MDM cluster component layout for a four-cabinet PowerFlex appliance

● MDM cluster component layout for a five-cabinet PowerFlex appliance


Add a PowerFlex node to a PowerFlex Manager service in lifecycle mode
Use this procedure to update the inventory and service details in PowerFlex Manager for a new PowerFlex node.

Steps
1. Update the inventory for vCenter (vCSA), switches, gateway VM, and nodes:
a. Click Resources on the home screen.
b. Select the vCenter, switches (applicable only for full networking), gateway VM, and newly added nodes.
c. Click Run Inventory.
d. Click Close.
e. Wait for the job in progress to complete.
2. Update Services details:
a. Click Services.
b. Choose the service on which the new node is expanded and click View Details.
c. On the Service Details screen, choose Update Service Details.
d. Choose the credentials for the node and SVM and click Next.
e. On the Inventory Summary, verify that the newly added nodes appear under Physical Node, and click Next.
f. On the Summary page, verify the details and click Finish.

Configuring the NSX-T Ready tasks


This section describes how to configure the PowerFlex nodes as part of preparing the PowerFlex appliance for NSX-T. Before
you configure the ESXi hosts as NSX-T transport nodes, you must add the transport distributed port groups and convert the
distributed switch from LACP to individual trunks.
NOTE: If you configure VMware NSX-T on PowerFlex hyperconverged or compute-only nodes and add them to PowerFlex
Manager, the services will be in lifecycle mode. If you need to perform an expansion on such a node, see Adding a
PowerFlex R640/R740xd/R840 node to a PowerFlex Manager service in lifecycle mode to add the PowerFlex node.
Contact VMware Support to configure VMware NSX-T on a new PowerFlex node and see Add PowerFlex nodes to a
service to update the service details.


Configure VMware NSX-T overlay distributed virtual port group


Use this procedure to create and configure the VMware NSX-T overlay distributed virtual port group on cust_dvswitch.

Prerequisites
Ensure that the VMware vSphere vCenter Server and the VMware vSphere Client are accessible.

Steps
1. Log in to the VMware vSphere Client.
2. Click Networking.
3. Expand the PowerFlex Customer-Datacenter.
4. Right-click cust_dvswitch.
5. Click Distributed Port Group > New Distributed Port Group.
6. Update the name to pfmc-nsx-transport-121 and click Next.
7. Select the default Port binding.
8. Select the default Port allocation.
9. Select the default # of ports (default is 8).
10. Select the default VLAN as VLAN Type.
11. Set the VLAN ID to 121.
12. Clear the Customize default policies configuration check box and click Next.
13. Click Finish.
14. Right-click pfmc-nsx-transport-121 and click Edit Settings.
15. Click Teaming and failover.
16. Verify that Uplink1 and Uplink2 are moved to Active.
17. Click OK.

Convert trunk access to LACP-enabled switch ports for cust_dvswitch
Use this procedure to convert the physical NICs from trunk to LACP without losing network connectivity. Use this option only if
cust_dvswitch is configured as trunk. LACP is the default configuration for cust_dvswitch.

Prerequisites
Both Cisco Nexus access switch ports for the compute VMware ESXi hosts are configured as trunk access. These ports will be
configured as LACP enabled after the physical adapter is removed from each ESXi host.
WARNING: As the VMK0 (ESXi management) is not configured on cust_dvswitch, both the vmnics are first
migrated to the LAGs simultaneously and then the port channel is configured. Data connectivity to PowerFlex is
lost until the port channels are brought online with both vmnic interfaces connected to LAGs.

About this task


This procedure includes reconfiguring one port at a time as LACP without requiring any migration of the VMkernel network
adapters.

Steps
1. Log in to the VMware vSphere Client.
2. Look at VMware vCenter and physical switches to ensure that both ports across all hosts are up.
3. For each compute VMware ESXi host, record the physical switch ports to which vmnic5 (switch-B) and vmnic7 (switch-A)
are connected.
a. Click Home, then select Hosts and Clusters and expand the compute cluster.
b. Select the first compute ESXi host in left pane, and then select Configure tab in right pane.


c. Select Virtual switches under Networking.


d. Expand cust_dvswitch.
e. Expand Uplink1, click the ellipsis (…) for vmnic7, and select View Settings.
f. Click the LLDP tab.
g. Record the Port ID (switch port) and System Name (switch).
h. Repeat substeps e through g for vmnic5 on Uplink 2.
4. Configure LAG (LACP) on cust_dvswitch within VMware vCenter Server:
a. Click Home, then select Networking.
b. Expand the compute cluster and click cust_dvswitch > Configure > LACP.
c. Click +New to open wizard.
d. Verify that the name is lag1.
e. Verify that the number of ports is 2.
f. Verify that the mode is Active.
g. Change Load Balancing mode to Source and destination IP address, TCP/UDP port.
h. Click OK.
5. Migrate vmnic5 to lag1-0 and vmnic7 to lag1-1 on cust_dvswitch for the compute VMware ESXi host as follows:
a. Click Home, then select Networking and expand the PowerFlex data center.
b. Right-click cust_dvswitch and select Manage host networking to open wizard.
c. Select Add hosts... and click Next.
d. Click Attached hosts..., select all the compute ESXi hosts, and click OK.
e. Click Next.
f. For each ESXi host, select vmnic5 and click Assign uplink.
g. Click lag1-0 and click OK.
h. For each ESXi host, select vmnic7 and click Assign uplink.
i. Click lag1-1 and click OK.
j. Click Next > Next > Next > Finish.
6. Create port-channel (LACP) on switch-A for compute VMware ESXi host.
The following switch configuration is an example of a single compute VMware ESXi host.
a. Open an SSH session to switch-A using PuTTY or a similar SSH client.
b. Create a port channel on switch-A for each compute VMware ESXi host as follows:

interface port-channel40
description to flex-compute-esxi-host01
switchport
switchport mode trunk
switchport trunk allowed vlan 151,152,153,154
spanning-tree port type edge trunk
spanning-tree bpduguard enable
speed 25000
no lacp suspend-individual
vpc 40

7. Configure channel-group (LACP) on switch-A access port (vmnic5) for each compute VMware ESXi host.
The following switch port configuration is an example of a single compute VMware ESXi host.
a. Open an SSH session to switch-A using PuTTY or a similar SSH client.
b. Create port on switch-A as follows:

int e1/1/1
description to flex-compute-esxi-host01 – vmnic5
switchport
switchport mode trunk
switchport trunk allowed vlan 151,152,153,154
spanning-tree port type edge trunk
spanning-tree bpduguard enable
spanning-tree guard root
speed 25000
channel-group 40 mode active

8. Create port-channel (LACP) on switch-B for compute VMware ESXi host.


The following switch configuration is an example of a single compute VMware ESXi host.


a. Open an SSH session to switch-B using PuTTY or a similar SSH client.
b. Create a port channel on switch-B for each compute VMware ESXi host as follows:

interface port-channel40
description to flex-compute-esxi-host01
switchport
switchport mode trunk
switchport trunk allowed vlan 151,152,153,154
spanning-tree port type edge trunk
spanning-tree bpduguard enable
speed 25000
no lacp suspend-individual
vpc 40

9. Configure channel-group (LACP) on switch-B access port (vmnic7) for each compute VMware ESXi host.
The following switch port configuration is an example of a single compute VMware ESXi host.
a. Open an SSH session to switch-B using PuTTY or a similar SSH client.
b. Create port on switch-B as follows:

int e1/1/1
description to flex-compute-esxi-host01 – vmnic7
switchport
switchport mode trunk
switchport trunk allowed vlan 151,152,153,154
spanning-tree port type edge trunk
spanning-tree bpduguard enable
spanning-tree guard root
speed 25000
channel-group 40 mode active

10. Update the teaming and failover policy to route based on IP hash for each port group within cust_dvswitch (a sample switch-side verification follows these steps):
a. Click Home and select Networking.
b. Expand cust_dvswitch to have all port group in view.
c. Right-click flex-data-01 and select Edit Settings.
d. Click Teaming and failover.
e. Change Load Balancing mode to Route based on IP hash.
f. Repeat steps 10b to 10e for each remaining port group.
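
After the port channels are configured on both switches and the uplinks are moved to the LAGs, you can optionally confirm from each access switch that the port channel and vPC are healthy. These are standard Cisco NX-OS show commands; port-channel 40 follows the earlier example:

show port-channel summary
show vpc brief

In the port-channel summary output, the member interfaces should be flagged (P), indicating that they are bundled in the port channel.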

Add the VMware NSX-T service using PowerFlex Manager


Use this procedure only if the PowerFlex nodes are added to the NSX-T environment.

Prerequisites
NOTE: Before adding a VMware NSX-T service using PowerFlex Manager, either the customer or VMware services must
add the new PowerFlex node to NSX-T Data Center using NSX-T UI.
Consider the following:
● Before adding this service Update service details in PowerFlex Manager, verify that the NSX-T Data Center is configured
on the PowerFlex hyperconverged or compute-only nodes.
● If the transport nodes (PowerFlex cluster) are configured with NSX-T, you cannot replace the field units using PowerFlex
Manager. You must add the node manually by following either of these procedures depending on the node type:
○ Performing a PowerFlex hyperconverged node expansion
○ Performing a PowerFlex compute-only node expansion

Steps
1. Log in to PowerFlex Manager.
2. If NSX-T Data Center 3.0 or higher is deployed and is using VDS (not N-VDS), then add the transport network:
a. From Getting Started, click Define Networks.
b. Click + Define and do the following:


NSX-T information Values


Name Type NSX-T Transport

Description Type Used for east-west traffic

Network Type Select General Purpose LAN


VLAN ID Type 121. See the Workbook.

Configure Static IP Address Ranges Select the Configure Static IP Address Ranges check box. Type
the starting and ending IP address of the transport network IP pool

c. Click Save > Close.


3. From Getting Started, click Add Existing Service and do the following:
a. On the Welcome page, click Next.
b. On the Service Information page, enter the following details:

Service Information Details


Name Type NSX-T Service

Description Type Transport Nodes

Type Type Hyperconverged or Compute-only

Firmware and software compliance Select the Intelligent Catalog version


Who should have access to the service deployed Leave as default
from this template?

c. Click Next.
d. On the Network Information page, select Full Network Automation, and click Next.
e. On the Cluster Information page, enter the following details:

Cluster Information Details


Target Virtual Machine Manager Select vCSA name
Data Center Name Select data center name
Cluster Name Select cluster name
Target PowerFlex gateway Select PowerFlex gateway name
Target Protection Domain Select PD-1
OS Image Select the ESXi image

f. Click Next.
g. On the OS Credentials page, select the OS credentials for each node, and click Next.
h. On the Inventory Summary page, review the summary and click Next.
i. On the Networking Mapping page, verify that the networks are aligned with the correct dvSwitch.
j. On the Summary page, review the summary and click Finish.
4. Verify PowerFlex Manager recognizes NSX-T is configured on the nodes:
a. Click Services.
b. Select the hyperconverged or compute-only service.
c. Verify that a banner appears under the Service Details tab, stating that NSX-T is configured on a node and is
preventing some features from being used. If you do not see this banner, verify that you selected the correct service
and that NSX-T is configured on the hyperconverged or compute-only nodes.


59
Encrypting PowerFlex hyperconverged (SVM) or storage-only node devices (SED or non-SED drives)

Encrypt PowerFlex hyperconverged (SVM) or storage-only devices (SED drives)
Use this procedure to encrypt PowerFlex hyperconverged (SVM) or storage-only devices (SED drives).

Prerequisites

NOTE: This procedure is not applicable for PowerFlex storage-only nodes with NVMe drives.

Ensure that the following prerequisites are met:


● If you are using PowerFlex presentation server, see Editing the SVM configuration for the CPU settings.
● If you are using PowerFlex versions prior to 3.6, verify the Storage VM (SVM) and CPU settings:
1. Log in to the customer VMware vCenter.
2. Right-click SVM > Edit Settings.
3. Verify the SVM vCPU is set to 12 (one socket and twelve cores), and RAM is set to 16 GB (applicable for MG pool
enabled system only). If you have an FG pool enabled system, change the RAM size based on the node configuration
table specified in Add NVDIMM .
● SSH to the SVM or the PowerFlex storage-only node on which you plan to have the encrypted devices.
● Download and install the CloudLink agent by entering the following command:

curl -O http://cloudlink_ip/cloudlink/securevm && sh securevm -S cloudlink_ip

where cloudlink_ip is the IP address of one of the CloudLink Center VMs.


NOTE: The preceding command downloads and installs the CloudLink agent on SVMs or PowerFlex storage-only nodes,
and then adds the machine (SVM or PowerFlex storage-only nodes) into the default machine group.

● If you want to add the SVM to a specific machine group, use the -G [group_code] argument with the preceding
command, where group_code is the registration code for the machine group to which you want to assign the machine.
(A sample command follows these prerequisites.)

NOTE: To obtain the registration code of the machine group, log in to the CloudLink Center using a web browser.
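
For example, registering an SVM into a specific machine group might look like the following; the CloudLink Center IP address and registration code are placeholders:

curl -O http://192.168.105.120/cloudlink/securevm && sh securevm -S 192.168.105.120 -G <group_registration_code>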

Steps
1. Open a browser, and provide the CloudLink Center IP address.
2. In the Username box, enter secadmin.
3. In the Password box, enter the secadmin password.
4. Click Agents > Machines.
5. Ensure that the hostname of the new SVM or PowerFlex storage-only node is listed, and is in Connected state.


6. If the SDS has devices that are added to PowerFlex, remove the devices. Otherwise, skip this step.

If you are using a... Do this...


PowerFlex GUI presentation server a. Log in to the PowerFlex GUI presentation server.
b. Click Dashboard > Configuration > Devices.
c. Enter the SDS name in the search box, and select the
device by clicking on the check box.
d. Click More, and then select Remove.
e. Click Remove.
PowerFlex version prior to 3.5 a. Log in to the PowerFlex GUI.
b. Click Backend.
c. Locate the required SDS.
d. Right-click each device, and then click Remove.
NOTE: Repeat this step for each of the device
added to PowerFlex. It might take some time to
remove all the devices.

7. From the SSH session:


a. Enter #svm status to view the encrypted devices.

b. Enter svm manage /dev/sdX to take control of the encrypted devices.


where X is the device letter.

c. Enter #svm status to view the status of the devices.


NOTE: If the device shows taking control, run #svm status until the device status shows as managed. It
is a known issue that the CLI status of SED drives shows as unencrypted, whereas CloudLink Center UI shows the
device status as Encrypted HW.

d. Log in to CloudLink Center.


e. Click Agents > Machines. Ensure that the status of the newly added machines is Connected. Select each newly added
machine and verify that the status of the devices and the SED is Encrypted HW and Managed, respectively.

NOTE: There are no /dev/mapper devices for SEDs. Use the device name listed in the svm status command. It
is recommended to add self-encrypting drives (SEDs) to their own storage pools.

f. Once all SED drives are Managed, add the encrypted devices to the PowerFlex SDS.

If you are using a... Do this...


PowerFlex GUI presentation server a. Log in to the PowerFlex GUI presentation server.
b. Click Dashboard > Configuration > SDSs.
c. Select the SDS to add the disk.
d. Click Add Device > Storage Device.
e. In the Add storage device to SDS dialog box, enter the
device path in the Path field, and device name in the
Name field.
f. Select the storage pool and media type you recorded in
the drive information table.
g. Click Add Devices.
PowerFlex version prior to 3.5 a. Log in to the PowerFlex GUI.
b. Click Backend.
c. Locate the PowerFlex SDS, right-click, and then click
Add Device.
d. In Add device to SDS, enter the Path and select the
Storage Pool for each device.


If you are using a... Do this...


If the PowerFlex storage-only node has only SSD disks,
then the path is /dev/mapper/svm_sdX where X is
the device you have managed.
e. Repeat the substeps a-d for all SDS nodes. Wait for
PowerFlex rebalance to finish.

8. Ensure that rebalance is running and progressing before continuing to another SDS.

Related information
Verify the CloudLink license

Encrypt PowerFlex hyperconverged (SVM) or storage-only devices (non-SED drives)
Use this procedure to encrypt PowerFlex hyperconverged (SVM) or storage-only devices.

Prerequisites
Ensure that the following prerequisites are met:
● If you are using PowerFlex presentation server, see Modifying the vCPU, memory, vNUMA, and CPU reservation settings on
SVMs for the CPU settings.
● If you are using PowerFlex versions prior to 3.6, the Storage VM (SVM) vCPU is set to 12 (one socket and twelve cores),
and RAM is set to 16 GB (applicable for MG pool enabled system only). If you have an FG pool enabled system, change the
RAM size based on the node configuration table specified in Add NVDIMM
● SSH to the SVM or the PowerFlex storage-only node on which you plan to have the encrypted devices.
● Download and install the CloudLink Agent by entering:
curl -O http://cloudlink_ip/cloudlink/securevm && sh securevm -S cloudlink_ip

where cloudlink_ip is the IP address of one of the CloudLink Center VMs.


NOTE: The preceding command downloads and installs the CloudLink agent on SVMs or PowerFlex storage-only nodes,
and then adds the machine (SVM or storage-only node) into the default machine group.

● If you want to add the SVM into a specific machine group, use the -G [group_code] argument with the preceding
command.
where -G group_code specifies the registration code for the machine group to which you want to assign the machine.

NOTE: To obtain the registration code of the machine group, log in to the CloudLink Center using a web browser.

Steps
1. Open a browser, and enter the CloudLink Center IP address.
2. In the Username box, enter secadmin.
3. In the Password box, enter the secadmin password.
The CloudLink Center home page is displayed.
4. Click Agents > Machines.
5. Ensure that the hostname of the new SVM or PowerFlex storage-only node is listed, and is in Connected state.


6. On the SVM or PowerFlex storage-only node, edit the /opt/emc/extra/pre_run.sh file, type: vi /opt/emc/extra/pre_run.sh

NOTE: Ensure that the Storage Data Server (SDS) is installed before the CloudLink agent is installed.

In the /opt/emc/extra/pre_run.sh file, add sleep 60 before the last line if it does not already exist.

7. If the SDS has devices that are added to PowerFlex, remove the devices. Otherwise, skip this step.

If you are using a... Do this...


PowerFlex GUI presentation server a. Log in to the PowerFlex GUI presentation server.
b. Click Dashboard > Configuration > Devices.
c. Type the SDS in the search box, and select the device.
d. Click More, and select Remove.
e. Click Remove.
PowerFlex version prior to 3.5 a. Log in to the PowerFlex GUI.
b. Click Backend.
c. Locate the required SDS.
d. Right-click each device and click Remove.
NOTE: Repeat this step for each of the device
added to PowerFlex. It might take some time to
remove all the devices.

8. From the SSH session:


a. Enter #svm status to view the unencrypted devices.

b. For SSD drives, enter svm encrypt /dev/sdX for each drive you want to encrypt.
where X is the device letter.

c. For NVMe drives, enter svm encrypt /dev/nvmexxx for each drive you want to encrypt. (A sample sequence follows these steps.)
d. Enter #svm status to view the status of the devices.


e. Add the encrypted devices to the PowerFlex SDS.

If you are using a... Do this...


PowerFlex GUI presentation server i. Log in to the PowerFlex GUI presentation server, go to
https://<IPaddress>:8443 using MDM.
ii. Click Configuration > SDS.
iii. Select the appropriate PowerFlex SDS, click Add
Device, and choose Storage device from the drop-
down menu.
● If the PowerFlex storage-only node has only SSD
disks, then the path is /dev/mapper/svm_sdX
where X is the device you have encrypted.
● If the PowerFlex storage-only node has
NVMe disks, then the path is /dev/mapper/
svm_nvmeXnX where X is the device you have
encrypted.

iv. Enter the new device path and name in the Path
and Name fields of the Add Storage Device to SDS
window.
v. Select the Storage Pool, Media Type you recorded in
the drive information table.
vi. Click Add Device.
vii. Repeat the preceding substeps to add all the required devices, and click Add Devices.
PowerFlex version prior to 3.5 i. Log in to the PowerFlex GUI.
ii. Click Backend.
iii. Locate the PowerFlex SDS, right-click, and select Add
Device.
iv. In Add device to SDS, enter the Path and select the
Storage Pool for each device.
● If the PowerFlex storage-only node has only SSD
disks, then the path is /dev/mapper/svm_sdX
where X is the device you have encrypted.
● If the PowerFlex storage-only node has
NVMe disks, then the path is /dev/mapper/
svm_nvmeXnX where X is the device you have
encrypted.

v. Repeat these steps for all SDS nodes. Wait for


PowerFlex rebalance to finish.

9. Ensure that rebalance is running and progressing before continuing to another SDS.
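
For example, a typical sequence on an SVM or PowerFlex storage-only node with SSD drives might look like the following; the device letter is illustrative only:

#svm status
#svm encrypt /dev/sdb
#svm status

After encryption completes, the device is presented as /dev/mapper/svm_sdb, which is the path you add to the SDS.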

Related information
Verify the CloudLink license


Verify newly added SVMs or storage-only nodes machine status in CloudLink Center


60
Performing a PowerFlex compute-only node expansion
Perform the manual expansion procedure to add a PowerFlex compute-only node to PowerFlex Manager services that are
discovered in lifecycle mode.
See Cabling the PowerFlex R640/R740xd/R840 nodes for cabling information.

Discover resources
Use this procedure to discover and allow PowerFlex Manager access to resources in the environment. Provide the management
IP address and credential for each discoverable resource.

Prerequisites
Verify that the iDRAC network settings are configured. See Configure iDRAC network settings for more information.

About this task


Dell EMC recommends using separate operating system credentials for SVM and VMware ESXi. For information about creating
or updating credentials in PowerFlex Manager, click Settings > Credentials Management and access the online help.
During node discovery, you can configure iDRAC nodes to automatically send alerts to PowerFlex Manager. You can also change
the IP address of the iDRAC nodes and discover them. See Reconfigure the discovered nodes with new management IP and
credentials in the Dell EMC PowerFlex Appliance Administration Guide. If the PowerFlex nodes are not configured for alert
connector, Secure Remote Services does not receive critical or error alerts for those resources.
The following table describes how to configure resources in managed mode:

Resource type Resource state Example


PowerFlex nodes Managed PowerEdge iDRAC management IP address
If you want to perform firmware updates or deployments on a
discovered node, change the default state to managed.
Perform firmware or catalog updates from the Services page,
or the Resources page.

NOTE: For partial network deployments, you do not need to discover the switches. The switches need to be pre-
configured. For sample configurations for Dell PowerSwitch, Cisco Nexus, and Arista switches, see the Dell EMC PowerFlex
Appliance Administration Guide.
The following are the specific details for completing the Discovery wizard steps:

Resource type | IP address range | Resource state | Discover into node pool | Credentials: name | Credentials: Username | Credentials: password | Credentials: SNMPv2 community string
Node | 192.168.101.21-192.168.101.24 | Managed | PowerFlex node pool | PowerFlex appliance iDRAC default | root | calvin | customer provided
Switch | 192.168.101.45-192.168.101.46 | Managed | N/A | access switches | admin | admin | public
VM Manager | 192.168.105.105 | Managed | N/A | vCenter | administrator@vsphere.local | P@ssw0rd! | N/A
Element Manager** | 192.168.105.120, 192.168.105.121 | Managed | N/A | CloudLink | secadmin | Secadmin!! | N/A

** This is optional. For a new CloudLink Center deployment, the CloudLink Center is discovered automatically.

Prerequisites
● Configure the iDRAC network settings.
● Gather the IP addresses and credentials that are associated with the resources.
NOTE: PowerFlex Manager also allows you to use the name-based searches to discover a range of nodes that were
assigned the IP addresses through DHCP to iDRAC. For more information about this feature, see Dell EMC PowerFlex
Manager Online Help.

Steps
1. On the PowerFlex Manager Getting Started page, click Discover Resources.
2. On the Welcome page of the Discovery Wizard, read the instructions and click Next.
3. On the Identify Resources page, click Add Resource Type. From the Resource Type list, select the resource that you
want to discover.
4. Enter the management IP address of the resource in the IP/Hostname Range field. To discover a resource in an IP range,
provide a starting and ending IP address.
5. In the Resource State list, select Managed or Unmanaged.
6. For PowerFlex node, to discover resources into a selected node pool instead of the global pool (default), select the node
pool from the Discover into Node Pool list.
7. Select the appropriate credential from the Credentials list. See the table above for details.
8. For PowerFlex node, if you want PowerFlex Manager to automatically reconfigure the iDRAC IP addresses of the nodes it
discovers, select the Reconfigure discovered nodes with new management IP and credentials check box. This option
is not selected by default, because it is faster to discover the nodes if you bypass the reconfiguration.
NOTE: The iDRAC can also be discovered using the hostname.

NOTE: For the Resource Type, you can use a range with hostname or IP address, provided the hostname has a valid
DNS entry.

9. For PowerFlex node, select the Auto configure nodes to send alerts to PowerFlex Manager check box to have
PowerFlex Manager automatically configure nodes to send alerts to PowerFlex Manager.
10. Click Next to start discovery. On the Discovered Resources page, select the resources from which you want to collect
inventory data and click Finish. The discovered resources are listed on the Resources page.

Related information
Configure iDRAC network settings

Install VMware ESXi to expand compute capacity


Use this procedure to install VMware ESXi on the PowerFlex nodes to expand the compute capacity.

About this task


The network adapters specified in this procedure are for representation purpose only. See Cabling the PowerFlex R640/
R740xd/R840 nodes for the logical network associated with the PowerFlex node.


Prerequisites
Verify the customer VMware ESXi ISO is available and is located in the Intelligent Catalog code directory.

Steps
1. Log in to the iDRAC:
a. Connect to the iDRAC interface and launch a virtual remote console by clicking Dashboard > Virtual Console and click
Launch Virtual Console.
b. Select Connect Virtual Media.
c. Under Map CD/DVD, click Choose File > Browse and browse to the folder where the ISO file is saved, select it, and
click Open.
d. Click Map Device.
e. Click Menu > Boot > Virtual CD/DVD/ISO.
f. Click Power > Reset System (warm boot).
2. Set the boot option to UEFI.
a. Press F2 to enter system setup.
b. Under System BIOS > Boot setting, select UEFI as the boot mode.
NOTE: Ensure that the BOSS card is set as the primary boot device from the boot sequence settings. If the BOSS
card is not set as the primary boot device, reboot the server and change the UEFI boot sequence from System
BIOS > Boot settings > UEFI BOOT settings.

c. Click Back > Back > Finish > Yes > Finish > OK > Finish > Yes.
3. Install VMware ESXi:
a. On the VMware ESXi installer screen, press Enter to continue.
b. Press F11 to accept the license agreement.
c. Under Local, select DELLBOSS VD as the install location, and click Enter if prompted to do so.
d. Select US Default as the keyboard layout.
e. When prompted, type the root password and press Enter.
f. At the Confirm Install screen, press F11.
g. When the installation is complete, delete the installation media before rebooting.
h. Press Enter to reboot the node.
NOTE: Set the first boot device to be the drive on which you installed VMware ESXi in Step 3.

4. Configure the host:


a. Press F2 to customize the system.
b. Provide the password for the root user and press Enter.
c. Go to Direct Console User Interface (DCUI) > Configure Management Network.
d. Set Network Adapters to the following:
● VMNIC4
● VMNIC5
● VMNIC6
● VMNIC7
The network adapters specified are for representation purposes only. See Cabling the PowerFlex R640/R740xd/R840 nodes.

e. See the VMware ESXi Management VLAN ID field in the Workbook for the required VLAN value.
f. Set IPv4 ADDRESS, SUBNET MASK, and DEFAULT GATEWAY configuration to the values defined in the Workbook.
g. Go to DNS Configuration. See the Workbook for the required DNS value.
h. Go to Custom DNS suffix. See the Workbook (local VXRC DNS).
i. Go to DCUI Troubleshooting Options.
j. Select Enable ESXi Shell and Enable SSH.
k. Press <Alt>-F1
l. Log in as root.
m. To enable the VMware ESXi host to work on the port channel, type:

esxcli network vswitch standard policy failover set -v vSwitch0 -l iphash


n. Type vim-cmd hostsvc/datastore/rename datastore1 DASXX to rename the datastore, where XX is the
server number.
o. Type exit to log off.
p. Press <Alt>-F2 to return to the DCUI.
q. Select Disable ESXi Shell.
r. Go to DCUI IPv6 Configuration.
s. Disable IPv6.
t. Press ESC to return to the DCUI.
u. Type Y to commit the changes and the node restarts.
v. Verify host connectivity by pinging the IP address from the jump server, using the command prompt.
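Before disabling the ESXi Shell in step 4q, you can optionally confirm the failover policy set in step 4m and the datastore rename from step 4n. This is a minimal sketch; the DASXX name follows the convention used above:

# Confirm vSwitch0 now load-balances on IP hash (required for the port channel)
esxcli network vswitch standard policy failover get -v vSwitch0
# Confirm the local datastore was renamed (XX is the server number)
esxcli storage filesystem list | grep DAS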

Create a new VMware ESXi cluster to add PowerFlex nodes

After installing VMware ESXi, use this procedure to create a new cluster, enable high-availability and DRS, and add a host to the
cluster.

About this task


If you are adding the host to a new cluster, follow the entire procedure. To add the host to an existing cluster, skip steps 1 through 6.

Prerequisites
Ensure that you have access to the customer vCenter.

Steps
1. From the vSphere Client home page, go to Home > Hosts and Clusters.
2. Select a data center.
3. Right-click the data center and select New Cluster.
4. Enter a name for the cluster.
5. Select vSphere DRS and vSphere HA cluster features.
6. Click OK.
7. Select the existing cluster or newly created cluster.
8. From the Configure tab, click Configuration > Quickstart.
9. Click Add in the Add hosts card.
10. On the Add hosts page, in the New hosts tab, add the hosts that are not part of the vCenter Server inventory by entering
the IP address, or hostname and credentials.
11. (Optional) Select the Use the same credentials for all hosts option to reuse the credentials for all added hosts.
12. Click Next.
13. The Host Summary page lists all the hosts to be added to the cluster with related warnings. Review the details and click
Next.
14. On the Ready to complete page, review the IP addresses or FQDN of the added hosts and click Finish.
15. Add the new licenses:
a. Click Menu > Administration.
b. In the Administration section, click Licensing.
c. Click Licenses.
d. From the Licenses tab, click Add.
e. Enter or paste the license keys for VMware vSphere and vCenter, one per line, and click Next.
Each license key is a 25-character string of letters and digits in the format XXXXX-XXXXX-XXXXX-XXXXX-XXXXX.
You can enter multiple keys in one operation. A new license is created for every license key you enter.
f. On the Edit license names page, rename the new licenses as appropriate and click Next.
g. Optionally, provide an identifying name for each license. Click Next.
h. On the Ready to complete page, review the new licenses and click Finish.


Install and configure the SDC


After adding hosts to the PowerFlex node with VMware ESXi, install the storage data client (SDC) to continue the expansion
process.

Steps
1. Copy the SDC file to the local datastore on the VMware vSphere ESXi server.
2. Use SSH on the host and type esxcli software vib install -d /vmfs/volumes/datastore1/sdc-3.x.xxxxx.xx-esx7.x.zip -n scaleio-sdc-esx7.x.
3. Reboot the PowerFlex node.
4. To configure the SDC, generate a new UUID:
NOTE: If the PowerFlex cluster is using SDC authentication, the newly added SDC reports as disconnected when added to the system. See Configure an authentication enabled SDC for more information.

a. Use SSH to connect to the primary MDM.


b. Type uuidgen. A new UUID string is generated.
6607f734-da8c-4eec-8ea1-837c3a6007bf
5. Use SSH to connect to the new PowerFlex node.
6. Substitute the new UUID in the following code:

esxcli system module parameters set -m scini -p "IoctlIniGuidStr=6607f734-da8c-4eec-8ea1-837c3a6007bf IoctlMdmIPStr=VIP1,VIP2,VIP3,VIP4"

A minimum of two logical data networks are supported. Optionally, you can configure four logical data networks. Verify the
number of VIPs configured in the existing setup.
7. Reboot the PowerFlex node.
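After the reboot, the module parameters and the SDC package can optionally be verified from an SSH session on the node. A minimal sketch:

# Confirm the GUID and MDM IPs were applied to the scini module
esxcli system module parameters list -m scini | grep -i ioctl
# Confirm the SDC vib is installed
esxcli software vib list | grep -i sdc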

Rename the SDCs


Use this procedure to rename the SDCs.

Steps
1. If using PowerFlex GUI presentation server:
a. Log in to the PowerFlex GUI presentation server.
b. Go to Configuration > SDCs.
c. Select the SDC, click Modify > Rename, and rename the new host using the standard naming convention.
For example, ESX-10.234.91.84
2. If using a PowerFlex version prior to 3.5:
a. From the PowerFlex GUI, click Frontend > SDCs and rename the new host using the standard naming convention.
For example, ESX-10.234.91.84

Renaming the VMware ESXi local datastore


Use this procedure to rename local datastores using the proper naming conventions.

Prerequisites
VMware ESXi must be installed with hosts added to the VMware vCenter.

Steps
1. Log in to the VMware vSphere Client.
2. Click Hosts and Clusters.
3. Locate and select the VMware ESXi host.


4. Select Datastores.
5. Right-click the datastore name, and select Rename.
6. Name the datastore using the DASXX convention, with XX being the node number.

Patch and install drivers for VMware ESXi


Use this procedure if VMware ESXi drivers differ from the current Intelligent Catalog. Patch and install the VMware ESXi drivers
using the VMware vSphere Client.

Prerequisites
Apply all VMware ESXi updates before installing or loading hardware drivers.
NOTE: This procedure is required only if the ISO drivers are not at the proper Intelligent Catalog level.

Steps
1. Log in to the VMware vSphere Client.
2. Click Hosts and Clusters.
3. Locate and select the VMware ESXi host that you installed.
4. Select Datastores.
5. Right-click the datastore name and select Browse Files.
6. Select the Upload icon (to upload file to the datastore).
7. Browse to the Intelligent Catalog folder or downloaded current solution Intelligent Catalog files.
8. Select the VMware ESXi patch .zip files according to the current solution Intelligent Catalog and node type and click OK to
upload.
9. Select the driver and vib files according to the current Intelligent Catalog and node type and click OK to upload.
10. Click Hosts and Clusters.
11. Locate the VMware ESXi host, right-click, and select Enter Maintenance Mode.
12. Open an SSH session with the VMware ESXi host using PuTTY or a similar SSH client.
13. Log in as root.
14. Type cd /vmfs/volumes/DASXX where XX is the name of the local datastore that is assigned to the VMware ESXi
server.
15. To display the contents of the directory, type ls.
16. If the directory contains vib files, type esxcli software vib install -v /vmfs/volumes/DASXX/patchname.vib to install the vib. These vib files can be individual drivers that are absent in the larger patch bundle and must be installed separately.
17. Perform either of the following depending on the VMware ESXi version:
a. For VMware ESXi 7.0, type esxcli software vib update -d /vmfs/volumes/DASXX/VMware-ESXi-7.0<version>-depot.zip.
b. For VMware ESXi 6.x, type esxcli software vib install -d /vmfs/volumes/DASXX/<ESXI-patch-file>.zip.
18. Type reboot to reboot the host.
19. Once the host completes rebooting, open an SSH session with the VMware ESXi host, and type esxcli software vib list | grep net-i to verify that the correct drivers are loaded.
20. Select the host and click Exit Maintenance Mode.
21. Update the test plan and host tracker with the results.
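As an optional check while the SSH session from step 19 is still open, the running build and driver versions can be compared against the Intelligent Catalog. A minimal sketch; the net- filter is an example only, so adjust it for the NIC driver in use:

# Report the ESXi version and build number for comparison with the Intelligent Catalog
vmware -vl
# List installed network driver vibs and their versions
esxcli software vib list | grep net-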


Add the PowerFlex node to Distributed Virtual Switches for a PowerFlex compute-only node expansion

Use the VMware vSphere Client to apply settings and add nodes to switches for a PowerFlex compute-only node expansion.

About this task


The network adapters specified in this procedure are for representation purposes only. See Cabling the PowerFlex R640/R740xd/R840 nodes for the logical network associated with the PowerFlex node. For PowerFlex compute-only nodes in a dual network environment, see the cabling requirements and adjust the steps accordingly.
The dvswitch names are for example only and may not match the configured system. Do not change these names, or a data unavailability or data loss event may occur.
You can select multiple hosts and apply settings in template mode.
NOTE: If the ESXi host participates in NSX, do not migrate management and vMotion VMkernels to the VDS.

Steps
1. Log in to the VMware vSphere Client.
2. Click Home and select Networking.
3. Right-click the appropriate dvswitch and select Add and Manage Hosts:
NOTE: If the ESXi host participates in NSX, skip this step to keep management on the standard switch.

● For non-bonded NIC port design, right-click dvswitch0 and select Add and Manage Hosts.
● For static bonding and LACP bonding NIC port design, right-click cust_dvswitch and select Add and Manage Hosts.
The cust_dvswitch consists of all the management networks. For example, flex-node-mgmt-<vlanid> and flex-vmotion-
<vlanid>.
a. Select Add hosts and click Next.
b. Click +New Hosts, select the installed node and click OK.
c. Click Next.
d. From Manage Physical Adapters, select the VMNICs and click Next.
e. Select vmnic4 and click Assign Uplink.
f. Select lag-0 and click OK.
g. Select vmnic6 and click Assign Uplink.
h. Select lag-1, click OK and click Next.
i. From Manage VMkernel adapters, select vmk0.
j. Click Assign port group.
k. Select the hypervisor management and click OK.
l. Click Next > Next.
m. Click Finish.
4. Click Home. Select Hosts and Clusters and select the new host.
5. Click the Configure tab, then under Networking, select VMkernel adapters.
NOTE: If the VMware ESXi host participates in NSX, skip this step to keep vMotion on the standard switch.

a. Click Add networking.


b. For connection type, select VMkernel Network Adapter, and click Next.
c. From Select an existing network, click Browse.
d. Select flex-vmotion-<vlanid>, and click OK.
e. Click Next.
f. Select vMotion, and click Next.
g. Select Use static IPv4 settings, and type in the ESXi vMotion Kernel IP Address and ESXi vMotion Subnet Mask
values.
6. Click Next, and click Finish.
7. Click Home, and select Networking.


8. Right-click the appropriate dvswitch and click Add and Manage Hosts.
● For non-bonded NIC port design, right-click dvswitch1, and select Add and Manage Hosts.
● For static bonding NIC port design, right-click flex_dvswitch, select Add and Manage Hosts, and add flex-data1-
<vlanid> and flex-data2-<vlanid>.
● For an LACP bonding NIC port design, right-click flex_dvswitch, select Add and Manage Hosts, and add flex-data1-
<vlanid>, flex-data2-<vlanid>, flex-data3-<vlanid>, and flex-data4-<vlanid>. A minimum of two logical data networks are
supported. Optionally, you can configure four logical data networks.
9. Select Add hosts, and click Next.
a. Click +New hosts, select the installed node, and click OK.
b. Click Next.
c. From Manage physical adapters, select VMNIC5 and VMNIC7 and click Assign Uplink.
d. Select lag-1, and click OK.
e. Click Next.
f. Select the host and click +New adapter.
g. From Select an existing network, click Browse. Select flex-data1-<vlanid>, and click OK. Click Next > Next.
h. Select Use static IPv4 settings , and type the ESXi PowerFlex Data1 Kernel IP Address and ESXi
PowerFlex Data1 Kernel Subnet Mask values that are recorded in the Logical Configuration Survey in the IPv4
address and Subnet Mask fields, respectively.
i. Click Next, then click Finish.
j. Select vmk2, and click Edit adapter.
k. Select NIC settings.
l. Set the MTU to 9000 and click OK
m. Click Next > Next > Finish.
10. For LACP bonding NIC port design: Repeat steps 8 and 9 for flex-data3-<vlanid> and flex-data4-<vlanid>. A minimum of two
logical data networks are supported. Optionally, you can configure four logical data networks.
11. Click Home > Networking. For non-bonded NIC port design, right-click dvswitch2 and click Add and Manage Hosts.
12. Select Add Hosts, and click Next.
a. Click +New hosts, select the installed node, and click OK.
b. Click Next.
c. Select Manage physical adapters, then Manage VMkernel adapters and click Next.
d. Select vmnic1 and click Assign Uplink.
e. Select lag-0 and click OK.
f. Click Next.
g. Select the host and click +New adapter.
h. From Select an existing network, click Browse.
i. Select flex-data2-<vlanid> and click OK.
j. Click Next > Next.
k. Select Use static IPv4 settings and type the ESXi PowerFlex Data 2 Kernel IP Address and ESXi PowerFlex Data
2 Kernel Subnet Mask.
l. Click Next > Finish.
m. Select vmkx, and click Edit adapter.
n. Select NIC Settings.
o. Set the MTU to 1500 and click OK.
p. Click Next > Next > Finish.
13. Repeat steps 11 and 12 to add flex-data3-<vlanid> and flex-data4-<vlanid> in an LACP bonding NIC port design. A minimum
of two logical data networks are supported. Optionally, you can configure four logical data networks.

Validating network configurations


Test network connectivity between hosts and Metadata Managers (MDMs) before installing PowerFlex on new nodes.

Prerequisites
Gather the IP addresses of the primary and secondary MDMs.


Steps
1. Open Direct Console User Interface (DCUI) or use SSH to log in to the new hosts.
2. At the command-line interface, run the following commands to ping each of the primary MDM and secondary MDM IP
addresses.
If a ping test fails, you must remediate the issue before continuing.
NOTE: A minimum of two logical data networks are supported. Optionally, you can configure four logical data networks.

vmkping -I vmk0 <Mgmt IP address of primary MDM or secondary MDM>

vmkping -I vmk2 <Data1 address of primary MDM or secondary MDM> -s 8972 -d
vmkping -I vmk3 <Data2 address of primary MDM or secondary MDM> -s 8972 -d

Run the following commands for LACP bonding NIC port design. x is the VMkernel adapter number in vmkx.

vmkping -I vmkx <Data3 address of primary MDM or secondary MDM> -s 8972 -d
vmkping -I vmkx <Data4 address of primary MDM or secondary MDM> -s 8972 -d

NOTE: After several host restarts, check the access switches for error or disabled states by running the following
commands:

# show interface brief


# show interface | inc CRC
# show interface counters error

3. Optional: If errors appear in the counters of any interfaces, type the following and check the counters again.
Output from a Cisco Nexus switch:

# clear counters interface ethernet X/X

4. Optional: If there are still errors on the counter, perform the following to see if the errors are old and irrelevant or new and
relevant.
a. Optional: Type # show interface | inc flapped.

Sample output:
Last link flapped 1d02h
b. Type # show logging logfile | inc failure.

Sample output:
Dec 12 12:34:50.151 access-a %ETHPORT-5-IF_DOWN_LINK_FAILURE: Interface Ethernet1/4/3
is down (Link failure)
5. Optional: Check and reset physical connections, bounce and reset ports, and clear counters until errors stop occurring.
Do not activate new nodes until all errors are resolved and no new errors appear.
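Where several MDM addresses must be checked in step 2, the jumbo-frame pings can be scripted from the ESXi shell. A minimal sketch, assuming vmk2 carries flex-data1 and using example MDM data addresses; substitute the values from the Workbook:

# Jumbo-frame ping (8972 bytes, do-not-fragment) to each MDM data1 address
for ip in 192.168.151.10 192.168.151.11; do
  vmkping -I vmk2 -s 8972 -d $ip
done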

Migrating vCLS VMs


Use this procedure to migrate the vSphere Cluster Services (vCLS) VMs manually to the service datastore.

About this task


VMware vSphere 7.0Ux or ESXi 7.0Ux creates vCLS VMs when the vCenter Server Appliance (vCSA) is upgraded. This task migrates the vCLS VMs to the service datastore.

Steps
1. Log in to VMware vCSA HTML Client using the credentials.
2. Go to VMs and templates inventory or Administration > vCenter Server Extensions > vSphere ESX Agent Manager
> VMs to view the VMs.
The VMs are in the vCLS folder once the host is added to the cluster.
3. Right-click the VM and click Migrate.
4. In the Migrate dialog box, click Yes.


5. On the Select a migration type page, select Change storage only and click Next.
6. On the Select storage page, select the PowerFlex volumes for the hyperconverged or ESXi-based compute-only node that will be mapped after the PowerFlex deployment.
NOTE: The volume names are powerflex-service-vol-1 and powerflex-service-vol-2. The datastore names are powerflex-esxclustershortname-ds1 and powerflex-esxclustershortname-ds2. If these volumes or datastores are not present, create them to migrate the vCLS VMs.

7. Click Next > Finish.


8. Repeat the above steps to migrate all the vCLS VMs.

Add a PowerFlex node to a PowerFlex Manager service in lifecycle mode

Use this procedure to update the inventory and service details in PowerFlex Manager for a new PowerFlex node.

Steps
1. Update the inventory for vCenter (vCSA), switches, gateway VM, and nodes:
a. Click Resources on the home screen.
b. Select the vCenter, switches (applicable only for full networking), gateway VM, and newly added nodes.
c. Click Run Inventory.
d. Click Close.
e. Wait for the job in progress to complete.
2. Update Services details:
a. Click Services.
b. Choose the service on which the new node is expanded and click View Details.
c. On the Service Details screen, choose Update Service Details.
d. Choose the credentials for the node and SVM and click Next.
e. On the Inventory Summary, verify that the newly added nodes are reflected under Physical Node, and click Next.
f. On the Summary page, verify the details and click Finish.

Configuring the NSX-T Ready tasks


This section describes how to configure the PowerFlex nodes as part of preparing the PowerFlex appliance for NSX-T. Before
you configure the ESXi hosts as NSX-T transport nodes, you must add the transport distributed port groups and convert the
distributed switch from LACP to individual trunks.
NOTE: If you configure VMware NSX-T on PowerFlex hyperconverged or compute-only nodes and add them to PowerFlex
Manager, the services will be in lifecycle mode. If you need to perform an expansion on such a node, see Adding a
PowerFlex R640/R740xd/R840 node to a PowerFlex Manager service in lifecycle mode to add the PowerFlex node.
Contact VMware Support to configure VMware NSX-T on a new PowerFlex node and see Add PowerFlex nodes to a
service to update the service details.

Configure VMware NSX-T overlay distributed virtual port group


Use this procedure to create and configure the VMware NSX-T overlay distributed virtual port group on cust_dvswitch.

Prerequisites
Ensure that the VMware vSphere vCenter Server and the VMware vSphere Client are accessible.

Steps
1. Log in to the VMware vSphere Client.
2. Click Networking.


3. Expand the PowerFlex Customer-Datacenter.


4. Right-click cust_dvswitch.
5. Click Distributed Port Group > New Distributed Port Group.
6. Update the name to pfmc-nsx-transport-121 and click Next.
7. Select the default Port binding.
8. Select the default Port allocation.
9. Select the default # of ports (default is 8).
10. Select the default VLAN as VLAN Type.
11. Set the VLAN ID to 121.
12. Clear the Customize default policies configuration check box and click Next.
13. Click Finish.
14. Right-click pfmc-nsx-transport-121 and click Edit Settings....
15. Click Teaming and failover.
16. Verify that Uplink1 and Uplink2 are moved to Active.
17. Click OK.

Convert trunk access to LACP-enabled switch ports for cust_dvswitch

Use this procedure to convert the physical NICs from trunk to LACP without losing network connectivity. Use this option only if
cust_dvswitch is configured as trunk. LACP is the default configuration for cust_dvswitch.

Prerequisites
Both Cisco Nexus access switch ports for the compute VMware ESXi hosts are configured as trunk access. These ports will be
configured as LACP enabled after the physical adapter is removed from each ESXi host.
WARNING: As the VMK0 (ESXi management) is not configured on cust_dvswitch, both the vmnics are first
migrated to the LAGs simultaneously and then the port channel is configured. Data connectivity to PowerFlex is
lost until the port channels are brought online with both vmnic interfaces connected to LAGs.

About this task


This procedure reconfigures one port at a time to LACP without requiring any migration of the VMkernel network adapters.

Steps
1. Log in to the VMware vSphere Client.
2. Look at VMware vCenter and physical switches to ensure that both ports across all hosts are up.
3. For each compute VMware ESXi host, record the physical switch ports to which vmnic5 (switch-B) and vmnic7 (switch-A) are connected.
a. Click Home, then select Hosts and Clusters and expand the compute cluster.
b. Select the first compute ESXi host in left pane, and then select Configure tab in right pane.
c. Select Virtual switches under Networking.
d. Expand cust_dvswitch.
e. Expand Uplink1, click the ellipsis (...) for vmnic7, and select View Settings.
f. Click the LLDP tab.
g. Record the Port ID (switch port) and System Name (switch).
h. Repeat steps 3e through 3g for vmnic5 on Uplink 2.
4. Configure LAG (LACP) on cust_dvswitch within VMware vCenter Server:
a. Click Home, then select Networking.
b. Expand the compute cluster and click cust_dvswitch > Configure > LACP.
c. Click +New to open wizard.
d. Verify that the name is lag1.
e. Verify that the number of ports is 2.


f. Verify that the mode is Active.


g. Change the Load Balancing mode to Source and destination IP address, TCP/UDP port.
h. Click OK.
5. Migrate vmnic5 to lag1-0 and vmnic7 to lag1-1 on cust_dvswitch for the compute VMware ESXi host as follows:
a. Click Home, then select Networking and expand the PowerFlex data center.
b. Right-click cust_dvswitch and select Manage host networking to open wizard.
c. Select Add hosts... and click Next.
d. Click Attached hosts..., select all the compute ESXi hosts, and click OK.
e. Click Next.
f. For each ESXi host, select vmnic5 and click Assign uplink.
g. Click lag1-0 and click OK.
h. For each ESXi host, select vmnic7 and click Assign uplink.
i. Click lag1-1 and click OK.
j. Click Next > Next > Next > Finish.
6. Create port-channel (LACP) on switch-A for compute VMware ESXi host.
The following switch configuration is an example of a single compute VMware ESXi host.
a. Open an SSH session to switch-A using PuTTY or a similar SSH client.
b. Create a port channel on switch-A for each compute VMware ESXi host as follows:

interface port-channel40
Description to flex-compute-esxi-host01
switchport
switchport mode trunk
switchport trunk allowed vlan 151,152,153,154
spanning-tree port type edge trunk
spanning-tree bpduguard enable
speed 25000
no lacp suspend-individual
vpc 40

7. Configure channel-group (LACP) on switch-A access port (vmnic5) for each compute VMware ESXi host.
The following switch port configuration is an example of a single compute VMware ESXi host.
a. Open an SSH session to switch-A using PuTTY or a similar SSH client.
b. Create port on switch-A as follows:

int e1/1/1
description to flex-compute-esxi-host01 – vmnic5
switchport
switchport mode trunk
switchport trunk allowed vlan 151,152,153,154
spanning-tree port type edge trunk
spanning-tree bpduguard enable
spanning-tree guard root
speed 25000
channel-group 40 mode active

8. Create port-channel (LACP) on switch-B for compute VMware ESXi host.


The following switch configuration is an example of a single compute VMware ESXi host.
a. Open an SSH session to switch-B using PuTTY or a similar SSH client.
b. Create a port channel on switch-B for each compute VMware ESXi host as follows:

interface port-channel40
Description to flex-compute-esxi-host01
switchport
switchport mode trunk
switchport trunk allowed vlan 151,152,153,154
spanning-tree port type edge trunk
spanning-tree bpduguard enable
speed 25000
no lacp suspend-individual
vpc 40


9. Configure channel-group (LACP) on switch-B access port (vmnic7) for each compute VMware ESXi host.
The following switch port configuration is an example of a single compute VMware ESXi host.
a. Open an SSH session to switch-B using PuTTY or a similar SSH client.
b. Create port on switch-B as follows:

int e1/1/1
description to flex-compute-esxi-host01 – vmnic7
switchport
switchport mode trunk
switchport trunk allowed vlan 151,152,153,154
spanning-tree port type edge trunk
spanning-tree bpduguard enable
spanning-tree guard root
speed 25000
channel-group 40 mode active

10. Update the teaming and failover policy to Route based on IP hash for each port group within cust_dvswitch:
a. Click Home and select Networking.
b. Expand cust_dvswitch to bring all port groups into view.
c. Right-click flex-data-01 and select Edit Settings.
d. Click Teaming and failover.
e. Change the Load Balancing mode to Route based on IP hash.
f. Repeat steps 10b through 10e for each remaining port group.
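With the port channels configured and the teaming policy updated, the bundle state can be spot-checked on each access switch before returning the hosts to service. A minimal sketch using the example port-channel 40 from the configurations above:

# Confirm both member ports show as bundled in the port channel
show port-channel summary
# Confirm the vPC for the new port channel is up and consistent
show vpc 40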

Add the VMware NSX-T service using PowerFlex Manager


Use this procedure only if the PowerFlex nodes are added to the NSX-T environment.

Prerequisites
NOTE: Before adding a VMware NSX-T service using PowerFlex Manager, either the customer or VMware services must add the new PowerFlex node to NSX-T Data Center using the NSX-T UI.
Consider the following:
● Before adding this service or updating service details in PowerFlex Manager, verify that NSX-T Data Center is configured on the PowerFlex hyperconverged or compute-only nodes.
● If the transport nodes (PowerFlex cluster) are configured with NSX-T, you cannot replace the field units using PowerFlex Manager. You must add the node manually by following either of these procedures, depending on the node type:
○ Performing a PowerFlex hyperconverged node expansion
○ Performing a PowerFlex compute-only node expansion

Steps
1. Log in to PowerFlex Manager.
2. If NSX-T Data Center 3.0 or higher is deployed and is using VDS (not N-VDS), then add the transport network:
a. From Getting Started, click Define Networks.
b. Click + Define and do the following:

NSX-T information Values


Name Type NSX-T Transport

Description Type Used for east-west traffic

Network Type Select General Purpose LAN


VLAN ID Type 121. See the Workbook.

Configure Static IP Address Ranges Select the Configure Static IP Address Ranges check box. Type
the starting and ending IP address of the transport network IP pool

c. Click Save > Close.


3. From Getting Started, click Add Existing Service and do the following:
a. On the Welcome page, click Next.
b. On the Service Information page, enter the following details:

Service Information Details


Name Type NSX-T Service

Description Type Transport Nodes

Type Type Hyperconverged or Compute-only

Firmware and software compliance Select the Intelligent Catalog version


Who should have access to the service deployed Leave as default
from this template?

c. Click Next.
d. On the Network Information page, select Full Network Automation, and click Next.
e. On the Cluster Information page, enter the following details:

Cluster Information Details


Target Virtual Machine Manager Select vCSA name
Data Center Name Select data center name
Cluster Name Select cluster name
Target PowerFlex gateway Select PowerFlex gateway name
Target Protection Domain Select PD-1
OS Image Select the ESXi image

f. Click Next.
g. On the OS Credentials page, select the OS credentials for each node, and click Next.
h. On the Inventory Summary page, review the summary and click Next.
i. On the Network Mapping page, verify that the networks are aligned with the correct dvSwitch.
j. On the Summary page, review the summary and click Finish.
4. Verify PowerFlex Manager recognizes NSX-T is configured on the nodes:
a. Click Services.
b. Select the hyperconverged or compute-only service.
c. Verify that a banner appears under the Service Details tab, notifying you that NSX-T is configured on a node and is preventing some features from being used. If you do not see this banner, verify that you selected the correct service and that NSX-T is configured on the hyperconverged or compute-only nodes.


Deploying Windows-based PowerFlex compute-only nodes manually

You can manually deploy Windows-based PowerFlex compute-only nodes.
NOTE: As of PowerFlex Manager release 3.8, deployment and management of Windows compute-only nodes are not supported. Expansions and deployments must be performed manually.

Installing Windows compute-only node with LACP bonding NIC port design

Use the procedures in this section to install Windows Server 2016 or 2019 on the PowerFlex compute-only node.

Prerequisites
● Ensure that the required information is captured in the Workbook and stored in VAST.
● Prepare the servers by updating all servers to the correct Intelligent Catalog firmware releases and configuring BIOS
settings.
● Ensure that the iDRAC network is configured.
● Ensure that the Windows operating system ISO is downloaded to jump host.
NOTE: As of PowerFlex Manager 3.8, the deployment of Windows compute-only nodes is not supported. To manually install Windows compute-only nodes with an LACP bonding NIC port design without PowerFlex Manager, complete the steps in the following sections.


Configure access switch ports for PowerFlex nodes


Use this procedure to configure the access switch ports for PowerFlex nodes.

Steps
1. To configure the aggregation switch out-of-band management (mgmt0) connection to the management switch, type:

interface <interface>
switchport access vlan 101
no shutdown

2. For the selected ports, use the following examples:


a. Windows compute-only trunk ports example:

Cisco Nexus switch configuration:

interface port-channel 200
switchport
switchport mode trunk
switchport trunk allowed vlan 104-106,200
vpc 200
no shut

Dell EMC PowerSwitch switch configuration:

interface port-channel 200
switchport
switchport mode trunk
switchport trunk allowed vlan 104-106,200
vlt-port-channel 200
no shut

b. The port channel interface must be either vPC configured for Cisco Nexus switches or VLT configured for Dell EMC PowerSwitch switches:

Cisco Nexus switch configuration:

interface <interface>
channel-group 200 mode active
no shutdown

Type show vpc <VPC number> to verify the vPC status.

Dell EMC PowerSwitch switch configuration:

interface <interface>
channel-group 200 mode active
no shutdown

Type show vlt <VLT domain id> to verify the VLT status.

c. Windows compute-only node data ports example:

Cisco Nexus switch configuration:

interface port-channel 201
switchport
switchport mode trunk
switchport trunk allowed vlan 151-154
vpc 201
no shut

Dell EMC PowerSwitch switch configuration:

interface port-channel 201
switchport
switchport mode trunk
switchport trunk allowed vlan 151-154
vlt-port-channel 201
no shut

d. Add the data port interfaces to the port channel:

Cisco Nexus switch configuration:

interface <interface>
channel-group 201 mode active
no shutdown

Dell EMC PowerSwitch switch configuration:

interface <interface>
channel-group 201 mode active
no shutdown

3. Type switch# copy running-config startup-config to save the configuration on all switches.


Mount the Windows Server 2016 or 2019 ISO


Use this procedure to mount the Windows Server ISO.

Steps
1. Connect to the iDRAC, and launch a virtual remote console.
2. Click Menu > Connect Virtual Media > Virtual Map > Map CD/DVD.
3. Click Choose File and browse and select the customer provided Windows Server 2016 or 2019 DVD ISO and click Open.
4. Click Map Device.
5. Click Close.
6. Click Boot and select Virtual CD/DVD/ISO. Click Yes.
7. Click Power > Reset System (warm boot) to reboot the server.
The host boots from the attached Windows Server 2016 or 2019 virtual media.

Install the Windows Server 2016 or 2019 on a PowerFlex compute-only node


Use this procedure to install Windows Server on a PowerFlex compute-only node.

Steps
1. Select the desired values for the Windows Setup page, and click Next.
NOTE: The default values are US-based settings.

2. Click Install now.


3. Enter the product key, and click Next.
4. Select the operating system version with Desktop Experience (For example, Windows Server 2019 Datacenter (Desktop
Experience)), and click Next.
5. Select the check box next to the license terms, and click Next.
6. Select the Custom option, and click Next.
7. To install the operating system, select the available drive with a minimum of 60 GB space on the bootable disk and click
Next.
NOTE: Wait until the operating system installation is complete.

8. Enter the password according to the standard password policy.


9. Click Finish.
10. Install or upgrade the network driver using these steps:
NOTE: Use this procedure if the driver is not updated or discovered by Windows automatically.

a. Download DELL EMC Server Update Utility, Windows 64 bit Format, v.x.x.x.iso file from Dell
Technologies Support site.
b. Map the driver CD/DVD/ISO through iDRAC, if the installation requires it.
c. Connect to the server as the administrator.
d. Open and run the mapped disk with elevated permission.
e. Select Install, and click Next.
f. Select I accept the license terms and click Next.
g. Select the check box beside the device drives, and click Next.
h. Click Install, and Finish.
i. Close the window to exit.


Configure the network


Use this procedure to configure the network from the network management console.

Steps
1. Open iDRAC console and log in to the Windows Server 2016 or 2019 using admin credentials.
2. Press Windows+R and enter ncpa.cpl.
3. Select the appropriate management NIC.
4. Perform the following for the Management Network:
a. Select Properties.
b. Click Configure....
c. Click the Advanced tab, and select the VLAN ID option from the Property column.
d. Enter the VLAN ID in the Value column.
e. Click OK and exit.
f. Right-click the appropriate NIC, and click Properties, select Internet Protocol Version 4 (TCP/IPv4) and assign
static IP address of the server.
5. Open the PowerShell console, and perform the following procedures:

To create the Team:
a. Type New-NetLbfoTeam -Name "Team name" -TeamMembers "NIC1","NIC2". For example, New-NetLbfoTeam -Name "flex-node-mgmt-<105>" -TeamMembers "NIC1","Slot4port1" (select the appropriate NICs with 25G ports).
b. Enter Y to confirm.

To create the Management network, if the IPs are not assigned manually as specified in Step 4 (optional):
NOTE: Assign the IP address according to the Workbook.
a. Type Add-NetLbfoTeamNic -Team "flex-node-mgmt-<105>" to map the VLAN to the interface.
b. Type New-NetIPAddress -InterfaceAlias 'flex-node-mgmt-<105>' -IPAddress 'IP' -PrefixLength 'Prefix number' -DefaultGateway 'Gateway IP' to assign the IP address to the interface.

To create the Data network:
NOTE: Assign the IP address according to the Workbook.
a. Type New-NetIPAddress -InterfaceAlias 'Interface name' -IPAddress 'IP' -PrefixLength 'prefix' (select NIC2) to create the Data1 network.
b. Type New-NetIPAddress -InterfaceAlias 'Interface name' -IPAddress 'IP' -PrefixLength 'prefix' (select Slot4 Port2) to create the Data2 network.
Where Interface name is the NIC assigned for data1 or data2, and IP is the data1 IP or data2 IP. The prefix is the CIDR notation. For example, if the network mask is 255.255.255.0, the CIDR notation (prefix) is 24.

6. Applicable for an LACP NIC port bonding design: Modify Team0 settings and create a VLAN:


To edit Team0 settings:
a. Open the Server Manager, and click Local Server > NIC teaming.
b. In the NIC teaming window, click Tasks > New Team.
c. Enter the name as Team0 and select the appropriate network adapters.
d. Expand Additional properties, and modify as follows:
● Teaming mode: LACP
● Load balancing mode: Dynamic
● Standby adapter: None (all adapters active)
e. Click OK to save the changes.
f. Select Team0 from the Teams list.
g. From Adapters and Interfaces, click the Team Interfaces tab.

To create a VLAN in Team0:
a. Click Tasks and click Add Interface.
b. In the New team interface dialog box, type the name as General Purpose LAN.
c. Assign VLAN ID (200) to the new interface in the VLAN field, and click OK.
d. From the network management console, right-click the newly created network interface controller, select Properties, select Internet Protocol Version 4 (TCP/IPv4), and click Properties.
e. Select the Assign the static IP address check box.

7. Remove the IPs from the data1 and data2 network adapters.
8. Create Team1 and VLAN:

To create a team and assign the name as Team1:
a. Open the Server Manager, and click Local Server > NIC teaming.
b. In the NIC teaming window, click Tasks > New Team.
c. Enter the name as Team1, and select the appropriate network adapters.
d. Expand Additional properties, and modify as follows:
● Teaming mode: LACP
● Load balancing mode: Dynamic
● Standby adapter: None (all adapters active)

To create a VLAN in Team1:
a. Select the NIC team Team1 in the Teams list box, and select the Team Interfaces tab in the Adapters and Interfaces list box.
b. Click Tasks, and click Add Interface.
c. In the New team interface dialog box, type the name as flex-data1-<vlanid>.
d. Assign VLAN ID (151) to the new interface in the VLAN field, and click OK.
e. From the network management console, right-click the newly created network interface controller, select Properties, select Internet Protocol Version 4 (TCP/IPv4), and click Properties.

9. Repeat step 8 for data2, data3 (if required), and data4 (if required).
A minimum of two logical data networks are supported. Optionally, you can configure four logical data networks. Verify the
number of logical data networks configured in an existing setup and configure the logical data networks accordingly.
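As an optional check once the teams and VLAN interfaces are created, the configuration can be reviewed from the same PowerShell console. A minimal sketch; the interface names follow the examples used above:

# List the NIC teams and their member adapters
Get-NetLbfoTeam
# List the team interfaces (VLANs) created on each team
Get-NetLbfoTeamNic
# Confirm the IPv4 addresses assigned to the team interfaces
Get-NetIPAddress -AddressFamily IPv4 | Sort-Object InterfaceAlias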


Disable Windows Firewall


Use this procedure to disable Windows Firewall through the Windows Server 2016 or 2019 or Microsoft PowerShell.

Steps
1. Windows Server 2016 or 2019:
a. Press Windows key+R on your keyboard, type control and click OK.
The All Control Panel Items window opens.
b. Click System and Security > Windows Firewall.
c. Click Turn Windows Defender Firewall on or off.
d. Turn off Windows Firewall for both private and public network settings, and click OK.
2. Windows PowerShell:
a. Click Start, type Windows PowerShell.
b. Right-click Windows PowerShell, click More > Run as Administrator.
c. Type Set-NetFirewallProfile -Profile Domain,Public,Private -Enabled False in the Windows PowerShell console.

Enable the Hyper-V role through Windows Server 2016 or 2019


Use this procedure to enable the Hyper-V role through Windows Server 2016 or 2019.

About this task


This is an optional procedure and is recommended only when you want to enable the Hyper-V role on a specified server.

Steps
1. Click Start > Server Manager.
2. In Server Manager, on the Manage menu, click Add Roles and Features.
3. On the Before you begin page, click Next.
4. On the Select installation type page, select Role-based or feature-based installation, and click Next.
5. On the Select destination server page, click Select a server from the server pool, and click Next.
6. On the Select server roles page, select Hyper-V.
An Add Roles and Features Wizard page opens, prompting you to add features to Hyper-V.
7. Click Add Features. On the Features page, click Next.
8. Retain the default selections/locations on the following pages, and click Next:
● Create Virtual Switches
● Virtual Machine Migration
● Default stores
9. On the Confirm installation selections page, verify your selections, and click Restart the destination server
automatically if required, and click Install.
10. Click Yes to confirm automatic restart.

Enable the Hyper-V role through Windows PowerShell


Enable the Hyper-V role using Windows PowerShell.

About this task


This is an optional procedure and is recommended only when you want to enable the Hyper-V role on a specified server.

Steps
1. Click Start, type Windows PowerShell.
2. Right-click Windows PowerShell, and select Run as Administrator.


3. Type Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart in the Windows PowerShell console.

Enable Remote Desktop access


Use this procedure to enable Remote Desktop access.

Steps
1. Go to Start > Run.
2. Enter SystemPropertiesRemote.exe and click OK.
3. Select Allow remote connection to this computer.
4. Click Apply > OK.

Install and configure a Windows-based compute-only node to PowerFlex

Use this procedure to install and configure a Windows-based compute-only node to PowerFlex.

About this task


For Windows PowerFlex compute-only nodes, only firmware upgrades are supported.

Steps
1. Download the EMC-ScaleIO-sdc*.msi and LIA software.
2. Double-click EMC-ScaleIO LIA setup.
3. Accept the terms in the license agreement, and click Install.
4. Click Finish.
5. Configure the Windows-based compute-only node depending on the MDM VIP availability:
● If you know the MDM VIPs before installing the SDC component:
a. Type msiexec /i <SDC_PATH>.msi MDM_IP=<LIST_VIP_MDM_IPS>, where <SDC_PATH> is the path where
the SDC installation package is located. The <LIST_VIP_MDM_IPS> is a comma-separated list of the MDM IP
addresses or the virtual IP address of the MDM.
b. Accept the terms in the license agreement, and click Install.
c. Click Finish.
d. Permit the Windows server reboot to load the SDC driver on the server.
● If you do not know the MDM VIPs before installing the SDC component:
a. Click EMC-ScaleIO SDC setup.
b. Accept the terms in the license agreement, and click Install.
c. Click Finish.
d. Type C:\Program Files\EMC\scaleio\sdc\bin>drv_cfg.exe --add_mdm --ip <VIPs_MDMs> to
configure the node in PowerFlex.
● Applicable only if the existing network is an LACP bonding NIC port design:
a. Add all MDM VIPs by running C:\Program Files\EMC\scaleio\sdc\bin>drv_cfg.exe --mod_mdm_ip --ip <existing MDM VIP> --new_mdm_ip <all 4 MDM VIPs>.


Map a volume to a Windows-based compute-only node using PowerFlex

Use this procedure to map a volume to a Windows-based compute-only node using PowerFlex.

About this task


If you are using a PowerFlex version prior to 3.5, see Mapping a volume to a Windows-based compute-only node using a PowerFlex version prior to 3.5.

Steps
1. Log in to the presentation server at https://<presentation serverip>:8443.
2. In the left pane, click SDCs.
3. In the right pane, select the Windows host.
4. Select the Windows host, click Mapping, and then select Map from the drop-down list.
5. To open the disk management console, perform the following steps:
a. Press Windows+R.
b. Enter diskmgmt.msc and press Enter.
6. Rescan the disk and set the disks online:
a. Click Action > Rescan Disks.
b. Right-click each Offline disk, and click Online.
7. Right-click each disk and select Initialize disk.
After initialization, the disks appear online.
8. Right-click Unallocated and select New Simple Volume.
9. Select default and click Next.
10. Assign the drive letter.
11. Select default and click Next.
12. Click Finish.
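The rescan, online, initialize, and volume-creation steps above can also be performed from an elevated PowerShell console on the node, which may be quicker when several volumes are mapped. A minimal sketch; GPT partitioning, NTFS, and automatic drive-letter assignment are assumptions to adjust as needed:

# Rescan for newly mapped PowerFlex volumes
Update-HostStorageCache
# Bring offline disks online
Get-Disk | Where-Object { $_.IsOffline } | Set-Disk -IsOffline $false
# Initialize raw disks and create a single NTFS volume on each
Get-Disk | Where-Object { $_.PartitionStyle -eq 'RAW' } | Initialize-Disk -PartitionStyle GPT -PassThru |
  New-Partition -AssignDriveLetter -UseMaximumSize |
  Format-Volume -FileSystem NTFS -Confirm:$false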

Mapping a volume to a Windows-based compute-only node using a PowerFlex version prior to 3.5

Use this procedure to map an existing volume to a Windows-based compute-only node using a PowerFlex version prior to 3.5.

About this task


For Windows PowerFlex compute-only nodes, only firmware upgrades are supported.

Steps
1. Open the PowerFlex GUI, click Frontend, and select SDC.
2. Verify that the Windows-based compute-only nodes are listed as SDCs. They are listed if configured correctly.
3. Click Frontend again, and select Volumes. Right-click the volume, and click Map.
4. Select the Windows-based compute-only nodes, and then click Map.
5. Log in to the Windows Server compute-only node.
6. To open the disk management console, perform the following steps:
a. Press Windows+R.
b. Enter diskmgmt.msc and press Enter.
7. Rescan the disk and set the disks online:
a. Click Action > Rescan Disks.
b. Right-click each Offline disk, and click Online.
8. Right-click each disk and select Initialize disk.
After initialization, the disks appear online.


9. Right-click Unallocated and select New Simple Volume.


10. Select default and click Next.
11. Assign the drive letter.
12. Select default and click Next.
13. Click Finish.

Licensing Windows Server 2016 compute-only nodes


Use this procedure to activate the licenses for Windows Server 2016 compute-only nodes.

About this task


Without Internet connectivity, phone activation might be required.

Steps
1. Using the administrator credentials, log in to the target Windows Server 2016.
2. When the main desktop view appears, click Start and type Run.
3. Type slui 3 and press Enter.
4. Enter the customer provided Product key and click Next.
If the key is valid, Windows Server 2016 is successfully activated.
If the key is invalid, verify that the Product key entered is correct and try the procedure again.
NOTE: If the key is still invalid, try activating without an Internet connection.
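Alternatively, when Internet connectivity is available, the key can be applied and activated from an elevated command prompt using the same slmgr script that the next procedure relies on. A minimal sketch; the key shown is a placeholder:

REM Install the customer-provided product key (placeholder shown)
slmgr /ipk XXXXX-XXXXX-XXXXX-XXXXX-XXXXX
REM Attempt online activation
slmgr /ato
REM Display the detailed activation status
slmgr /dlv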

Activating a license without an Internet connection


Use this procedure if you cannot activate the license using an Internet connection.

Steps
1. Using the administrator credentials, log in to the target Windows Server VM (jump server).
2. When the main desktop view appears, click Start and select Command Prompt (Admin) from the option list.
3. At the command prompt, use the slmgr command to change the current product key to the newly entered key.

C:\Windows\System32> slmgr /ipk XXXXX-XXXXX-XXXXX-XXXXX-XXXXX

4. At the command prompt, use the slui command to initiate the phone activation wizard. For example: C:\Windows\System32> slui 4.
5. From the drop-down menu, select the geographic location that you are calling and click Next.
6. Call the displayed number, and follow the automated prompts.
After the process is completed, the system provides a confirmation ID.

7. Click Enter Confirmation ID and enter the codes that are provided. Click Activate Windows.
Successful activation can be validated using the slmgr command.

C:\Windows\System32> slmgr /dlv

8. Repeat this process for each Windows VM.


Part IX: Adding VMware NSX-T Edge nodes
Use this section to add additional VMware NSX-T Edge nodes to expand an existing VMware NSX-T environment.
This section covers the following procedures:
● Configure the PERC Mini Controller for data protection on the VMware NSX-T Edge node (RAID1+0 local storage only)
● Install and configure the VMware ESXi
● Add VMware ESXi host to the VMware vCenter server
● Add the new VMware ESXi local datastore to VMware vSphere host and rename the operating system datastore (RAID1+0
local storage only)
● Claim local disk drives to the vSAN cluster and rename the operating system datastore (vSAN only)
● Configure NTP and scratch partition settings
● Add and configure the VMware NSX-T Edge node to edge_dvswitch0 and edge_dvswitch1
● Patch and install drivers for VMware ESXi and updating the VMware settings
Before adding a VMware NSX-T Edge node, complete the initial set of expansion procedures that are common to all expansion
scenarios, see Performing the initial expansion procedures.
NOTE: For an NSX-T configured transport cluster (hyperconverged service), PowerFlex Manager does not support the
addition or removal of nodes. You must remove the NSX-T compute service, perform the operation manually, and add the
service.
After adding a VMware NSX-T Edge node, see Completing the expansion.

Verify if vSAN is configured on an existing VMware vSphere edge cluster

Use this procedure to verify whether vSAN is configured on an existing VMware vSphere edge cluster.

Steps
1. Log in to the VMware vSphere Client.
2. Click Hosts and Clusters.
3. Select the edge cluster and click Configure > vSAN > Services.
4. If vSAN is turned off, proceed to configure the PERC Mini Controller on the VMware NSX-T Edge nodes.

Configure the PERC Mini Controller on the VMware NSX-T Edge nodes

Use this procedure only if RAID10 is configured on the existing production VMware vSphere edge cluster. This procedure
provides the steps to manually configure data protection for the PowerEdge RAID Controller (PERC) H730P Mini Controller for
the VMware NSX-T Edge nodes.

Prerequisites
Consider the following:
● Before configuring the PERC Mini Controller for data protection, you must have a configured and reachable iDRAC.
● Verify if vSAN is turned off.

Steps
1. Launch the virtual console and, from the Boot option in the menu, select BIOS Setup to enter the System BIOS.
2. Power cycle the server and wait for the Boot option to appear.


3. From System Setup, select Device Settings.


4. Select Integrated RAID Controller 1: Dell PERC <PERC H730P Mini> Configure Utility.
5. From System Setup, click Device Settings > Integrated RAID Controller.
6. Click Main Menu > Configuration Management.
7. On the Dashboard View - Main-Menu - Configuration Management screen, click Clear Configuration.
8. Select Confirm and click Yes > OK.
9. On the Dashboard View - Main-Menu - Configuration Management screen, click Create Virtual Disk.
10. Select RAID10 from the RAID Level list.
11. Click Select Physical Disks.
12. On the next screen, select the SSD option for Select Media Type to view all the disk drives.
13. Click Check All to select all the disk drives and click Apply Changes > OK.
14. Click Create Virtual Disk and select the Confirm check box.
15. Click Yes > OK.
16. Click Back twice to go to the Main Menu.
17. Click Virtual Disk Management to view the initialization process.
NOTE: The initialization process can occur in the background while VMware ESXi is being installed on the Dell BOSS card.
18. Click Back twice to go to the Configuration Utility.
19. Click Finish twice and click Yes when prompted by the dialog box to restart.
20. Repeat steps 1 through 19 for each new server.

Install and configure VMware ESXi


Use this procedure to install VMware ESXi on the VMware NSX-T Edge node.

Prerequisites
Verify that the customer VMware ESXi ISO is available and is located in the Intelligent Catalog code directory.

Steps
1. Configure the iDRAC:
a. Connect to the iDRAC interface and launch a virtual remote console from Dashboard and click Launch Virtual Console.
b. Select Connect Virtual Media > Map CD/DVD.
c. Browse to the folder where the ISO file is saved, select it, and click Open.
d. Click Map Device.
e. Click Boot > Virtual CD/DVD/ISO.
f. Click Yes to confirm the boot action.
g. Click Power > Reset System (warm boot).
h. Click Yes to confirm power action.
2. Install VMware ESXi:
a. On the VMware ESXi installer screen, click Enter to continue.
b. Press F11 to accept the license agreement.
c. Under Local, select DELLBOSS VD as the installation location and click Enter.
d. Select US Default as the keyboard layout and click Enter to continue.
e. At the prompt, type the customer provided root password or use the default password VMwar3!!. Click Enter.
f. When the Confirm Install screen is displayed, press F11.
g. Click Enter to reboot the node.
3. Configure the host:
a. Press F2 to access the System Customization menu.
b. Enter the password for the root user.
c. Go to Direct Console User Interface (DCUI) > Configure Management Network.
d. Set the following options under Configure Management Network:
● Network Adapters: Select vmnic0 and vmnic2.
● VLAN: See Workbook for VLAN. The standard VLAN is 105.


● IPv4 Configuration: Set static IPv4 address and network configuration. See Workbook for the IPv4 address,
subnet mask, and the default gateway.
● DNS Configuration: See Workbook for the primary DNS server and alternate DNS server.
○ Custom DNS Suffixes: See Workbook.
● IPv6 Configuration: Disable IPv6.
e. Press ESC to return to DCUI.
f. Type Y to commit the changes and the node restarts.
4. Use the command line to set the IP hash:
a. From the DCUI, press F2 to customize the system.
b. Enter the password for the root user.
c. Select Troubleshooting Options and press Enter.
d. From the Troubleshooting Mode Options menu, enable the following:
● ESXi Shell
● Enable SSH
e. Press Enter to enable the service.
f. Press <Alt>+F1 and log in.
g. To enable the VMware ESXi host to work on the port channel, type the following commands:
esxcli network vswitch standard policy failover set -v vSwitch0 -l iphash

esxcli network vswitch standard portgroup policy failover set -p "Management Network" -l iphash

h. Press <Alt>+F2 to return to the DCUI.
i. Press ESC to exit the DCUI.
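Before exiting the shell in step 4h, you can optionally read back the IP hash settings applied above. A minimal sketch:

# Confirm the vSwitch-level and Management Network port group failover policies
esxcli network vswitch standard policy failover get -v vSwitch0
esxcli network vswitch standard portgroup policy failover get -p "Management Network"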

Add a VMware ESXi host to the VMware vCenter

Use this procedure to add a new VMware NSX-T Edge ESXi host to VMware vCenter.

Prerequisites
Ensure that you have access to the VMware vSphere Client and the VMware NSX-T Edge cluster is already created.

Steps
1. Log in to the VMware vSphere Client.
2. Click the home icon at the top of the screen and select Hosts and Clusters.
3. Expand PowerFlex Customer-Datacenter and select the PFNC cluster.
4. Right-click PFNC and click Add Host....
5. In the Host name or IP address field, type the FQDN of the new host and click Next.
6. Type the root username and password for the host and click Next.
7. At the Host Summary screen, click Next.
8. Verify the summary for accuracy and click Next.
9. Click Finish to add the host to the cluster.
10. Verify that the VMware ESXi edge node is added.
11. Right-click the VMware ESXi edge node and select Exit Maintenance Mode.


Add the new VMware ESXi local datastore and rename the operating system datastore (RAID local storage only)
Use this procedure only if the existing production VMware NSX-T Edge nodes do not have vSAN configured. This procedure
manually adds the new local datastore that was created from the RAID utility to VMware ESXi. By default, the VMware NSX-T
Edge nodes are configured using the local storage with RAID1+0 enabled and come with eight SSD hard drives. Using the local
storage with RAID1+0 enabled is the preferred method recommended by VMware because the NSX-T Edge gateway VMs have
their own method of providing availability at the services level. However, if VMware professional services recommends vSAN,
then skip this procedure.

Prerequisites
Ensure that you have access to the VMware vSphere Client.

Steps
1. Log in to the VMware vSphere Client.
2. Click the home icon at the top of the screen and select Hosts and Clusters.
3. Expand PowerFlex Customer-Datacenter and select PFNC cluster.
4. Rename the local operating system datastore to BOSS card:
a. Select an NSX-T Edge ESXi host.
b. Click Datastores.
c. Right-click the smaller size datastore (OS) and click Rename.
d. To name the datastore, type PFNC-<nsx-t edge host short name>-DASOS.
5. Right-click the third NSX Edge ESXi server and select Storage > New Datastore to open the wizard. Perform the
following:
a. Verify that VMFS is selected and click Next.
b. Name the datastore using PFNC_DAS01.
c. Click the LUN that has disks created in RAID 10.
d. Click Next > Finish.
6. Repeat steps 1 through 5 for the remaining VMware NSX-T Edge nodes.

Claim local disk drives to the vSAN and rename the OS datastore (vSAN storage option)
Use this procedure only if the existing production VMware NSX-T Edge nodes have vSAN configured. This procedure manually
adds the VMware ESXi hosts to an existing vSAN cluster. By default, the VMware NSX-T Edge nodes are configured using
the local storage with RAID10 enabled and come with eight SSD hard drives. Using the local storage with RAID10 enabled, is
the preferred method recommended by VMware because the NSX-T Edge gateway VMs have their own method of providing
availability at the services level. However, if VMware professional services chooses to use vSAN, then skip this procedure.

Prerequisites
● Ensure that you have access to the VMware vCenter Client.
● Ensure that vSAN is enabled for the VMware NSX-T Edge vSphere cluster.

Steps
1. Log in to the VMware vSphere Client.
2. Click the home icon at the top of the screen and select Hosts and Clusters.
3. Expand PowerFlex Customer-Datacenter and select PFNC cluster.
4. Rename the local operating system datastore for BOSS card:


a. Select an Edge ESXi host.


b. Click Datastores.
c. Right-click the smaller size datastore (OS) and select Rename.
d. To name the datastore, type PFNC-<nsx-t edge host short name>-DASOS.
5. Repeat steps 1 through 4 for the remaining VMware NSX-T Edge nodes.
6. Select the edge cluster and click Configure > vSAN > Disk Management.
7. Click Configure to open the Configure vSAN wizard.
a. Retain Single site cluster as the default and click Next twice.
b. For each NSX-T Edge node in the NSX-T Edge cluster, select the VMware ESXi host, click the disks, and claim the disks as cache or capacity tier using the Claim For icon as follows (a verification example follows these steps):
i. Identify an SSD disk to be used for cache tier. It is recommended to have one or two disks of the same model. Select
a disk and select Cache tier.
ii. Identify the remaining four capacity drives. Select the remaining disks and select Capacity Tier.
iii. Click Create.
In most cases, two disk groups can be created by selecting two disks to be claimed as cache tier. A new disk group is
created for each disk claimed as cache tier.
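
After the disks are claimed in step 7, you can optionally confirm the result from an SSH session on each NSX-T Edge ESXi host. This is a hedged check using standard esxcli vSAN commands; the first command should show the host as a member of the vSAN cluster, and the second should list the claimed cache and capacity devices:

esxcli vsan cluster get

esxcli vsan storage list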

Configure NTP and scratch partition settings


Use this procedure to configure the NTP and scratch partition settings for each VMware NSX-T Edge host.

Prerequisites
Ensure that the NSX-T Edge ESXi hosts are added to VMware vCenter server. VMware ESXi must be installed with hosts added
to the VMware vCenter.

Steps
1. Log in to the VMware vSphere Client.
2. Click the home icon at the top of the screen and select Hosts and Clusters.
3. Expand Datacenter > PFNC.
4. Configure NTP on VMware ESXi NSX-T Edge host as follows:
a. Select a VMware ESXi NSX-T Edge host.
b. Click Configure > System > Time Configuration and click Edit from Network Time Protocol.
c. Select the Enable check box.
d. Enter the NTP servers as recorded in the Workbook. Set the NTP service startup policy as Start and stop with host,
and select Start NTP service.
e. Click OK.
f. Repeat these steps for each VMware NSX-T Edge host.
5. Configure a scratch partition for each NSX-T Edge host as follows:
a. Click Host > Datastores.
b. Select the local PFNC-<nsx-t edge host short name>-DASXX datastore.
c. Click New Folder to create a folder with naming convention locker_<hostname> (use a short hostname and not the
FQDN) and click OK.
d. Select the NSX-T Edge host and click Configure > System > Advanced System Settings.
e. Locate ScratchConfig.ConfiguredScratchLocation and edit the path with the following information:

/vmfs/volumes/PFNC-<nsx-t edge host name>-DASXX/locker_<hostname>

f. Reboot the host for the changes to take effect.


g. Select the NSX-T Edge host and click Configure > System > Advanced System Settings and disable SSH alerting.
h. Filter for SSH and change the value of UserVars.SuppressShellWarning to 1.
i. Click OK.
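
To confirm the settings from steps 4 and 5, you can read them back from an SSH session on the NSX-T Edge host. This is a sketch only; the vim-cmd query for the scratch location is an assumption based on standard ESXi tooling, and the new scratch path takes effect only after the reboot:

cat /etc/ntp.conf

vim-cmd hostsvc/advopt/view ScratchConfig.ConfiguredScratchLocation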


Add the VMware NSX-T Edge node to edge_dvswitch0
Use this procedure to add the new VMware NSX-T Edge node to the edge_dvswitch0 and configure the VMkernel networking.

Prerequisites
Ensure you have access to the management VMware vSphere Client.

Steps
1. Log in to the VMware vSphere Client.
2. Click Networking.
3. Expand PowerFlex Customer - Datacenter.
4. Right-click edge_dvswitch0 and select Add and Manage Hosts.
5. Select Add hosts and click Next.
6. Click +New Hosts, select the installed VMware NSX-T Edge nodes to add, and click OK.
7. Click Next to manage the physical adapters.
8. In Manage Physical Network Adapters, perform the following:
a. Select vmnic0 and click Assign Uplink.
b. Select lag1-0 and click OK.
c. Select vmnic2 and click Assign Uplink.
d. Select lag1-1 and click OK.
e. Click Next.
9. In Manage VMkernel network adapters, perform the following:
a. Select vmk0 and click Assign portgroup.
b. Select pfnc-node-mgmt-105 and click OK.
c. Click Next twice.
10. In the Ready to Complete screen, review the details, and click Finish.
11. If vSAN is required, perform the following steps to create and configure the vSAN VMkernel adapters:
NOTE: The vMotion VMkernel network adapter is not configured by default. Availability depends on the NSX-T Edge
Gateway VM service level.

a. Create and configure pfnc-vsan-vSAN-114 distributed port group.


b. Log in to the VMware vSphere Client.
c. Click Hosts and Clusters.
d. Expand Datacenter and PFNC cluster to view the VMware ESXi edge hosts.
e. Click ESXi edge host > Configure > VMkernel adapters.
f. Click Add Networking... to open the wizard.
g. At the VMkernel Network Adapter screen, leave the default values and click Next.
h. At the Select an existing network screen, leave the default values and click Browse....
i. Select pfnc-vsan-vSAN-114 and click OK.
j. Click Enable vSAN and click Next.
k. Select Use static IPv4 settings.
l. Enter the IPv4 address and Subnet mask.
m. Click Next > Finish.
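
To verify the vSAN VMkernel adapter created in step 11, you can check the adapter and test connectivity from an SSH session on the host. This is a sketch; vmkX and the peer IP address are placeholders for the new vSAN VMkernel interface and the vSAN IP address of another NSX-T Edge host, and the -s 8972 -d options assume jumbo frames are enabled on the vSAN network (omit them otherwise):

esxcli network ip interface ipv4 get

vmkping -I vmkX -s 8972 -d <vSAN IP of another NSX-T Edge host>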


Add the VMware NSX-T Edge node to edge_dvswitch1
Use this procedure to add the new VMware NSX-T Edge node to the edge_dvswitch1.

Prerequisites
Ensure that you have access to the management VMware vSphere Client.

Steps
1. Log in to the VMware vSphere Client.
2. Click Networking.
3. Expand PowerFlex Customer - Datacenter.
4. Right-click edge_dvswitch1 and select Add and Manage Hosts.
5. Select Add hosts and click Next.
6. Click +New Hosts, select the installed VMware NSX-T Edge nodes to add, and click OK.
7. Click Next.
8. In Manage Physical Network Adapters, perform the following:
a. Select vmnic1 and click Assign Uplink.
b. Select Uplink 1 and click OK.
c. Select vmnic3 and click Assign Uplink.
d. Select Uplink 2 and click OK.
e. Select vmnic4 and click Assign Uplink.
f. Select Uplink 3 and click OK.
g. Select vmnic5 and click Assign Uplink.
h. Select Uplink 4 and click OK.
i. Click Next.
9. Click Next > Next.
10. In the Ready to Complete screen, review the details, and click Finish.

Patch and install drivers for VMware ESXi


Use this procedure if VMware ESXi drivers differ from the current Intelligent Catalog. Patch and install the VMware ESXi drivers
using the VMware vSphere Client.

Prerequisites
Apply all VMware ESXi updates before installing or loading hardware drivers.
NOTE: This procedure is required only if the ISO drivers are not at the proper Intelligent Catalog level.

Steps
1. Log in to the VMware vSphere Client.
2. Click Hosts and Clusters.
3. Locate and select the VMware ESXi NSX-T Edge host that you installed.
4. Select Datastores.
5. Right-click the datastore name and select Browse Files.
6. Select the Upload icon (to upload file to the datastore).
7. Browse to the Intelligent Catalog folder or downloaded current solution Intelligent Catalog files.
8. Select the VMware ESXi patch .zip files according to the current solution Intelligent Catalog and node type and click OK to
upload.
9. Select the driver and vib files according to the current Intelligent Catalog and node type and click OK to upload.
10. Click Hosts and Clusters.


11. Locate the VMware ESXi host, right-click, and select Enter Maintenance Mode.
12. Open an SSH session with the VMware ESXi host using PuTTy or a similar SSH client.
13. Log in as root.
14. Type cd /vmfs/volumes/PFNC-<nsx-t edge host name>-DASXX, where XX identifies the local datastore that is assigned to the VMware ESXi server.
15. To display the contents of the directory, type ls.
16. If the directory contains vib files, type esxcli software vib install -v /vmfs/volumes/PFNC-<nsx-t edge host name>-DASXX/patchname.vib to install the vib. These vib files can be individual drivers that are absent from the larger patch bundle and must be installed separately.
17. Perform either of the following depending on the VMware ESXi version:
a. For VMware ESXi 7.0, type esxcli software vib update -d /vmfs/volumes/DAS<name>/VMware-
ESXi-7.0<version>-depot.zip.
b. For VMware ESXi 6.x, type esxcli software vib install -d /vmfs/volumes/DASXX/<ESXI-patch-
file>.zip
18. Type reboot to reboot the host.
19. Once the host completes rebooting, open an SSH session with the VMware ESXi host, and type esxcli software vib
list |grep net-i to verify that the correct drivers loaded.
20. Select the host and click Exit Maintenance Mode.
21. Update the test plan and host tracker with the results.
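
As an additional check after exiting maintenance mode, you can confirm the running VMware ESXi build and the installed driver versions against the Intelligent Catalog. This is a sketch; <driver name> is a placeholder for the driver you installed:

esxcli system version get

esxcli software vib list | grep -i <driver name>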


Part X: Completing the expansion
Use this section to complete the expansion of a PowerFlex appliance.
This section covers the following procedures to complete the expansion of a PowerFlex appliance:
● Update VMware ESXi settings
● Updating the rebuild and rebalance settings
● Performance tuning
● Updating the Storage Data Client parameters
● Post-installation tasks


Chapter 61: Update VMware ESXi settings
Update the VMware settings for NTP, SNMP, system log, shell warnings, advanced settings, and power management settings
on VMware ESXi hosts, according to the Workbook.

About this task


Verify settings from the Workbook against existing hosts, confirm with the customer, and apply them to the expansion nodes.

Prerequisites
Verify that VMware ESXi is installed and hosts are added to VMware vCenter.

Steps
1. Obtain the NTP server IP addresses from the Workbook or follow these steps to obtain the IP addresses from VMware
vCenter:
a. Log in to the VMware vSphere Client.
b. Select one of the VMware ESXi PowerFlex nodes from the existing cluster and click Configure.
c. Select and expand the System option.
d. Select Time Configuration.
e. Record the NTP server IP address.
2. Select each of the new expansion PowerFlex nodes and click Configure.
3. Select System.
4. Select Time Configuration.
5. Click Edit to edit Network Time Protocol.
6. Select Use Network Time Protocol to enable the NTP client.
7. Enter the IP address of the NTP servers in NTP Servers.
8. Click NTP Service Startup Policy > Start and Stop with Host.
9. Click Start NTP Service and click OK.
10. Suppress the SSH warning and complete the following tasks:
a. Select each of the new expansion PowerFlex nodes and click Configure.
b. Click Advanced System Settings > Edit.
c. Filter for SSH and change the value of UserVars.SuppressShellWarning to 1. Click OK.
11. Run the commands in the VMware ESXi shell (Alt-F1 from the DCUI) or through a remote SSH session to the VMware ESXi
host, as follows:
a. Obtain SNMP details such as CommunityString and IPaddressOfTarget by running the following command on one of the existing VMware ESXi nodes in the cluster:

esxcli system snmp get

b. Record the information from Step 11a and use it for Step 11c.
c. To configure SNMP, configure the IP address of the target, according to the Workbook:

esxcli system snmp set --communities=<CommunityString>


esxcli system snmp set --targets=<IPaddressOfTarget>@162/<CommunityString>
esxcli system snmp set --syslocation=<SystemLocation>
esxcli system snmp set --enable true
esxcli system snmp test

d. To configure the system log, configure the log host IP address according to the Workbook:

esxcli system syslog config set --default-rotate=20 --default-size=10240 --loghost=udp://<PowerFlex Manager_IP>:514
esxcli system syslog config logger set --id=hostd --rotate=80 --size=10240
esxcli system syslog config logger set --id=vmkernel --rotate=80 --size=10240
esxcli system syslog config logger set --id=fdm --rotate=80
esxcli system syslog config logger set --id=vpxa --rotate=20

e. To configure power management settings:

esxcli system settings advanced set --option=/Power/CpuPolicy --string-value="High Performance"

12. Update the host tracker with the results.


13. Reboot the VMware ESXi hosts to make the changes persistent.
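
To confirm that the SNMP and syslog settings from step 11 were applied, you can read the configuration back on each expansion node. This is a quick check using standard esxcli commands; the reload command restarts the syslog service so that the new log host settings take effect:

esxcli system snmp get

esxcli system syslog config get

esxcli system syslog reload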


Chapter 62: Updating the rebuild and rebalance settings

Set rebuild and rebalance settings


Use this procedure to define rebuild and rebalance settings before and after RCM upgrades.

Steps
1. Select the following options to set the network throttling:
a. Log in to PowerFlex GUI presentation server using the primary MDM IP address.
b. From the Configuration tab, click Protection domain and select the protection domain. Click Modify and choose
the Network Throttling option from the menu. From the pop-up window, verify that Unlimited is selected for all the
parameters.
c. Click Apply.
2. Select the following options to set the I/O priority:
a. Log in to the PowerFlex GUI presentation server using the primary MDM IP address.
b. From the Configuration tab, click Storage Pool and select the storage pool. Click Modify and choose the I/O Priority
option from the menu to view the current policy settings.
c. Before an RCM upgrade, set the following policies:

Policy settings Values


Rebuild policy Unlimited
Rebalance policy Unlimited
Migration policy Retain the default value
Maintenance mode policy Limit concurrent I/O=10

If PowerFlex GUI presentation server allows, set all values to Unlimited.

d. After an RCM upgrade, set the following policies:

Policy settings Values


Rebuild policy Unlimited
Rebalance policy Limit concurrent I/O=10
Migration policy Retain the default value
Maintenance mode policy Limit concurrent I/O=10

e. Click Apply.
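
If you prefer to apply the same rebuild and rebalance policies from the command line, an equivalent change can typically be made with the SCLI tool on the primary MDM. This is a sketch only; the option names are assumptions based on common SCLI usage and can vary by PowerFlex version, so verify them with scli --help before running, and replace <ProtectionDomain> and <StoragePool> with your own names:

scli --set_rebuild_policy --protection_domain_name <ProtectionDomain> --storage_pool_name <StoragePool> --policy no_limit

scli --set_rebalance_policy --protection_domain_name <ProtectionDomain> --storage_pool_name <StoragePool> --policy limit_concurrent_io --concurrent_io_limit 10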

Set rebuild and rebalance settings using PowerFlex versions prior to 3.5
Use this procedure to define rebuild and rebalance settings for PowerFlex versions prior to 3.5.

Steps
1. Select the following options to set the network throttling:


a. Log in to the PowerFlex GUI.


b. Click Backend > Storage.
c. Right-click protection domain, and select Set Network Throttling.
● Rebalance per SDS: Unlimited
● Rebuild per SDS: Unlimited
● Overall per SDS: Unlimited
2. Select the following options to set the I/O priority:
a. In PowerFlex GUI, click Backend > Storage.
b. Right-click storage pool, and select Set I/O Priority.
c. To set the rebuild:
● Limit concurrent I/O.
● Rebuild concurrent I/O limit: Unlimited
d. To set the rebalance, click the Rebalance tab:
● Favor application I/O
● Rebalance concurrent I/O limit: Unlimited
● Max speed per device: 10240


Chapter 63: Performance tuning
Performance tuning for storage VMs (SVM)
After the expansion process, the storage VMs (SVM) must be tuned for better performance.

Steps
1. Using PuTTy, connect to the SVM.
2. Validate and set jumbo frames on each data interface of the SVM:
a. Type cat /etc/sysconfig/network-scripts/ifcfg-eth1 to check that MTU for eth1 is set.
b. Type cat /etc/sysconfig/network-scripts/ifcfg-eth2 to check that MTU for eth2 is set.
c. For LACP bonding NIC port design:
● Type cat /etc/sysconfig/network-scripts/ifcfg-eth3 to check that MTU for eth3 is set.
● Type cat /etc/sysconfig/network-scripts/ifcfg-eth4 to check that MTU for eth4 is set.
NOTE: A minimum of two logical data networks are supported. Optionally, you can configure four logical data
networks.

3. Optional: If a file does not contain MTU=9000, run the following commands (see the verification example after these steps):

echo 'MTU=9000' >> /etc/sysconfig/network-scripts/ifcfg-eth1
echo 'MTU=9000' >> /etc/sysconfig/network-scripts/ifcfg-eth2
4. Apply the settings to the SVM:

If you are using a... Do this...


PowerFlex GUI presentation server a. Log in to the PowerFlex GUI presentation server.
b. From Configuration, select the SDS.
c. Click MORE, select Enter maintenance mode >
Protected, and click Enter Maintenance Mode.
d. Return to the PuTTy session on the SDS and type
reboot.

PowerFlex version prior to 3.5 on PowerFlex R640/R740xd/ a. Log in to the PowerFlex GUI.
R840 nodes b. Click Backend, and locate SDS.
c. Right-click the SDS, and select Maintenance Mode.
d. Click OK > Close.
e. Return to the PuTTy session on the SDS and type
reboot.

5. Exit SDS from the protection maintenance mode:

If you are using a... Do this...


PowerFlex GUI presentation server a. Go back to the SVM.
b. From Configuration, select the SDS, and the node in
protected maintenance mode.
c. Click MORE > Exit maintenance mode.
PowerFlex version prior to 3.5 on PowerFlex R640/R740xd/ a. Go back to the SVM.
R840 nodes b. Click Backend, and locate the SDS.
c. Right-click the SDS, and select Exit Maintenance
Mode.
d. Click OK > Close.
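
After the SVM reboots, you can verify the MTU settings from steps 2 and 3 and confirm that jumbo frames pass end to end on the data networks. This is a sketch; eth1 and the peer IP address are placeholders for each data interface and the data IP address of another SDS. The ping uses the do-not-fragment flag with an 8972-byte payload, so it succeeds only if the full path supports an MTU of 9000:

ip link show eth1 | grep mtu

ping -M do -s 8972 -I eth1 <data IP of another SDS>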


Performance tuning for PowerFlex storage-only nodes


During the expansion process, the PowerFlex storage-only nodes must be tuned for better performance.

About this task


This procedure contains examples that you must modify, based on the node type and port interface name.

Steps
1. Using PuTTY, connect to the PowerFlex storage-only nodes.
2. Validate and set jumbo frames on each PowerFlex data interface.
a. Verify that MTU for data1 is set by entering:

cat /etc/sysconfig/network-scripts/ifcfg-p2p2

b. Verify that MTU for data2 is set by entering:

cat /etc/sysconfig/network-scripts/ifcfg-p1p2

c. Verify that MTU for data3 is set by entering:

cat /etc/sysconfig/network-scripts/ifcfg-p1p1

NOTE: Data3 is applicable only for two-layer dedicated storage-only nodes.

d. Verify that MTU for data4 is set by entering:

cat /etc/sysconfig/network-scripts/ifcfg-p2p1

NOTE: Data4 is applicable only for two-layer dedicated storage-only nodes.

3. Optional: If the file does not contain MTU=9000, enter:

echo 'MTU=9000' >> /etc/sysconfig/network-scripts/ifcfg-p2p2


echo 'MTU=9000' >> /etc/sysconfig/network-scripts/ifcfg-p1p2

In a two-layer deployment, enter:

echo 'MTU=9000' >> /etc/sysconfig/network-scripts/ifcfg-p2p2


echo 'MTU=9000' >> /etc/sysconfig/network-scripts/ifcfg-p1p2
echo 'MTU=9000' >> /etc/sysconfig/network-scripts/ifcfg-p1p1
echo 'MTU=9000' >> /etc/sysconfig/network-scripts/ifcfg-p2p1

4. Set the I/O scheduler of all flash devices on the embedded operating system node (caching, all-SSD, or hybrid configurations
only).
a. Enter:

chmod +x /etc/rc.d/rc.local

b. Enter lsblk -do NAME,TYPE,ROTA to gather the list of disk devices in the system.
c. Note each NAME line that contains a 0 in the ROTA column, and disk in the TYPE column (these are flash).
d. Edit the file /etc/rc.local, and insert a line for each flash device that you noted in step 4c (alternatively, see the scripted example after these steps).
For example:

vi /etc/rc.local
echo "noop" > /sys/block/sda/queue/scheduler
echo "noop" > /sys/block/sdb/queue/scheduler
echo "noop" > /sys/block/sdc/queue/scheduler
[...]
echo "noop" > /sys/block/sdz/queue/scheduler


NOTE: The device names vary according to the hardware configuration.


5. Apply the settings to the PowerFlex storage-only nodes:
a. Log in to the PowerFlex GUI presentation server, and connect to the primary MDM.
b. From Configuration, select the SDS.
c. Click MORE, select Enter the maintenance mode > Protected, and click Enter Maintenance Mode.
d. Return to your PuTTY session on the Red Hat Enterprise Linux node and enter reboot. This may take a few minutes.
6. From Configuration, select the SDS, and the node in protected maintenance mode.
7. Click OK > Close.
8. Click More > Exit maintenance mode.
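
For the flash-device scheduler change in step 4, a scripted approach can reduce manual editing of /etc/rc.local. This is a sketch only, assuming the same lsblk output format used in step 4b; review the generated lines before rebooting:

# Append a noop scheduler line to /etc/rc.local for every non-rotational disk.
for dev in $(lsblk -dno NAME,TYPE,ROTA | awk '$2 == "disk" && $3 == 0 {print $1}'); do
    echo "echo \"noop\" > /sys/block/${dev}/queue/scheduler" >> /etc/rc.local
done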

Tuning PowerFlex
Use this procedure to apply certain global performance settings after the PowerFlex nodes are added and tuned successfully.

Steps
1. Using PuTTY, connect to the primary MDM.
2. Enter:

scli --login --username <username>


scli --set_performance_parameters --sdc_max_inflight_requests 200 --all_sdc --tech
scli --set_performance_parameters --sdc_max_inflight_data 20 --all_sdc --tech


Chapter 64: Updating the storage data client parameters (VMware ESXi 6.x)
To complete the expansion process for VMware ESXi 6.x, update the storage data client (SDC) parameters.

Steps
1. Log in to the VMware vSphere Client.
2. Click Home, and click Inventories > VxFlex OS.
3. Click Advanced Tasks > Update SDC Parameters, and follow the on-screen instructions to complete the procedure.
4. Verify that the SDC parameters are updated by typing ESX: cat /etc/vmware/esx.conf |grep scini|grep -i
mdm on each VMware ESXi:

The following is sample output:

cat /etc/vmware/esx.conf |grep scini|grep -i mdm


/vmkernel/module/scini/options =
"IoctlIniGuidStr=f32003a5-9b26-4ee9-9e6c-3d7fd80599c6
IoctlMdmIPStr=192.168.41.81,192.168.42.81,192.168.43.81,192.168.44.81
bBlkDevIsPdlActive=1 blkDevPdlTimeoutMillis=60000"


Chapter 65: Post-installation tasks
Use these procedures after the installation process.

Configure SNMP trap forwarding


To configure SNMP trap forwarding, specify the access credentials for the SNMP version you are using and then add the
remote server as a trap destination.

About this task


PowerFlex Manager supports different SNMP versions, depending on the communication path and function. The following table
summarizes the functions and supported SNMP versions:

Function SNMP version


PowerFlex Manager receives traps from all devices, including iDRAC v2
PowerFlex Manager receives traps from iDRAC devices only v3
PowerFlex Manager forwards traps to the network management system v2, v3

NOTE: SNMPv1 is supported wherever SNMPv2 is supported.

PowerFlex Manager can receive an SNMPv2 trap and forward it as an SNMPv3 trap.
SNMP trap forwarding configuration supports multiple forwarding destinations. If you provide more than one destination, all
traps coming from all devices are forwarded to all configured destinations in the appropriate format.
PowerFlex Manager stores up to 5 GB of SNMP alerts. Once this threshold is exceeded, PowerFlex Manager automatically
purges the oldest data to free up space.
For SNMPv2 traps to be sent from a device to PowerFlex Manager, you must provide PowerFlex Manager with the community
strings on which the devices are sending the traps. If during resource discovery you selected to have PowerFlex Manager
automatically configure iDRAC nodes to send alerts to PowerFlex Manager, you must enter the community string used in that
credential here.
For a network management system to receive SNMPv2 traps from PowerFlex Manager, you must provide the community
strings to the network management system. This configuration happens outside of PowerFlex Manager.
For a network management system to receive SNMPv3 traps from PowerFlex Manager, you must provide the PowerFlex
Manager engine ID, user details, and security level to the network management system. This configuration happens outside of
PowerFlex Manager.

Prerequisites
PowerFlex Manager and the network management system use access credentials with different security levels to establish
two-way communication. Review the access credentials that you need for each supported version of SNMP. Determine the
security level for each access credential and whether the credential supports encryption.
To configure SNMP communication, you need the access credentials and trap targets for SNMP, as shown in the following
table:

If adding... You must know...


SNMPv2 Community strings by which traps are received and forwarded
SNMPv3 User and security settings


Steps
1. Log in to PowerFlex Manager.
2. On the menu bar, click Settings, and click Virtual Appliance Management.
3. On the Virtual Appliance Management page, in the SNMP Trap Configuration section, click Edit.
4. To configure trap forwarding as SNMPv2, click Add community string. In the Community String box, provide the
community string by which PowerFlex Manager receives traps from devices and by which it forwards traps to destinations.
You can add more than one community string. For example, add more than one if the community string by which PowerFlex
Manager receives traps differs from the community string by which it forwards traps to a remote destination.

NOTE: An SNMPv2 community string that is configured in the credentials during discovery of the iDRAC or through
management is also displayed here. You can create a new community string or use the existing one.

5. To configure trap forwarding as SNMPv3, click Add User. Enter the Username, which identifies the ID where traps are
forwarded on the network management system. The username must be at most 16 characters. Select a Security Level:

Security Level   Details        Description                                    authPassword                            privPassword
Minimal          noAuthNoPriv   No authentication and no encryption            Not required                            Not required
Moderate         authNoPriv     Messages are authenticated but not encrypted   Required (MD5, at least 8 characters)   Not required
Maximum          authPriv       Messages are authenticated and encrypted       Required (MD5, at least 8 characters)   Required (DES, at least 8 characters)

Note the current engine ID (automatically populated), username, and security details. Provide this information to the remote
network management system so it can receive traps from PowerFlex Manager.
You can add more than one user.

6. In the Trap Forwarding section, click Add Trap Destination to add the forwarding details.
a. In the Target Address (IP) box, enter the IP address of the network management system to which PowerFlex Manager
forwards SNMP traps.
b. Provide the Port for the network management system destination. The SNMP Trap Port is 162.
c. Select the SNMP Version for which you are providing destination details.
d. In the Community String/User box, enter either the community string or username, depending on whether you are
configuring an SNMPv2 or SNMPv3 destination. For SNMPv2, if there is more than one community string, select the
appropriate community string for the particular trap destination. For SNMPv3, if there is more than one user-defined,
select the appropriate user for the particular trap destination.
7. Click Save.
The Virtual Appliance Management page displays the configured details as shown below:
Trap Forwarding <destination-ip>(SNMP v2 community string or SNMP v3 user)

NOTE: To configure nodes with PowerFlex Manager SNMP changes, go to Settings > Virtual Appliance
Management, and click Configure nodes for alert connector.


Configure the node for syslog forwarding


Use this procedure to configure the new node for PowerFlex Manager syslog forwarding.

About this task


For detailed information, see the PowerFlex Manager online help.

Steps
1. Log in to PowerFlex Manager.
2. From the menu bar, click Settings and click Virtual Appliance Management.
3. From the Syslog Forwarding section, click Edit.
4. Click Add syslog forward
5. For Host, enter the destination IP address of the remote server to which you want to forward syslogs.
6. Enter the destination Port where the remote server is accepting syslog messages.
7. Select the network Protocol used to transfer the syslog messages. The default is TCP.
8. Optionally enter the Facility and Severity Level to filter the syslogs that are forwarded. The default is to forward all.
9. Click Save to add the syslog forwarding destination.

Verifying PowerFlex performance profiles


Verify PowerFlex performance profiles on the new storage pool or protection domain using the PowerFlex GUI.

About this task


NOTE: When you expand a PowerFlex node, do not apply a high performance profile if you are using a custom performance
profile. Applying the high performance profile overwrites the custom performance profile in use.

Steps
1. If using PowerFlex GUI presentation server, perform the following steps to set the performance profile for SDS:
a. Go to Configuration > SDS.
b. Select the SDS and click Modify > Modify Performance Profile.
c. Verify that the setting is High.
2. If using PowerFlex GUI presentation server, perform the following steps to set the performance profile for SDC:
a. Go to Configuration > SDC.
b. Select the SDC and click Modify > Modify Performance Profile.
c. Verify that the setting is High.
3. If using a PowerFlex version prior to 3.5:
a. Click Backend > Storage.
b. Right-click the new PD, and click Set Performance Profile for all SDSs.
c. Verify that the setting is High:
● If it is set to Default, click High and click OK.
● If it is set to High, click Cancel.
d. Click Frontend > SDCs.
e. Right-click the PowerFlex system, and click Set Performance Profile for all SDCs.
f. Verify that the setting is High.
g. Right-click the PowerFlex system, and click Set Performance Profile for all MDMs.
h. Verify that the setting is High.
i. Update the test plan and host tracker with the results.


Configure an authentication enabled SDC


After the PowerFlex cluster is enabled with SDC authentication, you must configure the new SDC after installing the client. This
procedure is applicable only if you are using PowerFlex GUI presentation server.

Prerequisites
● Ensure you have the following:
○ Primary MDM IP address
○ Credentials to access the PowerFlex cluster
○ The IP addresses of the new cluster members
● Ensure you have installed and added the SDCs using PowerFlex Manager or manually.
NOTE: The SDC status is displayed as Disconnected as it cannot authenticate to the system.

Steps
1. Use SSH to log in to the primary MDM.
2. Log in to the PowerFlex cluster using the SCLI tool.
3. Generate and record a new Challenge-Handshake Authentication Protocol (CHAP) secret for the replacement node SDC
using scli --generate_sdc_password --sdc_IP <IP of SDC> --reason "CHAP setup - expansion".
4. Log in to the SDC host.
5. List the current scini parameters of the host.
For example:

esxcli system module parameters list -m scini | grep Ioctl


IoctlIniGuidStr string 10cb8ba6-5107-47bc-8373-5bb1dbe6efa3
Ini Guid, for example: 12345678-90AB-CDEF-1234-567890ABCDEF
IoctlMdmIPStr string 172.16.151.40,172.16.152.40
Mdms IPs, IPs for MDM in same cluster should be comma separated.
To configure more than one cluster use '+' to separate between IPs. For example:
10.20.30.40,50.60.70.80+11.22.33.44. Max 1024 characters
IoctlMdmPasswordStr string
Mdms passwords. Each value is <ip>-<password>. Multiple passwords separated by ';' sign.
For example: 10.20.30.40-AQAAAAAAAACS1pIywyOoC5t;11.22.33.44-tppW0eap4cSjsKIc
Max 1024 characters

NOTE: The third parameter, IoctlMdmPasswordStr is empty.

6. Using esxcli, configure the driver with the existing and new parameters. To specify multiple IP addresses, use a semi-colon
(;) between the entries, as shown in the following example:

esxcli system module parameters set -m


scini -p "IoctlIniGuidStr=10cb8ba6-5107-47bc-8373-5bb1dbe6efa3
IoctlMdmIPStr=172.16.151.40,172.16.152.40 IoctlMdmPasswordStr=172.16.151.40-
AQAAAAAAAAA8UKVYp0LHCFD59BrnExNPvKSlGfLrwAk;172.16.152.40-
AQAAAAAAAAA8UKVYp0LHCFD59BrnExNPvKSlGfLrwAk"

7. To apply the SDC configuration, reboot the VMware ESXi nodes.


8. For a PowerFlex hyperconverged node, use the PowerFlex GUI presentation server or SCLI tool to place the corresponding
SDS into maintenance mode.
9. After the SDS is in maintenance mode, shut down the SVM.
10. Place the VMware ESXi host in maintenance mode. As the SDC is not configured, no workloads should be running on the
node.
11. Reboot the VMware ESXi host.
12. After the host has completed rebooting, remove it from the maintenance mode and power on the SVM (if available).
13. Remove the SDS from maintenance mode (if applicable).
14. Repeat steps 5 through 13 for all VMware ESXi SDC hosts.


Add a Windows or Linux authenticated SDC


Use the drv_cfg utility on a Windows or Linux machine to modify both a running and persistent configuration. Use the
following examples to perform the task on a Windows or Linux based PowerFlex node.

About this task


For Windows PowerFlex compute-only nodes, only firmware upgrades are supported.

Prerequisites
Only one IP address is required for the command to identify the MDM to modify.

Steps
1. Press Windows +R.
2. To open the command line interface, type cmd.
3. For Windows, type drv_cfg --set_mdm_password --ip <MDM IP> in the drv_cfg utility. For example:
drv_cfg --set_mdm_password --ip <MDM IP> --port 6611 --password <secret>

4. For Linux, type /opt/emc/scaleio/sdc/bin/drv_cfg --set_mdm_password --ip <MDM IP>. For example:
/opt/emc/scaleio/sdc/bin/drv_cfg --set_mdm_password --ip <MDM IP> --port 6611 --password
<secret> --file /etc/emc/scaleio/drv_cfg.txt

5. Repeat until all new SDCs are connected.
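
To confirm that an SDC can now reach the MDM cluster, you can query the configured MDMs from the same host. This is a hedged example using the drv_cfg query option; the Linux path shown matches the install location used above:

/opt/emc/scaleio/sdc/bin/drv_cfg --query_mdms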

Updating System Configuration Reporter


Update configuration data in System Configuration Reporter (SCR) when expanding PowerFlex appliance.
Install the latest version of SCR, update the PowerFlex appliance configuration and health data, and export the data to a
Configuration Reference Guide to compare with the test plan.
SCR changes frequently, so it is important to update it before collecting system configuration data. If you do not have access to
the Dell Download Center, request access through Dell Service-Now.

Updating and running System Configuration Reporter


Update and run System Configuration Reporter (SCR) before updating configuration data.

About this task

NOTE: SCR is not supported in the embedded operating system.

Steps
1. Go to Dell Download Center.
2. Click Internal Use Only: Misc Reports and Internal Documentation > Internal Use Only > Dell EMC Systems
Configuration Reporter.
3. Download the most recent version of Dell EMC SCR - Sprint bits.zip.
4. Extract Dell EMC SCR - Sprint bits.zip. If you are using the jump VM, download to the D: drive.
5. Extract vcesystems-configuration-reporter-bundle.zip.
6. Double-click dell-hci-systems-configuration-reporter.bat. SCR opens a command window and runs in the
default browser.


Configuring System Configuration Reporter


Use this procedure to configure System Configuration Reporter (SCR).

Steps
1. Click Manage Collections.
2. Click Create New Profile > Manually Create Profile.
3. Enter collection name and PowerFlex appliance serial number.
4. Click + Component to add PowerFlex appliance components. These components include:
● storage VM (SVM)
● PowerFlex storage array
● Dell EMC PowerFlex nodes
● VMware vCenter Server
● Cisco Nexus 3K network switches
● Optional: Cisco Nexus 9K switches
5. Select storage VM (SVM) and enter the IP address range and credentials for all SVMs, including PowerFlex storage-only
nodes.
6. Select PowerFlex storage array and enter PowerFlex gateway information.
7. Select Dell PowerEdge Server Units, and enter iDRAC IP range along with login credentials and SNMP string.
8. Select VMware vCenter.
9. If you are using PowerFlex R640/R740xd/R840 nodes, enter VXMA/Controller VC IP and credentials.
10. If you are using PowerFlex R650/R750/R6525 nodes, enter PFMCController VC IP and credentials.
11. Repeat for customer or production VC. If the vCenter is an enterprise vCenter, SCR runs for a long time to collect all details.
Use the filter feature to limit the amount of data collected.
12. Click Choose under the vCenter filter box. SCR queries vCenter using the credentials entered and provides a list of the data
centers and clusters. Retrieved individual clusters can be selected or cleared to filter the data collected.
13. Select Nexus 3K Network switches, enter the management switch IP address and credentials, and select Switch Role >
Management.
14. Perform the following steps depending on the Cisco Nexus switch series and switch type (optional):
NOTE: If there are multiple switches of the same type and role, then assign a different Role Group Tag to each pair of
redundant switches.

For a... Do this...


Cisco Nexus 3000 series network switches Select Nexus 3K Network switches, enter the access
switch IP address range and credentials, and select Switch
Role > Aggregate.
Cisco Nexus 9000 series access switches Select Nexus 9K Network switches, enter the access
switch IP address range and credentials, and select Switch
Role > Aggregate.
Cisco Nexus 9000 series access switches (core switch) Select Nexus 9K Network switches, enter the access
switch IP address range and credentials, and select Switch
Role > Core.

Collecting System Configuration Reporter data


Collect System Configuration Reporter (SCR) data for the expanded PowerFlex appliance.

About this task


If you close the command prompt, SCR terminates. Close the SCR browser and restart SCR.
NOTE: PowerFlex Manager also performs critical configuration checks against your resources and services. See the
PowerFlex Manager documentation for more information.


Steps
1. Perform one of the following:
● Select at the system level to collect data for all components.
● Individually select the components for which you want to collect data.
2. Optional: If PowerFlex appliance uses Cisco Nexus aggregation switches to aggregate multiple sets of Cisco Nexus access
switches and you chose to create a new profile for the switches, select both of the systems you created before.
3. Click Start Multi-Collection. You must have at least one component selected. Data collection begins, which you can
monitor on the screen and in the command prompt.
When collection completes, you can see a green check mark against the components that completed collection, and a red
check mark against the components that failed collection.
4. Optional: Investigate any failed components for proper credentials and IP addresses. You can confirm credentials by logging
directly into the device.
5. Optional: If a component fails, and if you modify the IP or credentials, click Start Multi-Collection to run the collection
again.
6. Optional: Repeat as necessary until data is collected for all components.

Creating a Configuration Reference Guide


Use the data collected by System Configuration Reporter (SCR) to create a Configuration Reference Guide (CRG) to compare
to the test plan.

Steps
1. Click Review Collected Data > Export Data Collection.
2. Select CRG as the Type of report and select Assessment Report and click Download. SCR initiates reporting, which you
can monitor on the screen and in the command prompt.
3. Copy the resulting .xlsx file in the Downloads folder, or upload it to a temporary FTP site for retrieval later. FTP
accounts are available from ftpaccreq.emc.com.
4. The SCR output spreadsheet lists the components on which the data is collected. Ensure that the data is collected for all
components as per the test plan.
5. Select one of the following to exit SCR:
● Exit Only: Choose this option to exit SCR. This is the normal mode of operation.
● Exit and Purge Data: Choose this option if you do not want to save the collected data. This option is used only under rare conditions.
● Exit, Purge Data, and Configuration: Choose this option to delete the collected data and the profile that you created.

Getting technical support for System Configuration Reporter


If you are unsuccessful troubleshooting System Configuration Reporter (SCR), include the following information to obtain
technical support.
Verify all IP addresses and credentials before contacting Dell EMC. You can confirm credentials by logging directly into the
device.
Send an email to scr.dellemc@dell.com with the following information:
● A brief explanation of the issue, including the product (and component, if component-specific) related to your request, as
well as any error messages and screenshots.
● The SCR build number, found in the SCR application directory in the following format: SCRBuildNumber.txt.
● The system logs and SCR data directory that are stored in C:\Users\<username>\.vce (this directory is hidden).
Compress the .vce folder as a ZIP file and upload it to a temporary FTP account. FTP accounts are available from:
ftpaccreq.emc.com.
This email is sent to the SCR team, and a team member will respond to you promptly.


Disabling IPMI for PowerFlex nodes


Use this procedure to disable IPMI for PowerFlex nodes using a Windows-based jump server.

Prerequisites
Ensure that iDRAC command line tools are installed on the system jump server.

Steps
1. If you are using an iDRAC version 5.0.10 or higher:

For a... Do this...


Single PowerFlex node a. From the jump server, open a PowerShell session.
b. Type racadm -r x.x.x.x -u root -p yyyyy
set iDRAC.IPMILan.Enable Disabled.
where x.x.x.x is the IP address of the iDRAC node and
yyyyy is the iDRAC password.

Multiple PowerFlex nodes a. From the jump server, at the root of the C: drive, create
a folder named ipmi.
b. From the File Explorer, go to View and select the File
Name extensions check box.
c. Open a notepad file, and paste this text into the file:
powershell -noprofile -executionpolicy
bypass -file ".\disableIPMI.ps1"
d. Save the file, and rename it runme.cmd in C:\ipmi.
e. Open a notepad file, and paste this text
into the file: import-csv $pwd\hosts.csv
-Header:"Hosts" | Select-Object
-ExpandProperty hosts | % {racadm
-r $_ -u root -p XXXXXX set
iDRAC.IPMILan.Enable Disabled}
where XXXXXX is the customer password that must be
changed.
f. Save the file, and rename it disableIPMI.ps1 in C:
\ipmi.
g. Open a notepad file, and list all of the iDRAC IP
addresses that need to be included, one per line.

h. Save the file, and rename it hosts.csv in C:\ipmi.


i. Open a PowerShell session, and go to C:\ipmi.
j. Type .\runme.cmd.

2. If you are using an iDRAC version prior to 5.0.10:


For a... Do this...


Single PowerFlex node a. From the jump server, open a PowerShell session.
b. Type racadm -r x.x.x.x -u root -p yyyyy
config -g cfgIpmiLan -o cfgIpmiLanEnable
0.
where x.x.x.x is the IP address of the iDRAC node and
yyyyy is the iDRAC password.

Multiple PowerFlex nodes a. From the jump server, at the root of the C: drive, create
a folder named ipmi.
b. From the File Explorer, go to View and select the File
Name extensions check box.
c. Open a notepad file, and paste this text into the file:
powershell -noprofile -executionpolicy
bypass -file ".\disableIPMI.ps1"
d. Save the file, and rename it runme.cmd in C:\ipmi.
e. Open a notepad file, and paste this text
into the file: import-csv $pwd\hosts.csv
-Header:"Hosts" | Select-Object
-ExpandProperty hosts | % {racadm -r $_
-u root -p XXXXXX config -g cfgIpmiLan
-o cfgIpmiLanEnable 0}
where XXXXXX is the customer password that must be
changed.
f. Save the file, and rename it disableIPMI.ps1 in C:
\ipmi.
g. Open a notepad file, and list all of the iDRAC IP
addresses that need to be included, one per line.

h. Save the file, and rename it hosts.csv in C:\ipmi.


i. Open a PowerShell session, and go to C:\ipmi.
j. Type .\runme.cmd.

Disabling IPMI for PowerFlex nodes using an embedded operating system-based jump server
Use this procedure to disable IPMI for PowerFlex nodes using an embedded operating system-based jump server.

Prerequisites
Ensure that iDRAC command line tools are installed on the embedded operating system-based jump server.

Steps
1. If you are using an iDRAC version 5.0.10 or higher:


For a... Do this...


Single PowerFlex node a. From the jump server, open a terminal session.
b. Type racadm -r x.x.x.x -u root -p yyyyy
set iDRAC.IPMILan.Enable Disabled.
where x.x.x.x is the IP address of the iDRAC node and
yyyyy is the iDRAC password.

Multiple PowerFlex nodes a. From the jump server, open a terminal window.
b. Edit the idracs.txt file and enter the IP address, one
per line for each iDRAC.
c. Save the idracs.txt.
d. Type while read line ; do echo "$line" ;
racadm -r $line -u root -p yyyyy set
iDRAC.IPMILan.Enable Disabled; done <
idracs.txt
where yyyyy is the iDRAC password.

2. If you are using an iDRAC version prior to 5.0.10:

For a... Do this...


Single PowerFlex node a. From the jump server, open a terminal session.
b. Type racadm -r x.x.x.x -u root -p yyyyy
config -g cfgIpmiLan -o cfgIpmiLanEnable
0.
where x.x.x.x is the IP address of the iDRAC node and
yyyyy is the iDRAC password.

Multiple PowerFlex nodes a. From the jump server, open a terminal window.
b. Edit the idracs.txt file and enter the IP address, one
per line for each iDRAC.
c. Save the idracs.txt.
d. Type while read line ; do echo "$line" ;
racadm -r $line -u root -p yyyyy config
-g cfgIpmiLan -o cfgIpmiLanEnable 0;
done < idracs.txt
where yyyyy is the iDRAC password.
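
To confirm that IPMI over LAN is disabled after running either procedure, you can read the attribute back with racadm. This is a hedged check; the get syntax applies to iDRAC versions that support the set/get attribute commands, and x.x.x.x and yyyyy are the same placeholders used above:

racadm -r x.x.x.x -u root -p yyyyy get iDRAC.IPMILan.Enable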
