
HPE 3PAR StoreServ 20000 Storage

Service and Upgrade Guide


Service Edition

Abstract
This Hewlett Packard Enterprise (HPE) guide provides authorized technicians with information
about servicing and upgrading the hardware components of the HPE 3PAR StoreServ 20000
storage systems. This document is for HEWLETT PACKARD ENTERPRISE INTERNAL USE
ONLY.

Part Number: Q1H28-90711


Published: August 2017
© Copyright 2015, 2017 Hewlett Packard Enterprise Development LP

Notices
The information contained herein is subject to change without notice. The only warranties for Hewlett
Packard Enterprise products and services are set forth in the express warranty statements accompanying
such products and services. Nothing herein should be construed as constituting an additional warranty.
Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions contained
herein.
Confidential computer software. Valid license from Hewlett Packard Enterprise required for possession,
use, or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer
Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government
under vendor's standard commercial license.
Links to third-party websites take you outside the Hewlett Packard Enterprise website. Hewlett Packard
Enterprise has no control over and is not responsible for information outside the Hewlett Packard
Enterprise website.

Acknowledgments
Intel®, Itanium®, Pentium®, Intel Inside®, and the Intel Inside logo are trademarks of Intel Corporation in
the United States and other countries.
Microsoft® and Windows® are either registered trademarks or trademarks of Microsoft Corporation in the
United States and/or other countries.
Adobe® and Acrobat® are trademarks of Adobe Systems Incorporated.
Java® and Oracle® are registered trademarks of Oracle and/or its affiliates.
UNIX® is a registered trademark of The Open Group.
Contents

Getting started.........................................................................................8
Redeem and register HPE 3PAR licenses.................................................................................... 8
Precautions and advisories...........................................................................................................8
Use proper tools............................................................................................................................8
Handling Field Replaceable Units (FRU)...................................................................................... 9
Preventing Electrostatic Discharge (ESD)......................................................................... 9

Servicing the StoreServ........................................................................10


Preventing damage when replacing components....................................................................... 10
Facilitating the disassembly of a storage unit............................................................................. 10
Controller node and internal components................................................................................... 11
Replacing a controller node..............................................................................................11
Replacing controller node internal components............................................................... 15
Replacing a SATA node drive in a controller node................................................ 17
Replacing a DIMM of a controller node................................................................. 22
Replacing a Time of Day (TOD) battery in a controller node.................................27
Replacing a controller node power supply....................................................................... 31
Replacing a node power supply tray................................................................................ 34
Replacing a Backup Battery Unit (BBU)...........................................................................37
Replacing a fan module................................................................................................... 39
Replacing a PCI adapter.................................................................................................. 41
Replacing a system LED status board............................................................................. 44
Replacing a controller node L-frame assembly................................................................46
Drive chassis and internal components...................................................................................... 61
Replacing drive enclosure components........................................................................... 62
Replacing a power supply in a drive enclosure..................................................... 62
Replacing a fan module in a drive enclosure........................................................ 63
Replacing an I/O module in a drive enclosure.......................................................64
Replacing mini-SAS cables for a drive enclosure................................................. 66
Replacing a drive chassis................................................................................................ 68
Drive guidelines................................................................................................................70
Replacing a drive............................................................................................................. 70
Service Processor.......................................................................................................................72
Replacing a physical Service Processor.......................................................................... 72
System power............................................................................................................................. 74
Node rack single-phase PDU...........................................................................................74
Replacing a horizontal single-phase PDU in a node rack................................................ 80
Node rack three-phase PDU............................................................................................ 83
Replacing a horizontal three-phase PDU in a node rack................................................. 91
Expansion rack single-phase PDU...................................................................................94
Expansion rack three-phase PDU....................................................................................98
Replacing a vertical single-phase or three-phase PDU in an expansion rack............... 103

Upgrading the StoreServ....................................................................106


Controller node and internal components................................................................................. 106
Adding controller nodes................................................................................................. 107
Adding PCI adapters.......................................................................................................111
Drive chassis and drives........................................................................................................... 115

Adding drives..................................................................................................................116
Adding a drive chassis....................................................................................................117

Parts catalog........................................................................................122
Parts catalog for 20000 models................................................................................................ 122
Cable parts list................................................................................................................122
Controller node enclosure parts list................................................................................123
Drive enclosure parts list................................................................................................125
Service processor parts list............................................................................................ 128
Parts catalog for 20000 R2 models...........................................................................................128
Cable parts list................................................................................................................128
Controller node enclosure parts list................................................................................129
Drive enclosure parts list................................................................................................131
Service processor parts list............................................................................................ 133

Component LEDs................................................................................ 134


Storage system status LEDs.....................................................................................................134
Controller node LEDs................................................................................................................135
4-port 12 Gb/s SAS adapter LEDs............................................................................................136
4-port 16 Gb/s FC host adapter LEDs...................................................................................... 138
2-port 10 Gb/s iSCSI host adapter LEDs.................................................................................. 139
2-port 10 GbE NIC host adapter LEDs..................................................................................... 140
Power supply unit LEDs (controller node enclosure)................................................................ 141
Backup battery unit LEDs (controller node enclosure)..............................................................142
Fan module LEDs (controller node enclosure)......................................................................... 143
Drive enclosure LEDs............................................................................................................... 143
Drive LEDs (drive enclosure).................................................................................................... 144
I/O module LEDs (drive enclosure)...........................................................................................146
Fan module LEDs (drive enclosure)......................................................................................... 148
Power supply unit LEDs (drive enclosure)................................................................................ 149
Physical service processor LEDs..............................................................................................150

HPE 3PAR Service Processor............................................................ 153


Training video about the HPE 3PAR SP 5.0 and HPE 3PAR OS 3.3.1.....................................153
Network and firewall support access........................................................................................ 153
Firewall and proxy server configuration......................................................................... 153
Connection methods for the SP................................................................................................ 154
Connecting to the SP 5.x from a web browser...............................................................155
Connecting to the SP 4.x from a web browser...............................................................155
Connecting to the SP Linux console through an SSH....................................................155
Connecting to the physical SP from a laptop................................................................. 155
Interfaces for the HPE 3PAR SP...............................................................................................156
Accessing the SP 5.x SC interface................................................................................ 157
Accessing the SP 5.x TUI.............................................................................................. 157
Accessing the SP 4.x SPOCC interface.........................................................................157
Accessing the SP 4.x SPMaint interface directly........................................................... 158
Accessing the CLI session from the SP 5.x SC interface.............................................. 158
Accessing the interactive CLI interface from the SP 5.x TUI......................................... 158
Accessing the CLI session from the SP 4.x SPOCC interface.......................................158
Accessing the interactive CLI interface from the SP 4.x SPMaint interface................... 159
Check health action from the HPE 3PAR SP............................................................................ 159
Checking the health of the storage system using CLI commands................................. 159
Checking the health from the SP 5.x SC interface.........................................................164

Checking health from the SP 4.x SPOCC interface....................................................... 165
Checking health from the SP 4.x SPMaint interface...................................................... 165
Maintenance mode action from the SP.....................................................................................167
Setting maintenance mode from the SP 5.x SC interface..............................................167
Setting maintenance mode from the SP 4.x interactive CLI interface............................167
Setting or modifying maintenance mode from the SP 4.x SPMaint interface.................167
Locate action from the SP.........................................................................................................167
Running the locate action from the SP SC interface......................................................168
Running the locate action from the SP 4.x SPOCC interface........................................ 168
Alert notifications from the SP...................................................................................................168
Browser warnings......................................................................................................................169
Clear Internet Explorer browser warning........................................................................169
Clear Google Chrome browser warning.........................................................................170
Clear Mozilla Firefox browser warning........................................................................... 171

Accounts and credentials for service and upgrade.........................173


Training video about the passwords for support accounts........................................................174
HPE 3PAR Service Processor accounts for service and upgrade............................................ 174
Storage system accounts for service and upgrade...................................................................176
Time-based password (strong password)................................................................................. 178
Encryption-based password (strong password)........................................................................178
Retrieving HPE 3PAR SP account passwords from HPE StoreFront Remote..........................179
Communication of the password via phone.............................................................................. 180
Troubleshooting issues with passwords....................................................................................184

Connecting the power and data cables............................................ 186


Power cables............................................................................................................................ 186
Data cables............................................................................................................................... 191
Direct Connect Cable Configuration...............................................................................191
Daisy-Chained Cable Configuration...............................................................................194
Order of Controller Node Port Use for Daisy-Chained Cable Connections......... 196
Data Cable Slot and Port Details................................................................................... 196

Controller node rescue.......................................................................198


Node-to-Node Rescue processes.............................................................................................198
Node-to-Node Rescue initiated automatically................................................................ 199
Node-to-Node Rescue initiated by CLI command..........................................................199
SP-to-Node rescue................................................................................................................... 200
Accessing the DL120 SP Console Using a Laptop ....................................................... 201
SP-to-Node rescue with SP 4.x software....................................................................... 204
SP-to-Node rescue with SP 4.x software for a non-encrypted system................205
SP-to-Node rescue with SP 4.x software for an encrypted system..................... 205
SP-to-Node rescue with SP 5.x software....................................................................... 206
SP-to-Node rescue with SP 5.x software for a non-encrypted system................206
SP-to-Node rescue with SP 5.x software for an encrypted system..................... 206

Deinstalling the storage system and restoring the storage system to factory defaults..............208
System Inventory...................................................................................................................... 208
Uninstalling the system............................................................................................................. 209
Restoring the system to factory defaults...................................................................................214

Troubleshooting.................................................................................. 217
Troubleshooting issues with the storage system...................................................................... 217
Alerts issued by the storage system.............................................................................. 217
Alert notifications by email from the service processor....................................... 217
Alert notifications in the HPE 3PAR StoreServ Storage 3PAR Service Console..........218
Viewing alerts...................................................................................................... 218
HPE 3PAR BIOS Error Codes........................................................................................219
I/O module error codes.................................................................................................. 219
I/O Module LEDs............................................................................................................226
Collecting log files.......................................................................................................... 227
Collecting the HPE 3PAR SmartStart log files.....................................................227
Collecting SP log files from the SC interface.......................................................227
Collecting SP log files from the SPOCC interface............................................... 228
Health check on the storage system.............................................................................. 228
Checking health of the storage system—HPE 3PAR SSMC...............................228
Checking health of the storage system—HPE 3PAR CLI....................................228
Troubleshooting system components....................................................................................... 232
Troubleshooting StoreServ System Components.......................................................... 233
Alert..................................................................................................................... 234
Cabling................................................................................................................ 234
Cage....................................................................................................................237
Consistency.........................................................................................................245
Data Encryption at Rest (DAR)............................................................................245
Date.....................................................................................................................246
File.......................................................................................................................246
LD........................................................................................................................248
License................................................................................................................ 252
Network............................................................................................................... 252
Node....................................................................................................................254
PD........................................................................................................................258
PDCH.................................................................................................................. 265
Port......................................................................................................................267
Port CRC............................................................................................................. 272
Port PELCRC...................................................................................................... 273
RC....................................................................................................................... 273
SNMP.................................................................................................................. 274
SP........................................................................................................................274
Task..................................................................................................................... 275
VLUN...................................................................................................................276
VV........................................................................................................................277

Websites.............................................................................................. 278

Support and other resources.............................................................279


Accessing Hewlett Packard Enterprise Support....................................................................... 279
Accessing updates....................................................................................................................279
Customer self repair..................................................................................................................280
Remote support........................................................................................................................ 280
Warranty information.................................................................................................................280
Regulatory information..............................................................................................................281
Documentation feedback.......................................................................................................... 281

Identifying physical locations of the logical cage numbers........... 282

Getting started
Before you start servicing, upgrading, or troubleshooting an HPE 3PAR StoreServ 20000 storage system,
make sure to plan and coordinate the process with an authorized Hewlett Packard Enterprise
representative. Proper planning provides a more efficient maintenance process and leads to greater
availability and reliability of the system.
If you require additional assistance, contact Hewlett Packard Enterprise Support.

Redeem and register HPE 3PAR licenses


HPE 3PAR StoreServ products include HPE 3PAR licensing, which enables all system functionality.
Failure to register the license key may limit access and restrict system upgrades.
The Summary Entitlement Certificate is enclosed in a blue envelope in the accessories kit shipped with
the system. The certificate must be redeemed through the Hewlett Packard Enterprise Licensing for
Software portal before you begin installing the hardware and software components.
To redeem the Summary Entitlement Certificate, visit http://www.hpe.com/software/licensing and
register all applicable Hewlett Packard Enterprise software licenses. Use your HPE Passport credentials
or create an HPE Passport profile.
For assistance with registering the Hewlett Packard Enterprise software licenses, visit the Hewlett
Packard Enterprise Support website: http://www.hpe.com/support.

Precautions and advisories


To avoid injury, data loss, and damage, observe these general precautions and advisories when installing
or servicing the storage system:
• Using improper tools can result in damage to the storage system.
• Prepare an ESD work surface by placing an anti-static mat on the floor or on a table near the storage
system. Attach the ground lead of the mat to an unpainted surface of the rack.
• Always use the wrist-grounding strap provided with the storage system. Attach the grounding strap clip
directly to an unpainted surface of the rack.
• Avoid contact between electronic components and clothing, which can carry an electrostatic charge.
• If applicable, ensure that all cables are properly labeled and easily identifiable before you remove a
component.
• Observe local occupational safety requirements and guidelines for heavy equipment handling.
• Do not attempt to move a fully loaded equipment rack.

Use proper tools


The following tools are not required for every service action but are useful, especially when unpacking or
installing the storage system.

CAUTION:
Always wear an electrostatic discharge (ESD) wrist-grounding strap when installing a storage
system hardware part.

• Electrostatic discharge (ESD) wrist or shoe strap


• P2 Phillips-head screwdriver
• T25 and T15 Torx screwdrivers
• Tape and markers or label maker for labeling cables

Handling Field Replaceable Units (FRU)
Use the following preventive guidelines and adhere to any cautionary statements when you are
handling field replaceable units during a servicing, upgrading, or troubleshooting action.

Preventing Electrostatic Discharge (ESD)


Electrostatic discharge (ESD) can damage electrostatic-sensitive devices and microcircuitry.
Precautions such as proper packaging and grounding techniques help prevent damage. To prevent
electrostatic damage, use the following precautions:
• Transport products in electrostatic-safe containers, such as conductive tubes, bags, and boxes.
• Keep static-sensitive parts in their containers until they arrive at static-free workstations.
• Cover workstations with approved static-dissipating material. Use a wrist strap connected to the work
surface, and use properly grounded (earthed) tools and equipment.
• Keep the work area free of non-conductive materials, such as plastic assembly aids and foam packing.
• Ensure that you are always properly grounded (earthed) when touching a static-sensitive component
or assembly.
• Avoid touching pins, leads, and circuitry.
• Always place drives with the printed circuit board assembly-side down.
• Use non-ESD field service tools.

TIP:
If an ESD kit is unavailable, touch an unpainted metal surface to discharge static electricity from
your body before handling a FRU. Repeat the discharge action before handling other FRUs.



Servicing the StoreServ
Use this chapter to perform removal and replacement procedures on the HPE 3PAR StoreServ 20000
storage systems.
For more information on spare part numbers for storage system components listed in this chapter, see the
Parts catalog on page 122.
Videos of the service procedures listed in this chapter are available at the following media library sites:
• https://thesml.itcs.hpecorp.net/ for service personnel.
• https://psml.ext.hpe.com/ for partners.

Preventing damage when replacing components


WARNING:
Some components heat up during operation. Before servicing a component, cautiously determine if
the component is hot. Before removing the component, wait until the component has cooled off.

Static electricity can damage the components. Before removing or replacing a component, observe the
following to prevent damage.
• Remove all ESD-generating materials from your work area.
• Avoid hand contact. Transport and store all electrostatic parts and assemblies in conductive or
approved ESD packaging such as ESD tubes, bags, or boxes.
• Keep electrostatic-sensitive parts in their containers until they arrive at static-free stations. Before
removing items from their containers, place the containers on a grounded surface.
• Use the wrist-grounding strap when servicing the storage system.
• Connect your ESD wrist or shoe strap to a grounded surface before removing the new component
from its ESD package.
The same precaution applies when handling a component.
• Avoid contact with pins, leads, or circuitry.
• Use the ESD package provided with the new part to return the old part.
• Before servicing components, prepare an electrostatic discharge-safe work surface by placing an
anti-static mat on the floor or on a table near the storage system.
• Attach the ground lead of the mat to an unpainted surface of the rack.
• Attach the grounding strap clip to an unpainted surface of the rack.

Facilitating the disassembly of a storage unit


Follow these steps when disassembling a storage unit.
• Label each cable as you remove it, noting its position and routing.
Labeling will make replacing the cables much easier and will ensure that the cables are rerouted
properly.
• Keep all screws with the component removed.
The screws used in each component can be of different thread sizes and lengths. Using the wrong
screw in a component could damage the unit.



Controller node and internal components
This section describes the procedures for removing and replacing internal components in the controller
node.

Replacing a controller node


CAUTION:
• To avoid possible data loss, shut down and remove only one controller node at a time from the
storage system.
• Before shutting down a controller node in the cluster, confirm that the other controller nodes in
the cluster are functioning normally. Shutting down a controller node in the cluster causes all
cluster resources to fail over to the other controller nodes.
• Verify that host multipathing is functional.
• Only shut down the controller node at the time service is going to be performed.

Procedure
Preparation:
1. Unpack the replacement controller node (node) and place on an ESD safe mat.
2. Connect and log in to the HPE 3PAR Service Processor (SP).
See Connection methods for the SP on page 154.
3. Access the interactive CLI interface.
See Interfaces for the HPE 3PAR SP on page 156.
4. Set Maintenance Mode.
5. Initiate Check Health of the storage system.
Issue the checkhealth -detail command.

CAUTION:
If health issues are identified during the Check Health scan, resolve these issues before
continuing. Refer to the details in the Check Health results and contact HPE support if
necessary.

A scan of the storage system will be run to make sure that there are no additional issues.
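For reference only, a healthy system produces output along these lines. The list of checked components and the closing summary wording vary by HPE 3PAR OS version; this sketch is illustrative rather than an exact transcript. If problems are found, each issue is listed with its component, identifier, and description.

cli% checkhealth -detail
Checking alert
Checking cabling
Checking cage
Checking node
Checking pd
Checking port
...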
6. Identify the storage system to be serviced.
Issue the showsys command.
7. Verify the current state of the nodes.
Issue the shownode command.
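For illustration, the two commands return output resembling the following sketch. The system name, serial number, and values are placeholders, trailing columns are abbreviated with an ellipsis, and the exact column headings depend on the HPE 3PAR OS version.

cli% showsys
  ID -Name- -----Model----- -Serial- Nodes Master TotalCap(MB) ...
1234 3par01 HPE_3PAR 20850  1612345      4      0     83886080 ...

cli% shownode
Node --Name--- -State- Master InCluster -Service_LED- ---LED--- ...
   0 1612345-0 OK      Yes    Yes       Off           GreenBlnk ...
   1 1612345-1 OK      No     Yes       Off           GreenBlnk ...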
8. Locate the failed node.

NOTE:
Some faults will automatically illuminate a blue LED (safe-to-remove) on some of the
components equipped with a UID/Service LED. Search for a designated blue LED when
replacing or servicing a controller node component. If the LED is not illuminated, issue the
locatenode <node_ID> command.
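For example, to flash the UID/Service LED on node 3 (the node number here is only an example), enter:

cli% locatenode 3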

9. Shut down the node, or if it has already been halted (which could be due to a hardware failure), skip
this step.



CAUTION:
If the controller node is properly shut down (halted) before removal, the storage system will
continue to function, but data loss might occur if the replacement procedure is not followed
correctly.

a. Issue the shutdownnode halt <node_ID> command, where <node_ID> is the number for
the node being shut down.
For example, with node 0:

cli% shutdownnode halt 0

b. To confirm halting of the node, enter yes when prompted.

IMPORTANT:
Allow up to 10 minutes for the controller node to shut down (halt). When the controller node
is fully shut down, the Status LED rapidly flashes green and the UID/Service LED is solid
blue. The Fault LED might be solid amber, depending on the nature of the node failure.
10. Turn off power to the failed node.
Move the power switch to the OFF position.

Figure 1: Location of power switch on controller node


11. Verify that all cables are labeled with their location.
Removal:
12. Disconnect the cables attached to the node onboard ports and cables attached to the adapters.
13. Remove the node from the enclosure.
a. Press the release button to eject the handles (1).
b. Extend the handles fully to disengage the node from the mid-plane (2).
c. Pull the node out of the enclosure (3) and place on an ESD safe mat.



Figure 2: Removing the controller node
14. Remove the covers from both the failed node and replacement node.
a. If locked, unlock the cover latch (1).
b. Pull the latch to disengage the cover (2).
c. Lift off the cover (3).

Figure 3: Removing the node cover


Replacement:
15. Transfer the following internal components from the failed node to the same position in the
replacement node.
• Control cache DIMMs
• Data cache DIMMs
• Two SATA node drives
16. Replace and secure the node covers.
a. Align the pins on the node cover with the slots in the node (1).
b. Close the latch to slide the cover into place over the node (2).



Figure 4: Installing the node cover
17. Transfer all of the SAS and host adapters and adapter blanks from the failed node to the same
position in the replacement node.
18. Install the replacement node.
a. Verify that the power switch is in the OFF position.
b. With the insertion handles fully extended, insert the replacement node into the enclosure until it
stops (1).
c. To close and fully engage the node into the node chassis mid plane, swing the insertion handles
inward and press closed to secure (2).

Figure 5: Inserting the controller node


19. Reconnect all the cables to the onboard ports and the adapters in the same location recorded on the
labels.
20. Initiate the Node-to-Node Rescue process.
Issue the startnoderescue -node <node_ID> command, where <node_ID> is the
replacement node.
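For example, if node 3 is the replacement node (the node number is illustrative), start the rescue as shown below. The CLI creates a node_rescue task; its task ID is used for monitoring in the next steps.

cli% startnoderescue -node 3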
21. Turn on power to the replacement node.



IMPORTANT:
When power is supplied to the replacement node, the node begins to boot and the Node-to-
Node Rescue begins. This boot process might take approximately 10 to 20 minutes. When the
Node-to-Node Rescue and boot are completed, the node becomes part of the cluster.
22. Monitor the progress of the Node-to-Node Rescue.
a. Locate the <task_ID> of the current and active node_rescue process.
Issue the showtask command.
b. Monitor the progress and results.
Issue the showtask -d <task_ID> command.
To monitor the progress with more details, connect a serial cable to the Service port on the node that
is being rescued.
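A monitoring sequence might look like the following sketch. The task ID, task name, column layout, and progress values are illustrative, and the output is abbreviated with ellipses:

cli% showtask
  Id Type        Name          Status Phase Step ...
1234 node_rescue node_rescue_3 active   1/1  2/7 ...

cli% showtask -d 1234
...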
Verification:
23. Verify that the node has joined the cluster.
a. Verify that the controller node LEDs are in normal status with a steady, blinking green LED.
Uniform blinking with the other nodes indicates that the node has joined the cluster.
b. Issue the shownode command.
For additional details, issue the shownode -d command.
24. Initiate Check Health of the storage system.
A scan of the storage system will be run to make sure that there are no issues after the replacement.
25. If significant time is left in the maintenance window, end Maintenance Mode.
26. Follow the return instructions provided with the replacement component.
More Information
Replacing a DIMM of a controller node on page 22
Replacing a PCI adapter on page 41
Replacing a SATA node drive in a controller node on page 17
Connection methods for the SP on page 154
Interfaces for the HPE 3PAR SP on page 156
Check health action from the HPE 3PAR SP on page 159
Maintenance mode action from the SP on page 167
Locate action from the SP on page 167
Alert notifications from the SP on page 168
Accounts and credentials for service and upgrade on page 173
Node-to-Node Rescue processes on page 198

Replacing controller node internal components


The following illustration provides details of the internal components of an HPE 3PAR StoreServ 20000
storage system controller node.



Figure 6: Internal components of a controller node

Table 1: Internal Components of the Controller Node

Item  Component                                  DIMM Location (Slot ID)  Description
1     Control Cache DIMMs                        J18000                   CC DIMM 0:0:0
                                                 J19000                   CC DIMM 0:1:0
                                                 J20000                   CC DIMM 0:2:0
2     Control Cache DIMMs                        J18001                   CC DIMM 1:0:0
                                                 J19001                   CC DIMM 1:1:0
                                                 J20001                   CC DIMM 1:2:0
3     Data Cache DIMMs, Bank 0:0                 J14005                   DC DIMM 0:0:0
                                                 J15005                   DC DIMM 0:0:1
4     Data Cache DIMMs, Bank 0:1                 J17005                   DC DIMM 0:1:1
                                                 J16005                   DC DIMM 0:1:0
      Data Cache DIMMs, Bank 1:0                 J14006                   DC DIMM 1:0:0
                                                 J15006                   DC DIMM 1:0:1
5     Data Cache DIMMs, Bank 1:1                 J17006                   DC DIMM 1:1:1
                                                 J16006                   DC DIMM 1:1:0
6     Removable node drive platform and drives                            Two removable storage drives
7     Time of Day (TOD) battery                                           Removable battery

Replacing a SATA node drive in a controller node

NOTE:
If the failed node is already offline, it is not necessary to shut down the node because it is not part of
the cluster.

Each node contains two SSDs, also called SATA node drives. As shown on the label on the inside of the
controller node cover, the two SATA node drives are numbered.
• SATA 1 drive is the node drive on the top.
• SATA 0 drive is the node drive on the bottom.

Procedure
Preparation:
1. Unpack the replacement node drive and place on an ESD safe mat.

NOTE:
The node drive is factory installed in a bracket required for installation.
2. Connect and log in to the HPE 3PAR Service Processor (SP).
See Connection methods for the SP on page 154.
3. Initiate a maintenance window that stops the flow of system alerts from being sent to HPE by setting
Maintenance Mode.
4. Access the interactive CLI interface.
See Interfaces for the HPE 3PAR SP on page 156.
5. Initiate Check Health of the storage system.
Issue the checkhealth -detail command.

CAUTION:
If health issues are identified during the Check Health scan, resolve these issues before
continuing. Refer to the details in the Check Health results and contact HPE support if
necessary.

A scan of the storage system will be run to make sure that there are no additional issues.
6. Verify the current state of the nodes by issuing the shownode command.
7. Verify the current state of the node drives by issuing the shownode -drive command.
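An abbreviated transcript for these two checks might resemble the sketch below. The column headings shown here are placeholders (consult the actual CLI output for your HPE 3PAR OS version), and a failed node drive is expected to show a degraded or failed state:

cli% shownode
Node --Name--- -State--- Master InCluster -Service_LED- ...
   0 1612345-0 OK        Yes    Yes       Off           ...
   1 1612345-1 Degraded  No     Yes       Off           ...

cli% shownode -drive
Node Drive -State- ...
   0     0 normal  ...
   0     1 normal  ...
   1     0 normal  ...
   1     1 failed  ...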
8. Locate the node with the failed node drive.



NOTE:
Some faults will automatically illuminate a blue LED (safe-to-remove) on some of the
components equipped with a UID/Service LED. Search for a designated blue LED when
replacing or servicing a controller node component. If the LED is not illuminating, use the
locatenode <node number> command.

9. Verify that all cables are labeled with their location.


10. Shut down the node, or if it has already been halted (which could be due to a hardware failure), skip
this step.

CAUTION:
If the controller node is properly shut down (halted) before removal, the storage system will
continue to function, but data loss might occur if the replacement procedure is not followed
correctly.

a. Issue the shutdownnode halt <node_ID> command, where <node_ID> is the number for
the node being shut down.
For example, with node 0:

cli% shutdownnode halt 0

b. To confirm halting of the node, enter yes when prompted.

IMPORTANT:
Allow up to 10 minutes for the controller node to shut down (halt). When the controller node
is fully shut down, the Status LED rapidly flashes green and the UID/Service LED is solid
blue. The Fault LED might be solid amber, depending on the nature of the node failure.
11. Turn off power to the node.
Move the power switch to the OFF position.

Figure 7: Location of power switch on controller node


Removal:
12. Disconnect the cables attached to the node onboard ports and cables attached to the adapters.



13. Remove the node from the enclosure.
a. To eject the handles, press the release button (1).
b. Extend the handles fully to disengage the node from the mid plane (2).
c. Pull the node out of the enclosure (3) and place on an ESD safe mat.

Figure 8: Removing the controller node


14. Remove the cover from the node.
a. If locked, unlock the cover latch (1).
b. Pull the latch to disengage the cover (2).
c. Lift off the cover (3).

Figure 9: Removing the node cover


15. Locate the failed node drive and disconnect the SATA power and data cable.
16. Remove the failed node drive.
a. Squeeze the release handles (1) on the bracket that holds the node drive.
b. Slide the bracket and node drive out of the carrier (2) and place on an ESD safe mat.



Figure 10: Removing the controller node drive
Replacement:
17. Install the replacement node drive into the carrier until it locks into place.

Figure 11: Inserting the node drive


18. On the replacement node drive, connect the SATA power and data cable.
19. Replace and secure the node cover.
a. Align the pins on the node cover with the slots in the node (1).
b. To slide the cover into place over the node, close the latch (2).



Figure 12: Installing the node cover
20. Install the node.
a. Verify that the power switch is in the OFF position.
b. With the insertion handles fully extended, insert the replacement node into the enclosure until it
stops (1).
c. To close and fully engage the node into the node chassis mid plane, swing the insertion handles
inward and press closed to secure (2).

Figure 13: Inserting the controller node


21. Reconnect all the cables to the onboard ports and the adapters in the same location recorded on the
labels.
22. Initiate the Node-to-Node Rescue process.
Issue the startnoderescue -node <nodeID> command, where <nodeID> is the replacement
node.
23. Turn on power to the node.
Set the node power switch to the ON position.



NOTE:
When you switch on the power, the node boots. This process might take approximately 10 to
20 minutes. When the process is complete, the node becomes part of the cluster.
24. Monitor the progress of the Node-to-Node Rescue.
a. Locate the <taskID> of the current and active node_rescue process by issuing the showtask
command.
b. Monitor the progress and results by issuing the showtask -d <taskID> command.
To monitor the progress with more details, connect a serial cable to the Service port on the node that
is being rescued.
Verification:
25. Verify that the node has joined the cluster.
a. Verify that the controller node LEDs are in normal status with a steady, blinking green LED.
Uniform blinking with the other nodes indicates that the node has joined the cluster.
b. Verify that the node has joined the cluster by issuing the shownode command. For additional
details, issue the shownode -d command.
26. Initiate Check Health of the storage system.
A scan of the storage system will be run to make sure that there are no issues after the replacement.
27. If significant time is left in the maintenance window, end Maintenance Mode.
28. Follow the return instructions provided with the replacement component.

Replacing a DIMM of a controller node

CAUTION:
Do not install controller nodes with mismatched memory configurations. Installing controller nodes
with mismatched memory configurations may cause the controller nodes to function improperly or
fail.

NOTE:
If the failed node is already offline, it is not necessary to shut down the node because the node is no
longer part of the cluster.

Each controller node contains two banks of Control Cache DIMMs and four banks of Data Cache DIMMs.
Refer to the label on the inside of the node cover, or the labels on the controller node board itself, to help
locate the failed DIMM.

Procedure
1. Connect and log in to the HPE 3PAR Service Processor (SP).
See Connection methods for the SP on page 154.
2. Access the interactive CLI interface.
See Interfaces for the HPE 3PAR SP on page 156.
3. Set Maintenance Mode.
4. Perform preliminary maintenance checks and initiate servicing.

NOTE: Some faults will automatically illuminate a blue LED (safe-to-remove) on some of the
components equipped with a UID/Service LED. Search for a designated blue LED when
replacing or servicing a controller node component. If the LED is not illuminating, use the
command locatenode with specific options to illuminate the component.



a. Enter checkhealth -detail to verify the current state of the system.
b. Enter shownode to verify the current state of the controller nodes.
c. Enter shownode -mem to verify the current state of the controller node memory (see the example transcript after this list).
d. Record and label the location of the cable connections from the node and PCI adapter.
e. Enter shutdownnode halt <nodeID> to halt the appropriate node.
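The memory check in sub-step c lists every DIMM slot; the slot IDs correspond to the locations in Table 1 (see Replacing controller node internal components on page 15). The sketch below is illustrative only, with the column layout abbreviated and the sizes shown as placeholders:

cli% shownode -mem
Node SlotID ... Size(MB)
   0 J18000 ...    16384
   0 J19000 ...    16384
   0 J20000 ...    16384
   ...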
5. When prompted, enter Yes.

NOTE:
You must wait up to ten minutes for the node to halt. When the node halts, the node status LED
rapidly blinks green, and the node service LED displays blue.
6. Turn the node power switch to the OFF position.

Figure 14: Location of power switch on controller node


7. Disconnect the cables from the node.
8. To remove the node, (1) press the release button to eject the handles, (2) extend the handles fully to
disengage the node from the mid-plane before (3) pulling the node out of the enclosure.
9. Place the node on an ESD safe work surface.



Figure 15: Removing the controller node
10. Open the cover by (1) unlocking the latch (if locked), (2) pulling the latch to disengage the cover and
(3) lifting to fully remove the cover from the node.

NOTE:
Each controller node contains two banks of Controller Cache DIMMs and four banks of Data
Cache DIMMs.

Figure 16: Removing the node cover



11. Open both latches on either side of the DIMM slot and remove the failed DIMM. Refer to the node
components layout label underneath the node cover for specific DIMM locations. For more
information about the location of DIMMs, refer to:
• Replacing controller node internal components on page 15

Figure 17: Removing the DIMM


12. Remove the replacement DIMM from its protective packaging.
13. Install the replacement DIMM into the same slot location of the failed DIMM.

Figure 18: Installing the DIMM


14. Align the notch on the replacement DIMM with the key in the DIMM slot and insert the DIMM evenly
into the slot latches.
15. Confirm that both latches are fully engaged and the DIMM is securely in its slot.
16. Align the pins on the node cover with the slots in the node and close the latch to slide the cover into place over the controller node.



Figure 19: Installing the node cover
17. To install the node, (1) insert the controller node with the insertion handles fully extended into the enclosure until it stops and then (2) press the insertion handles to close and fully engage the node into the middle plane.

Figure 20: Inserting the controller node


18. Reconnect all the cables to the node.
19. Set the node power switch to the ON position to apply power to the node.

NOTE:
When you switch on the power, the node reboots. This process might take approximately ten to
fifteen minutes. The node becomes part of the cluster when the process is complete.
20. After node rescue completes and the node has booted, verify the controller node LEDs are in normal
status with a steady, blinking green LED. Uniform blinking indicates the node has joined the cluster.
21. On the CLI prompt:



a. Enter shownode to verify the node has joined the cluster.
b. Enter shownode -mem to verify that the replaced DIMM has been successfully installed.
c. Enter checkhealth -detail to verify the current state of the system.
22. Exit and logout of the session.

Replacing a Time of Day (TOD) battery in a controller node

IMPORTANT:
The Time-of-Day (TOD) battery, also called an RTC battery, is a 3-V lithium coin battery. The lithium
coin battery might explode if it is incorrectly installed in the node. Replace the TOD battery only with
a battery supplied by Hewlett Packard Enterprise; do not use batteries from any other supplier.
Dispose of used batteries according to the manufacturer’s instructions.

Procedure
1. Connect and log in to the HPE 3PAR Service Processor (SP).
See Connection methods for the SP on page 154.
2. Access the interactive CLI interface.
See Interfaces for the HPE 3PAR SP on page 156.
3. Set Maintenance Mode.
4. Perform preliminary maintenance checks and initiate servicing.

NOTE:
Some faults will automatically illuminate a blue LED (safe-to-remove) on some of the
components equipped with a UID/Service LED. Search for a designated blue LED when
replacing or servicing a controller node component. If the LED is not illuminating, use the
locatenode command with specific options to illuminate the component.

a. Enter checkhealth -detail to verify the current state of the system.


b. Enter shownode to verify the current state of controller nodes.
c. Record and label the location of the cable connections from the node.
d. Enter shutdownnode halt <nodeID> to halt the appropriate node.
5. When prompted, enter yes to confirm halting of the node.

NOTE:
You must wait up to ten minutes for the node to halt. When the node halts, the node status LED
rapidly blinks green, and the node service LED displays blue.
6. Turn the node power switch to the OFF position.



Figure 21: Location of power switch on controller node
7. Disconnect the cables from the controller node and PCI adapters.
8. To remove the node, (1) press the release button to eject the handles, (2) extend the handles fully to
disengage the node from the mid-plane before (3) pulling the node out of the enclosure.
9. Place the node on an ESD safe work surface.

Figure 22: Removing the controller node


10. To open the cover (1) unlock the latch (if locked), (2) pull the latch to disengage the cover and (3) lift
to fully remove the cover from the node.



Figure 23: Removing the node cover

NOTE:
Note the location of the battery polarity (+/- symbols) before removing the failed TOD battery.
11. Push the retaining clip over the TOD battery (also called an RTC battery) and lift the battery out of
the housing in the controller node.
12. Refer to Replacing controller node internal components on page 15 to identify the TOD battery
location.
13. Remove the replacement TOD battery from its protective packaging.

Figure 24: Removing the TOD battery


14. Insert the replacement 3V lithium coin battery into the clock battery slot with the positive polarity
facing the retaining clip.



Figure 25: Installing the TOD battery
15. Locate the CLEAR RTC pins on the controller board. They are located near the black heat-sink next
to the TOD battery.
16. Place a jumper over the CLEAR RTC pins for 15 seconds, and then remove the jumper.
17. Align the pins on the node cover with the slots in the node and close the latch to slide the cover into
place over the controller node.

Figure 26: Installing the node cover


18. To install the node, (1) insert the controller node with the insertion handles fully extended into the
enclosure until it stops and then (2) press the insertion handles to close and fully engage the node.



Figure 27: Inserting the controller node
19. Align and slide the controller node into the enclosure.
20. Extend both handles fully and continue to slide the controller node into the enclosure until the
handles engage.
21. Close the handles to fully seat the controller node into the middle plane.
22. Reconnect all the cables to the node.
23. Set the node power switch to the ON position to apply power to the node.

NOTE:
When you switch on the power, the node reboots. This process might take approximately ten to
fifteen minutes. The node becomes part of the cluster when the process is complete.
24. After node rescue completes and the node has booted, verify the controller node LEDs are in normal
status with a steady, blinking green LED. Uniform blinking indicates the node has joined the cluster.
25. On the CLI prompt:
a. Enter shownode to verify the node has joined the cluster
b. Enter checkhealth -detail to verify the current state of the system.
26. Exit and logout of the session.

Replacing a controller node power supply


CAUTION:
• Only one power supply unit (PSU) can be serviced at a time. If another PSU is to be serviced,
verify that the first serviced PSU is healthy and functioning, and then restart this servicing
procedure from the beginning for the next PSU to be serviced.
• To prevent overheating, replace the PSU within a maximum service time of six minutes.
• Ensure that cables are clear of the PSU when installing in the enclosure.

To replace a power supply:

Procedure
1. Connect and log in to the HPE 3PAR Service Processor (SP).



See Connection methods for the SP on page 154.
2. Access the interactive CLI interface.
See Interfaces for the HPE 3PAR SP on page 156.
3. Set Maintenance Mode.
4. Perform preliminary maintenance checks and initiate servicing.
a. Enter checkhealth -detail to verify the current state of the system.
b. Enter shownode -ps to confirm the status of the power supply.

cli% shownode -ps
Node PS -Assem_Serial- -PSState-- -Service_LED- FanState ACState DCState
   0  0 5BXRF0DLL7D80L OK         Off           OK       OK      OK
   0  1 5BXRF0DLL7D6CA Failed     Blue          OK       Failed  --

c. Enter locatenode -ps <psID> <nodeID> to prepare the power supply for service.

NOTE:
The system illuminates the service LED blue when there is a failure and the component is
safe to replace. If the blue service LED is not lit or if this is a proactive replacement, enter
locatenode -ps <powersupplyID> <nodeID>. This command forces the service LED to light blue.
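For instance, to light the service LED on power supply 1 of node 0 (both IDs are examples), enter:

cli% locatenode -ps 1 0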
5. Locate the power supply with the blue LED requiring service. The blue LED indicates the power
supply is ready for service.
6. Remove the power cord retaining strap and disconnect the power cable from the failed power supply.
7. Remove the power supply by (1) pressing the release tab and then (2) pulling the power supply out
of the power supply tray in the enclosure, and place it onto the ESD-safe work surface.

Figure 28: Removing the power supply


8. Insert the new power supply by sliding the unit in until it is fully engaged.



Figure 29: Inserting the power supply

NOTE:
A clicking sound indicates when the module is fully engaged.
9. Connect the power cable and secure it with the retaining strap.
10. Confirm that green LED of the power supply is lit to indicate normal operation.

NOTE:
It might take up to a minute for the green LED to light and the blue service LED to turn off.
11. If the green LED of the replacement power supply does not light after a minute, the power supply tray
might have failed, and you must replace the power supply tray.
a. Disconnect the power cable from the failed power supply.
b. Squeeze the release tab towards the handle and pull the power supply out of the power supply
tray in the enclosure and place onto the ESD safe work surface.
c. Press the release tab and pull the failed power supply tray out of the enclosure.
d. Align and slide the replacement power supply tray into the supply bay until it clicks into place and
is fully engaged in the middle plane.
e. Align and slide the power supply into power supply tray in the enclosure, until it clicks into place
and is fully engaged in the power supply tray.
f. Connect the power cable to the power supply.
g. Secure the power cable with the cable retention strap.
12. Confirm that the green LED of the power supply is lit to indicate normal operation.
13. On the CLI prompt:
a. Enter shownode -ps to verify the status of the node power supply is OK.



cli% shownode -ps
Node PS -Assem_Serial- -PSState-- -Service_LED- FanState ACState DCState
   0  0 5BXRF0DLL7D80L OK         Off           OK       OK      OK
   0  1 5BXRF0DLL7D6CA OK         Off           OK       OK      OK

b. Enter checkhealth -detail to verify the current state of the system.


14. Exit and logout of the session.

Replacing a node power supply tray


CAUTION:
• Only one power supply unit (PSU) can be serviced at a time. If another PSU is to be serviced,
verify that the first serviced PSU is healthy and functioning, and then restart this servicing
procedure from the beginning for the next PSU to be serviced.
• To prevent overheating, the replacement of the PSU requires a maximum service time of six
minutes.
• Ensure that cables are clear of the PSU when installing in the enclosure.

To replace a node power supply tray:

Procedure
1. Connect and log in to the HPE 3PAR Service Processor (SP).
See Connection methods for the SP on page 154.
2. Access the interactive CLI interface.
See Interfaces for the HPE 3PAR SP on page 156.
3. Set Maintenance Mode.
4. Perform preliminary maintenance checks and initiate servicing.

NOTE:
Some faults automatically illuminate a blue (safe-to-remove) LED on components equipped with a
UID/Service LED. Look for the designated blue LED when replacing or servicing a controller node
component. If the LED is not illuminated, use the locatenode command with the appropriate
options to illuminate the component.

a. Enter checkhealth -detail to verify the current state of the system.


b. Enter shownode -ps to confirm the status of the power supply.

cli% shownode -ps


Node PS -Assem_Serial- -PSState-- -Service_LED- FanState ACState DCState
   0  0 5BXRF0DLL7D80L OK         Off           OK       OK      OK
   0  1 5BXRF0DLL7D6CA Failed     Blue          OK       Failed  --



NOTE:
The system illuminates the UID/Service LED blue when a failure occurs. If applicable, use the
locatenode -ps <psID> <nodeID> command to illuminate the service LED when servicing a
component.
5. Locate the power supply with the blue LED requiring service. The blue LED indicates that the power
supply or extender tray is ready for service.
6. Remove the power cord retaining strap and disconnect the power cable.
7. Remove the power supply by (1) pressing the release tab and then (2) pulling out the power supply.

Figure 30: Removing the power supply


8. Remove the failed power supply tray by (1) pressing the release tab and then (2) pulling out the tray.

Figure 31: Removing the power supply tray


9. Insert the replacement power supply onto the tray. Ensure the power supply is locked into the tray.



Figure 32: Inserting the power supply onto the tray

NOTE:
A clicking sound indicates when the module is fully engaged.
10. Insert the new power supply tray and power supply by sliding the unit in until it is fully engaged.

Figure 33: Inserting the power supply

NOTE:
A clicking sound indicates when the module is fully engaged.
11. Connect the power cable and secure the cord to the handle of the power supply with a strap.
12. Enter shownode -ps to verify the status of the node power supply is OK.

cli% shownode -ps


Node PS -Assem_Serial- -PSState-- -Service_LED- FanState ACState DCState
   0  0 5BXRF0DLL7D80L OK         Off           OK       OK      OK
   0  1 5BXRF0DLL7D6CA OK         Off           OK       OK      OK

13. Enter checkhealth -detail to verify the current state of the system.
14. Exit and logout of the session.



Replacing a Backup Battery Unit (BBU)
To replace a BBU:

Procedure
1. Connect and log in to the HPE 3PAR Service Processor (SP).
See Connection methods for the SP on page 154.
2. Press the release tab and pivot out the bezel to remove the controller node front bezel.
3. Access the interactive CLI interface.
See Interfaces for the HPE 3PAR SP on page 156.
4. Set Maintenance Mode.
5. Perform preliminary maintenance checks and initiate servicing.

NOTE:
Some faults automatically illuminate a blue (safe-to-remove) LED on components equipped with a
UID/Service LED. Look for the designated blue LED when replacing or servicing a controller node
component. If the LED is not illuminated, use the locatenode command with the appropriate
options to illuminate the component.

a. Enter checkhealth -detail to verify the current state of the system.


b. Enter showbattery to identify the battery module and note the Node ID.

cli% showbattery

Node Assy_Serial -State- -Service_LED- ChrgLvl(%) -ExpDate-- Expired Testing
   0 00000169    FAILED  Off           100        03/18/2020 No      No
   1 00000265    OK      Off           100        03/18/2020 No      No

c. Enter showbattery -s to display the state of the battery modules.

NOTE:
The system illuminates a blue LED when there is a failure.

NOTE:
If the blue service LED is not lit, enter locatenode -bat <nodeID> to locate the failed BBU.
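For example, for the failed battery in node 0 shown in the sample output above (a placeholder ID; use
the Node value reported by showbattery on your system):

cli% locatenode -bat 0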
6. At the front of the system, identify the failed battery module and verify the service LED is illuminated
blue.
7. Remove the failed BBU by (1) pressing the ejector and (2) pulling it out of the slot.



Figure 34: Removing the BBU
8. Insert the replacement BBU.

Figure 35: Inserting the BBU


9. Verify that the status LED is illuminated green.
10. Replace the controller node front bezel.
11. Enter showbattery to verify the replacement battery module is working properly.

cli% showbattery
Node Assy_Serial -State- -Service_LED- ChrgLvl(%) -ExpDate-- Expired Testing
   0 00000169    OK      Off           100        03/18/2020 No      No
   1 00000265    OK      Off           100        03/18/2020 No      No



NOTE:
The serial number and expiration date are read by the system and set automatically.
12. Enter checkhealth -detail to verify the current state of the system.
13. Exit and logout of the session.

IMPORTANT:
The charge time for the batteries can be up to 24 hours.

Replacing a fan module


CAUTION:
Before removing a node fan module, prepare the replacement fan module by removing the
packaging and inspecting it for excess packaging debris. Prepare for installation by placing the fan
module in an accessible location so that the actual servicing duration stays within the maximum
service time of five minutes and other components do not overheat.

To replace a fan module:

Procedure
1. Connect and log in to the HPE 3PAR Service Processor (SP).
See Connection methods for the SP on page 154.
2. To remove the controller node front bezel, (1) press the release tab and (2) pivot out the bezel.
3. Access the interactive CLI interface.
See Interfaces for the HPE 3PAR SP on page 156.
4. Set Maintenance Mode.
5. Perform preliminary maintenance checks and initiate servicing.

NOTE:
Some faults automatically illuminate a blue (safe-to-remove) LED on components equipped with a
UID/Service LED. Look for the designated blue LED when replacing or servicing a controller node
component. If the LED is not illuminated, use the locatenode -fan <fanID> <nodeID> command
to illuminate the component.

a. Enter checkhealth -detail to verify the current state of the system.


b. Enter shownode to verify the current state of the controller nodes.
c. Enter shownode -fan to identify the failed fan module and record the details.

NOTE:
The system illuminates the UID/Service LED blue when a failure occurs.
d. Enter locatenode -fan <fanID> <nodeID> to prepare the node fan for service.



NOTE:
If no blue LED is lit, use the locatenode -fan <fanID> <nodeID> command to locate the failed
fan module.
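For example, assuming the failed fan is fan 0 in node 1 (placeholder IDs; use the fan and node IDs
reported by shownode -fan):

cli% locatenode -fan 0 1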
6. To remove the fan module, (1) press the release button and then (2) pull out the module.

Figure 36: Removing the fan module


7. Insert the replacement fan module into the slot.

Figure 37: Inserting the fan module

NOTE:
A clicking sound indicates when the module is fully engaged.



8. Verify the status LED on the fan module is green.
9. Replace the controller node front bezel.
10. On the CLI prompt:
a. Enter shownode -fan to verify the status and speed are normal. The green LED indicates the
component status is normal.
b. Enter checkhealth -detail to verify the current state of the system.
11. Exit and logout of the session.

Replacing a PCI adapter


To replace a PCI adapter:

NOTE:
• PCI adapters, also called HBAs (Host Bus Adapters), are located in slots at the rear of the
controller node. Generally, SAS PCI adapters are on the left and Fibre Channel and other PCI
adapters are on the right. The replacement procedures are the same for all PCI adapters.
• If the controller node has failed due to an issue with a PCI adapter, it is not necessary to shut
down the node. The node is no longer part of the cluster.

Procedure
1. Connect and log in to the HPE 3PAR Service Processor (SP).
See Connection methods for the SP on page 154.
2. Access the interactive CLI interface.
See Interfaces for the HPE 3PAR SP on page 156.
3. Set Maintenance Mode.
4. Initiate Check Health of the storage system.
5. Check that the current state of the controller nodes is healthy. Issue the HPE 3PAR CLI shownode
command.
6. If applicable, enter locatenode -pci <slot> <nodeID> to locate the PCI adapter for service.

NOTE:
• The system illuminates the UID/Service LED blue when a failure occurs. To illuminate the
service LED when servicing a component, use the locatenode command.
• The locatenode command confirms that the correct component is being serviced.
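For example, assuming the adapter to be serviced is in slot 2 of node 1 (placeholder values; use the
slot and node IDs reported by shownode -pci and showport -i):

cli% locatenode -pci 2 1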

7. Identify the failed PCI adapter with the service LED illuminating blue.
8. Ensure that the cables to the PCI adapter are properly labeled to facilitate reconnecting the cables
later.
9. Only for the replacement of a 10Gb NIC adapter with HPE 3PAR File Persona running, delete
File Persona interfaces from the ports of the failed adapter on the controller node. Otherwise skip this
step.
a. Issue the showport -fs command to identify the File Persona ports.
b. Delete File Persona interfaces from the ports of the failed adapter on the controller node. Issue
the controlport fs delete -f <N:S:P> command where: <N:S:P> is the node:slot:port
for the ports in the NIC PCIe adapter being replaced in the controller node.
Deleting File Persona interfaces causes existing FPGs on this node to fail over to the second
node.
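For example, if the failed adapter were in node 1, slot 2 (placeholder values; repeat the delete
command for each File Persona port that showport -fs reports on that adapter):

cli% showport -fs
cli% controlport fs delete -f 1:2:1
cli% controlport fs delete -f 1:2:2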
10. To halt the desired node, enter shutdownnode halt <nodeID>.
11. When prompted, enter yes to confirm halting of the node.



NOTE:
Wait for 10 minutes for the node to halt. When the node halts, the node status LED blinks
green, and the node service LED displays blue.
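For example, to halt node 1 (a placeholder ID; use the ID of the node that contains the failed adapter):

cli% shutdownnode halt 1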
12. To power off the node, set the node power switch to the OFF position.

CAUTION:
The PCI adapters are not hot pluggable. Power off the node.
13. Disconnect all cables from the PCI adapter.

Figure 38: Removing the SFP


14. Remove the new adapter from the protective packaging.
15. If the adapter has small form-factor pluggable (SFP) transceivers, remove the SFPs from the failed PCI adapter and
insert them into the replacement adapter.
16. Remove the PCI adapter by (1) lifting the handle up then (2) pulling out the module.

Figure 39: Removing the PCI adapter


17. Insert the replacement PCI adapter into the slot and slide it in until it clicks and locks into place.



Figure 40: Inserting the PCI adapter
18. Reconnect the cables to the replacement PCI adapter.
19. To apply power to the node, set the node power switch to the ON position.

NOTE: When power is supplied to the node, it begins to boot. This process takes
approximately 10 minutes. The node becomes part of the cluster when the process is
complete.
20. After node rescue completes and the node has booted, verify that the controller node LEDs are in normal
status with a steadily blinking green LED.
21. Uniform blinking indicates that the node has joined the cluster.
22. To verify that the node has joined the cluster, enter shownode.

cli% shownode
                                                              Control    Data        Cache
Node ----Name---- -State- Master InCluster -Service_LED ---LED--- Mem(MB) Mem(MB) Available(%)
   0 4UW0001463-0 OK      No     Yes       Off          GreenBlnk   98304  131072          100
   1 4UW0001463-1 OK      Yes    Yes       Off          GreenBlnk   98304  131072          100

NOTE:
If the node that was halted also contained the active Ethernet session, you must exit and
restart the CLI session.
23. Only for the replacement of a 10Gb NIC adapter with File Persona running, configure File
Persona interfaces on the Ethernet ports of the adapter on the controller node. Otherwise skip this
step.
a. To check the status of File Persona, enter showfs. Wait until the state of the File Persona
<node> is shown with a status of Running.
b. To add back the deleted ports as File Persona interfaces from step 9, issue the controlport
fs add <N:S:P> command where: <N:S:P> is the node:slot:port for the ports in the adapter.
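For example, continuing the placeholder values from step 9 (node 1, slot 2), the deleted ports would be
added back with:

cli% controlport fs add 1:2:1
cli% controlport fs add 1:2:2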
24. To verify the current state of the system, enter checkhealth -detail.



25. Only for the replacement of a 10Gb NIC adapter with HPE 3PAR File Persona running, restore
any failed-over FPGs to the proper controller node.
a. Once checkhealth reports that there are FPGs that are degraded due to being "failed over,"
issue the setfpg -failover <FPG_Name> command. This command restores the FPGs to the proper
controller node.
For example:

cli% checkhealth -detail fs
Checking fs
Component --Summary Description--- Qty
fs        File Services FPG issues   1
--------------------------------------
1 total                              1

Component -Identifier- -------------------Detailed Description-------------------
fs        vfs1138_9    vfs1138_9 is degraded: Failed over from node0fs to node1fs
---------------------------------------------------------------------------------
1 total

cli% setfpg -failover vfs1138_9


This action will lead to unavailability of shares on FPG vfs1138_9.
select y=yes n=no: y

cli% checkhealth -detail fs


Checking fs
The following components are healthy: fs

26. Verify that the adapter is installed correctly.


a. To display the adapter information, issue shownode -pci.
b. To display an inventory of the ports to verify that the installed adapter is in the correct slot, issue
showport -i.
c. To verify that the ports on the storage system have a State of Ready, issue showport.
A port must be connected and correctly communicating to be ready.
d. Only for the replacement of a 10Gb NIC adapter with HPE 3PAR File Persona running, to
verify that the ports are configured for File Services, issue showport -fs.
27. Exit and logout of the session.
More Information
Alert notifications from the SP on page 168
Check health action from the HPE 3PAR SP on page 159
Connection methods for the SP on page 154
Locate action from the SP on page 167
Maintenance mode action from the SP on page 167

Replacing a system LED status board


To replace a system LED status board:

Procedure
1. Connect and log in to the HPE 3PAR Service Processor (SP).



See Connection methods for the SP on page 154.
2. Press the release tab and slide out the bezel to remove the controller node front bezel.
3. Access the interactive CLI interface.
See Interfaces for the HPE 3PAR SP on page 156.
4. Set Maintenance Mode.
5. Perform preliminary maintenance checks and initiate servicing.
6. Enter checkhealth -detail to verify the current state of the system.
7. At the front of the system, locate fan module 2 at the top right corner and remove it to expose the
LED status board situated along the side wall.

Figure 41: Removing the fan module


8. Remove the replacement LED status board from its protective packaging.
9. To remove the failed LED status board, (1) press the tab to release the board and (2) slide it forward
along the key and angle it out.

Figure 42: Removing the LED status board


10. To install the replacement LED status board, align the board with the tab and key and insert it into
the connector.



Figure 43: Installing the LED status board
11. Ensure that the appropriate LEDs are illuminated on the LED status board.
12. Reinsert the removed fan module into the slot.

Figure 44: Inserting the fan module


13. Verify that the LED status board is in normal status with a steady green LED.
14. Replace the controller node front bezel.
15. On the CLI prompt, enter checkhealth -detail to verify the current state of the system.
16. Exit and logout of the session.

Replacing a controller node L-frame assembly


Before you start, record and label the locations of the cable connections, such as back-end SAS, host, and
power cables, before disconnecting the cables from the node.

NOTE:
The controller node L-frame assembly is available in two sizes, either 4-node or 8-node. This
procedure is for the 4-node size. The replacement process is essentially the same for both.



NOTE:
Replacing a controller node L-frame assembly is an offline service activity and can be completed
during a maintenance window.

IMPORTANT:
To avoid configuration problems, track the location of the components removed, so that you can
reinstall them in the exact same location later. If required, refer to recent configuration files and/or
the hardware inventory file (HWINVENT) for the system. These files can be found in the Files links/
pages on the SP, or by referencing the StoreServ system serial number in STaTs if the system calls
home.

To replace a controller node L-Frame assembly:

Procedure
1. Connect and log in to the HPE 3PAR Service Processor (SP).
See Connection methods for the SP on page 154.
2. Access the interactive CLI interface.
See Interfaces for the HPE 3PAR SP on page 156.
3. Set Maintenance Mode.
4. Perform preliminary maintenance checks and initiate servicing.
5. To assist in the reassembly, gather information about the current system and its components:
a. To verify the current state of the system, enter checkhealth -detail.
b. Enter showsys.
c. Enter shownode -d.
d. Enter shownode -i.
e. Enter showcage.
6. To illuminate blue service LEDs in the system for identification, enter locatesys.
7. To halt the system, enter shutdownsys halt.
8. When prompted, enter yes to confirm halting of the system.

NOTE:
Wait up to 10 minutes for the system to halt. When the nodes have halted, the node status
LEDs blink green and the node service LEDs display blue.
9. Record and label the location of the cable connections before disconnecting the cables from the PCI
adapter.
10. Set all the node power switches to the OFF position.
11. Remove the power cables from all the power supplies.
12. Remove all power supplies and power supply trays starting from the lowest.



Figure 45: Removing the power supply

Figure 46: Removing the power supply tray


13. If the cabling to the controller nodes and PCI adapters allows the controller nodes to be slid out of
the chassis 1 to 2 inches, the cables do not need to be removed. Otherwise, remove the cables.



Figure 47: Removing the HBA
14. For each node, starting from the lowest node, (1) press the release button to eject the handles, (2)
extend the handles fully to disengage the node from the midplane, and then (3) slide the controller
node out only 1 to 2 inches.
15. Remove bezels from the front of the rack.

Figure 48: Removing the controller node


16. Remove the following components and place the node on an ESD safe work surface:
a. Remove all the BBUs.
For more information on removing the BBU, see Replacing a Backup Battery Unit (BBU) on
page 37.



Figure 49: Removing the BBU
b. Remove all the fan modules.
For more information on removing the fan module, see Replacing a fan module on page 39.

Figure 50: Removing the fan module


17. If installed, remove all BBU and fan module blanks.
18. Remove all the side flange bezels from the controller node L-frame assembly.



Figure 51: Removing the controller node chassis flange bezels
19. Loosen the blue captive Torx T25 screws that secure the L-frame assembly exterior to the rack.

NOTE:
The number of screws depends on the size of the enclosure.

Figure 52: Loosening the exterior retaining screws (4-way node chassis)



Figure 53: Loosening the exterior retaining screws (8-way node chassis)
20. Loosen the blue captive Torx T15 screws that secure the L-frame assembly interior to the node chassis,
working from the fan bays on the left.

NOTE:
The number of screws depends on the size of the enclosure.



Figure 54: Loosening the interior retaining screws (4-way node chassis)



Figure 55: Loosening the interior retaining screws (8-way node chassis)
21. Pull out the failed L-frame assembly to disengage it from the chassis and place on an ESD mat or
safe flat surface.

CAUTION:
Use two or more people to lift and guide the assembly over to an ESD mat or a safe, flat
surface area.

NOTE:
Transfer the status LED board from the failed L-frame assembly to the replacement L-frame
assembly, if the replacement L-frame assembly does not contain a status LED board.



Figure 56: Removing the L-frame assembly (4-way node chassis)

Figure 57: Removing the L-frame assembly (8-way node chassis)


22. Install the replacement L-Frame assembly by sliding the assembly into the chassis. Verify the
assembly exterior retaining screws are aligned with the holes on the chassis.



Figure 58: Installing the L-frame assembly (4-way node chassis)

Figure 59: Installing the L-frame assembly (8-way node chassis)


23. Tighten the blue captive Torx T15 interior retaining screws.

NOTE:
The interior retaining screw torque specification is approximately 9.6 In-Lb.



Figure 60: Tightening the interior retaining screws (4-way node chassis)



Figure 61: Tightening the interior retaining screws (8-way node chassis)
24. Tighten the blue captive Torx T25 exterior retaining screws.

NOTE:
The exterior retaining screw torque specification is approximately 31.7 In-Lb.



Figure 62: Tightening the exterior retaining screws (4-way node chassis)

Figure 63: Tightening the exterior retaining screws (8-way node chassis)
25. Install the previously removed components:
a. Side flange bezels
b. If removed previously, BBU/fan module blanks
c. Fans



Figure 64: Inserting the fan module
d. BBUs

Figure 65: Inserting the BBU


e. Controller nodes



Figure 66: Installing the controller node
26. From the rear of the rack, starting at the top, (1) extend the controller handles fully. (2) Slide the
controller node into the enclosure until the handles click and lock into place. Perform this step for
each controller node.
27. Reconnect the power cords, data, and host cables.
28. Install the power supply trays.
29. Plug the power supplies into their power supply trays in the enclosure until they click into place
and are operational.
30. Connect the power cables to each of the power supplies and secure with the cable retention straps.
31. Power on the controller nodes.
32. After powering on the nodes, allow the nodes to boot before checking for LED status. Booting takes
approximately 10 minutes.
33. Verify that the controller node LEDs are in normal status with a steadily blinking green LED.
34. The nodes become part of the cluster when the process is complete.
35. To return to 3PAR Service Processor Menu, enter exit.
36. At this point, the SP has lost its connection to the StoreServ. To reconnect to the SP CLI, select 7
Interactive CLI for a StoreServ, select the system, and then enter y to enable Maintenance Mode
before performing preliminary maintenance checks and initiating servicing.
37. On the CLI prompt, perform post-maintenance checks:
a. Enter showsys.
b. Enter shownode -d.
c. Enter shownode -i.
d. Enter showcage.
e. Enter showpd.
38. To clear the remaining blue service LEDs, enter locatesys -t 1.
39. To verify the current state of the system, enter checkhealth -detail.
40. Exit and logout of the session.

Drive chassis and internal components



Replacing drive enclosure components
This section describes the procedures for removing and replacing components belonging to the storage
drive enclosure.

Replacing a power supply in a drive enclosure

CAUTION:
• Only one power supply unit (PSU) can be serviced at a time. If another PSU is to be serviced,
verify that the first serviced PSU is healthy and functioning, and then restart this servicing
procedure from the beginning for the next PSU to be serviced.
• To prevent overheating, the replacement of the PSU requires a maximum service time of six
minutes.
• Ensure that cables are clear of the PSU when installing in the enclosure.

Procedure
1. Connect and log in to the HPE 3PAR Service Processor (SP).
See Connection methods for the SP on page 154.
2. Access the interactive CLI interface.
See Interfaces for the HPE 3PAR SP on page 156.
3. Set Maintenance Mode.
4. Perform preliminary maintenance checks and initiate servicing.
a. To verify the current state of the system, enter checkhealth -detail.
b. To display the cage IDs, enter showcage.
c. To display information about the drive cage with the problematic power supply, enter showcage
-d <cageID>.
d. To illuminate the blue service LED of the drive enclosure with the failed power supply, enter the
locatecage <cageID> command.
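For example, assuming the drive cage with the failed power supply is cage2 (a placeholder name; use
the cage ID from your showcage output):

cli% showcage -d cage2
cli% locatecage cage2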
5. Locate the power supply to be replaced.
6. The power supply is in the drive enclosure with the illuminated blue service LED.
7. Remove the retaining strap from the power cable and then remove power cable from the power
supply being replaced.

Figure 67: Removing the power cable



8. To remove the power supply, (1) press the latch and then (2) pull out the module.

Figure 68: Removing the power supply module


9. Slide the new power supply into the enclosure bay until it clicks into place and engages in the drive
enclosure.
10. Reconnect the power cable.
11. Secure the power cable with the cable retention strap.
12. Confirm that the green LED of the power supply is lit to indicate normal operation.
13. On the CLI prompt, perform post-maintenance checks:
a. To clear the blue service LED of the drive enclosure with the replaced power supply, enter
locatecage -t 0 <cageID>.
b. To display information about the serviced drive cage, enter showcage -d <cageID>.
c. To verify the current state of the system, enter checkhealth -detail.
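For example, continuing the placeholder cage name cage2 from the earlier steps:

cli% locatecage -t 0 cage2
cli% showcage -d cage2
cli% checkhealth -detail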
14. Exit and logout of the session.

Replacing a fan module in a drive enclosure

Procedure
1. Connect and log in to the HPE 3PAR Service Processor (SP).
See Connection methods for the SP on page 154.
2. Access the interactive CLI interface.
See Interfaces for the HPE 3PAR SP on page 156.
3. Set Maintenance Mode.
4. Perform preliminary maintenance checks and initiate servicing.
a. To verify the current state of the system, enter checkhealth -detail.
b. To display the cage IDs, enter showcage.
c. To display information about the drive cage with the failed fan module, enter showcage -d
<cageID>.
5. If a blue LED is not illuminated, to illuminate the blue LED on the drive cage to be serviced, run the
locatecage <cageID> command.
6. Identify the failed drive enclosure fan module with the service LED illuminating blue.
7. To remove the module, (1) press the release buttons and (2) pull out the module.



Figure 69: Removing the fan module
8. Insert the new fan module until it clicks into place.

Figure 70: Inserting the fan module


9. Confirm that the service LED is not illuminated blue and the green LED is lit.
10. On the CLI prompt, perform post-maintenance checks:
a. To display information about the serviced drive cage, enter showcage -d <cageID>.
b. To verify the current state of the system, enter checkhealth -detail.
11. Exit and logout of the session.

Replacing an I/O module in a drive enclosure

CAUTION:
• To prevent overheating, the I/O module bay in the enclosure should not be left open for more
than 6 minutes.
• Storage systems operate using two I/O modules per drive enclosure and can temporarily operate
using one I/O module when removing the other I/O module for servicing.



Procedure
1. Connect and log in to the HPE 3PAR Service Processor (SP).
See Connection methods for the SP on page 154.
2. Access the interactive CLI interface.
See Interfaces for the HPE 3PAR SP on page 156.
3. Set Maintenance Mode.
4. Perform preliminary maintenance checks and initiate servicing.
a. To verify the current state of the system, enter checkhealth -detail.
b. To display the cage IDs, enter showcage.
c. To display information about the drive cage you are working on, enter showcage -d <cageID>.
d. To illuminate the blue service LED of the drive enclosure with the failed I/O module, enter
locatecage <cageID>.
5. Locate the I/O module from the showcage output after you find the cage with the blue service LED.

NOTE:
I/O module 0 is at the bottom and I/O module 1 is on top.
6. If required, label and then unplug any cables from the back panel of the I/O module.
7. To remove the module, (1) loosen the captive retaining screw, then (2) pull out the latch and (3) slide
out the I/O module.

Figure 71: Removing the I/O module


8. To install the replacement module, (1) insert the I/O module into the bay, (2) close the latch until it
locks and (3) tighten the captive screw.



Figure 72: Installing the I/O module
9. Reconnect the mini-SAS cables to the replacement I/O modules in the same location as before.
10. On the CLI prompt, perform post-maintenance checks:
a. To turn off the blue service LED of the drive enclosure, enter locatecage -t 0 <cageID>.
b. To display the cage IDs, enter showcage.
c. To display information about the serviced drive cage, enter showcage -d <cageID>.
If required:
• To upgrade the firmware on the I/O module, enter upgradecage <cageID>.

NOTE:
The drive enclosure blue service LED blinks during the upgrade process.
• To display information about the drive cage, enter showcage -d <cageID>.
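For example, assuming the serviced drive cage is cage1 (a placeholder name; use the cage ID from
your showcage output):

cli% upgradecage cage1
cli% showcage -d cage1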
d. To verify the current state of the system, enter checkhealth -detail.
11. Exit and logout of the session.

Replacing mini-SAS cables for a drive enclosure

Procedure
Preparation:
1. Unpack the replacement mini-SAS cable and place on an ESD safe mat.
2. Connect and log in to the HPE 3PAR Service Processor (SP).
See Connection methods for the SP on page 154.
3. Set Maintenance Mode.
A maintenance window is initiated that stops the flow of system alerts from being sent to HPE.
4. Access the interactive CLI interface.
See Interfaces for the HPE 3PAR SP on page 156.
5. Initiate Check Health of the storage system.
Issue the checkhealth -detail command.



CAUTION:
If health issues are identified during the Check Health scan, resolve these issues before
continuing. Refer to the details in the Check Health results and contact HPE support if
necessary.

A scan of the storage system will be run to make sure that there are no additional issues.
6. Identify the <cage_name> for the drive enclosure containing the failed cable.
Issue the showcage command.
7. Display information about the drive enclosure.
Issue the showcage -d <cage_name> command.

NOTE:
I/O module 0 is at the bottom, I/O module 1 is at the top, port DP-1 is on the left, and DP-2 is
on the right.
8. Initiate the Locate action for the I/O module containing the failed cable.
Issue the locatecage -t <sec> <cage_name> iocard <iocard_ID> command, where
<cage_name> is the drive cage name and <iocard_ID> is the I/O module number.

On the I/O module, the blue UID LED is illuminated.
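For example, assuming the failed cable connects to I/O module 0 of a cage named cage1 and the UID
LED should stay lit for 300 seconds (placeholder values; use the cage name and I/O module number
identified in the previous steps):

cli% locatecage -t 300 cage1 iocard 0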


Removal:
9. Disconnect the failed mini–SAS cable.
Pull straight out on the plastic release mechanism.

Figure 73: Disconnecting the mini–SAS cable


Replacement:
10. Install the replacement mini-SAS cable.
11. Label the replacement mini-SAS cable.
Verification:
12. Identify the ID of the replacement drive enclosure (cage).
Issue the showcage command.
13. Confirm the details about the replacement drive enclosure.
Issue the showcage -d <cage_ID> command.
14. Stop the Locate action on the drive enclosure.



Issue the locatecage -t 0 <cage_name> command.
15. Initiate Check Health of the storage system.
A scan of the storage system will be run to make sure that there are no issues after the replacement.
16. If significant time is left in the maintenance window, end Maintenance Mode.
17. Follow the return instructions provided with the replacement component.

Replacing a drive chassis


This section describes the procedures for replacing a drive chassis.

WARNING:
Ensure that the data on the drives is backed up.

Procedure
1. Connect and log in to the HPE 3PAR Service Processor (SP).
See Connection methods for the SP on page 154.
2. Access the interactive CLI interface.
See Interfaces for the HPE 3PAR SP on page 156.
3. Set Maintenance Mode.
4. Perform preliminary maintenance checks and initiate servicing.
a. To verify the current state of the system, enter checkhealth -detail.
b. To display the cage IDs, enter showcage.
c. To identify the drives to be replaced in the drive enclosure, enter showpd -p -cg
<CageNumber> and showpd -i -p -cg <CageNumber>.
d. Enter setpd ldalloc off followed by the pdids of all the drives in the drive enclosure to be
replaced.
e. To initiate the temporary removal of data from the drives and to store the data on spare drives,
enter servicemag start <CageNumber> <MagNumber>.

NOTE:
Run this command once for each drive in the drive enclosure to be replaced.
f. To monitor progress, enter servicemag status.

NOTE: The completion of the servicemag command illuminates the blue service LED on
each of the drives in the drive enclosure to be replaced.
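For example, assuming the drive enclosure to be replaced is cage 2 and one of its drives is in
magazine 0 (placeholder numbers; repeat servicemag start for each magazine reported by showpd):

cli% servicemag start 2 0
cli% servicemag status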
g. To illuminate the blue service LED of the drive enclosure, enter locatecage <cageID>.
5. Disconnect the power cables from the power supplies of the drive enclosure to be replaced.
6. Label and remove the mini-SAS cables from the drive enclosure to be replaced.
7. Loosen the captive Torx T25 thumbscrews in the two rear hold-down brackets and slide the brackets
back from the drive enclosure.
8. To remove the front bezel of the drive enclosure, (1) press the release tab and (2) pivot it off the front
of the drive enclosure.
9. Label each drive in the drive enclosure to be replaced with the magazine (mag) number for later
replacement in the same magazine position.
10. Loosen the captive Torx T25 screws behind the latches on the front left and right bezel ears of the
drive enclosure. Support the enclosure from underneath and slide the failed drive enclosure out of its
rack.



WARNING:
Two people are required to lift an enclosure into the rack. When you load the enclosure into the
rack above chest level, a third person also must assist with the alignment of the enclosure with
the rails.
11. Place the failed drive enclosure next to the replacement drive enclosure on a flat ESD safe work
surface and transfer the following components from the former to the latter in the exact position:
• I/O modules
• Fan modules, if required
• Power supplies
• Storage drives
12. Support the replacement drive enclosure from underneath. Then align the enclosure with the rack
rails and slide the enclosure into the rack.
13. Tighten the captive Torx T25 screws behind the latches on the front left and right bezel ears of the
drive enclosure. Also tighten the captive Torx T25 thumbscrews in the two rear hold-down brackets.
14. Connect the mini-SAS cables.
15. Connect the power cables to the power supplies and secure with cable straps.

NOTE:
When you attach the power cables, the drive enclosure is powered on.
16. Verify the drive enclosure, I/O modules, power supplies, fan module, and drive status LEDs are lit
green and operating normally.
17. On the CLI prompt:
a. To identify the new cage ID of the newly installed drive enclosure, enter showcage.
b. To confirm that the system identifies the drives in the new drive enclosure, enter showcage -d
<cageID>.
c. To confirm the status of the drives in the newly installed drive enclosure, enter showpd -p -cg
<NewCageNumber> and showpd -i -p -cg <NewCageNumber>.
d. To restore data to the drives removed earlier, enter servicemag resume <NewCageNumber>
<MagNumber>.

NOTE:
Run this command once for each drive in the newly installed drive enclosure.
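For example, continuing the placeholder values (new cage number 2, magazine 0):

cli% servicemag resume 2 0
cli% servicemag status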
e. To monitor progress, enter servicemag status.

NOTE:
Depending on the data amount and system resources, the process can take several hours
to complete.
f. Replace the front bezel on the new drive enclosure.
To show specific status, use the servicemag status <CageNumber> <MagNumber>
command. To clear specific status of the failed drive enclosure, use the servicemag
clearstatus <CageNumber> <MagNumber> command.
g. To confirm the cage ID of the failed drive enclosure that was physically removed, enter
showcage.
h. To remove the entry of the old cage from the system information, enter servicecage remove
<OldCageNumber>.
i. To verify the current state of the system, enter checkhealth -detail.
18. Exit and logout of the session.



Drive guidelines
CAUTION:
• To avoid potential damage to equipment and loss of data, handle drives carefully following
industry-standard practices and ESD precautions. Internal storage media can be damaged when
drives are shaken, dropped, or roughly placed on a work surface.
• When installing a storage drive, press firmly to make sure that the drive is fully seated in the
drive bay and then close the latch handle.
• When removing a storage drive, press the release button and pull the drive slightly out of the
enclosure. To allow time for the internal disk to stop rotating, wait approximately 10 seconds
before completely removing the drive from the enclosure.
• Storage drives are hot-pluggable only when they are in a service replaceable state initiated by
servicemag. If a storage drive fails, the system automatically runs servicemag in the
background.
• To avoid damage to hardware and the loss of data, never remove a drive without confirming that
the drive fault LED is lit.
• If the StoreServ is enabled with the HPE 3PAR Data Encryption feature, only use self-encrypting
drives (SED). Using a non-self-encrypting drive may cause errors during the repair process.

IMPORTANT:
The HPE 3PAR solid-state drives (SSDs) have a limited number of writes that can occur before
reaching a write-endurance limit for the SSD. This limit is expected to exceed the service life of the
HPE 3PAR StoreServ Storage system and is based on most configurations, I/O patterns, and
workloads. The storage system tracks all writes to an SSD to allow for proactively replacing an SSD
that is nearing the limit. If an SSD write-endurance limit is reached during the product-warranty
period, a replacement of the SSD follows the guidelines of the warranty. To prepare for an SSD
replacement after the product-warranty period has expired, Hewlett Packard Enterprise provides the
HPE 3PAR SSD Extended Replacement program. This program is available for eligible HPE 3PAR
SSDs.

IMPORTANT:
After initial storage system installation, to prevent damage to the drives, any HPE 3PAR StoreServ
20000 Storage system with LFF drives that require physical relocation must have all LFF drives
removed from the drive enclosures (cages). Clearly label and package all drives. Reinstall the drives
in the same drive enclosure locations after the move is complete. As a best practice, back up the
storage system data before relocation.

Replacing a drive
The storage system supports hard disk drives (HDD) and solid-state drives (SSD) in the following form
factors:
• Large form factor (LFF) drives
• Small form factor (SFF) drives
The drive replacement procedures are the same for all drives. Review and follow the drive guidelines
listed in Drive guidelines on page 70 before adding, removing, or replacing drives.

Procedure
1. Connect and log in to the HPE 3PAR Service Processor (SP).
See Connection methods for the SP on page 154.
2. Access the interactive CLI interface.

See Interfaces for the HPE 3PAR SP on page 156.
3. Set Maintenance Mode.
4. Perform preliminary maintenance checks and initiate servicing.
a. To verify the current state of the system, enter checkhealth -detail.
b. To identify the failed drive, enter showpd -i.

NOTE: If a storage drive fails, the system automatically runs the servicemag command in
the background. The servicemag command illuminates the blue drive LED to indicate a
fault and the drive to replace. Storage drives are replaced for various reasons and not
necessarily the result of a failure. In this case, the displayed output may not show errors. If
the replacement is a proactive replacement prior to a failure, enter servicemag start -pdid
<pdID> to initiate the removal of data from the drive. The system will store the
removed data on the spare chunklets.
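For example, assuming the drive to be proactively replaced has physical disk ID 42 in the showpd -i
output (a placeholder ID):

cli% servicemag start -pdid 42
cli% servicemag status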
c. To monitor progress, enter servicemag status.

NOTE: When servicemag successfully completes, the blue LED on the drive to be
replaced is illuminated.
d. To illuminate the blue service LED of the cage where the drive is located, enter locatecage
<cageID>.
5. Identify the drive enclosure by the blue service LED.
6. To remove the front bezel of the drive enclosure:
a. Press the release tab.
b. Pull it out from the front of the drive enclosure.
7. Identify the drive to replace. The blue service LED of the drive will be lit.
8. To remove the drive:
a. (1) Press the release button of the drive to open the latch handle. Extend the latch handle.
b. (2) Pull out the drive from the bay.

Figure 74: Removing the drive


9. To install the replacement drive:
a. (1) Press the release button of the new drive to open the latch handle. Insert it into the bay.
b. (2) Close the latch handle to lock it into place.



Figure 75: Installing the drive
10. Confirm that the green LED is lit.
The blue service LED will blink while data is being restored to the replacement drive.

NOTE: The servicemag command begins automatically.

11. Enter the servicemag status command to:


• Confirm the servicemag operation has begun.
• Display progress of the integration of the drive into the system.
• Display progress about the transfer of data onto the drive.
12. Replace the front bezel on the storage enclosure.
13. To verify the current state of the system, enter checkhealth -detail.
14. Exit and logout of the session.

Service Processor
Replacing a physical Service Processor
Procedure
1. Connect and log in to the HPE 3PAR Service Processor (SP).
See Connection methods for the SP on page 154.
2. Halt the SP.
• With HPE 3PAR SP 4.x using SPMAINT: On the 3PAR Service Processor Menu, select 1 SP
Control/Status and then select option 3 Halt SP and confirm all prompts to halt the SP.
• With HPE 3PAR SP 5.0 using the HPE 3PAR Service Console (SC): From the main menu,
select Service Processor, and then select Actions > Shutdown.
3. On the front of the SP, verify that the Power LED is off.
4. Record all the locations of the cable connections to the SP before disconnecting all cables.
5. Remove the AC cords from the rear.
6. To remove the SP front bezel from the cabinet, (1) press the release tab and (2) pivot it off the front
of the SP.

Figure 76: Removing the service processor
7. Loosen the two captive T25 Torx screws that secure the SP to the rack.
8. Support the failed SP from underneath and slide it out of the rack until it is fully extended.
9. Press the release buttons on the sides of the rails and remove the SP from the rails. Push the rack
rails into the rack.
10. Align the replacement SP with its shelf on the storage system chassis. Slide the SP into the cabinet
until the rails engage. Pull the SP back slightly to ensure that the rails are locked.

Figure 77: Installing the service processor


11. Use a T25 Torx screwdriver to secure the two captive T25 Torx screws on the SP retainer.
12. At the rear of the storage system, connect the AC cords and any removed cabling. If applicable,
secure the AC cords with a strap or fastener.
13. At the front of the storage system, press the power button if the SP does not reboot automatically. On
the SP, verify that the power LED is illuminated green during the initialization process.
14. Configure the software on the SP. To begin the Guided Setup (with SP 5.x) or Moment of Birth (MOB,
with SP 4.x) process, refer to the HPE 3PAR StoreServ 20000 Storage System Installation Guide.
15. Replace the front bezel on the SP.



System power
In a storage rack, the power distribution units (PDU) are at the rear of the storage system. To support
redundant power, each power domain must include two power supplies that connect to separate PDUs,
and each PDU must connect to an independent AC line. The storage system supports either single-phase
or three-phase PDUs.
For more information on rack and power infrastructure, see https://www.hpe.com/us/en/integrated-
systems/rack-power-cooling.html.

Node rack single-phase PDU


The following illustration provides an example of a node rack single-phase PDU.

Figure 78: Example of node rack single-phase PDU

NOTE:
For power plug details, refer to HPE 3PAR StoreServ 20000 Storage Site Planning Manual.

Table 2: Single-Phase PDU

Callout Description
1       Main power switch
2       Front of unit, circuit breakers
3       Rear of unit, power receptacle connectors

WARNING:
Do not connect non-HPE 3PAR StoreServ Storage or unsupported components to PDUs.

Figure 79: 3PAR StoreServ 20450 rack with single-phase PDUs—North America and Japan



Figure 80: 3PAR StoreServ 20800/20840/20850 rack with single-phase PDUs—North America and
Japan



Figure 81: 208xx R2 example: 1075mm rack, US/Jpn, 1-phase



NOTE:
The single-phase PDU for US/Japan for the 3PAR StoreServ 20800 R2/20840 R2/20850 R2
1075mm rack is P9Q39A. The single-phase PDU for WW/International for the 3PAR StoreServ
20800 R2/20840 R2/20850 R2 1075mm rack is P9Q43A.



Figure 82: 208xx R2 example: 1200mm rack, US/Jpn, 1-phase



NOTE:
The single-phase PDU model for US/Japan for the 3PAR StoreServ 20800 R2/20840 R2/20850 R2
1200mm rack is P9Q41A. The single-phase PDU model for WW/International for the 3PAR
StoreServ 20800 R2/20840 R2/20850 R2 1200mm rack is P9Q45A.

Replacing a horizontal single-phase PDU in a node rack


In a storage rack, the power distribution units (PDU) are located at the rear of the storage system. To
support redundant power, each power domain must include two power supplies that connect to separate
PDUs, and each PDU must connect to an independent AC line.

Procedure
Preparation:
1. Unpack the replacement PDU and place on an ESD safe mat.
2. Open or remove the rear door of the storage system.
Removal:
3. Set the main power breakers on the failed PDU to the OFF position.

Figure 83: Setting the power breakers to the OFF position, single-phase PDU
4. Unplug the main power cord of the PDU.

NOTE:
To remove the main power cord, access to the side of the cabinet may be required.



Figure 84: PDU circuit breakers, single-phase PDU

Item Description
1    Front of unit, main breaker
2    Front of unit, circuit breakers
3    Rear of unit, power banks

5. If necessary to gain access to the back of the PDU, remove the blank plates from the front of the
rack that cover the PDU rack unit (RU) location.
6. Disconnect the power cord retention straps securing the power cables to the PDU.
7. Disconnect the AC cords connecting the power extension bars to the failed PDU.



NOTE:
Note how each extension bar is connecting to the PDU before removing:
• PDU 0 is connected to power extension bars A1 and A2.
• PDU 1 is connected to power extension bars B1 and B2.
• PDU 2 is connected to power extension bars A3 and A4.
• PDU 3 is connected to power extension bars B3 and B4.

Table 3: Single-phase cabling (cords to respective plugs on back of core PDUs)

PDM PDU Plug  PDM PDU Plug
A1  PDU0-L1   B1  PDU1-L1
A2  PDU0-L2   B2  PDU1-L2
A3  PDU2-L1   B3  PDU3-L1
A4  PDU2-L2   B4  PDU3-L2
A5  PDU2-L3   B5  PDU3-L3

8. Remove the PDUs.


The four PDUs are PDU0, PDU1, PDU2, and PDU3 from bottom to top.

a. Note the routing orientation of the PDU power cord.


b. Remove the four P2 Phillips-head screws securing the PDU to the rack (1).
c. Pull the PDU out from the rack space (2).
d. Disconnect the green/yellow ground wire from the back of the failed PDU. This wire can be
reused if it is in good condition.
e. Remove the failed PDU and power cable completely from the rack.

Figure 85: Removing the single-phase PDU


Replacement:
9. Install the ear mounting bracket on each side of the replacement PDU.
10. Install the replacement PDU.
a. Feed the power cable of the replacement PDU into the rack.
b. Connect the green/yellow wire (in the cabinet) to the back of the replacement PDU.



c. Slide the PDU into the cabinet (1).
d. Tighten the four P2 Phillips-head screws to secure the PDU onto the rack (2).
e. Confirm the power breakers on the replacement single-phase PDU are set to the OFF position
before reconnecting any AC power cables.
f. Reconnect the AC power cables connecting the power extension bars to the PDU in the same
routing orientation as before they were removed.

Figure 86: Installing the single-phase PDU


11. Secure the AC cords to the PDU:
a. Position the AC cord between the two holes in the cord retention bracket of the PDU.
b. Fasten a tie wrap through the holes in the bracket on both sides of the AC cord and tighten.
c. Push the AC cord connector to ensure it is fully seated.

Figure 87: Securing the AC cord to the PDU, single-phase PDU


12. Plug the replacement PDU main power cord into the electrical outlet.
13. Set the main PDU circuit breaker to the ON position and confirm that the red LED is lit.
14. Set the rest of the PDU circuit breakers to the ON position.
Verification:
15. Verify that all connected power supplies are functioning properly and that all power Status LEDs are
green.
16. Reinstall the door to the rack (if it was removed) and secure it.
17. Follow the return instructions provided with the replacement component.

Node rack three-phase PDU


The following illustration provides an example of a node rack three-phase PDU.



Figure 88: Example of a node rack three-phase PDU

Table 4: Node rack three-phase PDU

Callout Description
1       Front of unit, circuit breakers
2       Rear of unit, power receptacle connectors


WARNING:
Do not connect non-HPE 3PAR StoreServ Storage or unsupported components to PDUs.



Figure 89: HPE 3PAR StoreServ Storage 20450 node rack with three-phase PDUs



Figure 90: HPE 3PAR StoreServ Storage 20800/20850 node rack with three-phase PDUs



Figure 91: 208xx R2 example: 1075mm rack, US/Jpn, 3-phase



Figure 92: 208xx R2 example: 1075mm rack, WW/Intl, 3-phase



Figure 93: 208xx R2 example: 1200mm rack, US/Jpn, 3-phase



Figure 94: 208xx R2 example: 1200mm rack, WW/Intl, 3-phase



Replacing a horizontal three-phase PDU in a node rack
In a storage rack, the power distribution units (PDU) are located at the rear of the storage system. To
support redundant power, each power domain must include two power supplies that connect to separate
PDUs and each PDU must connect to an independent AC line.

Procedure
Preparation:
1. Unpack the replacement PDU and place on an ESD safe mat.
Removal:
2. Open or remove the rear door of the storage system.
3. Set all six power breakers on the failed PDU to the OFF position.

Figure 95: Setting the power breakers to the OFF position, three-phase PDU
4. Unplug the main power cord of the PDU.

NOTE:
To remove the main power cord, access to the side of the cabinet may be required.



Figure 96: PDU circuit breakers, three-phase PDU

Item Description
1    Front of unit, circuit breakers
2    Rear of unit, power banks

5. If necessary to gain access to the back of the PDU, remove the blank plates from the front of the
rack that cover the PDU rack unit (RU) location.
6. Disconnect the power cord retention straps securing the power cables to the PDU.
7. Disconnect the AC cords connecting the power extension bars to the failed PDU.

NOTE:
Note how each extension bar is connecting to the PDU before removing:
• By convention, black power cords connect PDU 0 to the A power extension bars and the
grey power cords connect PDU 1 to the B power extension bars.
• PDU 0 is connected to power extension bars A1, A2, A3, and A4.
• PDU 1 is connected to power extension bars B1, B2, B3, and B4.

Table 5: Three-phase cabling (cords to respective plugs on back of core PDUs)

PDM PDU Plug  PDM PDU Plug
A1  PDU0-L1   B1  PDU1-L1
A2  PDU0-L2   B2  PDU1-L2
A3  PDU0-L3   B3  PDU1-L3
A4  PDU0-L6   B4  PDU1-L6

8. Remove the PDUs.



a. Note the routing orientation of the PDU power cord.
b. Remove the four P2 Phillips-head screws securing the PDU to the rack (1).
c. Pull the PDU out from the rack space (2).
d. Disconnect the green/yellow ground wire from the back of the failed PDU. This wire can be
reused if it is in good condition.
e. Remove the failed PDU and power cable completely from the rack.

Figure 97: Removing the three-phase PDU


Replacement:
9. Install the ear mounting bracket on each side of the replacement PDU.
10. Install the replacement PDU.
a. Feed the power cable of the replacement PDU into the rack.
b. Connect the green/yellow wire (in the cabinet) to the back of the replacement PDU. Gaining
access to the back of the PDU after installation is difficult.
c. Slide the PDU into the cabinet (1).
d. Tighten the four P2 Phillips-head screws to secure the PDU onto the rack (2).
e. Confirm the power breakers on the replacement three-phase PDU are set to the OFF position
before reconnecting any AC power cables.
f. Reconnect the AC power cables connecting the power extension bars to the PDU in the same
routing orientation as before they were removed.

Figure 98: Installing the three-phase PDU


11. Secure the AC cords to the PDU:
a. Position the AC cord between the two holes in the cord retention bracket of the PDU.
b. Fasten a tie wrap through the holes in the bracket on both sides of the AC cord and tighten.
c. Push the AC cord connector to ensure it is fully seated.



Figure 99: Securing the AC cord to the PDU, three-phase PDU
12. Plug the replacement PDU main power cord into the electrical outlet.
13. Set the PDU circuit breakers to the ON position.
Verification:
14. Verify that all connected power supplies are functioning properly and that all power Status LEDs are
green.
15. Reinstall the door to the rack (if it was removed) and secure it.
16. Follow the return instructions provided with the replacement component.

Expansion rack single-phase PDU


The following illustrations provide a description of an expansion rack with single-phase PDUs.

WARNING:
Do not connect non-HPE 3PAR StoreServ Storage or unsupported components to PDUs.



Figure 100: Expansion rack with single-phase PDUs—North America and Japan/International



NOTE:
The PDU model for the North America/Japan expansion rack is 4X H5M58A. The PDU model for
the international expansion rack with a single-phase PDU is 4X H5M68A.



Figure 101: 208xx R2 example:1075mm expansion rack, US/Jpn, 1-phase



NOTE:
The single phase PDU model for US/Japan for the 3PAR StoreServ 20800 R2/20840 R2/20850 R2
expansion rack is P9Q41A. The single phase PDU model for the WW/International for the 3PAR
StoreServ 20800 R2/20840 R2/20850 R2 expansion rack is P9Q43A.

Expansion rack three-phase PDU


The following illustration provides a description of an expansion rack with three-phase PDUs.



Figure 102: Expansion rack with three-phase PDUs—North America and Japan



Figure 103: Expansion Rack with three-phase PDUs—international



Figure 104: 208xx R2 example:1075mm expansion rack, US/Jpn, 3-phase



Figure 105: 20800 R2/20840 R2/20850 R2 1075mm expansion rack with three-phase PDUs—WW/
International



Figure 106: 208xx R2 example:1075mm expansion rack, ww/Intl, 3-phase

Replacing a vertical single-phase or three-phase PDU in an expansion rack


In a storage rack, the power distribution units (PDU) are located at the rear of the storage system. To
support redundant power, each power domain must include two power supplies that connect to separate
PDUs and each PDU must connect to an independent AC line.



Table 6: PDUs used in single-phase and three-phase expansion racks

Input power type USA/Japan International

Single-phase Half-height Half-height


Three-phase Half-height Three-quarters height

The following figure shows a half-height PDU. Half-height PDUs have two power banks.

Figure 107: Half-height PDU

The following figure shows a three-quarters height PDU. It has three power banks.

Figure 108: Three-quarters height PDU

Procedure
Preparation:
1. Unpack the replacement power distribution unit (PDU) and place on an ESD safe mat.
2. Open or remove the rear door of the storage system.
Removal:
3. Set the main power breakers on the failed PDU to the OFF position.
4. Unplug the main power cord of the PDU.
5. Confirm (or label) the power cords that plug into the power banks of the PDU. These cords connect
to the power supplies of the drive enclosures.
6. Remove all power cords.
7. Remove the screw that secures the green and yellow ground wire, and remove the ground wire from
the PDU. In the following figure, the ground wire is circled in green.



Figure 109: PDU ground wire and screw
8. Loosen or remove the mounting screws that secure the PDU to the rack.
Replacement:
9. Prepare the new PDU for installation:
a. Put the appropriate PDU cord sticker on each end of the main cord.
b. Affix the cord retention bracket to the PDU.
c. Loosely attach the mounting screws to the PDU.
10. Install the replacement PDU in the same orientation and location as the PDU that was removed.
11. Tighten the mounting hardware to secure the PDU to the rack.
12. Connect the green and yellow ground wire to the PDU.
13. Verify that the PDU breakers are in the OFF position.
14. Route and connect the main power cord to the electrical outlet.
15. Connect and secure all power supply cords to the appropriate PDU power banks.
16. Set the PDU breakers to the ON position.
Verification:
17. Verify that all connected power supplies are functioning properly and that all power Status LEDs are
green.
18. Reinstall the door to the rack (if it was removed) and secure it.
19. Follow the return instructions provided with the replacement component.



Upgrading the StoreServ
Use this chapter to perform upgrade procedures on the HPE 3PAR StoreServ 20000 storage systems.
For more information on part numbers for storage system components listed in this chapter, see the Parts
catalog on page 122.
Preparing for Controller Node Upgrade
Before performing any upgrade or expansion, verify with the system administrator that a complete backup of
all data on the storage system has been performed. It is possible to install controller nodes into an active
system. The new nodes will not be able to start and join the cluster immediately after installation. Before a
node can join the cluster, you must load the proper HPE 3PAR OS version onto the node using the SP.
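
For example, the HPE 3PAR OS level running on the existing cluster can be confirmed from a CLI session
before the new nodes are installed. This is a minimal sketch; the release shown is illustrative and the
remaining output is omitted.

cli% showversion
Release version 3.3.1 (MU1)
Patches:  None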

CAUTION:
Before servicing any component in the storage system, prepare an Electrostatic Discharge-safe
(ESD) work surface by placing an antistatic mat on the floor or table near the storage system. Attach
the ground lead of the mat to an unpainted surface of the rack. Always use a wrist-grounding strap
provided with the storage system. Attach the grounding strap clip directly to an unpainted surface of
the rack.

Loading Order for Controller Nodes


A storage system may contain two to eight controller nodes per system configuration. A pair of controller
nodes must be added during any upgrade process.
The controller node chassis is located at the rear of the storage rack. From the rear of the chassis, the
node bays are numbered starting at zero (0) from bottom to top. Do not skip any bays when upgrading
controller nodes; refer to the following table for the loading order. If any bays remain empty, insert filler
panels to protect the node chassis.

Table 7: Controller Node Loading Order

System                 Controller Nodes    Bay Loading Order (bottom to top)

20450                  2                   0, 1
                       4                   0, 1, 2, 3

20800, 20840, 20850    2                   0, 1
                       4                   0, 1, 2, 3
                       6                   0, 1, 2, 3, 4, 5
                       8                   0, 1, 2, 3, 4, 5, 6, 7

Controller node and internal components



Adding controller nodes
NOTE:
Always install controller nodes as pairs during an upgrade.

Procedure
1. Connect and log in to the HPE 3PAR Service Processor (SP).
See Connection methods for the SP on page 154.
2. Access the interactive CLI interface.
See Interfaces for the HPE 3PAR SP on page 156.
3. Set Maintenance Mode.
4. Remove the controller node and power supply blanks if required.
5. To install the node, (1) insert the controller node with the insertion handles fully extended into the
enclosure until it stops and then (2) press the insertion handles to close and fully engage the node.
Repeat the step for additional nodes.

Figure 110: Inserting the controller node


6. Confirm that the controller node power switch is in the Off position.
7. Insert the PCI adapters into the slots and slide them in until each clicks and locks into place.



Figure 111: Inserting the PCI adapter
8. Insert the new power supply tray by sliding the unit in until it is fully engaged.

Figure 112: Inserting the power supply

NOTE: Insert the power supplies into the power supply trays in the enclosure.
A clicking sound indicates when the module is fully engaged.
9. Connect the power cables to the power supply and secure the cables with a retention strap.
10. Label the cables and then connect all the SAS cables to the peripheral components.

NOTE:
All SFP ports must contain an SFP transceiver and cable or a dust cover.
11. Remove the front bezel and a front blank if required.
12. Insert the BBUs.



Figure 113: Inserting the BBU
13. Insert the fan modules into the slot.

Figure 114: Inserting the fan module

NOTE:
• A clicking sound indicates when the module is fully engaged.
• Re-install the controller node front bezel after you install the second controller node.
14. Turn on the power to the lowest numbered controller node and allow the automatic node-rescue
process to load the OS on the node drives.



Figure 115: Power switch location on the controller node

NOTE:
The Automatic Node Rescue process may take up to 30 minutes.
15. Verify that the controller node LEDs show normal status, with the status LED blinking green.

NOTE:
Power on the second controller node only when you get a confirmation of the first node joining
the cluster.
16. Access the interactive CLI interface.
17. Perform preliminary maintenance checks and initiate servicing.
a. To verify that the node has joined the cluster, enter shownode -d.

cli% shownode -d
---- shownode -d -----
--------------------------------------------- Nodes ---------------------------------------------
                                                          Control Data    Cache
Node ----Name---- -State- Master InCluster -Service_LED- ---LED--- Mem(MB) Mem(MB) Available(%)
   0 4UW0001463-0 OK      No     Yes       Off           GreenBlnk   98304  131072          100
   1 4UW0001463-1 OK      Yes    Yes       Off           GreenBlnk   98304  131072          100
   2 4UW0001463-2 OK      Yes    Yes       Off           GreenBlnk   98304  131072          100
   3 4UW0001463-3 OK      Yes    Yes       Off           GreenBlnk   98304  131072          100

b. Add drive enclosures and additional storage with cabling.


c. To verify the current state of the system, enter checkhealth -detail.
18. Exit and logout of the session.



Adding PCI adapters
Each controller node can support up to seven host bus adapters. The numbering sequence of the HBAs
is the same for all HPE 3PAR StoreServ 20000 Storage systems.
The table describes the default configurations for HPE 3PAR StoreServ 20000 storage systems.

Table 8: Controller node PCIe slots and ports

Adapter Cards                    Description of Slot Usage

16 Gb FC (0-20 ports/node)       Host connections slot order: 5, 4, 6, 3, 0. Slot 0 is an optional
                                 host connection slot.

10 GbE CNA (0-10 ports/node)     Host connections slot order: 5, 4, 6, 3, 0. Slot 0 is an optional
                                 host connection slot.

10 GbE NIC (0-4 ports/node)      Host connections slot order: 5, 4, 6, 3, 0. Slot 0 is an optional
                                 host connection slot.
                                 IMPORTANT: The Ethernet ports on this host adapter can be
                                 configured for HPE 3PAR File Persona.

12 G SAS (4-12 ports/node)       Drive connections slot order: 2, 1, 0. Slot 0 is an optional host
                                 connection slot.

Figure 116: Controller node PCIe slots

To install additional adapters:

CAUTION:
The PCI Adapters are not hot-swappable.

Procedure
1. Connect and log in to the HPE 3PAR Service Processor (SP).



See Connection methods for the SP on page 154.
2. Access the interactive CLI interface.
See Interfaces for the HPE 3PAR SP on page 156.
3. Set Maintenance Mode.
4. Initiate Check Health of the storage system.
5. Check that the current state of the controller nodes is healthy. Issue the HPE 3PAR CLI shownode
command.
6. If upgrading to add a PCI adapter, locate the desired PCI slot. Enter locatenode -pci <slot>
<nodeID>. Identify the PCI adapter with the service LED illuminating blue.

NOTE:
• To illuminate the service LED when servicing a component, use the locatenode
command.
• The locatenode command confirms that the correct component is being serviced.
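
For example, to flash the service LED for a hypothetical adapter location in slot 3 of node 2 (the slot
and node numbers here are illustrative only):

cli% locatenode -pci 3 2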

7. Ensure that the cables to the PCI adapter are properly labeled to facilitate reconnecting the cables
for later service actions.
8. Only for the addition of a 10Gb NIC adapter with HPE 3PAR File Persona running, delete File
Persona ports on the controller node. Otherwise skip this step.

IMPORTANT:
If HPE 3PAR File Persona is running, the 10Gb NIC adapter must be added as a pair. Add a
10Gb NIC adapter to the same slot in the other node of the File Persona node pair. Use the
same steps documented in this section.

a. To identify the File Persona ports, issue the showport -fs command.
b. Delete File Persona ports on the controller node. Issue the controlport fs delete -f
<N:S:P> command where: <N:S:P> is the node:slot:port.
Deleting File Persona ports on the controller nodes causes existing FPGs on the controller node
to fail over to the second node.
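
A minimal sketch of these two substeps, assuming the existing File Persona ports on the node are 2:3:1
and 2:3:2 (the N:S:P values are illustrative only):

cli% showport -fs
cli% controlport fs delete -f 2:3:1
cli% controlport fs delete -f 2:3:2
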
9. To halt the desired node, enter shutdownnode halt <nodeID>.
10. When prompted, enter yes to confirm halting of the node.

NOTE:
Wait for 10 minutes for the node to halt. When the node halts, the node status LED blinks
green, and the node service LED displays blue.
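
A minimal sketch of the halt command from step 9, continuing the illustrative example for node 2:

cli% shutdownnode halt 2
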
11. To power off the node, set the node power switch to the OFF position.



Figure 117: Location of power switch on controller node

CAUTION:
The PCI adapters are not hot pluggable. Power off the node.
12. Remove the new adapter from its protective packaging. (On the node power switch, (I) represents On
and (O) represents Off.)
13. Remove the PCI filler panel from the designated slot.
14. Insert the PCI adapter into the slot and slide it in until it clicks and locks into place. Repeat the two
previous steps for additional adapters.

Figure 118: Inserting the PCI adapter


15. Connect the cables to the PCI adapter.
16. To apply power to the node, set the node power switch to the ON position.

NOTE: When power is supplied to the node, it begins to boot. This process takes
approximately 10 minutes. The node becomes part of the cluster when the process is
complete.
17. Verify that the controller node LEDs show normal status, with the status LED blinking green.
18. To verify that the node has joined the cluster, enter shownode -d.



cli% shownode -d
---- shownode -d -----
--------------------------------------------- Nodes ---------------------------------------------
                                                          Control Data    Cache
Node ----Name---- -State- Master InCluster -Service_LED- ---LED--- Mem(MB) Mem(MB) Available(%)
   0 4UW0001463-0 OK      No     Yes       Off           GreenBlnk   98304  131072          100
   1 4UW0001463-1 OK      Yes    Yes       Off           GreenBlnk   98304  131072          100
   2 4UW0001463-2 OK      Yes    Yes       Off           GreenBlnk   98304  131072          100
   3 4UW0001463-3 OK      Yes    Yes       Off           GreenBlnk   98304  131072          100

19. Only for the addition of a 10Gb NIC adapter with HPE 3PAR File Persona running, configure
File Persona interfaces on the Ethernet ports of the adapter on the controller node. Otherwise skip
this step.
a. To check the status of File Persona, enter showfs. Wait until the state of the File Persona
<node> is shown with a status of Running.
b. Add back the File Persona ports that you deleted in step 8. Issue the controlport fs add
<N:S:P> command where: <N:S:P> is the node:slot:port for the ports in the adapter.
c. Add File Persona ports for the new adapter. Issue the controlport fs add <N:S:P>
command where: <N:S:P> is the node:slot:port for the ports in the new adapter.
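
A minimal sketch of substeps a through c, reusing the illustrative 2:3:x ports deleted earlier and assuming
the new adapter was installed in slot 4 of the same node (all N:S:P values are illustrative only):

cli% showfs
cli% controlport fs add 2:3:1
cli% controlport fs add 2:3:2
cli% controlport fs add 2:4:1
cli% controlport fs add 2:4:2
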
20. To verify the current state of the system, enter checkhealth -detail.
21. Only for the addition of a 10Gb NIC adapter with HPE 3PAR File Persona running, restore any
failed-over FPGs to the proper controller node.
a. If checkhealth reports FPGs that are degraded because they have failed over, issue the
setfpg -failover <FPG_Name> command. This command restores the FPGs to the proper
controller node.
For example:

cli% checkhealth -detail fs


Checking fs
Component --Summary Description--- Qty
fs File Services FPG issues 1
--------------------------------------
1 total 1

Component -Identifier- -------------------Detailed Description-------------------


fs vfs1138_9 vfs1138_9 is degraded: Failed over from node0fs to node1fs
---------------------------------------------------------------------------------
1 total

cli% setfpg -failover vfs1138_9


This action will lead to unavailability of shares on FPG vfs1138_9.
select y=yes n=no: y



cli% checkhealth -detail fs
Checking fs
The following components are healthy: fs

22. To verify that the PCI adapter is installed in the correct slot location, enter shownode -pci.

cli% shownode -pci

Node Slot Type -Manufacturer- -Model-- ----Serial---- -Rev- Firmware
0    0    SAS  LSI            9205-8e  Onboard        01    11.00.00.00
0    1    FC   EMULEX         LPe12002 Onboard        03    2.01.X.9
0    2    CNA  QLOGIC         QLE8242  PCGLT0ARC2E3PE 58    4.8.110.1335395
0    9    Eth  Intel          e1000    Onboard        n/a   7.3.21-k4.1
1    0    FC   Unknown EMULEX LPe12004 BT94849246     03    2.00.X.1
1    4    FC   Unknown EMULEX LPe12004 BT94849087     03    2.00.X.1
1    8    FC   Unknown EMULEX LPe12004 BT94849069     03    2.00.X.1
1    9    Eth  n/a Intel      e1000    Onboard        n/a   7.3.21-k4.1

23. To verify that all ports are connected to the system and in a ready state, enter showport.

cli% showport
N:S:P Mode State ----Node_WWN---- -Port_WWN/HW_Addr- Type Protocol
1:5:3 initiator loss_sync 2FF70002AC000167 21530002AC000167 free FC
1:5:4 initiator ready 2FF70002AC000167 21540002AC000167 free FC
1:8:1 target ready 2FF70002AC000167 21810002AC000167 host FCoE
1:8:2 suspended config_wait 0000000000000000 0000000000000000 cna -
1:9:1 peer ready - 0002AC800059 rcip IP

24. Confirm that all SAS cables are properly connected and verify that all LED statuses are illuminated
green.
25. To verify the current state of the system, enter checkhealth -detail.
26. To upgrade PCI adapters in additional controller nodes, repeat all the previous steps.

IMPORTANT:
Before starting another node upgrade process, wait approximately 10 minutes to allow
multipathing software processes to complete.
27. Exit and logout of the session.
More Information
Alert notifications from the SP on page 168
Check health action from the HPE 3PAR SP on page 159
Connection methods for the SP on page 154
Locate action from the SP on page 167
Maintenance mode action from the SP on page 167

Drive chassis and drives


Expanding a storage system by connecting a new drive chassis to existing PCI back-end adapters with
open ports does not require system downtime. Before installing additional hardware, verify with the local
system administrator that a complete backup of all data on the storage system has been performed.



Adding drives
Based on the hard disk drives (HDD) or solid-state drives (SSD) used, the following numbers of drives
can be installed in a single enclosure:
• Large form factor (LFF): 12 drives
• Small form factor (SFF): 24 drives
Drive installation guidelines

Procedure
1. Follow industry-standard practices when handling disk drives. Internal storage media can be damaged
when drives are shaken, dropped, or roughly placed on a work surface.
2. When installing a disk drive, press firmly to make sure that the drive is fully seated in the drive bay and
then close the latch handle.
3. Always populate drive bays starting with the lowest bay number. Hewlett Packard Enterprise
requires a minimum of two drives of the same drive type.
4. Populate the drive types in pairs, distributed evenly, beginning at the bottom in slot 0 and progressing
from left to right and bottom to top. See the following tables for examples.

Table 9: Drive Slot Order for SFF Drive Enclosure

20 (...) 21 (...) 22 (...) 23 (...) 24 (Do not use)

15 (...) 16 (...) 17 (...) 18 (...) 19 (...)

10 (NL) 11 (NL) 12 (...) 13 (...) 14 (...)

5 (SSD) 6 (FC) 7 (FC) 8 (FC) 9 (FC)

0 (SSD) 1 (SSD) 2 (SSD) 3 (SSD) 4 (SSD)

Table 10: Drive Slot Order for LFF Drive Enclosure

8 (...) 9 (...) 10 (...) 11 (...)

4 (NL) 5 (NL) 6 (NL) 7 (NL)

0 (SSD) 1 (SSD) 2 (SSD) 3 (SSD)

CAUTION:
To prevent improper cooling and thermal damage, operate the enclosure only when all bays are
populated with either a component or a blank.

To install an HDD or SSD:


• To remove the front bezel of the drive enclosure, press the release tab and pivot it off the front of the
drive enclosure.
• Remove the drive blank panel.



Figure 119: Removing the blank panel (LFF shown)
• Unlatch and swing out the latch handle on the drive before (1) sliding the drive into the bay. To seat it,
press firmly on the drive. Close the latch handle by (2) pressing firmly until it locks in place.

Figure 120: Inserting an HDD drive

IMPORTANT:
When a drive is inserted in an operational enclosure, the drive LED flashes green to indicate that
the drive is seated properly and receiving power.
• Connect and log in to the HPE 3PAR Service Processor (SP).
See Connection methods for the SP on page 154.
• Access the interactive CLI interface.
See Interfaces for the HPE 3PAR SP on page 156.
• Set Maintenance Mode.
• Perform preliminary maintenance checks and initiate servicing.
◦ To verify that the new drive appears and the disk state is normal, enter showpd.
◦ Attach the front bezel to the front of the new drive enclosure.
◦ To verify the current state of the system, enter checkhealth -detail.
• Exit and logout of the session.
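
As a reference for the showpd check above, a minimal sketch (output omitted). Newly added drives should
appear with a normal state, and the second command, which filters for failed or degraded drives, should
return none:

cli% showpd
cli% showpd -failed -degraded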

Adding a drive chassis


NOTE:
Adding a drive enclosure to a system with available ports in the SAS back-end adapters of the
system does not require system downtime. If no ports are available, add a SAS PCI adapter first.



CAUTION:
Observe all precautions when adding new components. Review and follow storage drive guidelines
before adding, removing, or replacing any drives.

To install additional drive chassis:

Procedure
1. Connect and log in to the HPE 3PAR Service Processor (SP).
See Connection methods for the SP on page 154.
2. Access the interactive CLI interface.
See Interfaces for the HPE 3PAR SP on page 156.
3. Set Maintenance Mode.
4. Perform preliminary maintenance checks and initiate servicing.
a. To verify the current state of the system, enter checkhealth -detail.
5. At the front of the system, remove the blank filler panels where the new drive enclosure will be
installed.
6. Position the left and right rack rails at the desired U position in the rack. Adjust the rails to fit the rack.
7. The bottom edge of the rails must align at the front and back with the bottom of the RETMA boundary
of the rack in the lowermost U position.

NOTE: The rails are marked Front L and Front R with an arrow indicating the direction of the
rail installation.
8. Use guide pins to align the shelf mount kit to the RETMA column holes.
9. To engage the rear, push the rail toward the back of the rack until the spring hook snaps into place.
10. To engage the front, pull the rail towards the front of the rack to engage the spring hook with the
RETMA column in the same manner as the rear spring hook.

NOTE:
Make sure that the respective guide pins for the square hole rack align properly into RETMA
column hole spacing.
You can use the same procedure to secure the left rail.
11. Secure the rear of the rack rail to the RETMA column with the square-hole shoulder screws provided
in the package.



Figure 121: Securing the rail
12. You can use the same procedure to affix the left rail.

Figure 122: Mounting the rail


13. Secure the front of the rail to the front RETMA column using the provided flat securing screw/guide
pin in the bottom screw position of the rail.

WARNING:
Always use at least two people to lift an enclosure into the rack. If the enclosure is being loaded
into the rack above chest level, a third person must assist with aligning the enclosure with the
rails while the other two people support the weight of the enclosure.
14. To install the enclosure, (1) slide the enclosure into position on the rails and (2) secure the chassis
into the rack by tightening the captive screw behind the latch on the front left and right bezel ears of
the chassis.



Figure 123: Installing the drive enclosure

CAUTION:
The front screw must be attached at all times when installed on the rack.
15. Attach the rear hold-down brackets by (1) sliding the tab with the arrow pointed forward into the
corresponding slot on the left and right side of the rear of the chassis and (2) using the black-headed
thumbscrew to secure each bracket tightly to the rail.

Figure 124: Attaching the rear hold-down brackets


16. Remove the drive blank panels that will be replaced with drives.



Figure 125: Removing the blank panel (LFF shown)
17. Unlatch and swing out the latch handle on the drive before (1) sliding the drive into the bay and press
firmly on the drive to seat it. Close the latch handle by (2) pressing firmly until it locks in place.

Figure 126: Inserting the HDD


18. Connect the mini-SAS cables from the appropriate controller node SAS ports to the I/O modules in
the rear of the drive enclosure according to guidelines. See Data cables on page 191.
19. Label the mini-SAS cables according to guidelines.
20. Connect the data and power cables. For more information about labeling and connecting power and
data cables, refer to Connecting the power and data cables on page 186.
21. Secure the cables with cable straps.
22. When the power cords are connected, power is supplied to the power supplies and the power on
LED turns solid green.
23. Wait a few minutes for the drive enclosures to complete their startup routines.
24. Verify the drive LEDs display a steady green light.
25. Verify that the drive enclosure, fan module, I/O module, power supply, and drive status LEDs are lit
green and operating normally.
26. On the CLI prompt, perform post-maintenance checks:
a. To identify the new cage ID of the newly installed drive enclosure, enter showcage.
b. To confirm that the drives in the new drive enclosure are spun-up and seen by the system, enter
showcage -d <cageID> of the new drive enclosure.
c. To verify that the new drives appear and the disk state is normal, enter showpd.
d. Attach the front bezel to the front of the new drive enclosure.
e. To verify the current state of the system, enter checkhealth -detail.
27. Exit and logout of the session.
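
A minimal sketch of the post-maintenance checks in step 26, assuming the new enclosure was admitted as
cage4 (the cage ID is illustrative and output is omitted):

cli% showcage
cli% showcage -d cage4
cli% showpd
cli% checkhealth -detail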



Parts catalog

Parts catalog for 20000 models


NOTE:
The information provided for this parts catalog is subject to change without notice.

Cable parts list


Table 11: AOC cables parts list

Part number Description

793446-001 SPS-CBL 12Gb Mini-SAS HD AOC 10m

793447-001 SPS-CBL 12Gb Mini-SAS HD AOC 25m

793448-001 SPS-CBL 12Gb Mini-SAS HD AOC 100m

Table 12: Copper cables parts list

Part number Description

717431-001 SPS-CBL 12Gb Mini-SAS HD 0.5m

717433-001 SPS-CBL 12Gb Mini-SAS HD 2m

Table 13: FC cables parts list

Part number Description

656428-001 SPS-CA 2m PREMIER FLEX FC OM4

656429-001 SPS-CA 5m PREMIER FLEX FC OM4

656430-001 SPS-CA 15m PREMIER FLEX FC OM4

656431-001 SPS-CA 30m PREMIER FLEX FC OM4

656432-001 SPS-CA 50m PREMIER FLEX FC OM4



SAS3 cables—passive copper; SAS3 cables—active optical

Controller node enclosure parts list


Table 14: Controller node enclosure (chassis) parts list

Part number Description

782416-001 SPS-4 Way Chassis L-Frame Assembly

782417-001 SPS-8 Way Chassis L-Frame Assembly

782418-001 SPS-Assy, Rail Kit, StoreServ 20000

782419-001 SPS-System LED Board; StoreServ 20000

Table 15: Controller node accessories parts list

Part number Description

782404-001 SPS-NODE Assembly V1; 12 core; w/o HBA; Mem (20800)

782404-002 SPS-NODE Assembly V2; 12 CORE; W/O HBA MEM (20800)

782405-001 SPS-NODE Assembly V1; 16 core w/o HBA; Mem (20840 and 20x50)

782405-002 SPS-NODE ASSY V2, 16 CORE W/O HBA, MEM

782406-001 SPS-Memory DIMM,16GB, DDR3, CC or DC (M)

782408-001 SPS-Memory DIMM,32GB, DDR3, CC or DC (M)

782407-001 SPS-Node Drive; 256GB; StoreServ 20800

782409-001 SPS-Node Drive; 512GB; StoreServ 20x40 and 20x50

811220-001 SPS-Memory DIMM,16GB, DDR3, CC or DC (S)

811221-001 SPS-Memory DIMM, 32GB, DDR3 (SMSG)


782410-001 SPS-Battery Backup Unit StoreServ 20000

782411-001 SPS-Fan Module; StoreServ 20000

660183-001 SPS-POWER SUPPLY 750W 1U HEPB

782421-001 SPS-Power Supply Extender; StoreServ 20000

683249-001 SPS-Node TOD Battery

Table 16: Adapters parts list

Part number Description

782412-001 SPS-Adapter; FC; 16Gb; 4 Port (LPE16004)

793444-001 SPS-SFP Transceiver; 16 GBIT; LC (E7Y10A)

782413-001 SPS-Adapter; SAS; 12GB; 4 Port (9300-4P)

782414-001 SPS-Adapter; CNA; 10GB; 2 Port (QTH8362)

782415-001 SPS-Adapter; FBO; 10GB; 2 Port (560SFP+)

657884-001 SPS-SFP TRANSCEIVER; LC; 10GBIT; CNA and Ethernet



1. Memory
2. Node drive
3. Time of day battery
4. SFP transceiver
5. Adapter
6. Node assembly
Figure 127: Controller node and internal components part identification

Drive enclosure parts list


Table 17: Drive enclosure parts list

Part number Description

781532-001 SPS Fan Assembly

781533-001 SPS I/O Module; SFF 2 Port

781867-001 SPS I/O Module; LFF 2 Port

511777-001 SPS-POWER SUPPLY; 460W

833037-001 SPS-Drv Enclsr SFF 2U SS20000 w/o PS IOM

833038-001 SPS-Drv Enclsr LFF 2U SS20000 w/o PS IOM



Table 18: SSD and HDD drives parts list

Part number Description

809585-001 SPS-DRV 480GB SSD SAS cMLC SFF XCSD

809586-001 SPS-DRV 480GB SSD SAS MLC SFF XCH

809587-001 SPS-DRV 920GB SSD SAS cMLC SFF XCH FIPS

809588-001 SPS-DRV 1.92TB SSD SAS cMLC SFF XCH FIPS

809589-001 SPS-DRV 3.84TB SSD SAS cMLC SFF XCSD

809590-001 SPS-DRV 1.92TB SSD SAS cMLC SFF XCSD

809591-001 SPS-DRV 300GB HDD SAS 15K RPM SFF XCH

809592-001 SPS-DRV 600GB HDD SAS 15K RPM SFF XCH

809593-001 SPS-DRV 600GB HDD SAS 15K SFF XCH FIPS

809594-001 SPS-DRV 600GB HDD SAS 10K RPM SFF XCH

809596-001 SPS-DRV 1.2TB HDD SAS 10K RPM SFF XCH

809597-001 SPS-DRV 1.8TB HDD SAS 10K RPM SFF XCH

809598-001 SPS-DRV 1.2TB HDD SAS 10K SFF XCH FIPS

809599-001 SPS-DRV 480GB SSD SAS cMLC LFF XCSD

809600-001 SPS-DRV 2TB HDD SAS 7.2K RPM LFF XCSG

809601-001 SPS-DRV 4TB HDD SAS 7.2K RPM LFF XCSG

809602-001 SPS-DRV 6TB HDD SAS 7.2K RPM LFF XCSG

809603-001 SPS-DRV 6TB HDD SAS 7.2K LFF XCSG FIPS

814666-001 SPS-DRV 2TB HDD SAS 7.2K SFF XCS

823119-001 SPS-DRV 4TB HDD SAS 7.2K RPM LFF XCH

823120-001 SPS-DRV 6TB HDD SAS 7.2K RPM LFF XCH

840463-001 SPS-DRV 600GB HDD 6G SAS 10K SFF XCSG

840464-001 SPS-DRV 1.2TB HDD 6G SAS 10K SFF XCSG

840465-001 SPS-DRV 1.8TB HDD 6G SAS 10K SFF XCSG


840466-001 SPS-DRV 1.92TB SSD 12G SAS cMLC XCSM FIPS

840467-001 SPS-DRV 1.92TB SSD 12G SAS cMLC XCSM

840468-001 SPS-DRV 3.84TB SSD 12G SAS cMLC XCSM FIPS

840469-001 SPS-DRV 3.84TB SSD 12G SAS cMLC XCSM

844274-001 SPS-DRV 2TB HDD SAS 7.2K LFF XCSG FIPS

844275-001 SPS-DRV 4TB HDD SAS 7.2K LFF XCSG FIPS

844276-001 SPS-DRV 400GB SSD SAS MLC SFF XCSM

844277-001 SPS-DRV 480GB SSD SAS cMLC SFF XCSM

844278-001 SPS-DRV 920GB SSD SAS MLC SFF XCSM FIPS

846591-001 SPS-DRV 8TB HDD SAS 7.2K RPM LFF XCSG

846592-001 SPS-DRV 8TB HDD SAS 7.2K LFF XCSG FIPS

863459-001 SPS-DRV 7.68TB SSD SAS SFF XCSM

867547-001 SPS-DRV 15.36TB SSD SAS SFF XCSM FIPS

869336-001 SPS-DRV 7.68TB SSD SAS SFF XCSM FIPS

1. Power supply
2. Drive enclosure
3. I/O module
4. Fan assembly
5. SSD drive
Figure 128: Drive enclosure components



Service processor parts list
Part number    Description                                    Customer self repair (CSR)

818724-001     SPS-SERVICE PROCESSOR; RPS; DL120 Gen 9        No

HPE 3PAR Service Processor

Parts catalog for 20000 R2 models


NOTE:
The information provided for this parts catalog is subject to change without notice.

Cable parts list


Table 19: AOC cables parts list

Part number Description

793446-001 SPS-CBL 12Gb Mini-SAS HD AOC 10m

793447-001 SPS-CBL 12Gb Mini-SAS HD AOC 25m

793448-001 SPS-CBL 12Gb Mini-SAS HD AOC 100m

Table 20: Copper cables parts list

Part number Description

717431-001 SPS-CBL 12Gb Mini-SAS HD 0.5m

717433-001 SPS-CBL 12Gb Mini-SAS HD 2m



Table 21: FC cables parts list

Part number Description

656428-001 SPS-CA 2m PREMIER FLEX FC OM4

656429-001 SPS-CA 5m PREMIER FLEX FC OM4

656430-001 SPS-CA 15m PREMIER FLEX FC OM4

656431-001 SPS-CA 30m PREMIER FLEX FC OM4

656432-001 SPS-CA 50m PREMIER FLEX FC OM4

SAS3 cables—passive copper; SAS3 cables—active optical

Controller node enclosure parts list


Table 22: Controller node enclosure (chassis) parts list

Part number Description

782417-001 SPS-8 Way Chassis L-Frame Assembly

782418-001 SPS-Assy, Rail Kit, StoreServ 20000

782419-001 SPS-System LED Board; StoreServ 20000

Table 23: Controller node accessories parts list

Part number Description

782405-002 SPS-NODE ASSY V2, 16 CORE W/O HBA, MEM

811220-001 SPS-Memory DIMM,16GB, DDR3, CC or DC (S)

782406-001 SPS-Memory DIMM,16GB, DDR3, CC or DC (M)

782408-001 SPS-Memory DIMM,32GB, DDR3, CC or DC (M)


811221-001 SPS-Memory DIMM, 32GB, DDR3 (SMSG)

782410-001 SPS-Battery Backup Unit StoreServ 20000

782411-001 SPS-Fan Module; StoreServ 20000

660183-001 SPS-POWER SUPPLY 750W 1U HEPB

782421-001 SPS-Power Supply Extender; StoreServ 20000

683249-001 SPS-Node TOD Battery

Part number Description

872569-001 SPS-Controller Node, 20 Core, No Mem

877700-001 SPS-BOOT DRV, SS, 480G, SATA, 2.5,MC5100

Table 24: Adapters parts list

Part number Description

782412-001 SPS-Adapter; FC; 16Gb; 4 Port (LPE16004)

793444-001 SPS-SFP TRANSCEIVER 16GBIT LC

782413-001 SPS-Adapter; SAS; 12GB; 4 Port

782414-001 SPS-Adapter; CNA; 10GB; 2 Port (QTH8362)

782415-001 SPS-Adapter; FBO; 10GB; 2 Port (560SFP+)

657884-001 SPS-SFP TRANSCEIVER 10GBIT LC CNA



1. Memory
2. Node drive
3. Time of day battery
4. SFP transceiver
5. Adapter
6. Node assembly
Figure 129: Controller node and internal components part identification

Drive enclosure parts list


Table 25: Drive enclosure parts list

Part number Description

781532-001 SPS Fan Assembly

781533-001 SPS I/O Module; SFF 2 Port

781867-001 SPS I/O Module; LFF 2 Port

511777-001 SPS-POWER SUPPLY; 460W

833037-001 SPS-Drv Enclsr SFF 2U SS20000 w/o PS IOM

833038-001 SPS-Drv Enclsr LFF 2U SS20000 w/o PS IOM



Table 26: SSD drives parts list

Part number Description

844276-002 SPS-DRV 400GB SSD SAS MLC SFF XCSM

840469-001 SPS-DRV 3.84TB SSD 12G SAS cMLC XCSM

863459-002 SPS-DRV 7.68TB SSD SAS SFF XCSM

867546-001 SPS-DRV 15.36TB SSD SAS SFF XCSM

840468-002 SPS-DRV 3.84TB SSD 12G SAScMLC FIPS XCSM

869336-001 SPS-DRV 7.68TB SSD SAS SFF XCSM FIPS

867547-001 SPS-DRV 15.36TB SSD SAS SFF XCSM FIPS

872800-001 SPS-DRV 400GB SSD +SW SFF SD

872801-001 SPS-DRV 3.84TB SSD +SW SFF SD

872802-001 SPS-DRV 7.68TB SSD +SW SFF SD

872803-001 SPS-DRV 3.84TB SDD +SW SFF SD FIPS

873102-001 SPS-DRV 1.92TB SSD +SW SFF StoreServ SD

1. Power supply
2. Drive enclosure
3. I/O module
4. Fan assembly
5. SSD drive
Figure 130: Drive enclosure components



Service processor parts list
Part number    Description                                    Customer self repair (CSR)

875660-001     SPS-SERVICE PROCESSOR; RPS; DL120 SP-5.0       No

HPE 3PAR Service Processor



Component LEDs
The storage system components have LEDs to indicate whether the hardware is functioning properly and
to help identify errors. The LEDs help diagnose basic hardware problems. The drive enclosures and
many of the components have blue LEDs for physically identifying and locating the components in the
rack.

Storage system status LEDs


Storage system status LEDs on the controller node enclosure (chassis)
The controller node enclosure (chassis) has system status LEDs located at the right corner of the front of
the chassis.
Figure 131: Storage system status LEDs (controller node enclosure front view)



Table 27: Storage system status LEDs (controller node enclosure front view)

System status LEDs (controller node enclosure front view)

LED Function Status Indicates

1 UID/Service Blue Solid Locate active

2 Status Green Solid Normal operation; no fault

3 Fault Amber Solid Fault

Controller node LEDs


Controller node LEDs (controller node enclosure rear view)

Figure 132: Controller node LEDs (controller node enclosure rear view)

Table 28: Controller node LEDs (controller node enclosure rear view)

Controller node LEDs (controller node enclosure rear view)

LED      Port/Function            Status                        Indicates

1        Fault                    Amber Solid                   Fault; node or component in faulted state

2        Status                   Green Solid                   Booting; not a cluster member

                                  Green Flashing (1 blink/sec)  Normal operation

3        UID/Service              Blue Solid                    Locate active and/or shutdown (halted); not a
                                                                cluster member; safe to remove

                                  Blue Flashing                 Locate active; do not remove component

4, 5, 6  10 Gb Ethernet (RCIP)    Green Solid                   Normal/connected; link up

                                  Green Flashing                Link down or not connected

                                  Amber Flashing                Connected at high speed

                                  Off                           Not connected; port failed or power not applied

7        1 Gb Ethernet MGMT       N/A
         (Management port)

8, 9     Link Up Speed            Green Solid                   10 GbE link

                                  Amber Solid                   100 Mb link

                                  Off                           No link established

         Activity                 Green Solid                   No link activity

                                  Green Flashing                Link activity

                                  Off                           No link established

10       Service (Console port)   N/A

4-port 12 Gb/s SAS adapter LEDs


4-port 12 Gb/s SAS adapter LEDs



Figure 133: 4-port 12 Gb/s SAS adapter LEDs

Table 29: 4-port 12 Gb/s SAS adapter LEDs

4-port 12 Gb/s SAS adapter LEDs

LED Function Status Indicates

1 Port Speed Amber Solid Not connected

Off Connected

2 Link Status Green Solid Normal/connected - link up

Off Not connected

3 Fault Amber Solid Fault

4 Status Green Solid Normal operation; no fault

5 UID/Service Blue Solid Locate active and/or safe to remove

Blue Flashing Locate active; do not remove component



4-port 16 Gb/s FC host adapter LEDs
4-port 16 Gb/s FC host adapter LEDs

Figure 134: 4-port 16 Gb/s FC host adapter LEDs

Table 30: 4-port 16 Gb/s FC host adapter LEDs

4-port 16 Gb/s FC host adapter LEDs

LED Function Status Indicates

1 Link/Status Green Solid Normal/connected - link up

Green Flashing Link down or not connected

2 Port/Speed Amber Flashing Connecting at high speed

Off Not connected - port failed or power not applied

3 Fault Amber Solid Fault

4 Status Green Solid Normal operation; no fault

5 UID/Service Blue Solid Locate active; safe to remove

Blue Flashing Locate active; do not remove component



2-port 10 Gb/s iSCSI host adapter LEDs
2-port 10 Gb/s iSCSI host adapter LEDs

1. Ethernet LED
2. Activity LED
3. Link LED
4. Fault LED
5. Status LED
6. UID/Service LED
Figure 135: 2-port 10 Gb/s iSCSI host adapter LEDs

Table 31: 2-port 10 Gb/s iSCSI host adapter LEDs

2-port 10 Gb/s iSCSI host adapter LEDs

Ethernet LED Activity LED Link LED Indicates

Off Off Off Power off

Green Solid Off Off Power on; no link

Green Solid Green Solid Green Solid Power on; 10 Gb/s link established; no activity

Green Solid Green Flashing Green Solid Power on; 10 Gb/s link established; receive/transmit activity

Green Solid Off Amber Solid Firmware fault



2-port 10 Gb/s iSCSI host adapter LEDs

LED Function Status Indicates

Fault Amber Solid Fault

Status Green Solid Normal operation; no fault

UID/Service Blue Solid Locate active; safe to remove

Blue Flashing Locate active; do not remove component

2-port 10 GbE NIC host adapter LEDs


2-port 10 GbE NIC host adapter LEDs

Figure 136: 2-port 10 GbE NIC host adapter LEDs

Table 32: 2-port 10 GbE NIC host adapter LEDs

2-port 10 GbE NIC host adapter LEDs

LED Function Status Indicates

1 Link/Status Green Solid Normal/connected - link up

Green Flashing Link down or not connected

2 Port/Speed Amber Flashing Connected at high speed


Off Not connected - port failed or power not applied

3 Fault Amber Solid Fault

4 Status Green Solid Normal operation; no fault

5 UID/Service Blue Solid Locate is active and/or safe to remove

Blue Flashing Locate active; do not remove component

Power supply unit LEDs (controller node enclosure)


PSU LEDs (controller node enclosure rear view)

Figure 137: PSU LEDs (controller node enclosure rear view)

Table 33: PSU LEDs (controller node enclosure rear view)

PSU LEDs (controller node enclosure rear view)

LED function Status Indicates

1 Status Green Solid Normal operation

Amber Solid PSU failed

2 UID/Service Blue Solid Locate active and/or safe to remove

Blue Flashing Locate active; do not remove component



Backup battery unit LEDs (controller node enclosure)
BBU LEDs (controller node enclosure front view)

Figure 138: BBU LEDs (controller node enclosure front view)

Table 34: BBU LEDs (controller node enclosure front view)

BBU LEDs (controller node enclosure front view)

LED Function Status Description

1 UID/Service Blue Solid Locate active and/or safe to remove

Blue Flashing Locate active; do not remove component

2 Status Green Solid Battery functioning

3 Fault Amber Solid Fault

4 Battery Green Solid Battery charging



Fan module LEDs (controller node enclosure)
Fan module LEDs (controller node enclosure front view)

Figure 139: Fan module LEDs (controller node enclosure front view)

Table 35: Fan module LEDs (controller node enclosure front view)

Fan module LEDs (controller node enclosure front view)

LED Function Status Indicates

1 UID/Service Blue Solid Locate active and/or safe to remove

Blue Blinking Locate active; do not remove component

2 Status Green Solid Normal operation; no fault

3 Fault Amber Solid Fault

Drive enclosure LEDs


SFF drive enclosure front view LEDs

Figure 140: SFF drive enclosure front view LEDs



Table 36: SFF drive enclosure front view LEDs

SFF drive enclosure front view LEDs

LED Function Status Indicates

1 Status Green Solid Normal operation; no faults detected

Amber Solid Critical fault detected

Amber Flashing Non-critical fault detected within the enclosure


Example: failed or removed fan module

2 UID/Service
TIP:
This UID push button activates or deactivates the Blue UID LED on the
rear and front of the drive enclosure.

Blue Solid Locate active

SFF drive enclosure rear view LEDs

Figure 141: SFF drive enclosure rear view LEDs

Table 37: SFF drive enclosure rear view LEDs

SFF drive enclosure rear view LEDs

LED function Display Indicates

Locate UID
TIP:
This UID push button activates or deactivates the Blue UID LED on the
rear and front of the drive enclosure.

Blue Solid Location activated

Status Green Solid Normal operation

Amber Solid Critical fault

Amber Flashing Non-critical fault

Drive LEDs (drive enclosure)


SFF drive LEDs (drive enclosure front view)



Figure 142: SFF drive LEDs (drive enclosure front view)

Table 38: SFF drive LEDs (drive enclosure front view)

SFF drive LEDs (drive enclosure front view)

LED   Function         Status          Indicates

1     UID/Service      Blue Solid      Locate active; safe to remove

                       Blue Flashing   Locate active; do not remove component

2     Status/Activity  Green Solid     Normal operation; drive in OK state; admitted by the
                                       HPE 3PAR OS

                       Green Flashing  Drive activity

                       Amber Solid     • Critical fault
                                       • Solid amber in conjunction with the blue indicates the
                                         drive being admitted or serviced

                       Amber Flashing  Noncritical fault

                       Off             No power



I/O module LEDs (drive enclosure)
I/O module LEDs (drive enclosure rear view)

Figure 143: I/O module LEDs (drive enclosure rear view)

Table 39: I/O module LEDs (drive enclosure rear view)

I/O module LEDs (drive enclosure rear view)

LED function Display Indicates

Health/Fault Green Solid Normal operation

Amber Solid Fault

UID/Service Blue Solid Location requested; safe to remove

Blue Flashing Location requested; do not remove


Indicates maintenance in progress; for example,
firmware updating



SAS port LEDs (drive enclosure rear view)

Figure 144: SAS port LEDs (drive enclosure rear view)

Table 40: SAS port LEDs (drive enclosure rear view)

SAS port LEDs (drive enclosure rear view)

LED Function Status Indicates

1 Activity Green Solid • With amber Fault LED off, link at high speed with no
activity.
• With solid amber Fault LED, link at low speed with no
activity.

Green Flashing • With amber Fault LED off, link at high speed with activity.
• With solid amber Fault LED, link at low speed with
activity.
• With flashing amber Fault LED, locate requested.

Off • With solid amber Fault LED, no link or no cable


connected.

2 Fault Amber Solid • With solid green Activity LED, link at low speed with no
activity.
• With flashing green Activity LED, link at low speed with
activity.
• With green Activity LED off, no link or no cable
connected.

Amber Flashing • With flashing green Activity LED, locate requested.

Off • With solid green Activity LED, link at high speed with no
activity.
• With flashing green Activity LED, link at high speed with
activity.



Fan module LEDs (drive enclosure)
Fan module LEDs (drive enclosure rear view)

Figure 145: Fan module LEDs (drive enclosure rear view)

Table 41: Fan module LEDs (drive enclosure rear view)

Fan LEDs (drive enclosure rear view)

LED function Status Indicates

Locate UID Blue Solid Locate active; safe to remove

Blue Flashing Locate active; do not remove component

Status Green Solid Normal operation

Amber Solid Fault

Off No power



Power supply unit LEDs (drive enclosure)
Power supply unit (PSU) LEDs (drive enclosure rear view)

Figure 146: PSU LEDs (drive enclosure rear view)

Table 42: PSU LEDs (drive enclosure rear view)

PSU LEDs (drive enclosure rear view)

LED function Display Description

Status Green Solid Normal operation

Amber Flashing Non-critical fault

Amber Solid Critical fault

Off No power



Physical service processor LEDs

Figure 147: Ethernet ports on the rear panel of the physical SP

Table 43: Ethernet ports on the rear panel of the physical SP

Ethernet ports on the rear panel of the physical SP

Port Description

1 Left port is the MGMT port (Eth0/Port 1)

2 Right port is the Service port (Eth1/Port 2/iLO)

Figure 148: LEDs on the rear panel of the physical SP



Table 44: LEDs on the rear panel of the physical SP

LEDs on the rear panel of the physical SP

LED/Port Function Status Indicates

1 UID/Service Blue Solid Activated

Blue Flashing SP managed remotely

Off Deactivated

2 NIC Link Green Solid Network link

Off No network link

3 NIC Activity Green Solid Link to network

Green Flashing Network activity

Off No network activity

4     Power Supply    Green Solid   Normal

                      Off           The physical SP has redundant power supplies (RPS) and the
                                    LEDs are the same on both. Off represents one or more of the
                                    following conditions:
                                    • Power unavailable
                                    • Power supply failure
                                    • Power supply in standby mode
                                    • Power supply error

Figure 149: LEDs on the front panel of the physical SP



Table 45: LEDs on the front panel of the physical SP

LEDs on the front panel of the physical SP

LED/Port Function Status Indicates

1     Power On/Standby       Green Solid     SP on
      button and SP power
                             Green Flashing  Performing power on sequence

                             Amber Solid     SP in standby, power still on

                             Off             Power cord not attached, no power supplies
                                             installed, or power failure

2 Health Green Solid SP on and health normal

Amber Flashing SP health degraded

Red Flashing SP health critical

Off SP power off

3 NIC Status Green Solid Link to network

Green Flashing Network activity

Off No network link/activity

4     UID/Service     Blue Solid      Active

                      Blue Flashing   Either remote management, firmware upgrade in progress,
                                      or iLO manual reboot sequence initiated



HPE 3PAR Service Processor
The HPE 3PAR Service Processor (SP) is a physical SP. The HPE 3PAR SP software is designed to
provide remote error detection and reporting and to support diagnostic and maintenance activities
involving the storage systems. The HPE 3PAR SP is composed of a Linux OS and the HPE 3PAR SP
software, and it exists as a single undivided entity.

Physical SP:
The physical SP is a hardware device mounted in the system rack. If the customer chooses a physical
SP, each storage system installed at the operating site includes a physical SP. The physical SP is installed
in the same rack as the controller nodes. A physical SP uses two physical network connections:
• The left, Port 1 (Eth0/Mgmt) requires a connection from the customer network to communicate with the
storage system.
• The right, Port 2 (Eth1/Service) is for maintenance purposes only and is not connected to the
customer network.

HPE 3PAR SP documentation:


For more information about the HPE 3PAR SP, see the HPE 3PAR Service Processor Software User
Guide.
The HPE 3PAR SP documents are available at the Hewlett Packard Enterprise Information Library
Storage website.

Training video about the HPE 3PAR SP 5.0 and HPE 3PAR
OS 3.3.1
A training video is available for the HPE 3PAR Service Processor (SP) 5.0 and the HPE 3PAR OS 3.3.1.

Course title:       HPE 3PAR Service Processor 5.0 and HPE 3PAR OS 3.3.1 Upgrades
Course ID:          Accelerating U Course ID: 01090119; The Learning Center Course ID: 01090120
Internal website:   hpe.sabacloud.com/Saba/Web_spf/HPE/common/ledetail/01090119

Network and firewall support access


Before performing the Service Processor (SP) connection setup, ensure that there are no customer
firewall restrictions to the existing HP servers and new HPE servers on port 443. Firewall and proxy
server configuration must be updated to allow outbound connections from the Service Processor to the
existing HP servers and new HPE servers.
For a list of HP and HPE server host names and IP addresses, see Firewall and proxy server
configuration on page 153.

Firewall and proxy server configuration


Firewall and proxy server configuration must be updated on the customer network to allow outbound
connections from the Service Processor to the existing HP servers and new HPE servers.
HP and HPE server host names and IP addresses:
• HPE Remote Support Connectivity Collector Servers:



◦ https://storage-support.glb.itcs.hpe.com (16.248.72.63)
◦ https://storage-support2.itcs.hpe.com (16.250.72.82)
• HPE Remote Support Connectivity Global Access Servers:
◦ https://c4t18808.itcs.hpe.com (16.249.3.18)
◦ https://c4t18809.itcs.hpe.com (16.249.3.14)
◦ https://c9t18806.itcs.hpe.com (16.251.3.82)
◦ https://c9t18807.itcs.hpe.com (16.251.4.224)
• HP Remote Support Connectivity Global Access Servers:
◦ https://g4t2481g.houston.hp.com (15.201.200.205)
◦ https://g4t2482g.houston.hp.com (15.201.200.206)
◦ https://g9t1615g.houston.hp.com (15.240.0.73)
◦ https://g9t1616g.houston.hp.com (15.240.0.74)
• HPE RDA Midway Servers:
◦ https://midway5v6.houston.hpe.com (2620:0:a13:100::105)
◦ https://midway6v6.houston.hpe.com (2620:0:a12:100::106)
◦ https://s54t0109g.sdc.ext.hpe.com (15.203.174.94)
◦ https://s54t0108g.sdc.ext.hpe.com (15.203.174.95)
◦ https://s54t0107g.sdc.ext.hpe.com (15.203.174.96)
◦ https://g4t8660g.houston.hpe.com (15.241.136.80)
◦ https://s79t0166g.sgp.ext.hpe.com (15.211.158.65)
◦ https://s79t0165g.sgp.ext.hpe.com (15.211.158.66)
◦ https://g9t6659g.houston.hpe.com (15.241.48.100)
• HPE StoreFront Remote Servers:
◦ https://sfrm-production-llb-austin1.itcs.hpe.com (16.252.64.51)
◦ https://sfrm-production-llb-houston9.itcs.hpe.com (16.250.64.99)
• For communication between the Service Processor and the HPE 3PAR StoreServ Storage system, the
customer network must allow access to the following ports on the storage system.
◦ Port 22 (SSH)
◦ Port 5781 (Event Monitor)
◦ Port 5783 (CLI)
• For communication between the browser and the Service Processor, the customer network must allow
access to port 8443 on the SP.
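
As a quick spot check of outbound HTTPS reachability from a host on the customer network, any of the
server names listed above can be tested with a generic tool such as curl. This is an illustrative sketch
only, not an HPE-mandated test:

curl -sv --connect-timeout 10 https://storage-support.glb.itcs.hpe.com >/dev/null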

Connection methods for the SP


Use one of the following methods to establish a connection to the HPE 3PAR Service Processor (SP).
• Web browser connection—Use a standard web browser and browse to the HPE 3PAR SP IP
address.
• Secure Shell (SSH) connection—Use a terminal emulator application to establish a Secure Shell
(SSH) session connection.
• Laptop connection—Connect the laptop to the physical SP with an Ethernet connection (LAN).



IMPORTANT:
If firewall permissive mode for the HPE 3PAR SP is disabled, you must add firewall rules to allow
access to port 8443 or add the hosts to the firewall. By default, permissive mode is enabled for the
firewall. To add rules using the HPE 3PAR SC interface (SP 5.x) or HPE 3PAR SPOCC interface
(SP 4.x), you must first enable permissive mode through the HPE 3PAR TUI interface (SP 5.x) or
HPE 3PAR SPMaint interface (SP 4.x). After adding the rules, you can then use the interface to
disable permissive mode again.

Connecting to the SP 5.x from a web browser


Procedure
1. Browse to the HPE 3PAR Service Processor (SP) IP address https://<sp_ip_address>:8443.
2. Enter the account credentials, and then click Login.
More Information
Accounts and credentials for service and upgrade on page 173

Connecting to the SP 4.x from a web browser


Procedure
1. Browse to the HPE 3PAR Service Processor (SP) 4.x IP address or hostname https://
<sp_ip_address>, and then press Enter.
2. Enter the account credentials, and then click Ok.
More Information
Accounts and credentials for service and upgrade on page 173

Connecting to the SP Linux console through an SSH


Procedure
1. Initiate a Secure Shell (SSH) session from a host, laptop, or other computer connected on the same
network, and then connect to the HPE 3PAR Service Processor (SP) IP address or hostname.
2. Log in to the HPE 3PAR SP software.
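
For example, from a Linux or macOS host, where admin is one of the SP accounts described later in this
guide and <sp_ip_address> is a placeholder:

ssh admin@<sp_ip_address>
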
More Information
Accounts and credentials for service and upgrade on page 173

Connecting to the physical SP from a laptop


Procedure
1. Connect an Ethernet cable between the physical SP Service port (Eth1) and a laptop Ethernet port.



Figure 150: Connecting an Ethernet cable to the physical SP Service port
2. Temporarily configure the LAN connection of the laptop as follows:
a. IP Address: 10.255.155.49
b. Subnet mask: 255.255.255.248
3. Log in to the HPE 3PAR SP software. In a browser window, enter: https://
10.255.155.54:8443/.
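
On a Linux laptop, for example, the temporary address from step 2 can be applied as follows. The interface
name eth0 is illustrative; the /29 prefix corresponds to the 255.255.255.248 subnet mask:

sudo ip addr add 10.255.155.49/29 dev eth0
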
More Information
Accounts and credentials for service and upgrade on page 173

Interfaces for the HPE 3PAR SP


Interfaces with HPE 3PAR SP 5.x:

IMPORTANT:
HPE 3PAR SP 5.x requires HPE 3PAR OS 3.3.1 and later versions.

• HPE 3PAR Service Console (SC) interface: The HPE 3PAR SC interface is accessed when you log
in to the HPE 3PAR SP. This interface collects data from the managed HPE 3PAR StoreServ Storage
system in predefined intervals as well as an on-demand basis. If configured, the data is sent to HPE
3PAR Remote Support. A company administrator, Hewlett Packard Enterprise Support, or an
authorized service provider can also perform service functions through the HPE 3PAR SC. The HPE
3PAR SC replaces the HPE 3PAR Service Processor Onsite Customer Care (SPOCC) interface and
the HPE 3PAR SC functionality is similar to HPE 3PAR SPOCC.
• HPE 3PAR Text-based User Interface (TUI): The HPE 3PAR TUI is a utility on the SP 5.x software,
and it enables limited configuration and management of the HPE 3PAR SP and access to the HPE
3PAR CLI for the attached storage system. The intent of the HPE 3PAR TUI is not to duplicate the
functionality of the HPE 3PAR SC GUI, but to allow a way to fix problems that may prevent you from
using the HPE 3PAR SC GUI. The HPE 3PAR TUI appears the first time you log in to the Linux
console through a terminal emulator using Secure Shell (SSH). Prior to the HPE 3PAR SP
initialization, you can log in to the HPE 3PAR TUI with the admin user name and no password. To
access the HPE 3PAR TUI after the HPE 3PAR SP has been initialized, log in to the console with the
admin, hpepartner, or hpesupport accounts and credentials set during the initialization.

Interfaces with HPE 3PAR SP 4.x:

IMPORTANT:
HPE 3PAR SP 4.x requires HPE 3PAR OS 3.2.2.

• HPE 3PAR Service Processor Onsite Customer Care (SPOCC): The HPE 3PAR SPOCC interface
is accessed when you log in to the HPE 3PAR SP and is a web-based graphical user interface (GUI)
that is available for support of the HPE 3PAR StoreServ Storage system and its HPE 3PAR SP. HPE



3PAR SPOCC is the web-based alternative to accessing most of the features and functionality that are
available through the HPE 3PAR SPMAINT.
• HPE 3PAR SPMAINT interface: The HPE 3PAR SPMAINT interface is for the support (configuration
and maintenance) of both the storage system and its HPE 3PAR SP. Use HPE 3PAR SPMAINT as a
backup method for accessing the HPE 3PAR SP. The HPE 3PAR SPOCC is the preferred access
method. An HPE 3PAR SPMAINT session can be started either from the menu option in HPE 3PAR
SPOCC, through a connection to the HPE 3PAR SP through a Secure Shell (SSH), or logging in to the
Linux console; however, only one HPE 3PAR SPMAINT session is allowed at a time.

CAUTION:
Many of the features and functions that are available through HPE 3PAR SPMAINT can
adversely affect a running system. To prevent potential damage to the system and irrecoverable
loss of data, do not attempt the procedures described in this manual until you have taken all
necessary safeguards.
• HPE 3PAR CPMAINT interface: The HPE 3PAR CPMAINT terminal user interface is the primary user
interface for the support of the HPE 3PAR Secure Service Agent as well as a management interface
for the HPE 3PAR Policy Server and Collector Server.

Accessing the SP 5.x SC interface


Procedure
1. From a web browser, enter the HPE 3PAR Service Processor (SP) 5.x IP address: https://<sp_ip_address>:8443.
2. Enter the account credentials, and then click Login to gain access to the HPE 3PAR Service Console
(SC) interface.
More Information
Connection methods for the SP on page 154

Accessing the SP 5.x TUI


Procedure
1. Connect to the HPE 3PAR Service Processor (SP) 5.x Linux console.
2. Log in to gain access to the HPE 3PAR Text-based User Interface (TUI).
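For example, one common way to reach the HPE 3PAR SP 5.x Linux console is an SSH session from a workstation that can reach the SP IP address (a sketch; the address is a placeholder, and the account used depends on whether the SP has been initialized, as described in Interfaces for the HPE 3PAR SP):

ssh admin@<sp_ip_address>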
More Information
Connection methods for the SP on page 154

Accessing the SP 4.x SPOCC interface


Procedure
1. Browse to the HPE 3PAR Service Processor (SP) 4.x IP address or hostname: https://<sp_ip_address>, and then press Enter.
2. Enter the account credentials, and then click Ok.
3. Log in to gain access to the HPE 3PAR Service Processor Onsite Customer Care (SPOCC) interface.
More Information
Connection methods for the SP on page 154



Accessing the SP 4.x SPMaint interface directly
Procedure
1. Connect to the HPE 3PAR Service Processor (SP) 4.x Linux console.
2. Log in to gain access to the HPE 3PAR SPMaint main menu, HPE 3PAR Service Processor Menu.
More Information
Connection methods for the SP on page 154

Accessing the CLI session from the SP 5.x SC interface


The HPE 3PAR Service Console (SC) interface of the HPE 3PAR Service Processor (SP) 5.x provides a
CLI session only for issuing noninteractive HPE 3PAR CLI commands.

Procedure
1. From a web browser, enter the HPE 3PAR Service Processor (SP) 5.x IP address: https://<sp_ip_address>:8443.
2. Enter the account credentials, and then click Login to gain access to the HPE 3PAR Service Console
(SC) interface.
3. On the HPE 3PAR SC main menu, select Systems.
4. On the Actions menu, select Start CLI session.
More Information
Connection methods for the SP on page 154

Accessing the interactive CLI interface from the SP 5.x TUI


The HPE 3PAR Text-based User Interface (TUI) of the HPE 3PAR Service Processor (SP) 5.x provides
an interactive CLI interface for issuing HPE 3PAR CLI commands.

Procedure
1. Connect to the HPE 3PAR SP 5.x Linux console.
2. Log in to gain access to the HPE 3PAR TUI.

IMPORTANT:
When logging in using the admin or hpepartner accounts, a customer-supplied user ID and
password must be obtained for the storage system. The hpesupport account requires a strong
password to gain access to the SP, but no user ID or password is needed to access the storage
system from the SP.
3. From the HPE 3PAR TUI main menu, enter 7 for Interactive CLI/Maintenance Mode.
4. To start an interactive CLI session, enter 2 (2 == Open interactive CLI).
More Information
Connection methods for the SP on page 154

Accessing the CLI session from the SP 4.x SPOCC interface


The HPE 3PAR Service Processor Onsite Customer Care (SPOCC) interface of the HPE 3PAR Service
Processor (SP) 4.x provides a CLI session only for issuing noninteractive HPE 3PAR CLI commands.



Procedure
1. Browse to the HPE 3PAR Service Processor (SP) 4.x IP address or hostname: https://<sp_ip_address>, and then press Enter.
2. Enter the account credentials, and then click Ok.
3. From the left side of the HPE 3PAR SPOCC home page, click Support.
4. From the Service Processor - Support page, under Service Processor, click SPMAINT on the Web
in the Action column.
5. From the 3PAR Service Processor Menu, enter 7 for Execute a CLI command, and then select the
system.
More Information
Connection methods for the SP on page 154

Accessing the interactive CLI interface from the SP 4.x SPMaint interface
The HPE 3PAR SPMaint interface of the HPE 3PAR Service Processor (SP) 4.x provides an HPE 3PAR
interactive CLI interface for issuing HPE 3PAR CLI commands.

Procedure
1. Connect to the HPE 3PAR SP 4.x Linux console.
2. Log in to gain access to the HPE 3PAR SPMaint interface.
3. From the HPE 3PAR SPMaint main menu, enter 7 for Interactive CLI for a StoreServ.
More Information
Connection methods for the SP on page 154

Check health action from the HPE 3PAR SP


The Check health action can be initiated from the HPE 3PAR Service Processor (SP) in the following
ways:
• With an HPE 3PAR interactive CLI interface, initiate Check health with HPE 3PAR CLI command
checkhealth -detail.
• With HPE 3PAR SP 5.0 and later, initiate Check health from the HPE 3PAR Service Console (SC)
interface.
• With HPE 3PAR SP 4.x, initiate Check health from the HPE 3PAR SPOCC or SPMAINT interface.

Checking the health of the storage system using CLI commands


The HPE 3PAR CLI checkhealth command checks and displays the status of storage system hardware
and software components. For example, the checkhealth command can check for unresolved system
alerts, display issues with hardware components, or display information about virtual volumes that are not
optimal.
By default the checkhealth command checks most storage system components, but you can also
check the status of specific components. For a complete list of storage system components analyzed by
the checkhealth command, issue the checkhealth -list command.

Procedure
• Issue the HPE 3PAR CLI checkhealth command without any specifier to check the health of all the
components that can be analyzed.
◦ The checkhealth command authority is Super, Service.
◦ The checkhealth command syntax is: checkhealth [<options> | <component>...].

The checkhealth command <options> include the following:
– The -list option lists all components that checkhealth can analyze.
– The -quiet option suppresses the display of the item currently being checked.
– The -detail option displays detailed information regarding the status of the storage system.
– The -detail node option checks the controller nodes for the presence of a touch file that
disables strong passwords.
– The -detail cabling option displays issues with cabling of drive enclosures.
– The -full option displays information about the status of the full system. This is a hidden
option and only appears in the CLI Hidden Help. This option is prohibited if the -lite option is
specified. Some of the additional components evaluated take longer to run than other
components.
◦ The checkhealth command <component> is the command specifier, which indicates the
component to check.

Examples:
To display both summary and detailed information about the hardware and software components,
issue checkhealth -detail:

cli% checkhealth -detail


Checking alert
Checking ao
Checking cabling
Checking cage
Checking cert
Checking dar
Checking date
Checking file
Checking fs
Checking host
Checking ld
Checking license
Checking network
Checking node
Checking pd
Checking pdch
Checking port
Checking qos
Checking rc
Checking snmp
Checking task
Checking vlun
Checking vv
Checking sp
Component -----------Summary Description----------- Qty
Alert New alerts 4
Date Date is not the same on all nodes 1
LD LDs not mapped to a volume 2
vlun Hosts not connected to a port 5
-------------------------------------------------------
4 total 12

When the -detail option is included, the following output might be displayed:



cli% checkhealth -detail
Component ----Identifier---- -----------Detailed Description-------
Alert     sw_port:1:3:1      Port 1:3:1 Degraded (Target Mode Port Went Offline)
Alert     sw_port:0:3:1      Port 0:3:1 Degraded (Target Mode Port Went Offline)
Alert     sw_sysmgr          Total available FC raw space has reached threshold of 800G (2G remaining out of 544G total)
Alert     sw_sysmgr          Total FC raw space usage at 307G (above 50% of total 544G)
Date      --                 Date is not the same on all nodes
LD        ld:name.usr.0      LD is not mapped to a volume
LD        ld:name.usr.1      LD is not mapped to a volume
vlun      host:group01       Host wwn:2000000087041F72 is not connected to a port
vlun      host:group02       Host wwn:2000000087041F71 is not connected to a port
vlun      host:group03       Host iscsi_name:2000000087041F71 is not connected to a port
vlun      host:group04       Host wwn:210100E08B24C750 is not connected to a port
vlun      host:Host_name     Host wwn:210000E08B000000 is not connected to a port
-------------------------------------------------------------------------------------
12 total

When the -detail node option is included, the following information might be included:

cli% checkhealth -detail node

Component -Identifier- ---------Detailed Description----------
Node      node0        Nodes with strong password disable file
--------------------------------------------------------------
1 total

If there are no faults or exception conditions, the checkhealth command indicates that the System
is healthy:

cli% checkhealth
Checking alert
Checking cabling

Checking vlun
Checking vv
System is healthy

With the <component> specifier, you can check the status of one or more specific components:

cli% checkhealth node pd


Checking node
Checking pd
The following components are healthy: node, pd

The -svc option provides a summary of service-related issues by default.


If you use the -detail option, both a summary and a detailed list of service issues are displayed.



The -svc -full option displays both the service-related information and the customer-related information.

IMPORTANT:
The following -svc example displays information intended only for service users.

cli% checkhealth -svc


Checking alert
Checking ao
Checking cabling
...
Checking vv
Checking sp
Component -----------Summary Description------------------- Qty
Alert New alerts 2
File Nodes with Dump or HBA core files 1
PD There is an imbalance of active pd ports 1
PD PDs that are degraded or failed 2
pdch LDs with chunklets on a remote disk 2
pdch LDs with connection path different than ownership 2
Port Missing SFPs 6

When the -svc -detail option is included, the following information is included:

NOTE:
If a controller node or drive enclosure (cage) is down, the detailed output can be long.



cli% checkhealth -svc -detail
Checking alert
Checking ao
Checking cabling
...
Checking vv
Checking sp
Component -----------Summary Description------------------- Qty
Alert New alerts 2
File Nodes with Dump or HBA core files 1
PD There is an imbalance of active pd ports 1
PD PDs that are degraded or failed 2
pdch LDs with chunklets on a remote disk 2
pdch LDs with connection path different than ownership 2
Port Missing SFPs 6

Component --------Identifier--------- --------Detailed Description---------------------
Alert     hw_cage_sled:3:8:3,sw_pd:91 Magazine 3:8:3, Physical Disk 91 Degraded (Prolonged Missing B Port)
Alert     hw_cage_sled:N/A,sw_pd:54   Magazine N/A, Physical Disk 54 Failed (Prolonged Missing, Missing A Port, Missing B Port)
File      node:0                      Dump or HBA core files found
PD        disk:54                     Detailed State: prolonged_missing
PD        disk:91                     Detailed State: prolonged_missing_B_port
PD        --                          There is an imbalance of active pd ports
pdch      LD:35                       Connection path is not the same as LD ownership
pdch      LD:54                       Connection path is not the same as LD ownership
pdch      ld:35                       LD has 1 remote chunklets
pdch      ld:54                       LD has 10 remote chunklets
Port      port:2:2:3                  Port or devices attached to port have experienced within the last day

To check for inconsistencies between the System Manager and kernel states and CRC errors for FC
and SAS ports, use the -full option:



cli% checkhealth -list -svc -full
Component   --------------------------Component Description------------------------------
alert       Displays any non-resolved alerts.
ao          Displays any Adaptive Optimization issues.
cabling     Displays any cabling errors.
cage        Displays non-optimal drive cage conditions.
cert        Displays Certificate issues.
consistency Displays inconsistencies between sysmgr and kernel**
dar         Displays Data Encryption issues.
date        Displays if nodes have different dates.
file        Displays non-optimal file system conditions.
fs          Displays Files Services health.
host        Checks for FC host ports that are not configured for virtual port support.
ld          Displays non-optimal LDs.
license     Displays license violations.
network     Displays ethernet issues.
node        Displays non-optimal node conditions.
pd          Displays PDs with non-optimal states or conditions.
pdch        Displays chunklets with non-optimal states.
port        Displays port connection issues.
portcrc     Checks for increasing port CRC errors.**
portpelcrc  Checks for increasing SAS port CRC errors.**
qos         Displays Quality of Service issues.
rc          Displays Remote Copy issues.
snmp        Displays issues with SNMP.
sp          Checks the status of connection between sp and nodes.
task        Displays failed tasks.
vlun        Displays inactive VLUNs and those which have not been reported by the host agent.
vv          Displays non-optimal VVs.

Checking the health from the SP 5.x SC interface


Procedure
1. Connect and log in to the HPE 3PAR Service Processor (SP) 5.x.
2. From the HPE 3PAR Service Console (SC) main menu, select Systems.
3. Select Actions > Check health.



Checking health from the SP 4.x SPOCC interface
IMPORTANT:
Ensure that browser pop-ups are allowed.

Procedure
1. Connect and log in to the HPE 3PAR Service Processor (SP) 4.x.
2. From the HPE 3PAR Service Processor Onsite Customer Care (SPOCC) interface main menu, click
Support in the left navigation pane.
3. From the Service Processor - Support page, under StoreServs, click Health Check in the Action
column.
4. A pop-up window appears showing a status message while the health check runs.

NOTE:
When running the Health Check using Internet Explorer, the screen might remain blank while
information is gathered. This process could take a few minutes before displaying results. Wait for
the process to complete and do not attempt to cancel or close the browser.
5. When the health check process completes, it creates a report and displays it in a new browser window.
Click either Details or View Summary to review the report.
6. Resolve any issues, and then close the report window when you are done.

Checking health from the SP 4.x SPMaint interface


Procedure
1. Connect to the HPE 3PAR Service Processor (SP) 4.x either by initiating an SSH session or logging in
to the Linux console opened from the VMware vSphere Client.
2. Log in to gain access to the HPE 3PAR SPMaint interface.
3. From the HPE 3PAR SPMaint main menu, HPE 3PAR Service Processor Menu, enter 4 for
StoreServ Product Maintenance, and then press Enter.
4. From the StoreServ Product Maintenance Menu, enter 4 for Perform StoreServ Health Check, and
then press Enter.
5. Enter the number corresponding to the storage system (HPE 3PAR StoreServ) you want to run the
health check on and press Enter.
6. Enter y to retrieve and transfer the check health data, and then press Enter.



Are you sure you want to retrieve and transfer
the check health data for StoreServ <StoreServ_Name>?
(y or n)
y
...

16:44.51 Checking health of alert


16:44.52 Checking health of cabling
16:44.52 Checking health of cage
16:44.53 Checking health of date
16:44.54 Checking health of file
16:44.55 Checking health of ld
16:44.56 Checking health of license
16:44.56 Checking health of network
16:44.57 Checking health of node
16:44.59 Checking health of pd
16:45.05 Checking health of pdch
16:45.06 Checking health of port
16:45.14 Checking health of rc
16:45.14 Checking health of snmp
16:45.15 Checking health of sp
16:45.15 Checking health of task
16:45.16 Checking health of vlun
16:45.16 Checking health of vv

7. After the health check completes gathering the data, the HPE 3PAR SP displays a list of files to view.

4.4.2 Show latest health check status from StoreServ

Available files

1 ==> /sp/prod/data/files/1300338/status/110420.101029.all
2 ==> /sp/prod/data/files/1300338/status/110420.101029.det
3 ==> /sp/prod/data/files/1300338/status/110420.101029.err
4 ==> /sp/prod/data/files/1300338/status/110420.101029.sum

0 ==> Abort Operation

Please select a file to display

8. To view the available files, enter the corresponding number, and then press Enter to continue.
9. Select the number corresponding to the data file with the .all extension and press Enter. After the
file is reviewed, press Enter to continue, and then select option 0 to exit health check.

NOTE:
The HPE 3PAR SPMaint interface uses the more command to view files. To move to the next
page, press the spacebar. After viewing the contents of the file, press Enter to exit, and then
select 0 (Abort Operation) to return to the previous menu. After you return to the previous menu,
the report is discarded. To view the health status again, run the health check again.



Maintenance mode action from the SP
From the HPE 3PAR Service Processor (SP), the storage system can be set to Maintenance Mode to
prevent support information and local notifications of alerts related to planned maintenance from being
sent to Hewlett Packard Enterprise.
The Maintenance Mode action can be set in the following ways:
• With HPE 3PAR SP 5.x: Set Maintenance Mode from the HPE 3PAR Service Console (SC). In the
HPE 3PAR SC interface, the state of Maintenance Mode is indicated as either Enabled or Disabled
on the System page under the Overview section.
• With HPE 3PAR SP 4.x: Set Maintenance Mode from the HPE 3PAR SPMaint interface.

Setting maintenance mode from the SP 5.x SC interface


Procedure
1. Connect and log in to the HPE 3PAR Service Processor (SP) 5.x.
2. From the HPE 3PAR Service Console (SC) main menu, select Systems.
3. Select Actions > Set maintenance mode.

Setting maintenance mode from the SP 4.x interactive CLI interface


With HPE 3PAR Service Processor (SP) 4.x, a prompt to set Maintenance Mode automatically occurs
when starting an interactive CLI session from SPMaint.

Procedure
1. Connect and log in to the HPE 3PAR SP 4.x.
2. From the HPE 3PAR SPMaint main menu, enter 7 for Interactive CLI for a StoreServ.
3. To select your storage system, enter 1.
4. If you are prompted to turn on Maintenance Mode, enter y. The prompt message states Do you
wish to turn ON maintenance mode for StoreServ ###### before performing any
CLI operations? (y or n).

Setting or modifying maintenance mode from the SP 4.x SPMaint interface


Procedure
1. Connect and log in to the HPE 3PAR Service Processor (SP) 4.x.
2. From the HPE 3PAR Service Processor Onsite Customer Care (SPOCC) main menu, select
SPMAINT in the left navigation pane.
3. From the HPE 3PAR SPMaint main menu under Service Processor - SP Maintenance, select
StoreServ Configuration Management.
4. Under Service Processor - StoreServ Configuration, select Modify under Action.
5. Under Service Processor - StoreServ Info, select either On or Off for the Maintenance Mode
setting.

Locate action from the SP


From the HPE 3PAR Service Processor (SP), the Locate action can be initiated in the following ways to
light a specific LED for the specified components:



• With HPE 3PAR SP 5.x: From the HPE 3PAR Service Console (SC) interface
• With HPE 3PAR SP 4.x: From the HPE 3PAR Service Processor Onsite Customer Care (SPOCC)
interface, the HPE 3PAR SPMaint interface, and interactive CLI interface using HPE 3PAR CLI
commands
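For the HPE 3PAR CLI option with SP 4.x, the locate commands can be issued from an interactive CLI session. A minimal sketch follows; the cage and node identifiers are examples, and the available options vary by HPE 3PAR OS release, so verify the exact syntax with the CLI help for your release:

cli% locatecage cage0     # flash the LEDs on drive enclosure (cage) cage0
cli% locatenode 1         # flash the LEDs on controller node 1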

Running the locate action from the SP SC interface


Procedure
1. Connect and log in to the HPE 3PAR Service Processor (SP) 5.0.
2. From the HPE 3PAR Service Console (SC) main menu, select the system or a component.
3. Run the Locate action in the following ways.
• On the Actions menu, select Locate.
• From the Views menu, select Schematic, and then click the locate LED icon on the component in the schematic diagram.

Running the locate action from the SP 4.x SPOCC interface


IMPORTANT:
Ensure that browser pop-ups are allowed.

Procedure
1. Connect and log in to the HPE 3PAR Service Processor (SP) 4.x.
2. From the HPE 3PAR Service Processor Onsite Customer Care (SPOCC) main menu, select Support
in the left navigation pane.
3. From the Service Processor - Support page, under StoreServs, select Locate Cage in the Action
column.
When you select Locate Cage for an identified storage system, the HPE 3PAR SP queries the storage
system to determine available drive enclosures (cages), and then prompts you to select the cage to
locate. After you select the cage, the LEDs on the cage flash amber for 30 seconds.

Alert notifications from the SP


Alert notifications by email from the HPE 3PAR Service Processor (SP):
During the HPE 3PAR SP setup, the Send email notification of system alerts option was either
enabled or disabled. If enabled, the HPE 3PAR SP sends email notifications of alerts to the system
support contact. The email might include a Corrective Action for the failure and the spare part number
for the failed part. The spare part number is used to order a replacement part.

Alert notifications in the HPE 3PAR SP 5.0 HPE 3PAR Service Console (SC):
In the Detail pane of the HPE 3PAR SC interface, an alert notification will display in the Notifications
box.



Figure 151: Detail pane of the HPE 3PAR SC

Views (1)—The Views menu identifies the currently selected view. Most List panes have several views
that you can select by clicking the down arrow.
Actions (2)—The Actions menu allows you to perform actions on one or more resources that you have
selected in the list pane. If you do not have permission to perform an action, the action is not displayed in
the menu. Also, some actions might not be displayed due to system configurations, user roles, or
properties of the selected resource.
Notifications box (3)—The notifications box is displayed when an alert or task has affected the resource.
Resource detail (4)—Information for the selected view is displayed in the resource detail area.

Browser warnings
When connecting to the HPE 3PAR Service Processor (SP) IP address, you might receive a warning from
your browser that there is a problem with the security certificate for the website, that the connection is not
private, or the connection is not secure. To continue to the site, clear the warning.
More Information
Clear Internet Explorer browser warning on page 169
Clear Google Chrome browser warning on page 170
Clear Mozilla Firefox browser warning on page 171

Clear Internet Explorer browser warning


Procedure
• Click Continue to this website (not recommended).



Clear Google Chrome browser warning
Procedure
1. Click the Advanced link.

2. Click Proceed to <sp_ip_address> (unsafe).



Clear Mozilla Firefox browser warning
Procedure
1. Click Advanced.

2. Click Add Exception....



3. (Optional) To remove the warning for this site in the future, select Permanently store this exception
in the Add Security Exception dialog.

4. In the Add Security Exception dialog, click Confirm Security Exception.



Accounts and credentials for service and upgrade
IMPORTANT:
There are separate accounts for access to the storage system and the service processor. The
account options and password types vary based on the software versions installed on the
storage system and on the SP.

• Beginning with HPE 3PAR SP 5.0 for the service processor, time-based or encryption-based
passwords are implemented for the support accounts used with the SP.
• Beginning with HPE 3PAR OS 3.2.2 for the storage system, time-based or encryption-based
passwords are implemented for the support accounts used with the storage system.
Customers increasingly require local control over all passwords on their storage system, assurance that
passwords do not remain static for long periods of time, and an audit trail of access to the credentials.
The time-based (default) and encryption-based password options give the customer control over the
passwords that approved service providers use to access the system.

NOTE:
Dark sites will likely want to change to the encryption-based mode. Hewlett Packard Enterprise can
provide information to Dark Site customers under a Confidential Disclosure Agreement (CDA)
regarding what is contained in the ciphertext export. It is necessary to communicate with such
customers in advance to work out an acceptable process to choose the correct mode, and to help
them craft a process that suits their needs.
For example, the Department of Defense sites are likely to switch to the encryption-based
(ciphertext) mode and possibly export the ciphertext to HPE Support in advance of the need for
support assistance so that this security measure is already in place before an escalation event
occurs. Either in advance or during the escalation, the ciphertext is supplied to HPE Support,
decrypted, and the credential communicated to the Hewlett Packard Enterprise approved service
provider. After the escalation concludes, the customer regenerates a new ciphertext to make the
prior password unusable.

Password considerations based on HPE 3PAR Service Processor (SP) software upgrades

IMPORTANT:
Prior to upgrading the HPE 3PAR SP software for all sites (including dark sites), discuss and plan
the upgrade with the customer.

• For an upgrade to HPE 3PAR SP 5.0:


With a successful upgrade to HPE 3PAR SP 5.0 from an older release, the root and hpesupport
accounts have the time-based password (TOTP) option enabled by default. These support accounts
use either a time-based password (default) or an encryption-based password that is set by the
customer using the HPE 3PAR Service Console (SC). For the encryption-based password, the
customer obtains the ciphertext from the HPE 3PAR Service Console (SC) and provides it to HPE
Support for deciphering. HPE Support uses HPE StoreFront Remote to obtain the time-based and
encryption-based passwords.
• With HPE 3PAR SP 4.x:
The spdood support account uses a static password set by Hewlett Packard Enterprise.



Password considerations based on HPE 3PAR OS software upgrades
• For an upgrade to HPE 3PAR OS 3.3.1:
With a successful upgrade to HPE 3PAR OS 3.3.1 from HPE 3PAR OS 3.2.2, the password option set
with HPE 3PAR OS 3.2.2 for the root and console accounts is retained with the upgrade.

Training video about the passwords for support accounts


A Strong Password Service training video is available that explains the passwords (strong passwords)
associated with the support accounts used with the HPE 3PAR Service Processor (SP) and HPE 3PAR
StoreServ Storage system for service events:

Course title: Strong Password Service
Course IDs: Accelerating U Course ID 01058793; The Learning Center Course ID 01072401
Internal website: hpe.sabacloud.com/Saba/Web_spf/HPE/common/ledetail/01058793

HPE 3PAR Service Processor accounts for service and upgrade
For access to the HPE 3PAR Service Processor (SP) interfaces, there are the following account options
for the administrator or for HPE Support personnel and authorized service providers. Based on the
account, there are differences in the access it provides to the HPE 3PAR SP interfaces, the type of
password options, and the permissions associated with the account.

Interfaces for HPE 3PAR SP 5.x:


• HPE 3PAR Service Console (SC)
• HPE 3PAR Text-based User Interface (TUI)

Interfaces for HPE 3PAR SP 4.x:


• HPE 3PAR Service Processor Onsite Customer Care (SPOCC)
• HPE 3PAR SPMaint utility (SPMaint)
• HPE 3PAR CPMaint utility (CPMaint)

Accounts with HPE 3PAR SP 5.x for service and upgrade

admin
• Password options: Static password; the administrator sets/changes the password.
• Interface access: SC through a web browser; TUI through a physical or virtual console; TUI through SSH.
• Permissions: Only the administrator; default account; can create new local SP users.

hpepartner
• Password options: Static password; the administrator sets/changes the password.
• Interface access: SC through a web browser; TUI through a physical or virtual console; TUI through SSH.
• Permissions: Only authorized service providers; service and diagnostic functions.

hpesupport
• Password options: Time-based or encryption-based password. The administrator sets the password option through the SC or TUI. For the encryption-based password, the administrator regenerates the ciphertext (blob) through the SC or TUI; the authorized service provider obtains the ciphertext (blob) from the administrator and retrieves the password through StoreFront Remote.
• Interface access: SC through a web browser; TUI and Linux shell through a physical or virtual console; TUI and Linux shell through SSH.
• Permissions: Only HPE Support; service and diagnostic functions.

root
• Password options: Time-based or encryption-based password. The administrator sets the password option through the SC or TUI. For the encryption-based password, the administrator regenerates the ciphertext (blob) through the SC or TUI; the authorized service provider obtains the ciphertext (blob) from the administrator and retrieves the password through StoreFront Remote.
• Interface access: SP Linux shell.
• Permissions: Only HPE Support and authorized service providers; service and diagnostic functions.



Accounts with HPE 3PAR SP 4.0 for service and upgrade

3parcust
• Password options: Static password; the administrator sets/changes the password.
• Interface access: SPOCC through a web browser; SPMaint through a physical or virtual console; SPMaint through SSH.
• Permissions: Only the administrator; default account; can create new local SP users.

cpmaint
• Password options: Static password; the administrator sets/changes the password.
• Interface access: SP Linux shell; CPMaint.
• Permissions: Only the administrator; administrative Secure Service Agent (SSA) functions.

spvar
• Password options: Static password; the administrator sets/changes the password.
• Interface access: SPOCC through a web browser; SPMaint through a physical or virtual console; SPMaint through SSH.
• Permissions: Only HPE personnel and authorized service providers; service and diagnostic functions.

spdood
• Password options: Static password; HPE sets/changes the password per release.
• Interface access: SPOCC through a web browser; SPMaint through a physical or virtual console; SPMaint through SSH.
• Permissions: Only HPE Support; service and diagnostic functions.

root
• Password options: Static password; HPE sets/changes the password per release.
• Interface access: SP Linux shell.
• Permissions: Only HPE Support and authorized service providers; service and diagnostic functions.

More Information
Interfaces for the HPE 3PAR SP on page 156

Storage system accounts for service and upgrade


For access to the HPE 3PAR StoreServ Storage system interfaces, there are the following account
options for the administrator or for Hewlett Packard Enterprise support personnel and authorized service
providers. Based on the account, there are differences in the access it provides to the storage system
interfaces, the type of password options, and the permissions associated with the account.



Storage system accounts with HPE 3PAR OS 3.3.1 and 3.2.2 for service and upgrade

3paradm
• Password options: Static password; the administrator sets/changes the password through the Administrator console.
• Interface access: Main console; Administrator console; Interactive CLI.
• Permissions: Only the administrator; creating new CLI user accounts; service and diagnostic functions; Super rights.

3paredit
• Password options: Randomized password.
• Interface access: Main console; Administrator console.
• Permissions: Communication between the Service Processor (SP) and the storage system; Edit rights.

3parsvc
• Password options: Randomized password that is known only to the SP code. If the SP is not being used for monitoring and is used only for maintenance activities, you can change the password. Changing the password prevents the SP from performing monitoring operations. When a maintenance activity takes place, the administrator must set the password to a defined value. After the maintenance, the SP changes the password to a randomized value again. Once the maintenance is complete, the password can again be changed.
• Interface access: Interactive CLI.
• Permissions: Used by the SP to monitor the storage system; Super rights.

3parservice
• Password options: Randomized password.
• Interface access: Interactive CLI.
• Permissions: Only HPE Support and authorized service providers; service and diagnostic functions; Service rights.

console
• Password options: Time-based or encryption-based password. The administrator sets the password option through CLI commands. For the encryption-based password, the administrator retrieves/exports the ciphertext (blob) through CLI commands.
• Interface access: Node's serial console.
• Permissions: Only HPE Support and authorized service providers; service and diagnostic functions.

root
• Password options: Time-based or encryption-based password. The administrator sets the password option through CLI commands. For the encryption-based password, the administrator retrieves/exports the ciphertext (blob) through CLI commands.
• Interface access: Linux shell on the storage system.
• Permissions: Only HPE Support and authorized service providers; service and diagnostic functions.

Time-based password (strong password)


With the time-based password option, the HPE Support person or authorized service provider can acquire
the account password when needed without the involvement of the administrator. The time-based
password is generated using strong cryptographic algorithms and large key sizes, is valid for 60 minutes,
and is automatically regenerated at the start of each hour.
During the service, upgrade, or diagnostic procedure, the account password remains active until logging
out of the account, even if 60 minutes is exceeded. During the procedure, if it is necessary to log out of
the account and then log back in to the account (for example, closing the session or rebooting a controller
node), do either of the following:
• If 60 minutes has not been exceeded, use the same password.
• If 60 minutes has been exceeded, obtain a newly generated password.

Encryption-based password (strong password)


With the encryption-based (ciphertext) password option, the administrator initiates the generation or
regeneration of account ciphertext that is copied and provided to the authorized service provider. The
authorized service provider decrypts the ciphertext to obtain the account password that they will use for
the service, upgrade, or diagnostic procedure. The password does not expire. After the service, upgrade,
or diagnostic procedure is completed, the administrator may regenerate a new ciphertext to make the
current password invalid. Only the administrator initiates the generation or regeneration of the account
ciphertext for a new password.
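As one illustration of how the administrator manages these password options on the storage system, the HPE 3PAR CLI provides a recovery-authentication control command. The subcommand names shown here are an assumption from memory, not a definitive reference; verify them against the HPE 3PAR CLI reference for the installed OS release before use:

cli% controlrecoveryauth status                # show the current password method for the root and console accounts (assumed subcommand)
cli% controlrecoveryauth setmethod ciphertext  # switch from time-based (totp) to encryption-based (assumed subcommand)
cli% controlrecoveryauth rollcred              # regenerate the ciphertext, invalidating the prior password (assumed subcommand)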



Retrieving HPE 3PAR SP account passwords from HPE StoreFront Remote
Use HPE StoreFront Remote to retrieve the passwords for the HPE 3PAR Service Processor (SP) root or
hpesupport accounts or the HPE 3PAR StoreServ Storage system root or console accounts.

Prerequisites
To use HPE StoreFront Remote, you must have access to HPE StoreFront Remote with privileges for
secure password access.

Procedure
1. Log in to HPE StoreFront Remote at www.storefrontremote.com/ using your HPE email address and password.
2. From the HPE StoreFront Remote main menu, select Request Secure Password.

3. In the Request Secure Password window, select the Password Type: Time-Based Password or
Encryption-Based Password.
4. If Encryption-Based is selected, paste the ciphertext (blob) provided by the customer in the
Ciphertext box. The text must include the begin and end tokens.
5. For the Product Specifier drop-down list, select either Service Processor or StoreServ.
6. For the Target User ID drop-down list, select the associated account.
7. For a Service Processor account, enter the SP values in the SP ID field and the SP Model field.
Make sure the SP model selected/entered matches the SP model displayed on the SP TUI or SP
HPE 3PAR Service Console.
If you receive a warning message that states the device hasn't called home, ignore the warning and
proceed.
8. In the CRM Case Number field, enter the support case number associated with the support activity.
If no case number is associated with the support activity, a warning message appears after you click
Submit Password Request, which you can override by clicking Yes, Override CRM Validation and
Force This Request.

9. In the Beneficiary Email field, enter an email address for the approved service provider who will be
using the password.
10. Click Submit Password Request.
11. Click Show Password, and then copy the password to the clipboard.
12. Click Close.

Communication of the password via phone


The passwords are 32 characters long and include a combination of uppercase and lowercase letters,
numbers, and special symbols. If the only way to communicate a password is by phone, use the
telephony alphabet in the following table.
Examples of passwords:
• For the password HelloMyNameIsFred!, the telephony protocol is to read the following:
Big Hello echo lima lima oscar Big Mike yankee Big November alpha mike echo
Big India sierra Big Foxtrot romeo echo delta exclamation mark
• For a more typical password such as [^ak&,l@W6?ZqZ)+Jv%+Zc]IL@q1`-HE, the telephony is:
Open-bracket carat kilo ampersand comma lima at-sign Big Whiskey six
question-mark Big Zulu Quebec Big Zulu close-paren plus-sign Big Juliette
victor percent-sign plus-sign Big Zulu Charlie close-bracket Big India Big
Lima at-sign Quebec one tic-mark dash Big Hotel Big Echo

Table 46: HPE Support password telephony alphabet

Character Telephony Written Pronunciation

A Big Alpha ALPHA BIG AL-FAH

a Alpha alpha AL-FAH

B Big Bravo BRAVO BIG BRAH-VOH

b Bravo bravo BRAH-VOH

C Big Charlie CHARLIE BIG CHAR-LEE

c Charlie charlie CHAR-LEE

D Big Delta DELTA BIG DELL-TAH

d Delta delta DELL-TAH

E Big Echo ECHO BIG ECK-OH

e Echo echo ECK-OH

F Big Foxtrot FOXTROT BIG FOKS-TROT

f Foxtrot foxtrot FOKS-TROT

G Big Golf GOLF BIG GOLF


g Golf golf GOLF

H Big Hotel HOTEL BIG HOH-TEL

h Hotel hotel HOH-TEL

I Big India INDIA BIG IN-DEE-AH

i India india IN-DEE-AH

J Big Juliette JULIETTE BIG JEW-LEE-ETT

j Juliette juliette JEW-LEE-ETT

K Big Kilo KILO BIG KEY-LOH

k Kilo kilo KEY-LOH

L Big Lima LIMA BIG LEE-MAH

l Lima lima LEE-MAH

M Big Mike MIKE BIG MIKE

m Mike mike MIKE

N Big November NOVEMBER BIG NO-VEM-BER

n November november NO-VEM-BER

O Big Oscar OSCAR BIG OSS-CAH

o Oscar oscar OSS-CAH

P Big Papa PAPA BIG PAH-PAH

p Papa papa PAH-PAH

Q Big Quebec QUEBEC BIG KEH-BECK

q Quebec quebec KEH-BECK

R Big Romeo ROMEO BIG ROW-ME-OH

r Romeo romeo ROW-ME-OH

S Big Sierra SIERRA BIG SEE-AIR-RAH

s Sierra sierra SEE-AIR-RAH


T Big Tango TANGO BIG TANG-GO

t Tango tango TANG-GO

U Big Uniform UNIFORM BIG YOU-NEE-FORM

u Uniform uniform YOU-NEE-FORM

V Big Victor VICTOR BIG VIK-TAH

v Victor victor VIK-TAH

W Big Whiskey WHISKEY BIG WISS-KEY

w Whiskey whiskey WISS-KEY

X Big X-Ray XRAY BIG ECKS-RAY

x X-ray xray ECKS-RAY

Y Big Yankee YANKEE BIG YANG-KEE

y Yankee yankee YANG-KEE

Z Big Zulu ZULU BIG ZOO-LOO

z Zulu zulu ZOO-LOO

` tic mark ` TICK-MARK

~ tilde ~ TEEL-DUH

1 One 1 WUN

! Exclamation ! ECKS-KLA-MAY-SHUN

2 Two 2 TOO

@ At Sign @ AT-SINE

3 Three 3 THREE

# Pound-sign # POWND-SINE

4 Four 4 FORE

$ Dollar Sign $ DOLL-ARE-SINE

5 Five 5 FIVE


% Percent-sign % PER-SENT-SINE

6 Six 6 SICKS

^ Carat ^ CARE-AT

7 Seven 7 SEV-EN

& Ampersand & AMP-ER-SAND

8 Eight 8 AYT

* Asterisk * ASS-TER-ISK

9 Nine 9 NINE

( Open Paren ( OPEN-PAIR-EN

0 Zero 0 ZEE-ROH

) Close Paren ) CLOS-PAIR-EN

- Dash - DASH

_ Underline _ UN-DER-LINE

= Equal-sign = EE-KWAL-SINE

+ Plus-sign + PLUS-SINE

[ Open Bracket [ OPEN-BRAK-ET

{ Open Curly Brace { OPEN-CUR-LEE-BRAYS

] Close Bracket ] CLOS-BRAK-ET

} Close Curly Brace } CLOS-CUR-LEE-BRAYS

\ Back Slash \ BAK-SLASH

| Vertical Bar | VER-TEE-CAL-BAR

; Semi-colon ; SEM-EE-COL-UN

: Colon : COL-UN

‘ Quote ‘ KWOT


“ Double quote “ DUB-EL-KWOT

, Comma , COM-AH

< Less Than Sign < LES-THAN-SINE

. Period/dot . PEER-EE-OD / DOT

> Greater Than Sign > GRATE-ER-THAN-SINE

/ Forward Slash / FORE-WORD-SLASH

? Question Mark ? KWES-TYUN-MARK

Troubleshooting issues with passwords


Issue: A customer cannot recall a service or support password, or the tpdtcl is broken and a 3PAR CLI command can't be executed.
Solution: You can display the method, current hour value, and encryption ciphertext values for both root and console on the serial console of the storage system. To display these items, connect to the serial console, and at the login prompt, enter exportcreds for the userid.

Issue: The time of day on the storage system is more than one hour out of sync with real time.
Solution: If the storage system clocks are 60 or more minutes out of sync with real time, the password generated at Hewlett Packard Enterprise might be unusable or have a reduced lifetime. If the passwords are unusable, use the exportcreds user to export the current time of day. Provide that value, along with the user's correct time of day, to HPE Support and request to have it escalated to Level 4, where tools exist to generate a password for the correct time. Alternatively, the customer can correct the time on the storage system or SP, or they can switch the method to ciphertext, and then export the ciphertext for decryption at Hewlett Packard Enterprise. In either event, make it a priority to fix the time sync problem.

Issue: I do not have access to HPE StoreFront Remote (SFRM) or the Strong Password Generation tool.
Solution: If your job description requires you to have access, you can request access. Only users whose job functions require access are eligible, including:
• Tech Support engineers who have responsibility for supporting HPE 3PAR devices in the field
• Tech Support personnel who provide backline support to CEs in the field
• Certain internal software partners
• QA and internal HPE 3PAR engineering users

Issue: The generation tool requires a CRM case number.
Solution: This CRM case number is an audit requirement. When you submit your request, your CRM case number is validated to ensure that it corresponds to an open case. If not, you are prompted to send your request anyway, overriding the validation. Requests without an open case are recorded in an elevated security log. Be sure that an open case describes an escalation at a customer site that requires use of the generator.

Issue: The generation tool requires a Beneficiary email.
Solution: This beneficiary email is an audit requirement. This field identifies the actual user of the password being generated, as opposed to yourself if you are using the tool on behalf of another (for example, over the phone for a remote on-site engineer). It is expected that you have verified the identities of individuals and that they are authorized to receive passwords. If you are using the password yourself, the email field is pre-populated with your email.

Issue: I cannot create a password for 3paradm in the SFRM tool.
Solution: The storage system only implements these passwords for the root and console users. 3paradm is an account controlled by the customer.

Issue: A customer requests the account password.
Solution: Do not share account passwords with customers. These passwords are reserved for Hewlett Packard Enterprise Support employees only. When in doubt, escalate.
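Related to the time-of-day issue above, a quick way to confirm whether the node clocks agree is to run the date checks from an interactive HPE 3PAR CLI session (a sketch; output is omitted here):

cli% showdate                    # date and time reported by each controller node
cli% checkhealth -detail date    # reports nodes whose dates differ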



Connecting the power and data cables
This section describes how to connect the power and data cables. The figures illustrate how the power
cords are connected in an HPE 3PAR rack to provide redundancy and fault tolerance. For third-party
racks, apply similar cabling methods.

Power cables
Use the following applicable diagrams to connect the power cables.

Figure 152: 3PAR StoreServ 20450 Storage Single-Phase Power Configuration



Figure 153: 3PAR StoreServ 20800/20850 Storage Single-Phase Power Configuration



Figure 154: 3PAR StoreServ Storage Expansion Rack Single-Phase Power Configuration



Figure 155: 3PAR StoreServ 20450 Storage Three-Phase Power Configuration



Figure 156: 3PAR StoreServ 20800/20850 Storage Three-Phase Power Configuration



Figure 157: 3PAR StoreServ Storage Expansion Rack Three-Phase Power Configuration

NOTE:
The international expansion rack three-phase power configuration has two PDUs.

Data cables
This section describes the configuration options available for connecting storage drive enclosures to an
HPE 3PAR StoreServ 20000 Storage system:
• Direct Connect Cable Configuration
• Daisy-Chained Cable Configuration

NOTE:
Converting from one configuration option to the other is not supported.

Direct Connect Cable Configuration


Direct connect cabling is the default option that provides maximum throughput, but with fewer drives and
enclosures on each path (each SAS port and cable). It supports connectivity for up to 48 drive enclosures
for a system with the maximum eight controller nodes (four node pairs).
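Put another way, each controller node pair supports up to 12 directly connected drive enclosures, so the maximum configuration works out to 4 node pairs × 12 enclosures per pair = 48 drive enclosures.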



Figure 158: Direct Connect Cable Connections

The drive enclosures connected to node pair 0/1 would be labeled in the rack as “Node 0/1 E0” through
“Node 0/1 E11.”
Use the procedure that follows to connect data cables between controller nodes and drive enclosure I/O
modules. The tables under steps 1 and 2 list the cable connections in the recommended order of slot and
port use for controller nodes 0 and 1 with SAS cards located in slots 0, 1, and 2 of each node. Direct
Connect Cable Connections shows the red and green connections from a node pair to the first four
drive enclosures. For close-up views to help identify the ports specified for use by those procedures on
both the controller nodes and the drive enclosures, see Controller Node Slot and Port Details.
Each node pair supports up to 12 drive enclosures. Repeat the same cabling scheme on additional node
pairs as needed.

NOTE:
If a SAS card does not exist in a particular slot (slot 0, for example), proceed down the list to the
next available slot and port.

Procedure
1. Connect node‐0 (or the even numbered node in a node pair) to the DP‐1 port on the red color-
coded I/O module (module 0) of the drive enclosures in the following sequence for slot and port usage,
for up to 12 drive enclosures:

Order of Use Controller Node Port Red Node Loop to Drive Enclosure

1. Slot 2, Port 1 Enclosure 0, I/O Module Slot 0, DP-1

2. Slot 1, Port 1 Enclosure 1, I/O Module Slot 0, DP-1

3. Slot 0, Port 1 Enclosure 2, I/O Module Slot 0, DP-1

4. Slot 2, Port 3 Enclosure 3, I/O Module Slot 0, DP-1

5. Slot 1, Port 3 Enclosure 4, I/O Module Slot 0, DP-1

6. Slot 0, Port 3 Enclosure 5, I/O Module Slot 0, DP-1

7. Slot 2, Port 2 Enclosure 6, I/O Module Slot 0, DP-1

8. Slot 1, Port 2 Enclosure 7, I/O Module Slot 0, DP-1

9. Slot 0, Port 2 Enclosure 8, I/O Module Slot 0, DP-1

10. Slot 2, Port 4 Enclosure 9, I/O Module Slot 0, DP-1

11. Slot 1, Port 4 Enclosure 10, I/O Module Slot 0, DP-1

12. Slot 0, Port 4 Enclosure 11, I/O Module Slot 0, DP-1

2. Connect node‐1 (or the odd numbered node in a node pair) to the DP‐1 port on the green color-
coded I/O module (module 1) of the drive enclosures in the following sequence for slot and port usage,
for up to 12 drive enclosures:

Order of Use Controller Node Port Green Node Loop to Drive Enclosure

1. Slot 2, Port 1 Enclosure 0, I/O Module Slot 1, DP-1

2. Slot 1, Port 1 Enclosure 1, I/O Module Slot 1, DP-1

3. Slot 0, Port 1 Enclosure 2, I/O Module Slot 1, DP-1

4. Slot 2, Port 3 Enclosure 3, I/O Module Slot 1, DP-1

5. Slot 1, Port 3 Enclosure 4, I/O Module Slot 1, DP-1

6. Slot 0, Port 3 Enclosure 5, I/O Module Slot 1, DP-1

7. Slot 2, Port 2 Enclosure 6, I/O Module Slot 1, DP-1

8. Slot 1, Port 2 Enclosure 7, I/O Module Slot 1, DP-1

9. Slot 0, Port 2 Enclosure 8, I/O Module Slot 1, DP-1

10. Slot 2, Port 4 Enclosure 9, I/O Module Slot 1, DP-1

11. Slot 1, Port 4 Enclosure 10, I/O Module Slot 1, DP-1

12. Slot 0, Port 4 Enclosure 11, I/O Module Slot 1, DP-1

3. You can connect drive enclosures to other node pairs as follows:


a. For a 4‐node system, repeat the same cabling scheme to connect up to 12 more drive enclosures
(for a total of 24) to controller nodes 2 and 3. Use step 1 to connect the node‐2 ports to the DP‐1
port on the red color-coded I/O module (module 0) on each of those additional drive enclosures.
Use step 2 to connect the node‐3 ports to the DP‐1 port on the green color-coded I/O module
(module 1) of each of those additional drive enclosures. These drive enclosures would be identified
in the rack as Node 2/3 E0 through Node 2/3 E11.
b. For a 6‐node system, repeat the same cabling scheme to connect up to 12 more drive enclosures
(for a total of 36) to controller nodes 4 and 5. Use step 1 to connect the node‐4 ports to the DP‐1
port on the red color-coded I/O module (module 0) on each of those additional drive enclosures.
Use step 2 to connect the node‐5 ports to the DP‐1 port on the green color-coded I/O module
(module 1) of each of those additional drive enclosures. These drive enclosures would be identified
in the rack as Node 4/5 E0 through Node 4/5 E11.
c. For an 8‐node system, repeat the same cabling scheme to connect up to 12 more drive
enclosures (a total of 48) to controller nodes 6 and 7. Use step 1 to connect the node‐6 ports to
the DP‐1 port on the red color-coded I/O module (module 0) on each of those additional drive
enclosures. Use step 2 to connect the node‐7 ports to the DP‐1 port on the green color-coded
I/O module (module 1) of each of those additional drive enclosures. These drive enclosures would
be identified in the rack as Node 6/7 E0 through Node 6/7 E11.

Daisy-Chained Cable Configuration


The daisy-chained configuration is an alternative that provides maximum scalability. When connecting a
node pair to the drive enclosures, each port on the node drive adapters connects to two daisy-chained
drive enclosures. Daisy-chained drive enclosures must be vertically adjacent because the daisy-chain
cable is 0.5 meters in length.

NOTE:
Only two drive enclosures are supported on each daisy-chained loop.



Figure 159: Daisy-Chained Cable Connections

Use the following guidelines to connect the data cables between the controller node ports and drive
enclosure I/O modules.

Procedure
1. Connect the drive enclosure pair to the red color-coded (even numbered) node of the controller node
pair:
a. Connect the port on the controller node to the lower drive enclosure, red color-coded I/O module 0
(IOM-0), DP-1, as illustrated by line 1a in Daisy-Chained Cable Connections.
b. Connect DP-2 of that same I/O module (the red color-coded one) of the lower drive enclosure to
the red color-coded I/O module 0, DP-1 of the upper drive enclosure in the pair, as illustrated by
line 1b.
2. Connect the drive enclosure pair to the same slot and port number on the green color-coded (odd
numbered) node of the controller node pair:
a. Connect the port on the controller node to the upper drive enclosure, green color-coded I/O module
1 (IOM-1), DP‐1, as illustrated by line 2a in Daisy-Chained Cable Connections.
b. Connect DP-2 of that same I/O module (the green color-coded one) of the upper drive enclosure to
the green color-coded I/O module 1, DP‐1 of the lower drive enclosure in the pair, as illustrated by
line 2b.



Order of Controller Node Port Use for Daisy-Chained Cable Connections
The port on each controller node used in the example in Daisy-Chained Cable Connections is Slot 2,
Port 1. That is the first port recommended for use on each node. For connecting additional drive
enclosure pairs to the SAS HBA ports on the controller node pairs, follow the order as listed in the table
that follows, always using the same slot and port on even and odd nodes for each drive pair (for a
symmetric cabling configuration). If a SAS card does not exist in a particular slot (slot 0, for example),
proceed down the list to the next available slot and port.

Order of Use Controller Node Port

1. Slot 2, Port 1

2. Slot 1, Port 1

3. Slot 0, Port 1

4. Slot 2, Port 3

5. Slot 1, Port 3

6. Slot 0, Port 3

7. Slot 2, Port 2

8. Slot 1, Port 2

9. Slot 0, Port 2

10. Slot 2, Port 4

11. Slot 1, Port 4

12. Slot 0, Port 4

For a close-up view to help identify those slot and port locations, see Controller Node Slot and Port
Details.

Data Cable Slot and Port Details


This section provides close-up views of the slots and ports used for both the Daisy-Chained Cable
Configuration and the Direct Connect Cable Configuration options.
Controller Node Slot and Port Details shows the SAS HBA slots (S0, S1, S2) and ports (DP1–DP4)
located on the controller nodes. The numbers (1–12) to the right of each port indicate the recommended
order of usage for both daisy-chained and direct connect configurations.

Figure 160: Controller Node Slot and Port Details

Drive Enclosure Cable Ports provides a close-up view of ports DP‐1 and DP-2 on each of the two I/O
modules of a drive enclosure. The data cable procedures for the daisy-chained and direct connect
configurations specify how to use those ports on the red color-coded I/O module (module 0) and the
green color-coded I/O module (module 1) for your configuration.

Figure 161: Drive Enclosure Cable Ports



Controller node rescue

Node-to-Node Rescue processes


IMPORTANT:
When all other nodes are active and healthy in the storage system, the Node-to-Node Rescue
process must be run on a node in the following circumstances:
• Installation of a replacement node
• Installation of additional node pairs to upgrade the storage system
• Replacement of one or more failed node drives within a node

Each controller node has a built-in node-rescue network that connects the nodes in the system together
in a cluster through the node-chassis. This connection in a cluster allows for a rescue to occur between
an active node in the cluster and the replacement or new node added to the storage system. This rescue
is called a Node-to-Node Rescue and is used in place of needing to connect the service processor (SP)
for the rescue.
Node-to-Node Rescue is performed over fixed physical Ethernet connections through the backplane.
There are two backplane Ethernet connections per node and consequently there are two rescuers per
node.
For example, with a 4-node storage system, a node in an immediately adjacent slot must rescue the new
replacement node; a node installed in a slot diagonally across cannot rescue the new node. In the
following diagram, node 0 or node 3 can rescue node 2, but node 1 cannot rescue node 2.

Backplane Ethernet connections for a StoreServ 20000


(4 node system backplane)

(4 node)
0 ---- 1
| |
2 ---- 3
With an 8-node storage system, a node installed in a slot immediately above or below a node can rescue
it. A node in the lowest numbered node pair (0/1) and highest numbered node pair (6/7) can also be
rescued by its partner node. For example, in the following diagram, node 1 or node 2 can rescue node 0.
Only node 3 and node 7 can rescue node 5.

Backplane Ethernet connections for a StoreServ 20000


(8 node system backplane)

(8 node)
0 ---- 1
| |
2 3
| |
4 5
| |
6 ---- 7
Based on the circumstance for needing a Node-to-Node Rescue, there are two process options:



• Node-to-Node Rescue initiated automatically on page 199
• Node-to-Node Rescue initiated by CLI command on page 199
More Information
Replacing a controller node on page 11

Node-to-Node Rescue initiated automatically


An Automatic Node-to-Node Rescue process occurs for a controller node (node) in these circumstances:
• Installation of additional node pairs to upgrade the storage system
• Replacement of one or more failed node drives within a node
The Automatic Node-to-Node Rescue process is initiated after the node is installed, and then powered
on.

NOTE:
In rare instances, a new node drive is not recognized as being blank, which prevents the start of the
Automatic Node-to-Node Rescue process. If this occurs, manually initiate the Node-to-Node Rescue
by issuing the CLI startnoderescue -node <nodeID> command, where <nodeID> is the new
or replacement node.

Node-to-Node Rescue initiated by CLI command


A Manual Node-to-Node Rescue process is initiated by a CLI command and is needed for a controller
node (node) in these circumstances:
• Installation of a replacement node
• Automatic Node-to-Node Rescue process does not initiate automatically
When a node has failed and the node drives are still healthy, the node drives from the failed node are
moved to the replacement node. Since the node drives are not blank, the Manual Node-to-Node Rescue
process is necessary to initiate the rescue that joins the node to the cluster.

Initiate the Manual Node-to-Node Rescue process using the HPE 3PAR CLI command:
Before powering on the replacement controller node (node), issue the CLI startnoderescue -node
<nodeID> command, where <nodeID> is the replacement node.
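For example, a hypothetical invocation assuming the replacement node is node 3:

cli% startnoderescue -node 3

The command starts a node-rescue task, which can then be located and monitored with the showtask
commands described below.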

Power on the node to begin the Manual Node-to-Node Rescue process:


When power is supplied to the replacement node, the node begins to boot and the Node-to-Node Rescue
begins. This boot process might take approximately 10 to 20 minutes. When the Node-to-Node Rescue
and boot are completed, the node becomes part of the cluster.

Monitor the progress of the Manual Node-to-Node Rescue process:


• Locate the <taskID> of the current and active Node-to-Node Rescue process by issuing the CLI
showtask command.
• Monitor the progress and results by issuing the CLI showtask -d <taskID> command.
For example:



cli% showtask -d
Id Type Name Status Phase Step ----StartTime---
----FinishTime---- Priority User
4 node_rescue node_0_rescue done --- --- 2012-04-10 13:42:37 PDT
2012-04-10 13:47:22 PDT n/a sys:3parsys
Detailed status:
2012-04-10 13:42:37 PDT Created task.
2012-04-10 13:42:37 PDT Updated Running node rescue for node 0 as
1:8915
2012-04-10 13:42:44 PDT Updated Using IP 169.254.136.255
2012-04-10 13:42:44 PDT Updated Informing system manager to not
autoreset node 0.
2012-04-10 13:42:44 PDT Updated Resetting node 0.
2012-04-10 13:42:53 PDT Updated Attempting to contact node 0 via
NEMOE.
2012-04-10 13:42:53 PDT Updated Setting boot parameters.
2012-04-10 13:44:08 PDT Updated Waiting for node 0 to boot the node
rescue kernel.
2012-04-10 13:44:54 PDT Updated Kernel on node 0 has started.
Waiting for node to retrieve install details.
2012-04-10 13:45:14 PDT Updated Node 32768 has retrieved the install
details. Waiting for file sync to begin.
2012-04-10 13:45:36 PDT Updated File sync has begun. Estimated time
to complete this step is 5 minutes on a lightly loaded system.
2012-04-10 13:47:22 PDT Updated Remote node has completed file sync,
and will reboot.
2012-04-10 13:47:22 PDT Updated Notified NEMOE of node 0 that node-
rescue is done.
2012-04-10 13:47:22 PDT Updated Node 0 rescue complete.
2012-04-10 13:47:22 PDT Completed scheduled task.

• To monitor the progress with more details, connect a serial cable to the Service port on the node that
is being rescued.

SP-to-Node rescue
SP-to-Node rescue, also known as “all nodes down rescue,” is performed by HPE Support personnel and
only under the guidance of Level 3 support.
Perform the SP-to-Node rescue procedure if all nodes in the HPE 3PAR system are down and must be
rebuilt. For individual node replacement or node-drive rebuilding, use the "node-to-node" rescue
procedure on the StoreServ array.
The SP-to-Node rescue procedure involves the following:
• Nodes are rescued one at a time using either the public Ethernet connection (eth0) or, for rescues
performed from a physical SP, a private network. For a private network, connect the Ethernet
port on the node being rescued to the second Ethernet port (eth1) on the physical SP.

NOTE: Systems with encrypted node drives require extra steps to recover and restore. Each
node with encrypted node drives must be rescued twice to allow encryption to be restored. HPE
3PAR 8000, 9000, and 20000 StoreServ systems support encrypted node drives.
• Only the selected 3PAR OS version will be restored. No additional patches will be installed as part of
the rescue process. Install these patches after a successful rescue.
• While the rescue is being performed, the services that are required to perform the SP-to-Node rescue
are started and the firewall is opened accordingly.
• After all the nodes have been rescued, the SP must be “de-configured” to restore the firewall to its
previous state and to terminate the services that were initiated for the rescue.



SP-to-Node rescue flowchart

Accessing the DL120 SP Console Using a Laptop


The ProLiant DL120 Service Processor does not have a physical serial port. Eth1/Port2 on the
DL120 system is configured in shared mode to provide access to both iLO and Service Port
functionality. The ports are assigned as follows:
• iLO port: 10.255.155.52 (static private network address)

NOTE:
The shared iLO port may also be used to perform an SP-to-Node Rescue. For more information,
see SP-to-Node rescue on page 200.
• Service port: 10.255.155.54 (same static private IP address that is available for other platform
models)
Use the following procedure to access the iLO from a laptop for the DL120.

Procedure
1. Connect a laptop to the DL120.



a. Configure any available Ethernet port on the laptop with the following private network address (an example command follows this step):
• IP address: 10.255.155.49
• Subnet mask: 255.255.255.248
b. Connect the laptop to port 2 of the DL120 through a switch.

NOTE:
A straight or crossover Ethernet cable may be used; however, Hewlett Packard Enterprise
recommends using a small private switch between the DL120 and the laptop to ensure that the
laptop does not lose its network connection during the build process. When the DL120 resets,
the NIC port resets and drops the link. This connection loss can result in the failure of the
software load process.
Any personal switch with four to eight ports is supported, such as the HPE 1405-5G Switch
(J97982A), which is available as a non-catalog item from HPE SmartBuy.
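For step 1a, a minimal sketch of assigning the static address from an elevated command prompt on a
Windows laptop (the interface name "Ethernet" is an assumption; substitute the name of the port you
are using):

netsh interface ipv4 set address name="Ethernet" static 10.255.155.49 255.255.255.248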
2. Log in to the iLO port using SSH.

NOTE:
The iLO port is also accessible using HTTPS. To log in to iLO using HTTPS, open an Internet
Explorer web browser and type https://10.255.155.52 into the address bar to launch iLO
in the browser.

a. Open a console terminal program (such as PuTTY) and connect to 10.255.155.52 using SSH on
port 22.
b. Supply iLO credentials (Administrator/PASSWORD) at the login prompt to open the iLO command
console prompt </>hpiLO->.
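As an alternative to a PuTTY session, a minimal sketch assuming the laptop has an OpenSSH
command-line client available (included in current Windows 10 and later builds):

ssh Administrator@10.255.155.52

Enter the iLO password when prompted to reach the </>hpiLO-> prompt.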



NOTE:
The iLO credentials can be found on the pull-out tab on the DL120. The pull-out tab shows
(1) the serial number and product number, (2) the iLO user name, DNS name, and password,
and (3) a QR code.



Figure 162: Logging into the iLO virtual serial port using new private IP address
10.255.155.52
3. Log in to the SP through the iLO serial connectivity option.
a. At the iLO prompt, enter the vsp command to invoke the SP login prompt using the virtual serial
port connectivity.
b. At the SP login prompt, enter the SP administrator credentials to log in to the SP.

Figure 163: Accessing SP console from the iLO prompt

SP-to-Node rescue with SP 4.x software


Prerequisites
• HPE Support personnel must perform the SP-to-Node rescue.
• Connect the HPE 3PAR StoreServ Storage system to the SP.
• Connect the maintenance PC to the SP.



For a DL120 SP, since it does not have a serial connection, follow the instructions in Accessing the
DL120 SP Console Using a Laptop on page 201.

SP-to-Node rescue with SP 4.x software for a non-encrypted system

Procedure
1. Log in to the SP console as spvar, if you are not already logged in.
2. Select option 4 StoreServ Product Maintenance.
3. Select option 11 Node Rescue.
4. Follow the SP-to-Node rescue instructions on the dialog script that is presented.
5. After the final node has rebooted and joined the cluster, log in to any node console as the console
user. Select option 11 Finish SP-to-Node rescue procedure.
6. Allow about 5 minutes for the cluster to synchronize and networking to restore.
Use the showsys and shownet CLI commands to verify that the cluster is running.
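For example (output omitted; confirm that both commands return system and node network information
rather than errors):

cli% showsys
cli% shownet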
7. On the SP console, follow the prompts to complete and de-configure the SP-to-Node rescue
procedure.

SP-to-Node rescue with SP 4.x software for an encrypted system

Procedure
1. Log in to the SP console as spvar, if you are not already logged in.
2. Select option 4 StoreServ Product Maintenance.
3. Select option 11 Node Rescue.
4. Follow the SP-to-Node rescue instructions on the dialog script that is presented, but do not continue
after the last node has been rescued. DO NOT press Enter on the SP console.
5. Log in to any node console as root.
6. To determine the cluster master, enter the command clwait.
7. Log in as root on a non-master node console and issue the command controlpd revertnode
<X:X>, where X:X is the node:drive to be wiped out.
Each node drive must be wiped out to clear the node drives and allow the cluster to form.

For example, on a system with two node drives, to wipe out both drives in node 3, use the command
controlpd revertnode 3:0 immediately followed by the command controlpd revertnode
3:1.
8. Allow the node to reboot automatically after about 30 seconds.
9. Use CTRL-WHACK to return the node to the Whack> prompt.
10. To rescue the node a second time, use the SP-to-Node rescue procedure.
11. Repeat the node-drive wipe and second rescue on each remaining non-master node.
12. After all non-master nodes have been rescued the second time, wipe out the drives on the master
node.
13. The master node is automatically rescued using node-to-node rescue.
14. After the final node has rebooted and joined the cluster, wait about 5 minutes and then log in to the
console of the current master node and select 11. Finish SP-to-Node rescue procedure.
15. Allow about 5 minutes for the cluster to synchronize and networking to restore.
Use the showsys and shownet CLI commands to verify that the cluster is running.
16. On the SP console, follow the prompts to complete and de-configure the SP-to-Node rescue
procedure.



SP-to-Node rescue with SP 5.x software
Prerequisites
• HPE Support personnel must perform the SP-to-Node rescue.
• Connect the HPE 3PAR StoreServ Storage system to the SP.
• Connect the maintenance PC to the SP.
For a DL120 SP, since it does not have a serial connection, follow the instructions in Accessing the
DL120 SP Console Using a Laptop on page 201.

SP-to-Node rescue with SP 5.x software for a non-encrypted system

Procedure
1. Log in to the SP console as hpesupport, if you are not already logged in.
2. Enter the CLI command sp2node.
3. Follow the SP-to-Node rescue instructions on the dialog script that is presented.
4. After the final node has rebooted and joined the cluster, wait about 5 minutes and then log in to any
node console as the console user and select 11. Finish SP-to-Node rescue procedure.
5. Allow about 5 minutes for the cluster to synchronize and networking to restore. Use the showsys and
shownet CLI commands to verify that the cluster is running.
6. On the SP console, follow the prompts to complete and de-configure the SP-to-Node rescue
procedure.

SP-to-Node rescue with SP 5.x software for an encrypted system

Procedure
1. Log in to the SP console as hpesupport, if you are not already logged in.
2. Enter the CLI command sp2node.
3. Follow the SP-to-Node rescue instructions on the dialog script that is presented, but do not continue
after the last node has been rescued. DO NOT press Enter on the SP console.
4. Log in to any node console as root.
5. To determine the cluster master, enter the command clwait.
6. Log in as root on a non-master node console and issue the command controlpd revertnode
<X:X>, where X:X is the node:drive to be wiped out.
Each node drive must be wiped out to clear the node drives and allow the cluster to form.

For example, on a system with two node drives, to wipe out both drives in node 3, use the command
controlpd revertnode 3:0 immediately followed by the command controlpd revertnode
3:1.
7. Allow the node to reboot automatically after about 30 seconds.
8. Use CTRL-WHACK to return the node to the Whack> prompt.
9. To rescue the node a second time, use the SP-to-Node rescue procedure.
10. Repeat the node-drive wipe and second rescue on each remaining non-master node.
11. After all non-master nodes have been rescued the second time, wipe out the drives on the master
node.
12. The master node is automatically rescued using node-to-node rescue.
13. After the final node has rebooted and joined the cluster, wait about 5 minutes and then log in to the
console of the current master node and select 11. Finish SP-to-Node rescue procedure.
14. Allow about 5 minutes for the cluster to synchronize and networking to restore.



Use the showsys and shownet CLI commands to verify that the cluster is running.
15. On the SP console, follow the prompts to complete and de-configure the SP-to-Node rescue
procedure.



Deinstalling the storage system and restoring the storage system to factory defaults
WARNING:
De-installation destroys all user data. There is no recovery. This de-installation procedure is not the
procedure for regular relocation services (when the user data must be saved).

WARNING:
Following de-installation of strong-password-enabled HPE 3PAR StoreServ storage systems, the
console user account password will revert to its static password.

NOTE:
If you are planning to reinstall and use the existing license, record the existing license before
performing the uninstall by using the showlicense and showlicense -raw commands.

Only use this de-installation procedure in the following conditions:


• To wipe all data and formatting from the drives to prevent access to the old data.
• To save time when correcting an initial installation that was performed incorrectly and that has no
customer data on it.

NOTE:
In this case, complete only steps 1 to 11 of Uninstalling the system on page 209, and then
begin again with the system installation procedures to reinstall the storage system.

Before uninstalling a system:


• Verify with a system administrator that the system is prepared for shutdown.
• Complete the System Inventory on page 208.

System Inventory
To complete the system inventory, record the following information for each system to be uninstalled (a consolidated command sketch follows this list):
• Customer name
• Site information
• System serial numbers. Issue the showinventory CLI command.
• Software currently running on the system. Issue the following CLI commands to obtain the listed
information:
◦ HPE 3PAR Operating System version — showversion -b -a
◦ Drive cage firmware version — showcage
◦ Disk drive firmware version — showpd -i
◦ HPE 3PAR CBIOS version — shownode -verbose
◦ Features licensed on the array — showlicense
◦ Raw license key for licensed features on the array — showlicense -raw
• Storage system hardware configuration:
◦ Number of cabinets
◦ Number of controller nodes
◦ Amount of data cache in the controller nodes — shownode
◦ Amount of control cache in the controller nodes — shownode
◦ Number and type of Fibre Channel adapters in each node — showport -i
◦ Number of drive cages — showcage (also used above for cage firmware)
◦ Number of drive magazines — showcage -d
◦ Number and sizes of drives on the magazines — showpd
• Physical condition of system hardware and cabinet (note presence of scratches, dents, missing
screws, broken bezels, damaged ports, and other visible anomalies)
• Destination address or addresses and list of the equipment going to each address
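One way to capture the software and license items above in a single pass is to run the listed commands
back to back from an interactive CLI session and save the session log; a minimal sketch using only the
commands named in this list:

cli% showinventory
cli% showversion -b -a
cli% showcage
cli% showpd -i
cli% shownode -verbose
cli% showlicense
cli% showlicense -raw
cli% shownode
cli% showport -i
cli% showcage -d
cli% showpd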

NOTE:
In this and other chapters, the command-line examples use bold type to indicate user input and
angle brackets (< >) to denote variables. Examples might not match the exact output of any particular
system.

Uninstalling the system


This section provides instructions for uninstalling a system.

NOTE:
If you are planning to reinstall using the existing license, record the existing license before completing
this uninstall procedure by using the showlicense and showlicense -raw commands.

Procedure
1. Connect the laptop to the highest numbered controller node with a serial cable.
2. Using a terminal emulator program, such as PuTTY, set your maintenance laptop console to the
following settings (a command-line alternative follows the table).



Setting Value

Baud rate 57600

Data bits 8

Parity None

Stop bits 1

Flow control Xon/Xoff
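If you prefer to start the serial session from a command prompt rather than the PuTTY GUI, a minimal
sketch using plink (installed with PuTTY); the COM3 port name is an assumption, so substitute the COM
port assigned to your serial adapter:

plink -serial COM3 -sercfg 57600,8,n,1,X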


3. Log in to the node as the console user.

3PAR(TM) OS <version> <sernum>-<nodeID> ttyS0


(none) login: console<password>

NOTE: To cancel the process at any time, press CTRL+C.


4. From the console menu, enter 8 to select Perform a deinstallation.

3PAR Console Menu XXXXXXX-X 3.x.x.xxx

1. Out Of The Box Procedure


2. Re-enter network configuration
3. Update the CBIOS
4. Enable or disable CLI error injections
5. Perform a Node-to-Node rescue
6. Set up the system to wipe and rerun ootb
7. Cancel a wipe
8. Perform a deinstallation
9. Update the system for recently added hardware (admithw)
10. Check system health (checkhealth)
11. Load network configuration
12. Exit
> 8

WARNING:
Proceeding with the deinstallation causes complete and irrecoverable loss of data on the
storage system.
5. A warning message appears to indicate that all data on this system will be lost. Enter y to begin
the deinstallation process and the system reboots.

WARNING:
Running this script will result in complete loss of data on this system.
Are you sure you want to continue? (y/n)
y

6. After the system reboots, log on as the console user.

3PAR(TM) InForm(R) OS <version> <sernum>-<nodeID> ttyS0
(none) login: console<password>

7. From the console menu, enter 8 to select Perform a deinstallation.


8. At this point, all chunklets in the system are initialized to clear volume data. The deinstallation is
estimated to take about one to two hours, depending on the capacity of the drives in the system.
Enter y to wait for all chunklets to be initialized.

NOTE:
The time required for all chunklet reinitializations during deinstallation depends on the type,
size, and number of disks. In the example below, the estimate is 71 minutes.

NOTE:
If you do not wait for the chunklets to be initialized, data still resides on the disks but cannot be
easily accessed. When the chunklets are initialized, zeros are written over the existing data.
If you require additional assistance, contact HPE 3PAR Technical Support.

At this point, all chunklets in the system will be initialized to clear volume
data. This is estimated to take about 71 minutes.

Wait for all chunklets to be initialized? (y/n)
y
32 of 2176 chunklets initialized (1%).
Est. completion in 70 minutes 0 seconds.

80 of 2176 chunklets initialized (3%).
Est. completion in 68 minutes 25 seconds.

9. From the SP Main menu, enter 3 for StoreServ Configuration Management.

1 SP Main
3PAR Service Processor Menu

Transfer media: ethernet Transfer status: Ok

Enter Control-C at any time to abort this process

1 ==> SP Control/Status
2 ==> Network Configuration
3 ==> StoreServ Configuration Management
4 ==> StoreServ Product Maintenance
5 ==> Local Notification Configuration
6 ==> Site Authentication Key Manipulation
7 ==> Interactive CLI for an StoreServ

X Exit

10. From the StoreServ Configs menu, enter 4 for Remove a StoreServ, and then press ENTER.

3 StoreServ Configs
3PAR Service Processor Menu

Transfer media: ethernet Transfer status: Ok

SP - StoreServ Configuration Manipulation

Enter Control-C at any time to abort this process

1 ==> Display StoreServ information


2 ==> Add a new StoreServ
3 ==> Modify an StoreServ config parameters
4 ==> Remove an StoreServ

X Return to the previous menu

11. Enter x to return to the previous menu.


12. From the SP Main menu, enter 1 for SP Control/Status and press ENTER.

1 SP Main
HP 3PAR Service Processor Menu

Transfer media: ethernet Transfer status: Ok

Enter Control-C at any time to abort this process

1 ==> SP Control/Status
2 ==> Network Configuration
3 ==> StoreServ Configuration Management
4 ==> StoreServ Product Maintenance
5 ==> Local Notification Configuration
6 ==> Site Authentication Key Manipulation
7 ==> Interactive CLI for a StoreServ

X Exit

13. From the SP Control menu, enter 3 to select Halt SP and press ENTER.

1 SP CONTROL
HP 3PAR Service Processor Menu

Transfer media: ethernet Transfer status: Ok

SP Control Functions

Enter Control-C at any time to abort this process

1 ==> Display SP Version


2 ==> Reboot SP
3 ==> Halt SP
4 ==> Stop StoreServ related Processes
5 ==> Start StoreServ related Processes
6 ==> File Transfer Monitor
7 ==> SP File Transfer Trigger
8 ==> Reset Quiesce state in Transfer process
9 ==> Mount a CDROM
10 ==> Unmount a CDROM
11 ==> SP Date/Time/Geographical Location maintenance
12 ==> Manage NTP configuration
13 ==> Display SP status
14 ==> SP User Access Control
15 ==> SP Process Control Parameters
16 ==> Maintain SP Software
17 ==> SP File Maintenance
18 ==> RESERVED
19 ==> Request a SPLOR
20 ==> Request an MSPLOR
21 ==> Run SPCheckhealth
22 ==> SP Certificate Information

X Return to previous menu

14. Enter Y to confirm that you want to halt and power off the SP.

1.3.1 SP SHUTDOWN
HP 3PAR Service Processor Menu

Transfer media: ethernet Transfer status: Ok

Confirmation

Enter Control-C at any time to abort this process

Halt will power off the SP!


Are you certain you want to halt the SP now?
(y or n)

15. Set all power breakers on the PDUs to the OFF position.

CAUTION:
As a safety precaution, all drive magazines must be properly grounded before they are removed
from the drive chassis.

16. Unplug the system main power cords.
17. For system and drive expansion cabinets, coil all main power cords and strap them along the
mounting rails on the side of the cabinet panel. Use cable tie wraps to secure the cords inside the
rack.
18. Disconnect all external connections from the host computers and drive cage expansion racks to the
system and remove the cables from the rack. Leave the internal Fibre Channel and SP connections
intact if possible.
19. Insert dust plugs into all open system Fibre Channel ports and secure all Fibre Channel, Ethernet,
and serial cables remaining inside the rack.

Restoring the system to factory defaults


To restore the HPE 3PAR StoreServ storage to the factory default settings, follow these steps:

Procedure
1. Connect the laptop to the highest numbered controller node with a serial cable.
2. Using a terminal emulator program, such as PuTTY, set your maintenance laptop console to the
following settings.

Setting Value

Baud rate 57600

Data bits 8

Parity None

Stop bits 1

Flow control Xon/Xoff


3. Log in to the node as the console user.

3PAR(TM) OS <version> <sernum>-<nodeID> ttyS0


(none) login: console<password>

NOTE: To cancel the process at any time, press CTRL+C.


4. From the console menu, enter 8 to select Perform a deinstallation.



3PAR Console Menu XXXXXXX-X 3.x.x.xxx

1. Out Of The Box Procedure


2. Re-enter network configuration
3. Update the CBIOS
4. Enable or disable CLI error injections
5. Perform a Node-to-Node rescue
6. Set up the system to wipe and rerun ootb
7. Cancel a wipe
8. Perform a deinstallation
9. Update the system for recently added hardware (admithw)
10. Check system health (checkhealth)
11. Load network configuration
12. Exit
> 8

WARNING:
Proceeding with the deinstallation causes complete and irrecoverable loss of data on the
storage system.
5. A warning message appears to indicate that all data on this system will be lost. Enter y to begin
the deinstallation process and the system reboots.

WARNING:
Running this script will result in complete loss of data on this system.
Are you sure you want to continue? (y/n)
y

6. After the system reboots, log on as the console user.

3PAR(TM) InForm(R) OS <version> <sernum>-<nodeID> ttyS0


(none) login: console<password>

7. From the console menu, enter 8 to select Perform a deinstallation.


8. At this point, all chunklets in the system are initialized to clear volume data. The deinstallation is
estimated to take about one to two hours, depending on the capacity of the drives in the system.
Enter y to wait for all chunklets to be initialized.

NOTE:
The time required for all chunklet reinitializations during deinstallation depends on the type,
size, and number of disks. In the example below, the estimate is 71 minutes.

NOTE:
If you do not wait for the chunklets to be initialized, data still resides on the disks but cannot be
easily accessed. When the chunklets are initialized, zeros are written over the existing data.
If you require additional assistance, contact HPE 3PAR Technical Support.

At this point, all chunklets in the system will be initialized to clear volume
data. This is estimated to take about 71 minutes.

Wait for all chunklets to be initialized? (y/n)
y
32 of 2176 chunklets initialized (1%).
Est. completion in 70 minutes 0 seconds.

80 of 2176 chunklets initialized (3%).
Est. completion in 68 minutes 25 seconds.

9. After the system reboots, log on as the console user.

3PAR(TM) InForm(R) OS <version> <sernum>-<nodeID> ttyS0


(none) login: console<password>

10. From the console menu, enter 6 to select Set up the system to wipe and rerun ootb.

3PAR Console Menu XXXXXXX-X 3.x.x.xxx

1. Out Of The Box Procedure


2. Re-enter network configuration
3. Update the CBIOS
4. Enable or disable CLI error injections
5. Perform a Node-to-Node rescue
6. Set up the system to wipe and rerun ootb
7. Cancel a wipe
8. Perform a deinstallation
9. Update the system for recently added hardware (admithw)
10. Check system health (checkhealth)
11. Load network configuration
12. Exit
> 6

The OOTB procedure can take several minutes to complete.

Troubleshooting
NOTE:
This section describes how to troubleshoot the StoreServ system.
The outputs shown in this section are only examples and might not reflect your system
configuration.

Troubleshooting issues with the storage system


Alerts issued by the storage system
Alerts are triggered by events that require intervention by the system administrator. To learn more about
alerts, see the HPE 3PAR StoreServ Storage 3PAR Alerts Reference: Service Edition at the Hewlett
Packard Enterprise Service Access Workbench (SAW) website.
Alerts are processed by the service processor (SP). The Hewlett Packard Enterprise Support Center
(HPESC) takes action on alerts that are not customer administration alerts. Customer administration
alerts are managed by customers.

Alert severity levels

Severity        Description

Fatal           A fatal event has occurred. It is no longer possible to take remedial action.

Critical        The event is critical and requires immediate action.

Major           The event requires immediate action.

Minor           An event has occurred that requires action, but the situation is not yet serious.

Degraded        An aspect of performance or availability might have become degraded. You must
                determine whether action is necessary.

Informational   The event is informational. No action is required other than acknowledging or
                removing the alert.

Alert notifications by email from the service processor


During the service processor (SP) setup, the Send email notification of system alerts option was either
enabled or disabled. If enabled, the SP sends email notifications of alerts to the system support contact.
The following notification rules might have been selected during the SP setup:
• Default notifications—Gives the set of notifications currently used by default.
• All notifications—Selects all notifications available.
• No notifications—Suppresses all notifications from the system.
• Custom—Creates a user-defined set of notification rules. You can filter rules by All types, All
severities, All codes, or All sources.

For drive alerts, the Component line in the right column lists the cage number, magazine number, and
drive number (cage:magazine:drive). The first and second numbers are sufficient to identify the exact
drive in a storage system, since there is always only a single drive (drive 0) in a single magazine.

Alert notifications in the HPE 3PAR StoreServ Storage 3PAR Service Console
In the Detail pane of the HPE 3PAR StoreServ Storage 3PAR Service Console (SC), an alert notification
will display in the Notifications box.

Views (1)—The Views menu identifies the currently selected view. Most list panes have several views
that you can select. Clicking the down arrow to the right of a view exposes or hides the Views drop-down
list. Map views, when available, can be selected from the Views menu by clicking the map icon.
Actions (2)—The Actions menu allows you to perform actions on one or more resources that you have
selected, in the list pane. If you do not have permission to perform an action, the action is not displayed in
the menu. Also, some actions might not be displayed due to system configurations, user roles, or
properties of the selected resource.
Notifications box (3)—The Notifications box is displayed when an alert or task has affected the
resource.
Resource detail (4)—Information for the selected view is displayed in the Resource detail area.

Viewing alerts

Procedure
1. To view the alerts, issue the HPE 3PAR CLI showalert command.
Alert message codes have seven hexadecimal digits in the schema AAABBBB, where:
• AAA is a 3-digit major code.
• BBBB is a 4-digit sub-code.
• 0x precedes the code to indicate hexadecimal notation.
Message codes ending in de indicate a degraded state alert.
Message codes ending in fa indicate a failed state alert.
For example, a hypothetical code 0x02700de has major code 027 and sub-code 00de; the trailing de
marks it as a degraded-state alert.

See the HPE 3PAR OS Command Line Interface Reference for complete information on the display
options on the event logs.
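For example, two commonly used forms (output omitted): showalert -n lists only new alerts, which is the
set evaluated by the checkhealth Alert component described later in this chapter, and showalert -d
displays the full detail for each alert.

cli% showalert -n
cli% showalert -d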

HPE 3PAR BIOS Error Codes


For information about the HPE 3PAR BIOS Error Codes, see the HPE 3PAR BIOS Error Codes
Reference Guide available at the HPE 3PAR Services Access Workbench (SAW) website.
• Direct link: h41302.www4.hp.com/km/saw/view.do?docId=emr_na-a00019677en_us
• Search by ID: From the Search page, enter a00019677en_us in the query field.

I/O module error codes


This table describes the possible error codes appearing in the 7–segment display on the back panel of
the I/O module. For information on the location of the 7–segment display, see I/O Module LEDs.

Error Type: I/O Module Error

Error Code  Error Detail
A0          ESP generic error
A1          ESP watchdog is fired
A2          ESP conflict in SAS domain (domain A/B)
A3          Error in Expander communication
A4          Missing information in I/O Module manufacturing NVRAM
A5          I2C arbitration error
A6          Error in inter-ESP communication
A8          Error in GPIO Expander I2C bus
A9          Permanent error in ESP NVRAM I2C bus
AD          Error in ESP event log
AE          Permanent error in Backplane I2C bus
AF          Error in Backplane NVRAM access
B0          Expander generic error
B1          Expander watchdog is fired
B2          Expander conflict in SAS domain (side A/B)
B3          Expander using default SAS address
B4          I2C arbitration error
B5          Error in inter-Expander communication
B7          System event log error
BD          Error in ESP communication
BF          System identification value is not available

Recommended action:
1. Remove the module, wait 10 seconds, and reinsert the module.
2. If the error persists, check for new firmware releases and upgrade the enclosure firmware. New
   firmware versions, containing new features and defect fixes, are released periodically.
3. If the error persists, contact a Hewlett Packard Enterprise representative. An I/O module replacement
   may be necessary.

Error Type: I/O Module Firmware Error

Error Code  Error Detail
A7          ESP firmware version mismatch with the ESP firmware version in the partner I/O module
B6          Expander firmware version mismatch with the Expander firmware version in the partner I/O module
B8          Expander firmware image error
BE          Expander firmware version mismatch with the ESP firmware version in its own I/O module

Recommended action:
1. Update the firmware of the I/O module that displays the error and wait until it restarts.
2. Then update the firmware of the other I/O module and wait until it restarts.

Error Type: SAS Cable Error

Error Code  Error Detail
B9          SAS cable hardware error
BA          Hewlett Packard Enterprise unsupported SAS cable

Recommended action:
1. Verify the SAS cable status indicators. For cables with an amber LED (error), check that the cables
   are properly connected on both sides.
2. If the cables are properly connected and the error persists, replace the cables.
3. If replacing the cables does not resolve the issue, check for new firmware releases. A firmware
   upgrade might fix the issue.
4. If no new firmware is available, or upgrading the firmware is not possible, contact a Hewlett Packard
   Enterprise representative. An I/O module replacement may be necessary.

Error Type: Disk Drive Error

Error Code  Error Detail
BB          Error in disk drive
BC          Not authentic drive in the enclosure

Recommended action:
• BB: Check your storage administration software for more information about the problem detected and
  how to properly fix it.
• BC: There is at least one non-authentic Hewlett Packard Enterprise disk drive on the system.
  Non-authentic disks may not work properly on this system. Check your storage administration software
  for more information about the problem detected and how to properly fix it.

Error Type: Thermal Control Error

Error Code  Error Detail
C0          Generic temperature error
C1          Permanent error in temperature sensor I2C bus
C2          Error reading data from temperature sensor

Recommended action:
1. Remove the module, wait 10 seconds, and reinsert the module.
2. If the error persists, check for new firmware releases and upgrade the enclosure firmware. New
   firmware versions, containing new features and defect fixes, are released periodically.
3. If the error persists, contact a Hewlett Packard Enterprise representative. An I/O module replacement
   may be necessary.

Error Type: Thermal Shutdown Alarms

Error Code  Error Detail
C3          Warning temperature reached in temperature sensor
C4          Critical temperature reached in temperature sensor
C5          Minimum temperature reached in temperature sensor
C6          Fans commanded to maximum speed
C7          System shutdown because of over temperature

Recommended action:
Check for thermal issues, such as extremely hot drives, air blockages, missing or failed fans, or high
ambient temperature.

Error Type: Power Supply Module Error

Error Code  Error Detail
D0          Generic error in Power Supply module 1
D1          Generic error in Power Supply module 2
D2          Absence of the Power Supply module 1
D3          Absence of the Power Supply module 2
D9          Error in system voltage
DA          Input power loss in Power Supply module 1
DB          Input power loss in Power Supply module 2

Recommended action:
If a power supply module does not have a green LED illuminated, verify that it is correctly cabled to a
power source.

NOTE: This warning can also be caused by a failed power supply.

1. If cabling was not the root cause, troubleshoot by reinserting each power supply in turn.
2. If the error persists, check for new firmware releases and upgrade the enclosure firmware. New
   firmware versions, containing new features and defect fixes, are released periodically.
3. If the error persists, contact a Hewlett Packard Enterprise representative. A power supply replacement
   may be necessary.

Error Type: Power Supply Module Communication Error

Error Code  Error Detail
D4          Permanent error in Power Supply modules I2C bus
D5          Communication error with Power Supply module 1
D6          Communication error with Power Supply module 2

Recommended action:
1. Remove and reinsert each power supply in turn.
2. If the error persists, check for new firmware releases and upgrade the enclosure firmware. New
   firmware versions, containing new features and defect fixes, are released periodically.
3. If the error persists, contact a Hewlett Packard Enterprise representative. A power supply replacement
   may be necessary.

Error Type: Fan Module Error

Error Code  Error Detail
E0          Generic error in Fan module 1
E1          Generic error in Fan module 2
E2          Absence of the Fan module 1
E3          Absence of the Fan module 2
E9          Failure in one or more rotors of Fan module 1
EA          Failure in one or more rotors of Fan module 2

Recommended action:
1. If a fan module has an amber LED indication, try removing and reinserting it.
2. If none of the fans have an amber LED, replace one fan module and wait 30 seconds.
3. If the error persists, check for new firmware releases and upgrade the enclosure firmware. New
   firmware versions, containing new features and defect fixes, are released periodically.
4. If the error persists, contact a Hewlett Packard Enterprise representative. A fan module replacement
   may be necessary.

Error Type: Fan Module Communication Error

Error Code  Error Detail
E4          Permanent error in Fan modules I2C bus
E5          Communication error with Fan module 1
E6          Communication error with Fan module 2

Recommended action:
1. Remove and reinsert each fan module in turn.
2. If this does not resolve the issue, remove and reinsert the I/O module that shows the status code.
3. If the error persists, check for new firmware releases and upgrade the enclosure firmware. New
   firmware versions, containing new features and defect fixes, are released periodically.
4. If the error persists, contact a Hewlett Packard Enterprise representative. A fan module replacement
   may be necessary.

I/O Module LEDs

Figure 164: I/O module LEDs

Indicator / Startup condition / Operating condition / Fault conditions

1. Port Link: Blinking or solid green / Off
2. Port Error: Off / Solid amber
3. 7-segment display: A number, representing the box number, or an error/warning code / Off
4. UID: Blue / Off / Off
5. Health: Blinking green / Solid green / Off
6. Fault: Off / Blinking or solid amber

Collecting log files


For a service event, it might be necessary to collect the HPE 3PAR Service Processor (SP) log files for
Hewlett Packard Enterprise Support.

Collecting the HPE 3PAR SmartStart log files

NOTE:
You can continue to access the HPE 3PAR SmartStart log files in the Users folder after you have
removed HPE 3PAR SmartStart from your storage system.

Procedure
1. Locate this folder: C:\Users\<username>\SmartStart\log.
2. Zip all the files in the log folder.
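A minimal sketch for step 2, assuming Windows PowerShell 5.0 or later on the system where HPE 3PAR
SmartStart was installed (the destination path is an example only):

Compress-Archive -Path "C:\Users\<username>\SmartStart\log\*" -DestinationPath "C:\Users\<username>\SmartStart-logs.zip"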

Collecting SP log files from the SC interface


The following tools collect data from the HPE 3PAR Service Processor (SP):

• Audit and Logging Information—Provides audit and logging information of an attached storage
system and HPE 3PAR SP usage. This file is gathered as part of an SPLOR and Hewlett Packard
Enterprise Support personnel can view the file using HPE Service Tools and Technical Support
(STaTS).
HPE 3PAR SP audit Information is contained in the audit.log file, which provides the following audit
information:
◦ Users who accessed the HPE 3PAR SP
◦ Logon and logoff times
◦ The functionality used, such as Interactive CLI.
• SPLOR—Gathers files to diagnose HPE 3PAR SP issues. The SPLOR data can be retrieved through
the Collect support data action from the Service Processor page.

Procedure
1. Connect and log in to the HPE 3PAR SP.
2. From the HPE 3PAR Service Console (SC) main menu, select Service Processor.
3. Select Actions > Collect support data.
4. Select SPLOR data, and then click Collect to start data retrieval.
When support data collection is in progress, it will start a task which will be displayed at the top of the
page. To see details for a specific collection task in Activity view, expand the task message and click
the Details link for the task.



Collecting SP log files from the SPOCC interface

Procedure
1. Connect and log in to the HPE 3PAR Service Processor (SP).
2. From the 3PAR Service Processor Onsite Customer Care (SPOCC) main menu, click Files from the
navigation pane.
3. Click the folder icons for files > syslog > apilogs.
4. In the Action column, click Download for each log file:

SPSETLOG.log Service Processor setup log

ARSETLOG.system_serial_number.log Storage System setup log

errorLog.log General errors


5. Zip the downloaded log files.

Health check on the storage system

Checking health of the storage system—HPE 3PAR SSMC


Health panels are included in the Overview view of most detail panes. The dashboard screen
summarizes the key properties and health of all connected storage systems.

Procedure
1. Log in to the HPE 3PAR StoreServ Management Console (SSMC) and on the main menu, click
Storage Systems > Systems. The systems managed by the SSMC are listed, and the health and
configuration summary panels provide a system overview and show the system health status.

Checking health of the storage system—HPE 3PAR CLI


The HPE 3PAR CLI checkhealth command checks and displays the status of storage system hardware
and software components. For example, the checkhealth command can check for unresolved system
alerts, display issues with hardware components, or display information about virtual volumes that are not
optimal.
By default the checkhealth command checks most storage system components, but you can also
check the status of specific components. For a complete list of storage system components analyzed by
the checkhealth command, see Troubleshooting StoreServ System Components.
The service processor (SP) also runs the checkhealth command once an hour and sends the
information to the HPE Support Center where the information is monitored periodically for unusual system
conditions.

Procedure
1. Issue the HPE 3PAR CLI checkhealth command without any specifiers to check the health of all the
components that can be analyzed.
Command syntax is: checkhealth [<options> | <component>...]
Command authority is Super, Service.
Command options are:



• The -list option, which lists all components that checkhealth can analyze.
• The -quiet option, which suppresses the display of the item currently being checked.
• The -detail option, which displays detailed information regarding the status of the system.
• The -svc option, which performs the default list of checks on the system and reports the status.
• The -full option, which displays information about the status of the full system. This is a hidden
option and only appears in the CLI Hidden Help. This option is prohibited if the -lite option is
specified. Some of the additional components evaluated take longer to run than other components.
The <component> is the command specifier, which indicates the component to check. Use the -list
option to get the list of components.

Issue checkhealth -detail to display both summary and detailed information about the hardware
and software components:

cli% checkhealth -detail


Checking alert
Checking ao
Checking cabling
Checking cage
Checking cert
Checking dar
Checking date
Checking file
Checking fs
Checking host
Checking ld
Checking license
Checking network
Checking node
Checking pd
Checking pdch
Checking port
Checking qos
Checking rc
Checking snmp
Checking task
Checking vlun
Checking vv
Checking sp
Component -----------Summary Description----------- Qty
Alert New alerts 4
Date Date is not the same on all nodes 1
LD LDs not mapped to a volume 2
License Golden License. 1
vlun Hosts not connected to a port 5
-------------------------------------------------------
5 total 13

The following information is included when you use the -detail option:

Component ----Identifier---- -----------Detailed Description-------
Alert     sw_port:1:3:1      Port 1:3:1 Degraded (Target Mode Port Went Offline)
Alert     sw_port:0:3:1      Port 0:3:1 Degraded (Target Mode Port Went Offline)
Alert     sw_sysmgr          Total available FC raw space has reached threshold of 800G (2G remaining out of 544G total)
Alert     sw_sysmgr          Total FC raw space usage at 307G (above 50% of total 544G)
Date      --                 Date is not the same on all nodes
LD        ld:name.usr.0      LD is not mapped to a volume
LD        ld:name.usr.1      LD is not mapped to a volume
vlun      host:group01       Host wwn:2000000087041F72 is not connected to a port
vlun      host:group02       Host wwn:2000000087041F71 is not connected to a port
vlun      host:group03       Host iscsi_name:2000000087041F71 is not connected to a port
vlun      host:group04       Host wwn:210100E08B24C750 is not connected to a port
vlun      host:Host_name     Host wwn:210000E08B000000 is not connected to a port
--------------------------------------------------------------------------
13 total

If there are no faults or exception conditions, the checkhealth command indicates that the System
is healthy:

cli% checkhealth
Checking alert
Checking cabling

Checking vlun
Checking vv
System is healthy

With the <component> specifier, you can check the status of one or more specific storage system
components. For example:

cli% checkhealth node pd


Checking node
Checking pd
The following components are healthy: node, pd

The -svc -full option provides a summary of service related issues by default. If you use the -
detail option, both a summary and a detailed list of service issues are displayed. The -svc -full
option displays the service related information in addition to the customer related information.
The following example displays information that is intended only for service users:

cli% checkhealth -svc
Checking alert
Checking ao
Checking cabling
...
Checking vv
Checking sp
Component -----------Summary Description------------------- Qty
Alert New alerts 2
File Nodes with Dump or HBA core files 1
PD There is an imbalance of active pd ports 1
PD PDs that are degraded or failed 2
pdch LDs with chunklets on a remote disk 2
pdch LDs with connection path different than ownership 2
Port Missing SFPs 6

The following information is included when you use the -detail option. The detailed output can be
very long if a node or cage is down.

cli% checkhealth -svc -detail


Checking alert
Checking ao
Checking cabling
...
Checking vv
Checking sp
Component -----------Summary Description------------------- Qty
Alert New alerts 2
File Nodes with Dump or HBA core files 1
PD There is an imbalance of active pd ports 1
PD PDs that are degraded or failed 2
pdch LDs with chunklets on a remote disk 2
pdch LDs with connection path different than ownership 2
Port Missing SFPs 6

Component --------Identifier--------- --------Detailed Description---------------------
Alert     hw_cage_sled:3:8:3,sw_pd:91 Magazine 3:8:3, Physical Disk 91 Degraded (Prolonged Missing B Port)
Alert     hw_cage_sled:N/A,sw_pd:54   Magazine N/A, Physical Disk 54 Failed (Prolonged Missing, Missing A Port, Missing B Port)
File      node:0                      Dump or HBA core files found
PD        disk:54                     Detailed State: prolonged_missing
PD        disk:91                     Detailed State: prolonged_missing_B_port
PD        --                          There is an imbalance of active pd ports
pdch      LD:35                       Connection path is not the same as LD ownership
pdch      LD:54                       Connection path is not the same as LD ownership
pdch      ld:35                       LD has 1 remote chunklets
pdch      ld:54                       LD has 10 remote chunklets
Port      port:2:2:3                  Port or devices attached to port have experienced within the last day
To check for inconsistencies between the System Manager and kernel states and CRC errors for FC
and SAS ports, use the -full option:

cli% checkhealth -list -svc -full

Component   --------------------------Component Description------------------------------
alert       Displays any non-resolved alerts.
ao          Displays any Adaptive Optimization issues.
cabling     Displays any cabling errors.
cage        Displays non-optimal drive cage conditions.
cert        Displays Certificate issues.
consistency Displays inconsistencies between sysmgr and kernel**
dar         Displays Data Encryption issues.
date        Displays if nodes have different dates.
file        Displays non-optimal file system conditions.
fs          Displays Files Services health.
host        Checks for FC host ports that are not configured for virtual port support.
ld          Displays non-optimal LDs.
license     Displays license violations.
network     Displays ethernet issues.
node        Displays non-optimal node conditions.
pd          Displays PDs with non-optimal states or conditions.
pdch        Displays chunklets with non-optimal states.
port        Displays port connection issues.
portcrc     Checks for increasing port CRC errors.**
portpelcrc  Checks for increasing SAS port CRC errors.**
qos         Displays Quality of Service issues.
rc          Displays Remote Copy issues.
snmp        Displays issues with SNMP.
sp          Checks the status of connection between sp and nodes.
task        Displays failed tasks.
vlun        Displays inactive VLUNs and those which have not been reported by the host agent.
vv          Displays non-optimal VVs.

Troubleshooting system components


Cause
Use the HPE 3PAR CLI checkhealth -list command to list all the components that can be analyzed
by the checkhealth command.



For detailed troubleshooting information about specific components, examples, and suggested actions for
correcting issues with components, refer to the section corresponding to the component name in
Troubleshooting StoreServ System Components.

Troubleshooting StoreServ System Components


Cause
Use the checkhealth -list command to list all components that can be analyzed by the
checkhealth command.
For detailed troubleshooting information about specific components, examples, and suggested actions for
correcting issues with components, refer to the section corresponding to the component name in the
following table.

Table 47: Component Functions

Component     Function

Alert         Displays any unresolved alerts.

AO            Displays Adaptive Optimization issues.

Cabling       Displays any cabling errors.

Cage          Displays drive cage conditions that are not optimal.

Cert          Displays Certificate issues.

Consistency   Displays inconsistencies between sysmgr and the kernel.

Date          Displays if nodes have different dates.

File          Displays file system conditions that are not optimal.

FS            Displays File Services health.

Host          Checks for FC host ports that are not configured for virtual port support.

LD            Displays LDs that are not optimal.

License       Displays license violations.

Network       Displays Ethernet issues.

Node          Displays node conditions that are not optimal.

PD            Displays PDs with states or conditions that are not optimal.

PDCH          Displays chunklets with states that are not optimal.

Port          Displays port connection issues.

Portcrc       Checks for increasing port CRC errors.

Portpelcrc    Checks for increasing SAS port CRC errors.

QOS           Displays Quality of Service issues.*

RC            Displays Remote Copy issues.

SNMP          Displays issues with SNMP.

SP            Checks the status of Ethernet connections between the Service Processor and nodes,
              when run from the SP.

Task          Displays failed tasks.

VLUN          Displays inactive VLUNs and those which have not been reported by the host agent.

VV            Displays VVs that are not optimal.

Alert
Displays any unresolved alerts and shows any alerts that would be seen by showalert -n.

Format of Possible Alert Exception Messages

Alert <component> <alert_text>

Alert Example

Component -Identifier- --------Description--------------------


Alert hw_cage:1 Cage 1 Degraded (Loop Offline)
Alert sw_cli 11 authentication failures in 120 secs

Alert Suggested Action


View the full Alert output using the SSMC (GUI) or the showalert -d CLI command.

Cabling
Displays issues with cabling of drive enclosures.
• Check for drive enclosures that are not supported with 20000 systems.
• Check for balanced drive enclosure counts on the ports of node-pairs.
• Check for drive enclosures not connected to node-pairs.
• Check for drive enclosures not connected to the same port of node-pairs.
• Check for drive enclosures connected to wrong ports or I/O modules.
• Check for drive enclosure cables in the wrong order.
• Check for drive enclosures with no PDs installed.
• Check for broken SAS cables.

NOTE:
To avoid cabling errors, all drive enclosures must have at least one hard drive installed before
powering on the enclosure.

Format of Possible Cabling Exception Messages

Cabling "--" "Unexpected cage found on node<node> DP-<port#>"


Cabling <cageid> "Cabled to node<node> DP-<port#> remove a cable from <nsp
list>"
Cabling <cageid> "All three SAS ports of I/O <io> used, cabling check
incomplete"
Cabling <cageid> "Cage is connected to too many node ports node<node> DP-
<port#>"
Cabling <cageid> "Cage has multiple paths to node<node> DP-<port#>, correct
cabling"
Cabling <cageid> "I/O <io> missing. Check status and cabling to <cageid>
I/O <io>"
Cabling <cageid> "Cage not connected to node<node>, move one connection
from node<node> to node<partner node>"
Cabling <cageid> "Cage connected to different ports node<node> DP-<port#>
and node<partner node> DP-<port#>”
Cabling <cageid> "Cage connected to non-paired nodes node<node> DP-<port#>
and node<other node> DP-<port#>”
Cabling <cageid> "Check connections or replace cable from (<cageid>, I/O
<io>, DP-<port#>) to (<cageid>, IO <io>, DP-<port#>) - failed links
Cabling <cageid> "Check connections or replace cable from (<cageid>,
node<node>, DP-<port#>) to (<cageid>, IO <io>, DP-<port#>) - failed links
Cabling <cageid> "Check connections or replace cable from (<cageid>, I/O
<io>, DP-<port#>) to (<cageid>, IO <io>, DP-<port#>) - links at 3Gbps
Cabling <cageid> "Check connections or replace cable from (<cageid>,
node<node>, DP-<port#>) to (<cageid>, IO <io>, DP-<port#>) - links at 3Gbps
Cabling <cageid> "node<node> DP-<port#> has <current count> cages
connected, <max count>"
Cabling <cageid> "node<node> DP-<port#> has <count> cages, node<partner
node> DP-<port#> has <count> cages"
Cabling <cageid> "Cable in (<cageid>, I/O <io>, DP-<port#>) should be in
(<cageid>, I/O <io>, DP-<port#>)”
Cabling <cageid> "node<node> should be calbed in order: <cagelist>”
Cabling <cageid> "No PDs installed in cage, cabling check incomplete"

Cabling Example 1

Checking cabling
Component ---Description---- Qty
Cabling Bad SAS connection 2

Component -Identifier- --------------------Description--------------------
Cabling cage2 Check connections or replace cable from (cage2, I/O
0, DP-2) to (cage3, I/O 0, DP-1) - links at 3Gbps
Cabling cage5 Check connections or replace cable from (cage5, I/O
0, DP-2) to (cage6, I/O 0, DP-1) - failed links

Cabling Suggested Action 1


Cables with failed links have between 1 and 3 links working in the cable. A cable with 4 working links is
healthy and a cable with 0 working links will be reported as a missing I/O module.
Cables with links at 3Gbps will have at least one link that is working slower than the expected 6Gbps.
To gather more precise information about the issue, use the showcage and showportdev commands.
The showcage command shows the cages and to which <nsp> they are attached.

cli% showcage
Id Name LoopA Pos.A LoopB Pos.B Drives Temp RevA RevB Model FormFactor
0 cage0 0:2:1 0 1:2:1 1 17 24-34 1.76 1.76 DCS6 SFF
1 cage1 0:2:1 1 1:2:1 0 17 25-33 1.76 1.76 DCS6 SFF
2 cage2 0:2:3 0 1:2:3 1 17 24-32 1.76 1.76 DCS6 SFF
3 cage3 0:2:3 1 1:2:3 0 16 24-33 1.76 1.76 DCS6 SFF
4 cage4 0:1:1 0 1:1:1 1 4 32-34 1.76 1.76 DCS5 LFF
5 cage5 0:1:1 1 1:1:1 0 4 32-35 1.76 1.76 DCS5 LFF
6 cage6 0:1:3 0 1:1:3 1 4 32-35 1.76 1.76 DCS5 LFF
7 cage7 0:1:3 1 1:1:3 0 4 31-35 1.76 1.76 DCS5 LFF

The showportdev sas <nsp> command shows every SAS entity attached to the port.

cli% showportdev sas 3:0:1


ID DevName SASAddr Phy ParentDevHdl DevHdl AttDevHdl
Link AttID AttDevName AttSASAddr AttPhy
<3:0:1> 50002ACFF70184F9 50002AC3010184F9 0 - 0x01 0x0a
6Gbps exp0a 50050CC10230567F 50050CC10230567F 8
<3:0:1> 50002ACFF70184F9 50002AC3010184F9 1 - 0x01 0x0a
6Gbps exp0a 50050CC10230567F 50050CC10230567F 9
<3:0:1> 50002ACFF70184F9 50002AC3010184F9 2 - 0x01 0x0a
6Gbps exp0a 50050CC10230567F 50050CC10230567F 10
<3:0:1> 50002ACFF70184F9 50002AC3010184F9 3 - 0x01 0x0a
6Gbps exp0a 50050CC10230567F 50050CC10230567F 11

For each cage:

• Phys 0-3 are the mfg port for external cage I/O modules; they connect to nothing for node cages.
• Phys 4-7 are the DP-2 (out) port for non-node cage I/O modules; they connect to DP-1 for node cages.
• Phys 8-11 are the DP-1 (in) port for non-node cage I/O modules; they connect to the on-board SAS IOC
  for node cages.
• Phys 12-35 are the PD slot Phys.
• Phy 36 is the expander chip, where the cage name can be found in the "AttID" column.
Cabling Example 2

Component -Identifier- --------Description------------------------


Cabling cage:0 Not connected to the same slot & port

Cabling Suggested Action 2


The recommended and factory-default node-to-drive-chassis (cage) cabling configuration is to connect a
drive cage to a node-pair (two nodes), generally nodes 0/1, 2/3, 4/5, or 6/7, and to achieve symmetry
between slots and ports (use the same slot and port on each node to a cage). In the next example, cage0
is incorrectly cabled: it is connected to slot-0 of node-0 but to slot-1 of node-1.

cli% showcage cage0


Id Name LoopA Pos.A LoopB Pos.B Drives Temp RevA RevB Model Side
0 cage0 0:0:1 0 1:1:1 0 24 28-38 2.37 2.37 DC2 n/a

After determining the desired cabling and reconnecting correctly to slot-0 and port-1 of nodes 0 & 1, the
output should look like this:

cli% showcage cage0


Id Name LoopA Pos.A LoopB Pos.B Drives Temp RevA RevB Model Side
0 cage0 0:0:1 0 1:0:1 0 24 28-38 2.37 2.37 DC2 n/a

Cage
Displays drive cage conditions that are not optimal and reports exceptions if any of the following do not
have normal states:
• Ports
• Drive magazine states (DC1, DC2, & DC4)
• Small form-factor pluggable (SFP) voltages (DC2 and DC4)
• SFP signal levels (RX power low and TX failure)
• Power supplies
• Cage firmware (is not current)
Reports if a servicecage operation has been started and has not ended.

Format of Possible Cage Exception Messages

Cage cage:<cageid> "Missing A loop" (or "Missing B loop")


Cage cage:<cageid> "Interface Card <STATE>, SFP <SFPSTATE>" (is
unqualified, is disabled, Receiver Power Low: Check FC Cable, Transmit
Power Low: Check FC Cable, has RX loss, has TX fault)"
Cage cage:<cageid>,mag:<magpos> "Magazine is <MAGSTATE>"
Cage cage:<cageid> "Power supply <X> fan is <FANSTATE>"
Cage cage:<cageid> "Power supply <X> is <PSSTATE>" (Degraded, Failed,
Not_Present)
Cage cage:<cageid> "Power supply <X> AC state is <PSSTATE>"
Cage cage:<cageid> "Cage is in 'servicing' mode (Hot-Plug LED may be
illuminated)"
Cage cage:<cageid> "Firmware is not current"

Cage Example 1

Component -------------Description-------------- Qty


Cage Cages missing A loop 1
Cage SFPs with low receiver power 1

Component -Identifier- --------Description------------------------


Cage cage:4 Missing A loop
Cage cage:4 Interface Card 0, SFP 0: Receiver Power Low: Check
FC Cable

Cage Suggested Action 1


Check the connection/path to the SFP in the cage and the level of signal the SFP is receiving. An RX
Power reading below 100 µW signals the RX Power Low condition; typical readings are between 300 and
400 µW. Useful CLI commands are showcage -d and showcage -sfp -ddm.
At least two connections are expected for drive cages, and this exception is flagged if that is not the case.

cli% showcage -d cage4
Id Name LoopA Pos.A LoopB Pos.B Drives Temp RevA RevB Model Side
4 cage4 --- 0 3:2:1 0 8 28-36 2.37 2.37 DC4 n/a

-----------Cage detail info for cage4 ---------

Fibre Channel Info PortA0 PortB0 PortA1 PortB1


Link_Speed 0Gbps -- -- 4Gbps

----------------------------------SFP Info-----------------------------------
FCAL SFP -State- --Manufacturer-- MaxSpeed(Gbps) TXDisable TXFault RXLoss
DDM
0 0 OK FINISAR CORP. 4.1 No No Yes
Yes
1 1 OK FINISAR CORP. 4.1 No No No
Yes

Interface Board Info FCAL0 FCAL1


Link A RXLEDs Off Off
Link A TXLEDs Green Off
Link B RXLEDs Off Green
Link B TXLEDs Off Green
LED(Loop_Split) Off Off
LEDS(system,hotplug) Green,Off Green,Off

-----------Midplane Info-----------
Firmware_status Current
Product_Rev 2.37
State Normal Op
Loop_Split 0
VendorId,ProductId 3PARdata,DC4
Unique_ID 1062030000098E00
...

-------------Drive Info------------- ----LoopA----- ----LoopB-----


Drive NodeWWN LED Temp(C) ALPA LoopState ALPA LoopState
0:0 2000001d38c0c613 Green 33 0xe1 Loop fail 0xe1 OK
0:1 2000001862953510 Green 35 0xe0 Loop fail 0xe0 OK
0:2 2000001862953303 Green 35 0xdc Loop fail 0xdc OK
0:3 2000001862953888 Green 31 0xda Loop fail 0xda OK

cli% showcage -sfp cage4


Cage FCAL SFP -State- --Manufacturer-- MaxSpeed(Gbps) TXDisable TXFault
RXLoss DDM
4 0 0 OK FINISAR CORP. 4.1 No No
Yes Yes
4 1 1 OK FINISAR CORP. 4.1 No No
No Yes

cli% showcage -sfp -ddm cage4

---------Cage 4 Fcal 0 SFP 0 DDM----------
-Warning- --Alarm--
--Type-- Units Reading Low High Low High
Temp C 33 -20 90 -25 95
Voltage mV 3147 2900 3700 2700 3900
TX Bias mA 7 2 14 1 17
TX Power uW 394 79 631 67 631
RX Power uW 0 15 794 10* 1259

---------Cage 4 Fcal 1 SFP 1 DDM----------


-Warning- --Alarm--
--Type-- Units Reading Low High Low High
Temp C 31 -20 90 -25 95
Voltage mV 3140 2900 3700 2700 3900
TX Bias mA 8 2 14 1 17
TX Power uW 404 79 631 67 631
RX Power uW 402 15 794 10 1259

Cage Example 2

Component -------------Description-------------- Qty


Cage Degraded or failed cage power supplies 2
Cage Degraded or failed cage AC power 1

Component -Identifier- ------------Description------------


Cage cage:1 Power supply 0 is Failed
Cage cage:1 Power supply 0's AC state is Failed
Cage cage:1 Power supply 2 is Off

Cage Suggested Action 2


A cage power supply or power supply fan has failed, is missing input AC power, or the switch is turned
OFF. The showcage -d cageX and showalert commands provide more detail.

cli% showcage -d cage1
Id Name LoopA Pos.A LoopB Pos.B Drives Temp RevA RevB Model Side
1 cage1 0:0:2 0 1:0:2 0 24 27-39 2.37 2.37 DC2 n/a

-----------Cage detail info for cage1 ---------


Interface Board Info FCAL0 FCAL1
Link A RXLEDs Green Off
Link A TXLEDs Green Off
Link B RXLEDs Off Green
Link B TXLEDs Off Green
LED(Loop_Split) Off Off
LEDS(system,hotplug) Amber,Off Amber,Off

-----------Midplane Info-----------
Firmware_status Current
Product_Rev 2.37
State Normal Op
Loop_Split 0
VendorId,ProductId 3PARdata,DC2
Unique_ID 10320300000AD000

Power Supply Info State Fan State AC Model


ps0 Failed OK Failed POI <AC input is missing
ps1 OK OK OK POI
ps2 Off OK OK POI <PS switch is turned off
ps3 OK OK OK POI

Cage Example 3

Component -Identifier- --------------Description----------------


Cage cage:1 Cage has a hotplug enabled interface card

Cage Suggested Action 3


When a servicecage operation is started, the targeted cage goes into servicing mode, illuminating the
hot plug LED on the FCAL module (DC1, DC2, DC4), and routing I/O through another path. When the
service action is finished, enter the servicecage endfc command to return the cage to normal status.
The checkhealth exception is reported if the FCAL module's hot plug LED is illuminated or if the cage
is in servicing mode. If a maintenance activity is currently occurring on the drive cage, this condition may
be ignored.

NOTE:
The primary path is indicated by an asterisk (*) in the Ports columns of the showpd output.

cli% showcage -d cage1
Id Name LoopA Pos.A LoopB Pos.B Drives Temp RevA RevB Model Side
1 cage1 0:0:2 0 1:0:2 0 24 28-40 2.37 2.37 DC2 n/a

-----------Cage detail info for cage1 ---------

Interface Board Info FCAL0 FCAL1


Link A RXLEDs Green Off
Link A TXLEDs Green Off
Link B RXLEDs Off Green
Link B TXLEDs Off Green
LED(Loop_Split) Off Off
LEDS(system,hotplug) Green,Off Green,Amber

-----------Midplane Info-----------
Firmware_status Current
Product_Rev 2.37
State Normal Op
Loop_Split 0
VendorId,ProductId 3PARdata,DC2
Unique_ID 10320300000AD000

cli% showpd -s
Id CagePos Type -State-- -----Detailed_State------
20 1:0:0 FC degraded disabled_B_port,servicing
21 1:0:1 FC degraded disabled_B_port,servicing
22 1:0:2 FC degraded disabled_B_port,servicing
23 1:0:3 FC degraded disabled_B_port,servicing

cli% showpd -p -cg 1


---Size(MB)---- ----Ports----
Id CagePos Type Speed(K) State Total Free A B
20 1:0:0 FC 10 degraded 139520 119808 0:0:2* 1:0:2-
21 1:0:1 FC 10 degraded 139520 122112 0:0:2* 1:0:2-
22 1:0:2 FC 10 degraded 139520 119552 0:0:2* 1:0:2-
23 1:0:3 FC 10 degraded 139520 122368 0:0:2* 1:0:2-
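When the maintenance activity on the cage is complete, end the servicecage operation as described
above so that the exception clears. A minimal sketch, using cage1 from this example (confirm the
servicecage endfc syntax in the CLI help before running):

cli% servicecage endfc cage1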

Cage Example 4

Component ---------Description--------- Qty


Cage Cages not on current firmware 1

Component -Identifier- ------Description------


Cage cage:3 Firmware is not current

Cage Suggested Action 4


Check the drive cage firmware revision using the showcage and showcage -d cageX commands. The
showfirmwaredb command displays the current firmware level required for the specific drive cage type.

NOTE:
The DC1 and DC3 cages have firmware in the FCAL modules. The DC2 and DC4 cages have
firmware on the cage mid-plane. Use the upgradecage command to upgrade the firmware.

cli% showcage
Id Name LoopA Pos.A LoopB Pos.B Drives Temp RevA RevB Model Side
2 cage2 2:0:3 0 3:0:3 0 24 29-43 2.37 2.37 DC2 n/a
3 cage3 2:0:4 0 3:0:4 0 32 29-41 2.36 2.36 DC2 n/a

cli% showcage -d cage3


Id Name LoopA Pos.A LoopB Pos.B Drives Temp RevA RevB Model Side
3 cage3 2:0:4 0 3:0:4 0 32 29-41 2.36 2.36 DC2 n/a

-----------Cage detail info for cage3 ---------


.
.
.
-----------Midplane Info-----------
Firmware_status Old
Product_Rev 2.36
State Normal Op
Loop_Split 0
VendorId,ProductId 3PARdata,DC2
Unique_ID 10320300000AD100

cli% showfirmwaredb
Vendor Prod_rev Dev_Id Fw_status Cage_type Firmware_File
...
3PARDATA [2.37] DC2 Current DC2 /opt...dc2/
lbod_fw.bin-2.37
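After confirming the down-level revision, the firmware can be brought current with the upgradecage
command mentioned in the note above. A minimal sketch, using cage3 from this example (confirm the
exact syntax in the CLI help before running):

cli% upgradecage cage3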

Cage Example 5

Component -Identifier- ------------Description------------


Cage cage:4 Interface Card 0, SFP 0 is unqualified

Cage Suggested Action 5


In this example, a 2 Gb/s SFP was installed in a 4 Gb/s drive cage (DC4), and the 2 Gb/s SFP is not
qualified for use in this drive cage. For cage problems, the following CLI commands are useful:
showcage -d, showcage -sfp, showcage -sfp -ddm, showcage -sfp -d, and showpd -
state.

cli% showcage -d cage4
Id Name LoopA Pos.A LoopB Pos.B Drives Temp RevA RevB Model Side
4 cage4 2:2:1 0 3:2:1 0 8 30-37 2.37 2.37 DC4 n/a

-----------Cage detail info for cage4 ---------


Fibre Channel Info PortA0 PortB0 PortA1 PortB1
Link_Speed 2Gbps -- -- 4Gbps

----------------------------------SFP Info-----------------------------------
FCAL SFP -State- --Manufacturer-- MaxSpeed(Gbps) TXDisable TXFault RXLoss
DDM
0 0 OK SIGMA-LINKS 2.1 No No No
Yes
1 1 OK FINISAR CORP. 4.1 No No No
Yes

Interface Board Info FCAL0 FCAL1


Link A RXLEDs Green Off
Link A TXLEDs Green Off
Link B RXLEDs Off Green
Link B TXLEDs Off Green
LED(Loop_Split) Off Off
LEDS(system,hotplug) Amber,Off Green,Off
...

cli% showcage -sfp -d cage4


--------Cage 4 FCAL 0 SFP 0--------
Cage ID : 4
Fcal ID : 0
SFP ID : 0
State : OK
Manufacturer : SIGMA-LINKS
Part Number : SL5114A-2208
Serial Number : U260651461
Revision : 1.4
MaxSpeed(Gbps) : 2.1
Qualified : No <<< Unqualified SFP
TX Disable : No
TX Fault : No
RX Loss : No
RX Power Low : No
DDM Support : Yes

--------Cage 4 FCAL 1 SFP 1--------


Cage ID : 4
Fcal ID : 1
SFP ID : 1
State : OK
Manufacturer : FINISAR CORP.
Part Number : FTLF8524P2BNV
Serial Number : PF52GRF
Revision : A

MaxSpeed(Gbps) : 4.1
Qualified : Yes
TX Disable : No
TX Fault : No
RX Loss : No
RX Power Low : No
DDM Support : Yes

Consistency
Displays inconsistencies between sysmgr and the kernel.
This check finds inconsistent and unusual conditions between the system manager (sysmgr) and the
node kernel. The check requires the hidden -svc -full options because it can take 20
minutes or longer on a large system.
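For example, a service-level run that includes this check (using the options named above; expect a long
runtime on a large system):

cli% checkhealth -svc -full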
Format of Possible Consistency Exception Messages

Consistency --<err>

Consistency Example

Component -Identifier- --------Description------------------------


Consistency -- Region Mover Consistency Check Failed
Consistency -- CH/LD/VV Consistency Check Failed

Consistency Suggested Action


Gather InSplore data and escalate to HPESC.

Data Encryption at Rest (DAR)


Checks for issues with data encryption. If the system is not licensed for HPE 3PAR Data Encryption, no
checks are made.
Format of Possible DAR Exception Messages

Dar -- "There are 5 disks that are not self-encrypting"

DAR Suggested Action


Remove the drives that are not self-encrypting from the system because the non-encrypted drives cannot
be admitted into a system that is running with data encryption. Also, if the system is not yet enabled for
data encryption, the presence of these disks prevents data encryption from being enabled.

DAR Example 2

Dar -- "DAR Encryption key needs backup"

DAR Suggested Action 2


Issue the controlencryption backup command to generate a password-enabled backup file.
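A minimal sketch of that backup, assuming the command accepts a target file name as its argument and
prompts for a password; the path shown is illustrative only and the exact syntax should be confirmed in
the CLI help:

cli% controlencryption backup /tmp/dar_keystore.bak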

Date
Checks the date and time on all nodes.
Format of Possible Date Exception Messages

Date -- "Date is not the same on all nodes"

Date Example

Component -Identifier- -----------Description-----------


Date -- Date is not the same on all nodes

Date Suggested Action


The time on the nodes should stay synchronized whether or not an NTP server is configured. Use the
showdate command to see if a node is out of sync. Use the shownet and shownet -d commands to view
network and NTP information.

cli% showdate
Node Date
0 2010-09-08 10:56:41 PDT (America/Los_Angeles)
1 2010-09-08 10:56:39 PDT (America/Los_Angeles)

cli% shownet
IP Address Netmask/PrefixLen Nodes Active Speed
192.168.56.209 255.255.255.0 0123 0 100
Duplex AutoNeg Status
Full Yes Active

Default route: 192.168.56.1


NTP server : 192.168.56.109

File
Displays file system conditions that are not optimal.

Checks for the following:
• The presence of special files on each node, for example:

  touch /common/touchfiles/manualstartup

• That the persistent repository (Admin VV) is mounted


• Whether the file-systems on any node disk are close to full
• The presence of any HBA core files or user process dumps
• Whether the amount of free node memory is sufficient
• Whether the on-node System reporter volume is mounted and is not 100% full
Format of Possible File Exception Messages

File node:<node> "Behavior altering file "<file> " exists created on


<filetime>"
File node:master "Admin Volume is not mounted"
File node:<node> "Filesystem <filesys> mounted on "<mounted_on>" is
over xx% full" (Warnings are given at 80, 90, and 95%.)
File node:<node> "Dump or HBA core files found"
File -- "An online upgrade is in progress"
File node:<node> "sr_mnt is full"
File node:<node> "sr_mnt not mounted"

File Example 1

File node:2 Behavior altering file "manualstartup" exists created on


Oct 7 14:16

File Suggested Action 1


After understanding why the file exists, remove the file to prevent unwanted behavior. As root on the
node, use the UNIX rm command to remove the file.
A known issue is that some undesirable touch files are not detected (bug 45661).
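For instance, using the touch-file path shown earlier in this section (the prompt is illustrative; run as root
on the node that reports the file):

root@<node>:~# rm /common/touchfiles/manualstartup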
File Example 2

Component -----------Description----------- Qty


File Admin Volume is not mounted 1

File Suggested Action 2


Each node has a file system link so the admin volume can be mounted if the node is the master node.
This exception is reported if the link is missing or if the System Manager (sysmgr) is not running at the
time. For example, sysmgr may have been restarted manually, due to an error, or during a change of
master nodes. If sysmgr is restarted, it attempts to remount the admin volume every few minutes.

Every node should have the following file system link so that the admin volume can be mounted, if the
node becomes the master node:

root@1001356-1~# onallnodes ls -l /dev/tpd_vvadmin


Node 0:
lrwxrwxrwx 1 root root 12 Oct 23 09:53 /dev/tpd_vvadmin -> tpddev/vvb/0
Node 1:
ls: /dev/tpd_vvadmin: No such file or directory

The corresponding alert when the admin volume is not properly mounted is as follows:

Message Code: 0xd0002


Severity : Minor
Type : PR transition
Message : The PR is currently getting data from the internal drive on
node 1, not the admin volume. Previously recorded alerts will not be
visible until the PR transitions to the admin volume.

If a link for the admin volume is not present, it can be recreated by rebooting the node.
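A hedged illustration of that recovery, using the node from the example above (confirm the
shutdownnode reboot syntax in the CLI help and schedule the reboot appropriately before running it):

cli% shutdownnode reboot 1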
File Example 3

Component -----------Description----------- Qty


File Nodes with Dump or HBA core files 1

Component ----Identifier----- ----Description------


File node:1 Dump or HBA core files found

File Suggested Action 3


This condition may be transient because the Service Processor retrieves the files and cleans up the dump
directory. If the SP is not gathering the dump files, check the condition and state of the SP.

LD
Checks the following and displays logical disks (LDs) that are not optimal:
• Preserved LDs
• Verifies that the current and configured availability are the same
• Owner and backup assignments
• Verifies that the preserved data space (pdsld) is the same as the total data cache
• Size and number of logging LDs

Format of Possible LD Exception Messages

LD ld:<ldname> "LD is not mapped to a volume"


LD ld:<ldname> "LD is in write-through mode"
LD ld:<ldname> "LD has <X> preserved RAID sets and <Y> preserved chunklets"
LD ld:<ldname> "LD has reduced availability. Current: <cavail>,
Configured: <avail>"
LD ld:<ldname> "LD does not have a backup"
LD ld:<ldname> "LD does not have owner and backup"
LD ld:<ldname> "Logical Ddisk is owned by <owner>, but preferred owner is
<powner>"
LD ld:<ldname> "Logical Disk is backed by <backup>, but preferred backup is
<pbackup>"
LD ld:<ldname> "A logging LD is smaller than 20G in size"
LD ld:<ldname> "Detailed State:<ldstate>" (degraded or failed)
LD -- "Number of logging LD's does not match number of nodes in the
cluster"
LD -- "Preserved data storage space does not equal total node's Data
memory"

LD Example 1

Component -------Description-------- Qty


LD LDs not mapped to a volume 10

Component -Identifier-- --------Description---------


LD ld:Ten.usr.0 LD is not mapped to a volume

LD Suggested Action 1
Examine the identified LDs using the following CLI commands: showld, showld -d, showldmap, and
showvvmap.
LDs are normally mapped to (used by) VVs, but they can become disassociated from a VV if the VV is
deleted without the underlying LDs being deleted, or by an aborted tune operation. Normally, you would
remove the unmapped LD to return its chunklets to the free pool.

cli% showld Ten.usr.0


Id Name RAID -Detailed_State- Own SizeMB UsedMB Use Lgct LgId WThru MapV
88 Ten.usr.0 0 normal 0/1/2/3 8704 0 V 0 --- N N

cli% showldmap Ten.usr.0


Ld space not used by any vv

LD Example 2

Component -------Description-------- Qty


LD LDs in write through mode 3

Component -Identifier-- --------Description---------


LD ld:Ten.usr.12 LD is in write-through mode

LD Suggested Action 2
Examine the identified LDs for failed or missing disks by using the following CLI commands: showld,
showld -d, showldch, and showpd. Write-through mode (WThru) indicates that host I/O operations
must be written through to the disk before the host I/O command is acknowledged. This is usually due to
a node-down condition, node batteries that are not working, or disk redundancy that is not optimal.

cli% showld Ten*


Id Name RAID -Detailed_State- Own SizeMB UsedMB Use Lgct LgId WThru MapV
91 Ten.usr.3 0 normal 1/0/3/2 13824 0 V 0 --- N N
92 Ten.usr.12 0 normal 2/3/0/1 28672 0 V 0 --- Y N

cli% showldch Ten.usr.12


Ldch Row Set PdPos Pdid Pdch State Usage Media Sp From To
0 0 0 3:3:0 108 6 normal ld valid N --- ---
11 0 11 --- 104 74 normal ld valid N --- ---

cli% showpd 104


-Size(MB)-- ----Ports----
Id CagePos Type Speed(K) State Total Free A B
104 4:9:0? FC 15 failed 428800 0 ----- -----

LD Example 3

Component ---------Description--------- Qty


LD LDs with reduced availability 1

Component --Identifier-- ------------Description---------------


LD ld:R1.usr.0 LD has reduced availability. Current: ch,
Configured: cage

LD Suggested Action 3
LDs are created with certain high-availability characteristics, such as ha-cage. Reduced availability can
occur if chunklets in an LD are moved to a location where the current availability (CAvail) is below the
desired level of availability (Avail). Chunklets may have been manually moved with movech or by
specifying it during a tune operation or during failure conditions such as node, path, or cage failures. The
HA levels from highest to lowest are port, cage, mag, and ch (disk).
Examine the identified LDs for failed or missing disks by using the following CLI commands: showld,
showld –d, showldch, and showpd. In the example below, the LD should have cage-level availability,
but it currently has chunklet (disk) level availability (the chunklets are on the same disk).

cli% showld -d R1.usr.0


Id Name CPG RAID Own SizeMB RSizeMB RowSz StepKB SetSz Refcnt Avail CAvail
32 R1.usr.0 --- 1 0/1/3/2 256 512 1 256 2 0 cage ch

cli% showldch R1.usr.0


Ldch Row Set PdPos Pdid Pdch State Usage Media Sp From To
0 0 0 0:1:0 4 0 normal ld valid N --- ---
1 0 0 0:1:0 4 55 normal ld valid N --- ---

LD Example 4

Component -Identifier-- -----Description-------------


LD -- Preserved data storage space does not equal total
node's Data memory

LD Suggested Action 4
Preserved data LDs (pdsld) are created during system initialization (Out-of-the-Box, OOTB) and after
some hardware upgrades (through the admithw command). The total size of the pdslds should match the
total size of all data cache in the storage system (see below). This message also appears if a node is
offline, because the pdsld size then does not match the data cache size; it can be ignored unless all
nodes are online. If all nodes are online and the error condition persists, determine the cause of the
failure and use the admithw command to correct the condition.

cli% shownode
Control Data
Cache
Node --Name--- -State- Master InCluster ---LED--- Mem(MB) Mem(MB)
Available(%)
0 1001335-0 OK Yes Yes GreenBlnk 2048 4096
100
1 1001335-1 OK No Yes GreenBlnk 2048 4096
100

cli% showld pdsld*


Id Name RAID -Detailed_State- Own SizeMB UsedMB Use Lgct LgId WThru MapV
19 pdsld0.0 1 normal 0/1 256 0 P,F 0 --- Y N
20 pdsld0.1 1 normal 0/1 7680 0 P 0 --- Y N
21 pdsld0.2 1 normal 0/1 256 0 P 0 --- Y N
----------------------------------------------------------------------------
3 8192 0

License
Displays license violations.
Format of Possible License Exception Messages

License <feature_name> "License has expired"

License Example

Component -Identifier- --------Description-------------


License -- System Tuner License has expired

License Suggested Action


Request a new or updated license from your Sales Engineer.

Network
Displays Ethernet issues for administrative and Remote Copy over IP (RCIP) networks that have been
logged in the previous 24 hours. Also reports if the storage system has fewer than two nodes with working
administrative Ethernet connections.
• Check the number of collisions in the previous day's log. The number of collisions should be less than
5% of the total packets for the day.
• Check for Ethernet errors and transmit (TX) or receive (RX) errors in the previous day's log.

Format of Possible Network Exception Messages

Network -- "IP address change has not been completed"


Network "Node<node>:<type>" "Errors detected on network"
Network "Node<node>:<type>" "There is less than one day of network history
for this node"
Network -- "No nodes have working admin network connections"
Network -- "Node <node> has no admin network link detected"
Network -- "Nodes <nodelist> have no admin network link detected"
Network -- "checkhealth was unable to determine admin link status

Network Example 1

Network -- "IP address change has not been completed"

Network Suggested Action 1


The setnet command is issued to change some network parameter, such as the IP address, but the
action has not completed. Use setnet finish to complete the change, or setnet abort to cancel.
Use the shownet command to examine the current condition.

cli% shownet
IP Address Netmask/PrefixLen Nodes Active Speed Duplex AutoNeg
Status
192.168.56.209 255.255.255.0 0123 0 100 Full Yes
Changing
192.168.56.233 255.255.255.0 0123 0 100 Full Yes
Unverified
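To complete the pending change shown in the output above, for example:

cli% setnet finish

or, to cancel it:

cli% setnet abort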

Network Example 2

Component ---Identifier---- -----Description----------


Network Node0:Admin Errors detected on network

Network Suggested Action 2


Network errors have been detected on the specified node and network interface. Commands such as
shownet and shownet -d are useful for troubleshooting network problems. These commands display
current network counters, whereas checkhealth shows errors from the last logging sample.

NOTE:
The error counters shown by shownet and shownet -d cannot be cleared except by rebooting a
controller node. Because checkhealth shows network counters from a history log, checkhealth
stops reporting the issue if there is no increase in errors in the next log entry.

shownet -d
IP Address: 192.168.56.209 Netmask 255.255.255.0
Assigned to nodes: 0123
Connected through node 0
Status: Active

Admin interface on node 0


MAC Address: 00:02:AC:25:04:03
RX Packets: 1225109 TX Packets: 550205
RX Bytes: 1089073679 TX Bytes: 568149943
RX Errors: 0 TX Errors: 0
RX Dropped: 0 TX Dropped: 0
RX FIFO Errors: 0 TX FIFO Errors: 0
RX Frame Errors: 60 TX Collisions: 0
RX Multicast: 0 TX Carrier Errors: 0
RX Compressed: 0 TX Compressed: 0

Node
Checks the following node conditions and displays nodes that are not optimal:
• Verifies node batteries have been tested in the last 30 days
• Offline nodes
• Power supply and battery problems
The following checks are performed only if the -svc option is used.
• Checks for symmetry of components between nodes such as Control-Cache and Data-Cache size, OS
version, bus speed, MCU version, and CPU speed
• Checks if diagnostics such as ioload are running on any of the nodes
• Checks for stuck-threads, such as I/O operations that cannot complete
Format of Possible Node Exception Messages

Node node:<nodeID> "Node is not online"


Node node:<nodeID> "Power supply <psID> detailed state is <status>
Node node:<nodeID> "Power supply <psID> AC state is <acStatus>"
Node node:<nodeID> "Power supply <psID> DC state is <dcStatus>"
Node node:<nodeID> "Power supply <psID> battery is <batStatus>"
Node node:<nodeID> "Node <nodeID> battery is <batStatus>"
Node node:<priNodeID> "<bat> has not been tested within the last 30 days"
Node node:<nodeID> "Node <nodeID> battery is expired"
Node node:<nodeID> "Power supply <psID> is expired"
Node node:<nodeID> "Fan is <fanID> is <status>"
Node node:<nodeID> "Power supply <psID> fan module <fanID> is <status>"
Node node:<nodeID> "Fan module <fanID> is <status>
Node node:<nodeID> "Detailed State <state>" (degraded or failed)

The following checks are performed when the -svc option is used:

Node -- "BIOS version is not the same on all nodes"


Node -- "NEMOE version is not the same on all nodes"
Node -- "Control memory is not the same on all nodes"
Node -- "Data memory is not the same on all nodes"
Node -- "CPU Speed is not the same on all nodes"
Node -- "CPU Bus Speed is not the same on all nodes"
Node -- "HP 3PAR OS version is not the same on all nodes"
Node node:<nodenum> "Flusher speed set incorrectly to: <speeed>" (should be
0)
Node node:<nodenum> "Environmental factor <factor> is <state>" (DDR2,
Node), (UNDER LIMIT, OVER LIMIT)
Node node:<node> "Ioload is running"
Node node:<node> "Node has less than 100MB of free memory"
Node node:<node> "BIOS skip mask is <skip_mask>"
Node node:<node> "quo_cex_flags are not set correctly"
Node node:<node> "clus_upgr_group state is not set correctly"
Node node:<node> "clus_upgr_state is not set correctly"
Node node:<node> "Process <processID> has reached 90% of maximum size"
Node node:<node> "VV <vvID> has outstanding <command> with a maximum wait
time of <sleeptime>"
Node -- "There is at least one active servicenode operation in progress"

Node Suggested Action


For node error conditions, examine the node and node-component states by using the following
commands: shownode, shownode -s, shownode -d, showbattery, and showsys -d.

Node Example 1

Component -Identifier- ---------------Description----------------


Node node:0 Power supply 1 detailed state is DC Failed
Node node:0 Power supply 1 DC state is Failed
Node node:1 Power supply 0 detailed state is AC Failed
Node node:1 Power supply 0 AC state is Failed
Node node:1 Power supply 0 DC state is Failed

Node Suggested Action 1


Examine the states of the power supplies with commands such as shownode, shownode -s, shownode
-ps. Turn on or replace the failed power supply.

NOTE:
In the example below, the battery state is considered degraded because the power supply is failed.

cli% shownode
Control Data
Cache
Node --Name--- -State-- Master InCluster ---LED--- Mem(MB) Mem(MB)
Available(%)
0 1001356-0 Degraded Yes Yes AmberBlnk 2048 8192
100
1 1001356-1 Degraded No Yes AmberBlnk 2048 8192
100

cli% shownode -s
Node -State-- -Detailed_State-
0 Degraded PS 1 Failed
1 Degraded PS 0 Failed

cli% shownode -ps


Node PS -Serial- -PSState- FanState ACState DCState -BatState- ChrgLvl(%)
0 0 FFFFFFFF OK OK OK OK OK 100
0 1 FFFFFFFF Failed -- OK Failed Degraded 100
1 0 FFFFFFFF Failed -- Failed Failed Degraded 100
1 1 FFFFFFFF OK OK OK OK OK 100

Node Example 2

Component -Identifier- ---------Description------------


Node node:3 Power supply 1 battery is Failed

Node Suggested Action 2


Examine the state of the battery and power supply by using the following commands: shownode,
shownode -s, shownode -ps, showbattery (and showbattery with -d, -s, -log). Turn on, fix, or
replace the battery backup unit.

NOTE:
The condition of the degraded power supply is caused by the failing battery. The degraded PS state
is not the expected behavior. This issue will be fixed in a future release. (bug 46682).

cli% shownode
Control Data
Cache
Node --Name--- -State-- Master InCluster ---LED--- Mem(MB) Mem(MB)
Available(%)
2 1001356-2 OK No Yes GreenBlnk 2048 8192
100
3 1001356-3 Degraded No Yes AmberBlnk 2048 8192
100

cli% shownode -s
Node -State-- -Detailed_State-
2 OK OK
3 Degraded PS 1 Degraded

cli% shownode -ps


Node PS -Serial- -PSState- FanState ACState DCState -BatState- ChrgLvl(%)
2 0 FFFFFFFF OK OK OK OK OK 100
2 1 FFFFFFFF OK OK OK OK OK 100
3 0 FFFFFFFF OK OK OK OK OK 100
3 1 FFFFFFFF Degraded OK OK OK Failed 0

cli% showbattery
Node PS Bat Serial -State-- ChrgLvl(%) -ExpDate-- Expired Testing
3 0 0 100A300B OK 100 07/01/2011 No No
3 1 0 12345310 Failed 0 04/07/2011 No No

Node Example 3

Component -Identifier- --------------Description----------------


Node node:3 Node:3, Power Supply:1, Battery:0 has not been
tested within the last 30 days

Node Suggested Action 3


The indicated battery has not been tested in 30 days. A node backup battery is tested every 14 days
under normal conditions. If the main battery is missing, expired, or failed, the backup battery is not tested.
A backup battery connected to the same node is not tested because testing it can cause loss of power to
the node. An untested battery has an unknown status in the showbattery -s output. Use the following
commands: showbattery, showbattery -s, and showbattery -d.

showbattery -s
Node PS Bat -State-- -Detailed_State-
0 0 0 OK normal
0 1 0 Degraded Unknown

Examine the date of the last successful test of that battery. Assuming the current date is 2009-10-14,
the last battery test on Node 0, PS 1, Bat 0 was 2009-09-10, which is more than 30 days ago.

showbattery -log
Node PS Bat Test Result Dur(mins) ---------Time----------
0 0 0 0 Passed 1 2009-10-14 14:34:50 PDT
0 0 0 1 Passed 1 2009-10-28 14:36:57 PDT
0 1 0 0 Passed 1 2009-08-27 06:17:44 PDT
0 1 0 1 Passed 1 2009-09-10 06:19:34 PDT

showbattery
Node PS Bat Serial -State-- ChrgLvl(%) -ExpDate-- Expired Testing
0 0 0 83205243 OK 100 04/07/2011 No No
0 1 0 83202356 Degraded 100 04/07/2011 No No

Node Example 4

Component ---Identifier---- -----Description----------


Node node:0 Ioload is running

Node Suggested Action 4


This output appears only if the -svc option of checkhealth is used; it is not displayed for
the non-service check. It indicates that a disk diagnostic stress test was detected running on the node,
and the test can affect node performance. After installing the HPE 3PAR Storage System, diagnostic
stress tests exercise the disks for up to two hours following the initial setup (OOTB). If the stress test is
detected within three hours of the initial setup, disregard the warning. If the test is detected later, the test
may have been manually started. Investigate the operation and contact Hewlett Packard Enterprise
Support.
From a node's root login prompt, check the UNIX processes for ioload:

root@1001356-0 Tue Nov 03 13:37:31:~# onallnodes ps -ef |grep ioload


root 13384 1 2 13:36 ttyS0 00:00:01 ioload -n -c 2 -t 20000 -
i 256 -o 4096 /dev/tpddev/pd/100

PD
Displays physical disks with states or conditions that are not optimal:
• Checks for failed and degraded PDs
• Checks for an imbalance of PD ports, for example, if Port-A is used on more disks than Port-B
• Checks for an Unknown sparing algorithm
• Checks for disks experiencing a high number of IOPS
• Reports if a servicemag operation is outstanding (servicemag status)
• Reports if there are PDs that do not have entries in the firmware DB file
Format of Possible PD Exception Messages

PD disk:<pdid> "Degraded States: <showpd -s -degraded">


PD disk:<pdid> "Failed States: <showpd -s -failed">
PD -- "There is an imbalance of active PD ports"
PD -- "Sparing algorithm is not set"
PD disk:<pdid> "Disk is experiencing a high level of I/O per second:
<iops>"
PD -- There is at least one active servicemag operation in progress

The following checks are performed when the -svc option is used, or on 7400/7200 hardware:

PD File: <filename> "Folder not found on all Nodes in <folder>"


PD File: <filename> "Folder not found on some Nodes in <folder>"
PD File: <filename> "File not found on all Nodes in <folder>"
PD File: <filename> "File not found on some Nodes in <folder>"
PD Disk:<pdID> "<pdmodel> PD for cage type <cagetype> in cage position
<pos> is missing from firmware database"

PD Example 1

Component -------------------Description------------------- Qty


PD PDs that are degraded or failed 40

Component -Identifier- ---------------Description-----------------


PD disk:48 Detailed State:
missing_B_port,loop_failure
PD disk:49 Detailed State:
missing_B_port,loop_failure
...
PD disk:107 Detailed State: failed,notready,missing_A_port

PD Suggested Action 1
Both degraded and failed disks are reported. When an FC path to a drive cage is not working, all disks in
the cage have a degraded state due to the non-redundant condition. To further diagnose, use the
following commands: showpd, showpd -s, showcage, showcage -d, showport -sfp.

cli% showpd -degraded -failed
----Size(MB)---- ----Ports----
Id CagePos Type Speed(K) State Total Free A B
48 3:0:0 FC 10 degraded 139520 115200 2:0:4* -----
49 3:0:1 FC 10 degraded 139520 121344 2:0:4* -----

107 4:9:3 FC 15 failed 428800 0 ----- 3:2:1*

cli% showpd -s -degraded -failed


Id CagePos Type -State-- -----------------Detailed_State--------------
48 3:0:0 FC degraded missing_B_port,loop_failure
49 3:0:1 FC degraded missing_B_port,loop_failure

107 4:9:3 FC failed prolonged_not_ready,missing_A_port,relocating

cli% showcage -d cage3


Id Name LoopA Pos.A LoopB Pos.B Drives Temp RevA RevB Model Side
3 cage3 2:0:4 0 --- 0 32 28-39 2.37 2.37 DC2 n/a

-----------Cage detail info for cage3 ---------


Fibre Channel Info PortA0 PortB0 PortA1 PortB1
Link_Speed 2Gbps -- -- 0Gbps

----------------------------------SFP Info-----------------------------------
FCAL SFP -State- --Manufacturer-- MaxSpeed(Gbps) TXDisable TXFault RXLoss
DDM
0 0 OK SIGMA-LINKS 2.1 No No No
Yes
1 1 OK SIGMA-LINKS 2.1 No No Yes
Yes

Interface Board Info FCAL0 FCAL1


Link A RXLEDs Green Off
Link A TXLEDs Green Off
Link B RXLEDs Off Off
Link B TXLEDs Off Green
LED(Loop_Split) Off Off
LEDS(system,hotplug) Green,Off Green,Off

-------------Drive Info------------- ----LoopA----- ----LoopB-----


Drive NodeWWN LED Temp(C) ALPA LoopState ALPA LoopState
0:0 20000014c3b3eab9 Green 34 0xe1 OK 0xe1 Loop fail
0:1 20000014c3b3e708 Green 36 0xe0 OK 0xe0 Loop fail

PD Example 2

Component --Identifier-- --------------Description---------------


PD -- There is an imbalance of active pd ports

PD Suggested Action 2
The primary and secondary I/O paths for disks (PDs) are balanced between nodes. The primary path is
indicated in the showpd -path output and by an asterisk in the showpd output. An imbalance of active
ports is usually caused by a nonfunctional path/loop to a cage, or because an odd number of drives is
installed or detected. To further diagnose, use the following commands: showpd, showpd -path,
showcage, and showcage -d.

cli% showpd
----Size(MB)----- ----Ports----
Id CagePos Type Speed(K) State Total Free A B
0 0:0:0 FC 10 normal 139520 119040 0:0:1* 1:0:1
1 0:0:1 FC 10 normal 139520 121600 0:0:1 1:0:1*
2 0:0:2 FC 10 normal 139520 119040 0:0:1* 1:0:1
3 0:0:3 FC 10 normal 139520 119552 0:0:1 1:0:1*
...
46 2:9:2 FC 10 normal 139520 112384 2:0:3* 3:0:3
47 2:9:3 FC 10 normal 139520 118528 2:0:3 3:0:3*
48 3:0:0 FC 10 degraded 139520 115200 2:0:4* -----
49 3:0:1 FC 10 degraded 139520 121344 2:0:4* -----
50 3:0:2 FC 10 degraded 139520 115200 2:0:4* -----
51 3:0:3 FC 10 degraded 139520 121344 2:0:4* -----

cli% showpd -path


-----------Paths-----------
Id CagePos Type -State-- A B Order
0 0:0:0 FC normal 0:0:1 1:0:1 0/1
1 0:0:1 FC normal 0:0:1 1:0:1 1/0
2 0:0:2 FC normal 0:0:1 1:0:1 0/1
3 0:0:3 FC normal 0:0:1 1:0:1 1/0
...
46 2:9:2 FC normal 2:0:3 3:0:3 2/3
47 2:9:3 FC normal 2:0:3 3:0:3 3/2
48 3:0:0 FC degraded 2:0:4 3:0:4\missing 2/-
49 3:0:1 FC degraded 2:0:4 3:0:4\missing 2/-
50 3:0:2 FC degraded 2:0:4 3:0:4\missing 2/-
51 3:0:3 FC degraded 2:0:4 3:0:4\missing 2/-

cli% showcage -d cage3


Id Name LoopA Pos.A LoopB Pos.B Drives Temp RevA RevB Model Side
3 cage3 2:0:4 0 --- 0 32 29-41 2.37 2.37 DC2 n/a

-----------Cage detail info for cage3 ---------

Fibre Channel Info PortA0 PortB0 PortA1 PortB1


Link_Speed 2Gbps -- -- 0Gbps

----------------------------------SFP Info-----------------------------------
FCAL SFP -State- --Manufacturer-- MaxSpeed(Gbps) TXDisable TXFault RXLoss
DDM
0 0 OK SIGMA-LINKS 2.1 No No No
Yes
1 1 OK SIGMA-LINKS 2.1 No No Yes
Yes

Interface Board Info FCAL0 FCAL1


Link A RXLEDs Green Off
Link A TXLEDs Green Off
Link B RXLEDs Off Off

Link B TXLEDs Off Green
LED(Loop_Split) Off Off
LEDS(system,hotplug) Green,Off Green,Off
...
-------------Drive Info------------- ----LoopA----- ----LoopB-----
Drive NodeWWN LED Temp(C) ALPA LoopState ALPA LoopState
0:0 20000014c3b3eab9 Green 35 0xe1 OK 0xe1 Loop fail
0:1 20000014c3b3e708 Green 38 0xe0 OK 0xe0 Loop fail
0:2 20000014c3b3ed17 Green 35 0xdc OK 0xdc Loop fail
0:3 20000014c3b3dabd Green 30 0xda OK 0xda Loop fail

PD Example 3

Component -------------------Description------------------- Qty


PD Disks experiencing a high level of I/O per second 93

Component --Identifier-- ---------Description----------


PD disk:100 Disk is experiencing a high level of I/O per
second: 789.0

PD Suggested Action 3
This check samples the I/O per second (IOPS) information in statpd to see if any disks are being
overworked, and then it samples again after five seconds. This does not necessarily indicate a problem,
but it could negatively affect system performance. The IOPS thresholds currently set for this condition are
listed:
• NL disks < 75
• FC 10K RPM disks < 150
• FC 15K RPM disks < 200
• SSD < 1500
Operations such as servicemag and tunevv can cause this condition. If the IOPS rate is very high
and/or a large number of disks are experiencing very heavy I/O, examine the system further using
statistical monitoring commands/utilities such as statpd, the OS MC (GUI), and System Reporter. The
following example shows a report for a disk whose total IOPS is 150 or more.

cli% statpd -filt curs,t,iops,150


14:51:49 11/03/09 r/w I/O per second KBytes per sec ... Idle %
ID Port Cur Avg Max Cur Avg Max ... Cur Avg
100 3:2:1 t 658 664 666 172563 174007 174618 ... 6 6

PD Example 4

Component --Identifier-- -------Description----------


PD disk:3 Detailed State: old_firmware

PD Suggested Action 4
The identified disk does not have firmware that the storage system considers current. When a disk is
replaced, the servicemag operation should upgrade the disk's firmware. When disks are installed or
added to a system, the admithw command can perform the firmware upgrade. Check the state of the
disk by using CLI commands such as showpd -s, showpd -i, and showfirmwaredb.

cli% showpd -s 3
Id CagePos Type -State-- -Detailed_State-
3 0:4:0 FC degraded old_firmware

cli% showpd -i 3
Id CagePos State ----Node_WWN---- --MFR-- ---Model--- -Serial- -FW_Rev-
3 0:4:0 degraded 200000186242DB35 SEAGATE ST3146356FC 3QN0290H XRHJ

cli% showfirmwaredb
Vendor Prod_rev Dev_Id Fw_status Cage_type
...
SEAGATE [XRHK] ST3146356FC Current DC2.DC3.DC4
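As described above, admithw can then bring the disk firmware current (it also performs other admission
checks, so run it only when that is acceptable):

cli% admithw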

PD Example 5

Component --Identifier-- -------Description----------


PD -- Sparing Algorithm is not set

PD Suggested Action 5
Check the system’s Sparing Algorithm value using the CLI command showsys -param. The value is
normally set during the initial installation (OOTB). If it must be set later, use the command setsys
SparingAlgorithm; valid values are Default, Minimal, Maximal, and Custom. After setting the
parameter, use the admithw command to programmatically create and distribute the spare chunklets.

% showsys -param
System parameters from configured settings

----Parameter----- --Value--
RawSpaceAlertFC : 0
RawSpaceAlertNL : 0
RemoteSyslog : 0
RemoteSyslogHost : 0.0.0.0
SparingAlgorithm : Unknown
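A minimal sketch of the corrective sequence described above (the value Default is chosen for
illustration only):

cli% setsys SparingAlgorithm Default
cli% admithw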

PD Example 6

Component --Identifier-- -------Description----------


PD Disk:32 ST3400755FC PD for cage type DC3 in cage position 2:0:0 is
missing from the firmware database

PD Suggested Action 6
Check the release notes for mandatory updates and patches. Install updates and patches to HPE 3PAR
OS as needed to support the PD in the cage.

PDCH
Checks for Physical Disk Chunklets (PDCH) with states that are not optimal.
• Verifies that chunklets are not used by multiple LDs.
• Checks for media-failed chunklets.
• Verifies that LD ownership is the same as the physical connection.
Format of Possible PDCH Exception Messages

pdch ch:<pdid> "Chunklet is on a remote disk"


pdch LD:<ldid> "LD has <count> remote chunklets
pdch LD:<ldid> "Connection path is not the same as LD ownership"
pdch ch:<initpdid>:<initpdpos> "Chunklet used previously by multiple LDs"
pdch ch:<initpdid>:<initpdpos> "Chunklet used previously by LD <ldname>
(ch: <id>: <pdch), currently used by LD <ldname>"

PDCH Example 1

Component ------------Description------------ Qty


pdch LDs with chunklets on a remote disk 3

Component -Identifier- -------Description--------


pdch ld:19 LD has 3 remote chunklets
pdch ld:20 LD has 90 remote chunklets
pdch ld:21 LD has 3 remote chunklets

Suggested PDCH Action 1


If the message LD has remote chunklets is for Preserved Data LDs (pdslds), those warnings can be
ignored. See KB Solution 14550 for details. From the example above, LDs 19, 20, and 21 are pdslds and
can be seen from the showld command:

cli% showld
Id Name RAID -Detailed_State- Own SizeMB UsedMB Use Lgct LgId WThru
MapV
19 pdsld0.0 1 normal 1/0 256 0 P,F 0 ---
Y N
20 pdsld0.1 1 normal 1/0 7680 0 P 0 ---
Y N
21 pdsld0.2 1 normal 1/0 256 0 P 0 ---
Y N

PDCH Example 2

Component -------------------Description------------------- Qty


pdch LDs with connection path different than ownership 23
pdch LDs with chunklets on a remote disk 18

Component -Identifier- ---------------Description--------------


pdch LD:35 Connection path is not the same as LD ownership
pdch ld:35 LD has 1 remote chunklet

PDCH Suggested Action 2


The primary I/O paths for disks are balanced between the two nodes that are physically connected to the
drive cage. The node with the primary path to a disk is considered the owning node. If the path of the
secondary node must be used for I/O to the disk, that I/O is considered remote I/O.
These messages usually indicate a node-to-cage FC path problem because the disks (chunklets) are
being accessed through their secondary path. The messages are usually a byproduct of other conditions,
such as drive-cage, node-port, or FC-loop problems, and need to be investigated. If a node is offline due
to a service action, such as a hardware or software upgrade, these exceptions can be ignored until the
action is finished and the node is back online.
In this example, LD 35, with a name of R1.usr.3, is owned (Own) by nodes 3/2/0/1, and the primary and
secondary physical paths to the disks (chunklets) in this LD are from nodes 3 and 2. However, the FC
path (Port B) from node 3 to PD 91 is failed/missing, so node 2 is performing the I/O to PD 91. When the
path from node 3 to cage 3 is fixed (N:S:P 3:0:4 in this example), the condition should disappear.

cli% showld
Id Name RAID -Detailed_State- Own SizeMB UsedMB Use Lgct LgId
WThru MapV
35 R1.usr.3 1 normal 3/2/0/1 256 256 V 0 ---
N Y

cli% showldch R1.usr.3


Ldch Row Set PdPos Pdid Pdch State Usage Media Sp From To
0 0 0 2:2:3 63 0 normal ld valid N --- ---
1 0 0 3:8:3 91 0 normal ld valid N --- ---
cli% showpd 91 63

----Size(MB)---- ----Ports----
Id CagePos Type Speed(K) State Total Free A B
63 2:2:3 FC 10 normal 139520 124416 2:0:3* 3:0:3
91 3:8:3 FC 10 degraded 139520 124416 2:0:4* -----

cli% showpd -s -failed -degraded


Id CagePos Type -State-- ---------------Detailed_State----------
91 3:8:3 FC degraded missing_B_port,loop_failure

cli% showcage
Id Name LoopA Pos.A LoopB Pos.B Drives Temp RevA RevB Model Side
2 cage2 2:0:3 0 3:0:3 0 24 29-42 2.37 2.37 DC2 n/a
3 cage3 2:0:4 0 ----- 0 32 28-40 2.37 2.37 DC2 n/a

Normal condition (after fixing):

cli% showpd 91 63

----Size(MB)---- ----Ports----
Id CagePos Type Speed(K) State Total Free A B
63 2:2:3 FC 10 normal 139520 124416 2:0:3* 3:0:3
91 3:8:3 FC 10 normal 139520 124416 2:0:4 3:0:4*

Port
Checks for the following port connection issues:
• Ports in unacceptable states
• Mismatches in type and mode, such as hosts connected to initiator ports, or host and Remote Copy
over Fibre Channel (RCFC) ports configured on the same FC adapter
• Degraded SFPs and those with low power; this check is performed only if the FC adapter type uses SFPs

Format of Possible Port Exception Messages

Port port:<nsp> "Port mode is in <mode> state"


Port port:<nsp> "is offline"
Port port:<nsp> "Mismatched mode and type"
Port port:<nsp> "Port is <state>"
Port port:<nsp> "SFP is missing"
Port port:<nsp> SFP is <state>" (degraded or failed)
Port port:<nsp> "SFP is disabled"
Port port:<nsp> "Receiver Power Low: Check FC Cable"
Port port:<nsp> "Transmit Power Low: Check FC Cable"
Port port:<nsp> "SFP has TX fault"

Port Suggested Actions


Some specific examples are displayed below, but in general, use the following CLI commands to check
for port SFP errors: showport, showport -sfp, showport -sfp -ddm, showcage, showcage
-sfp, and showcage -sfp -ddm.

Port Example 1

Component ------Description------ Qty


Port Degraded or failed SFPs 1

Component -Identifier- --Description--


Port port:0:0:2 SFP is Degraded

Port Suggested Action 1


An SFP in a node-port is reporting a degraded condition. This is most often caused by the SFP receiver
circuit detecting a low signal level (RX Power Low), which is usually caused by a cable with a poor or
contaminated FC connection. An alert can identify the following condition:

Port 0:0:2, SFP Degraded (Receiver Power Low: Check FC Cable)

Check SFP statistics using CLI commands such as showport -sfp, showport -sfp -ddm,
showcage.

cli% showport -sfp


N:S:P -State-- -Manufacturer- MaxSpeed(Gbps) TXDisable TXFault RXLoss DDM
0:0:1 OK FINISAR_CORP. 2.1 No No No Yes
0:0:2 Degraded FINISAR_CORP. 2.1 No No No Yes

In the following example an RX power level of 361 microwatts (uW) for Port 0:0:1 DDM is a good reading;
and 98 uW for Port 0:0:2 is a weak reading (< 100 uW). Normal RX power level readings are 200-400 uW.

cli% showport -sfp -ddm
--------------Port 0:0:1 DDM--------------
-Warning- --Alarm--
--Type-- Units Reading Low High Low High
Temp C 41 -20 90 -25 95
Voltage mV 3217 2900 3700 2700 3900
TX Bias mA 7 2 14 1 17
TX Power uW 330 79 631 67 631
RX Power uW 361 15 794 10 1259

--------------Port 0:0:2 DDM--------------


-Warning- --Alarm--
--Type-- Units Reading Low High Low High
Temp C 40 -20 90 -25 95
Voltage mV 3216 2900 3700 2700 3900
TX Bias mA 7 2 14 1 17
TX Power uW 335 79 631 67 631
RX Power uW 98 15 794 10 1259

cli% showcage
Id Name LoopA Pos.A LoopB Pos.B Drives Temp RevA RevB Model Side
0 cage0 0:0:1 0 1:0:1 0 15 33-38 08 08 DC3 n/a
1 cage1 --- 0 1:0:2 0 15 30-38 08 08 DC3 n/a

cli% showpd -s
Id CagePos Type -State-- -Detailed_State-
1 0:2:0 FC normal normal
...
13 1:1:0 NL degraded missing_A_port
14 1:2:0 FC degraded missing_A_port

cli% showpd -path


---------Paths---------
Id CagePos Type -State-- A B Order
1 0:2:0 FC normal 0:0:1 1:0:1 0/1
...
13 1:1:0 NL degraded 0:0:2\missing 1:0:2 1/-
14 1:2:0 FC degraded 0:0:2\missing 1:0:2 1/-

Port Example 2

Component -Description- Qty


Port Missing SFPs 1

Component -Identifier- -Description--


Port port:0:3:1 SFP is missing

Port Suggested Action 2
FC node-ports that normally contain SFPs will report an error if the SFP has been removed. The condition
can be checked using the showport -sfp command. In this example, the SFP in 0:3:1 has been
removed from the adapter:

cli% showport -sfp


N:S:P -State- -Manufacturer- MaxSpeed(Gbps) TXDisable TXFault RXLoss DDM
0:0:1 OK FINISAR_CORP. 2.1 No No No Yes
0:0:2 OK FINISAR_CORP. 2.1 No No No Yes
0:3:1 - - - - - - -
0:3:2 OK FINISAR_CORP. 2.1 No No No Yes

Port Example 3

Component -Description- Qty


Port Disabled SFPs 1

Component -Identifier- --Description--


Port port:3:5:1 SFP is disabled

Port Suggested Action 3


A node-port SFP will be disabled if the port has been placed offline using the controlport offline
command. See Example 4.

cli% showport -sfp


N:S:P -State- -Manufacturer- MaxSpeed(Gbps) TXDisable TXFault RXLoss DDM
3:5:1 OK FINISAR_CORP. 4.1 Yes No No Yes
3:5:2 OK FINISAR_CORP. 4.1 No No No Yes

Port Example 4

Component -Description- Qty


Port Offline ports 1

Component -Identifier- --Description--


Port port:3:5:1 is offline

Port Suggested Action 4


Check the state of the port with showport. If a port is offline, it was deliberately placed in that state by
using the controlport offline command. Offline ports can be restored using controlport rst.

cli% showport
N:S:P Mode State ----Node_WWN---- -Port_WWN/HW_Addr- Type
3:5:1 target offline 2FF70002AC00054C 23510002AC00054C free
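For example, to return the port shown above to service (port ID taken from this example):

cli% controlport rst 3:5:1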

Port Example 5

Component ------------Description------------ Qty


Port Ports with mismatched mode and type 1

Component -Identifier- ------Description-------


Port port:2:0:3 Mismatched mode and type

Port Suggested Action 5


The output indicates that the port's mode, such as initiator or target, is not correct for the connection
type, such as disk, host, iSCSI, or RCFC. Useful CLI commands include: showport, showport -c,
showport -par, showport -rcfc, and showcage.

cli% showport
N:S:P Mode State ----Node_WWN---- -Port_WWN/HW_Addr- Type
2:0:1 initiator ready 2FF70002AC000591 22010002AC000591 disk
2:0:2 initiator ready 2FF70002AC000591 22020002AC000591 disk
2:0:3 target ready 2FF70002AC000591 22030002AC000591 disk
2:0:4 target loss_sync 2FF70002AC000591 22040002AC000591 free

Component -Identifier- ------Description-------


Port port:0:1:1 Mismatched mode and type

cli% showport
N:S:P Mode State ----Node_WWN---- -Port_WWN/HW_Addr- Type
0:1:1 initiator ready 2FF70002AC000190 20110002AC000190 rcfc
0:1:2 initiator loss_sync 2FF70002AC000190 20120002AC000190 free
0:1:3 initiator loss_sync 2FF70002AC000190 20130002AC000190 free
0:1:4 initiator loss_sync 2FF70002AC000190 20140002AC000190 free

Port Example 6

Component -----------Description------------------------- Qty


Port Ports with increasing CRC error counts 2

Component -Identifier- ------Description-----------


Port port:3:2:1 Port or devices attached to port have experienced CRC
                errors within the last day

Port Suggested Action 6


Check the fibre channel error counters for the port using the CLI commands showportlesb single
and showportlesb hist. Devices with high InvCRC values are receiving bad packets from an
upstream device (disk, HBA, SFP, or cable).

cli% showportlesb single 3:2:1
ID ALPA ----Port_WWN---- LinkFail LossSync LossSig InvWord InvCRC
<3:2:1> 0x1 23210002AC00054C 20697 2655432 20700 37943749 1756
pd107 0xa3 2200001D38C28AA3 0 157 0 1129 0
pd106 0xa5 2200001D38C0D01E 0 279 0 1551 0

Port Example 7

Component -----------Description------------------------- Qty


Port Ports with increasing CRC error counts 2

Component -Identifier- ------Description-----------


Port port:2:2:1 CRC errors have been increasing by more than one per day
                over the past week

Port Suggested Action 7


Check the fibre channel error counters for the port using the CLI commands showportlesb single
and showportlesb hist.
The message "CRC errors have been increasing … over the past week" comes from a check of the daily
port-LESB history as seen in showportlesb hist. If the error condition is corrected, checkhealth
port may continue to report the error until the next daily update is stored. The checkhealth port
should stop reporting within 24 hours after the CRC counter stops counting.

cli% showportlesb single 3:2:1


ID ALPA ----Port_WWN---- LinkFail LossSync LossSig InvWord InvCRC
<3:2:1> 0x1 23210002AC00054C 20697 2655432 20700 37943749 1756
pd107 0xa3 2200001D38C28AA3 0 157 0 1129 0
pd106 0xa5 2200001D38C0D01E 0 279 0 1551 0

Port CRC
Checks for increasing FC port CRC errors.
• Compares the current LESB errors for active FC ports with the most recent sample.
• If no errors are reported for the current counters, compares the most recent sample with the sample
from the day before.
Format of Possible Port CRC Exception Messages

portcrc port:<nsp> "There is less than two days of LESB history for this
port"
portcrc port:<nsp> "Port or devices attached to port have experienced CRC
errors within the last day"
portcrc port:<nsp> "Port or devices attached to port have experienced CRC
errors within the last two days"



Port CRC Example
An FC port CRC error is detected on the specified port. The command showportlesb hist 1:5:1 is
useful for troubleshooting FC port CRC problems. This command displays the current counters
and the recent log entries that checkhealth uses to evaluate and report errors.
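
For reference, a minimal session runs both forms of the command against the port named in this example;
the output is omitted here because the counters and daily history samples vary by system.

cli% showportlesb single 1:5:1
cli% showportlesb hist 1:5:1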

Port PELCRC
Checks for increasing SAS port CRC errors.
• Compares the current PEL errors for active SAS ports with the most recent sample.
• If no error is reported for the current counters, compares the most recent sample with the sample from the day before.
Format of Possible PELCRC Exception Messages

Portpelcrc port:<nsp> "There is less than one week of PEL history for this
port"
Portpelcrc port:<nsp> "Port or devices attached to port have experienced
PEL errors within the last day"
Portpelcrc port:<nsp> "PEL errors have been increasing by more than
<maxCRC> per day over the last two days"

Port PELCRC Example


A SAS port CRC error is detected on the specified port. The command showportpel hist 1:5:1 is
useful for troubleshooting SAS port CRC problems. This command displays the current counters
and the recent log entries that checkhealth uses to evaluate and report errors.
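
Similarly, a minimal session for the SAS check runs the command named above against the affected port;
the output is omitted because the PEL counters and history vary by system.

cli% showportpel hist 1:5:1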

RC
Checks for the following Remote Copy issues.
• Remote Copy targets
• Remote Copy links
• Remote Copy Groups and VVs
Format of Possible RC Exception Messages

RC rc:<name> "All links for target <name> are down but target not yet
marked failed."
RC rc:<name> "Target <name> has failed."
RC rc:<name> "Link <name> of target <target> is down."
RC rc:<name> "Group <name> is not started to target <target>."
RC rc:<vvname> "VV <vvname> of group <name> is stale on target <target>."
RC rc:<vvname> "VV <vvname> of group <name> is not synced on target
<target>."



RC Example

Component -Description- Qty


RC Stale volumes 1

Component --Identifier--- ---------Description---------------


RC rc:yush_tpvv.rc VV yush_tpvv.rc of group yush_group.r1127
is stale on target S400_Async_Primary.

RC Suggested Action
Perform Remote Copy troubleshooting, such as checking the physical links between the storage systems.
Useful CLI commands are showrcopy, showrcopy -d, showport -rcip, showport -rcfc,
shownet -d, and controlport rcip ping.
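
A minimal troubleshooting session, using only the commands listed above, might look like the following
sketch; output is omitted because it depends on the Remote Copy configuration, and controlport rcip
ping is not shown because its arguments are configuration-specific.

cli% showrcopy
cli% showrcopy -d
cli% showport -rcfc
cli% showport -rcip
cli% shownet -d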

SNMP
Displays issues with SNMP. Attempts the showsnmpmgr command and reports errors if the CLI returns
an error.
Format of Possible SNMP Exception Messages

SNMP -- <err>

SNMP Example

Component -Identifier- ----------Description---------------


SNMP -- Could not obtain snmp agent handle. Could be
misconfigured.

SNMP Suggested Action


Any error message that showsnmpmgr can produce might be displayed here.
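
Because this check simply attempts the showsnmpmgr command, running the same command manually from a CLI
session typically reproduces whatever error checkhealth reports; output is omitted here because it
depends on the SNMP configuration.

cli% showsnmpmgr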

SP
Checks the status of the Ethernet connection between the SP and nodes.
The Ethernet connection can only be checked from the SP because it performs a short Ethernet transfer
check between the SP and the storage system.
Format of Possible SP Exception Messages

Network SP->InServ "SP ethernet Stat <stat> has increased too quickly check
SP network settings"

SP Example

Component -Identifier- --------Description------------------------


SP ethernet "State rx_errs has increased too quickly check SP
network
settings"

SP Suggested Action
The <stat> variable can be any of the following: rx_errs, rx_dropped, rx_fifo, rx_frame,
tx_errs, tx_dropped, tx_fifo.
This message is usually caused by customer network issues, but it can also be caused by conflicting or
mismatched network settings between the SP, the customer switches, and the storage system. Check the
SP network interface settings by using SPmaint or SPOCC. Check the storage system settings by using
commands such as shownet and shownet -d.
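
On the storage system side, a minimal check uses only the commands named above; output is omitted
because the interface statistics and settings are site-specific.

cli% shownet
cli% shownet -d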

Task
Displays failed tasks. Checks for any tasks that have failed within the past 24 hours. This is the default
time frame for the showtask -failed command.

Format of Possible Task Exception Messages

Task Task:<Taskid> "Failed Task"

Task Example

Component --Identifier--- -------Description--------


Task Task:6313 Failed Task

In this example, checkhealth also showed an alert. The task failed because the command was entered
with a syntax error:

Alert sw_task:6313 Task 6313 (type 'background_command', name
'upgradecage -a -f') has failed (Task Failed). Please see task status for
details.

Task Suggested Action


The CLI command showtask -d <Taskid> displays detailed information about the task. To clean up the
alerts and the checkhealth reporting, you can delete the failed-task alerts. The alerts are not
auto-resolved and remain until they are manually removed with the MC (GUI) or with the CLI commands
removealert or setalert ack. To display system-initiated tasks, use showtask -all. A cleanup
sketch follows the example output below.

cli% showtask -d 6313


Id Type Name Status Phase Step
6313 background_command upgradecage -a -f failed --- ---

Detailed status is as follows:

2010-10-22 10:35:36 PDT Created task.
2010-10-22 10:35:36 PDT Updated Executing "upgradecage -a -f" as 0:12109
2010-10-22 10:35:36 PDT Errored upgradecage: Invalid option: -f
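
As a sketch of the cleanup, assuming showalert is used to find the alert ID (showalert is not listed in
the suggested action above), the failed-task alert can then be removed by ID; setalert ack is the
alternative named above, and <alert_ID> is a placeholder.

cli% showalert
cli% removealert <alert_ID>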

VLUN
Displays virtual LUNs (VLUNs) that the host agent reports as inactive or does not report. Also reports
VLUNs that have been configured but are not currently exported to hosts or host ports.
Format of Possible VLUN Exception Messages

vlun vlun:(<vvID>, <lunID>, <hostname>) "Path to <wwn> is not reported by
host agent"
vlun vlun:(<vvID>, <lunID>, <hostname>) "Path to <wwn> is not seen by
host"
vlun vlun:(<vvID>, <lunID>, <hostname>) "Path to <wwn> is failed"
vlun host:<hostname> "Host <ident>(<type>):<connection> is not connected to
a port"

VLUN Example

Component ---------Description--------- Qty


vlun Hosts not connected to a port 1

Component -----Identifier----- ---------Description--------


vlun host:cs-wintec-test1 Host wwn:10000000C964121D is not connected
to a port

VLUN Suggested Action


Check the export status and port status for the VLUN and host by using CLI commands: showvlun,
showvlun -pathsum, showhost, showhost -pathsum, showport, and servicehost list.

cli% showvlun -host cs-wintec-test1
Active VLUNs
Lun VVName HostName -Host_WWN/iSCSI_Name- Port Type
2 BigVV cs-wintec-test1 10000000C964121C 2:5:1 host
-----------------------------------------------------------
1 total

VLUN Templates
Lun VVName HostName -Host_WWN/iSCSI_Name- Port Type
2 BigVV cs-wintec-test1 ---------------- --- host

cli% showhost cs-wintec-test1


Id Name Persona -WWN/iSCSI_Name- Port
0 cs-wintec-test1 Generic 10000000C964121D ---
10000000C964121C 2:5:1
cli% servicehost list
HostName -WWN/iSCSI_Name- Port
host0 10000000C98EC67A 1:1:2
host1 210100E08B289350 0:5:2

Lun VVName HostName -Host_WWN/iSCSI_Name- Port Type


2 BigVV cs-wintec-test1 10000000C964121D 3:5:1 unknown

VV
Displays Virtual Volumes (VVs) that are not optimal. Checks for abnormal states of VVs and Common
Provisioning Groups (CPGs).
Format of Possible VV Exception Messages

VV vv:<vvname> "IO to this volume will fail due to no_stale_ss policy"


VV vv:<vvname> "Volume has reached snapshot space allocation limit"
VV vv:<vvname> "Volume has reached user space allocation limit"
VV vv:<vvname> "VV has expired"
VV vv:<vvname> "Detailed State: <state>" (failed or degraded)
VV cpg:<cpg> "CPG is unable to grow SA (or SD) space"

VV Suggested Action
Check status by using CLI commands such as showvv, showvv -d, and showvv -cpg.
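
A minimal status check using only the commands named above might look like the following sketch;
<vvname> and <cpgname> are placeholders, and passing them as arguments in this way is an assumption
rather than a complete syntax reference.

cli% showvv -d <vvname>
cli% showvv -cpg <cpgname>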

Websites
General websites
Hewlett Packard Enterprise Information Library
www.hpe.com/info/EIL
Single Point of Connectivity Knowledge (SPOCK) Storage compatibility matrix
www.hpe.com/storage/spock
Storage white papers and analyst reports
www.hpe.com/storage/whitepapers
For additional websites, see Support and other resources.

Support and other resources

Accessing Hewlett Packard Enterprise Support


• For live assistance, go to the Contact Hewlett Packard Enterprise Worldwide website:
http://www.hpe.com/assistance
• To access documentation and support services, go to the Hewlett Packard Enterprise Support Center
website:
http://www.hpe.com/support/hpesc

Information to collect
• Technical support registration number (if applicable)
• Product name, model or version, and serial number
• Operating system name and version
• Firmware version
• Error messages
• Product-specific reports and logs
• Add-on products or components
• Third-party products or components

Accessing updates
• Some software products provide a mechanism for accessing software updates through the product
interface. Review your product documentation to identify the recommended software update method.
• To download product updates:
Hewlett Packard Enterprise Support Center
www.hpe.com/support/hpesc
Hewlett Packard Enterprise Support Center: Software downloads
www.hpe.com/support/downloads
Software Depot
www.hpe.com/support/softwaredepot
• To subscribe to eNewsletters and alerts:
www.hpe.com/support/e-updates
• To view and update your entitlements, and to link your contracts and warranties with your profile, go to
the Hewlett Packard Enterprise Support Center More Information on Access to Support Materials
page:
www.hpe.com/support/AccessToSupportMaterials

IMPORTANT:
Access to some updates might require product entitlement when accessed through the Hewlett
Packard Enterprise Support Center. You must have an HPE Passport set up with relevant
entitlements.



Customer self repair
Hewlett Packard Enterprise customer self repair (CSR) programs allow you to repair your product. If a
CSR part needs to be replaced, it will be shipped directly to you so that you can install it at your
convenience. Some parts do not qualify for CSR. Your Hewlett Packard Enterprise authorized service
provider will determine whether a repair can be accomplished by CSR.
For more information about CSR, contact your local service provider or go to the CSR website:
http://www.hpe.com/support/selfrepair

Remote support
Remote support is available with supported devices as part of your warranty or contractual support
agreement. It provides intelligent event diagnosis, and automatic, secure submission of hardware event
notifications to Hewlett Packard Enterprise, which will initiate a fast and accurate resolution based on your
product's service level. Hewlett Packard Enterprise strongly recommends that you register your device for
remote support.
If your product includes additional remote support details, use search to locate that information.

Remote support and Proactive Care information


HPE Get Connected
www.hpe.com/services/getconnected
HPE Proactive Care services
www.hpe.com/services/proactivecare
HPE Proactive Care service: Supported products list
www.hpe.com/services/proactivecaresupportedproducts
HPE Proactive Care advanced service: Supported products list
www.hpe.com/services/proactivecareadvancedsupportedproducts

Proactive Care customer information


Proactive Care central
www.hpe.com/services/proactivecarecentral
Proactive Care service activation
www.hpe.com/services/proactivecarecentralgetstarted

Warranty information
To view the warranty for your product or to view the Safety and Compliance Information for Server,
Storage, Power, Networking, and Rack Products reference document, go to the Enterprise Safety and
Compliance website:
www.hpe.com/support/Safety-Compliance-EnterpriseProducts

Additional warranty information


HPE ProLiant and x86 Servers and Options
www.hpe.com/support/ProLiantServers-Warranties
HPE Enterprise Servers
www.hpe.com/support/EnterpriseServers-Warranties
HPE Storage Products
www.hpe.com/support/Storage-Warranties



HPE Networking Products
www.hpe.com/support/Networking-Warranties

Regulatory information
To view the regulatory information for your product, view the Safety and Compliance Information for
Server, Storage, Power, Networking, and Rack Products, available at the Hewlett Packard Enterprise
Support Center:
www.hpe.com/support/Safety-Compliance-EnterpriseProducts

Additional regulatory information


Hewlett Packard Enterprise is committed to providing our customers with information about the chemical
substances in our products as needed to comply with legal requirements such as REACH (Regulation EC
No 1907/2006 of the European Parliament and the Council). A chemical information report for this product
can be found at:
www.hpe.com/info/reach
For Hewlett Packard Enterprise product environmental and safety information and compliance data,
including RoHS and REACH, see:
www.hpe.com/info/ecodata
For Hewlett Packard Enterprise environmental information, including company programs, product
recycling, and energy efficiency, see:
www.hpe.com/info/environment

Documentation feedback
Hewlett Packard Enterprise is committed to providing documentation that meets your needs. To help us
improve the documentation, send any errors, suggestions, or comments to Documentation Feedback
(docsfeedback@hpe.com). When submitting your feedback, include the document title, part number,
edition, and publication date located on the front cover of the document. For online help content, include
the product name, product version, help edition, and publication date located on the legal notices page.



Identifying physical locations of the logical
cage numbers
Use the following CLI commands to identify the physical locations of the logical cage numbers, and then
store the position information on the storage system admin volume by following these instructions. A
sample session with hypothetical values is shown after the procedure.

Procedure
1. Connect and log into the service processor with the admin account credentials. Start a CLI session.
2. Enter showcage to display the drive cage numbers/names.
3. Enter locatecage cage<n> where cage<n> is the drive cage number/name, to blink the LEDs on
the front of the drive cage. This will be performed one cage at a time.
4. Identify the physical location of the drive cage and make a note of the drive cage number on a
separate paper for reference during servicing.
5. Enter setcage position "Rack<xx> Rack-Unit<yy>" cage<n>, where <xx> is the rack
designator (00 is the main rack that contains the nodes; 01 or higher are expansion cabinets), <yy> is
the Rack-Unit number in the rack (for example, 1–50) at the bottom of the drive cage, and cage<n> is the
logical drive cage number/name.
6. Enter showcage -d cage<n> to verify these settings.
7. Repeat for each drive cage displayed in step 2.
8. Exit and log out of the session.
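
The following sample session illustrates steps 2 through 6 for a single drive cage; the cage name cage5
and the position Rack01 Rack-Unit22 are hypothetical values chosen for illustration.

cli% showcage
cli% locatecage cage5
cli% setcage position "Rack01 Rack-Unit22" cage5
cli% showcage -d cage5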

