Abstract
This Hewlett Packard Enterprise (HPE) guide provides authorized technicians information
about servicing and upgrading the hardware components for the HPE 3PAR StoreServ 20000
storage systems. This document is for HEWLETT PACKARD ENTERPRISE INTERNAL USE
ONLY.
Notices
The information contained herein is subject to change without notice. The only warranties for Hewlett
Packard Enterprise products and services are set forth in the express warranty statements accompanying
such products and services. Nothing herein should be construed as constituting an additional warranty.
Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions contained
herein.
Confidential computer software. Valid license from Hewlett Packard Enterprise required for possession,
use, or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer
Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government
under vendor's standard commercial license.
Links to third-party websites take you outside the Hewlett Packard Enterprise website. Hewlett Packard
Enterprise has no control over and is not responsible for information outside the Hewlett Packard
Enterprise website.
Acknowledgments
Intel®, Itanium®, Pentium®, Intel Inside®, and the Intel Inside logo are trademarks of Intel Corporation in
the United States and other countries.
Microsoft® and Windows® are either registered trademarks or trademarks of Microsoft Corporation in the
United States and/or other countries.
Adobe® and Acrobat® are trademarks of Adobe Systems Incorporated.
Java® and Oracle® are registered trademarks of Oracle and/or its affiliates.
UNIX® is a registered trademark of The Open Group.
Contents
Getting started.........................................................................................8
Redeem and register HPE 3PAR licenses.................................................................................... 8
Precautions and advisories...........................................................................................................8
Use proper tools............................................................................................................................8
Handling Field Replaceable Units (FRU)...................................................................................... 9
Preventing Electrostatic Discharge (ESD)......................................................................... 9
Contents 3
Adding drives..................................................................................................................116
Adding a drive chassis....................................................................................................117
Parts catalog........................................................................................122
Parts catalog for 20000 models................................................................................................ 122
Cable parts list................................................................................................................122
Controller node enclosure parts list................................................................................123
Drive enclosure parts list................................................................................................125
Service processor parts list............................................................................................ 128
Parts catalog for 20000 R2 models...........................................................................................128
Cable parts list................................................................................................................128
Controller node enclosure parts list................................................................................129
Drive enclosure parts list................................................................................................131
Service processor parts list............................................................................................ 133
Checking health from the SP 4.x SPOCC interface....................................................... 165
Checking health from the SP 4.x SPMaint interface...................................................... 165
Maintenance mode action from the SP.....................................................................................167
Setting maintenance mode from the SP 5.x SC interface..............................................167
Setting maintenance mode from the SP 4.x interactive CLI interface............................167
Setting or modifying maintenance mode from the SP 4.x SPMaint interface.................167
Locate action from the SP.........................................................................................................167
Running the locate action from the SP SC interface......................................................168
Running the locate action from the SP 4.x SPOCC interface........................................ 168
Alert notifications from the SP...................................................................................................168
Browser warnings......................................................................................................................169
Clear Internet Explorer browser warning........................................................................169
Clear Google Chrome browser warning.........................................................................170
Clear Mozilla Firefox browser warning........................................................................... 171
Troubleshooting.................................................................................. 217
Troubleshooting issues with the storage system...................................................................... 217
Alerts issued by the storage system.............................................................................. 217
Alert notifications by email from the service processor....................................... 217
Alert notifications in the HPE 3PAR StoreServ Storage 3PAR Service Console.218
Viewing alerts...................................................................................................... 218
HPE 3PAR BIOS Error Codes........................................................................................219
I/O module error codes.................................................................................................. 219
I/O Module LEDs............................................................................................................226
Collecting log files.......................................................................................................... 227
Collecting the HPE 3PAR SmartStart log files.....................................................227
Collecting SP log files from the SC interface.......................................................227
Collecting SP log files from the SPOCC interface............................................... 228
Health check on the storage system.............................................................................. 228
Checking health of the storage system—HPE 3PAR SSMC...............................228
Checking health of the storage system—HPE 3PAR CLI....................................228
Troubleshooting system components....................................................................................... 232
Troubleshooting StoreServ System Components.......................................................... 233
Alert..................................................................................................................... 234
Cabling................................................................................................................ 234
Cage....................................................................................................................237
Consistency.........................................................................................................245
Data Encryption at Rest (DAR)............................................................................245
Date.....................................................................................................................246
File.......................................................................................................................246
LD........................................................................................................................248
License................................................................................................................ 252
Network............................................................................................................... 252
Node....................................................................................................................254
PD........................................................................................................................258
PDCH.................................................................................................................. 265
Port......................................................................................................................267
Port CRC............................................................................................................. 272
Port PELCRC...................................................................................................... 273
RC....................................................................................................................... 273
SNMP.................................................................................................................. 274
SP........................................................................................................................274
Task..................................................................................................................... 275
VLUN...................................................................................................................276
VV........................................................................................................................277
Websites.............................................................................................. 278
Identifying physical locations of the logical cage numbers........... 282
Getting started
Before you start servicing, upgrading, or troubleshooting an HPE 3PAR StoreServ 20000 storage system,
make sure to plan and coordinate the process with an authorized Hewlett Packard Enterprise
representative. Proper planning provides a more efficient maintenance process and leads to greater
availability and reliability of the system.
If you require additional assistance, contact Hewlett Packard Enterprise Support.
CAUTION:
Always wear an electrostatic discharge (ESD) wrist-grounding strap when installing a storage
system hardware part.
Handling Field Replaceable Units (FRU)
Use the following preventive guidelines and adhere to any cautionary statements when handling
field replaceable units during a servicing, upgrading, or troubleshooting action.
TIP:
If an ESD kit is unavailable, touch an unpainted metal surface to discharge static electricity from
your body before handling a FRU. Repeat the discharge action before handling other FRUs.
Static electricity can damage the components. Before removing or replacing a component, observe the
following to prevent damage.
• Remove all ESD-generating materials from your work area.
• Avoid hand contact. Transport and store all electrostatic parts and assemblies in conductive or
approved ESD packaging such as ESD tubes, bags, or boxes.
• Keep electrostatic-sensitive parts in their containers until they arrive at static-free stations. Before
removing items from their containers, place the containers on a grounded surface.
• Use the wrist-grounding strap when servicing the storage system.
• Connect your ESD wrist or shoe strap to a grounded surface before removing the new component
from its ESD package.
The same precaution applies when handling a component.
• Avoid contact with pins, leads, or circuitry.
• Use the ESD package provided with the new part to return the old part.
• Before servicing components, prepare an electrostatic discharge (ESD)-safe work surface by
placing an antistatic mat on the floor or on a table near the storage system.
• Attach the ground lead of the mat to an unpainted surface of the rack.
• Attach the grounding strap clip to an unpainted surface of the rack.
Procedure
Preparation:
1. Unpack the replacement controller node (node) and place it on an ESD-safe mat.
2. Connect and log in to the HPE 3PAR Service Processor (SP).
See Connection methods for the SP on page 154.
3. Access the interactive CLI interface.
See Interfaces for the HPE 3PAR SP on page 156.
4. Set Maintenance Mode.
5. Initiate Check Health of the storage system.
Issue the checkhealth -detail command.
CAUTION:
If health issues are identified during the Check Health scan, resolve these issues before
continuing. Refer to the details in the Check Health results and contact HPE support if
necessary.
The command scans the storage system to verify that there are no additional issues.
6. Identify the storage system to be serviced.
Issue the showsys command.
7. Verify the current state of the nodes.
Issue the shownode command.
8. Locate the failed node.
NOTE:
Some faults will automatically illuminate a blue LED (safe-to-remove) on some of the
components equipped with a UID/Service LED. Search for a designated blue LED when
replacing or servicing a controller node component. If the LED is not illuminated, issue the
locatenode <node_ID> command.
9. Shut down the node. If the node has already halted (for example, because of a hardware failure),
skip this step.
a. Issue the shutdownnode halt <node_ID> command, where <node_ID> is the number for
the node being shut down.
For example, with node 0:
cli% shutdownnode halt 0
IMPORTANT:
Allow up to 10 minutes for the controller node to shut down (halt). When the controller node
is fully shut down, the Status LED rapidly flashes green and the UID/Service LED is solid
blue. The Fault LED might be solid amber, depending on the nature of the node failure.
10. Turn off power to the failed node.
Move the power switch to the OFF position.
NOTE:
If the failed node is already offline, it is not necessary to shut down the node because it is not part of
the cluster.
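When scripting service checks, the halted-node LED pattern described in the IMPORTANT note above can be encoded as a small helper. This is a minimal sketch; the LED state strings are illustrative assumptions, not values returned by the CLI:

```python
def node_halted(status_led, uid_led):
    """Return True when the LEDs match the halted pattern described above:
    Status LED rapidly flashing green and UID/Service LED solid blue.
    The Fault LED may independently be solid amber, so it is not checked.
    (LED state strings are illustrative assumptions.)
    """
    return status_led == "green-flashing" and uid_led == "blue-solid"
```

For example, `node_halted("green-flashing", "blue-solid")` evaluates to True, while a solid green Status LED does not satisfy the halted pattern.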
Each node contains two SSDs, also called SATA node drives. As shown on the label on the inside of the
controller node cover, the two SATA node drives are numbered.
• SATA 1 drive is the node drive on the top.
• SATA 0 drive is the node drive on the bottom.
Procedure
Preparation:
1. Unpack the replacement node drive and place it on an ESD-safe mat.
NOTE:
The node drive is factory installed in a bracket required for installation.
2. Connect and log in to the HPE 3PAR Service Processor (SP).
See Connection methods for the SP on page 154.
3. Initiate a maintenance window that stops the flow of system alerts from being sent to HPE by setting
Maintenance Mode.
4. Access the interactive CLI interface.
See Interfaces for the HPE 3PAR SP on page 156.
5. Initiate Check Health of the storage system.
Issue the checkhealth -detail command.
CAUTION:
If health issues are identified during the Check Health scan, resolve these issues before
continuing. Refer to the details in the Check Health results and contact HPE support if
necessary.
The command scans the storage system to verify that there are no additional issues.
6. Verify the current state of the nodes by issuing the shownode command.
7. Verify the current state of the node drives by issuing the shownode -drive command.
8. Locate the node with the failed node drive.
CAUTION:
If the controller node is properly shut down (halted) before removal, the storage system will
continue to function, but data loss might occur if the replacement procedure is not followed
correctly.
a. Issue the shutdownnode halt <node_ID> command, where <node_ID> is the number for
the node being shut down.
For example, with node 0:
cli% shutdownnode halt 0
IMPORTANT:
Allow up to 10 minutes for the controller node to shut down (halt). When the controller node
is fully shut down, the Status LED rapidly flashes green and the UID/Service LED is solid
blue. The Fault LED might be solid amber, depending on the nature of the node failure.
11. Turn off power to the node.
Move the power switch to the OFF position.
CAUTION:
Do not install controller nodes with mismatched memory configurations. Installing controller nodes
with mismatched memory configurations may cause the controller nodes to function improperly or
fail.
NOTE:
If the failed node is already offline, it is not necessary to shut down the node because the node is no
longer part of the cluster.
Each controller node contains two banks of Control Cache DIMMs and four banks of Data Cache DIMMs.
Refer to the label on the inside of the node cover, or the labels on the controller node board itself, to help
locate the failed DIMM.
Procedure
1. Connect and log in to the HPE 3PAR Service Processor (SP).
See Connection methods for the SP on page 154.
2. Access the interactive CLI interface.
See Interfaces for the HPE 3PAR SP on page 156.
3. Set Maintenance Mode.
4. Perform preliminary maintenance checks and initiate servicing.
NOTE: Some faults will automatically illuminate a blue LED (safe-to-remove) on some of the
components equipped with a UID/Service LED. Search for a designated blue LED when
replacing or servicing a controller node component. If the LED is not illuminated, issue the
locatenode command with the appropriate options to illuminate the component.
NOTE:
You must wait up to ten minutes for the node to halt. When the node halts, the node status LED
rapidly blinks green, and the node service LED displays blue.
6. Turn the node power switch to the OFF position.
NOTE:
Each controller node contains two banks of Controller Cache DIMMs and four banks of Data
Cache DIMMs.
NOTE:
When you switch on the power, the node reboots. This process might take approximately ten to
fifteen minutes. The node becomes part of the cluster when the process is complete.
20. After node rescue completes and the node has booted, verify that the controller node LED status is
normal: a steadily blinking green LED. Uniform blinking indicates the node has joined the cluster.
21. On the CLI prompt:
IMPORTANT:
The Time-of-Day (TOD) battery, also called an RTC battery, is a 3-V lithium coin battery. The lithium
coin battery might explode if it is incorrectly installed in the node. Replace the TOD battery only with
a battery supplied by Hewlett Packard Enterprise; do not use batteries from other suppliers.
Dispose of used batteries according to the manufacturer’s instructions.
Procedure
1. Connect and log in to the HPE 3PAR Service Processor (SP).
See Connection methods for the SP on page 154.
2. Access the interactive CLI interface.
See Interfaces for the HPE 3PAR SP on page 156.
3. Set Maintenance Mode.
4. Perform preliminary maintenance checks and initiate servicing.
NOTE:
Some faults will automatically illuminate a blue LED (safe-to-remove) on some of the
components equipped with a UID/Service LED. Search for a designated blue LED when
replacing or servicing a controller node component. If the LED is not illuminated, issue the
locatenode command with the appropriate options to illuminate the component.
NOTE:
You must wait up to ten minutes for the node to halt. When the node halts, the node status LED
rapidly blinks green, and the node service LED displays blue.
6. Turn the node power switch to the OFF position.
NOTE:
Note the location of the battery polarity (+/- symbols) before removing the failed TOD battery.
11. Push the retaining clip over the TOD battery, (also called an RTC battery) and lift the battery out of
the housing in the controller node.
12. Refer to Replacing controller node internal components on page 15 to identify the TOD battery
location.
13. Remove the replacement TOD battery from its protective packaging.
NOTE:
When you switch on the power, the node reboots. This process might take approximately ten to
fifteen minutes. The node becomes part of the cluster when the process is complete.
24. After node rescue completes and the node has booted, verify that the controller node LED status is
normal: a steadily blinking green LED. Uniform blinking indicates the node has joined the cluster.
25. On the CLI prompt:
a. Enter shownode to verify the node has joined the cluster.
b. Enter checkhealth -detail to verify the current state of the system.
26. Exit and logout of the session.
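The verification wait above (running shownode until the replaced node rejoins the cluster) can be scripted as a polling loop. This is a sketch only; `run_cli` is a hypothetical callable that you wire to your own SP CLI session, and the 15-minute default timeout follows the reboot NOTE above:

```python
import time

def wait_for_cluster_join(run_cli, node_id, timeout_s=900, poll_s=30):
    """Poll 'shownode' output until node_id reports InCluster=Yes.

    run_cli -- hypothetical callable that sends a CLI command string to
               the SP session and returns its output as a string.
    Returns True when the node joins the cluster, False on timeout.
    """
    deadline = time.monotonic() + timeout_s
    while True:
        for line in run_cli("shownode").splitlines():
            fields = line.split()
            # Data rows start with the node ID; InCluster is the fifth column.
            if len(fields) >= 5 and fields[0] == str(node_id) and fields[4] == "Yes":
                return True
        if time.monotonic() >= deadline:
            return False
        time.sleep(poll_s)
```

With a session wrapper in place, `wait_for_cluster_join(run_cli, 0)` blocks until node 0 rejoins or 15 minutes elapse.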
Procedure
1. Connect and log in to the HPE 3PAR Service Processor (SP).
c. Enter locatenode -ps <psID> <nodeID> to prepare the power supply for service.
NOTE:
The system illuminates the service LED blue when there is a failure and the component is
safe to replace. If the blue service LED is not lit, or if this is a proactive replacement, enter
locatenode -ps <powersupplyID> <nodeID> to force the service LED to light blue.
5. Locate the power supply with the blue LED requiring service. The blue LED indicates the power
supply is ready for service.
6. Remove the power cord retaining strap and disconnect the power cable from the failed power supply.
7. Remove the power supply by (1) pressing the release tab and then (2) pulling the power supply
out of the power supply tray in the enclosure, and place it onto the ESD-safe work surface.
NOTE:
A clicking sound indicates when the module is fully engaged.
9. Connect the power cable and secure it with the retaining strap.
10. Confirm that the green LED of the power supply is lit to indicate normal operation.
NOTE:
It might take up to a minute for the green LED to light and the blue service
LED to turn off.
11. If the green LED of the replacement power supply does not light after a minute, the power supply
tray might have failed, and you must replace the power supply tray.
a. Disconnect the power cable from the failed power supply.
b. Squeeze the release tab towards the handle and pull the power supply out of the power supply
tray in the enclosure and place onto the ESD safe work surface.
c. Press the release tab and pull the failed power supply tray out of the enclosure.
d. Align and slide the replacement power supply tray into the supply bay until it clicks into place and
is fully engaged in the middle plane.
e. Align and slide the power supply into power supply tray in the enclosure, until it clicks into place
and is fully engaged in the power supply tray.
f. Connect the power cable to the power supply.
g. Secure the power cable with the cable retention strap.
12. Confirm that the green LED of the power supply is lit to indicate normal operation.
13. On the CLI prompt:
a. Enter shownode -ps to verify the status of the node power supply is OK.
Procedure
1. Connect and log in to the HPE 3PAR Service Processor (SP).
See Connection methods for the SP on page 154.
2. Access the interactive CLI interface.
See Interfaces for the HPE 3PAR SP on page 156.
3. Set Maintenance Mode.
4. Perform preliminary maintenance checks and initiate servicing.
NOTE:
Some faults will automatically illuminate a blue LED (safe-to-remove) on some of the
components equipped with a UID/Service LED. Search for a designated blue LED when
replacing or servicing a controller node component. If the LED is not illuminated, issue the
locatenode command with the appropriate options to illuminate the component.
NOTE:
A clicking sound indicates when the module is fully engaged.
10. Insert the new power supply tray and power supply by sliding the unit in until it is fully engaged.
NOTE:
A clicking sound indicates when the module is fully engaged.
11. Connect the power cable and secure the cord to the handle of the power supply with a strap.
12. Enter shownode -ps to verify the status of the node power supply is OK.
13. Enter checkhealth -detail to verify the current state of the system.
14. Exit and logout of the session.
Procedure
1. Connect and log in to the HPE 3PAR Service Processor (SP).
See Connection methods for the SP on page 154.
2. Press the release tab and pivot out the bezel to remove the controller node front bezel.
3. Access the interactive CLI interface.
See Interfaces for the HPE 3PAR SP on page 156.
4. Set Maintenance Mode.
5. Perform preliminary maintenance checks and initiate servicing.
NOTE:
Some faults will automatically illuminate a blue LED (safe-to-remove) on some of the
components equipped with a UID/Service LED. Search for a designated blue LED when
replacing or servicing a controller node component. If the LED is not illuminated, issue the
locatenode command with the appropriate options to illuminate the component.
cli% showbattery
NOTE:
The system illuminates a blue LED when there is a failure.
NOTE:
If the blue service LED is not lit, enter locatenode -bat <nodeID> to locate the failed BBU.
6. At the front of the system, identify the failed battery module and verify the service LED is illuminated
blue.
7. Remove the failed BBU by (1) pressing the ejector and (2) pulling it out of the slot.
cli% showbattery
Node Assy_Serial -State- -Service_LED- ChrgLvl(%) -ExpDate-- Expired Testing
   0    00000169      OK           Off        100 03/18/2020      No      No
   1    00000265      OK           Off        100 03/18/2020      No      No
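When automating battery checks, the fixed-width showbattery output shown above can be parsed into records. This is a minimal sketch assuming the column layout shown, which may vary by HPE 3PAR OS version:

```python
def parse_showbattery(output):
    """Parse 'showbattery' output into a list of dicts, one per BBU.

    Assumes the eight-column layout shown in this guide; verify
    against your CLI version before relying on it.
    """
    rows = []
    for line in output.strip().splitlines():
        fields = line.split()
        # Data rows start with a numeric node ID; skip prompt/header lines.
        if not fields or not fields[0].isdigit():
            continue
        node, serial, state, led, charge, exp_date, expired, testing = fields
        rows.append({
            "node": int(node),
            "serial": serial,
            "state": state,
            "service_led": led,
            "charge_pct": int(charge),
            "exp_date": exp_date,
            "expired": expired == "Yes",
            "testing": testing == "Yes",
        })
    return rows

sample = """\
cli% showbattery
Node Assy_Serial -State- -Service_LED- ChrgLvl(%) -ExpDate-- Expired Testing
   0    00000169      OK           Off        100 03/18/2020      No      No
   1    00000265      OK           Off        100 03/18/2020      No      No
"""
batteries = parse_showbattery(sample)
```

A health check can then assert that every battery reports State OK, 100% charge, and not expired before continuing the service procedure.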
IMPORTANT:
The charge time for the batteries can be up to 24 hours.
Procedure
1. Connect and log in to the HPE 3PAR Service Processor (SP).
See Connection methods for the SP on page 154.
2. To remove the controller node front bezel, (1) press the release tab and (2) pivot out the bezel.
3. Access the interactive CLI interface.
See Interfaces for the HPE 3PAR SP on page 156.
4. Set Maintenance Mode.
5. Perform preliminary maintenance checks and initiate servicing.
NOTE:
Some faults will automatically illuminate a blue LED (safe-to-remove) on some of the
components equipped with a UID/Service LED. Search for a designated blue LED when
replacing or servicing a controller node component. If the LED is not illuminated, issue the
locatenode -fan <fanID> <nodeID> command to illuminate the component.
NOTE:
The system illuminates the UID/Service LED blue when a failure occurs.
d. Enter locatenode -fan <fanID> <nodeID> to prepare the node fan for service.
NOTE:
A clicking sound indicates when the module is fully engaged.
NOTE:
• PCI adapters, also called HBAs (Host Bus Adapters), are located in slots at the rear of the
controller node. Generally, SAS PCI adapters are on the left, and Fibre Channel and other PCI
adapters are on the right. The replacement procedures are the same for all PCI adapters.
• If the controller node has failed due to an issue with a PCI adapter, it is not necessary to shut
down the node. The node is no longer part of the cluster.
Procedure
1. Connect and log in to the HPE 3PAR Service Processor (SP).
See Connection methods for the SP on page 154.
2. Access the interactive CLI interface.
See Interfaces for the HPE 3PAR SP on page 156.
3. Set Maintenance Mode.
4. Initiate Check Health of the storage system.
5. Check that the current state of the controller nodes is healthy. Issue the HPE 3PAR CLI shownode
command.
6. If applicable, enter locatenode -pci <slot> <nodeID> to locate the PCI adapter for service.
NOTE:
• The system illuminates the UID/Service LED blue when a failure occurs. To illuminate the
service LED when servicing a component, use the locatenode command.
• The locatenode command confirms that the correct component is being serviced.
7. Identify the failed PCI adapter with the service LED illuminating blue.
8. Ensure that the cables to the PCI adapter are properly labeled to facilitate reconnecting the cables
later.
9. Only when replacing a 10Gb NIC adapter on a system running HPE 3PAR File Persona, delete
the File Persona interfaces from the ports of the failed adapter on the controller node. Otherwise,
skip this step.
a. Issue the showport -fs command to identify the File Persona ports.
b. Delete File Persona interfaces from the ports of the failed adapter on the controller node. Issue
the controlport fs delete -f <N:S:P> command where: <N:S:P> is the node:slot:port
for the ports in the NIC PCIe adapter being replaced in the controller node.
Deleting File Persona interfaces causes existing FPGs on this node to fail over to the second
node.
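When several ports must be processed, the controlport fs delete and add command strings described above can be generated from the port list reported by showport -fs. This sketch only builds the command strings; it does not execute them:

```python
def file_persona_port_commands(ports, action):
    """Build 'controlport fs' command strings for File Persona ports.

    ports  -- list of 'N:S:P' node:slot:port strings (for example,
              taken from 'showport -fs' output)
    action -- 'delete' (before replacing the adapter) or 'add' (after)
    Sketch only: run the generated commands through your own CLI session.
    """
    if action == "delete":
        return ["controlport fs delete -f %s" % p for p in ports]
    if action == "add":
        return ["controlport fs add %s" % p for p in ports]
    raise ValueError("action must be 'delete' or 'add'")
```

For instance, `file_persona_port_commands(["0:2:1"], "delete")` yields `["controlport fs delete -f 0:2:1"]`, matching the delete syntax in the step above.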
10. To halt the desired node, enter shutdownnode halt <nodeID>.
11. When prompted, enter yes to confirm halting of the node.
CAUTION:
The PCI adapters are not hot pluggable. Power off the node.
13. Disconnect all cables from the PCI adapter.
NOTE: When power is supplied to the node, it begins to boot. This process takes
approximately 10 minutes. The node becomes part of the cluster when the process is
complete.
20. After node rescue completes and the node has booted, verify that the controller node LED status is
normal: a steadily blinking green LED.
21. Uniform blinking indicates that the node has joined the cluster.
22. To verify that the node has joined the cluster, enter shownode.
cli% shownode
                                                          Control    Data        Cache
Node ----Name---- -State- Master InCluster -Service_LED ---LED--- Mem(MB) Mem(MB) Available(%)
   0 4UW0001463-0      OK     No       Yes          Off GreenBlnk   98304  131072          100
   1 4UW0001463-1      OK    Yes       Yes          Off GreenBlnk   98304  131072          100
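For scripted verification of this step, the shownode output above can be reduced to a node-to-InCluster map. This is a minimal sketch assuming the whitespace-separated layout shown:

```python
def nodes_in_cluster(output):
    """Map node ID -> InCluster flag parsed from 'shownode' output.

    Assumes the column layout shown in this guide (InCluster is the
    fifth column); verify against your HPE 3PAR OS version.
    """
    result = {}
    for line in output.splitlines():
        fields = line.split()
        if len(fields) >= 5 and fields[0].isdigit():
            result[int(fields[0])] = fields[4] == "Yes"
    return result

sample = """\
Node ----Name---- -State- Master InCluster -Service_LED ---LED---
   0 4UW0001463-0      OK     No       Yes          Off GreenBlnk
   1 4UW0001463-1      OK    Yes       Yes          Off GreenBlnk
"""
membership = nodes_in_cluster(sample)
```

A wrapper script can then confirm that every expected node ID maps to True before proceeding.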
NOTE:
If the node that was halted also contained the active Ethernet session, you must exit and
restart the CLI session.
23. Only when replacing a 10Gb NIC adapter on a system running File Persona, configure File
Persona interfaces on the Ethernet ports of the replacement adapter on the controller node.
Otherwise, skip this step.
a. To check the status of File Persona, enter showfs. Wait until the state of the File Persona
<node> is shown with a status of Running.
b. To add back the ports deleted in step 9 as File Persona interfaces, issue the controlport
fs add <N:S:P> command, where <N:S:P> is the node:slot:port for the ports in the adapter.
24. To verify the current state of the system, enter checkhealth -detail.
Procedure
1. Connect and log in to the HPE 3PAR Service Processor (SP).
NOTE:
The controller node L-frame assembly is available in two sizes, either 4-node or 8-node. This
procedure is for the 4-node size. The replacement process is essentially the same for both.
IMPORTANT:
To avoid configuration problems, track the location of the components removed, so that you can
reinstall them in the exact same location later. If required, refer to recent configuration files and/or
the hardware inventory file (HWINVENT) for the system. These files can be found in the Files links/
pages on the SP, or by referencing the StoreServ system serial number in STaTs if the system calls
home.
Procedure
1. Connect and log in to the HPE 3PAR Service Processor (SP).
See Connection methods for the SP on page 154.
2. Access the interactive CLI interface.
See Interfaces for the HPE 3PAR SP on page 156.
3. Set Maintenance Mode.
4. Perform preliminary maintenance checks and initiate servicing.
5. To assist in the reassembly, gather information about the current system and its components:
a. To verify the current state of the system, enter checkhealth -detail.
b. Enter showsys.
c. Enter shownode -d.
d. Enter shownode -i.
e. Enter showcage.
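The information-gathering commands in step 5 can be collected in one pass when scripting the pre-service snapshot. This sketch assumes a hypothetical `run_cli` callable wired to your own SP CLI session:

```python
def collect_system_snapshot(run_cli):
    """Gather pre-service configuration output (step 5 above) into a dict.

    run_cli -- hypothetical callable that sends a CLI command string and
               returns its output; returns {command: output} so the
               information can be saved for reference during reassembly.
    """
    commands = [
        "checkhealth -detail",
        "showsys",
        "shownode -d",
        "shownode -i",
        "showcage",
    ]
    return {cmd: run_cli(cmd) for cmd in commands}
```

Saving the returned dict to a file preserves the component locations needed to reinstall parts in the exact same positions later.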
6. To illuminate blue service LEDs in the system for identification, enter locatesys.
7. To halt the system, enter shutdownsys halt.
8. When prompted, enter yes to confirm halting of the system.
NOTE:
Wait up to 10 minutes for the system to halt. When the nodes have halted, the node status
LEDs blink green and the node service LEDs display blue.
9. Record and label the location of the cable connections before disconnecting the cables from the PCI
adapter.
10. Set all the node power switches to the OFF position.
11. Remove the power cables from all the power supplies.
12. Remove all power supplies and power supply trays starting from the lowest.
NOTE:
The number of screws depends on the size of the enclosure.
Figure 52: Loosening the exterior retaining screws (4-way node chassis)
NOTE:
The number of screws depends on the size of the enclosure.
CAUTION:
Use two or more people to lift and guide the assembly over to an ESD mat or a safe, flat
surface area.
NOTE:
If the replacement L-frame assembly does not contain a status LED board, transfer the status
LED board from the failed L-frame assembly to the replacement L-frame assembly.
NOTE:
The interior retaining screw torque specification is approximately 9.6 in-lb.
NOTE:
The exterior retaining screw torque specification is approximately 31.7 in-lb.
Figure 63: Tightening the exterior retaining screws (8-way node chassis)
25. Install the previously removed components:
a. Side flange bezels
b. If removed previously, BBU/fan module blanks
c. Fans
CAUTION:
• Only one power supply unit (PSU) can be serviced at a time. If another PSU is to be serviced,
verify that the first serviced PSU is healthy and functioning, and then restart this servicing
procedure from the beginning for the next PSU to be serviced.
• To prevent overheating, the replacement of the PSU requires a maximum service time of six
minutes.
• Ensure that cables are clear of the PSU when installing in the enclosure.
Procedure
1. Connect and log in to the HPE 3PAR Service Processor (SP).
See Connection methods for the SP on page 154.
2. Access the interactive CLI interface.
See Interfaces for the HPE 3PAR SP on page 156.
3. Set Maintenance Mode.
4. Perform preliminary maintenance checks and initiate servicing.
a. To verify the current state of the system, enter checkhealth -detail.
b. To display the cage IDs, enter showcage.
c. To display information about the drive cage with the problematic power supply, enter showcage
-d <cageID>.
d. To illuminate the blue service LED of the drive enclosure with the failed power supply, enter the
locatecage <cageID> command.
5. Locate the power supply to be replaced.
6. Identify the drive enclosure by its illuminated blue service LED; the power supply to be replaced is
in that enclosure.
7. Remove the retaining strap from the power cable, and then remove the power cable from the power
supply being replaced.
Procedure
1. Connect and log in to the HPE 3PAR Service Processor (SP).
See Connection methods for the SP on page 154.
2. Access the interactive CLI interface.
See Interfaces for the HPE 3PAR SP on page 156.
3. Set Maintenance Mode.
4. Perform preliminary maintenance checks and initiate servicing.
a. To verify the current state of the system, enter checkhealth -detail.
b. To display the cage IDs, enter showcage.
c. To display information about the drive cage with the failed fan module, enter showcage -d
<cageID>.
5. If the blue LED on the drive cage to be serviced is not illuminated, run the locatecage
<cageID> command to illuminate it.
6. Identify the failed drive enclosure fan module by its blue service LED.
7. To remove the module, (1) press the release buttons and (2) pull out the module.
CAUTION:
• To prevent overheating, the I/O module bay in the enclosure should not be left open for more
than 6 minutes.
• Storage systems operate using two I/O modules per drive enclosure and can temporarily operate
using one I/O module when removing the other I/O module for servicing.
NOTE:
I/O module 0 is at the bottom and I/O module 1 is on top.
6. If required, label and then unplug any cables from the back panel of the I/O module.
7. To remove the module, (1) loosen the captive retaining screw, then (2) pull out the latch and (3) slide
out the I/O module.
NOTE:
The drive enclosure blue service LED blinks during the upgrade process.
• To display information about the drive cage, enter showcage -d.
d. To verify the current state of the system, enter checkhealth -detail.
11. Exit and log out of the session.
Procedure
Preparation:
1. Unpack the replacement mini-SAS cable and place it on an ESD-safe mat.
2. Connect and log in to the HPE 3PAR Service Processor (SP).
See Connection methods for the SP on page 154.
3. Set Maintenance Mode.
A maintenance window is initiated that stops the flow of system alerts from being sent to HPE.
4. Access the interactive CLI interface.
See Interfaces for the HPE 3PAR SP on page 156.
5. Initiate Check Health of the storage system.
Issue the checkhealth -detail command.
A scan of the storage system will be run to make sure that there are no additional issues.
6. Identify the <cage_name> for the drive enclosure containing the failed cable.
Issue the showcage command.
7. Display information about the drive enclosure.
Issue the showcage -d <cage_name> command.
NOTE:
I/O module 0 is at the bottom, I/O module 1 is at the top, port DP-1 is on the left, and DP-2 is
on the right.
8. Initiate the Locate action for the I/O module containing the failed cable.
Issue the locatecage -t <sec> <cage_name> iocard <iocard_ID> command, where
<cage_name> is the drive cage name and <iocard_ID> is the I/O module number.
WARNING:
Ensure that the data on the drives is backed up.
Procedure
1. Connect and log in to the HPE 3PAR Service Processor (SP).
See Connection methods for the SP on page 154.
2. Access the interactive CLI interface.
See Interfaces for the HPE 3PAR SP on page 156.
3. Set Maintenance Mode.
4. Perform preliminary maintenance checks and initiate servicing.
a. To verify the current state of the system, enter checkhealth -detail.
b. To display the cage IDs, enter showcage.
c. To identify the drives to be replaced in the drive enclosure, enter showpd -p -cg
<CageNumber> and showpd -i -p -cg <CageNumber>.
d. Enter setpd ldalloc off followed by the pdids of all the drives in the drive enclosure to be
replaced.
e. To initiate the temporary removal of data from the drives and to store the data on spare drives,
enter servicemag start <CageNumber> <MagNumber>.
NOTE:
Run this command once for each drive in the drive enclosure to be replaced.
f. To monitor progress, enter servicemag status.
NOTE: The completion of the servicemag command illuminates the blue service LED on
each of the drives in the drive enclosure to be replaced.
g. To illuminate the blue service LED of the drive enclosure, enter locatecage <cageID>.
5. Disconnect the power cables from the power supplies of the drive enclosure to be replaced.
6. Label and remove the mini-SAS cables from the drive enclosure to be replaced.
7. Loosen the captive Torx T25 thumbscrews in the two rear hold-down brackets and slide the brackets
back from the drive enclosure.
8. To remove the front bezel of the drive enclosure, (1) press the release tab and (2) pivot it off the front
of the drive enclosure.
9. Label each drive in the drive enclosure to be replaced with the magazine (mag) number for later
replacement in the same magazine position.
10. Loosen the captive Torx T25 screws behind the latches on the front left and right bezel ears of the
drive enclosure. Support the enclosure from underneath and slide the failed drive enclosure out of its
rack.
NOTE:
When you attach the power cables, the drive enclosure is powered on.
16. Verify the drive enclosure, I/O modules, power supplies, fan module, and drive status LEDs are lit
green and operating normally.
17. On the CLI prompt:
a. To identify the new cage ID of the newly installed drive enclosure, enter showcage.
b. To confirm that the system identifies the drives in the new drive enclosure, enter showcage -d
<cageID>.
c. To confirm the status of the drives in the newly installed drive enclosure, enter showpd -p -cg
<NewCageNumber> and showpd -i -p -cg <NewCageNumber>.
d. To restore data to the drives removed earlier, enter servicemag resume <NewCageNumber>
<MagNumber>.
NOTE:
Run this command once for each drive in the newly installed drive enclosure.
e. To monitor progress, enter servicemag status.
NOTE:
Depending on the data amount and system resources, the process can take several hours
to complete.
f. Replace the front bezel on the new drive enclosure.
To show specific status, use the servicemag status <CageNumber> <MagNumber>
command. To clear specific status of the failed drive enclosure, use the servicemag
clearstatus <CageNumber> <MagNumber> command.
g. To confirm the cage ID of the failed drive enclosure that was physically removed, enter
showcage.
h. To remove the entry of the old cage from the system information, enter servicecage remove
<OldCageNumber>.
i. To verify the current state of the system, enter checkhealth -detail.
18. Exit and log out of the session.
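The per-magazine servicemag cycle in steps 4e and 17d can be sketched as a small driver. This is a minimal sketch only, assuming a caller-supplied run_cli callable (for example, an SSH or pexpect wrapper, not shown here) that sends one CLI command and returns its output; the cage and magazine numbers are illustrative:

```python
# Sketch: drive the servicemag evacuate/restore cycle for a whole enclosure.
# run_cli is a caller-supplied callable; cage/magazine numbers are examples.

def evacuate_cage(run_cli, cage, magazines):
    # servicemag start must be issued once per magazine (drive) in the
    # enclosure being replaced, per the NOTE in step 4e.
    for mag in magazines:
        run_cli(f"servicemag start {cage} {mag}")
    return run_cli("servicemag status")

def restore_cage(run_cli, new_cage, magazines):
    # After the replacement enclosure is cabled and recognized, restore the
    # data per magazine with servicemag resume (step 17d).
    for mag in magazines:
        run_cli(f"servicemag resume {new_cage} {mag}")
    return run_cli("servicemag status")

# Dry run with a fake runner that only records the commands it would send:
issued = []
evacuate_cage(issued.append, cage=2, magazines=range(2))
print(issued)
# ['servicemag start 2 0', 'servicemag start 2 1', 'servicemag status']
```

The same dry-run pattern can be used to review the full command sequence with the customer before any data is moved.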
IMPORTANT:
The HPE 3PAR solid-state drives (SSDs) have a limited number of writes that can occur before
reaching a write-endurance limit for the SSD. This limit is expected to exceed the service life of the
HPE 3PAR StoreServ Storage system and is based on most configurations, I/O patterns, and
workloads. The storage system tracks all writes to an SSD to allow for proactively replacing an SSD
that is nearing the limit. If an SSD write-endurance limit is reached during the product-warranty
period, a replacement of the SSD follows the guidelines of the warranty. To prepare for an SSD
replacement after the product-warranty period has expired, Hewlett Packard Enterprise provides the
HPE 3PAR SSD Extended Replacement program. This program is available for eligible HPE 3PAR
SSDs.
IMPORTANT:
After initial storage system installation, to prevent damage to the drives, any HPE 3PAR StoreServ
20000 Storage system with LFF drives that require physical relocation must have all LFF drives
removed from the drive enclosures (cages). Clearly label and package all drives. Reinstall the drives
in the same drive enclosure locations after the move is complete. As a best practice, back up the
storage system data before relocation.
Replacing a drive
The storage system supports hard disk drives (HDD) and solid-state drives (SSD) in the following form
factors:
• Large form factor (LFF) drives
• Small form factor (SFF) drives
The drive replacement procedures are the same for all drives. Review and follow the drive guidelines
listed in Drive guidelines on page 70 before adding, removing, or replacing drives.
Procedure
1. Connect and log in to the HPE 3PAR Service Processor (SP).
See Connection methods for the SP on page 154.
2. Access the interactive CLI interface.
See Interfaces for the HPE 3PAR SP on page 156.
3. Set Maintenance Mode.
4. Perform preliminary maintenance checks and initiate servicing.
a. To verify the current state of the system, enter checkhealth -detail.
b. To identify the failed drive, enter showpd -i.
NOTE: If a storage drive fails, the system automatically runs the servicemag command in
the background. The servicemag command illuminates the blue drive LED to indicate a
fault and the drive to replace. Storage drives are replaced for various reasons and not
necessarily the result of a failure. In this case, the displayed output may not show errors. If
the replacement is a proactive replacement prior to a failure, enter servicemag start -pdid
<pdID> to initiate the removal of data from the drive. The system will store the
removed data on the spare chunklets.
c. To monitor progress, enter servicemag status.
NOTE: When servicemag successfully completes, the blue LED on the drive to be
replaced is illuminated.
d. To illuminate the blue service LED of the cage where the drive is located, enter locatecage
<cageID>.
5. Identify the drive enclosure by the blue service LED.
6. To remove the front bezel of the drive enclosure:
a. Press the release tab.
b. Pull it out from the front of the drive enclosure.
7. Identify the drive to replace. The blue service LED of the drive will be lit.
8. To remove the drive:
a. (1) Press the release button of the drive to open the latch handle, and extend the handle fully.
b. (2) Pull out the drive from the bay.
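The monitoring loop in step 4c (wait for servicemag to finish before pulling the drive) can be automated. The following is a minimal sketch, again assuming a run_cli callable; the "succeeded" completion marker and the poll interval are illustrative assumptions, not documented servicemag output:

```python
import time

def wait_for_servicemag(run_cli, poll_seconds=30, timeout_seconds=3600,
                        sleep=time.sleep):
    # Poll `servicemag status` until the output reports completion.
    # "succeeded" is an assumed completion marker; adjust to the real output.
    waited = 0
    while waited <= timeout_seconds:
        status = run_cli("servicemag status")
        if "succeeded" in status:
            return status
        sleep(poll_seconds)
        waited += poll_seconds
    raise TimeoutError("servicemag did not report completion in time")

# Stubbed runner: reports in-progress twice, then completion.
replies = iter(["running", "running", "servicemag succeeded"])
print(wait_for_servicemag(lambda cmd: next(replies), sleep=lambda s: None))
# servicemag succeeded
```

The injectable sleep parameter keeps the sketch testable without real delays; a production wrapper would use the default time.sleep.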
Service Processor
Replacing a physical Service Processor
Procedure
1. Connect and log in to the HPE 3PAR Service Processor (SP).
See Connection methods for the SP on page 154.
2. Halt the SP.
• With HPE 3PAR SP 4.x using SPMAINT: On the 3PAR Service Processor Menu, select 1 SP
Control/Status and then select option 3 Halt SP and confirm all prompts to halt the SP.
• With HPE 3PAR SP 5.0 using the HPE 3PAR Service Console (SC): From the main menu,
select Service Processor, and then select Actions > Shutdown.
3. On the front of the SP, verify that the Power LED is off.
4. Record all the locations of the cable connections to the SP before disconnecting all cables.
5. Remove the AC cords from the rear.
6. To remove the SP front bezel from the cabinet, (1) press the release tab and (2) pivot the bezel off
the front of the SP.
Figure 76: Removing the service processor
7. Loosen the two captive T25 Torx screws that secure the SP to the rack.
8. Support the failed SP from underneath and slide it out of the rack until it is fully extended.
9. Press the release buttons on the sides of the rails and remove the SP from the rails. Push the rack
rails into the rack.
10. Align the replacement SP with its shelf on the storage system chassis. Slide the SP into the cabinet
until the rails engage, and then pull back gently to verify that the rails are locked.
NOTE:
For power plug details, refer to HPE 3PAR StoreServ 20000 Storage Site Planning Manual.
WARNING:
Do not connect non-HPE 3PAR StoreServ Storage or unsupported components to PDUs.
Figure 79: 3PAR StoreServ 20450 rack with single-phase PDUs—North America and Japan
Procedure
Preparation:
1. Unpack the replacement PDU and place it on an ESD-safe mat.
2. Open or remove the rear door of the storage system.
Removal:
3. Set the main power breakers on the failed PDU to the OFF position.
Figure 83: Setting the power breakers to the OFF position, single-phase PDU
4. Unplug the main power cord of the PDU.
NOTE:
To remove the main power cord, access to the side of the cabinet may be required.
5. If necessary to gain access to the back of the PDU, remove the blank plates from the front of the
rack that cover the PDU rack unit (RU) location.
6. Disconnect the power cord retention straps securing the power cables to the PDU.
7. Disconnect the AC cords connecting the power extension bars to the failed PDU.
A1: PDU0-L1    B1: PDU1-L1
A2: PDU0-L2    B2: PDU1-L2
A3: PDU2-L1    B3: PDU3-L1
A4: PDU2-L2    B4: PDU3-L2
A5: PDU2-L3    B5: PDU3-L3
WARNING:
Do not connect non-HPE 3PAR StoreServ Storage or unsupported components to PDUs.
Procedure
Preparation:
1. Unpack the replacement PDU and place it on an ESD-safe mat.
Removal:
2. Open or remove the rear door of the storage system.
3. Set all six power breakers on the failed PDU to the OFF position.
Figure 95: Setting the power breakers to the OFF position, three-phase PDU
4. Unplug the main power cord of the PDU.
NOTE:
To remove the main power cord, access to the side of the cabinet may be required.
5. If necessary to gain access to the back of the PDU, remove the blank plates from the front of the
rack that cover the PDU rack unit (RU) location.
6. Disconnect the power cord retention straps securing the power cables to the PDU.
7. Disconnect the AC cords connecting the power extension bars to the failed PDU.
NOTE:
Note how each extension bar connects to the PDUs before removing them:
• By convention, black power cords connect PDU 0 to the A power extension bars and the
grey power cords connect PDU 1 to the B power extension bars.
• PDU 0 is connected to power extension bars A1, A2, A3, and A4.
• PDU 1 is connected to power extension bars B1, B2, B3, and B4.
A1: PDU0-L1    B1: PDU1-L1
A2: PDU0-L2    B2: PDU1-L2
A3: PDU0-L3    B3: PDU1-L3
A4: PDU0-L6    B4: PDU1-L6
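The wiring convention in the NOTE above can be captured as a table-driven check, so cabling can be cross-verified before and after the PDU swap. This is a minimal sketch: the outlet names mirror the table, and the A/B-to-PDU mapping follows the black/grey cord convention described in the note:

```python
# Sketch: three-phase PDU extension-bar wiring from the note above,
# as a lookup for cross-checking cables during PDU replacement.
WIRING = {
    "A1": "PDU0-L1", "A2": "PDU0-L2", "A3": "PDU0-L3", "A4": "PDU0-L6",
    "B1": "PDU1-L1", "B2": "PDU1-L2", "B3": "PDU1-L3", "B4": "PDU1-L6",
}

def pdu_for_bar(bar):
    # By convention, PDU 0 (black cords) feeds the A bars and
    # PDU 1 (grey cords) feeds the B bars.
    return 0 if bar.startswith("A") else 1

# Sanity-check that every table entry follows the convention:
for bar, outlet in sorted(WIRING.items()):
    assert outlet.startswith(f"PDU{pdu_for_bar(bar)}-")
print(pdu_for_bar("A3"), WIRING["A3"])   # 0 PDU0-L3
```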
WARNING:
Do not connect non-HPE 3PAR StoreServ Storage or unsupported components to PDUs.
The following figure shows a half-height PDU. Half-height PDUs have two power banks.
The following figure shows a three-quarters height PDU. It has three power banks.
Procedure
Preparation:
1. Unpack the replacement power distribution unit (PDU) and place it on an ESD-safe mat.
2. Open or remove the rear door of the storage system.
Removal:
3. Set the main power breakers on the failed PDU to the OFF position.
4. Unplug the main power cord of the PDU.
5. Confirm (or label) the power cords that plug into the power banks of the PDU. These cords connect
to the power supplies of the drive enclosures.
6. Remove all power cords.
7. Remove the screw that secures the green and yellow ground wire, and remove the ground wire from
the PDU. In the following figure, the ground wire is circled in green.
CAUTION:
Before servicing any component in the storage system, prepare an Electrostatic Discharge-safe
(ESD) work surface by placing an antistatic mat on the floor or table near the storage system. Attach
the ground lead of the mat to an unpainted surface of the rack. Always use a wrist-grounding strap
provided with the storage system. Attach the grounding strap clip directly to an unpainted surface of
the rack.
Model                 Nodes    Node IDs
20450                 2        0, 1
                      4        0, 1, 2, 3
20800, 20840, 20850   2        0, 1
                      4        0, 1, 2, 3
                      6        0, 1, 2, 3, 4, 5
                      8        0, 1, 2, 3, 4, 5, 6, 7
Procedure
1. Connect and log in to the HPE 3PAR Service Processor (SP).
See Connection methods for the SP on page 154.
2. Access the interactive CLI interface.
See Interfaces for the HPE 3PAR SP on page 156.
3. Set Maintenance Mode.
4. Remove the controller node and power supply blanks if required.
5. To install the node, (1) insert the controller node with the insertion handles fully extended into the
enclosure until it stops and then (2) press the insertion handles to close and fully engage the node.
Repeat the step for additional nodes.
NOTE: Insert the power supplies into the power supply trays in the enclosure.
A clicking sound indicates when the module is fully engaged.
9. Connect the power cables to the power supply and secure the cables with a retention strap.
10. Label the cables and then connect all the SAS cables to the peripheral components.
NOTE:
All SFP ports must contain an SFP transceiver and cable or a dust cover.
11. Remove the front bezel and a front blank if required.
12. Insert the BBUs.
NOTE:
• A clicking sound indicates when the module is fully engaged.
• Re-install the controller node front bezel after you install the second controller node.
14. Turn on the power to the lowest numbered controller node and allow the automatic node-rescue
process to load the OS on the node drives.
NOTE:
The Automatic Node Rescue process may take up to 30 minutes.
15. Verify that the controller node LEDs are in normal status, with a steadily blinking green LED.
NOTE:
Power on the second controller node only when you get a confirmation of the first node joining
the cluster.
16. Access the interactive CLI interface.
17. Perform preliminary maintenance checks and initiate servicing.
a. To verify that the node has joined the cluster, enter shownode -d.
cli% shownode -d
------------------------------------- Nodes -------------------------------------
CAUTION:
The PCI Adapters are not hot-swappable.
Procedure
1. Connect and log in to the HPE 3PAR Service Processor (SP).
NOTE:
• To illuminate the service LED when servicing a component, use the locatenode
command.
• The locatenode command confirms that the correct component is being serviced.
7. Ensure that the cables to the PCI adapter are properly labeled to facilitate reconnecting the cables
for later service actions.
8. Only for the addition of a 10Gb NIC adapter with HPE 3PAR File Persona running, delete File
Persona ports on the controller node. Otherwise skip this step.
IMPORTANT:
If HPE 3PAR File Persona is running, the 10Gb NIC adapter must be added as a pair. Add a
10Gb NIC adapter to the same slot in the other node of the File Persona node pair. Use the
same steps documented in this section.
a. To identify the File Persona ports, issue the showport -fs command.
b. Delete File Persona ports on the controller node. Issue the controlport fs delete -f
<N:S:P> command, where <N:S:P> is the node:slot:port.
Deleting File Persona ports on the controller nodes causes existing FPGs on the controller node
to fail over to the second node.
9. To halt the desired node, enter shutdownnode halt <nodeID>.
10. When prompted, enter yes to confirm halting of the node.
NOTE:
Wait up to 10 minutes for the node to halt. When the node halts, the node status LED blinks
green, and the node service LED displays blue.
11. To power off the node, set the node power switch to the OFF position.
CAUTION:
The PCI adapters are not hot pluggable. Power off the node.
12. Remove the new adapter from the protective packaging. The following power indicators represent (I)
On and (O) Off.
13. Remove the PCI filler panel from the designated slot.
14. Insert the PCI adapter into the slot and slide it in until it clicks and locks into place. Repeat the two
previous steps for additional adapters.
NOTE: When power is supplied to the node, it begins to boot. This process takes
approximately 10 minutes. The node becomes part of the cluster when the process is
complete.
17. Verify that the controller node LEDs are in normal status, with a steadily blinking green LED.
18. To verify that the node has joined the cluster, enter shownode -d.
19. Only for the addition of a 10Gb NIC adapter with HPE 3PAR File Persona running, configure
File Persona interfaces on the Ethernet ports of the adapter on the controller node. Otherwise skip
this step.
a. To check the status of File Persona, enter showfs. Wait until the state of the File Persona
<node> is shown with a status of Running.
b. Add back the File Persona ports that you deleted in step 8. Issue the controlport fs add
<N:S:P> command, where <N:S:P> is the node:slot:port for the ports in the adapter.
c. Add File Persona ports for the new adapter. Issue the controlport fs add <N:S:P>
command, where <N:S:P> is the node:slot:port for the ports in the new adapter.
20. To verify the current state of the system, enter checkhealth -detail.
21. Only for the addition of a 10Gb NIC adapter with HPE 3PAR File Persona running, restore any
failed-over FPGs to the proper controller node.
a. If checkhealth reports FPGs that are degraded because they have failed over, issue the
setfpg -failover <FPG_Name> command. This command restores the FPGs to the proper
controller node.
22. To verify that the PCI adapter is installed in the correct slot location, enter shownode -pci.
23. To verify that all ports are connected to the system and in a ready state, enter showport.
cli% showport
N:S:P Mode State ----Node_WWN---- -Port_WWN/HW_Addr- Type Protocol
1:5:3 initiator loss_sync 2FF70002AC000167 21530002AC000167 free FC
1:5:4 initiator ready 2FF70002AC000167 21540002AC000167 free FC
1:8:1 target ready 2FF70002AC000167 21810002AC000167 host FCoE
1:8:2 suspended config_wait 0000000000000000 0000000000000000 cna -
1:9:1 peer ready - 0002AC800059 rcip IP
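The check in step 23 can be scripted against output like the sample above. A minimal sketch follows: the sample output is embedded as a string, and the whitespace-delimited column layout (N:S:P first, State third) is assumed from the header row:

```python
# Screen `showport` output for ports that are not in the "ready" state.
SAMPLE = """\
N:S:P Mode State ----Node_WWN---- -Port_WWN/HW_Addr- Type Protocol
1:5:3 initiator loss_sync 2FF70002AC000167 21530002AC000167 free FC
1:5:4 initiator ready 2FF70002AC000167 21540002AC000167 free FC
1:8:1 target ready 2FF70002AC000167 21810002AC000167 host FCoE
1:8:2 suspended config_wait 0000000000000000 0000000000000000 cna -
1:9:1 peer ready - 0002AC800059 rcip IP
"""

def ports_not_ready(showport_text):
    """Return [(N:S:P, state), ...] for every port whose State is not 'ready'."""
    rows = showport_text.strip().splitlines()[1:]   # skip the header row
    flagged = []
    for row in rows:
        fields = row.split()
        nsp, state = fields[0], fields[2]           # N:S:P column 1, State column 3
        if state != "ready":
            flagged.append((nsp, state))
    return flagged

print(ports_not_ready(SAMPLE))
# [('1:5:3', 'loss_sync'), ('1:8:2', 'config_wait')]
```

In the sample, 1:5:3 (loss_sync) and 1:8:2 (config_wait) would be flagged for investigation before the service action is closed out.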
24. Confirm that all SAS cables are properly connected and verify that all LED statuses are illuminated
green.
25. To verify the current state of the system, enter checkhealth -detail.
26. To upgrade PCI adapters in additional controller nodes, repeat all the previous steps.
IMPORTANT:
Before starting another node upgrade process, wait approximately 10 minutes to allow
multipathing software processes to complete.
27. Exit and log out of the session.
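For the File Persona handling in steps 8 and 19, the per-port controlport fs commands can be generated from the adapter's node and slot. The following is a minimal sketch; the node, slot, and two-port count are illustrative assumptions, and the real values come from showport -fs on the system being serviced:

```python
# Sketch: build the File Persona port commands for one NIC adapter.
# Node/slot/port values below are examples, not taken from a real system.

def fs_port_commands(action, node, slot, ports=(1, 2)):
    """Return the controlport fs commands for each N:S:P on one adapter."""
    assert action in ("add", "delete")
    flag = " -f" if action == "delete" else ""   # delete is forced per step 8b
    return [f"controlport fs {action}{flag} {node}:{slot}:{port}"
            for port in ports]

# Delete the ports on node 0 before halting it, then re-add them afterward:
print(fs_port_commands("delete", node=0, slot=2))
print(fs_port_commands("add", node=0, slot=2))
```

Because File Persona adapters are added in pairs, the same helper covers the partner node by changing the node argument.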
More Information
Alert notifications from the SP on page 168
Check health action from the HPE 3PAR SP on page 159
Connection methods for the SP on page 154
Locate action from the SP on page 167
Maintenance mode action from the SP on page 167
Procedure
1. Follow industry-standard practices when handling disk drives. Internal storage media can be damaged
when drives are shaken, dropped, or roughly placed on a work surface.
2. When installing a disk drive, press firmly to make sure that the drive is fully seated in the drive bay and
then close the latch handle.
3. Always populate hard drive bays starting with the lowest bay number. Hewlett Packard Enterprise
requires a minimum of two drives of the same drive type.
4. Populate the drive types in pairs and evenly beginning at the bottom in slot 0 and progress from left to
right and bottom to top. See the following tables for examples.
CAUTION:
To prevent improper cooling and thermal damage, operate the enclosure only when all bays are
populated with either a component or a blank.
IMPORTANT:
When a drive is inserted in an operational enclosure, the drive LED flashes green to indicate that
the drive is seated properly and receiving power.
• Connect and log in to the HPE 3PAR Service Processor (SP).
See Connection methods for the SP on page 154.
• Access the interactive CLI interface.
See Interfaces for the HPE 3PAR SP on page 156.
• Set Maintenance Mode.
• Perform preliminary maintenance checks and initiate servicing.
◦ To verify that the new drive appears and the disk state is normal, enter showpd.
◦ Attach the front bezel to the front of the new drive enclosure.
◦ To verify the current state of the system, enter checkhealth -detail.
• Exit and log out of the session.
Procedure
1. Connect and log in to the HPE 3PAR Service Processor (SP).
See Connection methods for the SP on page 154.
2. Access the interactive CLI interface.
See Interfaces for the HPE 3PAR SP on page 156.
3. Set Maintenance Mode.
4. Perform preliminary maintenance checks and initiate servicing.
a. To verify the current state of the system, enter checkhealth -detail.
5. At the front of the system, remove the blank filler panels where the new drive enclosure will be
installed.
6. Position the left and right rack rails at the desired U position in the rack. Adjust the rails to fit the rack.
7. The bottom edge of the rails must align, at the front and back, with the bottom of the RETMA
boundary of the rack in the lowermost U position.
NOTE: The rails are marked Front L and Front R with an arrow indicating the direction of the
rail installation.
8. Use guide pins to align the shelf mount kit to the RETMA column holes.
9. To engage the rear, push the rail toward the back of the rack until the spring hook snaps into place.
10. To engage the front, pull the rail towards the front of the rack to engage the spring hook with the
RETMA column in the same manner as the rear spring hook.
NOTE:
Make sure that the respective guide pins for the square hole rack align properly into RETMA
column hole spacing.
You can use the same procedure to secure the left rail.
11. Secure the rear of the rack rail to the RETMA column with the square-hole shoulder screws provided
in the package.
WARNING:
Always use at least two people to lift an enclosure into the rack. If the enclosure is being loaded
into the rack above chest level, a third person must assist with aligning the enclosure with the
rails while the other two people support the weight of the enclosure.
14. To install the enclosure, (1) slide the enclosure into position on the rails and (2) secure the chassis
into the rack by tightening the captive screw behind the latch on the front left and right bezel ears of
the chassis.
CAUTION:
The front screw must be attached at all times when installed on the rack.
15. Attach the rear hold-down brackets by (1) sliding the tab with the arrow pointed forward into the
corresponding slot on the left and right sides of the rear of the chassis and (2) using the
black-headed thumbscrew to secure each bracket tightly to the rail.
782405-001 SPS-NODE Assembly V1; 16 core w/o HBA; Mem (20840 and 20x50)
1. Power supply
2. Drive enclosure
3. I/O module
4. Fan assembly
5. SSD drive
Figure 128: Drive enclosure components
1. Power supply
2. Drive enclosure
3. I/O module
4. Fan assembly
5. SSD drive
Figure 130: Drive enclosure components
Figure 132: Controller node LEDs (controller node enclosure rear view)
Table 28: Controller node LEDs (controller node enclosure rear view)
Off Connected
1. Ethernet LED
2. Activity LED
3. Link LED
4. Fault LED
5. Status LED
6. UID/Service LED
Figure 135: 2-port 10 Gb/s iSCSI host adapter LEDs
Green Solid Green Solid Green Solid Power on; 10 Gb/s link established; no activity
Green Solid Green Flashing Green Solid Power on; 10 Gb/s link established; receive/
transmit activity
Figure 139: Fan module LEDs (controller node enclosure front view)
Table 35: Fan module LEDs (controller node enclosure front view)
2 UID/Service
TIP:
This UID push button activates or deactivates the Blue UID LED on the
rear and front of the drive enclosure.
Locate UID
Off No power
1 Activity Green Solid • With amber Fault LED off, link at high speed with no
activity.
• With solid amber Fault LED, link at low speed with no
activity.
Green Flashing • With amber Fault LED off, link at high speed with activity.
• With solid amber Fault LED, link at low speed with
activity.
• With flashing amber Fault LED, locate requested.
2 Fault Amber Solid • With solid green Activity LED, link at low speed with no
activity.
• With flashing green Activity LED, link at low speed with
activity.
• With green Activity LED off, no link or no cable
connected.
Off • With solid green Activity LED, link at high speed with no
activity.
• With flashing green Activity LED, link at high speed with
activity.
Off No power
Off No power
Port Description
Off Deactivated
Physical SP:
The physical SP is a hardware device mounted in the system rack. If the customer chooses a physical
SP, each storage system installed at the operating site includes one. The physical SP is installed in
the same rack as the controller nodes and uses two physical network connections:
• The left, Port 1 (Eth0/Mgmt) requires a connection from the customer network to communicate with the
storage system.
• The right, Port 2 (Eth1/Service) is for maintenance purposes only and is not connected to the
customer network.
Training video about the HPE 3PAR SP 5.0 and HPE 3PAR
OS 3.3.1
A training video is available for the HPE 3PAR Service Processor (SP) 5.0 and the HPE 3PAR OS 3.3.1.
IMPORTANT:
HPE 3PAR SP 5.x requires HPE 3PAR OS 3.3.1 and later versions.
• HPE 3PAR Service Console (SC) interface: The HPE 3PAR SC interface is accessed when you log
in to the HPE 3PAR SP. This interface collects data from the managed HPE 3PAR StoreServ Storage
system in predefined intervals as well as an on-demand basis. If configured, the data is sent to HPE
3PAR Remote Support. A company administrator, Hewlett Packard Enterprise Support, or an
authorized service provider can also perform service functions through the HPE 3PAR SC. The HPE
3PAR SC replaces the HPE 3PAR Service Processor Onsite Customer Care (SPOCC) interface and
the HPE 3PAR SC functionality is similar to HPE 3PAR SPOCC.
• HPE 3PAR Text-based User Interface (TUI): The HPE 3PAR TUI is a utility on the SP 5.x software,
and it enables limited configuration and management of the HPE 3PAR SP and access to the HPE
3PAR CLI for the attached storage system. The intent of the HPE 3PAR TUI is not to duplicate the
functionality of the HPE 3PAR SC GUI, but to allow a way to fix problems that may prevent you from
using the HPE 3PAR SC GUI. The HPE 3PAR TUI appears the first time you log in to the Linux
console through a terminal emulator using Secure Shell (SSH). Prior to the HPE 3PAR SP
initialization, you can log in to the HPE 3PAR TUI with the admin user name and no password. To
access the HPE 3PAR TUI after the HPE 3PAR SP has been initialized, log in to the console with the
admin, hpepartner, or hpesupport accounts and credentials set during the initialization.
IMPORTANT:
HPE 3PAR SP 4.x requires HPE 3PAR OS 3.2.2.
• HPE 3PAR Service Processor Onsite Customer Care (SPOCC): The HPE 3PAR SPOCC interface is accessed when you log in to the HPE 3PAR SP. It is a web-based graphical user interface (GUI) available for support of the HPE 3PAR StoreServ Storage system and its HPE 3PAR SP.
CAUTION:
Many of the features and functions that are available through HPE 3PAR SPMaint can adversely affect a running system. To prevent potential damage to the system and irrecoverable loss of data, do not attempt the procedures described in this manual until you have taken all necessary safeguards.
• HPE 3PAR CPMAINT interface: The HPE 3PAR CPMAINT terminal user interface is the primary user
interface for the support of the HPE 3PAR Secure Service Agent as well as a management interface
for the HPE 3PAR Policy Server and Collector Server.
Procedure
1. From a web browser, browse to the HPE 3PAR Service Processor (SP) 5.x address: https://<sp_ip_address>:8443.
2. Enter the account credentials, and then click Login to gain access to the HPE 3PAR Service Console
(SC) interface.
3. On the HPE 3PAR SC main menu, select Systems.
4. On the Actions menu, select Start CLI session.
More Information
Connection methods for the SP on page 154
Procedure
1. Connect to the HPE 3PAR SP 5.x Linux console.
2. Log in to gain access to the HPE 3PAR TUI.
IMPORTANT:
When logging in using the admin or hpepartner accounts, a customer-supplied user ID and
password must be obtained for the storage system. The hpesupport account requires a strong
password to gain access to the SP, but no user ID or password is needed to access the storage
system from the SP.
3. From the HPE 3PAR TUI main menu, enter 7 for Interactive CLI/Maintenance Mode.
4. To start an interactive CLI session, enter 2 to select 2 == Open interactive CLI.
More Information
Connection methods for the SP on page 154
Accessing the interactive CLI interface from the SP 4.x SPMaint interface
The HPE 3PAR SPMaint interface of the HPE 3PAR Service Processor (SP) 4.x provides an HPE 3PAR
interactive CLI interface for issuing HPE 3PAR CLI commands.
Procedure
1. Connect to the HPE 3PAR SP 4.x Linux console.
2. Log in to gain access to the HPE 3PAR SPMaint interface.
3. From the HPE 3PAR SPMaint main menu, enter 7 for Interactive CLI for a StoreServ.
More Information
Connection methods for the SP on page 154
Procedure
• Issue the HPE 3PAR CLI checkhealth command without any specifier to check the health of all the
components that can be analyzed.
◦ The checkhealth command authority is Super, Service.
◦ The checkhealth command syntax is: checkhealth [<options> | <component>...].
The checkhealth command <options> include the following:
– The -list option lists all components that checkhealth can analyze.
– The -quiet option suppresses the display of the item currently being checked.
– The -detail option displays detailed information regarding the status of the storage system.
– The -detail node option checks the controller nodes for the presence of a touch file that
disables strong passwords.
– The -detail cabling option displays issues with cabling of drive enclosures.
– The -full option displays information about the status of the full system. This is a hidden
option and only appears in the CLI Hidden Help. This option is prohibited if the -lite option is
specified. Some of the additional components evaluated take longer to run than other
components.
◦ The checkhealth command <component> is the command specifier, which indicates the
component to check.
Examples:
To display both summary and detailed information about the hardware and software components, issue checkhealth -detail. Adding the node specifier (checkhealth -detail node) includes the node-specific details described above.
If there are no faults or exception conditions, the checkhealth command indicates that the system is healthy:
cli% checkhealth
Checking alert
Checking cabling
…
Checking vlun
Checking vv
System is healthy
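Because the healthy-system output above is plain text, a wrapper script can detect the result mechanically. The following Python sketch is illustrative only; the function is not part of the HPE 3PAR CLI, and it assumes output shaped like the sample above.

```python
def summarize_checkhealth(output: str) -> tuple[bool, list[str]]:
    """Parse plain-text checkhealth output captured from a CLI session.

    Returns (healthy, components), where components lists every item
    that appeared on a "Checking <component>" line, and healthy is True
    only if the final "System is healthy" summary line was present.
    """
    components = []
    healthy = False
    for line in output.splitlines():
        line = line.strip()
        if line.startswith("Checking "):
            components.append(line[len("Checking "):])
        elif line == "System is healthy":
            healthy = True
    return healthy, components
```

A script could run this over captured session text and raise an alert whenever healthy comes back False.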
With the <component> specifier, you can check the status of one or more specific components.
IMPORTANT:
The -svc option (alone or combined with -detail) displays information intended only for service users.
NOTE:
If a controller node or drive enclosure (cage) is down, the detailed output can be long.
To check for inconsistencies between the System Manager and kernel states, and for CRC errors on FC and SAS ports, use the -full option.
Procedure
1. Connect and log in to the HPE 3PAR Service Processor (SP) 4.x.
2. From the HPE 3PAR Service Processor Onsite Customer Care (SPOCC) interface main menu, click
Support in the left navigation pane.
3. From the Service Processor - Support page, under StoreServs, click Health Check in the Action
column.
4. A pop-up window appears showing a status message while the health check runs.
NOTE:
When running the Health Check using Internet Explorer, the screen might remain blank while
information is gathered. This process could take a few minutes before displaying results. Wait for
the process to complete and do not attempt to cancel or close the browser.
5. When the health check process completes, it creates a report and displays it in a new browser window.
Click either Details or View Summary to review the report.
6. Resolve any issues, and close the report window when you are done.
7. After the health check completes gathering the data, the HPE 3PAR SP displays a list of files to view.
Available files
1 ==> /sp/prod/data/files/1300338/status/110420.101029.all
2 ==> /sp/prod/data/files/1300338/status/110420.101029.det
3 ==> /sp/prod/data/files/1300338/status/110420.101029.err
4 ==> /sp/prod/data/files/1300338/status/110420.101029.sum
8. To view the available files, enter the corresponding number, and then press Enter to continue.
9. Select the number corresponding to the data file with the .all extension and press Enter. After the
file is reviewed, press Enter to continue, and then select option 0 to exit health check.
NOTE:
The HPE 3PAR SPMaint interface uses the more command to view files. To move to the next page, press the spacebar. After viewing the contents of the file, press Enter to exit, and then select 0 (Abort Operation) to return to the previous menu. After you return to the previous menu, the report is discarded. To view the health status again, run the health check again.
Procedure
1. Connect and log in to the HPE 3PAR SP 4.x.
2. From the HPE 3PAR SPMaint main menu, enter 7 for Interactive CLI for a StoreServ.
3. To select your storage system, enter 1.
4. If you are prompted to turn on Maintenance Mode, enter y. The prompt message states Do you
wish to turn ON maintenance mode for StoreServ ###### before performing any
CLI operations? (y or n).
Procedure
1. Connect and log in to the HPE 3PAR Service Processor (SP) 4.x.
2. From the HPE 3PAR Service Processor Onsite Customer Care (SPOCC) main menu, select Support
in the left navigation pane.
3. From the Service Processor - Support page, under StoreServs, select Locate Cage in the Action
column.
When you select Locate Cage for an identified storage system, the HPE 3PAR SP queries the storage
system to determine available drive enclosures (cages), and then prompts you to select the cage to
locate. After you select the cage, the LEDs on the cage flash amber for 30 seconds.
Alert notifications in the HPE 3PAR SP 5.0 Service Console (SC): In the Detail pane of the HPE 3PAR SC interface, an alert notification is displayed in the Notifications box.
Views (1)—The Views menu identifies the currently selected view. Most List panes have several views that you can select by clicking the down arrow.
Actions (2)—The Actions menu allows you to perform actions on one or more resources that you have
selected in the list pane. If you do not have permission to perform an action, the action is not displayed in
the menu. Also, some actions might not be displayed due to system configurations, user roles, or
properties of the selected resource.
Notifications box (3)—The notifications box is displayed when an alert or task has affected the resource.
Resource detail (4)—Information for the selected view is displayed in the resource detail area.
Browser warnings
When connecting to the HPE 3PAR Service Processor (SP) IP address, you might receive a warning from
your browser that there is a problem with the security certificate for the website, that the connection is not
private, or the connection is not secure. To continue to the site, clear the warning.
More Information
Clear Internet Explorer browser warning on page 169
Clear Google Chrome browser warning on page 170
Clear Mozilla Firefox browser warning on page 171
• Beginning with HPE 3PAR SP 5.0 for the service processor, time-based or encryption-based
passwords are implemented for the support accounts used with the SP.
• Beginning with HPE 3PAR OS 3.2.2 for the storage system, time-based or encryption-based
passwords are implemented for the support accounts used with the storage system.
Customers increasingly require the ability to have local control over all passwords on their storage
system, have assurance that passwords do not remain static for long periods of time, and need an audit
trail of access to the credentials. The time-based (default) or encryption-based password options allow
the customer control over the passwords used for access to their system by approved service providers.
NOTE:
Dark sites will likely want to change to the encryption-based mode. Hewlett Packard Enterprise can
provide information to Dark Site customers under a Confidential Disclosure Agreement (CDA)
regarding what is contained in the ciphertext export. It is necessary to communicate with such
customers in advance to work out an acceptable process to choose the correct mode, and to help
them craft a process that suits their needs.
For example, the Department of Defense sites are likely to switch to the encryption-based
(ciphertext) mode and possibly export the ciphertext to HPE Support in advance of the need for
support assistance so that this security measure is already in place before an escalation event
occurs. Either in advance or during the escalation, the ciphertext is supplied to HPE Support,
decrypted, and the credential communicated to the Hewlett Packard Enterprise approved service
provider. After the escalation concludes, the customer regenerates a new ciphertext to make the
prior password unusable.
Password considerations based on HPE 3PAR Service Processor (SP) software upgrades
IMPORTANT:
Prior to upgrading the HPE 3PAR SP software for all sites (including dark sites), discuss and plan
the upgrade with the customer.
spvar
• Password type: Static password; the administrator sets and changes it.
• Access: SPOCC through a web browser; SPMaint through a physical or virtual console; SPMaint through SSH.
• Users and purpose: Only HPE personnel and authorized service providers, for service and diagnostic functions.
More Information
Interfaces for the HPE 3PAR SP on page 156
console
• Password type: Time-based or encryption-based password. The administrator sets the password option through CLI commands; for the encryption-based password, the administrator retrieves/exports the ciphertext (blob) through CLI commands.
• Access: The node's serial console.
• Users and purpose: Only HPE Support and authorized service providers, for service and diagnostic functions.
root
• Password type: Time-based or encryption-based password. The administrator sets the password option through CLI commands; for the encryption-based password, the administrator retrieves/exports the ciphertext (blob) through CLI commands.
• Access: The Linux shell on the storage system.
• Users and purpose: Only HPE Support and authorized service providers, for service and diagnostic functions.
Prerequisites
To use HPE StoreFront Remote, you must have access to HPE StoreFront Remote with privileges for
secure password access.
Procedure
1. Log in to HPE StoreFront Remote (www.storefrontremote.com) using your HPE email address and password.
2. From the HPE StoreFront Remote main menu, select Request Secure Password.
3. In the Request Secure Password window, select the Password Type: Time-Based Password or
Encryption-Based Password.
4. If Encryption-Based is selected, paste the ciphertext (blob) provided by the customer in the
Ciphertext box. The text must include the begin and end tokens.
5. For the Product Specifier drop-down list, select either Service Processor or StoreServ.
6. For the Target User ID drop-down list, select the associated account.
7. For a Service Processor account, enter the SP values in the SP ID field and the SP Model field.
Make sure the SP model selected/entered matches the SP model displayed on the SP TUI or SP
HPE 3PAR Service Console.
If you receive a warning message that states the device hasn't called home, ignore the warning and
proceed.
8. In the CRM Case Number field, enter the support case number associated with the support activity.
If no case number is associated with the support activity, a warning message appears after you click
Submit Password Request, which you can override by clicking Yes, Override CRM Validation and
Force This Request.
9. In the Beneficiary Email field, enter an email address for the approved service provider who will be
using the password.
10. Click Submit Password Request.
11. Click Show Password, and then copy the password to the clipboard.
12. Click Close.
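The required fields in the steps above can be summarized as a pre-submission checklist. The Python sketch below is purely illustrative: the field names are hypothetical stand-ins, and the real validation is performed by the SFRM web form itself.

```python
def check_password_request(req: dict) -> list[str]:
    """Return a list of problems with a secure-password request.

    Field names (password_type, ciphertext, crm_case, beneficiary_email)
    are hypothetical stand-ins for the SFRM form fields described above.
    """
    problems = []
    if req.get("password_type") not in ("time-based", "encryption-based"):
        problems.append("password type must be time-based or encryption-based")
    if req.get("password_type") == "encryption-based" and not req.get("ciphertext"):
        problems.append("encryption-based requests need the customer ciphertext (blob)")
    if not req.get("crm_case"):
        problems.append("no CRM case number: submission requires an explicit override")
    if not req.get("beneficiary_email"):
        problems.append("beneficiary email is required for the audit trail")
    return problems
```

An empty result means the request carries everything the procedure calls for; anything else flags a field to fix before clicking Submit Password Request.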
Character  Name            Pronunciation
~          Tilde           TEEL-DUH
1          One             WUN
!          Exclamation     ECKS-KLA-MAY-SHUN
2          Two             TOO
@          At sign         AT-SINE
3          Three           THREE
#          Pound sign      POWND-SINE
4          Four            FORE
5          Five            FIVE
%          Percent sign    PER-SENT-SINE
6          Six             SICKS
^          Caret           CARE-AT
7          Seven           SEV-EN
8          Eight           AYT
*          Asterisk        ASS-TER-ISK
9          Nine            NINE
0          Zero            ZEE-ROH
-          Dash            DASH
_          Underline       UN-DER-LINE
=          Equal sign      EE-KWAL-SINE
+          Plus sign       PLUS-SINE
;          Semicolon       SEM-EE-COL-UN
:          Colon           COL-UN
‘          Quote           KWOT
,          Comma           COM-AH
Issue: A customer cannot recall a service or support password, or the tpdtcl is broken and a 3PAR CLI command can't be executed.
Solution: You can display the method, current hour value, and encryption ciphertext values for both root and console on the serial console of the storage system. To display these items, connect to the serial console, and at the login prompt, enter exportcreds for the userid.

Issue: The time of day on the storage system is more than one hour out of sync with real time.
Solution: If the storage system clocks are 60 or more minutes out of sync with real time, the password generated at Hewlett Packard Enterprise might be unusable or have a reduced lifetime. If the passwords are unusable, use the exportcreds user to export the current time of day. Provide that value, along with the user's correct time of day, to HPE Support and request to have it escalated to Level 4, where tools exist to generate a password for the correct time. Alternatively, the customer can correct the time on the storage system or SP, or they can switch the method to ciphertext and then export the ciphertext for decryption at Hewlett Packard Enterprise. In either event, make it a priority to fix the time sync problem.

Issue: I do not have access to HPE StoreFront Remote (SFRM) or the Strong Password Generation tool.
Solution: If your job description requires you to have access, you can request access. Only users whose job functions require access are eligible, including:
• Tech Support engineers who have responsibility for supporting HPE 3PAR devices in the field
• Tech Support personnel who provide backline support to CEs in the field
• Certain internal software partners
• QA and internal HPE 3PAR engineering users

Issue: The generation tool requires a CRM case number.
Solution: This CRM case number is an audit requirement. When you submit your request, your CRM case number is validated to ensure that it corresponds to an open case. If not, you are prompted to send your request anyway, overriding the validation. Requests without an open case are recorded in an elevated security log. Be sure that an open case describes an escalation at a customer site that requires use of the generator.

Issue: The generation tool requires a beneficiary email.
Solution: This beneficiary email is an audit requirement. This field identifies the actual user of the password being generated, as opposed to yourself if you are using the tool on behalf of another (for example, over the phone for a remote on-site engineer). It is expected that you have verified the identities of individuals and that they are authorized to receive passwords. If you are using the password yourself, the email field is pre-populated with your email.

Issue: I cannot create a password for 3paradm in the SFRM tool.
Solution: The storage system only implements these passwords for the root and console users. 3paradm is an account controlled by the customer.

Issue: A customer requests the account password.
Solution: Do not share account passwords with customers. These passwords are reserved for Hewlett Packard Enterprise Support employees only. When in doubt, escalate.
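The time-sync guidance above reduces to a simple rule: 60 or more minutes of clock skew can make a time-based password unusable, and smaller skews may still shorten its lifetime. A minimal sketch of that check (the 60-minute threshold comes from the guidance above; the function itself is illustrative and is not HPE's password algorithm):

```python
def time_based_password_usable(system_epoch: float, real_epoch: float,
                               max_skew_minutes: int = 60) -> bool:
    """True if the array clock is close enough to real time for a
    time-based password generated against real time to be usable.
    Skew of max_skew_minutes (default 60) or more is the failure case;
    smaller skews may still reduce the password's lifetime.
    """
    return abs(system_epoch - real_epoch) < max_skew_minutes * 60
```

If this check fails, the remedies above apply: have HPE Level 4 generate a password for the exported time, or switch the system to the ciphertext method.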
Power cables
Use the following applicable diagrams to connect the power cables.
NOTE:
The international expansion rack three-phase power configuration has two PDUs.
Data cables
This section describes the configuration options available for connecting storage drive enclosures to an
HPE 3PAR StoreServ 20000 Storage system:
• Direct Connect Cable Configuration
• Daisy-Chained Cable Configuration
NOTE:
Converting from one configuration option to the other is not supported.
The drive enclosures connected to node pair 0/1 would be labeled in the rack as “Node 0/1 E0” through
“Node 0/1 E11.”
Use the procedure that follows to connect data cables between controller nodes and drive enclosure I/O modules. The tables under steps 1 and 2 list the cable connections in the recommended order of slot and port use for controller nodes 0 and 1, with SAS cards located in slots 0, 1, and 2 of each node.
NOTE:
If a SAS card does not exist in a particular slot (slot 0, for example), proceed down the list to the
next available slot and port.
Procedure
1. Connect node‐0 (or the even numbered node in a node pair) to the DP‐1 port on the red color-
coded I/O module (module 0) of the drive enclosures in the following sequence for slot and port usage,
for up to 12 drive enclosures:
Order of Use Controller Node Port Red Node Loop to Drive Enclosure
2. Connect node‐1 (or the odd numbered node in a node pair) to the DP‐1 port on the green color-
coded I/O module (module 1) of the drive enclosures in the following sequence for slot and port usage,
for up to 12 drive enclosures:
Order of Use Controller Node Port Green Node Loop to Drive Enclosure
NOTE:
Only two drive enclosures are supported on each daisy-chained loop.
Use the following guidelines to connect the data cables between the controller node ports and drive
enclosure I/O modules.
Procedure
1. Connect the drive enclosure pair to the red color-coded (even numbered) node of the controller node
pair:
a. Connect the port on the controller node to the lower drive enclosure, red color-coded I/O module 0
(IOM-0), DP-1, as illustrated by line 1a in Daisy-Chained Cable Connections.
b. Connect DP-2 of that same I/O module (the red color-coded one) of the lower drive enclosure to
the red color-coded I/O module 0, DP-1 of the upper drive enclosure in the pair, as illustrated by
line 1b.
2. Connect the drive enclosure pair to the same slot and port number on the green color-coded (odd
numbered) node of the controller node pair:
a. Connect the port on the controller node to the upper drive enclosure, green color-coded I/O module
1 (IOM-1), DP‐1, as illustrated by line 2a in Daisy-Chained Cable Connections.
b. Connect DP-2 of that same I/O module (the green color-coded one) of the upper drive enclosure to
the green color-coded I/O module 1, DP‐1 of the lower drive enclosure in the pair, as illustrated by
line 2b.
1. Slot 2, Port 1
2. Slot 1, Port 1
3. Slot 0, Port 1
4. Slot 2, Port 3
5. Slot 1, Port 3
6. Slot 0, Port 3
7. Slot 2, Port 2
8. Slot 1, Port 2
9. Slot 0, Port 2
For a close-up view to help identify those slot and port locations, see Controller Node Slot and Port
Details.
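The nine-entry ordering above follows a simple pattern: ports are consumed in the order 1, 3, 2, and within each port the slots are used highest-first (2, 1, 0). A short illustrative Python sketch (not an HPE tool) that reproduces the list:

```python
def controller_port_order() -> list[tuple[int, int]]:
    """Recommended (slot, port) order for daisy-chained cabling:
    DP ports in the order 1, 3, 2; slots 2, 1, 0 within each port."""
    return [(slot, port) for port in (1, 3, 2) for slot in (2, 1, 0)]
```

Generating the sequence this way makes it easy to verify a rack's cabling against the recommended order during an audit.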
Figure 160: Controller Node Slot and Port Details
Drive Enclosure Cable Ports provides a close-up view of ports DP‐1 and DP-2 on each of the two I/O
modules of a drive enclosure. The data cable procedures for the daisy-chained and direct connect
configurations specify how to use those ports on the red color-coded I/O module (module 0) and the
green color-coded I/O module (module 1) for your configuration.
Each controller node has a built-in node-rescue network that connects the nodes in the system into a cluster through the node chassis. This cluster connection allows a rescue to occur between an active node in the cluster and a replacement or new node added to the storage system. This rescue is called a Node-to-Node Rescue and removes the need to connect the service processor (SP) for the rescue.
Node-to-Node Rescue is performed over fixed physical Ethernet connections through the backplane.
There are two backplane Ethernet connections per node and consequently there are two rescuers per
node.
For example: With a 4-node storage system, a node in an immediately adjacent slot must rescue the new
replacement node. In the following diagram, a node installed in a slot diagonally across cannot rescue the
new node. For example, nodes 0 or 3 can rescue node 2. Node 1 cannot rescue node 2.
(4 node)
0 ---- 1
| |
2 ---- 3
With an 8-node storage system, a node installed in a slot immediately above or below a node can rescue
it. A node in the lowest numbered node pair (0/1) and highest numbered node pair (6/7) can also be
rescued by its partner node. For example, in the following diagram, node 1 or node 2 can rescue node 0.
Only node 3 and node 7 can rescue node 5.
(8 node)
0 ---- 1
| |
2 3
| |
4 5
| |
6 ---- 7
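The adjacency rules in the two diagrams can be expressed compactly. The Python sketch below is an illustration of the rules as stated above (it is not an HPE tool): nodes sit in a two-column grid, a node directly above or below can always rescue, and the partner in the same node pair can rescue on a 4-node system but only in the lowest (0/1) or highest (6/7) pair of an 8-node system.

```python
def valid_rescuers(node: int, total_nodes: int) -> set[int]:
    """Nodes that can perform a Node-to-Node Rescue of <node>.

    Layout assumption, taken from the diagrams above: node n sits at
    grid position (row n // 2, column n % 2).  Vertical neighbors can
    always rescue; the pair partner can rescue on a 4-node system, or
    in the top (0/1) and bottom (6/7) pairs of an 8-node system.
    """
    row, col = divmod(node, 2)
    top, bottom = 0, total_nodes // 2 - 1
    rescuers = set()
    for other in range(total_nodes):
        r, c = divmod(other, 2)
        if c == col and abs(r - row) == 1:
            rescuers.add(other)        # directly above or below
        elif r == row and c != col:
            if total_nodes == 4 or r in (top, bottom):
                rescuers.add(other)    # pair partner, where allowed
    return rescuers
```

This reproduces the worked examples in the text: on a 4-node system, nodes 0 and 3 can rescue node 2 (node 1, diagonal, cannot); on an 8-node system, node 0 can be rescued by nodes 1 and 2, while node 5 can be rescued only by nodes 3 and 7.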
Based on the circumstance for needing a Node-to-Node Rescue, there are two process options:
NOTE:
In rare instances, a new node drive is not recognized as being blank, which prevents the start of the
Automatic Node-to-Node Rescue process. If this occurs, manually initiate the Node-to-Node Rescue
by issuing the CLI startnoderescue -node <nodeID> command, where <nodeID> is the new
or replacement node.
Initiate the Manual Node-to-Node Rescue process using the HPE 3PAR CLI:
Before powering on the replacement controller node, issue the CLI startnoderescue -node <nodeID> command, where <nodeID> is the replacement node.
• To monitor the progress with more details, connect a serial cable to the Service port on the node that
is being rescued.
SP-to-Node rescue
SP-to-Node rescue, also known as “all nodes down rescue,” is performed by HPE Support personnel and
only under the guidance of Level 3 support.
Perform the SP-to-Node rescue procedure if all nodes in the HPE 3PAR system are down and must be
rebuilt. For individual node replacement or node-drive rebuilding, use the "node-to-node" rescue
procedure on the StoreServ array.
The SP-to-Node rescue procedure involves the following:
• Nodes are rescued one at time using either the public Ethernet connection (eth0), or for rescues being
performed from Physical SPs, a private network is used. For a private network, connect the Ethernet
port on the node being rescued to the second Ethernet port (eth1) on the Physical SP.
NOTE: Systems with encrypted node drives require extra steps to recover and restore. Each
node with encrypted node drives must be rescued twice to allow encryption to be restored. HPE
3PAR 8000, 9000, and 20000 StoreServ systems support encrypted node drives.
• Only the selected 3PAR OS version will be restored. No additional patches will be installed as part of
the rescue process. Install these patches after a successful rescue.
• While the rescue is being performed, the services that are required to perform the SP-to-Node rescue are started and the firewall is opened accordingly.
• After all the nodes have been rescued, the SP must be “de-configured” to restore the firewall to its
previous state and to terminate the services that were initiated for the rescue.
NOTE:
The shared iLO port may also be used to perform an SP-to-Node Rescue. For more information,
see SP-to-Node rescue on page 200.
• Service port: 10.255.155.54 (same static private IP address that is available for other platform
models)
Use the following procedure to access the iLO from a laptop for the DL120.
Procedure
1. Connect a laptop to the DL120.
NOTE:
A direct connection with a straight or crossover Ethernet cable is possible but not recommended. Hewlett Packard Enterprise recommends placing a small private switch between the DL120 and the laptop so that the laptop does not lose its network connection during the build process: when the DL120 resets, its NIC port resets and drops the link, and that connection loss can cause the software load process to fail.
Any personal switch with four to eight ports is supported, such as the HPE 1405-5G Switch (J97982A), which is available as a non-catalog item from HPE SmartBuy.
2. Log in to the iLO port using SSH.
NOTE:
The iLO port is also accessible using HTTPS. To log in to iLO using HTTPS, open an Internet
Explorer web browser and type https://10.255.155.52 into the address bar to launch iLO
in the browser.
a. Open a console terminal program (such as PuTTY) and connect to 10.255.155.52 using SSH on port 22.
b. Supply iLO credentials (Administrator/PASSWORD) at the login prompt to open the iLO command
console prompt </>hpiLO->.
Procedure
1. Log in to the SP console as spvar, if you are not already logged in.
2. Select option 4 StoreServ Product Maintenance.
3. Select option 11 Node Rescue.
4. Follow the SP-to-Node rescue instructions on the dialog script that is presented.
5. After the final node has rebooted and joined the cluster, log in to any node console as the console
user. Select option 11 Finish SP-to-Node rescue procedure.
6. Allow about 5 minutes for the cluster to synchronize and networking to restore.
Use the showsys and shownet CLI commands to verify that the cluster is running.
7. On the SP console, follow the prompts to complete and de-configure the SP-to-Node rescue
procedure.
Procedure
1. Log in to the SP console as spvar, if you are not already logged in.
2. Select option 4 StoreServ Product Maintenance.
3. Select option 11 Node Rescue.
4. Follow the SP-to-Node rescue instructions on the dialog script that is presented, but do not continue
after the last node has been rescued. DO NOT press Enter on the SP console.
5. Log in to any node console as root.
6. To determine the cluster master, enter the command clwait.
7. Log in as root on a non-master node console and issue the command controlpd revertnode
<X:X>, where X:X is the node:drive to be wiped out.
Each node drive must be wiped out to clear the node drives and allow the cluster to form.
For example, on a system with two node drives, to wipe out both drives in node 3, use the command
controlpd revertnode 3:0 immediately followed by the command controlpd revertnode
3:1.
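Step 7 above can be scripted as a command generator. The sketch below is illustrative only: the number of node drives per node varies by system, so two drives are assumed here to match the example.

```python
def revertnode_commands(node: int, drives_per_node: int = 2) -> list[str]:
    """Build the controlpd revertnode commands needed to wipe every
    node drive in <node>, in the node:drive form the CLI expects."""
    return [f"controlpd revertnode {node}:{d}" for d in range(drives_per_node)]
```

Generating the full list up front helps ensure that every node drive is reverted back-to-back, as the procedure requires, before the node reboots.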
8. Allow the node to reboot automatically after about 30 seconds.
9. Use CTRL -WHACK to get the node back at the Whack> prompt.
10. To rescue the node a second time, use the SP to node rescue procedure.
11. Perform steps 3 through 6 on each non-master node.
12. After all non-master nodes have been rescued the second time, wipe out the drives on the master
node.
13. The master node is automatically rescued using node to node rescue.
14. After the final node has rebooted and joined the cluster, wait about 5 minutes and then log in to the
console of the current master node and select 11. Finish SP-to-Node rescue procedure.
15. Allow about 5 minutes for the cluster to synchronize and networking to restore.
Use the showsys and shownet CLI commands to verify that the cluster is running.
16. On the SP console, follow the prompts to complete and de-configure the SP-to-Node rescue
procedure.
Procedure
1. Log in to the SP console as hpesupport, if you are not already logged in.
2. Enter the CLI command sp2node.
3. Follow the SP-to-Node rescue instructions on the dialog script that is presented.
4. After the final node has rebooted and joined the cluster, wait about 5 minutes and then log in to any
node console as the console user and select 11. Finish SP-to-Node rescue procedure.
5. Allow about 5 minutes for the cluster to synchronize and networking to restore. Use the showsys and
shownet CLI commands to verify that the cluster is running.
6. On the SP console, follow the prompts to complete and de-configure the SP-to-Node rescue
procedure.
Procedure
1. Log in to the SP console as hpesupport, if you are not already logged in.
2. Enter the CLI command sp2node.
3. Follow the SP-to-Node rescue instructions on the dialog script that is presented, but do not continue
after the last node has been rescued. DO NOT press Enter on the SP console.
4. Log in to any node console as root.
5. To determine the cluster master, enter the command clwait.
6. Log in as root on a non-master node console and issue the command controlpd revertnode
<X:X>, where X:X is the node:drive to be wiped out.
Each node drive must be wiped out to clear the node drives and allow the cluster to form.
For example, on a system with two node drives, to wipe out both drives in node 3, use the command
controlpd revertnode 3:0 immediately followed by the command controlpd revertnode
3:1.
7. Allow the node to reboot automatically after about 30 seconds.
8. Use CTRL -WHACK to get the node back at the Whack> prompt.
9. To rescue the node a second time, use the SP-to-Node rescue procedure.
10. Perform steps 3 through 6 on each non-master node.
11. After all non-master nodes have been rescued the second time, wipe out the drives on the master
node.
12. The master node is automatically rescued using node to node rescue.
13. After the final node has rebooted and joined the cluster, wait about 5 minutes and then log in to the
console of the current master node and select 11. Finish SP-to-Node rescue procedure.
14. Allow about 5 minutes for the cluster to synchronize and networking to restore.
WARNING:
Following de-installation of strong-password-enabled HPE 3PAR StoreServ storage systems, the
console user account password will revert to its static password.
NOTE:
If you are planning to reinstall and use the existing license, record the existing license before
performing the uninstall by using the showlicense and showlicense -raw commands.
NOTE:
In this case, complete only steps 1 to 11 of Uninstalling the system on page 209, and then
begin again with the system installation procedures to reinstall the storage system.
System Inventory
To complete the system inventory, record the following information for each system to be uninstalled:
• Customer name
• Site information
• System serial numbers. Issue the showinventory CLI command.
• Software currently running on the system. Issue the following CLI commands to obtain the listed
information:
◦ HPE 3PAR Operating System version —
showversion –b –a
◦ Drive cage firmware version —
showcage
◦ Disk drive firmware version —
showpd –i
◦ HPE 3PAR CBIOS version —
shownode -verbose
◦ Features licensed on the array —
showlicense
◦ Raw license key for licensed features on the array —
showlicense -raw
• Storage system hardware configuration
◦ Number of cabinets
◦ Number of controller nodes
◦ Amount of data cache in the controller nodes —
shownode
◦ Amount of control cache in the controller nodes —
shownode
◦ Number and type of Fibre Channel adapters in each node —
showport -i
◦ Number of drive cages—
showcage
(also used above for cage firmware)
◦ Number of drive magazines —
showcage –d
◦ Number and sizes of drives on the magazines —
showpd
• Physical condition of system hardware and cabinet (note presence of scratches, dents, missing
screws, broken bezels, damaged ports, and other visible anomalies)
• Destination address or addresses and list of the equipment going to each address
NOTE:
In this and other chapters, the command-line examples use bold type to indicate user input and
angle brackets (< >) to denote variables. Examples may not match the exact output of any
particular system.
NOTE:
If you are planning to reinstall using the existing license, record the license before completing this
uninstall procedure by using the showlicense and showlicense -raw commands.
Procedure
1. Connect the laptop to the highest numbered controller node with a serial cable.
2. Using a terminal emulator program, such as PuTTY, set your maintenance laptop console to the
following settings.
Setting Value
Data bits 8
Parity None
Stop bits 1
WARNING:
Proceeding with the deinstallation causes complete and irrecoverable loss of data on the
storage system.
5. A warning message appears to indicate that all data on this system will be lost. Enter y to begin
the deinstallation process; the system reboots.
WARNING:
Running this script will result in complete loss of data on this system.
Are you sure you want to continue? (y/n)
y
3PAR(TM) InForm(R) OS <version> <sernum>-<nodeID> ttyS0
(none) login: console<password>
NOTE:
The time required for all chunklet reinitializations during deinstallation depends on the type,
size, and number of disks. In the example below, the estimate is 71 minutes.
NOTE:
If you do not wait for the chunklets to be initialized, data still resides on the disks but cannot be
easily accessed. When the chunklets are initialized, zeros are written over the existing data.
If you require additional assistance, contact HPE 3PAR Technical Support.
1 SP Main
3PAR Service Processor Menu
1 ==> SP Control/Status
2 ==> Network Configuration
3 ==> StoreServ Configuration Management
4 ==> StoreServ Product Maintenance
5 ==> Local Notification Configuration
6 ==> Site Authentication Key Manipulation
7 ==> Interactive CLI for a StoreServ
X Exit
10. From the StoreServ Configs menu, enter 4 for Remove a StoreServ, and then press ENTER.
3 StoreServ Configs
3PAR Service Processor Menu
1 SP Main
HP 3PAR Service Processor Menu
1 ==> SP Control/Status
2 ==> Network Configuration
3 ==> StoreServ Configuration Management
4 ==> StoreServ Product Maintenance
5 ==> Local Notification Configuration
6 ==> Site Authentication Key Manipulation
7 ==> Interactive CLI for a StoreServ
X Exit
13. From the SP Control menu, enter 3 to select Halt SP and press ENTER.
1 SP CONTROL
HP 3PAR Service Processor Menu
SP Control Functions
14. Enter Y to confirm that you want to halt and power off the SP.
1.3.1 SP SHUTDOWN
HP 3PAR Service Processor Menu
Confirmation
15. Set all power breakers on the PDUs to the OFF position.
CAUTION:
As a safety precaution, all drive magazines must be properly grounded before being removed
from the drive chassis.
16. Unplug the system main power cords.
17. For system and drive expansion cabinets, coil all main power cords and strap them along the
mounting rails on the side of the cabinet panel. Use a cable tie wrap to secure the cords inside the
rack.
18. Disconnect all external connections from the host computers and drive cage expansion racks to the
system and remove the cables from the rack. Leave the internal Fibre Channel and SP connections
intact if possible.
19. Insert dust plugs into all open system Fibre Channel ports and secure all Fibre Channel, Ethernet,
and serial cables remaining inside the rack.
Procedure
1. Connect the laptop to the highest numbered controller node with a serial cable.
2. Using a terminal emulator program, such as PuTTy, set your maintenance laptop console to the
following settings.
Setting Value
Data bits 8
Parity None
Stop bits 1
WARNING:
Proceeding with the deinstallation causes complete and irrecoverable loss of data on the
storage system.
5. A warning message appears to indicate that all data on this system will be lost. Enter y to begin
the deinstallation process; the system reboots.
WARNING:
Running this script will result in complete loss of data on this system.
Are you sure you want to continue? (y/n)
y
NOTE:
The time required for all chunklet reinitializations during deinstallation depends on the type,
size, and number of disks. In the example below, the estimate is 71 minutes.
NOTE:
If you do not wait for the chunklets to be initialized, data still resides on the disks but cannot be
easily accessed. When the chunklets are initialized, zeros are written over the existing data.
If you require additional assistance, contact HPE 3PAR Technical Support.
At this point, all chunklets in the system will be initialized to clear volume data.
This is estimated to take about 71 minutes.
10. From the console menu, enter 6 to select Set up the system to wipe and rerun ootb.
Troubleshooting
NOTE:
This section describes how to troubleshoot the StoreServ system.
The outputs shown in this section are only examples and might not reflect your system
configuration.
Severity  Description
Fatal     A fatal event has occurred. It is no longer possible to take remedial action.
Minor     An event has occurred that requires action, but the situation is not yet serious.
Troubleshooting 217
For drive alerts, the Component line in the right column lists the cage number, magazine number, and
drive number (cage:magazine:drive). The first and second numbers are sufficient to identify the exact
drive in a storage system, since there is always only a single drive (drive 0) in a single magazine.
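The cage:magazine:drive convention above can be parsed mechanically. A minimal sketch in Python, assuming the Component value arrives as plain text (the helper name is illustrative, not a 3PAR CLI command):

```python
def parse_component(component: str) -> dict:
    """Split a drive alert Component value of the form
    cage:magazine:drive (for example, '3:0:0') into its parts.
    Illustrative helper, not a 3PAR CLI command."""
    cage, magazine, drive = (int(part) for part in component.split(":"))
    return {"cage": cage, "magazine": magazine, "drive": drive}

loc = parse_component("7:5:0")
# Cage and magazine alone locate the drive, since each magazine
# holds only a single drive (drive 0).
print(f"cage {loc['cage']}, magazine {loc['magazine']}")
```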
Alert notifications in the HPE 3PAR StoreServ Storage 3PAR Service Console
In the Detail pane of the HPE 3PAR StoreServ Storage 3PAR Service Console (SC), an alert notification
will display in the Notifications box.
Views (1)—The Views menu identifies the currently selected view. Most list panes have several views
that you can select. Clicking the down arrow to the right of a view exposes or hides the Views drop-
down list. Map views, when available, can be selected from the Views menu by clicking the map icon.
Actions (2)—The Actions menu allows you to perform actions on one or more resources that you have
selected in the list pane. If you do not have permission to perform an action, the action is not displayed in
the menu. Also, some actions might not be displayed due to system configurations, user roles, or
properties of the selected resource.
Notifications box (3)—The Notifications box is displayed when an alert or task has affected the
resource.
Resource detail (4)—Information for the selected view is displayed in the Resource detail area.
Viewing alerts
Procedure
1. To view the alerts, issue the HPE 3PAR CLI showalert command.
Alert message codes have seven digits in the schema AAABBBB, where:
• AAA is a 3-digit major code.
• BBBB is a 4-digit sub-code.
• 0x precedes the code to indicate hexadecimal notation.
Message codes ending in de indicate a degraded state alert.
Message codes ending in fa indicate a failed state alert.
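The code layout above can be decoded mechanically. A minimal sketch, assuming the code is the 0x-prefixed string reported by showalert (the helper name is illustrative, not a 3PAR CLI command):

```python
def split_alert_code(code: str):
    """Split a seven-hex-digit alert message code of the form 0xAAABBBB
    into its 3-digit major code, 4-digit sub-code, and the state implied
    by the sub-code suffix. Illustrative helper."""
    digits = code.lower()
    if digits.startswith("0x"):
        digits = digits[2:]
    if len(digits) != 7:
        raise ValueError("expected 7 hex digits (AAABBBB)")
    major, sub = digits[:3], digits[3:]
    if sub.endswith("de"):
        state = "degraded"       # degraded state alert
    elif sub.endswith("fa"):
        state = "failed"         # failed state alert
    else:
        state = None
    return major, sub, state
```

For example, split_alert_code("0x09900de") yields the major code 099, the sub-code 00de, and a degraded state.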
See the HPE 3PAR OS Command Line Interface Reference for complete information on the display
options on the event logs.
Error Type        Error Code  Error Detail                          Recommended Action
                  AE          Permanent error in Backplane
                              I2C bus
                  AF          Error in Backplane NVRAM access
                  B1          Expander watchdog fired
                  B2          Expander conflict in SAS domain
                              (side A/B)
                  BD          Error in ESP communication
                  BF          System identification value is
                              not available
                  B8          Expander firmware image error
                  BE          Expander firmware version mismatch
                              with ESP firmware version in own
                              I/O module
SAS Cable Error   B9          SAS cable hardware error              1. Verify the SAS cable
                                                                    status indicators. For
                                                                    the cables with an amber
                                                                    LED (error), check if the
                                                                    cables are properly …
                  C5          Minimum temperature reached in
                              temperature sensor
                  C6          Fans commanded to maximum speed
                  C7          System shutdown because of over
                              temperature
NOTE:
This warning can also be caused by a failed power supply.
NOTE:
You can continue to access the HPE 3PAR SmartStart log files in the Users folder after you have
removed HPE 3PAR SmartStart from your storage system.
Procedure
1. Locate this folder: C:\Users\<username>\SmartStart\log.
2. Zip all the files in the log folder.
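Step 2 can be scripted. A minimal sketch using Python's standard zipfile module; the function name is illustrative, and the path follows the pattern in step 1 (substitute your own <username>):

```python
from pathlib import Path
import zipfile

def zip_logs(log_dir: str, out_zip: str) -> int:
    """Zip every file found in the given log folder and return the
    number of files archived. Illustrative helper."""
    out_path = Path(out_zip).resolve()
    count = 0
    with zipfile.ZipFile(out_zip, "w", zipfile.ZIP_DEFLATED) as zf:
        for f in sorted(Path(log_dir).glob("*")):
            # Skip directories and the archive itself.
            if f.is_file() and f.resolve() != out_path:
                zf.write(f, arcname=f.name)
                count += 1
    return count

# Example call, using the folder from step 1:
# zip_logs(r"C:\Users\<username>\SmartStart\log", "smartstart_logs.zip")
```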
Procedure
1. Connect and log in to the HPE 3PAR SP.
2. From the HPE 3PAR Service Console (SC) main menu, select Service Processor.
3. Select Actions > Collect support data.
4. Select SPLOR data, and then click Collect to start data retrieval.
When support data collection is in progress, a task is started and displayed at the top of the
page. To see details for a specific collection task in the Activity view, expand the task message and click
the Details link for the task.
Procedure
1. Connect and log in to the HPE 3PAR Service Processor (SP).
2. From the 3PAR Service Processor Onsite Customer Care (SPOCC) main menu, click Files from the
navigation pane.
3. Click the folder icons for files > syslog > apilogs.
4. In the Action column, click Download for each log file:
Procedure
1. Log in to the HPE 3PAR StoreServ Management Console (SSMC) and on the main menu, click
Storage Systems > Systems. The systems managed by the SSMC are listed, and the health and
configuration summary panels provide a system overview and show the system health status.
Procedure
1. Issue the HPE 3PAR CLI checkhealth command without any specifiers to check the health of all the
components that can be analyzed.
Command syntax is: checkhealth [<options> | <component>...]
Command authority is Super and Service.
Issue checkhealth -detail to display both summary and detailed information about the hardware
and software components:
The following information is included when you use the -detail option:
Component ----Identifier---- -----------Detailed Description-------
Alert sw_port:1:3:1 Port 1:3:1 Degraded (Target Mode Port Went
Offline)
Alert sw_port:0:3:1 Port 0:3:1 Degraded (Target Mode Port Went
Offline)
Alert sw_sysmgr Total available FC raw space has reached
threshold of 800G
(2G remaining out of 544G total)
Alert sw_sysmgr Total FC raw space usage at 307G (above 50% of total
544G)
Date -- Date is not the same on all
nodes
LD ld:name.usr.0 LD is not mapped to a volume
LD ld:name.usr.1 LD is not mapped to a volume
vlun host:group01 Host wwn:2000000087041F72 is not connected to a
port
vlun host:group02 Host wwn:2000000087041F71 is not connected to a
port
vlun host:group03 Host iscsi_name:2000000087041F71 is not
connected to a port
vlun host:group04 Host wwn:210100E08B24C750 is not connected to a
port
vlun host:Host_name Host wwn:210000E08B000000 is not connected to a
port
--------------------------------------------------------------------------
-----------
13 total
If there are no faults or exception conditions, the checkhealth command indicates that the System
is healthy:
cli% checkhealth
Checking alert
Checking cabling
…
Checking vlun
Checking vv
System is healthy
With the <component> specifier, you can check the status of one or more specific storage system
components. For example:
The -svc -full option provides a summary of service-related issues by default, and displays the
service-related information in addition to the customer-related information. If you also use the
-detail option, both a summary and a detailed list of service issues are displayed.
The following example displays information that is intended only for service users:
cli% checkhealth -svc
Checking alert
Checking ao
Checking cabling
...
Checking vv
Checking sp
Component -----------Summary Description------------------- Qty
Alert New alerts 2
File Nodes with Dump or HBA core files 1
PD There is an imbalance of active pd ports 1
PD PDs that are degraded or failed 2
pdch LDs with chunklets on a remote disk 2
pdch LDs with connection path different than ownership 2
Port Missing SFPs 6
The following information is included when you use the -detail option. The detailed output can be
very long if a node or cage is down.
To check for inconsistencies between the System Manager and kernel states, and for CRC errors
on FC and SAS ports, use the -full option.
Alert
Displays any unresolved alerts and shows any alerts that would be seen by showalert -n.
Alert Example
Cabling
Displays issues with cabling of drive enclosures.
• Check for drive enclosures that are not supported with 20000 systems.
• Check for balanced drive enclosure counts for ports in node-pairs.
• Check for drive enclosures not connected to node-pairs.
• Check for drive enclosures not connected to the same port of node-pairs.
• Check for drive enclosures connected to wrong ports or I/O modules.
• Check for drive enclosure cables in wrong order.
• Check for drive enclosures with no PDs installed.
• Check for broken SAS cables.
NOTE:
To avoid cabling errors, all drive enclosures must have at least one hard drive installed
before powering on the enclosure.
Cabling Example 1
Checking cabling
Component ---Description---- Qty
Cabling Bad SAS connection 2
cli% showcage
Id Name LoopA Pos.A LoopB Pos.B Drives Temp RevA RevB Model FormFactor
0 cage0 0:2:1 0 1:2:1 1 17 24-34 1.76 1.76 DCS6 SFF
1 cage1 0:2:1 1 1:2:1 0 17 25-33 1.76 1.76 DCS6 SFF
2 cage2 0:2:3 0 1:2:3 1 17 24-32 1.76 1.76 DCS6 SFF
3 cage3 0:2:3 1 1:2:3 0 16 24-33 1.76 1.76 DCS6 SFF
4 cage4 0:1:1 0 1:1:1 1 4 32-34 1.76 1.76 DCS5 LFF
5 cage5 0:1:1 1 1:1:1 0 4 32-35 1.76 1.76 DCS5 LFF
6 cage6 0:1:3 0 1:1:3 1 4 32-35 1.76 1.76 DCS5 LFF
7 cage7 0:1:3 1 1:1:3 0 4 31-35 1.76 1.76 DCS5 LFF
The showportdev sas <nsp> command shows every SAS entity attached to the port.
• Phys 8-11 are the DP-1 (in) port for non-node cage I/O modules, and the connection to the onboard
SAS IOC for node cages.
• Phys 12-35 are PD slot Phys.
• Phy 36 is the expander chip, where the cage name can be found in the “AttID” column.
Cabling Example 2
After determining the desired cabling and reconnecting correctly to slot 0 and port 1 of nodes 0 and 1, the
output should look like this:
Cage
Displays drive cage conditions that are not optimal and reports exceptions if any of the following do not
have normal states:
• Ports
• Drive magazine states (DC1, DC2, & DC4)
• Small form-factor pluggable (SFP) voltages (DC2 and DC4)
• SFP signal levels (RX power low and TX failure)
• Power supplies
• Cage firmware that is not current
Reports if a servicecage operation has been started and has not ended.
Format of Possible Cage Exception Messages
Cage Example 1
cli% showcage -d cage4
Id Name
LoopA
Pos.A LoopB Pos.B Drives Temp RevA RevB Model Side
4 cage4 --- 0 3:2:1 0 8 28-36 2.37 2.37 DC4 n/a
----------------------------------SFP
Info-----------------------------------
FCAL SFP -State- --Manufacturer-- MaxSpeed(Gbps) TXDisable TXFault RXLoss
DDM
0 0 OK FINISAR CORP. 4.1 No No Yes
Yes
1 1 OK FINISAR CORP. 4.1 No No No
Yes
-----------Midplane Info-----------
Firmware_status Current
Product_Rev 2.37
State Normal Op
Loop_Split 0
VendorId,ProductId 3PARdata,DC4
Unique_ID 1062030000098E00
...
---------Cage 4 Fcal 0 SFP 0 DDM----------
-Warning- --Alarm--
--Type-- Units Reading Low High Low High
Temp C 33 -20 90 -25 95
Voltage mV 3147 2900 3700 2700 3900
TX Bias mA 7 2 14 1 17
TX Power uW 394 79 631 67 631
RX Power uW 0 15 794 10* 1259
Cage Example 2
cli% showcage -d cage1
Id Name LoopA Pos.A LoopB Pos.B Drives Temp RevA RevB Model Side
1 cage1 0:0:2 0 1:0:2 0 24 27-39 2.37 2.37 DC2 n/a
-----------Midplane Info-----------
Firmware_status Current
Product_Rev 2.37
State Normal Op
Loop_Split 0
VendorId,ProductId 3PARdata,DC2
Unique_ID 10320300000AD000
Cage Example 3
NOTE:
The primary path is indicated by an asterisk (*) in showpd's Ports columns.
cli% showcage -d cage1
Id Name LoopA Pos.A LoopB Pos.B Drives Temp RevA RevB Model Side
1 cage1 0:0:2 0 1:0:2 0 24 28-40 2.37 2.37 DC2 n/a
-----------Midplane Info-----------
Firmware_status Current
Product_Rev 2.37
State Normal Op
Loop_Split 0
VendorId,ProductId 3PARdata,DC2
Unique_ID 10320300000AD000
cli% showpd -s
Id CagePos Type -State-- -----Detailed_State------
20 1:0:0 FC degraded disabled_B_port,servicing
21 1:0:1 FC degraded disabled_B_port,servicing
22 1:0:2 FC degraded disabled_B_port,servicing
23 1:0:3 FC degraded disabled_B_port,servicing
Cage Example 4
NOTE:
The DC1 and DC3 cages have firmware in the FCAL modules. The DC2 and DC4 cages have
firmware on the cage mid-plane. Use the upgradecage command to upgrade the firmware.
cli% showcage
Id Name LoopA Pos.A LoopB Pos.B Drives Temp RevA RevB Model Side
2 cage2 2:0:3 0 3:0:3 0 24 29-43 2.37 2.37 DC2 n/a
3 cage3 2:0:4 0 3:0:4 0 32 29-41 2.36 2.36 DC2 n/a
cli% showfirmwaredb
Vendor Prod_rev Dev_Id Fw_status Cage_type Firmware_File
...
3PARDATA [2.37] DC2 Current DC2 /opt...dc2/
lbod_fw.bin-2.37
Cage Example 5
cli% showcage -d cage4
Id Name LoopA Pos.A LoopB Pos.B Drives Temp RevA RevB Model Side
4 cage4 2:2:1 0 3:2:1 0 8 30-37 2.37 2.37 DC4 n/a
----------------------------------SFP
Info-----------------------------------
FCAL SFP -State- --Manufacturer-- MaxSpeed(Gbps) TXDisable TXFault RXLoss
DDM
0 0 OK SIGMA-LINKS 2.1 No No No
Yes
1 1 OK FINISAR CORP. 4.1 No No No
Yes
MaxSpeed(Gbps) : 4.1
Qualified : Yes
TX Disable : No
TX Fault : No
RX Loss : No
RX Power Low : No
DDM Support : Yes
Consistency
Displays inconsistencies between sysmgr and the kernel.
This check finds inconsistent and unusual conditions between the system manager and the
node kernel. The check requires the hidden -svc -full parameter because it can take 20
minutes or longer for a large system.
Format of Possible Consistency Exception Messages
Consistency --<err>
Consistency Example
Date
Checks the date and time on all nodes.
Format of Possible Date Exception Messages
Date Example
cli% showdate
Node Date
0 2010-09-08 10:56:41 PDT (America/Los_Angeles)
1 2010-09-08 10:56:39 PDT (America/Los_Angeles)
cli% shownet
IP Address Netmask/PrefixLen Nodes Active Speed
192.168.56.209 255.255.255.0 0123 0 100
Duplex AutoNeg Status
Full Yes Active
File
Displays file system conditions that are not optimal.
Checks for the following:
• The presence of special files on each node, for example:
touch /common/touchfiles/manualstartup
File Example 1
Every node should have the following file system link so that the admin volume can be mounted, if the
node becomes the master node:
The corresponding alert when the admin volume is not properly mounted is as follows:
If a link for the admin volume is not present, it can be recreated by rebooting the node.
File Example 3
LD
Checks the following and displays logical disks (LD) that are not optimal:
• Preserved LDs
• Verifies that current and created availability are the same
• Owner and backup
• Verifies preserved data space (pdsld) is the same as total data cache
• Size and number of logging LDs
Format of Possible LD Exception Messages
LD Example 1
LD Suggested Action 1
Examine the identified LDs using the following CLI commands: showld, showld -d, showldmap, and
showvvmap.
LDs are normally mapped to (used by) VVs, but they can become disassociated from a VV if a VV is
deleted without the underlying LDs being deleted, or by an aborted tune operation. Normally, you would
remove the unmapped LD to return its chunklets to the free pool.
LD Example 2
LD Suggested Action 2
Examine the identified LDs for failed or missing disks by using the following CLI commands: showld,
showld -d, showldch, and showpd. Write-through mode (WThru) indicates that host I/O operations
must be written through to the disk before the host I/O command is acknowledged. This is usually due to
a node-down condition, when node batteries are not working, or when disk redundancy is not optimal.
LD Example 3
LD Suggested Action 3
LDs are created with certain high-availability characteristics, such as ha-cage. Reduced availability can
occur if chunklets in an LD are moved to a location where the current availability (CAvail) is below the
desired level of availability (Avail). Chunklets may have been manually moved with movech or by
specifying it during a tune operation or during failure conditions such as node, path, or cage failures. The
HA levels from highest to lowest are port, cage, mag, and ch (disk).
Examine the identified LDs for failed or missing disks by using the following CLI commands: showld,
showld -d, showldch, and showpd. In the example below, the LD should have cage-level availability,
but it currently has chunklet (disk) level availability (the chunklets are on the same disk).
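The availability comparison can be sketched as a ranking check over the HA levels listed above (the level labels and helper name are illustrative):

```python
# HA levels from highest to lowest, per the text above.
HA_LEVELS = ["port", "cage", "mag", "ch"]

def availability_reduced(desired: str, current: str) -> bool:
    """Return True when the current availability (CAvail) is below the
    desired availability (Avail). Illustrative helper."""
    return HA_LEVELS.index(current) > HA_LEVELS.index(desired)

print(availability_reduced("cage", "ch"))    # chunklet level is below cage level
print(availability_reduced("cage", "cage"))  # desired level is met
```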
LD Example 4
LD Suggested Action 4
Preserved data LDs (pdsld) are created during system initialization (Out-of-the-Box, or OOTB) and after
some hardware upgrades (through the admithw command). The total size of the pdsld should match the
total size of all data cache in the storage system (see below). This message appears if a node is offline,
because the comparison of LD size to data-cache size does not match. This message can be ignored
unless all nodes are online. If all nodes are online and the error condition persists, determine the cause of
the failure. Use the admithw command to correct the condition.
cli% shownode
Control Data
Cache
Node --Name--- -State- Master InCluster ---LED--- Mem(MB) Mem(MB)
Available(%)
0 1001335-0 OK Yes Yes GreenBlnk 2048 4096
100
1 1001335-1 OK No Yes GreenBlnk 2048 4096
100
License
Displays license violations.
Format of Possible License Exception Messages
License Example
Network
Displays Ethernet issues for administrative and Remote Copy over IP (RCIP) networks that have been
logged in the previous 24 hours. Also reports if the storage system has fewer than two nodes with working
administrative Ethernet connections.
• Check the number of collisions in the previous day's log. The number of collisions should be less than
5% of the total packets for the day.
• Check for Ethernet errors and transmit (TX) or receive (RX) errors in the previous day's log.
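The 5% collision rule above can be sketched as a simple ratio check (function name illustrative):

```python
def collisions_excessive(collisions: int, total_packets: int) -> bool:
    """Flag a day's log when collisions are not below 5% of the total
    packets for the day, per the rule above. Illustrative helper."""
    if total_packets == 0:
        return False
    return collisions / total_packets >= 0.05
```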
Format of Possible Network Exception Messages
Network Example 1
cli% shownet
IP Address Netmask/PrefixLen Nodes Active Speed Duplex AutoNeg
Status
192.168.56.209 255.255.255.0 0123 0 100 Full Yes
Changing
192.168.56.233 255.255.255.0 0123 0 100 Full Yes
Unverified
Network Example 2
NOTE:
The error counters shown by shownet and shownet -d cannot be cleared except by rebooting a
controller node. Because checkhealth shows network counters from a history log,
checkhealth stops reporting the issue if there is no increase in errors in the next log entry.
shownet -d
IP Address: 192.168.56.209 Netmask 255.255.255.0
Assigned to nodes: 0123
Connected through node 0
Status: Active
Node
Checks the following node conditions and displays nodes that are not optimal:
• Verifies node batteries have been tested in the last 30 days
• Offline nodes
• Power supply and battery problems
The following checks are performed only if the -svc option is used.
• Checks for symmetry of components between nodes, such as Control-Cache and Data-Cache size, OS
version, bus speed, MCU version, and CPU speed
• Checks if diagnostics such as ioload are running on any of the nodes
• Checks for stuck-threads, such as I/O operations that cannot complete
Format of Possible Node Exception Messages
Node Example 1
NOTE:
In the example below, the battery state is considered degraded because the power supply has failed.
cli% shownode
Control Data
Cache
Node --Name--- -State-- Master InCluster ---LED--- Mem(MB) Mem(MB)
Available(%)
0 1001356-0 Degraded Yes Yes AmberBlnk 2048 8192
100
1 1001356-1 Degraded No Yes AmberBlnk 2048 8192
100
cli% shownode -s
Node -State-- -Detailed_State-
0 Degraded PS 1 Failed
1 Degraded PS 0 Failed
Node Example 2
NOTE:
The condition of the degraded power supply is caused by the failing battery. The degraded PS state
is not the expected behavior. This issue will be fixed in a future release (bug 46682).
cli% shownode
Control Data
Cache
Node --Name--- -State-- Master InCluster ---LED--- Mem(MB) Mem(MB)
Available(%)
2 1001356-2 OK No Yes GreenBlnk 2048 8192
100
3 1001356-3 Degraded No Yes AmberBlnk 2048 8192
100
cli% shownode -s
Node -State-- -Detailed_State-
2 OK OK
3 Degraded PS 1 Degraded
cli% showbattery
Node PS Bat Serial -State-- ChrgLvl(%) -ExpDate-- Expired Testing
3 0 0 100A300B OK 100 07/01/2011 No No
3 1 0 12345310 Failed 0 04/07/2011 No No
Node Example 3
showbattery -s
Node PS Bat -State-- -Detailed_State-
0 0 0 OK normal
0 1 0 Degraded Unknown
Examine the date of the last successful test of that battery. Assuming the current date is 2009-10-14,
the last battery test on Node 0, PS 1, Bat 0 was 2009-09-10, which is more than 30 days ago.
showbattery -log
Node PS Bat Test Result Dur(mins) ---------Time----------
0 0 0 0 Passed 1 2009-10-14 14:34:50 PDT
0 0 0 1 Passed 1 2009-10-28 14:36:57 PDT
0 1 0 0 Passed 1 2009-08-27 06:17:44 PDT
0 1 0 1 Passed 1 2009-09-10 06:19:34 PDT
showbattery
Node PS Bat Serial -State-- ChrgLvl(%) -ExpDate-- Expired Testing
0 0 0 83205243 OK 100 04/07/2011 No No
0 1 0 83202356 Degraded 100 04/07/2011 No No
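The 30-day rule illustrated by this example can be sketched as a date comparison (helper name illustrative):

```python
from datetime import date

def battery_test_overdue(last_test: date, today: date,
                         max_age_days: int = 30) -> bool:
    """Return True when the last successful battery test is more than
    30 days old, the interval the node check verifies. Illustrative helper."""
    return (today - last_test).days > max_age_days

# The example above: last test 2009-09-10, current date 2009-10-14.
print(battery_test_overdue(date(2009, 9, 10), date(2009, 10, 14)))  # True (34 days)
```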
Node Example 4
PD
Displays physical disks with states or conditions that are not optimal:
• Checks for failed and degraded PDs
• Checks for an imbalance of PD ports, for example, if Port-A is used on more disks than Port-B
• Checks for an Unknown sparing algorithm
• Checks for disks experiencing a high number of IOPS
• Reports if a servicemag operation is outstanding (servicemag status)
• Reports if there are PDs that do not have entries in the firmware DB file
Format of Possible PD Exception Messages
The following checks are performed when the -svc option is used, or on 7400/7200 hardware:
PD Example 1
PD Suggested Action 1
Both degraded and failed disks are reported. When an FC path to a drive cage is not working, all disks in
the cage have a degraded state due to the non-redundant condition. To further diagnose, use the
following commands: showpd, showpd -s, showcage, showcage -d, showport -sfp.
cli% showpd -degraded -failed
----Size(MB)---- ----Ports----
Id CagePos Type Speed(K) State Total Free A B
48 3:0:0 FC 10 degraded 139520 115200 2:0:4* -----
49 3:0:1 FC 10 degraded 139520 121344 2:0:4* -----
…
107 4:9:3 FC 15 failed 428800 0 ----- 3:2:1*
----------------------------------SFP
Info-----------------------------------
FCAL SFP -State- --Manufacturer-- MaxSpeed(Gbps) TXDisable TXFault RXLoss
DDM
0 0 OK SIGMA-LINKS 2.1 No No No
Yes
1 1 OK SIGMA-LINKS 2.1 No No Yes
Yes
PD Example 2
PD Suggested Action 2
The primary and secondary I/O paths for disks (PDs) are balanced between nodes. The primary path is
indicated in the showpd -path output and by an asterisk in the showpd output. An imbalance of active
ports is usually caused by a nonfunctional path or loop to a cage, or because an odd number of drives is
installed or detected. To further diagnose, use the following commands: showpd, showpd -path,
showcage, and showcage -d.
cli% showpd
----Size(MB)----- ----Ports----
Id CagePos Type Speed(K) State Total Free A B
0 0:0:0 FC 10 normal 139520 119040 0:0:1* 1:0:1
1 0:0:1 FC 10 normal 139520 121600 0:0:1 1:0:1*
2 0:0:2 FC 10 normal 139520 119040 0:0:1* 1:0:1
3 0:0:3 FC 10 normal 139520 119552 0:0:1 1:0:1*
...
46 2:9:2 FC 10 normal 139520 112384 2:0:3* 3:0:3
47 2:9:3 FC 10 normal 139520 118528 2:0:3 3:0:3*
48 3:0:0 FC 10 degraded 139520 115200 2:0:4* -----
49 3:0:1 FC 10 degraded 139520 121344 2:0:4* -----
50 3:0:2 FC 10 degraded 139520 115200 2:0:4* -----
51 3:0:3 FC 10 degraded 139520 121344 2:0:4* -----
----------------------------------SFP
Info-----------------------------------
FCAL SFP -State- --Manufacturer-- MaxSpeed(Gbps) TXDisable TXFault RXLoss
DDM
0 0 OK SIGMA-LINKS 2.1 No No No
Yes
1 1 OK SIGMA-LINKS 2.1 No No Yes
Yes
Link B TXLEDs Off Green
LED(Loop_Split) Off Off
LEDS(system,hotplug) Green,Off Green,Off
...
-------------Drive Info------------- ----LoopA----- ----LoopB-----
Drive NodeWWN LED Temp(C) ALPA LoopState ALPA LoopState
0:0 20000014c3b3eab9 Green 35 0xe1 OK 0xe1 Loop fail
0:1 20000014c3b3e708 Green 38 0xe0 OK 0xe0 Loop fail
0:2 20000014c3b3ed17 Green 35 0xdc OK 0xdc Loop fail
0:3 20000014c3b3dabd Green 30 0xda OK 0xda Loop fail
PD Example 3
PD Suggested Action 3
This check samples the I/O per second (IOPS) information in statpd to see if any disks are being
overworked, and then it samples again after five seconds. This does not necessarily indicate a problem,
but it could negatively affect system performance. The IOPS thresholds currently set for this condition are
listed:
• NL disks < 75
• FC 10K RPM disks < 150
• FC 15K RPM disks < 200
• SSD < 1500
Operations such as servicemag and tunevv can cause this condition. If the IOPS rate is very high
and/or a large number of disks are experiencing very heavy I/O, examine the system further using
statistical monitoring commands and utilities such as statpd, the OS MC (GUI), and System Reporter.
The following example shows a report for a disk whose total I/O is 150 IOPS or more.
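The thresholds above can be captured in a small lookup table; the drive-type labels here are illustrative, not values from CLI output:

```python
# IOPS thresholds by drive type, per the list above (illustrative labels).
IOPS_THRESHOLDS = {
    "NL": 75,
    "FC10K": 150,   # FC 10K RPM
    "FC15K": 200,   # FC 15K RPM
    "SSD": 1500,
}

def disk_overworked(drive_type: str, iops: float) -> bool:
    """Return True when a disk's sampled IOPS reaches the threshold for
    its type. Illustrative helper."""
    return iops >= IOPS_THRESHOLDS[drive_type]
```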
PD Example 4
PD Suggested Action 4
The identified disk does not have firmware that the storage system considers current. When a disk is
replaced, the servicemag operation should upgrade the disk's firmware. When disks are installed or
added to a system, the admithw command can perform the firmware upgrade. Check the state of the
disk by using CLI commands such as showpd -s, showpd -i, and showfirmwaredb.
cli% showpd -s 3
Id CagePos Type -State-- -Detailed_State-
3 0:4:0 FC degraded old_firmware
cli% showpd -i 3
Id CagePos State ----Node_WWN---- --MFR-- ---Model--- -Serial- -FW_Rev-
3 0:4:0 degraded 200000186242DB35 SEAGATE ST3146356FC 3QN0290H XRHJ
cli% showfirmwaredb
Vendor Prod_rev Dev_Id Fw_status Cage_type
...
SEAGATE [XRHK] ST3146356FC Current DC2.DC3.DC4
PD Example 5
PD Suggested Action 5
Check the system’s Sparing Algorithm value using the CLI command showsys -param. The value is
normally set during the initial installation (OOTB). If it must be set later, use the command setsys
SparingAlgorithm; valid values are Default, Minimal, Maximal, and Custom. After setting the
parameter, use the admithw command to programmatically create and distribute the spare chunklets.
cli% showsys -param
System parameters from configured settings
----Parameter----- --Value--
RawSpaceAlertFC : 0
RawSpaceAlertNL : 0
RemoteSyslog : 0
RemoteSyslogHost : 0.0.0.0
SparingAlgorithm : Unknown
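Detecting the unset value shown above can be sketched as a small parser over the showsys -param output. The line format is inferred from the example; the real checkhealth logic may differ.

```python
VALID_ALGORITHMS = {"Default", "Minimal", "Maximal", "Custom"}

def sparing_algorithm(showsys_output):
    """Extract the SparingAlgorithm value from showsys -param style text."""
    for line in showsys_output.splitlines():
        if ":" in line:
            name, _, value = line.partition(":")
            if name.strip() == "SparingAlgorithm":
                return value.strip()
    return None

output = """\
----Parameter----- --Value--
RawSpaceAlertFC : 0
SparingAlgorithm : Unknown
"""
value = sparing_algorithm(output)
print(value, value in VALID_ALGORITHMS)  # Unknown False
```

A value of Unknown (or anything outside VALID_ALGORITHMS) would indicate that setsys SparingAlgorithm and admithw still need to be run.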
PD Example 6
PD Suggested Action 6
Check the release notes for mandatory updates and patches. Install updates and patches to HPE 3PAR
OS as needed to support the PD in the cage.
PDCH
Checks for Physical Disk Chunklets (PDCH) with states that are not optimal.
• Verifies that chunklets are not used by multiple LDs.
• Checks for media-failed chunklets.
• Verifies that LD ownership matches the physical connection.
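The first of these checks can be sketched as flagging any chunklet that appears in more than one LD's mapping. The chunklet-to-LD mapping used here is a made-up illustration, not real system data.

```python
def shared_chunklets(chunklet_to_lds):
    """Return chunklet IDs that appear in more than one LD's mapping."""
    return sorted(ch for ch, lds in chunklet_to_lds.items() if len(set(lds)) > 1)

mapping = {
    ("pd63", 12): ["R1.usr.3"],             # normal: one owning LD
    ("pd91", 4): ["R1.usr.3", "pdsld0.1"],  # not optimal: two LDs share it
}
print(shared_chunklets(mapping))  # [('pd91', 4)]
```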
Format of Possible PDCH Exception Messages
PDCH Example 1
cli% showld
Id Name RAID -Detailed_State- Own SizeMB UsedMB Use Lgct LgId WThru
MapV
19 pdsld0.0 1 normal 1/0 256 0 P,F 0 ---
Y N
20 pdsld0.1 1 normal 1/0 7680 0 P 0 ---
Y N
21 pdsld0.2 1 normal 1/0 256 0 P 0 ---
Y N
PDCH Example 2
cli% showld
Id Name RAID -Detailed_State- Own SizeMB UsedMB Use Lgct LgId
WThru MapV
35 R1.usr.3 1 normal 3/2/0/1 256 256 V 0 ---
N Y
----Size(MB)---- ----Ports----
Id CagePos Type Speed(K) State Total Free A B
63 2:2:3 FC 10 normal 139520 124416 2:0:3* 3:0:3
91 3:8:3 FC 10 degraded 139520 124416 2:0:4* -----
cli% showcage
Id Name LoopA Pos.A LoopB Pos.B Drives Temp RevA RevB Model Side
2 cage2 2:0:3 0 3:0:3 0 24 29-42 2.37 2.37 DC2 n/a
3 cage3 2:0:4 0 ----- 0 32 28-40 2.37 2.37 DC2 n/a
cli% showpd 91 63
----Size(MB)---- ----Ports----
Id CagePos Type Speed(K) State Total Free A B
63 2:2:3 FC 10 normal 139520 124416 2:0:3* 3:0:3
91 3:8:3 FC 10 normal 139520 124416 2:0:4 3:0:4*
Port
Checks for the following port connection issues:
• Ports in unacceptable states
• Mismatches in type and mode, such as hosts connected to initiator ports, or host and Remote Copy
over Fibre Channel (RCFC) ports configured on the same FC adapter
• Degraded SFPs and SFPs with low power (this check is performed only if the FC adapter type uses SFPs)
Format of Possible Port Exception Messages
Port Example 1
Check SFP statistics using CLI commands such as showport -sfp, showport -sfp -ddm, and
showcage.
In the following example, an RX power level of 361 microwatts (uW) for Port 0:0:1 DDM is a good reading,
and 98 uW for Port 0:0:2 is a weak reading (< 100 uW). Normal RX power level readings are 200-400 uW.
cli% showport -sfp -ddm
--------------Port 0:0:1 DDM--------------
-Warning- --Alarm--
--Type-- Units Reading Low High Low High
Temp C 41 -20 90 -25 95
Voltage mV 3217 2900 3700 2700 3900
TX Bias mA 7 2 14 1 17
TX Power uW 330 79 631 67 631
RX Power uW 361 15 794 10 1259
cli% showcage
Id Name LoopA Pos.A LoopB Pos.B Drives Temp RevA RevB Model Side
0 cage0 0:0:1 0 1:0:1 0 15 33-38 08 08 DC3 n/a
1 cage1 --- 0 1:0:2 0 15 30-38 08 08 DC3 n/a
cli% showpd -s
Id CagePos Type -State-- -Detailed_State-
1 0:2:0 FC normal normal
...
13 1:1:0 NL degraded missing_A_port
14 1:2:0 FC degraded missing_A_port
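The RX power guideline from the example above can be sketched as a simple classifier. The cutoffs summarize the text (roughly 200-400 uW normal, below 100 uW weak); the actual alarm and warning limits come from the SFP's own DDM thresholds shown by showport -sfp -ddm, so the "marginal" band here is an illustrative assumption.

```python
def classify_rx_power(uw):
    """Classify an SFP RX power reading in microwatts."""
    if uw < 100:
        return "weak"      # below the ~100 uW weak-reading cutoff
    if 200 <= uw <= 400:
        return "good"      # within the normal 200-400 uW range
    return "marginal"      # in between, or unusually high

print(classify_rx_power(361))  # good  (Port 0:0:1 in the example)
print(classify_rx_power(98))   # weak  (Port 0:0:2 in the example)
```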
Port Example 2
Port Suggested Action 2
FC node-ports that normally contain SFPs will report an error if the SFP has been removed. The condition
can be checked using the showport -sfp command. In this example, the SFP in 0:3:1 has been
removed from the adapter:
Port Example 3
Port Example 4
cli% showport
N:S:P Mode State ----Node_WWN---- -Port_WWN/HW_Addr- Type
3:5:1 target offline 2FF70002AC00054C 23510002AC00054C free
Port Example 5
cli% showport
N:S:P Mode State ----Node_WWN---- -Port_WWN/HW_Addr- Type
2:0:1 initiator ready 2FF70002AC000591 22010002AC000591 disk
2:0:2 initiator ready 2FF70002AC000591 22020002AC000591 disk
2:0:3 target ready 2FF70002AC000591 22030002AC000591 disk
2:0:4 target loss_sync 2FF70002AC000591 22040002AC000591 free
cli% showport
N:S:P Mode State ----Node_WWN---- -Port_WWN/HW_Addr- Type
0:1:1 initiator ready 2FF70002AC000190 20110002AC000190 rcfc
0:1:2 initiator loss_sync 2FF70002AC000190 20120002AC000190 free
0:1:3 initiator loss_sync 2FF70002AC000190 20130002AC000190 free
0:1:4 initiator loss_sync 2FF70002AC000190 20140002AC000190 free
Port Example 6
cli% showportlesb single 3:2:1
ID ALPA ----Port_WWN---- LinkFail LossSync LossSig InvWord InvCRC
<3:2:1> 0x1 23210002AC00054C 20697 2655432 20700 37943749 1756
pd107 0xa3 2200001D38C28AA3 0 157 0 1129 0
pd106 0xa5 2200001D38C0D01E 0 279 0 1551 0
Port Example 7
Port CRC
Checks for increasing FC port CRC errors.
• Compares current LESB errors for active FC ports with the most recent sample.
• If no errors are reported by the current counters, compares the most recent sample with the sample from the day before.
Format of Possible Port CRC Exception Messages
portcrc port:<nsp> "There is less than two days of LESB history for this
port"
portcrc port:<nsp> "Port or devices attached to port have experienced CRC
errors within the last day"
portcrc port:<nsp> "Port or devices attached to port have experienced CRC
errors within the last two days"
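The comparison logic can be sketched as follows, with messages mirroring the exception formats just listed. The sample format (one InvCRC counter snapshot per day) and the comparison mechanics are assumptions for illustration.

```python
def crc_exceptions(nsp, samples):
    """samples: list of (label, InvCRC counter) snapshots, oldest first."""
    if len(samples) < 2:
        return [f'portcrc port:{nsp} "There is less than two days of LESB '
                f'history for this port"']
    msgs = []
    if samples[-1][1] > samples[-2][1]:
        # Counter grew between the two most recent samples.
        msgs.append(f'portcrc port:{nsp} "Port or devices attached to port '
                    f'have experienced CRC errors within the last day"')
    elif len(samples) >= 3 and samples[-2][1] > samples[-3][1]:
        # No growth today, but growth in the previous interval.
        msgs.append(f'portcrc port:{nsp} "Port or devices attached to port '
                    f'have experienced CRC errors within the last two days"')
    return msgs

# The InvCRC counter for 3:2:1 grew from 1700 to 1756 between samples.
print(crc_exceptions("3:2:1", [("day1", 1700), ("day2", 1756)]))
```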
Port PELCRC
Checks for increasing SAS port CRC errors.
• Compares current PEL errors for active SAS ports with the most recent sample.
• If no errors are reported by the current counters, compares the most recent sample with the sample from the day before.
Format of Possible PELCRC Exception Messages
Portpelcrc port:<nsp> "There is less than one week of PEL history for this
port"
Portpelcrc port:<nsp> "Port or devices attached to port have experienced
PEL errors within the last day"
Portpelcrc port:<nsp> "PEL errors have been increasing by more than
<maxCRC> per day over the last two days"
RC
Checks for the following Remote Copy issues.
• Remote Copy targets
• Remote Copy links
• Remote Copy Groups and VVs
Format of Possible RC Exception Messages
RC rc:<name> "All links for target <name> are down but target not yet
marked failed."
RC rc:<name> "Target <name> has failed."
RC rc:<name> "Link <name> of target <target> is down."
RC rc:<name> "Group <name> is not started to target <target>."
RC rc:<vvname> "VV <vvname> of group <name> is stale on target <target>."
RC rc:<vvname> "VV <vvname> of group <name> is not synced on target
<target>."
RC Suggested Action
Perform Remote Copy troubleshooting, such as checking the physical links between the storage systems.
Useful CLI commands are showrcopy, showrcopy -d, showport -rcip, showport -rcfc,
shownet -d, and controlport rcip ping.
SNMP
Displays issues with SNMP. Attempts the showsnmpmgr command and reports an error if the CLI
returns one.
Format of Possible SNMP Exception Messages
SNMP -- <err>
SNMP Example
SP
Checks the status of the Ethernet connection between the SP and nodes.
The Ethernet connection can be checked only from the SP, because the check performs a short Ethernet
transfer between the SP and the storage system.
Format of Possible SP Exception Messages
Network SP->InServ "SP ethernet Stat <stat> has increased too quickly check
SP network settings"
SP Example
SP Suggested Action
The <stat> variable can be any of the following: rx_errs, rx_dropped, rx_fifo, rx_frame,
tx_errs, tx_dropped, tx_fifo.
This message is usually caused by customer network issues, but may be caused by conflicting or
mismatching network settings between the SP, customer switch(es), and the storage system. Check the
SP network interface settings using SPmaint or SPOCC. Check the storage system settings by using
commands such as shownet and shownet -d.
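The SP-side check described above can be sketched as comparing two snapshots of the watched interface error counters and flagging any that grow too quickly. The snapshot format and the example threshold are illustrative assumptions; the actual per-stat limits used by the SP are not documented here.

```python
# The interface error counters named in the <stat> variable above.
WATCHED_STATS = ["rx_errs", "rx_dropped", "rx_fifo", "rx_frame",
                 "tx_errs", "tx_dropped", "tx_fifo"]

def fast_growing_stats(before, after, max_delta=10):
    """Return stats whose counter grew by more than max_delta between snapshots."""
    return [s for s in WATCHED_STATS
            if after.get(s, 0) - before.get(s, 0) > max_delta]

# rx_dropped grew by 80 between snapshots; the others barely moved.
before = {"rx_errs": 2, "rx_dropped": 100, "tx_errs": 0}
after = {"rx_errs": 3, "rx_dropped": 180, "tx_errs": 0}
print(fast_growing_stats(before, after))  # ['rx_dropped']
```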
Task
Displays failed tasks. Checks for any tasks that have failed within the past 24 hours. This is the default
time frame for the showtask -failed command.
Task Example
In this example, checkhealth also showed an alert. The task failed because the command was entered
with a syntax error:
2010-10-22 10:35:36 PDT Created task.
2010-10-22 10:35:36 PDT Updated Executing "upgradecage -a -f" as 0:12109
2010-10-22 10:35:36 PDT Errored upgradecage: Invalid option: -f
VLUN
Displays VLUNs that are inactive or not reported by the host agent. Also reports VLUNs that have been
configured but are not currently exported to hosts or host ports.
Format of Possible VLUN Exception Messages
VLUN Example
cli% showvlun -host cs-wintec-test1
Active VLUNs
Lun VVName HostName -Host_WWN/iSCSI_Name- Port Type
2 BigVV cs-wintec-test1 10000000C964121C 2:5:1 host
-----------------------------------------------------------
1 total
VLUN Templates
Lun VVName HostName -Host_WWN/iSCSI_Name- Port Type
2 BigVV cs-wintec-test1 ---------------- --- host
VV
Displays virtual volumes (VVs) that are not optimal. Checks for abnormal states of VVs and common
provisioning groups (CPGs).
Format of Possible VV Exception Messages
VV Suggested Action
Check status by using CLI commands such as showvv, showvv -d, and showvv -cpg.
Websites
General websites
Hewlett Packard Enterprise Information Library
www.hpe.com/info/EIL
Single Point of Connectivity Knowledge (SPOCK) Storage compatibility matrix
www.hpe.com/storage/spock
Storage white papers and analyst reports
www.hpe.com/storage/whitepapers
For additional websites, see Support and other resources.
Support and other resources
Information to collect
• Technical support registration number (if applicable)
• Product name, model or version, and serial number
• Operating system name and version
• Firmware version
• Error messages
• Product-specific reports and logs
• Add-on products or components
• Third-party products or components
Accessing updates
• Some software products provide a mechanism for accessing software updates through the product
interface. Review your product documentation to identify the recommended software update method.
• To download product updates:
Hewlett Packard Enterprise Support Center
www.hpe.com/support/hpesc
Hewlett Packard Enterprise Support Center: Software downloads
www.hpe.com/support/downloads
Software Depot
www.hpe.com/support/softwaredepot
• To subscribe to eNewsletters and alerts:
www.hpe.com/support/e-updates
• To view and update your entitlements, and to link your contracts and warranties with your profile, go to
the Hewlett Packard Enterprise Support Center More Information on Access to Support Materials
page:
www.hpe.com/support/AccessToSupportMaterials
IMPORTANT:
Access to some updates might require product entitlement when accessed through the Hewlett
Packard Enterprise Support Center. You must have an HPE Passport set up with relevant
entitlements.
Remote support
Remote support is available with supported devices as part of your warranty or contractual support
agreement. It provides intelligent event diagnosis, and automatic, secure submission of hardware event
notifications to Hewlett Packard Enterprise, which will initiate a fast and accurate resolution based on your
product's service level. Hewlett Packard Enterprise strongly recommends that you register your device for
remote support.
If your product includes additional remote support details, use search to locate that information.
Warranty information
To view the warranty for your product or to view the Safety and Compliance Information for Server,
Storage, Power, Networking, and Rack Products reference document, go to the Enterprise Safety and
Compliance website:
www.hpe.com/support/Safety-Compliance-EnterpriseProducts
Regulatory information
To view the regulatory information for your product, view the Safety and Compliance Information for
Server, Storage, Power, Networking, and Rack Products, available at the Hewlett Packard Enterprise
Support Center:
www.hpe.com/support/Safety-Compliance-EnterpriseProducts
Documentation feedback
Hewlett Packard Enterprise is committed to providing documentation that meets your needs. To help us
improve the documentation, send any errors, suggestions, or comments to Documentation Feedback
(docsfeedback@hpe.com). When submitting your feedback, include the document title, part number,
edition, and publication date located on the front cover of the document. For online help content, include
the product name, product version, help edition, and publication date located on the legal notices page.
Procedure
1. Connect and log in to the service processor with the admin account credentials. Start a CLI session.
2. Enter showcage to display the drive cage numbers/names.
3. Enter locatecage cage<n>, where cage<n> is the drive cage number/name, to blink the LEDs on
the front of the drive cage. Perform this one cage at a time.
4. Identify the physical location of the drive cage and make a note of the drive cage number on a
separate paper for reference during servicing.
5. Enter setcage position "Rack<xx> Rack-Unit<yy>" cage<n>, where <xx> is the rack
designator (00 is the main rack, which contains the nodes; 01 or higher are expansion cabinets), <yy> is
the Rack-Unit number in the rack (for example, 1 – 50) at the bottom of the drive cage, and cage<n> is
the logical drive cage number/name.
6. Enter showcage -d cage<n> to verify these settings.
7. Repeat for each drive cage displayed in step 2.
8. Exit and log out of the session.
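The labeling step above can be sketched as a small helper that builds the setcage commands from the cage names and the physical locations the technician noted on paper. This only generates the command strings in the format shown in step 5; it does not talk to the CLI, and the rack and rack-unit values are illustrative.

```python
def setcage_commands(locations):
    """Build setcage position commands.

    locations: list of (cage_name, rack, rack_unit) tuples, where rack 00 is
    the main rack and 01 or higher are expansion cabinets.
    """
    return [f'setcage position "Rack{rack:02d} Rack-Unit{unit}" {cage}'
            for cage, rack, unit in locations]

# Hypothetical notes taken while running locatecage one cage at a time.
noted = [("cage0", 0, 18), ("cage1", 1, 2)]
for cmd in setcage_commands(noted):
    print(cmd)
# setcage position "Rack00 Rack-Unit18" cage0
# setcage position "Rack01 Rack-Unit2" cage1
```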