
HP 3PAR F-Class, T-Class, and StoreServ 10000 Storage Troubleshooting Guide

for HP 3PAR OS 3.1.2

Abstract
This guide is for system administrators and experienced users who are familiar with HP 3PAR F-Class, T-Class, and StoreServ
10000 Storage systems, understand the operating system(s) they are using, and have a working knowledge of RAID. This
guide provides information on storage system alerts, components, LEDs, and power on/off procedures.

HP Part Number: QL226-96953


Published: May 2013
© Copyright 2008, 2013 Hewlett-Packard Development Company, L.P.
Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial
Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under
vendor's standard commercial license.

The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express
warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall
not be liable for technical or editorial errors or omissions contained herein.
Acknowledgments

Adobe® and Acrobat® are trademarks of Adobe Systems Incorporated.


Warranty
WARRANTY STATEMENT: To obtain a copy of the warranty for this product, see the warranty information website:

http://www.hp.com/go/storagewarranty
Contents
1 Component Numbering for T-Class Storage System
    Identifying Storage System Components
    Service Processor Placement
    Understanding Component Numbering
        Cabinet Numbering
        PDU Numbering
        Battery Backup Unit Numbering
            Magnetek Battery Backup Units
        Controller Node Numbering
        Drive Chassis Numbering
            Drive Magazine Allocation
        Power Supply Numbering
2 Component Numbering for F-Class Storage System
    Identifying Storage System Components
    Service Processor Placement
    Understanding Component Numbering
        Cabinet Numbering
        PDU Numbering
        Controller Node Numbering
        Drive Chassis Numbering
            Drive Magazine Allocation
        Power Supply Numbering
3 Component Numbering for HP 3PAR StoreServ 10000 Storage
    Identifying Storage System Components
    Service Processor Placement
    Understanding Component Numbering
        Cabinet Numbering
        Power Distribution Unit Numbering
        Fan Module Numbering
        Battery Module Numbering
        Controller Node Numbering
        Drive Chassis Numbering
            Drive Magazine Allocation
        Power Supply Numbering
4 Understanding T-Class Storage System LED Status
    Using the T-Class Component LEDs
        Removing the Bezels and Unlocking the Door
        Drive Cage LEDs
            DC4 Drive Cage FC-AL Module LEDs
            Drive Magazine LEDs
        Controller Node LEDs
        Fibre Channel Port LEDs
        QLogic iSCSI Port LEDs
        Power Supply LEDs
        Battery Backup Unit LEDs
        Power Distribution Unit Lamps
    Service Processor LEDs
        Supermicro Service Processor LEDs
        Supermicro II Service Processor
    Securing the Storage System
5 Understanding F-Class Storage System LED Status
    Using the F-Class Component LEDs
        Bezel LEDs
            Removing the Bezels and Unlocking the Door
        Drive Chassis LEDs
            OPs Panel LEDs
            Interface Card LEDs
            Power Supply/Cooling Module LEDs
        Drive Magazine LEDs
        Controller Node LEDs
        Fibre Channel Port LEDs
        QLogic iSCSI Port LEDs
        Emulex Fibre Channel Port LEDs
        Controller Node Power Supply LEDs
        Power Distribution Unit Lamps
    Service Processor LEDs
        Supermicro Service Processor
        Supermicro II Service Processor
    Securing the Storage System
6 Understanding HP 3PAR StoreServ 10000 Storage LED Status
    Drive Cage LEDs
        DC4 Drive Cage FC-AL Module LEDs
        Drive Magazine LEDs
    Controller Node LEDs
        Controller Node Status Panel LEDs
    Fan Module LEDs
    Fibre Channel Adapter Port LEDs
    CNA Port LEDs
    Ethernet LEDs
    Power Supply LEDs
        Drive Chassis Power Supply LEDs
        Controller Node Power Supply LEDs
    Battery Module LEDs
    Service Processor LEDs
        Supermicro II Service Processor
        HP 3PAR Service Processor
7 Power Off/On the Storage System
    Powering Off the Storage System
    Powering On the Storage System
8 Alerts
9 Troubleshooting
    The checkhealth Command
        Using the checkhealth Command
    Troubleshooting Storage System Components
        Alert
            Format of Possible Alert Exception Messages
            Alert Example
            Alert Suggested Action
        Cage
            Format of Possible Cage Exception Messages
            Cage Example 1
            Cage Suggested Action 1
            Cage Example 2
            Cage Suggested Action 2
            Cage Example 3
            Cage Suggested Action 3
            Cage Example 4
            Cage Suggested Action 4
            Cage Example 5
            Cage Suggested Action 5
        Date
            Format of Possible Date Exception Messages
            Date Example
            Date Suggested Action
        LD
            Format of Possible LD Exception Messages
            LD Example 1
            LD Suggested Action 1
            LD Example 2
            LD Suggested Action 2
            LD Example 3
            LD Suggested Action 3
            LD Example 4
            LD Suggested Action 4
        License
            Format of Possible License Exception Messages
            License Example
            License Suggested Action
        Network
            Format of Possible Network Exception Messages
            Network Example 1
            Network Suggested Action 1
            Network Example 2
            Network Suggested Action 2
        Node
            Format of Possible Node Exception Messages
            Suggested Node Action, General
            Node Example 1
            Node Suggested Action 1
            Node Example 2
            Node Suggested Action 2
            Node Example 3
            Node Suggested Action 3
        PD
            Format of Possible PD Exception Messages
            PD Example 1
            PD Suggested Action 1
            PD Example 2
            PD Suggested Action 2
            PD Example 3
            PD Suggested Action 3
            PD Example 4
            PD Suggested Action 4
            PD Example 5
            PD Suggested Action 5
            PD Example 6
            PD Suggested Action 6
        Port
            Format of Possible Port Exception Messages
            Port Suggested Actions, General
            Port Example 1
            Port Suggested Action 1
            Port Example 2
            Port Suggested Action 2
            Port Example 3
            Port Suggested Action 3
            Port Example 4
            Port Suggested Action 4
            Port Example 5
            Port Suggested Action 5
        RC
            Format of Possible RC Exception Messages
            RC Example
            RC Suggested Action
        SNMP
            Format of Possible SNMP Exception Messages
            SNMP Example
            SNMP Suggested Action
        Service Processor
            Format of Possible SP Exception Messages
            SP Example
            SP Suggested Action
        Task
            Format of Possible Task Exception Messages
            Task Example
            Task Suggested Action
        VLUN
            Format of Possible VLUN Exception Messages
            VLUN Example
            VLUN Suggested Action
        VV
            Format of Possible VV Exception Messages
            VV Suggested Action
10 Support and Other Resources
    Contacting HP
    HP 3PAR documentation
    Typographic conventions
    HP 3PAR branding information
11 Documentation feedback
1 Component Numbering for T-Class Storage System
NOTE: Illustrations in this chapter show sample systems and might not match your configuration.

Identifying Storage System Components


Figure 1 (page 7) and Figure 2 (page 8) identify the major components of the T400 Storage
System in a 2M (40U) HP 3PAR cabinet.

Figure 1 T400 Front View



Figure 2 T400 Rear View

Service Processor Placement


The Service Processor (SP) is located at the bottom of the cabinet and is designed to support all
actions required for maintenance of the storage system, providing real-time, automated monitoring.
The SP also supports remote access to diagnose and resolve potential problems.
The SP is usually installed directly above the PDUs and below the battery tray (Figure 3 (page 9))
and is powered internally by the storage system. The SP does not require an external power
connection.



Figure 3 Placement of the SP

NOTE: In the T800, the SP is located above the backplane, below the lowest drive chassis but
above the upper battery tray. Figure 5 (page 11) illustrates SP placement for the T800.
When a cabinet does not include a SP, a filler panel covers the area of the cabinet that the SP
normally occupies.

Understanding Component Numbering


Because of the large number of potential configurations, HP has standardized component placement and internal cabling to simplify installation and maintenance. For this reason, system
components are placed in the cabinet according to the principles outlined in this section and
numbered according to their order and location in the cabinet.

NOTE: For information about standardized cabling, see the HP 3PAR T-Class Storage System
Installation and Deinstallation Guide.

Cabinet Numbering
The T-Class Storage System 2M (40U) cabinet is an EIA-standard rack that accepts storage system
components. Numbers for chassis bays are assigned:
• beginning with 0.
• from top to bottom.
Figure 4 (page 10) illustrates numbering of chassis bays in a T-Class cabinet.



Figure 4 Numbering of Chassis Bays in the Cabinet

A storage system can be housed in a single cabinet or multiple cabinets. When multiple cabinets are required, the first cabinet (the controller node cabinet) holds the storage system backplane populated with controller nodes. Any additional cabinets, or drive chassis cabinets, hold the additional drive chassis that do not fit into the controller node cabinet.
Table 1 (page 10) describes the pattern for cabinet numbering in multi-cabinet storage systems
and for operating sites with multiple systems:
Table 1 Cabinet Numbering

Cabinet                                                       Number
Controller node cabinet                                       C00
Drive chassis cabinets connecting to the first node cabinet   C01, C02, C03 ... C09

Figure 5 (page 11) shows the location of system components for the T400 and T800 controller
node cabinets. Figure 6 (page 12) shows the location of system components for drive chassis
cabinets.



Figure 5 Controller Node Cabinet Component Layout



Figure 6 Drive Chassis Cabinet Component Layout

PDU Numbering
For each cabinet, the four Power Distribution Units (PDUs) occupy the lowest chassis bay in the
cabinet. Numbers for PDUs are assigned:
• beginning with 0.
• from top to bottom.
Figure 7 (page 12) illustrates the four PDUs at the bottom of a T-Class cabinet.

Figure 7 Numbering of PDUs



NOTE: In the T800, PDUs are positioned back-to-back so that they only take up 2U of space at
the bottom of the cabinet rather than the standard 4U of space. PDUs are accessible from both the
front and the rear of the storage system. “Controller Node Cabinet Component Layout” (page 11)
illustrates PDU placement for the T800.
Each PDU has two power banks, each with a separate circuit breaker, to be used exclusively for
storage system components (Figure 8 (page 13)).

Figure 8 Power Banks in the PDU

WARNING! To avoid possible injury, damage to storage system equipment, and potential loss
of data, do not use the surplus power outlets in the storage system PDUs. Never use outlets in the
PDUs to power components that do not belong to the storage system or to power storage system
components that reside in other cabinets.

NOTE: For more information on PDUs and storage system configurations, see the T-Class Storage System Installation and Deinstallation Guide.

Battery Backup Unit Numbering


The controller node cabinet includes one or two battery trays that hold the battery backup units
(BBU). The BBUs supply enough power to write the cache memory to the IDE drive inside the nodes
in the event of a power failure. One battery per controller node is required for all storage system
configurations. BBU placement and numbering schemes vary according to the type of components
used in the system.
There is always a battery tray located directly below the backplane. When a second battery tray
is required, as is the case with storage systems that have six or eight controller nodes, a second
battery tray rests immediately above the backplane.
Storage systems use Magnetek BBUs. Each battery unit contains two independently switched batteries, labeled battery A and battery B (Figure 9 (page 13)).

Figure 9 Battery Backup Unit



A battery tray can hold a maximum of four BBUs. The number of BBUs and battery trays in a system
depends on the number of controller nodes installed (Table 2 (page 14)).
Table 2 Number of BBUs and Battery Tray Placement by Storage System Backplane and Number of Controller Nodes

Backplane   Nodes   BBUs   Battery Trays   Tray Placement
T400        2       2      1               Below backplane
T400        4       4      1               Below backplane
T800        2       2      1               Below backplane
T800        4       4      1               Below backplane
T800        6       6      2               Below backplane (1), above backplane (1)
T800        8       8      2               Below backplane (1), above backplane (1)
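
The table reduces to a simple rule: one BBU per controller node, with a second battery tray (above the backplane) only for six- and eight-node T800 systems. A minimal sketch of that rule in Python; the function name and return shape are ours, for illustration only:

```python
def t_class_battery_layout(backplane: str, nodes: int) -> dict:
    """Restate Table 2: one BBU per controller node; a second battery
    tray (above the backplane) only for 6- and 8-node T800 systems."""
    valid = {"T400": (2, 4), "T800": (2, 4, 6, 8)}
    if nodes not in valid[backplane]:
        raise ValueError(f"{backplane} does not support {nodes} controller nodes")
    trays = ["below backplane"]
    if nodes > 4:
        trays.append("above backplane")
    return {"bbus": nodes, "trays": trays}

print(t_class_battery_layout("T800", 6))
# {'bbus': 6, 'trays': ['below backplane', 'above backplane']}
```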

Magnetek Battery Backup Units


Magnetek BBUs have batteries that sit vertically, with battery A above battery B (Figure 10 (page 14)).

Figure 10 Magnetek BBUs

When facing the rear of the storage system, Magnetek BBUs are numbered from right to left, 0
through 3. When two battery trays are present, the upper tray is numbered 0 and the lower tray
is numbered 1 (Figure 11 (page 15)).



Figure 11 Magnetek BBU Numbering Scheme

Controller Node Numbering


The T-Class Storage System contains two, four, six, or eight controller nodes per system and uses only T-Class controller nodes. Controller nodes are loaded into the backplane enclosure from bottom to top.
Therefore, for a T800 with only two controller nodes installed, those controller nodes would occupy the lowest 4U of the backplane and would be numbered node 6 and node 7. The other bays in the backplane enclosure would be protected with filler panels that block insertion of other components.
A controller node takes on the number of the bay that it occupies in the backplane, as shown in Figure 12 (page 16).



Figure 12 Numbering of Controller Nodes

As shown in Figure 13 (page 17), a controller node contains six PCI slots. These slots accept PCI
adapters such as dual-port Fibre Channel adapters, iSCSI adapters, and Ethernet adapters. The
controller node also has a management Ethernet port (E0) and a maintenance port (C1).



Figure 13 Numbering for Dual-Port Fibre Channel Adapters in the Controller Node PCI Slots

Each Fibre Channel adapter in a PCI slot has two or four ports, as does each iSCSI adapter. PCI adapters assume the numbers of the PCI slots they occupy.
• In dual-port adapters, ports are labeled port 1 and port 2, from top to bottom.
• In quad-port Fibre Channel adapters, the ports are numbered port 1 through port 4, from top to bottom.
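
In practice these three numbers combine into the node:slot:port (N:S:P) notation that the HP 3PAR CLI uses to identify ports (for example, in showport output). A small illustrative sketch in Python; the helper name and range checks are ours and assume a T-Class node as described above:

```python
def port_id(node: int, slot: int, port: int, ports_per_adapter: int = 4) -> str:
    """Build an N:S:P identifier from T-Class numbering: nodes 0-7,
    PCI slots 0-5, adapter ports numbered from 1, top to bottom."""
    if not 0 <= node <= 7:
        raise ValueError("controller nodes are numbered 0 through 7")
    if not 0 <= slot <= 5:
        raise ValueError("a T-Class node has six PCI slots, numbered 0 through 5")
    if not 1 <= port <= ports_per_adapter:
        raise ValueError("adapter ports are numbered starting at 1")
    return f"{node}:{slot}:{port}"

print(port_id(6, 2, 3))  # '6:2:3' -> node 6, PCI slot 2, port 3
```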
Inside the controller node are control cache DIMMs and data cache DIMMs.
• Control cache DIMMs are located in control cache slots 0 and 1 (Figure 14 (page 18)).
• Data cache DIMMs are located on data cache riser cards (Figure 14 (page 18)).



Figure 14 Control Cache and Data Cache DIMMs in a T-Class Controller Node

Numbers for controller nodes and their components are assigned in the order indicated in Table 3
(page 18).
Table 3 Numbering System for Controller Nodes and their Components

The Following Components...    Are Numbered...       Running from...
Controller nodes               0,1,2,3,4,5,6,7       left to right¹ and top to bottom
PCI adapters                   0,1,2,3,4,5           left to right¹
PCI ports                                            top to bottom
  dual-port adapters           1,2
  quad-port adapters           1,2,3,4
Control cache DIMMs                                  left to right¹
  control cache                0,1
  data cache                   0,1,2,3,4,5,6,7
Data cache DIMMs                                     top to bottom
  Bank 0                       0,1
  Bank 1                       0,1
  Bank 2                       0,1

¹ When facing the storage system.

Drive Chassis Numbering


Depending on the specific configuration, a storage system can include up to 64 drive chassis. A drive chassis houses two drive cages, each of which contains five drive bays. Each drive bay can accommodate a single drive magazine holding four disks, for a total of 20 disks per drive cage and 40 disks per drive chassis.
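
As a quick arithmetic check of those figures (values restated from the paragraph above):

```python
cages_per_chassis = 2      # drive cages per drive chassis
bays_per_cage = 5          # drive bays per cage
disks_per_magazine = 4     # disks per drive magazine (one magazine per bay)
max_chassis = 64           # maximum drive chassis per storage system

disks_per_cage = bays_per_cage * disks_per_magazine       # 20
disks_per_chassis = cages_per_chassis * disks_per_cage    # 40
print(disks_per_cage, disks_per_chassis, max_chassis * disks_per_chassis)
# 20 40 2560
```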
Numbers for drive chassis are assigned beginning with 0, from bottom to top, starting with the drive chassis directly above the storage system backplane.



Drive chassis are always placed above the storage system backplane enclosure and numbered
according to their position in relation to the backplane, as shown in Figure 15 (page 19).

Figure 15 Numbering of Drive Chassis

NOTE: For systems occupying multiple cabinets, drive chassis numbers continue at the bottom
of the next cabinet and progress through the top of the cabinet.
Figure 16 (page 20) and Figure 17 (page 20) illustrate individual drive chassis components and
how they are numbered. Fibre Channel ports in the FC-AL adapters at the sides of the drive chassis
enable connection to the controller nodes.



Figure 16 Numbering of Drive Chassis Components

Figure 17 Numbering of Disks on a DC4 and DC4 Type-2 Drive Magazine

Numbers for drive chassis components are assigned:


• from bottom to top.
• from rear to front (in the case of disks).
• in the order indicated by Table 4 (page 20).
Table 4 Numbering System for Drive Chassis Components

The Following Components...    Are Numbered...        Running from...
Drive cages                    0,1,...                bottom to top
FC-AL modules                  0,1                    left to right
Fibre Channel ports
  FC-AL 0                      A0,B0                  top to bottom
  FC-AL 1                      A1,B1
Drive magazines                0,1,2,3,4,5,6,7,8,9    left to right
Disks on the drive magazine    0,1,2,3                rear to front
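
Taken together, these numbers give each disk a cage:magazine:disk position (positions of this form appear, for example, in the CagePos column of the CLI's showpd output). A hedged sketch; the helper is ours:

```python
def cage_pos(cage: int, magazine: int, disk: int) -> str:
    """Compose a cage:magazine:disk position using Table 4's numbering:
    magazines 0-9 (left to right), disks 0-3 (rear to front)."""
    if not 0 <= magazine <= 9:
        raise ValueError("drive magazines are numbered 0 through 9")
    if not 0 <= disk <= 3:
        raise ValueError("disks on a magazine are numbered 0 through 3")
    return f"{cage}:{magazine}:{disk}"

print(cage_pos(3, 5, 2))  # '3:5:2' -> cage 3, magazine 5, third disk from the rear
```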

Drive Magazine Allocation


For highest availability and data protection, drive magazines are placed on different loops and
internal power domains by loading them in the order illustrated by Figure 18 (page 21).

NOTE: See the systems planning document or HP 3PAR Systems Assurance and Pre-Site Planning
Guide for drive magazine allocation instructions specific to your system.

Figure 18 Pattern for Loading Initial Drive Magazines into the Drive Chassis



NOTE: For further instructions on drive magazine allocation, see the HP 3PAR T-Class Storage System Installation and Deinstallation Guide.

Power Supply Numbering


Cabinets are divided into upper and lower power domains that contain drive cages or controller
nodes and dedicated power supplies. Drive cages and controller nodes depend on these power
supplies, located at the rear of the system, to supply power from the PDUs at the bottom of the
cabinet.
When viewing the cabinet from the rear, the power supplies in each power domain are numbered
from 0 to 3, from left to right. Figure 19 (page 22) shows an expansion cabinet.

Figure 19 Numbering of Power Supplies within the Power Domains



2 Component Numbering for F-Class Storage System
NOTE: Illustrations in this chapter show sample systems and might not match your configuration.

Identifying Storage System Components


Figure 20 (page 23) and Figure 21 (page 24) identify the major components of an F-Class Storage
System.

Figure 20 F400 Front View



Figure 21 F400 Rear View

Service Processor Placement


The Service Processor (SP) is located at the bottom of the cabinet and is designed to support all
actions required for maintenance of the storage system, providing real-time automated monitoring.
The SP also supports remote access to diagnose and resolve potential problems.
Because the SP is capable of supporting multiple storage systems at the same operating site, not
all cabinets contain a SP. However, when present, the SP is usually installed directly above the
PDUs and below the drive cage (see Figure 20 (page 23)). The SP is powered internally by the
storage system and does not require an external power connection. When a cabinet does not
include a SP, a filler panel covers the area of the cabinet that the SP normally occupies.



Understanding Component Numbering
Because of the almost unlimited number of potential configurations, HP has standardized component placement and internal cabling to simplify installation and maintenance. For this reason, system components are placed in the cabinet according to the principles outlined in this section and numbered according to their order and location in the cabinet.

Cabinet Numbering
The F-Class Storage System 2M (40U) cabinet is an EIA-standard rack that holds storage system
components. Numbers for chassis bays are assigned beginning with 0, from top to bottom.
Figure 22 (page 25) illustrates numbering of chassis bays in an HP 3PAR cabinet.

Figure 22 Numbering of Chassis Bays in the Cabinet

A storage system can be housed in a single cabinet or multiple cabinets. When multiple cabinets
are required, the first cabinet (the controller node cabinet) holds the backplane populated with
controller nodes. Any additional cabinets, or drive chassis cabinets, hold the additional drive
chassis that do not fit into the controller node cabinet.



Table 5 (page 26) describes the pattern for cabinet numbering in multi-cabinet storage systems
and for operating sites with multiple systems:
Table 5 Cabinet Numbering

Cabinet                                                       Number
Controller node cabinet                                       C00
Drive chassis cabinets connecting to the first node cabinet   C01, C02, C03...C09

Figure 23 (page 26) shows the location of controller node and drive chassis components for the
storage system cabinet in the F200 and F400.

Figure 23 Controller Node and Drive Chassis Component Layout

PDU Numbering
The four Power Distribution Units (PDUs) occupy the lowest chassis bay in the cabinet. Refer to
Figure 22 (page 25) for bay numbering.
Numbers for PDUs are assigned:
• beginning with 0.
• from top to bottom.
Figure 24 (page 27) illustrates the four PDUs at the bottom of an HP 3PAR cabinet.



Figure 24 Numbering of PDUs

Each PDU has two power banks, each with a separate circuit breaker, to be used exclusively for
storage system components (Figure 25 (page 27)).

Figure 25 PDU Power Banks

WARNING! To avoid possible injury, damage to storage system equipment, and potential loss
of data, do not use the surplus power outlets in the storage system PDUs. Never use outlets in the
PDUs to power components that do not belong to the storage system or to power storage system
components that reside in other cabinets.

Controller Node Numbering


The F-Class Storage System contains two or four controller nodes per system. Controller nodes are numbered from top to bottom: node 0 and node 1 for a two-node system, and node 0, node 1, node 2, and node 3 for a four-node system.



Figure 26 Numbering of Controller Nodes

A controller node contains two controller slots and two on-board Ethernet ports. See Figure 27
(page 29) for specific port type assignments.



Figure 27 Numbering for Dual-Port Fibre Channel Adapters in the Controller Node PCI Slots

Each Fibre Channel adapter in a PCI slot has two or four Fibre Channel ports. Fibre Channel
adapters assume the numbers of the PCI slots they occupy.
• In dual-port adapters, ports are labeled port 1 and port 2, from top to bottom.
• In quad-port Fibre Channel adapters, the ports are numbered port 1, port 2, port 3, and port
4, horizontally.
Inside the controller node are data cache DIMMs and control cache DIMMs.
• Data cache DIMMs are located in data cache slots 0 through 2.
• Control cache DIMMs are located in control cache slots 0 and 1 (Figure 28 (page 29)).

Figure 28 Control Cache and Data Cache DIMMs in the Controller Node



Drive Chassis Numbering
Depending on configuration, an F-Class Storage System can include up to 10 drive chassis. A
drive chassis houses 16 drive magazines. Drive chassis are first placed sequentially below controller
node 1 (controller node 3 in an F400) and then sequentially above controller node 0. Drive chassis
are numbered as shown in Figure 29 (page 30).

Figure 29 Numbering of Drive Chassis

NOTE: For systems occupying multiple cabinets, drive chassis numbers continue at the bottom
of the next cabinet and progress through the top of the cabinet.
Figure 30 (page 31) and Figure 31 (page 31) illustrate individual drive chassis components and
how they are numbered. Fibre Channel ports in the Fibre Channel Arbitrated Loop (FC-AL) at the
sides of the drive chassis enable connection to the controller nodes.



Figure 30 Drive Chassis - Front View, Drive Magazine Bay Numbering

Figure 31 Drive Chassis - Rear View, Port Numbering

Drive Magazine Allocation


For highest availability and data protection, drive magazines are placed on different loops and
internal power domains by loading them in the order described in Table 6 (page 32).
The following figure shows the drive magazine numbering:

NOTE: See the systems planning document or the HP 3PAR Systems Assurance and Pre-Site
Planning Guide for drive magazine allocation instructions specific to your system.

Figure 32 Drive Magazine Bay Numbering

Drive magazines are loaded in the following ordered pairs:



Table 6 Drive Magazine Loading Pattern

Group Number    Drive Magazine Pair Number    Drive Magazine Bay
1               1                             0, 4
1               2                             11, 15
2               3                             8, 12
2               4                             3, 7
3               5                             1, 5
3               6                             10, 14
4               7                             9, 13
4               8                             2, 6

NOTE: The loading sequence displayed in the table above indicates the loading order is in
vertical columns. All drives in a vertical column must be of the same type and speed. Mixing drive
types and speeds in the same column may cause unpredictable results.
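
Flattened into a single sequence, the pairs in Table 6 give the complete bay loading order; the snippet below simply restates the table (structure and names are ours):

```python
# Drive magazine bay pairs in loading order, restated from Table 6.
LOADING_PAIRS = [(0, 4), (11, 15), (8, 12), (3, 7),
                 (1, 5), (10, 14), (9, 13), (2, 6)]

def bay_loading_order():
    """Yield drive magazine bay numbers in the order they are loaded."""
    for pair in LOADING_PAIRS:
        yield from pair

print(list(bay_loading_order()))
# [0, 4, 11, 15, 8, 12, 3, 7, 1, 5, 10, 14, 9, 13, 2, 6]
```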

Power Supply Numbering


F-Class Storage System cabinets share a single power domain that contains a drive cage or
controller node and two dedicated power supplies. Drive cages and controller nodes depend on
these two power supplies to supply power from the PDUs at the bottom of the cabinet. When
viewing the cabinet from the rear, the power supplies are numbered as follows (Figure 33
(page 33)):



Figure 33 Numbering of Power Supplies



3 Component Numbering for HP 3PAR StoreServ 10000
Storage
NOTE: Illustrations in this chapter only display examples of systems and may not match any
particular storage system configuration.

Identifying Storage System Components


Figure 34 (page 34) through Figure 37 (page 37) identify major components of an HP 3PAR
StoreServ 10000 Storage in a 2M cabinet.

Figure 34 Front View of the HP 3PAR StoreServ 10000 Storage, 3PAR PDU

Use the following table for Figure 35 (page 35) and Figure 36 (page 36):

Item    Description
1       Drive Cage FC-AL Modules
2       Cooling Fans
3       Battery Modules
4       Service Processor
5       Drive Chassis
6       Leveling Feet



Figure 35 Front View of HP 3PAR StoreServ 10400 and 10800 Storage Systems, Single Phase PDU



Figure 36 Front View of HP 3PAR StoreServ 10400 and 10800 Storage Systems, 3–Phase PDU



Figure 37 Rear View of HP 3PAR StoreServ 10400 and 10800 Storage Systems, 3PAR PDU

Service Processor Placement


The Service Processor (SP) supports all actions required for the maintenance of the storage system.
The SP provides real-time, automated monitoring of the storage system. The SP also supports remote
access for HP to diagnose and resolve potential problems. The SP is adjacent to the controller node
chassis and resides in the lower section of the cabinet or third-party rack. An SP is typically installed directly below the node chassis power supply compartment and seated directly above the DC4 drive chassis (Figure 38 (page 38), Figure 39 (page 38), and Figure 40 (page 39)). The
SP is internally powered by the storage system and does not require an external power connection.



Figure 38 Placement of the SP, 3PAR PDU Systems

Figure 39 Placement of the Service Processor, Single Phase PDU Systems



Figure 40 Placement of the Service Processor, 3–Phase PDU Systems

NOTE: For both 10400 and 10800 Storage systems, the SP is located below the controller node
chassis (Figure 38 (page 38), “Placement of the Service Processor, Single Phase PDU Systems”
(page 38) and “Placement of the Service Processor, 3–Phase PDU Systems” (page 39)).

Understanding Component Numbering


Because of the large number of possible storage system configurations, HP has standardized
component placement and internal cabling to simplify installation and maintenance. System
components are placed in the cabinet according to the principles outlined in this section and
numbered according to their order and location in the cabinet.

Cabinet Numbering
The 2M storage system cabinet is an EIA-standard rack and houses the storage system components.
A storage system can be housed in a single or multiple cabinets. When multiple cabinets are
required, the first cabinet (known as the controller node cabinet) holds the storage system node
chassis populated with controller nodes. Any additional cabinets, or drive chassis cabinets, store
the additional drive chassis that do not fit into the controller node cabinet.
Table 7 (page 39) describes the pattern for cabinet numbering in multi-cabinet storage systems
and for operating sites with multiple systems.
Table 7 Cabinet Numbering

Cabinet                                                       Number
Controller node cabinet                                       C0
Drive chassis cabinets connecting to the first node cabinet   C1, C2, C3...C5

Power Distribution Unit Numbering


Each cabinet contains one of the following PDU configurations:
• four 3PAR PDUs mounted along the side
• four Single-Phase PDUs mounted at the bottom
• two 3–Phase PDUs mounted at the bottom
Numbers for PDUs are assigned beginning with 0, from bottom to top.



Figure 41 (page 40), Figure 42 (page 41), and Figure 43 (page 41) illustrate the PDUs in the HP
3PAR StoreServ 10000 Storage cabinet.

Figure 41 Power Distribution Units, 3PAR PDU



Figure 42 Power Distribution Units, Single Phase PDU

Item    Description
1       PDU #3
2       PDU #2
3       PDU #1
4       PDU #0

Figure 43 Power Distribution Units, 3–Phase PDU

Item    Description
1       PDU #2
2       PDU #1

Each PDU is equipped with separate power banks and separate circuit breakers, used exclusively
for storage system components.



Figure 44 Power Banks in the PDU, 3PAR PDU

Figure 45 Power Banks in the PDU, Single Phase PDU

Item    Description
1       Front of unit, main breaker
2       Front of unit, circuit breakers
3       Rear of unit, power banks



Figure 46 Power Banks in the PDU, 3–Phase PDU

Item    Description
1       Front of unit, circuit breakers
2       Rear of unit, power banks

WARNING! To avoid possible injury, damage to storage system equipment, and potential loss
of data, do not use the surplus power outlets in the storage server PDUs. Never use outlets in the
PDUs to power components that do not belong to the storage server or to power storage server
components that reside in other cabinets.

Fan Module Numbering


A fan module provides proper cooling for controller nodes in a controller node chassis. A controller
node chassis may contain one or two fan module compartments and hold a maximum of 16 fan
modules (two per node, see Figure 47 (page 43)).

Figure 47 Fan Module

The number of fans required for a system depends on the system configuration. Each controller
node requires two fan modules to provide cooling and maintain a constant operating temperature.
The fans are paired directly with a controller node and located at the front of the cabinet.



Table 8 Fan Module Configuration Based Upon System and Number of Controller Nodes

System    Controller Nodes    Fan Modules
10400     2                   4
10400     4                   8
10800     2                   4
10800     4                   8
10800     6                   12
10800     8                   16
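
The table is simply two fan modules per installed controller node; as a one-line check (function name ours):

```python
def fan_modules(controller_nodes: int) -> int:
    """Each controller node requires two fan modules (Table 8)."""
    return 2 * controller_nodes

assert [fan_modules(n) for n in (2, 4, 6, 8)] == [4, 8, 12, 16]
```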

The following illustration shows the numbering scheme for the fan modules within a controller node chassis.

Figure 48 Fan Module Numbering Scheme, Single Phase and 3–Phase PDU

Battery Module Numbering


Depending on the controller node configuration, the controller node chassis may include one or
two battery compartments to house the battery modules. The battery modules supply enough power
to write the cache memory to the drive inside the nodes during a power failure.
Each controller node requires an associated battery module for all configurations (Figure 49 (page 45)).



Figure 49 Battery Module

Battery module placement may vary according to the type of system configuration and number of
installed controller nodes (Table 9 (page 45)).
Table 9 Battery Module Configuration and Placement Based Upon System and Number of Controller Nodes

System    Controller Nodes    Battery Modules    Placement
10400     2                   2                  Below Cooling Fan Compartment
10400     4                   4                  Below Cooling Fan Compartment
10800     2                   2                  Above and Below Each Cooling Fan Compartment
10800     4                   4                  Above and Below Each Cooling Fan Compartment
10800     6                   6                  Above and Below Each Cooling Fan Compartment
10800     8                   8                  Above and Below Each Cooling Fan Compartment

When facing the front of the storage system, the battery modules are numbered from right to left
and are directly connected to the associated controller node (Figure 50 (page 46) and Figure 51
(page 46)).
• Single battery compartment: 0 through 3 (lower)
• Two battery compartments: 0 through 3 (lower); 4 through 7 (upper)



Figure 50 Battery Module Numbering Scheme, 3PAR PDU Systems

Figure 51 Battery Module Numbering Scheme, Single Phase and 3–Phase PDU



Controller Node Numbering
A storage system may contain two to eight controller nodes per system configuration. The controller
node chassis is located at the rear of the storage cabinet.
From the rear of the storage cabinet, component numbering starts with zero (0) from left to right.
For example, the node in the lower left position is identified as Node 0 and the adjacent node
(right) is identified as Node 1.
With a larger controller node chassis that supports a maximum of eight controller nodes, the
orientation of the lower and upper controller nodes becomes inverted. The controller node located
in the upper right corner of a 10800 is identified as controller node 7. If there are empty bays, install
filler panels to protect the node chassis.
Use Table 10 (page 47) as a guideline for loading controller nodes into the chassis.
Table 10 Controller Node Loading Order

System    Controller Nodes    Loading Order
10400     2                   0, 1
10400     4                   0, 1, 2, 3
10800     2                   0, 1
10800     4                   0, 1, 4, 5
10800     6                   0, 1, 4, 5, 2, 3
10800     8                   0, 1, 4, 5, 2, 3, 6, 7
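
Expressed as lookup data, Table 10 restates as follows (the dictionary structure is ours, for illustration):

```python
# Controller node bay loading order, restated from Table 10.
NODE_LOADING_ORDER = {
    ("10400", 2): [0, 1],
    ("10400", 4): [0, 1, 2, 3],
    ("10800", 2): [0, 1],
    ("10800", 4): [0, 1, 4, 5],
    ("10800", 6): [0, 1, 4, 5, 2, 3],
    ("10800", 8): [0, 1, 4, 5, 2, 3, 6, 7],
}

print(NODE_LOADING_ORDER[("10800", 6)])  # [0, 1, 4, 5, 2, 3]
```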

The following example in Figure 52 (page 47) illustrates the numbering and positioning of each
controller node in a cabinet.

Figure 52 Numbering and Positioning of Controller Nodes, Rear View



As shown in Figure 53 (page 48), a controller node contains nine PCI slots. These slots accept PCI adapters such as quad-port Fibre Channel adapters and Converged Network Adapters (CNAs). Each controller node features a main administrative Ethernet port (E0), a Remote Copy Ethernet port (RCIP) (E1), and a console maintenance port (S0).

Figure 53 Controller Node PCI Slots and Port Numbering

Each CNA in a PCI slot features two iSCSI/FCoE ports and each Fibre Channel adapter in a PCI
slot offers four ports. Each PCI adapter is identified by the number of the PCI slot it occupies.
• In dual-port adapters, the ports are labeled port 1 and port 2 in ascending order away from
the adapter handle.
• In quad-port Fibre Channel adapters, the ports are numbered port 1, port 2, port 3, and port
4 in ascending order away from the adapter handle.
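
As on the other platforms, a port is addressed by its node, slot, and port numbers (N:S:P). A variant of the earlier sketch for a 10000-series node, assuming the nine PCI slots are numbered 0 through 8 in keeping with the zero-based numbering used throughout; the helper is ours:

```python
def storeserv_port_id(node: int, slot: int, port: int,
                      ports_per_adapter: int = 4) -> str:
    """Build an N:S:P identifier for an HP 3PAR StoreServ 10000 node:
    up to eight nodes (0-7) and nine PCI slots (assumed 0-8)."""
    if not 0 <= node <= 7:
        raise ValueError("controller nodes are numbered 0 through 7")
    if not 0 <= slot <= 8:
        raise ValueError("each node has nine PCI slots")
    if not 1 <= port <= ports_per_adapter:
        raise ValueError("adapter ports are numbered starting at 1")
    return f"{node}:{slot}:{port}"

print(storeserv_port_id(7, 8, 4))  # '7:8:4'
```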

Drive Chassis Numbering


Depending on the specific configuration, a 10400 system may include up to 24 drive chassis and
the 10800 system may hold up to 48 drive chassis.
Each drive chassis contains ten drive magazine slots and each slot can accommodate a single
drive magazine holding four disks for a total of 40 hard drives.
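
At 40 drives per chassis, those chassis limits translate directly into maximum drive counts; a quick check (values restated from the text):

```python
drives_per_chassis = 10 * 4  # ten magazine slots, four disks per magazine

for system, max_chassis in (("10400", 24), ("10800", 48)):
    print(system, max_chassis * drives_per_chassis)
# 10400 960
# 10800 1920
```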
Numbers for drive chassis are assigned:
• Beginning with 0
• From bottom to top
Drive chassis are placed above the 10400 node chassis and numbered according to their position in relation to the controller node chassis, as shown in Figure 54 (page 49) and Figure 55 (page 50). No drive chassis are permitted in the 10800 cabinet.



Figure 54 Numbering of Drive Chassis, 3PAR PDU Systems



Figure 55 Numbering of Drive Chassis, Single Phase and 3–Phase PDU

NOTE: For systems with multiple cabinets, drive chassis numbering may vary.
Figure 56 (page 50) and Figure 57 (page 51) illustrate individual drive chassis components and
the numbering scheme. The Fibre Channel ports in the FC-AL adapters located on each side of the
drive chassis enable connection to the controller nodes.

Figure 56 Numbering of Drive Chassis Components



Figure 57 Numbering of Disks on a Drive Magazine

Numbers for drive chassis components are assigned:
• From bottom to top.
• From rear to front (for disks).
• In the order indicated by Table 11 (page 51).
Table 11 Numbering System for Drive Chassis Components

The Following Components...        Are Numbered...               Running from...
Drive cages                        0, 1, ...                     bottom to top
FC-AL modules                      0, 1                          left to right
Fibre Channel ports (FC-AL 0)      A0, B0                        top to bottom
Fibre Channel ports (FC-AL 1)      A1, B1                        top to bottom
Drive magazines                    0, 1, 2, 3, 4, 5, 6, 7, 8, 9  left to right
Hard drives on the drive magazine  0, 1, 2, 3                    rear to front
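
This numbering also appears in CLI output as the disk position cage:magazine:disk (the CagePos
column of showpd output). As an illustrative sketch using hypothetical values (output abbreviated):

cli% showpd -p -cg 1
Id CagePos ...
20 1:0:0   ...
21 1:0:1   ...

Here, CagePos 1:0:0 identifies disk 0 on drive magazine 0 in drive cage 1.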

Drive Magazine Allocation


Drive magazines are equally distributed between drive cages and placed on different loops and
internal power domains.
Figure 58 (page 52) illustrates the loading order of drive magazines for highest availability and
data protection.

NOTE: Refer to the HP 3PAR Systems Assurance Document for drive magazine allocation
instructions for a specific layout.

Figure 58 Pattern for Loading Initial Drive Magazines Into the Drive Chassis

Power Supply Numbering


Cabinets are divided into lower and upper power domains, each containing drive cages or controller
nodes with dedicated power supplies. The drive cages and controller nodes rely on the power supplies
located at the rear of the system, which receive power from the PDUs.
When viewing the rear of the cabinet:
• Drive chassis power supplies in both the upper and lower power domains are numbered: 0
through 3 from left to right (“Numbering of Power Supplies Within the Power Domains, 3–Phase
PDU Systems” (page 54)).
• Each controller node requires two power supplies. The power supplies in the lower controller
node chassis are numbered 0 and 1 from left to right. The power supplies in the upper node
chassis are numbered 0 and 1 from right to left.

Figure 59 Numbering of Power Supplies Within the Power Domains, Single Phase PDU Systems

Figure 60 Numbering of Power Supplies Within the Power Domains, 3–Phase PDU Systems

Use the following table to interpret the item callouts in Figure 59 (page 53) and Figure 60 (page 54):

Item  Description
1     DC power supplies
2     Node power supplies
3     Node power supplies
4     DC power supplies
5     Power distribution units (PDUs)

4 Understanding T-Class Storage System LED Status
Using the T-Class Component LEDs
The T-Class Storage System components have LEDs to indicate whether or not the hardware is
functioning properly and to help identify errors. These LEDs help diagnose basic hardware problems.
You can quickly identify hardware problems by examining the LEDs on all the components and
using the following tables and illustrations in this chapter.
If you detect any problems during inspection of the LEDs, contact your Authorized Service Provider.

Removing the Bezels and Unlocking the Door


If your cabinet has locking fascias, you must first remove the fascias to access the system bezel.

WARNING! Hazardous energy is located behind the rear access door of the storage system
cabinet. Use caution when working with the door open.

• To view the node, drive chassis or SP LEDs, remove the bezels.


• To view the power supply, battery or PDU LEDs, open the rear door by unlatching the three
latches on the door.

NOTE: Many LEDs are visible without removing the bezels. To view the power supply, battery
or PDU LEDs, open the rear door of the cabinet.

Drive Cage LEDs


The DC4 drive chassis holds one DC4 drive cage housing two drive cage FC-AL modules and
a maximum of ten drive magazines.

Figure 61 DC4 Drive Cage

DC4 Drive Cage FC-AL Module LEDs


DC4 drive cage FC-AL modules have the following LEDs (Figure 62 (page 56)):

Figure 62 Connections and LEDs on the DC4 Drive Cage FC-AL Modules

Table 12 Drive Cage DC4 FC-AL Module LED Displays

LED           Appearance                                  Indicates
RX            Steady green light                          A small form-factor pluggable optical transceiver (SFP) is present and a valid signal is being received from the node.
              No light                                    No connection to the node, or no SFP is installed.
TX            Steady green light                          An SFP is present and transmitting.
              No light                                    No SFP is present, or the SFP transmitter has failed.
FC-AL status  Steady green light                          The drive cage is functioning properly but is not communicating with any nodes.
              Flashing green light (1 blink per second)   The drive cage is connected and communicating with the system manager of a node in the cluster.
              Steady amber light                          Normal initial indication for two seconds upon power-up. Otherwise, an FC-AL module error or other cage error. If both FC-AL modules show a steady amber light, the temperature of a disk drive in the drive cage has exceeded its high-level threshold, or a power supply has failed.
              Flashing amber light (1 blink per second)   The drive cage has some type of error, such as a failed or missing power supply, but is communicating with a node.
              Rapid toggle between amber and green light  A cage firmware upgrade initiated by the upgradecage CLI command is in progress. A firmware upgrade normally takes less than two minutes to complete.
Hot plug      Steady amber light                          The FC-AL module is prepared for hot-plug replacement.
              No light                                    The FC-AL module is not prepared for hot plug.
Split mode    Steady green light                          The drive cage is split into two logical portions.
              No light                                    The drive cage is not split.
4Gb/s         Steady green light                          The transfer rate is operating at 4Gb/s.
              No light                                    The transfer rate is operating at 2Gb/s.

Drive Magazine LEDs

NOTE: After powering on, allow approximately two minutes for the disks on the DC4 drive
magazine to spin up before checking the drive magazine LEDs.
Drive magazines contain the following LEDs (Figure 63 (page 58)):

Figure 63 DC4 Drive Magazine LEDs

Table 13 Drive Magazine LED Displays

LED                    Appearance                                                   Indicates
Drive magazine status  Steady green light                                           The drive magazine is functioning properly.
                       Steady amber light                                           A drive magazine error, or one or more drives are bypassed on at least one path.
Disk status            Quick flashing green light (20 percent on, 80 percent off)   The disk has power but is not spun up.
                       Steady green light                                           The disk is spun up and waiting for a command.
                       Flashing green light                                         The disk is executing commands.
                       No light                                                     No disk is present.
                       Steady amber light                                           A disk error, or the disk is bypassed on both paths (loops).
Hot plug               Steady amber light                                           The drive magazine is prepared for hot-plug replacement.
                       Flashing amber light                                         A connection failure exists between the drive magazine and the drive chassis.
                       No light                                                     The drive magazine is not prepared for hot plug.

Controller Node LEDs
Depending on the configuration, storage systems contain between two and eight controller nodes, all
located in the chassis. Controller nodes contain the following LEDs (Figure 64 (page 59)):

Figure 64 Controller Node LEDs

Table 14 Controller Node LED Displays

LED                Appearance                                              Indicates
Disk hot plug      Steady amber light                                      The disk is prepared for hot plug.
                   No light                                                The disk is not prepared for hot plug.
Node hot plug      Steady amber light                                      In combination with the status LED blinking green three times per second, indicates the node is prepared for removal. In combination with the status LED being solid, indicates a fatal failure.
                   No light                                                The node is not prepared for removal.
Node status        Flashing green light (1 blink per second)               The node is fully functional and part of the cluster.
                   Flashing amber light (1 blink per second)               The node has a failed or missing power supply, fan, battery backup unit, or TOD battery, but the node is still operational.
                   Steady green light                                      The node is in the process of joining the cluster.
                   Rapidly flashing green light (three blinks per second)  In combination with the service LED being solid amber, the node is safe to remove.
                   Steady amber light                                      An error within the node.
                   Steady amber light, with service LED amber              A fatal node failure.
Ethernet activity  Steady green light                                      An Ethernet link is established.
                   Flashing green light                                    No Ethernet activity.
                   No light                                                No Ethernet connection.
Ethernet status    Steady amber light                                      1000 Mb/s mode.
                   Steady green light                                      100 Mb/s mode.
                   No light                                                10 Mb/s mode (or disconnected).

Fibre Channel Port LEDs


The Fibre Channel adapter in the controller node also contains Fibre Channel port LEDs (Figure 65
(page 60)).

Figure 65 4-Port Fibre Channel LEDs

Table 15 Fibre Channel Adapter LED Displays

LED            Appearance            Indicates
Port 1, 2, 3, 4  No light              Wake-up failure (dead device).
                 Steady green light    Normal; link up at 2–4 Gb/s.
                 Flashing green light  Link down or not connected.

QLogic iSCSI Port LEDs


The QLogic iSCSI adapter contains two ports with one LED for each port.

Figure 66 iSCSI Adapter Ports and LEDs

Table 16 iSCSI Adapter Port LED Displays

LED        Appearance            Indicates
Port 1, 2  No light              No connection or active link.
           Steady green light    Link is established.
           Flashing green light  Receiving or transmitting activity.

Power Supply LEDs


Power supply units are located at the rear of all drive cages and controller nodes, and have the
following LEDs (Figure 67 (page 62)):

Figure 67 Power Supply LEDs

NOTE: The appearance of the drive chassis and controller node power supplies can vary slightly
according to manufacturer and location.
Table 17 Power Supply LED Displays

LED                  Appearance          Indicates
Power supply status  Steady green light  Power is on.
                     Steady amber light  Power supply error.
                     No light            Broken connection.
AC                   Steady green light  AC is entering from an external source.
                     No light            No AC is entering from an external source (for example, when power is off or when using battery power).

Battery Backup Unit LEDs
Depending on the configuration, storage systems with HP 3PAR cabinets include one or more
battery trays that hold up to four BBUs each. In the event of a power failure, BBUs supply the power
needed to write the cache memory to the drive inside the node.
BBUs contain two batteries, labeled battery A and battery B, and include the following LEDs
(Figure 68 (page 63)):

Figure 68 BBU LEDs (Magnetek)

Table 18 BBU LED Displays

LED           Appearance            Indicates
Battery A, B  Solid green light     Normal operation (charged or charging).
              Flashing green light  The battery is undergoing a test.
              Solid amber light     Battery error.
              No light              The BBU or the power supply is turned off.

Power Distribution Unit Lamps


T-Class storage systems include four PDUs, each with two power bank lamps (Figure 69 (page 63)):

Figure 69 Power Distribution Unit Lamps

A blue illuminated lamp indicates that power is being supplied to a power bank. When the blue
lamp is not illuminated, the power bank is not receiving AC input.

Service Processor LEDs


There are two types of service processors.

Supermicro Service Processor LEDs
Supermicro LEDs are located at the top of the SP.

Figure 70 Supermicro SP LEDs

Table 19 Supermicro SP LEDs

LED              Appearance            Indicates
Power            No light              The SP is off.
                 Steady green light    The SP is on.
Hard disk drive  No light              No hard drive activity.
                 Flashing amber light  Hard drive activity.
NIC Port 1, 2    No light              The port is not connected.
                 Steady green light    The port is connected.
                 Flashing green light  Network activity.
Overheat         No light              SP temperature is normal.
                 Steady red light      The SP is overheating.

Supermicro II Service Processor


Supermicro II LEDs are located at the top of the SP.

Figure 71 Supermicro II SP LEDs

Table 20 Supermicro II SP LEDs

LED              Appearance            Indicates
Power            No light              The SP is off.
                 Steady green light    The SP is on.
Hard disk drive  No light              No hard drive activity.
                 Flashing amber light  Hard drive activity.
NIC Port 1, 2    No light              The port is not connected.
                 Steady green light    The port is connected.
                 Flashing green light  Network activity.
Overheat         No light              SP temperature is normal.
                 Steady red light      The SP is overheating.
                 Flashing red light    The SP has a failed fan.

Securing the Storage System


After verifying that the storage system is functioning properly, secure the system by closing the rear
door and locking it with the keys provided.

WARNING! Hazardous energy is located behind the rear access door of the storage system
cabinet. Use caution when working with the door open.

5 Understanding F-Class Storage System LED Status
Using the F-Class Component LEDs
The F-Class Storage System components have LEDs to indicate whether or not the hardware is
functioning properly and to help identify errors. These LEDs help diagnose basic hardware problems.
You can quickly identify hardware problems by examining the LEDs on all the components and
using the following tables and illustrations in this chapter.
If you detect any problems during inspection of the LEDs, contact your Authorized Service Provider.

Bezel LEDs
LEDs are located at the front of the F-Class Storage System on the bezel for quick assessment of
node health.

Figure 72 Bezel LEDs

Table 21 Bezel LED Displays

LED              Appearance                                              Indicates
Fan 0, 1, 2, 3   Solid green light                                       The fan is operating normally.
                 Solid amber light                                       Fan error.
Node 0, 1, 2, 3  Flashing green light (1 blink per second)               The node is fully functional and part of the cluster.
                 Flashing amber light (1 blink per second)               The node has a failed or missing power supply, fan, or battery, but is still operational.
                 Steady green light                                      The node is in the process of joining the cluster.
                 Rapidly flashing green light (three blinks per second)  In conjunction with the node's hot-plug LED being solid amber (see "Controller Node LEDs" (page 73)), the node is safe to remove.
                 Steady amber light                                      An error within the node.
                 Steady amber light, with hot-plug LED amber             A fatal node failure (see "Controller Node LEDs" (page 73)).

Removing the Bezels and Unlocking the Door


If your HP 3PAR cabinet has locking fascias, you must first remove the fascias to access the system
bezel.

WARNING! Hazardous energy is located behind the rear access door of the storage system
cabinet. Use caution when working with the door open.

• To view the power supply, battery or PDU LEDs, open the rear door by unlatching the three
latches on the door.

NOTE: Many LEDs are visible without removing the bezels. To view the power supply, battery
or PDU LEDs, open the rear door of the cabinet.

Drive Chassis LEDs


The drive chassis LEDs are located at the front and rear of the chassis. The drive chassis houses
the following components, each with their own LEDs:
• One OPs panel
• Two interface cards
• Two power supply/cooling modules

Figure 73 Drive Chassis Components

OPs Panel LEDs


The drive chassis OPs panel has the following LEDs:

Figure 74 Drive Chassis OPs Panel LEDs

Table 22 Drive Chassis OPs Panel LED Displays

Power On
  Steady green light: Used in conjunction with the Power Supply/Cooling/Temperature Fault LED,
  the 2GB Link Speed LED, the Invalid Address LED, and the System Fault LED, as described below.

Power Supply/Cooling/Temperature Fault LED
  Steady amber light:
  • Test state (5 seconds), if:
    ◦ Power On LED is steady green.
    ◦ System Fault LED is steady amber.
    ◦ Invalid Address LED is steady amber.
    ◦ 2GB Link Speed LED is steady green.
  • Power supply or fan fault, if:
    ◦ Power On LED is steady green.
    ◦ System Fault LED is off.
  • Over or under temperature, if:
    ◦ Power On LED is steady green.
    ◦ System Fault LED is flashing amber.
  Flashing amber light:
  • One power supply is removed, if:
    ◦ Power On LED is steady green.
    ◦ System Fault LED is flashing amber.
  • OPs to ESI communications failed, if:
    ◦ Power On LED is steady green.
    ◦ System Fault LED is steady amber.

2GB Link Speed LED
  Steady green light:
  • Test state (5 seconds), if:
    ◦ Power On LED is steady green.
    ◦ Power Supply/Cooling/Temperature Fault LED is steady amber.
    ◦ System Fault LED is steady amber.
    ◦ Invalid Address LED is steady amber.
  • 2GB drive loop is selected, if the Power On LED is steady green.
  Flashing green light: One or more drives are bypassed on at least one loop.
  No light: Overall power failure with only the 5V auxiliary supply present, if the Power On LED is
  green and all other LEDs are off.

Invalid Address LED
  Steady amber light:
  • Test state (5 seconds), if:
    ◦ Power On LED is steady green.
    ◦ Power Supply/Cooling/Temperature Fault LED is steady amber.
    ◦ System Fault LED is steady amber.
    ◦ 2GB Link Speed LED is steady amber.
  Flashing amber light: Invalid address mode ID switch setting, if the Power On LED is steady green.

System Fault LED
  Steady amber light:
  • Test state (5 seconds), if:
    ◦ Power On LED is steady green.
    ◦ Power Supply/Cooling/Temperature Fault LED is steady amber.
    ◦ Invalid Address LED is steady amber.
    ◦ 2GB Link Speed LED is steady amber.
  • Processor module in FC-AL failure, if:
    ◦ Power On LED is steady green.
    ◦ Power Supply/Cooling/Temperature Fault LED is off.
  • Unknown FC-AL module type installed, I2C bus failure, or backplane autostart watchdog failure, if:
    ◦ Power On LED is steady green.
    ◦ Power Supply/Cooling/Temperature Fault LED is off.
  • OPs to ESI communication failure, if:
    ◦ Power On LED is steady green.
    ◦ Power Supply/Cooling/Temperature Fault LED is flashing.
  Flashing amber light:
  • Over or under temperature, if:
    ◦ Power On LED is steady green.
    ◦ Power Supply/Cooling/Temperature Fault LED is steady amber.
  • A power supply is removed, if:
    ◦ Power On LED is steady green.
    ◦ Power Supply/Cooling/Temperature Fault LED is flashing.
  • No drives fitted, if:
    ◦ Power On LED is steady green.
    ◦ Power Supply/Cooling/Temperature Fault LED is off.

Interface Card LEDs
The drive chassis contains two interface cards, FC-AL-A and FC-AL-B, with the following LEDs:

Figure 75 Interface Card LEDs

Table 23 Interface Card LED Displays

LED                               Appearance            Indicates
Host Port 0, 1, 2, 3 Signal Good  Steady green light    The incoming Fibre Channel signal is good.
Loop Status                       Steady green light    All device ports are good at 2Gb/s.
                                  No light              All device ports are good at 1Gb/s.
                                  Flashing green light  Drives are bypassed by the module.
Module Fault                      Steady amber light    The FC-AL module has failed.

Power Supply/Cooling Module LEDs


Drive chassis power supplies/cooling modules have the following LEDs:

Figure 76 Drive Chassis Power Supply/Cooling Module LEDs

Table 24 Power Supply/Cooling Module LED Displays

LED                Appearance          Indicates
Power Supply Good  Steady green light  The power supply is operating normally.
                   Steady amber light  The power supply is not operating correctly.
AC Input Fail      Steady green light  AC input is normal.
                   Steady amber light  AC input failure.
Fan Fault          Steady green light  The fan is operating normally.
                   Steady amber light  There is a fan fault.
DC Output Fail     Steady green light  DC output is normal.
                   Steady amber light  DC output failure.

Drive Magazine LEDs


Drive magazines are located at the front of the storage system and have the following LEDs:

Figure 77 Drive Magazine LEDs

Table 25 Drive Magazine LED Displays

LED       Appearance                                          Indicates
Activity  Steady green light                                  Drive power is present.
          Blinking green light                                There is drive activity.
          Slowly blinking green light (once every 3 seconds)  The drive has spun down.
          No light                                            A drive is not present.
Fault     Steady amber light                                  There is a drive fault.
          No light                                            No fault: the drive is not present, drive power is on, or there is drive activity.
          Slowly blinking amber light                         The drive is bypassed by the FC-AL module or is ready for removal.

Controller Node LEDs


Depending on the configuration, storage systems contain two or four controller nodes, all located
in the storage system chassis. Controller nodes contain the following LEDs (Figure 78 (page 74)):

Figure 78 Controller Node LEDs

Table 26 Controller Node LED Displays

LED                Appearance                                              Indicates
Disk hot plug      Steady amber light                                      The disk is prepared for hot plug.
                   No light                                                The disk is not prepared for hot plug.
Node hot plug      Steady amber light                                      In combination with the status LED blinking green three times per second, indicates the node is prepared for removal. In combination with the status LED being solid, indicates a fatal failure.
                   No light                                                The node is not prepared for removal.
Node status        Flashing green light (1 blink per second)               The node is fully functional and part of the cluster.
                   Flashing amber light (1 blink per second)               The node has a failed or missing power supply, fan, battery backup unit, or TOD battery, but the node is still operational.
                   Steady green light                                      The node is in the process of joining the cluster.
                   Rapidly flashing green light (three blinks per second)  In combination with the service (hot plug) LED being solid amber, the node is safe to remove.
                   Steady amber light                                      An error within the node.
                   Steady amber light, with service (hot plug) LED amber   A fatal node failure.
Ethernet activity  Steady green light                                      An Ethernet link is established.
                   Flashing green light                                    No Ethernet activity.
                   No light                                                No Ethernet connection.
Ethernet status    Steady amber light                                      1000 Mb/s mode.
                   Steady green light                                      100 Mb/s mode.
                   No light                                                10 Mb/s mode (or disconnected).

Fibre Channel Port LEDs


The Fibre Channel adapter in the controller node also contains Fibre Channel port LEDs:

Figure 79 4-Port Fibre Channel LEDs

Table 27 Fibre Channel Adapter LED Displays

LED              Appearance            Indicates
Port 1, 2, 3, 4  No light              Wake-up failure (dead device).
                 Steady green light    Normal; link up at 2–4 Gb/s.
                 Flashing green light  Link down or not connected.

QLogic iSCSI Port LEDs


The QLogic iSCSI adapter contains two ports with one LED for each port.

Figure 80 iSCSI Adapter Port LEDs

Table 28 iSCSI Adapter Port LED Displays

LED        Appearance            Indicates
Port 1, 2  No light              No connection or active link.
           Steady green light    Link is established.
           Flashing green light  Receiving or transmitting activity.

Emulex Fibre Channel Port LEDs


The Emulex Fibre Channel adapter in the controller node also contains Fibre Channel port LEDs.
Two-port Emulex Fibre Channel adapters are used only in F-Class Storage Systems (Figure 81
(page 76)).

Figure 81 Emulex 2-Port Fibre Channel LEDs

Table 29 Fibre Channel Port Status LED Displays (Emulex 2-Port Adapter)

Yellow LED          Green LED   Port Status
Internal test only  Off         Wake-up failure (dead device).
Internal test only  On          Normal; link up at 1–4 Gb/s.
Internal test only  Slow blink  Normal; link down.

Controller Node Power Supply LEDs
F-Class Storage System controller node power supply units are located on both sides of the controller
nodes. The battery is integral to the controller node power supply. The LEDs are located on the
rear of the power supply units:

Figure 82 Controller Node Power Supply LEDs

Table 30 Power Supply LED Displays

LED                  Appearance            Indicates
Power supply status  Steady green light    Power is on.
                     Steady amber light    Power supply error.
                     No light              Broken connection to the power source.
AC                   Steady green light    AC is entering from an external source.
                     No light              No AC is entering from an external source (for example, when power is off or when using battery power).
Battery status       Steady green light    The battery is charged.
                     Blinking green light  The battery is undergoing a test.
                     Steady amber light    There is a battery error.

Power Distribution Unit Lamps


F-Class Storage Systems include four PDUs, each with two power bank lamps (Figure 83
(page 77)):

Figure 83 Power Distribution Unit Lamps

A blue illuminated lamp denotes that power is being supplied to a power bank. When the blue
lamp is not illuminated, the power bank is not receiving AC input.

Service Processor LEDs


There are two types of service processors.

Supermicro Service Processor


The Supermicro SP LEDs are located at the top of the SP (Figure 84 (page 78)).

Figure 84 Supermicro SP LEDs

Table 31 Supermicro SP LED Displays

LED              Appearance            Indicates
Power            No light              The SP is off.
                 Steady green light    The SP is on.
Hard disk drive  No light              No hard drive activity.
                 Flashing amber light  Hard drive activity.
NIC Port 1, 2    No light              The port is not connected.
                 Steady green light    The port is connected.
                 Flashing green light  Network activity.
Overheat         No light              SP temperature is normal.
                 Steady red light      The SP is overheating.

Supermicro II Service Processor


Supermicro II SP LEDs are located at the top of the SP (Figure 85 (page 79)).

Figure 85 Supermicro II SP LEDs

Table 32 Supermicro II SP LED Displays

LED              Appearance            Indicates
Power            No light              The SP is off.
                 Steady green light    The SP is on.
Hard disk drive  No light              No hard drive activity.
                 Flashing amber light  Hard drive activity.
NIC Port 1, 2    No light              The port is not connected.
                 Steady green light    The port is connected.
                 Flashing green light  Network activity.
Overheat         No light              SP temperature is normal.
                 Steady red light      The SP is overheating.
                 Flashing red light    The SP has a failed fan.

Securing the Storage System


After verifying that the storage system is functioning properly, secure the system by closing the rear
door and locking it with the keys provided.

WARNING! Hazardous energy is located behind the rear access door of the cabinet. Use caution
when working with the door open.

6 Understanding HP 3PAR StoreServ 10000 Storage LED
Status
The storage system components have LEDs to indicate whether or not the hardware is functioning
properly and to help identify errors. The LEDs help diagnose basic hardware problems. You can
quickly identify hardware problems by examining the LEDs on all of the components and using the
tables and illustrations in this chapter.
If you detect any problems during inspection of the LEDs, contact your Authorized Service Provider.

Drive Cage LEDs


The DC4 drive chassis holds one DC4 drive cage housing two drive cage FC-AL modules and a
maximum of ten drive magazines.

Figure 86 DC4 Drive Cage

DC4 Drive Cage FC-AL Module LEDs
The DC4 drive cage FC-AL modules have the following LEDs:

Figure 87 FC-AL LED and Port Locations

Table 33 Drive Cage DC4 FC-AL Module LEDs

LED           Appearance                                  Indicates
RX            Steady green light                          A small form-factor pluggable optical transceiver (SFP) is present and a valid signal is being received from the node.
              No light                                    No connection to the node, or no SFP is installed.
TX            Steady green light                          An SFP is present and transmitting.
              No light                                    No SFP is present, or the SFP transmitter has failed.
FC-AL status  Steady green light                          The drive cage is functioning properly but is not communicating with any nodes.
              Flashing green light (1 blink per second)   The drive cage is connected and communicating with the system manager of a node in the cluster.
              Steady amber light                          Normal initial indication for two seconds upon power-up. Otherwise, an FC-AL module error or other cage error. If both FC-AL modules show a steady amber light, the temperature of a disk drive in the drive cage has exceeded its high-level threshold, or a power supply has failed.
              Flashing amber light (1 blink per second)   The drive cage has some type of error, such as a failed or missing power supply, but is communicating with a node.
              Rapid toggle between amber and green light  A cage firmware upgrade initiated by the upgradecage CLI command is in progress. A firmware upgrade normally takes less than two minutes to complete.
Hot plug      Steady amber light                          The FC-AL module is prepared for hot-plug replacement.
              No light                                    The FC-AL module is not prepared for hot plug.
4Gb/s         Steady green light                          The transfer rate is operating at 4Gb/s.
              No light                                    The transfer rate is not operating at 4Gb/s.

Drive Magazine LEDs
NOTE: After powering on, allow approximately two minutes for the disks on the drive magazine
to spin up before checking the drive magazine LEDs.
Drive magazines have the following LEDs:

Figure 88 Drive Magazine LEDs

Table 34 Drive Magazine LEDs

LED                    Appearance                                                   Indicates
Drive magazine status  Steady green light                                           The drive magazine is functioning properly.
                       Steady amber light                                           A drive magazine error, or one or more drives are bypassed on at least one path.
Disk status            Quick flashing green light (20 percent on, 80 percent off)   The disk has power but is not spun up.
                       Steady green light                                           The disk is spun up and waiting for a command.
                       Flashing green light                                         The disk is executing commands.
                       No light                                                     No disk is present.
                       Steady amber light                                           A disk error, or the disk is bypassed on both paths (loops).
Hot plug               Steady amber light                                           The drive magazine is prepared for hot-plug replacement.
                       Flashing amber light                                         A connection failure exists between the drive magazine and the drive chassis.
                       No light                                                     The drive magazine is not prepared for hot plug.

Controller Node LEDs
Depending on configuration, storage systems contain between two and eight controller nodes, all
located in the chassis. Controller nodes have the following LEDs:

NOTE: You can issue the locatenode command to flash blue all service LEDs associated with a
controller node, including the power supply, battery module, and fan module LEDs.
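
For example, a minimal illustrative session (the node ID is hypothetical; see the HP 3PAR OS
Command Line Interface Reference for the exact syntax on your release):

cli% locatenode 2

This flashes blue the service LEDs of controller node 2 and its associated power supplies, battery
modules, and fan modules.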
Table 35 Controller Node LEDs

LED           Appearance                                             Indicates
Node Disk     No light                                               Normal operation.
Node Service  Steady blue light                                      • In combination with the status LED blinking green three times per second, indicates the controller node is prepared for removal.
                                                                     • In combination with the status LED being solid, indicates a fatal node failure.
                                                                     • In combination with the node status LED blinking green or amber once per second, indicates the servicenode start command was issued to illuminate the node service LEDs.
              No light                                               Normal operation.
              Flashing blue light                                    The locatenode command was issued to locate the node, or the FRU is not fully seated.
Node Status   Flashing green light                                   A quick flashing light means the node is booting. A slow flashing light means the node is part of the cluster.
              Steady green light                                     The node is booting but has not joined the cluster.
              Rapidly flashing green light (three times per second)  The node is booting, or, in combination with a blue service LED, the node is safe to remove.
              Flashing amber light                                   The node has joined the cluster, but a component associated with the node is degraded. A slow flashing light means the node is part of the cluster.
              Steady amber light                                     An error within the node.
HBA Service   Steady blue light                                      The servicenode start command was issued to illuminate the service LED, or the HBA has failed and needs to be replaced. If the LED is off, the HBA is normal.

Controller Node Status Panel LEDs


The controller node LED status panel is located at the front of the system. The 10400 includes four
LEDs and the 10800 has eight LEDs. Each LED on the node-chassis panel should be identical to
the individual controller node status LED, as shown in the Node Status LED section of the Controller
Node LEDs table.

Figure 89 LED Status Panel on a 10400

Fan Module LEDs
The 10400 controller node chassis can hold up to eight fan modules that each hold two fans, and
the 10800 can hold up to 16. Fan modules have the following LEDs:

Table 36 Fan Module LEDs

LED      Appearance     Indicates
Status   Green          Normal operation; no faults.
         Amber          Fan speed is too low, or the fan has failed, is off, or is not working properly. With a blue service LED, the fan module failed and was not able to recover within 60 seconds; replace the fan module.
Service  Solid blue     The servicenode start command has been issued for the fan module. With an amber status LED, the fan module has failed and needs servicing.
         Blinking blue  The locatenode command has been issued for the fan module.
         Off            The node fan module no longer needs servicing.

Fibre Channel Adapter Port LEDs
The Fibre Channel adapter in the controller node also contains Fibre Channel port LEDs:

Figure 90 Fibre Channel LEDs

Table 37 Fibre Channel Adapter LEDs

LED            Appearance             Indicates
Port 1–4       No light               Wake-up failure (dead device), or power is not applied.
(Port speed)   Amber light off        Not connected.
               Amber (3 fast blinks)  Connected at 4 Gb/s.
               Amber (4 fast blinks)  Connected at 8 Gb/s.
(Link status)  Steady green light     Normal/connected; link up.
               Flashing green light   Link down or not connected.

CNA Port LEDs
The Converged Network Adapter (CNA) includes two ports with corresponding LEDs:

Figure 91 CNA Port LEDs

Table 38 CNA Port LEDs

LED             Appearance            Indicates
Link            No light              Link down.
                Steady green light    Link up.
ACT (Activity)  No light              No activity.
                Flashing green light  Activity.

Ethernet LEDs
The controller node has two built-in Ethernet ports and each port contains two LEDs:

Figure 92 Ethernet LEDs

Table 39 Ethernet LEDs

LED                     Appearance            Indicates
ACT/LNK (top, E0, E1)   Steady green light    Valid link partner.
                        Flashing green light  Data activity.
                        No light              ACT/LNK is off.
Speed (bottom, E0, E1)  Steady yellow light   1000 Mb/s mode.
                        Steady green light    100 Mb/s mode.
                        No light              10 Mb/s mode.

Power Supply LEDs
Power supplies are located at the rear of the storage system. The drive chassis and controller node
power supplies have the following LEDs:

Drive Chassis Power Supply LEDs


Drive chassis power supplies are located at the rear of the drive chassis.

Figure 93 Drive Chassis Power Supply LEDs

Table 40 Drive Chassis Power Supply LEDs

LED                  Appearance          Indicates
Power Supply Status  Steady green light  Power is on.
                     Steady amber light  Power supply error.
                     No light            Broken connection.
AC                   Steady green light  AC is entering from an external source.
                     No light            Power supply output is off.

Controller Node Power Supply LEDs
The controller node power supplies are located behind the cable management tray in the node
chassis.

Figure 94 Controller Node Power Supply LEDs

The power supply service LED is located on the dividers between the power supplies.

Figure 95 Controller Node Power Supply Service LED

Table 41 Controller Node Power Supply LEDs

LED           Appearance          Indicates
Power Status  Steady green light  Power is on.
              Steady amber light  Power supply error.
              No light            Broken connection.
AC Status     Steady green light  AC is entering from an external source.
              No light            Power supply output is off.
Fault         Steady amber light  The power supply has failed.
Service       Steady blue light   The servicenode start or locatenode command was issued to illuminate the service LED, or the power supply has failed and needs to be replaced.

Battery Module LEDs
Depending on configuration, storage systems include one or two battery compartments that hold
up to four battery modules each. Each node has one battery module. Each battery module has
three LEDs:

Figure 96 Battery Module LEDs

Table 42 Battery Module LEDs

LED          Appearance  Indicates
Charging     Green       The battery module is being charged.
             Amber       Battery module fault.
             Off         The battery module is not in the node or is not connected.
Discharging  Green       Battery module output is on and supplying power to the node.
             Off         The battery module is not providing power to the node.
Service      Blue        The battery module needs servicing, or the servicenode start or locatenode command was issued.
             Off         The battery module no longer needs servicing.

Service Processor LEDs
There are two types of SPs.

Supermicro II Service Processor


The Supermicro II LEDs are located at the front of the SP:

Figure 97 Supermicro II SP LEDs

Table 43 Supermicro II SP LEDs

LED              Appearance            Indicates
Power            No light              The SP is off.
                 Steady green light    The SP is on.
Hard disk drive  No light              No hard drive activity.
                 Flashing amber light  Hard drive activity.
NIC Port 1/2     No light              The port is not connected.
                 Steady green light    The port is connected.
                 Flashing green light  Network activity.
Overheat         No light              SP temperature is normal.
                 Steady red light      The SP is overheating.
                 Flashing red light    The SP has a failed fan.

HP 3PAR Service Processor


The HP 3PAR SP (ProLiant DL320e) LEDs are located at the front and rear of the SP:

Figure 98 Front panel LEDs

Table 44 Front panel LEDs

Item  LED                             Appearance      Description
1     UID LED/button                  Blue            Active.
                                      Flashing blue   The system is being managed remotely.
                                      Off             Deactivated.
2     Power On/Standby button and     Green           The system is on.
      system power                    Flashing green  Waiting for power.
                                      Amber           The system is on standby; power is still on.
                                      Off             The power cord is not attached, or the power supply has failed.
3     Health                          Green           The system is on and system health is normal.
                                      Flashing amber  System health is degraded.
                                      Flashing red    System health is critical.
                                      Off             System power is off.
4     NIC status                      Green           Linked to the network.
                                      Flashing green  Network activity.
                                      Off             No network link.

Figure 99 Rear panel LEDs

Table 45 Rear panel LEDs

Item  LED             Appearance               Description
1     NIC link        Green                    Link.
                      Off                      No link.
2     NIC status      Green or flashing green  Activity.
                      Off                      No activity.
3     UID LED/button  Blue                     Active.
                      Flashing blue            The system is being managed remotely.
                      Off                      Deactivated.
4     Power supply    Green                    Normal.
                      Off                      One or more of the following conditions:
                                               • Power is unavailable.
                                               • The power supply has failed.
                                               • The power supply is in standby mode.
                                               • Power supply error.

NOTE: The Power supply LED may not be applicable to your system (hot-plug HP CS power supplies
only).

7 Power Off/On the Storage System
The following describes the procedures for powering off and on the storage system.

Powering Off the Storage System


When it is necessary to power off the storage system, use the following steps to safely remove
power from the storage system and the SP.

NOTE: PDUs in any expansion cabinets connected to the storage system may also need to be shut
off. Use the locatesys command to identify all connected cabinets; locatesys blinks all node
and drive cage LEDs. Note this information now, as it is needed for the last step in this procedure.
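
A minimal illustrative session (run from a CLI session on the storage system; available options vary
by HP 3PAR OS release):

cli% locatesys

All node and drive cage LEDs on every cabinet belonging to the system begin to blink, identifying
the cabinets whose PDU circuit breakers must be switched off in the final step.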
To power off the storage system:
1. Connect the maintenance PC to the SP using the serial connection. Refer to the “Connecting
the Maintenance PC” chapter in the appropriate HP 3PAR Storage System Maintenance
Manual.
2. Start an spmaint session.

NOTE: If using SPOCC, on the SPOCC homepage, click Support > SPMAINT on the Web
> InServ Product Maintenance > Halt an InServ cluster/node and skip to step 7.

3. In the 3PAR Service Processor Menu, select option 4, InServ Product Maintenance.
4. Select option 6, Halt an InServ cluster/node.
5. Select the desired system and confirm all prompts to halt the system.
6. Press x to return to the 3PAR Service Processor Menu.

CAUTION: Failure to wait until all controller nodes are in a halted state, as defined in step 7,
could cause the system to view the shutdown as uncontrolled and place the system in a
checkld state upon power-up. This can seriously impact host access to data.

7. Allow 2-3 minutes for the system to halt, then verify the node status LEDs are flashing green
(at a rate of three blinks per second) and the node hot-plug LEDs are solid amber (for the
StoreServ 10000 Storage, the node service LED is blue) which indicate the nodes have halted.
For more information see the LED chapters in this manual.
8. Select option 1, SP Control/Status.
9. Select option 3, Halt SP.
10. When prompted, press Y to confirm halting the SP.
11. Wait approximately 30 seconds and verify that the LED on the front of the SP is no longer
illuminated.
12. Remove AC to the storage system by turning off all of the PDU circuit breakers in the cabinet.
13. If necessary, replace the bezel and close and lock the rear door.

Powering On the Storage System


NOTE: The system takes approximately 10-15 minutes to become fully operational, provided it
was gracefully shut down. If the system was powered off abruptly, powering on can take
considerably longer.
If the TOC is more than a day old, it is considered invalid, and the setsysmgr tocgen command
must be issued to select the appropriate TOC. Refer to the HP 3PAR OS CLI Reference for more
information.
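
As an illustrative sketch only (exact syntax and arguments vary by release; consult the HP 3PAR OS
CLI Reference before issuing this command):

cli% setsysmgr tocgen

This directs the system manager to start with the appropriate table of contents (TOC) rather than
the stale one.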

To power on the storage system:


1. Turn on AC power to the cabinet(s) by turning on all of the PDU circuit breakers.
2. Verify that the blue LED on the front of the SP is illuminated.
3. Verify that all drive chassis LEDs are solid green and all controller node status LEDs are blinking
green once per second.

8 Alerts
Alerts are triggered by events that require intervention by the system administrator. This chapter
provides a list of alerts identified by message code, the message(s), and what action should be
taken for each alert. To learn more about alerts, see the HP 3PAR OS Concepts Guide.
For information about system alerts, go to http://www.hp.com/support/hpgt/3par and select
your server platform.
To view the alerts, use the showalert command. Alert message codes have seven digits in the
following schema:
• AAABBBB
• AAA is a 3-digit "major code"
• BBBB is a 4-digit sub-code
• 0x precedes the code to indicate hexadecimal notation.

NOTE: Message codes ending in de indicate a degraded state alert. Message codes ending in
fa indicate a failed state alert.
Refer to the HP 3PAR OS Command Line Interface Reference for complete information on the
display options on the event logs.
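
As a worked illustration (the code itself is hypothetical), the message code 0x0990004 decomposes
as follows:

0x0990004
  0x   = hexadecimal notation prefix
  099  = 3-digit major code
  0004 = 4-digit sub-code

A related code of 0x09900de would indicate a degraded state alert, and 0x09900fa a failed state
alert, for the same major code.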
Table 46 Alert Severity Levels

Severity       Description
Fatal          A fatal event has occurred. It is no longer possible to take remedial action.
Critical       The event is critical and requires immediate action.
Major          The event requires immediate action.
Minor          An event has occurred that requires action, but the situation is not yet serious.
Degraded       An aspect of performance or availability may have become degraded. You must decide if action is necessary.
Informational  The event is informational. No action is required other than acknowledging or removing the alert.

9 Troubleshooting
The HP 3PAR OS CLI checkhealth command checks and displays the status of storage system
hardware and software components. For example, the checkhealth command can check for
unresolved system alerts, display issues with hardware components or display information about
virtual volumes that are not optimal.
By default, the checkhealth command checks most storage system components, but you can also
check the status of specific components. For a complete list of storage system components analyzed
by the checkhealth command, see "Troubleshooting Storage System Components" (page 100).

The checkhealth Command


COMMAND
checkhealth
DESCRIPTION
The checkhealth command checks and displays the status of system hardware and software
components.
SYNTAX
checkhealth [<options> | <component>...]
AUTHORITY
Super, Service
OPTIONS
-list
Lists all components that checkhealth can analyze.
-quiet
Does not display which component is currently being checked.
-detail
Displays detailed information regarding the status of the system.
SPECIFIERS
<component>
Indicates the component to check. Use the -list option to get the list of components.

Using the checkhealth Command


Use the checkhealth command without any specifiers to check the health of all the components
analyzed by the checkhealth command.
The following example displays both summary and detailed information about the hardware and
software components:

cli% checkhealth -detail
Checking alert
Checking cage
Checking date
Checking ld
Checking license
Checking network
Checking node
Checking pd
Checking port
Checking rc
Checking snmp
Checking task
Checking vlun
Checking vv
Component -----------Description----------- Qty
Alert New alerts 4
Date Date is not the same on all nodes 1
LD LDs not mapped to a volume 2
License Golden License. 1
vlun Hosts not connected to a port 5

The following information is included when you use the -detail option:

Component ----Identifier---- -----------Description-------
Alert sw_port:1:3:1 Port 1:3:1 Degraded (Target Mode Port Went Offline)
Alert sw_port:0:3:1 Port 0:3:1 Degraded (Target Mode Port Went Offline)
Alert sw_sysmgr Total available FC raw space has reached threshold of 800G (2G remaining out of 544G total)
Alert sw_sysmgr Total FC raw space usage at 307G (above 50% of total 544G)
Date -- Date is not the same on all nodes
LD ld:name.usr.0 LD is not mapped to a volume
LD ld:name.usr.1 LD is not mapped to a volume
vlun host:group01 Host wwn:2000000087041F72 is not connected to a port
vlun host:group02 Host wwn:2000000087041F71 is not connected to a port
vlun host:group03 Host iscsi_name:2000000087041F71 is not connected to a port
vlun host:group04 Host wwn:210100E08B24C750 is not connected to a port
vlun host:Host_name Host wwn:210000E08B000000 is not connected to a port

If there are no faults or exception conditions, the checkhealth command indicates that the system
is healthy.

cli% checkhealth
Checking alert
Checking cage
...
Checking vlun
Checking vv
System is healthy

With the checkhealth <component> specifier you can check the status of one or more specific
storage system components. For example:

cli% checkhealth node pd


Checking node
Checking pd
The following components are healthy: node, pd

Troubleshooting Storage System Components


Use the checkhealth -list command to list all the components that can be analyzed by the
checkhealth command.
For detailed troubleshooting information about specific components, examples, and suggested
actions for correcting issues with components, see the corresponding component section listed in
Table 47 (page 101).
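
A minimal illustrative session (the exact output format may differ by release; the component names
correspond to those in Table 47):

cli% checkhealth -list
alert cage date ld license network node pd port rc snmp task vlun vv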

Table 47 Component Functions

Component  Function
Alert      Displays any unresolved alerts.
Cage       Displays drive cage conditions that are not optimal.
Date       Displays if nodes have different dates.
LD         Displays LDs that are not optimal.
License    Displays license violations.
Network    Displays Ethernet issues.
Node       Displays node conditions that are not optimal.
PD         Displays PDs with states or conditions that are not optimal.
Port       Displays port connection issues.
RC         Displays Remote Copy issues.
SNMP       Displays issues with SNMP.
Task       Displays failed tasks.
VLUN       Displays inactive VLUNs and those that have not been reported by the host agent.
VV         Displays VVs that are not optimal.

The following sections provide details about troubleshooting specific components.

Alert
Displays any unresolved alerts. Shows any alerts that would be seen by showalert -n.

Format of Possible Alert Exception Messages

Alert <component> <alert_text>

Alert Example

Component -Identifier- --------Description--------------------


Alert hw_cage:1 Cage 1 Degraded (Loop Offline)
Alert sw_cli 11 authentication failures in 120 secs

Alert Suggested Action


View the full Alert output using the IMC (GUI) or the showalert -d CLI command. Consult the
Alerts chapter in this manual for more information about the alert.

Cage
Displays drive cage conditions that are not optimal. Reports exceptions if any of the following do
not have normal states:
• ports
• drive magazine states (DC1, DC2, & DC4)
• Small Form-factor Pluggable (SFP) voltages (DC2 and DC4)
• SFP signal levels (RX power low and TX failure)
• power supplies
• cage firmware (is not current)
Reports if a servicecage operation has been started and has not ended.

Format of Possible Cage Exception Messages

Cage cage:<cageid> "Missing A loop" (or "Missing B loop")
Cage cage:<cageid> "Interface Card <STATE>, SFP <SFPSTATE>" (is unqualified, is disabled, Receiver Power Low: Check FC Cable, Transmit Power Low: Check FC Cable, has RX loss, has TX fault)
Cage cage:<cageid>,mag:<magpos> "Magazine is <MAGSTATE>"
Cage cage:<cageid> "Power supply <X> fan is <FANSTATE>"
Cage cage:<cageid> "Power supply <X> is <PSSTATE>" (Degraded, Failed, Not_Present)
Cage cage:<cageid> "Power supply <X> AC state is <PSSTATE>"
Cage cage:<cageid> "Cage is in 'servicing' mode (Hot-Plug LED may be illuminated)"
Cage cage:<cageid> "Firmware is not current"

Cage Example 1

Component -------------Description-------------- Qty


Cage Cages missing A loop 1
Cage SFPs with low receiver power 1

Component -Identifier- --------Description------------------------


Cage cage:4 Missing A loop
Cage cage:4 Interface Card 0, SFP 0: Receiver Power Low: Check FC Cable

Cage Suggested Action 1


Check the connection/path to the SFP in the cage and the level of signal the SFP is receiving. An
RX power reading below 100 uW signals the RX Power Low condition; typical readings are
between 300 and 400 uW. Helpful CLI commands are showcage -d and showcage -sfp
-ddm.
At least two connections are expected for drive cages and this exception will be flagged if that is
not the case.

cli% showcage -d cage4


Id Name  LoopA Pos.A LoopB Pos.B Drives Temp  RevA RevB Model Side
 4 cage4   ---     0 3:2:1     0      8 28-36 2.37 2.37   DC4  n/a

-----------Cage detail info for cage4 ---------

Fibre Channel Info PortA0 PortB0 PortA1 PortB1


Link_Speed 0Gbps -- -- 4Gbps

----------------------------------SFP Info-----------------------------------
FCAL SFP -State- --Manufacturer-- MaxSpeed(Gbps) TXDisable TXFault RXLoss DDM
0 0 OK FINISAR CORP. 4.1 No No Yes Yes
1 1 OK FINISAR CORP. 4.1 No No No Yes

Interface Board Info FCAL0 FCAL1


Link A RXLEDs Off Off
Link A TXLEDs Green Off
Link B RXLEDs Off Green
Link B TXLEDs Off Green
LED(Loop_Split) Off Off
LEDS(system,hotplug) Green,Off Green,Off

-----------Midplane Info-----------
Firmware_status Current
Product_Rev 2.37
State Normal Op
Loop_Split 0
VendorId,ProductId 3PARdata,DC4
Unique_ID 1062030000098E00
...

-------------Drive Info------------- ----LoopA----- ----LoopB-----


Drive NodeWWN LED Temp(C) ALPA LoopState ALPA LoopState
0:0 2000001d38c0c613 Green 33 0xe1 Loop fail 0xe1 OK
0:1 2000001862953510 Green 35 0xe0 Loop fail 0xe0 OK
0:2 2000001862953303 Green 35 0xdc Loop fail 0xdc OK
0:3 2000001862953888 Green 31 0xda Loop fail 0xda OK

cli% showcage -sfp cage4


Cage FCAL SFP -State- --Manufacturer-- MaxSpeed(Gbps) TXDisable TXFault RXLoss DDM
4 0 0 OK FINISAR CORP. 4.1 No No Yes Yes
4 1 1 OK FINISAR CORP. 4.1 No No No Yes

cli% showcage -sfp -ddm cage4


---------Cage 4 Fcal 0 SFP 0 DDM----------
-Warning- --Alarm--
--Type-- Units Reading Low High Low High
Temp C 33 -20 90 -25 95
Voltage mV 3147 2900 3700 2700 3900
TX Bias mA 7 2 14 1 17
TX Power uW 394 79 631 67 631
RX Power uW 0 15 794 10* 1259

---------Cage 4 Fcal 1 SFP 1 DDM----------


-Warning- --Alarm--
--Type-- Units Reading Low High Low High
Temp C 31 -20 90 -25 95
Voltage mV 3140 2900 3700 2700 3900
TX Bias mA 8 2 14 1 17
TX Power uW 404 79 631 67 631
RX Power uW 402 15 794 10 1259

Cage Example 2

Component -------------Description-------------- Qty


Cage Degraded or failed cage power supplies 2
Cage Degraded or failed cage AC power 1

Component -Identifier- ------------Description------------


Cage cage:1 Power supply 0 is Failed
Cage cage:1 Power supply 0's AC state is Failed
Cage cage:1 Power supply 2 is Off

Cage Suggested Action 2


A cage power supply or power supply fan has failed, is missing input AC power, or its switch is
turned off. The showcage -d cageX and showalert commands should provide more detail.

cli% showcage -d cage1


Id Name LoopA Pos.A LoopB Pos.B Drives Temp RevA RevB Model Side
1 cage1 0:0:2 0 1:0:2 0 24 27-39 2.37 2.37 DC2 n/a

-----------Cage detail info for cage1 ---------
Interface Board Info FCAL0 FCAL1
Link A RXLEDs Green Off
Link A TXLEDs Green Off
Link B RXLEDs Off Green
Link B TXLEDs Off Green
LED(Loop_Split) Off Off
LEDS(system,hotplug) Amber,Off Amber,Off

-----------Midplane Info-----------
Firmware_status Current
Product_Rev 2.37
State Normal Op
Loop_Split 0
VendorId,ProductId 3PARdata,DC2
Unique_ID 10320300000AD000

Power Supply Info State Fan State AC Model


ps0 Failed OK Failed POI <AC input is missing
ps1 OK OK OK POI
ps2 Off OK OK POI <PS switch is turned off
ps3 OK OK OK POI

Cage Example 3

Component -Identifier- --------------Description----------------


Cage cage:1 Cage has a hotplug enabled interface card

Cage Suggested Action 3


When a servicecage operation is started, it puts the targeted cage into servicing mode,
illuminates the hot plug LED on the FC-AL module (DC1, DC2, DC4), and causes I/O to be routed
through the other path. When the service action is finished, the servicecage endfc command
should be issued to return the cage to normal status. This checkhealth exception is reported
if the FC-AL module's hot plug LED is illuminated or if the cage is in servicing mode. If a maintenance
activity is currently occurring on the drive cage, this condition may be ignored.

NOTE: The primary path is indicated by an asterisk (*) in the Ports columns of showpd output.

cli% showcage -d cage1


Id Name LoopA Pos.A LoopB Pos.B Drives Temp RevA RevB Model Side
1 cage1 0:0:2 0 1:0:2 0 24 28-40 2.37 2.37 DC2 n/a

-----------Cage detail info for cage1 ---------

Interface Board Info FCAL0 FCAL1


Link A RXLEDs Green Off
Link A TXLEDs Green Off
Link B RXLEDs Off Green
Link B TXLEDs Off Green
LED(Loop_Split) Off Off
LEDS(system,hotplug) Green,Off Green,Amber

-----------Midplane Info-----------
Firmware_status Current
Product_Rev 2.37
State Normal Op
Loop_Split 0
VendorId,ProductId 3PARdata,DC2
Unique_ID 10320300000AD000

cli% showpd -s
Id CagePos Type -State-- -----Detailed_State------
20 1:0:0 FC degraded disabled_B_port,servicing
21 1:0:1 FC degraded disabled_B_port,servicing
22 1:0:2 FC degraded disabled_B_port,servicing
23 1:0:3 FC degraded disabled_B_port,servicing

cli% showpd -p -cg 1


---Size(MB)---- ----Ports----
Id CagePos Type Speed(K) State Total Free A B
20 1:0:0 FC 10 degraded 139520 119808 0:0:2* 1:0:2-
21 1:0:1 FC 10 degraded 139520 122112 0:0:2* 1:0:2-
22 1:0:2 FC 10 degraded 139520 119552 0:0:2* 1:0:2-
23 1:0:3 FC 10 degraded 139520 122368 0:0:2* 1:0:2-

Cage Example 4

Component ---------Description--------- Qty


Cage Cages not on current firmware 1

Component -Identifier- ------Description------


Cage cage:3 Firmware is not current

Cage Suggested Action 4


Check the drive cage firmware revision using the showcage and showcage -d cageX
commands. The showfirmwaredb command indicates what the current firmware level should be
for the specific drive cage type.

NOTE: DC1 and DC3 cages have firmware in the FC-AL modules; DC2 and DC4 cages have
firmware on the cage midplane. The upgradecage command may be used to upgrade the firmware.

cli% showcage
Id Name LoopA Pos.A LoopB Pos.B Drives Temp RevA RevB Model Side
2 cage2 2:0:3 0 3:0:3 0 24 29-43 2.37 2.37 DC2 n/a
3 cage3 2:0:4 0 3:0:4 0 32 29-41 2.36 2.36 DC2 n/a

cli% showcage -d cage3


Id Name LoopA Pos.A LoopB Pos.B Drives Temp RevA RevB Model Side
3 cage3 2:0:4 0 3:0:4 0 32 29-41 2.36 2.36 DC2 n/a

-----------Cage detail info for cage3 ---------


.
.
.
-----------Midplane Info-----------
Firmware_status Old
Product_Rev 2.36
State Normal Op
Loop_Split 0
VendorId,ProductId 3PARdata,DC2
Unique_ID 10320300000AD100

cli% showfirmwaredb
Vendor Prod_rev Dev_Id Fw_status Cage_type Firmware_File
...
3PARDATA [2.37] DC2 Current DC2 /opt...dc2/lbod_fw.bin-2.37

Cage Example 5

Component -Identifier- ------------Description------------


Cage cage:4 Interface Card 0, SFP 0 is unqualified

Cage Suggested Action 5


In this example, a 2Gb/sec SFP was installed in a 4Gb/sec drive cage (DC4), and the 2Gb SFP
is not qualified for use in this drive cage. For cage problems, the following CLI commands are
helpful: showcage -d, showcage -sfp, showcage -sfp -ddm, showcage -sfp -d, and
showpd -state.

cli% showcage -d cage4


Id Name LoopA Pos.A LoopB Pos.B Drives Temp RevA RevB Model Side
4 cage4 2:2:1 0 3:2:1 0 8 30-37 2.37 2.37 DC4 n/a

-----------Cage detail info for cage4 ---------


Fibre Channel Info PortA0 PortB0 PortA1 PortB1
Link_Speed 2Gbps -- -- 4Gbps

----------------------------------SFP Info-----------------------------------
FCAL SFP -State- --Manufacturer-- MaxSpeed(Gbps) TXDisable TXFault RXLoss DDM
0 0 OK SIGMA-LINKS 2.1 No No No Yes
1 1 OK FINISAR CORP. 4.1 No No No Yes

Interface Board Info FCAL0 FCAL1


Link A RXLEDs Green Off
Link A TXLEDs Green Off
Link B RXLEDs Off Green
Link B TXLEDs Off Green
LED(Loop_Split) Off Off
LEDS(system,hotplug) Amber,Off Green,Off
...

cli% showcage -sfp -d cage4


--------Cage 4 FCAL 0 SFP 0--------
Cage ID : 4
Fcal ID : 0
SFP ID : 0
State : OK
Manufacturer : SIGMA-LINKS
Part Number : SL5114A-2208
Serial Number : U260651461
Revision : 1.4
MaxSpeed(Gbps) : 2.1
Qualified : No <<< Unqualified SFP
TX Disable : No
TX Fault : No
RX Loss : No
RX Power Low : No
DDM Support : Yes

--------Cage 4 FCAL 1 SFP 1--------


Cage ID : 4
Fcal ID : 1
SFP ID : 1
State : OK
Manufacturer : FINISAR CORP.
Part Number : FTLF8524P2BNV
Serial Number : PF52GRF
Revision : A
MaxSpeed(Gbps) : 4.1

Qualified : Yes
TX Disable : No
TX Fault : No
RX Loss : No
RX Power Low : No
DDM Support : Yes

Date
Checks the date and time on all nodes and reports an error if they are not the same.

Format of Possible Date Exception Messages

Date -- "Date is not the same on all nodes"

Date Example

Component -Identifier- -----------Description-----------


Date -- Date is not the same on all nodes

Date Suggested Action


The time on the nodes should stay synchronized whether there is an NTP server or not. Use
showdate to see if a node is out of sync, and shownet and shownet -d to see the network
and NTP information.

cli% showdate
Node Date
0 2010-09-08 10:56:41 PDT (America/Los_Angeles)
1 2010-09-08 10:56:39 PDT (America/Los_Angeles)

cli% shownet
IP Address     Netmask/PrefixLen Nodes Active Speed Duplex AutoNeg Status
192.168.56.209 255.255.255.0     0123  0      100   Full   Yes     Active

Default route: 192.168.56.1


NTP server : 192.168.56.109
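
If the NTP server entry is missing or wrong, a hedged sketch of pointing the system at an NTP server follows; the address 192.168.56.109 is taken from the example above, and the setnet ntp syntax should be verified in the HP 3PAR CLI Reference:

cli% setnet ntp 192.168.56.109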

LD
Displays Logical Disks (LDs) that are not optimal.
• Checks for preserved LDs
• Checks that current and created availability are the same
• Checks for owner and backup
• Checks that preserved data space (pdsld's) is the same size as the total data cache
• Checks size and number of logging LDs



Format of Possible LD Exception Messages

LD ld:<ldname> "LD is not mapped to a volume"


LD ld:<ldname> "LD is in write-through mode"
LD ld:<ldname> "LD has <X> preserved RAID sets and <Y> preserved chunklets"
LD ld:<ldname> "LD has reduced availability. Current: <cavail>, Configured: <avail>"

LD ld:<ldname> "LD does not have a backup"


LD ld:<ldname> "LD does not have owner and backup"
LD ld:<ldname> "Logical Disk is owned by <owner>, but preferred owner is <powner>"
LD ld:<ldname> "Logical Disk is backed by <backup>, but preferred backup is <pbackup>"
LD ld:<ldname> "A logging LD is smaller than 20G in size"
LD ld:<ldname> "Detailed State:<ldstate>" (degraded or failed)
LD -- "Number of logging LD's does not match number of nodes in the cluster"
LD -- "Preserved data storage space does not equal total node's Data memory"

LD Example 1

Component -------Description-------- Qty


LD LDs not mapped to a volume 10

Component -Identifier-- --------Description---------


LD ld:Ten.usr.0 LD is not mapped to a volume

LD Suggested Action 1
Examine the identified LD(s) using CLI commands such as showld, showld -d, showldmap,
showvvmap, etc.
LDs are normally mapped to (used by) VVs, but they can become disassociated from a VV if a VV
is deleted without the underlying LDs being deleted, or by an aborted tune operation. Normally,
you would remove the unmapped LD to return its chunklets to the free pool.

cli% showld Ten.usr.0


Id Name      RAID -Detailed_State- Own     SizeMB UsedMB Use Lgct LgId WThru MapV
88 Ten.usr.0    0 normal           0/1/2/3   8704      0 V      0  ---     N    N

cli% showldmap Ten.usr.0


Ld space not used by any vv
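
Once the LD is confirmed to be unused, a hedged example of removing it to return its chunklets to the free pool (removeld prompts for confirmation before deleting):

cli% removeld Ten.usr.0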

LD Example 2

Component -------Description-------- Qty


LD LDs in write through mode 3

Component -Identifier-- --------Description---------


LD ld:Ten.usr.12 LD is in write-through mode

LD Suggested Action 2
Examine the identified LD(s) using CLI commands such as showld, showld -d, showldch, and
showpd for any failed or missing disks. Write-through mode (WThru) indicates that host I/O operations
must be written through to the disk before the host I/O command is acknowledged. This is
usually due to a node-down condition, node batteries that are not working, or disk
redundancy that is not optimal.

cli% showld Ten*


Id Name       RAID -Detailed_State- Own     SizeMB UsedMB Use Lgct LgId WThru MapV
91 Ten.usr.3     0 normal           1/0/3/2  13824      0 V      0  ---     N    N
92 Ten.usr.12    0 normal           2/3/0/1  28672      0 V      0  ---     Y    N

cli% showldch Ten.usr.12


Ldch Row Set PdPos Pdid Pdch State Usage Media Sp From To
0 0 0 3:3:0 108 6 normal ld valid N --- ---
11 0 11 --- 104 74 normal ld valid N --- ---

cli% showpd 104


-Size(MB)-- ----Ports----
Id CagePos Type Speed(K) State Total Free A B
104 4:9:0? FC 15 failed 428800 0 ----- -----

LD Example 3

Component ---------Description--------- Qty


LD LDs with reduced availability 1

Component --Identifier-- ------------Description---------------


LD ld:R1.usr.0 LD has reduced availability. Current: ch, Configured: cage

LD Suggested Action 3
LDs are created with certain high-availability characteristics, such as ha-cage. If chunklets in an
LD are moved to locations where the current availability (CAvail) is not at least as good as the
configured availability (Avail), this condition is reported. Chunklets may have been moved manually
with movech, by a tune operation, or during failure conditions such as node, path, or cage failures.
The HA levels from highest to lowest are port, cage, mag, and ch (disk).
Examine the identified LD(s) using CLI commands such as showld, showld -d, showldch, and
showpd for any failed or missing disks. In the example below, the LD should have cage-level
availability but currently has only chunklet (disk) level availability (that is, its chunklets are on the
same disk).

cli% showld -d R1.usr.0


Id Name CPG RAID Own SizeMB RSizeMB RowSz StepKB SetSz Refcnt Avail CAvail
32 R1.usr.0 --- 1 0/1/3/2 256 512 1 256 2 0 cage ch

cli% showldch R1.usr.0


Ldch Row Set PdPos Pdid Pdch State Usage Media Sp From To
0 0 0 0:1:0 4 0 normal ld valid N --- ---
1 0 0 0:1:0 4 55 normal ld valid N --- ---
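
A hypothetical sketch of relocating one of the co-located chunklets so that cage-level availability can be restored; the chunklet address 4:55 is taken from the showldch output above, and when no destination is given, movech is expected to select one automatically (verify the movech syntax and options in the HP 3PAR CLI Reference):

cli% movech 4:55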



LD Example 4

Component -Identifier-- -----Description-------------


LD -- Preserved data storage space does not equal total node's Data memory

LD Suggested Action 4
Preserved data LDs (pdsld's) are created during system initialization (OOTB) and after some
hardware upgrades (via admithw). The total size of the pdsld's should match the total size of all
data-cache in the storage system (see below). This message will appear if a node is offline because
the comparison of LD size to data cache size does not match; this message can be ignored unless
all nodes are online. If all nodes are online and the error condition persists, determine the cause
of the failure. Use the admithw command to correct the condition.

cli% shownode
Control Data Cache
Node --Name--- -State- Master InCluster ---LED--- Mem(MB) Mem(MB) Available(%)
0 1001335-0 OK Yes Yes GreenBlnk 2048 4096 100
1 1001335-1 OK No Yes GreenBlnk 2048 4096 100

cli% showld pdsld*


Id Name RAID -Detailed_State- Own SizeMB UsedMB Use Lgct LgId WThru MapV
19 pdsld0.0 1 normal 0/1 256 0 P,F 0 --- Y N
20 pdsld0.1 1 normal 0/1 7680 0 P 0 --- Y N
21 pdsld0.2 1 normal 0/1 256 0 P 0 --- Y N
----------------------------------------------------------------------------
3 8192 0
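
If all nodes are online and the mismatch persists, a hedged example of the correction described above (note that admithw also checks and admits new hardware, so review its output carefully):

cli% admithw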

License
Displays license violations. Returns information if a license is temporary, and if it has expired.

Format of Possible License Exception Messages

License <feature_name> "License has expired"

License Example

Component -Identifier- --------Description-------------


License -- System Tuner License has expired

License Suggested Action


If desired, request a new/updated license from your Sales Engineer.

Network
Displays Ethernet issues for the Administrative and Remote Copy over IP (RCIP) networks that have
been logged in the previous 24-hour sampling window. Reports if the storage system has fewer
than two nodes with working admin Ethernet connections.
• Checks whether the number of collisions is > 5% of total packets in the previous day's log.
• Checks for Ethernet errors and transmit (TX) or receive (RX) errors in the previous day's log.

Format of Possible Network Exception Messages

Network -- "IP address change has not been completed"


Network "Node<node>:<type>" "Errors detected on network"
Network "Node<node>:<type>" "There is less than one day of network history for this
node"
Network -- "No nodes have working admin network connections"
Network -- "Node <node> has no admin network link detected"
Network -- "Nodes <nodelist> have no admin network link detected"
Network -- "checkhealth was unable to determine admin link status

Network Example 1

Network -- "IP address change has not been completed"

Network Suggested Action 1


The setnet command was issued to change a network parameter, such as the IP address,
but the action has not been completed. Use setnet finish to complete the change, or
setnet abort to cancel it. Use shownet to examine the current condition.

cli% shownet
IP Address Netmask/PrefixLen Nodes Active Speed Duplex AutoNeg Status
192.168.56.209 255.255.255.0 0123 0 100 Full Yes Changing
192.168.56.233 255.255.255.0 0123 0 100 Full Yes Unverified
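
A hedged sketch of completing the pending change (assuming setnet finish and setnet abort are the applicable subcommands; verify in the CLI Reference):

cli% setnet finish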

Network Example 2

Component ---Identifier---- -----Description----------


Network Node0:Admin Errors detected on network

Network Suggested Action 2


Network errors have been detected on the specified node and network interface. Commands such
as shownet and shownet -d are useful for troubleshooting network problems. These commands
display the current network counters, whereas checkhealth reports errors from the most recent
logging sample.

NOTE: The error counters shown by shownet and shownet -d cannot be cleared except by
rebooting a controller node. Because checkhealth reads network counters from a history log,
it stops reporting the issue if the errors do not increase in the next log entry.

cli% shownet -d
IP Address: 192.168.56.209 Netmask 255.255.255.0
Assigned to nodes: 0123
Connected through node 0
Status: Active

Admin interface on node 0


MAC Address: 00:02:AC:25:04:03
RX Packets: 1225109 TX Packets: 550205
RX Bytes: 1089073679 TX Bytes: 568149943
RX Errors: 0 TX Errors: 0
RX Dropped: 0 TX Dropped: 0
RX FIFO Errors: 0 TX FIFO Errors: 0
RX Frame Errors: 60 TX Collisions: 0

RX Multicast: 0 TX Carrier Errors: 0
RX Compressed: 0 TX Compressed: 0

Node
Displays node conditions that are not optimal.
• Checks if node batteries have been tested in the last 30 days.
• Checks for offline nodes.
• Checks for power supply and battery problems.

Format of Possible Node Exception Messages

Node node:<nodeID> "Node is not online"


Node node:<nodeID> "Power supply <psID> detailed state is <status>
Node node:<nodeID> "Power supply <psID> AC state is <acStatus>"
Node node:<nodeID> "Power supply <psID> DC state is <dcStatus>"
Node node:<nodeID> "Power supply <psID> battery is <batStatus>"
Node node:<nodeID> "Node <nodeID> battery is <batStatus>"
Node node:<priNodeID> "<bat> has not been tested within the last 30 days"
Node node:<nodeID> "Node <nodeID> battery is expired"
Node node:<nodeID> "Power supply <psID> is expired"
Node node:<nodeID> "Fan is <fanID> is <status>"
Node node:<nodeID> "Power supply <psID> fan module <fanID> is <status>"
Node node:<nodeID> "Fan module <fanID> is <status>
Node node:<nodeID> "Detailed State <state>" (degraded or failed)

Suggested Node Action, General


For node error conditions, examine the node and node-component states with commands such as
shownode, shownode -s, shownode -d, showbattery, and showsys -d.

Node Example 1

Component -Identifier- ---------------Description----------------


Node node:0 Power supply 1 detailed state is DC Failed
Node node:0 Power supply 1 DC state is Failed
Node node:1 Power supply 0 detailed state is AC Failed
Node node:1 Power supply 0 AC state is Failed
Node node:1 Power supply 0 DC state is Failed

Node Suggested Action 1


Examine the states of the power supplies with commands such as shownode, shownode -s,
shownode -ps, etc. Turn on or replace the failed power supply.

NOTE: In the example below, the battery state is considered Degraded because the power supply
is Failed; this is normal.

cli% shownode
Control Data Cache
Node --Name--- -State-- Master InCluster ---LED--- Mem(MB) Mem(MB) Available(%)
0 1001356-0 Degraded Yes Yes AmberBlnk 2048 8192 100
1 1001356-1 Degraded No Yes AmberBlnk 2048 8192 100

cli% shownode -s
Node -State-- -Detailed_State-

0 Degraded PS 1 Failed
1 Degraded PS 0 Failed

cli% shownode -ps


Node PS -Serial- -PSState- FanState ACState DCState -BatState- ChrgLvl(%)
0 0 FFFFFFFF OK OK OK OK OK 100
0 1 FFFFFFFF Failed -- OK Failed Degraded 100
1 0 FFFFFFFF Failed -- Failed Failed Degraded 100
1 1 FFFFFFFF OK OK OK OK OK 100

Node Example 2

Component -Identifier- ---------Description------------


Node node:3 Power supply 1 battery is Failed

Node Suggested Action 2


Examine the state of the battery and power supplies with commands such as shownode, shownode
-s, shownode -ps, showbattery (and showbattery with -d, -s, -log), etc. Turn on, fix,
or replace the battery backup unit.

NOTE: The condition of the Degraded Power Supply (PS) is due to the battery failing.

cli% shownode
Control Data Cache
Node --Name--- -State-- Master InCluster ---LED--- Mem(MB) Mem(MB) Available(%)
2 1001356-2 OK No Yes GreenBlnk 2048 8192 100
3 1001356-3 Degraded No Yes AmberBlnk 2048 8192 100

cli% shownode -s
Node -State-- -Detailed_State-
2 OK OK
3 Degraded PS 1 Degraded

cli% shownode -ps


Node PS -Serial- -PSState- FanState ACState DCState -BatState- ChrgLvl(%)
2 0 FFFFFFFF OK OK OK OK OK 100
2 1 FFFFFFFF OK OK OK OK OK 100
3 0 FFFFFFFF OK OK OK OK OK 100
3 1 FFFFFFFF Degraded OK OK OK Failed 0

cli% showbattery
Node PS Bat Serial -State-- ChrgLvl(%) -ExpDate-- Expired Testing
3 0 0 100A300B OK 100 07/01/2011 No No
3 1 0 12345310 Failed 0 04/07/2011 No No

Node Example 3

Component -Identifier- --------------Description----------------


Node node:3 Node:3, Power Supply:1, Battery:0 has not been tested within
the last 30 days

Node Suggested Action 3


The indicated battery has not been tested in the past 30 days. A node backup battery is tested
every 14 days under normal conditions, but a battery that is missing, expired, or failed will not
be tested. In addition, the other battery connected to the same node will not be tested, because
testing it would cause loss of battery backup to the node, and the system will not allow that. An
untested battery has an Unknown status in the showbattery -s output. Use commands
such as showbattery, showbattery -s, showbattery -d, and showbattery -log.

cli% showbattery -s
Node PS Bat -State-- -Detailed_State-
0 0 0 OK normal
0 1 0 Degraded Unknown

Examine the date of the last successful test of that battery. Assuming the current date is
2009-10-14, the last battery test on Node 0, PS 1, Bat 0 was 2009-09-10, which is more
than 30 days in the past.

cli% showbattery -log
Node PS Bat Test Result Dur(mins) ---------Time----------
0 0 0 0 Passed 1 2009-10-14 14:34:50 PDT
0 0 0 1 Passed 1 2009-10-28 14:36:57 PDT
0 1 0 0 Passed 1 2009-08-27 06:17:44 PDT
0 1 0 1 Passed 1 2009-09-10 06:19:34 PDT

cli% showbattery
Node PS Bat Serial -State-- ChrgLvl(%) -ExpDate-- Expired Testing
0 0 0 83205243 OK 100 04/07/2011 No No
0 1 0 83202356 Degraded 100 04/07/2011 No No

PD
Displays Physical Disks (PDs) with states or conditions that are not optimal.
• Checks for failed and degraded PDs
• Checks for an imbalance of PD ports, for example, if Port A is used on more disks than Port B
• Checks for an "Unknown" sparing algorithm, for example, when it has not been set
• Checks for disks experiencing a high number of IOPS
• Reports if a servicemag operation is outstanding (servicemag status)
• Reports if there are PDs that do not have entries in the firmware DB file

Format of Possible PD Exception Messages

PD disk:<pdid> "Degraded States: <showpd -s -degraded">


PD disk:<pdid> "Failed States: <showpd -s -failed">
PD -- "There is an imbalance of active PD ports"
PD -- "Sparing algorithm is not set"
PD disk:<pdid> "Disk is experiencing a high level of I/O per second: <iops>"
PD -- There is at least one active servicemag operation in progress

The following checks are performed when the -svc option is used, or on 7400/7200 hardware:

PD File: <filename> "Folder not found on all Nodes in <folder>"


PD File: <filename> "Folder not found on some Nodes in <folder>"
PD File: <filename> "File not found on all Nodes in <folder>"
PD File: <filename> "File not found on some Nodes in <folder>"
PD Disk:<pdID> "<pdmodel> PD for cage type <cagetype> in cage position <pos> is missing
from firmware database"

PD Example 1

Component -------------------Description------------------- Qty


PD PDs that are degraded or failed 40

Component -Identifier- ---------------Description-----------------


PD disk:48 Detailed State: missing_B_port,loop_failure
PD disk:49 Detailed State: missing_B_port,loop_failure
...
PD disk:107 Detailed State: failed,notready,missing_A_port

PD Suggested Action 1
Both degraded and failed disks appear in this report. When an FC path to a drive cage is not
working, all disks in the cage have a state of Degraded due to the nonredundant condition.
Use commands such as showpd, showpd -s, showcage, showcage -d, showport -sfp,
etc., to diagnose further.

cli% showpd -degraded -failed


----Size(MB)---- ----Ports----
Id CagePos Type Speed(K) State Total Free A B
48 3:0:0 FC 10 degraded 139520 115200 2:0:4* -----
49 3:0:1 FC 10 degraded 139520 121344 2:0:4* -----

107 4:9:3 FC 15 failed 428800 0 ----- 3:2:1*

cli% showpd -s -degraded -failed


Id CagePos Type -State-- -----------------Detailed_State--------------
48 3:0:0 FC degraded missing_B_port,loop_failure
49 3:0:1 FC degraded missing_B_port,loop_failure

107 4:9:3 FC failed prolonged_not_ready,missing_A_port,relocating

cli% showcage -d cage3


Id Name LoopA Pos.A LoopB Pos.B Drives Temp RevA RevB Model Side
3 cage3 2:0:4 0 --- 0 32 28-39 2.37 2.37 DC2 n/a

-----------Cage detail info for cage3 ---------


Fibre Channel Info PortA0 PortB0 PortA1 PortB1
Link_Speed 2Gbps -- -- 0Gbps

----------------------------------SFP Info-----------------------------------
FCAL SFP -State- --Manufacturer-- MaxSpeed(Gbps) TXDisable TXFault RXLoss DDM
0 0 OK SIGMA-LINKS 2.1 No No No Yes
1 1 OK SIGMA-LINKS 2.1 No No Yes Yes

Interface Board Info FCAL0 FCAL1


Link A RXLEDs Green Off
Link A TXLEDs Green Off
Link B RXLEDs Off Off
Link B TXLEDs Off Green
LED(Loop_Split) Off Off
LEDS(system,hotplug) Green,Off Green,Off

-------------Drive Info------------- ----LoopA----- ----LoopB-----


Drive NodeWWN LED Temp(C) ALPA LoopState ALPA LoopState
0:0 20000014c3b3eab9 Green 34 0xe1 OK 0xe1 Loop fail
0:1 20000014c3b3e708 Green 36 0xe0 OK 0xe0 Loop fail



PD Example 2

Component --Identifier-- --------------Description---------------


PD -- There is an imbalance of active pd ports

PD Suggested Action 2
The primary and secondary I/O paths for disks (PDs) are balanced between nodes. The primary
path is indicated in the showpd -path output and by an asterisk in the showpd output. An
imbalance of active ports is usually caused by a non-functioning path or loop to a cage, or because
an odd number of drives is installed or detected. To diagnose further, use CLI commands such as
showpd, showpd -path, showcage, and showcage -d.

cli% showpd
----Size(MB)----- ----Ports----
Id CagePos Type Speed(K) State Total Free A B
0 0:0:0 FC 10 normal 139520 119040 0:0:1* 1:0:1
1 0:0:1 FC 10 normal 139520 121600 0:0:1 1:0:1*
2 0:0:2 FC 10 normal 139520 119040 0:0:1* 1:0:1
3 0:0:3 FC 10 normal 139520 119552 0:0:1 1:0:1*
...
46 2:9:2 FC 10 normal 139520 112384 2:0:3* 3:0:3
47 2:9:3 FC 10 normal 139520 118528 2:0:3 3:0:3*
48 3:0:0 FC 10 degraded 139520 115200 2:0:4* -----
49 3:0:1 FC 10 degraded 139520 121344 2:0:4* -----
50 3:0:2 FC 10 degraded 139520 115200 2:0:4* -----
51 3:0:3 FC 10 degraded 139520 121344 2:0:4* -----

cli% showpd -path


-----------Paths-----------
Id CagePos Type -State-- A B Order
0 0:0:0 FC normal 0:0:1 1:0:1 0/1
1 0:0:1 FC normal 0:0:1 1:0:1 1/0
2 0:0:2 FC normal 0:0:1 1:0:1 0/1
3 0:0:3 FC normal 0:0:1 1:0:1 1/0
...
46 2:9:2 FC normal 2:0:3 3:0:3 2/3
47 2:9:3 FC normal 2:0:3 3:0:3 3/2
48 3:0:0 FC degraded 2:0:4 3:0:4\missing 2/-
49 3:0:1 FC degraded 2:0:4 3:0:4\missing 2/-
50 3:0:2 FC degraded 2:0:4 3:0:4\missing 2/-
51 3:0:3 FC degraded 2:0:4 3:0:4\missing 2/-

cli% showcage -d cage3


Id Name LoopA Pos.A LoopB Pos.B Drives Temp RevA RevB Model Side
3 cage3 2:0:4 0 --- 0 32 29-41 2.37 2.37 DC2 n/a

-----------Cage detail info for cage3 ---------

Fibre Channel Info PortA0 PortB0 PortA1 PortB1


Link_Speed 2Gbps -- -- 0Gbps

----------------------------------SFP Info-----------------------------------
FCAL SFP -State- --Manufacturer-- MaxSpeed(Gbps) TXDisable TXFault RXLoss DDM
0 0 OK SIGMA-LINKS 2.1 No No No Yes
1 1 OK SIGMA-LINKS 2.1 No No Yes Yes

Interface Board Info FCAL0 FCAL1


Link A RXLEDs Green Off
Link A TXLEDs Green Off

Link B RXLEDs Off Off
Link B TXLEDs Off Green
LED(Loop_Split) Off Off
LEDS(system,hotplug) Green,Off Green,Off
...
-------------Drive Info------------- ----LoopA----- ----LoopB-----
Drive NodeWWN LED Temp(C) ALPA LoopState ALPA LoopState
0:0 20000014c3b3eab9 Green 35 0xe1 OK 0xe1 Loop fail
0:1 20000014c3b3e708 Green 38 0xe0 OK 0xe0 Loop fail
0:2 20000014c3b3ed17 Green 35 0xdc OK 0xdc Loop fail
0:3 20000014c3b3dabd Green 30 0xda OK 0xda Loop fail

PD Example 3

Component -------------------Description------------------- Qty


PD Disks experiencing a high level of I/O per second 93

Component --Identifier-- ---------Description----------


PD disk:100 Disk is experiencing a high level of I/O per second: 789.0

PD Suggested Action 3
This check samples the I/O-per-second (IOPS) information in statpd to see whether any disks are
being overworked, then samples again after five seconds. This does not necessarily indicate
a problem, but it could negatively affect system performance. The IOPS thresholds currently set for
this condition are as follows:
• NL disks > 75
• FC 10K RPM disks > 150
• FC 15K RPM disks > 200
• SSD > 1500
Operations such as servicemag and tunevv can cause this condition. If the IOPS rate is very
high and/or a large number of disks are experiencing very heavy I/O, examine the system further
using statistical monitoring commands and utilities such as statpd, the IMC (GUI), and System
Reporter. The following example reports disks whose total I/O is 150/sec or more.

cli% statpd -filt curs,t,iops,150


14:51:49 11/03/09 r/w I/O per second KBytes per sec ... Idle %
ID Port Cur Avg Max Cur Avg Max ... Cur Avg
100 3:2:1 t 658 664 666 172563 174007 174618 ... 6 6

PD Example 4

Component --Identifier-- -------Description----------


PD disk:3 Detailed State: old_firmware

PD Suggested Action 4
The identified disk does not have firmware that the storage system considers current. When a disk
is replaced, the servicemag operation should upgrade the disk's firmware. When disks are
installed or added to a system, the admithw command can perform the firmware upgrade. Check
the state of the disk using CLI commands such as showpd -s, showpd -i, and
showfirmwaredb.

cli% showpd -s 3
Id CagePos Type -State-- -Detailed_State-
3 0:4:0 FC degraded old_firmware

cli% showpd -i 3
Id CagePos State ----Node_WWN---- --MFR-- ---Model--- -Serial- -FW_Rev-
3 0:4:0 degraded 200000186242DB35 SEAGATE ST3146356FC 3QN0290H XRHJ

cli% showfirmwaredb
Vendor Prod_rev Dev_Id Fw_status Cage_type
...
SEAGATE [XRHK] ST3146356FC Current DC2.DC3.DC4
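
If the firmware must be brought current without replacing the disk, a hedged example using upgradepd on the disk identified above (upgradepd is assumed to be available at this HP 3PAR OS level; verify in the CLI Reference):

cli% upgradepd 3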

PD Example 5

Component --Identifier-- -------Description----------


PD -- Sparing Algorithm is not set

PD Suggested Action 5
Check the system's sparing algorithm value using the CLI command showsys -param. The value
is normally set during the initial installation (OOTB). If it must be set later, use the command setsys
SparingAlgorithm; valid values are Default, Minimal, Maximal, and Custom. After setting the
parameter, use the admithw command to create and distribute the spare chunklets.

cli% showsys -param
System parameters from configured settings

----Parameter----- --Value--
RawSpaceAlertFC : 0
RawSpaceAlertNL : 0
RemoteSyslog : 0
RemoteSyslogHost : 0.0.0.0
SparingAlgorithm : Unknown
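
A hedged example of setting the parameter and then redistributing spares, using one of the valid values named above:

cli% setsys SparingAlgorithm Default
cli% admithw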

PD Example 6

Component --Identifier-- -------Description----------


PD Disk:32 ST3400755FC PD for cage type DC3 in cage position 2:0:0 is missing from
the firmware database

PD Suggested Action 6
Check the release notes for mandatory updates and patches to the installed HP 3PAR OS version,
and apply them as needed to support this PD in this cage.

Port
Displays port connection issues.
• Checks for ports in unacceptable states
• Checks for mismatches in type and mode, such as hosts connected to initiator ports, or host
and Remote Copy over Fibre Channel (RCFC) ports configured on the same FC adapter
• Checks for degraded SFPs and SFPs with low power; this check is performed only if the FC
adapter type uses SFPs

Format of Possible Port Exception Messages

Port port:<nsp> "Port mode is in <mode> state"


Port port:<nsp> "is offline"
Port port:<nsp> "Mismatched mode and type"
Port port:<nsp> "Port is <state>"
Port port:<nsp> "SFP is missing"
Port port:<nsp> "SFP is <state>" (degraded or failed)
Port port:<nsp> "SFP is disabled"
Port port:<nsp> "Receiver Power Low: Check FC Cable"
Port port:<nsp> "Transmit Power Low: Check FC Cable"
Port port:<nsp> "SFP has TX fault"

Port Suggested Actions, General


Some specific examples are displayed below, but in general, use the following CLI commands to
check for these conditions:
• For port SFP errors, use commands such as showport, showport -sfp, showport -sfp
-ddm, showcage, showcage -sfp, and showcage -sfp -ddm.

Port Example 1

Component ------Description------ Qty


Port Degraded or failed SFPs 1

Component -Identifier- --Description--


Port port:0:0:2 SFP is Degraded

Port Suggested Action 1


An SFP in a Node-Port is reporting a degraded condition. This is most often caused by the SFP
receiver circuit detecting a low signal level (RX Power Low), and that is usually caused by a poor
or contaminated FC connection, such as a cable. An alert should identify the condition, such as
the following:

Port 0:0:2, SFP Degraded (Receiver Power Low: Check FC Cable)

Check SFP statistics using CLI commands such as showport -sfp, showport -sfp -ddm,
showcage, etc.

cli% showport -sfp


N:S:P -State-- -Manufacturer- MaxSpeed(Gbps) TXDisable TXFault RXLoss DDM
0:0:1 OK FINISAR_CORP. 2.1 No No No Yes
0:0:2 Degraded FINISAR_CORP. 2.1 No No No Yes



In the following example, an RX power level of 361 microwatts (uW) for Port 0:0:1 is a good
reading, while 98 uW for Port 0:0:2 is a weak reading (< 100 uW). Normal RX power level
readings are 200-400 uW.

cli% showport -sfp -ddm


--------------Port 0:0:1 DDM--------------
-Warning- --Alarm--
--Type-- Units Reading Low High Low High
Temp C 41 -20 90 -25 95
Voltage mV 3217 2900 3700 2700 3900
TX Bias mA 7 2 14 1 17
TX Power uW 330 79 631 67 631
RX Power uW 361 15 794 10 1259

--------------Port 0:0:2 DDM--------------


-Warning- --Alarm--
--Type-- Units Reading Low High Low High
Temp C 40 -20 90 -25 95
Voltage mV 3216 2900 3700 2700 3900
TX Bias mA 7 2 14 1 17
TX Power uW 335 79 631 67 631
RX Power uW 98 15 794 10 1259

cli% showcage
Id Name LoopA Pos.A LoopB Pos.B Drives Temp RevA RevB Model Side
0 cage0 0:0:1 0 1:0:1 0 15 33-38 08 08 DC3 n/a
1 cage1 --- 0 1:0:2 0 15 30-38 08 08 DC3 n/a

cli% showpd -s
Id CagePos Type -State-- -Detailed_State-
1 0:2:0 FC normal normal
...
13 1:1:0 NL degraded missing_A_port
14 1:2:0 FC degraded missing_A_port

cli% showpd -path


---------Paths---------
Id CagePos Type -State-- A B Order
1 0:2:0 FC normal 0:0:1 1:0:1 0/1
...
13 1:1:0 NL degraded 0:0:2\missing 1:0:2 1/-
14 1:2:0 FC degraded 0:0:2\missing 1:0:2 1/-

Port Example 2

Component -Description- Qty


Port Missing SFPs 1

Component -Identifier- -Description--


Port port:0:3:1 SFP is missing

Port Suggested Action 2


FC node-ports that normally contain SFPs will report an error if the SFP has been removed. The
condition can be checked using the showport -sfp command. In this example, the SFP in 0:3:1
has been removed from the adapter:

cli% showport -sfp


N:S:P -State- -Manufacturer- MaxSpeed(Gbps) TXDisable TXFault RXLoss DDM
0:0:1 OK FINISAR_CORP. 2.1 No No No Yes
0:0:2 OK FINISAR_CORP. 2.1 No No No Yes

0:3:1 - - - - - - -
0:3:2 OK FINISAR_CORP. 2.1 No No No Yes

Port Example 3

Component -Description- Qty


Port Disabled SFPs 1

Component -Identifier- --Description--


Port port:3:5:1 SFP is disabled

Port Suggested Action 3


A node-port SFP will be disabled if the port has been placed offline using the controlport
offline command. Also see Example 4.

cli% showport -sfp


N:S:P -State- -Manufacturer- MaxSpeed(Gbps) TXDisable TXFault RXLoss DDM
3:5:1 OK FINISAR_CORP. 4.1 Yes No No Yes
3:5:2 OK FINISAR_CORP. 4.1 No No No Yes

Port Example 4

Component -Description- Qty


Port Offline ports 1

Component -Identifier- --Description--


Port port:3:5:1 is offline

Port Suggested Action 4


Check the state of the port with showport. If a port is offline, it was deliberately placed in that
state with the controlport offline command. Offline ports may be restored using
controlport rst.

cli% showport
N:S:P Mode State ----Node_WWN---- -Port_WWN/HW_Addr- Type
3:5:1 target offline 2FF70002AC00054C 23510002AC00054C free
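
A hedged example of restoring the port identified above:

cli% controlport rst 3:5:1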

Port Example 5

Component ------------Description------------ Qty


Port Ports with mismatched mode and type 1

Component -Identifier- ------Description-------


Port port:2:0:3 Mismatched mode and type



Port Suggested Action 5
This output indicates that the port's mode (initiator or target) is not correct for the connection
type (disk, host, iscsi, or rcfc). Useful CLI commands are showport, showport -c,
showport -par, showport -rcfc, showcage, etc.

cli% showport
N:S:P Mode State ----Node_WWN---- -Port_WWN/HW_Addr- Type
2:0:1 initiator ready 2FF70002AC000591 22010002AC000591 disk
2:0:2 initiator ready 2FF70002AC000591 22020002AC000591 disk
2:0:3 target ready 2FF70002AC000591 22030002AC000591 disk
2:0:4 target loss_sync 2FF70002AC000591 22040002AC000591 free

Component -Identifier- ------Description-------


Port port:0:1:1 Mismatched mode and type

cli% showport
N:S:P Mode State ----Node_WWN---- -Port_WWN/HW_Addr- Type
0:1:1 initiator ready 2FF70002AC000190 20110002AC000190 rcfc
0:1:2 initiator loss_sync 2FF70002AC000190 20120002AC000190 free
0:1:3 initiator loss_sync 2FF70002AC000190 20130002AC000190 free
0:1:4 initiator loss_sync 2FF70002AC000190 20140002AC000190 free

RC
Displays Remote Copy issues.
• Checks Remote Copy targets
• Checks Remote Copy links
• Checks Remote Copy Groups and VVs

Format of Possible RC Exception Messages

RC rc:<name> "All links for target <name> are down but target not yet marked failed."

RC rc:<name> "Target <name> has failed."


RC rc:<name> "Link <name> of target <target> is down."
RC rc:<name> "Group <name> is not started to target <target>."
RC rc:<vvname> "VV <vvname> of group <name> is stale on target <target>."
RC rc:<vvname> "VV <vvname> of group <name> is not synced on target <target>."

RC Example

Component -Description- Qty


RC Stale volumes 1

Component --Identifier--- ---------Description---------------


RC rc:yush_tpvv.rc VV yush_tpvv.rc of group yush_group.r1127
is stale on target S400_Async_Primary.

RC Suggested Action
Perform Remote Copy troubleshooting, such as checking the physical links between the storage
systems and using CLI commands such as showrcopy, showrcopy -d, showport -rcip,
showport -rcfc, shownet -d, controlport rcip ping, etc.
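
For RCIP links, a hedged connectivity sketch; both the partner address 192.168.10.2 and the local RCIP port 0:3:1 are hypothetical placeholders:

cli% controlport rcip ping 192.168.10.2 0:3:1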

SNMP
Displays issues with SNMP. Runs the showsnmpmgr command and reports an error if the CLI
returns one.

Format of Possible SNMP Exception Messages

SNMP -- <err>

SNMP Example

Component -Identifier- ----------Description---------------


SNMP -- Could not obtain snmp agent handle. Could be
misconfigured.

SNMP Suggested Action


Any error message that can be produced by showsnmpmgr may be displayed.
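
A hedged manual check (and, if needed, registration) of SNMP managers; the manager address 192.168.56.100 is a hypothetical placeholder, and addsnmpmgr availability should be verified in the CLI Reference:

cli% showsnmpmgr
cli% addsnmpmgr 192.168.56.100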

Service Processor
Checks the status of the Ethernet connection between the SP and the nodes. This check can be
run only from the SP because it performs a short Ethernet transfer check between the SP and the
storage system.

Format of Possible SP Exception Messages

Network SP->InServ "SP ethernet Stat <stat> has increased too quickly check SP network
settings"

SP Example

Component -Identifier- --------Description------------------------


SP ethernet "State rx_errs has increased too quickly check SP network
settings"

SP Suggested Action
The <stat> variable can be any of the following: rx_errs, rx_dropped, rx_fifo, rx_frame,
tx_errs, tx_dropped, tx_fifo.
This message is usually caused by customer network issues, but may be caused by conflicting or
mismatching network settings between the SP, customer switch(es), and the storage system. Check
the SP network interface settings using SPMAINT or SPOCC. Check the storage system settings
using commands such as shownet and shownet -d.

Task
Displays failed tasks. Checks for any tasks that have failed within the past 24 hours, which is the
default time frame for the showtask -failed command.

Format of Possible Task Exception Messages

Task Task:<Taskid> "Failed Task"



Task Example

Component --Identifier--- -------Description--------


Task Task:6313 Failed Task

For this example, checkhealth also showed an Alert; this task failed because the command was
entered with a syntax error:

Alert sw_task:6313 Task 6313 (type 'background_command', name 'upgradecage -a -f') has failed (Task Failed). Please see task status for details.

Task Suggested Action


The CLI command showtask -d <task_id> displays detailed information about the task. To
clean up the alerts and the alert reporting of checkhealth, you can delete the failed-task alerts
if they are of no further use. They will not be auto-resolved, and they will remain until they are
manually removed with the IMC (GUI) or with the CLI commands removealert or setalert
ack. To display system-initiated tasks, use showtask -all.

cli% showtask -d 6313


Id Type Name Status Phase Step
6313 background_command upgradecage -a -f failed --- ---

Detailed status is as follows:

2010-10-22 10:35:36 PDT Created task.


2010-10-22 10:35:36 PDT Updated Executing "upgradecage -a -f" as 0:12109
2010-10-22 10:35:36 PDT Errored upgradecage: Invalid option: -f
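
A hedged cleanup example; the alert ID 1234 is a hypothetical placeholder for the ID shown by showalert:

cli% removealert 1234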

VLUN
Displays inactive Virtual LUNs (VLUNs) and those that have not been reported by the host agent.
Reports VLUNs that have been configured but are not currently exported to hosts or host ports.

Format of Possible VLUN Exception Messages

vlun vlun:(<vvID>, <lunID>, <hostname>) "Path to <wwn> is not reported by host agent"

vlun vlun:(<vvID>, <lunID>, <hostname>) "Path to <wwn> is not seen by host"
vlun vlun:(<vvID>, <lunID>, <hostname>) "Path to <wwn> is failed"
vlun host:<hostname> "Host <ident>(<type>):<connection> is not connected to a port"

VLUN Example

Component ---------Description--------- Qty


vlun Hosts not connected to a port 1

Component -----Identifier----- ---------Description--------


vlun host:cs-wintec-test1 Host wwn:10000000C964121D is not connected to a port

VLUN Suggested Action
Check the export status and port status for the VLUN and host with CLI commands such as
showvlun, showvlun -pathsum, showhost, showhost -pathsum, showport,
servicehost list, etc. For example:

cli% showvlun -host cs-wintec-test1


Active VLUNs
Lun VVName HostName -Host_WWN/iSCSI_Name- Port Type
2 BigVV cs-wintec-test1 10000000C964121C 2:5:1 host
-----------------------------------------------------------
1 total

VLUN Templates
Lun VVName HostName -Host_WWN/iSCSI_Name- Port Type
2 BigVV cs-wintec-test1 ---------------- --- host

cli% showhost cs-wintec-test1


Id Name Persona -WWN/iSCSI_Name- Port
0 cs-wintec-test1 Generic 10000000C964121D ---
10000000C964121C 2:5:1
cli% servicehost list
HostName -WWN/iSCSI_Name- Port
host0 10000000C98EC67A 1:1:2
host1 210100E08B289350 0:5:2

Lun VVName HostName -Host_WWN/iSCSI_Name- Port Type
2 BigVV cs-wintec-test1 10000000C964121D 3:5:1 unknown

VV
Displays Virtual Volumes (VVs) that are not optimal. Checks for VVs and Common Provisioning
Groups (CPGs) whose state is not normal.

Format of Possible VV Exception Messages

VV vv:<vvname> "IO to this volume will fail due to no_stale_ss policy"


VV vv:<vvname> "Volume has reached snapshot space allocation limit"
VV vv:<vvname> "Volume has reached user space allocation limit"
VV vv:<vvname> "VV has expired"
VV vv:<vvname> "Detailed State: <state>" (failed or degraded)
VV cpg:<cpg> "CPG is unable to grow SA (or SD) space"

VV Suggested Action
Check status with CLI commands such as showvv, showvv -d, showvv -cpg.



10 Support and Other Resources
Contacting HP
For worldwide technical support information, see the HP support website:
http://www.hp.com/support

Before contacting HP, collect the following information:


• Product model names and numbers
• Technical support registration number or Service Agreement ID (if applicable)
• Product serial numbers
• Error messages
• Operating system type and revision level
• Detailed questions
Specify the type of support you are requesting:

HP 3PAR storage system                            Support request
HP 3PAR StoreServ 7200 and 7400 Storage systems   StoreServ 7000 Storage
HP 3PAR StoreServ 10000 Storage systems           3PAR or 3PAR Storage
HP 3PAR T-Class storage systems                   3PAR or 3PAR Storage
HP 3PAR F-Class storage systems                   3PAR or 3PAR Storage

HP 3PAR documentation
For information about: See:

Supported hardware and software platforms
    The Single Point of Connectivity Knowledge for HP Storage Products (SPOCK) website:
    http://www.hp.com/storage/spock

Locating HP 3PAR documents
    The HP 3PAR StoreServ Storage site: http://www.hp.com/go/3par
    To access HP 3PAR documents, click the Support link for your product.

HP 3PAR storage system software

Storage concepts and terminology
    HP 3PAR StoreServ Storage Concepts Guide

Using the HP 3PAR Management Console (GUI) to configure and administer HP 3PAR storage systems
    HP 3PAR Management Console User's Guide

Using the HP 3PAR CLI to configure and administer storage systems
    HP 3PAR Command Line Interface Administrator's Manual

CLI commands
    HP 3PAR Command Line Interface Reference

Analyzing system performance
    HP 3PAR System Reporter Software User's Guide

Installing and maintaining the Host Explorer agent in order to manage host configuration and connectivity information
    HP 3PAR Host Explorer User's Guide

Creating applications compliant with the Common Information Model (CIM) to manage HP 3PAR storage systems
    HP 3PAR CIM API Programming Reference



For information about: See:

Migrating data from one HP 3PAR storage system to another
    HP 3PAR-to-3PAR Storage Peer Motion Guide

Configuring the Secure Service Custodian server in order to monitor and control HP 3PAR storage systems
    HP 3PAR Secure Service Custodian Configuration Utility Reference

Using the CLI to configure and manage HP 3PAR Remote Copy
    HP 3PAR Remote Copy Software User's Guide

Updating HP 3PAR operating systems
    HP 3PAR Upgrade Pre-Planning Guide

Identifying storage system components, troubleshooting information, and detailed alert information
    HP 3PAR F-Class, T-Class, and StoreServ 10000 Storage Troubleshooting Guide

Installing, configuring, and maintaining the HP 3PAR Policy Server
    HP 3PAR Policy Server Installation and Setup Guide
    HP 3PAR Policy Server Administration Guide



For information about: See:

Planning for HP 3PAR storage system setup

Hardware specifications, installation considerations, power requirements, networking options, and cabling information for HP 3PAR storage systems:

HP 3PAR 7200 and 7400 storage systems
    HP 3PAR StoreServ 7000 Storage Site Planning Manual

HP 3PAR 10000 storage systems
    HP 3PAR StoreServ 10000 Storage Physical Planning Manual
    HP 3PAR StoreServ 10000 Storage Third-Party Rack Physical Planning Manual

Installing and maintaining HP 3PAR 7200 and 7400 storage systems

Installing 7200 and 7400 storage systems and initializing the Service Processor
    HP 3PAR StoreServ 7000 Storage Installation Guide
    HP 3PAR StoreServ 7000 Storage SmartStart Software User's Guide

Maintaining, servicing, and upgrading 7200 and 7400 storage systems
    HP 3PAR StoreServ 7000 Storage Service Guide

Troubleshooting 7200 and 7400 storage systems
    HP 3PAR StoreServ 7000 Storage Troubleshooting Guide

Maintaining the Service Processor
    HP 3PAR Service Processor Software User Guide
    HP 3PAR Service Processor Onsite Customer Care (SPOCC) User's Guide

HP 3PAR host application solutions

Backing up Oracle databases and using backups for disaster recovery
    HP 3PAR Recovery Manager Software for Oracle User's Guide

Backing up Exchange databases and using backups for disaster recovery
    HP 3PAR Recovery Manager Software for Microsoft Exchange 2007 and 2010 User's Guide

Backing up SQL databases and using backups for disaster recovery
    HP 3PAR Recovery Manager Software for Microsoft SQL Server User's Guide

Backing up VMware databases and using backups for disaster recovery
    HP 3PAR Management Plug-in and Recovery Manager Software for VMware vSphere User's Guide

Installing and using the HP 3PAR VSS (Volume Shadow Copy Service) Provider software for Microsoft Windows
    HP 3PAR VSS Provider Software for Microsoft Windows User's Guide

Best practices for setting up the Storage Replication Adapter for VMware vCenter
    HP 3PAR Storage Replication Adapter for VMware vCenter Site Recovery Manager Implementation Guide

Troubleshooting the Storage Replication Adapter for VMware vCenter Site Recovery Manager
    HP 3PAR Storage Replication Adapter for VMware vCenter Site Recovery Manager Troubleshooting Guide

Installing and using vSphere Storage APIs for Array Integration (VAAI) plug-in software for VMware vSphere
    HP 3PAR VAAI Plug-in Software for VMware vSphere User's Guide



Typographic conventions
Table 48 Document conventions
Convention Element

Bold text • Keys that you press


• Text you typed into a GUI element, such as a text box
• GUI elements that you click or select, such as menu items, buttons,
and so on

Monospace text • File and directory names


• System output
• Code
• Commands, their arguments, and argument values

<Monospace text in angle brackets> • Code variables


• Command variables

Bold monospace text • Commands you enter into a command line interface
• System output emphasized for scannability

WARNING! Indicates that failure to follow directions could result in bodily harm or death, or in
irreversible damage to data or to the operating system.

CAUTION: Indicates that failure to follow directions could result in damage to equipment or data.

NOTE: Provides additional information.

Required
Indicates that a procedure must be followed as directed in order to achieve a functional and
supported implementation based on testing at HP.

HP 3PAR branding information


• The server previously referred to as the "InServ" is now referred to as the "HP 3PAR StoreServ
Storage system."
• The operating system previously referred to as the "InForm OS" is now referred to as the "HP
3PAR OS."
• The user interface previously referred to as the "InForm Management Console (IMC)" is now
referred to as the "HP 3PAR Management Console."
• All products previously referred to as “3PAR” products are now referred to as "HP 3PAR"
products.



11 Documentation feedback
HP is committed to providing documentation that meets your needs. To help us improve the
documentation, send any errors, suggestions, or comments to Documentation Feedback
(docsfeedback@hp.com). Include the document title and part number, version number, or the URL
when submitting your feedback.
