TCI2208
© Hitachi Data Systems Corporation 2014. All rights reserved. HITACHI is a trademark or registered trademark of Hitachi, Ltd. Innovate With Information is a trademark or
registered trademark of Hitachi Data Systems Corporation. All other trademarks, service marks, and company names are properties of their respective owners.
Contents
Introduction ............................................................................................................ xvii
Welcome and Introductions .......................................................................................... xvii
Course Description .......................................................................................................xviii
Prerequisites ................................................................................................................ xix
Course Objectives .......................................................................................................... xx
Course Topics............................................................................................................... xxi
Learning Paths ............................................................................................................ xxii
HDS Academy Is on Twitter and LinkedIn ......................................................................xxiii
Collaborate and Share ................................................................................................. xxiv
Hitachi Data Systems Community .................................................................................. xxv
Introduction
Welcome and Introductions
Student Introductions
• Name
• Position
• Experience
• Expectations
Introduction
Course Description
Introduction
Prerequisites
Prerequisite Courses
• CCI0110 — Storage Concepts
• THE1860 — Hitachi Data Systems Foundations — Modular
• TCC1690 — Introduction to the Hitachi Adaptable Modular Storage 2000 Family
Other Prerequisites
• Basic knowledge and understanding of SAN
• Use of Microsoft® Windows® Operating System
Supplemental Courses
• TSI2258 — Provisioning for Hitachi Unified Storage
• CSI0157 — Data Protection Techniques for Hitachi Modular Storage
Introduction
Course Objectives
Introduction
Course Topics
Introduction
Learning Paths
HDS.com: http://www.hds.com/services/education/
HDSnet: http://hdsnet.hds.com/hds_academy/
Please contact your local training administrator if you have any questions regarding Learning
Paths or visit your applicable website.
Introduction
HDS Academy Is on Twitter and LinkedIn
http://twitter.com/#!/HDSAcademy
http://www.linkedin.com/groups?gid=3044480&trk=myg_ugrp_ovr
Introduction
Collaborate and Share
Academy is in theLoop!
theLoop: http://loop.hds.com/community/hds_academy/course_announcements_and_feedback_community (HDS internal only)
Introduction
Hitachi Data Systems Community
community.hds.com
1. Hitachi Unified Storage Family Overview
Module Objectives
Page 1-1
Hitachi Unified Storage Family Overview
Overview
1 PB holds about 10 billion objects or files; over 100 trillion objects were stored in 2010.
File and content solutions are critical to solving this management problem.
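As a rough sanity check on those figures (an illustrative calculation, not from the course material): at roughly 10 billion objects per petabyte, the average object is about 100 KB.

$$ \frac{1\ \mathrm{PB}}{10^{10}\ \text{objects}} \approx \frac{10^{15}\ \mathrm{B}}{10^{10}} = 10^{5}\ \mathrm{B} = 100\ \mathrm{KB\ per\ object} $$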
Page 1-2
Hitachi Unified Storage Family Overview
Market Drivers to Unified Storage
Unified storage solutions have recently been designed and adopted to address this problem.
Unified storage provides several benefits including:
In an environment where one type of usage predominates (such as file storage for unstructured data) but there remains a need for some block storage (an Exchange database, for example), unified storage can reduce not only CAPEX but also OPEX, through lower floor space, power consumption, administration, licensing and maintenance.
Customers can also gain advantages from unified storage through its flexibility in quickly and
efficiently provisioning storage to block and file applications on demand.
Customers who want to deploy a block storage solution today, but want the ability to consolidate their file storage onto a single platform with their block storage in the future, will benefit from the investment protection that unified storage provides.
Page 1-3
Hitachi Unified Storage Family Overview
Hitachi Key Differentiators
Hitachi has combined our leading block technology with the file
storage technology from BlueArc, now a part of Hitachi Data
Systems, to deliver unified storage without compromise
We know we are not first to market, but we are better than our competitors because of the
following three key points:
Only platform to support three data types: Block, File and Object*.
Unified redefined means that we are providing unified storage management not just on this
new platform, but across the entire HDS portfolio (AMS – HUS – HNAS – HCP – HDI – VSP).
Unified without compromise means that unlike our competitors who are strong in one data type,
but weak on the other, we have strength in both block and file. Customers get the best of both
worlds without compromise.
* Object data support provides unique differentiation for us, since no other vendor does it as comprehensively as we do. First, the File Module supports an object-based file system, which enables tiering and migration, object-based replication and fast searches of data. Second, we support the Hitachi Content Platform (HCP) to provide a true object store with regulatory compliance and the ability to add custom metadata. But unlike one competitor, the HCP can use the Block Module capacity as a common storage pool to store objects. This is more space efficient and cost effective than our competitors' separate and siloed object store solutions.
Page 1-4
Hitachi Unified Storage Family Overview
Unified Redefined: Unifying Across the Storage Infrastructure
[Portfolio diagram, unified by Hitachi Command Suite:]
• Specialized appliances: HDI Software, HDI Single, HDI Dual
• Content platform: HCP 300, HCP 500
• File-only platform: HNAS 3080, HNAS 3090, HNAS 3200
• Unified platform: HUS 110, HUS 130, HUS 150, HUS VM
• Block-only platforms: HUS 110, HUS 130, HUS 150, HUS VM, VSP
Page 1-5
Hitachi Unified Storage Family Overview
Hitachi Unified Storage Portfolio
• Unified redefined
• Unified management platform for all data types
• Applications from local office to entry enterprise
• Flash storage options across the product line
[Diagram: models positioned from regional office through medium business and large business to entry enterprise]
Our unified storage portfolio redefines unified storage. HDS offers unified management of all
data across the entire portfolio. The individual models are designed to span the needs of
organizations from regional offices to entry enterprise. We offer flash storage across our entire
product line. When making a purchase decision, one of the primary tradeoffs is between the price of the purchased capacity and its performance. Our portfolio offers models that span the price, capacity and performance range to meet the needs of all businesses. All models support internal virtualization technologies such as thin provisioning and dynamic tiering, while the HUS VM model brings enterprise storage virtualization from our enterprise VSP platform.
Page 1-6
Hitachi Unified Storage Family Overview
Hitachi Unified Storage Models
To be specific, the Unified Storage portfolio includes 4 models. As you can see, they differ in the scalability of their capacity, cache and connectivity. All of them use Hitachi Command Suite for storage management.
FC — Fibre Channel
Page 1-7
Hitachi Unified Storage Family Overview
Configurations
Page 1-8
Hitachi Unified Storage Family Overview
HUS 110 Box Names
Page 1-9
Hitachi Unified Storage Family Overview
HUS 130 Configurations
2U Block Module
• 2.5-inch x 24 internal drives, or
• 3.5-inch x 12 internal drives
Drive Box (Expansion Units)
4U LFF Dense Disk Tray
• 3.5-inch x 48 drives (SAS 7.2k HDD)
Page 1-10
Hitachi Unified Storage Family Overview
HUS 130 Box Names
Page 1-11
Hitachi Unified Storage Family Overview
HUS 150 Configurations
3U Block Module
• No internal HDDs
Drive Box (Expansion Units)
4U LFF Dense Disk Tray
• 3.5-inch x 48 drives (SAS 7.2k HDD)
Page 1-12
Hitachi Unified Storage Family Overview
HUS 150 Box Names
Page 1-13
Hitachi Unified Storage Family Overview
HUS Family Summary
HUS 130
• 16GB to 32GB cache
• 16 FC, 8 FC and 4 iSCSI ports
• HDDs: mix of up to 360 SSD and SAS drives
HUS 150
• 16GB to 32GB cache
• 16 FC, 8 FC and 4 or 8 iSCSI ports
[Diagram: models arranged by throughput and scalability]
Page 1-14
Hitachi Unified Storage Family Overview
Features
Dynamic Virtual Controller Front End: A feature where a request for an LU at any host port can be executed by either controller.
Load Balancing: The controllers can load balance when there is an imbalance between them; LU management is shifted to the underutilized controller.
SAS Back End: 2 or 4 SAS I/O controller processors (IOCs) and SAS disk switches, which work in parallel with the Intel CPUs and DCTL ASICs. Each IOC chip auto-selects which of its eight 6Gb/sec SAS links to use to communicate with each disk in its set of trays. The back end from each IOC is operated as a matrix instead of as static connections.
Dynamic Provisioning: Provides wide striping across RAID groups. Since RAID groups may not have a uniform workload, they can become a performance bottleneck. With Dynamic Provisioning, volumes are spread across RAID groups, which also allows for thin provisioning.
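As a conceptual sketch of wide striping (a toy model, not HDS firmware; the page granularity and names are invented for illustration): pages of a Dynamic Provisioning volume are laid out round-robin across the RAID groups of a pool, so no single RAID group absorbs the whole workload.

```python
# Toy model of Dynamic Provisioning wide striping (illustrative only):
# logical pages of a DP volume are assigned round-robin across RAID
# groups, spreading one volume's I/O load over the whole pool.

def place_pages(num_pages: int, raid_groups: list) -> dict:
    """Assign each logical page to a RAID group, round-robin."""
    layout = {rg: [] for rg in raid_groups}
    for page in range(num_pages):
        layout[raid_groups[page % len(raid_groups)]].append(page)
    return layout

if __name__ == "__main__":
    pool = ["RG-0", "RG-1", "RG-2", "RG-3"]
    for rg, pages in place_pages(12, pool).items():
        print(rg, pages)   # each RAID group receives an equal share
```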
Page 1-15
Hitachi Unified Storage Family Overview
HUS Product Family Features
[Diagram: traditional active/standby paths versus active-active paths]
Users do not need to consider controller and port load balancing when doing performance design. Simply set the path management software on all hosts to active-active mode, and the controller load is balanced automatically.
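A minimal sketch of that balancing idea (a toy model of my own, not the controller algorithm; the threshold and loads are invented): when the gap between the two controllers exceeds a margin, LU ownership shifts to the idler controller.

```python
# Toy model of automatic controller load balancing (illustrative only):
# LU ownership moves from the busier controller to the idler one until
# the imbalance drops below a threshold.

def rebalance(lu_load: dict, owner: dict, threshold: int = 20) -> dict:
    """owner maps LU name -> controller (0 or 1); returns a new mapping."""
    owner = dict(owner)
    while True:
        load = [0, 0]
        for lu, ctl in owner.items():
            load[ctl] += lu_load[lu]
        busy, idle = (0, 1) if load[0] >= load[1] else (1, 0)
        gap = load[busy] - load[idle]
        if gap <= threshold:
            return owner
        # only move an LU whose load is smaller than the gap, so each
        # move strictly reduces the imbalance and the loop terminates
        candidates = [l for l, c in owner.items()
                      if c == busy and lu_load[l] < gap]
        if not candidates:
            return owner
        owner[min(candidates, key=lambda l: lu_load[l])] = idle

if __name__ == "__main__":
    loads = {"LU0": 80, "LU1": 10, "LU2": 15, "LU3": 5}
    print(rebalance(loads, {lu: 0 for lu in loads}))
    # LU0 stays on controller 0; the small LUs shift to controller 1
```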
Page 1-16
Hitachi Unified Storage Family Overview
HUS Product Family Features
Performance aggregation
• I/O is processed on two controllers
• Port performance can equal the maximum for both controllers
[Diagram: an I/O request to one port that exceeds a single controller's maximum throughput is processed by both controllers (CTL 0 and CTL 1) through automatic load balancing]
I/O requests to a port can be processed on two controllers by using a cross-path mechanism, so port performance can exceed the maximum performance of a single controller, up to the combined maximum of both controllers.
Page 1-17
Hitachi Unified Storage Family Overview
HUS Product Family Features
Benefits
• Nondisruptive firmware updates are easily and quickly accomplished
• Firmware can be updated without interrupting I/O
[Diagram: previously the user had to change paths during an update; with HUS there is no requirement to change paths]
Page 1-18
Hitachi Unified Storage Family Overview
HUS Product Family Features
Green IT
• Power savings (power down) option available for RG on all systems
Tray power saving available for DBWs on HUS 150 (V6)
• Higher density drives using less power = fewer BTUs
Saves money, saves the environment
• Variable fan speeds
Internal temperature of subsystem will control fan speed
Encryption
• Hardware encryption
Write data (plaintext) from the host is encrypted with keys by the encrypting back-end I/O module and is then stored to the physical drives
VAAI (VMware vStorage APIs for Array Integration) support
• All HUS models support VAAI
Hitachi Unified Storage performs copy and backup operations on behalf of VMware.
RG — RAID Group
Extreme reliability
• 99.999% data availability with no single point of failure
• Nondisruptive firmware updates
• Hot swappable major components
• Cache backed up to flash on power failure (unlimited retention)
• Flexible drive sparing with no copy back required after a RAID rebuild
• In-system and remote data replication options
• Support for RAID-6
Page 1-19
Hitachi Unified Storage Family Overview
Model Architecture
Internal Names
HUS 110
• DF850XS
HUS 130
• DF850S
HUS 150
• DF850MH
Note : The internal names are given only for their relevance to product documentation.
When referring to Hitachi Unified Storage products, please always use the official
product names of HUS 110, HUS 130 or HUS 150.
Page 1-20
Hitachi Unified Storage Family Overview
HUS 110 (Block Module) Architecture
• 1 SAS port
• 4 FC ports
• 2 Ethernet ports
• 1 slot for the expansion card
Page 1-21
Hitachi Unified Storage Family Overview
HUS 130 (Block Module) Architecture
Up to 330 disks (SAS, SSD)
• 2 SAS ports
• 4 FC ports
• 2 Ethernet ports
• 1 slot for the expansion card
Page 1-22
Hitachi Unified Storage Family Overview
HUS 150 (Block Module) Architecture
• 4 SAS ports
Page 1-23
Hitachi Unified Storage Family Overview
Common Features Across the Hitachi Unified Storage Models
Host Interface
• FC — 8Gb/sec
• iSCSI — 10Gb/sec
• iSCSI — 1Gb/sec (HUS 110 and HUS 130)
Each front-end port can connect up to:
• 128 FC devices
• 255 iSCSI devices
Port security
Internal bus is PCIe 2.0
Cache protected by flash backup
Port security is also known as host-group security and is enabled per front-end port.
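To make those limits concrete, here is a small helper (my own illustrative check; the per-port limits come from the list above, the function itself is hypothetical):

```python
# Per-front-end-port device limits quoted above (HUS 100 family).
PORT_LIMITS = {"FC": 128, "iSCSI": 255}

def can_attach(port_type: str, current: int, new: int) -> bool:
    """True if the port can accept `new` more devices without exceeding its limit."""
    return current + new <= PORT_LIMITS[port_type]

print(can_attach("FC", 120, 8))     # True: exactly at the 128-device limit
print(can_attach("iSCSI", 250, 6))  # False: would exceed 255 devices
```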
Page 1-24
Hitachi Unified Storage Family Overview
Cache Protection
Write-back
• Default logic for writes received from hosts
• Host issues a write
• Data is written into cache and acknowledged to the host
• Data (dirty) is then asynchronously destaged to disks
Electrical power lost
• Cache (volatile) contents are backed up to flash memory (non-volatile)
Flash memory (non-volatile)
• Contents are retained indefinitely
[Diagram: a host write is acknowledged (ACK OK) from volatile cache memory; a small battery powers the copy of cache contents to the non-volatile flash memory drive]
The HUS 100 product provides an improved method to handle dirty data in cache when a power
outage occurs. HUS 100 models still use a battery to keep cache alive during an outage, but
only for as long as it takes to back up the contents of cache to a flash memory module installed
on the controller board. Once cache data has been copied to flash, the data will be safe for an
infinite amount of time and can be recovered to cache at any time once power is restored and
the system is again powered on.
Batteries can be fully recharged in 3 hours, enabling 2 consecutive outages to occur with no
loss of data.
Note: The second consecutive outage without a recharge will trigger write-through mode by
default (user-definable) until the battery is recharged to full, thus avoiding loss of cached data
under any circumstances.
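As an illustration of the sequence just described, here is a toy state model (my sketch, not firmware code; the 50% per-backup battery cost is an assumption chosen so that exactly 2 consecutive outages are survivable):

```python
# Toy model of HUS cache protection (illustrative only, not firmware code).

class CacheProtection:
    BACKUP_COST = 50        # one cache-to-flash backup cycle, in % of charge

    def __init__(self):
        self.battery_charge = 100
        self.dirty_pages = 0        # host writes ACKed from cache
        self.flash_pages = 0        # pages saved to non-volatile flash
        self.write_through = False

    def host_write(self):
        if not self.write_through:
            self.dirty_pages += 1   # write-back: ACK once data is in cache

    def power_outage(self):
        # The battery keeps cache alive just long enough to copy it to flash.
        self.flash_pages = self.dirty_pages
        self.battery_charge -= self.BACKUP_COST

    def power_restore(self):
        self.dirty_pages = self.flash_pages   # recover cache from flash
        # If the battery can no longer fund another backup cycle, default
        # to write-through until it is fully recharged (about 3 hours).
        self.write_through = self.battery_charge < self.BACKUP_COST

    def recharge(self):
        self.battery_charge = 100
        self.write_through = False

if __name__ == "__main__":
    c = CacheProtection()
    c.host_write(); c.power_outage(); c.power_restore()
    print(c.write_through)   # False: one more backup cycle remains
    c.power_outage(); c.power_restore()
    print(c.write_through)   # True: write-through until recharged
```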
Page 1-25
Hitachi Unified Storage Family Overview
On Power Outage with HUS 100 (Backup)
[Diagram: on power outage, each controller (0 and 1) copies its volatile cache contents to its non-volatile flash memory]
Page 1-26
Hitachi Unified Storage Family Overview
On Return of Power (Restore)
[Diagram: on return of power, cache contents are restored from non-volatile flash memory; controller serial numbers shown]
The remaining flash backup process is completed (all cache area data is backed up to the flash memory once).
Then all data is restored from flash memory to cache memory.
Page 1-27
Hitachi Unified Storage Family Overview
Clearing the Data From Flash Memory
[Diagram: once the batteries are 100% charged, the backup data is cleared from the flash memory on both controllers]
Page 1-28
Hitachi Unified Storage Family Overview
Write Command Mode (During Battery Charging)
Page 1-29
Hitachi Unified Storage Family Overview
Management Tools
[Diagram: managed platforms also include Hitachi NAS Platform and Hitachi Universal Storage Platform V]
Page 1-30
Hitachi Unified Storage Family Overview
Management Tools Overview
Hitachi Storage Navigator Modular 2 (SNM 2) can be installed in Microsoft® Windows®, Solaris or Red Hat Linux environments. It is the integrated interface for the standard firmware and software features of Hitachi Unified Storage (HUS) and is required for taking advantage of the full feature set that HUS offers.
The GUI requires Internet Explorer (or another supported browser).
For customer use, the JRE component is not required, but a CE who needs the Additional Settings applet must install JRE 1.6.
Page 1-31
Hitachi Unified Storage Family Overview
Module Summary
Page 1-32
Hitachi Unified Storage Family Overview
Module Review
Page 1-33
2. Hitachi Unified Storage Components
Module Objectives
Page 2-1
Hitachi Unified Storage Components
Components
This section presents the hardware and software components of the HUS product family.
HUS Components
Hardware
• HUS 110 (2 models) — CBXSL and CBXSS
• HUS 130 (2 models) — CBSL and CBSS
• HUS 150 (1 model) — CBL
• DBL (3.5-inch x 12 disks) — NLSAS disks
• DBF (FMD x 12) — Flash module drives
• DBS (2.5-inch x 24 disks) — SAS and flash drive disks
• DBX (3.5-inch x 48 disks) — NLSAS disks
• DBW (3.5-inch x 84 disks) — NLSAS disks
• Rack
Page 2-2
Hitachi Unified Storage Components
HUS 110 CBXSL1 Controller
Components
Components
• 1 or 2 controllers
• 4 embedded Fibre Channel host interface ports per controller (8 max)
• Drives supported (3.5-inch x 12): SAS 7.2k
• Optional host interface cards (1 per controller, 2 max)
• Serial number begins with 911
• Power supply unit x 2
[Front and rear views shown]
Page 2-3
Hitachi Unified Storage Components
HUS 110 CBXSS1 Controller
Components
Components
• 1 or 2 controllers
• 4 embedded Fibre Channel host interface ports per controller (8 max)
• Drives supported (2.5-inch x 24): SAS 10k/15k and SSD
• Optional host interface cards (1 per controller, 2 max)
• Serial number begins with 912
• Power supply unit x 2
[Front and rear views shown]
1 HUS 110 and HUS 130 share the same controller box and are known as the CBSL (2U x 12)
and CBSS (2U x24). CBXSL and CBXSS are terms used to describe the HUS 110 controller boxes
in the HUS 100 Maintenance Manual; they are not used elsewhere.
Page 2-4
Hitachi Unified Storage Components
HUS 130 CBSL Controller
Components
Components
• 2 controllers
• 4 embedded Fibre Channel host interface ports per controller (8 max)
• Drives supported (3.5-inch x 12): SAS 7.2k
• Optional host interface cards (1 per controller, 2 max)
• Serial number begins with 921
• Power supply unit x 2
[Front and rear views shown]
Page 2-5
Hitachi Unified Storage Components
HUS 130 CBSS Controller
Components
Components
• 1 frame assembly
• 2 controllers
• 4 embedded Fibre Channel host interface ports per controller (8 max)
• Drives supported (2.5-inch x 24): SAS 10k/15k and SSD
• Optional host interface cards (1 per controller, 2 max)
• Serial number begins with 922
• Power supply unit x 2
[Front and rear views shown]
Page 2-6
Hitachi Unified Storage Components
HUS 150 CBL Controller
2 controllers
1 bezel
No drives
Serial number begins with 930
2 power supplies
6 fans
2 batteries
2 management modules
2 back-end I/O modules
Rear View
2 host interface modules
Page 2-7
Hitachi Unified Storage Components
DBL Disk Tray
Consists of:
• 1 frame assembly
• 12 3.5-inch drive slots (SAS 7.2k)
• 2 Enclosure Controllers (ENCs)
• 2 power supply units
• 1 bezel
• 2 SAS cables (1 m)
Front View
Rear View
Page 2-8
Hitachi Unified Storage Components
DBS Disk Tray
Consists of:
• 1 frame assembly
• 24 2.5-inch drive slots (SAS 10k/15k and SSD)
• 2 Enclosure Controllers (ENCs)
• 2 power supply units
• 1 bezel
• 2 SAS cables (1 m)
Front View
Rear View
Page 2-9
Hitachi Unified Storage Components
DBX Disk Tray
Consists of:
• 1 frame assembly
• 48 3.5-inch drive slots (SAS 7.2k)
• 4 Enclosure Controllers (ENCs)
• 4 power supply units
• 1 bezel
• 4 SAS cables (3 m)
• 1 rail kit
Page 2-10
Hitachi Unified Storage Components
DBX Disk Tray (Rear)
Page 2-11
Hitachi Unified Storage Components
DBW Disk Tray
[Front and rear views with numbered callouts]

#   Component           Qty    Function
1   Power supply unit   x2     AC input power supply
2   I/O module          x2     Interface between controller box / other dense tray
3   Fan module          x5     Cooling
4   Drive slots         x84    Data storage
5   Drawer              x2     One drawer can store a maximum of 42 HDDs
6   Side card           x4     Assembly containing expander, power connection
7   Bezel               x2     n/a
8   Frame assembly      x1     Chassis

Also included: 2 SAS cables (3 m) and 1 rail kit
Page 2-12
Hitachi Unified Storage Components
DBW Dense Box 84 HDDs
[Diagram: 84-HDD dense box layout. Drawer 1 (upper) holds HDD #0 through #41; Drawer 2 (lower) holds HDD #42 through #83. Left and right side cards (Side Card-A-U, Side Card-A-L) carry the expander and power connections.]
From the first use of an 84-HDD dense tray, mounting 14 HDDs in the designated area is mandatory.
Page 2-13
Hitachi Unified Storage Components
New Drive Type FMD
• The DF850 (HUS 100 family) supports the NF1000 flash module drive
• High performance
• Higher reliability
Page 2-14
Hitachi Unified Storage Components
DF850 Supports NF1000
Supported from the 9/30/2013 Q-code.
Page 2-15
Hitachi Unified Storage Components
DBF Drive Box for NF1000
[Diagram: 2U DBF drive box with IN and OUT SAS connectors; the jumper pin is not currently used]
Page 2-16
Hitachi Unified Storage Components
Flash Module Drive (FMD)
[Diagram: FMD internals: ASIC, battery and two banks of 4 DIMMs]
Page 2-17
Hitachi Unified Storage Components
DBF Spec
BECK (Ver.2.5.0.0)
Page 2-18
Hitachi Unified Storage Components
Power Supplies, Batteries and Power Cables
Note:
1. DBW PDU connector should be C19 type
2. DBW power cable end type should be C19-C20 type
3. Cable connection between DBW and new PDU must take into account that the DBW is
rated for 16A current
DBW: IEC320-C20 (PDU) to IEC320-C19 (power supply)
Other trays: IEC320-C14 (PDU) to IEC320-C13 (power supply)
Page 2-19
Hitachi Unified Storage Components
HUS 110 and HUS 130 Power Supply Unit
Page 2-20
Hitachi Unified Storage Components
HUS 110 and 130 Batteries
Battery connector
Connect as shown
Page 2-21
Hitachi Unified Storage Components
HUS 110 Controller Board
Note there is 1 user cache dual in-line memory module (DIMM) in the HUS 110 controller board,
compared to 2 in HUS 130.
Page 2-22
Hitachi Unified Storage Components
HUS 130 Controller Board
Page 2-23
Hitachi Unified Storage Components
HUS 110 and 130 Option Module
Page 2-24
Hitachi Unified Storage Components
HUS 150 Battery — Front View
Loosen thumbscrew to
remove battery module
There are 2 battery modules, which are located in the front. The arrow shows the MAIN SW for
switching power on or off.
To remove the batteries, loosen the thumb screw and remove the battery unit.
Page 2-25
Hitachi Unified Storage Components
HUS 150 Fan Unit — Front View
6 fan units
Loosen thumbscrew to remove fan unit
Page 2-26
Hitachi Unified Storage Components
HUS 150 Controller Unit
Page 2-27
Hitachi Unified Storage Components
HUS 150 I/O Modules
Close-up view of I/O module; Fibre Channel module shown
From left to right, you see the LAN management module, the ENC module (SAS ports), the host
interface module and the Fibre Channel module (FC Ports 8Gb/sec x 4).
Page 2-28
Hitachi Unified Storage Components
Module Summary
Page 2-29
Hitachi Unified Storage Components
Module Review
Page 2-30
3. Installation — Part 1
Module Objectives
Page 3-1
Installation — Part 1
Installation Resource Documentation
Overview
Maintenance Manual
Getting Started Guide
• Publication #: MK-91DF8303
System Assurance Document
Note: A User Manual and Quickstart Guide are included with purchase of a
Hitachi Unified Storage system.
Page 3-2
Installation — Part 1
Maintenance Manual Overview
Safety Summary
Introduction
Installation
Firmware
System Parameter
Addition/Removal/Relocation
Upgrade
Troubleshooting
Message
Replacement
Parts Catalog
WEB
Page 3-3
Installation — Part 1
Instructor Demonstration
Prior to installing the HUS storage system, you must verify that the proper components are
available to produce the purchased configuration. The System Assurance Document (SAD) and
any other available documentation, such as the purchase order or bill of materials, can be
referenced for purposes of verification.
Page 3-4
Installation — Part 1
System Assurance Document (SAD)
Page 3-5
Installation — Part 1
Recommended Safety Precautions
Wrist Straps
• A wrist strap prevents part failures caused by static electrical charge built up in your own body
• Be sure to wear a wrist strap connected to the chassis
Precautions
• Always use wrist straps and antistatic mats when handling components
• Put components into ESD bags for transport
• Components handled without ESD bags are easily damaged, and damaged components are more likely to fail in the future
Page 3-6
Installation — Part 1
Electrostatic Discharge (ESD)
Damage Example
Page 3-7
Installation — Part 1
Installing a New Frame
• In cases where there is no firmware on the storage system, an initial firmware has to be
installed.
• After completing the new frame installation, close the doors and configure the host
computers.
Page 3-8
Installation — Part 1
Tools Required for Installation
Hitachi Modular racks always come with side panels in Americas and APAC, and always come
without side panels in EMEA.
Hitachi Modular racks are designed to hold a Hitachi Unified Storage 110, 130 and 150
consisting of a Controller Box and 1 or more DBS and DBL Drive Boxes. All Hitachi Data Systems
Modular racks are 42U high X 1.96 feet (600 mm) wide X 3.60 feet (1100 mm) deep 19-inch
cabinets capable of containing all components required for a full installation of the Hitachi
Unified Storage system.
Page 3-9
Installation — Part 1
Hitachi Modular Racks
Designed to hold Hitachi Unified Storage 110, 130 and 150 systems
and related components
Page 3-10
Installation — Part 1
Step 1: Unpacking the Rack Frame
This table has been copied from the Installation Manual’s installation section.
In Chapter 2.4.3 you will find examples on how to combine the different tray types and how to
install them into the available racks.
Page 3-11
Installation — Part 1
• Installation Manual - INST 02-0020, “Section 2.1 Procedures for Installing Array”
Controllers:
Disk trays:
• Check that the contents (model names, product serial numbers and quantities) agree
with the packing list shipped with the storage system
Page 3-12
Installation — Part 1
• Service personnel must keep the supplied key with the storage system
(CBXSL/CBXSS/CBSL/CBSS/DBL/DBS for front bezel, DBX for front lock) in order to
prevent users from maintaining the storage system
• The key for the front bezel is used to mount and dismount front bezel
• The key for the front lock is used to lock and unlock the front of the DBX
Page 3-13
Installation — Part 1
Step 2: Unpacking the Storage System
The Genie lift (GL-8) with GL-LP platform (or a compatible lift device) is required to install the DF850-DBX. This can be ordered from HDS Logistics if unavailable at the customer site.
• Please order the following part number well in advance of the install and allow 5 days for delivery:
Note: The GL-8 can be raised to 8 ft 3 in and has a load rating of 400 lbs.
Note: For installations in the UK and Europe, a transport/logistics company will perform the physical installation of the high-capacity expansion units into the rack.
Page 3-14
Installation — Part 1
Step 4: Mounting Components on the Rack Frame
Procedures for installation of DBX/DBW may differ depending on the region. The picture shows an example; see the notes section for more information.
Peel off the anti-adhesion sheet and place the EMI gasket in the correct location.
Page 3-15
Installation — Part 1
Step 4: Mounting Components on the Rack Frame
Page 3-16
Installation — Part 1
Step 4: Mounting Components on the Rack Frame
• CBXSL
• CBXSS
• CBSL
• CBSS
• DBS
• DBL
• CBL
• DBX
Page 3-17
Installation — Part 1
Module Summary
Page 3-20
Installation — Part 1
Module Review
Page 3-21
4. Installation — Part 2
Module Objectives
Page 4-1
Installation — Part 2
Installing a New Frame
Normally, firmware comes installed on the storage system. You may need to update the
firmware if an update is available. In cases where there is no firmware on the storage system,
an initial firmware has to be installed.
After completing the new frame installation, close the doors and configure the host computers.
Page 4-2
Installation — Part 2
Step 5: Installing the Components
Controllers:
Page 4-3
Installation — Part 2
Step 5: Installing the Components
Disk trays:
• DBL (3.5 inch x 12 disks) – nearline serial attached SCSI (NLSAS) disks
• DBS (2.5 inch x 24 disks) – serial attached SCSI (SAS) and flash drive disks
• DBX (3.5 inch x 48 disks) – NLSAS disks
Page 4-4
Installation — Part 2
Step 5: Installing the Components
Page 4-5
Installation — Part 2
Step 5: Installing the Components
Page 4-6
Installation — Part 2
Step 6: Connecting the Cables
Page 4-7
Installation — Part 2
Step 6: Connecting the Cables
Page 4-8
Installation — Part 2
Step 6: Connecting the Cables
Page 4-9
Installation — Part 2
Step 6: Connecting the Cables
The above diagram shows the rear view of a CBL. The CBL (HUS 150) is made of 2 controllers.
Page 4-10
Installation — Part 2
Step 6: Connecting the Cables
The above diagram shows the rear view of a CBL. The CBL (HUS 150) is made of 2 controllers.
Page 4-11
Installation — Part 2
Step 6: Connecting the Cables
The above diagram shows the cable connection between CBXSL/CBXSS and DBL/DBS.
Page 4-12
Installation — Part 2
Step 6: Connecting the Cables
The above diagram shows the port numbers on HUS 110, HUS 130 and HUS 150.
Page 4-13
Installation — Part 2
Step 6: Connecting the Cables
The above diagram shows the port IDs when iSCSI ports are installed.
• Each option card on HUS 110 or HUS 130 has 2 iSCSI ports.
• Each I/O module on HUS 150 has 2 iSCSI ports.
Take care when connecting cables; the SAS connectors for OUT are different from the
connectors for IN.
Page 4-14
Installation — Part 2
Step 7: Attaching the Decoration Panel
Page 4-15
Installation — Part 2
Step 8: Powering On the Storage System
ON/OFF Switch
Note: The Power LED on the front of the controller box can be:
• Off: power cables are not connected, or power cables are connected but the PDU circuit breakers are off
• Orange: the PDU circuit breakers are on and the power cables are connected, but the main switch is off
• Green: everything is connected and turned on
Page 4-16
Installation — Part 2
Step 8: Powering On the Storage System
ON/OFF Switch
Page 4-17
Installation — Part 2
Step 9: Connecting a Service PC or Laptop to the Storage System
Page 4-18
Installation — Part 2
Step 10: Installing and Updating the Firmware
This topic provides instructions for installing and updating firmware, and presents 2 methods for
firmware updates.
Hitachi Unified Storage should come with firmware when it arrives from the factory. If it does,
the customer engineer should perform a firmware update if one is needed.
If the storage system does not arrive from the factory with firmware, then an initial firmware
update is needed.
This requires the Web Tool, Java Runtime Environment 1.6, httpclient.jar and modifications to
the java.policy file.
Page 4-19
Installation — Part 2
Step 11: Setting the Storage System
This section provides information about the components and steps required for setting up the storage system.
RAID groups and license settings are discussed in the “Storage Configuration” module.
Page 4-20
Installation — Part 2
Step 11: Setting the Storage System
• Flash drives
• SAS 10k
• SAS 7.2k
Notes :
Page 4-21
Installation — Part 2
Step 11: Setting the Storage System
Page 4-22
Installation — Part 2
Step 12: Powering Off and Restarting Storage System
ON/OFF Switch
Page 4-23
Installation — Part 2
Step 13: Connecting Host Interface Cables
0E 0F 0G 0H
• Installation Manual – INST 02-0940, “Section 2.4.10 Connecting the Interface Cables”
Page 4-24
Installation — Part 2
Back End Configuration Kit (BECK) Tool
• Located on the SNM 2 CD
• Standalone program
• Can be used to create a fresh back-end configuration
• Users can input a simple trace file to modify existing configurations
Page 4-25
Installation — Part 2
Using the BECK Tool — Blank Configuration
Note: The next 3 slides show an example of configuring a new (empty) HUS 150.
Page 4-26
Installation — Part 2
Configuring HUS 150
• Click Edit.
• Right-click in the Path1 field.
• Choose a drive box from the list.
Page 4-27
Installation — Part 2
Configuring HUS 150
Page 4-28
Installation — Part 2
Configuring HUS 150
Page 4-29
Installation — Part 2
Appendix A
Page 4-30
Installation — Part 2
1. Overview
Points
Some configurations are restricted: behind the first DBW, the total number of connectable drive boxes is limited to 11 on HUS 150 and 5 on HUS 130 (counting DBx or DBW).
Non-affected configurations/cases ("DBx" means DBS, DBL or DBX; for reference, a DBX holds 48 NL-SAS drives and a DBW holds 84):
• All-DBW configurations
• Existing configurations (all DBWs) + newly added DBx or DBW
• Existing configurations (DBx) + newly added DBW + newly added DBx
Guarding logic
1. Array firmware and SNM 2:
• System boot-up case: Not Ready status, with an error message
• Drive box addition case: the drive box cannot be added, and an error message is shown
2. BECK tool (Back-End Configuration Kit):
• Guards against the restricted configurations
Page 4-31
Installation — Part 2
2. Support Configurations and Schedules
[Table: supported HDD counts (960, 336, 960, 960, 240, 360, 360, 360) by configuration, with the firmware version at which each combination of DBW and DBx trays is supported (v1.5, v2.0, v3.0, v3.7, v5.0). "DBx" means DBS, DBL or DBX.]
Page 4-32
Installation — Part 2
3. Support Configuration Restrictions
This is because of the flat mounting rule, which states that only 2 or fewer drive boxes can be connected downstream of a DBW in the same path. See the next slide for a BECK Tool example.
[Diagram: example HUS 150 and HUS 130 paths with DBx boxes]
Page 4-33
Installation — Part 2
Support Configuration Restrictions BECK Tool Example
Page 4-34
Installation — Part 2
4. Summary
Configuration restriction
• In the same path, only 2 or fewer drive boxes (DBx or DBW) can be connected downstream of a DBW; 3 or more are not allowed.
• When a DBW is connected in the system, the remaining number of drive boxes that can be connected is restricted:
HUS 150: 11 drive boxes (DBx or DBW)
HUS 130: 5 drive boxes (DBx or DBW)
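The rules above are mechanical enough to sketch as a validation helper (my own illustration under the stated counts, not the BECK tool's actual logic):

```python
# Validate a back-end path against the DBW rules summarized above
# (illustrative helper, not the BECK tool's real implementation).

DOWNSTREAM_LIMIT = 2                       # boxes allowed below a DBW per path
BEHIND_FIRST_DBW = {"HUS 150": 11, "HUS 130": 5}

def path_ok(path: list) -> bool:
    """path lists drive boxes from the controller outward, e.g. ['DBX', 'DBW', 'DBS']."""
    if "DBW" not in path:
        return True
    return len(path) - path.index("DBW") - 1 <= DOWNSTREAM_LIMIT

def remaining_ok(model: str, boxes_behind_first_dbw: int) -> bool:
    """System-wide cap on drive boxes once a DBW is present."""
    return boxes_behind_first_dbw <= BEHIND_FIRST_DBW[model]

print(path_ok(["DBX", "DBW", "DBS", "DBL"]))   # True: 2 boxes below the DBW
print(path_ok(["DBW", "DBS", "DBL", "DBX"]))   # False: 3 boxes below the DBW
print(remaining_ok("HUS 130", 6))              # False: HUS 130 allows only 5
```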
Page 4-35
Installation — Part 2
4. Summary (Case Study #1: DBx + DBW + DBx)
Depending on the position of the DBW in the back-end path, the maximum drive count for the system may not be installable. Each drive type (SSD/SAS/NLSAS) must be considered when adding a drive box. See the next 3 slides for BECK Tool examples.
Example: the 1st configuration addition adds many NL-SAS drives for capacity expansion; the 2nd adds SSD/SAS for performance improvement.
[Diagram: expansion from a base HUS 150 configuration using DBX 48, DBS 24, DBL 12 and DBW 84 trays; the 1st addition brings the system to 624 drives total and the 2nd to 816. "DBx" means DBS, DBL or DBX.]
Page 4-36
Installation — Part 2
4. Conclusions
1. The 2nd configuration is 4 x DBW + 4 x DBX + 12 x DBS = 816 disks maximum (4 x 84 + 4 x 48 + 12 x 24 = 336 + 192 + 288 = 816)
2. The location where boxes are installed is important
3. The previous example would be impossible if the configuration had started with DBW boxes on top, because only 2 additional boxes downstream of DBWs are possible
4. The BECK Tool can be used for these additions (step by step)
5. For an initial installation, the BECK Tool always starts with the largest boxes (DBWs) and would not accept this configuration for initialization
Page 4-37
Installation — Part 2
4. Summary (Case Study #2: DBW + DBx or DBW)
Depending on the position of the DBW in the back-end path, the maximum drive count for the system may not be installable. Also, each drive type (SSD/SAS/NLSAS) must be considered when adding a drive box.
Example: the 1st configuration addition adds SSD/SAS for performance improvement; the 2nd adds both NL-SAS and SSD/SAS for the customer's use cases.
[Diagram: expansion using DBS 24, DBL 12, DBX 48 and DBW 84 trays. "DBx" means DBS, DBL or DBX.]
Page 4-38
Installation — Part 2
Module Summary
Page 4-39
Installation — Part 2
Module Review
Page 4-40
5. Using the Hitachi Unified Storage Web
Tool
Module Objectives
Page 5-1
Using the Hitachi Unified Storage Web Tool
Post-Installation Tasks
After installation and power up of Hitachi Unified Storage systems, you must complete several
tasks before the storage system is ready for configuration and operation.
Page 5-2
Using the Hitachi Unified Storage Web Tool
Web Tool Introduction
What you see in the above image is the Web Tool in Normal Mode.
Page 5-3
Using the Hitachi Unified Storage Web Tool
Web Tool Functions
Page 5-4
Using the Hitachi Unified Storage Web Tool
Location and Function of Ethernet Ports
CBSS/CBSL Controller
CBL Controller
User Port
Maintenance Port
The User port is the port that can be assigned any IP address and is normally connected to the
customer's network.
The location of Ethernet ports is the same in CBXSS, CBXSL, CBSS and CBSL.
Page 5-5
Using the Hitachi Unified Storage Web Tool
IP Addresses on LAN Ports
Controller 0 Controller 1
The IP address for the Maintenance port is either 10.0.0.16/17 (default) or 192.168.0.16/17; which one is used depends on the address configuration of the User port. The underlying idea is to guarantee that, if the Maintenance port and the User port are connected to the same network, there will not be an IP address conflict (duplicate).
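Sketched as code, that selection logic might look like this (my reading of the rule, illustrated; the function name and the exact fallback subnet are assumptions, not firmware behavior):

```python
# Illustrative sketch of the maintenance-port address choice described
# above (not firmware code): pick the subnet the User port is NOT using.

def maintenance_ip(user_port_ip: str, controller: int) -> str:
    """controller 0 -> .16, controller 1 -> .17 on the non-conflicting subnet."""
    subnet = "192.168.0" if user_port_ip.startswith("10.0.0.") else "10.0.0"
    return f"{subnet}.{16 + controller}"

print(maintenance_ip("192.168.0.50", 0))   # 10.0.0.16 (default case)
print(maintenance_ip("10.0.0.50", 1))      # 192.168.0.17 (conflict avoided)
```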
Page 5-6
Using the Hitachi Unified Storage Web Tool
Preferred Way of Connecting
Ethernet ports on HUS controllers are auto sensing and function with straight or
crossover cables.
[Diagram: the engineer's laptop connects directly to the Maintenance port; the User port connects to the customer network switch]
Page 5-7
Using the Hitachi Unified Storage Web Tool
Maintenance Mode
Page 5-8
Using the Hitachi Unified Storage Web Tool
Entering Maintenance Mode
The location of the RST button differs between HUS 110, HUS 130 and HUS 150.
On HUS 110 and HUS 130 it is located on the rear of the storage system.
The arrows in the diagram point to the locations of the RST buttons.
Page 5-9
Using the Hitachi Unified Storage Web Tool
Maintenance Mode User ID and Password
The user name and password displayed are not for customer use.
Page 5-10
Using the Hitachi Unified Storage Web Tool
Setting Controller IP Addresses
Setting IP Addresses
When first installed, the storage system most likely will not
communicate with the customer network
Customer engineer or partner resource must set IP addresses
IP addresses must be set before connecting with SNM 2
Use the Web Tool in Maintenance Mode to set initial IP settings
Page 5-11
Using the Hitachi Unified Storage Web Tool
Setting IP Addresses
Note :
Page 5-12
Using the Hitachi Unified Storage Web Tool
Setting IP Addresses
Page 5-13
Using the Hitachi Unified Storage Web Tool
Setting IP Addresses
Note :
Page 5-14
Using the Hitachi Unified Storage Web Tool
Verifying and Updating Firmware
o All data, license keys and settings are lost when you perform an initial firmware installation on an existing system.
o A production system with everything running fine should not be initialized.
o This can be done from the Web Tool only (and only when the storage system is in Maintenance Mode).
Page 5-15
Using the Hitachi Unified Storage Web Tool
Preparing for Firmware Update
Take the array into Maintenance Mode and follow the steps on the
next slide
For example, if you are doing a firmware install/update for one storage system model,
you should not delete folders for the other models; the system will pick up the correct
folder.
Page 5-16
Using the Hitachi Unified Storage Web Tool
Before Starting Initial Microcode Setup
Page 5-17
Using the Hitachi Unified Storage Web Tool
Selecting the Options
Place the storage system in Maintenance Mode. Follow the steps in the previous procedure:
• Click Microprogram.
o A dialog box appears with the Firmware version you want to install.
Page 5-18
Using the Hitachi Unified Storage Web Tool
Selecting the Options
• Click OK to execute.
• When you click OK, a dialog box prompts you to reboot the storage system.
• You must reboot to complete the installation.
Page 5-19
Using the Hitachi Unified Storage Web Tool
Successful Firmware Update Completion
Page 5-20
Using the Hitachi Unified Storage Web Tool
Using the Web Tool in Normal Mode
Steps:
• Open a browser.
• Enter the IP address of the Hitachi Unified Storage system controller.
• You will see the following web page.
Notes :
Page 5-21
Using the Hitachi Unified Storage Web Tool
Normal Mode Requirements
Page 5-22
Using the Hitachi Unified Storage Web Tool
Cache Backup Battery Status
In the picture above, if the cache backup battery status is Low or Red, the battery does not have enough power to back up cache to the flash drive in the event of a power failure.
Page 5-23
Using the Hitachi Unified Storage Web Tool
Other Components
This slide shows how components appear when they are OK and when there is an error.
Page 5-24
Using the Hitachi Unified Storage Web Tool
Controller/Battery/Cache/Fan Status
The image matches the status we see from the front of the CBL, hence Controller 1 is shown on
the left and Controller 0 is shown on the right.
Page 5-25
Using the Hitachi Unified Storage Web Tool
Status of Disk Trays
Page 5-26
Using the Hitachi Unified Storage Web Tool
Collecting Traces with the Web Tool
Trace Types
Page 5-27
Using the Hitachi Unified Storage Web Tool
Collecting a Simple Trace
Page 5-28
Using the Hitachi Unified Storage Web Tool
Collecting a Controller Alarm Trace
The above figure shows collecting the Controller Alarm Trace in Maintenance Mode.
Page 5-29
Using the Hitachi Unified Storage Web Tool
Collecting a Full Dump
Full Dump:
Page 5-30
Using the Hitachi Unified Storage Web Tool
Cache Memory Access Failure (Full Dump)
The screens for saving a full dump are similar to the screen for saving other types of trace.
Page 5-31
Using the Hitachi Unified Storage Web Tool
Troubleshooting — Open a Case
Page 5-32
Using the Hitachi Unified Storage Web Tool
Technical Upload Facility (TUF)
Page 5-33
Using the Hitachi Unified Storage Web Tool
Instructor Demonstration
Page 5-34
Using the Hitachi Unified Storage Web Tool
Module Summary
Page 5-35
Using the Hitachi Unified Storage Web Tool
Module Review
Page 5-36
6. Updating Hitachi Unified Storage
Firmware
Module Objectives
Page 6-1
Updating Hitachi Unified Storage Firmware
HUS Firmware Overview
General information
• Firmware is installed on the HUS storage system
• Requirements:
Firmware on a CD
Crossover or straight Ethernet cables
Management PC
HUS firmware is 09xx/y:
• 09: HUS 100 series
• x: major version
• y: minor version
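For instance, a version string in that scheme could be split like this (a sketch; "0920/A" is a made-up example value, not a real release number):

```python
# Parse an HUS firmware revision of the form 09xx/y described above.
# "0920/A" is a hypothetical example, not an actual firmware release.

def parse_firmware(rev: str) -> dict:
    series, minor = rev.split("/")
    assert series.startswith("09"), "HUS 100 series firmware starts with 09"
    return {"series": series[:2], "major": series[2:], "minor": minor}

print(parse_firmware("0920/A"))
# {'series': '09', 'major': '20', 'minor': 'A'}
```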
Page 6-2
Updating Hitachi Unified Storage Firmware
Serial Number of HUS Box
Model     Controller   Description                      Serial number begins with
HUS 110   CBXSL        Controller with 3.5-inch disks   911xxxxx
HUS 110   CBXSS        Controller with 2.5-inch disks   912xxxxx
HUS 130   CBSL         Controller with 3.5-inch disks   921xxxxx
HUS 130   CBSS         Controller with 2.5-inch disks   922xxxxx
HUS 150   CBL          Controller (no disks)            930xxxxx
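Since the serial prefix encodes the model, identification is a direct lookup. A small illustrative helper (the example serial number is made up):

```python
# Map a serial-number prefix to the model/controller, per the table above.
SERIAL_PREFIXES = {
    "911": ("HUS 110", "CBXSL"),
    "912": ("HUS 110", "CBXSS"),
    "921": ("HUS 130", "CBSL"),
    "922": ("HUS 130", "CBSS"),
    "930": ("HUS 150", "CBL"),
}

def identify(serial: str) -> tuple:
    """Return (model, controller type) for a serial number like '91200042'."""
    return SERIAL_PREFIXES[serial[:3]]

print(identify("91200042"))   # ('HUS 110', 'CBXSS'); example serial only
```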
Page 6-3
Updating Hitachi Unified Storage Firmware
Firmware Update Methods
Page 6-4
Updating Hitachi Unified Storage Firmware
Nondisruptive Firmware Update Overview
Page 6-5
Updating Hitachi Unified Storage Firmware
Nondisruptive Firmware Update Procedure
Page 6-6
Updating Hitachi Unified Storage Firmware
Nondisruptive Firmware Update Procedure
Basic tab
Advanced tab
• Transfer Only
• Update
o This option works only if the microcode has already been transferred in the step above.
o Check this option if you want the microcode update to complete only at a time of low I/O.
Page 6-7
Updating Hitachi Unified Storage Firmware
Nondisruptive Firmware Update Procedure
• Check the revision, then click Confirm to update.
Page 6-8
Updating Hitachi Unified Storage Firmware
Verifying Successful Completion of Code Update
Page 6-9
Updating Hitachi Unified Storage Firmware
Successful Microcode Update
2. The array should be in Ready mode.
Page 6-10
Updating Hitachi Unified Storage Firmware
Module Summary
Page 6-11
Updating Hitachi Unified Storage Firmware
Module Review
Page 6-12
7. Hitachi Storage Navigator Modular 2
Installation and Configuration
Module Objectives
Page 7-1
Hitachi Storage Navigator Modular 2 Installation and Configuration
Overview
This section presents the components, requirements, features, functions and additional information
about SNM 2.
Architecture
[Diagram: SSL connections between the SNM 2 client, the SNM 2 server and the array]
Using Storage Navigator Modular 2, you can configure and manage your storage assets from a local
host and from a remote host across an Intranet or TCP/IP network to ensure maximum data
reliability, network up-time and system serviceability. You install Storage Navigator Modular 2 on a
management platform (a desktop computer, a Linux workstation or a laptop) that acts as a console
for managing your HUS family storage. This PC management console connects to the management
ports on the HUS system controllers, and uses Storage Navigator Modular 2 to manage your storage
assets and resources. The management console can connect to HUS via a network interface card, an
Ethernet cable, a switch or a hub (for Fibre Channel networks, use a Fibre Channel switch or hub;
for iSCSI networks, use an iSCSI switch or hub).
Data flow in a Hitachi Unified Storage system is as follows:
• The front-end controller communicates with the SAN (typically through a Fibre Channel switch)
• The back-end controller communicates with the drives
• Hosts or application servers contact the SAN to retrieve data from the storage system for use in applications (commonly databases and data processing programs)
Page 7-2
Hitachi Storage Navigator Modular 2 Installation and Configuration
Installation Requirements
Page 7-3
Hitachi Storage Navigator Modular 2 Installation and Configuration
Features and Functions
SNM 2 is the integrated interface for standard firmware and software features of Hitachi Unified
Storage (and earlier storage systems). It is required for taking advantage of the full feature sets
Hitachi Unified Storage offers.
Page 7-4
Hitachi Storage Navigator Modular 2 Installation and Configuration
Features and Functions
The point-and-click graphical interface has initial set-up wizards that simplify configuration,
management and visualization of Hitachi storage systems.
SNM 2:
• Enables you to know available storage and current usage quickly and easily
• Allows you to protect access to information by restricting storage access at the port level,
requiring case-sensitive password logins and providing secure domains for application-
specific data
o SNM 2 also protects your information by letting you configure data redundancy and
assign hot spares
• Provides online functions for Hitachi storage systems, such as storage system status, event
logging, email alert notifications and statistics
• Is compatible with Microsoft Windows, Red Hat Enterprise Linux or Oracle Solaris
environments
• SNM 2 online help provides easy access to information about use of features and enables
you to get the most out of your storage system
• Provides a full featured and scriptable command line interface, in addition to a GUI view
Page 7-5
Hitachi Storage Navigator Modular 2 Installation and Configuration
Features and Functions
The SNM 2 management console provides views of feature settings on the storage system in
addition to enabling you to configure and manage those features to optimize your experience
with Hitachi Unified Storage. This page lists several of the functions enabled by SNM 2.
Page 7-6
Hitachi Storage Navigator Modular 2 Installation and Configuration
Features and Functions
Storage Navigator Modular 2 works in conjunction with storage features found in the Hitachi Unified
Storage 100 family.
• Account authentication and audit logging provide access control to management functions
and record all system changes
• Performance monitoring software allows you to see performance within the storage system
• Modular volume migration software enables dynamic data migration
• Volume management software streamlines configuration management processes by allowing
you to define, configure, add, delete, expand, revise and reassign LUNs to specific paths without
having to reboot your storage system
• Replication setup and management feature provides basic configuration and management of
Hitachi ShadowImage In-System Replication software bundle, Hitachi Copy-on-Write Snapshot
and Hitachi TrueCopy mirrored pairs
• The cache residency manager feature allows you to lock and unlock data into a cache in real
time for optimal access to your most frequently accessed data
• Cache partition manager feature allows the application to partition the cache for improved
performance
• Online RAID group expansion feature enables dynamic addition of HDDs to a RAID group
• System maintenance feature allows online controller microcode updates and other system
maintenance functions
• SAN security software helps ensure security in open systems storage area networking
environments through restricted server access
• SNMP agent support includes management information bases (MIBs) specific to Hitachi Data
Systems and enables SNMP based reporting on status and alerts for Hitachi storage systems
Page 7-7
Hitachi Storage Navigator Modular 2 Installation and Configuration
Initial Setup Tasks
Initial Setup
• SNM 2 installation CDs or access to the Hitachi Data Systems Web Portal:
support.hds.com
• A PC that will act as the management console for managing the storage system using
SNM 2
Page 7-8
Hitachi Storage Navigator Modular 2 Installation and Configuration
Initial Setup
• The IP address of each management port on your HUS 100 family storage system that
will connect to the SNM 2 management console
• The IP address of the management console
• The port number used to access SNM 2 from your browser (default port is 1099)
• The password you will use to replace the default system account password
• License keys required by each program product you want to use
Step 4: You can download JRE 6.0 from the following site and install it by following the on-
screen prompts: http://java.com/en/download/
If your management console runs Microsoft Windows, perform the following procedure.
• Click the Windows Start menu, point to Settings, and click Control Panel
• In the Windows Control Panel, double-click Java Control Panel
• Click the Java tab (the Java tab appears)
• Click View in the Java Applet Runtime Settings section
• In the Java Runtime Parameters field, type "-Xmx464m"
• Click OK to exit the Java Runtime Settings window
• Click OK in the Java tab to close the Java Control Panel window
• Close the Windows Control Panel
If your management console runs a supported Solaris or Linux operating system, perform the following procedure.
• From a terminal, execute <JRE installed directory>/bin/jcontrol to run the Java Control Panel
• Click View in the Java Applet Runtime Settings section
• In the Java Runtime Parameters field, type "-Xmx464m"
• Click OK to exit the Java Runtime Settings window
• Click OK in the Java tab to close the Java Control Panel window
Step 5: A firewall's main purpose is to block incoming unsolicited connection attempts to your
network. If the HUS 100 family system is used within an environment that has a firewall, there
will be times when the storage system’s outbound connections will need to traverse the firewall.
The storage system's incoming indication ports are ephemeral, with the system randomly
selecting the first available open port that is not being used by another Transmission Control
Protocol (TCP) application. To permit outbound connections from the storage system, you must
either disable the firewall or create or revise a source-based firewall rule (not a port-based rule),
so that items coming from the storage system are allowed to traverse the firewall. Firewalls
should be disabled when installing SNM 2 (refer to the documentation for your firewall). After
the installation completes, you can turn on your firewall. If you use Windows firewall, the SNM
2 installer automatically registers the SNM 2 file and Command Suite Common Components as
exceptions to the firewall. Therefore, before you install SNM 2, confirm that no security
problems exist.
Page 7-9
Hitachi Storage Navigator Modular 2 Installation and Configuration
Initial Setup
Step 6: Antivirus programs must be disabled before installing SNM 2 (the Microsoft Windows built-in firewall is the exception). In addition, SNM 2 cannot operate with firewalls that terminate local host socket connections, so configure your antivirus software to prevent socket connections from being terminated at the local host (refer to the documentation for your antivirus software).
Step 7: Some Program Products require a license key before you can use them. Typically, the
license key required to activate these products is furnished with the product. We recommend
that you have these license keys available before you activate the Program Products that
require them. If you do not have license keys for the Program Products, please contact
technical support.
Step 8: Being familiar with the Technical Guidelines of installing SNM 2 will keep you on the
optimal installation path and help you avoid potential pitfalls.
Page 7-10
Hitachi Storage Navigator Modular 2 Installation and Configuration
Initial Setup
If the management console has other Hitachi Command products installed, the Hitachi
Command Suite Common Component overwrites the current Hitachi Command Suite Common
Component.
Be sure no products other than Hitachi Command Suite Common Component are using port
numbers 1099, 23015 to 23018, 23032, and 45001 to 49000. If other products are using these
ports, you cannot start Storage Navigator Modular 2, even if the Storage Navigator Modular 2
installation completes without errors.
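One quick way to confirm those ports are free before installing (my own check, not an HDS-provided tool) is to try binding each of them; a successful bind means no other product is listening:

```python
# Pre-install check (illustrative): confirm no other product is
# listening on the ports SNM 2 / Command Suite Common Component needs.
import socket

REQUIRED = [1099, 23032] + list(range(23015, 23019)) + list(range(45001, 49001))

def port_free(port: int) -> bool:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            s.bind(("127.0.0.1", port))   # bind succeeds only if port is free
            return True
        except OSError:
            return False

busy = [p for p in REQUIRED if not port_free(p)]
print("ports in use:", busy or "none")
```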
Determine the IP address and port number of the management console (for example, using
ipconfig on Windows or ifconfig on Solaris and Linux). The IP address you use to log in to
Storage Navigator Modular 2 must be a static IP address. On Hitachi storage systems, the
Page 7-11
Hitachi Storage Navigator Modular 2 Installation and Configuration
Initial Setup
default IP addresses for the management ports are 192.168.0.16 for Controller 0 and
192.168.0.17 for Controller 1. Use a port number such as 2500 if available.
We also recommend that you disable antivirus software and proxy settings on the management console when installing the Storage Navigator Modular 2 software.
Use the appropriate section for the operating system running on your management console
(Microsoft Windows, Oracle Solaris or Red Hat Enterprise Linux 4).
Page 7-12
Hitachi Storage Navigator Modular 2 Installation and Configuration
Installation
This section presents instructions for installation on Windows, Sun Solaris and Red Hat Linux.
Installation on Windows
2. Click System
Page 7-13
Hitachi Storage Navigator Modular 2 Installation and Configuration
Installation on Windows
4. In the Performance area, click Settings and then click the Data Execution
Prevention tab
5. Click Turn on DEP for all programs and services except those I select
6. Click Add and specify the Navigator Modular 2 installer HSNM2-xxxx-W-GUI.exe, where xxxx varies with the version of Navigator Modular 2 (the installer HSNM2-xxxx-W-GUI.exe is added to the list)
7. Click the checkbox next to the Navigator Modular 2 installer HSNM2-xxxx-W-GUI.exe and click OK
Page 7-14
Hitachi Storage Navigator Modular 2 Installation and Configuration
Installation on Sun Solaris
If the CD-ROM cannot be read, copy the files install-hsnm2.sh and HSNM2-XXXX-S-GUI.tar.gz to a file system that the host can recognize. XXXX varies with the version of SNM 2.
[IP address] is the IP address used to access SNM 2 from your browser.
[port number] is the port number used to access SNM 2 from your browser.
For environments using DHCP, enter the host name (computer name) for the IP address.
Page 7-15
Hitachi Storage Navigator Modular 2 Installation and Configuration
Installation on Red Hat Linux
If the CD-ROM cannot be read, copy the files install-hsnm2.sh and HSNM2-XXXX-L-GUI.rpm to
a file system that the host can recognize.
When entering an IP address, do not specify 127.0.0.1 and local host. For DHCP environments,
specify the host name (computer name).
The default port number is 1099. If you use it, you can omit the –p option from the command
line.
[IP address] is the IP address used to access SNM 2 from your browser.
[port number] is the port number used to access SNM 2 from your browser.
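As a sketch only, an invocation on Red Hat Linux might look like the following; the argument layout is an assumption based on the [IP address] and [port number] placeholders above, so confirm it against the installation guide:
sh install-hsnm2.sh 192.168.100.50 -p 1099
Because 1099 is the default port number, the -p option could be omitted in this example.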
Page 7-16
Hitachi Storage Navigator Modular 2 Installation and Configuration
Instructor Demonstration
Instructor Demonstration
SNM 2 Installation
Page 7-17
Hitachi Storage Navigator Modular 2 Installation and Configuration
Storage Navigator Modular 2 Wizards
Page 7-18
Hitachi Storage Navigator Modular 2 Installation and Configuration
Add Array Wizard
IP Address or Array Name – Discovers storage systems using a specific IP address or storage
system name in the Controller 0 and 1 fields. The default IP addresses are:
• Controller 0: 192.168.0.16
• Controller 1: 192.168.0.17
For directly connected consoles, enter the default IP address just for the port to which you are
connected; you will configure the other controller later.
Range of IP Addresses – Discovers storage systems using a starting (From) and ending (To)
range of IP addresses. Check Range of IPv4 Address and/or Search for IPv6 Addresses automatically to widen the search, if desired.
Using Ports – Select whether communications between the console and management ports will
be secure, non-secure or both.
You can also run the Add Array Wizard manually to add storage systems after the initial login by clicking Add Array at the bottom of the Arrays window.
Page 7-19
Hitachi Storage Navigator Modular 2 Installation and Configuration
Instructor Demonstration
Instructor Demonstration
Page 7-20
Hitachi Storage Navigator Modular 2 Installation and Configuration
User Management
User Management
This section describes how to use SNM 2 to manage users in HUS.
The default user to access SNM 2 is “system” and the password is “manager.”
After installing SNM 2, customers should be encouraged to change the password for system.
Page 7-21
Hitachi Storage Navigator Modular 2 Installation and Configuration
User Management in SNM 2
In the Edit Profile window, you can modify the user’s full name, email address or description.
Page 7-22
Hitachi Storage Navigator Modular 2 Installation and Configuration
User Management in SNM 2
If you no longer need a user, you can delete the user from SNM 2.
Page 7-23
Hitachi Storage Navigator Modular 2 Installation and Configuration
HUS 100 User Management Account Authentication
A user of the storage system registers an account (user ID, password, and so on) before beginning to configure account authentication. When a user accesses the storage system, the Account Authentication feature verifies whether the user is registered; this allows users of the storage system to be identified and restricted. A registered user is granted authority (role information) to view and modify storage system resources according to the purpose of system management, and can access each resource of the storage system only within the range of that authority (access control).
Page 7-24
Hitachi Storage Navigator Modular 2 Installation and Configuration
Default Authentication
Default Authentication
SNM 2 Server (SNM 2 Client login): User "system", Password "manager"
HUS 100 family array (Account Authentication): User "root", Password "storage"
Be sure to use the proper logout function when you leave the array.
Page 7-25
Hitachi Storage Navigator Modular 2 Installation and Configuration
Managing Account Authentication
In the example, a user is logged into SNM 2 as root to access the storage system using
Account Authentication.
Note: root is the built-in system account and should not be used for managing the array!
Page 7-26
Hitachi Storage Navigator Modular 2 Installation and Configuration
Setting Permissions
Setting Permissions
The Account Authentication user must be assigned all 3 view and modify rights as shown to have full control of the array.
Page 7-27
Hitachi Storage Navigator Modular 2 Installation and Configuration
Storage Navigator Modular 2 Command Line Interface
Install
Page 7-28
Hitachi Storage Navigator Modular 2 Installation and Configuration
Start the Command Line Interface
To start the CLI, browse to the installation folder and double-click startsnmen.bat
Page 7-29
Hitachi Storage Navigator Modular 2 Installation and Configuration
Check the Environment Variables
When starting the CLI as described here, the environment variables are set automatically. We nevertheless recommend double-checking the settings.
Page 7-30
Hitachi Storage Navigator Modular 2 Installation and Configuration
Register a Storage System
Page 7-31
Hitachi Storage Navigator Modular 2 Installation and Configuration
Register a Storage System
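Registration can also be done from the CLI. A minimal sketch, assuming the auunitadd command accepts the unit name and the LAN addresses of both controllers (confirm the options with auman -en auunitadd):
auunitadd -unit array01 -LAN -ctl0 192.168.0.16 -ctl1 192.168.0.17
This registers the storage system under the name array01 using the default management IP addresses.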
Page 7-32
Hitachi Storage Navigator Modular 2 Installation and Configuration
Create a RAID Group
Format
aurgadd -unit unit_name -rg rg_no
-RAID0 | -RAID1 | -RAID5 | -RAID10 | -RAID6
-drive unit_no.hdu_no ...
-pnum pty_num
Description
• This command creates a RAID group in a specified array unit
Example:
aurgadd -unit array01 -rg 0 -RAID5 -drive 0.0 0.1 0.2 0.3 0.4 -pnum 1
• This creates RAID group 0 as RAID-5 (4D+1P) from the first five disks in Unit 0
Disk Number = X.Y, where X = Tray Number, Y = Disk Number
Page 7-33
Hitachi Storage Navigator Modular 2 Installation and Configuration
Create a RAID Group
Page 7-34
Hitachi Storage Navigator Modular 2 Installation and Configuration
Referencing the RAID Groups
Page 7-35
Hitachi Storage Navigator Modular 2 Installation and Configuration
Referencing the RAID Groups
Page 7-36
Hitachi Storage Navigator Modular 2 Installation and Configuration
Deleting RAID Groups
Format
aurgdel -unit unit_name -rg rg_no [ -f ]
aurgdel -unit unit_name -ALL [ -f ]
• Description
This command deletes a specified RAID group or all RAID groups in an array unit
• Example:
aurgdel -unit array01 -rg 1
Page 7-37
Hitachi Storage Navigator Modular 2 Installation and Configuration
Deleting RAID Groups
Page 7-38
Hitachi Storage Navigator Modular 2 Installation and Configuration
Creating Volumes
Creating Volumes
Format
auluadd -unit unit_name [ -lu lun ] -rg rg_no -size num [ m | g | t ] | rest
[ -stripesize 64 | 256 | 512 ]
[ -cachept pt_no ]
[ -paircachept pt_no | auto ]
[ -createarea area_no ]
[ -noluformat ]
Description
• This command is used to create volumes
Example:
auluadd -unit array01 -lu 0 -rg 0 -size 100g -stripesize 256 -noluformat
Page 7-39
Hitachi Storage Navigator Modular 2 Installation and Configuration
Creating Volumes
Page 7-40
Hitachi Storage Navigator Modular 2 Installation and Configuration
Format Volumes
Format Volumes
Format
• auformat -unit unit_name -lu lun ...
Description
• This command formats a specified volume or a group of volumes
Example:
• auformat -unit array01 -lu 0
Page 7-41
Hitachi Storage Navigator Modular 2 Installation and Configuration
Format Volumes
Page 7-42
Hitachi Storage Navigator Modular 2 Installation and Configuration
Referencing Volumes
Referencing Volumes
Format
• auluref -unit unit_name [ -m | -g ] [ -lu lun ... ]
Description
• This command displays information about existing volumes (capacity, status, current controller number, default controller number, RAID group number and RAID level)
• Example: auluref -unit array01 -g
Page 7-43
Hitachi Storage Navigator Modular 2 Installation and Configuration
Referencing Volumes
Page 7-44
Hitachi Storage Navigator Modular 2 Installation and Configuration
Display Help
Display Help
Format
• auman [ -en | -jp ] command_name
Description
• Command displays the help information in English (-en) or Japanese (-jp)
for a command
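Example (this displays the English help text for the volume creation command):
• auman -en auluadd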
Page 7-45
Hitachi Storage Navigator Modular 2 Installation and Configuration
Module Summary
Module Summary
Page 7-46
Hitachi Storage Navigator Modular 2 Installation and Configuration
Module Review
Module Review
1. SNM 2 can be used for which of the following? (Choose all that
apply)
A. RAID level configurations
B. Volume creation and expansion
C. Offline volume migrations
D. Configuring and managing Hitachi replication products
E. Online microcode updates and other system maintenance functions
2. Which operating system requires that you find out the IP address of
the management console when installing SNM 2?
A. Windows
B. Sun Solaris
C. Red Hat Linux
D. All of the above
Page 7-47
Hitachi Storage Navigator Modular 2 Installation and Configuration
Module Review
3. Which steps are required prior to installing SNM 2? (Choose all that
apply)
A. Enable your firewall
B. Disable your antivirus software
C. All of the above
4. Which of the following are true about the Add Array Wizard?
(Choose all that apply)
A. Lets you search for a system based on its WWPN
B. Lets you search for a system based on its IP address
C. Lets you search for a system based on its host name
D. Lets you search for multiple systems based on a range of IP addresses
Page 7-48
Hitachi Storage Navigator Modular 2 Installation and Configuration
Module Review
5. Which of the following are true about new SNM 2 users? (Choose
all that apply)
A. New users have only the View permission.
B. New users can be added by the default System user.
C. New users can be added by users who have been granted User
Management/Admin permission.
D. New users can be added by the default Administrator user.
Page 7-49
Hitachi Storage Navigator Modular 2 Installation and Configuration
Module Review
Page 7-50
8. RAID Group and Volume
Configuration
Module Objectives
Page 8-1
RAID Group and Volume Configuration
Supported RAID Types
Overview
(Figure: striping, with data blocks 1, 2, ..., 7, 8 distributed sequentially across the drives)
RAID provides data protection and integrity. It helps you recover from what could otherwise be a loss of data.
Striping involves breaking up a block of data into small equal segments and storing the
segments sequentially among the drives in the array.
Page 8-2
RAID Group and Volume Configuration
Supported RAID Levels
RAID-0
(Figure: data blocks A through J striped by the controller across the data disks)
Outline: RAID-0 stripes the data across disk drives for higher throughput
Note: Model WMS100 does NOT support RAID-0 or striping. In terms of reliability, SATA drives
belong to a different category than Fibre Channel (SCSI) disks. Because of this, the RAID levels
supported on model WMS100 are RAID-1, RAID-1+0, RAID-5 and RAID-6. RAID-1+0 is a
combination of RAID-1 and RAID-0, which combines mirroring and striping.
Page 8-3
RAID Group and Volume Configuration
Supported RAID Levels
RAID-1
(Figure: data blocks A through J mirrored across pairs of disks)
Outline: RAID-1 mirrors the data
RAID-5
(Figure: data blocks A through J striped across data disks plus parity, for example A B C D (A-D)P, then E F G (E-H)P H, with the parity position rotating)
Outline: RAID-5 consists of 3 or more disk drives; 1 drive in round-robin mode contains the parity
Pro: Striping offers higher read throughput
Con: Lower performance on (small) random writes and in the case a drive fails
RAID-5: At least 3 disks are required to implement RAID-5. RAID-5 cannot sustain a double-disk failure, which is more likely to occur with SATA drives.
Page 8-4
RAID Group and Volume Configuration
Supported RAID Levels
RAID-6
(Figure: data blocks A through J striped across the data disks, with 2 rotating parity blocks per stripe, for example A B C (A-C)P (A-C)P)
Outline: RAID-6 consists of 4 or more disk drives; 2 independent drives in round-robin mode contain the parity
RAID-6: At least 4 disks are required to implement RAID-6. This configuration is very similar to RAID-5, with an additional parity block, allowing block-level striping with 2 parity blocks. The advantages and disadvantages are the same as for RAID-5, except that the additional parity disk protects the system against double-disk failure. This feature was implemented to ensure the reliability of SATA drives.
Key value: Two parity drives allow a customer to lose up to 2 hard disk drives (HDDs) in a
RAID group without losing data. RAID groups configured for RAID-6 are less likely to lose data
in the event of a failure. RAID-6 performs nearly as well as RAID-5 for similar usable capacity.
RAID-6 also gives the customer options as to when to rebuild the RAID group. When an HDD is
damaged, the RAID group must be rebuilt immediately (since a second failure may result in lost
data). During a rebuild, applications using the volumes on the damaged RAID group can expect
severely diminished performance. A customer using RAID-6 may elect to wait to rebuild until a
more opportune time (night or weekend) when applications will not require stringent
performance.
HDD roaming allows the spare to become a part of the RAID group; no copy back is required, which saves rebuild time.
Page 8-5
RAID Group and Volume Configuration
Supported RAID Levels
RAID-1+0
(Figure: data blocks mirrored and then striped across the disks)
Page 8-6
RAID Group and Volume Configuration
RAID Groups versus Parity Groups
Page 8-7
RAID Group and Volume Configuration
Rules for Creating RAID Groups
Page 8-8
RAID Group and Volume Configuration
Rules for Creating RAID Groups
Page 8-9
RAID Group and Volume Configuration
Drives Supported in Hitachi Unified Storage
Page 8-10
RAID Group and Volume Configuration
System and User Data Areas
RAID group storage is divided into 2 data areas: System and User
• System Area space is used on the first 5 system disks only, which must
be of the same type (system uses this area to store microcode, trace
and configuration data)
• User Area: User data is stored here
If unequal disk sizes are used, space is lost on the larger disks
(Figure legend: System Area, Used Area, Unused Area)
This graphic shows that part of the physical capacity is reserved as the system area. The area is used as a system area only on the first 5 disks of a system (disks 0-4). The system area contains microcode, trace data and configuration data.
o If a RAID group existed with disks of different capacity, a substantial part could
be left unused on the bigger drives (see example)
o The user data area part must be the same for all disks in a RAID group
• The first 5 disks must always be of the same type (5*SAS or 5*NL-SAS); no mix is
possible
Page 8-11
RAID Group and Volume Configuration
Creating a RAID Group
Page 8-12
RAID Group and Volume Configuration
Creating a RAID Group
Page 8-13
RAID Group and Volume Configuration
Expanding a RAID Group
When a RAID group is submitted for expansion, it can be in either of the following states:
1. Expanding – In this state, the RG is currently being expanded and the expansion cannot be cancelled
o If you force-cancel the expansion, data loss can occur on LUNs that have already been expanded
2. Waiting – In this state, the RG expansion has not yet started, so the RG expansion can
be cancelled
Page 8-14
RAID Group and Volume Configuration
Expand a RAID Group
You may not use RAID group expansion to change the RAID level of
a RAID group
Rules for expanding a RAID group
• You cannot expand a RAID group under the following conditions:
If the forced parity correction status of an LU (Logical Unit) is:
• Correcting
• Waiting
• Waiting Drive Reconstruction
• Unexecuted, Unexecuted 1 or Unexecuted 2
If an LU is being formatted and belongs to the RAID group expansion target
After setting or changing Cache Partition Manager configuration
When dynamic sparing/correction copy/copy back is operating
While installing firmware
If any of the forced parity status messages are displayed, you need to execute a forced parity
correction for this LU, change the LU status to Correction Completed and then execute the RAID
group expansion.
If an LU is being formatted and belongs to the RAID group expansion target, wait until the
formatting has completed and then execute the expansion command from SNM 2.
If you are expanding a RAID group after setting or changing the Cache Partition Manager configuration, the storage system must be rebooted first; expand the RAID group after the reboot. In a storage system where the Power Saving function is set, change the status of the Power Saving feature to "Normal (spin-up)" and then expand the RAID group.
If you are expanding a RAID group when the dynamic sparing/correction copy/copy back is
operating, expand the RAID group after the drive has been restored.
If you are expanding a RAID group while installing the firmware, expand the RAID group after
completing the firmware installation.
Page 8-15
RAID Group and Volume Configuration
Expanding a RAID Group
Back up your data, including data stored in cache memory, before expanding. Data loss can occur due to a loss of power or another type of system failure, and the LU associated with the expansion can become unformatted.
Host access performance deteriorates during RAID group expansion, especially for the LUs in
the RAID groups which are expanding.
Performance is maximized by adding drives with the same capacity and rotational speed as the drives in the RAID group being expanded.
Page 8-16
RAID Group and Volume Configuration
Expanding a RAID Group
To access the RAID group expansion window, click Expand RG after clicking the checkbox of
the RAID group you want to expand in the left column of the RAID Groups tab of the Logical
Units window.
You can use the added capacity immediately after the expansion process has completed.
1. Create RG
2. Delete RG
3. Expand RG
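Expansion is also available from the SNM 2 CLI. A minimal sketch, assuming the aurgexp command takes the unit, RAID group and the list of drives to add in the same style as aurgadd (confirm with auman -en aurgexp):
aurgexp -unit array01 -rg 0 -drive 0.5 0.6
This would expand RAID group 0 with drives 0.5 and 0.6.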
Page 8-17
RAID Group and Volume Configuration
Example
Example
(Figure: RAID group expansion. A 2D+2P group on HDD0-HDD3 (shown alongside a 4D+1P layout on HDD0-HDD4) holds LU #0; expansion with 2 available HDDs produces 3D+3P across HDD0-HDD5, creating new free space within the RAID group.)
Page 8-18
RAID Group and Volume Configuration
Instructor Demonstration
Instructor Demonstration
Page 8-19
RAID Group and Volume Configuration
Creating Volumes
Creating Volumes
This section presents rules and steps for creating volumes.
Overview
• Volumes (also called Logical Unit Numbers or LUNs) are created in a
RAID group or in a Dynamic Provisioning (DP) pool
DP Pools are explained in detail in course TCI1950
• Volumes can
Be assigned to a host group or iSCSI target
Be presented to same or different servers
Exist in different sizes and multiple numbers
Other software tools, such as HCS, also use the name "volume" for a LUN. Although this usage is not entirely consistent, a volume and a LUN are treated as the same object throughout this course material.
iSCSI (Internet Small Computer System Interface): An Internet Protocol (IP)-based storage
networking standard for linking data storage facilities. By carrying SCSI commands over IP networks,
iSCSI is used to facilitate data transfers over intranets and to manage storage over long distances.
iSCSI can be used to transmit data over local area networks (LANs), wide area networks (WANs) or
the Internet, and can enable location-independent data storage and retrieval.
LUN (Logical Unit Number): A unique identifier used on a SCSI bus to distinguish between devices
that share the same bus. SCSI is a parallel interface that allows up to 16 devices to be connected
along a single cable. The cable and the host adapter form the SCSI bus, and this operates
independently of the rest of the computer. Each of the devices is given a unique address by the
SCSI BIOS (Basic Input/Output System), ranging from 0 to 7 for an 8-bit bus or 0 to 15 for a 16-bit
bus. Devices that request I/O processes are called initiators. Targets are devices that perform
operations requested by initiators. Each target can accommodate up to 8 other devices, known as
volumes (logical units), and each is assigned a LUN. Commands that are sent to the SCSI controller identify
devices based on their LUNs.
Page 8-20
RAID Group and Volume Configuration
Volume Configuration
Volume Configuration
Volumes are slices from the user data area of a RAID group
• 3 Volumes from RG0
• 1 Volume from RG1
Maximum volumes
• Model HUS 110 = 2,048
• Model HUS 130 = 4,096
• Model HUS 150 = 4,096
Maximum size of a volume = 128TB
(Figure: RG0 contains Volume 0, Volume 1 and Volume 2; RG1 contains Volume 3)
Page 8-21
RAID Group and Volume Configuration
How to Create a Volume
• Log in to SNM 2
• Select the array
• Select Group > Volume from the tree
• Click Create VOL
Page 8-22
RAID Group and Volume Configuration
How to Create a Volume
o GB
o TB
o MB
o Blocks
Page 8-23
RAID Group and Volume Configuration
How to Create a Volume
In Advanced option:
o Default stripe size is 256KB (you can select 64KB, 256KB or 512 KB)
o The selection also depends on the cache partition
• Cache partition settings are required if you are using Cache Partition Manager
• Choose whether you want to format the volume once it is created
• Choose where the size of the volume should come from
Page 8-24
RAID Group and Volume Configuration
How to Create a Volume
Page 8-25
RAID Group and Volume Configuration
Changing Logical Unit Capacity
Volume Unification
Page 8-26
RAID Group and Volume Configuration
Changing LU Capacity
Changing LU Capacity
Adding LU
(Figure: LU 0 is the LU to be expanded; LU 1 and LU 2 are the LUs to be added. The resulting unified volume is addressed as LU 0.)
• Only the expanded LU (LU 0) can be mapped
• LUs that were added no longer appear in the available list
Volume = LUN
Page 8-27
RAID Group and Volume Configuration
Changing LU Capacity
Adding LUs
Page 8-28
RAID Group and Volume Configuration
Changing LU Capacity
Grow
(Figure: LU grow. Logically, the host-visible LU 0 gains additional capacity. Physically, additional LUs are carved from free space in RG0 and unified with LU 0 as sub-LUs (LU 4094 and LU 4095, each shown as LU 0 (Sub)); LU 2 and the remaining free space are unchanged. Before the grow, RG0 holds LU 0, LU 2 and free space; after it, the unified LU appears to the host as a single larger LU 0.)
Shrink
(Figure: LU shrink. Part of LU 0's capacity becomes unused.)
Note:
• The host OS must support volume shrinking if you use LU shrink
• You must execute the host OS side volume shrink first
• Then execute the storage array side LU shrink
Page 8-29
RAID Group and Volume Configuration
Changing LU Capacity
SNM 2
Page 8-30
RAID Group and Volume Configuration
Unifying a Volume
Unifying a Volume
1. New Capacity
o If you enter a capacity larger than the original volume size, the volume expands
o If you enter a capacity smaller than the original volume size, the volume shrinks
2. Add Volume
Page 8-31
RAID Group and Volume Configuration
Unifying a Volume
o Select this option to split all the volumes from a unified volume
o If the volume you selected is not unified, this option will be disabled
Note: If you select 3 or 4, ensure that you have backed up the volume. Many operating systems do not support volume shrinking; Windows Server 2008 does. Before you shrink the volume from SNM 2, do the shrinking on the OS side first.
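On Windows Server 2008, for example, the OS-side shrink can be done with diskpart before the LU is shrunk on the array (the volume number and size shown are illustrative):
diskpart
DISKPART> select volume 3
DISKPART> shrink desired=10240
DISKPART> exit
The desired value is in MB, so this shrinks the file system by 10GB; only then should the LU be shrunk from SNM 2.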
Formatting a Unified Volume will format all the individual volumes as well.
Page 8-32
RAID Group and Volume Configuration
Instructor Demonstration
Instructor Demonstration
LU Operations
• Create LU
• Change LU Capacity
• Delete LU
Page 8-33
RAID Group and Volume Configuration
Module Summary
Module Summary
Page 8-34
RAID Group and Volume Configuration
Module Review
Module Review
Page 8-35
RAID Group and Volume Configuration
Module Review
Page 8-36
9. Storage Allocation
Module Objectives
Page 9-1
Storage Allocation
Connectivity Between Storage and Hosts on HUS
Page 9-2
Storage Allocation
Host Connection to HUS
Switch
Attached
Server
Switch Direct
Attached
Server
HUS
• Through Switch
• By Direct Attachment
Page 9-3
Storage Allocation
Host Connection to HUS
1b. iSCSI
Switch
Attached
Server
IP Switch Direct
Attached
Server
HUS
Page 9-4
Storage Allocation
Mapping Volumes to Ports
0E 0F 0G 0H
HUS 110 and HUS 130 have 4 embedded ports on the Controller.
HUS 110 embedded ports are disabled by default and need the Fibre Channel option key to be installed in order to work.
Page 9-5
Storage Allocation
Mapping Volumes to Ports
HUS 110 and HUS 130 can have 2 x 10Gb/sec iSCSI ports per controller.
Page 9-6
Storage Allocation
Mapping Volumes to Ports
Host Groups
• Members of a host group should be on the same platform (OS); each host group supports a single OS type
• Logical units are mapped to the port through a host group; the host volumes mapped there are accessible from the hosts in that group
• Maximum of 128 host groups per port
• Maximum of 2048 paths per host group
• Each host HBA is identified by its WWPN
• Each host group can reuse volume numbers
(Figure: on an HUS 100, hosts A and B (HP-UX) in host group 00, host C (Solaris) in host group 01, and hosts D and E (Windows) in host group 02 all reach one physical port through a switch. Each host group maps its own host volumes (for example Vol0, Vol1) to internal volumes such as Vol25, Vol20, Vol95 and Vol31 in the RAID groups.)
LUN = Logical Unit Number
Page 9-7
Storage Allocation
Mapping Volumes to Ports
(Figure: two HUS 100 FC ports compared. Without host groups, only 1 host group (host group 000) exists per port and only single-platform hosts can be connected; with host groups, multiple host groups can be added to the same port.)
To protect mission-critical data in your disk storage system from unauthorized access, you
should implement LUN security. LUN security allows you to prevent unauthorized hosts from
either seeing or accessing the data on the secured LUN. If LUN security is applied to a particular
port, that port can only be accessed from within its own host group (also known as a host
storage domain). The hosts cannot access LUs associated with the other host groups.
Page 9-8
Storage Allocation
Mapping Volumes to Ports
Page 9-9
Storage Allocation
Mapping Volumes to Ports
Page 9-10
Storage Allocation
Mapping Volumes to Ports
Select the host volume from the top box and the available volumes from the lower box; click Set to confirm
• Simple setting
• Detail setting
Page 9-11
Storage Allocation
Mapping Volumes to Ports
Page 9-12
Storage Allocation
Mapping Volumes to Ports
• Simple setting
• Detail setting
Page 9-13
Storage Allocation
Mapping Volumes to Ports
Page 9-14
Storage Allocation
Simple Settings
Simple Settings
Page 9-15
Storage Allocation
Advanced Settings Explanations
Page 9-16
Storage Allocation
Queue Depth
Queue Depth
Queue depth values higher than calculated above are fine unless:
• The port's total of 512/1024 outstanding commands is exceeded
• 32 outstanding commands are exceeded for a LUN
The formula above guarantees you will never exceed the queue capacity
• Maximum performance may be achieved at higher queue depth values
• The value above is quite general and assumes all LUNs are online and available to all hosts
Simply put, avoid having more than 512/1024 commands arrive at the port simultaneously and avoid exceeding 32 per LUN
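As a worked example, assume a 512-command port queue and 64 LUNs mapped out of the port: 512 / 64 = 8, so a host queue depth of 8 per LUN guarantees the port queue cannot overflow even if every LUN is driven simultaneously, while staying well under the 32-command per-LUN limit.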
Page 9-17
Storage Allocation
How to increase Queue Depth in SNM2
The following is a brief explanation of queue depth and how it pertains to HDS
arrays using Solaris' max_throttle parameter
The maximum # per LUN for sd_max_throttle/ssd_max_throttle is 32
• Setting “ssd_max_throttle = 8” means that the host can send 8
commands at a time to any particular Volume (LUN)
If this is set to 8 and there are 100 LUNs mapped out of the port, it is
possible to send a total of 800 commands at a time, which would
over-throttle the SCSI buffer on the port causing transport errors
• Depending on how timeout values are set on the HBA and system, this can cause the target to fail, resulting in a loss of access to the devices on that port
If, for example, one host only has 10 LUNs but there are 40 more
LUNs mapped out of that port, the other host’s I/O is going to affect
that port as well
• The calculation is "512 / all LUNs mapped out of that port", to keep the buffer from reaching a queue-full condition.
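On Solaris the parameter is set in /etc/system and takes effect after a reboot; the value 8 below assumes 64 LUNs mapped out of the port (512 / 64 = 8):
* /etc/system entry: limit outstanding commands per LUN to 8
set ssd:ssd_max_throttle = 8
Hosts using the sd driver set sd:sd_max_throttle instead.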
Page 9-18
Storage Allocation
Instructor Demonstration
Instructor Demonstration
Page 9-19
Storage Allocation
Module Summary
Module Summary
Page 9-20
Storage Allocation
Module Review
Module Review
1. What are 2 ways that a Fibre Channel port can connect to host?
A. Direct connect
B. BUS connection
C. SAS connection
D. Switched connection
2. Host groups are created at what place?
A. Back-end ports
B. RAID groups
C. Front-end ports
D. DP pools
Page 9-21
Storage Allocation
Module Review
Page 9-22
10. Path Management
Module Objectives
Page 10-1
Path Management
HDLM Features and Benefits
Overview
Page 10-2
Path Management
Overview
Page 10-3
Path Management
Features
Features
Multipathing – Multiple paths can also be used to share I/O workloads and improve
performance.
Path failover – By removing the threat of I/O bottlenecks, HDLM protects your data paths and
increases performance and reliability.
Failback – By recovering a failed path and placing it back online when it becomes available, the
maximum number of paths available for load balancing and failover is assured.
Load balancing – By allocating I/O requests across all paths, load balancing ensures continuous
operation at optimum performance levels, along with improved system and application
performance. Several load balancing policies are supported.
Since HDLM automatically performs path health checking, the need to perform repeated manual
path status checks is eliminated.
Page 10-4
Path Management
Features
Multipathing
With multipathing, a failure with 1 or more components still allows applications to access their
data. In addition to providing fault tolerance, multipathing also serves to redistribute the
read/write load among multiple paths between the server and storage, helping to remove
bottlenecks and balance workloads. In addition, distributing data access across all the available
paths increases performance allowing more applications to be run and more work to be
performed in a shorter period of time.
The example shows an HDLM system configuration with the host and storage system attached
to a SAN using Fibre Channel connections. The host cable port is provided by the HBA. The
storage system cable port is a port (P). A logical unit (LU) in the storage system is the I/O
target of the host. The LU area called Dev is storage address space being written or read by the
host. The path is the route that connects the host and a Dev in an LU.
Page 10-5
Path Management
Features
Path Failover
When a path fails, all outstanding and subsequent I/O requests shift
automatically and transparently from the failed or down path to
alternative paths
Two types of failovers:
• Automatic
• Manual
A failure occurs when a path goes into the offline status. This can be caused by the following:
You can switch the status of a path by manually placing the path online or offline. Manually
switching a path is useful, for example, when system maintenance needs to be done. You can
manually place a path online or offline by doing the following:
Page 10-6
Path Management
Features
Path Failback
In order to use the automatic failback function, HDLM must already be regularly monitoring
error recovery.
HDLM will select the next path to be used first from among the online owner paths, and then
from the online non-owner paths. As a result, if an owner path recovers from an error, and then
HDLM automatically places the recovered path online while a non-owner path is in use, the path
will be automatically switched over from the non-owner path to the owner path that just
recovered from the error.
Page 10-7
Path Management
Features
Load Balancing
Note: Some I/O operations managed by HDLM can be distributed across all available paths, and
some cannot. Therefore, even when the load balancing function is used, a particular I/O
operation might not necessarily allocate data to every available path. RAID Manager issuing
IOCTL to a command device is an example of an I/O operation that cannot allocate data to
every path.
Do not use the load balancing function that is accessible from the Microsoft iSCSI
Software Initiator user interface.
All online paths are owner paths. Therefore, if one of the paths becomes unusable, the load will
be balanced among the remaining paths.
Page 10-8
Path Management
Features
Load Balancing
(Figure: round-robin load balancing. I/O requests 1 through 6 from the host alternate across two paths to storage: odd-numbered requests on one path, even-numbered on the other.)
The first and most basic algorithm is round robin. This algorithm simply distributes I/O by
alternating requests across all available data paths. Some multipath solutions, such as the IBM
MPIO default PCM, only provide this type of load balancing.
This algorithm is acceptable for applications with primarily random I/O characteristics.
Applications that primarily generate sequential I/O requests actually can have performance
degradation from the use of the round robin algorithm.
Page 10-9
Path Management
Features
Load Balancing
(Figure: extended round-robin load balancing. Sequential I/O requests 1-3 are issued down one path in succession, and 4-6 down the other.)
Extended Round-Robin
Distributes I/Os to paths depending on whether the I/O involves sequential or random access:
• For sequential access, a certain number of I/Os are issued to one path in succession
• For random access, I/Os are distributed to multiple paths according to the round-robin
algorithm
Page 10-10
Path Management
Features
Load Balancing
Path   Total I/Os   Total Blocks
 1      2            5
 2      3            3
Least I/O will select Path 1; Least Block will select Path 2.
The HDLM developers recently evaluated 2 additional queuing algorithms in efforts to further
enhance I/O throughput. This investigation resulted in 4 additional load balancing algorithms for
HDLM. They are:
• Least I/O
• Least Block
• Extended Least I/O
• Extended Least Block
Extensive testing showed excellent all-around performance for the Extended Least I/O
algorithm in environments exhibiting both random and sequential I/O characteristics. Due to
this result, Extended Least I/O is the default host setting for HDLM load balancing.
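A hedged CLI sketch for selecting the algorithm; exlio is the dlnkmgr keyword for Extended Least I/O, but verify the keyword against your HDLM version's documentation:
dlnkmgr set -lb on -lbtype exlio
dlnkmgr view -sys
The view -sys output can be used to confirm that the Load Balance setting took effect.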
Page 10-11
Path Management
Features
Without path health checking, an error cannot be detected unless an I/O operation is performed. If an error is detected in a path, the path health checking function switches the status of that path to Offline(E) or Online(E).
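Path health checking is controlled from the dlnkmgr CLI; a minimal sketch, assuming the documented -pchk option (30 minutes is the usual default interval):
dlnkmgr set -pchk on -intvl 30
This enables health checking with a 30-minute checking interval.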
Page 10-12
Path Management
Features
Centralized Management
o The HGLM server collects system configuration information from each host, and
provides the information to the HGLM client
o The HGLM server also performs requests received from the HGLM client
• HGLM client – Any machine on which the Web-based HGLM GUI is used
o The HGLM GUI provides the user interface for managing the multi-path
environment
o Each host accesses a storage network to read and write application data
o HDLM manages the paths between each host and the storage systems they
access
o Storage systems with paths managed by HDLM are incorporated into the HGLM
management environment
Page 10-13
Path Management
HDLM Pre-Installation Steps
Pre-Installation Process
(Figure: a host connected through multiple ports (P) and paths to an LU in the storage subsystem.)
Using multiple paths to connect a host to a storage subsystem before installing HDLM can result
in unstable Windows operations.
Page 10-14
Path Management
Pre-Installation Process
• In a cluster configuration, make sure the manufacturer and model of the HBAs are the same for all hosts that make up the cluster
• Make sure the versions of the HBA micro-programs (firmware) are the same
Set up Switches
• For details on how to set up a switch, see the documentation for the particular switch
• This step is unnecessary if you do not use a switch
Page 10-15
Path Management
Pre-Installation Process
If your configuration uses an IP-SAN, install and set up the iSCSI initiator (iSCSI software
or HBA). For details, see iSCSI initiator documentation, documentation for the HBA, or storage
subsystem documentation.
• Write signatures for each LU, create partitions and then format them
• Because the system is still in a single-path configuration, no problems will occur even if you write a signature for each LU
Page 10-16
Path Management
Using HDLM GUI
HDLM GUI
Configuration window
Page 10-17
Path Management
HDLM GUI
Path status
• Online
• Offline(C): Indicates I/O cannot be issued because the path was placed offline
• Online(E): Indicates an error has occurred in the last online path for each device
Current Path Status: Gray indicates normal status; red indicates an error.
Status display
• Provides filters to pinpoint your display
Page 10-18
Path Management
Setting the HDLM Options Screen
Function Settings
When enabled (default), Dynamic Link Manager monitors all online paths at a specified interval and puts them into Offline(E) or Online(E) status if a failure is detected.
When enabled (not the default), Dynamic Link Manager monitors all Offline(E) and Online(E) paths at specified intervals and restores them to online status if they are found to be operational.
Page 10-19
Path Management
Setting the HDLM Options Screen
An intermittent error is a fault that occurs sporadically. A loose cable connection to an HBA, for
example, might cause intermittent errors. If automatic fallback is enabled, intermittent errors
will cause a path to alternate between online and offline frequently, which can impact I/O
performance. To eliminate failovers due to intermittent errors, HDLM can remove a path from
automatic failback if that path suffers intermittent errors. This process is called intermittent
error monitoring. With intermittent error monitoring, HDLM monitors paths to see whether an
error occurs a number of times within a specific error-monitoring interval.
Auto failback must be On. To prevent an intermittent error from reducing I/O performance, we
recommend that you monitor intermittent errors when automatic failback is enabled.
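A hedged sketch of enabling both functions from the CLI, assuming the documented -afb and -iem options of dlnkmgr set (intervals are in minutes; the values are illustrative):
dlnkmgr set -afb on -intvl 1
dlnkmgr set -iem on -intvl 30 -iemnum 3
The second command flags a path as intermittently failing if 3 errors occur within 30 minutes, removing it from automatic failback.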
Remove LU (on/off):
Removes the LUN when all paths to the LUN are taken offline.
Page 10-20
Path Management
HDLM Path List Window
Path List: This main window for the Dynamic Link Manager GUI displays the detailed
configuration and path information, allows you to change the path status, and provides access
to the other windows.
Options window: This window displays and allows you to change the Dynamic Link Manager
operating environment settings, including function settings and error management settings.
Help window: This window displays the HTML version of the user's manual. The Help window is opened automatically in your default Web browser.
Page 10-21
Path Management
Using HDLM CLI for Path Management
If you attempt to execute an HDLM command by any other method, you might be asked
whether you have administrator permissions.
Page 10-22
Path Management
HDLM CLI Overview
When you are using Dynamic Link Manager for Microsoft Windows® systems, execute the
command as a user of the Administrators group. When you are using Dynamic Link Manager for
Sun Solaris systems, execute the command as a user with root permission.
Page 10-23
Path Management
Viewing Path Information with the CLI
Page 10-24
Path Management
Changing Path Status with the CLI
• Example of a command that places all paths online that pass through
HBA port 1.1
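A sketch of such a command, assuming HDLM's documented -hba option, which addresses paths by HBA port and bus number:
>dlnkmgr online -hba 1.1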
Verify the statuses of all applicable paths have changed to online
>dlnkmgr view -path
Page 10-25
Path Management
Module Summary
Module Summary
Page 10-26
Path Management
Module Review
Module Review
Page 10-27
Path Management
Module Review
Page 10-28
Path Management
Module Review
Page 10-29
Path Management
Module Review
Page 10-30
11. Hitachi Unified Storage Program
Products
Module Objectives
Please note
• This module provides an overview of the features and program products
of the HUS family. Additional HUS 100 software courses are available that
contain detailed information on these features, such as replication and
advanced provisioning.
Page 11-1
Hitachi Unified Storage Program Products
Products: Array Based Software
Licensed features
• Cache residency manager
• Password protection
• SNMP agent support function
• ShadowImage In-System Replication
• TrueCopy Remote Replication
• LUN manager
• Copy-on-Write SnapShot
• Data Retention Utility
• Performance Monitor
• Cache partition manager
• TrueCopy Modular Distributed
• Audit logging; account authentication
• Volume Migration
• Power savings feature
• TrueCopy Extended Distance
• Dynamic Provisioning
• Dynamic Tiering
• Fibre Channel option
Page 11-2
Hitachi Unified Storage Program Products
Memory Management Layer
Page 11-3
Hitachi Unified Storage Program Products
Memory Management Layer
(Figure: cache memory is divided into a user data region and a system area.)
RAID group (RG) — sets aside space to take care of quick formatting.
Dynamic Provisioning (DP) Pool — space on a DP Pool for the metadata for Copy-on-Write
(CoW) and TrueCopy Extended Distance (TCE).
Page 11-4
Hitachi Unified Storage Program Products
Cache Partition Manager
No other modular product has the ability to manage cache at this level. Modular storage
systems typically have a simple single block-size caching algorithm, resulting in inefficient use of
cache and/or I/O.
RAID group stripe sizes can also be selected to allow further tuning of the array to deliver
better performance without additional cost.
Cache Partition Manager is included as a no-cost option for all HUS systems.
Page 11-5
Hitachi Unified Storage Program Products
Cache Residency Manager
(Figure: Cache Residency Manager reserves a cache residency area within mirrored cache for a LUN, providing enhanced read/write performance.)
Page 11-6
Hitachi Unified Storage Program Products
Volume Migration
Volume Migration
Volume Migration allows LUNs to be migrated across disk types and across RAID groups. Volumes can be migrated across RAID levels (for example, RAID-10 to RAID-5 to RAID-6) or across disk media (for example, SSD to 10K SAS to 7.2K SAS).
Page 11-7
Hitachi Unified Storage Program Products
Replication Products
Replication Products
ShadowImage
(Figure: ShadowImage copies production data on LU 1 to backup data on LU 2 within the same system.)
ShadowImage is the in-system copy facility for Modular Storage HUS/AMS2000 families of
storage systems. It enables server-free backups, which allows customers to exceed service level
agreements (SLAs). It fulfills 2 primary functions:
ShadowImage allows information to be split away and used for system backups, testing and
data mining applications while the customer’s business continues to run. It uses either graphical
or command line interfaces to create a copy and then control data replication and fast
resynchronization of logical volumes within the system.
Page 11-8
Hitachi Unified Storage Program Products
Replication Products
Copy-on-Write Snapshot
An essential component of business continuity is the ability to quickly replicate data. Hitachi
Copy-on-Write Snapshot provides logical snapshot data replication within Hitachi storage
systems for immediate use in decision support, software testing and development, data backup
or rapid recovery operations. Hitachi Unified Storage uses the DP pool to store the differential
data.
Page 11-9
Hitachi Unified Storage Program Products
Replication Products
(Figure: synchronous TrueCopy. The write is copied from the P-VOL to the remote S-VOL; (3) remote copy complete precedes (4) write complete to the host.)
• The write I/O is not posted as complete to the application until it is written to a remote
system
• The remote copy is always a mirror image
• Provides fast recovery with no data loss
• Limited distance – response-time impact
Page 11-10
Hitachi Unified Storage Program Products
TrueCopy Extended Distance for Remote Backup
(Figure: TrueCopy Extended Distance. The local host writes to the P-VOL, and data is copied to the S-VOL at the remote site.)
Page 11-11
Hitachi Unified Storage Program Products
True Copy Modular Distributed (TCMD)
TCMD allows multiple arrays to connect to a remote array with TCE and TC
Fan-in: up to 8 LUs on separate arrays copied to 1 array
Page 11-12
Hitachi Unified Storage Program Products
Replication Products
Replication Products
Management Tools
Page 11-13
Hitachi Unified Storage Program Products
What is Dynamic Provisioning?
Page 11-14
Hitachi Unified Storage Program Products
What is Dynamic Provisioning?
Dynamic Provisioning
Real Capacity Pool
Page 11-15
Hitachi Unified Storage Program Products
Dynamic Tiering
Dynamic Tiering
Solution capabilities
• Automated data placement for higher performance and lower costs
• Simplified ability to manage multiple storage tiers as a single entity
• Self-optimized for higher performance and space efficiency
• Page-based granular data movement for highest efficiency and throughput
Business value
• CAPEX and OPEX savings by moving data to lower-cost tiers
• Increase storage utilization up to 50%
(Figure: storage tiers mapped against a data heat index: a high activity set, a normal working set and a quiet data set.)
Page 11-16
Hitachi Unified Storage Program Products
Use Cases
Use Cases
Compared to conventional HDP pools (all SAS), drive price per IOPS can be reduced by about 15% by mixing with NL-SAS.
(Figure 1: performance [IOPS] versus drive cost. With conventional HDP you would need to select an all-SAS configuration at high cost; with HDT, a mixed SAS/NL-SAS pool can be selected and drive cost is reduced by 15%. Basis: HUS 150 with 100 x 300GB SAS 10K HDDs at 10,000 IOPS.)
(Figure 2: with conventional HDP you would need to increase the number of SAS drives or select an SSD pool to improve performance beyond all-SAS; with HDT, a mixed SSD/SAS/NL-SAS pool can be selected and performance improves by YY%. Basis: HUS 150 with 100 drives x 300GB SAS 10K at 10,000 IOPS.)
Page 11-17
Hitachi Unified Storage Program Products
Software Bundles
Software Bundles
If purchasing HUS 110, HUS 130 or HUS 150 for BLOCK STORAGE ONLY, customers MUST
also purchase Hitachi Base Operating System M (BOS M). The key differences between this
bundle and the one sold with AMS 2000 family storage systems include:
Page 11-18
Hitachi Unified Storage Program Products
Software Bundles
Hitachi Base Operating System Security Extension includes program products that were formerly
included in BOS M for AMS 2000.
Page 11-19
Hitachi Unified Storage Program Products
Software Bundles
In addition to purchasing the base bundles, customers may purchase optional products as well. This
slide presents an overview of the optional products available.
Note: TrueCopy Remote Replication bundle and TrueCopy Extended Distance may not coexist on
the same HUS storage system. The system will only allow one of these 2 products to operate at a
time. To avoid any problems, do not allow a customer to purchase licenses for both products for
use on the same HUS system. The Hitachi Data Systems Configurator is being designed to prevent
both products from being purchased for the same HUS system.
• The power savings feature enables the spin down of RAID groups (RG) when they are not
being accessed by business applications, resulting in a decrease in energy consumption
• The Fibre Channel option must be purchased to enable the embedded Fibre Channel ports
of HUS 110
o Once the associated license key is installed or enabled the Fibre Channel ports
embedded in the base HUS system are enabled
• TrueCopy Extended Distance requires that Dynamic Provisioning be installed on the system
Page 11-20
Hitachi Unified Storage Program Products
Module Summary
Module Summary
Page 11-21
Hitachi Unified Storage Program Products
Module Review
Module Review
Page 11-22
12. Performing Hitachi Unified Storage
Maintenance
Module Objectives
Page 12-1
Performing Hitachi Unified Storage Maintenance
Maintenance Overview and Preparation
Maintenance Overview
Maintenance activities
• Replacing existing components
Hard disk drives
Control unit
Enclosure Controller (ENC) unit
Small Form-Factor Pluggable (SFP) Fibre Channel host connector
• Adding new components
Hard disk drives
Expansion trays
iSCSI interfaces
Page 12-2
Performing Hitachi Unified Storage Maintenance
Instructor Demonstration
Instructor Demonstration
Page 12-3
Performing Hitachi Unified Storage Maintenance
Maintenance Preparation
Maintenance Preparation
General guidelines
• Print relevant maintenance procedure (if required)
• Read through the entire procedure before performing maintenance tasks
• Check for Alerts and Techtips
• Check for any firmware requirements
Notes
• All component maintenance should be available online
• Model upgrades require downtime
The only supported upgrade is HUS 130 to HUS 150
Be sure to use the correct version of the Maintenance Manual that corresponds to the
microcode version of any systems you are supporting.
Note: You may need to keep multiple versions available for ready access.
Review the Engineering Change Notice (ECN) for every microcode release.
Note: You should be familiar with the updates and corrections that are being
implemented over time for HUS even when your customer is skipping some of the
releases on their systems.
Be sure you are on the CMS Alerts Internal distribution list. Review the Technical Tips and
Alerts that are distributed for HUS systems.
Note: You can review the Technical Tips and Alerts at any time in the HiPin system on
HDSNet.
Remember that information in Technical Tips and Alerts takes priority over information in the
ECNs or Maintenance Manual. Information in the ECN takes priority over any information in the
(correct version) of the Maintenance Manual.
Page 12-4
Performing Hitachi Unified Storage Maintenance
General Maintenance Information
CAUTION
Page 12-5
Performing Hitachi Unified Storage Maintenance
General HUS Information
Disk trays:
Page 12-6
Performing Hitachi Unified Storage Maintenance
Getting the Part Numbers
This is a snapshot from the manual. You can also use the Parts Catalog manual to find location information and additional information on the parts.
Page 12-7
Performing Hitachi Unified Storage Maintenance
Drive Firmware
Drive Firmware
Page 12-8
Performing Hitachi Unified Storage Maintenance
Finding Drive Firmware
http://<IP>/drvfirm
Select a drive
o Web Tool
o HSNM 2
Page 12-9
Performing Hitachi Unified Storage Maintenance
Part Location
Part Location
A Revision — Controller
The controllers are accessed and located in HUS 110, HUS 130 and HUS 150 as described below.
• Access in HUS 110 and HUS 130 is from the rear of the rack
• Access in HUS 150 is from the front of the rack
• In HUS 110 and HUS 130, controller 0 is on the left and controller 1 is on the right
• In HUS 150, controller 1 is on the left and controller 0 is on the right
Page 12-10
Performing Hitachi Unified Storage Maintenance
Part Location
Page 12-11
Performing Hitachi Unified Storage Maintenance
Drive Numbers in the Trays
Page 12-12
Performing Hitachi Unified Storage Maintenance
Replacing Hard Disk Drives
Safety Precautions
Page 12-13
Performing Hitachi Unified Storage Maintenance
Replacing a Drive
Replacing a Drive
Controllers:
Disk trays:
Page 12-14
Performing Hitachi Unified Storage Maintenance
Replacing a Drive
DBX
Page 12-15
Performing Hitachi Unified Storage Maintenance
Standard Time for Correction Copy or Copy Back
Page 12-16
Performing Hitachi Unified Storage Maintenance
Standard Time for Correction Copy or Copy Back
Note: This is a partial list. For the full list, refer to the manual. When an SAS drive with higher
capacity becomes available in the future, the time will increase.
Page 12-17
Performing Hitachi Unified Storage Maintenance
Checking for Successful Replacement
Page 12-18
Performing Hitachi Unified Storage Maintenance
Replacing the Hitachi Unified Storage Control Unit
Page 12-19
Performing Hitachi Unified Storage Maintenance
Wear Wrist Strap
Page 12-20
Performing Hitachi Unified Storage Maintenance
Replacing the Control Unit
Controller positions are different for HUS 110, HUS 130 and HUS 150.
Page 12-21
Performing Hitachi Unified Storage Maintenance
Replacing the Control Unit
Page 12-22
Performing Hitachi Unified Storage Maintenance
Replacing the Control Unit
o When the drive box is connected, also remove the SAS (ENC) cable
1. Slide the right and left blue latches, and then open the levers toward you
Page 12-23
Performing Hitachi Unified Storage Maintenance
Replacing the Control Unit
HUS 150
Page 12-24
Performing Hitachi Unified Storage Maintenance
IPv6 Usage Details
Page 12-25
Performing Hitachi Unified Storage Maintenance
Replacing Hitachi Unified Storage ENC Unit and I/O Modules
Before unpacking and replacing maintenance components, be sure to wear a wrist strap and
connect the grounding clip at the opposite end of the wrist strap to the chassis frame.
Page 12-26
Performing Hitachi Unified Storage Maintenance
Replacing the ENC Unit
Page 12-27
Performing Hitachi Unified Storage Maintenance
Replacing the ENC Unit
Ensure the red ALM LED on the I/O Module (ENC) is illuminated.
When the ALM LED on the I/O Module (ENC) you are replacing is off, remove the I/O Module
(ENC) following these steps:
2. When the levers are completely opened, the I/O Module (ENC) is released
3. Remove the SAS (ENC) cable connected to the I/O Module (ENC) you are replacing
Note:
Page 12-28
Performing Hitachi Unified Storage Maintenance
Replacing the ENC Unit
On DBX
When the red ALM LED of the I/O Card (ENC) to be replaced is illuminated, follow the error
collection item in the generated error message. Verify that the required error information is collected.
1. Pull the DBX out of the rack, and remove the top cover
2. Ensure the red ALM LED of the I/O Card (ENC) to be replaced is illuminated
3. Open the right and left levers toward you, while at the same time pressing the right and left
blue buttons that secure the levers of the I/O Card (ENC)
6. Insert a new I/O Card (ENC) until its lever is slightly opened
7. Ensure the red ALM LED on the I/O Card (ENC) is off
8. Ensure the green READY LED on the front of the controller box is illuminated, and the red
ALARM LED and orange WARNING LED are off
o The green READY LED on the front of the controller box may blink rapidly (for 30 to
50 minutes, or 40 to 60 minutes for the CBL) before changing to a steady
illumination
Page 12-29
Performing Hitachi Unified Storage Maintenance
Replacing an I/O Module
On CBL
a. Select Components, and then I/O Modules on the unit window of Hitachi
Storage Navigator Modular 2
b. Select the module to change, and then click the Detach I/O Module button
c. When the confirmation message displays, click Confirm
d. Ensure the red STATUS LED on the Drive I/O Module is illuminated
2. Remove the SAS (ENC) cable connected to the Drive I/O Module to be replaced
Note:
a. Loosen one blue screw that secures the Drive I/O Module, and then pull the lever
open
Page 12-30
Performing Hitachi Unified Storage Maintenance
Replacing an I/O Module
When the lever is completely opened, the Drive I/O Module is released
c. Temporarily place the Drive I/O Module in a location where anti-static measures
are taken
a. Push the new Drive I/O Module into the slot with its right and left levers completely
opened
b. Close the levers and tighten one blue screw to secure the Drive I/O Module
6. Ensure the red STATUS LED on the Drive I/O Module is off
Page 12-31
Performing Hitachi Unified Storage Maintenance
Replacing the SFP Fibre Channel Host Connector
Page 12-32
Performing Hitachi Unified Storage Maintenance
Reviewing Host Connector Replacement Procedure
Page 12-33
Performing Hitachi Unified Storage Maintenance
Replacing the SFP Fibre Channel Host Connector
CBXSL, CBXSS,
CBSS and CBSL
CBL
2. Remove the FC cables connected to the FC host connector to be replaced on the control unit
o If the host connector is inserted before 20 seconds has elapsed, the host
connector may not recover normally
6. Insert the FC host connector into the port until it clicks into place
Page 12-34
Performing Hitachi Unified Storage Maintenance
Replacing the SFP Fibre Channel Host Connector
1. Remove the Fibre Channel cables connected to the host connector to be replaced on the controller
Note:
o If the host connector is inserted before 20 seconds has elapsed, the host
connector may not recover normally
4. Check the insertion direction of the host connector and insert the host connector in the
port until it clicks into place
Note: Be sure to install the same type of the host connector as the one which was
removed
Page 12-35
Performing Hitachi Unified Storage Maintenance
Replacing the SFP Fibre Channel Host Connector
Note: If the Link LED does not light, other failures may be considered. Restore it
following Troubleshooting “Chapter 1. Flowchart for Troubleshooting” (TRBL 01-0000)
6. Refer to the Information Message on the Web Tool to ensure that I53A0g Host Connector recovered (Port xy) is indicated
Page 12-36
Performing Hitachi Unified Storage Maintenance
HALM LED Locations
Fibre Channel Ports
Page 12-37
Performing Hitachi Unified Storage Maintenance
HALM LED Locations
iSCSI Ports
Page 12-38
Performing Hitachi Unified Storage Maintenance
Replacing the Cache Battery Module
Introduction
Page 12-39
Performing Hitachi Unified Storage Maintenance
Battery Location
Battery Location
1. Remove the power unit on which the red B-ALM LED for the cache backup battery is
located
Note: When the power unit is removed, W07zyC PS alarm (Unit-0, PS-x) displays in the
Information Message on the Web Tool
a. Lift the latch on the cable holder of the power unit to release the lock, and then slide
the cable holder out
b. Remove the power cable from the power unit on which the red B-ALM LED for the
cache backup battery is located
c. Pull the lever open while pressing the latch on the power unit inward with your right
thumb
d. Pull out and remove the power unit while holding its body with both hands
a. Loosen the blue screw on the cache backup battery cover and then open the cover
Page 12-40
Performing Hitachi Unified Storage Maintenance
Battery Location
b. Remove the cable for the cache backup battery from the cable clamp
c. Remove the cable for the cache backup battery from the connector on the power
unit
a. Put the new cache backup battery on the power unit, and then connect the cable for
the cache backup battery to the connector on the power unit
b. Secure the cable for the cache backup battery with the cable clamp
Page 12-41
Performing Hitachi Unified Storage Maintenance
Battery Location
CBL
1. Loosen the blue screw that secures the cache backup battery
2. Open the lever, and then pull out and remove the cache backup battery
Note: The cache backup battery is about 488 mm deep and weighs about 5.0 kg, so remove it carefully
Note: If you insert the cache backup battery without waiting for a minimum of 20
seconds, the cache backup battery may not recover normally
5. With the lever opened completely, insert the cache backup battery completely into the
slot
6. Close the lever, and tighten the blue screw to secure the cache backup battery
Page 12-42
Performing Hitachi Unified Storage Maintenance
Module Summary
Module Summary
Page 12-43
Performing Hitachi Unified Storage Maintenance
Module Review
Module Review
Page 12-44
13. Troubleshooting Hitachi Unified Storage
Module Objectives
Page 13-1
Troubleshooting Hitachi Unified Storage
Detecting Failures
Detecting Failures
This section presents methods used to detect failures.
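One of those methods is SNMP: the storage system can send a trap to a management station when a failure occurs. Purely as an illustration of trap-based detection (not a supported monitoring tool), the minimal Python sketch below listens on the standard trap port and reports that a trap arrived; a real SNMP manager would decode the payload against the array's MIB.

    import socket

    # Minimal SNMP trap listener sketch. A real SNMP manager decodes the
    # ASN.1/BER payload against the array's MIB; this only demonstrates
    # that trap-based failure detection delivers a UDP datagram on 162.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 162))  # standard trap port; needs admin rights
    print("Listening for SNMP traps on UDP/162")
    while True:
        data, (sender, _) = sock.recvfrom(4096)
        print("Trap received from %s (%d bytes)" % (sender, len(data)))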
Page 13-2
Troubleshooting Hitachi Unified Storage
SNMP Setup
SNMP Setup
Page 13-3
Troubleshooting Hitachi Unified Storage
SNMP Setup
Page 13-4
Troubleshooting Hitachi Unified Storage
SNMP Setup
Page 13-5
Troubleshooting Hitachi Unified Storage
SNMP Setup
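Once the SNMP settings are in place, the agent can be polled from the management station to confirm that it answers. A minimal sketch, assuming the net-snmp command-line tools are installed; the management IP and community string below are placeholders, not values from this course:

    import subprocess

    ARRAY_IP = "192.0.2.10"               # placeholder management port IP
    COMMUNITY = "public"                  # placeholder community string
    SYS_UPTIME_OID = "1.3.6.1.2.1.1.3.0"  # standard MIB-II sysUpTime

    # Poll sysUpTime via the net-snmp snmpget tool; any reply confirms
    # the array's SNMP agent is reachable with the configured community.
    result = subprocess.run(
        ["snmpget", "-v2c", "-c", COMMUNITY, ARRAY_IP, SYS_UPTIME_OID],
        capture_output=True, text=True, check=False,
    )
    print(result.stdout or result.stderr)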
Page 13-6
Troubleshooting Hitachi Unified Storage
Troubleshooting with Error Messages
Page 13-7
Troubleshooting Hitachi Unified Storage
Troubleshooting with Error Messages
Collect a trace
Failed parts
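The alarm and recovery codes shown in this course (for example, W07zyC PS alarm and I53A0g Host Connector recovered) can also be scanned for in an exported message log. A minimal sketch; the log file name and the code pattern are assumptions inferred from the two codes above, for illustration only:

    import re

    ALARM = re.compile(r"\bW\d{2}[0-9A-Za-z]{3}\b")     # e.g. W07zyC
    RECOVERY = re.compile(r"\bI\d{2}[0-9A-Za-z]{3}\b")  # e.g. I53A0g

    def scan(path: str) -> None:
        # Flag warning codes and recovery codes in an exported
        # Information Message log (file name is hypothetical).
        with open(path, encoding="utf-8") as log:
            for line in log:
                if ALARM.search(line):
                    print("ALARM   :", line.rstrip())
                elif RECOVERY.search(line):
                    print("recovery:", line.rstrip())

    if __name__ == "__main__":
        scan("information_messages.txt")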
Page 13-8
Troubleshooting Hitachi Unified Storage
Troubleshooting with LEDs
LED Locations
Page 13-9
Troubleshooting Hitachi Unified Storage
Troubleshooting with LEDs
Page 13-10
Troubleshooting Hitachi Unified Storage
Troubleshooting with LEDs
Controller (HUS 110 and HUS 130) and Disk Tray LEDs
CBS
CBL
I/O Module
ENC
Page 13-11
Troubleshooting Hitachi Unified Storage
Troubleshooting with LEDs
Page 13-12
Troubleshooting Hitachi Unified Storage
Hi-Track Monitor
Hi-Track Monitor
This section presents installation and use of Hi-Track Monitor.
Installation Guide
Page 13-13
Troubleshooting Hitachi Unified Storage
Product Support
Product Support
Page 13-14
Troubleshooting Hitachi Unified Storage
Components
Components
Diagram summary: The Hi-Track Monitor application runs on a customer workstation (Windows or Solaris) on the customer WAN. It gathers status over TCP/IP sockets to 9500/9200 systems and SNMP to switches, then sends the data to the FTP server in the HDS DMZ through one of three optional transports: dialup, FTP put across the public Internet, or HTTPS across the public Internet. On the HDS LAN side, the data is retrieved from the FTP server with an FTP get.
Page 13-15
Troubleshooting Hitachi Unified Storage
Components
Page 13-16
Troubleshooting Hitachi Unified Storage
Summary Screen
Summary Screen
Page 13-17
Troubleshooting Hitachi Unified Storage
Summary Screen
Page 13-18
Troubleshooting Hitachi Unified Storage
Troubleshooting
Troubleshooting
Opening a Case
Call the Global Support Center to open a case and obtain a case ID
if one does not yet exist for the implementation service
Upload Simple Trace data to open a support case
• https://tuf.hds.com
• Login info:
User: Case ID
Password: truenorth
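Uploads are normally done through the web form at https://tuf.hds.com. Purely as an illustration of the workflow, the sketch below posts a trace file with the case ID as the user name; the endpoint path, form field name, and trace file name are assumptions, not the documented TUF interface:

    import requests  # third-party: pip install requests

    CASE_ID = "12345678"      # obtained from the Global Support Center
    TRACE = "smpl_trace.dat"  # hypothetical Simple Trace file name

    with open(TRACE, "rb") as f:
        resp = requests.post(
            "https://tuf.hds.com/upload",  # hypothetical endpoint path
            auth=(CASE_ID, "truenorth"),
            files={"file": f},             # hypothetical form field name
            timeout=60,
        )
    resp.raise_for_status()
    print("upload accepted:", resp.status_code)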
Page 13-19
Troubleshooting Hitachi Unified Storage
Troubleshooting
TUF at HDS.COM
Page 13-20
Troubleshooting Hitachi Unified Storage
Module Summary
Module Summary
Page 13-21
Troubleshooting Hitachi Unified Storage
Module Review
Module Review
Page 13-22
14. Using Constitute Files
Module Objectives
System Parameter Manual — SYSPR 10-0000, Chapter 10. Setting Constitute Array
Page 14-1
Using Constitute Files
Overview
Overview
Notes:
When using Set Configuration to set configuration information, all prior set configuration
information is overwritten
When using Set Configuration to set RAID Group or Logical Unit settings, or to clone the storage system, all previously set configuration information is overwritten, and the data on the affected RAID groups or LUNs is overwritten as well
Page 14-2
Using Constitute Files
Overview
DP = Dynamic Provisioning
Page 14-3
Using Constitute Files
Exporting and Importing Constitute Files
Page 14-4
Using Constitute Files
Exporting and Importing Constitute Files
Page 14-5
Using Constitute Files
Exporting a Configuration
Exporting a Configuration
This screen appears if RAID group, DP pool or logical unit was selected on the previous screen.
Page 14-6
Using Constitute Files
Defining RAID Group, DP Pool and LUN Information
Click Save to save the file
Page 14-7
Using Constitute Files
Viewing RAID Group, DP Pool and LUN Information
Page 14-8
Using Constitute Files
Viewing System Parameters
Page 14-9
Using Constitute Files
Configuring a Duplicate Storage System
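Before applying an exported constitute file to a second system, it can help to confirm that the file matches the source export. A minimal sketch using Python's standard difflib; the file names are hypothetical, and remember from the Overview that Set Configuration overwrites all prior configuration and the data on affected RAID groups or LUNs:

    import difflib

    def diff_constitute(src_path: str, dst_path: str) -> None:
        # Print a unified diff of two exported constitute files so any
        # unintended differences are visible before Set Configuration.
        with open(src_path, encoding="utf-8") as a, \
             open(dst_path, encoding="utf-8") as b:
            delta = difflib.unified_diff(
                a.readlines(), b.readlines(),
                fromfile=src_path, tofile=dst_path,
            )
            for line in delta:
                print(line, end="")

    if __name__ == "__main__":
        diff_constitute("source_array.txt", "duplicate_array.txt")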
Page 14-10
Using Constitute Files
Instructor Demonstration
Instructor Demonstration
Constitute Files
• Get Constitute files for ports
• View the Constitute file
Page 14-11
Using Constitute Files
Module Summary
Module Summary
Page 14-12
Using Constitute Files
Module Review
Module Review
Page 14-13
Using Constitute Files
Your Next Steps
Certification: http://www.hds.com/services/education/certification
Learning Paths:
Customer Learning Path (North America, Latin America, and APAC):
http://www.hds.com/assets/pdf/hitachi-data-systems-academy-customer-learning-paths.pdf
Page 14-14
Using Constitute Files
Your Next Steps
https://community.hds.com/welcome
For Customers, Partners, Employees – Hitachi Data Systems Academy link to Twitter:
http://www.twitter.com/HDSAcademy
Page 14-15
Using Constitute Files
Your Next Steps
Page 14-16
15. DBX/DBW High Density Tray Installation
Module Objectives
Page 15-1
DBX/DBW High Density Tray Installation
Overview
Overview
Page 15-2
DBX/DBW High Density Tray Installation
Rules and Safety Considerations
To avoid shipping a top-heavy frame, not all DBX/DBWs may be installed upon arrival at the customer's site.
The installer may need to install additional DBX/DBWs at the customer site.
All dense expansion trays must be placed at the lowest point in the rack to maintain a low center of gravity.
Depending on the GEO, the following help may be available for installing the trays in the rack:
• A Genie lift (e.g. GL-8 with Load Platform) or equivalent.
• Assistance provided by the transporting company.
Because of the heavy weight of the trays, no attempt should be made to move the rack with more than four DBX/DBWs installed.
Because maintenance access is from the top, mounting a tray high in a rack may make maintenance tasks difficult.
In the US, a Genie or compatible lift is usually used to install the DBX/DBW tray. HDS logistics does not stock lifts or ladders.
In Europe, the transporting company can provide (if ordered) the physical installation of the high density trays into the rack.
Page 15-3
DBX/DBW High Density Tray Installation
Safety Ladder
Safety Ladder
Page 15-4
DBX/DBW High Density Tray Installation
Genie Lift Assembly
Page 15-5
DBX/DBW High Density Tray Installation
Prepare Genie Lift - Attach Supporting Feet
Page 15-6
DBX/DBW High Density Tray Installation
Prepare Genie Lift - Attach Lift Forks
Pins
Page 15-7
DBX/DBW High Density Tray Installation
Prepare Genie Lift - Attach Load Platform
Page 15-8
DBX/DBW High Density Tray Installation
Prepare Genie Lift - Unlock the Lift
Page 15-9
DBX/DBW High Density Tray Installation
Prepare Genie Lift - Configure Lift Handle
1. Pull out knob to unlock handle arm.
2. Remove the handle arm and turn it around so the handle faces outward.
3. Reattach the handle arm with the handle facing outward.
Page 15-10
DBX/DBW High Density Tray Installation
Unpacking
Unpacking
Page 15-11
DBX/DBW High Density Tray Installation
Rack and Rail Preparation
Mounting screws
Stabilizer bracket
Full size view of a 47U rack shows physical location of stabilizer foot at front of rack. Swing bracket under frame to align with screw holes, and then mount stabilizer with two screws.
Note: The 47U rack is much taller than previous versions and requires a
stable step stool to install cables in the upper half.
Using a rack stabilizer is an important safety measure for all racks; it prevents the rack from falling over when a heavy tray is pulled out for a service task.
Page 15-12
DBX/DBW High Density Tray Installation
Install DBX Slide Brackets
“L” indicator for left rail
Note: The brackets are labeled “L” for left and “R” for right side locations.
Page 15-13
DBX/DBW High Density Tray Installation
Attach DBX Rails
Four rack nuts are used on rack tracks for mounting DBX rails.
Additional slide-in nut on front of each rail secures tray when in place.
To access rear of rail during installation, use a long screwdriver.
Page 15-14
DBX/DBW High Density Tray Installation
Installing DBX/DBW into Rack
Install DBX/DBW
To reduce the weight, remove all parts from the DBX/DBW tray
before lifting it into the rack
Page 15-15
DBX/DBW High Density Tray Installation
DBX Fail Safe Lock
Page 15-16
DBX/DBW High Density Tray Installation
DBX Tray Releases
When tray is placed in rails, there are two releases on tray sides:
• Rear release: Depressing the rear release allows you to push tray
back into rack.
• Front release: Depressing the front release allows you to pull tray
forward for maintenance or removal.
Page 15-17
DBX/DBW High Density Tray Installation
Mounting with Genie Lift
When rails are installed in rack, use Genie Lift to place the tray.
Turn Genie Lift crank handle clockwise to lift.
To lower lift, turn crank handle about ¼ turn counter-clockwise to
unlock safety latch; then continue turning handle to lower lift.
Crank handle for lifting
To load the DBX/DBW tray onto the
Genie Lift, slide the tray off the
pallet and onto Genie Lift at right
angles to the long side of the tray.
Page 15-18
DBX/DBW High Density Tray Installation
Install DBX/DBW in the Rack
WARNING – Make sure rack stabilizer is installed before installing the DBX/DBW.
DBX rails are beveled to aid in sliding tray easily into rack mounted rails.
Raise tray slightly above bottom of rack rails with Genie Lift before sliding
tray rearward into rack.
When tray is seated in rails, push tray until a snapping sound is heard.
• Sound indicates rails are matched and locked securely to prevent the tray
from being pulled forward and falling.
Raise Genie Lift to height of rack-mounted rails. Slide tray rearward into rails.
Page 15-19
DBX/DBW High Density Tray Installation
Cable Routing Brackets
Page 15-20
DBX/DBW High Density Tray Installation
Cable Installation
Cable Installation
Figure captions: Remove the clear plastic cover; pull ENC assembly from bottom of tray; pull ENC assembly out from tray; insert ENC cable locking bar in place. The Power and ENC connectors are labeled in the figure.
Page 15-21
DBX/DBW High Density Tray Installation
Install DBX/DBW ENC and Power Cables
Label each cable with identical tags on both ends as each cable is
installed.
Labeling tags come with the kit.
Page 15-22
DBX/DBW High Density Tray Installation
Routing Channels for Power Cables
Follow similar routing plan for power cables as with ENC cables.
Page 15-23
DBX/DBW High Density Tray Installation
Module Summary
Module Summary
Page 15-24
DBX/DBW High Density Tray Installation
Your Next Steps
Certification: http://www.hds.com/services/education/certification
Learning Paths:
Page 15-25
DBX/DBW High Density Tray Installation
Your Next Steps
https://community.hds.com/welcome
For Customers, Partners, Employees – Hitachi Data Systems Academy link to Twitter:
http://www.twitter.com/HDSAcademy
Page 15-26
Training Course Glossary
A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
ACC — Action Code. A SIM (System Information Message).
ACE — Access Control Entry. Stores access rights for a single user or group within the Windows security model.
ACL — Access Control List. Stores a set of ACEs, so that it describes the complete set of access rights for a file system object within the Microsoft Windows security model.
ACP — Array Control Processor. Microprocessor mounted on the disk adapter circuit board (DKA) that controls the drives in a specific disk array. Considered part of the back end; it controls data transfer between cache and the hard drives.
ACP Domain — Also Array Domain. All of the array-groups controlled by the same pair of DKA boards, or the HDDs managed by 1 ACP PAIR (also called BED).
ACP PAIR — Physical disk access control logic. Each ACP consists of 2 DKA PCBs to provide 8 loop paths to the real HDDs.
Actuator (arm) — Read/write heads are attached to a single head actuator, or actuator arm, that moves the heads around the platters.
AD — Active Directory.
ADC — Accelerated Data Copy.
Address — A location of data, usually in main memory or on a disk. A name or token that identifies a network component. In local area networks (LANs), for example, every node has a unique address.
ADP — Adapter.
ADS — Active Directory Service.
AL-PA — Arbitrated Loop Physical Address.
AMS — Adaptable Modular Storage.
APAR — Authorized Program Analysis Reports.
APF — Authorized Program Facility. In IBM z/OS and OS/390 environments, a facility that permits the identification of programs that are authorized to use restricted functions.
API — Application Programming Interface.
APID — Application Identification. An ID to identify a command device.
Application Management — The processes that manage the capacity and performance of applications.
ARB — Arbitration or request.
ARM — Automated Restart Manager.
Array Domain — Also ACP Domain. All functions, paths, and disk drives controlled by a single ACP pair. An array domain can contain a variety of LVI or LU configurations.
Array Group — Also called a parity group. A group of hard disk drives (HDDs) that form the basic unit of storage in a subsystem. All HDDs in a parity group must have the same physical capacity.
Array Unit — A group of hard disk drives in 1 RAID structure. Same as parity group.
ASIC — Application specific integrated circuit.
ASSY — Assembly.
Asymmetric virtualization — See Out-of-band virtualization.
Asynchronous — An I/O operation whose initiator does not await its completion before proceeding.
• Virtual private cloud (or virtual private network cloud)
Cloud Enabler — A concept, product or solution that enables the deployment of cloud computing. Key cloud enablers include:
CMA — Cache Memory Adapter.
CMD — Command.
CMG — Cache Memory Group.
CNAME — Canonical NAME.
DASD — Direct Access Storage Device.
Data block — A fixed-size unit of data that is transferred together. For example, the X-modem protocol transfers blocks of 128 bytes. In general, the larger the block size, the faster the data transfer rate.
Data Duplication — Software duplicates data, as in remote copy or PiT snapshots. Maintains 2 copies of data.
Data Integrity — Assurance that information will be protected from modification and corruption.
Data Lifecycle Management — An approach to information and storage management. The policies, processes, practices, services and tools used to align the business value of data with the most appropriate and cost-effective storage infrastructure from the time data is created through its final disposition. Data is aligned with business requirements through management policies and service levels associated with performance, availability, recoverability, cost, and whatever parameters the organization defines as critical to its operations.
Data Migration — The process of moving data from 1 storage device to another. In this context, data migration is the same as Hierarchical Storage Management (HSM).
Data Pipe or Data Stream — The connection set up between the MediaAgent, source or destination server is called a Data Pipe or more commonly a Data Stream.
Data Pool — A volume containing differential data only.
Data Protection Directive — A major compliance and privacy protection initiative within the European Union (EU) that applies to cloud computing. Includes the Safe Harbor Agreement.
Data Stream — CommVault’s patented high performance data mover used to move data back and forth between a data source and a MediaAgent or between 2 MediaAgents.
Data Striping — Disk array data mapping technique in which fixed-length sequences of virtual disk data addresses are mapped to member disk addresses in a regular rotating pattern.
Data Transfer Rate (DTR) — The speed at which data can be transferred. Measured in kilobytes per second for a CD-ROM drive, in bits per second for a modem, and in megabytes per second for a hard drive. Also, often called data rate.
DBL — Drive box.
DBMS — Data Base Management System.
DBX — Drive box.
DCA — Data Cache Adapter.
DCTL — Direct coupled transistor logic.
DDL — Database Definition Language.
DDM — Disk Drive Module.
DDNS — Dynamic DNS.
DDR3 — Double data rate 3.
DE — Data Exchange Software.
Device Management — Processes that configure and manage storage systems.
DFS — Microsoft Distributed File System.
DFSMS — Data Facility Storage Management Subsystem.
DFSM SDM — Data Facility Storage Management Subsystem System Data Mover.
DFSMSdfp — Data Facility Storage Management Subsystem Data Facility Product.
DFSMSdss — Data Facility Storage Management Subsystem Data Set Services.
DFSMShsm — Data Facility Storage Management Subsystem Hierarchical Storage Manager.
DFSMSrmm — Data Facility Storage Management Subsystem Removable Media Manager.
DFSMStvs — Data Facility Storage Management Subsystem Transactional VSAM Services.
DFW — DASD Fast Write.
DICOM — Digital Imaging and Communications in Medicine.
DIMM — Dual In-line Memory Module.
Direct Access Storage Device (DASD) — A type of storage device, in which bits of data are stored at precise locations, enabling the computer to retrieve information directly without having to scan a series of records.
HANA — High Performance Analytic Appliance, a database appliance technology proprietary to SAP.
HBA — Host Bus Adapter. An I/O adapter that sits between the host computer's bus and the Fibre Channel loop and manages the transfer of information between the 2 channels. In order to minimize the impact on host processor performance, the host bus adapter performs many low-level interface functions automatically or with minimal processor involvement.
HCA — Host Channel Adapter.
HCD — Hardware Configuration Definition.
HD — Hard Disk.
HDA — Head Disk Assembly.
HLS — Healthcare and Life Sciences.
HLU — Host Logical Unit.
H-LUN — Host Logical Unit Number. See LUN.
HMC — Hardware Management Console.
Homogeneous — Of the same or similar kind.
Host — Also called a server. Basically a central computer that processes end-user applications or requests.
Host LU — Host Logical Unit. See also HLU.
Host Storage Domains — Allows host pooling at the LUN level, and the priority access feature lets administrators set service levels for applications.
HP — (1) Hewlett-Packard Company or (2) High Performance.
PAT — Port Address Translation.
PATA — Parallel ATA.
Path — Also referred to as a transmission channel, the path between 2 nodes of a network that a data communication follows. The term can refer to the physical cabling that connects the nodes on a network, the signal that is communicated over the pathway or a sub-channel in a carrier frequency.
Path failover — See Failover.
PAV — Parallel Access Volumes.
PAWS — Protect Against Wrapped Sequences.
PB — Petabyte.
PBC — Port By-pass Circuit.
PCB — Printed Circuit Board.
PCHIDS — Physical Channel Path Identifiers.
PCI — Power Control Interface.
PCI CON — Power Control Interface Connector Board.
PCI DSS — Payment Card Industry Data Security Standard.
PCIe — Peripheral Component Interconnect Express.
PD — Product Detail.
PDEV — Physical Device.
PL — Platter. The circular disk on which the magnetic data is stored. Also called motherboard or backplane.
PM — Package Memory.
POC — Proof of concept.
Port — In TCP/IP and UDP networks, an endpoint to a logical connection. The port number identifies what type of port it is. For example, port 80 is used for HTTP traffic.
POSIX — Portable Operating System Interface for UNIX. A set of standards that defines an application programming interface (API) for software designed to run under heterogeneous operating systems.
PP — Program product.
P-P — Point-to-point; also P2P.
PPRC — Peer-to-Peer Remote Copy.
Private Cloud — A type of cloud computing defined by shared capabilities within a single company; modest economies of scale and less automation. Infrastructure and data reside inside the company’s data center behind a firewall. Comprised of licensed software tools rather than on-going services. Example: An organization implements its own virtual, scalable cloud and business units are charged on a per use basis.
SA z/OS — System Automation for z/OS.
SAA — Share Access Authentication. The process of restricting a user's rights to a file system object by combining the security descriptors from both the file system object itself and the share to which the user is connected.
SaaS — Software as a Service. A cloud computing business model. SaaS is a software delivery model in which software and its associated data are hosted centrally in a cloud and are typically accessed by users using a thin client, such as a web browser via the Internet. SaaS has become a common delivery model for most business applications, including accounting (CRM and ERP), invoicing (HRM), content management (CM) and service desk management, just to name the most common software that runs in the cloud. This is the fastest growing service in the cloud market today. SaaS performs best for relatively simple tasks in IT-constrained organizations.
SACK — Sequential Acknowledge.
SACL — System ACL. The part of a security descriptor that stores system auditing information.
SAIN — SAN-attached Array of Independent Nodes (architecture).
SAN — Storage Area Network. A network linking computing devices to disk or tape arrays and other devices over Fibre Channel. It handles data at the block level.
SAP — (1) System Assist Processor (for I/O processing), or (2) a German software company.
SAP HANA — High Performance Analytic Appliance, a database appliance technology proprietary to SAP.
SARD — System Assurance Registration Document.
SAS — Serial Attached SCSI.
SATA — Serial ATA. Serial Advanced Technology Attachment is a new standard for connecting hard drives into computer systems. SATA is based on serial signaling technology, unlike current IDE (Integrated Drive Electronics) hard drives that use parallel signaling.
SBSC — Smart Business Storage Cloud.
SBX — Small Box (Small Form Factor).
SC — (1) Simplex connector. Fibre Channel connector that is larger than a Lucent connector (LC). (2) Single Cabinet.
SCM — Supply Chain Management.
SCP — Secure Copy.
SCSI — Small Computer Systems Interface. A parallel bus architecture and a protocol for transmitting large data blocks up to a distance of 15 to 25 meters.
SD — Software Division (of Hitachi).
SDH — Synchronous Digital Hierarchy.
SDM — System Data Mover.
SDSF — Spool Display and Search Facility.
Sector — A sub-division of a track of a magnetic disk that stores a fixed amount of data.
SEL — System Event Log.
Selectable segment size — Can be set per partition.
Selectable Stripe Size — Increases performance by customizing the disk access size.
SENC — The SATA (Serial ATA) version of the ENC. ENCs and SENCs are complete microprocessor systems on their own and they occasionally require a firmware upgrade.
SeqRD — Sequential read.
Serial Transmission — The transmission of data bits in sequential order over a single line.
Server — A central computer that processes end-user applications or requests, also called a host.
Server Virtualization — The masking of server resources, including the number and identity of individual physical servers, processors, and operating systems, from server users. The implementation of multiple isolated virtual environments in one physical server.
Service-level Agreement — SLA. A contract between a network service provider and a customer that specifies, usually in measurable terms, what services the network service provider will furnish. Many Internet service providers (ISP) provide their customers with a SLA. More recently, IT departments in major enterprises have