
Student Guide

Student Guide for


Implementing and Supporting Hitachi
Unified Storage

TCI2208

Courseware Version 3.2


Corporate Headquarters
2825 Lafayette Street
Santa Clara, California 95050-2639 USA
www.HDS.com

Regional Contact Information
Americas: +1 408 970 1000 or info@HDS.com
Europe, Middle East and Africa: +44 (0) 1753 618000 or info.emea@HDS.com
Asia Pacific: +852 3189 7900 or hds.marketing.apac@HDS.com

© Hitachi Data Systems Corporation 2014. All rights reserved. HITACHI is a trademark or registered trademark of Hitachi, Ltd. Innovate With Information is a trademark or
registered trademark of Hitachi Data Systems Corporation. All other trademarks, service marks, and company names are properties of their respective owners.

Contents
Introduction ............................................................................................................ xvii
Welcome and Introductions .......................................................................................... xvii
Course Description .......................................................................................................xviii
Prerequisites ................................................................................................................ xix
Course Objectives .......................................................................................................... xx
Course Topics............................................................................................................... xxi
Learning Paths ............................................................................................................ xxii
HDS Academy Is on Twitter and LinkedIn ......................................................................xxiii
Collaborate and Share ................................................................................................. xxiv
Hitachi Data Systems Community .................................................................................. xxv

1. Hitachi Unified Storage Family Overview ......................................................... 1-1


Module Objectives ....................................................................................................... 1-1
Overview .................................................................................................................... 1-2
Market Shift to Unstructured Data.......................................................................... 1-2
Market Drivers to Unified Storage .......................................................................... 1-3
Hitachi Key Differentiators ..................................................................................... 1-4
Unified Redefined: Unifying Across the Storage Infrastructure .................................. 1-5
Hitachi Unified Storage Portfolio ............................................................................ 1-6
Hitachi Unified Storage Models .............................................................................. 1-7
Configurations ............................................................................................................. 1-8
HUS 110 Configurations ........................................................................................ 1-8
HUS 110 Box Names............................................................................................. 1-9
HUS 130 Configurations .......................................................................................1-10
HUS 130 Box Names............................................................................................1-11
HUS 150 Configurations .......................................................................................1-12
HUS 150 Box Names............................................................................................1-13
HUS Family Summary ..........................................................................................1-14
Features ....................................................................................................................1-15


HUS Product Family Features ...............................................................................1-15


Model Architecture ......................................................................................................1-20
Hitachi Unified Storage Family ..............................................................................1-20
HUS 110 (Block Module) Architecture ....................................................................1-21
HUS 130 (Block Module) Architecture ....................................................................1-22
HUS 150 (Block Module) Architecture ....................................................................1-23
Common Features Across the Hitachi Unified Storage Models ..................................1-24
Cache Protection ........................................................................................................1-25
HUS 100 Flash Backup Functionality......................................................................1-25
On Power Outage with HUS 100 (Backup) .............................................................1-26
On Return of Power (Restore) ..............................................................................1-27
Clearing the Data From Flash Memory ...................................................................1-28
Write Command Mode (During Battery Charging) ...................................................1-29
Management Tools .....................................................................................................1-30
Management Tools Overview................................................................................1-30
Module Summary ........................................................................................................1-32
Module Review ...........................................................................................................1-33

2. Hitachi Unified Storage Components ................................................................ 2-1


Module Objectives ....................................................................................................... 2-1
Components ................................................................................................................ 2-2
HUS Components ................................................................................................. 2-2
HUS 110 CBXSL1 Controller ................................................................................... 2-3
HUS 110 CBXSS1 Controller................................................................................... 2-4
HUS 130 CBSL Controller ...................................................................................... 2-5
HUS 130 CBSS Controller ...................................................................................... 2-6
HUS 150 CBL Controller ........................................................................................ 2-7
DBL Disk Tray ...................................................................................................... 2-8
DBS Disk Tray ...................................................................................................... 2-9
DBX Disk Tray .....................................................................................................2-10
DBX Disk Tray (Rear)...........................................................................................2-11
DBW Disk Tray ....................................................................................................2-12


DBW Dense Box 84 HDDs ....................................................................................2-13


New Drive Type FMD ..................................................................................................2-14
DF850 ................................................................................................................2-14
DF850 Supports NF1000 ......................................................................................2-15
DBF Drive Box for NF1000....................................................................................2-16
Flash Module Drive (FMD) ....................................................................................2-17
DBF Spec............................................................................................................2-18
Power Supplies Batteries .............................................................................................2-19
Power Cable .......................................................................................................2-19
HUS 110 and HUS 130 Power Supply Unit .............................................................2-20
HUS 110 and 130 Batteries ..................................................................................2-21
HUS 110 Controller Board ....................................................................................2-22
HUS 130 Controller Board ....................................................................................2-23
HUS 110 and 130 Option Module ..........................................................................2-24
HUS 150 Battery — Front View .............................................................................2-25
HUS 150 Fan Unit — Front View ...........................................................................2-26
HUS 150 Controller Unit .......................................................................................2-27
HUS 150 I/O Modules ..........................................................................................2-28
Module Summary ........................................................................................................2-29
Module Review ...........................................................................................................2-30

3. Installation — Part 1 ......................................................................................... 3-1


Module Objectives ....................................................................................................... 3-1
Installation Resource Documentation ............................................................................. 3-2
Overview ............................................................................................................. 3-2
Maintenance Manual Overview .............................................................................. 3-3
Instructor Demonstration ...................................................................................... 3-4
System Assurance Document (SAD) ....................................................................... 3-5
Recommended Safety Precautions ................................................................................. 3-6
Electrostatic Discharge (ESD) ................................................................................ 3-6
Installing a New Frame ................................................................................................ 3-8
Procedure for Installing a New Frame .................................................................... 3-8


Tools Required for Installation ............................................................................... 3-9


Hitachi Modular Racks..........................................................................................3-10
Step 1: Unpacking the Rack Frame .......................................................................3-11
Step 2: Unpacking the Storage System..................................................................3-14
Step 4: Mounting Components on the Rack Frame .................................................3-15
Module Summary ........................................................................................................3-20
Module Review ...........................................................................................................3-21

4. Installation — Part 2 ......................................................................................... 4-1


Module Objectives ....................................................................................................... 4-1
Installing a New Frame ................................................................................................ 4-2
Procedure for Installing a New Frame .................................................................... 4-2
Step 5: Installing the Components ......................................................................... 4-3
Step 6: Connecting the Cables ............................................................................... 4-7
Step 7: Attaching the Decoration Panel .................................................................4-15
Step 8: Powering On the Storage System ..............................................................4-16
Step 9: Connecting a Service PC or Laptop to the Storage System ...........................4-18
Step 10: Installing and Updating the Firmware ......................................................4-19
Step 11: Setting the Storage System .....................................................................4-20
Step 12: Powering Off and Restarting Storage System ............................................4-23
Step 13: Connecting Host Interface Cables ............................................................4-24
Back End Configuration Kit (BECK) Tool ........................................................................4-25
BECK Tool Overview ............................................................................................4-25
Using the BECK Tool — Blank Configuration...........................................................4-26
Configuring HUS 150 ...........................................................................................4-27
Configuring HUS 150 ...........................................................................................4-28
Appendix A.................................................................................................................4-30
1. Overview ........................................................................................................4-31
2. Support Configurations and Schedules...............................................................4-32
3. Support Configuration Restrictions ....................................................................4-33
Support Configuration Restrictions BECK Tool Example ...........................................4-34
4. Summary ........................................................................................................4-35


4. Summary (Case Study #1: DBx + DBW + DBx) .................................................4-36


4. Conclusions .....................................................................................................4-37
4. Summary (Case Study #2: DBW + DBx or DBW)................................................4-38
Module Summary ........................................................................................................4-39
Module Review ...........................................................................................................4-40

5. Using the Hitachi Unified Storage Web Tool ..................................................... 5-1


Module Objectives ....................................................................................................... 5-1
Post-Installation Tasks ................................................................................................. 5-2
Web Tool Introduction ................................................................................................. 5-3
Web Tool Overview .............................................................................................. 5-3
Web Tool Functions .............................................................................................. 5-4
Location and Function of Ethernet Ports ................................................................. 5-5
IP Addresses on LAN Ports .................................................................................... 5-6
Preferred Way of Connecting ................................................................................. 5-7
Maintenance Mode ............................................................................................... 5-8
Entering Maintenance Mode .................................................................................. 5-9
Maintenance Mode User ID and Password .............................................................5-10
Setting Controller IP Addresses ....................................................................................5-11
Setting IP Addresses............................................................................................5-11
Verifying and Updating Firmware .................................................................................5-15
Disruptive Firmware Update Overview ...................................................................5-15
Preparing for Firmware Update .............................................................................5-16
Before Starting Initial Microcode Setup ..................................................................5-17
Selecting the Options ...........................................................................................5-18
Successful Firmware Update Completion ...............................................................5-20
Using the Web Tool in Normal Mode.............................................................................5-21
Normal Mode Requirements .................................................................................5-21
Cache Backup Battery Status ................................................................................5-23
Other Components ..............................................................................................5-24
Controller/Battery/Cache/Fan Status .....................................................................5-25
Status of Disk Trays.............................................................................................5-26


Collecting Traces with the Web Tool .............................................................................5-27


Trace Types ........................................................................................................5-27
Collecting a Simple Trace .....................................................................................5-28
Collecting a Controller Alarm Trace .......................................................................5-29
Collecting a Full Dump .........................................................................................5-30
Cache Memory Access Failure (Full Dump) ............................................................5-31
Troubleshooting — Open a Case ...........................................................................5-32
Technical Upload Facility (TUF).............................................................................5-33
Instructor Demonstration .....................................................................................5-34
Module Summary ........................................................................................................5-35
Module Review ...........................................................................................................5-36

6. Updating Hitachi Unified Storage Firmware ..................................................... 6-1


Module Objectives ....................................................................................................... 6-1
HUS Firmware Overview ............................................................................................... 6-2
Serial Number of HUS Box ............................................................................................ 6-3
Firmware Update Methods ............................................................................................ 6-4
Nondisruptive Firmware Update Overview .............................................................. 6-5
Nondisruptive Firmware Update Requirements ........................................................ 6-5
Nondisruptive Firmware Update Procedure ............................................................. 6-6
Verifying Successful Completion of Code Update ..................................................... 6-9
Successful Microcode Update................................................................................6-10
Module Summary ........................................................................................................6-11
Module Review ...........................................................................................................6-12

7. Hitachi Storage Navigator Modular 2 Installation and Configuration ............... 7-1


Module Objectives ....................................................................................................... 7-1
Overview .................................................................................................................... 7-2
Architecture ......................................................................................................... 7-2
Installation Requirements ..................................................................................... 7-3
Features and Functions ......................................................................................... 7-4
Initial Setup Tasks ....................................................................................................... 7-8


Initial Setup ......................................................................................................... 7-8


Installation .................................................................................................................7-13
Installation on Windows .......................................................................................7-13
Installation on Sun Solaris ....................................................................................7-15
Installation on Red Hat Linux ...............................................................................7-16
Instructor Demonstration .....................................................................................7-17
Storage Navigator Modular 2 Wizards ...........................................................................7-18
Add Array Wizard ................................................................................................7-18
Instructor Demonstration .....................................................................................7-20
User Management.......................................................................................................7-21
User Management in SNM 2 .................................................................................7-21
HUS 100 User Management Account Authentication .......................................................7-24
Account Authentication Overview ..........................................................................7-24
Default Authentication .........................................................................................7-25
Managing Account Authentication .........................................................................7-26
Setting Permissions .............................................................................................7-27
Storage Navigator Modular 2 Command Line Interface ...................................................7-28
Install ........................................................................................................................7-28
Start the Command Line Interface ........................................................................7-29
Check the Environment Variables ..........................................................................7-30
Register a Storage System ...................................................................................7-31
Create a RAID Group ...........................................................................................7-33
Referencing the RAID Groups ...............................................................................7-35
Deleting RAID Groups ..........................................................................................7-37
Creating Volumes ................................................................................................7-39
Format Volumes ..................................................................................................7-41
Referencing Volumes ...........................................................................................7-43
Display Help........................................................................................................7-45
Module Summary ........................................................................................................7-46
Module Review ...........................................................................................................7-47


8. RAID Group and Volume Configuration ............................................................ 8-1


Module Objectives ....................................................................................................... 8-1
Supported RAID Types ................................................................................................. 8-2
Overview ............................................................................................................. 8-2
Supported RAID Levels ......................................................................................... 8-3
RAID Groups versus Parity Groups ......................................................................... 8-7
Rules for Creating RAID Groups ............................................................................. 8-8
Drives Supported in Hitachi Unified Storage ...........................................................8-10
System and User Data Areas ................................................................................8-11
Creating a RAID Group ................................................................................................8-12
Expanding a RAID Group .............................................................................................8-14
Expand a RAID Group ..........................................................................................8-14
Expanding a RAID Group .....................................................................................8-16
Example .............................................................................................................8-18
Instructor Demonstration .....................................................................................8-19
Creating Volumes .......................................................................................................8-20
Rules for Creating Volumes ..................................................................................8-20
Volume Configuration ..........................................................................................8-21
How to Create a Volume ......................................................................................8-22
Changing Logical Unit Capacity ....................................................................................8-26
Volume Unification ..............................................................................................8-26
Changing LU Capacity ..........................................................................................8-27
Unifying a Volume ...............................................................................................8-31
Instructor Demonstration .....................................................................................8-33
Module Summary ........................................................................................................8-34
Module Review ...........................................................................................................8-35

9. Storage Allocation ............................................................................................ 9-1


Module Objectives ....................................................................................................... 9-1
Connectivity Between Storage and Hosts on HUS ........................................................... 9-2
Storage Allocation with HUS ......................................................................................... 9-2
Host Connection to HUS........................................................................................ 9-3


Mapping Volumes to Ports ..................................................................................... 9-5


Simple Settings ...................................................................................................9-15
Advanced Settings Explanations............................................................................9-16
Configuration Options for Different OS ..................................................................9-16
Queue Depth ......................................................................................................9-17
How to increase Queue Depth in SNM2 .................................................................9-18
Instructor Demonstration .....................................................................................9-19
Module Summary ........................................................................................................9-20
Module Review ...........................................................................................................9-21

10. Path Management........................................................................................... 10-1


Module Objectives ......................................................................................................10-1
HDLM Features and Benefits ........................................................................................10-2
Overview ............................................................................................................10-2
Features .............................................................................................................10-4
HDLM Pre-Installation Steps ...................................................................................... 10-14
Pre-Installation Process...................................................................................... 10-14
Using HDLM GUI ....................................................................................................... 10-17
HDLM GUI................................................................................................................ 10-17
Setting the HDLM Options Screen ....................................................................... 10-19
HDLM Path List Window ..................................................................................... 10-21
Using HDLM CLI for Path Management ....................................................................... 10-22
Using Command Line Interface (CLI) for Path Management .................................. 10-22
HDLM CLI Overview........................................................................................... 10-23
Viewing Path Information with the CLI ................................................................ 10-24
Changing Path Status with the CLI ...................................................................... 10-25
Module Summary ...................................................................................................... 10-26
Module Review ......................................................................................................... 10-27

11. Hitachi Unified Storage Program Products ..................................................... 11-1


Module Objectives ......................................................................................................11-1
Products: Array Based Software ...................................................................................11-2


Memory Management Layer.........................................................................................11-3


Cache Partition Manager..............................................................................................11-5
Cache Residency Manager ...........................................................................................11-6
Volume Migration........................................................................................................11-7
Replication Products....................................................................................................11-8
TrueCopy Extended Distance for Remote Backup......................................................... 11-11
TrueCopy Modular Distributed (TCMD) ...................................................... 11-12
Replication Products.................................................................................................. 11-13
What is Dynamic Provisioning? ................................................................................... 11-14
Dynamic Tiering ....................................................................................................... 11-16
Use Cases ................................................................................................................ 11-17
Software Bundles ...................................................................................................... 11-18
Module Summary ...................................................................................................... 11-21
Module Review ......................................................................................................... 11-22

12. Performing Hitachi Unified Storage Maintenance ........................................... 12-1


Module Objectives ......................................................................................................12-1
Maintenance Overview and Preparation ........................................................................12-2
Maintenance Overview .........................................................................................12-2
Instructor Demonstration .....................................................................................12-3
Maintenance Preparation......................................................................................12-4
General Maintenance Information ................................................................................12-5
General HUS Information .....................................................................................12-5
Getting the Part Numbers ....................................................................................12-7
Drive Firmware ...................................................................................................12-8
Finding Drive Firmware ........................................................................................12-9
Part Location .................................................................................................... 12-10
Drive Numbers in the Trays ................................................................................ 12-12
Replacing Hard Disk Drives ........................................................................................ 12-13
Safety Precautions ............................................................................................. 12-13
Replacing a Drive .............................................................................................. 12-14
Standard Time for Correction Copy or Copy Back ................................................. 12-16


Checking for Successful Replacement .................................................................. 12-18


Replacing the Hitachi Unified Storage Control Unit ....................................................... 12-19
Handle Components with Care............................................................................ 12-19
Wear Wrist Strap ............................................................................................... 12-20
Replacing the Control Unit.................................................................................. 12-21
IPv6 Usage Details ............................................................................................ 12-25
Replacing Hitachi Unified Storage ENC Unit and I/O Modules ........................................ 12-26
Wear Wrist Strap ............................................................................................... 12-26
Replacing the ENC Unit ...................................................................................... 12-27
Replacing an I/O Module .................................................................................... 12-30
Replacing the SFP Fibre Channel Host Connector ......................................................... 12-32
Wear Wrist Strap ............................................................................................... 12-32
Reviewing Host Connector Replacement Procedure .............................................. 12-33
Replacing the SFP Fibre Channel Host Connector ................................................. 12-34
HALM LED Locations .......................................................................................... 12-37
Replacing the Cache Battery Module........................................................................... 12-39
Introduction ...................................................................................................... 12-39
Battery Location ................................................................................................ 12-40
Module Summary ...................................................................................................... 12-43
Module Review ......................................................................................................... 12-44

13. Troubleshooting Hitachi Unified Storage ........................................................ 13-1


Module Objectives ......................................................................................................13-1
Detecting Failures .......................................................................................................13-2
Failure Detection Methods ....................................................................................13-2
SNMP Setup ........................................................................................................13-3
Troubleshooting with Error Messages ....................................................................13-7
Troubleshooting with LEDs ...................................................................................13-9
Hi-Track Monitor ....................................................................................................... 13-13
Installation Guide .............................................................................................. 13-13
Product Support ................................................................................................ 13-14
Components ..................................................................................................... 13-15


Summary Screen ............................................................................................... 13-17


Troubleshooting ................................................................................................ 13-19
Module Summary ...................................................................................................... 13-21
Module Review ......................................................................................................... 13-22

14. Using Constitute Files ..................................................................................... 14-1


Module Objectives ......................................................................................................14-1
Overview ...................................................................................................................14-2
Exporting and Importing Constitute Files ......................................................................14-4
Exporting a Configuration ............................................................................................14-6
Defining RAID Group, DP Pool and LUN Information ......................................................14-7
Viewing RAID Group, DP Pool and LUN Information .......................................................14-8
Viewing System Parameters .........................................................................................14-9
Configuring a Duplicate Storage System ..................................................................... 14-10
Instructor Demonstration .......................................................................................... 14-11
Module Summary ...................................................................................................... 14-12
Module Review ......................................................................................................... 14-13
Your Next Steps........................................................................................................ 14-14

15. DBX/DBW High Density Tray Installation....................................................... 15-1


Module Objectives ......................................................................................................15-1
Overview ...................................................................................................................15-2
Rules and Safety Considerations ..................................................................................15-3
Installation Rules and Tools .................................................................................15-3
Safety Ladder .....................................................................................................15-4
Genie Lift Assembly ....................................................................................................15-5
Unpack Genie Lift ................................................................................................15-5
Prepare Genie Lift - Attach Supporting Feet ...........................................................15-6
Prepare Genie Lift - Attach Lift Forks .....................................................................15-7
Prepare Genie Lift - Attach Load Platform ..............................................................15-8
Prepare Genie Lift - Unlock the Lift .......................................................................15-9
Prepare Genie Lift - Configure Lift Handle ............................................................ 15-10


Unpacking ........................................................................................................ 15-11


Rack and Rail Preparation .......................................................................................... 15-12
Prepare Rack Stabilizer ...................................................................................... 15-12
Install DBX Slide Brackets .................................................................................. 15-13
Attach DBX Rails ............................................................................................... 15-14
Installing DBX/DBW into Rack .................................................................................... 15-15
Install DBX/DBW ............................................................................................... 15-15
DBX Fail Safe Lock ............................................................................................ 15-16
DBX Tray Releases ............................................................................................ 15-17
Mounting with Genie Lift .................................................................................... 15-18
Install DBX/DBW in the Rack .............................................................................. 15-19
Cable Routing Brackets ............................................................................................. 15-20
Install Cable Routing Brackets ............................................................................ 15-20
Cable Installation ...................................................................................................... 15-21
Install DBX/DBW ENC and Power Cables ............................................................. 15-21
Routing Channels for Power Cables ..................................................................... 15-23
Module Summary ...................................................................................................... 15-24
Your Next Steps........................................................................................................ 15-25

Glossary .................................................................................................................. G-1

Evaluating This Course ............................................................................................ E-1

Introduction
Welcome and Introductions

 Student Introductions
• Name
• Position
• Experience
• Expectations


Course Description

 This 4-day instructor-led course covers the installation and configuration of the Hitachi
Unified Storage (HUS) family and describes the specifications, architecture and components
for each model. In addition, it takes you through the various steps to plan and perform an
installation and introduces you to the Web Tool and the Hitachi Storage Navigator Modular 2
GUI and CLI.
 Once familiar with these tools, you will perform storage array operations, including
creating RAID groups and logical unit numbers (LUNs), exporting Constitute files and
performing maintenance activities such as replacing and adding components. Finally, you
will go through the firmware update process, operational alerts and issues, and
troubleshooting options.
 The classroom sessions are supported by lab exercises.


Prerequisites

 Prerequisite Courses
• CCI0110 — Storage Concepts
• THE1860 — Hitachi Data Systems Foundations — Modular
• TCC1690 — Introduction to the Hitachi Adaptable Modular Storage 2000
Family
 Other Prerequisites
• Basic knowledge and understanding of SAN
• Use of Microsoft® Windows® Operating System
 Supplemental Courses
• TSI2258 — Provisioning for Hitachi Unified Storage
• CSI0157 — Data Protection Techniques for Hitachi Modular Storage


Course Objectives


Course Topics


Learning Paths

 Are a path to professional certification
 Enable career advancement
 Are for customers, partners
and employees
• Available on HDS.com,
Partner Xchange and HDSnet
 Are available from the instructor
• Details or copies

HDS.com: http://www.hds.com/services/education/

Partner Xchange Portal: https://portal.hds.com/

HDSnet: http://hdsnet.hds.com/hds_academy/

Please contact your local training administrator if you have any questions regarding Learning
Paths or visit your applicable website.


HDS Academy Is on Twitter and LinkedIn

Follow the HDS Academy on Twitter for regular training updates.

LinkedIn is an online community that enables students and instructors to actively participate
in online discussions related to Hitachi Data Systems products and training courses.

These are the URLs for Twitter and LinkedIn:

http://twitter.com/#!/HDSAcademy

http://www.linkedin.com/groups?gid=3044480&trk=myg_ugrp_ovr


Collaborate and Share

 Learn what is new in the Academy


 Ask the Academy a question
 Discover and share expertise
 Shorten your time to mastery
 Give your feedback
 Participate in forums

Academy in theLoop!

theLoop:

http://loop.hds.com/community/hds_academy/course_announcements_and_feedback_community ― HDS internal only


Hitachi Data Systems Community

JOIN THE CONVERSATION!


 Learn best practices to optimize
your IT environment.
 Share your expertise with
colleagues facing real challenges.
 Connect and collaborate with
experts from peer companies
and HDS.

community.hds.com


HDS Community: http://community.hds.com

1. Hitachi Unified Storage Family Overview
Module Objectives

 Upon completion of this module, you should be able to:


• Describe the Hitachi Unified Storage (HUS) family features
• List the HUS model configurations
• List the HUS family management tools


Overview

Market Shift to Unstructured Data

[Chart: worldwide data growth (Source: IDC; assumes 100KB average file size)
• Total data: 2,405PB (2005) → 16,416PB (2010) → 79,796PB (2014)
• Structured data: +33% per year; share falls from 64% (2005) to 36% (2010) to 23% (2014)
• Unstructured data: +59% per year; share grows from 36% (2005) to 64% (2010) to 77% (2014)]

 1PB holds about 10B objects or files: over 100T objects stored in 2010
 File and content solutions are critical to solve this management problem
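
A quick back-of-the-envelope check of these figures (a sketch for illustration only; the 100KB average object size is the assumption stated on the chart):

```python
# Back-of-the-envelope check of the chart figures above.
# Assumption (stated on the chart): average file/object size is 100KB.

PB = 10**15                      # bytes per petabyte (decimal)
avg_object_size = 100 * 10**3    # 100KB in bytes

objects_per_pb = PB / avg_object_size
print(f"Objects per PB: {objects_per_pb:.0e}")          # ~1e+10, i.e. about 10 billion

objects_2010 = 16_416 * objects_per_pb                  # 16,416PB stored in 2010
print(f"Objects stored in 2010: {objects_2010:.2e}")    # ~1.6e+14, i.e. over 100 trillion

# Growth-rate check: unstructured data goes from 36% of 2,405PB (2005)
# to 77% of 79,796PB (2014) over 9 years.
unstructured_2005 = 0.36 * 2_405
unstructured_2014 = 0.77 * 79_796
cagr = (unstructured_2014 / unstructured_2005) ** (1 / 9) - 1
print(f"Unstructured CAGR 2005-2014: {cagr:.0%}")       # ~61%, in line with the +59%/yr on the chart
```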


Market Drivers to Unified Storage

 Cost savings from consolidating


separate block and file storage
systems
• CAPEX savings from
consolidation
• OPEX savings from
environment, licensing and
maintenance reductions
 Flexibility
• Allocate and manage capacity
wherever it is needed
 Investment protection
• Start with block storage and
upgrade to unified

Unified storage solutions have recently been designed and adopted to address this problem.
Unified storage provides several benefits including:

In an environment where there is a predominance of one type of usage (such as file storage for
unstructured data) but there remains a need for some block storage (an Exchange database,
for example), unified storage can save not only on CAPEX but also on OPEX through reduced
floor space, power consumption, administration, licensing and maintenance.

Customers can also gain advantages from unified storage through its flexibility in quickly and
efficiently provisioning storage to block and file applications on demand.

Customers who want to deploy a block storage solution today, with the option to consolidate
their file storage onto the same platform in the future, will benefit from the investment
protection that unified storage provides.


Hitachi Key Differentiators

 Unified redefined by Hitachi


• Hitachi Unified Storage has intelligent storage management for all types
of data — file, block and object
• Hitachi Unified Storage is unified storage with the most balanced
scalability in the market
• Hitachi Unified Storage unifies the storage infrastructure with a single
management framework

 Hitachi has combined our leading block technology with the file
storage technology from BlueArc, now a part of Hitachi Data
Systems, to deliver unified storage without compromise

We know we are not first to market, but we are better than our competitors because of the
following three key points:

Only platform to support three data types: Block, File and Object*.

Unified redefined means that we are providing unified storage management not just on this
new platform, but across the entire HDS portfolio (AMS – HUS – HNAS – HCP – HDI – VSP).

Unified without compromise means that unlike our competitors who are strong in one data type,
but weak on the other, we have strength in both block and file. Customers get the best of both
worlds without compromise.

* Object data support provides unique differentiation for us since no other vendor does it as
comprehensively as we do. First, the File Module supports an object-based file system which
enables tiering and migration, object-based replication and fast searches of data. Second, we
support the Hitachi Content Platform (HCP) to provide a true object store with regulatory
compliance and the ability to add custom metadata. But unlike one competitor, the HCP can use the
Block Module capacity as a common storage pool to store objects. This is more space efficient
and cost effective than our competitors’ separate and siloed object store solution.


Unified Redefined: Unifying Across the Storage Infrastructure

[Diagram: the Hitachi portfolio unified under a single management framework (Hitachi Command Suite)
• Specialized appliances: HDI Software, HDI Single, HDI Dual
• Content platform: HCP 300, HCP 500
• File-only platform: HNAS 3080, HNAS 3090, HNAS 3200
• Unified platform: HUS 110, HUS 130, HUS 150, HUS VM
• Block-only platforms: HUS 110, HUS 130, HUS 150, HUS VM, VSP]


Hitachi Unified Storage Portfolio

Unified redefined
 Unified management platform for all data types
 Applications from local office to entry enterprise
 Flash storage options across the product line

[Diagram: positioning of the models
• Regional office: HUS 110
• Medium business: HUS 130
• Large business: HUS 150
• Entry enterprise: HUS VM
File, block and object data and solid state drives are offered across the line, which scales
from the lowest cost per capacity with internal virtualization (HUS 110) to the highest
performance with external virtualization (HUS VM).]

Our unified storage portfolio redefines unified storage. HDS offers unified management of all
data across the entire portfolio. The individual models are designed to span the needs of
organizations from regional offices to entry enterprise. We offer flash storage across our entire
product line. When making a purchase decision, one of the primary tradeoffs to consider is
between the price of the capacity purchased and the performance. Our portfolio offers models
that span the price, capacity and performance range to meet the needs of all businesses. All
models support internal virtualization technologies such as thin provisioning and dynamic
tiering, while the HUS VM model brings enterprise storage virtualization from our enterprise
VSP platform.


Hitachi Unified Storage Models

Components      HUS 110      HUS 130      HUS 150      HUS VM

Max Drives      120          360          960          1152
Max. Cache      8+128GB      32+128GB     32+128GB     256+128GB
Connectivity    FC, iSCSI, GbE (HUS 110, 130, 150); FC, GbE (HUS VM)
Max. Capacity   360TB        1PB          2.9PB        3.5PB

Drive Types: SSD, FMD, SAS and NL-SAS
Flash: Solid State Drives; Flash Module Drives (HUS 150 and HUS VM only)
File System: 256TB, 16M file system objects per directory
Protocols: NFS, CIFS, FTP, HTTP, SSL, SNMP
Management: Hitachi Command Suite
Maximum cache = block cache + file cache
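
For a rough cross-check of the maximum capacity row, the sketch below multiplies the drive counts by a 3TB NL-SAS drive size; the 3TB figure is an assumption (the largest capacity drive of that generation), not a value taken from the table.

```python
# Rough raw-capacity cross-check for the table above.
# Assumption (not from the table): the largest NL-SAS drive of the generation is 3TB.

TB_PER_DRIVE = 3
max_drives = {"HUS 110": 120, "HUS 130": 360, "HUS 150": 960, "HUS VM": 1152}

for model, drives in max_drives.items():
    capacity_tb = drives * TB_PER_DRIVE
    # 120*3 = 360TB, 360*3 = 1,080TB (~1PB), 960*3 = 2,880TB (~2.9PB), 1152*3 = 3,456TB (~3.5PB)
    print(f"{model}: {drives} x {TB_PER_DRIVE}TB = {capacity_tb}TB (~{capacity_tb / 1000:.1f}PB)")
```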

To be specific, the Unified Storage portfolio includes 4 models. As you can see they differ in the
scalability of their capacity, cache and connectivity. All of them utilize Hitachi Command Suite
for management of storage.

FC — Fibre Channel

iSCSI — Internet SCSI or Internet Small Computer Systems Interface

GbE — Gigabit Ethernet


Configurations

HUS 110 Configurations

Controller Box (Base Units)
 2U Block Module
• 2.5-inch x 24 internal drives
• 3.5-inch x 12 internal drives

Drive Box (Expansion Units)
 2U LFF Standard Disk Tray
• 3.5-inch x 12 drives (SAS 7.2k HDD)
 2U SFF Standard Disk Tray
• 2.5-inch x 24 drives (SAS 10k, SAS 15k HDD, SSD)


HUS 110 Box Names

Controller Box Name   Model Name   Height of Controller Box   Number of Drives   Drive Type
CBXSS                 DF850-CBSS   2U                         24                 2.5-inch
CBXSL                 DF850-CBSL   2U                         12                 3.5-inch

Drive Box Name   Model Name   Height of Drive Box   Number of Drives   Drive Type
DBS              DF850-DBS    2U                    24                 2.5-inch
DBL              DF850-DBL    2U                    12                 3.5-inch


HUS 130 Configurations

Controller Box (Base Units)
 2U Block Module
• 2.5-inch x 24 internal drives
• 3.5-inch x 12 internal drives

Drive Box (Expansion Units)
 2U LFF Standard Disk Tray
• 3.5-inch x 12 drives (SAS 7.2k HDD)
 2U SFF Standard Disk Tray
• 2.5-inch x 24 drives (SAS 10k, SAS 15k HDD, SSD)
 4U LFF Dense Disk Tray
• 3.5-inch x 48 drives (SAS 7.2k HDD)
 5U LFF Ultra Dense Disk Tray
• 3.5-inch x 84 drives (SAS 7.2k HDD)

Page 1-10
Hitachi Unified Storage Family Overview
HUS 130 Box Names

HUS 130 Box Names

Controller Box Name   Model Name   Height of Controller Box   Number of Drives   Drive Type
CBSS                  DF850-CBSS   2U                         24                 2.5-inch
CBSL                  DF850-CBSL   2U                         12                 3.5-inch

Drive Box Name        Model Name   Height of Drive Box        Number of Drives   Drive Type
DBS                   DF850-DBS    2U                         24                 2.5-inch
DBL                   DF850-DBL    2U                         12                 3.5-inch
DBX                   DF850-DBX    4U                         48                 3.5-inch
DBW                   DF850-DBW    5U                         84                 3.5-inch

Page 1-11
Hitachi Unified Storage Family Overview
HUS 150 Configurations

HUS 150 Configurations

Controller Box (Base Units)
 3U Block Module
• No internal HDDs

Drive Box (Expansion Units)
 2U LFF Standard Disk Tray
• 3.5-inch x 12 drives (SAS 7.2k HDD)
 2U SFF Standard Disk Tray
• 2.5-inch x 24 drives (SAS 10k, SAS 15k HDD, SSD)
 4U LFF Dense Disk Tray
• 3.5-inch x 48 drives (SAS 7.2k HDD)
 5U LFF Ultra Dense Disk Tray
• 3.5-inch x 84 drives (SAS 7.2k HDD)
 2U FMD Standard Tray
• FMD x 12

Page 1-12
Hitachi Unified Storage Family Overview
HUS 150 Box Names

HUS 150 Box Names

Controller Box Name   Model Name        Height of Controller Box   Number of Drives   Drive Type
CBL                   DF850-CBL         3U                         0                  n/a

Drive Box Name        Model Name        Height of Drive Box        Number of Drives   Drive Type
DBS                   DF850-DBS         2U                         24                 2.5-inch
DBL                   DF850-DBL         2U                         12                 3.5-inch
DBF                   HUS100/HUS-VM     2U                         12                 FMD
DBX                   DF850-DBX         4U                         48                 3.5-inch
DBW                   DF850-DBW         5U                         84                 3.5-inch

Page 1-13
Hitachi Unified Storage Family Overview
HUS Family Summary

HUS Family Summary

HUS 110
 8GB cache
 8 FC and 4 iSCSI ports
 HDDs: mix up to 120 SSD, SAS and Capacity SAS
 8 SAS links (6Gb/sec)
 Max. 4 standard (DBS) 2.5-inch trays
 Max. 9 standard (DBL) 3.5-inch trays
 2048 volumes and 50 RAID groups

HUS 130
 16GB – 32GB cache
 16FC, 8FC and 4 iSCSI
 HDDs: mix up to 360 SSD, SAS and Capacity SAS
 16 SAS links (6Gb/sec)
 Max. 9 standard (DBS) 2.5-inch trays
 Max. 19 standard (DBL) 3.5-inch trays
 Max. 5 dense (DBX) 3.5-inch trays
 Max. 4 ultra dense (DBW) 3.5-inch trays
 4096 volumes and 75 RAID groups

HUS 150
 16-32GB cache
 16FC, 8FC and 4 or 8 iSCSI
 HDDs: mix up to 960 SSD, SAS and Capacity SAS
 32 SAS links (6Gb/sec)
 Max. 40 standard (DBS) 2.5-inch trays
 Max. 40 standard (DBL) 3.5-inch trays
 Max. 20 dense (DBX) 3.5-inch trays
 Max. 4 to 11 ultra dense (DBW) 3.5-inch trays (the limit depends on firmware version; see Appendix A in the Installation modules)
 Max. 40 FMD trays = 480 FMDs (max. recommended FMD count = 40)
 4096 volumes and 200 RAID groups total

Throughput, capacity and scalability increase from HUS 110 through HUS 150.

Page 1-14
Hitachi Unified Storage Family Overview
Features

Features

HUS Product Family Features

 Dynamic Virtual Controller Front-end


• Allows access by any host port to any volume using ports on either
controller
 Load Balancing
• Hardware-based load balancing of individual volume workloads between
the controllers
 SAS Back-end
• The HUS 100 systems use a 6Gb/sec SAS back-end architecture
 Dynamic Provisioning
• Provides wide striping across many RAID groups, eliminating hot spots
 Memory Management Layer
• Enhanced use of cache memory
 Dynamic Tiering
• Automated data placement for higher performance and lower costs

Dynamic Virtual Controller Front End: A feature where a request for LU at any host port can be
executed by either of the controllers.

Load Balancing: The controllers can load balance when there is an imbalance between the
controllers. LU management is shifted to an underutilized controller.

SAS Back End: 2 or 4 SAS I/O controller processors (IOCs) and SAS disk switches work in
parallel with the Intel CPUs and DCTL ASICs. Each IOC chip auto-selects which of its eight
6Gb/sec SAS links to use to communicate with each disk in that set of trays. The back end from
each IOC is operated as a matrix instead of static connections.

Dynamic Provisioning: Provides wide striping across RAID groups. Because individual RAID groups
may not carry a uniform workload, a single RAID group can become a performance bottleneck. With
Dynamic Provisioning, volumes are spread across RAID groups, which also allows for thin
provisioning.
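To picture the wide-striping idea, the sketch below allocates fixed-size pages round-robin across a pool's RAID groups as a volume is written. This is a conceptual illustration written for this guide; the 32MB page size, class and names are assumptions, not the actual Dynamic Provisioning internals.

```python
# Minimal sketch of wide striping in a thin-provisioned pool.
# Page size and pool layout are illustrative assumptions only.

PAGE_SIZE_MB = 32

class DynamicPool:
    def __init__(self, raid_groups):
        self.raid_groups = raid_groups      # e.g. ["RG-0", "RG-1", "RG-2"]
        self.next_rg = 0                    # round-robin pointer
        self.page_map = {}                  # (volume, page index) -> RAID group

    def write(self, volume, offset_mb):
        """Allocate the page backing this offset on first write, spreading
        consecutive allocations across all RAID groups in the pool."""
        key = (volume, offset_mb // PAGE_SIZE_MB)
        if key not in self.page_map:                      # thin: allocate on demand
            self.page_map[key] = self.raid_groups[self.next_rg]
            self.next_rg = (self.next_rg + 1) % len(self.raid_groups)
        return self.page_map[key]

pool = DynamicPool(["RG-0", "RG-1", "RG-2"])
for mb in range(0, 256, 32):
    print(mb, pool.write("DP-VOL-1", mb))   # pages land on RG-0, RG-1, RG-2, RG-0, ...
```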

Page 1-15
Hitachi Unified Storage Family Overview
HUS Product Family Features

Dynamic virtual controller front-end — performance design

 Load balancing automatic with active-active mode


[Figure: with other modular systems, each host's path management software uses an active path to the owning controller and a standby path to the other controller; with the AMS 2000 and HUS 100 families, the paths to CTL 0 and CTL 1 are active-active for LU0, LU1 and LU2.]

The user does not need to factor controller and port load balancing into the performance design. Simply set the path management software on all hosts to active-active mode, and the controller load is balanced automatically.
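A toy model of that behavior is sketched below; the threshold, workload numbers and function are invented for illustration and do not represent the actual firmware algorithm. The idea is simply that when one controller's processor load runs well above the other's, ownership of some LUs is shifted to the quieter controller.

```python
# Toy model of controller load balancing: if the load difference exceeds a
# threshold, move LU ownership from the busy controller to the idle one.

def rebalance(lu_load, owner, threshold=25):
    """lu_load: LU -> load units, owner: LU -> 0 or 1. Returns new ownership."""
    usage = [sum(l for lu, l in lu_load.items() if owner[lu] == c) for c in (0, 1)]
    busy, idle = (0, 1) if usage[0] > usage[1] else (1, 0)
    while usage[busy] - usage[idle] > threshold:
        # move the lightest LU on the busy controller across
        lu = min((lu for lu in lu_load if owner[lu] == busy), key=lu_load.get)
        owner[lu] = idle
        usage[busy] -= lu_load[lu]
        usage[idle] += lu_load[lu]
    return owner

owner = rebalance({"LU0": 50, "LU1": 20, "LU2": 10}, {"LU0": 0, "LU1": 0, "LU2": 1})
print(owner)   # LU1 is shifted to controller 1, evening out the load
```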

Dynamic virtual controller front-end — performance design

 Automatic optimization for performance


• When the access rate on one controller's processor becomes high, the average response
time for that controller becomes long
• If processor usage of the two controllers is balanced by automatic load balancing,
response time remains good

[Figure: CTL 0 processor usage has become high because the access pattern changed. Before automatic load balancing, MPU usage is 80% on CTL 0 and 20% on CTL 1, with average response times of 10 ms and 3.6 ms. After automatic load balancing, MPU usage is 50% on each controller and the average response time is 5.0 ms on both.]

Page 1-16
Hitachi Unified Storage Family Overview
HUS Product Family Features

Dynamic virtual controller front-end — performance design

 Performance aggregation
• I/O is processed on two controllers
• Port performance can equal maximum for both controllers

[Figure: without cross-path processing, I/O requests to a port are limited to one controller's maximum throughput; with automatic load balancing, I/O requests to a port can be processed by the MPUs of both CTL 0 and CTL 1.]

I/O requests to the port can be processed on two controllers by using a cross-path mechanism.
So the port performance can exceed the maximum performance of a single controller, and it
can be expanded to the maximum performance for both controllers.

Page 1-17
Hitachi Unified Storage Family Overview
HUS Product Family Features

Dynamic virtual controller front-end — microcode updates

 Benefits
• Nondisruptive firmware updates are easily and quickly accomplished
• Firmware can be updated without interrupting I/O
[Figure: on other modular systems, the user must change paths with host path management software before updating firmware — LU ownership is moved to the other controller while the firmware is updated and the controller rebooted. On the AMS 2000 and HUS 100 families there is no requirement to change paths: commands are transferred internally between controllers while each controller's firmware is updated and rebooted in turn.]

For firmware updates:

• No need to use host path management software


• No need to change path from firmware-updating CTL to other CTLs

Page 1-18
Hitachi Unified Storage Family Overview
HUS Product Family Features

 Green IT
• Power savings (power down) option available for RG on all systems
 Tray power saving available for DBWs on HUS 150 (V6)
• Higher density drives using less power = fewer BTUs
 Saves money, saves the environment
• Variable fan speeds
 Internal temperature of subsystem will control fan speed
 Encryption
• Hardware Encryption
 Write data (plaintext) from the host is encrypted with keys by the encrypting back-end (BE)
input/output (I/O) module and is then stored on the physical drives
 VAAI (VMware API for array integration) support
• All HUS models support VAAI

VAAI — VMware API for Array Integration

Hitachi Unified Storage undertakes the operations to copy and back up on behalf of VMware.

RG — RAID Group

DBW — Disk Box Wide

 Extreme reliability
• 99.999% data availability with no single point of failure
• Nondisruptive firmware updates
• Hot swappable major components
• Cache backed up to flash on power failure (unlimited retention)
• Flexible drive sparing with no copy back required after a RAID rebuild
• In-system and remote data replication options
• Support for RAID-6

Page 1-19
Hitachi Unified Storage Family Overview
Model Architecture

Model Architecture

Hitachi Unified Storage Family

Internal Names

 HUS 110
• DF850XS
 HUS 130
• DF850S
 HUS 150
• DF850MH

Note : The internal names are given only for their relevance to product documentation.
When referring to Hitachi Unified Storage products, please always use the official
product names of HUS 110, HUS 130 or HUS 150.

Page 1-20
Hitachi Unified Storage Family Overview
HUS 110 (Block Module) Architecture

HUS 110 (Block Module) Architecture

[Figure: HUS 110 block module — each of the two controllers has a SAS controller providing 4 x 6Gb/sec back-end links.]


The controller mother board has the following:

• 1 SAS port
• 4 FC ports
• 2 Ethernet ports
• 1 slot for the expansion card

Page 1-21
Hitachi Unified Storage Family Overview
HUS 130 (Block Module) Architecture

HUS 130 (Block Module) Architecture

[Figure: HUS 130 block module back end — 330 disks (SAS, SSD).]


HUS 130 does not have I/O modules.

The controller mother board has the following:

• 2 SAS ports
• 4 FC ports
• 2 Ethernet ports
• 1 slot for the expansion card

Page 1-22
Hitachi Unified Storage Family Overview
HUS 150 (Block Module) Architecture

HUS 150 (Block Module) Architecture



The controller mother board has the following:

• 4 SAS ports

If both I/O modules are installed:

• 8 FC ports, or 4 iSCSI ports, or 4 FC ports plus 2 iSCSI ports

If only one I/O module is installed:

• 4 FC ports or 2 iSCSI ports


• 2 Ethernet ports

Page 1-23
Hitachi Unified Storage Family Overview
Common Features Across the Hitachi Unified Storage Models

Common Features Across the Hitachi Unified Storage Models

 Host Interface
• FC — 8Gb/sec
• iSCSI — 10Gb/sec
• iSCSI — 1Gb/sec (HUS 110 and HUS 130)
 Each front-end port can connect up to:
• 128 FC devices
• 255 iSCSI devices
 Port security
 Internal bus is PCIe 2.0
 Cache protected by flash backup

PCIe – Peripheral Component Interconnect Express

Port security is also known as host-group security and is enabled per front-end port.

Hitachi Unified Storage uses a PCIe 2.0 x8 (8-lane) internal bus.

In PCIe 2.0, each lane provides 500MB/sec.
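As a quick worked check of what an x8 PCIe 2.0 bus implies (simple arithmetic only, not a measured throughput figure):

```python
# PCIe 2.0: ~500 MB/sec usable per lane, 8 lanes per internal bus
lanes = 8
per_lane_mb_s = 500
print(lanes * per_lane_mb_s, "MB/sec per direction")   # 4000 MB/sec, roughly 4 GB/sec
```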

Page 1-24
Hitachi Unified Storage Family Overview
Cache Protection

Cache Protection

HUS 100 Flash Backup Functionality

 Write-back
• Default logic for writes received from hosts
• Host issues a write
• Data written into cache and acknowledged to host
• Data (dirty) then asynchronously destaged to disks
 Electrical power lost
• Cache (volatile) contents are backed up to flash memory (non-volatile)
 Flash memory (non-volatile)
• Contents are retained for an infinite time

[Figure: the host write is acknowledged (ACK OK) once it lands in volatile cache memory; a small battery keeps the cache alive long enough for its contents to be copied to non-volatile flash memory, and dirty data is destaged to the drives. Maximum backup time = infinite.]

The HUS 100 product provides an improved method to handle dirty data in cache when a power
outage occurs. HUS 100 models still use a battery to keep cache alive during an outage, but
only for as long as it takes to back up the contents of cache to a flash memory module installed
on the controller board. Once cache data has been copied to flash, the data will be safe for an
infinite amount of time and can be recovered to cache at any time once power is restored and
the system is again powered on.

Batteries can be smaller due to cache data being copied to flash.

Batteries can be fully recharged in 3 hours, enabling 2 consecutive outages to occur with no
loss of data.

Note: The second consecutive outage without a recharge will trigger write-through mode by
default (user-definable) until the battery is recharged to full, thus avoiding loss of cached data
under any circumstances.
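The write-back and flash-backup flow described above can be pictured with the small model below. This is a conceptual sketch only; the class and method names are invented for this guide and do not represent the HUS firmware.

```python
# Conceptual model of write-back caching with battery-assisted flash backup.

class CacheController:
    def __init__(self):
        self.cache = {}     # volatile cache: block -> data
        self.flash = {}     # non-volatile flash module on the controller board
        self.disk = {}      # back-end drives

    def host_write(self, block, data):
        self.cache[block] = data          # 1. land the write in cache
        return "ACK"                      # 2. acknowledge the host immediately

    def destage(self):
        """Asynchronously flush dirty cache data to the drives."""
        self.disk.update(self.cache)
        self.cache.clear()

    def on_power_loss(self):
        """Battery keeps cache alive just long enough to copy it to flash,
        where it can be retained indefinitely."""
        self.flash = dict(self.cache)

    def on_power_restore(self):
        """Recover the backed-up dirty data into cache, then destage it."""
        self.cache = dict(self.flash)
        self.flash.clear()
        self.destage()

ctl = CacheController()
ctl.host_write("LBA100", b"payload")
ctl.on_power_loss()        # dirty data preserved in flash
ctl.on_power_restore()     # ...and eventually written to disk
print(ctl.disk)            # {'LBA100': b'payload'}
```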

Page 1-25
Hitachi Unified Storage Family Overview
On Power Outage with HUS 100 (Backup)

On Power Outage with HUS 100 (Backup)

 Steps performed on power outage


• All cache copied to flash
 Parallel for both controllers
• Backup data is unencrypted
 If backup fails controller blocked on next reboot

[Figure: on both Controller 0 and Controller 1, the volatile cache contents are copied to the non-volatile flash memory.]

Page 1-26
Hitachi Unified Storage Family Overview
On Return of Power (Restore)

On Return of Power (Restore)

 Controller starts to reboot


 Consistency checked between flash memory data (serial #) and HUS
management information (system disks)
• If consistent, restoration starts
 Executed in both controllers, parallel
 On successful completion, array continues with normal operation
 If fails, controller is blocked with error message
• User data lost!
[Figure: the serial number stored with the flash memory backup in each controller is compared with the serial number in the HUS management information on the system disks before the cache contents are restored.]

If the blackout ends while the flash backup is still in progress, the array starts its boot-up process. At the beginning of boot-up, the remaining flash backup is completed first (all cache data is backed up to the flash memory once), and then all data is restored from flash memory back to cache memory.

Page 1-27
Hitachi Unified Storage Family Overview
Clearing the Data From Flash Memory

Clearing the Data From Flash Memory

 After restoration completed


• Battery is fully charged (100%)
• Backup data in flash memory is cleared
 Maximum of 3 hours for full battery charge
• 1.5 hours for 1 flash backup

[Figure: once the battery is 100% charged and restoration is complete, the backup data in the flash memory (non-volatile) of both controllers is cleared.]

It takes about 3 hours for the battery to fully charge.

Page 1-28
Hitachi Unified Storage Family Overview
Write Command Mode (During Battery Charging)

Write Command Mode (During Battery Charging)

 100% battery status


• Execute 2 cycles of flash backup
 50-99% battery status for both batteries
• Can execute 1 backup cycle
 <50% battery status
• User indicates host write behavior

Battery charging takes 1.5 hours to reserve one flash backup.

Write Command Mode        Advantage                    Disadvantage
Write Through (default)   No risk of user data loss    Low host I/O performance
Write Back                High host I/O performance    Risk of user data loss

The Battery Charge Write Command mode can be set:

• Using the Web Tool (in Maintenance Mode)


• Using Storage Navigator Modular 2 (SNM 2)
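One plausible reading of these battery-level rules is sketched below as a small decision helper. It is illustrative only — it is not an SNM 2 or Web Tool interface, and mapping charge level to write mode this way is an assumption based on this slide.

```python
# Sketch of the battery-level rules on this slide; not a management interface.

def backup_cycles(battery_pct):
    """How many flash backup cycles the batteries can still cover."""
    if battery_pct == 100:
        return 2
    if battery_pct >= 50:
        return 1
    return 0

def write_mode(battery_pct, user_choice="Write Through"):
    # Assumption: while at least one backup cycle is covered, cached writes stay
    # protected; below 50% the user-selected Battery Charge Write Command mode
    # applies (Write Through by default).
    return "Write Back" if backup_cycles(battery_pct) >= 1 else user_choice

for pct in (100, 75, 30):
    print(pct, backup_cycles(pct), write_mode(pct))
```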

Page 1-29
Hitachi Unified Storage Family Overview
Management Tools

Management Tools

Management Tools Overview

Hitachi Command Suite 7


[Figure: Hitachi Command Suite 7 spans the element managers — Storage Navigator (SN), Storage Navigator 2 (SN2) and the command line interface for Hitachi Universal Storage Platform, Universal Storage Platform V, Virtual Storage Platform and Hitachi Unified Storage (HUS) VM; Storage Navigator Modular 2 (SNM 2) for HUS and older AMS systems; and the Web GUI and SMU for Hitachi NAS Platform and the file and content storage platforms.]

 Single management tool


• All Hitachi storage systems and virtualized storage environments
 Common GUI and CLI
• No need to switch to element managers for everyday storage
management tasks
Hitachi Command Suite includes a major re-architecture that further integrates the products
from the bottom up. Instead of separate products linked together, the software products in the
Hitachi Command Suite use a common code base and moving forward will use a common
database of shared information. The Hitachi Command Suite software products will all use the
same graphical user interface, based on Adobe Flex, and share a common user experience. The
transition from product to product is virtually transparent to the user, who can simply click on a
task function in a common menu instead of having to know which product within the suite is
being accessed (such as, Hitachi Device Manager or Hitachi Tiered Storage Manager).
All configuration and storage tier information is combined and synchronized in one database
accessed by multiple products, rather than on separate databases requiring larger data
repositories that could be subject to inconsistencies.
The new design improves the efficiency and reliability of storage management information, as
well as a consistent and more flexible user interface.
SN – Hitachi Storage Navigator
SNM 2 – Hitachi Storage Navigator Modular 2
AMS – Hitachi Adaptable Modular Storage

Page 1-30
Hitachi Unified Storage Family Overview
Management Tools Overview

 Hitachi Storage Navigator Modular 2


• Online volume migrations
• Configure and manage Hitachi replication products
• Online firmware updates and other system maintenance functions
• Simple Network Management Protocol (SNMP) integration
• Collect trace, collect and display performance information, and display
logs
• Wizard performs basic configuration

Hitachi Storage Navigator Modular 2 (SNM 2) can be installed in either Microsoft® Windows®,
Solaris or RH Linux environments. It is the integrated interface for standard firmware and
software features of Hitachi Unified Storage (HUS). It is required for taking advantage of the
full feature sets that HUS offers.

SNM 2 can be installed as a GUI or a CLI.

Using the GUI requires Internet Explorer (or another supported browser).

For customer use, the JRE component is not required, but a CE who needs the Additional Settings
applet must install JRE 1.6.

Page 1-31
Hitachi Unified Storage Family Overview
Module Summary

Module Summary

 In this module, you should have learned:


• HUS family features
• HUS model configurations
• HUS family management tools

Page 1-32
Hitachi Unified Storage Family Overview
Module Review

Module Review

1. Select the 3 true statements.


A. HUS family has 6Gb/sec SAS back-end ports
B. HUS family supports FC disks
C. HUS 130 supports 4GB and 8GB DIMMs
D. HUS controller cache is backed up to flash for unlimited retention
E. HUS 110 model can be upgraded to HUS 130
2. A customer purchases the following configuration — a CBSS with 3
DBL.
What is the total number of disks, assuming that all trays are full?
A. 96 disks
B. 60 disks
C. 128 disks
D. 48 disks

3. Which statement is false?


A. DBX supports NLSAS disks only
B. SSD drives do not have an rpm rating since they do not have
mechanical components
C. HUS 110 has 8 SAS links per controller
D. 8GB DIMMs are not supported in HUS 110

Page 1-33
Hitachi Unified Storage Family Overview
Module Review

Page 1-34
2. Hitachi Unified Storage Components
Module Objectives

 Upon completion of this module, you should be able to:


• Identify HUS 100 hardware components

Page 2-1
Hitachi Unified Storage Components
Components

Components
This section presents the hardware and software components of the HUS product family.

HUS Components

 Hardware
• HUS 110 (2 models) — CBXSL and CBXSS
• HUS 130 (2 models) — CBSL and CBSS
• HUS 150 (1 model) — CBL
• DBL (3.5-inch x 12 disks) — NLSAS disks
• DBF (12*FMD) — Flash module drives
• DBS (2.5-inch x 24 disks) — SAS and flash drive disks
• DBX (3.5-inch x 48 disks) — NLSAS disks
• DBW (3.5-inch x 84 disks) — NLSAS disks
• Rack

SAS — Serial Attached SCSI

NLSAS — Nearline Serial Attached SCSI

Page 2-2
Hitachi Unified Storage Components
HUS 110 CBXSL1 Controller

HUS 110 CBXSL1 Controller

Components
 1 or 2 controllers
 Drives supported (3.5-inch x 12)
• SAS 7.2k
 Serial number begins with 911
 4 embedded Fibre Channel host interface ports (4 per controller, 8 max)
 Optional host interface cards (1 per CTRL, 2 max)
 Power supply unit x 2

Front View

Rear View

Page 2-3
Hitachi Unified Storage Components
HUS 110 CBXSS1 Controller

HUS 110 CBXSS1 Controller

Components
 1 or 2 controllers
 Drives supported (2.5-inch x 24)
• SAS 10k/15k and SSD
 Serial number begins with 912
 4 embedded Fibre Channel host interface ports (4 per controller, 8 max)
 Optional host interface cards (1 per CTRL, 2 max)
 Power supply unit x 2

Front View

Rear View

1 HUS 110 and HUS 130 share the same controller box and are known as the CBSL (2U x 12)
and CBSS (2U x24). CBXSL and CBXSS are terms used to describe the HUS 110 controller boxes
in the HUS 100 Maintenance Manual; they are not used elsewhere.

Page 2-4
Hitachi Unified Storage Components
HUS 130 CBSL Controller

HUS 130 CBSL Controller

Components
 2 controllers
 Drives supported (3.5-inch x 12)
• SAS 7.2k
 Serial number begins with 921
 4 embedded Fibre Channel host interface ports (4 per controller, 8 max)
 Optional host interface cards (1 per CTRL, 2 max)
 Power supply unit x 2

Front View

Rear View

Page 2-5
Hitachi Unified Storage Components
HUS 130 CBSS Controller

HUS 130 CBSS Controller

Components
 1 frame assembly
 2 controllers
 Drives supported (2.5-inch x 24)
• SAS 10k/15k and SSD
 Serial number begins with 922
 4 embedded Fibre Channel host interface ports (4 per controller, 8 max)
 Optional host interface cards (1 per CTRL, 2 max)
 Power supply unit x 2

Front View

Rear View

Page 2-6
Hitachi Unified Storage Components
HUS 150 CBL Controller

HUS 150 CBL Controller

 1 frame assembly Front View

 2 controllers
 1 bezel
 No drives
 Serial number begins with 930

 2 power supplies
 6 fans
 2 batteries
 2 management modules
 2 back-end I/O modules
Rear View
 2 host interface modules

Page 2-7
Hitachi Unified Storage Components
DBL Disk Tray

DBL Disk Tray

 Consists of:
• 1 frame assembly
• 12 3.5-inch drive slots (SAS 7.2k)
• 2 Enclosure Controllers (ENCs)
• 2 power supply units
• 1 bezel
• 2 1 m SAS cables

Front View

Rear View

Page 2-8
Hitachi Unified Storage Components
DBS Disk Tray

DBS Disk Tray

 Consists of:
• 1 frame assembly
• 24 2.5-inch drive slots (SAS 10k/15k and SSD)
• 2 Enclosure Controllers (ENCs)
• 2 power supply units
• 1 bezel
• 2 1 m SAS cables

Front View

Rear View

Page 2-9
Hitachi Unified Storage Components
DBX Disk Tray

DBX Disk Tray

 Consists of:
• 1 frame assembly
• 48 3.5-inch drive slots (SAS 7.2k)
• 4 Enclosure Controllers (ENCs)
• 4 power supply units
• 1 bezel
• 4 3 m SAS cables ENC
• 1 rail kit
Drive

Page 2-10
Hitachi Unified Storage Components
DBX Disk Tray (Rear)

DBX Disk Tray (Rear)

Rear

Power Unit

Legend (callout numbers in the figure):

• #24: Power supply units
• #30: SAS 5 m ENC cable
• #31: SAS 3 m ENC cable
• #32, #33: Cable holders

Page 2-11
Hitachi Unified Storage Components
DBW Disk Tray

DBW Disk Tray

[Figure: DBW disk tray, front and rear views, with numbered callouts.]

#   Component           Qty     Function
1   Power supply unit   x2      AC input power supply
2   I/O module          x2      I/F between controller box/other dense tray
3   FAN module          x5      Cooling
4   Drive slots         x84     Data storage
5   Drawer              x2      One drawer can store a max. of 42 HDDs
6   Side card           x4      Assembly containing expander and power connection
7   Bezel               x2      n/a
8   Frame assembly      x1      Chassis
-   SAS cables          x2      3 m cables
-   Rail kit            x1      Rail kit

Page 2-12
Hitachi Unified Storage Components
DBW Dense Box 84 HDDs

DBW Dense Box 84 HDDs

 HDD addition is executed based on the following rule
 Firmware guards against incorrect HDD mounting
 No dummy canisters are needed for the 84-slot dense tray

Order   Drawer No.    HDDs added
1)      1 (upper)     HDD#0 – HDD#13 (see notes)
2)      2 (lower)     HDD#42 – HDD#55
3)      1 (upper)     HDD#14 – HDD#27
4)      2 (lower)     HDD#56 – HDD#69
5)      1 (upper)     HDD#28 – HDD#41
6)      2 (lower)     HDD#70 – HDD#83

[Figure: Drawer 1 (upper) holds HDD#0–#41 and Drawer 2 (lower) holds HDD#42–#83, each split into three 14-slot groups between the left and right side cards (Side Card-A/B, upper and lower).]

HDD#0 through #4 are assigned as system disks.

When an 84-HDD dense tray is first used, mounting 14 HDDs in this first area (HDD#0–#13) is mandatory.
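For reference, the addition order in the table maps to drawers and HDD number ranges as in the small lookup below. This helper is written for this guide only; it is not a tool shipped with the array.

```python
# DBW (84-slot dense tray) HDD addition order, per the table above.
# Each step fills one 14-slot group, alternating upper and lower drawer.

DBW_ADD_ORDER = [
    ("Drawer 1 (upper)", range(0, 14)),    # step 1: HDD#0  - #13 (includes system disks #0-#4)
    ("Drawer 2 (lower)", range(42, 56)),   # step 2: HDD#42 - #55
    ("Drawer 1 (upper)", range(14, 28)),   # step 3: HDD#14 - #27
    ("Drawer 2 (lower)", range(56, 70)),   # step 4: HDD#56 - #69
    ("Drawer 1 (upper)", range(28, 42)),   # step 5: HDD#28 - #41
    ("Drawer 2 (lower)", range(70, 84)),   # step 6: HDD#70 - #83
]

def drawer_for_hdd(hdd_no):
    """Return the drawer and addition step for a given HDD number."""
    for step, (drawer, slots) in enumerate(DBW_ADD_ORDER, start=1):
        if hdd_no in slots:
            return drawer, step
    raise ValueError("HDD number must be 0-83")

print(drawer_for_hdd(50))   # ('Drawer 2 (lower)', 2)
```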

Page 2-13
Hitachi Unified Storage Components
New Drive Type FMD

New Drive Type FMD



[Figure: the DF850 (HUS 100) supports both the SSD (Solid State Drive) and the new NF1000 flash module drive.]

 High performance
 Higher reliability
 Better cost performance

Page 2-14
Hitachi Unified Storage Components
DF850 Supports NF1000

DF850 Supports NF1000

 Supported from firmware version 6.0/A and SNM 2 version 26.0

 9/30/2013 Q-code

 HUS 150 only, dual controller (CTL) configuration

 Supported with HDP/HDT and all other program products (PPs)

 SSD can also be used in the same array

Page 2-15
Hitachi Unified Storage Components
DBF Drive Box for NF1000

DBF Drive Box for NF1000

[Figure: DBF drive box, front and rear views — a 2U tray with IN and OUT SAS connectors and a jumper pin that is not used at present.]

Page 2-16
Hitachi Unified Storage Components
Flash Module Drive (FMD)

Flash Module Drive (FMD)

[Figure: inside the flash module drive — an ASIC, a battery and two banks of 4 DIMMs.]

Page 2-17
Hitachi Unified Storage Components
DBF Spec

DBF Spec

 FMD x 12 (a DBF with no FMDs installed is also allowed)

 FMD only (other HDDs/SSDs cannot be installed)

 ENC firmware is different from that of the other drive boxes in the DF850, but is common
with the HM700's
• The DBF ENC covers both the DF850 firmware and the HM700 firmware; either one is selected
by the pin switch setting on the ENC hardware

 No controller flash firmware rework is necessary (IDC/EDC/ADC)

 Same back-end cabling rule as conventional trays (flat mounting)

 BECK (Ver. 2.5.0.0)

Page 2-18
Hitachi Unified Storage Components
Power Supplies Batteries

Power Supplies Batteries



Power Cable

Model                                                  PDU Connector   Cable Spec (from PDU to a power supply)
DF850MH (CBL), DF850S (CBSS/SL), DF850XS (CBSXS/XL),
Drive Box (DBS/L), Dense Drive Box (DBX)               IEC320-C13      IEC320-C13 to C14
DBW (Note)                                             IEC320-C19      IEC320-C19 to C20

Note:
1. The DBW PDU connector should be the C19 type
2. The DBW power cable end type should be the C19-C20 type
3. Cable connections between the DBW and a new PDU must take into account that the DBW is
rated for 16A current

[Figure: the DBW power cable uses IEC320-C20 (PDU side) and IEC320-C19 (power supply side) connectors; the other HUS power cables use IEC320-C14 (PDU side) and IEC320-C13 (power supply side) connectors.]

Page 2-19
Hitachi Unified Storage Components
HUS 110 and HUS 130 Power Supply Unit

HUS 110 and HUS 130 Power Supply Unit

1. Push blue latch in 2. Pull down on lever

Page 2-20
Hitachi Unified Storage Components
HUS 110 and 130 Batteries

HUS 110 and 130 Batteries

Battery is located on side


of power supply module

Battery connector

Connect
as shown

Page 2-21
Hitachi Unified Storage Components
HUS 110 Controller Board

HUS 110 Controller Board

Note there is 1 user cache dual in-line memory module (DIMM) in the HUS 110 controller board,
compared to 2 in HUS 130.

Page 2-22
Hitachi Unified Storage Components
HUS 130 Controller Board

HUS 130 Controller Board

The HUS 130 controller board has 2 user cache DIMMs.

Page 2-23
Hitachi Unified Storage Components
HUS 110 and 130 Option Module

HUS 110 and 130 Option Module

 10Gb/sec x 2 ports iSCSI


option module
 10Gb/sec iSCSI connectors
are optical

Page 2-24
Hitachi Unified Storage Components
HUS 150 Battery — Front View

HUS 150 Battery — Front View

 MAIN SW for switching power on or off
 Battery modules
 Loosen thumbscrew to remove battery module

There are 2 battery modules, which are located in the front. The arrow shows the MAIN SW for
switching power on or off.

To remove the batteries, loosen the thumb screw and remove the battery unit.

Page 2-25
Hitachi Unified Storage Components
HUS 150 Fan Unit — Front View

HUS 150 Fan Unit — Front View

 6 fan units (3 per controller)
 Loosen thumbscrew to remove fan unit

Page 2-26
Hitachi Unified Storage Components
HUS 150 Controller Unit

HUS 150 Controller Unit

Press blue release latches to release controller

Page 2-27
Hitachi Unified Storage Components
HUS 150 I/O Modules

HUS 150 I/O Modules

 LAN management module
 Back-end I/O modules (SAS ports)
 Host interface modules (FC or iSCSI ports)

Close up view of
I/O module;
Fibre Channel
module shown

From left to right, you see the LAN management module, the ENC module (SAS ports), the host
interface module and the Fibre Channel module (FC Ports 8Gb/sec x 4).

Page 2-28
Hitachi Unified Storage Components
Module Summary

Module Summary

 In this module you have learned to:


• Identify HUS 100 hardware components

Page 2-29
Hitachi Unified Storage Components
Module Review

Module Review

1. Which model does not have disk drives in the controller?


A. HUS 110
B. HUS 130
C. HUS 150
D. None of the models have disks drives
2. Which of the following trays do not support 2.5-inch disks?
A. DBL
B. DBS
C. DBX
D. DBW

3. If an HUS serial number begins with 911, then the controller is a


_____?
A. CBXSL
B. CBXS
C. CBL
D. CBXSS
4. What is the maximum number of disks supported in a CBL
controller?
A. 12
B. 24
C. 15
D. 0

5. Choose the statements that are true for HUS150.


A. Batteries are a part of the power supply unit
B. Power supply units are located in the rear of the controller box
C. Controller box supports no disks

6. A DBX can hold _____ SSD drives?


A. 24 (12+12 on each side)
B. 30 (15+15 on each side)
C. 48 (24+24 on each side)
D. 0

Page 2-30
3. Installation — Part 1
Module Objectives

 Upon completion of this module, you should be able to:


• Follow the recommended safety precautions during installation
• Perform steps 1 through 4 of the procedure for installing a new frame for
Hitachi Unified Storage (HUS)
• Use proper documentation to verify storage system physical and logical
configuration

Page 3-1
Installation — Part 1
Installation Resource Documentation

Installation Resource Documentation

Overview

 Maintenance Manual
 Getting Started Guide
• Publication #: MK-91DF8303
 System Assurance Document

Note: A User Manual and Quickstart Guide are included with purchase of a
Hitachi Unified Storage system.

• Safety Summary: Cautionary notes to safely handle maintenance work


• Introduction: Maintenance cautionary and prohibited notes, for example, the outline of
the array and the configuration of the array
• Installation: Array settings and parts installation
• Firmware: Firmware installation update
• System Parameter: Setting the system parameter for the array
• Addition/Removal/Relocation: Add, remove and relocate as related to array setup
• Upgrade: Procedure for upgrading from the existing model to the upper model
• Troubleshooting: Trouble analysis for the array
• Message: Describes message content at the time the array fails
• Replacement: Replacement work for each part and the periodic maintenance
• Parts Catalog: Description of each part installed in the array
• WEB: Operating procedure for WEB
• Document number: GENE 0070-05

Page 3-2
Installation — Part 1
Maintenance Manual Overview

Maintenance Manual Overview

 Safety Summary
 Introduction
 Installation
 Firmware
 System Parameter
 Addition/Removal/Relocation
 Upgrade
 Troubleshooting
 Message
 Replacement
 Parts Catalog
 WEB

Page 3-3
Installation — Part 1
Instructor Demonstration

Instructor Demonstration

 Product documentation library usage


 Maintenance documentation library usage

Prior to installing the HUS storage system, you must verify that the proper components are
available to produce the purchased configuration. The System Assurance Document (SAD) and
any other available documentation, such as the purchase order or bill of materials, can be
referenced for purposes of verification.

Page 3-4
Installation — Part 1
System Assurance Document (SAD)

System Assurance Document (SAD)

 Multi-sheet Excel workbook


 Filled in by sales team technical members and prior to install
 Customer engineer should not proceed without completed SAD
 Includes:
• Detailed implementation information
• Customer and vendor contact information
• Bill of materials
• Implementation schedules
• Cabling diagrams
• Rack options

Page 3-5
Installation — Part 1
Recommended Safety Precautions

Recommended Safety Precautions


Many of us have become quite comfortable handling expensive electronic components, but any
static discharge to one of these components will cause damage, however slight. This will
complicate the troubleshooting and maintenance processes down the road, should the damage
become severe enough. Wrist straps are easy to put on, so just do it and protect electronic
parts from electrostatic discharge:

• A wrist strap prevents part failures caused by static electrical charge built up in your
own body
• Be sure to wear a wrist strap connected to the chassis:

o Before starting maintenance


o Whenever you unpack parts from a case
o Until you finish maintenance
o Otherwise, the static electrical charge on your body may damage the parts
o You can discharge static electricity by touching the metal plate
o The diagram shows how to properly wear a wristband when working with the
storage system

Electrostatic Discharge (ESD)

Wrist Straps

 Precautions
• Always use wrist straps and antistatic mats when handling components
• Put components into ESD bags for transport
• Components are almost always damaged when handled without ESD bags
 Damaged components are more likely to fail in the future

 Connect wristband lead wire to system enclosure before starting work


• Hold part with hand wearing wristband
• Do not remove it until
work is complete

Page 3-6
Installation — Part 1
Electrostatic Discharge (ESD)

Damage Example

 Microscopic view of damage caused by improper ESD protection

Page 3-7
Installation — Part 1
Installing a New Frame

Installing a New Frame


Note: It might be necessary to remove some of the components from a CB or DB, for example, to
reduce their weight and make installation easier. However, we will not be discussing this step
within this module (please see the "Maintenance" chapter for more information).

• Read the maintenance manual before performing each of the steps.

• Normally firmware comes installed on the storage system.

• You may need to update the firmware if an update is available.

• In cases where there is no firmware on the storage system, an initial firmware has to be
installed.

• After completing the new frame installation, close the doors and configure the host
computers.

Procedure for Installing a New Frame

 Steps covered in this module:


1. Unpack the rack frame
2. Unpack the storage system
3. Remove the components (see note)
4. Mount the components on the rack frame
 Steps covered in next module:
5. Install the components
6. Connect the cables
7. Attach the decoration panel
8. Power on the storage system
9. Connect a service PC or laptop to the storage system
10. Install and update the firmware
11. Set the storage system
12. Power off and restart the storage system
13. Connect the host interface cables

Page 3-8
Installation — Part 1
Tools Required for Installation

Tools Required for Installation

Hitachi Modular racks always come with side panels in Americas and APAC, and always come
without side panels in EMEA.

Hitachi Modular racks are designed to hold a Hitachi Unified Storage 110, 130 and 150
consisting of a Controller Box and 1 or more DBS and DBL Drive Boxes. All Hitachi Data Systems
Modular racks are 42U high X 1.96 feet (600 mm) wide X 3.60 feet (1100 mm) deep 19-inch
cabinets capable of containing all components required for a full installation of the Hitachi
Unified Storage system.

Page 3-9
Installation — Part 1
Hitachi Modular Racks

Hitachi Modular Racks

 Designed to hold Hitachi Unified Storage 110, 130 and 150 systems
and related components

Specification Americas EMEA/APAC


Product Codes Rack with side panels: A3BF- Rack with side panels: A3BF-
SOLUTION AMS-1
Rack without side panels: A3BF- Rack without side panels: A3BF-
SOLUTION-P AMS-P-1
External Dimensions Width: 600 mm (1.96 feet) Same as Americas
(with Panels) Depth: 1100 mm (3.60 feet)
Height: 2010 mm (6.59 feet)
PDUs Four 30-amp Nema PDUs, with Four 32-amp Nema PDUs, with
accessory kit and 42 power cords accessory kit and 42 power cords

This section presents information derived from the following document:

• Installation and Configuration Guide – MK91DF8273 – “Appendix C, Rack Mounting”

Page 3-10
Installation — Part 1
Step 1: Unpacking the Rack Frame

Step 1: Unpacking the Rack Frame

 Unpack the rack


• Check for condensation
• Do not unpack in places with outdoor dust,
sunlight or rain
 Check that all parts are included
 If installing multiple frames, ensure there is
sufficient distance between the frames
 Attach rack stabilizers

See notes section for further information

This table has been copied from the Installation Manual’s installation section.

In Chapter 2.4.3 you will find examples on how to combine the different tray types and how to
install them into the available racks.

Page 3-11
Installation — Part 1

 Valid combinations of Controller Boxes and Drive Boxes that can be


installed:

This section presents information derived from the following document:

• Installation Manual - INST 02-0020, “Section 2.1 Procedures for Installing Array”

Controllers:

• CBXSS (2.5 inch disks) – HUS 110


• CBXSL (3.5 inch disks) – HUS 110
• CBSL (3.5 inch disks) – HUS 130
• CBSS (2.5 inch disks) – HUS 130
• CBL (3.5 inch disks) – HUS 150

Disk trays:

• DBL (3.5 inch x 12 disks) – NLSAS disks


• DBS (2.5 inch x 24 disks) – SAS and flash drive disks
• DBX (3.5 inch x 48 disks) – NLSAS disks

When checking the package contents:

• Check that the contents (model names, product serial numbers and quantities) agree
with the packing list shipped with the storage system

Page 3-12
Installation — Part 1

• Service personnel must keep the supplied key with the storage system
(CBXSL/CBXSS/CBSL/CBSS/DBL/DBS for front bezel, DBX for front lock) in order to
prevent users from maintaining the storage system
• The key for the front bezel is used to mount and dismount front bezel
• The key for the front lock is used to lock and unlock the front of the DBX

Page 3-13
Installation — Part 1
Step 2: Unpacking the Storage System

Step 2: Unpacking the Storage System

 Storage systems are heavy, so


you will need 2 or more people
for unpacking
• CBXSS/CBSS: 40 kg
• CBXSL/CBSL: 43 kg
• CBL: 47 kg
• DBL: 27 kg
• DBS: 23 kg
• DBX: 85 kg
• DBW: 128 kg
 Beware of condensation
 Place peripherals in a safe
location

The Genie lift (GL-8) with GL-LP platform or (compatible lift device) is required to install the
DF850-DBX. This can be ordered from HDS Logistics if unavailable from the customer site.

• Please order the following part number well in advance of the install and allow 5 days
for delivery:

o IP2001-1.x – Step-ladder 3-Step (Qty 1)


o IP2000-2.x – Genie Lift - GL-8 (Qty 1)

Note : The GL-8 can be raised to 8’ 3” and the load rating is 400lbs.

Note : In the case of installation in the UK and Europe a Transport/Logistic company will
provide the physical installation of the High Capacity Expansion units into the rack.

Page 3-14
Installation — Part 1
Step 4: Mounting Components on the Rack Frame

Step 4: Mounting Components on the Rack Frame

4a. Using a Lift

 Since components are


heavy, use a lift for
installation

 Procedures for
installation of DBX/DBW
may be different
depending on the region
 Picture shows an
example; see notes
section for more
information

This section presents information derived from the following document:


• Installation Manual – INST 02-0530, “Section 2.4.3 Mounting on Rack Frame”

4b. Installing the EMI Gasket

 Peel off anti-adhesion sheet and place EMI gasket in correct location

Page 3-15
Installation — Part 1
Step 4: Mounting Components on the Rack Frame

4c. Installing the DBX in the Rack

 Pull right and left rails until locked


 Adjust storage system position so
it is seated in the center of the
rack frame

 Ensure that the inner rail fits


into the central rail
 Push DBX gently so that it
gets seated and that the left
or right rail locks

4c. Installing the DBW in the Rack

 Installing a DBW front side

Page 3-16
Installation — Part 1
Step 4: Mounting Components on the Rack Frame

4c. Installing the DBW in the Rack

 Installing a DBW rear side

The following are 2U boxes:

• CBXSL
• CBXSS
• CBSL
• CBSS
• DBS
• DBL

The following is a 3U box:

• CBL

The following is a 4U box:

• DBX

Page 3-17
Installation — Part 1

4d. Installing 2U Boxes

 Install the system with the bracket


 The left and right sides of the brackets are shown in the diagram

The diagram shows the attachment of the left bezel only.

4d. Installing 2U Boxes

 Attach the bezel


• The left side is shown in the diagram

The CBL is 3U in height.

Page 3-18
Installation — Part 1

4f. Installing the 3U Box

 Install the system with the bracket


 The left and right side brackets are shown

Page 3-19
Installation — Part 1
Module Summary

Module Summary

 In this module, you should have learned:


• Recommended safety precautions during installation
• To perform steps 1 through 4 of the procedure for installing a new
frame for HUS
• The proper documentation to verify storage system physical and
logical configuration

Page 3-20
Installation — Part 1
Module Review

Module Review

1. During a rack unpack: (select all that apply)


A. Avoid condensation
B. Do not unpack in sunlight
C. Dust has an impact
D. Avoid rain
E. All of the above
2. A DBX disk tray weighs approximately:
A. 47 kg
B. 27 kg
C. 23 kg
D. 85 kg

3. A DBS disk tray weighs approximately:


A. 47 kg
B. 27 kg
C. 23 kg
D. 85 kg
4. Which of the following controllers has a height of 3U?
A. CBXSL
B. CBL
C. DBS
D. DBX

Page 3-21
Installation — Part 1
Module Review

Page 3-22
4. Installation — Part 2
Module Objectives

 Upon completion of this module, you should be able to:


• Perform steps 5 through 13 of the procedure for installing a new frame for
Hitachi Unified Storage (HUS)
• Use the Back End Configuration Kit (BECK) Tool

Page 4-1
Installation — Part 2
Installing a New Frame

Installing a New Frame


This section provides a general procedure of the complete steps entailed in installing a new HUS
frame and the associated tools required.

Procedure for Installing a New Frame

 Steps covered in the previous module:


1. Unpack the rack frame
2. Unpack the storage system
3. Remove the components
4. Mount the components on the rack frame
 Steps covered in this module:
5. Install the components
6. Connect the cables
7. Attach the decoration panel
8. Power on the storage system
9. Connect a service PC or laptop to the storage system
10. Install and update the firmware
11. Set the storage system
12. Power off and restart the storage system
13. Connect the host interface cables

Read the maintenance manual before performing each of the steps.

Normally, firmware comes installed on the storage system. You may need to update the
firmware if an update is available. In cases where there is no firmware on the storage system,
an initial firmware has to be installed.

After completing the new frame installation, close the doors and configure the host computers.

Page 4-2
Installation — Part 2
Step 5: Installing the Components

Step 5: Installing the Components

5a. Hitachi Unified Storage Components

 Components of HUS 110 and HUS 130 controllers:


• Cache dual in-line memory modules (DIMMs) in controller
• I/O interface board
• Drives
• Power unit containing cache battery
 Components of HUS 150 (CBL) controller:
• Cache DIMMs in controller
• I/O modules
• Fans
• Cache batteries
• Power unit

This section presents information derived from the following document:

• Installation Manual – INST 02-0640, “Section 2.4.7 Installing Components”

Controllers:

• CBXSS (2.5 inch disks) – HUS 110


• CBXSL (3.5 inch disks) – HUS 110
• CBSL (3.5 inch disks) – HUS 130
• CBSS (2.5 inch disks) – HUS 130
• CBL (3.5 inch disks) – HUS 150

Page 4-3
Installation — Part 2
Step 5: Installing the Components

5a. Hitachi Unified Storage Components

 Components of DBS disk tray:
• 2.5 inch x 24 disks
• 2 enclosure controller (ENC) modules
• 2 power supply units
• 2 SAS cables

 Components of DBL disk tray:
• 3.5 inch x 12 disks
• 2 ENC modules
• 2 power supply units
• 2 SAS cables

 Components of DBX disk tray:
• 3.5 inch x 48 disks
• 4 ENC modules
• 4 power supply units
• 4 SAS cables

 Components of DBW disk tray:
• 3.5 inch x 84 disks
• 2 ENC modules
• 2 power supply units
• 2 SAS cables

 Components of DBF tray:
• 12 flash module drives
• 2 ENC modules
• 2 power supply units
• 2 SAS cables

Disk trays:

• DBL (3.5 inch x 12 disks) – nearline serial attached SCSI (NLSAS) disks
• DBS (2.5 inch x 24 disks) – serial attached SCSI (SAS) and flash drive disks
• DBX (3.5 inch x 48 disks) – NLSAS disks

5b. Controller Components (HUS 110 and HUS 130)

 Step 1: Remove controller and


reverse it

 Step 2: Open controller by


loosening the screws

Page 4-4
Installation — Part 2
Step 5: Installing the Components

5b. Controller Components (HUS 110 and HUS 130)

 Installing I/O interface card

 Installing cache DIMMs

The 2 diagrams demonstrate the installation of:

• I/O interface board
o Can be Fibre Channel (8Gb/sec x 4 ports)
o Can be iSCSI (10Gb/sec x 2 ports)
• DIMMs
o The CBXSS and CBXSL contain 1 cache DIMM per controller
o The CBSS and CBSL contain 2 cache DIMMs per controller

Page 4-5
Installation — Part 2
Step 5: Installing the Components

5c. Controller Components (HUS 150)

 Removing controller and installing cache DIMMs

Follow the diagram above for the controller CBL:

• Remove the controller


• Each controller has 2 cache DIMMs

5d. I/O Module Installation (CBL)

 I/O modules are located on rear of the CBL


 Loosen thumb screw and pull out latch to remove I/O module

Page 4-6
Installation — Part 2
Step 6: Connecting the Cables

Step 6: Connecting the Cables

6a. DBX Cabling

1. Install the cable routing


bar; this should be done
on both the left and rear
side of the DBX

2. Complete the installation of


the cable routing bar.

This section presents information derived from the following document:

• Installation Manual – INST 02-0850, “Section 2.4.9 Connecting the Cables”

Page 4-7
Installation — Part 2
Step 6: Connecting the Cables

6b. Other Cable Connections

 Fibre Channel interface cable


 iSCSI interface cable
 SAS (ENC) cable
 Power cable (100V)
 Power cable (200 V)

6c. Cable Connections on CBSS/CBSL with DBS/DBL

The diagram shows the cable connections on a CBSS/CBSL.

• The power cables are shown on the left and right.


• The Fibre Channel cables are connected to the Fibre Channel switches.
• The back-end cables (ENC) are connected to the ENC of the disk units.
• The left controller is Controller 0 and the right is Controller 1.
• There are 2 SAS ports on each controller (hence it is CBSS or CBSL).

Page 4-8
Installation — Part 2
Step 6: Connecting the Cables

6d. Cable Connections on CBSS/CBSL with DBX

The diagram shows the cable connections on a CBSS/CBSL and DBX.

• The power cables are shown on the left and right.


• The Fibre Channel cables are connected to the Fibre Channel switches.
• The back-end cables (ENC) are connected to the ENC of the disk units.
• The left controller is Controller 0 and the right is Controller 1.
• There are 2 SAS ports on each controller (hence it is CBSS or CBSL).
• The DBX contains 4 ENCs; since CBSS/CBSL has only 2 ENCs, only 2 ENCs are used on
DBX.

Page 4-9
Installation — Part 2
Step 6: Connecting the Cables

6e. Cable Connections on CBL with DBL

The above diagram shows the rear view of a CBL. The CBL (HUS 150) is made of 2 controllers.

• There are 6 I/O modules per controller.


• The left 6 belong to Controller 0 and the right 6 belong to Controller 1.
• The SAS I/O modules have 2 ports each.
• The DBLs have 2 ENCs each.
• There are 2 Fibre Channel I/O modules that get connected to the Fibre Channel switch.

Page 4-10
Installation — Part 2
Step 6: Connecting the Cables

6f. Cable Connections on CBL with DBX

The above diagram shows the rear view of a CBL. The CBL (HUS 150) is made of 2 controllers.

• There are 6 I/O modules per controller.


• The left 6 belong to Controller 0, and the right 6 belong to Controller 1.
• The SAS I/O modules have two ports each.
• The DBXs have 4 ENCs each.
• There are 2 Fibre Channel I/O modules that get connected to the Fibre Channel switch.

Page 4-11
Installation — Part 2
Step 6: Connecting the Cables

6g. Cable Connections on CBXSL/XSS with DBL/S

The above diagram shows the cable connection between CBXSL/CBXSS and DBL/DBS.

• Each controller has just 1 ENC (SAS) port.


• The iSCSI cables are connected to the option module.
• The option module can be either Fibre Channel or iSCSI.

Page 4-12
Installation — Part 2
Step 6: Connecting the Cables

6h. Port IDs on HUS (Fibre Channel)

The above diagram shows the port numbers on HUS 110, HUS 130 and HUS 150.

• Ports on Controller 0 begin with 0.


• Ports on Controller 1 begin with 1.

Page 4-13
Installation — Part 2
Step 6: Connecting the Cables

6i. Port IDs on HUS (iSCSI)

The above diagram shows the port IDs when iSCSI ports are installed.

• Each option card on HUS 110 or HUS 130 has 2 iSCSI ports.
• Each I/O module on HUS 150 has 2 iSCSI ports.

6j. SAS ENC Connectors

 SAS ENC connector for OUT


connection — diamond symbol
embossed

 SAS ENC connector for IN connection


— circle symbol embossed

Take care when connecting cables; the SAS connectors for OUT are different from the
connectors for IN.

Page 4-14
Installation — Part 2
Step 7: Attaching the Decoration Panel

Step 7: Attaching the Decoration Panel

 When slots are vacant,


decoration panels can be put in
the racks

This section presents information derived from the following document:

• Installation Manual – INST 02-1400, “Section 2.4.15 Attaching Decoration Panels”

Page 4-15
Installation — Part 2
Step 8: Powering On the Storage System

Step 8: Powering On the Storage System

 Make sure main switch is turned off (see note)


 If the power cables are not connected, connect to power unit
 Turn on power distribution box (PDB) circuit breaker
 Turn on main switch
• For CBXSL, CBXSS, CBSL and CBSS, press the main switch on either
Controller 0 or Controller 1 for 1 second or more using a pen
• Use a key for the bezel

ON/OFF Switch
Note: Power LED on front of controller box can be:

• Off
o If power cables are not connected or if power cables are connected but PDB
circuit breakers are off

• Orange
o If the PDU circuit breakers are on and the power cables are connected but main
switch is off

• Green
o If everything is connected and turned on

This section presents information derived from the following document:

• Installation Manual – INST 01-0220, “Section 1.5 Power On/Off Procedure”

Page 4-16
Installation — Part 2
Step 8: Powering On the Storage System

 Turn on main switch located on front of the CBL


 Check that the green RDY LED on the controller box lights up
• Usually takes 4 minutes for the CBXSL, CBXSS, CBSL and CBSS
• Usually takes 5 minutes for the CBL

ON/OFF Switch

Page 4-17
Installation — Part 2
Step 9: Connecting a Service PC or Laptop to the Storage System

Step 9: Connecting a Service PC or Laptop to the Storage System

 Maintenance Port IP Addresses


                                       Controller 0    Controller 1
Maintenance IP (at shipment time)      10.0.0.16       10.0.0.17
Public IP address (at shipment time)   192.168.0.16    192.168.0.17
 Steps
1. Connect a cross/straight cable from HUS storage system to the
Management PC
2. Install Hitachi Storage Navigator Modular 2
3. Change the public IP address to the desired customer IP
address

This topic connects a PC or laptop computer to the storage system.

Page 4-18
Installation — Part 2
Step 10: Installing and Updating the Firmware

Step 10: Installing and Updating the Firmware

 Firmware update can be done in 2 ways:


1. Disruptive firmware update
 Requires Web Tool
• Initial firmware
• Update firmware
2. Nondisruptive firmware update
 Requires Storage Navigator Modular 2
• Firmware update only
• Firmware updates will be discussed more in subsequent modules

This topic provides instructions for installing and updating firmware, and presents 2 methods for
firmware updates.

Hitachi Unified Storage should come with firmware when it arrives from the factory. If it does,
the customer engineer should perform a firmware update if one is needed.

If the storage system does not arrive from the factory with firmware, then an initial firmware
update is needed.

This requires the Web Tool, Java Runtime Environment 1.6, httpclient.jar and modifications to
the java.policy file.

Page 4-19
Installation — Part 2
Step 11: Setting the Storage System

Step 11: Setting the Storage System

 Storage Navigator Modular 2 is needed for the following settings:


• Boot options
• System parameters
• Setting port options
• Host group settings
• RAID group settings
• Spare drive settings
• Date and time settings
• License settings

This section provides information about the components and steps required for setting system
storage.

A detailed explanation of each of the settings is provided in the Systems Manual.

Host Group settings are discussed in the “Storage Allocation” module.

RAID groups and license settings are discussed in the “Storage Configuration” module.

Consult HDS Support before making any changes.

Page 4-20
Installation — Part 2
Step 11: Setting the Storage System

11a. Setting the Spare Drives

 A drive can be set as a spare drive


 A spare drive can replace a failed drive of same type
 The spare drives can be set from Storage Navigator Modular 2

The drives supported on Hitachi Unified Storage are:

• Flash drives
• SAS 10k
• SAS 7.2k

Notes :

• Flash drives can replace flash drives of same or higher capacity.


• SAS 10k can replace SAS 10k of same or higher capacity.
• SAS 7.2k can replace SAS 7.2k of same or higher capacity.
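Those sparing rules amount to a simple compatibility check, sketched below for this guide; the firmware applies the actual rules automatically.

```python
# Spare selection rule of thumb from the notes above: a spare must be the
# same drive type as the failed drive and of equal or larger capacity.

def can_replace(failed, spare):
    return spare["type"] == failed["type"] and spare["capacity_gb"] >= failed["capacity_gb"]

failed = {"type": "SAS 10k", "capacity_gb": 600}
print(can_replace(failed, {"type": "SAS 10k", "capacity_gb": 900}))    # True
print(can_replace(failed, {"type": "SAS 7.2k", "capacity_gb": 2000}))  # False: type differs
```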

Page 4-21
Installation — Part 2
Step 11: Setting the Storage System

11a. Setting the Spare Drives

 Select the drives to be made spares

11b. Setting the Date and Time

 Login to the array and


select Settings > Date
and Time
 Edit the time
• Set the time zone and
any NTP server settings

You can adjust this setting in Storage Navigator Modular 2.

Page 4-22
Installation — Part 2
Step 12: Powering Off and Restarting Storage System

Step 12: Powering Off and Restarting Storage System

 Press the Power switch on the CBXSS, CBXSL, CBSS or CBSL


controller with a pen for at least 3 seconds
 On a CBL controller, switch the ON/OFF switch to OFF

ON/OFF Switch

This section presents information derived from the following document:

• Installation Manual – INST 01-0220, “Section 1.5 Power On/Off Procedure”

Page 4-23
Installation — Part 2
Step 13: Connecting Host Interface Cables

Step 13: Connecting Host Interface Cables

[Figure: host interface cables connect to ports 0E, 0F, 0G and 0H.]

This section presents information derived from the following document:

• Installation Manual – INST 02-0940, “Section 2.4.10 Connecting the Interface Cables”

Page 4-24
Installation — Part 2
Back End Configuration Kit (BECK) Tool

Back End Configuration Kit (BECK) Tool


This section presents the purpose and configuration options of the BECK Tool.

BECK Tool Overview

 Located on the SNM 2 CD
 Standalone program
 Can be used to create a fresh back-end configuration
 Can load a simple trace file as input and modify an existing configuration

Page 4-25
Installation — Part 2
Using the BECK Tool — Blank Configuration

Using the BECK Tool — Blank Configuration

• Start the .exe.


• Choose language.
• Choose an empty model from the model list, or load an existing simple trace from an
existing model.

Note : The next 3 slides show an example of configuring a new (empty) HUS150.

Page 4-26
Installation — Part 2
Configuring HUS 150

Configuring HUS 150

• Click Edit.
• Right-click in the Path 1 field.
• Choose a drive box from the list.

Page 4-27
Installation — Part 2
Configuring HUS 150

Configuring HUS 150

• You can see the physical and logical configuration.


• Click Cable Figure.

Page 4-28
Installation — Part 2
Configuring HUS 150

• The cabling diagram is shown above.


• You can click Path 0, 1, 2 or 3 to highlight the path in the diagram.
• Click End to close.

Page 4-29
Installation — Part 2
Appendix A

Appendix A

Back end intermix between DBW and


other Drive Boxes

Page 4-30
Installation — Part 2
1. Overview

1. Overview

Points

Restricting some configurations
 Behind the first DBW, the total number of connectable Drive Boxes is restricted as below.
 HUS150 = 11 Drive Boxes (DBx or DBW), HUS130 = 5 Drive Boxes (DBx or DBW)

Non-affected configurations/cases
 All-DBW configurations
 Existing configurations (DBx) + newly added DBWs

Affected configurations/cases (see details in slide #9)
 Existing configurations (all DBWs) + newly added DBx or DBW
 Existing configurations (DBx) + newly added DBW + newly added DBx

Reference — "DBx" = DBS, DBL or DBX:
• DBS: 24 drives (SSD/SAS)
• DBL: 12 drives
• DBX: 48 drives (NL-SAS)
• DBW: 84 drives

Guarding logic
1. Array firmware and SNM 2:
• System boot-up case: Not READY status, with an error message
• Drive Box addition case: the Drive Box cannot be added, and an error message is displayed
2. BECK tool (Back-End Configuration Kit):
• Guards against the restricted configurations

Page 4-31
Installation — Part 2
2. Support Configurations and Schedules

2. Support Configurations and Schedules

DBW Support Schedules

Support configurations of expansion units (see the figure below):

HUS 150
• A (DBx only): 960 HDDs, supported since firmware v1.5
• B (DBW only): 336 HDDs supported with v2.0, increased to 960 HDDs with v3.0
• C (DBx and DBW): 960 HDDs, support scheduled for v5.0

HUS 130
• A (DBx only): 240 HDDs, increased to 360 HDDs with v3.7
• B (DBW only): 360 HDDs, supported since v3.7
• C (DBx and DBW): 360 HDDs, support scheduled for v5.0

(Figure: support configurations of expansion units for HUS 150 and HUS 130. Configuration A stacks DBx boxes only, configuration B stacks DBW boxes only, and configuration C mixes DBx and DBW boxes. "DBx" means DBS, DBL or DBX.)

Page 4-32
Installation — Part 2
3. Support Configuration Restrictions

3. Support Configuration Restrictions


When one DBW is connected in the system, the number of additional Drive Boxes that can be connected is restricted:
HUS 150: 11 Drive Boxes (DBx or DBW)
HUS 130: 5 Drive Boxes (DBx or DBW)

This is because of the flat mounting rule, which states that only 2 or fewer Drive Boxes can be connected downstream of a DBW in the same path. See the next page for a BECK Tool example.

(Figure: example back end layouts for HUS 150 and HUS 130. Any Drive Box (DBx or DBW) may follow a DBW, but a layout is valid only while at most 11 drive boxes sit behind a DBW in an HUS 150 system and at most 5 in an HUS 130 system; layouts exceeding these limits are marked N/A. "DBx" means DBS, DBL or DBX.)

Page 4-33
Installation — Part 2
Support Configuration Restrictions BECK Tool Example

Support Configuration Restrictions BECK Tool Example


Example of the maximum number of DBWs in an HUS 150. In this example, the total number of disks is 948 because each path already has 2 DBWs connected.

Page 4-34
Installation — Part 2
4. Summary

4. Summary

Configuration restriction
In the same path, only 2 or fewer Drive Boxes (DBx or DBW) can be connected downstream of a DBW; 3 or more are not allowed.
 When one DBW is connected in the system, the number of additional Drive Boxes that can be connected is restricted:
HUS 150: 11 Drive Boxes (DBx or DBW)
HUS 130: 5 Drive Boxes (DBx or DBW)

1. Depending on the user's existing configuration, the additional Drive Box configurations that are available may be restricted (obtain the exact existing configuration from the user in advance)
2. Depending on the position of the DBW in the back end path, the maximum drive count for the system may not be reachable

Understand the conditions of this restriction and handle your actual sales case carefully:
• Install the DBW as far toward the end of the back end path as possible
• Prefer the DBW for capacity expansion whenever possible

Page 4-35
Installation — Part 2
4. Summary (Case Study #1: DBx + DBW + DBx)

4. Summary (Case Study #1: DBx + DBW + DBx)

Depending on the position of the DBW in the back end path, the maximum drive number for the system cannot always be installed. Each drive type (SSD/SAS/NL-SAS) must be considered when adding a Drive Box. See the next 3 slides for BECK Tool examples.

(Figure: BECK Tool example for an HUS 150.
Starting configuration: 4 back end paths, each with 1 DBX (48 NL-SAS) and 1 DBS (SAS/SSD); 288 drives in total.
1st addition: 1 DBW (84 NL-SAS) per path, adding a large amount of NL-SAS for capacity expansion; 624 drives in total.
2nd addition: 2 more DBS boxes per path with SSD/SAS for performance improvement; 816 drives in total.
A maximum of 11 drive boxes can be installed behind a DBW in the system. "DBx" means DBS, DBL or DBX.)

Page 4-36
Installation — Part 2
4. Conclusions

4. Conclusions

1. The 2nd configuration is: 4 * DBW + 4 * DBX + 12 * DBS = 816 disks maximum
2. The location where boxes are installed is important
3. The previous example would be impossible if the configuration had started with DBW boxes on top, because only 2 additional boxes are allowed downstream of a DBW
4. The BECK Tool can be used for these additions (step by step)
5. For an initial installation, the BECK Tool always starts with the largest boxes (DBWs) and would not accept this configuration for initialization

Page 4-37
Installation — Part 2
4. Summary (Case Study #2: DBW + DBx or DBW)

4. Summary (Case Study #2: DBW + DBx or DBW)

Depending on the position of the DBW in the back end path, the maximum drive number for the system cannot always be installed. Each drive type (SSD/SAS/NL-SAS) must also be considered when adding a drive box.

(Figure: BECK Tool example for an HUS 150.
Starting configuration: 4 back end paths, each with 1 DBW (84 NL-SAS); 336 drives in total.
1st addition: DBS boxes with SSD/SAS are added for performance improvement; 384 drives in total.
2nd addition: both NL-SAS (further DBWs) and SSD/SAS (further DBS boxes) are added for the customer's use cases; 828 drives in total.
A maximum of 11 drive boxes can be installed behind a DBW in the system. "DBx" means DBS, DBL or DBX; drives per box: DBS 24, DBL 12, DBX 48, DBW 84.)

Page 4-38
Installation — Part 2
Module Summary

Module Summary

 In this module, you should have learned:


• How to perform steps 5 through 13 of the procedure for installing a new
frame for HUS
• How to use the BECK Tool for installations and additions with mixed
types of disk boxes

Page 4-39
Installation — Part 2
Module Review

Module Review

1. What type of ports can you see on an enclosure controller (ENC)?


A. SAS
B. Fibre Channel
C. iSCSI
D. Ethernet
2. To add cache DIMMs, which component do you need to remove?
A. Power supply unit
B. Management I/O modules
C. Controller
D. Host option cards

3. iSCSI can transfer data at:


A. 100MB/sec
B. 800MB/sec
C. 1Gb/sec
D. 10Gb/sec
E. 16GB/sec
4. The number of ENCs you can find on a DBX is:
A. 1
B. 2
C. 3
D. 4

5. The maximum number of Fibre Channel ports on a Hitachi Unified


Storage system is:
A. 4
B. 8
C. 16
D. 24
6. The default IP addresses of maintenance ports on Hitachi Unified
Storage systems are:
A. 10.0.0.16 and 10.0.0.17
B. 192.168.0.1 and 192.168.0.2
C. 192.168.254.1 and 192.168.254.2
D. 192.168.0.16 and 192.168.0.17

Page 4-40
5. Using the Hitachi Unified Storage Web Tool
Module Objectives

 Upon completion of this module, you should be able to:


• Explain the purpose and function of the Web Tool
• Demonstrate the operation of the Web Tool
• Discuss network considerations
• Use the Web Tool to set customer IP addresses
• Activate different modes of operation
• Use the Web Tool to perform disruptive firmware update
• Demonstrate use of special functions

Page 5-1
Using the Hitachi Unified Storage Web Tool
Post-Installation Tasks

Post-Installation Tasks

 Use the Web Tool to perform these tasks:


• Set controller IP addresses
• Verify and update firmware (if required)
• Verify storage system status and physical configuration
 Ensure no errors exist
• Troubleshoot
 Collect initial Simple Trace to start case history

After installation and power up of Hitachi Unified Storage systems, you must complete several
tasks before the storage system is ready for configuration and operation.

Page 5-2
Using the Hitachi Unified Storage Web Tool
Web Tool Introduction

Web Tool Introduction


This section introduces the Web Tool and describes its general purpose, functions, ports and additional information.

Web Tool Overview

 Provides convenient browser interface


 Communicates to HTTP server in an HUS system and becomes
operational shortly after system power up

Microsoft® Internet Explorer® is recommended.

Enter the IP address of the controller in the address bar to connect.

What you see in the above image is the Web Tool in Normal Mode.

In this mode, the Web Tool is read only.

The Web Tool will show the booting process.

Page 5-3
Using the Hitachi Unified Storage Web Tool
Web Tool Functions

Web Tool Functions

 Operate with system in Normal or Maintenance Mode


• Normal Mode: mainly online monitoring in read-only mode
• Maintenance Mode: allows basic configuration changes
 User ID and password are required to operate in Maintenance Mode
 Functionally overlaps with Hitachi Storage Navigator Modular 2
(SNM 2)
 Required for initial setup, initial IP address and serial number setting
 Used for loading firmware
• Correct Java Runtime Environment (JRE) version required

Page 5-4
Using the Hitachi Unified Storage Web Tool
Location and Function of Ethernet Ports

Location and Function of Ethernet Ports

CBSS/CBSL Controller

Maintenance Port User Port

CBL Controller

User Port

Maintenance Port

CBXSS (2.5 inch disks) — HUS 110 controller

CBXSL (3.5 inch disks) — HUS 110 controller

CBSL (3.5 inch disks) — HUS 130 controller

CBSS (2.5 inch disks) — HUS 130 controller

CBL (3.5 inch disks) — HUS 150 controller

The Maintenance port is typically for field engineer access.

The User port is the port that can be assigned any IP address and is normally connected to the
customer's network.

The functionality of both ports is the same.

The location of Ethernet ports is the same in CBXSS, CBXSL, CBSS and CBSL.

Page 5-5
Using the Hitachi Unified Storage Web Tool
IP Addresses on LAN Ports

IP Addresses on LAN Ports

Controller 0
• Port 0, Maintenance (fixed): default IP address 10.0.0.16, subnet mask 255.255.255.0, default gateway 0.0.0.0, DHCP off
• Port 1, User (variable): default IP address 192.168.0.16, subnet mask 255.255.255.0, default gateway 0.0.0.0, DHCP off

Controller 1
• Port 0, Maintenance (fixed): default IP address 10.0.0.17, subnet mask 255.255.255.0, default gateway 0.0.0.0, DHCP off
• Port 1, User (variable): default IP address 192.168.0.17, subnet mask 255.255.255.0, default gateway 0.0.0.0, DHCP off

The IP address of the Maintenance port is either 10.0.0.16/17 (default) or 192.168.0.16/17, depending on the address configured on the User port. The underlying idea is to guarantee that, if the Maintenance port and the User port are connected to the same network, there will not be an IP address conflict (duplicate).
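As a minimal sketch only (the Windows interface name "Local Area Connection" and the laptop address 10.0.0.100 are assumptions, not part of the product documentation), a directly connected service laptop can be pointed at the Maintenance port of Controller 0 and the connection verified like this:

    rem Give the laptop NIC a free address in the maintenance subnet (adjust the interface name and address)
    netsh interface ip set address "Local Area Connection" static 10.0.0.100 255.255.255.0
    rem Verify that the Maintenance port of Controller 0 answers
    ping 10.0.0.16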

Page 5-6
Using the Hitachi Unified Storage Web Tool
Preferred Way of Connecting

Preferred Way of Connecting

Ethernet ports on HUS controllers are auto sensing and function with straight or
crossover cables.

(Figure: the engineer's laptop connects directly to the Maintenance port of the Hitachi Unified Storage system, while SNM 2, the Web Tool and HCS on the customer network reach the User port through a switch.)

SNM 2 = Hitachi Storage Navigator Modular 2

HCS = Hitachi Command Suite

Page 5-7
Using the Hitachi Unified Storage Web Tool
Maintenance Mode

Maintenance Mode

 Used by service personnel to perform maintenance tasks:


• Set system parameters
• Initially install firmware
• Collect detailed information about system (full dump)
• Set system serial number
 Enter mode by performing a soft reset at each controller
• Host ports are blocked (I/O traffic is disrupted)
• User ID and password required to enter Maintenance Mode

Caution: Host access is lost when entering Maintenance mode.

Page 5-8
Using the Hitachi Unified Storage Web Tool
Entering Maintenance Mode

Entering Maintenance Mode

 Reset both controllers


1. Reset Controller 0
2. Wait 3-5 seconds, reset
Controller 1
3. Open browser and connect to
system

The location of the RST button differs among HUS 110, HUS 130 and HUS 150.

HUS 110 and HUS 130: located on the rear of the storage system.

HUS 150: located on the front.

The arrows in the diagram point to the locations of the RST buttons.

Page 5-9
Using the Hitachi Unified Storage Web Tool
Maintenance Mode User ID and Password

Maintenance Mode User ID and Password

User Name: maintenance


Password: hosyu9500

The user name and password displayed are not for customer use.

Page 5-10
Using the Hitachi Unified Storage Web Tool
Setting Controller IP Addresses

Setting Controller IP Addresses


This section describes how to set controller IP addresses in the Web Tool.

Setting IP Addresses

 When first installed, the storage system most likely will not
communicate with the customer network
 Customer engineer or partner resource must set IP addresses
 IP addresses must be set before connecting with SNM 2
 Use the Web Tool in Maintenance Mode to set initial IP settings

Page 5-11
Using the Hitachi Unified Storage Web Tool
Setting IP Addresses

In order to reach this screen, follow these steps:

• Press the RST button (discussed previously).


• Use a browser to connect to Hitachi Unified Storage.
• A login screen is displayed (shown previously).
• Do not leave HUS in Maintenance Mode unattended.
• Click Network (IPv4) to change the IPv4 address (covered on next slide).
• Once done, return to this screen and click Go To Normal Mode.

Note:

• An alternate way to change the IP address is to use SNM 2 (preferred).


• If the Hitachi Unified Storage system is already in production, then use SNM 2 rather
than the method above, as the Maintenance Mode will cut off the host I/O.

Page 5-12
Using the Hitachi Unified Storage Web Tool
Setting IP Addresses

Page 5-13
Using the Hitachi Unified Storage Web Tool
Setting IP Addresses

• Change the IP address and click Set.

Note:

• Check the IP address you have entered.


• The system does not test whether the address you have entered is pingable.
• After you have made the changes, click Set to save and return to the previous screen.
• Once at the previous screen, click Go to Normal Mode to reboot the storage system
and return to normal operations.

Page 5-14
Using the Hitachi Unified Storage Web Tool
Verifying and Updating Firmware

Verifying and Updating Firmware


This section presents the procedure used to verify and update HUS firmware.

Disruptive Firmware Update Overview

 Firmware version installed at factory may not match customer


requirements
 Install latest version of firmware available
• Unless otherwise noted in Safety Assurance Document or other
implementation service documentation
 Use disruptive firmware update procedure from Maintenance Mode
of the Web Tool
• Do not perform disruptive firmware updates after system has entered
production use
 After production begins, SNM 2 GUI provides a convenient and safe
method of updating firmware non-disruptively

Firmware updates consist of 2 scenarios.

• Initial firmware install (when a storage system has no firmware):

o All data, license keys and settings are lost when you do an initial firmware installation on an existing system.
o A production system that is running fine should never be initialized.
o Can be done from the Web Tool only (and only while the storage system is in Maintenance Mode).

• Update firmware install (the storage system already has firmware):

o Data, license keys and settings are retained


o Can be done through the Web Tool
o Storage system is in Maintenance Mode, so host I/O is cut off (not preferred
method)
o Can also be done through Storage Navigator Modular 2
o This method is nondisruptive, so host I/O continues (preferred method)

Page 5-15
Using the Hitachi Unified Storage Web Tool
Preparing for Firmware Update

Preparing for Firmware Update

 Prepare the PC:


• Uninstall any previous Java/JRE versions
• Run the DFJavaSetup.exe
• Firmware is available as a zip file on CD ROM
 Will begin with 09xxxxxx.zip

 Take the array into Maintenance Mode and follow the steps on the
next slide

It is essential to complete the steps above in order.

• The picture shows the ZIP file contents.


• When you unzip the file, you will see the folders in order.
• The second level contains the subfolders.
• Do not delete any folder.

For example, if you are doing a firmware install/update for one storage system model,
you should not delete folders for the other models; the system will pick up the correct
folder.

Page 5-16
Using the Hitachi Unified Storage Web Tool
Before Starting Initial Microcode Setup

Before Starting Initial Microcode Setup

Using Web GUI, follow these steps

1. Install Java 1.6.x.


2. Copy "HTTPClient.jar" to the following folder:
C:\Program Files\Java\jre1.6.x\lib\ext.
3. Run DF800JSetup10e.exe (creates "C:\diskarray-microprogram\microprogram").
4. Copy the entire microcode folder, for example 20081006_V46B,
to "C:\diskarray-microprogram\microprogram".
5. Start Internet Explorer.
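The copy operations in steps 2 and 4 can also be done from a Windows command prompt. This is a minimal sketch only; the JRE folder name jre1.6.x, the source drive letter D: and the microcode folder name 20081006_V46B are placeholders or assumptions that must match the actual installation:

    rem Step 2: copy HTTPClient.jar into the JRE extension folder (adjust jre1.6.x to the installed version)
    copy D:\HTTPClient.jar "C:\Program Files\Java\jre1.6.x\lib\ext"
    rem Step 4: copy the whole microcode folder into the directory created by DF800JSetup10e.exe
    xcopy /E /I D:\20081006_V46B "C:\diskarray-microprogram\microprogram\20081006_V46B"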

Page 5-17
Using the Hitachi Unified Storage Web Tool
Selecting the Options

Selecting the Options


Place the storage system in Maintenance Mode. Follow the steps in the previous procedure:

• Click Microprogram.

o This screen may or may not be shown (if it is shown, click Run).


o This screen will only be shown if you have completed the initial microcode setup
with the Web GUI shown on the previous slide.

• Select Initial if you want an initial code installation.


• Select Update if you want a firmware update.
• Select the top folder of the firmware and click Open.

o A dialog box appears with the Firmware version you want to install.

• You may see a different version than shown.

Page 5-18
Using the Hitachi Unified Storage Web Tool
Selecting the Options


• Click OK to execute.

o A dialog box appears stating that the update is executing.


o This process may take several minutes.
o A dialog box appears when the process is complete.

• When you click OK, a dialog box prompts you to reboot the storage system.
• You must reboot to complete the installation.

Page 5-19
Using the Hitachi Unified Storage Web Tool
Successful Firmware Update Completion

Successful Firmware Update Completion


• The new microcode should be shown here.


• Array Status should be Ready.
• Click the Warning Information/Information Message; there should be a message
confirming a successful firmware update.

Page 5-20
Using the Hitachi Unified Storage Web Tool
Using the Web Tool in Normal Mode

Using the Web Tool in Normal Mode


This section presents the requirements and steps for using the Web Tool in Normal mode.

Normal Mode Requirements

Steps:

• Open a browser.
• Enter the IP address of the Hitachi Unified Storage system controller.
• You will see the following web page.

Notes:

• The storage system serial number is shown here.


• The serial number is unique.
• License keys are based on the serial number.
• The current firmware on the Hitachi Unified Storage is shown here.
• Hitachi Unified Storage firmware starts with 09.
• The next digit is the major version (1 in the above case).
• The next digit shows the minor version (0 in the above case).
• The letter after the slash (/) shows the release (A in the above case).

Page 5-21
Using the Hitachi Unified Storage Web Tool
Normal Mode Requirements

• The storage system can be in Ready, Warning or Alarm mode.


• Ready mode means there are no issues on the Hitachi Unified Storage system (this is
known as Patrol Lamp).
• This link shows the warning and information messages (see the previous slide on
successful firmware update completion).
• Simple Trace and CTRL alarm trace are discussed later in the module.
• The statuses of individual components are shown here.
• A component shown in red or yellow signifies a failure in one or more of its subcomponents.

Page 5-22
Using the Hitachi Unified Storage Web Tool
Cache Backup Battery Status

Cache Backup Battery Status

It is crucial for the cache backup battery to be charged.

In the above picture, if the cache backup battery status is either Low or Red, the battery does not have enough power to back up the cache to the flash drive in the event of a power failure.

Page 5-23
Using the Hitachi Unified Storage Web Tool
Other Components

Other Components

This slide conveys visual snapshots of how components look when they are OK and when there
is an error.

Page 5-24
Using the Hitachi Unified Storage Web Tool
Controller/Battery/Cache/Fan Status

Controller/Battery/Cache/Fan Status

The above is a screen capture from a CBL or HUS 150.

• There are 2 controllers


• 2 batteries
• 2 cache DIMMs per controller
• 3 fans per controller

The image matches the status we see from the front of the CBL, hence Controller 1 is shown on
the left and Controller 0 is shown on the right.

Page 5-25
Using the Hitachi Unified Storage Web Tool
Status of Disk Trays

Status of Disk Trays

 The diagram shows the CBL connected to DBX


 Each DBX is shown in 2 rows:
• DBX (A) shows the A unit
• DBX (B) shows the B unit
• Each unit holds 24 disks and 2 ENCs

Page 5-26
Using the Hitachi Unified Storage Web Tool
Collecting Traces with the Web Tool

Collecting Traces with the Web Tool


This section describes how to use the Web Tool to collect traces, the types of traces available, and the related troubleshooting and support options.

Trace Types

 Web Tool in Normal Mode


• Simple Trace
• Controller Alarm Trace
 Web Tool in Maintenance Mode
• Simple Trace
• Controller Alarm Trace
• Full Dump
• Full Dump during Cache Memory Access Failure

Page 5-27
Using the Hitachi Unified Storage Web Tool
Collecting a Simple Trace

Collecting a Simple Trace

Follow the steps above to collect a Simple Trace.

Page 5-28
Using the Hitachi Unified Storage Web Tool
Collecting a Controller Alarm Trace

Collecting a Controller Alarm Trace

Follow the procedure shown to collect the Controller Alarm Trace.

The above figure shows collecting the Controller Alarm Trace in Maintenance Mode.

The Controller Alarm Trace can be collected in Normal Mode as well.

Page 5-29
Using the Hitachi Unified Storage Web Tool
Collecting a Full Dump

Collecting a Full Dump

Full Dump:

• To collect a full dump, the storage system must be in Maintenance Mode.


• The full dump takes time to be collected and saved.
• The collection time and the dump size depend on the size of the cache.

Page 5-30
Using the Hitachi Unified Storage Web Tool
Cache Memory Access Failure (Full Dump)

Cache Memory Access Failure (Full Dump)

The screens for saving a full dump are similar to the screen for saving other types of trace.

Page 5-31
Using the Hitachi Unified Storage Web Tool
Troubleshooting — Open a Case

Troubleshooting — Open a Case

 Call Global Support Center to open a case and get case ID


• If one does not exist for implementation service
 Upload Simple Trace data to open support case
• https://tuf.hds.com
 Enter login info:
• User: Case ID
• Password: truenorth

Page 5-32
Using the Hitachi Unified Storage Web Tool
Technical Upload Facility (TUF)

Technical Upload Facility (TUF)

 Upload files to open cases at https://tuf.hds.com


 Learn how to collect other data

Page 5-33
Using the Hitachi Unified Storage Web Tool
Instructor Demonstration

Instructor Demonstration

 Using the Web Tool


• Access Web Tool
• Parts Status
• Trace

Page 5-34
Using the Hitachi Unified Storage Web Tool
Module Summary

Module Summary

 In this module, you should have learned:


• The purpose and function of Web Tool
• The operation of the Web Tool
• Network considerations
• How to use the Web Tool to set customer IP addresses
• How to activate different modes of operation
• How to use Web Tool to perform disruptive firmware update
• The use of special functions

Page 5-35
Using the Hitachi Unified Storage Web Tool
Module Review

Module Review

1. Which 2 modes are available in the Web Tool?


A. Management mode
B. Normal mode
C. Initial mode
D. Maintenance mode
2. Web Tool firmware update is always __________.
A. Disruptive
B. Nondisruptive
3. Which trace can only be collected from the Web Tool in
Maintenance Mode?
A. Normal trace
B. Controller alarm trace
C. Full dump
D. Reboot trace

Page 5-36
6. Updating Hitachi Unified Storage Firmware
Module Objectives

 Upon completion of this module, you should be able to:


• Identify firmware update requirements
• Perform a firmware update
• Verify that the firmware update is successful

Page 6-1
Updating Hitachi Unified Storage Firmware
HUS Firmware Overview

HUS Firmware Overview

 General information
• Firmware is installed on the HUS storage system
• Requirements:
 Firmware on a CD
 Cross or straight cables
 Management PC
 HUS firmware is 09xx/y:
• 09: HUS 100 series
• x: major version
• y: minor version

Page 6-2
Updating Hitachi Unified Storage Firmware
Serial Number of HUS Box

Serial Number of HUS Box

Model      Controller   Description                       Serial Number Begins with
HUS 110    CBXSL        Controller with 3.5-inch disks    911xxxxx
HUS 110    CBXSS        Controller with 2.5-inch disks    912xxxxx
HUS 130    CBSL         Controller with 3.5-inch disks    921xxxxx
HUS 130    CBSS         Controller with 2.5-inch disks    922xxxxx
HUS 150    CBL          Controller has no disks           930xxxxx

Page 6-3
Updating Hitachi Unified Storage Firmware
Firmware Update Methods

Firmware Update Methods

 There are 2 ways to update HUS firmware:


1. Nondisruptive method
 Host connectivity is not lost
 Takes time (based on parameters)
 Microcode is presented as zip file
 Uses Hitachi Storage Navigator Modular 2
 Preferred method
2. Disruptive method
 Host connectivity is lost
 Completes fast
 Microcode presented as unzipped directory structure
 Uses Web Tool
 Not preferable on production hosts

The disruptive method can do the following:

• Initial microcode update: should be done only on new machines.

o It should never be done on production machines because, after the initial


microcode update, you lose:

 All RAID groups, volumes, pools


 All settings
 All license keys
 It is impossible to undo the effects of initial microcode update.

• Update microcode: can be done to update the microcode.

o It completes fast, but the customer has to take downtime.


o This is the reason this method is not a preferred method to do the code update.
o Nondisruptive microcode update is the preferred method of updating the
microcode.

Page 6-4
Updating Hitachi Unified Storage Firmware
Nondisruptive Firmware Update Overview

Nondisruptive Firmware Update Overview


This section provides instructions and illustrations of the nondisruptive method for the HUS
firmware update.

Nondisruptive Firmware Update Requirements

 Hitachi Storage Navigator Modular 2 (SNM 2)


 IP connection to Hitachi Unified Storage
 Firmware as a compressed file (zip)
 Procedure:
• Open SNM 2
• Select the storage system
• Proceed with the firmware update
• Verify that the firmware was successfully updated

Page 6-5
Updating Hitachi Unified Storage Firmware
Nondisruptive Firmware Update Procedure

Nondisruptive Firmware Update Procedure

Select the firmware


(You see the current version)

Click Update Firmware

Page 6-6
Updating Hitachi Unified Storage Firmware
Nondisruptive Firmware Update Procedure

Basic tab

Advanced tab

Basic Tab Explanations

• Transfer and Update Firmware

o This option will transfer and update the firmware.

• Transfer Only

o This option will transfer the firmware only.


o If you use this option, you will have to perform the update with the next option later.

• Update

o This option works only if microcode has already been transferred with the above
step.

Advanced Tab Explanations

• Begin the operation if the array is not busy

o Check this option if you want the microcode update to proceed only while the array I/O load is low.

Page 6-7
Updating Hitachi Unified Storage Firmware
Nondisruptive Firmware Update Procedure

• Check Revision.

o If you check this option then a microcode downgrade is not allowed.

• Interval to transfer the firmware to array:

o If SNM 2 is in maintenance mode:

 Choose 0 seconds for code to be transferred to array.


 Microcode transfer completes fast, but servers may time out.
 This method is not recommended in a production system.

o If SNM 2 is not in maintenance mode:

 Can choose from 3 to 60 seconds.


 The longer the interval chosen for the transfer, the longer the microcode update will take.

Click Confirm to update.

Click Close to finish the code update.

Page 6-8
Updating Hitachi Unified Storage Firmware
Verifying Successful Completion of Code Update

Verifying Successful Completion of Code Update

 Click Alert and Events


• Select the message to confirm that the code was updated successfully
 Check from the Web Tool whether the code was updated successfully
• The Web Tool should always be the preferred method for verifying

Page 6-9
Updating Hitachi Unified Storage Firmware
Successful Microcode Update

Successful Microcode Update

1. The new microcode should be shown here.

2. The array should be in Ready mode.

3. Click the Warning Information/Information message; this field should indicate a successful firmware update.

Page 6-10
Updating Hitachi Unified Storage Firmware
Module Summary

Module Summary

 In this module, you should have learned to:


• Identify firmware update requirements
• Perform a firmware update
• Verify that firmware update is successful

Page 6-11
Updating Hitachi Unified Storage Firmware
Module Review

Module Review

1. Nondisruptive microcode update can only be done with:


A. Storage Navigator Modular 2
B. Web Tool in Normal Mode
C. Web Tool in Maintenance Mode
D. Automatically done from the HDS website

Page 6-12
7. Hitachi Storage Navigator Modular 2 Installation and Configuration
Module Objectives

 Upon completion of this module, you should be able to:


• Describe the features and functions of Hitachi Storage Navigator Modular
2 (SNM 2)
• Describe the initial setup tasks
• List the steps for installing SNM 2
• Use the Add Array Wizard to register an array
• Discuss the impact of enabling account authentication
• Describe the user management functions offered by SNM 2

Page 7-1
Hitachi Storage Navigator Modular 2 Installation and Configuration
Overview

Overview
This section presents the components, requirements, features, functions and additional information
about SNM 2.

Architecture

 Communication between SNM 2 server and storage array:


• Can be set up either unsecured or secured with SSL
 Communication between SNM 2 server and SNM 2 client:
• Can be set up either unsecured or secured with SSL

(Figure: the Storage Navigator Modular 2 client connects to the Storage Navigator Modular 2 server, and the server connects to the HUS 100 family storage system; both connections can be secured with SSL.)

Using Storage Navigator Modular 2, you can configure and manage your storage assets from a local
host and from a remote host across an Intranet or TCP/IP network to ensure maximum data
reliability, network up-time and system serviceability. You install Storage Navigator Modular 2 on a
management platform (a desktop computer, a Linux workstation or a laptop) that acts as a console
for managing your HUS family storage. This PC management console connects to the management
ports on the HUS system controllers, and uses Storage Navigator Modular 2 to manage your storage
assets and resources. The management console can connect to HUS via a network interface card, an
Ethernet cable, a switch or a hub (for Fibre Channel networks, use a Fibre Channel switch or hub;
for iSCSI networks, use an iSCSI switch or hub).
Data flow in a Hitachi Unified Storage system is as follows:
• The front end controller communicates to the back-end controller of the storage system
• The back end controller communicates with the SAN (typically through a Fibre Channel
switch)
• Hosts or application servers contact the SAN to retrieve data from the storage system for
use in applications (commonly databases and data processing programs)

Page 7-2
Hitachi Storage Navigator Modular 2 Installation and Configuration
Installation Requirements

Installation Requirements

 SNM 2 can be installed on:


• Microsoft Windows®
• Red Hat Enterprise Linux
• Oracle Solaris
 Management workstations can be virtual machines
 Resource requirement
• CPU: Minimum 1 GHz (2 GHz or more is recommended)
• Physical memory: 1GB or more (2GB or more is recommended)
• Available disk capacity: 1.5GB or more

Page 7-3
Hitachi Storage Navigator Modular 2 Installation and Configuration
Features and Functions

Features and Functions

 Enables essential functions for the management and optimization of


individual Hitachi storage systems
 Provides 2 interfaces to allow ease of storage management:
1. Web-accessible graphical management interface
2. Command line interface (CLI)

SNM 2 is the integrated interface for standard firmware and software features of Hitachi Unified
Storage (and earlier storage systems). It is required for taking advantage of the full feature sets
Hitachi Unified Storage offers.

Page 7-4
Hitachi Storage Navigator Modular 2 Installation and Configuration
Features and Functions

 Point-and-click graphical interface


 Immediate view of available storage and current usage
• Efficient deployment of storage resources
 Protection of access to information
 Protection of the information itself
 Online:
• Functions for Hitachi storage systems
• Firmware updates and other system maintenance functions
• Help
• Volume migrations
 Compatibility with major operating systems
 Full featured and scriptable CLI

The point-and-click graphical interface has initial set-up wizards that simplify configuration,
management and visualization of Hitachi storage systems.

SNM 2:

• Enables you to know available storage and current usage quickly and easily

o Efficient deployment of storage resources leads to fulfilled business and application


needs, optimized storage productivity and reduced time required to configure
storage systems and balance I/O workloads

• Allows you to protect access to information by restricting storage access at the port level,
requiring case-sensitive password logins and providing secure domains for application-
specific data

o SNM 2 also protects your information by letting you configure data redundancy and
assign hot spares

• Provides online functions for Hitachi storage systems, such as storage system status, event
logging, email alert notifications and statistics
• Is compatible with Microsoft Windows, Red Hat Enterprise Linux or Oracle Solaris
environments
• SNM 2 online help provides easy access to information about use of features and enables
you to get the most out of your storage system
• Provides a full featured and scriptable command line interface, in addition to a GUI view

Page 7-5
Hitachi Storage Navigator Modular 2 Installation and Configuration
Features and Functions

 RAID/dynamic provisioning pool configuration


 Volume creation and changing capacity
 Configuring and managing Hitachi replication products
 Simple Network Management Protocol (SNMP) integration
 LU mapping to front-end ports
 Ability to work with Hitachi Adaptable Modular Storage
 GUI and CLI Interface
 Able to collect trace, collect and display performance information,
and display logs
 Wizards to perform basic configuration

The SNM 2 management console provides views of feature settings on the storage system in
addition to enabling you to configure and manage those features to optimize your experience
with Hitachi Unified Storage. This page lists several of the functions enabled by SNM 2.

Page 7-6
Hitachi Storage Navigator Modular 2 Installation and Configuration
Features and Functions

 Enables storage management functionalities by integrating with HUS


features, including:
• Account authentication and audit logging
• Performance monitoring
• Modular volume migration
• Volume management
• Replication setup and management
• Cache residency manager
• Cache partition manager
• Online RAID group expansion
• System maintenance
• SAN security
• SNMP agent support

Storage Navigator Modular 2 works in conjunction with storage features found in the Hitachi Unified
Storage 100 family.

• Account authentication and audit logging provide access control to management functions
and record all system changes
• Performance monitoring software allows you to see performance within the storage system
• Modular volume migration software enables dynamic data migration
• Volume management software streamlines configuration management processes by allowing
you to define, configure, add, delete, expand, revise and reassign LUNs to specific paths without
having to reboot your storage system
• Replication setup and management feature provides basic configuration and management of
Hitachi ShadowImage In-System Replication software bundle, Hitachi Copy-on-Write Snapshot
and Hitachi TrueCopy mirrored pairs
• The cache residency manager feature allows you to lock and unlock data into a cache in real
time for optimal access to your most frequently accessed data
• Cache partition manager feature allows the application to partition the cache for improved
performance
• Online RAID group expansion feature enables dynamic addition of HDDs to a RAID group
• System maintenance feature allows online controller microcode updates and other system
maintenance functions
• SAN security software helps ensure security in open systems storage area networking
environments through restricted server access
• SNMP agent support includes management information bases (MIBs) specific to Hitachi Data
Systems and enables SNMP based reporting on status and alerts for Hitachi storage systems

Page 7-7
Hitachi Storage Navigator Modular 2 Installation and Configuration
Initial Setup Tasks

Initial Setup Tasks


This section presents the initial tasks to set up SNM 2.

Initial Setup

 Perform these steps before installing SNM 2:


1. Install your Hitachi Unified Storage hardware and confirm that it is
operational
2. Meet all operating environment requirements
3. Collect all user-supplied items required for the SNM 2 installation
4. Install and set Java Runtime Environment (JRE)
(not necessary if v23 or later)
5. Disable your firewall
6. Disable your antivirus software
7. Obtain license keys for all Hitachi Program Products you want to install
using SNM 2
8. Review the SNM 2 technical guidelines

Step 1: Be sure to verify HUS hardware is operational before proceeding.

Step 2: Install SNM 2 on a management platform (a desktop computer, a Linux workstation or


a laptop) that acts as a console for managing your HUS 100 family storage system. This PC
management console connects to the management ports on the HUS 100 family storage
controllers and uses SNM 2 to manage your storage assets and resources. The management
console can connect directly to the management ports on the HUS family storage or via a
network hub or switch.

Step 3: Have all these items prior to installation:

• SNM 2 installation CDs or access to the Hitachi Data Systems Web Portal:
support.hds.com
• A PC that will act as the management console for managing the storage system using
SNM 2

Page 7-8
Hitachi Storage Navigator Modular 2 Installation and Configuration
Initial Setup

• The IP address of each management port on your HUS 100 family storage system that
will connect to the SNM 2 management console
• The IP address of the management console
• The port number used to access SNM 2 from your browser (default port is 1099)
• The password you will use to replace the default system account password
• License keys required by each program product you want to use

Step 4: You can download JRE 6.0 from the following site and install it by following the on-
screen prompts: http://java.com/en/download/

If your management console runs Microsoft Windows, perform the following procedure.

• Click the Windows Start menu, point to Settings, and click Control Panel
• In the Windows Control Panel, double-click Java Control Panel
• Click the Java tab (the Java tab appears)
• Click View in the Java Applet Runtime Settings section
• In the Java Runtime Parameters field, type “–Xmx464m”
• Click OK to exit the Java Runtime Settings window
• Click OK in the Java tab to close the Java Control Panel window
• Close the Windows Control Panel

If your management console runs a supported Solaris or Linux operating system, perform the
following procedure.

• From a terminal, execute <JRE installed directory>/bin/jcontrol to run the
Java Control Panel
• Click View in the Java Applet Runtime Settings section
• In the Java Runtime Parameters field, type “–Xmx464m”
• Click OK to exit the Java Runtime Settings window
• Click OK in the Java tab to close the Java Control Panel window

Step 5: A firewall's main purpose is to block incoming unsolicited connection attempts to your
network. If the HUS 100 family system is used within an environment that has a firewall, there
will be times when the storage system’s outbound connections will need to traverse the firewall.
The storage system's incoming indication ports are ephemeral, with the system randomly
selecting the first available open port that is not being used by another Transmission Control
Protocol (TCP) application. To permit outbound connections from the storage system, you must
either disable the firewall or create or revise a source-based firewall rule (not a port-based rule),
so that items coming from the storage system are allowed to traverse the firewall. Firewalls
should be disabled when installing SNM 2 (refer to the documentation for your firewall). After
the installation completes, you can turn on your firewall. If you use Windows firewall, the SNM
2 installer automatically registers the SNM 2 file and Command Suite Common Components as
exceptions to the firewall. Therefore, before you install SNM 2, confirm that no security
problems exist.

Page 7-9
Hitachi Storage Navigator Modular 2 Installation and Configuration
Initial Setup

Step 6: Antivirus programs, except Microsoft Windows built-in firewall, must be disabled before
installing SNM 2. In addition, SNM 2 cannot operate with firewalls that can terminate local host
socket connections. As a result, configure your antivirus software to prevent socket connections
from being terminated at the local host (refer to the documentation for your antivirus software).

Step 7: Some Program Products require a license key before you can use them. Typically, the
license key required to activate these products is furnished with the product. We recommend
that you have these license keys available before you activate the Program Products that
require them. If you do not have license keys for the Program Products, please contact
technical support.

Step 8: Being familiar with the Technical Guidelines of installing SNM 2 will keep you on the
optimal installation path and help you avoid potential pitfalls.

Page 7-10
Hitachi Storage Navigator Modular 2 Installation and Configuration
Initial Setup

 Perform these pre-installation configurations:


• Storage Navigator Modular 2 also installs the Hitachi Command Suite
Common Component
 Use specific Hitachi Command Suite Common Component port
numbers
 Determine if other Hitachi Command products are running
• Determine the IP address and port number of the management console
• Disable pop-up blockers in your Web browser
• Disable antivirus software and proxy settings on the management console
when installing the SNM 2 software

If the management console has other Hitachi Command products installed, the Hitachi
Command Suite Common Component overwrites the current Hitachi Command Suite Common
Component.

Be sure no products other than Hitachi Command Suite Common Component are using port
numbers 1099, 23015 to 23018, 23032, and 45001 to 49000. If other products are using these
ports, you cannot start Storage Navigator Modular 2, even if the Storage Navigator Modular 2
installation completes without errors.
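As a quick pre-installation check (a minimal sketch only, assuming a Windows management console), you can confirm that none of the required ports are already listening:

    rem List any listeners on the ports required by Hitachi Command Suite Common Component
    netstat -an | findstr ":1099 :23015 :23016 :23017 :23018 :23032"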

If other Hitachi Command products are running:

• Stop the services or daemon process for those products


• Be sure any installed Hitachi Command Suite Common Components are not operating in
a cluster configuration

o If the host is in the cluster configuration, configure it for a stand-alone


configuration according to the manual
o Back up the Hitachi Command database before installing Storage Navigator
Modular 2

Determine the IP address and port number of the management console (for example, using ipconfig on Windows or ifconfig on Solaris and Linux). The IP address you use to log in to Storage Navigator Modular 2 must be a static IP address. On Hitachi storage systems, the default IP addresses for the management ports are 192.168.0.16 for Controller 0 and 192.168.0.17 for Controller 1. Use a port number such as 2500 if available.

Page 7-11
Hitachi Storage Navigator Modular 2 Installation and Configuration
Initial Setup

We also recommend that you disable antivirus software and proxy settings on the management console when installing the Storage Navigator Modular 2 software.

Use the appropriate section for the operating system running on your management console
(Microsoft Windows, Oracle Solaris or Red Hat Enterprise Linux 4).

Page 7-12
Hitachi Storage Navigator Modular 2 Installation and Configuration
Installation

Installation
This section presents instructions for installation on Windows, Sun Solaris and Red Hat Linux.

Installation on Windows

 Installing SNM 2 on a Windows operating system


1. After you insert the SNM 2 installation CD, follow the installation wizard
to completion

If the installation fails on a Windows operating system:

Data Execution Prevention (DEP) is a Windows security feature intended to prevent an


application or service from executing code from a non-executable memory region. DEP
performs checks on memory to prevent malicious code or exploits from running on the system
by shutting down the process once detected. However, DEP can accidentally shut down
legitimate processes, such as your Storage Navigator Modular 2 installation. If your
management console runs Windows Server 2003 SP1 or Windows XP SP2 or later, and your
Storage Navigator Modular 2 installation fails, disable DEP.

1. Click Start, and then click Control Panel

2. Click System

3. In the System Properties window, click the Advanced tab

Page 7-13
Hitachi Storage Navigator Modular 2 Installation and Configuration
Installation on Windows

4. In the Performance area, click Settings and then click the Data Execution
Prevention tab

5. Click Turn on DEP for all programs and services except those I select

6. Click Add and specify the Navigator Modular 2 installer HSNM2-xxxx-W-GUI.exe,
where xxxx varies with the version of Navigator Modular 2 (the Navigator Modular 2
installer HSNM2-xxxx-W-GUI.exe is added to the list)

7. Click the checkbox next to the Navigator Modular 2 installer HSNM2-xxxx-W-GUI.exe
and click OK

Page 7-14
Hitachi Storage Navigator Modular 2 Installation and Configuration
Installation on Sun Solaris

Installation on Sun Solaris

 Installing SNM 2 on a Sun Solaris operating system


1. Insert the SNM 2 installation CD-ROM into the management console’s
CD/DVD-ROM drive
2. Mount the CD-ROM to destination /cdrom
3. Create a temporary directory with sufficient free space
(more than 600MB)
4. Expand the compressed files to /temporary
5. Issue the following command lines:
• mkdir /temporary
• cd /temporary
• gunzip < /cdrom/HSNM2-XXXX-S-GUI.tar.gz | tar xf -
• /temporary/install-hsnm2.sh -a [IP address] -p [port number]

If the CD-ROM cannot be read, copy the files install-hsnm2.sh and HSNM2-XXXX-S-GUI.tar.gz to a file system that the host can recognize. XXXX varies with the version of SNM 2.

[IP address] is the IP address used to access SNM 2 from your browser.

[port number] is the port number used to access SNM 2 from your browser.

For environments using DHCP, enter the host name (computer name) for the IP address.

Page 7-15
Hitachi Storage Navigator Modular 2 Installation and Configuration
Installation on Red Hat Linux

Installation on Red Hat Linux

 Installing SNM 2 on a Red Hat Linux operating system


1. Insert the Hitachi Storage Navigator Modular 2 installation
CD-ROM into the management console’s CD/DVD-ROM drive
2. Mount the CD-ROM to mount destination /cdrom
3. In the console, issue the following command line:
• sh /cdrom/install-hsnm2.sh -a [IP address] -p [port number]

If the CD-ROM cannot be read, copy the files install-hsnm2.sh and HSNM2-XXXX-L-GUI.rpm to
a file system that the host can recognize.

When entering an IP address, do not specify 127.0.0.1 and local host. For DHCP environments,
specify the host name (computer name).

The default port number is 1099. If you use it, you can omit the -p option from the command
line.

[IP address] is the IP address used to access SNM 2 from your browser.

[port number] is the port number used to access SNM 2 from your browser.
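A minimal end-to-end sketch of the procedure above, assuming the CD device is /dev/cdrom, the mount point /cdrom does not yet exist, and the management console uses the address 192.168.0.50 with the default port:

    # Mount the installation CD read-only at /cdrom (adjust the device name as needed)
    mkdir -p /cdrom
    mount -o ro /dev/cdrom /cdrom
    # Run the installer with the management console IP address and the default port 1099
    sh /cdrom/install-hsnm2.sh -a 192.168.0.50 -p 1099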

Page 7-16
Hitachi Storage Navigator Modular 2 Installation and Configuration
Instructor Demonstration

Instructor Demonstration

 SNM 2 Installation

Page 7-17
Hitachi Storage Navigator Modular 2 Installation and Configuration
Storage Navigator Modular 2 Wizards

Storage Navigator Modular 2 Wizards


This section provides the steps and preparation required for adding arrays using the Add Array
Wizard, with a demonstration.

Add Array Wizard

 SNM 2 wizards can be used to simplify configuration tasks


• Add Array Wizard: Adds Hitachi storage systems to the SNM 2 database
• Initial (Array) Setup Wizard: Configures email alerts, management
ports, iSCSI ports, and sets the date and time
• Create and Map Volume Wizard: Creates a Volume and maps it to a
Fibre Channel or iSCSI target
• These wizards will be used during the lab sessions

Page 7-18
Hitachi Storage Navigator Modular 2 Installation and Configuration
Add Array Wizard

IP Address or Array Name – Discovers storage systems using a specific IP address or storage
system name in the Controller 0 and 1 fields. The default IP addresses are:

• Controller 0: 192.168.0.16
• Controller 1: 192.168.0.17

For directly connected consoles, enter the default IP address just for the port to which you are
connected; you will configure the other controller later.

Range of IP Addresses – Discovers storage systems using a starting (From) and ending (To)
range of IP addresses. Check Range of IPv4 Address and/or Search for IPv6 Addresses
automatically to widen the search, if desired.

Using Ports – Select whether communications between the console and management ports will
be secure, non-secure or both.

You can also run the Add Array Wizard manually to add storage systems after initial log in by
clicking Add Array at the bottom of the Arrays window.

Page 7-19
Hitachi Storage Navigator Modular 2 Installation and Configuration
Instructor Demonstration

Instructor Demonstration

 SNM 2 Add Array Wizard

Page 7-20
Hitachi Storage Navigator Modular 2 Installation and Configuration
User Management

User Management
This section describes how to use SNM 2 to manage users in HUS.

User Management in SNM 2

 Permissions define what a user can do when using SNM 2


 Users can be created and their permissions can be defined by:
• The “System” user (the built-in account for SNM 2)
• Users who have been granted User Management/Admin permission
 The steps to add a user are:
• In the Administration tree, click Users and Permissions > Users
• Click Add User
• Complete the fields in the window
• Click OK to save the settings
 The user account is created and the new user account appears in the
user list

The default user to access SNM 2 is “system” and the password is “manager.”

After installing SNM 2, customers should be encouraged to change the password for system.

More users can be created and permissions granted.

Page 7-21
Hitachi Storage Navigator Modular 2 Installation and Configuration
User Management in SNM 2

 Steps to view and edit user profiles:


1. Log on to SNM 2 either as system (the default administration account) or
as a user who has been granted administration privileges
2. In the SNM 2 Explorer tree, click Administration > Users and
Permissions > Users
3. Click a user name, then click Edit Profile and make the desired
changes
4. Click OK

In the Edit Profile window, you can modify the user’s full name, email address or description.

Page 7-22
Hitachi Storage Navigator Modular 2 Installation and Configuration
User Management in SNM 2

 Steps to delete users:


1. Log on to SNM 2 either as system (the default administration account) or
as a user who has been granted administration privileges
2. In the SNM 2 Explorer tree, click Administration > Users and
Permissions > Users
3. Click a user name, then click Delete User
4. When a message asks whether you are sure you want to delete the
selected user, click OK to delete the user (or click Cancel to retain the
user)

If you no longer need a user, you can delete the user from SNM 2.

 Granting or changing permissions


• Permissions define what a user can do within SNM 2
• Newly created user profiles have no checkmarks

Page 7-23
Hitachi Storage Navigator Modular 2 Installation and Configuration
HUS 100 User Management Account Authentication

HUS 100 User Management Account Authentication


This section provides information about how to use SNM 2 to authenticate accounts.

Account Authentication Overview

 What is account authentication?


• A feature which can be activated on the HUS System that provides
access control to configuration modifications

A user of the storage system registers an account (user ID, password, and so on) as part of configuring account authentication. When a user accesses the storage system, the Account Authentication feature verifies whether the user is registered. From this information, users of the storage system can be identified and restricted. A registered user is given authority (role information) to view and modify the storage system resources according to each purpose of system management, and the user can access each resource of the storage system only within the range of that authority (access control).

Page 7-24
Hitachi Storage Navigator Modular 2 Installation and Configuration
Default Authentication

Default Authentication

Default Logon Scenario

(Figure: the SNM 2 Client logs on to the SNM 2 Server, and the SNM 2 Server logs on to the HUS 100 family array.)

To log on from your SNM 2 Client to the SNM 2 server:
User: system
Password: manager

The SNM 2 server needs a different login to the array if Account Authentication is enabled:
User: root
Password: storage

Be sure to use the proper logout function when you leave the array.

Page 7-25
Hitachi Storage Navigator Modular 2 Installation and Configuration
Managing Account Authentication

Managing Account Authentication

 A user with appropriate permissions may manage the Account


Authentication feature on the array

In the example, a user is logged into SNM 2 as root to access the storage system using
Account Authentication.

Note: root is the built-in system account and should not be used for managing the array.

Page 7-26
Hitachi Storage Navigator Modular 2 Installation and Configuration
Setting Permissions

Setting Permissions

 Permissions may be modified for any account registered with the


array

The Account Authentication user must be assigned all 3 view and modify rights as shown to
have full control on the array.

Page 7-27
Hitachi Storage Navigator Modular 2 Installation and Configuration
Storage Navigator Modular 2 Command Line Interface

Storage Navigator Modular 2 Command Line Interface


This section describes how to install the command line interface (CLI) for SNM 2.

Install

Page 7-28
Hitachi Storage Navigator Modular 2 Installation and Configuration
Start the Command Line Interface

Start the Command Line Interface

 To start the CLI, browse to the Installation folder and double click
startsnmen.bat

Page 7-29
Hitachi Storage Navigator Modular 2 Installation and Configuration
Check the Environment Variables

Check the Environment Variables

 Before using the CLI, check the environment variables


 Type set on the command line prompt and look for
STONAVM_HOME setting

When starting the CLI as described here, the environment variables are set automatically. It is
recommended to double check the settings just in case.

Page 7-30
Hitachi Storage Navigator Modular 2 Installation and Configuration
Register a Storage System

Register a Storage System

 To manage a storage system you must register it in Storage


Navigator Modular 2 CLI
 Array Unit Registration
• Format
auunitadd [ -unit unit_name ] [ -group group_name ]
[ -RS232C | -LAN ] -ctl0 device | address
[ -ctl1 device | address ] [-ignore ]
• Description
 This command registers an array unit with the Resource Manager
 Registration information consists of the array unit name, group name,
connection interface and device
• Example:
auunitadd -unit array01 -LAN -ctl0 172.17.44.14 -ctl1 172.17.44.15
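As an additional sketch using only the options documented above (the unit name, group name and controller addresses are illustrative assumptions), a second storage system could be registered into a named group:

    auunitadd -unit array02 -group lab -LAN -ctl0 172.17.44.16 -ctl1 172.17.44.17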

Page 7-31
Hitachi Storage Navigator Modular 2 Installation and Configuration
Register a Storage System

 To manage a storage system you must register it in Storage


Navigator Modular 2 CLI

Page 7-32
Hitachi Storage Navigator Modular 2 Installation and Configuration
Create a RAID Group

Create a RAID Group

 Format
aurgadd -unit unit_name -rg rg_no
-RAID0 | -RAID1 | -RAID5 | -RAID10 | -RAID6
-drive unit_no. hdu_no ...
-pnum pty_num
 Description
• This command creates a RAID group in a specified array unit
 Example:
aurgadd –unit array01 -rg 0 -RAID5 –drive 0.0 0.1 0.2 0.3 0.4 -pnum 1
• This will create RAID group 0 in RAID5 4+1 from the first five disks in Unit
0
Disk Number = X.Y, where X = Tray Number, Y = Disk Number
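As an additional illustration (the unit name, RAID group number and drive positions are
assumptions for this sketch), a RAID-6 4D+2P group could be created from six disks in tray 1
with the same command:

aurgadd -unit array01 -rg 1 -RAID6 -drive 1.0 1.1 1.2 1.3 1.4 1.5 -pnum 1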

Page 7-33
Hitachi Storage Navigator Modular 2 Installation and Configuration
Create a RAID Group

Page 7-34
Hitachi Storage Navigator Modular 2 Installation and Configuration
Referencing the RAID Groups

Referencing the RAID Groups

 Review the RAID group configuration of a storage system


• Format
aurgref -unit unit_name [ -m | -g ] [ -detail rg_no ]
• Description
 Command displays a list of existing RAID groups
 Displayed contents include the RAID group number, RAID level, and
size in blocks (default) MB or GB
• Example:
aurgref –unit array01 -g

Page 7-35
Hitachi Storage Navigator Modular 2 Installation and Configuration
Referencing the RAID Groups

 Review the RAID group configuration of a storage system

Page 7-36
Hitachi Storage Navigator Modular 2 Installation and Configuration
Deleting RAID Groups

Deleting RAID Groups

 Format
aurgdel -unit unit_name -rg rg_no [ -f ]
aurgdel -unit unit_name -ALL [ -f ]
• Description
 Command deletes specified RAID group or deletes all RAID groups in
an array unit
• Example:
aurgdel –unit array01 –rg 1

Page 7-37
Hitachi Storage Navigator Modular 2 Installation and Configuration
Deleting RAID Groups

Page 7-38
Hitachi Storage Navigator Modular 2 Installation and Configuration
Creating Volumes

Creating Volumes

 Format
auluadd -unit unit_name [ -lu lun ] -rg rg_no -size num [ m | g | t ] |
rest
[ -stripesize 64 | 256 | 512 ]
[ -cachept pt_no ]
[ -paircachept pt_no | auto ]
[ -createarea area_no ]
[ -noluformat]
 Description
• Command is used to create Volumes
 Example:
auluadd –unit array01 –lu 0 –rg 0 –size 100g –stripesize 256 -noluformat
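Per the format above, -size also accepts rest; as a sketch (the unit name and RAID group
number are assumed), the following would create a single volume from all remaining free space
in RAID group 0:

auluadd -unit array01 -rg 0 -size rest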

Page 7-39
Hitachi Storage Navigator Modular 2 Installation and Configuration
Creating Volumes

Page 7-40
Hitachi Storage Navigator Modular 2 Installation and Configuration
Format Volumes

Format Volumes

 Format
• auformat -unit unit_name -lu lun...
 Description
• Command formats a specified Volume or a group of Volumes
 Example:
• auformat –unit array01 –lu 0

Page 7-41
Hitachi Storage Navigator Modular 2 Installation and Configuration
Format Volumes

Page 7-42
Hitachi Storage Navigator Modular 2 Installation and Configuration
Referencing Volumes

Referencing Volumes

 Format
• auluref -unit unit_name [ -m | -g ] [ -lu lun ... ]
 Description
• Command displays information of existing Volumes (capacity, status,
current controller number, default controller number, RAID group number
of a RAID group and its RAID level)
• Example: auluref –unit array01 -g

Page 7-43
Hitachi Storage Navigator Modular 2 Installation and Configuration
Referencing Volumes

Page 7-44
Hitachi Storage Navigator Modular 2 Installation and Configuration
Display Help

Display Help

 Format
• auman [ -en | -jp ] command_name
 Description
• Command displays the help information in English (-en) or Japanese (-jp)
for a command
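For example, to display the English help text for the volume creation command used earlier:

auman -en auluadd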

Page 7-45
Hitachi Storage Navigator Modular 2 Installation and Configuration
Module Summary

Module Summary

 In this module, you should have learned:


• Features and functions of SNM 2
• Initial setup tasks
• The steps for installing SNM 2
• How to use the Add Array Wizard to register an array
• The impact of enabling account authentication
• User management functions offered by SNM 2

Page 7-46
Hitachi Storage Navigator Modular 2 Installation and Configuration
Module Review

Module Review

1. SNM 2 can be used for which of the following? (Choose all that
apply)
A. RAID level configurations
B. Volume creation and expansion
C. Offline volume migrations
D. Configuring and managing Hitachi replication products
E. Online microcode updates and other system maintenance functions
2. Which operating system requires that you find out the IP address of
the management console when installing SNM 2?
A. Windows
B. Sun Solaris
C. Red Hat Linux
D. All of the above

Page 7-47
Hitachi Storage Navigator Modular 2 Installation and Configuration
Module Review

3. Which steps are required prior to installing SNM 2? (Choose all that
apply)
A. Enable your firewall
B. Disable your antivirus software
C. All of the above
4. Which of the following are true about the Add Array Wizard?
(Choose all that apply)
A. Lets you search for a system based on its WWPN
B. Lets you search for a system based on its IP address
C. Lets you search for a system based on its host name
D. Lets you search for multiple systems based on a range of IP addresses

Page 7-48
Hitachi Storage Navigator Modular 2 Installation and Configuration
Module Review

5. Which of the following are true about new SNM 2 users? (Choose
all that apply)
A. New users have only the View permission.
B. New users can be added by the default System user.
C. New users can be added by users who have been granted User
Management/Admin permission.
D. New users can be added by the default Administrator user.

Page 7-49
Hitachi Storage Navigator Modular 2 Installation and Configuration
Module Review

6. Which of the following options are available when changing user permissions? (Choose all
that apply)
A. All Applications — Use this to grant permissions for any Hitachi
Command Suite software product that resides on the same server.
B. User Management — Use this option to grant administrator permissions
for Navigator Modular 2. Users with this setting can create or delete
users and change permissions for any user on the system.
C. Storage Management — Use this option to grant administration
permissions to specify storage system access to specified users.
D. SNM 2 — Use this option to grant basic administration (modify) or
monitoring (view) permissions.

Page 7-50
8. RAID Group and Volume
Configuration
Module Objectives

 Upon completion of this module, you should be able to:


• Identify the supported types of Redundant Array of Independent Disks
(RAID)
• List rules for creating RAID groups
• Create RAID groups
• Expand RAID groups
• Create Volumes (LUNs)
• Change Volume (LUN) capacity

Page 8-1
RAID Group and Volume Configuration
Supported RAID Types

Supported RAID Types


This section summarizes the RAID types supported by HUS platforms.

Overview

 System that replicates data among multiple drives by combining many physical drives into
one volume
 Provides better disk performance, error recovery, and fault tolerance compared to a
single disk

(Diagram: striping distributes data blocks 1-8 in sequence across the drives of the volume)

RAID ensures data protection and integrity. It helps to recover from what could potentially be a
loss of data.

Hitachi builds disk drive RAIDs from several physical disks.

Striping involves breaking up a block of data into small equal segments and storing the
segments sequentially among the drives in the array.

Page 8-2
RAID Group and Volume Configuration
Supported RAID Levels

Supported RAID Levels

RAID-0

(Diagram: data blocks A-J are striped by the controller across the data disks)

Outline: RAID-0 stripes the data across disk drives for higher throughput

Pro: Offers the most available disk space to the user

Con: All data is lost in case of a disk failure

Note: Model WMS100 does NOT support RAID-0 or striping. In terms of reliability, SATA drives
belong to a different category than Fibre Channel (SCSI) disks. Because of this, the RAID levels
supported on model WMS100 are RAID-1, RAID-1+0, RAID-5 and RAID-6. RAID-1+0 is a
combination of RAID-1 and RAID-0, which combines mirroring and striping.

Page 8-3
RAID Group and Volume Configuration
Supported RAID Levels

RAID-1

(Diagram: data blocks A-J are written to the data disk and mirrored to the mirror disk)

Outline: RAID-1 mirrors the data

Pro: If a disk drive fails, the data is not lost and the performance is not affected

Con: More expensive

RAID-5

(Diagram: data blocks A-J are striped across the data disks, with the parity (P) rotating
round-robin across the drives)

Outline: RAID-5 consists of 3 or more disk drives; 1 drive in round-robin mode contains the
parity

Pro: Striping offers higher reading throughput

Con: Lower performance on (small) random writes and in the case a drive fails

RAID-5: At least 3 disks are required to implement RAID-5. RAID-5 will not sustain a double-
disk failure, which is more likely to occur with SATA drives.

Page 8-4
RAID Group and Volume Configuration
Supported RAID Levels

RAID-6

(Diagram: data blocks are striped across the data disks, with 2 independent parity blocks (P)
rotating round-robin across the drives)

Outline: RAID-6 consists of 4 or more disk drives; 2 independent drives in round-robin mode
contain the parity

Pro: Allows recovery from a double disk failure

Con: Lower performance than RAID-5

RAID-6: At least 4 disks are required to implement RAID-6. This configuration is very similar to
RAID-5, with an additional parity block, allowing block level striping with 2 parity blocks. The
advantages and disadvantages are the same as for RAID-5, except that the additional parity disk
protects the system against a double-disk failure. This feature was implemented to ensure the
reliability of the SATA drives.

Key value: Two parity drives allow a customer to lose up to 2 hard disk drives (HDDs) in a
RAID group without losing data. RAID groups configured for RAID-6 are less likely to lose data
in the event of a failure. RAID-6 performs nearly as well as RAID-5 for similar usable capacity.
RAID-6 also gives the customer options as to when to rebuild the RAID group. When an HDD is
damaged, the RAID group must be rebuilt immediately (since a second failure may result in lost
data). During a rebuild, applications using the volumes on the damaged RAID group can expect
severely diminished performance. A customer using RAID-6 may elect to wait to rebuild until a
more opportune time (night or weekend) when applications will not require stringent
performance.

HDD roaming allows the spare to become a part of the RAID group; no copy back is required
saving rebuild time.

Page 8-5
RAID Group and Volume Configuration
Supported RAID Levels

RAID-1+0

(Diagram: data blocks A-J are striped across 2 mirrored pairs; each data disk has a matching
mirror disk)

Outline: RAID-1+0 (4 or more disk drives) is similar to RAID-1 but now the data is striped

Pro: Striping offers higher small size random access performance compared to RAID-1

Con: More expensive

Page 8-6
RAID Group and Volume Configuration
RAID Groups versus Parity Groups

RAID Groups versus Parity Groups

 Example of 3 RAID groups (RG)
• RG0 = RAID-5 (4D+1P)
• RG1 = RAID-5 (4D+1P)
• RG2 = RAID-5 (14D+1P)

(Diagram: RG0, RG1 and RG2 laid out across the drive trays; SP = Spare Drive)

 When creating RAID groups, HDS recommends a 1-to-1 relationship between RAID group and
Parity group to avoid any potential issues:
1. During a failure condition, there may be an impact to multiple workloads sharing the
RAID group, even if the volume resides within a Parity group that is not sparing out an HDD
2. Space is concatenated, which means that a volume may span 2 Parity groups within the
RAID group, thereby increasing the possibility of parity generation overhead

The building block for a RAID group is a parity group. The building block for a parity group
is a physical disk.

• 4D+1P refers to the layout of a parity group


• Keep the ratio of RAID group and parity Group at 1-to-1

Page 8-7
RAID Group and Volume Configuration
Rules for Creating RAID Groups

Rules for Creating RAID Groups

 Supported range by RAID level

RAID Type    Minimum Disks    Maximum Disks
RAID-0       2                16
RAID-1       1+1              1+1
RAID-10      2+2              8+8
RAID-5       2+1              15+1
RAID-6       2+2              28+2

HUS supports the following types of disks:

• SSD (available as 2.5-inch disks)


• SAS 10k rpm (available as 2.5-inch disks)
• NL-SAS (SAS 7.2k rpm) (available as 3.5-inch disks)
• SAS 15k rpm (available as 2.5-inch disks)

Page 8-8
RAID Group and Volume Configuration
Rules for Creating RAID Groups

 Selection of disk drives for a RAID group
• Disk drives are selected:
 Automatically
 Manually
• When disk drives for a RAID group are selected manually:
 All disks in a RG must be of the same type
• SAS and NL-SAS cannot be mixed
• Though not recommended, disks of different size and rpm can be mixed when the RAID group
is created manually
 Disks must be unblocked
 Disks must not already be set in another RAID group or as a spare disk
• Select disk drives in the basic chassis and/or additional chassis with SNM 2

HUS supports SSD, SAS and NLSAS drives.

 User data area


• All disk drives allocated to a RAID group are managed as having the
same capacity
• First 5 drives (or 4, if that is all that exists) contain system area
 System area is a small space for the array to store firmware and
configuration
 This area is mirrored, not striped
 This area is available as additional space on all disks in the system but
only used on the first 5 disks

Page 8-9
RAID Group and Volume Configuration
Drives Supported in Hitachi Unified Storage

Drives Supported in Hitachi Unified Storage

HDD    Size           Form Factor    RPM (krpm)    Available in
SAS    3TB (NL-SAS)   3.5-inch       7.2           DBX, DBL, CBXSL, CBL
SAS    900GB          2.5-inch       10            DBS, CBXSS, CBSS
SAS    600GB          2.5-inch       10            DBS, CBXSS, CBSS
SAS    300GB          2.5-inch       15            DBS, CBXSS, CBSS
SAS    300GB          2.5-inch       10            DBS, CBXSS, CBSS
SSD    400GB          2.5-inch       MLC type      DBS, CBXSS, CBSS
SSD    200GB          2.5-inch       MLC type      DBS, CBXSS, CBSS

Page 8-10
RAID Group and Volume Configuration
System and User Data Areas

System and User Data Areas

 RAID group storage is divided into 2 data areas: System and User
• System Area space is used on the first 5 system disks only, which must
be of the same type (system uses this area to store microcode, trace
and configuration data)
• User Area: User data is stored here
 If unequal disk sizes are used, space is lost on the larger disks

(Diagram: a RAID group built from 300GB and 600GB disks; each disk reserves a System Area,
the Used Area is limited by the smallest disk, and the remaining space on the larger disks is
an Unused Area. The capacity of the disk drive (last LBA) allocated to the RAID group is
indicated.)

This graphic shows that a part of the physical capacity is reserved as system area. The area is
only used as system area on the first 5 disks of a system, for example, disk 0-4. The system
area contains microcode, trace data and configuration data.

• A disk is always bigger than what is offered to the user.

o If a RAID group existed with disks of different capacity, a substantial part could
be left unused on the bigger drives (see example)
o The user data area part must be the same for all disks in a RAID group

• The first 5 disks must always be of the same type (5*SAS or 5*NL-SAS); no mix is
possible

Page 8-11
RAID Group and Volume Configuration
Creating a RAID Group

Creating a RAID Group


This section conveys steps and GUI used to create a RAID group on HUS.

How to create a RAID group:

1. Log in to Storage Navigator Modular 2

2. Select the array you want to log in to

o This screen is seen when you successfully log in to the array

3. In the Groups tab in the tree, select Volume

4. In the Tabs list, select RAID Groups

5. Select Create RG to create a RAID group

Page 8-12
RAID Group and Volume Configuration
Creating a RAID Group

 Use the Create RAID Group window to create a RAID group

RAID Level: The RAID type selected (RAID-0, 1, 10, 5 or 6)

Combination: Combinations shown based on the above selection

Number of Parity Groups: 1 (Do not change)

Drive selection can be:

• Automatic Selection: System automatically selects the required number of disks based on
disk type and capacity
• Manual Selection: Number of disks is selected manually

Click OK when complete
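For comparison, the same kind of RAID group can also be created with the SNM 2 CLI command
shown in the previous module; the unit name, RAID group number and drive positions below are
illustrative only:

aurgadd -unit array01 -rg 2 -RAID10 -drive 0.5 0.6 0.7 0.8 -pnum 1

This would create RAID group 2 as a RAID-1+0 (2D+2D) group from four disks in Unit 0.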

Page 8-13
RAID Group and Volume Configuration
Expanding a RAID Group

Expanding a RAID Group


This section provides guidelines and steps for expanding a RAID group on HUS.

Expand a RAID Group

 RAID groups (RG) can be expanded by adding disks


• The minimum number of disks that can be added is 1 (R5 or R6)
and 2 (R1 or R10)
• The maximum number of disks that can be added is 8 (less if we reach
the maximum RG width)
• R0 cannot be expanded
 Any number of RG expansion requests can be issued, but at any point in time each
controller performs only one RG expansion
• Expanding RG does not expand the LUs created inside the RG
• Expanding a RG creates space inside the RG where more Volumes can
be created
 RG expansion takes time, so it should be done during low I/O time
 RG expansion does not change the RAID level
 Only RG where the PG depth is 1 can be expanded

When a RAID group is given for expansion it can be in either of the following states:

1. Expanding – In this state, the RG is currently being expanded and the expansion
cannot be cancelled

o If we force cancel the expansion there can be data loss to the LUNs that have
already expanded

2. Waiting – In this state, the RG expansion has not yet started, so the RG expansion can
be cancelled

o RG expansion can only expand the size of RAID group


o RAID groups cannot be shrunken
o RG expansion does not change the RAID level (an R5 remains an R5 after
expansion)

Page 8-14
RAID Group and Volume Configuration
Expand a RAID Group

 You may not use RAID group expansion to change the RAID level of
a RAID group
 Rules for expanding a RAID group
• You cannot expand a RAID group in the following conditions:
 If the LU (Logical Unit) whose status of the forced parity correction is:
• Correcting
• Waiting
• Waiting Drive Reconstruction
• Unexecuted, Unexecuted 1 or Unexecuted 2
 If an LU is being formatted and belongs to RAID group expansion
target
 After setting or changing Cache Partition Manager configuration
 When dynamic sparing/correction copy/copy back is operating
 While installing firmware

If any of the forced parity status messages are displayed, you need to execute a forced parity
correction for this LU, change the LU status to Correction Completed and then execute the RAID
group expansion.

If an LU is being formatted and belongs to the RAID group expansion target, wait until the
formatting has completed and then execute the expansion command from SNM 2.

If you are expanding a RAID group after setting or changing the Cache Partition Manager
configuration, the storage system must be rebooted; expand the RAID group after the reboot. If
the Power Saving function is set on the storage system, change the status of the Power Saving
feature to “Normal (spin-up)” and then expand the RAID group.

If you are expanding a RAID group when the dynamic sparing/correction copy/copy back is
operating, expand the RAID group after the drive has been restored.

If you are expanding a RAID group while installing the firmware, expand the RAID group after
completing the firmware installation.

Page 8-15
RAID Group and Volume Configuration
Expanding a RAID Group

Expanding a RAID Group

 Best practices for RAID group expansion


• You can assign priority as Host I/O or RAID group expansion
• Perform backup of all data before executing expansion
• Execute the RAID group expansion at a time when host I/O is at a
minimum
• Add drives with the same capacity and rotational speed as RAID group of
expansion target to maximize performance
• Add drives in multiples of 2 when expanding a RAID-1 or RAID-1+0 group

Back up all data, including data stored in cache memory. Data loss can occur due to a loss of
power or another type of system failure, and the LU associated with the expansion can become
unformatted.

Host access performance deteriorates during RAID group expansion, especially for the LUs in
the RAID groups which are expanding.

By adding drives with the same capacity and rotational speed as the RAID group of the
expansion target, performance will be maximized.

Page 8-16
RAID Group and Volume Configuration
Expanding a RAID Group

 Use RAID group expansion window when expanding a RAID group

To access the RAID group expansion window, click Expand RG after clicking the checkbox of
the RAID group you want to expand in the left column of the RAID Groups tab of the Logical
Units window.

You can use the added capacity immediately after the expansion process has completed.

Use this dialog to do the following:

1. Create RG

2. Delete RG

3. Expand RG

4. Change RG Expansion Priority

Page 8-17
RAID Group and Volume Configuration
Example

Example

 Example of a RAID-5 4+1 RAID group expansion

(Diagram: a 4D+1P RAID group on HDD0-HDD4 containing LU #0 is expanded with one available HDD
to 5D+1P on HDD0-HDD5; the added capacity appears as new free space within the RAID group)

 After the expansion, you can either expand LU #0 by using the LU Grow feature or you can
create a new LU

 Example of a RAID-1+0 2+2 RAID group expansion

(Diagram: a 2D+2P RAID group on HDD0-HDD3 containing LU #0 is expanded with 2 available HDDs
to 3D+3P on HDD0-HDD5; the added capacity appears as new free space within the RAID group)

 After the expansion, you can either expand LU #0 by using the LU Grow feature or you can
create a new LU

Page 8-18
RAID Group and Volume Configuration
Instructor Demonstration

Instructor Demonstration

 RAID Group Operations


• Creating RAID groups
• Expanding RAID groups

Page 8-19
RAID Group and Volume Configuration
Creating Volumes

Creating Volumes
This section presents rules and steps for creating volumes.

Rules for Creating Volumes

 Overview
• Volumes (also called Logical Unit Numbers or LUNs) are created in a
RAID group or in a Dynamic Provisioning (DP) pool
 DP Pools are explained in detail in course TCI1950
• Volumes can
 Be assigned to a host group or iSCSI target
 Be presented to same or different servers
 Exist in different sizes and multiple numbers

Other software tools, like HCS, also use the name “volume“ for a LUN.
Although this is not a consistent use of terminology, throughout this course
material a volume and a LUN are considered to be the same thing.

iSCSI (Internet Small Computer System Interface): An Internet Protocol (IP)-based storage
networking standard for linking data storage facilities. By carrying SCSI commands over IP networks,
iSCSI is used to facilitate data transfers over intranets and to manage storage over long distances.
iSCSI can be used to transmit data over local area networks (LANs), wide area networks (WANs) or
the Internet, and can enable location-independent data storage and retrieval.

LUN (Logical Unit Number): A unique identifier used on a SCSI bus to distinguish between devices
that share the same bus. SCSI is a parallel interface that allows up to 16 devices to be connected
along a single cable. The cable and the host adapter form the SCSI bus, and this operates
independently of the rest of the computer. Each of the devices is given a unique address by the
SCSI BIOS (Basic Input/Output System), ranging from 0 to 7 for an 8-bit bus or 0 to 15 for a 16-bit
bus. Devices that request I/O processes are called initiators. Targets are devices that perform
operations requested by initiators. Each target can accommodate up to 8 other devices, known as
volumes, and each is assigned a volume. Commands that are sent to the SCSI controller identify
devices based on their LUNs.

Page 8-20
RAID Group and Volume Configuration
Volume Configuration

Volume Configuration

 Volumes are slices from the user data area of a RAID group
• 3 Volumes from RG0
• 1 Volume from RG1
 Maximum volumes
• Model HUS 110 = 2,048
• Model HUS 130 = 4,096
• Model HUS 150 = 4,096
 Maximum size of a volume = 128TB
(Diagram: Volumes 0, 1 and 2 are carved from RG0; Volume 3 is carved from RG1)

Page 8-21
RAID Group and Volume Configuration
How to Create a Volume

How to Create a Volume

Volume and LUN are the same.

In order to create a LUN (SNM 2 shows LUN as a volume):

• Log in to SNM 2
• Select the array
• Select Group > Volume from the tree
• Click Create VOL

Page 8-22
RAID Group and Volume Configuration
How to Create a Volume

Create Volume asks the following questions:

• Where you want to create the volume from

o From a RAID group


o From a DP pool

• Size of the volume

o GB
o TB
o MB
o Blocks

Page 8-23
RAID Group and Volume Configuration
How to Create a Volume

In Advanced option:

• Select the Stripe Size

o Default stripe size is 256KB (you can select 64KB, 256KB or 512 KB)
o The selection also depends on the cache partition

• Cache partition settings are required if you are using Cache Partition Manager
• Choose whether you want to format the volume once it is created
• Choose where the size of the volume should come from

o Automatically – System automatically finds free space in the RG


o Manual – If you want to select free areas in the RG

• Select OK to confirm the changes
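For comparison, the equivalent SNM 2 CLI command from the previous module would look like the
following (the unit name, volume number, RAID group and size are illustrative only):

auluadd -unit array01 -lu 1 -rg 0 -size 50g -stripesize 256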

Page 8-24
RAID Group and Volume Configuration
How to Create a Volume

After the volume is created successfully, you can:

1. Create more volumes

2. Select the port, or host group, to map the volume

3. Close the window

Page 8-25
RAID Group and Volume Configuration
Changing Logical Unit Capacity

Changing Logical Unit Capacity


This section presents the procedures for changing and managing Logical Unit Capacity.

Volume Unification

 Change Volume Capacity is a button on the Volumes screen in SNM 2 that allows the
following functions:
• Adding LUs — Add additional LUs to an existing unified LU
• Separating Last LU — Separate the last LU from the unified LUN
• Separating All LUs — Separate all the LUs from the unified LU
• LU Grow — Add free space in same RAID group to LU
• LU Shrink — Release space from LU back to RAID group

Page 8-26
RAID Group and Volume Configuration
Changing LU Capacity

Changing LU Capacity

Adding LU

 Two or more LUs are unified (concatenated) into 1 larger LU
 LUs can come from different RAID groups (not recommended)
 128 LUs maximum
 RAID-5/RAID-6/RAID-1/RAID-1+0 (RAID-0 is not supported)
 LU concatenation up to 128TB (or 128 LUs)

(Diagram: LU 0 (the LU to be expanded) is unified with LU 1 and LU 2 (the LUs to be added)
into one resulting volume, LUN 0. Only the expanded LU can be mapped; LUs that were added no
longer appear in the available list.)

Volume = LUN

The top Volume is the Main Volume

The other LUNs are called Sub-LUNs
Page 8-27
RAID Group and Volume Configuration
Changing LU Capacity

Expanding LUs: Do a Backup First

 Data assurance of the LUs to be unified:


• Follow all the steps of the on-screen instructions (data could be lost)
• Back up the Volumes before modifying them
• Format the unified Volumes to delete the volume label which the
operating system adds to Volumes
• Map the expanded LU to the host and restore the data
 Depending on the host OS and software installed, a reboot may or may
not be necessary

Adding LUs

 Formatting the unified LU


• A format on the unified LU is also performed on all the internal LUs in
sequence.
• When an internal LU blockage or degeneration (Alarm or Regression)
occurs while formatting, the status of the unified LU becomes blocked or
degenerated at the time when formatting finishes

Page 8-28
RAID Group and Volume Configuration
Changing LU Capacity

Grow

 Add a specific capacity to an existing LU from disk free space
 Free space has to be in the same RAID group

(Diagram: before LU Grow, RG0 contains LU 0, LU 2 and free space. After the grow, the free
areas are added to LU 0 as additional capacity; physically the added areas become sub-LUs
(for example, LU 4094 and LU 4095) of a unified LU that is presented logically as one larger
LU 0.)

 Grow LU 0 by specifying a growth capacity and SNM 2 automatically selects the space

Shrink

 This function reduces the capacity of an LU

(Diagram: after the shrink, part of LU 0's former capacity is no longer used)

Note:
• The host OS must support volume shrinking if you use LU shrink
• You must execute the host OS side volume shrink first
• Then execute the storage array side LU shrink

Page 8-29
RAID Group and Volume Configuration
Changing LU Capacity

SNM 2

Select the Volume you want to Unify

Click Change VOL Capacity

Page 8-30
RAID Group and Volume Configuration
Unifying a Volume

Unifying a Volume

The screen has 4 selections:

1. New Capacity

• When you enter the new capacity in the box:

o If you enter a capacity larger than the original volume size, the volume expands
o If you enter a capacity smaller than the original volume size, the volume shrinks

2. Add Volume

o You can select volumes:


o Selected volumes can be from any RG, but have to be of the same disk type
o RAID-0 volumes will not be shown

3. Separate last volume

o Select this option to remove the last volume of a unified volume
o If the volume you selected is not unified, then this option will be disabled

Page 8-31
RAID Group and Volume Configuration
Unifying a Volume

4. Separate all volumes

o Select this option to split all the volumes from a unified volume
o If the volume you selected is not unified, this option will be disabled

Click OK to confirm your selections.

Note: If you select 3 or 4, ensure that you have backed up the volume. Many operating systems
do not support volume shrinking; Windows Server 2008 does. Before you shrink the volume from
SNM 2, perform the shrink on the OS side first.

Not all operating systems support expanding a volume online.


Check Yes, and then click the Confirm button.

Formatting a Unified Volume will format all the individual volumes as well.

Page 8-32
RAID Group and Volume Configuration
Instructor Demonstration

Instructor Demonstration

 LU Operations
• Create LU
• Change LU Capacity
• Delete LU

Page 8-33
RAID Group and Volume Configuration
Module Summary

Module Summary

 In this module, you should have learned:


• Which RAID types are supported by SNM 2
• How to create and expand RAID groups
• How to create LUNs
• How to change LUN capacity

 Creating Dynamic Provisioning Pools and DP-Volumes will be


discussed in detail in a special software course

Page 8-34
RAID Group and Volume Configuration
Module Review

Module Review

1. Which of following RAID group types cannot be expanded?


A. RAID-0
B. RAID-5
C. RAID-6
D. RAID-10
2. The maximum size of a Volume is ______.
A. 2TB
B. 24TB
C. 60TB
D. 128TB
3. Maximum number of volumes that can be defined in HUS 150 is:
A. 2048
B. 4096
C. 1024
D. 64K

Page 8-35
RAID Group and Volume Configuration
Module Review

Page 8-36
9. Storage Allocation
Module Objectives

 Upon completion of this module, you should be able to:


• Discuss the connectivity between storage and hosts
• Describe the use of host groups
• Map volumes to a host

Page 9-1
Storage Allocation
Connectivity Between Storage and Hosts on HUS

Connectivity Between Storage and Hosts on HUS


This section presents connectivity and mapping options between storage and hosts.

Storage Allocation with HUS

1. Host Connection to HUS can be:


a. Fibre Channel
b. iSCSI
2. In order to map Volumes to a Port we need to:
a. Know the port addresses
b. Enable Host Group Security
c. Create Host Groups
• Whether the host is clustered
• Any Host Group Settings

3. Common Provision Tasks:


a. Allocating Volumes
b. Unallocating Volumes

Ports on Controller 0 start with 0, and those on Controller 1 start with 1.

Page 9-2
Storage Allocation
Host Connection to HUS

Host Connection to HUS

1a. Fibre Channel — Host Connectivity

Connectivity Fabric Switch Connection Type


Through Switch Enabled P-to-P
Direct Attached Disabled FCAL

Switch
Attached
Server

Switch Direct
Attached
Server
HUS

A host can be connected to HUS Fibre Channel ports either:

• Through Switch

o In this case, Multiple hosts can be connected to the same port


o This is the usual method of connection

• By Direct Attachment

o In this case only 1 host can be connected to the port


o This may be the preferred case if a host needs high IOPS, like a virtualization server

Page 9-3
Storage Allocation
Host Connection to HUS

1b. iSCSI

 HUS can be connected to the host


• Via an IP Switch
• Directly

(Diagram: a switch-attached server connects to HUS through an IP switch; a direct-attached
server connects straight to an HUS iSCSI port)

Page 9-4
Storage Allocation
Mapping Volumes to Ports

Mapping Volumes to Ports

2a. Know the Port Addresses — Fibre Channel

(Diagram: Fibre Channel ports 0E, 0F, 0G and 0H on controller 0)

Fibre Channel Ports are 8Gb/sec on HUS.

HUS 110 and HUS 130 have 4 embedded ports on the Controller.

HUS 110 embedded ports are disabled by default and need Fibre Channel Option Key to be
installed to work.

Additional Card (Option Card) can have 4 x 8Gb/sec ports.

Page 9-5
Storage Allocation
Mapping Volumes to Ports

2a. Know the Port Addresses — iSCSI

iSCSI ports are 10Gb/sec.

HUS 110 and 130 can have 2 x 10Gb/sec iSCSI ports per controller.

HUS 150 can have 4 x 10Gb/sec iSCSI ports per controller.

Page 9-6
Storage Allocation
Mapping Volumes to Ports

Host Groups

 Members of a host group should be on the same platform (OS)
 Maximum of 128 host groups per port
 Maximum of 2048 paths per host group

(Diagram: hosts A and B (HP-UX), host C (Solaris) and hosts D and E (Windows) connect through
a switch to one physical port on the HUS 100. Host groups 00, 01 and 02 on that port map the
host volumes to internal volumes in the RAID groups. Each host HBA is identified by its WWPN,
each host group supports a single OS type, and each host group can reuse volume numbers.)
LUN = Logical Unit Number

Page 9-7
Storage Allocation
Mapping Volumes to Ports

Host Group Security — LUN security

LUN security: Disabled <default>
 Only 1 host group per port (host group 000)
 Only single platform hosts can be connected
 All hosts connected to this port can access all LUs

LUN security: Enabled
 Multi-host groups can be added to the port
 Thus, multi-platform hosts are supported
 WWN based access security (WWPN of the HBA)

HBA = Host bus adapter

To protect mission-critical data in your disk storage system from unauthorized access, you
should implement LUN security. LUN security allows you to prevent unauthorized hosts from
either seeing or accessing the data on the secured LUN. If LUN security is applied to a particular
port, that port can only be accessed from within its own host group (also known as a host
storage domain). The hosts cannot access LUs associated with the other host groups.

Page 9-8
Storage Allocation
Mapping Volumes to Ports

2b. Enable Host Group Security

2c. Create Host Groups — Fibre Channel

 Create a host group or edit an existing host group

Page 9-9
Storage Allocation
Mapping Volumes to Ports

2c. Create Host Groups — Fibre Channel

Page 9-10
Storage Allocation
Mapping Volumes to Ports

2c. Create Host Groups — Fibre Channel

 Select Host Volume from top box, and Available Volumes from
lower box; click Set to confirm

In Fibre Channel there are 2 methods for setting the options.

• Simple setting

o You need to select the elements of the host computer's environment
o Target options necessary for the host computer to be connected are set automatically

• Detail setting

o Directly set the target options

The settings can be done in Options Tab.

Page 9-11
Storage Allocation
Mapping Volumes to Ports

2c. Create Host Groups — iSCSI

 Select the iSCSI Target option from the tree


 You can either create a new target or you can edit an existing target

Page 9-12
Storage Allocation
Mapping Volumes to Ports

2c. Create Host Groups — iSCSI

 Select the Volume tab to map the volumes

In iSCSI there are 2 methods for setting the options.

• Simple setting

o You need to set the elements of the host computer's environment
o Target options necessary for the host computer to be connected are set automatically

• Detail setting

o Directly set the target options

The settings can be done in Options Tab.

Page 9-13
Storage Allocation
Mapping Volumes to Ports

2c. Additional Settings for iSCSI

 HUS supports Challenge-Handshake Authentication Protocol (CHAP) for iSCSI targets
• You need to define the users first
• The user settings have to be done on host side with same information
(user name and secret)

2c. Additional Settings for iSCSI — 2 way authentication

 Select CHAP, None for the Authentication Method


 Check Yes to Enable Mutual Authentication, then click OK

Page 9-14
Storage Allocation
Simple Settings

Simple Settings

 Simple settings are found in the Options tab
 Select the Platform
 Select the Middleware
 Check with HDS if the OS is not listed

Page 9-15
Storage Allocation
Advanced Settings Explanations

Advanced Settings Explanations

 PSUE Read Reject Mode:
• Set this when the fence level of TrueCopy remote replication is Data, so that read
access to the P-VOL is suppressed when the pair status changes to PSUE
 Discovery CHAP Mode:
• Supports iSCSI Discovery with
CHAP
 Unique Extended COPY Mode:
• Supports XCOPY Command
issued from the VMware
 Unique Write Same Mode:
• Supports Write Same Command
issued from the VMware

Configuration Options for Different OS

 Operating systems have different requirements and show different


behavior when newly allocated storage volumes are detected:
• Solaris may issue a warning if volumes are not labeled
• The maximum size of a HUS volume can be 128TB
 Not all OS can handle a large size like this (for example, ESX 4.x)
• Before a HP-UX 11.i host may be able to see all new allocated storage
you may have to run the ioscan command
• Queue Depth is a parameter which can be defined in various places
 AIX may require you to define it on each LUN

Page 9-16
Storage Allocation
Queue Depth

Queue Depth

 Queue Depth or "Command Queue Extension" is used to control the number of SCSI commands
that can be managed by a physical storage port (not by HSD)
 These commands are usually queued by the server's HBAs and sent to the physical storage
port
 A HUS 100 physical storage port, by default, has 512 queue slots
• This can be extended to 1024, the maximum number of commands accepted
 Nevertheless, there is also a limit on how many commands can be handled by a single
volume (LUN)
 Queue Depth will be discussed in more detail in the “Advanced
Provisioning” course

 The best Queue Depth setting can be achieved by applying the following formula:
(512 or 1024) / #LUNs / #Hosts <= 32
• That is, the port's queue slots (512 or 1024) divided by the number of LUNs, then divided
by the number of hosts, should be less than or equal to 32

 Queue depth values higher than calculated above are fine unless:
• More than 512/1024 total commands to the port are exceeded
• 32 commands are exceeded for a LUN
 The formula above guarantees you will never exceed the queue
capacity
• Maximum performance may be achieved at higher queue depth values
• The value above is quite general and assumes all LUNs are online and
available to all hosts
 Simply spoken, avoid having more than 512/1024 commands arrive
at the port simultaneously and avoid exceeding 32 per LUN

Page 9-17
Storage Allocation
How to increase Queue Depth in SNM2

The following is a brief explanation of queue depth and how it pertains to HDS
arrays using Solaris' max_throttle parameter
 The maximum # per LUN for sd_max_throttle/ssd_max_throttle is 32
• Setting “ssd_max_throttle = 8” means that the host can send 8
commands at a time to any particular Volume (LUN)
 If this is set to 8 and there are 100 LUNs mapped out of the port, it is
possible to send a total of 800 commands at a time, which would
over-throttle the SCSI buffer on the port causing transport errors
• Depending on how timeout values are set on the HBA and system, this can cause the target
to fail, resulting in a loss of access to the devices on that port
 If, for example, one host only has 10 LUNs but there are 40 more
LUNs mapped out of that port, the other host’s I/O is going to affect
that port as well
• The calculation is “512/ALL LUNS” mapped out of that port to keep the
buffer from reaching a queue full condition.
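As a minimal sketch (the value 8 is illustrative, and whether the sd or ssd driver applies
depends on the Solaris release and HBA stack), the throttle is typically set in /etc/system
and takes effect after a reboot:

* /etc/system entry -- limit outstanding commands per LUN (illustrative value)
set ssd:ssd_max_throttle=8
* or, for configurations using the sd driver:
* set sd:sd_max_throttle=8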

How to increase Queue Depth in SNM2

Page 9-18
Storage Allocation
Instructor Demonstration

Instructor Demonstration

 Volume Allocation to Host


• Host Group Security
• Create Host Groups
• Assign Volumes

Page 9-19
Storage Allocation
Module Summary

Module Summary

 In this module, you should have learned:


• Various aspects that comprise storage to host connectivity
• The concept of host groups and how they are used
• The process of mapping volumes to a Host group using the Create and
Add Volume Wizard in SNM 2

Page 9-20
Storage Allocation
Module Review

Module Review

1. What are 2 ways that a Fibre Channel port can connect to host?
A. Direct connect
B. BUS connection
C. SAS connection
D. Switched connection
2. Host groups are created at what place?
A. Back-end ports
B. RAID groups
C. Front-end ports
D. DP pools

Page 9-21
Storage Allocation
Module Review

3. Which RAID type (or types) offers a single disk protection?


A. R-0
B. R-5
C. R-6
D. R-10
4. Which of the following statements are true?
A. CHAP Authentication passwords are clear text
B. The same Volume can be mapped to multiple ports
C. You can map volumes to mainframes

Page 9-22
10. Path Management
Module Objectives

 Upon completion of this module, you should be able to:


• Describe the features and benefits of Hitachi Dynamic Link Manager
(HDLM)
• Describe the pre-installation process
• List the installation steps
• Describe HDLM GUI
• Use Command Line Interface (CLI) for path management

Page 10-1
Path Management
HDLM Features and Benefits

HDLM Features and Benefits


This section presents an overview of Hitachi Dynamic Link Manager (HDLM).

Overview

 A path manager is always mandatory in a multipathing environment
 Due to the HUS 100 family’s dynamic virtual controller front-end, each path to a LUN is
considered an owner path
 I/Os can be distributed to all available paths, so there is no concept of controller LUN
ownership
 For this reason, the dynamic virtual controller front-end also supports the operating
systems’ built-in path managers
 In other words, the HUS 100 and AMS 2000 families support native multipath I/O from
several operating systems, meaning Hitachi Dynamic Link Manager itself is optional

Page 10-2
Path Management
Overview

 What is managed by Hitachi Dynamic Link Manager?


• iSCSI devices
• Hitachi storage system command devices, such as Hitachi RAID Manager
• EMC DMX series, EMC CX series and HP EVA series
 What is not managed by Hitachi Dynamic Link Manager?
• Built-in disks on a host
• Non-disk devices, such as tape devices

Page 10-3
Path Management
Features

Features

 Multipathing — Enables multiple paths from host to devices, allowing


access to a device even if a specific path is unavailable
 Path failover — Automatically redirects I/O operations to alternate
paths if a failure occurs, allowing processing to continue without
interruption
 Failback — Recovers a failed path and places it back online when it
becomes available
 Load balancing — Intelligently allocates I/O requests across all
available paths to prevent a heavily loaded path from adversely
affecting processing speed
 Path health checking — Automatically checks path status at
user-specified intervals
 Centralized management — Provides single management console
for multiple HDLM instances

Multipathing – Multiple paths can also be used to share I/O workloads and improve
performance.

Path failover – By removing the threat of I/O bottlenecks, HDLM protects your data paths and
increases performance and reliability.

Failback – By recovering a failed path and placing it back online when it becomes available, the
maximum number of paths available for load balancing and failover is assured.

Load balancing – By allocating I/O requests across all paths, load balancing ensures continuous
operation at optimum performance levels, along with improved system and application
performance. Several load balancing policies are supported.

Since HDLM automatically performs path health checking, the need to perform repeated manual
path status checks is eliminated.

Page 10-4
Path Management
Features

Multipathing

 Enables use of multiple read/write


paths using redundant physical path
components between a server and
storage devices
 How it works
• HDLM driver interfaces with the Host
Bus Adapter (HBA) driver or the
multipathing framework provided by
the OS
• Assigns a unique identifier to paths
between each storage device and
host
• Distributes application I/O across
each path according to failover and
load balancing

With multipathing, a failure with 1 or more components still allows applications to access their
data. In addition to providing fault tolerance, multipathing also serves to redistribute the
read/write load among multiple paths between the server and storage, helping to remove
bottlenecks and balance workloads. In addition, distributing data access across all the available
paths increases performance allowing more applications to be run and more work to be
performed in a shorter period of time.

The example shows an HDLM system configuration with the host and storage system attached
to a SAN using Fibre Channel connections. The host cable port is provided by the HBA. The
storage system cable port is a port (P). A logical unit (LU) in the storage system is the I/O
target of the host. The LU area called Dev is storage address space being written or read by the
host. The path is the route that connects the host and a Dev in an LU.

Page 10-5
Path Management
Features

Path Failover

 When a path fails, all outstanding and subsequent I/O requests shift
automatically and transparently from the failed or down path to
alternative paths
 Two types of failovers:
• Automatic
• Manual

As shown earlier, due to the AMS 2000 and HUS 100 family’s dynamic virtual controller front
end, HDLM will already use any available path. There are no inactive paths available.

As a result of failover, mission-critical operations continue without interruption, storage
assets are maximized, and business operations remain online.

A failure occurs when a path goes into the offline status. This can be caused by the following:

• An error occurred on the path


• A user intentionally placed the path offline by using the Path Management window in the
HDLM GUI
• A user executed the HDLM command's offline operation
• Hardware, such as cables or HBAs, has been removed
• Automatic failovers can be used for the following levels of errors:
• Critical — A fatal error that might stop the system
• Error — A high-risk error, which can be avoided by performing a failover or some other
countermeasure

You can switch the status of a path by manually placing the path online or offline. Manually
switching a path is useful, for example, when system maintenance needs to be done. You can
manually place a path online or offline by doing the following:

• Use the HDLM GUI Path Management window


• Execute the dlnkmgr commands online or offline operation

Page 10-6
Path Management
Features

Path Failback

 Brings a path that has recovered from an error back online


 Enables the maximum possible number of paths to always be
available and online, resulting in better load distribution across
multiple paths
 Two types:
• Automatic
• Manual
 Paths subject to intermittent error should be removed from those
subject to automatic failbacks

In order to use the automatic failback function, HDLM must already be regularly monitoring
error recovery.

HDLM will select the next path to be used first from among the online owner paths, and then
from the online non-owner paths. As a result, if an owner path recovers from an error, and then
HDLM automatically places the recovered path online while a non-owner path is in use, the path
will be automatically switched over from the non-owner path to the owner path that just
recovered from the error.

Page 10-7
Path Management
Features

Load Balancing

 When a host is connected to a storage subsystem via multiple paths, I/O data is
distributed across all paths
• Prevents a loaded-down path from affecting the processing speed of the
entire system

(Diagram: I/O flow without load balancing versus with load balancing)

Note: Some I/O operations managed by HDLM can be distributed across all available paths, and
some cannot. Therefore, even when the load balancing function is used, a particular I/O
operation might not necessarily allocate data to every available path. RAID Manager issuing
IOCTL to a command device is an example of an I/O operation that cannot allocate data to
every path.

Do not use the load balancing function that is accessible from the Microsoft iSCSI
Software Initiator user interface.

It is available in both a cluster and non-cluster environment.

All online paths are owner paths. Therefore, if one of the paths becomes unusable, the load will
be balanced among the remaining paths.

Page 10-8
Path Management
Features

Load Balancing

 Algorithms — round robin

(Diagram: I/O requests 1 through 6 alternate between the two paths from the host to storage)

 This load balancing algorithm is appropriate for random I/O processing
 Applications with I/O workloads that are typically
characterized as random include relational
databases and email servers

The first and most basic algorithm is round robin. This algorithm simply distributes I/O by
alternating requests across all available data paths. Some multipath solutions, such as the IBM
MPIO default PCM, only provide this type of load balancing.

This algorithm is acceptable for applications with primarily random I/O characteristics.
Applications that primarily generate sequential I/O requests actually can have performance
degradation from the use of the round robin algorithm.

Page 10-9
Path Management
Features

Load Balancing

 Algorithms — extended round robin

(Diagram: sequential I/O requests are issued down one path in succession before the next
requests move to the other path)

 This load balancing algorithm is appropriate for sequential I/O processing
 This characteristic is common for applications that execute batch processing or audio and
video streams

Extended Round-Robin

Distributes I/Os to paths depending on whether the I/O involves sequential or random access:

• For sequential access, a certain number of I/Os are issued to one path in succession

o The next path is chosen according to the round robin algorithm

• For random access, I/Os are distributed to multiple paths according to the round-robin
algorithm

Page 10-10
Path Management
Features

Load Balancing

 Algorithms — extended least I/O and extended least block

(Diagram: Path 1 has 2 I/Os queued, totaling 5 blocks; Path 2 has 3 I/Os queued, totaling 3
blocks)

Path    Total I/Os    Total Blocks
1       2             5
2       3             3

Least I/O will select Path 1; Least Block will select Path 2

The HDLM developers recently evaluated 2 additional queuing algorithms in efforts to further
enhance I/O throughput. This investigation resulted in 4 additional load balancing algorithms for
HDLM. They are:

• Least I/O
• Least Block
• Extended Least I/O
• Extended Least Block

Extensive testing showed excellent all-around performance for the Extended Least I/O
algorithm in environments exhibiting both random and sequential I/O characteristics. Due to
this result, Extended Least I/O is the default host setting for HDLM load balancing.

Page 10-11
Path Management
Features

Path Health Checking

 Monitors the status of online and offline paths at administrator-specified intervals
 When an error is detected, the failed path is placed offline
 Detects when a failed path is recovered and places it back online
 Checks the status of all online paths regardless of whether I/O
operations are being performed

Without path health checking an error cannot be detected unless an I/O operation is performed.

If an error is detected in a path, the path health checking function switches the status of that
path to Offline (E) or Online (E).

Page 10-12
Path Management
Features

Centralized Management

 Hitachi Global Link Manager


• Provides centralized management for multiple HDLM instances
• Uses Simple Network Management Protocol (SNMP) to communicate
with the host
• Empowers a single administrator to remotely manage multiple HDLM
multipath environments from a single point of control
• Consolidates and presents
complex multipath configuration
information in simplified host
and storage-centric views

The following components make up an HDLM environment managed by HGLM:

• HGLM server – The machine on which the HGLM software is installed

o The HGLM server collects system configuration information from each host, and
provides the information to the HGLM client
o The HGLM server also performs requests received from the HGLM client

• HGLM client – Any machine on which the Web-based HGLM GUI is used

o The HGLM GUI provides the user interface for managing the multi-path
environment

• Host – A machine on which application programs are installed

o Each host accesses a storage network to read and write application data
o HDLM manages the paths between each host and the storage systems they
access

• Storage system – An external storage device connected to a host

o Storage systems with paths managed by HDLM are incorporated into the HGLM
management environment

Page 10-13
Path Management
HDLM Pre-Installation Steps

HDLM Pre-Installation Steps


This section presents HDLM pre-installation steps and process.

Pre-Installation Process

 Before performing a new installation of HDLM for a Fibre Channel connection, check the
topology (for example, Fabric, AL) and complete the appropriate setup
 Use a single cable to connect the host to the storage subsystem

(Diagram: a host with multiple HBAs connected by a single path to one port (P) of an LU in
the storage subsystem)

Using multiple paths to connect a host to a storage subsystem before installing HDLM can result
in unstable Windows operations.

Page 10-14
Path Management
Pre-Installation Process

 To prepare for an HDLM installation:


1. Set up the storage system by assigning a volume to each port
2. Install HBAs onto the host
3. Install Windows and any non-HDLM drivers
4. Set up HBAs
5. On AIX, install the Hitachi ODM library
6. If the configuration uses an IP-SAN, install and set up iSCSI initiator
(iSCSI software or HBA)
7. Prepare LUs
8. Restart the host
9. Confirm the host is operating normally
10. Be sure to install the same HDLM version on all cluster nodes

To change the settings of a storage subsystem, follow the maintenance documentation for the
particular storage subsystem.

When installing HBAs, install as many HBAs as desired.

• In a cluster configuration, make sure that manufacturer and model of HBA is the same
for all hosts that make up the cluster
• Make sure that versions of HBA micro-programs are the same

Set up Switches

• For details on how to set up a switch, see the documentation for the particular switch
• This step is unnecessary if you do not use a switch

Set up BIOS for all HBAs (regardless of whether paths exist)

• Different settings are used for different topologies

Install Windows and any non-HDLM drivers

• Follow documentation for each product

Set up the HBAs

• See HBA documentation and manual to complete required setup

Page 10-15
Path Management
Pre-Installation Process

If your configuration uses an IP-SAN, install and set up the iSCSI initiator (iSCSI software
or HBA). For details, see iSCSI initiator documentation, documentation for the HBA, or storage
subsystem documentation.

Prepare the LUs

• Write signatures for each LU, create partitions and then format them
• Because the system is still in the single path configuration, no problems will occur even
if you write a signature for each LU

Restart the host

Confirm that the host is operating normally

Page 10-16
Path Management
Using HDLM GUI

Using HDLM GUI


This section presents the HDLM graphical user interface (GUI) and some common screens and
tasks.

HDLM GUI

 Configuration window

Page 10-17
Path Management
HDLM GUI

 Path status
• Online
• Offline(C): Indicates I/O cannot be issued because the path was placed offline
• Online(E): Indicates an error has occurred in the last online path for each device
• Offline(E): Indicates I/O cannot be issued because an error occurred on the path

Current Path Status: Gray indicates normal status; red indicates an error.

 Status display
• Provides filters to pinpoint your display

Page 10-18
Path Management
Setting the HDLM Options Screen

Setting the HDLM Options Screen

Function Settings

Load balancing (on/off)

• Round Robin – Distributes all I/O among multiple paths


• Extended Round Robin – Distributes I/O to paths depending on type of I/O

Path health checking (on/off)

When enabled (default), Dynamic Link Manager monitors all online paths at specified interval
and puts them into Offline(E) or Online(E) status if a failure is detected

There is a slight performance penalty due to extra probing I/O

The default interval is 30 minutes

Auto failback (on/off)

When enabled (not the default), Dynamic Link Manager monitors all Offline(E) and Online(E)
paths at specified intervals and restores them to online status if they are found to be
operational

Page 10-19
Path Management
Setting the HDLM Options Screen

Intermittent Error Monitor (on/off)

An intermittent error is a fault that occurs sporadically. A loose cable connection to an HBA, for example, might cause intermittent errors. If automatic failback is enabled, intermittent errors will cause a path to alternate between online and offline frequently, which can impact I/O performance. To eliminate repeated failovers and failbacks caused by intermittent errors, HDLM can remove a path from automatic failback if that path suffers intermittent errors. This process is called intermittent error monitoring. With intermittent error monitoring, HDLM monitors paths to see whether an error occurs a number of times within a specific error-monitoring interval.

Auto failback must be On. To prevent an intermittent error from reducing I/O performance, we
recommend that you monitor intermittent errors when automatic failback is enabled.

Parameters are Monitoring Interval and Number of Times

Example: Monitoring Interval = 30 minutes, Number of Times = 3. If an error occurs 3 times within a 30-minute period, the path is determined to have an intermittent error and is removed from automatic failback. The path will display an error status until the problem is corrected.

Remove LU (on/off):

Removes the LUN when all paths to the LUN are taken offline.
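For reference, the same function settings can also be changed from the HDLM command line with the dlnkmgr set operation. The lines below are a sketch based on typical dlnkmgr syntax; parameter names, defaults and allowed values differ between HDLM versions and platforms, so confirm them with dlnkmgr help set or the HDLM user guide for your release before using them:

>dlnkmgr set -lb on -lbtype rr             (enable load balancing with Round Robin; exrr selects Extended Round Robin)
>dlnkmgr set -pchk on -intvl 30            (enable path health checking with a 30-minute interval)
>dlnkmgr set -afb on -intvl 1              (enable auto failback, checking failed paths every minute)
>dlnkmgr set -iem on -intvl 30 -iemnum 3   (intermittent error monitoring: 3 errors within 30 minutes)
>dlnkmgr view -sys                         (display the resulting settings)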

Page 10-20
Path Management
HDLM Path List Window

HDLM Path List Window

Path List: This main window for the Dynamic Link Manager GUI displays the detailed
configuration and path information, allows you to change the path status, and provides access
to the other windows.

Options window: This window displays and allows you to change the Dynamic Link Manager
operating environment settings, including function settings and error management settings.

Help window: This window displays the HTML version of the user's manual. The Help window is opened automatically by your default Web browser software.

Page 10-21
Path Management
Using HDLM CLI for Path Management

Using HDLM CLI for Path Management


This section presents the use of commands for path management.

Using Command Line Interface (CLI) for Path Management

 General guidelines for using commands for HDLM operations


• Execute HDLM commands as a member of the administrators group
• To specify a parameter value containing one or more spaces, enclose the
entire value in double quotation marks (")
• If the I/O load on the dynamic disk is heavy, the view operation might take
an extended period of time

Windows Server 2008 supports user account control (UAC).

Use either of the following procedures to execute HDLM commands:

• Execute the HDLM command using the Administrator account


• To execute an HDLM command with a non-administrator account, use the Administrator:
Command Prompt window

If you attempt to execute an HDLM command by any other method, you might be asked
whether you have administrator permissions.
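As a simple illustration (assuming a default Windows Server 2008 setup and the built-in Administrator account), open an elevated Administrator: Command Prompt, either by right-clicking Command Prompt and selecting Run as administrator or with the runas command, and run the HDLM command there:

>runas /user:Administrator cmd.exe
(then, in the Administrator: Command Prompt window that opens)
>dlnkmgr view -sys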

Page 10-22
Path Management
HDLM CLI Overview

HDLM CLI Overview

 View: Displays host and storage system information, for example, "dlnkmgr view -sys"
 Offline: Places an online path offline
 Online: Places an offline path online
 Set: Changes Dynamic Link Manager parameters
 Clear: Clears settings back to their default values
 Help: Shows the operations and displays help for each one

When you are using Dynamic Link Manager for Microsoft Windows® systems, execute the
command as a user of the Administrators group. When you are using Dynamic Link Manager for
Sun Solaris systems, execute the command as a user with root permission.

Page 10-23
Path Management
Viewing Path Information with the CLI

Viewing Path Information with the CLI

 Displaying path information


• To display path information, execute the dlnkmgr command's view
operation with -path parameter specified
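For instance (a minimal sketch; the exact output columns vary by HDLM version and platform):

>dlnkmgr view -path

Each line of the output describes one path, including its PathID, the HBA and CHA ports it passes through, the LU it reaches and its current status (Online, Offline(C), Online(E) or Offline(E)). The PathID value is what you supply when placing an individual path offline or online.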

Page 10-24
Path Management
Changing Path Status with the CLI

Changing Path Status with the CLI

 Changing path status to online


• Check the current status of paths (dlnkmgr view -path)

• Example of a command that places all paths online that pass through HBA port 1.1 (see the sketch below)
 Verify that the statuses of all applicable paths have changed to Online:
>dlnkmgr view -path
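A sketch of the full sequence, assuming the HBA port address 1.1 used above (the address takes the host port number.bus number form; confirm the actual value for your host from the dlnkmgr view -path output):

>dlnkmgr view -path          (note the HBA port addresses and current path statuses)
>dlnkmgr online -hba 1.1     (place every path through HBA port 1.1 online; confirm when prompted, or add -s to suppress the prompt)
>dlnkmgr view -path          (verify that the affected paths now show Online)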

Page 10-25
Path Management
Module Summary

Module Summary

 In this module, you should have learned:


• Features and benefits of HDLM
• Both pre-installation and installation steps for installing HDLM
• How to perform common path management functions using HDLM GUI
and CLI interfaces

Page 10-26
Path Management
Module Review

Module Review

1. A host is connected to a storage subsystem via multiple paths.


There are online and stand-by paths configured. Due to a cable
failure, one of the online paths goes down but processing will
continue because some path statuses will change.
This is a description of:
A. Load balancing
B. Failover
C. Failback
D. Path health checking
2. What enables the maximum possible number of paths to always be
available after a path failure has been fixed?
A. Load balancing
B. Failover
C. Failback
D. Path health checking

Page 10-27
Path Management
Module Review

3. Hitachi Dynamic Link Manager provides:


A. Manual failover support
B. Automatic failover support
C. Manual failback support
D. Automatic failback support
E. All of the above
4. What is the correct order to install HDLM?
A. Set up storage, SAN, and HBA
B. Review HDLM Release Notes
C. Set up the LUNs
D. Connect the Host to Storage
E. Install the HDLM Software in a single-path configuration
F. Have the license key available
G. Reboot as prompted during installation

Page 10-28
Path Management
Module Review

5. The PRSV key: (select all that apply)


A. Is required for the HDLM functions to properly operate
B. Must be unique for each host
C. Must be re-entered if you get a KAPL09128-W message
D. Will be registered after the installation finishes
E. All of the above
6. Which of the following are valid filters that can be used to display
only the desired information in the HDLM GUI? (select all that
apply)
A. Online (E)
B. Offline (C)
C. Intermittent Error Monitor
D. Reservation Level
E. Online

Page 10-29
Path Management
Module Review

7. Which CLI command is used to display information for all paths?


A. dlnkmgr view -all
B. dlnkmgr show -ap
C. dlnkmgr view -path
D. dlnkmgr disp -all

Page 10-30
11. Hitachi Unified Storage Program
Products
Module Objectives

 Upon completion of this module, you should be:


• Familiar with the program products available on the HUS family

Please note
• This module provides an overview of the features and program products
of the HUS family. Additional HUS 100 software courses are available that
contain detailed information on these features, such as replication and
advanced provisioning.

Page 11-1
Hitachi Unified Storage Program Products
Products: Array Based Software

Products: Array Based Software

 Licensed features
• Cache residency manager
• Password protection
• SNMP agent support function
• ShadowImage In-System Replication
• TrueCopy Remote Replication
• LUN manager
• Copy-on-Write SnapShot
• Data Retention Utility
• Performance Monitor
• Cache partition manager
• TrueCopy Modular Distributed
• Audit logging and account authentication
• Volume Migration
• Power savings feature
• TrueCopy Extended Distance
• Dynamic Provisioning
• Dynamic Tiering
• Fibre Channel option

Page 11-2
Hitachi Unified Storage Program Products
Memory Management Layer

Memory Management Layer

 HUS has increased in:


• Number of disks (up to 960)
• Size of disks (up to 3TB)
• Size of LU (can be up to 128TB)
 Resulting in increase of metadata
• Replication products
• Quick formatting
 When replication licenses are enabled, HUS uses the memory
management layer (MML) to accommodate the metadata tables
 As compared to previous modular families, a reconfiguration of
memory or even rebooting the system is not necessary when
installing Copy-on-Write Snapshot (CoW) or TrueCopy Extended
Distance (TCE)

Page 11-3
Hitachi Unified Storage Program Products
Memory Management Layer

[Diagram: cache memory is divided into a User Data Region (mirror user data area, others) and a System Area (CoW/TCE and SI/TC/MVM metadata); the Memory Management Layer holds QF, TCE, CoW, TC and SI/MVM metadata, with overflow areas in an RG, a DP pool or the DMLU]

 When any of the replication products are enabled, they use the MML area to accommodate their metadata tables
 This fixed space (always present), taken from cache, is now much smaller than the expanded space in the System Region of previous modular families (since there is now a virtual memory mechanism in place)
 Each product has a footprint in the MML region of cache as shown, with an overflow paging area in the RG, a DP pool, or the DMLU

RAID group (RG) — sets aside space to take care of quick formatting.

Differential Management Logical Unit (DMLU) — takes care of metadata created by


ShadowImage, Volume Migration or TrueCopy Remote Replication. LU are assigned to the
DMLU.

Dynamic Provisioning (DP) Pool — space on a DP Pool for the metadata for Copy-on-Write
(CoW) and TrueCopy Extended Distance (TCE).

Page 11-4
Hitachi Unified Storage Program Products
Cache Partition Manager

Cache Partition Manager

 Delivers cache environment that can be optimized to specific


customer application requirements:
• Less cache required for specific workloads
• Better hit rate for same cache size
• Better optimization of I/O throughput for mixed workloads
• No need to mirror writes that do not benefit application

No other modular product has the ability to manage cache at this level. Modular storage
systems typically have a simple single block-size caching algorithm, resulting in inefficient use of
cache and/or I/O.

RAID group stripe sizes can also be selected to allow further tuning of the array to deliver
better performance without additional cost.

Cache Partition Manager is included as a no-cost option for all HUS systems.

The default cache segment size is 16KB.

Page 11-5
Hitachi Unified Storage Program Products
Cache Residency Manager

Cache Residency Manager

 Storage administrator defines a portion of cache for use by the cache


residency manager
 Storage administrator sets LUN to be completely loaded into cache
 All read/write activity occurs in cache
 Write data is mirrored and protected on disk asynchronously

[Diagram: the LUN is loaded into a cache residency area within mirrored cache, providing enhanced read/write performance]

Page 11-6
Hitachi Unified Storage Program Products
Volume Migration

Volume Migration

 Data lifecycle management


• Host resource is not required for Volume Migration
• Does not require change of host configuration, mount or boot
• Migrate data between tiers to meet performance requirements
• Migrate data from heavy I/O RG to low I/O RG to balance performance
 Performance improvement
• Addresses performance imbalance
• Removes performance bottleneck

Volume Migration allows LUNs to be migrated across disk types and across RAID groups.

Basic volumes can be migrated to dynamic volumes and vice versa.

Volumes can be migrated across RAID groups (R10 to R5 to R6), or disk media (SSD to SAS 10k,
to SAS 7.2k).

Page 11-7
Hitachi Unified Storage Program Products
Replication Products

Replication Products

ShadowImage

 In-system hardware-based copy


facility that provides:
• Full copies of volumes within
Hitachi family storage systems
• Replicates information with a
minimum impact to service levels

Production Data LU 1

Backup Data LU 2

ShadowImage is the in-system copy facility for Modular Storage HUS/AMS2000 families of
storage systems. It enables server-free backups, which allows customers to exceed service level
agreements (SLAs). It fulfills 2 primary functions:

• Copy open-systems data


• Backup data to a second LU

ShadowImage allows information to be split away and used for system backups, testing and
data mining applications while the customer’s business continues to run. It uses either graphical
or command line interfaces to create a copy and then control data replication and fast
resynchronization of logical volumes within the system.

Page 11-8
Hitachi Unified Storage Program Products
Replication Products

Copy-on-Write Snapshot

 Rapidly creates point-in-time snapshot copies of any data volume


within Hitachi storage systems without impacting the host service or
performance levels
• Realizes significant savings compared to full cloning methods because
snapshots store only changed data blocks
• Requires substantially smaller storage capacity for each snapshot copy
than source volume
• Can have 100,000 snapshots per system
• Can have up to 1,024 snapshots per volume

An essential component of business continuity is the ability to quickly replicate data. Hitachi
Copy-on-Write Snapshot provides logical snapshot data replication within Hitachi storage
systems for immediate use in decision support, software testing and development, data backup
or rapid recovery operations. Hitachi Unified Storage uses the DP pool to store the differential
data.

• Snapshots per source – 64 maximum (with V-VOL created)


• Snapshots – without V-VOL creation can be unlimited
• Command devices – 128 maximum
• Snapshots per HUS – 100,000 maximum
• Reverse resynchronization – supported
• Instant snap restore – supported
• Auto I/O switch on double/triple drive failure – supported
• Copy-on-Write operations can be performed via Storage Navigator Modular
2 (GUI or CLI), RM-CCI (CLI), or Replication Manager (GUI)

Page 11-9
Hitachi Unified Storage Program Products
Replication Products

TrueCopy Remote Replication (Synchronous) for High Availability

[Diagram: (1) host write to the P-VOL, (2) synchronous remote copy to the S-VOL, (3) remote copy complete, (4) write complete returned to the host]

 Zero data loss is possible


 Performance: dual-write plus 1 round-trip latency plus overhead
 Up to 50-60km
 Consistency groups are supported
 Can be used for migrations between AMS 2000 and HUS 100
families

• Provides a remote mirror of any data

o The remote copy is always identical to the local copy


o Allows very fast restart/recovery with no data loss
o No dependence on host operating system, database or file system
o Distance limit is variable, but typically around 40~50 km
o Impacts application response time
o Distance depends on application read/write activity, network bandwidth,
response-time tolerance and other factors

• The write I/O is not posted as complete to the application until it is written to a remote
system
• The remote copy is always a mirror image
• Provides fast recovery with no data loss
• Limited distance – response-time impact

Page 11-10
Hitachi Unified Storage Program Products
TrueCopy Extended Distance for Remote Backup

TrueCopy Extended Distance for Remote Backup

 Because the updates are written asynchronously to the remote


system, the S-VOL may not always be current with the P-VOL

[Diagram: (1) the local host writes data to the P-VOL on the local AMS/HUS, (2) the write completes and the host is released, (3) updates are sent asynchronously to the S-VOL on the remote AMS/HUS]

Page 11-11
Hitachi Unified Storage Program Products
TrueCopy Modular Distributed (TCMD)

TrueCopy Modular Distributed (TCMD)

 TCMD allows multiple arrays to connect to a remote array with TCE and TC
• Fan-in: up to 8 LUs on separate arrays copied to 1 array (consolidated on a single array)
• Fan-out: copy 1 LU to as many as 8 arrays (dispersed to multiple arrays)

Page 11-12
Hitachi Unified Storage Program Products
Replication Products

Replication Products

Management Tools

 Tools are tailored to meet specific installation requirements that need


to be met in order to run replication projects on a HUS block system
• CCI
 RAID Manager is the basic command line tool, which runs on all HDS storage systems
 Can either be directly used by the administrator or will be used as an
engine by other tools (for example, Replication Manager)
 CCI uses the Fibre Channel interface to communicate with HUS
• Replication Manager
 GUI-based tool which builds upon Command Suite and CCI
• SNM 2 GUI and/or CLI
 Available on AMS 2000 and HUS systems only
 In contrast to CCI, the IP interface rather than Fibre Channel is used to communicate with HUS
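As an illustration of the CCI approach, the following is a minimal sketch only. It assumes a HORCM instance is already running and that a ShadowImage device group named VG01 (a hypothetical name) has been defined in the horcm configuration files; group definitions and fence-level options depend on the replication product and CCI version:

>paircreate -g VG01 -vl      (create the pairs in group VG01, copying from the local P-VOLs)
>pairdisplay -g VG01         (check the pair status, for example COPY or PAIR)
>pairsplit -g VG01           (split the pairs so the S-VOLs can be used for backup or testing)
>pairresync -g VG01          (resynchronize the pairs afterward)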

Page 11-13
Hitachi Unified Storage Program Products
What is Dynamic Provisioning?

What is Dynamic Provisioning?

 To avoid future service interruptions, today it is common to over-


allocate storage by 75% or more
 With Dynamic Provisioning, disk capacity can be added as needed,
when needed
[Diagram: Fat Provisioning compared with Thin Provisioning. With fat provisioning, the initial purchase includes capacity that is purchased and allocated but unused beyond the actual data. With thin provisioning, the initial purchase covers only what is needed initially; further capacity is allocated but has no disks installed, disks can be added nondisruptively and purchased as needed, and an alert message indicates when additional storage is required]

Page 11-14
Hitachi Unified Storage Program Products
What is Dynamic Provisioning?

 Provision virtual capacity to hosts/application


• Virtual maximum capacity is specified and provisioned
• Real capacity is provisioned from dynamic provisioning pool as host
writes are received
[Diagram: a Virtual-VOL backed by a Dynamic Provisioning real capacity pool, with just-in-time space allocation]

Page 11-15
Hitachi Unified Storage Program Products
Dynamic Tiering

Dynamic Tiering

 Solution capabilities
• Automated data placement for higher performance and lower costs
• Simplified ability to manage multiple storage tiers as a single entity
• Self-optimized for higher performance and space efficiency
• Page-based granular data movement for highest efficiency and throughput
 Business value
• CAPEX and OPEX savings by moving data to lower-cost tiers
• Increase storage utilization up to 50%
• Easy alignment of business application needs to right-cost infrastructure

[Diagram: storage tiers mapped to a data heat index, with a high activity set, a normal working set and a quiet data set]

Automate and eliminate the complexities of efficient tiered storage

Capital Expenditure (CAPEX)

Operational Expenditure (OPEX)

Page 11-16
Hitachi Unified Storage Program Products
Use Cases

Use Cases

1. Reduce price per IOPS while keeping performance

 Compared to conventional HDP pools (all SAS), drive price per IOPS
can be reduced by about 15% by mixing with NL SAS
[Chart: drive cost versus performance (IOPS) for an all-NL-SAS HDP pool, an all-SAS HDP pool and a mixed SAS/NL-SAS HDT pool. With conventional HDP, an all-SAS configuration must be selected to reach the required IOPS, at high cost; with HDT, a mixed configuration can be selected and drive cost is reduced by about 15%]

HUS 150 with 100 HDD x 300GB SAS 10K: 10,000 IOPS

HUS 150 with 35 HDD (5 x SSD + 30 x SAS): 10,000 IOPS

2. Improve performance without increasing the price per IOPS

 Compared to the conventional HDP pool with only performance SAS


drives, performance is improved by up to 50% by mixing with SSDs
[Chart: drive cost versus performance (IOPS) for an all-SAS HDP pool, an all-SSD HDP pool and a mixed SSD/SAS HDT pool. With conventional HDP, improving performance beyond SAS requires adding more SAS drives or selecting an SSD pool, at high cost; with HDT, a mixed configuration can be selected and performance improves without increasing the price per IOPS]

HUS 150 with 100 drives x 300GB SAS 10K: 10,000 IOPS

HUS 150 with 50 drives (5 x SSD + 45 x SAS): 15,000 IOPS

Page 11-17
Hitachi Unified Storage Program Products
Software Bundles

Software Bundles

Available on all HUS 100 models

Hitachi Base Operating System E:
• Device Manager
• Hitachi Storage Navigator Modular 2
• LUN manager
• Performance monitor feature
• SNMP agent
• Cache residency manager feature
• Cache partition manager feature
• Volume migrator

Hitachi Base Operating System M:
• Hitachi Device Manager
• Storage Navigator Modular 2
• LUN manager
• Performance monitor feature
• SNMP agent
• Cache residency manager feature
• Cache partition manager feature
• Volume migrator
• Hitachi Dynamic Provisioning
• Hitachi Copy-on-Write Snapshot
• Hitachi ShadowImage In-System Replication software bundle

If purchasing HUS 110, HUS 130 or HUS 150 for BLOCK STORAGE ONLY, customers MUST
also purchase Hitachi Base Operating System M (BOS M). The key differences between this
bundle and the one sold with AMS 2000 family storage systems include:

• Device Manager is included


• HSNM 2 is included (HSNM 2 will no longer be listed as a separate item in the quote)
• Dynamic Provisioning is included (HDP will no longer be listed as a separate item in
the quote)
• Copy-on-Write Snapshot is included
• ShadowImage In-System Replication software bundle is included
• Hitachi Audit Logging is not included in the HUS BOS M bundle (It is now included in
Hitachi Base Operating System Security Extension, which is presented on the next slide)
• Account Authentication is not included in the HUS BOS M bundle (It is now included in
Base Operating System Security Extension)

Page 11-18
Hitachi Unified Storage Program Products
Software Bundles

Optional for HUS 110, 130 and 150 models

Hitachi Base Operating System Security Extension


Hitachi Audit Logging
Hitachi account authentication
Hitachi Data Retention Manager

Hitachi Base Operating System Security Extension includes program products that were formerly
included in BOS M for AMS 2000.

Page 11-19
Hitachi Unified Storage Program Products
Software Bundles

Other optional HUS 100 family software products

Optional Program Products


Hitachi TrueCopy Remote Replication bundle
Hitachi TrueCopy Extended Distance
Power savings feature
Hitachi Dynamic Tiering
TrueCopy Modular Distributed

In addition to purchasing the base bundles, customers may purchase optional products as well. This
slide presents an overview of the optional products available.

• TrueCopy Remote Replication bundle supports synchronous replication between 2 HUS


storage systems
• TrueCopy Extended Distance supports asynchronous replication between 2 HUS storage
systems

Note: TrueCopy Remote Replication bundle and TrueCopy Extended Distance may not coexist on
the same HUS storage system. The system will only allow one of these 2 products to operate at a
time. To avoid any problems, do not allow a customer to purchase licenses for both products for
use on the same HUS system. The Hitachi Data Systems Configurator is being designed to prevent
both products from being purchased for the same HUS system.

• The power savings feature enables the spin down of RAID groups (RG) when they are not
being accessed by business applications, resulting in a decrease in energy consumption
• The Fibre Channel option must be purchased to enable the embedded Fibre Channel ports
of HUS 110

o Once the associated license key is installed or enabled the Fibre Channel ports
embedded in the base HUS system are enabled

• TrueCopy Extended Distance requires that Dynamic Provisioning be installed on the system

o Dynamic Provisioning is included in the BOS M bundle as well as in BOS M Upgrade.

Page 11-20
Hitachi Unified Storage Program Products
Module Summary

Module Summary

 In this module, you should have learned:


• Software products available with HUS

Page 11-21
Hitachi Unified Storage Program Products
Module Review

Module Review

1. ShadowImage can be invoked without installing a license key


because it is part of the Basic Operating System (BOS).
True or False?
2. The system always needs a reboot after installing the HDP key.
True or False?
3. Which of the following are possible and which are impossible when
using Modular Volume Migration:
A. Migrate data into a larger volume
B. Migrate data from HUS 130 to HUS 150
C. Migrate data from a RAID 6 to a RAID 10
D. Migrate data without interrupting the application

Page 11-22
12. Performing Hitachi Unified Storage
Maintenance
Module Objectives

 Upon completion of this module, you should be able to:


• Replace the controller
• Replace disks
• Replace Enclosure Controller (ENC) modules
• Replace batteries
• Replace Fibre Channel and iSCSI modules
• Access product maintenance documentation

Page 12-1
Performing Hitachi Unified Storage Maintenance
Maintenance Overview and Preparation

Maintenance Overview and Preparation


This section provides an overview of Hitachi Unified Storage (HUS) maintenance, preparation
and a procedural demonstration.

Maintenance Overview

 Maintenance activities
• Replacing existing components
 Hard disk drives
 Control unit
 Enclosure Controller (ENC) unit
 Small Form-Factor Pluggable (SFP) Fibre Channel host connector
• Adding new components
 Hard disk drives
 Expansion trays
 iSCSI interfaces

Page 12-2
Performing Hitachi Unified Storage Maintenance
Instructor Demonstration

Instructor Demonstration

 Maintenance Document Library (MDL) Procedure Review


• Addition/Removal/Relocation Manual
• Replacement Manual
• HUS 100 Service Guide – MK-91DF8302

Hitachi Unified Storage (HUS)

Page 12-3
Performing Hitachi Unified Storage Maintenance
Maintenance Preparation

Maintenance Preparation

 General guidelines
• Print relevant maintenance procedure (if required)
• Read through the entire procedure before performing maintenance tasks
• Check for Alerts and Techtips
• Check for any firmware requirements
 Notes
• All component maintenance should be available online
• Model upgrades require downtime
 The only supported upgrade is HUS 130 to HUS 150

Be familiar with the technical information in the Maintenance Manual.

Be sure to use the correct version of the Maintenance Manual that corresponds to the
microcode version of any systems you are supporting.

Note: You may need to keep multiple versions available for ready access.

Review the Engineering Change Notice (ECN) for every microcode release.

Note: You should be familiar with the updates and corrections that are being
implemented over time for HUS even when your customer is skipping some of the
releases on their systems.

Be sure you are on the CMS Alerts Internal distribution list. Review the Technical Tips and
Alerts that are distributed for HUS systems.

Note: You can review the Technical Tips and Alerts at any time in the HiPin system on
HDSNet.

Remember that information in Technical Tips and Alerts takes priority over information in the
ECNs or Maintenance Manual. Information in the ECN takes priority over any information in the
(correct version) of the Maintenance Manual.

Page 12-4
Performing Hitachi Unified Storage Maintenance
General Maintenance Information

General Maintenance Information


This section presents the type of general maintenance information available in support of
Hitachi Unified Storage.

General HUS Information

Field replaceable units (FRUs)

 FRUs for HUS 100 Family Controller


 Controller Board
• Replace entire controller board
 Cache DIMMs
 Power supply units
 Fans
 Cache backup battery
 Host I/O modules
 Host connectors (SFPs)

CAUTION

Touching heat sinks or ICs may cause burns.


Be sure to handle with care.

Dual In-line Memory Module (DIMM)

Small Form-Factor Pluggable module (SFP)

Integrated circuit chips (ICs)

Page 12-5
Performing Hitachi Unified Storage Maintenance
General HUS Information

Field replaceable units (FRUs)

 FRUs for disk trays


• DBS, DBL and DBX
 Disks
 ENCs
 Power supply units
 Know the model names, part numbers and existing firmware on the
components before performing a replacement
 Replace only the proper part
 For more information, please read the replacement section of the
DF850 Maintenance Manual (see following example)

Disk trays:

• DBL (3.5-inches x 12 disks) — Nearline Serial Attached SCSI (NLSAS) disks


• DBS (2.5-inches x 24 disks) — Serial Attached SCSI (SAS) and flash drives
• DBX (3.5-inches x 48 disks) — NLSAS disks

Page 12-6
Performing Hitachi Unified Storage Maintenance
Getting the Part Numbers

Getting the Part Numbers

 Locate the part numbers in the Maintenance Manual

This is a snapshot from the manual. You can also use the Parts Catalog manual to find location information and additional information on the parts.

Page 12-7
Performing Hitachi Unified Storage Maintenance
Drive Firmware

Drive Firmware

 Drive firmware is typically updated automatically within the general


firmware update task and is not updated in a stand-alone process
 There may be situations where a drive firmware update is necessary,
independent of a normal microcode update
 Refer to the next slide for the steps to determine the firmware
version of the current drive

Page 12-8
Performing Hitachi Unified Storage Maintenance
Finding Drive Firmware

Finding Drive Firmware

http://<IP>/drvfirm

Select a drive

1. Enter the path: http://IP of Controller/drvfirm

2. Enter the maintenance user id (main…) and password (hos..)

3. The firmware of the storage system can be determined from either:

o Web Tool
o HSNM 2

Page 12-9
Performing Hitachi Unified Storage Maintenance
Part Location

Part Location

A Revision — Controller

The controllers are accessed and located in HUS 110, HUS 130 and HUS 150 as described below.

• Access in HUS 110 and HUS 130 is from the rear of the rack
• Access in HUS 150 is from the front of the rack
• In HUS 110 and HUS 130, controller 0 is on the left and controller 1 is on the right
• In HUS 150, controller 1 is on the left and controller 0 is on the right

Page 12-10
Performing Hitachi Unified Storage Maintenance
Part Location

A Revision — Disk Trays

Page 12-11
Performing Hitachi Unified Storage Maintenance
Drive Numbers in the Trays

Drive Numbers in the Trays

Page 12-12
Performing Hitachi Unified Storage Maintenance
Replacing Hard Disk Drives

Replacing Hard Disk Drives


This section provides instructions and illustrations for replacing the Hitachi Unified Storage hard
disk drives.

Safety Precautions

Notes when replacing hard disk drives

Do not turn off power Wear a wrist strap


supply • Put on a wrist strap and
• Perform replacement work connect it to a metal part
without turning off the main of the main subsystem
subsystem power supply • Wearing a wrist strap
• It cannot be replaced prevents part damage
correctly while offline caused by static electrical
charge build up
Prepare new hard disk
drives
• Prepare spare hard disk
drives in advance to
perform the work smoothly

Page 12-13
Performing Hitachi Unified Storage Maintenance
Replacing a Drive

Replacing a Drive

CBXSL, CBSL and DBL

Controllers:

• HUS 110 (2 models) — CBXSL and CBXSS


• HUS 130 (2 models) — CBSL and CBSS
• HUS 150 (1 model) — CBL

Disk trays:

• DBL (3.5-inches x 12 disks) — NLSAS disks


• DBS (2.5-inches x 24 disks) — SAS and flash drives
• DBX (3.5-inches x 48 disks) — NLSAS disks

Page 12-14
Performing Hitachi Unified Storage Maintenance
Replacing a Drive

CBXSS, CBSS and DBS

DBX

Page 12-15
Performing Hitachi Unified Storage Maintenance
Standard Time for Correction Copy or Copy Back

Standard Time for Correction Copy or Copy Back

SAS 10K rpm

Page 12-16
Performing Hitachi Unified Storage Maintenance
Standard Time for Correction Copy or Copy Back

SAS 7.2K rpm

Note: This is a partial list. For the full list, refer to the manual. When an SAS drive with higher
capacity becomes available in the future, the time will increase.

Page 12-17
Performing Hitachi Unified Storage Maintenance
Checking for Successful Replacement

Checking for Successful Replacement

 Examine the Web Tool


• Visual Part
• Warning and Information messages
 Examine HSNM 2
• Alerts and Events
 Section with errors (should no longer show)
 Information messages
 You can check with the HSNM 2 CLI
• auparts
• auinfomsg
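A sketch of the equivalent check from the HSNM 2 CLI; the registered array name (HUS130_01) is an example only, and the available options are described in the CLI reference for your HSNM 2 version:

>auparts -unit HUS130_01      (lists the status of each part; the replaced part should show a normal status)
>auinfomsg -unit HUS130_01    (displays the information messages recorded by the array)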

Page 12-18
Performing Hitachi Unified Storage Maintenance
Replacing the Hitachi Unified Storage Control Unit

Replacing the Hitachi Unified Storage Control Unit


This section provides instructions and illustrations for replacing the Hitachi Unified Storage
control units.

Handle Components with Care

 Heat sinks, ICs and many other electronic components can become very hot and cause burns.
 Handle with care!

Page 12-19
Performing Hitachi Unified Storage Maintenance
Wear Wrist Strap

Wear Wrist Strap

 Before unpacking and replacing maintenance components


• Wear a wrist strap
• Connect the grounding clip at the opposite end of the wrist strap to the
metal chassis frame
 When inserting control unit into subsystem
• Support the control unit by touching the metal part with fingers of the
hand wearing the wrist strap

Page 12-20
Performing Hitachi Unified Storage Maintenance
Replacing the Control Unit

Replacing the Control Unit

Controller positions are different for HUS 110, HUS 130 and HUS 150.

Page 12-21
Performing Hitachi Unified Storage Maintenance
Replacing the Control Unit

 Complete replacement within 10 minutes.


• Otherwise the system may power off due to abnormal temperature
increase
 Before starting replacement
• Review the control unit replacement procedure in the appropriate storage
system Maintenance Manual
 Procedure may vary depending on system type and microcode level

Page 12-22
Performing Hitachi Unified Storage Maintenance
Replacing the Control Unit

HUS 110 and HUS 130

Remove the Controller

For the CBXSL/CBXSS/CBSL/CBSS:

1. Loosen the right and left blue screws

2. Open the right and left levers toward you

o When the levers are completely opened, the controller is released

3. Remove all of the cables connected to the controller

o When the drive box is connected, also remove the SAS (ENC) cable

For the CBL:

1. Slide the right and left blue latches, and then open the levers toward you

2. When the levers are completely opened, the controller is released

3. Slide the controller out, and then remove it

Page 12-23
Performing Hitachi Unified Storage Maintenance
Replacing the Control Unit

Controller LEDs Overview

HUS 150

Page 12-24
Performing Hitachi Unified Storage Maintenance
IPv6 Usage Details

HUS Controller LEDs

IPv6 Usage Details

 When preparing a Hitachi Unified Storage system to use the IPv6


protocol:
• HDS recommends manually setting the IPv6 address
 When replacing the control unit due to failure or another issue:
• Execute a search array and register again
 For range of IPv6 address set manually:
• Use the global unicast address “2001::/16” for IPv6 Internet

Page 12-25
Performing Hitachi Unified Storage Maintenance
Replacing Hitachi Unified Storage ENC Unit and I/O Modules

Replacing Hitachi Unified Storage ENC Unit and I/O Modules


This section presents instructions and illustrations for replacing the Hitachi Unified Storage ENC
controller unit.

Wear Wrist Strap

 When inserting an Enclosure Controller (ENC) into subsystem


• Support the ENC unit by touching the metal part with fingers of the hand
wearing the wrist strap
 Replacing ENC is different for these units
• DBS (and DBL), DBX and DBW

Before unpacking and replacing maintenance components, be sure to wear a wrist strap and
connect the grounding clip at the opposite end of the wrist strap to the chassis frame.

Page 12-26
Performing Hitachi Unified Storage Maintenance
Replacing the ENC Unit

Replacing the ENC Unit

 Complete replacement within 10 minutes


• Otherwise the system may power off due to abnormal temperature
increase
 Before starting replacement:
• Review the ENC replacement procedure in the appropriate storage
system Maintenance Manual
• Procedure may vary depending on system type and microcode level

Page 12-27
Performing Hitachi Unified Storage Maintenance
Replacing the ENC Unit

On DBS and DBL

Replacing the ENC

Ensure the red ALM LED on the I/O Module (ENC) is illuminated.

When the ALM LED on the I/O Module (ENC) you are replacing is off, remove the I/O Module
(ENC) following these steps:

1. Open the right and left levers toward you

2. When the levers are completely opened, the I/O Module (ENC) is released

3. Remove the SAS (ENC) cable connected to the I/O Module (ENC) you are replacing

Note:

• If the cable cannot be easily removed, do not pull it by force


• The cable can be damaged if it is forcibly bent upward or downward

Page 12-28
Performing Hitachi Unified Storage Maintenance
Replacing the ENC Unit

On DBX

Replacement Procedure with the Power Turned On

When the red ALM LED of the I/O Card (ENC) to be replaced is illuminated, follow the error
collection item in the generated error message. Verify that the required error information is collected.

1. Pull the DBX out of the rack, and remove the top cover

2. Ensure the red ALM LED of the I/O Card (ENC) to be replaced is illuminated

3. Open the right and left levers toward you, while at the same time pressing the right and left
blue buttons that secure the levers of the I/O Card (ENC)

4. Remove the I/O Card (ENC) by pulling it out

5. Wait a minimum of 20 seconds

6. Insert a new I/O Card (ENC) until its lever is slightly opened

7. Ensure the red ALM LED on the I/O Card (ENC) is off

8. Ensure the green READY LED on the front of the controller box is illuminated, and the red
ALARM LED and orange WARNING LED are off

o The green READY LED on the front of the controller box may blink rapidly (for 30 to
50 minutes, or 40 to 60 minutes for the CBL) before changing to a steady
illumination

Page 12-29
Performing Hitachi Unified Storage Maintenance
Replacing an I/O Module

Replacing an I/O Module

On CBL

This replacement procedure


needs Storage Navigator Modular
(see notes)

Log in to Storage Navigator Modular.

1. Detach the drive I/O module

a. Select Components, and then I/O Modules on the unit window of Hitachi
Storage Navigator Modular 2
b. Select the module to change, and then click the Detach I/O Module button
c. When the confirmation message displays, click Confirm
d. Ensure the red STATUS LED on the Drive I/O Module is illuminated
2. Remove the SAS (ENC) cable connected to the Drive I/O Module to be replaced

Note:

o If the cable cannot be easily removed, do not pull it by force


o The cable can be damaged if it is forcibly bent upward or downward

3. Remove the Drive I/O Module

a. Loosen one blue screw that secures the Drive I/O Module, and then pull the lever
open

Page 12-30
Performing Hitachi Unified Storage Maintenance
Replacing an I/O Module

 When the lever is completely opened, the Drive I/O Module is released

b. Pull out and remove the Drive I/O Module

c. Temporarily place the Drive I/O Module in a location where anti-static measures
are taken

4. Wait a minimum of 20 seconds

5. Install the new Drive I/O Module

a. Push the new Drive I/O Module into the slot with its right and left levers completely
opened

b. Close the levers and tighten one blue screw to secure the Drive I/O Module

6. Ensure the red STATUS LED on the Drive I/O Module is off

Page 12-31
Performing Hitachi Unified Storage Maintenance
Replacing the SFP Fibre Channel Host Connector

Replacing the SFP Fibre Channel Host Connector


This section provides instructions and illustrations for replacing the Small Form-Factor Pluggable
(SFP) module host connector.

Wear Wrist Strap

 Before unpacking and replacing maintenance components:


• Wear a wrist strap
• Connect the grounding clip at the opposite end of the wrist strap to the
metal chassis frame

Page 12-32
Performing Hitachi Unified Storage Maintenance
Reviewing Host Connector Replacement Procedure

Reviewing Host Connector Replacement Procedure

 Before starting the connector replacement:


• Review the host connector replacement procedure in the appropriate
storage system maintenance manual
 Procedure may vary depending on system type and microcode level

8Gb/sec Fibre Channel host connectors are different from 10Gb/sec iSCSI host connectors.

Page 12-33
Performing Hitachi Unified Storage Maintenance
Replacing the SFP Fibre Channel Host Connector

Replacing the SFP Fibre Channel Host Connector

CBXSL, CBXSS,
CBSS and CBSL

CBL

1. Before beginning, collect a simple trace

2. Remove the FC cables connected to the control unit mounting of the FC host connector
to be replaced

3. Remove the host connector after raising the lever

4. Wait a minimum of 20 seconds

o If the host connector is inserted before 20 seconds has elapsed, the host
connector may not recover normally

5. Check the insertion direction of the FC host connector

6. Insert the FC host connector into the port until it clicks into place

7. Connect the FC cables

8. Ensure the PORT LED is illuminated

9. Check the information and error messages on the Web Tool

Page 12-34
Performing Hitachi Unified Storage Maintenance
Replacing the SFP Fibre Channel Host Connector

 Do not replace a host connector that operates normally


 Before replacing a host connector, confirm that it is blocked:
• Red HALM LED should be illuminated
 Remove the Fibre Channel cables connected to the control unit
mounting for the host connector to be replaced

See the following slide for HALM LED locations

HALM LED is the Host connector Alarm LED

1. Remove the Fibre Channel cables connected to the controller mounting on the host
connector to be replaced

Note:

o If the cable cannot be easily removed, do not pull it by force


o The cable can be damaged if it is forcibly bent upward or downward

2. Remove the host connector after raising the lever

3. Wait a minimum of 20 seconds

o If the host connector is inserted before 20 seconds has elapsed, the host
connector may not recover normally

4. Check the insertion direction of the host connector and insert the host connector in the
port until it clicks into place

Note: Be sure to install the same type of the host connector as the one which was
removed

Page 12-35
Performing Hitachi Unified Storage Maintenance
Replacing the SFP Fibre Channel Host Connector

5. Connect the Fibre Channel Interface cables

Note: If the Link LED does not light, other failures may be considered. Restore it
following Troubleshooting “Chapter 1. Flowchart for Troubleshooting” (TRBL 01-0000)

6. Refer to the Information Message on the Web Tool to ensure that "I I53A0g Host Connector recovered (Port xy)" is indicated

o This message displays within 10 seconds of inserting the host connector


o When the message displays, the replacement of host connector is complete

Page 12-36
Performing Hitachi Unified Storage Maintenance
HALM LED Locations

HALM LED Locations

Fibre Channel Ports

Page 12-37
Performing Hitachi Unified Storage Maintenance
HALM LED Locations

iSCSI Ports

Page 12-38
Performing Hitachi Unified Storage Maintenance
Replacing the Cache Battery Module

Replacing the Cache Battery Module


This section provides instructions and illustrations for replacing the cache battery module.

Introduction

 The battery location is different for:


• CBXSL, CBXSS, CBSS and CBSL
• CBL
 The procedure is different for:
• CBXSL, CBXSS, CBSS and CBSL
• CBL

Page 12-39
Performing Hitachi Unified Storage Maintenance
Battery Location

Battery Location

CBXSS, CBXSL, CBSS


and CBSL

Remove the power unit

1. Remove the power unit on which the red B-ALM LED for the cache backup battery is
located

Note: When the power unit is removed, W07zyC PS alarm (Unit-0, PS-x) displays in the
Information Message on the Web Tool

a. Lift the latch on the cable holder of the power unit to release the lock, and then slide
the cable holder out

b. Remove the power cable from the power unit on which the red B-ALM LED for the
cache backup battery is located

c. Pull the lever open while pressing the latch on the power unit inward with your right
thumb

o When the lever is completely opened, the power unit is released

d. Pull out and remove the power unit while holding its body with both hands

2. Remove the cache backup battery

a. Loosen the blue screw on the cache backup battery cover and then open the cover

Page 12-40
Performing Hitachi Unified Storage Maintenance
Battery Location

b. Remove the cable for the cache backup battery from the cable clamp

c. Remove the cable for the cache backup battery from the connector on the power
unit

d. Remove the cache backup battery

3. Install the new cache backup battery

a. Put the new cache backup battery on the power unit, and then connect the cable for
the cache backup battery to the connector on the power unit

b. Secure the cable for the cache backup battery with the cable clamp

4. Replace the power unit

Page 12-41
Performing Hitachi Unified Storage Maintenance
Battery Location

CBL

1. Loosen the blue screw that secures the cache backup battery

2. Open the lever, and then pull out and remove the cache backup battery

Note: The cache backup battery is about 488 mm deep and weighs about 5.0 kg, so remove it carefully

3. Wait a minimum of 20 seconds

4. Install the new cache backup battery

Note: If you insert the cache backup battery without waiting for a minimum of 20
seconds, the cache backup battery may not recover normally

5. With the lever opened completely, insert the cache backup battery completely into the
slot

6. Close the lever, and tighten the blue screw to secure the cache backup battery

Page 12-42
Performing Hitachi Unified Storage Maintenance
Module Summary

Module Summary

 In this module, you should have learned:


• The location of HUS components
• Procedures for replacing HUS components
• Required safety precautions

Page 12-43
Performing Hitachi Unified Storage Maintenance
Module Review

Module Review

State True or False for the following:


1. The cache battery replacement is similar for CBXSS and CBSS.
2. The CBL battery is located in the rear of the unit.
3. A DBX has 4 ENCs.
4. Heat sinks and ICs can be hot during controller replacement.
5. FC and iSCSI 10Gb SFPs are interchangeable.

Page 12-44
13. Troubleshooting Hitachi Unified Storage
Module Objectives

 Upon completion of this module, you should be able to:


• Describe the troubleshooting steps
• Summarize the available troubleshooting tools

Page 13-1
Troubleshooting Hitachi Unified Storage
Detecting Failures

Detecting Failures
This section presents methods used to detect failures.

Failure Detection Methods

 Failures can be detected in the following ways:


1. Simple Network Management Protocol (SNMP)
2. Email notification
3. Error messages
a. Web Tool — Event Log
b. Hitachi Storage Navigator Modular 2 (SNM 2) — Alerts and Event
Dialog
4. LEDs
5. Hi-Track Remote Monitoring system

Page 13-2
Troubleshooting Hitachi Unified Storage
SNMP Setup

SNMP Setup

Page 13-3
Troubleshooting Hitachi Unified Storage
SNMP Setup

Page 13-4
Troubleshooting Hitachi Unified Storage
SNMP Setup

Page 13-5
Troubleshooting Hitachi Unified Storage
SNMP Setup

Page 13-6
Troubleshooting Hitachi Unified Storage
Troubleshooting with Error Messages

Troubleshooting with Error Messages

Web Tool — Event Log

The Patrol lamp (Ready LED) shows the status of components.

The Warning Information Messages show the status of the array.

Page 13-7
Troubleshooting Hitachi Unified Storage
Troubleshooting with Error Messages

Storage Navigator Modular 2 — Alerts and Events

Collect a trace

Failed parts

Page 13-8
Troubleshooting Hitachi Unified Storage
Troubleshooting with LEDs

Troubleshooting with LEDs

LED Locations

 Orange LED indicates a warning (component failure)


 Red LED indicates a serious failure
 The diagram above shows the LED locations

Page 13-9
Troubleshooting Hitachi Unified Storage
Troubleshooting with LEDs

Battery Alarm LED

 Green blinking LED status indicates the battery is charging


 Red LED indicates the battery has failed

Page 13-10
Troubleshooting Hitachi Unified Storage
Troubleshooting with LEDs

Controller (HUS 110 and HUS 130) and Disk Tray LEDs

 Two kinds of LEDs can be seen:


• Entire controller level
• Individual hard disk level

Hitachi Unified Storage (HUS)

Controller HUS 150 LEDs

CBS

CBL

I/O Module

ENC

Page 13-11
Troubleshooting Hitachi Unified Storage
Troubleshooting with LEDs

Power Supply LED

Page 13-12
Troubleshooting Hitachi Unified Storage
Hi-Track Monitor

Hi-Track Monitor
This section presents installation and use of Hi-Track Monitor.

Installation Guide

Page 13-13
Troubleshooting Hitachi Unified Storage
Product Support

Product Support

Hi-Track Monitor supports the following products:

 Miscellaneous Storage such as:


• Hitachi Unified Storage family
• Hitachi Adaptable Modular Storage family
• Hitachi Compute Rack server CR xM (CR2x0H/S)
• Hitachi Compute Blade server models 2000, 320, IOEU
• Hitachi Content Platform
• Hitachi NAS Platform
 Switches and Director such as:
• Brocade
• Cisco
• InRange
• McData
• QLogic

Page 13-14
Troubleshooting Hitachi Unified Storage
Components

Components

[Diagram: Hi-Track architecture. The Hi-Track Monitor application runs on a customer workstation (Windows or Solaris), connects to 9500/9200-class arrays over TCP/IP sockets and to switches over SNMP, and sends reports across the customer WAN to the Hi-Track FTP server in the HDS DMZ using FTP, HTTPS over the public Internet, or an optional dialup transport; the geographically located Hi-Track Center server retrieves the data (FTP Get) into its database, WWW server and Clarify case system on the HDS LAN]

Page 13-15
Troubleshooting Hitachi Unified Storage
Components

Page 13-16
Troubleshooting Hitachi Unified Storage
Summary Screen

Summary Screen

View accumulated tracking data by device and error type

Page 13-17
Troubleshooting Hitachi Unified Storage
Summary Screen

View detailed status of frame features and components

Page 13-18
Troubleshooting Hitachi Unified Storage
Troubleshooting

Troubleshooting

Opening a Case

 Call the Global Support Center to open a case and obtain a case ID
if one does not yet exist for the implementation service
 Upload Simple Trace data to open a support case
• https://tuf.hds.com
• Login info:
 User: Case ID
 Password: truenorth

Page 13-19
Troubleshooting Hitachi Unified Storage
Troubleshooting

TUF at HDS.COM

 Upload files to open cases at https://tuf.hds.com


 Learn how to collect other data

Technical Upload Facility (TUF)

Page 13-20
Troubleshooting Hitachi Unified Storage
Module Summary

Module Summary

 In this module, you should have learned:


• Troubleshooting steps
• Available troubleshooting tools

Page 13-21
Troubleshooting Hitachi Unified Storage
Module Review

Module Review

1. What indications from Hitachi Storage Navigator Modular 2 would


inform you of a problem in a storage array?
2. What are the 3 available types of analysis trace dumps?
3. Describe how SNMP interrogates equipment to publish and report status and errors.

Page 13-22
14. Using Constitute Files
Module Objectives

 Upon completion of this module, you should be able to:


• Explain the purpose and benefits of using Constitute functionality
• Export Constitute files for configuration and settings information by using
Hitachi Storage Navigator Modular 2

The information in this module is based on the following document:

System Parameter Manual — SYSPR 10-0000, Chapter 10. Setting Constitute Array

Page 14-1
Using Constitute Files
Overview

Overview

 Use Constitute files to:


• Back up logical configuration of the array
• Export a parts list from the array
• Quickly duplicate configuration to another array
• View or set configuration or clone the Hitachi Unified Storage system
 Constitute files can perform 2 operations:
• Get Configuration — View information
• Set Configuration — Import configuration information from an existing
constitute file

Caution: Use care when importing settings through Constitute as you


could overwrite your configuration and data.

Notes:

When using Set Configuration to set configuration information, all prior set configuration
information is overwritten

When using Set Configuration to set RAID Group or Logical Unit settings, or to clone the
storage system, all previously set configuration is overwritten and the data on the affected
RAID groups or LUNs is overwritten

Page 14-2
Using Constitute Files
Overview

 Constitute files contain:


• RAID groups, DP pools, volumes
• System parameters
• Ports information
• Boot options
• Parts information
• CHAP users
• LAN information

DP = Dynamic Provisioning

CHAP = Challenge-Handshake Authentication Protocol

Page 14-3
Using Constitute Files
Exporting and Importing Constitute Files

Exporting and Importing Constitute Files

1. Select the Array

2a. Click Settings

2b. Click Constitute Array

Log in to Hitachi Storage Navigator Modular 2.

1. Select the array

2. Select Settings > Constitute Array

3. Select the Option from Configuration Parameters

4. Select Get Configuration

Page 14-4
Using Constitute Files
Exporting and Importing Constitute Files

3. Select type of information to export

4. Select Get Configuration


or Set Configuration

Page 14-5
Using Constitute Files
Exporting a Configuration

Exporting a Configuration

Select any option


and click OK

This screen appears if RAID group, DP pool or logical unit was selected on the previous screen.

Page 14-6
Using Constitute Files
Defining RAID Group, DP Pool and LUN Information

Defining RAID Group, DP Pool and LUN Information

Click Save to
save the file

Page 14-7
Using Constitute Files
Viewing RAID Group, DP Pool and LUN Information

Viewing RAID Group, DP Pool and LUN Information

Page 14-8
Using Constitute Files
Viewing System Parameters

Viewing System Parameters

Page 14-9
Using Constitute Files
Configuring a Duplicate Storage System

Configuring a Duplicate Storage System

 Duplication of a storage system configuration to another like storage


system can be achieved using Constitute files
 Some information will not be imported to the target storage system:
• System serial number
• IP addresses
• WWNs of attached hosts
 Export Constitute files from the source storage system and import
the files to the target storage system

Take care when importing the configuration.

Page 14-10
Using Constitute Files
Instructor Demonstration

Instructor Demonstration

 Constitute Files
• Get Constitute files for ports
• View the Constitute file

Page 14-11
Using Constitute Files
Module Summary

Module Summary

 In this module, you should have learned:


• The purpose and benefits of using Constitute functionality
• How to export Constitute files for configuration and settings information
by using Hitachi Storage Navigator Modular 2

Page 14-12
Using Constitute Files
Module Review

Module Review

1. List the advantage of using Constitute files.


2. What resource information can be exported to Constitute files?

Page 14-13
Using Constitute Files
Your Next Steps

Your Next Steps

Validate your knowledge and skills with certification.

Check your progress in the Learning Path.

Review the course description for supplemental courses, or


register, enroll, and view additional course offerings.

Get practical advice and insight with HDS white papers.

Ask the Academy a question or give us feedback on this course


(employees only).

Join the conversation with your peers in the HDS Community.

Follow us on social media: @HDSAcademy

Certification: http://www.hds.com/services/education/certification

Learning Paths:
Customer Learning Path (North America, Latin America, and APAC):
http://www.hds.com/assets/pdf/hitachi-data-systems-academy-customer-learning-paths.pdf

Customer Learning Path (EMEA): http://www.hds.com/assets/pdf/hitachi-data-systems-


academy-customer-training.pdf

All Partners Learning Paths:


https://portal.hds.com/index.php?option=com_hdspartner&task=displayWebPage&menuName=
PX_PT_PARTNER_EDUCATION&WT.ac=px_rm_ptedu

Employee Learning Paths:


http://loop.hds.com/community/hds_academy

Learning Center: http://learningcenter.hds.com

White Papers: http://www.hds.com/corporate/resources/

Page 14-14
Using Constitute Files
Your Next Steps

For Partners and Employees – theLoop:


http://loop.hds.com/community/hds_academy/course_announcements_and_feedback_communi
ty

For Customers, Partners, Employees – Hitachi Data Systems Community:

https://community.hds.com/welcome

For Customers, Partners, Employees – Hitachi Data Systems Academy link to Twitter:

http://www.twitter.com/HDSAcademy

Page 14-15
Using Constitute Files
Your Next Steps

Page 14-16
15. DBX/DBW High Density Tray Installation
Module Objectives

 Upon completion of this module, you should be able to discuss rules,


safety considerations and steps for installing a high density DBX or
DBW expansion tray:
 DBX: 48 x 3.5"/SAS7K
 DBW: 84 x 3.5"/SAS7K

Page 15-1
DBX/DBW High Density Tray Installation
Overview

Overview

 Most of this presentation applies to both DBX and DBW; however:
• The screen shots shown are for the installation of a DBX tray
• The physical installation of a DBW is far less work than installing a DBX
 It is more like installing a normal, yet big, tray
 There are no rails
 There are no backend cable brackets (arms)
• For this reason, DBW installation is not shown separately
• Always refer to the latest appropriate installation procedures

Page 15-2
DBX/DBW High Density Tray Installation
Rules and Safety Considerations

Rules and Safety Considerations

Installation Rules and Tools

 To avoid shipping a frame that is top heavy, not all DBX/DBWs may be installed upon arrival at the customer's site.
 The installer may need to install additional DBX/DBWs at the customer site.
 All dense expansion trays must be placed at the lowest point in the rack to maintain a low center of gravity.
 Depending on the geography (GEO), the following help can be available for installing the trays in the rack:
• A Genie lift (for example, GL-8 with Load Platform) or equivalent
• Assistance provided by the transporting company

Due to the heavy weight of the trays, no attempt should be made to move the rack with more
than four DBX/DBWs installed.

Due to the fact that maintenance access is from the top, mounting a tray high in a rack may
make maintenance tasks difficult.

In the US, usually a Genie or compatible lift is used to install the DBX/DBW tray. HDS logistics
do not stock lifts nor ladders.

In Europe, the transporting company can provide (if ordered) the physical installation of the
high density trays into the rack.
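These placement and movement rules lend themselves to a simple planning check. The following is an illustrative sketch only (not an HDS tool): it encodes the two rules stated above for a planned rack layout. The function name, the layout format, and the non-dense tray name used in the example (DBL) are assumptions made for the illustration.

def check_dense_tray_plan(rack_units_bottom_up, rack_will_be_moved):
    """Check a planned layout against the two rules stated above.

    rack_units_bottom_up lists the tray type in each occupied position,
    starting at the bottom of the rack, for example ["DBW", "DBX", "DBL"].
    Returns a list of warnings; an empty list means the plan follows the rules.
    """
    dense = {"DBX", "DBW"}
    warnings = []

    # Rule: all dense expansion trays must sit at the lowest point in the rack.
    seen_standard_tray = False
    for tray in rack_units_bottom_up:
        if tray in dense and seen_standard_tray:
            warnings.append("A DBX/DBW is planned above a standard tray; "
                            "keep dense trays at the lowest positions.")
            break
        if tray not in dense:
            seen_standard_tray = True

    # Rule: do not move the rack with more than four DBX/DBWs installed.
    if rack_will_be_moved and sum(t in dense for t in rack_units_bottom_up) > 4:
        warnings.append("The rack would be moved with more than four "
                        "DBX/DBW trays installed.")
    return warnings

print(check_dense_tray_plan(["DBW", "DBX", "DBL", "DBX"], rack_will_be_moved=False))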

15-3
DBX/DBW High Density Tray Installation
Safety Ladder

Safety Ladder

 If a ladder is used, make sure it is a safety ladder.
 Always install trays from bottom to top to keep
a low center of gravity.

Page 15-4
DBX/DBW High Density Tray Installation
Genie Lift Assembly

Genie Lift Assembly

Unpack Genie Lift

Open end of box

Slide lift out of box

Lay lift horizontally for next steps

15-5
DBX/DBW High Density Tray Installation
Prepare Genie Lift - Attach Supporting Feet

Prepare Genie Lift - Attach Supporting Feet

Fit supporting feet with wheels into the
two slots in bottom frame of lift assembly

Page 15-6
DBX/DBW High Density Tray Installation
Prepare Genie Lift - Attach Lift Forks

Prepare Genie Lift - Attach Lift Forks

Slide the lift forks supporting the heavy
metal tray over the ends of the two support
tubes

Pins

Insert a retaining pin in the end
of each supporting tube

15-7
DBX/DBW High Density Tray Installation
Prepare Genie Lift - Attach Load Platform

Prepare Genie Lift - Attach Load Platform

 The load platform is an option on the Genie lift


 Follow instructions on label on load platform to attach it.
 Lay load platform on two load forks.
 Lift platform and push it until back edge is under lower fork mounting
tube.
 Rotate platform down until it locks in place.

Page 15-8
DBX/DBW High Density Tray Installation
Prepare Genie Lift - Unlock the Lift

Prepare Genie Lift - Unlock the Lift

Unlock lift by pulling out locking knob

Pull out knob to unlock

15-9
DBX/DBW High Density Tray Installation
Prepare Genie Lift - Configure Lift Handle

Prepare Genie Lift - Configure Lift Handle

To configure lift handle:

1. Pull out knob to unlock handle arm.
2. Remove the handle arm and turn it around so the handle faces outward.
3. Reattach the handle arm with the handle facing outward.

Page 15-10
DBX/DBW High Density Tray Installation
Unpacking

Unpacking

The carton on the wooden pallet includes both the actual DBX/DBW tray and
another box that stores all related hardware.
Unpack in three steps:
1. Leave the DBX/DBW unit on the pallet and lift up the carton from one
side to prevent any vacuum from impeding removal.
2. Remove the parts box (large arrow) from the top of the carton.
3. Lay out all parts in an area where you can inventory them and have
them handy as you proceed with installation.

15-11
DBX/DBW High Density Tray Installation
Rack and Rail Preparation

Rack and Rail Preparation

Prepare Rack Stabilizer

Mounting screws

Stabilizer bracket

Full size view of a 47U rack shows physical location of stabilizer foot at front of rack

Swing bracket under frame to align with screw holes, and then mount stabilizer with two screws

Note: The 47U rack is much taller than previous versions and requires a
stable step stool to install cables in the upper half.

Using a rack stabilizer is a very important safety feature for all racks; it prevents the rack from
falling over when a heavy tray is pulled out for a service task.

Page 15-12
DBX/DBW High Density Tray Installation
Install DBX Slide Brackets

Install DBX Slide Brackets

Install side slide brackets to mount DBX tray:


1. Note four screws in center of bracket.
2. Slightly loosen screws to allow bracket to be expanded for accurate fit to rack rails.
3. Retighten screws when brackets are in place.

“L” indicator for left rail

Loosen screws to adjust length

“R” indicator for right rail


Does not apply to DBW

Note: The brackets are labeled “L” for left and “R” for right side locations.

15-13
DBX/DBW High Density Tray Installation
Attach DBX Rails

Attach DBX Rails

 Four rack nuts are used on rack tracks for mounting DBX rails.
 Additional slide-in nut on front of each rail secures tray when in place.
 To access the rear of the rail during installation, use a long screwdriver.

Does not apply to DBW

Rail nuts

Third nut slides in to lock tray in place

Rear of rail access requires long screwdriver

Tighten these four screws on each rail when rail is in place

Page 15-14
DBX/DBW High Density Tray Installation
Installing DBX/DBW into Rack

Installing DBX/DBW into Rack

Install DBX/DBW

 To reduce the weight, remove all parts from the DBX/DBW tray
before lifting it into the rack

15-15
DBX/DBW High Density Tray Installation
DBX Fail Safe Lock

DBX Fail Safe Lock


 Front of DBX has key lock that acts as fail safe to prevent tray from
being released by unauthorized personnel.
 When key is turned, it extends two horizontal lock arms that cover tray
locking screws.
 Key must be turned to access screws when installing or removing tray
assembly.
Does not apply to DBW
Locked Unlocked

Page 15-16
DBX/DBW High Density Tray Installation
DBX Tray Releases

DBX Tray Releases

 When tray is placed in rails, there are two releases on tray sides:
• Rear release: Depressing the rear release allows you to push tray
back into rack.
• Front release: Depressing the front release allows you to pull tray
forward for maintenance or removal.

Rear release Does not apply to DBW Front release

15-17
DBX/DBW High Density Tray Installation
Mounting with Genie Lift

Mounting with Genie Lift

 When rails are installed in rack, use Genie Lift to place the tray.
 Turn Genie Lift crank handle clockwise to lift.
 To lower lift, turn crank handle about ¼ turn counter-clockwise to
unlock safety latch; then continue turning handle to lower lift.
Crank handle for lifting
To load the DBX/DBW tray onto the
Genie Lift, slide the tray off the
pallet and onto Genie Lift at right
angles to the long side of the tray.

Page 15-18
DBX/DBW High Density Tray Installation
Install DBX/DBW in the Rack

Install DBX/DBW in the Rack

WARNING – Make sure rack stabilizer is installed before installing the DBX/DBW.
 DBX rails are beveled to aid in sliding tray easily into rack mounted rails.
 Raise tray slightly above bottom of rack rails with Genie Lift before sliding
tray rearward into rack.
 When tray is seated in rails, push tray until a snapping sound is heard.
• Sound indicates rails are matched and locked securely to prevent the tray
from being pulled forward and falling.
Raise Genie Lift to height of rack mounted rails

Slide tray rearward into rails

15-19
DBX/DBW High Density Tray Installation
Cable Routing Brackets

Cable Routing Brackets

Install Cable Routing Brackets

 To mount swing arm assemblies,
install primary brackets that
connect to the swing arms.
 Left and right brackets slide in
place and are held stationary with
a spring tension button that snaps
to fit.
 When left and right primary
brackets are in place, connect left
and right pivotal brackets to them.
Note: These swing arms also
connect in place with a spring
tension button.

Does not apply to DBW

 Directly beneath mounting
brackets, insert a stress relief
panel by snapping it in place
with spring tension buttons.
 In far rear of swing brackets,
install final bracket cover by
snapping it in place with spring
tension buttons on each swing
arm.

Does not apply to DBW

Page 15-20
DBX/DBW High Density Tray Installation
Cable Installation

Cable Installation

Install DBX/DBW ENC and Power Cables

Remove the clear plastic cover

Pull ENC assembly from bottom of tray

Power

ENC

The procedure is essentially the same for DBX and DBW.

Pull ENC assembly out from tray

Insert ENC cable locking bar in place

Cable locking bar

Remove assembly cover

Replace assembly cover

Cable locking bar

15-21
DBX/DBW High Density Tray Installation
Install DBX/DBW ENC and Power Cables

 Label each cable with identical tags on both ends as each cable is
installed.
 Labeling tags come with the kit.

 Route power cables to the appropriate power distribution units for
separation between power sources.
 Routing depends on source and configuration.

Page 15-22
DBX/DBW High Density Tray Installation
Routing Channels for Power Cables

Routing Channels for Power Cables

Follow a similar routing plan for power cables as for the ENC cables.

Power and ENC cables route to opposite ends of the tray to the corresponding
brackets; place the larger ENC cables at the bottom of the channel.

Place velcro straps around the outside of cables and brackets
to secure the cables to the brackets.

Does not apply to DBW

15-23
DBX/DBW High Density Tray Installation
Module Summary

Module Summary

 In this module, you learned about rules, safety considerations and
steps for installing a high density DBX/DBW expansion tray

Page 15-24
DBX/DBW High Density Tray Installation
Your Next Steps

Your Next Steps

Validate your knowledge and skills with certification.

Check your progress in the Learning Path.

Review the course description for supplemental courses, or register, enroll, and view additional course offerings.

Get practical advice and insight with HDS white papers.

Ask the Academy a question or give us feedback on this course (employees only).

Join the conversation with your peers in the HDS Community.

Follow us on social media: @HDSAcademy

Certification: http://www.hds.com/services/education/certification

Learning Paths:

• Customer Learning Path (North America, Latin America, and APAC): http://www.hds.com/assets/pdf/hitachi-data-systems-academy-customer-learning-paths.pdf
• Customer Learning Path (EMEA): http://www.hds.com/assets/pdf/hitachi-data-systems-academy-customer-training.pdf
• All Partners Learning Paths: https://portal.hds.com/index.php?option=com_hdspartner&task=displayWebPage&menuName=PX_PT_PARTNER_EDUCATION&WT.ac=px_rm_ptedu
• Employee Learning Paths: http://loop.hds.com/community/hds_academy

Learning Center: http://learningcenter.hds.com

White Papers: http://www.hds.com/corporate/resources/

15-25
DBX/DBW High Density Tray Installation
Your Next Steps

For Partners and Employees – theLoop:
http://loop.hds.com/community/hds_academy/course_announcements_and_feedback_community

For Customers, Partners, Employees – Hitachi Data Systems Community:

https://community.hds.com/welcome

For Customers, Partners, Employees – Hitachi Data Systems Academy link to Twitter:

http://www.twitter.com/HDSAcademy

Page 15-26
Training Course Glossary
A B C D E F G H I J K L M N O P Q R S T U V W X Y Z

—A— AIX — IBM UNIX.


AaaS — Archive as a Service. A cloud computing AL — Arbitrated Loop. A network in which nodes
business model. contend to send data, and only 1 node at a
AAMux — Active-Active Multiplexer. time is able to send data.

ACC — Action Code. A SIM (System Information AL-PA — Arbitrated Loop Physical Address.
Message). AMS — Adaptable Modular Storage.
ACE — Access Control Entry. Stores access rights APAR — Authorized Program Analysis Reports.
for a single user or group within the APF — Authorized Program Facility. In IBM z/OS
Windows security model. and OS/390 environments, a facility that
ACL — Access Control List. Stores a set of ACEs, permits the identification of programs that
so that it describes the complete set of access are authorized to use restricted functions.
rights for a file system object within the API — Application Programming Interface.
Microsoft Windows security model.
APID — Application Identification. An ID to
ACP ― Array Control Processor. Microprocessor identify a command device.
mounted on the disk adapter circuit board
(DKA) that controls the drives in a specific Application Management — The processes that
disk array. Considered part of the back end; manage the capacity and performance of
it controls data transfer between cache and applications.
the hard drives. ARB — Arbitration or request.
ACP Domain ― Also Array Domain. All of the ARM — Automated Restart Manager.
array-groups controlled by the same pair of Array Domain — Also ACP Domain. All
DKA boards, or the HDDs managed by 1 functions, paths, and disk drives controlled
ACP PAIR (also called BED). by a single ACP pair. An array domain can
ACP PAIR ― Physical disk access control logic. contain a variety of LVI or LU
Each ACP consists of 2 DKA PCBs to configurations.
provide 8 loop paths to the real HDDs. Array Group — Also called a parity group. A
Actuator (arm) — Read/write heads are attached group of hard disk drives (HDDs) that form
to a single head actuator, or actuator arm, the basic unit of storage in a subsystem. All
that moves the heads around the platters. HDDs in a parity group must have the same
AD — Active Directory. physical capacity.

ADC — Accelerated Data Copy. Array Unit — A group of hard disk drives in 1
RAID structure. Same as parity group.
Address — A location of data, usually in main
memory or on a disk. A name or token that ASIC — Application specific integrated circuit.
identifies a network component. In local area ASSY — Assembly.
networks (LANs), for example, every node Asymmetric virtualization — See Out-of-band
has a unique address. virtualization.
ADP — Adapter. Asynchronous — An I/O operation whose
ADS — Active Directory Service. initiator does not await its completion before

HDS Confidential: For distribution only to authorized parties. Page G-1


proceeding with other work. Asynchronous or Yottabyte (YB). Note that variations of
I/O operations enable an initiator to have this term are subject to proprietary
multiple concurrent I/O operations in trademark disputes in multiple countries at
progress. Also called Out-of-band the present time.
virtualization. BIOS — Basic Input/Output System. A chip
ATA —Advanced Technology Attachment. A disk located on all computer motherboards that
drive implementation that integrates the governs how a system boots and operates.
controller on the disk drive itself. Also BLKSIZE — Block size.
known as IDE (Integrated Drive Electronics)
Advanced Technology Attachment. BLOB — Binary Large OBject.

ATR — Autonomic Technology Refresh. BP — Business processing.

Authentication — The process of identifying an BPaaS —Business Process as a Service. A cloud


individual, usually based on a username and computing business model.
password. BPAM — Basic Partitioned Access Method.
AUX — Auxiliary Storage Manager. BPM — Business Process Management.
Availability — Consistent direct access to BPO — Business Process Outsourcing. Dynamic
information over time. BPO services refer to the management of
-back to top- partly standardized business processes,
including human resources delivered in a
—B— pay-per-use billing relationship or a self-
B4 — A group of 4 HDU boxes that are used to service consumption model.
contain 128 HDDs. BST — Binary Search Tree.
BA — Business analyst. BSTP — Blade Server Test Program.
Back end — In client/server applications, the BTU — British Thermal Unit.
client part of the program is often called the
Business Continuity Plan — Describes how an
front end and the server part is called the
organization will resume partially or
back end.
completely interrupted critical functions
Backup image—Data saved during an archive within a predetermined time after a
operation. It includes all the associated files, disruption or a disaster. Sometimes also
directories, and catalog information of the called a Disaster Recovery Plan.
backup operation.
-back to top-
BADM — Basic Direct Access Method.
BASM — Basic Sequential Access Method.
—C—
CA — (1) Continuous Access software (see
BATCTR — Battery Control PCB.
HORC), (2) Continuous Availability or (3)
BC — (1) Business Class (in contrast with EC, Computer Associates.
Enterprise Class). (2) Business coordinator.
Cache — Cache Memory. Intermediate buffer
BCP — Base Control Program. between the channels and drives. It is
BCPii — Base Control Program internal interface. generally available and controlled as two
BDW — Block Descriptor Word. areas of cache (cache A and cache B). It may
be battery-backed.
BED — Back end director. Controls the paths to
the HDDs. Cache hit rate — When data is found in the cache,
it is called a cache hit, and the effectiveness
Big Data — Refers to data that becomes so large in of a cache is judged by its hit rate.
size or quantity that a dataset becomes
awkward to work with using traditional Cache partitioning — Storage management
database management systems. Big data software that allows the virtual partitioning
entails data capacity or measurement that of cache and allocation of it to different
requires terms such as Terabyte (TB), applications.
Petabyte (PB), Exabyte (EB), Zettabyte (ZB) CAD — Computer-Aided Design.

Page G-2 HDS Confidential: For distribution only to authorized parties.


CAGR — Compound Annual Growth Rate. Centralized management — Storage data
Capacity — Capacity is the amount of data that a management, capacity management, access
storage system or drive can store after security management, and path
configuration and/or formatting. management functions accomplished by
software.
Most data storage companies, including HDS,
calculate capacity based on the premise that CF — Coupling Facility.
1KB = 1,024 bytes, 1MB = 1,024 kilobytes, CFCC — Coupling Facility Control Code.
1GB = 1,024 megabytes, and 1TB = 1,024 CFW — Cache Fast Write.
gigabytes. See also Terabyte (TB), Petabyte
(PB), Exabyte (EB), Zettabyte (ZB) and CH — Channel.
Yottabyte (YB). CH S — Channel SCSI.
CAPEX — Capital expenditure — the cost of CHA — Channel Adapter. Provides the channel
developing or providing non-consumable interface control functions and internal cache
parts for the product or system. For example, data transfer functions. It is used to convert
the purchase of a photocopier is the CAPEX, the data format between CKD and FBA. The
and the annual paper and toner cost is the CHA contains an internal processor and 128
OPEX. (See OPEX). bytes of edit buffer memory. Replaced by
CAS — (1) Column Address Strobe. A signal sent CHB in some cases.
to a dynamic random access memory CHA/DKA — Channel Adapter/Disk Adapter.
(DRAM) that tells it that an associated CHAP — Challenge-Handshake Authentication
address is a column address. CAS-column Protocol.
address strobe sent by the processor to a
CHB — Channel Board. Updated DKA for Hitachi
DRAM circuit to activate a column address.
Unified Storage VM and additional
(2) Content-addressable Storage.
enterprise components.
CBI — Cloud-based Integration. Provisioning of a
Chargeback — A cloud computing term that refers
standardized middleware platform in the
to the ability to report on capacity and
cloud that can be used for various cloud
utilization by application or dataset,
integration scenarios.
charging business users or departments
An example would be the integration of based on how much they use.
legacy applications into the cloud or CHF — Channel Fibre.
integration of different cloud-based
applications into one application. CHIP — Client-Host Interface Processor.
Microprocessors on the CHA boards that
CBU — Capacity Backup. process the channel commands from the
CBX —Controller chassis (box). hosts and manage host access to cache.
CCHH — Common designation for Cylinder and CHK — Check.
Head. CHN — Channel adapter NAS.
CCI — Command Control Interface. CHP — Channel Processor or Channel Path.
CCIF — Cloud Computing Interoperability CHPID — Channel Path Identifier.
Forum. A standards organization active in CHSN or C-HSN— Cache Memory Hierarchical
cloud computing. Star Network.
CDP — Continuous Data Protection. CHT — Channel tachyon. A Fibre Channel
CDR — Clinical Data Repository protocol controller.
CDWP — Cumulative disk write throughput. CICS — Customer Information Control System.
CE — Customer Engineer. CIFS protocol — Common internet file system is a
platform-independent file sharing system. A
CEC — Central Electronics Complex. network file system accesses protocol
CentOS — Community Enterprise Operating primarily used by Windows clients to
System. communicate file access requests to
Windows servers.

HDS Confidential: For distribution only to authorized parties. Page G-3


CIM — Common Information Model. • Data discoverability
CIS — Clinical Information System. • Data mobility
CKD ― Count-key Data. A format for encoding • Data protection
data on hard disk drives; typically used in • Dynamic provisioning
the mainframe environment.
• Location independence
CKPT — Check Point.
• Multitenancy to ensure secure privacy
CL — See Cluster.
• Virtualization
CLI — Command Line Interface.
Cloud Fundamental —A core requirement to the
CLPR — Cache Logical Partition. Cache can be deployment of cloud computing. Cloud
divided into multiple virtual cache fundamentals include:
memories to lessen I/O contention.
• Self service
Cloud Computing — “Cloud computing refers to
• Pay per use
applications and services that run on a
distributed network using virtualized • Dynamic scale up and scale down
resources and accessed by common Internet Cloud Security Alliance — A standards
protocols and networking standards. It is organization active in cloud computing.
distinguished by the notion that resources are
CLPR — Cache Logical Partition.
virtual and limitless, and that details of the
physical systems on which software runs are Cluster — A collection of computers that are
abstracted from the user.” — Source: Cloud interconnected (typically at high-speeds) for
Computing Bible, Barrie Sosinsky (2011) the purpose of improving reliability,
availability, serviceability or performance
Cloud computing often entails an “as a
(via load balancing). Often, clustered
service” business model that may entail one
computers have access to a common pool of
or more of the following:
storage and run special software to
• Archive as a Service (AaaS) coordinate the component computers'
• Business Process as a Service (BPaas) activities.
• Failure as a Service (FaaS) CM ― Cache Memory, Cache Memory Module.
• Infrastructure as a Service (IaaS) Intermediate buffer between the channels
and drives. It has a maximum of 64GB (32GB
• IT as a Service (ITaaS)
x 2 areas) of capacity. It is available and
• Platform as a Service (PaaS) controlled as 2 areas of cache (cache A and
• Private File Tiering as a Service (PFTaas) cache B). It is fully battery-backed (48 hours).
• Software as a Service (Saas) CM DIR — Cache Memory Directory.
• SharePoint as a Service (SPaas) CME — Communications Media and
• SPI refers to the Software, Platform and Entertainment.
Infrastructure as a Service business model. CM-HSN — Control Memory Hierarchical Star
Cloud network types include the following: Network.
• Community cloud (or community CM PATH ― Cache Memory Access Path. Access
network cloud) Path from the processors of CHA, DKA PCB
to Cache Memory.
• Hybrid cloud (or hybrid network cloud)
CM PK — Cache Memory Package.
• Private cloud (or private network cloud)
• Public cloud (or public network cloud) CM/SM — Cache Memory/Shared Memory.

• Virtual private cloud (or virtual private CMA — Cache Memory Adapter.
network cloud) CMD — Command.
Cloud Enabler —a concept, product or solution CMG — Cache Memory Group.
that enables the deployment of cloud CNAME — Canonical NAME.
computing. Key cloud enablers include:

Page G-4 HDS Confidential: For distribution only to authorized parties.


CNS — Cluster Name Space or Clustered Name CSTOR — Central Storage or Processor Main
Space. Memory.
CNT — Cumulative network throughput. C-Suite — The C-suite is considered the most
CoD — Capacity on Demand. important and influential group of
individuals at a company. Referred to as
Community Network Cloud — Infrastructure “the C-Suite within a Healthcare provider.”
shared between several organizations or
CSV — Comma Separated Value or Cluster Shared
groups with common concerns.
Volume.
Concatenation — A logical joining of 2 series of
CSVP — Customer-specific Value Proposition.
data, usually represented by the symbol “|”.
In data communications, 2 or more data are CSW ― Cache Switch PCB. The cache switch
often concatenated to provide a unique (CSW) connects the channel adapter or disk
name or reference (e.g., S_ID | X_ID). adapter to the cache. Each of them is
Volume managers concatenate disk address connected to the cache by the Cache Memory
spaces to present a single larger address Hierarchical Star Net (C-HSN) method. Each
space. cluster is provided with the 2 CSWs, and
Connectivity technology — A program or device's each CSW can connect 4 caches. The CSW
ability to link with other programs and switches any of the cache paths to which the
devices. Connectivity technology allows channel adapter or disk adapter is to be
programs on a given computer to run connected through arbitration.
routines or access objects on another remote CTG — Consistency Group.
computer. CTL — Controller module.
Controller — A device that controls the transfer of CTN — Coordinated Timing Network.
data from a computer to a peripheral device
(including a storage system) and vice versa. CU — Control Unit (refers to a storage subsystem.
The hexadecimal number to which 256
Controller-based virtualization — Driven by the
LDEVs may be assigned).
physical controller at the hardware
microcode level versus at the application CUDG — Control Unit Diagnostics. Internal
software layer and integrates into the system tests.
infrastructure to allow virtualization across CUoD — Capacity Upgrade on Demand.
heterogeneous storage and third party
CV — Custom Volume.
products.
CVS ― Customizable Volume Size. Software used
Corporate governance — Organizational
to create custom volume sizes. Marketed
compliance with government-mandated
under the name Virtual LVI (VLVI) and
regulations.
Virtual LUN (VLUN).
CP — Central Processor (also called Processing
Unit or PU). CWDM — Course Wavelength Division
Multiplexing.
CPC — Central Processor Complex.
CXRC — Coupled z/OS Global Mirror.
CPM — Cache Partition Manager. Allows for
-back to top-
partitioning of the cache and assigns a
partition to a LU; this enables tuning of the —D—
system’s performance. DA — Device Adapter.
CPOE — Computerized Physician Order Entry
DACL — Discretionary access control list (ACL).
(Provider Ordered Entry).
The part of a security descriptor that stores
CPS — Cache Port Slave. access rights for users and groups.
CPU — Central Processing Unit. DAD — Device Address Domain. Indicates a site
CRM — Customer Relationship Management. of the same device number automation
CSS — Channel Subsystem. support function. If several hosts on the
same site have the same device number
CS&S — Customer Service and Support.
system, they have the same name.

HDS Confidential: For distribution only to authorized parties. Page G-5


DAP — Data Access Path. Also known as Zero virtual disk data addresses are mapped to
Copy Failover (ZCF). sequences of member disk addresses in a
DAS — Direct Attached Storage. regular rotating pattern.

DASD — Direct Access Storage Device. Data Transfer Rate (DTR) — The speed at which
data can be transferred. Measured in
Data block — A fixed-size unit of data that is kilobytes per second for a CD-ROM drive, in
transferred together. For example, the bits per second for a modem, and in
X-modem protocol transfers blocks of 128 megabytes per second for a hard drive. Also,
bytes. In general, the larger the block size, often called data rate.
the faster the data transfer rate.
DBL — Drive box.
Data Duplication — Software duplicates data, as
in remote copy or PiT snapshots. Maintains 2 DBMS — Data Base Management System.
copies of data. DBX — Drive box.
Data Integrity — Assurance that information will DCA ― Data Cache Adapter.
be protected from modification and DCTL — Direct coupled transistor logic.
corruption.
DDL — Database Definition Language.
Data Lifecycle Management — An approach to
information and storage management. The DDM — Disk Drive Module.
policies, processes, practices, services and DDNS — Dynamic DNS.
tools used to align the business value of data DDR3 — Double data rate 3.
with the most appropriate and cost-effective
storage infrastructure from the time data is DE — Data Exchange Software.
created through its final disposition. Data is Device Management — Processes that configure
aligned with business requirements through and manage storage systems.
management policies and service levels DFS — Microsoft Distributed File System.
associated with performance, availability,
DFSMS — Data Facility Storage Management
recoverability, cost, and what ever
Subsystem.
parameters the organization defines as
critical to its operations. DFSM SDM — Data Facility Storage Management
Subsystem System Data Mover.
Data Migration — The process of moving data
from 1 storage device to another. In this DFSMSdfp — Data Facility Storage Management
context, data migration is the same as Subsystem Data Facility Product.
Hierarchical Storage Management (HSM). DFSMSdss — Data Facility Storage Management
Data Pipe or Data Stream — The connection set up Subsystem Data Set Services.
between the MediaAgent, source or DFSMShsm — Data Facility Storage Management
destination server is called a Data Pipe or Subsystem Hierarchical Storage Manager.
more commonly a Data Stream.
DFSMSrmm — Data Facility Storage Management
Data Pool — A volume containing differential Subsystem Removable Media Manager.
data only.
DFSMStvs — Data Facility Storage Management
Data Protection Directive — A major compliance Subsystem Transactional VSAM Services.
and privacy protection initiative within the
DFW — DASD Fast Write.
European Union (EU) that applies to cloud
computing. Includes the Safe Harbor DICOM — Digital Imaging and Communications
Agreement. in Medicine.
Data Stream — CommVault’s patented high DIMM — Dual In-line Memory Module.
performance data mover used to move data Direct Access Storage Device (DASD) — A type of
back and forth between a data source and a storage device, in which bits of data are
MediaAgent or between 2 MediaAgents. stored at precise locations, enabling the
Data Striping — Disk array data mapping computer to retrieve information directly
technique in which fixed-length sequences of without having to scan a series of records.

Page G-6 HDS Confidential: For distribution only to authorized parties.


Direct Attached Storage (DAS) — Storage that is DKU — Disk Array Frame or Disk Unit. In a
directly attached to the application or file multi-frame configuration, a frame that
server. No other device on the network can contains hard disk units (HDUs).
access the stored data. DKUPS — Disk Unit Power Supply.
Director class switches — Larger switches often DLIBs — Distribution Libraries.
used as the core of large switched fabrics.
DKUP — Disk Unit Power Supply.
Disaster Recovery Plan (DRP) — A plan that
describes how an organization will deal with DLM — Data Lifecycle Management.
potential disasters. It may include the DMA — Direct Memory Access.
precautions taken to either maintain or DM-LU — Differential Management Logical Unit.
quickly resume mission-critical functions. DM-LU is used for saving management
Sometimes also referred to as a Business information of the copy functions in the
Continuity Plan. cache.
Disk Administrator — An administrative tool that DMP — Disk Master Program.
displays the actual LU storage configuration.
DMT — Dynamic Mapping Table.
Disk Array — A linked group of 1 or more
physical independent hard disk drives DMTF — Distributed Management Task Force. A
generally used to replace larger, single disk standards organization active in cloud
drive systems. The most common disk computing.
arrays are in daisy chain configuration or DNS — Domain Name System.
implement RAID (Redundant Array of DOC — Deal Operations Center.
Independent Disks) technology.
Domain — A number of related storage array
A disk array may contain several disk drive
groups.
trays, and is structured to improve speed
and increase protection against loss of data. DOO — Degraded Operations Objective.
Disk arrays organize their data storage into DP — Dynamic Provisioning (pool).
Logical Units (LUs), which appear as linear
DP-VOL — Dynamic Provisioning Virtual Volume.
block paces to their clients. A small disk
array, with a few disks, might support up to DPL — (1) (Dynamic) Data Protection Level or (2)
8 LUs; a large one, with hundreds of disk Denied Persons List.
drives, can support thousands. DR — Disaster Recovery.
DKA ― Disk Adapter. Also called an array control DRAC — Dell Remote Access Controller.
processor (ACP). It provides the control
DRAM — Dynamic random access memory.
functions for data transfer between drives
and cache. The DKA contains DRR (Data DRP — Disaster Recovery Plan.
Recover and Reconstruct), a parity generator DRR — Data Recover and Reconstruct. Data Parity
circuit. Replaced by DKB in some cases. Generator chip on DKA.
DKB — Disk Board. Updated DKA for Hitachi DRV — Dynamic Reallocation Volume.
Unified Storage VM and additional
DSB — Dynamic Super Block.
enterprise components.
DSF — Device Support Facility.
DKC ― Disk Controller Unit. In a multi-frame
configuration, the frame that contains the DSF INIT — Device Support Facility Initialization
front end (control and memory (for DASD).
components). DSP — Disk Slave Program.
DKCMN ― Disk Controller Monitor. Monitors DT — Disaster tolerance.
temperature and power status throughout DTA —Data adapter and path to cache-switches.
the machine.
DTR — Data Transfer Rate.
DKF ― Fibre disk adapter. Another term for a
DVE — Dynamic Volume Expansion.
DKA.
DW — Duplex Write.

HDS Confidential: For distribution only to authorized parties. Page G-7


DWDM — Dense Wavelength Division ERP — Enterprise Resource Planning.
Multiplexing. ESA — Enterprise Systems Architecture.
DWL — Duplex Write Line or Dynamic ESB — Enterprise Service Bus.
Workspace Linking.
ESC — Error Source Code.
-back to top-
ESD — Enterprise Systems Division (of Hitachi)
—E— ESCD — ESCON Director.
EAL — Evaluation Assurance Level (EAL1 ESCON ― Enterprise Systems Connection. An
through EAL7). The EAL of an IT product or input/output (I/O) interface for mainframe
system is a numerical security grade computer connections to storage devices
assigned following the completion of a developed by IBM.
Common Criteria security evaluation, an ESD — Enterprise Systems Division.
international standard in effect since 1999.
ESDS — Entry Sequence Data Set.
EAV — Extended Address Volume.
ESS — Enterprise Storage Server.
EB — Exabyte.
ESW — Express Switch or E Switch. Also referred
EC — Enterprise Class (in contrast with BC, to as the Grid Switch (GSW).
Business Class).
Ethernet — A local area network (LAN)
ECC — Error Checking and Correction. architecture that supports clients and servers
ECC.DDR SDRAM — Error Correction Code and uses twisted pair cables for connectivity.
Double Data Rate Synchronous Dynamic ETR — External Time Reference (device).
RAM Memory.
EVS — Enterprise Virtual Server.
ECM — Extended Control Memory. Exabyte (EB) — A measurement of data or data
ECN — Engineering Change Notice. storage. 1EB = 1,024PB.
E-COPY — Serverless or LAN free backup. EXCP — Execute Channel Program.
EFI — Extensible Firmware Interface. EFI is a ExSA — Extended Serial Adapter.
specification that defines a software interface -back to top-
between an operating system and platform
firmware. EFI runs on top of BIOS when a —F—
LPAR is activated. FaaS — Failure as a Service. A proposed business
EHR — Electronic Health Record. model for cloud computing in which large-
EIG — Enterprise Information Governance. scale, online failure drills are provided as a
service in order to test real cloud
EMIF — ESCON Multiple Image Facility.
deployments. Concept developed by the
EMPI — Electronic Master Patient Identifier. Also College of Engineering at the University of
known as MPI. California, Berkeley in 2011.
Emulation — In the context of Hitachi Data Fabric — The hardware that connects
Systems enterprise storage, emulation is the workstations and servers to storage devices
logical partitioning of an Array Group into in a SAN is referred to as a "fabric." The SAN
logical devices. fabric enables any-server-to-any-storage
EMR — Electronic Medical Record. device connectivity through the use of Fibre
Channel switching technology.
ENC — Enclosure or Enclosure Controller. The
units that connect the controllers with the Failback — The restoration of a failed system
Fibre Channel disks. They also allow for share of a load to a replacement component.
online extending a system by adding RKAs. For example, when a failed controller in a
redundant configuration is replaced, the
EOF — End of Field.
devices that were originally controlled by
EOL — End of Life. the failed controller are usually failed back
EPO — Emergency Power Off. to the replacement controller to restore the
EREP — Error REPorting and Printing. I/O balance, and to restore failure tolerance.

Page G-8 HDS Confidential: For distribution only to authorized parties.


Similarly, when a defective fan or power transmitting data between computer devices;
supply is replaced, its load, previously borne a set of standards for a serial I/O bus
by a redundant component, can be failed capable of transferring data between 2 ports.
back to the replacement part. FC RKAJ — Fibre Channel Rack Additional.
Failed over — A mode of operation for failure- Module system acronym refers to an
tolerant systems in which a component has additional rack unit that houses additional
failed and its function has been assumed by hard drives exceeding the capacity of the
a redundant component. A system that core RK unit.
protects against single failures operating in FC-0 ― Lowest layer on fibre channel transport.
failed over mode is not failure tolerant, as This layer represents the physical media.
failure of the redundant component may FC-1 ― This layer contains the 8b/10b encoding
render the system unable to function. Some scheme.
systems (e.g., clusters) are able to tolerate
FC-2 ― This layer handles framing and protocol,
more than 1 failure; these remain failure
frame format, sequence/exchange
tolerant until no redundant component is
management and ordered set usage.
available to protect against further failures.
FC-3 ― This layer contains common services used
Failover — A backup operation that automatically
by multiple N_Ports in a node.
switches to a standby database server or
network if the primary system fails, or is FC-4 ― This layer handles standards and profiles
temporarily shut down for servicing. Failover for mapping upper level protocols like SCSI
is an important fault tolerance function of an IP onto the Fibre Channel Protocol.
mission-critical systems that rely on constant FCA ― Fibre Adapter. Fibre interface card.
accessibility. Also called path failover. Controls transmission of fibre packets.
Failure tolerance — The ability of a system to FC-AL — Fibre Channel Arbitrated Loop. A serial
continue to perform its function or at a data transfer architecture developed by a
reduced performance level, when 1 or more consortium of computer and mass storage
of its components has failed. Failure device manufacturers, and is now being
tolerance in disk subsystems is often standardized by ANSI. FC-AL was designed
achieved by including redundant instances for new mass storage devices and other
of components whose failure would make peripheral devices that require very high
the system inoperable, coupled with facilities bandwidth. Using optical fiber to connect
that allow the redundant components to devices, FC-AL supports full-duplex data
assume the function of failed ones. transfer rates of 100MBps. FC-AL is
compatible with SCSI for high-performance
FAIS — Fabric Application Interface Standard.
storage systems.
FAL — File Access Library.
FCC — Federal Communications Commission.
FAT — File Allocation Table. FCIP — Fibre Channel over IP, a network storage
Fault Tolerant — Describes a computer system or technology that combines the features of
component designed so that, in the event of a Fibre Channel and the Internet Protocol (IP)
component failure, a backup component or to connect distributed SANs over large
procedure can immediately take its place with distances. FCIP is considered a tunneling
no loss of service. Fault tolerance can be protocol, as it makes a transparent point-to-
provided with software, embedded in point connection between geographically
hardware or provided by hybrid combination. separated SANs over IP networks. FCIP
FBA — Fixed-block Architecture. Physical disk relies on TCP/IP services to establish
sector mapping. connectivity between remote SANs over
FBA/CKD Conversion — The process of LANs, MANs, or WANs. An advantage of
converting open-system data in FBA format FCIP is that it can use TCP/IP as the
to mainframe data in CKD format. transport while keeping Fibre Channel fabric
services intact.
FBUS — Fast I/O Bus.
FC ― Fibre Channel or Field-Change (microcode
update) or Fibre Channel. A technology for

HDS Confidential: For distribution only to authorized parties. Page G-9


FCoE – Fibre Channel over Ethernet. An FPGA — Field Programmable Gate Array.
encapsulation of Fibre Channel frames over Frames — An ordered vector of words that is the
Ethernet networks. basic unit of data transmission in a Fibre
FCP — Fibre Channel Protocol. Channel network.
FC-P2P — Fibre Channel Point-to-Point. Front end — In client/server applications, the
FCSE — Flashcopy Space Efficiency. client part of the program is often called the
FC-SW — Fibre Channel Switched. front end and the server part is called the
FCU— File Conversion Utility. back end.
FD — Floppy Disk or Floppy Drive. FRU — Field Replaceable Unit.
FDDI — Fiber Distributed Data Interface. FS — File System.
FDR — Fast Dump/Restore. FSA — File System Module-A.
FE — Field Engineer. FSB — File System Module-B.
FED — (Channel) Front End Director.
FSI — Financial Services Industries.
Fibre Channel — A serial data transfer
FSM — File System Module.
architecture developed by a consortium of
computer and mass storage device FSW ― Fibre Channel Interface Switch PCB. A
manufacturers and now being standardized board that provides the physical interface
by ANSI. The most prominent Fibre Channel (cable connectors) between the ACP ports
standard is Fibre Channel Arbitrated Loop and the disks housed in a given disk drive.
(FC-AL). FTP ― File Transfer Protocol. A client-server
FICON — Fiber Connectivity. A high-speed protocol that allows a user on 1 computer to
input/output (I/O) interface for mainframe transfer files to and from another computer
computer connections to storage devices. As over a TCP/IP network.
part of IBM's S/390 server, FICON channels FWD — Fast Write Differential.
increase I/O capacity through the
-back to top-
combination of a new architecture and faster
physical link rates to make them up to 8 —G—
times as efficient as ESCON (Enterprise GA — General availability.
System Connection), IBM's previous fiber
GARD — General Available Restricted
optic channel standard.
Distribution.
FIPP — Fair Information Practice Principles.
Guidelines for the collection and use of Gb — Gigabit.
personal information created by the United GB — Gigabyte.
States Federal Trade Commission (FTC). Gb/sec — Gigabit per second.
FISMA — Federal Information Security
GB/sec — Gigabyte per second.
Management Act of 2002. A major
compliance and privacy protection law that GbE — Gigabit Ethernet.
applies to information systems and cloud Gbps — Gigabit per second.
computing. Enacted in the United States of
GBps — Gigabyte per second.
America in 2002.
GBIC — Gigabit Interface Converter.
FLGFAN ― Front Logic Box Fan Assembly.
GCMI — Global Competitive and Marketing
FLOGIC Box ― Front Logic Box.
Intelligence (Hitachi).
FM — Flash Memory. Each microprocessor has
GDG — Generation Data Group.
FM. FM is non-volatile memory that contains
microcode. GDPS — Geographically Dispersed Parallel
Sysplex.
FOP — Fibre Optic Processor or fibre open.
GID — Group Identifier within the UNIX security
FQDN — Fully Qualified Domain Name.
model.
FPC — Failure Parts Code or Fibre Channel
gigE — Gigabit Ethernet.
Protocol Chip.

Page G-10 HDS Confidential: For distribution only to authorized parties.


GLM — Gigabyte Link Module. HDD ― Hard Disk Drive. A spindle of hard disk
Global Cache — Cache memory is used on demand platters that make up a hard drive, which is
by multiple applications. Use changes a unit of physical storage within a
dynamically, as required for READ subsystem.
performance between hosts/applications/LUs. HDDPWR — Hard Disk Drive Power.
GPFS — General Parallel File System. HDU ― Hard Disk Unit. A number of hard drives
GSC — Global Support Center. (HDDs) grouped together within a
subsystem.
GSI — Global Systems Integrator.
Head — See read/write head.
GSS — Global Solution Services.
Heterogeneous — The characteristic of containing
GSSD — Global Solutions Strategy and dissimilar elements. A common use of this
Development. word in information technology is to
GSW — Grid Switch Adapter. Also known as E describe a product as able to contain or be
Switch (Express Switch). part of a “heterogeneous network,"
GUI — Graphical User Interface. consisting of different manufacturers'
products that can interoperate.
GUID — Globally Unique Identifier.
Heterogeneous networks are made possible by
-back to top-
standards-conforming hardware and
—H— software interfaces used in common by
H1F — Essentially the floor-mounted disk rack different products, thus allowing them to
(also called desk side) equivalent of the RK. communicate with each other. The Internet
(See also: RK, RKA, and H2F). itself is an example of a heterogeneous
network.
H2F — Essentially the floor-mounted disk rack
(also called desk side) add-on equivalent HiCAM — Hitachi Computer Products America.
similar to the RKA. There is a limitation of HIPAA — Health Insurance Portability and
only 1 H2F that can be added to the core RK Accountability Act.
Floor Mounted unit. See also: RK, RKA, and HIS — (1) High Speed Interconnect. (2) Hospital
H1F. Information System (clinical and financial).
HA — High Availability. HiStar — Multiple point-to-point data paths to
Hadoop — Apache Hadoop is an open-source cache.
software framework for data storage and HL7 — Health Level 7.
large-scale processing of data-sets on
clusters of hardware. HLQ — High-level Qualifier.

HANA — High Performance Analytic Appliance, HLS — Healthcare and Life Sciences.
a database appliance technology proprietary HLU — Host Logical Unit.
to SAP. H-LUN — Host Logical Unit Number. See LUN.
HBA — Host Bus Adapter — An I/O adapter that HMC — Hardware Management Console.
sits between the host computer's bus and the
Fibre Channel loop and manages the transfer Homogeneous — Of the same or similar kind.
of information between the 2 channels. In Host — Also called a server. Basically a central
order to minimize the impact on host computer that processes end-user
processor performance, the host bus adapter applications or requests.
performs many low-level interface functions Host LU — Host Logical Unit. See also HLU.
automatically or with minimal processor
Host Storage Domains — Allows host pooling at
involvement.
the LUN level and the priority access feature
HCA — Host Channel Adapter. lets administrator set service levels for
HCD — Hardware Configuration Definition. applications.
HD — Hard Disk. HP — (1) Hewlett-Packard Company or (2) High
HDA — Head Disk Assembly. Performance.

HDS Confidential: For distribution only to authorized parties. Page G-11


HPC — High Performance Computing. —I—
HSA — Hardware System Area. I/F — Interface.
HSG — Host Security Group.
I/O — Input/Output. Term used to describe any
HSM — Hierarchical Storage Management (see program, operation, or device that transfers
Data Migrator). data to or from a computer and to or from a
HSN — Hierarchical Star Network. peripheral device.
HSSDC — High Speed Serial Data Connector. IaaS —Infrastructure as a Service. A cloud
HTTP — Hyper Text Transfer Protocol. computing business model — delivering
computer infrastructure, typically a platform
HTTPS — Hyper Text Transfer Protocol Secure.
virtualization environment, as a service,
Hub — A common connection point for devices in along with raw (block) storage and
a network. Hubs are commonly used to networking. Rather than purchasing servers,
connect segments of a LAN. A hub contains software, data center space or network
multiple ports. When a packet arrives at 1 equipment, clients buy those resources as a
port, it is copied to the other ports so that all fully outsourced service. Providers typically
segments of the LAN can see all packets. A bill such services on a utility computing
switching hub actually reads the destination basis; the amount of resources consumed
address of each packet and then forwards (and therefore the cost) will typically reflect
the packet to the correct port. Device to the level of activity.
which nodes on a multi-point bus or loop are
physically connected. IDE — Integrated Drive Electronics Advanced
Technology. A standard designed to connect
Hybrid Cloud — “Hybrid cloud computing refers
hard and removable disk drives.
to the combination of external public cloud
computing services and internal resources IDN — Integrated Delivery Network.
(either a private cloud or traditional iFCP — Internet Fibre Channel Protocol.
infrastructure, operations and applications) Index Cache — Provides quick access to indexed
in a coordinated fashion to assemble a data on the media during a browse\restore
particular solution.” — Source: Gartner operation.
Research.
IBR — Incremental Block-level Replication or
Hybrid Network Cloud — A composition of 2 or
Intelligent Block Replication.
more clouds (private, community or public).
Each cloud remains a unique entity but they ICB — Integrated Cluster Bus.
are bound together. A hybrid network cloud ICF — Integrated Coupling Facility.
includes an interconnection.
ID — Identifier.
Hypervisor — Also called a virtual machine
IDR — Incremental Data Replication.
manager, a hypervisor is a hardware
virtualization technique that enables iFCP — Internet Fibre Channel Protocol. Allows
multiple operating systems to run an organization to extend Fibre Channel
concurrently on the same computer. storage networks over the Internet by using
Hypervisors are often installed on server TCP/IP. TCP is responsible for managing
hardware then run the guest operating congestion control as well as error detection
systems that act as servers. and recovery services.
Hypervisor can also refer to the interface iFCP allows an organization to create an IP SAN
that is provided by Infrastructure as a Service fabric that minimizes the Fibre Channel
(IaaS) in cloud computing. fabric component and maximizes use of the
company's TCP/IP infrastructure.
Leading hypervisors include VMware
vSphere Hypervisor™ (ESXi), Microsoft® IFL — Integrated Facility for LINUX.
Hyper-V and the Xen® hypervisor. IHE — Integrating the Healthcare Enterprise.
-back to top-
IID — Initiator ID.
IIS — Internet Information Server.

Page G-12 HDS Confidential: For distribution only to authorized parties.


ILM — Information Life Cycle Management. ISL — Inter-Switch Link.
ILO — (Hewlett-Packard) Integrated Lights-Out. iSNS — Internet Storage Name Service.
IML — Initial Microprogram Load. ISOE — iSCSI Offload Engine.
IMS — Information Management System. ISP — Internet service provider.
In-band virtualization — Refers to the location of ISPF — Interactive System Productivity Facility.
the storage network path, between the ISPF/PDF — Interactive System Productivity
application host servers in the storage Facility/Program Development Facility.
systems. Provides both control and data ISV — Independent Software Vendor.
along the same connection path. Also called ITaaS — IT as a Service. A cloud computing
symmetric virtualization.
business model. This general model is an
INI — Initiator. umbrella model that entails the SPI business
Interface —The physical and logical arrangement model (SaaS, PaaS and IaaS — Software,
supporting the attachment of any device to a Platform and Infrastructure as a Service).
connector or to another device. ITSC — Informaton and Telecommunications
Internal bus — Another name for an internal data Systems Companies.
bus. Also, an expansion bus is often referred -back to top-
to as an internal bus.
—J—
Internal data bus — A bus that operates only
Java — A widely accepted, open systems
within the internal circuitry of the CPU,
programming language. Hitachi’s enterprise
communicating among the internal caches of
software products are all accessed using Java
memory that are part of the CPU chip’s
applications. This enables storage
design. This bus is typically rather quick and
administrators to access the Hitachi
is independent of the rest of the computer’s
enterprise software products from any PC or
operations.
workstation that runs a supported thin-client
IOC — I/O controller. internet browser application and that has
IOCDS — I/O Control Data Set. TCP/IP network access to the computer on
IODF — I/O Definition file. which the software product runs.
IOPH — I/O per hour. Java VM — Java Virtual Machine.
IOS — I/O Supervisor. JBOD — Just a Bunch of Disks.
IOSQ — Input/Output Subsystem Queue. JCL — Job Control Language.
IP — Internet Protocol. The communications JMP —Jumper. Option setting method.
protocol that routes traffic across the JMS — Java Message Service.
Internet. JNL — Journal.
IPv6 — Internet Protocol Version 6. The latest JNLG — Journal Group.
revision of the Internet Protocol (IP).
JRE —Java Runtime Environment.
IPL — Initial Program Load.
JVM — Java Virtual Machine.
IPSEC — IP security.
J-VOL — Journal Volume.
IRR — Internal Rate of Return. -back to top-
ISC — Initial shipping condition or Inter-System
Communication. —K—
iSCSI — Internet SCSI. Pronounced eye skuzzy. KSDS — Key Sequence Data Set.
An IP-based standard for linking data kVA— Kilovolt Ampere.
storage devices over a network and
KVM — Kernel-based Virtual Machine or
transferring data by carrying SCSI
Keyboard-Video Display-Mouse.
commands over IP networks.
kW — Kilowatt.
ISE — Integrated Scripting Environment.
-back to top-
iSER — iSCSI Extensions for RDMA.

HDS Confidential: For distribution only to authorized parties. Page G-13


—L— networks where it is difficult to predict the
number of requests that will be issued to a
LACP — Link Aggregation Control Protocol. server. If 1 server starts to be swamped,
LAG — Link Aggregation Groups. requests are forwarded to another server
LAN — Local Area Network. A communications with more capacity. Load balancing can also
network that serves clients within a refer to the communications channels
geographical area, such as a building. themselves.
LBA — Logical block address. A 28-bit value that LOC — “Locations” section of the Maintenance
maps to a specific cylinder-head-sector Manual.
address on the disk. Logical DKC (LDKC) — Logical Disk Controller
LC — Lucent connector. Fibre Channel connector that is smaller than a simplex connector (SC).
LCDG — Link Processor Control Diagnostics.
LCM — Link Control Module.
LCP — Link Control Processor. Controls the optical links. LCP is located in the LCM.
LCSS — Logical Channel Subsystems.
LCU — Logical Control Unit.
LD — Logical Device.
LDAP — Lightweight Directory Access Protocol.
LDEV ― Logical Device or Logical Device (number). A set of physical disk partitions (all or portions of 1 or more disks) that are combined so that the subsystem sees and treats them as a single area of data storage. Also called a volume. An LDEV has a specific and unique address within a subsystem. LDEVs become LUNs to an open-systems host.
LDKC — Logical Disk Controller or Logical Disk Controller Manual.
LDM — Logical Disk Manager.
LDS — Linear Data Set.
LED — Light Emitting Diode.
LFF — Large Form Factor.
LIC — Licensed Internal Code.
LIS — Laboratory Information Systems.
LLQ — Lowest Level Qualifier.
LM — Local Memory.
LMODs — Load Modules.
LNKLST — Link List.
Load balancing — The process of distributing processing and communications activity evenly across a computer network so that no single device is overwhelmed. Load balancing is especially important for networks where it is difficult to predict the number of requests that will be issued to a server. If 1 server starts to be swamped, requests are forwarded to another server with more capacity. Load balancing can also refer to the communications channels themselves.
LOC — “Locations” section of the Maintenance Manual.
Logical DKC (LDKC) — Logical Disk Controller Manual. An internal architecture extension to the Control Unit addressing scheme that allows more LDEVs to be identified within 1 Hitachi enterprise storage system.
Longitudinal record — Patient information from birth to death.
LPAR — Logical Partition (mode).
LR — Local Router.
LRECL — Logical Record Length.
LRP — Local Router Processor.
LRU — Least Recently Used.
LSS — Logical Storage Subsystem (equivalent to LCU).
LU — Logical Unit. Mapping number of an LDEV.
LUN ― Logical Unit Number. 1 or more LDEVs. Used only for open systems.
LUSE ― Logical Unit Size Expansion. Feature used to create virtual LUs that are up to 36 times larger than the standard OPEN-x LUs.
LVDS — Low Voltage Differential Signal.
LVI — Logical Volume Image. Identifies a similar concept (as LUN) in the mainframe environment.
LVM — Logical Volume Manager.
-back to top-

—M—
MAC — Media Access Control. A MAC address is a unique identifier attached to most forms of networking equipment.
MAID — Massive array of disks.

MAN — Metropolitan Area Network. A communications network that generally covers a city or suburb. MAN is very similar to a LAN except it spans across a geographical region such as a state. Instead of the workstations in a LAN, the workstations in a MAN could depict different cities in a state. For example, the state of Texas could have: Dallas, Austin, San Antonio. The city could be a separate LAN and all the cities connected together via a switch. This topology would indicate a MAN.
MAPI — Management Application Programming Interface.
Mapping — Conversion between 2 data addressing spaces. For example, mapping refers to the conversion between physical disk block addresses and the block addresses of the virtual disks presented to operating environments by control software.
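To make the Mapping entry concrete, here is a toy sketch of a virtual-to-physical block translation table. It is purely illustrative; the device names and block numbers are invented and no real controller works from a Python dictionary:

    # Invented mapping table: virtual block number -> (physical device, physical block).
    mapping_table = {
        0: ("disk-A", 4096),
        1: ("disk-A", 4097),
        2: ("disk-B", 128),
    }

    def resolve(virtual_block: int) -> tuple[str, int]:
        """Translate a virtual disk block address to its physical location."""
        try:
            return mapping_table[virtual_block]
        except KeyError:
            raise ValueError(f"virtual block {virtual_block} is not mapped") from None

    print(resolve(2))   # ('disk-B', 128)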
Mb — Megabit.
MB — Megabyte.
MBA — Memory Bus Adaptor.
MBUS — Multi-CPU Bus.
MC — Multi Cabinet.
MCU — Main Control Unit, Master Control Unit, Main Disk Control Unit or Master Disk Control Unit. The local CU of a remote copy pair. Main or Master Control Unit.
MCU — Master Control Unit.
MDPL — Metadata Data Protection Level.
MediaAgent — The workhorse for all data movement. MediaAgent facilitates the transfer of data between the data source, the client computer, and the destination storage media.
Metadata — In database management systems, data files are the files that store the database information; whereas other files, such as index files and data dictionaries, store administrative information, known as metadata.
MFC — Main Failure Code.
MG — (1) Module Group. 2 (DIMM) cache memory modules that work together. (2) Migration Group. A group of volumes to be migrated together.
MGC — (3-Site) Metro/Global Mirror.
MIB — Management Information Base. A database of objects that can be monitored by a network management system. Both SNMP and RMON use standardized MIB formats that allow any SNMP and RMON tools to monitor any device defined by a MIB.
Microcode — The lowest-level instructions that directly control a microprocessor. A single machine-language instruction typically translates into several microcode instructions.
(Figure: programming layers, from high-level languages such as Fortran, Pascal and C, down through assembly language and machine language to the hardware.)
Microprogram — See Microcode.
MIF — Multiple Image Facility.
Mirror Cache OFF — Increases cache efficiency over cache data redundancy.
M-JNL — Primary journal volumes.
MM — Maintenance Manual.
MMC — Microsoft Management Console.
Mode — The state or setting of a program or device. The term mode implies a choice, which is that you can change the setting and put the system in a different mode.
MP — Microprocessor.
MPA — Microprocessor adapter.
MPB — Microprocessor board.
MPI — (Electronic) Master Patient Identifier. Also known as EMPI.
MPIO — Multipath I/O.
MP PK — MP Package.
MPU — Microprocessor Unit.
MQE — Metadata Query Engine (Hitachi).
MS/SG — Microsoft Service Guard.
MSCS — Microsoft Cluster Server.
MSS — (1) Multiple Subchannel Set. (2) Managed Security Services.
MTBF — Mean Time Between Failure.
MTS — Multitiered Storage.
Multitenancy — In cloud computing, multitenancy is a secure way to partition the infrastructure (application, storage pool and network) so multiple customers share a single resource pool. Multitenancy is one of the key ways cloud can achieve massive economy of scale.
M-VOL — Main Volume.
MVS — Multiple Virtual Storage.
-back to top-

—N—
NAS ― Network Attached Storage. A disk array connected to a controller that gives access to a LAN Transport. It handles data at the file level.
NAT — Network Address Translation.
NDMP — Network Data Management Protocol. A protocol meant to transport data between NAS devices.
NetBIOS — Network Basic Input/Output System.
Network — A computer system that allows sharing of resources, such as files and peripheral hardware devices.
Network Cloud — A communications network. The word "cloud" by itself may refer to any local area network (LAN) or wide area network (WAN). The terms "computing" and "cloud computing" refer to services offered on the public Internet or to a private network that uses the same protocols as a standard network. See also cloud computing.
NFS protocol — Network File System is a protocol that allows a computer to access files over a network as easily as if they were on its local disks.
NIM — Network Interface Module.
NIS — Network Information Service (originally called the Yellow Pages or YP).
NIST — National Institute of Standards and Technology. A standards organization active in cloud computing.
NLS — Native Language Support.
Node ― An addressable entity connected to an I/O bus or network, used primarily to refer to computers, storage devices, and storage subsystems. The component of a node that connects to the bus or network is a port.
Node name ― A Name_Identifier associated with a node.
NPV — Net Present Value.
NRO — Network Recovery Objective.
NTP — Network Time Protocol.
NVS — Non Volatile Storage.
-back to top-

—O—
OCC — Open Cloud Consortium. A standards organization active in cloud computing.
OEM — Original Equipment Manufacturer.
OFC — Open Fibre Control.
OGF — Open Grid Forum. A standards organization active in cloud computing.
OID — Object identifier.
OLA — Operating Level Agreements.
OLTP — On-Line Transaction Processing.
OLTT — Open-loop throughput throttling.
OMG — Object Management Group. A standards organization active in cloud computing.
On/Off CoD — On/Off Capacity on Demand.
ONODE — Object node.
OPEX — Operational Expenditure. This is an operating expense, operating expenditure, operational expense, or operational expenditure, which is an ongoing cost for running a product, business, or system. Its counterpart is a capital expenditure (CAPEX).
ORM — Online Read Margin.
OS — Operating System.
Out-of-band virtualization — Refers to systems where the controller is located outside of the SAN data path. Separates control and data on different connection paths. Also called asymmetric virtualization.
-back to top-

—P—
P-2-P — Point to Point. Also P-P.
PaaS — Platform as a Service. A cloud computing business model — delivering a computing platform and solution stack as a service. PaaS offerings facilitate deployment of applications without the cost and complexity of buying and managing the underlying hardware, software and provisioning hosting capabilities. PaaS provides all of the facilities required to support the complete life cycle of building and delivering web applications and services entirely from the Internet.
PACS — Picture Archiving and Communication System.

PAN — Personal Area Network. A communications network that transmits data wirelessly over a short distance. Bluetooth and Wi-Fi Direct are examples of personal area networks.
PAP — Password Authentication Protocol.
Parity — A technique of checking whether data has been lost or written over when it is moved from 1 place in storage to another or when it is transmitted between computers.
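In disk arrays the usual parity mechanism is a byte-wise XOR across the data blocks of a stripe, which also lets one missing block be rebuilt. A minimal sketch with made-up block contents:

    # Byte-wise XOR parity over equally sized blocks; contents are invented.
    from functools import reduce

    blocks = [b"\x10\x20\x30", b"\x0f\x0f\x0f", b"\xaa\x55\x00"]

    def xor_blocks(chunks):
        """Return the byte-wise XOR of a list of equally sized byte strings."""
        return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*chunks))

    parity = xor_blocks(blocks)

    # Simulate losing blocks[1]: XOR of the survivors plus parity restores it.
    rebuilt = xor_blocks([blocks[0], blocks[2], parity])
    assert rebuilt == blocks[1]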
Parity Group — Also called an array group. This is a group of hard disk drives (HDDs) that form the basic unit of storage in a subsystem. All HDDs in a parity group must have the same physical capacity.
Partitioned cache memory — Separate workloads in a “storage consolidated” system by dividing cache into individually managed multiple partitions. Then customize the partition to match the I/O characteristics of assigned LUs.
PAT — Port Address Translation.
PATA — Parallel ATA.
Path — Also referred to as a transmission channel, the path between 2 nodes of a network that a data communication follows. The term can refer to the physical cabling that connects the nodes on a network, the signal that is communicated over the pathway or a sub-channel in a carrier frequency.
Path failover — See Failover.
PAV — Parallel Access Volumes.
PAWS — Protect Against Wrapped Sequences.
PB — Petabyte.
PBC — Port By-pass Circuit.
PCB — Printed Circuit Board.
PCHIDS — Physical Channel Path Identifiers.
PCI — Power Control Interface.
PCI CON — Power Control Interface Connector Board.
PCI DSS — Payment Card Industry Data Security Standard.
PCIe — Peripheral Component Interconnect Express.
PD — Product Detail.
PDEV — Physical Device.
PDM — Policy based Data Migration or Primary Data Migrator.
PDS — Partitioned Data Set.
PDSE — Partitioned Data Set Extended.
Performance — Speed of access or the delivery of information.
Petabyte (PB) — A measurement of capacity — the amount of data that a drive or storage system can store after formatting. 1PB = 1,024TB.
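The capacity entries in this glossary (TB, PB and, later, ZB and YB) all step up by a factor of 1,024. A small helper that makes the relationship explicit:

    # Binary capacity units: each step is a factor of 1,024.
    UNITS = ["KB", "MB", "GB", "TB", "PB", "EB", "ZB", "YB"]

    def to_bytes(value: float, unit: str) -> int:
        """Convert a capacity expressed in one of the binary units to bytes."""
        return int(value * 1024 ** (UNITS.index(unit) + 1))

    print(to_bytes(1, "PB") // to_bytes(1, "TB"))   # 1024
    print(to_bytes(1, "PB"))                        # 1125899906842624 bytes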
PFA — Predictive Failure Analysis.
PFTaaS — Private File Tiering as a Service. A cloud computing business model.
PGP — Pretty Good Privacy (encryption).
PGR — Persistent Group Reserve.
PI — Product Interval.
PIR — Performance Information Report.
PiT — Point-in-Time.
PK — Package (see PCB).
PL — Platter. The circular disk on which the magnetic data is stored. Also called motherboard or backplane.
PM — Package Memory.
POC — Proof of concept.
Port — In TCP/IP and UDP networks, an endpoint to a logical connection. The port number identifies what type of port it is. For example, port 80 is used for HTTP traffic.
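As a quick illustration of well-known port numbers, Python's standard socket module can look them up in the host's services database (the results depend on the local /etc/services file):

    import socket

    # Well-known service names map to registered port numbers and back.
    print(socket.getservbyname("http", "tcp"))    # 80
    print(socket.getservbyname("https", "tcp"))   # 443
    print(socket.getservbyport(22, "tcp"))        # 'ssh'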
POSIX — Portable Operating System Interface for UNIX. A set of standards that defines an application programming interface (API) for software designed to run under heterogeneous operating systems.
PP — Program product.
P-P — Point-to-point; also P2P.
PPRC — Peer-to-Peer Remote Copy.
Private Cloud — A type of cloud computing defined by shared capabilities within a single company; modest economies of scale and less automation. Infrastructure and data reside inside the company’s data center behind a firewall. Comprised of licensed software tools rather than on-going services.
Example: An organization implements its own virtual, scalable cloud and business units are charged on a per use basis.

Private Network Cloud — A type of cloud network with 3 characteristics: (1) Operated solely for a single organization, (2) Managed internally or by a third-party, (3) Hosted internally or externally.
PR/SM — Processor Resource/System Manager.
Protocol — A convention or standard that enables the communication between 2 computing endpoints. In its simplest form, a protocol can be defined as the rules governing the syntax, semantics, and synchronization of communication. Protocols may be implemented by hardware, software, or a combination of the 2. At the lowest level, a protocol defines the behavior of a hardware connection.
Provisioning — The process of allocating storage resources and assigning storage capacity for an application, usually in the form of server disk drive space, in order to optimize the performance of a storage area network (SAN). Traditionally, this has been done by the SAN administrator, and it can be a tedious process. In recent years, automated storage provisioning (also called auto-provisioning) programs have become available. These programs can reduce the time required for the storage provisioning process, and can free the administrator from the often distasteful task of performing this chore manually.
PS — Power Supply.
PSA — Partition Storage Administrator.
PSSC — Perl Silicon Server Control.
PSU — Power Supply Unit.
PTAM — Pickup Truck Access Method.
PTF — Program Temporary Fixes.
PTR — Pointer.
PU — Processing Unit.
Public Cloud — Resources, such as applications and storage, available to the general public over the Internet.
P-VOL — Primary Volume.
-back to top-

—Q—
QD — Quorum Device.
QDepth — The number of I/O operations that can run in parallel on a SAN device; also WWN QDepth.
QoS — Quality of Service. In the field of computer networking, the traffic engineering term quality of service (QoS) refers to resource reservation control mechanisms rather than the achieved service quality. Quality of service is the ability to provide different priority to different applications, users, or data flows, or to guarantee a certain level of performance to a data flow.
QSAM — Queued Sequential Access Method.
-back to top-

—R—
RACF — Resource Access Control Facility.
RAID ― Redundant Array of Independent Disks, or Redundant Array of Inexpensive Disks. A group of disks that look like a single volume to the server. RAID improves performance by pulling a single stripe of data from multiple disks, and improves fault-tolerance either through mirroring or parity checking and it is a component of a customer’s SLA.
RAID-0 — Striped array with no parity.
RAID-1 — Mirrored array and duplexing.
RAID-3 — Striped array with typically non-rotating parity, optimized for long, single-threaded transfers.
RAID-4 — Striped array with typically non-rotating parity, optimized for short, multithreaded transfers.
RAID-5 — Striped array with typically rotating parity, optimized for short, multithreaded transfers.
RAID-6 — Similar to RAID-5, but with dual rotating parity physical disks, tolerating 2 physical disk failures.
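A compact sketch of the striping-with-rotating-parity idea behind RAID-5. The 4-disk layout, tiny chunk size and XOR parity are simplified for readability; real array firmware is considerably more involved:

    # Illustrative RAID-5 style layout: data chunks striped across 4 disks,
    # with the XOR parity chunk rotating from stripe to stripe.
    from functools import reduce

    NUM_DISKS = 4
    CHUNK = 4  # bytes per chunk, unrealistically small so the output stays readable

    def xor(chunks):
        return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*chunks))

    def raid5_stripes(data: bytes):
        """Yield, per stripe, the list of chunks written to each disk."""
        data_disks = NUM_DISKS - 1
        stripe_bytes = CHUNK * data_disks
        for index, offset in enumerate(range(0, len(data), stripe_bytes)):
            stripe = data[offset:offset + stripe_bytes].ljust(stripe_bytes, b"\x00")
            chunks = [stripe[i:i + CHUNK] for i in range(0, stripe_bytes, CHUNK)]
            parity_disk = index % NUM_DISKS          # rotate the parity position
            yield chunks[:parity_disk] + [xor(chunks)] + chunks[parity_disk:]

    for disks in raid5_stripes(b"ABCDEFGHIJKLMNOPQRSTUVWX"):
        print(disks)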
RAIN — Redundant (or Reliable) Array of Independent Nodes (architecture).
RAM — Random Access Memory.
RAM DISK — A LUN held entirely in the cache area.
RAS — Reliability, Availability, and Serviceability or Row Address Strobe.
RBAC — Role-Based Access Control.
RC — (1) Reference Code or (2) Remote Control.
RCHA — RAID Channel Adapter.
RCP — Remote Control Processor.
RCU — Remote Control Unit or Remote Disk Control Unit.

RCUT — RCU Target.
RD/WR — Read/Write.
RDM — Raw Disk Mapped.
RDMA — Remote Direct Memory Access.
RDP — Remote Desktop Protocol.
RDW — Record Descriptor Word.
Read/Write Head — Read and write data to the platters, typically there is 1 head per platter side, and each head is attached to a single actuator shaft.
RECFM — Record Format.
Redundant — Describes the computer or network system components, such as fans, hard disk drives, servers, operating systems, switches, and telecommunication links that are installed to back up primary resources in case they fail. A well-known example of a redundant system is the redundant array of independent disks (RAID). Redundancy contributes to the fault tolerance of a system.
Redundancy — Backing up a component to help ensure high availability.
Reliability — (1) Level of assurance that data will not be lost or degraded over time. (2) An attribute of any computer component (software, hardware, or a network) that consistently performs according to its specifications.
REST — Representational State Transfer.
REXX — Restructured extended executor.
RID — Relative Identifier that uniquely identifies a user or group within a Microsoft Windows domain.
RIS — Radiology Information System.
RISC — Reduced Instruction Set Computer.
RIU — Radiology Imaging Unit.
R-JNL — Secondary journal volumes.
RK — Rack additional.
RKAJAT — Rack Additional SATA disk tray.
RKAK — Expansion unit.
RLGFAN — Rear Logic Box Fan Assembly.
RLOGIC BOX — Rear Logic Box.
RMF — Resource Measurement Facility.
RMI — Remote Method Invocation. A way that a programmer, using the Java programming language and development environment, can write object-oriented programming in which objects on different computers can interact in a distributed network. RMI is the Java version of what is generally known as a RPC (remote procedure call), but with the ability to pass 1 or more objects along with the request.
RndRD — Random read.
ROA — Return on Asset.
RoHS — Restriction of Hazardous Substances (in Electrical and Electronic Equipment).
ROI — Return on Investment.
ROM — Read Only Memory.
Round robin mode — A load balancing technique which distributes data packets equally among the available paths. Round robin DNS is usually used for balancing the load of geographically distributed Web servers. It works on a rotating basis in that one server IP address is handed out, then moves to the back of the list; the next server IP address is handed out, and then it moves to the end of the list; and so on, depending on the number of servers being used. This works in a looping fashion.
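The rotating hand-out that the Round robin entry describes is easy to picture in code. A minimal sketch with invented addresses; real multipath software and DNS servers add health checks and weighting on top of this:

    # Simple round-robin rotation over a fixed list of targets.
    from itertools import cycle

    targets = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]   # example addresses only
    rotation = cycle(targets)

    for request in range(5):
        print(f"request {request} -> {next(rotation)}")
    # request 0 -> 10.0.0.1, request 1 -> 10.0.0.2, ... wrapping back around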
Router — A computer networking device that forwards data packets toward their destinations, through a process known as routing.
RPC — Remote procedure call.
RPO — Recovery Point Objective. The point in time that recovered data should match.
RPSFAN — Rear Power Supply Fan Assembly.
RRDS — Relative Record Data Set.
RS CON — RS232C/RS422 Interface Connector.
RSD — RAID Storage Division (of Hitachi).
R-SIM — Remote Service Information Message.
RSM — Real Storage Manager.
RTM — Recovery Termination Manager.
RTO — Recovery Time Objective. The length of time that can be tolerated between a disaster and recovery of data.
R-VOL — Remote Volume.
R/W — Read/Write.
-back to top-

—S—
SA — Storage Administrator.
SA z/OS — System Automation for z/OS.
SAA — Share Access Authentication. The process of restricting a user's rights to a file system object by combining the security descriptors from both the file system object itself and the share to which the user is connected.
SaaS — Software as a Service. A cloud computing business model. SaaS is a software delivery model in which software and its associated data are hosted centrally in a cloud and are typically accessed by users using a thin client, such as a web browser via the Internet. SaaS has become a common delivery model for most business applications, including accounting (CRM and ERP), invoicing (HRM), content management (CM) and service desk management, just to name the most common software that runs in the cloud. This is the fastest growing service in the cloud market today. SaaS performs best for relatively simple tasks in IT-constrained organizations.
SACK — Sequential Acknowledge.
SACL — System ACL. The part of a security descriptor that stores system auditing information.
SAIN — SAN-attached Array of Independent Nodes (architecture).
SAN ― Storage Area Network. A network linking computing devices to disk or tape arrays and other devices over Fibre Channel. It handles data at the block level.
SAP — (1) System Assist Processor (for I/O processing), or (2) a German software company.
SAP HANA — High Performance Analytic Appliance, a database appliance technology proprietary to SAP.
SARD — System Assurance Registration Document.
SAS — Serial Attached SCSI.
SATA — Serial ATA. Serial Advanced Technology Attachment is a new standard for connecting hard drives into computer systems. SATA is based on serial signaling technology, unlike current IDE (Integrated Drive Electronics) hard drives that use parallel signaling.
SBM — Solutions Business Manager.
SBOD — Switched Bunch of Disks.
SBSC — Smart Business Storage Cloud.
SBX — Small Box (Small Form Factor).
SC — (1) Simplex connector. Fibre Channel connector that is larger than a Lucent connector (LC). (2) Single Cabinet.
SCM — Supply Chain Management.
SCP — Secure Copy.
SCSI — Small Computer Systems Interface. A parallel bus architecture and a protocol for transmitting large data blocks up to a distance of 15 to 25 meters.
SD — Software Division (of Hitachi).
SDH — Synchronous Digital Hierarchy.
SDM — System Data Mover.
SDSF — Spool Display and Search Facility.
Sector — A sub-division of a track of a magnetic disk that stores a fixed amount of data.
SEL — System Event Log.
Selectable segment size — Can be set per partition.
Selectable Stripe Size — Increases performance by customizing the disk access size.
SENC — Is the SATA (Serial ATA) version of the ENC. ENCs and SENCs are complete microprocessor systems on their own and they occasionally require a firmware upgrade.
SeqRD — Sequential read.
Serial Transmission — The transmission of data bits in sequential order over a single line.
Server — A central computer that processes end-user applications or requests, also called a host.
Server Virtualization — The masking of server resources, including the number and identity of individual physical servers, processors, and operating systems, from server users. The implementation of multiple isolated virtual environments in one physical server.

Service-level Agreement — SLA. A contract between a network service provider and a customer that specifies, usually in measurable terms, what services the network service provider will furnish. Many Internet service providers (ISP) provide their customers with a SLA. More recently, IT departments in major enterprises have adopted the idea of writing a service level agreement so that services for their customers (users in other departments within the enterprise) can be measured, justified, and perhaps compared with those of outsourcing network providers.
Some metrics that SLAs may specify include:
• The percentage of the time services will be available
• The number of users that can be served simultaneously
• Specific performance benchmarks to which actual performance will be periodically compared
• The schedule for notification in advance of network changes that may affect users
• Help desk response time for various classes of problems
• Dial-in access availability
• Usage statistics that will be provided
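The first metric in the list above, percentage of time available, translates directly into an allowed-downtime budget. A small worked example assuming a 30-day month (the availability figures are illustrative, not drawn from any HDS agreement):

    # Availability percentage -> allowed downtime per 30-day month.
    MINUTES_PER_MONTH = 30 * 24 * 60   # 43,200 minutes

    for availability in (99.0, 99.9, 99.99):
        allowed_downtime = MINUTES_PER_MONTH * (1 - availability / 100)
        print(f"{availability}% available -> {allowed_downtime:.1f} minutes of downtime per month")
    # 99.0% -> 432.0, 99.9% -> 43.2, 99.99% -> 4.3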
Service-Level Objective — SLO. Individual performance metrics built into an SLA. Each SLO corresponds to a single performance characteristic relevant to the delivery of an overall service. Some examples of SLOs include: system availability, help desk incident resolution time, and application response time.
SES — SCSI Enclosure Services.
SFF — Small Form Factor.
SFI — Storage Facility Image.
SFM — Sysplex Failure Management.
SFP — Small Form-Factor Pluggable module Host connector. A specification for a new generation of optical modular transceivers. The devices are designed for use with small form factor (SFF) connectors, offer high speed and physical compactness, and are hot-swappable.
SHSN — Shared memory Hierarchical Star Network.
SID — Security Identifier. A user or group identifier within the Microsoft Windows security model.
SIGP — Signal Processor.
SIM — (1) Service Information Message. A message reporting an error that contains fix guidance information. (2) Storage Interface Module. (3) Subscriber Identity Module.
SIM RC — Service (or system) Information Message Reference Code.
SIMM — Single In-line Memory Module.
SLA — Service Level Agreement.
SLO — Service Level Objective.
SLRP — Storage Logical Partition.
SM ― Shared Memory or Shared Memory Module. Stores the shared information about the subsystem and the cache control information (director names). This type of information is used for the exclusive control of the subsystem. Like CACHE, shared memory is controlled as 2 areas of memory and fully non-volatile (sustained for approximately 7 days).
SM PATH — Shared Memory Access Path. The Access Path from the processors of CHA, DKA PCB to Shared Memory.
SMB/CIFS — Server Message Block Protocol/Common Internet File System.
SMC — Shared Memory Control.
SME — Small and Medium Enterprise.
SMF — System Management Facility.
SMI-S — Storage Management Initiative Specification.
SMP — (1) Symmetric Multiprocessing. (2) System Modification Program. An IBM-licensed program used to install software and software changes on z/OS systems.
SMP/E — System Modification Program/Extended.
SMS — System Managed Storage.
SMTP — Simple Mail Transfer Protocol.
SMU — System Management Unit.
Snapshot Image — A logical duplicated volume (V-VOL) of the primary volume. It is an internal volume intended for restoration.
SNIA — Storage Networking Industry Association. An association of producers and consumers of storage networking products, whose goal is to further storage networking technology and applications. Active in cloud computing.
SNMP — Simple Network Management Protocol. A TCP/IP protocol that was designed for management of networks over TCP/IP, using agents and stations.
SOA — Service Oriented Architecture.

SOAP — Simple object access protocol. A way for a program running in one kind of operating system (such as Windows 2000) to communicate with a program in the same or another kind of an operating system (such as Linux) by using the World Wide Web's Hypertext Transfer Protocol (HTTP) and its Extensible Markup Language (XML) as the mechanisms for information exchange.
Socket — In UNIX and some other operating systems, socket is a software object that connects an application to a network protocol. In UNIX, for example, a program can send and receive TCP/IP messages by opening a socket and reading and writing data to and from the socket. This simplifies program development because the programmer need only worry about manipulating the socket and can rely on the operating system to actually transport messages across the network correctly. Note that a socket in this sense is completely soft; it is a software object, not a physical component.
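A tiny TCP client in the spirit of the Socket entry: open a socket, write a request, read the reply, close. The host name and port are used purely for illustration and the example needs outbound network access to run:

    import socket

    # Open a TCP connection, send an HTTP request, and read the first reply bytes.
    with socket.create_connection(("example.com", 80), timeout=5) as sock:
        sock.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
        reply = sock.recv(1024)

    print(reply.decode("ascii", errors="replace").splitlines()[0])   # for example: HTTP/1.1 200 OK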
SOM — System Option Mode.
SONET — Synchronous Optical Network.
SOSS — Service Oriented Storage Solutions.
SPaaS — SharePoint as a Service. A cloud computing business model.
SPAN — Span is a section between 2 intermediate supports. See Storage pool.
Spare — An object reserved for the purpose of substitution for a like object in case of that object's failure.
SPC — SCSI Protocol Controller.
SpecSFS — Standard Performance Evaluation Corporation Shared File system.
SPECsfs97 — Standard Performance Evaluation Corporation (SPEC) System File Server (sfs) developed in 1997 (97).
SPI model — Software, Platform and Infrastructure as a service. A common term to describe the cloud computing “as a service” business model.
SRA — Storage Replicator Adapter.
SRDF/A — (EMC) Symmetrix Remote Data Facility Asynchronous.
SRDF/S — (EMC) Symmetrix Remote Data Facility Synchronous.
SRM — Site Recovery Manager.
SSB — Sense Byte.
SSC — SiliconServer Control.
SSCH — Start Subchannel.
SSD — Solid-state Drive or Solid-State Disk.
SSH — Secure Shell.
SSID — Storage Subsystem ID or Subsystem Identifier.
SSL — Secure Sockets Layer.
SSPC — System Storage Productivity Center.
SSUE — Split SUSpended Error.
SSUS — Split SUSpend.
SSVP — Sub Service Processor interfaces the SVP to the DKC.
SSW — SAS Switch.
Sticky Bit — Extended UNIX mode bit that prevents objects from being deleted from a directory by anyone other than the object's owner, the directory's owner or the root user.
Storage pooling — The ability to consolidate and manage storage resources across storage system enclosures where the consolidation of many appears as a single view.
STP — Server Time Protocol.
STR — Storage and Retrieval Systems.
Striping — A RAID technique for writing a file to multiple disks on a block-by-block basis, with or without parity.
Subsystem — Hardware or software that performs a specific function within a larger system.
SVC — Supervisor Call Interruption.
SVC Interrupts — Supervisor calls.
S-VOL — (1) (ShadowImage) Source Volume for In-System Replication, or (2) (Universal Replicator) Secondary Volume.
SVP — Service Processor ― A laptop computer mounted on the control frame (DKC) and used for monitoring, maintenance and administration of the subsystem.
Switch — A fabric device providing full bandwidth per port and high-speed routing of data via link-level addressing.
SWPX — Switching power supply.
SXP — SAS Expander.
Symmetric virtualization — See In-band virtualization.

Synchronous — Operations that have a fixed time relationship to each other. Most commonly used to denote I/O operations that occur in time sequence, i.e., a successor operation does not occur until its predecessor is complete.
-back to top-

—T—
Target — The system component that receives a SCSI I/O command, an open device that operates at the request of the initiator.
TB — Terabyte. 1TB = 1,024GB.
TCDO — Total Cost of Data Ownership.
TCO — Total Cost of Ownership.
TCP/IP — Transmission Control Protocol over Internet Protocol.
TDCONV — Trace Dump CONVerter. A software program that is used to convert traces taken on the system into readable text. This information is loaded into a special spreadsheet that allows for further investigation of the data and more in-depth failure analysis.
TDMF — Transparent Data Migration Facility.
Telco or TELCO — Telecommunications Company.
TEP — Tivoli Enterprise Portal.
Terabyte (TB) — A measurement of capacity, data or data storage. 1TB = 1,024GB.
TFS — Temporary File System.
TGTLIBs — Target Libraries.
THF — Front Thermostat.
Thin Provisioning — Thin provisioning allows storage space to be easily allocated to servers on a just-enough and just-in-time basis.
THR — Rear Thermostat.
Throughput — The amount of data transferred from 1 place to another or processed in a specified amount of time. Data transfer rates for disk drives and networks are measured in terms of throughput. Typically, throughputs are measured in kbps, Mbps and Gb/sec.
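Throughput is just data moved divided by elapsed time, with the unit chosen to taste. A quick worked example with invented numbers:

    # 512MB moved in 8 seconds, expressed in MB/sec and in megabits per second.
    transferred_bytes = 512 * 1024 ** 2
    elapsed_seconds = 8.0

    throughput_mb_per_s = transferred_bytes / 1024 ** 2 / elapsed_seconds
    throughput_mbit_per_s = transferred_bytes * 8 / 1_000_000 / elapsed_seconds

    print(f"{throughput_mb_per_s:.1f} MB/sec, about {throughput_mbit_per_s:.0f} Mbps")
    # 64.0 MB/sec, about 537 Mbps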
TID — Target ID.
Tiered storage — A storage strategy that matches data classification to storage metrics. Tiered storage is the assignment of different categories of data to different types of storage media in order to reduce total storage cost. Categories may be based on levels of protection needed, performance requirements, frequency of use, and other considerations. Since assigning data to particular media may be an ongoing and complex activity, some vendors provide software for automatically managing the process based on a company-defined policy.
Tiered Storage Promotion — Moving data between tiers of storage as their availability requirements change.
TLS — Tape Library System.
TLS — Transport Layer Security.
TMP — Temporary or Test Management Program.
TOD (or ToD) — Time Of Day.
TOE — TCP Offload Engine.
Topology — The shape of a network or how it is laid out. Topologies are either physical or logical.
TPC-R — Tivoli Productivity Center for Replication.
TPF — Transaction Processing Facility.
TPOF — Tolerable Points of Failure.
Track — Circular segment of a hard disk or other storage media.
Transfer Rate — See Data Transfer Rate.
Trap — A program interrupt, usually an interrupt caused by some exceptional situation in the user program. In most cases, the Operating System performs some action, and then returns control to the program.
TSC — Tested Storage Configuration.
TSO — Time Sharing Option.
TSO/E — Time Sharing Option/Extended.
T-VOL — (ShadowImage) Target Volume for In-System Replication.
-back to top-

—U—
UA — Unified Agent.
UBX — Large Box (Large Form Factor).
UCB — Unit Control Block.
UDP — User Datagram Protocol is 1 of the core protocols of the Internet protocol suite. Using UDP, programs on networked computers can send short messages known as datagrams to one another.
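A minimal UDP round trip on the local machine, matching the datagram description above. The port number is an arbitrary choice for the example:

    import socket

    # Receiver: bind a datagram socket to a local port.
    receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    receiver.bind(("127.0.0.1", 50007))

    # Sender: fire off a single short datagram; no connection is set up first.
    sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sender.sendto(b"short message", ("127.0.0.1", 50007))

    datagram, address = receiver.recvfrom(1024)
    print(datagram, "from", address)

    sender.close()
    receiver.close()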
UFA — UNIX File Attributes.

UID — User Identifier within the UNIX security model.
UPS — Uninterruptible Power Supply — A power supply that includes a battery to maintain power in the event of a power outage.
UR — Universal Replicator.
UUID — Universally Unique Identifier.
-back to top-

—V—
vContinuum — Using the vContinuum wizard, users can push agents to primary and secondary servers, set up protection and perform failovers and failbacks.
VCS — Veritas Cluster System.
VDEV — Virtual Device.
VDI — Virtual Desktop Infrastructure.
VHD — Virtual Hard Disk.
VHDL — VHSIC (Very-High-Speed Integrated Circuit) Hardware Description Language.
VHSIC — Very-High-Speed Integrated Circuit.
VI — Virtual Interface. A research prototype that is undergoing active development, and the details of the implementation may change considerably. It is an application interface that gives user-level processes direct but protected access to network interface cards. This allows applications to bypass IP processing overheads (for example, copying data, computing checksums) and system call overheads while still preventing 1 process from accidentally or maliciously tampering with or reading data being used by another.
Virtualization — Referring to storage virtualization, virtualization is the amalgamation of multiple network storage devices into what appears to be a single storage unit. Storage virtualization is often used in a SAN, and makes tasks such as archiving, backup and recovery easier and faster. Storage virtualization is usually implemented via software applications. There are many additional types of virtualization.
Virtual Private Cloud (VPC) — Private cloud existing within a shared or public cloud (for example, the Intercloud). Also known as a virtual private network cloud.
VLL — Virtual Logical Volume Image/Logical Unit Number.
VLUN — Virtual LUN. Customized volume. Size chosen by user.
VLVI — Virtual Logic Volume Image. Marketing name for CVS (custom volume size).
VM — Virtual Machine.
VMDK — Virtual Machine Disk file format.
VNA — Vendor Neutral Archive.
VOJP — (Cache) Volatile Jumper.
VOLID — Volume ID.
VOLSER — Volume Serial Numbers.
Volume — A fixed amount of storage on a disk or tape. The term volume is often used as a synonym for the storage medium itself, but it is possible for a single disk to contain more than 1 volume or for a volume to span more than 1 disk.
VPC — Virtual Private Cloud.
VSAM — Virtual Storage Access Method.
VSD — Virtual Storage Director.
VTL — Virtual Tape Library.
VSP — Virtual Storage Platform.
VSS — (Microsoft) Volume Shadow Copy Service.
VTOC — Volume Table of Contents.
VTOCIX — Volume Table of Contents Index.
VVDS — Virtual Volume Data Set.
V-VOL — Virtual Volume.
-back to top-

—W—
WAN — Wide Area Network. A computing internetwork that covers a broad area or region. Contrast with PAN, LAN and MAN.
WDIR — Directory Name Object.
WDIR — Working Directory.
WDS — Working Data Set.
WebDAV — Web-based Distributed Authoring and Versioning (HTTP extensions).
WFILE — File Object or Working File.
WFS — Working File Set.
WINS — Windows Internet Naming Service.

WL — Wide Link.
WLM — Work Load Manager.
WORM — Write Once, Read Many.
WSDL — Web Services Description Language.
WSRM — Write Seldom, Read Many.
WTREE — Directory Tree Object or Working Tree.
WWN ― World Wide Name. A unique identifier for an open-system host. It consists of a 64-bit physical address (the IEEE 48-bit format with a 12-bit extension and a 4-bit prefix).
WWNN — World Wide Node Name. A globally unique 64-bit identifier assigned to each Fibre Channel node process.
WWPN ― World Wide Port Name. A globally unique 64-bit identifier assigned to each Fibre Channel port. A Fibre Channel port’s WWPN is permitted to use any of several naming authorities. Fibre Channel specifies a Network Address Authority (NAA) to distinguish between the various name registration authorities that may be used to identify the WWPN.
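The 4-bit NAA prefix mentioned above sits in the top bits of the 64-bit name, so it is easy to pull out of a WWPN string. The value below is made up for illustration, and how the remaining 60 bits are laid out depends on which NAA format is in use:

    # Extract the 4-bit Network Address Authority prefix from a 64-bit WWPN.
    wwpn = "50:06:0e:80:10:1c:2f:3a"          # fabricated example value

    value = int(wwpn.replace(":", ""), 16)    # the name as a 64-bit integer
    naa = value >> 60                         # top 4 bits: naming authority
    remainder = value & ((1 << 60) - 1)       # remaining 60 bits; layout varies by NAA type

    print(f"NAA prefix: {naa:#x}, remaining bits: {remainder:#017x}")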
-back to top-

—X—
XAUI — "X"=10, AUI = Attachment Unit Interface.
XCF — Cross System Communications Facility.
XDS — Cross Enterprise Document Sharing.
XDSi — Cross Enterprise Document Sharing for Imaging.
XFI — Standard interface for connecting 10Gb Ethernet MAC device to XFP interface.
XFP — "X"=10Gb Small Form Factor Pluggable.
XML — eXtensible Markup Language.
XRC — Extended Remote Copy.
-back to top-

—Y—
YB — Yottabyte.
Yottabyte — A highest-end measurement of data at the present time. 1YB = 1,024ZB, or 1 quadrillion GB. A recent estimate (2011) is that all the computer hard drives in the world do not contain 1YB of data.
-back to top-

—Z—
z/OS — z Operating System (IBM® S/390® or z/OS® Environments).
z/OS NFS — (System) z/OS Network File System.
z/OSMF — (System) z/OS Management Facility.
zAAP — (System) z Application Assist Processor (for Java and XML workloads).
ZCF — Zero Copy Failover. Also known as Data Access Path (DAP).
Zettabyte (ZB) — A high-end measurement of data at the present time. 1ZB = 1,024EB.
zFS — (System) zSeries File System.
zHPF — (System) z High Performance FICON.
zIIP — (System) z Integrated Information Processor (specialty processor for database).
Zone — A collection of Fibre Channel Ports that are permitted to communicate with each other via the fabric.
Zoning — A method of subdividing a storage area network into disjoint zones, or subsets of nodes on the network. Storage area network nodes outside a zone are invisible to nodes within the zone. Moreover, with switched SANs, traffic within each zone may be physically isolated from traffic outside the zone.
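The membership rule in the Zone and Zoning entries can be modeled with plain sets: two ports see each other only if they share at least one zone. The zone names and WWPNs below are fabricated:

    # Toy zoning model: ports can communicate only if some zone contains both.
    zones = {
        "zone_db":  {"10:00:00:00:c9:aa:bb:01", "50:06:0e:80:10:00:00:10"},
        "zone_app": {"10:00:00:00:c9:aa:bb:02", "50:06:0e:80:10:00:00:11"},
    }

    def can_communicate(wwpn_a: str, wwpn_b: str) -> bool:
        """True when both ports are members of at least one common zone."""
        return any(wwpn_a in members and wwpn_b in members for members in zones.values())

    print(can_communicate("10:00:00:00:c9:aa:bb:01", "50:06:0e:80:10:00:00:10"))   # True
    print(can_communicate("10:00:00:00:c9:aa:bb:01", "50:06:0e:80:10:00:00:11"))   # False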
-back to top-

Evaluating this Course
Please use the online evaluation system to help improve our courses.

Learning Center Sign-in location:


https://learningcenter.hds.com/Saba/Web/Main
