
Hitachi storage systems

introduction
Scope
● High-end
● Midrange
● Fibre Channel (FC) SAN
● Block-level access
● Open Systems

Contents
● Place in the storage world: performance results
● Storage Classification through the time-line
● Trend: to get rid of classes
● VSP G Family relationship to predecessors
● Hitachi Unified Storage HUS 150 hardware configuration
● Hitachi Unified Storage HUS VM hardware configuration
● Hitachi Virtual Storage Platform VSP G400 hardware configuration
● Hardware summary: Midrange arrays
● Hitachi Virtual Storage Platform VSP hardware configuration
● Hitachi Virtual Storage Platform VSP G1000 hardware configuration
● Hardware summary: High-end arrays
● Features: High-end + VSP G
● Management Software
● Hitachi Storage Terminology
● Traditional Provisioning: the simplest case
● Traditional Provisioning: the simplest case notes
● Traditional Provisioning Evolution: LUSE and RG concatenation
● Traditional Provisioning: the VM case
● Traditional Provisioning: the VM case notes
● Dynamic Provisioning
● Replication local / remote
● ShadowImage
● Snapshots: CoW
● Snapshots: ThinImage
● Remote Replication
● TrueCopy typical configuration
● Universal Replicator specifics
● Universal Replicator typical configuration
● External storage overview
● External storage operations
● High-end anatomy: frontend
● High-end anatomy: backend
● Symmetry is the cornerstone. Putting everything together
● Thank you! Questions?
Place in the storage world
performance race: SPC-1 IOPS Top Ten

Rank | SPC-1 IOPS | Submission (date) | Test Sponsor | Tested Storage Product (TSP)
#1 | 5,120,098.98 | A00179 (15 June 2016) | DataCore Software Corporation | DataCore Parallel Server (Dual Node, Fibre Channel SAN)
#2 | 3,010,007.37 | A00163 (14 November 2015) | Huawei Technologies Co., Ltd. | Huawei OceanStor 18800 V3
#3 | 2,004,941.89 | A00153 (19 February 2015) | Hitachi Data Systems Corporation | Hitachi Virtual Storage Platform G1000 (with Hitachi Accelerated Flash)
#3 | 2,004,941.89 | A00162 (26 August 2015) | Hewlett Packard Enterprise | HP XP7 Storage (based on SPC-1 Result A00153)
#4 | 1,239,898.00 | A00137 (17 October 2013) | Kaminario, Inc. | Kaminario K2 (K2F00000700)
#5 | 1,201,961.83 | A00178 (13 June 2016) | DataCore Software Corporation | DataCore SANsymphony 10.0 (Dual Node, High Availability, Hyper-converged)
#6 | 1,005,893.43 | A00140 (24 January 2014) | Huawei Technologies Co., Ltd. | Huawei OceanStor 18800
#7 | 780,081.02 | A00130 (11 April 2013) | IBM Corporation | IBM Power 780 server (with SSDs)
#8 | 685,281.71 | A00154 (22 April 2015) | NetApp, Inc. | NetApp FAS8080 EX (All-Flash FAS)
#9 | 650,987.88 | A00149 (21 November 2014) | Huawei Technologies Co., Ltd. | Huawei OceanStor 6800 V3
#10 | 605,016.49 | A00170 (11 March 2016) | NEC Corporation | NEC Storage M710F

updated: 28 June 2016

Source: http://www.storageperformance.org/results/benchmark_results_spc1_top-ten/#spc1_top10_performance
Storage Classification
through the time-line

[Timeline diagram, 2007-2016]
High-end class (single storage): USP V and USP VM (2007-2008), VSP (2010), HUS VM (2012) and VSP G1000 (2014), converging into the VSP G family.
Midrange class: AMS 2000 (2008) and HUS 100 (2012), also converging into the VSP G family.
End-of-support dates marked on the diagram: 2016.09.30 and 2017.09.30.
Trend
to get rid of classes

The VSP G family introduces a single line of storage systems for
the High-end and Midrange niches.

This approach brings the major advantages of:

● Unification of the storage architecture;
● The same storage operating system & software;
● The power and features of the High-end arrays brought to
the lower systems;
● Compatible hardware: controllers, enclosures, drives,
cables, uninterruptible power supplies, FRUs, etc.;
● The possibility to upgrade storage arrays from Midrange to
High-end in the future.
VSP G Family
relationship to predecessors

[Diagram: VSP G200, G400, G600, G800 and G1000, with a license-key upgrade path between the G400 and G600, and their predecessors HUS 100, HUS VM and VSP]

Performance relationship*:
The VSP G200 is the smallest member of the family. It provides 4 times the performance of the HUS 110. The VSP
G400 delivers 4 times the performance of the HUS 130 array. The VSP G400 can be easily upgraded to a VSP G600
with only a license key. The VSP G600 is 3 times as powerful as the HUS 150. The VSP G800 provides 40% more
performance than the HUS VM or even the original VSP system. The VSP G1000 storage system is the flagship of
the family. It can deliver up to 4M IOPS with <1 ms response time.

* Based on:
https://www.hds.com/products/storage-systems/hitachi-virtual-storage-platform-g-series.html?WT.ac=us_mg_pro_hvspg1000
https://community.hds.com/people/mnalls/blog/2015/04/28/welcome-new-models-to-the-virtual-storage-platform-family
Hitachi Unified Storage
HUS 150 hardware configuration

Controllers: 2, Active-Active Symmetric
Cache size (per system): 32 GB
Fibre Channel host ports: 8 or 16 x 8 Gbps
Queue depth per FC port: 1024*
Backend: SAS-2 (6 Gbps x 32 links)
Maximum number of drives: 960 SFF/LFF, 480 FMD, 960 SSD
Maximum raw capacity: 3,840 TB (4 TB, 7200 RPM, HDD); 1,152 TB (1.2 TB, 10K RPM, HDD); 845 TB (1.6 TB, FMD); 384 TB (400 GB, SSD)
FMD options: 1.6 TB, SAS
SSD options: 200/400 GB, MLC, SFF, SAS
HDD SAS options: 300 GB (10/15K RPM, SFF); 600/900 GB (10K RPM, SFF); 2/3/4 TB (7200 RPM, LFF)
Expansion trays: 40 – Hitachi Accelerated Flash expansion trays (2U, 12 x flash module drives), standard disk expansion trays (2U, 24 x SFF disks), standard disk expansion trays (2U, 12 x LFF disks); 20 – maximum dense expansion trays (4U, 48 x LFF disks), ultra dense expansion trays (5U, 84 x LFF disks)
Full specification:
https://www.hds.com/products/storage-systems/specifications/hus-physical-characteristics.html
https://www.hds.com/products/storage-systems/specifications/hus-host-ports.html

* The number of SCSI commands the port can accommodate. NB! Some operating systems permit the queue depth to be exceeded, which can cause serious performance issues on the affected ports for all connected hosts.

Hitachi Unified Storage
HUS VM hardware configuration

Controllers: 2 (one virtual storage controller); Hierarchical Star Network, grid switches embedded
Cache size (per system): 256 GB
Fibre Channel host ports: 32 x 8 Gbps, or 48 without internal storage (virtualizer-only mode)
Queue depth: 2048 per FC port, 32 per LU*
Backend: SAS-2 (6 Gbps x 32 links)
Maximum number of drives: 1152 SFF/LFF, 576 FMD, 128 SSD
Maximum raw capacity: 4608 TB (4 TB, 7200 RPM, HDD); 1382 TB (1.2 TB, 10K RPM, HDD); 2026 TB (3.2 TB, FMD); 51 TB (400 GB, SSD); 64 PB including internal and external virtualized capacity
FMD options: 1.6 / 3.2 TB, SAS
SSD options: 200 / 400 GB, MLC, SFF, SAS
HDD SAS options: 300 GB (15K RPM, SFF); 600 / 900 GB / 1.2 TB (10K RPM, SFF); 3 / 4 TB (7200 RPM, LFF)
Expansion trays: 8 – Hitachi Accelerated Flash expansion trays (2U, 12 x flash module drives); 48 – standard disk expansion trays (2U, 24 x SFF disks); 24 – standard disk expansion trays (2U, 12 x LFF disks), maximum dense expansion trays (4U, 48 x LFF disks)
Full specification:
https://www.hds.com/products/storage-systems/specifications/hus-physical-characteristics.html
https://www.hds.com/products/storage-systems/specifications/hus-host-ports.html

* The recommended maximum number of SCSI commands to the port. See the "Open Systems Host Attachment Guide", MK-90RD7037-04, https://origin-download.hds.com/download/epcra/rd70374.pdf for the details.

Hitachi Virtual Storage Platform
VSP G400 hardware configuration

Controllers: 2 block controllers, Hierarchical Star Network
Cache size (per system): 128 GB
Fibre Channel host ports: 64 x 8 Gbps or 32 x 16 Gbps
Queue depth: 2048 per FC port, 32 per LU*
Backend: SAS-3 (12 Gbps x 16 links)
Maximum number of drives: 480 SFF/LFF, 480 SSD, 192 FMD
Maximum raw capacity: 2880 TB (6 TB LFF); 864 TB (1.8 TB SFF); 1344 TB (6.4 TB FMD); 154 TB (400 GB SFF SSD)
FMD options: 1.6 / 3.2 / 6.4 TB FMD
SSD options: 200 / 400 GB (SFF MLC)
HDD SAS options: 4 / 6 TB (LFF 7.2K); 600 GB / 1.2 TB / 1.8 TB (SFF 10K); 300 / 600 GB (SFF 15K)
Expansion trays: 16 – (2U: 12 LFF); 8 – (4U: 60 LFF/SFF); 16 – (2U: 24 SFF); 16 – (2U: 12 FMD)
Full specification:
https://www.hds.com/products/storage-systems/hitachi-virtual-storage-platform-g-series.html
https://www.hds.com/assets/pdf/virtual-storage-platform-family-datasheet.pdf

* The recommended maximum number of SCSI commands to the port. See the "Open Systems Host Attachment Guide", MK-90RD7037-04, https://origin-download.hds.com/download/epcra/rd70374.pdf for the details.

Hardware summary:
Midrange arrays

● Adaptive Modular Storage (AMS) 2500*: 2 controllers, Symmetric Active-Active, 32 GB cache, 16 x 8 Gbps FC host ports, SAS backend, 480 drives, 472 TB raw capacity, SATA II / SAS / SSD drives
● Hitachi Unified Storage (HUS) 150: 2 controllers, Symmetric Active-Active, 32 GB cache, 16 x 8 Gbps FC host ports, SAS-2 backend, 960 SFF/LFF**** drives, 3840 TB raw capacity, SAS / SSD / FMD drives
● Hitachi Unified Storage VM (HUS VM): 2 controllers**, Hierarchical Star Network, 256 GB cache, 32*** x 8 Gbps FC host ports, SAS-2 backend, 1152 SFF/LFF**** drives, 4608 TB raw capacity, SAS / SSD / FMD drives
● Virtual Storage Platform VSP G400: 2 block controllers, Hierarchical Star Network, 128 GB cache, 64 x 8 Gbps / 32 x 16 Gbps FC host ports, SAS-3 backend, 480 drives, 2880 TB raw capacity, SAS / SSD / FMD drives

* End of support on 2017.09.30
** 1 virtual storage controller
*** 48 ports without internal storage
**** Small and Large Form Factors

Significant enhancements and differences:

● A much wider variety of drives and drive enclosures appeared with the HUS family;
● SAS (3 Gbps) and SAS-2 (6 Gbps) are replaced by the SAS-3 (12 Gbps) backend interconnect;
● FMD – specially enhanced Flash Module Drives (Tomahawk) designed to minimize the impact of
garbage collection and bad-block management;
● The High-end architecture brought to the midrange world by the HUS VM delivers
more cache, more host ports, more performance and the ability to use external storage;
● The lower VSP G systems are the successors of the HUS VM arrays.
Hitachi Virtual Storage Platform
VSP hardware configuration

Controller directors: Front End 24, Back End 8; Hierarchical Star Network Gen 5
Cache / Control memory: 1024 GB / 32 GB
Fibre Channel host ports: 192 x 8 Gbps
Queue depth: 2048 per FC port, 32 per LU*
Backend type: SAS-2
Maximum number of drives: 2048 SFF; 1280 LFF
Maximum raw capacity: 3759 TB (3 TB LFF NL-SAS), 1770 TB (900 GB SFF SAS), 338 TB (1.6 TB FMD)
FMD options: 1.6 TB FMD
SSD options: 200 / 400 GB SFF SAS
HDD SAS options: 146 GB SFF SAS, 300 GB SFF SAS, 600 GB SFF SAS, 900 GB SFF SAS, 3 TB LFF SAS
Full specification:
https://www.hds.com/assets/pdf/hitachi-architecture-guide-virtual-storage-platform.pdf
https://www.hds.com/assets/pdf/hitachi-datasheet-virtual-storage-platform.pdf

* The recommended maximum number of SCSI commands to the port. See the "Open Systems Host Attachment Guide", MK-90RD7037-04, https://origin-download.hds.com/download/epcra/rd70374.pdf for the details.

Hitachi Virtual Storage Platform
VSP G1000 hardware configuration

Controller directors: Front End 24, Back End 8 (16 block controllers); Hierarchical Star Network Gen 7
Cache / Control memory: 2048 GB / 256 GB
Fibre Channel host ports: 192 x 16 Gbps
Queue depth: 2048 per FC port, 32 per LU*
Backend type: SAS-2 (6 Gbps x 128 links)
Maximum number of drives: 2304 SFF, 1152 LFF, 384 SSD, 576 FMD
Maximum raw capacity: 6912 TB (6 TB LFF), 4147 TB (1.8 TB SFF), 4032 TB (6.4 TB FMD), 302 TB (800 GB SSD)
FMD options: 1.6 / 3.2 / 6.4 TB FMD
SSD options: 400 / 800 GB SFF MLC SAS
HDD SAS options: 4 / 6 TB (LFF 7.2K); 600 / 900 GB, 1.2 / 1.8 TB (SFF 10K); 300 / 600 GB (SFF 15K)
Expansion trays: 12 (16U: 96 LFF), 12 (16U: 192 SFF), 12 (8U: 48 FMD)
Full specification:
https://www.hds.com/products/storage-systems/hitachi-virtual-storage-platform-g-series.html
https://www.hds.com/assets/pdf/virtual-storage-platform-family-datasheet.pdf

* The recommended maximum number of SCSI commands to the port. See the "Open Systems Host Attachment Guide", MK-90RD7037-04, https://origin-download.hds.com/download/epcra/rd70374.pdf for the details.
Hardware summary:
High-end arrays

● Universal Storage Platform V (USP V)*: Front End 14 / Back End 8 controller directors, Universal Star Network V (Gen 4)**, 512 GB / 28 GB cache / control memory, 224 x 4 Gbps FC host ports, FC backend, 1152 FC drives, 1134.5 TB raw capacity, SATA II / FC / SSD drives
● Virtual Storage Platform (VSP): Front End 24 / Back End 8 controller directors, Hierarchical Star Network (Gen 5)***, 1024 GB / 32 GB cache / control memory, 192 x 8 Gbps FC host ports, SAS-2 backend, 2048 SFF, 1280 LFF, 256 SSD, 192 FMD drives, 3759 TB raw capacity, SATA II / SAS / SSD / FMD drives
● VSP G1000: Front End 24 / Back End 8 controller directors (16 block controllers), Hierarchical Star Network (Gen 7), 2048 GB / 256 GB cache / control memory, 192 x 16 Gbps FC host ports, SAS-2 backend, 2304 SFF, 1152 LFF, 384 SSD, 576 FMD drives, 6912 TB raw capacity, SAS / SSD / FMD DC2 drives

* End of support on 2017.09.30
** Hitachi Universal Star Network crossbar switch architecture
*** Aka HiStar-E Network or Hi-Star-E PCI Express switched grid

Significant enhancements and differences:

● FC in the backend was replaced by SAS-2;
● SAS-3 is implemented in the backend of the lower models of the VSP G family;
● The Hierarchical Star Network architecture evolved;
● Cache & control memory have grown dramatically;
● Host FC ports evolved from 4 Gbps to 16 Gbps;
● FMD – specially enhanced Flash Module Drives designed to minimize the impact of garbage
collection and bad-block management;
● No more SATA II disks: replaced by near-line SAS drives;
● The maximum drive count doubled;
● From 5 racks for the USP V to 6 racks for the VSP G1000.
Features:
High-end + VSP G

Virtualization:
● Hitachi Dynamic Provisioning (HDP) – stripe
volumes across multiple physical Parity Groups of
any RAID level;
● Thin Provisioning (ThP) – implement volumes
with virtual storage capacity (DP-VOLs);
● Hitachi Dynamic Tiering (HDT) – distribute data
between different tiers of physical space;
● External Storage – virtualize external arrays behind
Hitachi storage systems;
● Global-active device (GAD) – transparent active-active
host access to a volume across two storage systems
at different sites.

* SVOS – Storage Virtualization Operating System
** Since VSP, asynchronous TC is deprecated in favor of HUR

Management Software
High-end + VSP G family

● GUI:
  – Hitachi Storage Navigator
  – Hitachi Device Manager (HDvM)
● CLI:
  – Hitachi Command Control Interface (CCI)
● Hitachi Command Suite (HCS):
  – Hitachi Device Manager (HDvM)
  – Hitachi Tuning Manager (HTnM)
  – Hitachi Dynamic Link Manager Advanced

(A minimal CCI configuration sketch follows.)
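
For orientation, below is a minimal, hypothetical CCI/HORCM configuration sketch; the instance number, service names, command-device serial, LDEV address and group name are invented for illustration, and the exact file layout should be checked against the CCI User and Reference Guide.

  # /etc/horcm0.conf  (hypothetical CCI instance 0 on a management host)
  HORCM_MON
  # ip_address   service   poll(10ms)   timeout(10ms)
  localhost      horcm0    1000         3000

  HORCM_CMD
  # command device of the array with serial number 350001
  \\.\CMD-350001:/dev/sd

  HORCM_LDEV
  # dev_group   dev_name   Serial#   CU:LDEV(LDEV#)   MU#
  SIGRP         dev001     350001    ae:01            0

  HORCM_INST
  # dev_group   ip_address   service
  SIGRP         localhost    horcm1

A second instance (horcm1.conf) describes the partner volumes; both are started with horcmstart.sh 0 1, after which pair commands reference the group (-g SIGRP) and the instance (-I0 / -I1).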
Hitachi Storage Terminology

● Parity Group (RAID Group, RG, rarely Array
Group) – a group of drives that forms the basic
storage unit in a storage subsystem. All drives
in a parity group must have the same physical
characteristics to form the RAID structure. On
High-end systems Parity Groups consist of
4 or 8 physical drives, which allows RAID-5
(3D+1P or 7D+1P), RAID-10 (2D+2D or 4D+4D)
and RAID-6 (6D+2P). Midrange arrays
support RAID-1, 5, 6, 10 and are much more
flexible in RAID Group creation. A rough
usable-capacity calculation for these layouts is sketched below.
● Dynamic Pool (DP-pool or Pool) – a set of
volumes carved from Parity Groups that is reserved as
the capacity pool for Dynamic Provisioning; DP-VOLs
allocate their pages from this pool.
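
As a quick illustration of the layouts above, the shell arithmetic below estimates usable versus raw capacity per Parity Group; the 1.2 TB drive size is just an assumed example figure.

  DRIVE_GB=1200   # assumed drive size in GB
  echo "RAID-5  (3D+1P): usable $((3 * DRIVE_GB)) GB of $((4 * DRIVE_GB)) GB raw"
  echo "RAID-5  (7D+1P): usable $((7 * DRIVE_GB)) GB of $((8 * DRIVE_GB)) GB raw"
  echo "RAID-6  (6D+2P): usable $((6 * DRIVE_GB)) GB of $((8 * DRIVE_GB)) GB raw"
  echo "RAID-10 (2D+2D): usable $((2 * DRIVE_GB)) GB of $((4 * DRIVE_GB)) GB raw"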
Traditional Provisioning
the simplest case

[Diagram: a host connected to the FC ports of the storage system; LUNs map to LDEVs carved from Raid Groups (RG) built on physical drives]

On the storage side:
1. Raid Groups are created on physical drives;
2. LDEVs are defined within the Raid Groups;
3. Appropriate Host Groups are created on the FC ports
and the LDEVs are mapped to them as LUNs.
On the server side:
4. The host discovers and uses the presented LUNs.

(A hedged CCI sketch of these steps follows.)
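
A minimal CCI (raidcom) sketch of steps 2-3, assuming Parity Group 1-1 already exists, a running HORCM instance 0 with a command device, and invented LDEV IDs, names and WWNs; option spellings should be verified against the Provisioning and CCI guides for the specific array and firmware.

  # Step 2: carve a 100 GB LDEV out of Parity Group 1-1
  raidcom add ldev -parity_grp_id 1-1 -ldev_id 0x1000 -capacity 100g -I0

  # Step 3: create a host group on port CL1-A, register the host HBA WWN,
  # and map the LDEV to it as LUN 0
  raidcom add host_grp -port CL1-A -host_grp_name linux_hosts -I0
  raidcom add hba_wwn -port CL1-A linux_hosts -hba_wwn 100000109b1a2b3c -I0
  raidcom add lun -port CL1-A linux_hosts -ldev_id 0x1000 -lun_id 0 -I0

  # Verify the mapping
  raidcom get lun -port CL1-A linux_hosts -I0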
Traditional Provisioning
the simplest case notes

● Backend (BED) workload balancing is absent or difficult,
as it is done manually; Raid Groups are utilized unevenly;
● Frontend (FED) workload balancing is absent or difficult,
as it is done manually; FC ports are utilized unevenly;
● Increased overhead of the used storage space, as it is
difficult to choose proper LDEV sizes.


Traditional Provisioning Evolution:
LUSE and RG concatenation

[Diagrams: a LUSE built from concatenated LDEVs presented as a single LUN, and a Raid-Group concatenation dispersing LDEVs across several RGs]

LUSE is a method to create large LUNs. A LUSE is
formed by concatenating LDEVs and is
presented to the FC port as a single LUN.

Disadvantages:
● Poor workload distribution;
● LDEV utilization is not equal;
● The first LDEV of a LUSE handles additional
metadata workload, which can lead to
performance issues.

Raid-Group concatenation is used to
disperse LDEVs across multiple Parity
Groups at the RAID-stripe level on a
round-robin basis.

A good solution for hosts without a
volume manager and with low requirements
for data migration tasks.
Traditional Provisioning:
the VM case

[Diagram: LUNs from several Raid Groups combined into a striped volume by the host volume manager]

On the storage side:
1. Raid Groups are created on physical drives;
2. A number of LDEVs is defined within the Raid Groups;
3. All LDEVs are mapped to FC ports as LUNs.
On the server side:
4. A volume manager takes control of the LUNs;
5. A striped volume is built across them.

(A hedged host-side striping sketch follows.)
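
As a host-side illustration of steps 4-5, a Linux LVM sketch; the multipath device names and sizes are hypothetical, and any volume manager capable of striping would do.

  # Step 4: hand the four LUNs (multipath devices) to the volume manager
  pvcreate /dev/mapper/mpatha /dev/mapper/mpathb /dev/mapper/mpathc /dev/mapper/mpathd
  vgcreate vg_data /dev/mapper/mpatha /dev/mapper/mpathb /dev/mapper/mpathc /dev/mapper/mpathd

  # Step 5: build a volume striped across all four LUNs (and therefore across four Raid Groups)
  lvcreate --type striped -i 4 -I 256k -L 400G -n lv_data vg_data
  mkfs.xfs /dev/vg_data/lv_data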


Traditional Provisioning:
the VM case notes

Almost perfect, but:
● Back-end and front-end workloads are balanced
only for particular Raid Groups and FC ports,
not for the whole storage system;
● Almost all balancing of the storage system is
performed manually;
● LDEV management may be complicated: there
are dozens of Host Groups with hundreds of
LUNs, hundreds of Raid Groups and thousands
of LDEVs.

[Screenshot: the XP-Scope utility showing part of the Raid-Group map of a classical high-end storage array, served in total by a single DKA pair]
Dynamic Provisioning

[Diagram: Raid Groups provide LDEVs to a DP pool; DP-VOLs are carved from the pool and mapped to the host as LUNs]

Dynamic Provisioning greatly simplifies the
administration of LUNs/LDEVs/Raid Groups.
Workload balancing is much better and is
done automatically.

1. On the storage side, Raid Groups are created on physical drives;
2. A number of LDEVs is defined within the Raid Groups;
3. A DP pool is formed from these LDEVs (pool volumes);
4. DP-VOLs are created on the pool;
5. The DP-VOLs are mapped to FC ports as LUNs for the host.

(A hedged raidcom sketch follows.)
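
A minimal raidcom sketch of steps 3-5, assuming pool-volume LDEVs 0x2000 and 0x2001 already exist and the host group from the previous slides is in place; the pool ID, names and option spellings are illustrative and should be checked against the Provisioning Guide and CCI reference.

  # Step 3: form a DP pool from the pool-volume LDEVs
  raidcom add dp_pool -pool_id 1 -pool_name POOL01 -ldev_id 0x2000 -I0
  raidcom add dp_pool -pool_id 1 -ldev_id 0x2001 -I0

  # Step 4: create a 2 TB virtual DP-VOL backed by the pool
  raidcom add ldev -pool 1 -ldev_id 0x3000 -capacity 2t -I0

  # Step 5: map the DP-VOL to the host group as LUN 1
  raidcom add lun -port CL1-A linux_hosts -ldev_id 0x3000 -lun_id 1 -I0

  # Check pool capacity and subscription
  raidcom get dp_pool -I0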
Replication
local / remote

● ShadowImage – in-system replication,
mirroring;
● CoW Snapshot (SS) – traditional Copy-on-Write
snapshots;
● Hitachi Thin Image (HTI) – the improved
CoW/CaW snapshots;
● TrueCopy (TC) – remote replication in
synchronous and asynchronous* modes;
● Hitachi Universal Replicator (HUR) – journal-based
asynchronous remote replication.

* Since VSP, asynchronous TC is deprecated in favor of HUR
ShadowImage

[Diagram: CCI (CLI) and Storage Navigator (GUI) drive P-VOL/S-VOL pair operations (copy, split, resync) through a command device on the array]

The ShadowImage (SI) feature supports cascaded pairs,
consistency groups and different methods of
resynchronization.

P-VOL – the primary volume, the source of data;
S-VOL – the secondary volume, the data destination.

Storage Navigator provides a GUI for pair operations:
management, monitoring and troubleshooting;
CCI covers the same tasks and adds scripting;
Command device – a special LUN interface to the
array, needed for CCI to issue commands.

Volume pair operations:
Creation: SMPL → PAIR
Suspension: PAIR → PSUE (suspended-error state)
Splitting: PAIR → PSUS (suspended-split state)
Resynchronization: PSUE/PSUS → PAIR
Deletion: PSUS → SMPL
The COPY status indicates any data-copying process
during pair operations.

SI operates asynchronously to the host I/O. To get a consistent data copy, you must wait for the split
operation to complete. (A hedged CCI pair-lifecycle sketch follows.)
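
A sketch of the SI pair lifecycle with CCI, assuming a device group SIGRP defined in the HORCM files of instances 0 and 1 (the group name and instance numbers are illustrative):

  # Create the pair (SMPL -> COPY -> PAIR); -vl marks the local instance's volume as P-VOL
  paircreate -g SIGRP -vl -I0

  # Wait for the initial copy to finish, then check the status and copy progress
  pairevtwait -g SIGRP -s pair -t 3600 -I0
  pairdisplay -g SIGRP -fcx -I0

  # Split to take a point-in-time copy (PAIR -> PSUS), resynchronize later (PSUS -> PAIR)
  pairsplit -g SIGRP -I0
  pairresync -g SIGRP -I0

  # Delete the pair (back to SMPL)
  pairsplit -g SIGRP -S -I0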
Snapshots:
CoW

[Diagram: P-VOL, S-VOL/V-VOL and the SS Pool, managed via the command device and the Storage Navigator GUI]

Hitachi Copy-on-Write Snapshot
Software (SS) – a virtual, point-in-time
copy of a volume. It is based on pool
technology and stores changed data
blocks only; therefore a snapshot can be
substantially smaller than the source volume.

V-VOL (Virtual Volume) is the synonym of S-VOL
in SS terms. Up to 64 V-VOLs can be defined for a single P-VOL.

How SS works:
1. The V-VOL is made up of pointers to the data in the P-VOL;
2. A host wants to update the P-VOL with NEW data;
3. The OLD data block of the P-VOL is saved in the Pool and the NEW data is
stored to the P-VOL;
4. The corresponding V-VOL pointer is retargeted to the saved OLD
data in the Pool;
5. The host gets an acknowledgement of the write-operation completion.
Snapshots:
ThinImage

[Diagram: P-VOL, S-VOL/V-VOL and the HTI Pool, managed via the command device and the Storage Navigator GUI]

Hitachi Thin Image (HTI) – the
renewed snapshot technology,
which supports both Copy-on-Write
and Copy-after-Write (CaW)
methods. HTI supports 1024
snapshots and is intended to be
faster than SS CoW.

How HTI works (CaW):
1. The V-VOL is made up of pointers to the data in the P-VOL;
2. A host wants to update the P-VOL with NEW data;
3. The storage system replies with an acknowledgement immediately;
4. The OLD data block of the P-VOL is saved in the Pool and the NEW data
is stored to the P-VOL;
5. The corresponding V-VOL pointer is retargeted to the
saved OLD data in the Pool.

(A hedged raidcom snapshot sketch follows.)
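
A sketch of taking an HTI snapshot with raidcom; the snapshot-group name, pool ID and LDEV IDs are invented, and the option spellings are from memory and should be double-checked against the Thin Image and CCI documentation.

  # Associate a P-VOL and a V-VOL with an HTI pool under a snapshot group
  raidcom add snapshot -ldev_id 0x1000 0x1100 -pool 2 -snapshotgroup snapgrp1 -I0

  # Store the point-in-time image for the whole snapshot group
  raidcom modify snapshot -snapshotgroup snapgrp1 -snapshot_data create -I0

  # List the snapshot status of the P-VOL
  raidcom get snapshot -ldev_id 0x1000 -I0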
Remote Replication

● TrueCopy (TC) – remote replication in synchronous* mode for metro-mirror distances;
● Hitachi Universal Replicator (HUR) – journal-based asynchronous replication for worldwide distances.

● Volumes form pairs: the P-VOL is the primary and the S-VOL is the secondary volume of the PAIR.
● In normal circumstances the P-VOL serves write operations while the S-VOL rejects them,
staying in read-only mode.
● A PAIR can be split, resynchronized, reverse-resynchronized, and returned to unpaired status.
● TrueCopy supports consistency groups to perform copy operations simultaneously.

Terminology:
● MCU – Main control unit, the system at the primary site;
● RCU – Remote control unit, the system at the remote site;
● Target port – ordinary port for host connections;
● MCU Initiator port – outgoing replication port;
● RCU Target port – incoming replication port.

Pair statuses:
● SMPL – the volume is not assigned to a TC pair;
● COPY – the initial copy is in progress;
● PAIR – the volumes are paired; the initial copy completed successfully;
● PSUS – Suspended-split: the pair was split by a user command;
● PSUE – Suspended-error: the pair was split by an error.

(A hedged CCI sketch for a synchronous TC pair follows.)

* Since VSP, asynchronous TC is deprecated in favor of HUR
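
A sketch of a synchronous TrueCopy pair with CCI, assuming a device group TCGRP defined in the HORCM files of the MCU-side instance 0 and the RCU-side instance 1, and replication ports already configured; the group name, fence level and instance numbers are illustrative.

  # Create the TC pair from the MCU side; -f sets the fence level (data / status / never)
  paircreate -g TCGRP -vl -f never -I0

  # Monitor the initial copy and the resulting pair status
  pairdisplay -g TCGRP -fcx -I0

  # Planned maintenance: split the pair, then resynchronize it
  pairsplit -g TCGRP -I0
  pairresync -g TCGRP -I0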
TrueCopy
typical configuration

[Diagram: a main site and a remote site, each with a cluster host running CCI and a command device, connected over a LAN and a SAN. Host Target ports serve the servers; an MCU Initiator port on the main storage system connects through the SAN to an RCU Target port on the remote system. P-VOLs on the MCU storage system are paired with S-VOLs on the RCU storage system within a consistency group. The data write flow goes from the MCU to the RCU, CCI management commands run at both sites, and the S-VOL allows read-only data access.]
Universal Replicator
specifics

● Updates sent from a host to the primary production volume on the
local system are copied to a local journal volume;
● The remote system "pulls" data from the journal volume across the
communication link to the secondary volume.

Terminology:
● Master / Restore Journal volume – the journal volume on the local /
remote system;
● Master / Restore Journal – a structure consisting of one or more data
volumes and journal volumes. Journals are used for the same
purposes as the Consistency Groups of TrueCopy and ShadowImage: to
guarantee data consistency across multiple pairs. Setting CCI
consistency group numbers equal to journal numbers is a best practice;
● Mirror – the relationship between a Master and a Restore journal; up to
4 Mirrors can be used;
● Initial Copy – the process of copying all data from the P-VOL to the S-VOL.
Journal volumes are not used during this process.

(A hedged CCI sketch for an HUR pair follows.)
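
A sketch of an HUR pair with CCI, assuming journals already exist on both arrays and a device group HURGRP is defined in the HORCM files; the journal IDs, group name and instance numbers are illustrative.

  # Create the UR pair: -f async, -jp = master (primary) journal ID, -js = restore (secondary) journal ID
  paircreate -g HURGRP -vl -f async -jp 0 -js 0 -I0

  # Check the pair state and journal usage
  pairdisplay -g HURGRP -fcxe -I0
  raidcom get journal -I0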
Universal Replicator
typical configuration

[Diagram: a main site and a remote site, each with a cluster host running CCI and a command device, connected over a LAN and a SAN. The P-VOL and master journal volume sit on the MCU storage system; the S-VOL and restore journal volume sit on the RCU storage system. An MCU Initiator port connects through the SAN to an RCU Target port; the master and restore journals form a mirror, and the P-VOL and S-VOL form the pair. The data write flow goes from the P-VOL to the master journal, is pulled into the restore journal and applied to the S-VOL; CCI management commands run at both sites, and the S-VOL allows read-only data access.]
External storage
overview

● A large number of external storage arrays is supported (Hitachi, HP, IBM, EMC, etc.);
● Ports on the storage system must be set into the special External-port mode (Target
ports turned into initiators);
● Inflow control – an option that specifies whether host writes to cache are rejected
when writes to the external storage cannot be completed; it prevents cache overflow
when the external storage is inaccessible;
● Cache Mode – an option that controls when the write-complete response is sent
to the server. If enabled, the write-complete acknowledgement is sent as soon as the
data is received by the VSP G cache; if disabled, the acknowledgement is sent only
once the data has been accepted by the external storage. In other words, these are
the write-back and write-through cache modes;
● Load-Balancing – several modes are available depending on the external storage
capabilities: Multi, Single, APLB*;
● External storage is compatible with all the products and features: Dynamic
Provisioning, ShadowImage, TrueCopy, etc.;
● E-LUN – a LUN presented from external storage.

* APLB – Active Path Load Balancing for ALUA or Active/Active asymmetric devices
External storage
operations

[Diagram: external LUNs imported through External-mode ports become external Raid Groups alongside the internal Raid Groups; LDEVs created on both are mapped to ordinary Target ports and accessed by the host through the SAN]

1. External mode is enabled on the FC ports of the Hitachi array;
2. LUNs are created on the external arrays and exported to the Hitachi
storage through the SAN;
3. The Hitachi storage imports the external LUNs through the
External-mode ports;
4. The external LUNs are treated as Raid Groups;
5. LDEVs are created within the internal and external Raid Groups and
mapped to the ordinary Target ports as LUNs;
6. The host accesses the LUNs as usual.

(A hedged raidcom sketch of external-storage mapping follows.)
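
A sketch of steps 3-5 with raidcom; the External-mode port, external WWN, path-group and group IDs are invented, and the option names should be verified against the Universal Volume Manager and CCI documentation.

  # Step 3: discover the external array and its LUNs behind the External-mode port
  raidcom discover external_storage -port CL5-A -I0
  raidcom discover lun -port CL5-A -external_wwn 50060e8005fa0f36 -I0

  # Step 4: map external LUN 0 as external group 1-1
  raidcom add external_grp -path_grp 1 -external_grp_id 1-1 -port CL5-A -external_wwn 50060e8005fa0f36 -lun_id 0 -I0

  # Step 5: carve an LDEV from the external group and present it like any internal LDEV
  raidcom add ldev -external_grp_id 1-1 -ldev_id 0x4000 -capacity 500g -I0
  raidcom add lun -port CL1-A linux_hosts -ldev_id 0x4000 -lun_id 2 -I0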
High-end anatomy
Pre-VSP front-end

[Diagram: USP V CHA and FC ports layout. FC ports 1A-8R are spread across Cluster 1 and Cluster 2; each FC CHA board pairs groups of FC ports with its own microprocessors (MPs).]
High-end anatomy
Pre-VSP backend

[Diagram: DKA-to-HDU map on the USP V. Eight DKAs in the DKC connect to the left and right HDUs of the DKU-L2, DKU-L1, DKU-R0, DKU-R1 and DKU-R2 disk units.]

DKA – Disk Adapters (backend processors)
DKC – Disk Controller
HDU – Hard Disk Unit
DKU – Disk Unit (drive chassis)
Symmetry is the cornerstone
Putting everything together

[Diagram: a host with two HBAs (WWN 10:00:00:00:c9:a3:24:2b and WWN 10:00:00:00:c9:a3:24:3c) connects through two SAN fabrics (FOO and BAR) to CHA ports on Cluster 1 (1A, 1E, ...) and Cluster 2 (2A, 2E, ...). Each port carries the same host group (01: My_host_group) with the host WWN registered. DP-VOLs addressed as LDKC:CU:LDEV (e.g. 00:ae:01, 00:ae:02) are served from a DP pool whose LDEVs are spread over the Raid Groups, which are in turn distributed across the DKAs according to the DKA map.]
High-end anatomy
VSP and beyond: VSD boards

● Each FED board has a Data Accelerator chip
("DA", or "LR" for local router) instead of 4 MPs.
The DA routes host I/O jobs to the VSD board
that owns that LDEV and performs DMA
transfers of all data blocks to/from cache.
● Each BED board has 2 Data Accelerators
instead of 4 MPs. They route disk I/O jobs to
the owning VSD board and move data to/from
cache.
● Most MP functions have been moved from the
FED and BED boards to new multi-purpose
VSD boards. No user data passes through the
VSD boards.

Source: the VSP Architecture Overview, HDS Technical Training


High-end anatomy:
VSP front-end

[Diagram: VSP front-end architecture]

Source: the VSP Architecture Overview, HDS Technical Training
High-end anatomy
VSP backend

[Diagram: DKU and HDU map, front view, dual chassis. Chassis #1 (racks RK-12, RK-11, RK-10) and Chassis #0 (racks RK-00, RK-01, RK-02) each hold a Logic Box (DKC-1 / DKC-0) and their DKUs with front HDUs. DKC-0 BEDs 1 and 2 serve DKUs 00-07; DKC-1 BEDs 3 and 4 serve DKUs 10-17.]

Source: the VSP Architecture Overview, HDS Technical Training
High-end anatomy
VSP backend

[Diagram: DKU and HDU map, rear view, dual chassis. The same racks and DKUs as in the front view, showing the rear HDUs of DKUs 00-07 (Chassis #0, DKC-0) and DKUs 10-17 (Chassis #1, DKC-1).]

Source: the VSP Architecture Overview, HDS Technical Training
Thank You!
Questions?

