Enterprise Virtual Arrays

The EVA family: a technical overview
Choong Ming Tze, Technical Consultant (ming-tze.choong@hp.com), May 2007

© 2007 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice

Agenda
• The needs of today's businesses
• The EVA family
• The value of virtualization
• Tiered storage within an EVA
• EVA management SW
• EVA features
• EVA solutions
• EVA services

Today's customer challenges
CIO's top issues
• Business environment is volatile and unpredictable
• Intense competitive pressure
• IT budgets constrained
Key storage themes
• Consolidation
• Simplification
• Guaranteed service levels

HP StorageWorks EVA Portfolio
Powerfully Simple – Comprehensive Solutions
• CIO – better manage TCO. Total solution: HP StorageWorks EVA, EML E-Series tape libraries, Enterprise File Services Clustered Gateway
• End user – real-time information and fast access to information: Fast Recovery Solution for Windows 2003, File System Extender; powerful, flexible, scalable
• Administrator – remove operational tasks: Command View EVA, Command View Tape Library, Business Copy EVA, Continuous Access EVA, Cluster Extension EVA, Metrocluster, ContinentalCluster; powerfully simple
Value: agility and simplicity

HP StorageWorks product portfolio

The B-Series SAN switch family
• SAN Director 4/256: 32-384 4Gb ports, FICON support
• HP 400 MP-Router (16 FC + 2 IP ports)
• SAN Switch 4/64 (32-64 ports)
• SAN Switch 4/32 (16-32 ports)
• SAN Switch 4/8 & 4/16 (8 and 16 ports)
• 4/48 port blade and HP MPR blade for the 4/256
• Brocade 4Gb SAN Switch for HP c-Class and p-Class BladeSystem
• Fabric Manager with enhanced capabilities
• Common Fabric OS 5.x across the family

The C-Series SAN switch family
• Small & medium-sized business: MDS 9020 (runs FabricWare), MDS 9120 and 9140, MDS 9216 and 9216i
• Enterprise & service provider: MDS 9506, MDS 9509, MDS 9513
• MDS 9000 modules: 12-, 24- and 48-port 1/2/4Gb FC; 4-port 10Gb FC; 14-, 16- and 32-port 1/2Gb FC; IP storage services (iSCSI and FCIP); SSM (virtualization, intelligent fabric applications); Supervisor 1 and 2
• Management/OS: Cisco Fabric Manager, Cisco MDS 9000 Family SAN-OS

StorageWorks market coverage
• Enterprise (XP 12000, XP 10000): plugged into the data center fabric to maximize scalability and availability; highest connectivity, scalability and efficiency; disaster tolerance solutions; universal connectivity and heterogeneity
• Departmental (EVA8000, EVA6000, EVA4000): high availability; scalable modularity; ease of administration; price/scalability
• Workgroups (MSA1500, MSA1510i, MSA1000): flexible and scalable entry-level Fibre Channel storage; price/capacity
• Branch office and direct attach (MSA500, MSA20/30/50/60/70): simple, affordable, fault-tolerant; high-performance internal/external storage with Smart Array technologies; clustering & shared storage with minimal infrastructure; DtS conversion; price/availability

The HP StorageWorks Enterprise Virtual Array (EVA)

The EVA family
Leading in array virtualization and ease of use
• A revolutionary redesign of the proven EVA3000 and EVA5000 storage arrays
• Three family members (EVA4000, EVA6000, EVA8000) covering a broad range of prices, storage capacities and performance
• 4Gbps FC controller
• iSCSI connectivity option
• Concurrent support of various FC and FATA disks in the same disk enclosures
− 72, 146, 300GB FC
− 250, 400, 500GB FATA
• Virtual RAID arrays: Vraid0, Vraid1, Vraid5
• Industry-standard multi-path failover support: MPIO, PVLinks, DMP, etc.
• Native HBA support (Sun, IBM, HP)
• Local and remote copy support
• Broad range of solutions and integrations available
*Note: Legacy 36GB FC and 250, 400GB FATA disks are still fully supported

The EVA family specifications

|                       | EVA4000 | EVA6000 | EVA8000 |
|-----------------------|---------|---------|---------|
| Controller            | HSV200  | HSV200  | HSV210  |
| Cache size            | 4GB     | 4GB     | 8GB     |
| RAID levels           | VRAID0, VRAID1, VRAID5 (all models) |||
| Supported OS          | Windows 2000/2003, HP-UX, Linux, IBM AIX, OpenVMS, Tru64, Sun Solaris, VMware, NetWare (all models) |||
| Supported drives      | FC: 72, 146GB 15k rpm; 146, 300GB 10k rpm. FATA: 250, 400, 500GB (all models) |||
| Host ports            | 4       | 4       | 8       |
| Device ports          | 4       | 4       | 8       |
| Mirror ports          | 4       | 4       | 8       |
| Backend loop switches | 0       | 2       | 4       |
| # of drives           | 8-56    | 16-112  | 8-240   |
| # of enclosures       | 1-4     | 4-8     | 2-18    |
| Max capacity          | 28TB    | 56TB    | 120TB   |
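The max-capacity row follows directly from the drive counts: it is the maximum number of drives multiplied by the largest supported drive (500GB FATA). A quick illustrative sketch:

```python
# Raw max capacity per model = max drive count x largest drive (0.5TB FATA)
models = {"EVA4000": 56, "EVA6000": 112, "EVA8000": 240}
for name, max_drives in models.items():
    print(f"{name}: {max_drives * 0.5:.0f}TB raw")  # 28TB, 56TB, 120TB
```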

The EVA4000 architecture
• Heterogeneous servers and a Windows management server attach through two fabrics (Fabric 1 and Fabric 2)
• 4Gbps front end, 2 HSV200 controllers
• 1 to 4 disk enclosures, 8 to 56 FC disks

The EVA6000 architecture
• Heterogeneous servers and a Windows management server attach through two fabrics (Fabric 1 and Fabric 2)
• 4Gbps front end, 2 HSV200 controllers
• 2 FC loop switches
• 4 to 8 disk enclosures, 16 to 112 FC disks

The EVA8000 architecture
• Heterogeneous servers and a Windows management server attach through two fabrics (Fabric 1 and Fabric 2)
• 4Gbps front end, 2 HSV210 controllers
• 4 FC loop switches
• 2 to 18 disk enclosures (12 in the first rack, 6 in the utility cabinet), 8 to 240 FC disks

EVA performance

Controller limits, 100% cache hits (based on 2GB controllers):

| Workload           | EVA5000 | EVA8000 |
|--------------------|---------|---------|
| 512B reads (IOPS)  | 141,000 | 215,000 |
| 256kB reads (MB/s) | 700     | 1,600   |

Maximum data transfer rates for 128KB sequential workloads (MB/s):

| Workload      | EVA4000 | EVA6000 | EVA5000 | EVA8000 |
|---------------|---------|---------|---------|---------|
| Reads         | 340     | 770     | 530     | 1,430   |
| Vraid1 writes | 160     | 355     | 165     | 530     |
| Vraid5 writes | 260     | 515     | 153     | 525     |

Throughput (IOPS) under random workloads (4KB transfers @ <30ms):

| Workload              | EVA4000 | EVA6000 | EVA5000 | EVA8000 |
|-----------------------|---------|---------|---------|---------|
| Reads                 | 14,500  | 27,600  | 50,000  | 55,900  |
| Vraid1 writes         | 8,000   | 15,200  | 20,600  | 22,300  |
| Vraid5 writes         | 4,400   | 8,000   | 12,200  | 13,000  |
| Vraid1 OLTP (60r/40w) | 11,300  | 21,200  | 30,400  | 32,600  |
| Vraid5 OLTP (60r/40w) | 7,000   | 13,900  | 22,100  | 23,300  |

The benefits of EVA virtualization

Traditional disk array approach
• A RAID controller binds physical disks into small, fixed disk groups, each with a single RAID level (e.g., a RAID1 group and a RAID5 group)
• Dedicated spare disks are set aside per group and sit idle until a failure occurs
• LUNs are carved out of a specific disk group and presented to hosts (LUN 0, LUN 1, LUN 2, ...)
• As the array grows, the RAID levels live in separate small disk groups with dispersed LUNs; beware of hot spots, since each LUN is striped only across the few disks of its own group

HP virtual array approach
Disk groups, segments, block mapping tables & sparing
• The virtual array controller divides all disks of a disk group into segments, tracks them in a block mapping table, and reserves distributed spare capacity instead of dedicated spare disks

HP virtual array approach: disk groups
An EVA can have
• 1 to 16 disk groups
• 8 to 240 disks per disk group

HP virtual array approach: LUN/vdisk allocation
• The virtual array controller presents LUNs of different RAID levels, e.g. LUN 1 (RAID1) and LUN 2 (RAID5), from the same disk group

HP virtual array approach: LUNs/vdisks and their allocation
An EVA can have
• from 1 to 1024 virtual disks/LUNs
• LUN sizes from 1GB to 2TB, in steps of 1GB
• any combination of VRAID 0, 1, 5

HP virtual array approach: capacity upgrade and load leveling
• When disks are added, the disk group grows and existing LUNs are automatically re-leveled across all disks, old and new

HP virtual array approach: all RAID levels within one disk group
• Optimal striping and no hot spots: LUN 1 (RAID0), LUN 2 (RAID1) and LUN 3 (RAID5) all stripe across every disk in the group, with spare capacity and the block mapping table distributed across the group

HP virtual array approach: online volume growth
• A presented LUN can be grown online; the new capacity is allocated from the disk group's free space without reconfiguring the array
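The slides above boil down to one data structure. The sketch below is a toy model with invented structures, not HP firmware: it only shows how a block mapping table lets several LUNs share one disk group while striping across every disk; real Vraid adds mirroring/parity and distributed sparing on top of this idea.

```python
# Toy block mapping table: each virtual segment of a vdisk maps to a
# physical (disk, segment) pair spread round-robin across all disks.
class DiskGroup:
    def __init__(self, num_disks: int, segments_per_disk: int):
        # Interleave (disk, segment) pairs so allocations stripe across disks
        self.free = [(d, s) for s in range(segments_per_disk)
                     for d in range(num_disks)]

    def create_vdisk(self, num_segments: int) -> dict:
        """Allocate a vdisk as a block mapping table:
        virtual segment index -> physical (disk, segment)."""
        return {i: self.free.pop(0) for i in range(num_segments)}

group = DiskGroup(num_disks=8, segments_per_disk=1000)
lun1 = group.create_vdisk(16)  # stripes over all 8 disks
lun2 = group.create_vdisk(16)  # shares the same group, also fully striped
assert {disk for disk, _ in lun1.values()} == set(range(8))
assert {disk for disk, _ in lun2.values()} == set(range(8))
```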

The value of the EVA virtualization
Lower management and training costs
• Easy-to-use, intuitive web interface
• Unifies storage into a common pool
• Effortlessly create virtual RAID volumes (LUNs)
• Significantly increase utilization and reduce stranded capacity
Improved application availability
• Enterprise-class availability
• Dynamic pool and Vdisk (LUN) expansion
• No storage reconfiguration downtime
Improved performance – service more customers
• Vraid striping across all disks in a disk group
• Eliminates I/O hot spots
• Automatic load leveling
Buy less – higher utilization means less capacity to purchase up front

EVA iSCSI Connectivity Option

EVA iSCSI connectivity option
An integrated EVA solution
− Mounted in the EVA cabinet
− Provides LUNs to iSCSI hosts
− Managed and controlled by Command View
− Flexible connectivity: fabric and direct attach on EVA 4/6/8000; fabric attach only on EVA 3/5000
Single or dual iSCSI option
− A324A single router configuration
− A325A upgrade to dual router configuration
High performance solution
− 35K target IOPS
OS support
− Microsoft Windows 2003 SP1
− Red Hat Enterprise Linux 4 update 3 (kernel 2.6.9-34) and 3 update 5
− SUSE Linux Enterprise Server 9 SP3 (2.6.5 kernel) and 8 SP4

[Diagram: single and dual mpx100 iSCSI router configurations in the EVA cabinet, showing each router's management (MGMT), Fibre Channel (FC1, FC2) and Gigabit Ethernet (GE1, GE2) ports cabled to the two HSV controllers]
EVA iSCSI connectivity option: MPX100 FC/iSCSI bridge
• OEM QLogic iSR-6140
• 533MHz PowerPC CPU
• 128MB DDR2 memory
• Internal buses are 133MHz, 64-bit PCI-X
• Single power supply
• Physical footprint is 1U high, half-rack width; allows a dual-redundant mpx100 pair in a 1U rack slot
• The mpx100 is the FRU level
• Dual 2Gb/s FC (QLogic ISP2322 chip)
• Dual GbE RJ45 (QLogic ISP4022 iSCSI/TOE chip)
• Serial console (defaults to 115200N81)
• 100Mbit/s RJ45 management port

EVA iSCSI connectivity option: the iSCSI host driver (iSCSI initiator)
The iSCSI driver
• Resides on the host and provides host-to-storage connectivity over an IP network
• Uses the host's existing TCP/IP stack, network drivers and network interface cards (NICs) to provide the same functions as native SCSI drivers and host bus adapter (HBA) cards
• Functions as a transport for SCSI commands and responses between the host and the MPX100 on an IP network; the MPX100 then translates the SCSI commands and responses and communicates directly with the target Fibre Channel storage devices
[Diagram: host stack (applications, file system, block device, SCSI generic, iSCSI driver, TCP/IP stack, NIC) reaching the mpx100 iSCSI target over the IP network; the mpx100 attaches to the EVA via direct connect or SAN, alongside conventional direct-attached storage through a SCSI adapter (HBA)]
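Conceptually, the driver's transport role is just wrapping SCSI CDBs into protocol data units and shipping them over TCP. The sketch below illustrates only that layering idea; it is not the real iSCSI wire protocol (RFC 3720 specifies 48-byte headers, a login phase, sequence numbers and more), and the PDU format here is invented.

```python
import socket
import struct

def send_scsi_command(sock: socket.socket, lun: int, cdb: bytes) -> None:
    """Wrap a SCSI CDB in a toy PDU (opcode, LUN, CDB length) and send it."""
    header = struct.pack(">BBH", 0x01, lun, len(cdb))
    sock.sendall(header + cdb)

# READ(10) CDB: opcode 0x28, flags, 4-byte LBA, group, 2-byte length, control
read10 = struct.pack(">BBIBHB", 0x28, 0, 0, 0, 1, 0)

# Demo over a local socket pair rather than a live target (port 3260 would
# be the standard iSCSI port on a real network).
initiator, target = socket.socketpair()
send_scsi_command(initiator, lun=0, cdb=read10)
pdu = target.recv(64)
opcode, lun, cdb_len = struct.unpack(">BBH", pdu[:4])
assert pdu[4:4 + cdb_len][0] == 0x28  # the "target" sees the READ(10)
```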

EVA iSCSI connectivity option: fabric attached (all EVA models supported)
• Single iSCSI router configuration (Windows and Linux): one mpx100; FC servers (any supported OS) attach through Fibre Channel fabrics 1 and 2, iSCSI servers (Windows or Linux) attach through the iSCSI IP network
• Dual iSCSI router configuration (Windows only): two mpx100s, one per fabric, for redundant iSCSI paths
• In both configurations the Command View EVA management server, the mpx100(s) and both EVA controllers are connected over a management IP network

EVA iSCSI connectivity option: direct attached (EVA 4/6/8000 and Windows only supported)
• Single or dual iSCSI router configuration: the mpx100 FC ports connect directly to EVA host ports rather than through a fabric; iSCSI servers (Windows) reach the mpx100 over the iSCSI IP network
• FC servers (any supported OS) continue to attach through Fibre Channel fabrics 1 and 2
• As in the fabric-attached case, management runs over a separate management IP network


Configuration support overview

| EVA model | iSCSI initiator | EVA firmware | Single mpx100 | Dual mpx100 | Fabric attached mpx100 | Direct attached mpx100 2) |
|-----------|-----------------|--------------|---------------|-------------|------------------------|---------------------------|
| EVA 4/6/8000 | Windows | ≥ XCS 5.1x0, ≥ XCS 6.000, ≥ XCS 5.031 | √ | √ | √ | √ (all other EVA ports will run in loop mode as well and therefore only support directly connected Windows servers) |
| EVA 4/6/8000 | Linux | ≥ XCS 5.1x0, ≥ XCS 6.000, ≥ XCS 5.031 | √ | √ 1) | √ | √ (same loop-mode restriction as above) |
| EVA 3/5000 | Windows | ≥ VCS 4.004, ≥ VCS 3.028 | √ | √ | √ | not supported |
| EVA 3/5000 | Linux | ≥ VCS 4.004, ≥ VCS 3.028 | √ | √ 1) | √ | not supported |

1) You can run Linux in a dual mpx100 environment as long as you configure only one of the two mpx100s as target. Linux then runs single path, while other Windows hosts can run multipath across both mpx100s.
2) Continuous Access EVA is not supported in direct attached environments.

Tiered storage with EVA
• Online: active data, mirroring, instant recovery – FC drives (72, 146, 300GB; 15k and 10k rpm)
• Near-online: 3 to 6 months active, faster recovery, infrequently accessed and stale data – FATA drives (250, 400, 500GB)
• Nearline: 1 year active, file recovery, off-site recovery – tape/automation
• Offline: 5 years active, off-site storage, disaster recovery – tape and optical
• Archive: 30-year records or longer, off-site storage, retrievable – optical

EVA interface – Group properties
[Screenshot: Command View EVA disk group properties page]

Building storage classes within an EVA
• File servers, backup server, archive server and DB servers attach through both HSV controllers (each with an FC loop switch)
• Fast FC disks: 73, 146GB 15k rpm
• Large FC disks: 146, 300GB 10k rpm
• FATA disks: 250/400/500GB, near-online

EVA selective storage presentation
What does it do?
− Provides storage access control, ensuring that a host cannot access data belonging to a different host
How does it work?
− Selectively grants access of HBA WWNs to LUNs (LUN masking)
[Diagram: four HBA WWNs, each granted access only to its own LUNs]
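A minimal sketch of the masking check, assuming a simple in-memory presentation table with made-up WWNs; on a real EVA this logic lives in controller firmware and is configured through Command View.

```python
from typing import Dict, Set

# Presentation table: HBA WWN -> set of LUN numbers it may see
presentations: Dict[str, Set[int]] = {
    "50:01:10:a0:00:01:23:45": {0, 1},   # hypothetical WWNs
    "50:01:10:a0:00:67:89:ab": {2},
}

def host_may_access(wwn: str, lun: int) -> bool:
    """Allow a command only if the LUN was explicitly presented to the WWN."""
    return lun in presentations.get(wwn, set())

assert host_may_access("50:01:10:a0:00:01:23:45", 1)
assert not host_may_access("50:01:10:a0:00:67:89:ab", 0)  # masked out
```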

Multipathing for EVA3000/5000 (VCS 2.x and 3.x)
These versions use an active/passive LUN presentation model: a LUN is only actively presented on one HSV controller. The multipathing implementation in the OS, or Secure Path, has to make sure that
− the OS only uses the active controller for a particular LUN
− LUN ownership and the paths in use are switched over in case of an error
− load balancing is done where possible
[Diagram: four HBAs with active paths to the owning controller and passive paths to the other]

New EVA multipathing (XCS ≥ 5.x and VCS 4.x)
Uses an active/active LUN presentation model: a LUN is actively presented on both HSV controllers. This enables industry-standard multipathing solutions such as
− MPIO for Windows, AIX and NetWare
− MPxIO/STM and DMP for Solaris
− PVLinks and DMP for HP-UX
A LUN is still owned by one controller; if an I/O arrives at the non-owning controller, it is passed over to the owning controller via the cache mirror ports. The multipathing implementation in the OS only has to make sure that
− a single LUN is not presented multiple times
− load balancing is done where possible

Multipathing and boot support

| Operating system | EVA 3000/5000 with VCS 2.x and 3.x | EVA 4/6/8000 and EVA 3000/5000 with VCS 4.x | Concurrent attachment 1) |
|------------------|------------------------------------|---------------------------------------------|--------------------------|
| HP-UX | Secure Path v3.0F | native PVLinks, Secure Path v3.0F, Veritas DMP | same server, same HBA |
| Windows | MPIO DSM, Secure Path v4.0C SP2 | HP MPIO AA DSM (full-feature), Veritas MPIO DSM | same server, same HBA |
| Linux | QLogic FO driver (basic), Secure Path v3.0C SP2 | QLogic FO driver (Emulex Lightpulse MD driver planned; DMP support by Symantec); direct server attachment supported | same server, same HBA |
| Tru64 | native | native | same server, same HBA |
| OpenVMS | native | native | same server, same HBA |
| Solaris | Secure Path v3.0D SP1 | MPxIO/STM (also with non-Sun HBAs) 2), Veritas DMP | same server, different HBA |
| AIX | Secure Path v2.0D SP3, Antemeta solution | MPIO PCM | same server, different HBA |
| NetWare | Secure Path v3.0C SP2.1 | native | same server, same HBA |
| VMware ESX | VM MPIO | VM MPIO | same server, same HBA |

ALUA/ALB and SAN boot are supported on the EVA 4/6/8000 generation.
1) For details see the SAN Design Reference Guide, heterogeneous server rules, at www.hp.com/go/sandesign
2) See http://www.sun.com/io_technologies/qlogic_corp_.html

ALUA / ALB
Asymmetric Logical Unit Access (defined by INCITS T10) / Adaptive Load Balance
• A LUN can be accessed through multiple target ports
• Target port groups can be defined to manage target ports with the same attributes
• The ALUA inquiry string reports one of the following states/attributes per target port group:
− Active/optimized
− Active/non-optimized
− Standby
− Unavailable
[Diagram: two HBAs reach the LUN through target port group 1 on an active/optimized path and target port group n on an active/non-optimized path]
ALB and Windows MPIO
Adaptive Load Balance
• HP's implementation of ALUA in the Windows MPIO DSM (initial release 2.01.00)
• Supported with EVA3/5000 running VCS 4.x and with EVA4/6/8000
• Enabled by the DSM CLI command "hpdsm set device=x alb=y" or the DSM Manager GUI

HP MPIO Full-Featured DSM for EVA disk arrays (Windows 2000/2003):

| Feature | Support |
|---------|---------|
| Maximum number of HBAs per host | 8 |
| Maximum number of paths per LUN for EVA | 32 |
| Failback | Yes |
| Load balancing | Yes |
| User interface | Yes |
| Support for Microsoft Cluster | Yes |
| Coexistence with HP MPIO basic failover for EVA arrays on the same server | Yes |
| Coexistence with HP MPIO Full-Featured DSM for EVA3/5000 VCS 4.x and XP disk arrays on the same server | Yes |

For more information see: http://h18006.www1.hp.com/products/sanworks/multipathoptions/index.html

HP StorageWorks EVA Software Solutions

EVA software
XCS 6.0 / Command View EVA 6.0
− Windows-based authentication (same as RSM)
• Impacting GUI, SSSU and API
• Single sign-on
− Support of new firmware features
• MirrorClone
• Snapshot restore
• Enhanced async CA
• Non-migrate disk firmware update
• Progress indicators
− Usability enhancements
• Single-page creation of snapshots, snapclones, mirrorclones, disk groups and storage initialization
• Delete snapclone while normalizing
• CA link status

XCS 6.0 / CV EVA 6.0 gotchas (as of 22.11.06)
• AppRM (replacement for FRS)
− Not supported for CA volumes
− Only snapclones supported
• MetroCluster EVA
− Support expected December 06
• Data Protector ZDB and Instant Recovery
− No container support yet
− No MirrorClone support yet
• Storage Essentials
− Only supported with SE 5.1 SP1, expected December 06

Command View security
A totally new security model, new with XCS 6.0
− No longer relying on System Management Homepage/WBEM; instead, CV EVA now uses Windows-based authentication
• Use your Windows account to log into CV EVA
− Consistently used across all interfaces (CV GUI, SSSU, API)
− Provides two levels of access
• Admin access: full access to all functionality
• User access: read-only access
− Introduces user-ID-based auditing
• If turned on, all actions are logged by user
• Written to a file (locally or on a share) and/or to the Windows Application Event Log
Side effect:
− Command View EVA no longer relies on the System Management Homepage; the port has therefore changed to https://localhost:2372/command_view_eva

Non-migrate disk drive firmware update
Pre-XCS 6 possibilities
− Massive disk drive code load, updating all drives at a time
• Single image applied like an EVA firmware code load
• The EVA will be offline for several minutes
− Single ungrouped disk drive code load
• Every drive has to be ungrouped, updated and regrouped
• Massive time and effort
New with XCS 6 (the above are still possible)
− Ability to code load disk drives while they are grouped
− Prerequisite:
• No Vraid0 VDisk may exist in that disk group
− Process:
• The EVA takes the disk out of operation, code loads it and then reintroduces it
• Any write to that disk is buffered and applied once the disk drive is back
• Reads are regenerated out of RAID information
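The buffering-and-rebuild idea can be sketched as follows. This is a toy single-stripe model (XOR parity, Vraid5-style) with invented structures, not the actual XCS implementation:

```python
from functools import reduce

def xor(*blocks: bytes) -> bytes:
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

class Stripe:
    """One parity stripe across the drives of a disk group."""
    def __init__(self, data_blocks):
        self.blocks = list(data_blocks)
        self.parity = xor(*self.blocks)
        self.offline = None   # index of the drive being code loaded
        self.pending = {}     # buffered writes for the offline drive

    def read(self, i: int) -> bytes:
        if i != self.offline:
            return self.blocks[i]
        if i in self.pending:
            return self.pending[i]            # newest data is buffered
        others = [b for j, b in enumerate(self.blocks) if j != i]
        return xor(self.parity, *others)      # rebuild from RAID information

    def write(self, i: int, data: bytes) -> None:
        if i == self.offline:
            self.pending[i] = data            # buffer until the drive is back
        else:
            self.blocks[i] = data
            self.parity = xor(*self.blocks)

    def reintroduce(self) -> None:
        for i, data in self.pending.items():  # apply buffered writes
            self.blocks[i] = data
        self.pending.clear()
        self.offline = None
        self.parity = xor(*self.blocks)

s = Stripe([b"a", b"b", b"c"])
s.offline = 1                 # drive 1 is being updated
assert s.read(1) == b"b"      # served via parity rebuild
s.write(1, b"B")              # buffered, not lost
s.reintroduce()
assert s.blocks[1] == b"B"
```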

HP Command View EVA
Powerfully simple management
Provides a powerfully simple management experience for all EVA arrays
• HP Command View EVA suite: configuration, discovery, events & monitoring, security, performance monitoring, LUN masking, CLUI scripting/agents, basic replication
• Automates and aggregates management tasks
• HP ISEE solutions: proactive remote monitoring services for maximum uptime
• Intuitive, easy-to-use GUI: quickly expand a LUN online, configure LUNs or RAID groups, or add physical disks with just a few mouse clicks
• Uses standards-based SMI-S and APIs, so management applications can plug in

CV EVA deployment options
• Choice and flexibility to maximize your investment
• Broad Microsoft Windows OS coverage; SAN or direct host attached device management
• Deployment platforms:
− HP Storage Management Appliance (discontinued): SMA SW v1.2 with OV SOM v1.2 for existing OV SOM installs (includes OV SNM)
− Dedicated general-purpose management server
− Host-based installation on a SAN-attached host
− HP ProLiant Storage Server (NAS), attached via Gigabit Ethernet (iSCSI) or Fibre Channel
• CV EVA ≥ 5.0 required for EVA4000/6000/8000
• Up to 16 EVAs per management instance

HP Command View EVAperf
EVA performance analysis
• Performance analysis tool for the whole EVA product line
• Shipped with Command View EVA
• Integrates with Windows PerfMon
• Create your own scripts via a command prompt
• Monitor in real time and view historical EVA performance metrics to more quickly identify performance bottlenecks
• Easily monitor and display EVA performance metrics: host connection data, port status, host port statistics, storage cell data, physical disk data, virtual disk data, CA statistics
See the EVAPerf whitepaper at: http://h18006.www1.hp.com/storage/arraywhitepapers.html

EVA replication software
Enhancements with XCS 6.0xx
• Replication Solutions Manager 2.1
− Tru64 host agent
− Single sign-on
• Business Copy 4.0
− MirrorClone feature with delta resync and instant restore
− Instant restore from a snapshot
• Continuous Access 3.0
− Enhanced asynchronous performance and distance support by using buffer-to-disk (journaling)

Replication Solutions Manager 2.1
• Familiar browser-based navigation with selectable views
• Select hosts or storage volumes
• Auto discovery of storage systems and volumes
• Oracle application integration
• Status monitoring
• Context-sensitive actions and wizards
• Local and remote management
• Interactive topology manager

Business Copy EVA
Point-in-time copy capability for the EVA (local copy)
Four options available:
• Space-efficient vSnapshot
• Pre-allocated vSnapshot
• vSnapclone
• MirrorClone
Controlled from Command View, RSM or SSSU. Ideally suited to create point-in-time copies to:
• Keep applications online while backing up data
• Test applications against real data before deploying
• Restore a volume after a corruption
• Mine data to improve business processes or customer marketing

Space-efficient snapshots
Virtually capacity-free
• t0: create snapshot "A"; the snapshot A' and volume A have identical contents, and A' consumes almost no space
• t1-t2: volume A receives updates; before each updated block is overwritten, its original contents are copied to A' (copy on write), so the contents diverge while A' preserves the state as of t0
• t3-t4: volume A receives more updates; only changed blocks consume snapshot capacity
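The copy-on-write mechanics in this timeline can be sketched in a few lines; this is a toy model, not EVA firmware:

```python
class Volume:
    def __init__(self, blocks):
        self.blocks = list(blocks)
        self.snapshots = []

    def create_snapshot(self):
        snap = {}                      # block index -> contents as of t0
        self.snapshots.append(snap)    # starts empty: virtually capacity free
        return snap

    def write(self, i, data):
        for snap in self.snapshots:
            if i not in snap:          # copy on write: preserve t0 contents
                snap[i] = self.blocks[i]
        self.blocks[i] = data

    def read_snapshot(self, snap, i):
        return snap.get(i, self.blocks[i])  # unchanged blocks are shared

vol = Volume([b"a", b"b", b"c"])
s = vol.create_snapshot()               # t0
vol.write(1, b"B")                      # t1: old block copied into the snap
assert vol.read_snapshot(s, 1) == b"b"  # snapshot still shows t0 contents
assert vol.blocks[1] == b"B"
```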

Pre-allocated snapshots
Space reservation
• Same copy-on-write timeline as above (create snapshot "A" at t0, updates at t1-t4), but the snapshot's full capacity is reserved at creation time rather than allocated on demand

New: pre-allocated 3-phase snapshots
Space reservation
• t-x: create an empty container, reserving the capacity ahead of time
• t0: create snapshot "A" using the container
• t1-t4: copy on write proceeds as for other snapshots while volume A receives updates
• t5: the snapshot can be converted back into an empty container for reuse

SnapClone of virtual disks
Full copy
• t0: create snapclone "A"; volume B is immediately usable with the contents of A as of t0, and the cloning process starts in the background
• t1-t3: volume A receives updates; copy on write preserves the t0 contents for B while cloning continues
• t4: cloning finished; the relation is suspended and B is an independent full copy

3-phase SnapClone
Full copy
• Same flow as the SnapClone above, with two additional phases: an empty container is created in advance (t-x) and used as the clone target at t0, and after cloning finishes volume B can be converted back into an empty container (t5)

Business Copy 4.0 MirrorClones
A MirrorClone is a pre-normalized clone (new with XCS 6.0)
− A full clone of the source, requiring 100% of the capacity (if same RAID level)
− A synchronous mirror between the source VDisk and the MirrorClone: once synchronized, data is always identical (unless fractured)
− The MirrorClone can be in a different disk group and/or have a different RAID level
• Tiered storage approach
• Can be used to protect against physical failures
A point-in-time copy is established at the moment a fracture is made
− Differences are tracked via bitmap
− Delta resync/restore is accomplished by only resynchronizing/restoring data that is marked different
Primary advantages
− Data is available at the instant of the split
− Delta resync takes less time than a full copy

MirrorClone tasks
• Initial creation
− Establishes the MirrorClone relationship and starts the initial copy
• Fracture (only permitted when fully synchronized)
− Establishes a point-in-time copy by stopping replication of writes to the MirrorClone
− Deltas are tracked in a bitmap (for both the source VDisk and the MirrorClone)
− Allows the MirrorClone to be presented
• Resync (only permitted when fractured)
− Resyncs the deltas from the source VDisk to the MirrorClone, leading to a synchronized MirrorClone
• Restore (only permitted when fractured)
− Restores the source VDisk back to the point in time the MirrorClone was fractured
− Instant access to restored data
• Detach (only permitted when fractured)
− Breaks the MirrorClone relation and converts the MirrorClone into a standalone VDisk
− If snapshots of the MirrorClone exist, they stay intact and attached to the former MirrorClone
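The fracture/delta-resync bookkeeping described above can be sketched as follows (a toy model; block granularity and structures are invented):

```python
class MirrorClone:
    def __init__(self, source):
        self.source = list(source)   # source VDisk blocks
        self.clone = list(source)    # synchronized full copy
        self.fractured = False
        self.dirty = set()           # "bitmap" of blocks that differ

    def write_source(self, i, data):
        self.source[i] = data
        if self.fractured:
            self.dirty.add(i)        # track the delta
        else:
            self.clone[i] = data     # synchronous mirror

    def fracture(self):
        self.fractured = True        # clone becomes a point-in-time copy

    def resync(self):
        for i in self.dirty:         # copy only blocks marked different
            self.clone[i] = self.source[i]
        self.dirty.clear()
        self.fractured = False

mc = MirrorClone([b"a", b"b"])
mc.fracture()
mc.write_source(0, b"A")             # clone keeps b"a" (point in time)
assert mc.clone[0] == b"a"
mc.resync()                          # delta resync: one block copied
assert mc.clone[0] == b"A"
```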

MirrorClone creation
• Initial situation: the production VDisk (MirrorClone source) is presented to the host and mounted (e.g., as E:); reads and writes go to it normally
• The user creates an empty container with the same size as the source VDisk; RAID level and disk group can be different
• The user creates the MirrorClone using the container as target; the EVA establishes the MirrorClone relationship and starts the initial synchronization, with the volume staying fully accessible to the host (writes behind the copy fence are tracked)
• Synchronized MirrorClone: once synchronized, data on both volumes is kept identical; writes are applied to both volumes, reads are satisfied by the source only

MirrorClone fracture and resync
• The user fractures the MirrorClone; the EVA stops applying writes to the MirrorClone target and instead marks changes in a delta bitmap
• The user can present the fractured MirrorClone for various purposes (read and write), e.g., to a host as F:; the EVA records changes to both source and target in the delta bitmap
• The user initiates resynchronization of the volumes in either direction; the EVA copies changed blocks only, until source and target are synchronized
• Synchronized MirrorClone: once synchronized, data on both volumes is again kept identical; writes are applied to both volumes, reads are satisfied by the source only

Combining snapshots and MirrorClones
MirrorClone and snapshot can be combined by taking the snapshot from the MirrorClone target.
Advantages:
− A way to get around the snapshot copy-before-write performance impact
− "Cross disk group" snapshots, by putting the MirrorClone into a different disk group
• The snapshots will allocate space in the MirrorClone's disk group
Disadvantages:
− No snapshot restore in the first release
• A workaround could be to detach the MirrorClone, then restore and present it as the original LUN
• Direct restore is planned for end 2006

Continuous Access EVA
Remote copy capability for the EVA
Continuous Access EVA delivers array-based remote data replication, protecting your data and ensuring continued operation after a disaster. Ideally suited to:
• Keep byte-for-byte copies of data at a remote site for instant recovery
• Replicate data from multiple sites to one for consolidated backup
• Shift operations to a recovery site for primary site upgrades and maintenance
• Ensure compliance with government legislation and business objectives

Continuous Access EVA: remote copying
What does it do?
− Replicates LUNs between EVAs
− Provides disaster recovery
− Simplifies workload management
− Allows point-in-time database backup
− Provides restore without latency
How does it work?
− Creates up to 256 copy sets (source volume to destination volume pairs) for all specified logical units in the array, over Fibre Channel and FC extensions
− Synchronous and asynchronous support up to 20,000km (200ms round-trip time)
− Works with all EVAs

DR groups and managed sets
DR group: a consistency group of replicated copy sets (Vdisks)
− Up to 256 DR groups or DR group members per array
− Up to 32 replicated copy sets per DR group
− I/O ordering across members is guaranteed
− Members share a single write history log
− Vdisks within a DR group behave like a single entity; management commands like suspend or failover are handled atomically
− All source members are online on the same HSV controller
The DR group is therefore the primary level of CA management
− Write mode ([Synchronous] / Asynchronous)
− Failsafe mode (Enabled or [Disabled])
− Suspend mode ([Resume] / Suspend)
− Failover command
Managed sets: another level of CA management
− A collection of DR groups for the purpose of common management
− No consistency between DR groups, as there is between members of a DR group
− A management command performed on a managed set is run for all contained DR groups, one by one
Hierarchy: Vdisk → DR group → managed set

Continuous Access EVA 3.0
Enhanced async, implemented with XCS 6.0
• Replaces the previous CA async implementation
• Tunnel resources are 124 x 8kB buffers = 1MB on the fly
• Enhanced async uses a write history log
− You set size and location when a copy set is created
− You can force a full copy
− The log is a circular buffer
• The log overflows when the tail meets the head
• An overflow of the log forces a full copy of the DR group
• Draining the log requires a transition to sync CA
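The write history log semantics can be sketched under exactly the rules just stated (bounded circular buffer; overflow forces a full copy); capacities and structures here are invented:

```python
from collections import deque

class WriteHistoryLog:
    def __init__(self, capacity_entries: int):
        self.log = deque()
        self.capacity = capacity_entries
        self.overflowed = False   # once true, a full copy is required

    def record(self, write) -> None:
        if self.overflowed:
            return                # log is useless until the full copy runs
        if len(self.log) == self.capacity:   # tail meets head
            self.overflowed = True
            self.log.clear()
        else:
            self.log.append(write)

    def drain(self, send) -> None:
        """Replay logged writes to the destination, conceptually the
        transition back toward synchronous CA."""
        if self.overflowed:
            raise RuntimeError("log overflow: full copy of DR group needed")
        while self.log:
            send(self.log.popleft())

log = WriteHistoryLog(capacity_entries=4)
for w in ["w1", "w2", "w3"]:
    log.record(w)
sent = []
log.drain(sent.append)
assert sent == ["w1", "w2", "w3"]
```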

Continuous Access 3.0 enhanced async
[Chart: replication bandwidth (MB/sec) over a business day from 8am to 12pm; synchronous CA must be sized for the 100th percentile of the write load, classic async CA for the 95th percentile, and enhanced async CA only for about the 50th percentile]

Multiple relationships
• Fan-in: one EVA can act as the destination for different LUNs from more than one source EVA (e.g., an EVA3000, EVA6000 and EVA4000 all replicating into one destination array)
• Fan-out: different LUNs on one EVA can replicate to different destination EVAs (e.g., an EVA8000 replicating to an EVA6000 and an EVA5000)
• Bidirectional: one array carries copy sets acting as both source and destination across the same pair of arrays (e.g., EVA5000 and EVA8000)

EVA CA SAN configuration: 2-fabric configuration
• Shared SAN for host and CA traffic
• Servers, management servers and both EVAs (ports A and B) attach to the same two fabrics
• All EVA ports are used for host I/O, and some also for CA I/O

EVA CA SAN configurations: physically separated 6-fabric configuration
• CA traffic only goes through the dedicated CA SAN
• No cross-site host I/O is possible -> CLX EVA
• 4 ports per EVA used for host I/O, 4 ports per EVA used for CA I/O

CA configurations, dedicated CA fabrics: physically separated & zoned 4-fabric configuration
• CA traffic only goes through the CA SAN if the EVA ports are properly zoned off in the host SAN
• Cross-site host I/O possible -> stretched cluster
• 4 ports per EVA used for hosts, 4 ports per EVA used for CA

CA configuration, dedicated CA zone: zoned 2-fabric configuration
• CA traffic only goes through the CA zone if the EVA ports are properly zoned off in the host zones
• Cross-site host I/O possible -> stretched cluster
• 4 ports per EVA used for hosts, 4 ports per EVA used for CA

EVA Solutions

Zero downtime backup
Recovering in minutes, not hours
Description
− Data Protector provides no-impact backup by performing the backup on a copy of the production data, with the option to copy or move it to tape
− NEW with Data Protector 6.0: incremental ZDB for files
Usage
− Data that requires non-disruptive, application-aware, zero-impact protection
− SAN protection
Benefits
− Fully automates the protection process
− All options can be easily configured using simple selections
− The Data Protector GUI permits complete control of the mirror specification
− Administrators can choose the schedule of the backup
[Diagram: HP-UX, Solaris, NT and W2k clients on the client network; Data Protector server on the SAN; P-Vol mirrored to S-Vol]

Oracle database integration
What does it do?
− Maps the Oracle DB to Vdisks, DR groups, etc.
− Replicates all Vdisks of specified Oracle databases
− Allows creating local or remote replicas
− Easy control via the RSM GUI
− Can quiesce and resume Oracle
− Provides a topology map
− Supported with BC 2.3 / RSM 2.0; chargeable RSM option: Application Integration LTU T4390A

Instant recovery for XP and EVA
Recovering in minutes, not hours
Description
− Allows instant recovery by retrieving the data directly from replicas on disk
− Moves zero downtime backup a step further by keeping multiple rotating replicas available on disk
Usage
− Critical data that has to be recovered within minutes instead of hours
Benefits
− Fully automated protection process, including creation and rotation of replicas
− Disk operations permit non-disruptive, application-aware protection as frequently as once an hour
− Administrators can choose disk protection, tape protection, or scheduled combinations to meet their protection requirements
Prerequisites
− Data Protector, or AppRM (1H07)
[Diagram: P-Vol with rotating replicas BC1, BC2, BC3 taken at successive points in time]

HP VLS 300 EVA Gateway
Seamless integration
− Emulates popular tape drives and libraries
− Same easy-to-use GUI as the VLS6000
− Allows deployment of existing EVA systems for backup use
• Easily scale capacity and performance by adding gateway nodes (node 1 to node n) and arrays (EVA 1 to EVA n)
• Utilizes existing infrastructure
− Switches
− Arrays
SAN-attached servers back up through the SAN to the VLS gateway

Application Recovery Manager
Application support
− First release: Exchange 2003, SQL 2000/2005, NTFS file system
− Future: take over all ZDB/IR integrations DP has (other apps, other OSs)
Array support
− EVA arrays, including copy-back restore
− Disk array independence through VSS/VDS (unmap/swap restore only)
Features
− Round-robin replicas
− Built-in scheduler
− User management
− Sophisticated logging and monitoring
Distributed architecture
− Central management using a 'DP-like' GUI & CLI
− Clustered cell server
− Remote client deployment

Application Recovery Manager (AppRM)
A new solution that encapsulates and delivers Data Protector's VSS functionality
− Announced in May 06, released in November 06
− Replaces "Fast Recovery Solutions for Exchange"
AppRM is disk-based (VSS) replication and restore only; no tape backup is possible, but it can be used as a pre-exec to a 3rd-party backup application.
AppRM is based on Data Protector 6.0 code
− DP offers the same feature set as AppRM, whereas AppRM offers only a subset of DP functionality (ZDB and IR)
Target customers
− Non-DP accounts that want a VSS instant-recovery solution but have no need for a "full" backup software product
− Potential up-sell opportunity to migrate the existing backup product to Data Protector

Application Recovery Manager: licensing
AppRM follows the Data Protector licensing scheme
− Capacity based
− More expensive than FRS, especially for only a few larger systems, but also more functionality
− TB licenses are based on the source capacity, independent of the number of copies
SKUs:
• T4395A – HP StorageWorks AppRM Cell Manager Win LTU
• T4396A – HP StorageWorks AppRM Online Backup Win LTU
• T4399A – HP StorageWorks AppRM Inst. Recovery EVA 1 TB
• T4400A – HP StorageWorks AppRM Inst. Recovery EVA 10 TB
First version is AppRM 6.0, aligned with DP version numbering.

MetroCluster EVA for HP-UX
What does it do?
– Provides manual or automated site failover for server and storage resources (ServiceGuard for HP-UX plus HP Continuous Access EVA)
Supported environments:
– HP-UX 11i v1 & 11i v2
– Serviceguard ≥ 11.15
Requirements:
– EVA disk arrays
– Metrocluster
– Continuous Access EVA
– Max 200ms network round-trip delay
– Command View EVA & SMI-S
[Diagram: DataCenter 1 and DataCenter 2, up to several 100km apart]

Cluster Extension EVA for Windows
What does it do?
– Provides manual or automated site failover for server and storage resources (MSCS on Windows plus HP Continuous Access EVA)
Supported environments:
– Microsoft Windows 2003 Enterprise Edition (32-bit & 64-bit)
– Microsoft Windows 2003 Datacenter Server (64-bit)
– NAS4000 & 9000, HP ProLiant Storage Server
– Microsoft Cluster Service 5.2
– Up to 500km
Requirements:
– EVA disk arrays
– Cluster Extension EVA
– Continuous Access EVA
– Max 20ms network round-trip delay
– Command View EVA & SMI-S
[Diagram: DataCenter 1 and DataCenter 2, up to 500km apart]

Cluster Extension EVA for Linux
What does it do?
– Provides manual or automated site failover for server and storage resources (Serviceguard for Linux plus HP Continuous Access EVA)
Supported environments:
– Serviceguard for Linux as the cluster service
– SG 11.16.02 with RHEL 4
– SG 11.16.01 with SuSE SLES 9
Requirements:
– EVA disk arrays
– Cluster Extension EVA
– Continuous Access EVA
– Max 20ms network round-trip delay
– Command View EVA & SMI-S
[Diagram: DataCenter 1 and DataCenter 2, up to 500km apart]

Windows 2003 stretched cluster with CA
[Diagram: applications A and B with quorum, replicated between sites via DR groups A and B over Continuous Access EVA; on failover, the surviving servers restart the applications after a rescan]

Cluster Extension EVA (CLX): manual move of App A
[Diagram: App A is moved from one site to the other; a quorum or witness server provides the tie-breaking vote; DR groups A and B replicate over Continuous Access EVA]

Cluster Extension EVA: storage failure
[Diagram: after a storage failure, applications continue running against the surviving array through DR groups A and B over Continuous Access EVA]

Majority Node Set Quorum – File Share Witness
What is it?
− A patch for Windows 2003 SP1 clusters provided by Microsoft (KB921181)
What does it do?
− Allows the use of a simple file share to provide a vote for an MNS quorum-based 2-node cluster
• In addition to introducing the file share witness concept, the patch also introduces a configurable cluster heartbeat (for details see the MS Knowledge Brief)
What are the benefits?
− The "arbitrator" node is no longer a full cluster member
• A simple file share can be used to provide this vote
• No single-subnet requirement for the network connection to the "arbitrator"
− One arbitrator can serve multiple clusters; however, you have to set up a separate share for each cluster
− The "arbitrator" exposing the share can be
• a standalone server
• a different OS architecture (e.g., a 32-bit Windows server providing a vote for an IA64 cluster)

Majority Node Set Quorum – File Share Witness
[Diagram: a 2-node cluster running App A and App B obtains an additional vote from \\arbitrator\share]

| # cluster nodes | # node failures tolerated |
|-----------------|---------------------------|
| 2 | 0 (1 with MNS file share witness) |
| 3 | 1 |
| 4 | 1 |
| 5 | 2 |
| 6 | 2 |
| 7 | 3 |
| 8 | 3 |
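The table is just majority arithmetic: the cluster stays up while the surviving nodes (plus the witness vote, if configured and reachable) hold a strict majority of all votes. A small sketch that reproduces the table:

```python
def surviving_majority(total_nodes: int, failed: int, witness: bool) -> bool:
    """True while surviving nodes (plus the file share witness vote, if
    configured and reachable) hold a strict majority of all votes."""
    votes_total = total_nodes + (1 if witness else 0)
    votes_up = (total_nodes - failed) + (1 if witness else 0)
    return votes_up > votes_total / 2

def max_tolerated_failures(total_nodes: int, witness: bool = False) -> int:
    return max(f for f in range(total_nodes + 1)
               if surviving_majority(total_nodes, f, witness))

assert max_tolerated_failures(2) == 0
assert max_tolerated_failures(2, witness=True) == 1   # the MNS FSW case
assert [max_tolerated_failures(n) for n in range(3, 9)] == [1, 1, 2, 2, 3, 3]
```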

HP SAN certification and support
HP SAN architecture rules: http://www.hp.com/go/sandesign
HP StorageWorks SAN Design Guide
– Architecture guidance
– Massive configuration support
– Implementation best practices
– Incorporation of new technologies
– Now includes IP storage implementations such as iSCSI, NAS/SAN fusion and FCIP
Provides the benefit of HP engineering when building a scalable, highly available enterprise storage network. Documents HP Services' SAN integration, planning and support services.

The EVA global service portfolio
HP StorageWorks EVA base product warranty: Foundation Service Solution
• 2 years parts, 2 years labor and 2 years hardware onsite 24x7 with 4-hour response for the EVA controller pair and drive shelves (enclosures), as defined by product SKU, and the hard disk drives purchased with the array
• 2 years, 24x7, 2-hour-response phone-in support and updates for Virtual Controller Software (VCS)
• Array installation and startup (includes drive shelves and hard disks purchased with the EVA)
HP Care Pack services for storage
• HP H/W support: 4-hour 24x7; years 3, 4 and 5
• HP S/W support: 24x7 technical support; software product updates
• Premium hardware & software services: Support Plus 24, Proactive 24, HP Critical Service

Why should you choose the new EVA?
Inherited from the current EVA
• Easiest management and setup
• Virtualization allows better use of resources and automatic striping to prevent hot spots
• Dynamic LUN expansion
• Full set of local and remote copy options
• Known solid HP support
Added with the new EVA
• Easier implementation and coexistence due to support of industry-standard multipathing and native HBAs
• Higher performance
• Higher capacities: the EVA8000 architecturally supports > 200TB

HP StorageWorks™ – the Right Choice
