
PowerMax and VMAX Family
Configuration and Business Continuity Administration

Student Guide

Dell EMC Education Services

February 2019

PowerMax and VMAX Family
Configuration and Business Continuity Administration


Welcome to the PowerMax and VMAX Family Configuration and Business Continuity Administration course.

Copyright © 2019 Dell Inc. or its subsidiaries. All Rights Reserved. Dell, EMC, and other trademarks are
trademarks of Dell Inc. or its subsidiaries. Other trademarks may be the property of their respective
owners. Published in the USA.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS IS.” DELL EMC MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY
KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Use, copying, and distribution of any DELL EMC software described in this publication requires an applicable software license. The trademarks, logos,
and service marks (collectively "Trademarks") appearing in this publication are the property of DELL EMC Corporation and other parties. Nothing
contained in this publication should be construed as granting any license or right to use any Trademark without the prior written permission of the party
that owns the Trademark.

AccessAnywhere, Access Logix, AdvantEdge, AlphaStor, AppSync, ApplicationXtender, ArchiveXtender, Atmos, Authentica, Authentic Problems,
Automated Resource Manager, AutoStart, AutoSwap, AVALONidm, Avamar, Aveksa, Bus-Tech, Captiva, Catalog Solution, C-Clip, Celerra, Celerra
Replicator, Centera, CenterStage, CentraStar, EMC CertTracker, CIO Connect, ClaimPack, ClaimsEditor, Claralert, CLARiiON, ClientPak,
CloudArray, Codebook Correlation Technology, Common Information Model, Compuset, Compute Anywhere, Configuration Intelligence, Configuresoft,
Connectrix, Constellation Computing, CoprHD, EMC ControlCenter, CopyCross, CopyPoint, CX, DataBridge, Data Protection Suite, Data Protection
Advisor, DBClassify, DD Boost, Dantz, DatabaseXtender, Data Domain, Direct Matrix Architecture, DiskXtender, DiskXtender 2000, DLS ECO,
Document Sciences, Documentum, DR Anywhere, DSSD, ECS, eInput, E-Lab, Elastic Cloud Storage, EmailXaminer, EmailXtender, EMC Centera,
EMC ControlCenter, EMC LifeLine, EMCTV, Enginuity, EPFM, eRoom, Event Explorer, FAST, FarPoint, FirstPass, FLARE, FormWare, Geosynchrony,
Global File Virtualization, Graphic Visualization, Greenplum, HighRoad, HomeBase, Illuminator, InfoArchive, InfoMover, Infoscape, Infra, InputAccel,
InputAccel Express, Invista, Ionix, Isilon, ISIS, Kazeon, EMC LifeLine, Mainframe Appliance for Storage, Mainframe Data Library, Max Retriever, MCx,
MediaStor, Metro, MetroPoint, MirrorView, Mozy, Multi-Band Deduplication, Navisphere, Netstorage, NetWitness, NetWorker, EMC OnCourse,
OnRack, OpenScale, Petrocloud, PixTools, Powerlink, PowerPath, PowerSnap, ProSphere, ProtectEverywhere, ProtectPoint, EMC Proven, EMC
Proven Professional, QuickScan, RAPIDPath, EMC RecoverPoint, Rainfinity, RepliCare, RepliStor, ResourcePak, Retrospect, RSA, the RSA logo,
SafeLine, SAN Advisor, SAN Copy, SAN Manager, ScaleIO, Smarts, Silver Trail, EMC Snap, SnapImage, SnapSure, SnapView, SourceOne, SRDF,
EMC Storage Administrator, StorageScope, SupportMate, SymmAPI, SymmEnabler, Symmetrix, Symmetrix DMX, Symmetrix VMAX, TimeFinder,
TwinStrata, UltraFlex, UltraPoint, UltraScale, Unisphere, Universal Data Consistency, Vblock, VCE, Velocity, Viewlets, ViPR, Virtual Matrix, Virtual
Matrix Architecture, Virtual Provisioning, Virtualize Everything, Compromise Nothing, Virtuent, VMAX, VMAXe, VNX, VNXe, Voyence, VPLEX, VSAM-
Assist, VSAM I/O PLUS, VSET, VSPEX, Watch4net, WebXtender, xPression, xPresso, Xtrem, XtremCache, XtremSF, XtremSW, XtremIO, YottaYotta,
Zero-Friction Enterprise Storage.

Revision Date: 02/2019

Revision Number: ES112STG00370.POWERMAXOS 5978.1.0

Course Overview

Description
This Specialist level course provides participants with an in-depth understanding of configuration tasks on PowerMax and VMAX family arrays. It also provides the knowledge required to deploy and manage PowerMax and VMAX family array-based local and remote replication solutions for business continuity needs. Key features and functions of the arrays are covered in detail. Topics include storage provisioning concepts, virtual provisioning, device creation and port management, and service level-based storage allocation to hosts.

Operational details and implementation considerations for Dell EMC TimeFinder SnapVX and Symmetrix Remote Data Facility (SRDF) are covered. Participants will use Unisphere for PowerMax/VMAX and Solutions Enabler (SYMCLI) to manage configuration changes on the arrays.

Hands-on lab exercises using Symmetrix Command Line Interface (SYMCLI) and Unisphere for PowerMax will be performed on Open Systems hosts attached to PowerMax and VMAX3 arrays.

Audience
This course is intended for Dell EMC customers, partners, and employees responsible for configuration and administration of PowerMax and VMAX Family arrays.

Objectives
Upon completion of this course, you should be able to:
• Provide an overview of PowerMax and VMAX Family configurations
• Discuss storage provisioning concepts
• Manage ports and port characteristics
• Perform Service Level based provisioning to hosts
• Provide an overview of storage management in a virtualized environment
• Use Unisphere for PowerMax for Compliance Monitoring and Workload Planning
• Provide details on local and remote replication offerings in PowerMax and VMAX Family arrays

This Specialist level course provides participants with an in-depth understanding of configuration tasks on
the PowerMax, VMAX All Flash, and VMAX3 Family of arrays.

Agenda – Days 1 and 2

Day 1
• Configuration Administration Overview
  – PowerMax and VMAX Family Overview
  – Storage Provisioning Overview
  Lab: Explore Lab Environment with Unisphere for PowerMax and SYMCLI
• Virtual Provisioning and FAST Concepts
  – Virtual Provisioning and FAST Overview
  – FAST Algorithms and Parameters
• Device and Port Management
  – Device Management
  – Port Management
  – Port Management with Unisphere and SYMCLI
  Lab: Port Management with Unisphere and SYMCLI

Day 2
• Storage Allocation using Auto-provisioning Groups
  – Auto-provisioning Groups Overview
  – Host Considerations – Storage Allocation
  – Service Level Based Provisioning with Unisphere
  – Service Level Based Provisioning with SYMCLI
  Lab: SL Based Provisioning with Unisphere
  Lab: Service Level Based Provisioning with SYMCLI
  Lab: Cascaded Storage Groups – Moving Devices Non-disruptively and Changing SL
• Management in a Virtualized Environment
  – Virtual Server Management – Unisphere for PowerMax
  – EMC VSI for VMware vSphere Web Client
  Lab: Managing Host I/O Limits

The agenda lists the modules, lessons, and labs covered in the PowerMax and VMAX Family
Configuration and Business Continuity Administration course. This is the recommended agenda for days
one and two.

Agenda – Day 3

Day 3
• Monitoring and Workload Planning with Unisphere for PowerMax
  – Monitor SRP
  – Monitor Storage Group Compliance
  – Monitor Compression
  – Workload Planning
  Lab: Monitoring SRP and Storage Group Compliance with Unisphere
  Lab: Workload Planning with Unisphere
• Introduction to Business Continuity
  – TimeFinder
  – SRDF
  – Integrated Solutions
• TimeFinder SnapVX Operations
  – TimeFinder SnapVX Concepts
  – TimeFinder SnapVX Operations using Unisphere for PowerMax
  Lab: TimeFinder SnapVX

This is the recommended agenda for day three.

Agenda – Days 4 and 5

Day 4
• SRDF/Synchronous Operations
  – SRDF Initial Setup
  – SRDF Disaster Recovery Operations
  – SRDF Decision Support Operations
  – SRDF/S Operations – Unisphere for PowerMax
  Lab: SRDF/Synchronous Operations

Day 5
• SRDF/Asynchronous Operations
  – SRDF/A Concepts and Operations
  – SRDF/A Resiliency Features
  – SRDF/A Multi-session Consistency (MSC)
  Lab: SRDF/Asynchronous Operations
• SRDF/Metro Operations
  – SRDF/Metro Overview
  – SRDF/Metro Setup
  – SRDF/Metro Monitoring
  Lab: SRDF/Metro

This is the recommended agenda for days four and five.

Module: Configuration Administration Overview

Upon completion of this module, you should be able to:

• Describe the PowerMax, VMAX All Flash, and VMAX3 Family of arrays

• Provide an overview of key features

• Identify the available tools for managing these arrays

• Articulate Storage Provisioning concepts


This module describes the PowerMax, VMAX All Flash, and VMAX3 family of arrays and provides an
overview of key features. It also covers tools for management of the arrays and storage provisioning
concepts.

Lesson: Overview
This lesson covers the following topics:

• PowerMax, VMAX All Flash, and VMAX3 model comparison

• Key features

• Management tools


This lesson describes the PowerMax, VMAX All Flash, and VMAX3 family of arrays. It
provides an overview of key features and tools for management of the arrays.

PowerMax, VMAX All Flash, and VMAX3
SCALABILITY

• PowerMax 2000: 1 to 2 Bricks, 96 drives
• PowerMax 8000: 1 to 8 Bricks, 288 drives
• VMAX 250F/FX: 1 to 2 V-Bricks, 100 drives
• VMAX 450F/FX: 1 to 4 V-Bricks, 960 drives
• VMAX 850F/FX and 950F/FX: 1 to 8 V-Bricks, 1920 drives
• VMAX 100K: 1 to 4 Engines*, 2880 drives
• VMAX 200K: 1 to 4 Engines, 2880 drives
• VMAX 400K: 1 to 8 Engines, 5760 drives

*100K with 3 or 4 engines requires RPQ

PowerMax models consist of the PowerMax 2000 and the PowerMax 8000. The VMAX All Flash array
models include the VMAX 250, the VMAX 450, the VMAX 850, and the VMAX 950. PowerMax and VMAX
All Flash arrays provide appliance-like packaging. Engines and drives are packaged in set sizes, and
software is included. In the PowerMax, this appliance-like packaging is known as a Brick. In the VMAX All
Flash, they are called V-Bricks. Additional capacity packs, also in set sizes, can be added to the arrays to
increase the usable storage. The VMAX3 models encompass three array models: the VMAX 100K for
commercial data centers, the VMAX 200K for most Enterprise data centers, and the VMAX 400K for large-
environment Enterprise data centers. The PowerMax, VMAX All Flash, and VMAX3 arrays are 100%
virtually provisioned and preconfigured in the factory. The arrays are built for management simplicity,
extreme performance, and massive scalability in a small footprint.

PowerMax Arrays Features

Maximum Drives per Brick
  • PowerMax 2000: 44 2.5” + spares
  • PowerMax 8000: 32 2.5” + spares
Drive Options
  • PowerMax 2000 and 8000: NVMe Flash
Power Options
  • PowerMax 2000 and 8000: Three-Phase Delta (50 amp), WYE (32 amp); Single-Phase (32 amp)
Vault
  • PowerMax 2000 and 8000: Vault to Flash in engine
Racking Options
  • PowerMax 2000: Dual-brick bay (support for single brick in bay); third party racking
  • PowerMax 8000: Quad-brick bay (support for single brick in bay); third party racking; bay dispersion – 25 meters/82 feet (RPQ)
Service Access
  • PowerMax 2000: Onsite connectivity with service laptop
  • PowerMax 8000: Onsite connectivity with service laptop; service tray on all bays

Common features throughout the PowerMax family include NVMe flash drives in multiple sizes, power
configuration options, multiple racking options, and service access points. Vault to Flash in engine is
another feature that is implemented on the PowerMax family, which is the same as in VMAX3 and VMAX
All Flash models. PowerMax 2000 arrays are dual-brick configurations. Two separate PowerMax 2000
arrays can be installed in a standard 40U rack. PowerMax 8000 bay configurations are quad-brick-per-rack
systems. A single brick in both the dual-brick and quad-brick configurations is supported. A service laptop,
which is not included, is required for onsite access to the array. A service tray is available on the bays in
PowerMax 8000 models.

PowerMax – Model Comparison

                            PowerMax 2000                       PowerMax 8000
Number of Bricks            1-2                                 1-8
Cache per Brick             512 GB, 1 TB, 2 TB                  1 TB, 2 TB
Engine Type                 2.5 GHz, 48-core                    2.8 GHz, 72-core
Max 2.5” Drives per Array   96                                  288
Max Usable Capacity         1 PBe                               4 PBe
Max Front-End Ports         64                                  256
InfiniBand Fabric           None (direct InfiniBand             Dual 18-Port switches
                            connections)

Essentials Software Package: PowerMaxOS, eManagement, TimeFinder SnapVX, Compression and Deduplication, Non-Disruptive Migration (NDM), and AppSync Starter Package

Pro Software Package: All from Essentials Package, plus Data at Rest Encryption (D@RE), SRDF/S, SRDF/A, SRDF 3-site and 4-site, SRDF/Metro, Embedded NAS (eNAS), Unisphere 360, PowerPath (75 Hosts), Storage Resource Management (SRM), and AppSync Full Suite

This table shows a comparison of the PowerMax models.

The PowerMax 2000 is configured with one to two bricks. When fully configured with two bricks, the
PowerMax 2000 supports up to 96 2.5” drives, providing up to 1 Petabyte of usable capacity, and up to 64
front-end ports. There are no switches in the PowerMax 2000, as the two engines in the bricks are directly
connected to each other for data and communications.

The PowerMax 8000 is configured with one to eight bricks. With the maximum eight-brick configuration,
the PowerMax 8000 supports up to 288 2.5” drives, providing up to 4 Petabytes of usable capacity. When
fully configured, the 8000 provides up to 256 front-end ports for host connectivity. The internal fabric
interconnect uses dual InfiniBand 18-port switches for redundancy and availability.

Two software offerings are available with the PowerMax arrays: the Essentials Package, which is a
starter package, and the Pro Package, which includes additional software with the system.

PowerMax 2000 Bay Configurations

[Diagram: One system per rack – single-engine or dual-engine in the bottom half of the rack – or two systems per rack with flexible racking options. Each system is built from engines, SPSs, and DAEs.]

The PowerMax 2000 array provides flexible racking options. The PowerMax 2000 system, whether
single-brick or dual-brick, is located in the bottom half of the rack, leaving the upper 20U for an additional
PowerMax system.

Two systems can be configured in a single rack, and flexible options include two single-brick systems, two
dual-brick systems, and a mix of the two. Certain restrictions and configuration rules apply. See
www.dellemc.com for details.

Onsite connectivity to the arrays, as with the PowerMax 8000 system, requires a service laptop, which is
not included.

PowerMax 8000 Bay Configurations

[Diagram: PowerMax 8000 quad-brick bays – System Bay 1 with Engines 1-4 and System Bay 2 with Engines 5-8, each with SPSs, DAEs, and a service tray; InfiniBand switches A and B and Ethernet switches in System Bay 1.]

The PowerMax 8000 systems are available in quad-engine bay configurations only. Up to four Bricks and
supporting Standby Power Supplies (SPSs) are installed per bay in the PowerMax 8000 systems. Support
for fewer than four Bricks is available with the PowerMax 8000, installed in a quad-engine bay. Ethernet
switches and, in multi-engine systems, InfiniBand switches are installed in System Bay 1 only. Service
trays are available in both PowerMax 8000 bays for connecting a service laptop, which is not included, for
onsite access to the array. No KVM is included with the PowerMax 8000 system.

VMAX All Flash Arrays Features

Maximum Drives per Engine
  • VMAX 250F/FX: 50 2.5” Flash
  • VMAX 450F/FX and 850F/FX: 240 2.5”
  • VMAX 950F/FX: 240 2.5” Flash
Power Options
  • All models: Three-Phase Delta (50 amp), WYE (32 amp); Single-Phase (32 amp)
Racking Options
  • VMAX 250F/FX: Flexible racking; third party racking
  • VMAX 450F/FX, 850F/FX, and 950F/FX: Dual-engine; third party racking; dispersion – System Bays 2-8, up to 25 meters/82 feet from System Bay 1
Vault
  • All models: Vault to Flash in engine
Service Access
  • VMAX 250F/FX: Onsite connectivity with service laptop
  • VMAX 450F/FX, 850F/FX, and 950F/FX: Keyboard, Video, Mouse (KVM) component in System Bay 1; service tray on additional system bays

The VMAX All Flash 250F/FX shares common features with the VMAX All Flash 450, 850 and 950
models, including power options and vaulting. Significant differences, however, exist in the physical
configuration of the VMAX 250 models. First, there is a maximum of 50 2.5” all flash drives in a VMAX
250. The VMAX 250 uses a new 25-drive DAE not used in the other models. A two-engine VMAX 250
occupies 20U of a standard 40U (Titan) rack, leaving the other 20U available for a second system, or
other data center components such as hosts and/or switches. The rack can be Dell EMC-provided or an
approved third-party rack. Restrictions and configuration rules apply; see www.dellemc.com for specifics.
Finally, the VMAX 250 does not have a KVM component. Service access is achieved using the laptop of
an approved service technician.

Features with the VMAX All Flash 450, 850 and 950 models include maximum drives per engine and DAE
type—120-drive DAEs only. These features differ from the VMAX All Flash 250 arrays. In the VMAX All
Flash 450, 850 and 950 models, only dual-engine per bay configurations are supported, along with a third
party racking option. Configurations with a single engine in the dual-engine bay are supported. Power
configuration options, system bay dispersion, and vaulting in the VMAX All Flash 450, 850 and 950 arrays
are identical. Service access is provided by a Keyboard, Video, Mouse (KVM) component located in
System Bay 1 on VMAX 450, 850 and 950 models. Also, a service tray is available on all other bays to
optionally connect a service laptop, which is not included.

VMAX All Flash – Model Comparison

                     VMAX 250              VMAX 450                  VMAX 850               VMAX 950
Number of V-Bricks   1-2                   1-4                       1-8                    1-8
Cache per V-Brick    512 GB, 1 TB or 2 TB  1 TB (2 TB available      1 TB or 2 TB           1 TB or 2 TB
                                           as upgrade)
Engine Type          2.2 GHz, 48-core      2.6 GHz, 32-core          2.7 GHz, 48-core       2.3 GHz, 72-core
Max 2.5” Drives      100                   960                       1920                   1920
Max Usable Capacity  1 PB                  1 PB (2 PB with 2 TB      4 PB                   4 PB
                                           engine upgrade)
Max Front-End Ports  64                    96                        192                    192 (OS), 256 (MF)
InfiniBand Fabric    None                  Dual 12-Port switches     Dual 18-Port switches  Dual 18-Port switches

F Package: HYPERMAX OS, Thin Provisioning, Inline Compression, Non-Disruptive Migration, Virtual Volumes, QOS: Host I/O Limits, Embedded Management, TimeFinder SnapVX, AppSync iCDM Starter Bundle

FX Package: All from F Package, plus Data at Rest Encryption (D@RE), SRDF/S, SRDF/A, SRDF 3-site and 4-site, SRDF/Metro, Embedded NAS, Unisphere 360, PowerPath (75 Hosts), CloudArray Enabler*, ViPR SRM, AppSync Advanced

*CloudArray Enabler only available if sized in VMAX Sizer (does not appear as part of FX bundle in MyQuotes)

This table shows a comparison of the VMAX All Flash models.

The VMAX 250 is configured with one to two engines. When fully configured with two engines, the VMAX
250 supports up to 100 2.5” drives, providing up to 1 Petabyte of usable capacity, and up to 64 front-end
ports. There are no switches in the VMAX 250, as the two engines are directly connected to each other for
data and communications.

The VMAX 450 is configured with one to four engines. With the maximum four-engine configuration, the
VMAX 450 supports up to 960 2.5” drives. This configuration provides up to 2 Petabytes of usable
capacity, when all engines are upgraded with 2 Terabytes of cache. When fully configured, the 450
provides up to 96 front-end ports for host connectivity. The internal fabric interconnect uses dual
InfiniBand 12-port switches for redundancy and availability.

The VMAX 850 is configured with one to eight engines. With the maximum eight-engine configuration, the
VMAX 850 supports up to 1920 2.5” drives, providing up to 4 Petabytes of usable capacity. When fully
configured, the 850 provides up to 192 front-end ports for host connectivity. The internal fabric
interconnect uses dual InfiniBand 18-port switches for redundancy and availability.

The VMAX 950 is also configured with one to eight engines. With the maximum eight-engine configuration,
the VMAX 950 supports up to 1920 2.5” drives, providing up to 4 Petabytes of usable capacity. When fully
configured, the 950 provides up to 256 front-end ports for host connectivity. The internal fabric
interconnect uses dual InfiniBand 18-port switches for redundancy and availability.

Two software offerings are available with the VMAX All Flash arrays:

• F Package, which is a starter package

• FX Package, which has additional software that is included with the system

VMAX 250 Bay Configurations

[Diagram: One system per rack – single-engine or dual-engine – or two systems per rack with flexible racking options*. Each engine has two direct-attach DAEs and SPSs.]

*Restrictions apply – see www.dellemc.com for details

Like the PowerMax 2000, the VMAX 250 models offer flexible racking options, including upgrade capabilities
in a single system. Notice that the system, whether a single-engine or dual-engine configuration, is located
in the bottom half of the rack. This configuration leaves the upper 20U for an additional VMAX 250 system,
or foreign components such as customer-provided hosts and switches.

Two systems can be configured in a single rack, and flexible options include two single-engine systems,
two dual-engine systems, and a mix of the two. Certain restrictions and configuration rules apply. See
www.dellemc.com for details.

Onsite connectivity to VMAX 250 arrays, as with the PowerMax systems, requires a service laptop, which
is not included.

VMAX All Flash 450, 850, 950 Bay Configurations

[Diagram: Dual-engine bays for the VMAX 450F/FX, 850F/FX, and 950F/FX – System Bay 1 contains Engines 1 and 2, Ethernet switches, and the KVM; System Bays 2-4 contain odd/even engine pairs with a service tray. Each engine has two direct-attach DAEs (A and B).]

VMAX All Flash 450, 850 and 950 models use dual-engine bays. The dual-engine bay configuration
contains up to two engines per bay, a supporting power subsystem, and up to 4 DAEs. All 4 DAEs in the
bay are direct-attach, two to each engine. There is no daisy chaining in the dual-engine bays.

In dual-engine systems, there are unique components only present in System Bay 1. These components
include the Keyboard, Video, Mouse (KVM), a pair of Ethernet switches for internal communications, and
dual InfiniBand switches used for the fabric interconnect between engines. The dual InfiniBand switches
are present in multi-engine systems only. In system bays 2 through 8, a work tray is located in place of the
KVM and Ethernet switches. The work tray provides the option to connect a service laptop, which is not
included, for remote access to scripts, diagrams, and other service processor functionality.

VMAX3 Arrays Features

Feature: VMAX 100K, 200K, 400K

Maximum Drives per Engine
  • 720 2.5” or 360 3.5”
Drive Options
  • Hybrid (mixed drive types) or All Flash
Power Options
  • Three-Phase Delta (50 amp), WYE (32 amp); Single-Phase (32 amp)
Vault
  • Vault to Flash in engine
Racking Options
  • Single-engine bay
  • Dual-engine bay (support for single engine in bay)
  • Third party racking
  • System Bay dispersion – System Bays 2-8, up to 25 meters/82 feet from System Bay 1
Service Access
  • Keyboard, Video, Mouse (KVM) component in System Bay 1
  • Service tray on additional system bays

Common features throughout the VMAX3 Family include maximum drives per engine for both hybrid and
all flash, power configuration options, system bay dispersion, multiple racking options, and service access
points. Vault to Flash in engine is another feature that is implemented on the VMAX3 Family. Service
access is provided by a Keyboard, Video, Mouse (KVM) component in System Bay 1. A service tray is
available on additional bays to optionally connect a service laptop, which is not included.

VMAX3 Arrays – Model Comparison

                         VMAX 100K              VMAX 200K               VMAX 400K
Number of Engines        1-2*                   1-4                     1-8
Cache per Engine         512 GB or 1 TB         512 GB, 1 TB, or 2 TB   512 GB, 1 TB, or 2 TB
Engine Type              2.1 GHz, 24-core       2.6 GHz, 32-core        2.7 GHz, 48-core
Maximum 2.5” Drives      1440                   2880                    5760
Maximum 3.5” Drives      720                    1440                    2880
Maximum Usable Capacity  0.5 PBu                2.3 PBu                 4.3 PBu
Maximum Front-End Ports  64                     128                     256
InfiniBand Fabric        Dual 12-Port switches  Dual 12-Port switches   Dual 18-Port switches

*VMAX 100K with up to four engines is supported with RPQ only

This table shows a comparison of the three VMAX3 Family arrays.

The VMAX 100K is configured with one to two engines. With the maximum two-engine configuration, the
VMAX 100K supports up to 1440 2.5” drives, or up to 720 3.5” drives, providing up to 0.5 Petabytes of
usable capacity. When fully configured, the 100K provides up to 64 front-end ports for host connectivity.
The internal fabric interconnect uses dual InfiniBand 12-port switches for redundancy and availability. The
VMAX 100K can be configured with up to four engines, with a Request for Price Quote (RPQ), or special
order. With the maximum four-engine configuration, the VMAX 100K doubles the amount of supported
drives, usable capacity, and front-end ports.

The VMAX 200K is configured with one to four engines. With the maximum four-engine configuration, the
VMAX 200K supports up to 2880 2.5” drives, or up to 1440 3.5” drives, providing up to 2.3 Petabytes of
usable capacity. When fully configured, the 200K provides up to 128 front-end ports for host connectivity.
The internal fabric interconnect uses dual InfiniBand 12-port switches for redundancy and availability.

The VMAX 400K is configured with one to eight engines. With the maximum eight-engine configuration,
the VMAX 400K supports up to 5760 2.5” drives, or up to 2880 3.5” drives, providing up to 4.3 Petabytes
of usable capacity. When fully configured, the 400K provides up to 256 front-end ports for host
connectivity. The internal fabric interconnect uses dual InfiniBand 18-port switches for redundancy and
availability.

VMAX3 Bay Configurations

[Diagram: Single-engine bays (VMAX 100K, 200K, 400K) – System Bay 1 with Engine 1, Ethernet switches, and KVM; System Bays 2-8 with Engines 2-8 and a service tray. Dual-engine bays – System Bay 1 with Engines 1 and 2; System Bays 2-4 with odd/even engine pairs. Each engine has direct-attach A and B DAEs.]

VMAX3 Family arrays can be either single-engine bay configurations or dual-engine bay configurations.

In a single-engine bay configuration, as the name suggests, there is one engine per bay supported by the
power subsystem and up to six DAEs. Two of the DAEs are direct-attach to the engine, and each of them
can have up to two additional daisy-chained DAEs.

The dual-engine bay configuration contains up to two engines per bay, a supporting power subsystem,
and up to 4 DAEs. All 4 DAEs in the bay are direct-attach, two to each engine. There is no daisy chaining
in the dual-engine bays.

In both single-engine and dual-engine systems, there are unique components only present in System Bay
1. These components include the Keyboard, Video, Mouse (KVM), a pair of Ethernet switches for internal
communications, and dual InfiniBand switches used for the fabric interconnect between engines. The dual
InfiniBand switches are present in multi-engine systems only. In system bays 2 through 8, a work tray is
located in place of the KVM and Ethernet switches. The work tray provides the option to connect a service
laptop, which is not included, for remote access to scripts, diagrams, and other service processor
functionality.

Dynamic Virtual Matrix

[Diagram: Dynamic resource allocation across multiple processors – application access at the top, scale-out and data services (data integrity, QoS, Virtual Provisioning, TimeFinder, SRDF, FAST) in the middle, storage access at the bottom.]

PowerMax, VMAX All Flash, and VMAX3 arrays feature the world's first and only Dynamic Virtual Matrix. It
enables hundreds of CPU cores to be pooled and allocated on-demand to meet the performance
requirements for dynamic mixed workloads. It is architected for agility and efficiency at scale.

Resources are dynamically apportioned to host applications, data services, and storage pools to meet
application service levels. These resources enable the system to automatically respond to changing
workloads and to optimize itself to deliver the best performance available from the current hardware.

The Dynamic Virtual Matrix provides:

• Fully redundant architecture along with fully shared resources within a dual controller node and across
multiple controllers

• A dynamic load distribution architecture

The Dynamic Virtual Matrix is essentially the BIOS of the array operating software. It provides a truly
scalable multi-controller architecture that scales and manages from two fully redundant storage controllers
up to 16 fully redundant storage controllers. All controllers share common I/O, processing, and cache
resources.

Multi-Core Technology

[Diagram: Legacy VMAX – a single, dedicated core hard-wired to each dual front-end (FA) or back-end (DA) port. PowerMax/VMAX All Flash/VMAX3 – front-end and back-end core pools managed by HYPERMAX OS; in the default (balanced) setting, all cores in a pool can be applied to a port, even one at 100% load.]

Legacy VMAX architecture—VMAX 10K, 20K, and 40K—supports a single, hard-wired dedicated core for
each dual port for FE or BE access—regardless of data service performance changes. The PowerMax,
VMAX All Flash, and VMAX3 systems can focus hardware resources, namely cores, as needed by
storage data services.

The PowerMax, VMAX All Flash, and VMAX3 architecture provides a CPU pooling concept, and further, it
provides a set of threads on a pool of cores. The pools provide a service for FE access, BE access, or a
data service such as replication. As displayed here, the default configuration has the services balanced
across FE ports, BE ports, and data services.

A unique feature enables the system to provide the best performance possible even when the workload is
not well distributed across the various ports, drives, and central data services – as when there is 100%
load on a single port pair. In this specific use case, all the FE cores can be applied to the heavily used
dual port for a period of time.

There are three core allocation policies: balanced, front-end, back-end. Dell EMC Services can shift the
bias of the pools between balanced, front-end—for example, lots of small host I/O and high cache hits—
and back-end—for example, write-heavy workloads. This shifting becomes dynamic and automated over
time. This change cannot be managed with management software.

Key Features

• 100% Virtually Provisioned
  – Arrays shipped preconfigured with Data Devices, Data Pools, Storage Resource Pool (SRP), and Service Levels (SLs)
• eManagement
  – Manage the array without software installed on a host
• Service Level Provisioning
  – Classify applications at the Storage Group level
• MMCS
  – Integrated service processor in System Bay 1 – Management Module Control Station
• Local and Remote Replication
  – TimeFinder SnapVX, SRDF, ProtectPoint

Here is a brief overview of some of the features of PowerMax, VMAX All Flash, and VMAX3 arrays. The
arrays are preconfigured at the factory, and are all virtually provisioned. The preconfiguration creates all
required Data Pools and RAID protection levels.

Service Level provisioning provides a simpler way to provision storage, enabling classification of
applications at the Storage Group level.

For local and remote replication, TimeFinder, SRDF, and ProtectPoint are available on the arrays.
TimeFinder SnapVX does not require a target volume, providing space-efficient point-in-time local copies
of data. Symmetrix Remote Data Facility, or SRDF, offers multiple remote replication options, including
synchronous and asynchronous replication. The ProtectPoint solution integrates with Data Domain
providing backup and restore capability using TimeFinder SnapVX and FAST.X.
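As a minimal SYMCLI sketch of these replication operations (the array ID 0123, storage group ProdSG, RDF group number, and snapshot name are all hypothetical), a SnapVX snapshot can be taken against a storage group and an SRDF session queried:

   # Create a targetless SnapVX snapshot of all devices in a storage group
   symsnapvx -sid 0123 -sg ProdSG establish -name DailySnap

   # List the snapshots held for that storage group
   symsnapvx -sid 0123 -sg ProdSG list

   # Query the SRDF pair states for the storage group on RDF group 10
   symrdf -sid 0123 -sg ProdSG -rdfg 10 query

SRDF operations themselves are covered in detail in later modules.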

PowerMax, VMAX All Flash, and VMAX3 arrays have Management Module Control Stations (MMCSs)
installed in System Bay 1. The MMCS is an integrated service processor which provides environmental
monitoring of, support functionality for, and accessibility to the arrays.

eManagement is a capability that enables customers to run Dell EMC array management software
components inside the array. eManagement provides a tightly integrated management solution for
customers interested in managing a single PowerMax, VMAX All Flash, or VMAX3 array. Dell EMC
Solutions Enabler (SE) and Unisphere for PowerMax provide array management and control of the arrays.

Additional Features – FAST.X on VMAX3

Extends Data Services with SLO Provisioning

[Diagram: FAST.X presents external capacity to the VMAX3 – Dell EMC arrays (XtremIO, VMAX, DMX, VNX2), third-party arrays (IBM DS8000, HP 3PAR, HDS G1000), and CloudArray*.]

*CloudArray not supported for FAST operations

With substantial increases in the amount of data stored, businesses continue to strive for ways to use and
extend the value of existing resources, reduce the cost of management, and drive the best performance
achievable in the environment. Adding to the challenge is the desire to ensure that data is kept on an
appropriate storage tier so that it is available when needed but stored in as cost-effective and
environmentally responsible a manner as possible.

FAST.X addresses many of these concerns by allowing qualified storage platforms to be used as physical
disk space for VMAX3 arrays. Users can continue to use VMAX3 availability and reliability along with
features such as local and remote replication, storage tiering, and data migration on external arrays. FAST
uses external array capacity as a tier for part of SL-based provisioning. Automatic movement of data
based on usage includes any arrays that are within the same SRP as local tier storage on VMAX3 arrays.
Because CloudArray must reside in its own SRP, it is not available as part of the FAST tiering solution. All
other supported arrays including Dell EMC XtremIO, VMAX, DMX and VNX, and third-party arrays from
IBM, HP, and HDS can be used by FAST.

FAST.X uses Dell EMC Data Domain systems for backups in a ProtectPoint solution. ProtectPoint
provides block movement of data on source LUNs to Data Domain LUNs for incremental backups. VMAX
All Flash arrays use FAST.X capability for ProtectPoint and CloudArray solutions only. FAST.X is not
supported on PowerMax arrays; however, FAST.X technology is used for a ProtectPoint solution in
PowerMax arrays.

Additional Features – ProtectPoint

[Diagram: An application host writes production data to a PowerMax, VMAX All Flash, or VMAX3 array; SnapVX captures the data, and only the changes are copied through FAST.X to Data Domain.]

ProtectPoint is an integration between PowerMax, VMAX All Flash, or VMAX3 arrays and Data Domain
storage systems to back up production data. TimeFinder SnapVX is used to create a replica, or a
snapshot, of a LUN. ProtectPoint copies the snapshot to a vdisk on the Data Domain system. The vdisk is
seen by the source array as a FAST.X encapsulated LUN. Change tracking is enabled for the replica, and
therefore, only changes made are copied, providing performance increases and space savings.
ProtectPoint eliminates performance impact on applications, provides faster backup and recovery, and
reduces costs and complexity of traditional backups.

Additional Features – Data Reduction

• VMAX3 Family – SRDF Data Compression
  – Minimizes the amount of data that is transmitted over SRDF links
  – Software and hardware compression
• VMAX All Flash Family – Inline Compression
  – Compresses data as it is written to flash drives
  – Storage Group (SG) level
  – Open systems FBA data only
• PowerMax Family – Inline Compression and Deduplication
  – Further improves efficiency
  – Reduces the number of copies of identical tracks that are stored on drives
  – Open systems FBA data only

Data reduction technologies supported in PowerMax, VMAX All Flash, and VMAX3 arrays include
compression and deduplication.

With VMAX3 arrays, SRDF compression minimizes the amount of data that is transmitted over an SRDF
link. Both software and hardware compression can be activated simultaneously for SRDF traffic over GigE
and Fibre Channel. Data is first compressed by software and then further compressed by hardware.
Hardware compression is available through a compression I/O module.

VMAX All Flash arrays support inline compression. Inline compression compresses data as it is written to
flash drives and is a feature of storage groups. Compression is enabled by default, and new I/O to an SG
is compressed when written to disk. If there is existing data on the SG, it starts to compress in the
background. After compression is disabled, new I/O is no longer compressed, and existing data remains
compressed until it is written again, at which time it decompresses. Compression is available on open
systems (FBA) only, which includes eNAS data.

PowerMax arrays feature deduplication with inline compression. Deduplication works hand-in-hand with
inline compression. Deduplication reduces the number of copies of identical tracks that are stored on
back-end drives. Enabling deduplication also enables compression. That is, deduplication cannot operate
independently of compression. Both must be active. In addition, deduplication operates across an entire
system. It is not possible to use compression only on some storage groups and compression with
deduplication on others. As with inline compression, deduplication is available on open systems (FBA)
data only.

Additional Features – eNAS

[Diagram: eNAS adds file services (NFS, CIFS, SMB3) alongside block access (FC, iSCSI) on the same array.]

Embedded Network-Attached Storage (eNAS) extends file-level storage capability to PowerMax, VMAX
All Flash, and VMAX3 arrays. The storage hypervisor on the array manages and protects embedded
services by extending high availability to these services that traditionally would have run outside the array.
It provides direct access to hardware resources to maximize performance. Virtual instances of Data
Movers and Control Stations running on the PowerMax, VMAX All Flash, or VMAX3 arrays provide the
NAS services.

Additional Features – PowerPath

[Diagram: A host with multiple HBAs connected through the SAN – PowerPath provides load balancing across paths and failover on path loss, for up to 75 hosts.]

The PowerMax Pro and VMAX All Flash FX software packages include Dell EMC PowerPath and
PowerPath/Virtual Edition (PowerPath/VE) for up to 75 hosts. PowerPath can be purchased separately for
VMAX3 arrays. PowerPath dynamically routes I/O to the most efficient paths using patented algorithms to
balance workloads. Testing of paths includes health and performance checks, and improves performance
and availability compared to traditional host-based multipathing tools. With PowerPath, users gain the
automated data path management and load balancing for all heterogeneous servers, networks, and
storage deployed in their environment. This streamlined data path enables more predictable and
consistent application availability while providing up to three times the IOPS even during periods of high
I/O.

PowerMaxOS 5978 integration with PowerPath includes advanced host and array information sharing,
including hostname, operating system version, and cluster details. Host I/O Limits and Service Levels are
known to PowerPath as well, and automated host information mapping is included for simplified
management. PowerMax machine learning integrates with PowerPath/VE to improve application
performance using I/O tagging, currently supported with Oracle only.
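On an attached host, PowerPath path state is typically inspected with the powermt utility. A brief sketch (device naming and output vary by platform):

   # Summary of logical devices and paths per storage-system port
   powermt display

   # Per-device path state and the load-balancing policy in effect
   powermt display dev=all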

Configuration Tools

• Initial configuration
– Configuration is done at the factory
– SymmWin and Simplified SymmWin
› Runs on Management Module Control Station (MMCS)
› Access is restricted to authorized Dell EMC personnel only

• End-user tools for configuration and management
  – Solutions Enabler
  – Unisphere for PowerMax
  – Unisphere 360

The initial configuration of the PowerMax, VMAX All Flash, and VMAX3 arrays is done at the Dell EMC
factory with SymmWin and Simplified SymmWin. These software applications run on the Management
Module Control Station (MMCS) of the arrays, and are restricted for use by Dell EMC personnel only.
Once the arrays have been installed, Solutions Enabler (SYMCLI), Unisphere for PowerMax and
Unisphere 360 can be used to manage them.
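For example, once Solutions Enabler is installed on a management host, the first steps are typically to build the SYMAPI configuration database and verify that the arrays are visible. A minimal sketch:

   # Scan interfaces for attached arrays and build/refresh the SYMAPI database
   symcfg discover

   # List the arrays now known to this host
   symcfg list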

Management Tools

• Solutions Enabler and Unisphere for PowerMax
  – Installed in local, remote, or embedded configurations

[Diagram: Local – clients access a management server attached to the arrays; Remote – clients access a management server that communicates with a SYMAPI server; Embedded – eManagement runs on the array itself. SRDF-linked arrays can be managed in each configuration.]

Local, remote, or embedded instances of Solutions Enabler (SE) and Unisphere for PowerMax
(Unisphere) can be used to monitor, manage and configure PowerMax, VMAX All Flash, and VMAX3
arrays. Solutions Enabler provides command line interface (CLI) access, and Unisphere provides a
graphical user interface (GUI).

In a local configuration, SE and Unisphere are loaded onto a management server that is connected to the
array(s). A SYMAPI server is used, and accessed by the management server in a remote configuration.
Users typically access the management hosts through clients that are configured in the data center. The
newest implementation of management tools for PowerMax, VMAX All Flash, and VMAX3 arrays is
Embedded Management, or eManagement (eMgmt). eMgmt provides individual instances of array
management tools running on the array. eMgmt includes Solutions Enabler, Unisphere, SMI-S—an
industry standard intended to facilitate the management of storage devices from multiple vendors in
Storage Area Networks—and DBA—Data Base Analyzer, used with Unisphere for viewing storage at
database object levels. eMgmt can be used to monitor both local and remotely attached arrays.
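As a brief sketch of the local and remote modes (the connection name SYMAPI_SERVER below is hypothetical; named connections are defined in the Solutions Enabler netcnfg file):

   # Local: query arrays through this host's own SYMAPI library
   symcfg list

   # Remote: direct SYMCLI at a SYMAPI server instead
   export SYMCLI_CONNECT=SYMAPI_SERVER
   export SYMCLI_CONNECT_TYPE=REMOTE
   symcfg list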

Solutions Enabler Integration with Array Operating Systems

[Diagram: Unisphere and SYMCLI call into SYMAPI, which communicates with PowerMaxOS/HYPERMAX OS; SymmWin on the MMCS accesses the array operating environment directly.]

This diagram illustrates the software layers and where each component resides.

Dell EMC's Solutions Enabler APIs are the storage management programming interfaces that provide an
access mechanism for managing the PowerMax, VMAX All Flash, and VMAX3 arrays. They can be used
to develop storage management applications. SYMCLI resides on a host system to monitor and perform
control operations on the arrays. SYMCLI commands are invoked from the host operating system
command line—shell. The SYMCLI commands are built on top of SYMAPI library functions, which use
system calls that generate low-level I/O SCSI commands to the storage arrays.

Unisphere for PowerMax is the graphical user interface that makes API calls to SYMAPI to access the
array.

SymmWin, running on the MMCS, accesses PowerMaxOS/HYPERMAX OS directly.

Dell EMC Solutions Enabler – Introduction

• Symmetrix Command Line Interface (SYMCLI)
• Comprehensive command set for managing PowerMax, VMAX All Flash, and VMAX3 arrays
  – Invoked from the host operating system command line
  – Scripts may provide further integration with the operating system and applications
• Security and access controls
  – Monitor only
  – Host-based and user-based controls

Solutions Enabler command line interface (SYMCLI) is used to perform control operations on PowerMax
and VMAX arrays, and the array devices, tiers, groups, directors, and ports. Some of the array controls
include setting array-wide metrics, creating devices, and masking devices.

You can invoke SYMCLI from the local host to make configuration changes to a locally connected
PowerMax, VMAX All Flash, or VMAX3 array, or to an RDF-linked array.

Unisphere for PowerMax – Introduction

• Management console for the PowerMax and VMAX Family of arrays
• Performance Analyzer
  – Installed by default
  – PostgreSQL
• APIs for automation and provisioning

Dell EMC Unisphere for PowerMax is the management console for the PowerMax and VMAX family of arrays.

In previous versions of Unisphere, Performance Analyzer was an optional component. With Unisphere
8.0.x and above, the installation of Performance Analyzer is done by default during the installation of
Unisphere. Also with Unisphere 8.0.x and above, PostgreSQL replaces MySQL as the database for
Performance Analyzer. Unisphere for PowerMax also provides a comprehensive set of APIs that can be
used by orchestration services like SRM, OpenStack, and VMware.
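As an illustration of the REST API (the hostname and credentials are hypothetical, and API version 90 is an assumption matching Unisphere for PowerMax 9.0; the default port is 8443), the arrays known to a Unisphere instance can be listed with a simple HTTPS request:

   # List the arrays registered with this Unisphere instance
   curl -k -u smc:smc -H "Accept: application/json" \
        https://unisphere.example.com:8443/univmax/restapi/90/sloprovisioning/symmetrix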

Unisphere for PowerMax – Functionality

• Manage eLicenses, Users, and Roles


• Storage Configuration Management
– SL-based provisioning
– FAST on VMAX3
• Configure and Monitor Alerts
• Performance Monitoring
– Real time, root cause, and historical
– Dashboards
› Predefined
› User-customized


You can use Unisphere for PowerMax for a variety of tasks, including managing eLicenses, user accounts
and roles, and performing array configuration and volume management operations, such as SL-based
provisioning on PowerMax and VMAX family arrays. It can also manage Fully Automated Storage Tiering
(FAST) on VMAX3 arrays.

With Unisphere, you can also configure and monitor alerts and alert thresholds.

In addition, Unisphere provides tools for performing analysis and historical trending of performance data
with Performance Analyzer. Performance Analyzer provides a view of high frequency metrics in real time,
system heat maps, and graphs detailing system performance. You can also drill down through data to
investigate issues, monitor performance over time, run scheduled and ongoing reports (queries), and
export that data to a file. Users can use various predefined dashboards for many of the system
components, or customize their own dashboard view.

Unisphere 360

Provides single, centralized monitoring of up to 200 enrolled DMX, VMAX2, VMAX3, VMAX All Flash, and PowerMax arrays

With the introduction of Embedded Management, multiple arrays within a data center can be running
individual instances of Unisphere. Unisphere 360 provides users with a single, centralized view of all
registered instances of Unisphere, both embedded and traditional, from PowerMax, VMAX All Flash,
VMAX3, VMAX2, and DMX arrays. Unisphere 360 facilitates better insight across the entire data center.
Users have a single-window view where they can manage, monitor, and plan at the array level or for the
entire data center. Management can be done through link and launch to the registered Unisphere
instances. The minimum version of Unisphere for VMAX supported by Unisphere 360 is version 8.2.0.

Lesson: Storage Provisioning Overview
This lesson covers the following topics:

• Factory Preconfiguration

• Service Level based provisioning

• Introduction to configuration changes with Unisphere and SYMCLI


This lesson covers factory preconfiguration and storage provisioning concepts for PowerMax, VMAX All
Flash, and VMAX3 arrays. An introduction to configuration changes with Unisphere for PowerMax and
SYMCLI is also provided.

Factory Preconfiguration

Disk Group
• Collection of physical drives

Data Pools
• Collection of Data Devices (TDATs) in each Disk Group
• One-to-one relationship between Data Pool and Disk Group in VMAX3, and many* to one in VMAX All Flash and PowerMax
• Performance capability is known based on drive type, speed, capacity, quantity of drives, and RAID protection

Storage Resource Pool (SRP)
• Collection of Data Pools

Service Level (SL)
• Expected average response time target

*Up to 16

Disk Groups in PowerMax, VMAX All Flash, and VMAX3 arrays are similar to previous generation VMAX
arrays. A Disk Group is a collection of physical drives. Each drive in a Disk Group shares the same
performance characteristics, determined by the rotational speed and technology of the drives—15K, 10K,
7.2K, or Flash—and the capacity.

Data Pools are a collection of data devices. Each individual Disk Group is preconfigured with data devices
(TDATs). All the data devices in the Disk Group have the same RAID protection. Thus, a given Disk Group
only has data devices with one single RAID protection. All the data devices in the Disk Group have the
same fixed size devices. All available capacity on the disk is consumed by the TDATs. All the data devices
(TDATs) in a Disk Group are added to a Data Pool. There is a one-to-one relationship between a Data
Pool and Disk Group in VMAX3. There is up to a sixteen-to-one relationship in VMAX All Flash and
PowerMax arrays, which is dynamically configured by PowerMaxOS. The performance capability of each
Data Pool is known and based on the drive type, speed, capacity, quantity of drives, and RAID protection.

One Storage Resource Pool (SRP) is preconfigured.

The available Service Levels are also preconfigured. Disk Groups, Data Pools, Storage Resource Pools,
and Service Levels cannot be configured or modified by Solutions Enabler or Unisphere. They are created
during the configuration process in the factory.

Data Pool and Disk Group Relationship in VMAX3

[Figure: one-to-one Data Pool to Disk Group mapping on VMAX3 – Data Pool 0 / Disk Group 0: 400 GB 2.5" Flash, RAID 5 (3+1); Data Pool 1 / Disk Group 1: 300 GB 2.5" SAS 15K, RAID 1; Data Pool 2 / Disk Group 2: 1 TB 3.5" SAS 7.2K, RAID 6 (14+2)]


The Data Devices of each Data Pool are preconfigured. The Data Pools are built according to what is
selected by the customer during the ordering process. All Data Devices that belong to a particular Data
Pool must belong to the same Disk Group. There is a one-to-one relationship between Data Pools and
Disk Groups. Disk Groups must contain drives of the same disk technology, rotational speed, capacity,
and RAID type. The performance capability of each Data Pool is known and is based on the drive type,
speed, capacity, quantity of drives, and RAID protection.

In this example, Disk Group 0 contains 400 Gigabyte (GB) Flash drives configured as RAID 5 (3+1). Only
Flash devices of this size and RAID type can belong to Disk Group 0. If more drives are added to Disk
Group 0, they must be 400 GB Flash drives that are configured as RAID 5 (3+1). Disk Group 1 contains
300 GB SAS drives with rotational speeds of 15 thousand (15K) revolutions per minute (rpm) configured
as RAID 1. Disk Group 2 contains 1 Terabyte (TB) SAS drives with rotational speeds of seventy-two
hundred (7.2K) rpm configured as RAID 6 (14 + 2). This is an example of a configuration on a VMAX3
array.

In VMAX All Flash and PowerMax, there are multiple data pools dynamically created based on
compression. The pools provide various data compression ratios but the same RAID protection, all located
in the same disk group.

Protection Options – Data Pools

Option   Characteristics                                 Protection   Performance   Array Support

RAID 1   • Write to two separate physical drives         Higher       Fastest       VMAX3
         • Read from single drive

RAID 5   • Parity-based protection (3+1 and 7+1)         High         Fast Read     VMAX3
         • Striped data and parity                                    Good Write    VMAX All Flash (7+1 only)
                                                                                    PowerMax 2000
                                                                                    PowerMax 8000 (7+1 only)

RAID 6   • Two parity drives (6+2 and 14+2)              Highest      Fast Read     VMAX3
         • Data availability is primary consideration                 Fair Write    VMAX All Flash (14+2 only)
         • Performance is a secondary consideration                                 PowerMax (6+2 only)


PowerMax, VMAX All Flash, and VMAX3 arrays are preconfigured with Data Pools and Disk Groups as
discussed earlier. There is a 1:1 correspondence between Data Pools and Disk Groups. The Data
Devices in the Data Pools are configured with one of the data protection options listed on the slide. The
choice of the data protection option is made during the ordering process, and the array is configured with
the chosen and available options.

RAID 1 employs disk mirroring. It is the replication of data to two or more disks. Disk mirroring is a good
choice for applications that require high performance and high availability, such as transactional
applications, email, and operating systems. RAID 1 is supported in VMAX3 arrays only.

RAID 5 is based on the industry standard algorithm and can be configured with three data and one parity,
or seven data and one parity. While the latter provides more capacity per dollar, there is a greater
performance impact in degraded mode where a drive has failed. All surviving drives must be read to
rebuild the missing data. VMAX All Flash and PowerMax 8000 systems support RAID 5 only in a 7+1
configuration. VMAX3 and PowerMax 2000 arrays support both 3+1 and 7+1 RAID 5 protection.

RAID 6 focuses on availability. With the new larger capacity disk drives, rebuilding may take multiple days,
increasing the exposure to a second disk failure. VMAX All Flash supports RAID 6 only in a 14+2
configuration. PowerMax arrays support RAID 6 only in a 6+2 configuration. Random read performance is
similar across all protection types, assuming you are comparing the same number of drives. The major
difference is write performance. With mirrored devices, for every host write, there are two writes on the
back end. With RAID 5, each host write results in two reads and two writes. For RAID 6, each host write
results in three reads and three writes.
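To make the back-end write penalty concrete, consider 1,000 random host writes (an illustrative number, not from the course):

    RAID 1:  1,000 x 2 back-end writes     = 2,000 back-end I/Os
    RAID 5:  1,000 x (2 reads + 2 writes)  = 4,000 back-end I/Os
    RAID 6:  1,000 x (3 reads + 3 writes)  = 6,000 back-end I/Os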

Storage Resource Pool (SRP)

• Collection of Data Pools


• Factory Preconfiguration includes one SRP
• Not configurable with Solutions Enabler or Unisphere
• Multiple SRPs may be configured
[Figure: a Storage Resource Pool containing Data Pool 0 (Flash, RAID 5 (3+1)), Data Pool 1 (SAS 15K, RAID 1), and Data Pool 2 (SAS 7.2K, RAID 6 (14+2))]


A Storage Resource Pool (SRP) is a collection of Data Pools, which are configured from Disk Groups. A
Data Pool can only be included in one SRP. SRPs are not configurable using Solutions Enabler or
Unisphere. The factory preconfigured array includes one SRP that contains all Data Pools in the array. If
required, multiple SRPs can be configured by qualified Dell EMC personnel. If there are multiple SRPs,
one of them must be marked as the default.

Service Level Based Provisioning

Service Level (SL)
• Defines ideal performance operating range of an application
• Can be combined with a Workload Type*
  – Further refines the performance objective
• Preconfigured

Storage Group (SG)
• Can be explicitly associated with an SRP
• Can be explicitly associated with an SL and Workload Type*
  – Defines the SG as FAST managed on VMAX3 arrays
• Implicitly associated with Default SRP and Optimized SL

*The concept of Workload Type has been removed from PowerMax arrays


A Service Level (SL) defines the ideal performance operating range of an application. Each SL contains
an expected maximum response time range. The response time is measured from the perspective of the
front-end adapter. The SL can be combined with a workload type to further refine the performance
objective on VMAX All Flash and VMAX3 arrays. SLs are predefined and are prepackaged with the array,
and are not customizable by Solutions Enabler or Unisphere.

A Storage Group (SG) is a logical grouping of devices that are used for device masking, control, and
monitoring. A Storage Group can be associated with an SRP, enabling devices in the SGs to allocate
storage from any pool in the SRP. When an SG is associated with an SL, it defines the SG as FAST
managed in VMAX3 arrays. SL-based provisioning is covered in more detail in subsequent modules in the
course.

Service Level Details

Dell EMC Array          Operating System    Service Levels Behavior

PowerMax Family         PowerMaxOS 5978     • Front-end queue management
                                            • Predictable and consistent application performance
VMAX All Flash Family   PowerMaxOS 5978     • Front-end queue management
                                            • Predictable and consistent application performance
VMAX All Flash Family   HYPERMAX OS 5977    • Diamond Service Level
                                              – All flash drives
VMAX3 Family            HYPERMAX OS 5977    • Prioritized a lower-cost solution with media mix
(Hybrid arrays)                               – Mix of flash and spinning drives
                                            • Automated process of managing disk pools (FAST)


Service Levels were reintroduced with PowerMaxOS 5978 running on VMAX All Flash and PowerMax
arrays. PowerMaxOS uses response time targets to prioritize the performance of Storage Groups using front-end queuing, with a floor and a ceiling. SGs never go below their set floor and never go above their set ceiling, ensuring consistency when adding new applications to an existing array. Previous applications
continue to perform as they have. There is no impact when adding new applications, and predictable
performance can be assigned using Service Levels.

With HYPERMAX OS 5977, FAST is limited in function on the All Flash arrays because there is only one type of drive, so there is no need to manage to SLs. For these arrays, the Diamond Service Level is used. FAST is still relevant, however, when load must be balanced across RAID groups after new capacity is added and the workload must be migrated to the new drives for balance. The system is still monitored to ensure that Diamond Service Levels are being met.

With VMAX3 hybrid arrays, FAST dynamically allocates workloads across storage technologies by non-
disruptively moving application data to meet stringent Service Levels. FAST moves the most active data to
high-performance flash drives and the less active data to lower-cost drives. FAST uses the best
performance and cost characteristics of each different drive type. FAST proactively monitors workloads to
identify busy data that would benefit from being moved to higher performing drives. It also identifies less-
busy data that could be moved to higher capacity drives, without affecting existing performance. This
promotion/demotion activity is based on achieving service level objectives that set performance targets for
associated applications, with FAST determining the most appropriate drive technologies, or RAID
protection types, to use.

5978 Family Service Levels

Service Level   Expected Average Response Time   Limit

Diamond         0.6 ms                           Upper
Platinum        0.8 ms                           Upper
Gold            1.0 ms                           Upper and Lower
Silver          3.6 ms                           Upper and Lower
Bronze          7.2 ms                           Upper and Lower
Optimized       N/A                              N/A


With PowerMax arrays, a single tier of flash storage is configured. Multiple Service Levels were
reintroduced with PowerMax arrays to enable users to set expectations for applications to provide
predicable and consistent performance. Users can prioritize data access and set priority on critical, high
priority applications while managing lower priority applications using Service Levels in PowerMax arrays.
Service Levels have a target response time, and, in some cases, an upper and lower response time limit
as well.

Diamond and Platinum SLs have the highest priority and performance. Both have an upper response time
limit, but no lower response time limit, ensuring they are serviced as fast as possible.

Gold, Silver, and Bronze SLs have both an upper and a lower limit, designed to enable higher priority SLs
to be unaffected. These SLs are managed such that their average response time is greater than or equal
to the lower response time limit.

5977 Family Service Levels

Service Level   Performance Behavior        Expected Average Response Time   Array Support

Diamond         Flash                       0.8 ms                           VMAX3, VMAX All Flash
Platinum        Between Flash and 15K RPM   3.0 ms                           VMAX3
Gold            15K RPM                     5.0 ms                           VMAX3
Silver          10K RPM                     8.0 ms                           VMAX3
Bronze          7.2K RPM                    14.0 ms                          VMAX3
Optimized       System optimized            N/A                              VMAX3, VMAX All Flash


In addition to the default Optimized SL, there are five available Service Levels, varying in expected
average response time targets. The Optimized SL has no explicit response time target. The Optimized SL
achieves optimal performance by placing the most active data on higher performing storage and the least
active data on the most cost-effective storage. Diamond emulates Flash drive performance. Platinum
emulates performance between Flash and 15K RPM drives. Gold emulates the performance of 15K RPM
drives. Silver emulates the performance of 10K RPM drives. Bronze emulates performance of 7.2K RPM
drives. The actual response time of an application associated with an SL varies based on the actual
workload. It depends on the average I/O size, read/write ratio, and the use of local and remote replication.
The end user can associate the desired SL with a Storage Group. The Diamond SL is available only if
Flash drives are configured. For the VMAX All Flash array with only the internal SRP, only the Diamond
Service Level is available. However, if Dell EMC CloudArray is integrated into a VMAX All Flash array, the
Optimized Service Level is available for the external SRP used for CloudArray.

Workload Types

Workload Type*          Description

OLTP                    Small block I/O workload
OLTP with Replication   Small block I/O workload with local or remote replication
DSS                     Large block I/O workload
DSS with Replication    Large block I/O workload with local or remote replication

*The concept of workload type has been removed from PowerMax arrays


There are four workload types as shown here. The workload type can be specified with the Diamond,
Platinum, Gold, Silver, and Bronze SLs to further refine response time expectations. You cannot associate
a workload type with the Optimized SL. The concept of workload type has been removed from PowerMax
arrays, as the system can handle workload changes without affecting the response time of the chosen SL.

Host View of Storage

• Auto-provisioning Groups are used to allocate storage to hosts


• Thin Devices are presented to hosts
– Open Systems host sees thin devices as FBA SCSI disk drives
– Mainframe host sees thin devices as 3380 or 3390 CKD volumes
• Thin Device – Size Metrics
– Sector – 16 Blocks (512-byte block) – 8 KB
– Track Size – 16 Sectors – 16*8 = 128 KB
– Cylinder Size – 15 Tracks – 15*128 = 1920 KB
– Maximum Device Size – 35791394 Cylinders = 65536 GB = 64 TB


Auto-provisioning groups are used to allocate storage to hosts. PowerMax, VMAX All Flash, and VMAX3
arrays are 100% virtually provisioned and thin devices are presented to the hosts. From the perspective of
an open systems host, the thin device is simply seen as one or more FBA SCSI devices. In the
mainframe, thin devices are seen as CKD 3380 or 3390 volumes. Standard SCSI commands such as SCSI INQUIRY and SCSI READ CAPACITY return low-level physical device data, such as vendor and basic configuration, but provide very limited knowledge of the configuration details of the storage system.

Knowledge of array-specific information, such as director configuration, cache size, number of devices,
mapping of physical-to-logical, port status, flags, and so on, requires a different set of tools. Solutions
Enabler and Unisphere are tools that are used to gather and display array-specific information.

Host I/O operations are managed by the operating environment which runs on the arrays. Thin devices are
presented to the host with the following configuration or emulation attributes:

• Each device has N cylinders. The number is configurable.

• Each cylinder has 15 tracks (heads).

• Each device track in a fixed block architecture (FBA) is 128 KB (256 blocks of 512 bytes each).

• Maximum Thin Device size that can be configured on a VMAX All Flash or VMAX3 array is 35791394 cylinders, or about 64 TB.
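As a quick check of these metrics, the maximum device size follows directly from the geometry, since each cylinder is 15 tracks x 128 KB = 1920 KB:

    35,791,394 cylinders x 1,920 KB/cylinder = 68,719,476,480 KB
                                             ≈ 65,536 GB = 64 TB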

Unisphere device creation requests can be specified in cylinders, MB, GB, or TB. Solutions Enabler
device creation requests can be specified in cylinders, MB, or GB.

Storage Allocation
Auto-provisioning groups are used to allocate storage to hosts:
• Fiber Channel (WWN) or iSCSI Initiator
Initiator Group (IG) • Port Flags set on Initiator Group
 FCID Lockdown per initiator

• Front-end ports
• A port can belong to multiple Port Groups
Port Group (PG) • A Port Group contains either all physical ports (fiber) or all virtual targets (fiber or
iSCSI)
• Ports must have the ACLX flag enabled
• Thin devices
Storage Group (SG) • A device can belong to more than one Storage Group
• Can be associated with SRP, SL and Workload Type

Masking View (MV) • One of each type of group is associated together to form a Masking View


Auto-provisioning Groups are used for device masking on the PowerMax, VMAX All Flash, and VMAX3
Family of arrays.

An Initiator Group contains the world wide name (WWN) or iSCSI name of a host initiator, also known as a
host bus adapter (HBA). An Initiator Group may contain a maximum of 64 initiator addresses or 64 child
initiator group names. Initiator Groups cannot contain a mixture of host initiators and child IG names or
types. Port flags are set on an Initiator Group basis, with one set of port flags applying to all initiators in the
group. However, the FCID lockdown is set on a per-initiator basis. An individual initiator can only belong to
one Initiator Group. However, once the initiator is in a group, the group can be a member in another
Initiator Group. It can be grouped within a group. This feature is called Cascaded Initiator Groups, and is
only allowed to cascade one level.

A Port Group may contain a maximum of 32 front-end ports. Front-end ports may belong to more than one
Port Group. Before a port can be added to a Port Group, the ACLX flag must be enabled on the port. A
Port Group contains either physical ports (fiber) or virtual targets (iSCSI). A mix of port types in a Port
Group is not supported.

Storage Groups can only contain devices or other Storage Groups. No mixing is permitted. A Storage
Group with devices may contain up to four-thousand logical volumes. A logical volume may belong to
more than one Storage Group. There is a limit of sixteen-thousand Storage Groups per PowerMax, VMAX
All Flash, or VMAX3 array.

A parent SG can have up to 64 child Storage Groups. One of each type of group is associated together to
form a Masking View.
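As a minimal SYMCLI sketch of building the three groups and combining them into a Masking View; the array ID (123), group names, WWN, ports, and device range below are illustrative values, not from the course:

    symaccess -sid 123 create -name App1_IG -type initiator -wwn 10000000c9aabbcc
    symaccess -sid 123 create -name App1_PG -type port -dirport 1D:4,2D:4
    symaccess -sid 123 create -name App1_SG -type storage devs 00123:00126
    symaccess -sid 123 create view -name App1_MV -ig App1_IG -pg App1_PG -sg App1_SG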

Managing Configuration and Provisioning

• Execute using Unisphere or SYMCLI


– Unisphere – various wizards and tasks
– SYMCLI
› symconfigure
› symaccess
› symsg
› symdev

• Perform configuration and storage provisioning


– Thin device management – creation, deletion, attribute modification
– Front-end port management – attributes, association
– Array metrics
– Manage Auto-provisioning groups – storage provisioning


Configuration and provisioning are managed with Unisphere for PowerMax or SYMCLI. Unisphere has
numerous wizards and tasks to help achieve various objectives. The symconfigure SYMCLI command
is used for the configuration of thin devices and for port management. The symaccess SYMCLI
command is used to manage Auto-provisioning groups which enables storage allocation to hosts (LUN
Masking). The symsg SYMCLI command is used to manage Storage Groups. Arrays running
PowerMaxOS 5978 or HYPERMAX OS 5977 support the management of devices using the symdev
create, symdev modify, and symdev delete commands.
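For example, a minimal sketch of creating a thin device and adding it to a Storage Group; the array ID, capacity, names, and device number are illustrative:

    symdev -sid 123 create -tdev -emulation fba -cap 50 -captype gb   # one 50 GB thin device
    symsg -sid 123 create ProdApp1_SG                                 # new, empty Storage Group
    symsg -sid 123 -sg ProdApp1_SG add dev 00123                      # add the new device to the group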

Configuration Architecture

[Figure: Configuration Manager architecture – SYMCLI and Unisphere requests pass through SYMAPI and the SIL on the host to an FA on the local array, which routes them over Ethernet to the MMCS running SymmWin scripts; a remote array is reached over the SRDF (RA-to-RA) links to its own MMCS]


The Configuration Manager architecture enables SymmWin scripts to run on the MMCS. Configuration
change requests are generated either by the symconfigure SYMCLI command, or a SYMAPI library call
generated by a user making a request through the Unisphere GUI. These requests are converted by
SYMAPI on the host to syscalls and transmitted to the array through the channel interconnect. The front
end routes the requests to the MMCS, which invokes SymmWin procedures to perform the requested
changes. In the case of SRDF connected arrays, configuration requests can be sent to the remote array
over the SRDF links.

Gatekeeper Devices

• 3-cylinder thin devices (~6 MB)


• Receives low-level SCSI I/O from SYMCLI/GUI
• Used as target of SYMCLI/SYMAPI commands
– Commands are passed through gatekeepers to the array for action
– Locked during the passing of commands
– Lots of commands flowing to the array from many applications on the same host
can cause gatekeeper shortage

• Must be accessible from the host running the commands


• Refer to EMC Knowledgebase solution “Gatekeepers – all you need to know” (000458145) on Dell EMC Support


Solutions Enabler is a Dell EMC software component used to control the storage features of Symmetrix,
VMAX, and PowerMax arrays. It receives user requests from SYMCLI, GUI, or other means, and
generates system commands that are transmitted to the array for action. Gatekeeper devices are LUNs
that act as the target of command requests to array-based functionality. These commands arrive in the
form of disk I/O requests. The more commands that are issued from the host, and the more complex the
actions required by those commands, the more gatekeepers are required to handle those requests in a
timely manner.

When Solutions Enabler successfully obtains a gatekeeper, it locks the device, and then processes the
system commands. Once Solutions Enabler has processed the system commands, it closes and unlocks
the device, freeing it for other processing. A gatekeeper is not intended to store data and is usually
configured as a small three-cylinder device—approximately 6 MB. Gatekeeper devices should be mapped
and masked to single hosts only and should not be shared across hosts. For specific recommendations on
the number of gatekeepers required, see Dell EMC KnowledgeBase solution 000458145 available on the
Dell EMC Support website.
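The roughly 6 MB size quoted for a gatekeeper follows from the device geometry covered earlier: 3 cylinders x 15 tracks x 128 KB per track = 5,760 KB, or about 5.6 MB.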

Configuration Changes – Unisphere

Multiple ways to invoke configuration changes in Unisphere
• Depends on the type of
configuration change
• Unisphere has many wizards
• Configuration tasks can be
submitted to a job list


Configuration changes can be invoked with Unisphere for PowerMax in many different ways. The method
depends on the type of configuration change. Several wizards are available, for example, a Provision
Storage wizard, which is shown here. Configuration requests in Unisphere can be added to a Job List.

Unisphere: Storage Resource Pool – Headroom


The Storage Groups Dashboard in Unisphere for PowerMax shows all the configured Storage Resource
Pools and the available headroom for each SL. Before allocating new storage to a host, it is a good idea to
check the available headroom. To go to the Storage Groups Dashboard, click the Storage Section button.

Unisphere Storage Resource Pool Details


You can also look at the details of the configured Storage Resource Pools to see the details of Usable,
Allocated, and Free capacity. To go to the Storage Resource Pools, click the Storage Resource Pool link
in the Storage section dropdown.

Unisphere: Job List

• List of jobs
  – Not yet run – can be run on demand or scheduled for later execution
  – Jobs that are running, successfully completed, or failed
• Job List can be accessed by clicking:
  – Job List link in the System section dropdown
  – Job List link in the status bar


Most of the configuration tasks in Unisphere can be added to the Job List for execution at a later time. The
Job List shows all the jobs that are yet to be run (Created status), jobs that are running, jobs that have run
successfully, and jobs that have failed. You can go to the Job List by clicking the Job List link in the
Events section dropdown or by clicking the Job List link in the status bar.

Unisphere: Job List Example


An example of a Job List is shown here. In this example, an SRDF director was added to the
configuration. Double-clicking the job displays the job details, shown on the right of the screen.

Configuration Changes Preparation – SYMCLI

• Verify that configuration changes can be made safely:
  – symconfigure verify -sid <SymmID>
• Check usage of the configured Storage Resource Pools:
  – symcfg list -srp -sid <SymmID>
• Consider impact on I/O:
  – To make devices not ready:
    › symdev not_ready <SymDev> -sid <SymmID>
• After allocation/de-allocation of storage to a host, update the host operating system environment before attempting I/O


Before making configuration changes, it is important to know the current array configuration.

Verify that the current array configuration is a viable configuration for host-initiated configuration changes.
The command symconfigure verify -sid <SymmID> returns successfully if the array is ready for
configuration changes.

The capacity usage of the configured Storage Resource Pools can be checked using the command symcfg list -srp -sid <SymmID>.

To understand the impact that a configuration change operation can have on host I/O, check the product
documentation.

After allocating storage to a host, you must update the host operating system environment. Attempting
host activity with a device after it has been removed or altered, but before you have updated the device
information of the host, can cause host errors.
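Taken together, a pre-change session might look like the following sketch; the array ID and device number are illustrative:

    symconfigure verify -sid 123      # confirm the array is ready for configuration changes
    symcfg list -srp -sid 123         # check SRP capacity before provisioning
    symdev not_ready 00123 -sid 123   # quiesce a device before altering or removing it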

SYMCLI – symconfigure Command

• Command options
  – preview, prepare, or commit
• Command execution formats
  – Submit a file containing commands as a parameter to symconfigure
    › File can have multiple commands separated by semicolons
    › Example: symconfigure -sid <SymmID> -file myfile commit
  – Enclose commands within quotes following the -cmd option
    › Example: symconfigure -sid <SymmID> -cmd "delete dev 0015;" commit
  – On UNIX systems, enter the commands via stdin
    › symconfigure -sid <SymmID> -noprompt prepare <<EOF
      delete dev 0015;
      EOF


The symconfigure command has three main options:

• Preview ensures that the command file syntax is correct and verifies the validity of the command file
changes.

• Prepare validates the syntax and correctness of the operations. It also verifies the validity of the
command file changes and their appropriateness for the specified array.

• Commit attempts to apply the changes defined in the command file into the specified array after
running the actions described under prepare and preview.

The symconfigure command can be run in one of the three formats shown here.

The syntax for these commands is described in the Solutions Enabler Array Management CLI User Guide,
available on support.dellemc.com. Multiple changes can be made in one session.
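For instance, a hypothetical command file, myfile.txt, combining two changes (the create dev syntax is standard symconfigure syntax; the device numbers and sizes are illustrative):

    create dev count=2, size=10 GB, emulation=FBA, config=TDEV;
    delete dev 0015;

followed by a preview-then-commit run against an illustrative array ID:

    symconfigure -sid 123 -file myfile.txt preview   # syntax and validity check only
    symconfigure -sid 123 -file myfile.txt commit    # apply the changes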

SYMCLI – Query/Abort Configuration Sessions

• Query
  – symconfigure query -sid <SymmID>
• Abort
  – Configuration change sessions can be terminated prematurely using the abort command
  – Premature termination is only possible before the point of no return
  – symconfigure -sid <SymmID> abort -session_id <SessID>


Configuration change sessions can be viewed using the symconfigure query command. If there are
multiple sessions running, all session details are shown. In rare instances, it might become necessary to
cancel configuration changes. To cancel configuration changes, use the symconfigure abort
command as long as the point of no return has not been reached.

Aborting a change that involves RDF devices in a remote array might necessitate the termination of
changes in a remote array.

Lab: Explore Lab Environment with Unisphere for PowerMax
and SYMCLI
This lab covers:
• Exploring the lab environment with Unisphere for
PowerMax
• Exploring the lab environment with SYMCLI


In this lab, explore the environment with Unisphere and SYMCLI.

Module Summary

Key points covered in this module:

• PowerMax, VMAX All Flash, and VMAX3 arrays

• Key features

• Storage Provisioning concepts

• Available tools for managing PowerMax, VMAX All Flash, and VMAX3 arrays


This module covered an overview of the PowerMax, VMAX All Flash, and VMAX3 Family of arrays. Key
features and storage provisioning concepts were covered, as well as tools for managing the arrays.

Module: Provisioning and Service Levels

Upon completion of this module, you should be able to:

• Provide an overview of Virtual Provisioning

• Explain PowerMaxOS 5978 Service Level implementation

• Describe FAST concepts


This module focuses on Virtual Provisioning, Service Levels, and FAST concepts.

Lesson: Virtual Provisioning and FAST Overview
This lesson covers the following topics:

• Virtual Provisioning overview

• Thin Provisioning elements

• FAST overview and elements

• Service Levels


This lesson provides an overview of Virtual Provisioning, Thin Provisioning elements, FAST elements and
terminology, and Service Levels.

Storage Provisioning

• 100% Virtually Provisioned


– Thin Devices are presented to Hosts

• Arrays are preconfigured


– Disk Groups
– Data/Virtual Provisioning Pools
– Storage Resource Pool(s)
– Service Levels

• Back-end placement of all host-related data is managed by FAST on VMAX3 arrays


PowerMax, VMAX All Flash, and VMAX3 arrays are all 100% virtually provisioned. Arrays are
preconfigured at the factory with the elements for thin provisioning. On VMAX3 arrays, Virtual Provisioning
and FAST work together all the time and there is no way to separate the two. All host-related data is
managed by FAST, starting with allocations made to thin devices and movement of data on the back end
as the workload changes over time.

Virtual Provisioning (Thin Provisioning)
The ability to present a LUN to a compute system with more capacity than what is physically allocated to the LUN.

• Capacity-on-demand from the Storage Resource Pool
  – Physical storage allocated only when the compute system requires it
  – Extent Size – One Track – 128 KB

[Figure: three 10 TB thin devices presented to compute systems, with 3 TB, 4 TB, and 3 TB actually allocated from Data Pools 0 (RAID 5 (3+1)), 1 (RAID 1), and 2 (RAID 6 (6+2)) in the Storage Resource Pool]


One of the biggest challenges for storage administrators is balancing the storage space that is required by
various applications in their data centers. Administrators typically allocate storage space based on
anticipated storage growth to reduce the management overhead and application downtime required to add
new storage later on. This allocation generally results in the overprovisioning of storage capacity, which
leads to higher costs, increased power, cooling, floor space requirements, and lower capacity utilization.
These challenges are addressed by Virtual Provisioning.

Virtual Provisioning is the ability to present a logical unit—Thin LUN—to a compute system, with more
capacity than what is physically allocated to the LUN on the storage array. Physical storage is allocated to
the application on-demand from a shared pool of physical capacity. Virtual Provisioning provides more
efficient utilization of storage by reducing the amount of allocated, but unused physical storage.

The shared storage pool, called the Storage Resource Pool, contains one or more Data Pools with internal
devices that are called Data Devices. When a write is performed to a portion of the thin device, the array
allocates a minimum allotment of physical storage from the pool and maps that storage to a region on the
thin device, including the area targeted by the write. The allocation operation is performed in small units of
storage called virtually provisioned device extents. The virtually provisioned device extent size is one track
(128 KB).

Thin Provisioning

• Preconfigured thin provisioning pools


• Create host-addressable volumes (TDEVs) using
– Unisphere for VMAX/PowerMax
– Solutions Enabler
• Physical storage allocated on host write


PowerMax, VMAX All Flash, and VMAX3 arrays are preconfigured at the factory with thin provisioning
pools ready for use. Create host-addressable thin devices (TDEVs) using Unisphere or Solutions Enabler.
TDEVs can be added to existing Storage Groups. When the host writes to the TDEVs, physical storage is
automatically allocated from the default Storage Resource Pool.
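As a sketch, thin devices can also be created and added to an existing Storage Group in a single symconfigure session. The array ID, device count, sizes, and group name are illustrative, and the sg= and preallocate clauses are assumed from recent Solutions Enabler releases; check the Array Management CLI User Guide for your version:

    symconfigure -sid 123 -cmd "create dev count=4, size=100 GB, emulation=FBA, config=TDEV, sg=ProdApp1_SG;" commit
    symconfigure -sid 123 -cmd "create dev count=1, size=10 GB, emulation=FBA, config=TDEV, preallocate size=ALL;" commit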

Thin Provisioning Components
PowerMax and VMAX All Flash Arrays
[Figure: Thin Provisioning elements on a PowerMax array – Storage Groups VP_ProdApp1 and VP_ProdApp2, associated with Service Levels (Diamond, Platinum, Gold, Silver, Bronze, Optimized); Storage Resource Pool SRP_1 containing Virtual Provisioning Pools 0 through F, RAID 5 (7+1), built on Disk Group DG 0 of 3.84 TB NVMe drives]


The elements related to Thin Provisioning are Disk Groups, Virtual Provisioning Pools, Storage Resource
Pools, Service Levels, and Storage Groups.

Disk Groups, Data Pools with Data Devices (TDATs), Storage Resource Pools, and Service Levels all
come preconfigured on the array and cannot be modified using management software. Solutions Enabler
and Unisphere give the end user visibility to the preconfigured elements, but no modifications are allowed.
Storage Groups are logical collections of thin devices. Storage Groups and thin devices can be
configured—created/deleted/modified, and so on—with Solutions Enabler and Unisphere. In the example
shown here, the array has been configured with one Disk Group, 16 Virtual Provisioning Pools, one
Storage Resource Pool, and the SLs. This is an example using a PowerMax array.

Disk Group and Virtual Provisioning Pool
PowerMax and VMAX All Flash Arrays

• Disk Group
  – Collection of physical disks with same characteristics
    › Capacity
  – Preconfigured with Data Devices (TDATs)
    › Single RAID protection
    › Fixed hyper sizes – minimum 16 hypers per disk
• Virtual Provisioning Pool
  – Many-to-1* relationship with Disk Group
  – All TDATs in disk group added to data pool
  – Performance capability is known

[Figure: Virtual Provisioning Pool 0, RAID 5 (7+1), built on Disk Group DG 0 of 3.84 TB NVMe drives]

*Number of VP Pools to DG is based on the compressibility of the data



A Disk Group is a collection of physical drives sharing physical and performance characteristics. Drives
are grouped based on technology, capacity, form factor, and RAID protection type. PowerMax and VMAX
All Flash arrays support up to 512 internal Disk Groups.

Each Disk Group is automatically configured with data devices (TDATs) upon creation. All the data
devices in the disk group are of a single RAID protection type, and are all the same size. Each drive in the
group has the same amount of hypers, all sized the same. Each drive has a minimum of 16 hypers. Larger
drives may have more hypers.

A Virtual Provisioning Pool is a collection of data devices of the same emulation and RAID protection.
PowerMax and VMAX All Flash arrays support up to 512 Virtual Provisioning Pools. In PowerMax and
VMAX All Flash arrays, based on the compression ratio, PowerMaxOS dynamically creates many pools in
the same disk group. The number is based on the compressibility of the data. There is up to a many-to-1
relationship between the Virtual Provisioning Pool and the Disk Group. The performance capability of each
pool is known and is based on the drive type, speed, capacity, quantity of drives, and RAID protection.

Data devices provide the dedicated physical space that is used by thin devices. Data devices are internal
devices.

Disk Group, Virtual Provisioning Pools, and data devices (TDATs) cannot be modified using management
software. Solutions Enabler and Unisphere for PowerMax give the end user visibility to the preconfigured
elements, but no modifications are allowed.

Storage Resource Pool

• Collection of Virtual Provisioning Pools
  – A virtual provisioning pool can only be in one SRP
• Factory preconfiguration includes one SRP
  – Contains all the configured data pools
• Multi SRP case – One SRP must be marked as the default

[Figure: Storage Resource Pool SRP_1 containing Virtual Provisioning Pool 0, RAID 5 (7+1)]


A Storage Resource Pool (SRP) is a collection of Virtual Provisioning Pools. An SRP can have up to 512
Virtual Provisioning Pools. Individual pools can only be part of one SRP. By default, a single SRP is
configured, which contains all the configured Virtual Provisioning Pools.

When multiple SRPs are configured, one of the SRPs must be marked as the default SRP.

SRP configuration cannot be modified using management software. Solutions Enabler and Unisphere give
the end user visibility into the preconfigured SRPs, but no modifications are allowed.

FAST with HYPERMAX OS 5977
VMAX3 Arrays

• Runs within HYPERMAX OS


• Collects and aggregates performance metrics
• Performs workload forecasting
• Plans and executes data movement
• Provides additional core functionality
– Extent allocation management

• FAST hinting to prioritize mission-critical database processes


Fully Automated Storage Tiering (FAST) is permanently enabled on VMAX3 arrays running HYPERMAX
OS. FAST automates the identification of active or inactive application data to reallocate that data across
different performance/capacity pools within the array. FAST proactively monitors workloads to identify
busy data that would benefit from being moved to higher-performing drives. FAST also identifies less-busy
data that could be moved to higher-capacity drives, without affecting existing performance. As mentioned
previously, because VMAX All Flash arrays contain only the highest performing drives and therefore use
the Diamond Service Level, data movement with FAST will not take place. However, when attaching an
external array with FAST.X, FAST sees this storage and uses it accordingly for data movement.

VMAX3 arrays are 100% virtually provisioned so FAST on HYPERMAX OS operates on thin devices. Data
movements can be performed at the sub-LUN level. A single thin device may have extents that are
allocated across multiple data pools within the Storage Resource Pool.

FAST collects and analyzes performance metrics and controls all the data movement within the array.
Data movement is determined by forecasting future system I/O workload, based on past performance
patterns, eliminating any user intervention. FAST provides additional core functionality of extent allocation
management.

FAST hinting provides users a way to accelerate mission critical processes based on business priority and
Service Level. FAST hinting is application-aware and uses the intelligence of Dell EMC Database Storage
Analyzer and Performance Analyzer to monitor the read/write status of the current workload. FAST sends
hints to the array for data that is likely to be accessed in a given period of time. The IT administrator first
creates FAST hint profiles. The profiles are given a priority and scheduled one-off, on-going or on a
recurring frequency—daily, weekly or monthly—along with an expected execution duration. Hints are
provided in the hinting tab in Dell EMC Database Storage Analyzer interface in Unisphere. FAST hinting is
only supported on hybrid—multiple drive type—VMAX3 arrays running HYPERMAX OS 5977.

VMAX3 FAST Provisioning Components
[Figure: VMAX3 FAST provisioning elements – Storage Groups VP_ProdApp1 and VP_ProdApp2, associated with Service Levels (Diamond, Platinum, Gold, Silver, Bronze, Optimized); Storage Resource Pool SRP_1 containing four Data Pools: Pool 0, RAID 5 (7+1), on DG 0 (200 GB eMLC); Pool 1, RAID 1, on DG 1 (300 GB 15K); Pool 2, RAID 5 (3+1), on DG 2 (600 GB 10K); Pool 3, RAID 6 (6+2), on DG 3 (4 TB 7.2K)]


The elements that are related to FAST and Service Level Provisioning are Disk Groups, Data Pools,
Storage Resource Pools, Service Levels, and Storage Groups.

Disk groups, Data Pools with Data Devices (TDATs), Storage Resource Pools, and Service Levels all
come preconfigured on the array and cannot be modified using management software. Solutions Enabler
and Unisphere for VMAX/PowerMax give the end user visibility to the preconfigured elements, but no
modifications are allowed. Storage Groups are logical collections of thin devices. Storage Groups and thin
devices can be configured—created/deleted/modified, and so on—with Solutions Enabler and Unisphere.
Storage Group definitions are shared between FAST and auto-provisioning groups.

In the example shown here, the array has been configured with four Disk Groups, four Data Pools, one
Storage Resource Pool, and the SLs. This is an example using a VMAX3 array.

VMAX3 Disk Group and Data Pool

• Disk Group
  – Collection of physical disks with same characteristics
    › Rotational Speed for HDDs or Flash
    › Capacity
  – Preconfigured with Data Devices (TDATs)
    › Single RAID protection
    › Fixed hyper sizes – minimum 16 hypers per disk
• Data Pool
  – 1-to-1 relationship with Disk Group
  – All TDATs in disk group added to data pool
  – Performance capability is known

[Figure: Data Pool 0, RAID 5 (7+1), built on Disk Group DG 0 of 200 GB eMLC drives]


A Disk Group is a collection of physical drives sharing physical and performance characteristics. Drives
are grouped based on technologies, rotational speed (or Flash), capacity, form factor, and desired RAID
protection type. VMAX3 arrays support up to 512 internal Disk Groups.

Each Disk Group is automatically configured with data devices (TDATs) upon creation. All the data
devices in the disk group are of a single RAID protection type, and are all the same size. Each drive in the
group has the same amount of hypers, all sized the same. Each drive has a minimum of 16 hypers. Larger
drives may have more hypers.

A Data Pool is a collection of data devices of the same emulation and RAID protection. VMAX3 arrays
support up to 512 data pools. All data devices that are configured in a single physical disk group are
contained in a single data pool. There is 1-to-1 relationship between Disk Groups and Data Pools. The
performance capability of each Data Pool is known and is based on the drive type, speed, capacity,
quantity of drives, and RAID protection.

Data devices provide the dedicated physical space that is used by thin devices. Data devices are internal
devices.

Disk Group, Data Pools, and data devices (TDATs) cannot be modified using management software.
Solutions Enabler and Unisphere give the end user visibility to the preconfigured elements, but no
modifications are allowed.

VMAX3 Storage Resource Pool

• Collection of Data Pools
  – Constitutes a FAST domain
  – A data pool can only be in one SRP
• Factory preconfiguration includes one SRP
  – Contains all the configured data pools
• Multi SRP case – One SRP must be marked as the default

[Figure: Storage Resource Pool SRP_1 containing Data Pools 0 through 3 – Pool 0: RAID 5 (7+1); Pool 1: RAID 1; Pool 2: RAID 5 (3+1); Pool 3: RAID 6 (6+2)]


A Storage Resource Pool (SRP) is a collection of data pools and makes up a FAST domain. Data
movement performed by FAST is done within the boundaries of the SRP. An SRP can have up to 512
Data Pools. Individual Data Pools can only be part of one SRP. By default, a single SRP is configured,
which contains all the configured Data Pools.

Application data belonging to thin devices can be distributed across all data pools within the SRP to which
it is associated. When moving data between data pools, FAST differentiates the performance capabilities
of the pools based on RAID protection and rotational speed—if applicable.

When multiple SRPs are configured, one of the SRPs must be marked as the default SRP.

SRP configuration cannot be modified using management software. Solutions Enabler and Unisphere give
the end user visibility into the preconfigured SRPs, but no modifications are allowed.

Display SRP Details – Unisphere


Configured SRPs can be displayed in Unisphere for PowerMax under Storage > Storage Resource
Pools by double-clicking the SRP. Details are shown on the right.

Display SRP Details – SYMCLI

symcfg list -srp -v -sid <SymmID>


Configured SRPs can also be displayed with SYMCLI using the symcfg list -srp -v -sid
<SymmID> command.

Service Level (SL)

Predefined SLs
Diamond Platinum Gold Silver Bronze Optimized*

• Defines the expected average response time target for a Storage Group
– Desired SL is set on a Storage Group
– SL can be combined with a Workload Type on VMAX3 and VMAX All Flash arrays to refine performance objective
  › OLTP
  › DSS
  › OLTP with replication
  › DSS with replication

• Response time relates to the front-end adapter

*Optimized is the default SL


A Service Level (SL) defines an expected average response time target for an application. Associating an SL with an application (Storage Group) makes the performance of the SG predictable. On VMAX3 arrays,
FAST automatically monitors the performance of the application and adjusts the distribution of extent
allocations within an SRP to maintain or meet the response time target. When combined with a Workload
Type on VMAX All Flash or VMAX3 arrays, performance objectives can be refined to fit an application.
Both small-block (OLTP) and large-block (DSS) Workload Types are available, and each can include local
or remote replication, if chosen.

Display Available SLs


The available SLs can be displayed in Unisphere for PowerMax. The display also shows the expected
average response times. Shown here are the available SLs and response times for a PowerMax array.

Display Available Workload Types


To view available Workload Types on VMAX All Flash and VMAX3 arrays, select Storage > Service
Levels to open the Service Levels view. Select the desired service level—Diamond, Platinum, Gold,
Silver, Bronze, or Optimized.

The Workload types are used to refine the service level—that is, narrow the latency range. Possible
values are OLTP or DSS. OLTP workload is focused on optimizing performance for small block I/O, and
DSS workload is focused on optimizing performance for large block I/O. The Workload Type can also
specify whether to account for any overhead associated with replication—OLTP + Rep and DSS + Rep.

Display Available SLs – SYMCLI
symcfg list -sl -detail -sid <SymmID>

[Screenshots: command output for a VMAX3 100K array and for a PowerMax array]


The available Service Levels can be displayed with SYMCLI using the symcfg list -sl -detail -sid <SymmID> command. The display also shows the expected average response times. Notice that
there is no workload that is associated with the Service Levels on PowerMax arrays, shown on the right.
As mentioned previously, the concept of workload has been removed with PowerMax arrays. PowerMax
arrays are capable of handling changes in workloads without affecting the SLs or response times.

I/O Mix


Clicking the icon next to the Write % on a Workload Type displays the I/O Mixture for the workload.

Workload Skew


Clicking the icon next to the Skew % on a Workload Type displays the calculated skew density score for
the disks in the Storage Group as a percentage of the expected values of the Storage Group.

Storage Groups

• Logical collection of thin devices


– Used for LUN masking and/or FAST

• Can be explicitly associated with an SRP


– By default an SG is associated with the default SRP

• Can be explicitly associated with an SL and Workload Type where applicable


– By default SGs are managed by the Optimized SL

• If explicitly associated with SRP or SL or both, an SG is considered FAST managed

[Figure: Storage Group VP_ProdApp1, associated with SRP_1 and one of the Service Levels: Diamond, Platinum, Gold, Silver, Bronze, or Optimized]

A Storage Group (SG) is a logical collection of thin devices that are managed together. Typically, they
constitute the devices that are used for a single application. Storage Group definitions are shared between
FAST and auto-provisioning groups (LUN masking).

A Storage Group can be explicitly associated with an SRP or an SL or both. Associating an SG with an
SRP defines the physical storage on which data in the SG can be allocated. The association of the SL and
Workload Type defines the response time target for that data. By default, devices within an SG are
associated with the default SRP and are managed by the Optimized SL. Changing the SRP association on
an SG results in all the data being migrated to the new SRP.

While all the data on a VMAX3 array is managed by FAST, an SG is not considered FAST managed if it is
not explicitly associated with an SRP or an SL. Devices may be in more than one SG, but may only be in
one SG that is FAST managed. This limitation ensures that a single device is not managed by more than
one SL or have data that is allocated from more than one SRP.

Note the concept of Cascading Storage Groups, wherein a Parent Storage Group has Child Storage
Groups as members. Child SGs have thin devices as members. In the case of Cascading Storage
Groups, FAST associations are done at the Child SG level.

Thin Device Considerations

• Upon creation
– By default associated with default SRP and the Optimized SL
– Device is automatically in the ready state

• Devices could be added to an existing SG during creation


– Device inherits SRP and SL from SG

• No extents allocated when device is created


– Extents allocated as a result of host write or preallocation request

• A thin device may only be in one SG that is FAST managed


– Device could be in one FAST managed SG and in other non FAST managed SGs


When a thin device is created, it is implicitly associated with the default SRP and is managed by the
Optimized SL. As a result of being associated with the default SRP, thin devices are automatically in a
ready state upon creation.

During the creation of thin devices, you could optionally add them to an existing Storage Group. The thin
device then inherits the SRP and SL set on the SG.

No extents are allocated during the thin device creation. Extents are allocated only as a result of a host
write to the thin device or a preallocation request.

Devices may be in more than one SG, but may only be in one SG that is FAST managed. This limitation ensures that a single device is not managed by more than one SL and does not have data allocated from more than one SRP. Trying to include the same device in a second FAST managed SG results in the following error:

“A device cannot belong to more than one Storage Group in use by FAST.”

Lesson: PowerMaxOS 5978 Service Levels
This lesson covers the following topics:

• Service Levels on PowerMax and VMAX All Flash arrays running PowerMaxOS 5978

• Service Level implementation

• Managing Service Levels

• Service Level usage examples


This lesson covers Service Levels on PowerMax and VMAX All Flash arrays running PowerMaxOS 5978.
It includes a description of SL implementation, management of SLs, and examples of usage.

PowerMax and VMAX All Flash Service Levels

• Quality-of-service controls for individual Storage Groups


– Ensure that applications have consistent and predictable performance

• Performance ceiling
– All Service Levels (except Optimized)
– Maximum response time
– Lower priority workloads see elongated response times

• Performance floor
– Gold, Silver, and Bronze Service Levels only
– Minimum response time


PowerMax and VMAX All Flash systems running PowerMaxOS 5978 use response time management to set performance expectations for applications. With a single tier of all flash storage, data is stored on high-speed flash storage without the need to move any data for better performance. Access is managed per Storage Group with a specified Service Level. All Service Levels except Optimized have a ceiling, which defines the longest acceptable time for any I/O operation on the SG to complete. Gold, Silver, and Bronze Service Levels also have a performance floor, which defines the shortest time in which any I/O operation on the SG completes. The Optimized Service Level is exempt from ceiling and floor limits.

Response Time Management

• Throttling of lower-priority SGs


– Diamond throttles Platinum, Gold, Silver, and Bronze SGs
– Platinum throttles Gold, Silver, and Bronze SGs
– Gold throttles Silver and Bronze SGs
– Silver throttles Bronze SGs
– Optimized is exempt from throttling
› Response time may degrade as the system load increases

• Front-end queue management


• Real-time machine learning
– Models workload characteristics
– Predictive function anticipates workload demands for an SG


PowerMaxOS is continually monitoring the system to ensure that lower-priority applications are not
disruptive to higher-priority applications. If the response time of a higher-priority application approaches
the upper limit of its selected SL, the system begins to manage any lower-priority applications. If a Storage
Group begins to exceed the boundaries of the Service Level, the system ensures that lower priority SGs
are throttled using front-end queue management. The Optimized SL is exempt from throttling; however, response times may degrade as the load on the system increases.

PowerMaxOS uses real-time machine learning to model workload characteristics. These models provide a
predictive function that allows PowerMaxOS to anticipate workload demands for SGs. With these
anticipated workload demands, the system adapts as necessary to changes in block size, write ratio, or
I/O load.

Availability of Service Levels

Storage Groups/Applications
• Containing Open Systems FBA devices
– All Service Levels available

• Containing mainframe CKD devices


– Diamond, Bronze, and Optimized only


All six Service Levels are available for SGs containing Open Systems FBA devices. For SGs containing mainframe CKD devices, only the Diamond, Bronze, and Optimized Service Levels are available.

Create Storage Group with Service Level
Unisphere for PowerMax

Storage Group
Dashboard

Service Levels
Dashboard


Create a Storage Group with a Service Level in Unisphere through either the Storage Group Dashboard or
the Service Level Dashboard. From the Storage Groups Dashboard, click Create to open the Provision
Storage wizard. From the Service Levels Dashboard, select the Service Level and click Provision to
open the wizard.

Renaming Service Levels – Unisphere


Renaming Service Levels with Unisphere for PowerMax is a simple process. From the Service Levels
Dashboard, hover to the right of the Service Level name and click the pencil icon. Enter the new SL name
and click the checkmark to save the change. In this example, the Diamond Service Level name has been
changed to CriticalApps.

Change SL of SG – Unisphere


To change the Service Level of a Storage Group in Unisphere for PowerMax, select the SG from the
Storage Groups Dashboard. Click the Modify button, and change the Service Level to the desired level
using the Service Level dropdown selection.
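
For reference, the same change can be sketched in Solutions Enabler (the set -slo flag is an assumption here; confirm the exact flag with symsg help on your Solutions Enabler version):

symsg -sid 217 -sg DemoSG set -slo Platinum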

Create Storage Group with Service Level
Solutions Enabler and RestAPI

Solutions Enabler: add the –sl flag

Example:
symsg create DemoSG –sid <SymmID> -sl <SLName>

RestAPI: sloId = Service Level name


Creating a Storage Group with a Service Level in Solutions Enabler requires using the –sl flag, as seen
in the example. The SG DemoSG was created with the Gold Service Level.

With RestAPI, input the Service Level name in the “sloId” field. With both Solutions Enabler and RestAPI, if a Service Level is not selected, it is set to “None”.
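
For illustration, a minimal RestAPI request sketch (assuming the Unisphere for PowerMax 9.0 sloprovisioning endpoint; verify the field names against the REST API documentation for your release):

POST /univmax/restapi/90/sloprovisioning/symmetrix/<SymmID>/storagegroup
{
  "storageGroupId": "DemoSG",
  "srpId": "SRP_1",
  "sloBasedStorageGroupParam": [{
    "sloId": "Gold",
    "num_of_vols": 1,
    "volumeAttribute": { "volume_size": "10", "capacityUnit": "GB" }
  }]
}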

Compliance – Unisphere


Information on an SG that is out of Service Level compliance can be found in the Storage Groups Dashboard. Double-click the Storage Group, and choose the Compliance tab for detailed information. In this example, a significant spike in activity happened on Thursday. Further investigation of this SG and its activity can be done using the Performance tab.

Service Level Usage Examples

Critical Applications
• Use Diamond SL to protect critical applications requiring the fastest response time
• Use lower SLs for other applications
  – For example, assign Diamond SL to the mission-critical application and Bronze to less-critical applications, such as batch jobs

Service Providers
• Ability to provide different levels of performance within the array
• Introduces delays to establish a range of performance
  – For example, using Silver and Bronze in PowerMax and VMAX All Flash introduces delays, enabling a less expensive tier of storage


Service Levels in PowerMax and VMAX All Flash arrays have many valuable uses. Mission-critical
applications that require almost immediate response times are not jeopardized by other applications in the
arrays when assigning proper SLs to the applications. For Service Providers, SLs provide the ability to
have different levels of performance within the array. Using lower SLs, such as Silver or Bronze,
introduces delays so applications are not seeing the high level of performance normally seen with all flash
storage.

Lesson: VMAX3 FAST Algorithms and Parameters
This lesson covers the following topics:

• FAST implementation

• FAST metrics and algorithms

• Best practice recommendations


This lesson covers VMAX3 FAST implementation, metrics and algorithms, and best practice
recommendations.

FAST Runtime Implementation

• Deliver defined storage services
  – Based on mixed drive configuration

• Balance capability of storage resources with SG SLs

• Data movements
  – Determined by forecasting future system workload
    › Based on observed workload

• Runtime tasks are performed continuously

[Figure: Primary runtime tasks shown as a continuous cycle: collect and aggregate performance metrics, monitor workload on disk groups, monitor Storage Group performance, and execute required data movements.]


The goal of FAST is to deliver defined storage services, namely application performance based on SLs,
based on a hybrid storage array containing a mixed configuration of drive technologies and capacities.
Based on the configuration of the array, FAST balances the capabilities of the storage resources, primarily
the physical drives, against the performance objectives of the applications consuming storage on the
array. FAST aims to maintain a level of performance for an application that is within the allowable
response time range of the associated SL while understanding the capabilities of each disk group within the SRP.

Data movements that are performed by FAST are determined by forecasting the future system workload
at both the disk group and application level. The forecasting is based on the observed workload patterns.

The primary runtime tasks of FAST are:


• Collect and aggregate performance metrics
• Monitor workload on each disk group
– Identify extent groups to be moved to reduce load if necessary
• Monitor Storage Group performance
– Identify extent groups to be moved to meet SL
• Execute required data movements

All the runtime tasks are performed continuously. Performance metrics are constantly being collected and
analyzed and data is being relocated within an SRP to meet application SLs.

Performance Metrics: Collection Levels

• Metrics collected at three levels


– Disk Group
– Storage Group
– Thin Device sub-LUN
• Thin Device sub-LUN – Regions

  Region            Size
  ----------------  --------------------------------------------
  Extent            One track – 128 KB
  Extent Group      42 extents – 5.25 MB
  Extent Group Set  42 extent groups – 1764 extents – 220.5 MB

  – Data movement requests are made per extent group – 42 tracks


Performance Metrics are collected at the Disk Group, Storage Group, and Thin Device sub-LUN levels. At
the sub-LUN level, each thin device is broken up into multiple regions – extents, extent groups, and extent
group sets.

Each thin device is made up of multiple extent group sets which, in turn, contain multiple extent groups. Each extent group is made up of 42 contiguous thin device extents, and each thin device extent is a single track (128 KB). Thus an extent group is 42 tracks and an extent group set is 1764 tracks.

Metrics that are collected at each sub-LUN level enable FAST to make separate data movement requests
for each extent group for the device – 42 tracks.
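
As a quick check of these sizes: 42 tracks × 128 KB per track = 5376 KB = 5.25 MB per extent group, and 42 extent groups × 5.25 MB = 220.5 MB per extent group set.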

Performance Metrics

• Read misses
• Writes
• Prefetch (sequential reads)
• Cache hits
• I/O size
– Tracked separately for reads and writes

• Workload clustering
– Based on read-to-write ratio of workloads on specific LBA ranges


The read miss metric accounts for each DA read operation that is performed. That is, data is read from a
thin device that was not previously in cache and so needs to be read directly from a drive within the SRP.

Write operations are counted in terms of the number of distinct DA operations that are performed. The
metric accounts for when writes are destaged.

Prefetch operations are accounted for in terms of the number of distinct DA operations that are performed
to prefetch data spanning a FAST extent. This metric considers each DA read operation that is performed
as a prefetch operation.

Cache hits, both read and write, are counted in terms of the impact such activity has on the front-end
response time that is experienced for such a workload.

The average size of each I/O is tracked separately for both read and write workloads.

Workload clustering refers to the monitoring of the read-to-write ratio of workloads on specific logical block
address (LBA) ranges of a thin device or data device within a pool.

Data Movement Algorithms

• Capacity
– SRP capacity compliance
› Ensures that data is on correct SRP
– SL capacity compliance
› Ensures that data is on appropriate drive types within SRP

• Performance
– Disk resource protection
– SL response time compliance
– Both use performance metrics to determine appropriate data pool to allocate data
› Prevent overloading of a particular disk group
› Maintain the response time objective of an application


FAST uses four distinct algorithms as listed on the slide to determine the appropriate allocation for data
across an SRP. Two are capacity-oriented and the other two are performance-oriented.

The SRP and SL capacity compliance algorithms are used to ensure that data belonging to specific
applications is allocated to the correct SRP and across the appropriate drive types within an SRP,
respectively.

The disk resource protection and SL response time compliance algorithms consider the performance metrics collected. These metrics determine the appropriate data pool in which to allocate data, both to prevent the overloading of a particular disk group and to maintain the response time objective of an application.

SL Selection – Recommendations

• Applications being migrated


– Use existing performance information
› Average response time
› Average I/O size
– Translate existing performance information to SL and Workload Type

• If little or no information about the application is available, Dell EMC


recommends using Optimized Service Level


The more information that is available for the applications being provisioned on the array, the easier it is to
select an appropriate SL for each application. Applications that are being migrated from older storage
should have performance information available, including average response time and average I/O size.
This information can be simply translated to an SL and Workload Type combination, setting the
performance expectation for the application and a target for FAST to accomplish. If little is known about
the application, use the Optimized Service Level.

Storage Group Recommendations

• Configure SG for each application


– Provides most granular management
– Associate SL and Workload Type
– FAST can manage to the response time target for the application

• Use Cascaded Storage Groups


– If different devices types in the same application require different SLs

• Non-disruptive device movement between SGs


To provide the most granular management of applications, Dell EMC recommends that each application is
placed in its own SG to be associated to an SL. This recommendation provides for more equitable
management of data pool utilization and ensures FAST can manage to the response time target for the
individual application.

In some cases there may be a need to separately manage different device types within a single
application. For example, it may be desired to apply different SLs to the redo log devices versus the data
file devices within the same database. The use of cascaded Storage Groups is recommended in this case.
Cascaded Storage Groups allow devices to be placed in separate child SGs, which can then be placed in
the same Parent SG. Each child SG can be associated with a different SL, while the Parent SG is used in
the masking view to provision devices to the host.

Depending on requirements, it may be necessary to change the SL of an individual device, which requires
moving the device to another SG. Device movement between SGs with different SLs is allowed and may
be performed non-disruptively to the host if the movement does not result in a change to the masking
information for the device being moved. That means, following the move, the device is still visible to the
exact same host initiators on the same front-end ports as before the move. Devices may also be moved between Child SGs that share the same parent, where the masking view is applied to the parent group.

Module Summary

Key points covered in this module:

• Overview of Virtual Provisioning

• Service Level implementation and management

• FAST concepts


This module covered Virtual Provisioning, Service Levels, and FAST concepts.

Module: Device and Port Management

Upon completion of this module, you should be able to:

• Create, delete, and expand Thin Devices

• Manage port attributes and associations


This module focuses on device and port management.

Lesson: Device Management
This lesson covers the following topics:

• Device types

• Creation, deletion, and expansion of devices


This lesson covers device types and the creation, deletion, and expansion of devices.

Device Types

• Devices that can be managed with management software


– Thin Devices (TDEV)
› Thin Gatekeeper Devices
– Thin BCV Devices (BCV+TDEV)
– SRDF Thin Devices (RDF1 or RDF2)

• Preconfigured devices – cannot be managed with management software


– Data devices
– Internal Thin Devices (Int+TDEV)
› Used by Data Services Hypervisor VMs
o Tools VMs, eManagement VMs, and eNAS Control Station VMs


Solutions Enabler (SE) and Unisphere can be used to create and delete thin devices. Thin gatekeeper
devices are thin devices which have a capacity of three cylinders—approximately 6 MB. Thin BCV devices
and thin SRDF devices can also be managed with SE and Unisphere. The arrays come with factory
preconfigured devices which cannot be managed with SE and Unisphere. They are the data devices that
are used in the data pools and internal thin devices which are used by Data Services Hypervisor VMs.

On PowerMax, VMAX All Flash, and VMAX3 arrays, the operating system provides a data services
hypervisor running natively. The Data Services Hypervisor provides storage infrastructure services
through virtual machines running on the embedded hypervisor. Storage to these virtual machines is
provided by the internal thin devices.

This lesson focuses on the creation and deletion of thin devices and thin gatekeeper devices. SRDF thin
devices and thin BCV devices are covered in other Dell EMC training courses.

Thin Device Attributes

• Set at or after device creation


– SCSI3_persist_reserve – Enabled by default

• Dynamic RDF capable


– No specific attribute needs to be set

Name                   Used by
---------------------  -------------------------------------
SCSI3_persist_reserve  UNIX and Windows cluster software
DIF1                   Oracle, to ensure data integrity
AS400_GK               IBM AS400 host control software (STM)


The attributes that are listed here can be set on thin devices at or after device creation.

The SCSI-3 persistent reservation attribute, sometimes called the PER bit, is used by various UNIX and
Windows cluster software products. It is enabled by default.

The Data Integrity Field (DIF) is a setting on a device that is relevant to an Oracle environment and all
hosts that support the DIF protocol. Oracle objects that are built on devices that have the DIF attribute
send 520-byte Command Descriptor Blocks (CDBs) rather than the normal 512-byte CDBs. The extra 8
bytes are a form of checksum that validates the 512 bytes of data. When the array receives a CDB on a
device that has the DIF attribute, it validates the Oracle data and honors the write request. If the checksum and the data do not match, it rejects the write request. The DIF standard has several versions; PowerMaxOS and HYPERMAX OS support the DIF1 format.

The AS400_GK attribute on a PowerMax, VMAX All Flash, or VMAX3 thin device is required when an
AS400 device is used with IBM Server Task Manager (STM) host control software. This attribute is also
used with the Celerra NAS for Celerra gatekeeper devices.

All PowerMax, VMAX All Flash, and VMAX3 thin devices are dynamic RDF capable by default. No specific
attribute needs to be set.

Thin Device Creation – SYMCLI (1 of 3)

• symconfigure syntax with commonly used options


create dev count=<n>, size=<size> [MB | GB | CYL],
   emulation=<Emulation Type>,
   config=TDEV
   [, sg=<SG Name>];
– Default size unit is cylinders
– Emulation type
› FBA, Celerra_FBA or AS/400_D910_099
– Storage group name can be optionally specified
› Storage group must exist


The symconfigure syntax for the creation of devices is shown here with the most commonly used
options. For a complete list of options, see the latest Solutions Enabler Array Management CLI User
Guide on support.dellemc.com.

Arrays running PowerMaxOS 5978 or HYPERMAX OS 5977 enable the creation of externally visible
(TDEV) devices up to 64 TB. As discussed previously, the symconfigure command syntax can be
submitted using the –file or –cmd options.

The count indicates the number of devices to be created. The size can be specified in megabytes (MB),
gigabytes (GB), or in cylinders (CYL). Cylinders is the default. The supported emulation types are FBA,
Celerra_FBA, and AS/400_D910_099. Celerra_FBA emulation is used for the eNAS solution. The device
configuration type for thin devices is TDEV.

To create BCV thin devices, set config to BCV+TDEV. You can optionally add the newly created devices
to a Storage Group by specifying the name of an existing Storage Group.

Thin Device Creation – SYMCLI (2 of 3)

• Other options with create dev


– preallocate size = <ALL>, allocate_type = <PERSISTENT>
› Only valid preallocate value is ALL - fully preallocates thin device
› Allocate type of PERSISTENT does a persistent preallocation
– device_attr = <Attribute Type>
› SCSI3_persist_reserve, DIF1, AS400_GK
– device_name = <Device Name>, number = <n | SYMDEV>
› User supplied name for device and suffix


You can choose to preallocate space for the thin devices in the Storage Resource Pool. The only valid
option for the preallocation size is ALL. The entire device is preallocated. You can set the allocation type
to PERSISTENT. Persistent allocations are unaffected by any reclaim operations. The default preallocation
is nonpersistent.

You can optionally set device attributes previously discussed.

Users can assign a friendly device name to a device at the time of creation. You can use the same name
for all the new devices, for example, mydev, or you can assign the device a name and a numerical suffix
that is incremented for each device. The name plus the suffix may not exceed 64 characters. Setting the
number parameter to SYMDEV means that the corresponding device number is used as the suffix.
Solutions Enabler does not check for the uniqueness of names.
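
Combining the options above, a sketch of a symconfigure command file entry (the device name, count, and attribute chosen here are illustrative):

create dev count=2, size=10GB, emulation=FBA, config=TDEV,
   preallocate size=ALL, allocate_type=PERSISTENT,
   device_attr=SCSI3_persist_reserve,
   device_name=appdev, number=SYMDEV;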

Thin Device Creation – SYMCLI (3 of 3)

• symdev syntax
symdev create –tdev –cap <#> [-captype <cyl|mb|gb|tb>] [-bcv] [-N <#>]
– Default size unit is cylinders
– Default emulation type is FBA


Also, you can use the symdev syntax for the creation of devices as shown on the slide. See the latest
Solutions Enabler Array Management CLI User Guide for more details. The size can be specified in
cylinders (CYL), megabytes (MB), gigabytes (GB), or in terabytes (TB). Cylinders is the default.

Arrays running PowerMaxOS 5978 or HYPERMAX OS 5977 enable the creation of externally visible
(TDEV) devices up to 64 TB. The device configuration type for thin devices is TDEV. To create BCV thin
devices, use the option -bcv. The option –N sets the number of devices to create.

Thin Device Creation – symconfigure Examples

• Create (5) 10 GB FBA emulation thin devices and add to


storage group dbserver1_oracle
– create dev count=5, size =10GB, emulation=FBA,
config=TDEV, sg=dbserver1_oracle;

• Create (10) 10 GB FBA emulation thin devices with user specified names
– create dev count=10, size=10GB, emulation=FBA,
config=TDEV, device_name=mydev, number=1001;
– First device called mydev1001, and the last device called mydev1010
› Can be displayed with - symdev list –identifier device_name


Shown here are two examples of thin device creation using the symconfigure command.

The device creation request can be placed in a command file, for example, “myfile”, and the following
syntax can be used to commit the change:

symconfigure –sid <SymmID> –file myfile commit

To display user-defined names on devices, use the symdev list –identifier device_name –sid
<SymmID> command:

symdev list -identifier device_name -sid 217

Symmetrix ID: 000197600217

Device
-------------------------------------------------------------------------------
Sym   Config          Attr Device Name
----- --------------- ---- ----------------------------------------------------
0008F TDEV mydev1001
00090 TDEV mydev1002
00091 TDEV mydev1003
00092 TDEV mydev1004…

Thin Device Creation – symdev Examples

• Create (1) 10 GB FBA emulation thin devices


– symdev –sid 217 create –tdev –cap 10 –captype gb

• Create (5) 10 GB FBA emulation BCV thin devices


– symdev –sid 217 create –tdev –cap 10 –captype gb –bcv –N 5


Shown here are two examples of thin device creation using the symdev command.

To display all nonprivate Symmetrix devices that are configured in one or more Symmetrix arrays that are
connected to this host, use the symdev list command:

symdev list -bcv

Symmetrix ID: 000197600217

Device Name                  Dir     Device
---------------------------- ------- --------------------------------------
                                                                        Cap
Sym   Physical               SA :P   Config     Attribute    Sts       (MB)
----- ---------------------- ------- ---------- ------------ --- ---------
000C5 Not Visible            ???:??? BCV+TDEV   N/Asst'd     RW      10241

Gatekeeper Creation – SYMCLI (1 of 2)

• symconfigure syntax
create gatekeeper count=<n>, emulation=<Emulation Type>;
– Equivalent to
› create dev count=<n>, size=3 CYL,
emulation=<Emulation Type>, config=TDEV;
– Emulation type
› The AS400_GK attribute is automatically set for the Celerra_FBA and AS/400_D910_099 emulations
– Gatekeeper devices are thin devices

• Example – Create (6) FBA gatekeeper devices


– create gatekeeper count=6, emulation=FBA;


The symconfigure create gatekeeper command results in the creation of thin devices with a
capacity of three cylinders—approximately 6 MB. The create gatekeeper command is equivalent to
the create dev command with the size that is specified to three cylinders and the configuration set to
TDEV. The create gatekeeper command automatically sets the AS400_GK attribute for the
AS/400_D910_099 and Celerra_FBA emulations.

Gatekeeper Creation – SYMCLI (2 of 2)

• symdev syntax
– Default emulation type is FBA

• Example: Create (6) FBA gatekeeper devices


– symdev create –gk –N 6 –sid 217


Also, you can use the symdev command to create a gatekeeper. The –gk option creates the gatekeeper
devices with the proper size.

The –gk option automatically sets the FBA emulation.

Thin Device Creation – Unisphere for PowerMax

• Create Volumes Wizard


– Used to create Thin Devices and
Gatekeeper devices
– Launched by
› Create button in Volumes listing under
Storage

• Storage Groups
– Create New Storage Group Wizard includes
selecting hosts and ports
• Service Levels
– Provision Wizard includes selecting hosts
and ports


Thin devices and gatekeeper devices can be created in Unisphere for PowerMax by using the Create
Volumes wizard. Devices can also be created from the Storage Group wizard or the Provision wizard
under Service Levels.

This lesson focuses on the Create Volumes wizard. The Create Volumes wizard is launched from the
Volumes selection under the Storage dropdown.

Create Volume – Thin Devices

Optionally add
devices to an
existing Storage
Group


On PowerMax, VMAX All Flash, and VMAX3 arrays, the Create Volume wizard only supports the creation
of thin devices (TDEV), thin BCV devices (BCV+TDEV), or thin gatekeepers (Virtual Gatekeeper).

Configuration: To create thin devices, select TDEV in the Configuration drop-down selector.

Emulation: FBA is the default. You can also create CELERRA_FBA or AS/400_D910_099 emulation
devices.

Number of Volumes: Type in the required number of devices.

Volume Capacity: You can type in the required capacity of the devices or use the capacity field drop-down
to pick an existing device size. You can specify the capacity units in Cyl, MB, or GB.

Add to Storage Group: This entry is an optional field. You can choose to select an existing Storage Group to which the newly created devices will be added. To choose an existing Storage Group, click the down arrow.

The Advanced Options button enables you to optionally give the new devices a Volume Identifier. The
Volume Identifier is equivalent to the device name and number that is specified in SYMCLI. Also, users
can choose to enable Mobility ID, which enables FBA device mobility by ensuring the WWN is unique, and
to allocate the full volume capacity. After specifying the requirements, add the job to the job list by using the Add to Job List option. Alternatively, you can run the job immediately using the Run Now option from the Add to Job List dropdown.

The preferred method is to add the job to the job list as the job list enables running multiple jobs together
and also enables scheduling of jobs.

Create Volume – Thin Gatekeepers

Optionally add
devices to an
existing Storage
Group


Notice when Virtual Gatekeeper is chosen, there is no option for volume capacity. Because Gatekeepers
were selected, the wizard knows to create three-cylinder devices for this purpose.

Job List – Group Jobs


Go to the Job List by clicking the Job List link in the banner area, or by selecting Job List under Events
in the left navigation pane. You can group multiple jobs and then run the group of jobs as a single job.
Highlight the list of jobs to be grouped and click the Group button.

In the Group Jobs dialog, provide a name for the group. Optionally, you can reorder the jobs and schedule
the grouped job to run on a certain date and time.

The job group is displayed in the Jobs List. By selecting the grouped job, you can modify it, run the job, or
schedule it to run later.

Job Group Details

Job Details


To view details of a job or job group, double-click the job.

Click the More Actions icon—three vertical dots—to Ungroup a grouped job, or to delete a job from the
Jobs List.

Run Job Group


To run the job, click Run and click OK in the confirmation dialog. The Status of the job changes to
RUNNING.

Job Succeeded


If the jobs complete successfully, the Status changes to SUCCEEDED. To see the Tasks details, double-
click the job.

Unisphere – Volumes View


The newly created devices can be seen in the Volumes View. Go to the Volumes View by choosing
Volumes from the Storage section dropdown. The display has been scrolled to show the newly created
devices 00A0:00AA.

Thin Device Expansion – SYMCLI

• symdev syntax
symdev modify <SymDevName> -tdev -cap <#> [-captype <cyl|mb|gb|tb>]
symdev modify -devs <SymDevStart>:<SymDevEnd> -tdev -cap <#> [-captype <cyl|mb|gb|tb>]

• Examples:
– Expand device 000AD to 20 GBs
symdev modify 000AD –cap 20 –captype gb –tdev

– Expand devices 000AA-000AF to 300 GBs


symdev modify –devs 000AA:000AF -cap 300 –captype gb -tdev


TDEVs can be increased in size with online device expansion. The Solutions Enabler symdev modify command is used to expand device capacity. Devices can be expanded up to 64 TB, online and nondisruptively. This feature applies to TDEVs only, including FAST.X externally provisioned and incorporated devices. See the latest Solutions Enabler CLI Command Reference for more detail.

Shown here is an example of the command to expand a single device, device 000AD, to 20 GB in size.
Also shown is an example of the command to expand a range of devices to 300 GB.

The –captype option specifies the units of capacity, either cylinders, megabytes, gigabytes, or
terabytes. The default is cylinders.

Thin Device Expansion – Unisphere


A single volume can be expanded in Unisphere for PowerMax from the Volume page. Select an existing
volume and click the Expand button. In this example, the capacity of the volume is 10 GB. An additional
capacity of 5 GB is added to expand the volume capacity to 15 GB. Select Run Now to run the job now.
After the job completes, rescan storage on the host to verify that the operation completed successfully.

Storage Group Capacity Expansion


Unisphere for PowerMax also supports modifying the size of multiple volumes in a Storage Group with a
single operation. From the Storage Group Dashboard, select a Storage Group and click the Modify
button. In the Modify Storage Group dialog, increase the Volume Capacity setting. In this example,
DemoGroup has five 10 GB volumes. New volume capacity can be typed in or the drop-down menu can
be used to add additional capacity to the SG. PowerMaxOS supports expanding a volume up to 64 TB.
Notice that each volume has increased from 10 GB to 15 GB, adding 25 GB to the SG, for a total capacity
of 75 GB.

Deleting Devices

• Thin devices
– Must not be mapped to a front-end port
– Must not have any allocations or written tracks

• SYMCLI – symconfigure syntax


delete dev SymDevName[:SymDevName];

• SYMCLI – symdev syntax


symdev delete SymDevName[:SymDevName]

• Unisphere
– Select devices to be deleted in Volume listing and click Delete
(Trash Can icon)

• Data Devices (TDATs) cannot be deleted with Solutions


Enabler or Unisphere


Solutions Enabler or Unisphere for PowerMax can be used to delete thin devices. The device to be
deleted must not be mapped to a front-end port and must not have any allocations or written tracks.

The symconfigure and symdev syntaxes for the deletion of devices are shown on the slide. The
symconfigure command syntax can be submitted using the –file or –cmd options. To delete devices
in Unisphere, go to the Volumes listing page, select the devices, and then click the Trash Can icon to
delete. To run the device deletion, click OK in the confirmation dialog.

To free up all allocations or written tracks, you can use the SYMCLI command symdev –sid <SymmID>
free –all –devs <SymDevStart>:<SymDevEnd>.

To free up all allocations or written tracks in Unisphere, go to the Volumes listing page, select the devices,
and then click More Actions button—three vertical dots—and choose Allocate/Free/Reclaim. In the
dialog choose Free Volumes, check Free all allocations for the volume (written and unwritten), and
run the job.

Data devices (TDATs) cannot be deleted with Solutions Enabler or Unisphere.
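
Putting these steps together, a minimal sketch of retiring a range of thin devices (assuming the devices are already unmapped and unmasked; the device range is illustrative):

symdev -sid 217 free -all -devs 000AA:000AF
symdev -sid 217 delete 000AA:000AF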

Lesson: Port Management
This lesson covers the following topics:

• Director emulations

• Port attributes

• Port association


This lesson covers director emulations, port attributes, and port association.

Director Emulations

Director Emulations

• Slice A - Infrastructure Management (IM)


• Slice B – HYPERMAX OS Data Services (EDS)
• Slice C - Back End Emulation (DS)
• Slice D-H - Remaining Emulations
– FA, SE, RF, RE, EF*, FE, DX
• Each emulation appears only once and
consumes CPU cores
– I/O module ports mapped to emulation
– Maximum 16 ports per director

*PowerMax 2000 and VMAX All Flash 250 arrays do not support mainframe (EF) attach


In the PowerMax, VMAX All Flash, and VMAX3 Family of arrays, there are eight slices per director.

Slice A is used for the Infrastructure Manager (IM) system emulation. The goal of IM emulation is to place
common infrastructure tasks on a separate instance so that it can have its own CPU resources. The IM
performs all environmental monitoring and servicing. All environmental commands, syscalls, and FRU
monitoring are issued on the IM emulation only. DAE FRUs are monitored by the IM through the DS
emulation. If the DS emulation is down, access to DAE FRUs is affected.

Slice B is used by HYPERMAX OS Data Services (EDS) system emulation. EDS consolidates various
HYPERMAX OS functionalities to enable easier and more scalable addition of features. Its main goals are
to reduce I/O path latency and introduce better scalability for various HYPERMAX OS applications. EDS
also manages Open Replicator data services.

Slice C is used for the back-end emulation (DS, the SAS back end).

Slices D through H are used for the remaining emulations. The supported emulations are Fibre Channel
(FA), iSCSI (SE), FC RDF (RF), GbE RDF (RE), FICON (EF), FCoE (FE), and the DX emulation used for
FAST.X and ProtectPoint. Only those emulations that are required are configured.

Each emulation appears only once per director and consumes cores as needed. A maximum of 16 front-
end I/O module ports are mapped to an emulation. In order for a front-end port to be active, it must be
mapped to an emulation.

Director Emulations Example
C:\Users\Administrator>symcfg list -dir all -sid 217

Symmetrix ID: 000197600217 (Local)

S Y M M E T R I X D I R E C T O R S

Ident Type         Engine Cores Ports Status
----- ------------ ------ ----- ----- ------
IM-1A IM 1 3 0 Online
IM-2A IM 1 3 0 Online
ED-1B EDS 1 4 0 Online
ED-2B EDS 1 4 0 Online
DF-1C DISK 1 5 4 Online
DF-2C DISK 1 5 4 Online
FA-1D FibreChannel 1 8 9 Online
FA-2D FibreChannel 1 8 9 Online
SE-1E GigE 1 2 8 Offline
SE-2E GigE 1 2 8 Offline
RF-1F RDF-BI-DIR 1 2 1 Online
RF-2F RDF-BI-DIR 1 2 1 Online


There is a single emulation instance of a specific type per director. The output from the symcfg list –
dir command displays the director emulations. The emulation instances that are seen in this example
output are:

1A & 2A - Infrastructure Manager (IM)

1B & 2B - HYPERMAX OS Data Services (EDS)

1C & 2C - Disk adapter (output shows it as DF; it is the DS back-end emulation)

1D & 2D - Fibre Channel Frontend Adapter (FA)

1E & 2E - iSCSI/Gigabit Ethernet (SE)

1F & 2F - Fibre RDF (RF)

Also shown is the engine the emulations are running on, the number of cores each emulation is using, the
number of ports associated with the emulation type, and status. Notice the number of ports associated
with the FA and RF emulations.

With PowerMaxOS, all director emulations are capable of supporting multiple cores. The actual number of
cores assigned to a director is fixed. Also, all director emulations support a variable number of ports. Ports
are either physical or virtual. Virtual ports are associated with FA directors. You can associate and
disassociate ports from the FA and RF emulations if needed.

Add Director Emulations – Solutions Enabler

• New symconfigure command syntax examples

symconfigure –sid 217 commit –cmd “add dir slot_num = 1 type = FA;”
– add dir slot_num = <director number> type = <FA|FE|SE|RF|RE>

symconfigure –sid 217 commit –cmd “remove dir 1f;”


– remove dir slot_num = <director number>

• Supported director types


– FA – Fibre channel front-end
– FE – FCoE front-end (VMAX3 only)
– SE – iSCSI front-end
– RF – Remote Fibre (SRDF)
– RE – Remote Ethernet (SRDF)

• No duplicate emulations


Solutions Enabler 8.2 and above supports the ability to add and remove directors through the
symconfigure CLI. New options, add dir and remove dir are used to support director
management. Examples of these CLI options are shown here.

When adding or removing directors, only front-end fibre channel (FA), Fibre Channel over Ethernet (FCoE
– FE on VMAX3 arrays only) and iSCSI (SE) can be used. Also, in SRDF configurations, remote fibre (RF)
or remote Ethernet (RE) can be added or removed. All other emulations cannot be added or removed.
Directors can be modified in slices D, E, F, G, and H only, as others—A, B, and C—are reserved for
internal and back-end emulations. An available slice must be present to add an emulation. If an emulation
already exists on a director, another instance of the emulation cannot be added to that director.

Add Director Emulations – Unisphere

Supported director types


• FA – Fibre channel front-end
• FE – FCoE front-end
• SE – iSCSI front-end
• RF – Remote Fibre (SRDF)
• RE – Remote Ethernet (SRDF)
No duplicate emulations


Adding Director Emulations can also be done using Unisphere for PowerMax. The same rules and
restrictions as were seen with Solutions Enabler apply to Unisphere add/remove activities.

From the System section, choose Hardware, and click the Manage Emulation button. Choose add or
remove, select the director slot (director number) and emulation type, and choose Add to Jobs List or
Run Now to add or remove an emulation.

Logical Port Layout
PowerMax 2000
[Figure: PowerMax 2000 director logical port layout. Slots 0 through 10 on the odd (A) and even (B) directors hold the MMCS/MM, NVMe flash back-end, universal/front-end, compression and deduplication, reserved, and fabric (SIB) modules, with their logical port numbers.]

The logical port layout for a PowerMax 2000 array is shown here. PowerMax 2000 engines support 32
ports per director, ports 0 through 31. Ports 0, 1, 2, 3, 20, 21, 22, and 23 are reserved and not currently
used. Ports 4 through 11 and 24 through 31 can be used for front-end connectivity, and a compression
and deduplication module is installed in slot 7. Ports 12, 13, 16, and 17 are used for back-end connectivity.
On the SIB, ports 0 and 1 are used for inter-engine connectivity, as the PowerMax 2000 with its maximum
configuration of two bricks, does not require a fabric. Port numbers do not become available unless an I/O
module is inserted in the slot. Each FA emulation also supports 32 virtual ports numbered 32 to 63.

Logical Port Layout
PowerMax 8000
Multi-engine system (“Born as” Multi-Engine System)
• Compression and deduplication module located in Slot 7
• Subsequent engines added have compression and deduplication in Slot 7

Single-engine system (“Born as” Single-Engine System)
• Compression and deduplication module located in Slot 9
• Subsequent engines added have compression and deduplication in Slot 9

[Figure: PowerMax 8000 multi-engine and single-engine logical port layouts.]


The logical port layout for PowerMax 8000 arrays is shown here. It is important to understand what the
system was “Born as”, that is, how it was initially ordered from the factory. With systems born as multi-
engine systems, ports 4 through 11 and 24 through 31 can be used for front-end connectivity. The
compression and deduplication module is installed in slot 7. Upgrades adding more bricks to this
configuration also have the compression and deduplication module in slot 7.

In systems that are born as single-engine systems, notice that the compression and deduplication module is installed in slot 9. When this module is installed in slot 9, it eliminates the ability to use ports 28
through 31 for front-end connections. Upgrades adding more bricks to a single-engine configuration also
have the compression and deduplication module installed in slot 9, decreasing the number of front-end
ports over the entire system. In both multi-engine and single-engine configurations, ports 0, 1, 2, 3, 20, 21,
22, and 23 are reserved and not currently used. Ports 12, 13, 16 and 17 are used for back-end
connectivity. Ports 0 and 1 on the SIB are used to connect redundantly to the two InfiniBand switches, or
the fabric, in multi-engine systems.

In single-engine systems, the SIB is not installed. If a single-engine system is upgraded, SIB modules and
a fabric are included as part of the upgrade process and must be installed into the existing system. All
engines then have SIB modules to connect to the fabric for inter-director and inter-engine
communications. Port numbers do not become available unless an I/O module is inserted in the slot. Each
FA emulation also supports 32 virtual ports numbered 32 to 63.

Logical Port Layout
VMAX All Flash 450, 850 and 950
[Figure: VMAX All Flash 450, 850, and 950 director logical port layout. Slots 0 through 10 hold the MMCS/MM, flash back-end, vault to flash, universal/front-end, compression and deduplication (slot 9), and fabric (SIB) modules, with their logical port numbers.]

VMAX All Flash 450, 850 and 950 model logical port layouts are the same. Slot 9 contains a compression
module by default. This slot and the associated ports—ports 28 to 31—are not available for FE
connectivity in these models. Ports 12 through 19 are used for back-end connectivity. On the SIB, ports 0
and 1 are used for connectivity to the fabric in each director. Port numbers do not become available unless
an I/O module is inserted in the slot. Each FA emulation also supports 32 virtual ports numbered 32 to 63.

Logical Port Layout
VMAX 250
[Figure: VMAX All Flash 250 director logical port layout. Slots 0 through 10 hold the MMCS/MM, vault to flash, flash back-end (slot 4), universal/front-end, compression and deduplication (slot 7), an empty slot 5, and directly connected fabric (SIB) modules, with their logical port numbers.]


VMAX All Flash 250 models are designed to support 32 ports per director, ports 0 through 31. Ports 0, 1,
2, 3, 20, 21, 22, and 23 are reserved and not currently used. Ports 4 through 11 and 24 through 31 can be
used for front-end connectivity. VMAX 250 directors have up to three Vault to Flash I/O Modules in slots 0,
1, and 6, versus four in the other VMAX All Flash models. Slot 4 is used for the backend connections to
the disk drives in the DAEs, and Slot 5 is empty and unused. Slot 7 is used for the hardware compression
I/O Module, which is installed by default in every VMAX All Flash model. Finally, slot 10 is used for the
directly connected 56 Gb/s inter-director links, as no switches are used in the VMAX 250 models.

Logical Port Layout
VMAX3
[Figure: VMAX3 director logical port layout. Slots 0 through 10 hold the MMCS/MM, vault to flash, flash back-end, universal/front-end, and fabric (SIB) modules, with their logical port numbers.]


VMAX3 arrays are designed with these engines to support 32 ports per director, ports 0 through 31. Ports
0, 1, 2, 3, 20, 21, 22, and 23 are reserved and not currently used. Ports 4 through 11 and 24 through 31
can be used for front-end connectivity. Ports 12 through 19 are used for back-end connectivity. On the
SIB, ports 0 and 1 are used for connectivity to the fabric in each director. Port numbers do not become
available unless an I/O module is inserted in the slot. Each FA emulation also supports 32 virtual ports
numbered 32 to 63.

System Hardware


The Unisphere System Hardware view for an array shows front-end, RDF, and back-end director
information. Clicking the down arrow on a director, as shown here with FA-1D and SE-1E, shows a listing
of the associated ports and status information. The Available Ports icon displays ports not in use, and the
type and speed of the port.

Host Types

• Physical servers running UNIX offered by different vendors


– HP-UX from Hewlett Packard
– Solaris from Oracle
– IBM-AIX from IBM
– Linux from various companies
• Windows servers running Microsoft Windows
• Virtual Machines running on Hypervisors
• Mainframe
– z/OS
– z/TPF
– z/VM
– Linux on System Z

134 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

PowerMax, VMAX All Flash, and VMAX3 arrays can be attached to a wide variety of operating systems—
too numerous to list here. In the open systems world, the most widely used operating systems are MS-
Windows and UNIX flavors such as Solaris, HP-UX, AIX, and Linux.

In recent years as VMware has grown in popularity, it is also common to find these arrays attached to
VMware ESXi servers. For a complete list of supported hosts and operating systems, consult the E-Lab
navigator accessible through the Dell EMC Support website.

PowerMaxOS supports mainframe attach on PowerMax and VMAX All Flash arrays. With HYPERMAX
OS 5977 Q1 2016 Service Release and above, mainframe attach is supported with VMAX All Flash and
VMAX3 arrays. Operating system support for mainframe includes z/OS, z/TPF, z/VM, and Linux running
on System Z. PowerMax 2000 and VMAX All Flash 250 arrays do not support mainframe attach.

Front-End Port Flag Requirements

• Different operating systems need different bits or flags set on the front-end
  port to recognize devices
• ACLX flag enables storage auto-provisioning for the port
• Port flag settings can be overridden at the initiator group level
• Hosts should access devices over two or more ports
  – With path management software
    › Higher availability
    › Load balancing

If hosts A and B have the same port flag requirements and host C has different
requirements, the initiators on host C can be set to override the port flags on
the front-end port of the array
135 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

Many vendors require that specific fibre/SCSI flags are set to communicate with the storage array.
PowerMax, VMAX All Flash, and VMAX3 arrays permit the setting of flags at the front-end port level.
Front-end ports can be shared by multiple hosts as shown in the diagram. Sometimes hosts sharing the
front-end ports may have different bit/flag requirements.

PowerMax, VMAX All Flash, and VMAX3 arrays permit port flags to be overridden by flags set at the
initiator or initiator group level to accommodate hosts with different bit/flag requirements. The auto-
provisioning SYMCLI command symaccess or Unisphere for PowerMax is used to allocate storage to
hosts.

The auto-provisioning process automatically maps and masks the devices. Most hosts typically access
storage through multiple front-end ports. Host-based path management software—for example,
PowerPath—is used to provide higher availability and load balancing.

Simple Support Matrices – Dell EMC E-Lab

https://elabnavigator.emc.com
136 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

Browse to the Dell EMC E-Lab Interoperability Navigator website (https://elabnavigator.emc.com/). Under
Simple Support Matrices, choose Storage. Click the link for the desired Simple Support Matrix. The
Director Bit Settings Simple Support Matrix, shown here, lists the port flag settings that are required for the
various operating systems.

The host connectivity guides for the different operating systems can be found on the E-Lab website also.

Excerpt from Simple Support Matrix

For most operating systems, the required flags are enabled by default

137 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

Here is an excerpt from the Simple Support Matrix for Director Bit Settings in a Fibre Channel Switched
environment for VMAX All Flash and VMAX3 arrays. For most operating systems, the required flags are
enabled by default. For HP-UX systems, the Volume Set Addressing flag has to be enabled. The Simple
Support Matrix also lists settings for iSCSI and FCoE. See the Simple Support Matrix for more details.

Port Settings for Common Hosts

• HP
– Volume Set Addressing (V)
– SPC-2 Compliance (SPC2)
– SCSI-3 Compliance (SC3)

• IBM AIX, Solaris, Linux, Windows, VMware


– SPC-2 Compliance (SPC2)
– SCSI-3 Compliance (SC3)

• Most of the required flags are enabled by default


• ACLX flag is required for auto-provisioning

138 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

Shown here are the common SCSI bus and Fibre port settings that are used by the common operating
systems. The ACLX flag needs to be enabled on the port to use auto-provisioning groups on PowerMax,
VMAX All Flash, and VMAX3 arrays.

ACLX Port Considerations

• ACLX flag is required for auto-provisioning


– Enabled by default

• ACLX device
– Preconfigured
› No user management of ACLX device
– Default LUN Address of 0
– Visibility on Port
› Controlled by Show_ACLX_Device attribute
› Enabled on one FA port by default

139 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

Auto-provisioning groups require the ACLX enabled ports. By default, the ACLX flag is enabled on all FA
ports. PowerMax, VMAX All Flash, and VMAX3 arrays come preconfigured with one ACLX device. A user
cannot create, delete, or change the attributes of the ACLX device. The device is visible to hosts at the
default address of 000. The device is only visible on front-end ports that have the Show_ACLX_Device
port characteristic set to Enabled.

When arrays come out of the factory, the first ACLX enabled port typically has the Show ACLX device flag
enabled. All other ACLX enabled ports typically have the flag disabled. As a result, the ACLX device is
visible to hosts only on one port.

Unisphere – Port Details and Attributes

140 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

From the System Hardware page, select a front-end port. In this example, FA-1D, Port 5 has been
chosen. Notice that details of the port are shown on the right of the screen. Scroll down to see the port
attributes enabled/disabled state on this port. A Performance tab is also available for performance
information about this port.

Unisphere – Set Port Attributes

141 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

To set port attributes, click the More Actions icon (three vertical dots). Clicking the Set Port Attributes
selection opens the Set Port Attributes dialog.

The current port flag settings are shown in the dialog. In this example, Volume Set Addressing is disabled.

Make the desired changes in the Set Port Attributes dialog. For example, enable Volume Set Addressing
by selecting the box next to it.

After making the desired changes, click Add to Job List. The task is listed in the Job List view, and the
command can be run from there. Alternately, you can choose Run Now.

Front-end port attributes or characteristics can be set with the SYMCLI symconfigure command. The
symconfigure syntax is # set port DirectorNum:PortNum FlagName=enable|disable;
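
As a concrete sketch of that syntax (the array ID, director, port, and file name here are hypothetical), the
Volume Set Addressing change from this example could be scripted in a command file:

   set port 1D:5 Volume_Set_Addressing=enable;

With that line saved to a file such as set_port.txt, the change would be committed with:

   symconfigure -sid 225 -f set_port.txt commit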

Listing Free Ports

C:\Users\Administrator>symcfg list -free -port -sid 217

Symmetrix ID: 000197600217

Slot Port FCISDRE Gb/sec Status


---- ---- ------- ------ -------
1 7 Y...YY. 16 Powered
2 7 Y...YY. 16 Powered

Legend:
Flags:
(F)A : Y = Yes, . = No
F(C)OE : Y = Yes, . = No
F(I)CON : Y = Yes, . = No
(S)E : Y = Yes, . = No
(D)X : Y = Yes, . = No
(R)F : Y = Yes, . = No
R(E) : Y = Yes, . = No

142 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

For PowerMax, VMAX All Flash, and VMAX3 arrays, there is only a single emulation instance of a specific
type—FA, DS, RF, and so on—available per director board as discussed earlier. If you need more
connectivity, you can add additional ports to an existing emulation instance. That instance uses all cores
that are configured to it to drive the workload across all ports that are assigned to it.

A capability attribute on each physical port determines the set of front-end emulations to which the port
may be assigned. You can associate—assign—unused ports to front-end emulations and disassociate—
free—ports from the FA and RF emulation types.

Ports that are available to be associated with an emulation can be listed with SYMCLI or with Unisphere
as shown here. The Slot numbers refer to the directors.

In this example, the available ports are port 7 on directors 1 and 2, which are 16 Gb/s Fibre Channel
ports. To view the free ports, use the symcfg list -free -port command in SYMCLI, or the Available Ports tab on the
System Hardware page in Unisphere.

Associate Ports

• symconfigure syntax
– associate port <portnum>,[<portnum>,…] to dir <dirnum>;
– Example: Associate port 7 to director 1D
associate port 7 to dir 1D;

• Online the port after association

143 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

Free/Available ports can be associated with a desired director emulation. The SYMCLI symconfigure
syntax is shown here with an example. In Unisphere for PowerMax, select an available port from the
Available Ports listing and then click the Associate button. In the Port Association dialog, select the
desired emulation to which the port should be associated and then click OK to complete the association.
In this example, port 7 is associated with the fibre channel emulation on director 1.

Once the port has been associated, it needs to be brought online. Use the SYMCLI symcfg -fa
<dirnum> -p <portnum> online command or use Unisphere to enable the port. The port can be
enabled from the Front-End Director port list view—you saw this view when you were setting port
attributes in Unisphere.
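
As a minimal sketch combining both steps (array ID hypothetical, command forms as shown in these
notes):

   symconfigure -sid 225 -cmd "associate port 7 to dir 1D;" commit
   symcfg -sid 225 -fa 1D -p 7 online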

Disassociate Ports

• Disassociate considerations
– Front-end port must not be in a port group
– RDF port must not have any RDF groups configured
– Port needs to be offline

• symconfigure syntax
– disassociate port <portnum>,[<portnum>,…] from dir <dirnum>;
– Example: Disassociate ports 30 and 31 from director 1D
disassociate port 30,31 from dir 1D;

• Unisphere
– Select desired port from port listing and choose Disassociate from the More Actions icon

144 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

Before disassociating, ensure that a front-end port is not in a port group or the RDF port does not have
any RDF groups configured. Ports have to be offline before they can be disassociated from a given
director. You can offline the port with SYMCLI or Unisphere for PowerMax.

The SYMCLI symconfigure syntax is shown here with an example. In Unisphere, from the System
Hardware page, select the port to be disassociated from the Front-End or RDF port listing. Choose
Disassociate from the More Actions (three vertical dots) icon.
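
A minimal sketch of the full sequence, assuming a hypothetical array ID 225 and the command forms
shown earlier, would offline both ports and then disassociate them:

   symcfg -sid 225 -fa 1D -p 30 offline
   symcfg -sid 225 -fa 1D -p 31 offline
   symconfigure -sid 225 -cmd "disassociate port 30,31 from dir 1D;" commit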

Lab: Port Management with Unisphere and SYMCLI

This lab covers:


• Port management with Unisphere for PowerMax
• Port management with SYMCLI

145 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

This lab covers port management with Unisphere and SYMCLI.

Module Summary

Key points covered in this module:

• Creation, deletion, and expansion of thin devices

• Management of port attributes and port associations

146 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

This module covered device creation, deletion, expansion, and port management.

Module: Storage Allocation using Auto-provisioning
Groups

Upon completion of this module, you should be able to:

• Describe storage allocation using auto-provisioning groups

• Explain the benefits, features, and considerations of host I/O limits

• Articulate host considerations for storage allocation

• Perform SL-based provisioning with Unisphere for PowerMax

• Perform SL-based provisioning with SYMCLI

147 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

This module focuses on allocation of storage to hosts using auto-provisioning groups. Auto-provisioning
groups, host I/O limits, and host considerations while allocating storage are discussed. Using Unisphere
for PowerMax and SYMCLI to perform SL-based storage provisioning is shown.

Lesson: Auto-provisioning Groups Overview
This lesson covers the following topics:

• Auto-provisioning Groups

• Host I/O Limits

148 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

This lesson provides an overview of auto-provisioning groups and host I/O limits. SYMCLI syntax to
manage auto-provisioning groups is introduced.

Auto-provisioning Overview

• Easy to provision storage in environments with clusters and hosts using
  multiple paths to the array
• Managed with SYMCLI or Unisphere for PowerMax
• SYMCLI
  – symaccess command manages all auto-provisioning groups and Masking Views
  – symsg command manages Storage Groups (SGs)
    › Used for auto-provisioning and FAST
      o Performs many of the functions that symaccess performs on SGs
      o Also used to set Host I/O limits, SRP, SL, and Workload Type (where applicable)
• Unisphere for PowerMax
  – Storage Groups and Hosts sections

149 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

As the number of volumes in a single array continues to increase, auto-provisioning offers a flexible
scheme for provisioning storage in large enterprises. Auto-provisioning groups enable storage
administrators to create groups of host initiators (Hosts), front-end ports (Port Groups), and logical devices
(Storage Groups). These groups are then associated to form a Masking View, from which all controls are
managed. Masking Views reduce the number of commands that are needed for masking devices, and
enable easy management of LUN masking.

Auto-provisioning in PowerMax, VMAX All Flash, and VMAX3 arrays is achieved by using the
symaccess SYMCLI command or with Unisphere for PowerMax. The symaccess command can
manage Storage Groups, Port Groups, Hosts, and Masking Views.

The symsg SYMCLI command manages Storage Groups and is used for auto-provisioning and with FAST
to set the required SRP, SL, and Workload Type, where applicable.

In Unisphere, the Storage Groups and Hosts sections are used to manage auto-provisioning. The Storage
section has the Storage Groups Dashboard. Port Groups, Hosts (Initiator Groups), and Masking Views are
managed under the Hosts section.

Auto-provisioning Groups
Group Names
• Up to 64 characters long
• Case insensitive
• Unique per group type

An Initiator/Host Group (containing FC WWNs or iSCSI IQNs), a Port Group (containing ports), and a
Storage Group (containing devices) are associated to form a Masking View.
150 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

Auto-provisioning groups are used for device masking in PowerMax, VMAX All Flash, and VMAX3 family
of arrays.

An Initiator Group (IG) contains the world wide name (WWN) or the ISCSI Qualified Name (IQN) of a host
initiator. A host initiator is also known as an HBA or host bus adapter. An IG may contain a maximum of 64
initiator addresses or 64 child IG names. IGs cannot contain a mixture of host initiators and child IG
names.

Port flags are set on an Initiator Group basis, with one set of port flags applying to all initiators in the
group. However, the FCID lockdown is set on a per initiator basis. An individual initiator can only belong to
one IG, but once the initiator is in a group, that group can itself become a member of another IG. This
feature is called cascaded Initiator Groups, and only a single level of cascading is allowed.

A Port Group may contain a maximum of 32 front-end ports. Front-end ports may belong to more than one
Port Group. Before a port can be added to a Port Group, the ACLX flag must be enabled on the port.

Storage Groups can only contain devices or other Storage Groups. No mixing is permitted. A Storage
Group with devices may contain up to 4K PowerMax, VMAX All Flash, or VMAX3 logical volumes. A
logical volume may belong to more than one Storage Group. There is a limit of 16K Storage Groups per
PowerMax, VMAX All Flash, or VMAX3 array. A parent Storage Group can have up to 64 child Storage
Groups.

One of each type of group is associated together to form a Masking View.

Auto-provisioning Flexibility

• Initiators can be dynamically added or removed from Initiator Groups


• Ports can be dynamically added or removed from Port Groups
• Storage can be dynamically added or removed from Storage Groups
• Automatic session rollback in the event of a session failure
– Audit log contains messages relating to the rollback

151 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

Once the groups have been created, auto-provisioning represents an easy way to handle provisioning. It
enables you to mask multiple devices, ports, and HBAs by placing them into groups. These groups can be
dynamically altered to give the host access to new storage.

With the symaccess command, all groups and views are backed up to a file, and can be restored from a
backup file.

When an auto-provisioning session fails on a PowerMax, VMAX All Flash, or VMAX3 array, the system
automatically rolls back the ACLX database to the state it was in prior to initiating the session. This
rollback feature recovers the database and releases the session lock automatically. The audit log contains
any messages relating to the rollback.

Provisioning Limits

Object                 PowerMax, VMAX All Flash, and VMAX3 Maximums
Devices                16K per director; 4K per Storage Group
Initiator Group (IG)   64 initiators (or 64 child IGs) per IG
Storage Group (SG)     16K SGs per array; 64 child SGs per parent SG
Port Group (PG)        16K PGs per array; 32 ports per PG
Masking View (MV)      16K MVs per array
LUN Addresses          4K LUN addresses per director port

152 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

The table shows the provisioning limits for PowerMax, VMAX All Flash, and VMAX3 arrays.

Storage Groups

• Collection of devices
– Used for LUN masking and/or FAST

• Can be explicitly associated with SRP, SL, and Workload Type


– Default – SG is associated with Default SRP and Optimized SL
– If an SG is explicitly associated with SRP or SL or both, it is considered FAST
managed
– A thin device may only be in one SG that is FAST managed
– Device could be in one FAST managed SG and in other non-FAST-managed SGs

153 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

A Storage Group is a logical collection of thin devices that are to be managed together. Storage Group
definitions are shared between FAST and auto-provisioning groups—LUN masking. A Storage Group can
be explicitly associated with an SRP or an SL or both. By default, devices within an SG are associated
with the default SRP and are managed by the Optimized SL.

If an SG is not explicitly associated with an SRP or an SL, it is not considered FAST managed. Devices
may be included in more than one SG, but may only be included in one SG that is FAST managed. This
restriction ensures that a single device is not managed by more than one SL or have data that is allocated
from more than one SRP.

Cascaded Storage Groups

• Storage Group with other Storage Groups as members
  – Single level of cascading
• Parent SG
  – Inherits all the devices in the child SGs
  – Cannot inherit the same device from more than one child
• Child SG
  – Contains devices only
  – SRP, SL, and Workload Type set on the child
  – May only be contained in a single parent
• Masking is typically done on the parent

154 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

Cascaded Storage Groups are Storage Groups that contain other Storage Groups. A Storage Group that
has other Storage Groups as members is called the parent. A child Storage Group contains only devices,
and is contained within a parent Storage Group. Cascading of Storage Groups enables individual
policies—SRP, SL, and, where applicable, Workload Type settings—for child Storage Groups, and a
Masking View for the parent Storage Group.

Only a single level of cascading is permitted. A parent Storage Group may not be a child of another
Storage Group. Storage Groups can only contain devices or other Storage Groups. No mixing is
permitted.

Empty SGs can be added to a parent SG if the parent SG inherits at least one device when the parent SG
is in a view. A parent SG cannot inherit the same device from more than one child Storage Group. A child
Storage Group may only be contained by a single parent Storage Group.

No parent Storage Group can be FAST managed. A FAST managed Storage Group is not allowed to be a
parent Storage Group.

Masking is not permitted for a child Storage Group which is contained by a parent Storage Group already
part of a Masking View. Masking is not permitted for the parent Storage Group which contains a child
Storage Group that is already part of a Masking View.

A child Storage Group cannot be deleted until it is removed from its parent Storage Group.
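
As a sketch of building such a configuration (the array ID, group names, and device ranges are
hypothetical), the child Storage Groups are created with devices first, and the parent is then created with
the child group names as members:

   symaccess create -sid 225 -name Child_SG1 -type storage devs 50:51
   symaccess create -sid 225 -name Child_SG2 -type storage devs 52:53
   symaccess create -sid 225 -name Parent_SG -type storage sg Child_SG1,Child_SG2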

Create Storage Group – SYMCLI Syntax

symaccess example: Create SG and add devices 50, 51, and 52

#symaccess create -sid 225 -name SG_1 -type storage devs 50:52

symsg example: Create SG, set SL and WL, then add devices 50 through 52

#symsg create -sid 225 SG_1 -sl Gold -wl DSS
#symsg -sg SG_1 -sid 225 addall -devs -range 50:52

155 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

The example shows how to use both symaccess and symsg commands to create Storage Groups. The
symaccess command enables you to create the Storage Group and simultaneously add devices or child
Storage Groups.

The symsg command enables you to create an empty Storage Group first and then populate it with
devices or child Storage Groups. The symsg command also enables you to set the SL, Workload Type,
and Host I/O limits while the Storage Group is created.

See the latest Solutions Enabler Array Management CLI User Guide for more details and options while
creating and managing Storage Groups with the symaccess and symsg commands.

Other Common SG Operations – SYMCLI

Action              symaccess                                    symsg
Add devices         symaccess -sid 80 -name MyGrp                symsg -sg MyGrp -sid 80
                    -type storage add devs CD:F4                 addall -devs -range CD:F4
Remove devices      symaccess -sid 80 -name MyGrp                symsg -sg MyGrp -sid 80
                    -type storage remove devs CD:F4              rmall -devs -range CD:F4
Display group info  symaccess list -type storage -sid 80         symsg list -sid 80
                    symaccess show MyGrp -type storage -sid 80   symsg show MyGrp -sid 80
Delete a group      symaccess -sid 80 -name MyGrp                symsg -sid 80 delete MyGrp
                    -type storage delete

156 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

Here are some other commonly performed Storage Group operations. Storage Groups can be renamed if
needed.
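
A rename would presumably follow the same pattern as the Masking View rename shown later in this
module; as an unverified sketch with hypothetical names:

   symaccess -sid 80 rename -name MyGrp -new_name YourGrp -type storage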

Set SRP/SL/Workload Type – symsg Syntax

symsg -sg <SgName> -sid <SymmID>

set [-sl <SLName> [-wl <WorkloadName>] |-nosl]

[-srp <SRPName> | -nosrp]

-sl
– Diamond, Platinum, Gold, Silver, Bronze, Optimized (default)

-wl
– OLTP, OLTP_REP, DSS, DSS_REP

-nosl – Removes any explicitly set SL, also removes workload type

157 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

By default, Storage Groups use the default SRP and are managed by the Optimized SL. The SG is
considered FAST managed only if an SL or SRP is explicitly set. The valid arguments for the -sl and
-wl options are listed. Workload type is no longer used with PowerMax and VMAX All Flash arrays
running PowerMaxOS, as previously mentioned. The -nosl option removes any explicitly set SL and WL
type. The SG is managed by the Optimized SL. The -nosrp option removes any explicitly set SRP. The
SG uses the default SRP.
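
For example, assuming the SG_1 Storage Group from the earlier example, the following sketch makes
the SG FAST managed with the Diamond SL and then returns it to the Optimized default:

   symsg -sid 225 -sg SG_1 set -sl Diamond
   symsg -sid 225 -sg SG_1 set -nosl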

PowerMaxOS Data Reduction

INTELLIGENT INLINE DATA REDUCTION

• Native data reduction
  – Supports all data services
• Granular
  – Storage Group (Application) level
  – Can compress and/or dedupe existing data
• Use VMAX Sizer for proper configuration
• Data is reduced by I/O module
  – If the hardware fails, software is used
• Performance-optimized
  – Balances performance and efficiency
  – Hot data not compressed
• Data deduplication (dedupe) and compression on PowerMax
• Data compression on VMAX All Flash

158 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

PowerMaxOS includes data reduction, which maximizes the PowerMax and VMAX All Flash value
proposition by providing the best space savings. Intelligent inline data reduction delivers higher space
efficiency, reducing the overall cost per usable TB. It works with all VMAX trusted data services such as
SnapVX, SRDF, eNAS, and D@RE. Compressed data can be encrypted in real time, which is unique.

Data reduction operates granularly at the Storage Group (Application) level so customers can target those
workloads that benefit the most. Data reduction techniques can be applied to existing data. In addition to
effective capacity calculations, cache requirements must be met to support compression. VMAX Sizer
must be used to size a system that will have data reduction enabled.

Data is reduced as it moves from the system cache to the back end drives using a data reduction I/O
Module on each director. If the hardware I/O module fails, software-based reduction is used.

PowerMaxOS compression is performance-optimized and smart enough to ensure the most active data is
not compressed. This optimization enables the system to deliver maximum throughput using cache and
SSD technology, and ensures that system resources are balanced and always available when required.

PowerMaxOS running on PowerMax arrays provides compression and deduplication (dedupe) together.
PowerMaxOS running on VMAX All Flash arrays provides compression only.

PowerMaxOS Data Reduction by Platform

                                        PowerMax                    VMAX All Flash
Data Reduction Technology               Inline Compression and      Inline Compression
                                        Inline Deduplication
I/O Module Type                         Data Reduction Module       Compression Module
Adaptive Compression Engine (ACE)       Yes                         Yes
(algorithms learn from the incoming
workload to create a customized
back end)
Compression Algorithm                   Deflate                     LZS
Extended Data Compression (EDC)         Yes                         No
(further compresses compressed data)
SRP Type                                Open Systems (FBA)          Open Systems (FBA)

159 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

PowerMaxOS data reduction technology differs slightly between PowerMax and VMAX All Flash arrays. In
PowerMax arrays, both compression and deduplication are done inline using a data reduction hardware
I/O module. In VMAX All Flash arrays, inline compression is used, without deduplication. VMAX All Flash
arrays use a compression hardware I/O module.

Both systems employ an Adaptive Compression Engine, or ACE, which is a combination of multiple core
components working together to achieve maximum system efficiency. Intelligent algorithms learn from
incoming workloads and dynamically create a customized back end, catering to the incoming workload.
ACE changes the backend compression pool layout as needed to ensure it operates at optimal levels for
both performance and space efficiency. The algorithms also identify the busiest data in the system and
that data is not compressed, minimizing decompression overhead.

Deflate compression is used in PowerMax, while LZS is used in VMAX All Flash arrays. Both of these
techniques are lossless data compression algorithms.

PowerMax systems include Extended Data Compression, or EDC. EDC compresses already compressed
data to gain further capacity savings. Data that qualifies for EDC is data that has not been accessed for 30
days. To be eligible for EDC, the data must also belong to a data reduction enabled Storage Group, and
must not be already compressed by EDC.

Data reduction applies to Open Systems Fixed Block Architecture (FBA) data only.

PowerMax Deduplication

Dedupe Function/Component                Description
Hardware Acceleration                    Inline process using the same data reduction module as ACE;
                                         data is passed through the module to generate a unique Hash ID
Deduplication Algorithm                  Identifies identical data patterns based on Hash ID;
                                         performed as data is passed through the data reduction module
Hash Table                               Unique Hash IDs stored in a table in system memory;
                                         a representation of data in a dedupe relationship
Deduplication Management Object (DMO)    Only exists when a dedupe relationship exists;
                                         stores and manages pointers between front-end devices and
                                         the single instance of data stored on disk

160 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

PowerMax uses inline deduplication to reduce redundant copies of data that would consume storage
capacity. Pointers are used to replace the redundant copies, and provide access to multiple sources for
the subsequent requests for that data. Deduplication in PowerMax arrays is accomplished through a
series of functions and components that are described in the table above.

PowerMaxOS Write I/O Flow
[Flowchart: PowerMaxOS write I/O flow]
1. A write is initiated. If data reduction is not enabled, or the data is active, the normal I/O flow is
   performed.
2. Otherwise, the data is compressed and a hash ID is created.*
3. If the hash ID is not in the hash table, it is added to the table and the data is allocated to disk.
4. If the hash ID is already in the hash table, the hash ID entry is updated. If there are fewer than five
   front-end pointers, the write is added to an existing Deduplication Management Object (DMO), or a
   new DMO is created if one does not exist.

*Hash IDs and hash tables are not used in VMAX All Flash arrays.
161 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

The write I/O flow for PowerMax is shown here. With VMAX All Flash arrays, the IO flow does not involve
a hash ID and hash table which are used for deduplication, but the compression flow is the same.

PowerMax Data Reduction IO Flow

[Diagram: a host write to a TDEV passes through the Data Reduction Module and the I/O Engine to a
backend TDAT]
1. Host write
   • Write to the TDEV; destage at a later time
2. Destage
   • The Data Reduction Module compresses the data and generates a unique hash
3. I/O Engine
   • Checks the hash table; there is no entry for the new data
   • Adds the hash entry to the table
   • Writes the compressed data to the TDAT and links the hash entry to it

162 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

Writes to PowerMax arrays are handled much in the same as VMAX Family arrays. Writes are stored in
cache, and the write is immediately acknowledged to the host. At a later point, the array destages the data
from cache to backend disks. With PowerMax data reduction, the destage process differs. Once the
decision to destage has been made, data is placed in the I/O Engine. From there, the Data Reduction
module compresses the data and creates a unique hash for the data. The I/O Engine then checks the
hash table to see if there is already an entry for that data. Since it is a new write, there is not, and the hash
is added to the table. The compressed data is written to the backend TDAT, and the hash is linked to it.
The write is now complete, and the TDEV is pointing to the data on the TDAT.

Duplicate Write I/O Flow

[Diagram: a duplicate host write to a second TDEV resolves to the same hash table entry]
1. Host write
   • Write to a different TDEV; destage at a later time
2. Destage
   • The Data Reduction Module compresses the data and generates a unique hash
3. I/O Engine
   • Checks the hash table and sees the duplicate entry
   • Updates the table with an additional pointer; the data is not destaged again

163 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

Some time later, a duplicate write is done to a different TDEV. The Data Reduction module compresses
the data and generates the dedupe hash, which is identical to the previous write. The I/O Engine then
checks the hash table and sees that there is already a hash entry for this data. Data does not need to be
destaged. The pointer in the table is updated for the new write.

Compression Settings – Unisphere

164 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

Compression enables users to compress user data on Storage Groups. Compression is enabled by
default on PowerMax and VMAX All Flash arrays, and can be turned on and off at the Storage Group
level. If a Storage Group is cascaded, enabling compression at the parent level enables compression for
each of the child Storage Groups. The user has the option to disable compression on one or more of the
child Storage Groups if desired. To turn off the feature on a particular Storage Group in Unisphere, clear
the Compression check box when creating or modifying Storage Groups. Disabling compression does not
automatically decompress data, but new I/O is not compressed.

Compression Settings – Solutions Enabler

• Enable compression on a selected Storage Group

  symsg -sid <SymmID> -sg <sg_name> set -compression

• Disable compression on a selected Storage Group

  symsg -sid <SymmID> -sg <sg_name> set -nocompression

• Create a new Storage Group with compression enabled

  symsg -sid <SymmID> create <sg_name> -compression -srp <srp_name>
  -sl <ServiceLevel_name>

165 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

To enable or disable compression on a particular Storage Group using Solutions Enabler, use the symsg
set commands that are shown here.

When creating a new Storage Group with compression enabled, use the symsg create command that
is shown.
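
For example, assuming the hypothetical Storage Group SG_1 on array 225, compression could be
disabled and the group then inspected (the symsg show output is assumed to report the compression
state):

   symsg -sid 225 -sg SG_1 set -nocompression
   symsg show SG_1 -sid 225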

Compression Constraints

• PowerMax and VMAX All Flash only


– No VMAX3 hybrid or all-flash systems

• VMAX Sizer
– Systems must be sized properly to enable compression

• Fixed Block Architecture only


– No mixed FBA/CKD Storage Groups

• Changes require StorageAdmin rights


• No external array support
– No FAST.X support
– CloudArray can run on array with compression

166 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

Compression is only supported on the PowerMax and VMAX All Flash arrays. In addition to effective
capacity calculations, cache requirements must be met to support compression. VMAX Sizer must be
used to size a system that will have compression enabled. Compression is supported on Fixed Block
Architecture (FBA) devices only. Compression is not supported with mainframe (CKD) including mixed
FBA/CKD Storage Groups. Enabling or disabling compression requires StorageAdmin rights.

Compression is not supported with SRPs with external Flash, so FAST.X is not supported. However,
CloudArray is supported on a VMAX with compression as long as a different SRP is used. If a non-
CloudArray FAST.X configuration is created in a separate SRP, the internal SRP can have compression
enabled.

Host I/O Limits Overview

• Set limits on the front-end bandwidth and IOPS consumed by applications


• Limits are set on a per Storage Group basis
– Storage Group is associated with a Masking View
– Limits are distributed across the directors in the Port Group of the associated
Masking View
› Distribution can be dynamic
– I/O limits may be placed on parent and child Storage Groups

167 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

The Host I/O Limits feature enables users to place limits on the front-end bandwidth and IOPS that are
consumed by applications.

Limits are set on a per Storage Group basis. As users build Masking Views with these Storage Groups,
limits for maximum front-end IOPS or MB/s are distributed across the directors within the associated
Masking View. The system then monitors and enforces against these set limits.

The Host I/O Limits can be managed and monitored using both Solutions Enabler and Unisphere for
PowerMax.

Host I/O Limits – Benefits

• Ensures that applications cannot exceed their set limit, reducing the potential of
impacting other applications
• Provides greater levels of control on performance allocation in multi-tenant
environments
• Enables predictability needed to service more customers on the same array
• Simplifies quality-of-service management by presenting controls in industry-
standard terms of IOPS and throughput
• Provides the flexibility of setting either IOPS, throughput, or both, based on
application characteristics and business needs
• Manages expectations of application administrators with regard to performance

168 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

The benefits of Host I/O Limits are listed here.

Host I/O limits are beneficial whenever an array is shared among multiple tenants by enabling the setting
of consistent performance SLAs. They prevent applications from using more than their allotted share of
front-end resources.

Host I/O Limits – Features

• Cascaded Storage Groups


– Limits can be set on parent, on child, or on both
– Sum of all child limits may exceed the limit of the parent
› Total I/O rate of all the children remains limited by the limit of the parent
– Individual child limit may not exceed the limit of the parent
• Dynamic I/O Distribution
– Mode can be Never, OnFailure, or Always
› Never: Implies static even distribution (default)
› OnFailure: On a failure, I/O limits are redistributed to online ports
› Always: I/O limits are dynamically distributed based on demand

169 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

For Cascaded Storage Groups, users may set up a cascaded SG configuration where there are optional
limits that are assigned to each individual child SG. The parent SG may also have its own assigned limit.
The sum of child limits may exceed the limit of the parent. However, the combined I/O rate of all child SGs
remains limited by the limit of the parent. Also, the individual child SG limits may not exceed the assigned
limit of the parent.

Host I/O distribution is governed by the Dynamic Mode setting. The default mode is Never which implies a
static even distribution of configured limits across the participating directors in the Port Group. The
OnFailure mode causes adjustment of the fraction of the configured Host I/O limits available to a
configured port based on the number of ports that are online. Setting the dynamic distribution to Always
causes dynamic distribution of the configured limits across the configured ports, enabling the limits on
each individual port to adjust to fluctuating demand.

For example, if the mode is set to OnFailure in a two-director Port Group which is part of a Masking View,
both directors are assigned half of the total limit. If one director goes offline, the other director is
automatically assigned the full amount of the limit. This assignment makes it possible to ensure that the
application is running at full speed regardless of a director failure.

Host I/O Limits – Considerations

• Only one limit per Storage Group


• Devices in multiple Storage Groups can only adhere to one limit
• A Storage Group with a Host I/O limit can be associated with, at most, one
Port Group in any provisioning view
• Usually, the total Host I/O limits may only be achieved with proper host load
balancing between directors

170 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

Only one limit can be set per Storage Group, and devices in multiple Storage Groups can only adhere to
one limit.

At any given time, an SG with a Host I/O Limit can be associated with, at most, one Port Group (PG) in
any provisioning view. If the SG with a Host I/O Limit is in a provisioning view with a PG, the SG and PG
combination have to be used when creating other provisioning views on the SG.

Usually, the total Host I/O Limits may only be achieved with proper host load balancing between directors.
Load balancing is achieved using multipathing software on the hosts, such as PowerPath.

Setting Host I/O Limits – symsg Syntax

• Can be set during SG creation or on existing SG


– During creation
symsg -sid <SymmID> create <SgName>
[-bw_max <MBperSec>]
[-iops_max <IOperSec>]
[-dynamic <NEVER | ALWAYS | ONFAILURE>]
– Existing SG
symsg -sg <SgName> -sid <SymmID>
set <[-bw_max <MBperSec> | NOLIMIT ]
[-iops_max <IOperSec> | NOLIMIT ]
[-dynamic <NEVER | ALWAYS | ONFAILURE>]>

171 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

Host I/O Limits can be set with the symsg command when the SG is created or on an existing SG.

Options:
• -bw_max – Limits the bandwidth, specified in megabytes per sec. The valid range for bandwidth is
from 1 MB/Sec to 100,000 MB/Sec. NOLIMIT removes any set limits.
• -iops_max – Limits the IOPS. The valid range for IOPS is from 100 IOPS to 2,000,000 IOPS and
must be specified in units of 100 IOPS. NOLIMIT removes any set limits.
• -dynamic – Sets the mode for the dynamic I/O distribution discussed earlier in this lesson.
NEVER is the default.
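
As a sketch with hypothetical values, the following caps SG_1 at 5,000 IOPS and 200 MB/s with
OnFailure redistribution, and then removes both limits:

   symsg -sid 225 -sg SG_1 set -iops_max 5000 -bw_max 200 -dynamic ONFAILURE
   symsg -sid 225 -sg SG_1 set -iops_max NOLIMIT -bw_max NOLIMIT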

Port Groups

• Contain valid front-end ports (for example, 1D:6 and 2D:6)
• A port can belong to more than one Port Group
• Ports must have the ACLX flag enabled

Create Port Group – SYMCLI Example:

#symaccess create -sid 225 -name PG_1 -type port -dirport 1D:6,2D:6

172 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

Port Groups contain front-end director and port identification. A port can belong to more than one Port
Group. Ports can be Fibre Channel or iSCSI. On PowerMax, VMAX All Flash, and VMAX3 arrays, you
cannot mix different types of ports, that is, Fibre Channel and iSCSI, within a single Port Group. Ports
must have the ACLX flag enabled. The ACLX flag is enabled by default.

Ports can be added and removed. When a Port Group is no longer associated with a Masking View, it can
be deleted.

The SYMCLI example that is shown creates a new PG named PG_1 containing two front-end ports.

See the latest Solutions Enabler Array Management CLI User Guide for more details and options while
creating and managing Port Groups with the symaccess command.

Other Common PG Operations – SYMCLI

Action                          symaccess
Add port to Port Group          symaccess -sid 80 -name MyPorts -type port add -dirport 1D:6
Remove port from group          symaccess -sid 80 -name MyPorts -type port remove -dirport 1D:6
Display group contents          symaccess list -name MyPorts -sid 80
                                symaccess show MyPorts -type port -sid 80
Delete a group                  symaccess -sid 80 -name MyPorts -type port delete
List the group or groups that   symaccess list -sid 80 -type port -dirport 1D:6
a port belongs to

173 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

Shown here are some of the operations that are commonly performed on a Port Group. Port Groups can
be renamed if needed.

See the latest Solutions Enabler Array Management CLI User Guide for more details and options while
creating and managing Port Groups with the symaccess command.

Initiator Groups (IG)

• Container of host initiators


– Can be cascaded
– Cannot mix host initiators and child IG names
– Cannot mix host initiator types

• An initiator may belong to only one IG


• A child IG may belong to one or more parent IGs
• Cascaded IG Example
– Child_IG1 contains WWN1 & WWN2
– Child_IG2 contains WWN3 & WWN4
– Parent_IG contains Child_IG1 and Child_IG2

174 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

An Initiator Group is a container of one or more host initiators, which are Fibre WWNs or iSCSI IQNs.
Each Initiator Group can contain up to 64 initiator addresses or 64 child IG names. Initiator Groups cannot
contain a mixture of host initiators and child IG names. Thus an IG contains only host initiators or it
contains only child IG names.

You cannot mix different types of initiators, that is, Fibre Channel WWNs and iSCSI IQNs, within a single
Initiator Group. Also, all child IG names that are added to a parent IG must contain the same initiator type.

Create Initiator Group – SYMCLI Example

#symaccess create -sid 225 -name IG_1 -type initiator -consistent_lun
 -wwn 2100001b321e9dd5

#symaccess -sid 225 -name IG_1 -type initiator add -wwn 2101001b323e9dd5

OR

#symaccess create -sid 225 -name IG_1 -type initiator -consistent_lun
 -file HBA_WWNS

File HBA_WWNS contains the initiator WWNs:
wwn:2100001b321e9dd5
wwn:2101001b323e9dd5

175 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

You can create an Initiator Group using the WWN of the HBA, or a file containing WWNs or another
Initiator Group name. Use the -consistent_lun option if the devices of an SG in a view must be seen
on the same LUN on all ports of the Port Group. If the -consistent_lun option is set on the IG,
PowerMaxOS ensures that the host LUN number that is assigned to devices is the same for the ports. If
this option is not set, the first available LUN on each individual port is chosen.

For arrays running PowerMaxOS or HYPERMAX OS 5977 or higher, you can create an Initiator Group
using the iSCSI IQN of the HBA also. To support iSCSI targets, the symaccess command includes
the -iscsi_dirport and -iqn options.

See the latest Solutions Enabler Array Management CLI User Guide for more details and options while
creating and managing Initiator Groups with the symaccess command.
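
A cascaded IG could be sketched with the same file-based method, assuming the child IGs already exist
and assuming that child group names use an ig: prefix in the file (this prefix parallels the wwn: prefix
above and should be verified against the Solutions Enabler documentation):

   #symaccess create -sid 225 -name Parent_IG -type initiator -file PARENT_IGS

   File PARENT_IGS contains
   ig:Child_IG1
   ig:Child_IG2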

Other Common IG Operations – SYMCLI

Action                         symaccess
Add initiator to group         symaccess -sid 80 -name MyInit -type initiator add -wwn 10000000c92ab6de
Remove initiator from group    symaccess -sid 80 -name MyInit -type initiator remove -wwn 10000000c92ab6de
Display group contents         symaccess list -name MyInit -sid 80
                               symaccess show MyInit -type initiator -sid 80
                               symaccess show MyInit -type initiator -sid 80 -detail
Delete a group                 symaccess -sid 80 -name MyInit -type initiator delete
List the group or groups to    symaccess list -sid 80 -type initiator -wwn 10000000c93124ae
which an initiator belongs

176 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

Shown here are some of the operations that are commonly performed on an Initiator Group. Initiator
Groups can be renamed if needed.

Masking View

• Association of one Initiator Group, one Port Group, and one Storage Group
– Devices in SG become visible to host initiators in the IG through the ports in the
PG

• Create Masking View – SYMCLI example

#symaccess create view -sid 225 -name MV_1 -sg SG_1 -pg PG_1 -ig IG_1

[Diagram: Masking View MV_1 associates IG_1, PG_1, and SG_1 across the SAN]
177 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

A Masking View is created by associating one Initiator Group, one Port Group, and one Storage Group. So
a Masking View is a container of a Storage Group, a Port Group, and an Initiator Group.

When you create a Masking View, the devices in the Storage Group become visible to the host. The
devices are masked and mapped automatically.

See the Solutions Enabler Array Management CLI User Guide for more details and options while creating
and managing Masking Views with the symaccess command.

Other Common Masking Operations – SYMCLI
Action                   symaccess
Rename a Masking View    symaccess -sid 80 rename view -name MyView -new_name YourView
Delete a Masking View    symaccess -sid 80 -name MyView delete view
Display view info        symaccess list view -name MyView -sid 80
                         symaccess show view MyView -sid 80
Backup                   symaccess -f symm80.bak -sid 80 backup
Restore                  symaccess -f symm80.bak -sid 80 restore

178 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

Shown here are some of the operations that are commonly performed on Masking Views. The
symaccess backup command backs up the entire masking database.

Lesson: Host Considerations – Storage Allocation
This lesson covers the following topics:

• HBA Flags

• Rescan SCSI Bus

179 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

This lesson covers host considerations that are related to storage provisioning. HBA flag settings and the
commands to rescan the SCSI bus on common server platforms are shown.

HBA Flag Settings for Common Hosts

• HP
– Volume Set Addressing (V)
– SPC-2 Compliance (SPC2)
– SCSI-3 Compliance (SC3)

• IBM AIX, Solaris, Linux, Windows, VMware


– SPC-2 Compliance (SPC2)
– SCSI-3 Compliance (SC3)

180 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

Earlier, the required SCSI and Fibre port settings at the array Port Level were set. Shown here are the
common SCSI bus and Fibre port settings that are used by the common operating systems. The port flags
settings can be overridden at the initiator or Initiator Group level.

Setting HBA Flags

• HBA flags that can be set on Initiator Groups


– Disable_Q_Reset_on_UA [D]
– Environ_Set [E]
– Volume_Set_Addressing [V]
– Avoid_Reset_Broadcast [ARB]
– OpenVMS [OVMS]
– SCSI_3 [SC3]
– SPC2_Protocol_Version [SPC2]
– SCSI_Support1 [OS2007]

• HBA flags that can be set on initiators


– All of the above except Volume Set Addressing

181 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

PowerMax, VMAX All Flash, and VMAX3 arrays enable you to set the HBA port flags on a per initiator or
Initiator Group basis. This feature allows specific host flags to be enabled and disabled on the director
port. If a flag conflicts with any initiator in the group, it cannot be set for the group. After a flag is set for a
group, it cannot be changed on an initiator basis.

Set Port Flags – SYMCLI

• To set port flags, use the symaccess command

Port level:
symaccess -sid <SymmID> -wwn <wwn> | -iscsi <iscsi>
set hba_flags <on <flag,flag,flag...> <-enable |-disable> |
off [flag,flag,flag...]>

Group level:
symaccess -sid <SymmID> -name <GroupName> -type initiator
set ig_flags <on <flag> <-enable |-disable> |
off [flag]>
set consistent_lun <on | off [-force]>

182 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

To set—or reset—the HBA port flags on a port, use the following SYMCLI syntax:

# symaccess -sid <SymmID> -wwn <wwn> | -iscsi <iscsi> set hba_flags <on <flag>
<-enable |-disable> |off [flag]>

To set—or reset—the HBA port flags on an Initiator Group, use the following SYMCLI syntax:

# symaccess -sid <SymmID> -name <GroupName> -type initiator set ig_flags <on
<flag> <-enable |-disable> |off [flag]>

If a flag conflicts with any initiator in the group, it cannot be set for the group. After a flag is set for a group,
it cannot be changed on an initiator basis.

See the Solutions Enabler Array Management CLI User Guide for more details on overriding port flags
with the symaccess command.
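
For example, using the syntax above with a hypothetical initiator and the IG_1 group from earlier slides,
SPC-2 and SCSI-3 compliance could be enabled on a single initiator, and Volume Set Addressing enabled
for the whole group:

   symaccess -sid 225 -wwn 2100001b321e9dd5 set hba_flags on SPC2,SC3 -enable
   symaccess -sid 225 -name IG_1 -type initiator set ig_flags on V -enable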

Commands to Scan SCSI Bus for Devices (1of 3)

• Solaris
  – devfsadm -C
• IBM AIX
  – lsdev -Cc adapter -F name | grep fc (identifies the Fibre Channel
    adapters such as fcs0, fcs1)
  – cfgmgr -l fcs? (? represents the adapter instance such as 0)
• HP-UX
  – ioscan -fnC disk (scans the host bus and identifies new devices)
  – insf -e (creates device special files)

183 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

After devices have been provisioned to a host by the creation of a Masking View, the operating system on
the host must recognize the device. A SCSI bus rescan must be initiated from the host to recognize the
device. The bus rescan commands vary from operating system to operating system.

The commands that are shown here are taken from the Dell EMC Host Connectivity Guides. While they
work reliably in most cases, they may not work for every version of a particular operating system. Verify
the accuracy of these commands by checking the vendor documentation.

Commands to Scan SCSI Bus for Devices (2 of 3)

• Linux (version 2.6 – does not work for all drivers)


– cd /sys/class/scsi_host/host? (? represents the host bus adapter instance,
for example, host0 or host1)
– ls -al scan
– echo "- - -" > scan (dashes represent channel, target, and LUN numbers)
• QLogic scan utility available from QLogic website
– ./ql-dynamic-tgt-lun-disc.sh
• Emulex lun_scan utility from Emulex website
– ./lun_scan.sh all


Since there are several flavors of commercially available Linux, there are various ways that the SCSI bus
on those systems can be rescanned. The methods that are documented here are taken from the Linux
Host Connectivity Guide.
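As a minimal sketch, assuming the sysfs scan interface described above is available, all SCSI hosts can
be rescanned in one loop (run as root):

# for h in /sys/class/scsi_host/host*; do echo "- - -" > $h/scan; done

The three dashes are wildcards for the channel, target, and LUN numbers.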

Commands to Scan SCSI Bus for Devices (3 of 3)

• Windows
– Uses the DISKPART CLI utility
› C:\DISKPART
DISKPART> rescan
DISKPART> exit
– Windows Disk Management GUI can also be used to perform a rescan
• Solutions Enabler commands to rescan the bus
– symcfg scan
– symntctl rescan (Windows only)


In addition to the vendor-supplied commands, Dell EMC also has some commands in Solutions Enabler
that are designed to scan the SCSI bus. The Dell EMC commands are convenient to use, but the vendor-
supplied commands are the most reliable.
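As a sketch, the DISKPART rescan can also be scripted by passing a script file on the command line;
rescan.txt is a hypothetical file name:

C:\>echo rescan > rescan.txt
C:\>diskpart /s rescan.txt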

Commands to Scan SCSI Bus – VMware

ESXi Server
• Command issued on ESXi server with esxcli enabled
esxcli storage core adapter rescan --all
• The preferred method is to have the command issued from a host that is
network-attached to the ESXi server and has the esxcli utility installed
esxcli -s 10.127.94.252 -u root -p <password> storage
core adapter rescan --all
• VMware vSphere GUI can also be used to rescan for new devices into the
ESXi server


The CLI commands that are shown here are useful for rescanning the SCSI bus. The preferred method of
using vCLI (esxcli) is to run it on a host that is network-attached to the ESXi console. Also, the VMware
vSphere GUI can be used to rescan the SCSI bus.
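If only a single adapter must be rescanned, esxcli also accepts an adapter filter. A hedged example,
assuming the HBA in question is vmhba1:

esxcli -s 10.127.94.252 -u root -p <password> storage core adapter rescan --adapter vmhba1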

Steps to Replace HBA

1. View old HBA WWN


symaccess list logins

2. Swap out the old HBA board with the new HBA
3. Discover the WWN of the new HBA
symaccess discover hba or symaccess list hba

4. Use the replace action


symaccess -sid 80 replace -wwn <WWN_old> -new_wwn <WWN_new>

5. Use the rename action to establish the new alias for the HBA
symaccess discover hba -rename


If a host adapter fails or needs replacement, replace the adapter and assign devices to a new adapter by
using the replace action in the following form:

# symaccess -sid <SymmID> replace -wwn <wwn> -new_wwn <NewWWN>

# symaccess -sid <SymmID> replace -iscsi <iscsi> -new_iscsi <NewiSCSI>
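Putting the steps together, a hedged example with a hypothetical replacement WWN:

# symaccess -sid 80 replace -wwn 2100001b321ea3d5 -new_wwn 2100001b32aabbcc
# symaccess discover hba -rename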

Lesson: Service Level Based Provisioning with
Unisphere
This lesson covers the following topics:

• Managing Hosts (Initiator Groups)

• Storage Provisioning Wizard

• Managing Storage Groups

• Managing Port Groups

• Managing Masking Views


This lesson covers SL-based provisioning of PowerMax, VMAX All Flash, and VMAX3 storage using
Unisphere for PowerMax.

Managing Hosts – Initiator Groups

Hosts View
• Create, Modify, Provision Storage to Host, Set Flags, Delete, View Details

Host Details

In Unisphere for PowerMax, Initiator Groups are called hosts. The configured hosts can be listed by
clicking Hosts under the Hosts panel. From the Hosts view, you can create new Hosts or Host Groups—
cascaded Initiator Groups. Clicking a host displays details of that host, which is shown on the right of the
screen. When a host is selected, users can modify the host, provision storage, and, using the More
Actions button, set flags or delete the host.

Create Host Wizard


To provision storage to a host, first use the Create Host wizard to create the Initiator Group for the host.
The Create Host wizard is available by clicking the Create button in the Hosts view and selecting Create
Host.

Enter the desired name of the host, and select Fibre Channel or iSCSI. The available initiators are shown
according to the technology chosen. Select the WWNs of the HBAs of your host and click the greater than
(>) button to add them to the list.

In this example, the host has already been zoned to the array, and the WWNs of the host are listed and
can be chosen. If a host is not yet zoned, you can type in the WWN using the + symbol to Add User
Defined Initiator to Host.

You can optionally click the Set Host Flags button to override or set any port flag settings.

Create Host Wizard – Set Host Flags: Consistent LUNs


To set host flags, you can click the Set Host Flags button. In this example, Consistent LUNs are used,
enabled by checking the Consistent LUNs box. Also, override or enable any of the other port flags listed
using the boxes that are associated with the flags. In this example, no overrides are done. To close the
Set Host Flags dialog, click OK.

To complete the Create Host process, add the task to a Job List or choose Run Now.

Host – Detailed View


To view the details of a host, select the host in the host list view. Details display on the right of the screen.
In this example, the host has one Masking View and one Initiator. The Consistent LUNs option is enabled.
Select the Masking View to view details on the Masking View associated with this host. From here, you
can create a Masking View, rename the Masking View, and view path details.

Host – Modify


The Modify button enables you to add or remove initiators from an existing Host. To remove an initiator,
select the initiator from the Initiators in Host listing on the right, and select the less than (<) button. To add
a new initiator, select an available initiator and click the greater than (>) button. To complete the add or
remove, click either Add to Job List or Run Now.

Provision Storage to Host Wizard

• Wizard simplifies provisioning storage to hosts


– Creates Storage Groups with desired
› SL, Workload Type
› Volumes with specified capacity
o Uses existing devices or create devices as needed
– Creates Port Group or uses existing Port Groups
– Creates Masking View

• Launched from
– Hosts listing
– Detailed view of host


The Provision Storage to Host wizard simplifies the process of provisioning storage to a host. The wizard
creates the desired Storage Groups, Port Group, and Masking View. The Storage Groups are created with
the required Service Levels, workload type where applicable, and capacity.

The wizard can create stand-alone Storage Groups or cascaded Storage Groups. The wizard is typically
launched from the context of a host—Initiator Group—either from the hosts listing or the detailed view of a
host.

Storage Provisioning Wizard


In this example, the Storage Provisioning wizard has been launched from the context of an existing host.
The host does not have to be specified. Type a name for the Storage Group being created. Hover the
mouse pointer over any of the Service Level, Volume, or Volume Capacity dropdowns, and two icons are
displayed on the right. The Pencil icon enables you to add multiple volume sizes to the Storage Group,
and optionally set Volume Identifier Names. The Plus icon enables you to add multiple Storage Groups to
this host. Each Storage Group can have a different Service Level.

In this example, a single Storage Group with devices is created, and the Service Level, Number of
Volumes, and Capacity are specified. Click Next to continue with the Provisioning wizard.

Select Port Group


Once all Storage Group details have been completed, choose the Port Group for this request. You can
choose an existing Port Group or create a new one.

To view host-invisible ports—unmasked and unmapped—select Include port not visible to the host.
The wizard shows the Port Group recommendation dialog if the port selections do not match the Dell EMC
best practice recommendation. Once complete, click Next to go to the Summary page.

Optionally Set Host I/O Limits


To limit the amount of bandwidth and IOPS that are consumed by a Storage Group, use Host I/O limits. To
set Host I/O Limits, click the Set Host I/O Limits button. Set the desired values in the Host I/O limits
dialog and click OK to return to the Provisioning wizard.

Run Suitability Check


On the review page, click the Run Suitability Check button to see if the array can meet the Service
Levels for the provisioning request. In order for the Suitability Check to work, the arrays must be
registered for performance data collection. The review screen also shows the names of the Storage
Group, Host, and Port Group. The Masking View name can be edited as needed.

To receive alerts when the performance of the Storage Group changes relative to its SL target, select
Enable SL Compliance Alerts.

Provisioning Job – Success


The job has been successfully run. The provisioning task either finds existing devices or creates new
devices as needed to satisfy the provisioning request. To view the details of the job, click Show Task
Details. In this example, volumes 000E7 and 000E8 were created for the request. Notice that a Masking
View, DemoHostSG_MV, was also created.

New Masking View


To see a listing of all Masking Views, click Masking Views in the Hosts menu.

The new Masking View DemoHostSG_MV is listed. To see information about the view, select the
Masking View. Details are shown on the right of the screen.

Storage Groups Details

Storage Group Details



Storage Group management is done in the Storage Groups section. To see a listing of the Storage
Groups, click Storage Groups under the Storage section. From this view you can create SGs, modify
SGs, provision an existing SG to a host, view details, and set Host I/O limits.

Details of the selected Storage Group are shown on the right. To view additional details, click the View All
Details link.

View All Details – Storage Group


Additional details, including capacity, compliance, and performance information can be seen from the All
Details view.

Create SG – Provisioning Wizard


New Storage Groups can be created by clicking the Create button in the Storage Groups listing page. The
Provisioning wizard that is shown on the screen is launched. This wizard is similar to the Provision Storage
to Host wizard. The only difference is that you can choose the host to which this storage should be
provisioned. When the wizard is launched from the context of a host, the host is selected before starting
the wizard.

Modify Storage Group


To modify a Storage Group, select a Storage Group from the Storage Group listing and click Modify to
launch the Modify Storage Group dialog. For cascaded Storage Groups, the dialog always shows the
parent and child SGs even if the Modify button is clicked from the context of the child SGs.

You can make the desired changes—change SL, add volumes, modify the size of multiple volumes—
PowerMaxOS supports expanding a volume up to 64 TB—or add a new child by clicking the Plus icon to
add an additional SG. In this example, the size of each of the five volumes is increased from 10 GB to 15
GB.

You can run the Suitability Check when modifying Storage Groups. Once the desired changes are made,
add the job to the job list or run now.

Set Host I/O Limits


To set Host I/O Limits, select a Storage Group from the Storage Group listing and click the Set Host I/O
Limits button. For cascaded Storage Groups, you can choose different Host I/O Limits on the parent and
children.

Once the desired changes are made, click OK to apply the changes.

Managing Port Groups


To show the list of Port Groups configured on an array, choose Port Groups in the Hosts section. From
this view, you can create new Port Groups or click a Port Group to modify or delete the Port Group.

Create Port Group


To create a Port Group, click the Create button in the Port Groups view.

In the Create Port Group dialog, type a name for the Port Group and select ports from the available list.

Click Add to Job List or Run Now to complete the request. The new Port Group is listed in the Port
Groups view.

Port Group Details

Port Group Details


To see the details of a specific Port Group, select it in the Port Groups view. Details are displayed on the
right of the screen.

All Port Groups have a link to a ports listing. In this example, the link is the number 2, which is the number
of ports in this Port Group. Clicking the Ports link shows a listing of the ports. If the Port Group is part of
one or more Masking Views, a Masking View link is shown. In this example, the Port Group is part of one
Masking View. Clicking the link displays details about the Masking View.

Port Group – Ports: Add/Remove


Clicking the Ports link in the details of a Port Group displays the ports listing. To add ports to the Port
Group, click the Add Ports button. Highlighting a port in the port listing, as shown in the example, enables
the Remove button, enabling you to remove the port from the Port Group.

Managing Masking Views


In Unisphere for PowerMax, Masking View management is done from the Masking Views page. To show
the list of Masking Views configured on an array, select Masking Views in the Hosts section. From this
list, you can create new Masking Views. To view details, rename, view path details, or delete a Masking
View, click the Masking View. The Provisioning Wizard creates Masking Views as part of the provisioning
process as well. Creating a Masking View from this page requires the manual selection of Host, Storage
Group, and Port Group.

Create Masking View


The Provisioning wizard creates Masking Views as part of the provisioning process. However, you can
choose to manually create a Masking View by clicking the Create Masking View button in the Masking
View listing.

Creating a Masking View requires the manual selection of Host, Port Group, and Storage Group. The
Host, Port Group, and Storage Group must already exist.

In the Create Masking View dialog, type a name for the Masking View and pick a Host, PG, and SG from
the list of available groups. Optionally, click the Set Dynamic LUNs button if you want to change the host
LUN address. The Starting LUN number should be specified. To close the LUN address dialog, click OK.

To complete the creation of the Masking View, click OK. The new Masking View is listed in the Masking
Views page.

Masking View – Details

Masking View Details


To see the details of a specific Masking View, select it in the Masking Views listing. Details are displayed
on the right of the screen. The details frame has links for Host, Port Group, and Storage Group. Clicking
these links shows a listing of those objects.

Selecting a Masking View also enables buttons to Rename the Masking View and View Path Details. Use
the More Actions (three vertical dots) button to delete the Masking View.

Masking View – View Path Details

• One view to see all the components in a Masking View


• View can be used to troubleshoot connectivity issues


The View Path Details of a Masking View button enables you to see all the components that make up the
Masking View. The path details page contains three tree lists for each of the component groups in the
Masking View: Hosts, Ports, and Storage Groups.

The parent group is the default top-level group in each expandable tree view. It contains a list of all
components in the Masking View, including child entries, which are also expandable.

To filter the Masking View, select one or more items (hold the Shift key to multiselect) in the list view. As
each selection is made, the filtered results table is updated to reflect the current combination of filter
criteria.

This view can be useful for troubleshooting. As an example, you could filter the view by choosing only one
of the hosts and one of the ports. This view enables you to see which of the initiators is logged in to the
array.

iSCSI Masking View
(Diagram: iSCSI Masking View components)

• Host iqn.2015-05.com.microsoft:host1 with two NICs:
– 10.127.200.9/24 on VLAN 80
– 10.127.100.9/24 on VLAN 81
• Multipath I/O across two TCP/IP connections, one from each NIC to a network portal:
– 10.127.200.10/24 (VLAN 80) on Director 1E Port 4, iSCSI Target Node iqn.1992-04.com.emc:6000098
– 10.127.100.10/24 (VLAN 81) on Director 2E Port 4, iSCSI Target Node iqn.1992-04.com.emc:6000097

Masking View:
• Initiator Group: iqn.2015-05.com.microsoft:host1
• Port Group: iqn.1992-04.com.emc:6000098, iqn.1992-04.com.emc:6000097
• Storage Group: 052, 053, 054


iSCSI is supported with the 10 GbE 4-port I/O module on arrays running HYPERMAX OS 5977 Q3 2015
SR and above. The iSCSI configuration supports multiple iSCSI targets (IQNs) and IP addresses on the
SE emulation, to support the whole range of possible storage configurations allowed by the iSCSI
architecture. The purpose of this diagram is to explain iSCSI Masking View management.

Once you have configured all the iSCSI components, you can build a Masking View and add the iSCSI
components to it.

Starting at the host, there are two NICs. Each NIC is assigned an IP address, network prefix, and VLAN
by the person who administers the host. The host runs multipath software and typically presents one host
IQN. Shown here is an example of a Microsoft host. The IQN of the host is added to an Initiator Group.
Most host-based MPIO implementations present a single initiator IQN, with the host name embedded in it,
to the iSCSI target nodes.

The NIC and the iSCSI target form a TCP connection. They are part of a session, which is the primary
communication link between the initiator and the target. The IQN of the target is added to a Port Group
(PG). There can be multiple target IQNs in the Port Group.

Next are the devices. The devices get added to a Storage Group (SG). The Storage Group contains
Symmetrix volume IDs, which is no different than creating a Storage Group in a Fibre Channel
environment.

After the Masking View is created, the host must discover its storage. Depending on the operating system,
the procedure to discover a target differs. When using IP, discover the target by its IP address.
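As a hedged example, on a Linux host running the open-iscsi initiator (not the Microsoft host in this
diagram), discovery and login against one of the network portals above might look like this:

# iscsiadm -m discovery -t sendtargets -p 10.127.200.10:3260
# iscsiadm -m node -T iqn.1992-04.com.emc:6000098 -p 10.127.200.10:3260 --login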

Lab: Service Level Based Provisioning with Unisphere

This lab covers:


• SL-based provisioning with Unisphere for
PowerMax


In this lab, the Unisphere for PowerMax Storage Provisioning wizard is used to perform SL-based
provisioning to an open systems host.

Lesson: Service Level Based Provisioning with
SYMCLI
This lesson covers the following topics:

• Provision storage to a host using Cascaded Storage Groups

• Manage Host I/O Limits


This lesson covers SL-based provisioning of PowerMax, VMAX All Flash, and VMAX3 storage using
SYMCLI. Storage provisioning with SYMCLI with an example scenario is illustrated.

Use Case – SL Provisioning with SYMCLI

• Application Server
– Requires storage for two applications
› Each application has different Service Level requirements
– 2 HBAs
› WWNs - 2100001b321ea3d5 & 2101001b323ea3d5
› Already zoned to a PowerMax 8000 array SID 217
› FA1D:05 & FA2D:05

• Provision storage with cascaded Storage Groups

SAN


In this example, an application server is configured with two HBAs that require storage for two different
applications. The Service Level requirements for the two applications are different. The server HBAs have
already been zoned to a PowerMax 8000 array (SID 217). To satisfy the requirement of different Service
Levels, provision storage to this server using cascaded Storage Groups.

High-Level Steps

• Confirm HBAs are zoned to the array


• Create an Initiator Group with the two HBAs
• Create a Port Group with the two array ports
• Create Storage Groups
– Application 1 SG with required devices and SL
– Application 2 SG with required devices and SL
– Parent SG – Application SGs as members
• Create Masking View
– Initiator Group, Port Group, and parent SG
• Confirm that application server has access to devices


Listed are the high-level steps that are involved in provisioning storage in the example.

Confirm Zoning
C:\>symaccess list logins -dirport 1d:5 -sid 217
Symmetrix ID            : 000197600217
Director Identification : FA-1D
Director Port           : 005

User-generated                                              Logged On
Identifier       Type  Node Name         Port Name         FCID   In   Fabric
---------------- ----- ----------------- ----------------- ------ ---- ------
2100001b321ea3d5 Fibre 2100001b321ea3d5  2100001b321ea3d5  20000  Yes  Yes

C:\>symaccess list logins -dirport 2d:5 -sid 217

Symmetrix ID            : 000197600217
Director Identification : FA-2D
Director Port           : 005

User-generated                                              Logged On
Identifier       Type  Node Name         Port Name         FCID   In   Fabric
---------------- ----- ----------------- ----------------- ------ ---- ------
2101001b323ea3d5 Fibre 2101001b323ea3d5  2101001b323ea3d5  20200  Yes  Yes


Zoning of the HBAs to the ports can be confirmed by looking at the switch. In this example, the
symaccess list logins command is used to confirm that the HBAs of the server have been zoned to
the ports of the array. WWN 2100001b321ea3d5 is zoned to 1D:5, and WWN 2101001b323ea3d5 is
zoned to 2D:5.

Build Initiator Group
• File containing the WWNs of the application server
C:\>more server_wwn.txt
wwn:2100001b321ea3d5
wwn:2101001b323ea3d5

• Create Initiator Group with consistent LUN option


C:\>symaccess -sid 217 create -name app_server -type initiator -consistent_lun -file server_wwn.txt
• Confirm creation of Initiator Group
C:\>symaccess -sid 217 list -type initiator -name app_server
Symmetrix ID : 000197600217
Init View
Initiator Group Name Count Count
-------------------------------- ----- -----
app_server 2 0


First, create a file with the WWNs of the initiators. Then create the Initiator Group with the consistent LUN
option, and confirm the creation of the Initiator Group.
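If another HBA is later added to the server, the group can be updated in place. A hedged sketch with a
hypothetical WWN, following the symaccess conventions above:

C:\>symaccess -sid 217 -name app_server -type initiator add -wwn 2102001b324ea3d5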

Initiator Group Details

• Show Initiator Group details


C:\>symaccess -sid 217 -type initiator show app_server

Symmetrix ID : 000197600217

Initiator Group Name : app_server


Last update time : 04:28:52 PM on Tue Nov 27,2018
Group last update time : 04:28:52 PM on Tue Nov 27,2018
Host Initiators
{
WWN : 2100001b321ea3d5
[alias: 2100001b321ea3d5/2100001b321ea3d5]
WWN : 2101001b323ea3d5
[alias: 2101001b323ea3d5/2101001b323ea3d5]
}

---- OUTPUT TRUNCATED -----------------


Use the symaccess show command to confirm that the Initiator Group has the correct WWNs.

Build Port Group
• Create Port Group with the required ports
C:\>symaccess -sid 217 create -name app_server_pg -type port -dirport
1d:5,2d:5
• Examine new Port Group
C:\>symaccess -sid 217 -type port show app_server_pg
Symmetrix ID : 000197600217
Port Group Name : app_server_pg
Last update time : 04:31:38 PM on Tue Nov 27,2018
Director Identification
{
Director
Ident Port WWN Port Name / iSCSI Target Name
------ ---- ---------------------------------------
FA-1D 005 50000973580c081a
FA-2D 005 50000973580c085a
}
Masking View Names
{
None
}

In this example, a Port Group called app_server_pg with ports 1d:5 and 2d:5 is created. The contents of
the Port Group are examined using the symaccess show command.

List Available Devices
• List available devices not in a Storage Group
C:\>symdev list -sid 217 -notinsg -gb

Symmetrix ID: 000197600217

Device Name                   Dir      Device
----------------------------  -------  -------------------------------------
                                                                         Cap
Sym    Physical               SA :P    Config    Attribute    Sts       (GB)
----------------------------  -------  -------------------------------------
------------------------------- Some output truncated ------------
0008B Not Visible ???:??? TDEV N/Grp'd RW 10.0
0008C Not Visible ???:??? TDEV N/Grp'd RW 10.0
0008D Not Visible ???:??? TDEV N/Grp'd RW 10.0
0008E Not Visible ???:??? TDEV N/Grp'd RW 10.0
0008F Not Visible ???:??? TDEV N/Grp'd RW 10.0


Use the symdev list command with the -notinsg option to list devices on the array that are not in
any Storage Group. The output shows a list of such devices.

The question marks in the SA :P column also indicate that these devices are not mapped to any front-end
port. It is safe to assume that these devices are unused.

Devices 0008B:0008E will be used for building the required Storage Groups.
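If no suitable free devices existed, new thin devices could be created first. A hedged sketch using the
classic symconfigure syntax; the count and size below are illustrative only:

C:\>symconfigure -sid 217 -cmd "create dev count=4, size=10 GB, emulation=FBA, config=TDEV;" commit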

Build Storage Groups

• Build two child Storage Groups and populate them with devices
– Application 1 SG with an SL of Platinum

C:\>symsg -sid 217 create app_server_app1 -sl platinum

C:\>symsg -sid 217 -sg app_server_app1 addall -devs 0008B:0008C


– Application 2 SG with an SL of Gold

C:\>symsg -sid 217 create app_server_app2 -sl gold

C:\>symsg -sid 217 -sg app_server_app2 addall -devs 0008D:0008E

• Build parent Storage Group and populate with children


C:\>symsg -sid 217 create app_server_parent

C:\>symsg -sid 217 -sg app_server_parent add sg app_server_app1,app_server_app2


The Storage Groups are built as shown here. Each child Storage Group is given the appropriate SL and
is populated with two devices. The parent Storage Group is populated with the two child Storage
Groups.

Storage Group Listing

C:\>symsg list -detail -sid 217

S T O R A G E G R O U P S

Symmetrix ID: 000197600217

                     Flags        Number  Number  Child
Storage Group Name   EFM  SLC    Devices  GKs     SGs    Service Level Name  Workload  SRP Name
-------------------  ---  ---    -------  ------  -----  ------------------  --------  --------
app_server_app1      FX.  C.X          2       0      0  Platinum            <none>    <none>
app_server_app2      FX.  C.X          2       0      0  Gold                <none>    <none>
app_server_parent    F..  P..          4       0      2  <none>              <none>    <none>

------------------------------ Output Truncated ------------------------------


The symsg list –detail command shows the Storage Groups that were created. The child Storage
Groups have the correct SL set. Also, the child SGs are shown as FAST managed while the parent is not
shown as FAST managed, and compression is enabled on both child SGs.

Legend:

Flags:
Device (E)mulation   A = AS400, F = FBA, 8 = CKD3380, 9 = CKD3390, M = Mixed, . = N/A
(F)ast               X = FAST Managed, . = N/A
(M)asking View       X = Contained in Mask View(s), . = N/A
Cascade (S)tatus     P = Parent SG, C = Child SG, . = N/A
Host IO (L)imit      D = Defined, S = Shared, B = Both, . = N/A
(C)ompression        X = Compression Enabled, . = N/A

Build Masking View

• Create Masking View

C:\>symaccess -sid 217 create view -name app_server_mv -ig app_server -pg app_server_pg -sg app_server_parent

• Confirm creation of Masking View

C:\>symaccess -sid 217 list view -name app_server_mv

Symmetrix ID : 000197600217

Masking View Name    Initiator Group      Port Group           Storage Group
-------------------  -------------------  -------------------  -------------------
app_server_mv        app_server           app_server_pg        app_server_parent


Finally, create the Masking View with the Initiator Group, Port Group, and the parent Storage Groups that
were created. To discover the newly provisioned devices, go to the application host and perform a SCSI
bus scan.

Masking View Details (1 of 3)
C:\>symaccess show view app_server_mv -sid 217

Symmetrix ID : 000197600217

Masking View Name : app_server_mv


Last update time : 04:44:55 PM on Tue Nov 27,2018
View last update time : 04:44:55 PM on Tue Nov 27,2018

Initiator Group Name : app_server

Host Initiators
{
WWN : 2100001b321ea3d5
[alias: 2100001b321ea3d5/2100001b321ea3d5]
WWN : 2101001b323ea3d5
[alias: 2101001b323ea3d5/2101001b323ea3d5]
}
---------- Continued on next slide ---------------


The symaccess show view command shows the details of the Masking View. Shown here are the host
initiators.

Masking View Details (2 of 3)

---------- Continued from previous slide ---------------

Port Group Name : app_server_pg

Director Identification
{
Director
Ident Port WWN Port Name / iSCSI Target Name
------ ---- -----------------------------------
FA-1D 005 50000973580c081a
FA-2D 005 50000973580c085a
}
---------- Continued on next slide ---------------


The symaccess show view command shows the details of the Masking View. Shown here are the port
details.

Masking View Details (3 of 3)

---------- Continued from previous slide ---------------


Storage Group Name : app_server_parent
Number of Storage Groups : 2
Storage Group Names : app_server_app1 (IsChild)
app_server_app2 (IsChild)
Sym Host
Dev Dir:Port Physical Device Name Lun Attr Cap(MB)
------ -------- ----------------------- ---- ---- -------
0008B 01D:005 Not Visible 1 10241
02D:005 Not Visible 1
0008C 01D:005 Not Visible 2 10241
02D:005 Not Visible 2
0008D 01D:005 Not Visible 3 10241
02D:005 Not Visible 3
0008E 01D:005 Not Visible 4 10241
02D:005 Not Visible 4
-------
Total Capacity 40964


The symaccess show view command shows the details of the Masking View. Shown here are the
Storage Group details.

Devices Visible on Application Server

SUN1-88-21/> syminq

Device                                   Product                     Device
---------------------------------------- --------------------------- ---------------------
Name                  Type               Vendor     ID      Rev  Ser Num     Cap (KB)
---------------------------------------- --------------------------- ---------------------
--------- SOME OUTPUT DELETED --------------------------
/dev/rdsk/emcpower0c EMC SYMMETRIX 5978 170008B000 10487040
/dev/rdsk/emcpower1c EMC SYMMETRIX 5978 170008C000 10487040
/dev/rdsk/emcpower2c EMC SYMMETRIX 5978 170008D000 10487040
/dev/rdsk/emcpower3c EMC SYMMETRIX 5978 170008E000 10487040

SUN1-88-21/> powermt display ports

Storage class = Symmetrix


==============================================================================
----------- Storage System --------------- -- I/O Paths -- --- Stats ---
ID Interface Wt_Q Total Dead Q-IOs Errors
==============================================================================
000197600217 FA 1d:5 256 4 0 0 0
000197600217 FA 2d:5 256 4 0 0 0


A SCSI rescan was performed on the application server. The syminq output shows the four devices that
were provisioned to this server.

The Rev 5978 is the PowerMaxOS version. In the Ser Num column, the leading 17 represents the last two
digits of the array SID, and the next digits, 008B through 008E, are the logical volume numbers of the
provisioned devices.

Set Host I/O Limits on Parent SG
C:\>symsg -sid 217 -sg app_server_parent set -bw_max 200 -dynamic always

C:\>symsg -sid 217 show app_server_parent


Name: app_server_parent

Symmetrix ID             : 000197600217
Last updated at          : Tue Nov 27 17:42:58 2018
Masking Views            : Yes
FAST Managed             : No
Service Level Name       : <none>
Workload                 : <none>
SRP Name                 : <none>
VP Saved (%)             : N/A
Compression Enabled      : No
Compression Ratio        : N/A
Host I/O Limit           : Defined
Host I/O Limit MB/Sec    : 200
Host I/O Limit IO/Sec    : NoLimit
Dynamic Distribution     : Always
Number of Storage Groups : 2
Storage Group Names      : app_server_app1 (IsChild)
                           app_server_app2 (IsChild)
------------- Output Truncated ----------------------------------

In this example, Host I/O Limits are set on a parent SG. A bandwidth limit of 200 MB/sec is set, and
dynamic distribution is set to Always.

SG Listing – Check Host I/O Setting

C:\>symsg list -sid 217

S T O R A G E G R O U P S

Symmetrix ID: 000197600217

                     Flags        Number  Number  Child
Storage Group Name   EFM  SLC    Devices  GKs     SGs
-------------------  ---  ---    -------  ------  -----
app_server_app1      FXX  CSX          2       0      0
app_server_app2      FXX  CSX          2       0      0
app_server_parent    F.X  PD.          4       0      2
---------- Output Truncated -----------------


The symsg list command shows the Storage Groups. Host I/O Limits are defined on the parent, as
indicated by the D in the L column. The S in the L column of the child SGs indicates that the children are
currently sharing the Host I/O Limits of the parent. There is no explicit setting for the children.

Legend:

Flags:
Device (E)mulation   A = AS400, F = FBA, 8 = CKD3380, 9 = CKD3390, M = Mixed, . = N/A
(F)ast               X = FAST Managed, . = N/A
(M)asking View       X = Contained in Mask View(s), . = N/A
Cascade (S)tatus     P = Parent SG, C = Child SG, . = N/A
Host IO (L)imit      D = Defined, S = Shared, B = Both, . = N/A
(C)ompression        X = Compression Enabled, . = N/A

Set Host I/O Limits on Child SG
C:\>symsg -sid 217 -sg app_server_app2 set -bw_max 100

C:\>symsg -sid 217 show app_server_app2


Name: app_server_app2
Symmetrix ID : 000197600217
Last updated at : Tue Nov 27 17:46:14 2018
Masking Views : Yes
FAST Managed : Yes
Service Level Name : Gold
Workload : <none>
SRP Name : <none>
VP Saved (%) : N/A
Compression Enabled : Yes
Compression Ratio : N/A
Host I/O Limit : Defined (Shared)
Host I/O Limit MB/Sec : 100 (200)
Host I/O Limit IO/Sec : NoLimit (NoLimit)
Dynamic Distribution : Always
Number of Storage Group : 1
Storage Group Names : app_server_parent (IsParent)
------------- Output Truncated ----------------------------------


In this example, a Host I/O Limit is explicitly defined on a child SG; an explicit setting also exists on the
parent. The bandwidth limit set here is lower than the limit of the parent. The symsg show output shows
that the bandwidth limit for this SG is 100 MB/sec, while the limit on the parent is 200 MB/sec.

SG Listing – Check Host I/O Setting Again

C:\>symsg list -sid 217

S T O R A G E G R O U P S

Symmetrix ID: 000197600217

                     Flags        Number  Number  Child
Storage Group Name   EFM  SLC    Devices  GKs     SGs
-------------------  ---  ---    -------  ------  -----
app_server_app1      FXX  CSX          2       0      0
app_server_app2      FXX  CBX          2       0      0
app_server_parent    F.X  PD.          4       0      2
---------- Output Truncated -----------------


The symsg list command shows the Storage Groups. The app2 Storage Group shows a B in the L
column indicating that Host I/O Limits are defined both on the parent and the child.

Legend:

Flags:
Device (E)mulation   A = AS400, F = FBA, 8 = CKD3380, 9 = CKD3390, M = Mixed, . = N/A
(F)ast               X = FAST Managed, . = N/A
(M)asking View       X = Contained in Mask View(s), . = N/A
Cascade (S)tatus     P = Parent SG, C = Child SG, . = N/A
Host IO (L)imit      D = Defined, S = Shared, B = Both, . = N/A
(C)ompression        X = Compression Enabled, . = N/A

Host I/O Limit – Demand Report by PG

C:\>symsg -sid 217 list -demand -by_pg -pg app_server_pg

Symmetrix ID: 000197600217

Port Group             IO Limit           Bandwidth Limit
---------------------- -----------------  --------------------------------------
                        Maximum   Number  Port Grp  Maximum          Number
                 Flags   Demand  Nolimit     Speed   Demand  NoLimit  Excess
Name             HD    (IO/Sec)     SGs   (MB/Sec)  (MB/Sec)  (%) SGs (MB/Sec)
---------------- ----- --------  -------  --------  -------- --- --- --------
app_server_pg    YY           0        1      2000       200  10   0    +1800

Legend:
Flags:
(H)ost IO Limit Exists Y = Yes, N = No, M = Mixed, . = N/A
(D)ynamic Distribution Y = Yes, N = No, . = N/A


You can run the symsg list -demand -by_pg command to view quota information sorted by Port
Group. The -pg option limits the output to the specified Port Group. The -v option is supported for further
detail.

The columns display all the available capacity and IOPS quotas, and bandwidth quotas that are enforced
within Port Groups.

Host I/O Limit – Demand Report by Port
C:\>symsg -sid 217 list -demand -by_port
Symmetrix ID: 000197600217

Director       IO Limit           Bandwidth Limit
-------------- -----------------  --------------------------------------
                Maximum   Number  Port      Maximum          Number
         Flags   Demand  Nolimit  Speed      Demand  NoLimit  Excess
DIR:PORT HD    (IO/Sec)     SGs   (MB/Sec)  (MB/Sec) (%) SGs  (MB/Sec)
-------- ----- --------  -------  --------  -------- --- ---- --------
01D:004 NN 0 1 1000 0 0 1 +1000
01D:005 MY 0 3 1000 100 10 2 +900
01D:032 NN 0 2 500 0 0 2 +500
01D:033 NN 0 0 - 0 - 0 -
02D:004 NN 0 3 1000 0 0 3 +1000
02D:005 MY 0 3 1000 100 10 2 +900
02D:032 NN 0 2 500 0 0 2 +500
02D:033 NN 0 0 - 0 - 0 -
Legend:
Flags:
(H)ost IO Limit Exists Y = Yes, N = No, M = Mixed, . = N/A
(D)ynamic Distribution Y = Yes, N = No, . = N/A


You can run the symsg list -demand -by_port command to view quota information sorted by front-
end director ports. The -v option is supported for further detail.

The columns display all the available capacity and IOPS quotas, and bandwidth quotas that are enforced
by front-end directors.

Non-Disruptive Device Moves Between SGs

• Moving devices from one SG to another SG does not disrupt host visibility
for the devices
– Certain conditions must be met*

• symsg syntax
symsg -sg <SgName> -sid <SymmID>

move dev <SymDevName> <DestSgName> [-force]

[-devs <<SymDevStart>:<SymDevEnd> | <SymDevName>


[,<<SymDevStart>:<SymDevEnd> | <SymDevName>>...]> |
-file <DeviceFileName> [-tgt] ]
moveall <DestSgName> [-force]

*See notes

PowerMax, VMAX All Flash, and VMAX3 arrays support moving devices from one SG to another SG
without disrupting host visibility for the devices. Moving a device to another SG does not disrupt the host
visibility for the device, if any one of the conditions is met:

• Moves between child SGs of a parent SG, when the view is on the parent SG.

• Moves between SGs when a view is on each SG, and both the Initiator Group (IG) and the Port
Group (PG) are common to the views.

• Moves between SGs when a view is on each SG, and they have a common IG. They have different
PGs but the same set of ports, or the target PG is a superset of the source PG.

• Moves when source SG is not in a Masking View.

If none of the conditions are met, the operation is rejected, but the move can be forced by specifying the
-force flag. Forcing a move may affect the host visibility of the device.

Device moves between FAST-managed SGs, or between a FAST-managed SG and a non-FAST-managed
SG, are permitted.

The symsg syntax is shown here.
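For instance, following the syntax above, one of the devices provisioned earlier could be moved between
the two child SGs without disruption, because both children sit under the parent SG in the Masking View
(a sketch, not part of the original lab):

C:\>symsg -sid 217 -sg app_server_app1 move dev 0008B app_server_app2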

Non-Disruptive Cascaded SG Conversion
Stand-alone SG to Cascaded SG Cascaded SG to Stand-alone SG

• Parent SG retains the name of • Allowed only if a Cascaded SG has a


original stand-alone SG single child SG
• New stand-alone SG retains the name
• If original stand-alone SG is part of of the original parent SG
any Masking Views, after conversion • If original parent SG is part of any
all views will be moved to the new Masking Views, after conversion all
parent SG views will be moved to the new stand-
alone SG
• Existing Host I/O limits can be • Existing Host I/O limits on parent SG
migrated to the new parent or child are migrated to the new stand-alone
SG
• If original child SG was FAST
• If original stand-alone SG was FAST managed, the stand-alone SG will be
managed, only the child SG will be FAST managed after conversion
FAST managed after conversion


PowerMax, VMAX All Flash, and VMAX3 arrays support non-disruptively performing the conversion of a
stand-alone SG to Cascaded SG or a Cascaded SG to a stand-alone SG. This conversion enables FAST-
managed SGs containing devices with a single SL to be expanded to include devices in a second SL,
without disrupting the availability of those devices from host applications.

To convert a stand-alone SG to a cascaded configuration, the command supplies the name of the stand-
alone SG being converted and the name of the new child SG. Upon successful completion, the parent SG
retains the name of the stand-alone group and the child SG is given the new child name. If the SG starts in
one or more Masking Views, at the end of the operation all of the views are moved to the parent Storage
Group. If the SG starts with Host I/O Limits configured, these limits can be migrated to the parent SG or to
the child SG. If the SG starts as FAST-managed, at the end of the conversion only the child SG is FAST
managed.

To convert a cascaded SG to a stand-alone configuration, the command supplies the name of the parent
SG being converted to a stand-alone SG. This conversion is allowed only if the cascaded SG has a single
child SG. Upon successful completion, the stand-alone SG retains the name of the parent group. If the
parent Storage Group starts in one or more Masking Views, at the end of the operation all of the views are
moved to the stand-alone SG. If the parent SG starts with Host I/O Limits configured, these limits are
migrated to the stand-alone SG. If the child SG starts as FAST-managed, the stand-alone SG becomes
FAST-managed.

Cascaded SG Conversion – symsg Syntax

• Convert stand-alone SG to cascaded SG


symsg -sid <SymmID>
convert -cascaded <SgName> <ChildSgName>
[-host_IO <on_parent | on_child>]
-host_io must be specified if stand-alone SG has Host I/O defined

• Convert cascaded SG to stand-alone SG


symsg -sid <SymmID>
convert -standalone <SgName>
[-host_IO <keep_parent | keep_child>]
-host_io must be specified if Host I/O has been defined on both parent and child
SGs

• Unisphere for PowerMax can also be used


The symsg convert -cascaded command enables the non-disruptive conversion of a stand-alone
SG to a cascaded SG consisting of a parent SG and a single child SG. If the stand-alone SG has a Host
I/O Limit, the user must specify whether, after the conversion, the limit is set on the parent or the child SG.

The symsg convert -standalone command enables the non-disruptive conversion of a cascaded
SG consisting of a parent SG and a single child SG to a stand-alone SG. If either the parent SG or the
child SG has a Host I/O Limit defined, it is set on the stand-alone SG. But if both the parent and child
SGs have a Host I/O Limit, the user must supply the -host_IO option.
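A hedged example, converting a hypothetical stand-alone SG named demo_sg that carries a Host I/O
Limit into a cascaded pair, keeping the limit on the new child:

C:\>symsg -sid 217 convert -cascaded demo_sg demo_sg_child -host_IO on_child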

Lab: Service Level Based Provisioning with SYMCLI

This lab covers:


• SL-based provisioning with SYMCLI


In this lab, SYMCLI is used to perform SL-based provisioning to an open systems host.

Lab: Cascaded Storage Groups – Moving Devices Non-
disruptively and Changing SL
This lab covers:
• Cascaded Storage Groups
• Non-disruptive movement of devices between
Storage Groups
• Modifying SL


This lab covers Cascaded Storage Groups, moving devices non-disruptively between Storage Groups,
and changing the SL on Storage Groups.

Lab: Managing Host I/O Limits

This lab covers:


• Host I/O Limits


This lab covers the management of Host I/O Limits.

Module Summary

Key points covered in this module:

• Auto-provisioning Groups

• Host I/O Limits

• Host considerations for Storage Allocation

• SL-based provisioning with Unisphere for PowerMax

• SL-based provisioning with SYMCLI


This module covered storage allocation of PowerMax, VMAX All Flash, and VMAX3 storage to hosts using
auto-provisioning groups. An overview of auto-provisioning groups, Host I/O Limits, and host
considerations while allocating storage was presented. SL-based storage provisioning with Unisphere for
PowerMax and SYMCLI was covered in detail.

Module: Management in a Virtualized Environment

Upon completion of this module, you should be able to:

• Manage Virtual Servers with Unisphere for PowerMax

• Describe the Dell EMC Virtual Storage Integrator (VSI) for VMware vSphere Client features for
PowerMax and VMAX All Flash arrays


This module focuses on management of PowerMax, VMAX All Flash, and VMAX3 storage in a virtualized
environment using Unisphere for PowerMax. Also covered is the Dell EMC Virtual Storage Integrator (VSI)
for VMware vSphere Client features.

Lesson: Virtual Server Management – Unisphere for
PowerMax
This lesson covers the following topics:

• Virtual Server Management with Unisphere for PowerMax


This lesson covers virtual server management with Unisphere for PowerMax.

Virtual Servers – Management

• HOME > VMWARE > vCenters and ESXi


Using Unisphere for PowerMax, you can discover vCenter and ESXi hosts. Once the Virtual Server is
discovered, you can view its details. To see a listing of all the discovered virtual servers, choose
VMWARE > vCenters and ESXi from the HOME screen. This page is also used to register new servers.

Register vCenter and ESXi


To register a new server, click the Register vCenter/ESXi Server button. Enter the Server Name or IP
address, Username, and Password. Choose Run Now from the ADD TO JOB LIST dropdown.

ESXi Host – Details


To see the details of a specific ESXi host, select it in the listing. Details are shown on the right. For more
details, click VIEW ALL DETAILS.

ESXi Host – View All Details


The VIEW ALL DETAILS link displays more details on the ESXi host. The Details tab includes properties
of the host, such as Memory and Virtual Machines, and Array Related Storage details including Masking
Views, Storage Groups, and Capacity information. Tabs for Masking Views, Virtual Machines, and
Performance are also provided.

ESXi Host – Masking Views


From the Masking Views tab, clicking the View Path Details button brings you to the HOSTS > Masking
Views page on the associated array. This page displays the Masking View Path Details of the ESXi host,
including Hosts, Ports, Storage, and Volumes.

ESXi Host – Details: Virtual Machines


Double-click a Virtual Machine (VM) in the listing in the Virtual Machines tab to display details about a VM.
Details are shown on the right of the screen, and include a link to Virtual Disks associated with this VM.
Double-clicking the Virtual Disk in the listing shows advanced details—not shown—such as Physical Disk
information, about the Virtual Disk.

ESXi Server – Performance


The Performance tab displays performance information for the ESXi host, such as Storage Group
performance details, and Front-end Director and Port details.

Lesson: Dell EMC VSI for VMware vSphere Web Client
This lesson covers the following topics:

• Dell EMC VSI 8.0 for VMware vSphere Client features for PowerMax and VMAX All Flash


This lesson covers the Dell EMC VSI 8.0 for VMware vSphere Client features for PowerMax and VMAX All
Flash arrays.

VMware vSphere Web Client

(Slide graphic: the Flash-based vSphere Web Client and the HTML5-based vSphere Client)


As of VMware vSphere 6.5, changes to vCenter Server management include an HTML5-based vSphere
Web Client, which is known as vSphere Client.

Dell EMC VSI 8.0 for VMware vSphere Client

• Enables VMware administrators to provision and manage Dell EMC storage systems
– PowerMax and VMAX All Flash storage arrays running PowerMaxOS
– XtremIO storage arrays
– Unity/UnityVSA
• Can run with VSI 7.x in the same environment to support HYPERMAX OS arrays
• Documentation
– VSI for VMware vSphere Web Client Product Guide
– VSI for VMware vSphere Web Client Release Notes
– Dell EMC Simple Support Matrix


Dell EMC Virtual Storage Integrator (VSI) version 8.0 for VMware vSphere Client is a plug-in for VMware
vCenter. It enables VMware administrators to provision and manage the Dell EMC storage systems that
are listed here for VMware ESXi hosts.

Tasks that administrators can perform with VSI include storage provisioning, storage mapping, managing
data protection systems, and viewing information such as capacity utilization.

VSI consists of a GUI and the Dell EMC Solutions Integration Service (SIS). SIS is the programming
interface that provides communications to the storage and data protection systems. The administrator
uses VMware vCenter Client to provision and manage storage.

VSI version 8.0 supports the HTML5 vSphere Client and PowerMaxOS only. Users may run VSI 8.0 and 7.x in the same VMware environment to support HYPERMAX OS arrays.

Refer to the listed documentation, found on Dell EMC's support website at https://www.dell.com/support, for detailed information about the installation and configuration process.

Dell EMC VSI 8.0 Features

• Storage system and Storage Group administration


• Provision VMFS 6 datastores
• View VMFS 6 datastore PowerMax/VMAX All Flash storage properties
• Dell EMC VSI 8.0 requirements for PowerMax and VMAX All Flash
– PowerMaxOS 5978.144.144 or later
– Unisphere for PowerMax 9.x
– Masking views for the ESX/ESXi hosts must exist on the array
– Array is connected to the vCenter host (FC or iSCSI)
– Array must be registered in VSI


The Dell EMC VSI plug-in enables storage system and Storage Group administration for PowerMax and
VMAX All Flash arrays. You can provision datastores that are built on array storage to ESXi hosts. The
VSI plug-in automatically provisions storage to the ESXi host and creates a datastore. VSI shows the
properties of the datastores.

To provision and manage PowerMax and VMAX All Flash, VSI requires a supported version of
PowerMaxOS and Unisphere for PowerMax 9.x. The ESXi hosts must have a masking view on the array,
and the array must be registered in Dell EMC VSI.
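
For reference, a masking view for an ESXi host can be created with the SYMCLI symaccess command; the group and view names here are hypothetical placeholders:

C:\>symaccess -sid 217 create view -name esxi_MV -sg esxi_SG -pg esxi_PG -ig esxi_IG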

VSI – High-Level Deployment Steps

• Download Dell EMC VSI OVA from Dell EMC Support
• Deploy Dell EMC VSI OVA
– Use VMware vSphere Client
• Register VSI plug-in with VMware vCenter
• Register arrays


The high-level steps to deploy Dell EMC VSI for VMware vSphere Client are shown. Refer to the VSI for
VMware vSphere Web Client Product Guide and VSI for VMware vSphere Web Client Release Notes
found on https://www.dell.com/support for detailed steps.

Register Arrays

Add array


To register an array, log in to the vSphere Client. From the Dashboard, select vCenters. Choose Storage
Systems and click the + icon to add an array.

Register Arrays – Type and Connection Settings


In the Register Storage System dialog, choose the type of array and enter the connection settings. For
PowerMax and VMAX All Flash systems, enter the Unisphere Host Name or IP address and port, and the
username and password. Click NEXT to continue to the Storage Systems to Register dialog. Choose the
array being registered, and click NEXT.

Register Arrays – Add Storage Group

Add Storage Group


In the Storage Groups dialog, click the + icon to add a new Storage Group. Provide a name for the SG,
choose the storage system, enter the capacity for the SG, and click ADD.

Register Arrays – Storage Access


Select the Storage Group and click NEXT to provide storage access. Select the users or groups to allow
access to the Storage Groups and click NEXT.

Register Arrays – Complete Registration


To complete the registration, click FINISH. The registered array is shown with a status of Connected.

Register Arrays – Retrieve Arrays

Array Details

Storage Groups


To view details of the registered array, click the down arrow next to the array. To display the Storage
Group that was created, click the Storage Groups tab.

VSI – Provision Datastore


Dell EMC VSI can be used to provision a new datastore to an ESXi host with PowerMax or VMAX All
Flash storage. In the vSphere Client, go to the Hosts and Clusters view. Right-click a host or cluster, and
then select Dell EMC VSI Actions, New Datastore.

Provision Datastore – Steps 1 and 2


The Create Datastore on Dell EMC Storage dialog has multiple steps. Shown here are steps 1 and 2. In
step 1, choose the Type of datastore. Dell EMC VSI 8.0 defaults to a VMFS 6 datastore. To continue to
step 2, Datastore Settings, click NEXT. Provide a name, and verify the location for the datastore. Click
NEXT to continue.

Provision Datastore – Steps 3 and 4


Select the storage system in the Storage System Selection dialog, and click NEXT to continue to Storage
Settings. Enter a Capacity for the datastore. In this example, the datastore capacity is 20 GB. Select the
Storage Group, in this example, VSI_DEMO, and click NEXT.

Provision Datastore – Steps 5 and 6


In step 5, select the Initiator Group, and click NEXT. Review and verify all entries in the Ready to
Complete dialog, and click FINISH. Once the array has completed its task, Dell EMC VSI rescans the
ESXi host and then creates the datastore on the newly presented devices. The newly created datastore is
now available to the ESXi host.

Datastore Details

Datastores Menu


In the vSphere Client, select your datastore from the Datastores menu. Details of the datastore are
displayed.

EMC VSI – Dashboard


The Storage Group is now displayed on the Dell EMC VSI Dashboard.

Module Summary

Key points covered in this module:

• Management of Virtual Servers with Unisphere for PowerMax

• Dell EMC VSI 8.0 for VMware vSphere Client features for PowerMax and VMAX All Flash arrays


This module covered the management of storage in a virtualized environment, including virtual server management with Unisphere for PowerMax and the Dell EMC VSI 8.0 for VMware vSphere Client features for PowerMax and VMAX All Flash arrays.

Module: Monitoring and Workload Planning with Unisphere for PowerMax

Upon completion of this module, you should be able to:


• Monitor Storage Resource Pool (SRP) Reports
• Monitor Storage Group Compliance
• Monitor Data Reduction
• Perform Workload Planning
– SRP Headroom
– Suitability Check


This module focuses on monitoring and workload planning with Unisphere for PowerMax (Unisphere).
Unisphere is used to monitor Storage Resource Pool reports, compliance, and data reduction. The SRP
Headroom and Suitability Check workload planning features are also covered.

Lesson: Monitor SRP
This lesson covers the following topics:

• SRP details

• SRP reports
– Storage Group Demand Report
– Service Level Demand Report
– Compressibility Report

• Utilization Alerts


This lesson covers monitoring SRPs with Unisphere for PowerMax. Unisphere is used to view SRP
reports and utilization alerts.

Storage – Storage Resource Pools


From the Storage section, click Storage Resource Pools to display the configured SRPs. In this example,
there is one SRP configured, named SRP_1. The used and allocated capacity is shown as a percentage
of the overall capacity, and the total usable and subscribed capacity is shown in terabytes (TB).
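
The same listing is available in SYMCLI with the symcfg list command:

C:\>symcfg list -srp -sid 217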

View SRP Details


To view the details of an SRP, click the checkbox next to it. Details, such as Emulation, Overall Efficiency,
and capacity information, are shown on the right. Clicking the checkbox also enables the Modify and Add
eDisks buttons to allow modifications to the SRP.

SRP details can be shown in SYMCLI using the symcfg show command:

C:\Users\Administrator>symcfg show -srp SRP_1 -sid 217

Symmetrix ID : 000197600217

Name : SRP_1
Description :
Default SRP : FBA
Effective Used Capacity (%) : 11
Usable Capacity (GB) : 12518.0
Used Capacity (GB) : 1376.7
Free Capacity (GB) : 11141.3
User Subscribed Capacity (GB) : 1264.5
Reserved Capacity (%) : 10
Compression State : Enabled
Compression Ratio : 1.5:1
Usable by RDFA DSE : Yes

Disk Groups (1):


------------------ Output Truncated ------------------------------------

Reports


STORAGE GROUP DEMAND, SERVICE LEVEL DEMAND, and COMPRESSIBILITY reports for an SRP
are available from the Dashboard. From the Dashboard, click the CAPACITY selection, and select an SRP
from the System dropdown to enable the action buttons that retrieve the reports.

Storage Group Demand Report


The Storage Group Demand report shows the subscribed and used capacity in GB. It also shows the allocated %
from the perspective of the subscribed capacity for the SGs. GB used and compression ratio are also shown, along
with Snapshot information for the SG.

This report can be generated using the SYMCLI symcfg list command:

C:\>symcfg list -srp -demand -type sg -sid 217

STORAGE RESOURCE POOLS

Symmetrix ID : 000197600217

Name : SRP_1
Usable Capacity (GB) : 12518.0
SRDF DSE Allocated (GB) : 0.0
-------------------------------------------------------------------------------
                                                                       Snapshot
                                     Subscribed      Allocated        Allocated
SG Name                                    (GB)       (GB)  (%)            (GB)
--------------------------------   ------------   --------  ---   ------------
esxi-94-161-GK_SG                           0.1        0.0    0            0.0
esxi-94-163_GK_SG                           0.1        0.0    0            0.0
esxi-94-161_Data                          240.0       60.0   25           60.0
esxi-94-163_Data                          240.0        0.0    0            0.0
NDM_SRC_253_TGT_217_SG1                    22.5        0.0    0            0.0
NDM_SRC_253_TGT_217_SG2                    22.5        0.0    0            0.0
NDM_SRC_253_TGT_217_SG3                    22.5        0.1    0            0.0
NDM_SRC_253_TGT_217_SG4                    22.5       18.3   81            0.0
nestedesxi55_prod1                         20.0        1.8    8            0.0
nestedesxi55_prod2                         20.0        0.0    0            0.0
DemoGroup                                  75.0        0.0    0            0.0
esxi-88-68_sg                              40.0        4.7   11            0.0
esxi-88-67_sg                              40.0        5.2   13            0.0

------------------ Output Truncated ------------------------------------

Service Level Demand Report


The Service Level Demand report shows the allocated and subscribed capacity in GB. It also shows the
allocated and subscribed % as a percentage of the overall SRP capacity.

This report can be generated using the SYMCLI symcfg list command:

C:\>symcfg list -srp -demand -type sl -detail -sid 217

STORAGE RESOURCE POOLS

Symmetrix ID : 000197600217

Name : SRP_1
Usable Capacity (GB) : 12518.0
SRDF DSE Allocated (GB) : 0.0
Snapshots Allocated (GB): 60.0

---------------------------------------------------------------
Service Level                     Subscribed      Allocated
Name                     Workload       (GB)     (GB)    (%)
------------------------ -------- ---------- -------- -----
<none>                   N/A             0.1      0.0      0
Diamond                  <none>       1174.4     78.5      6
Optimized                N/A            89.9     18.4     20
                                  ---------- -------- -----
Total                                 1264.5     96.8      7

Compressibility Report


The Compressibility report shows the maximum data reduction for all SGs in the SRP when compression
is enabled in the system. The display includes the Storage Group name, number of volumes in the group,
the allocated and used capacity in GB, and the Target Ratio. The Target Ratio is the expected
compression based on the last 24 hours of samples.

This report can be generated using the SYMCLI symcfg list command:

C:\>symcfg list -sid 217 -sg_compression

STORAGE GROUPS

Symmetrix ID: 000197600217

Name : SRP_1

                               Number  Allocated      Used  Estimated
Storage Group Name            Devices       (GB)      (GB)      Ratio
----------------------------------------------------------------------
esxi-94-161-GK_SG                  24        0.0       0.0        N/A
esxi-94-163_GK_SG                  24        0.0       0.0        N/A
esxi-94-161_Data                   24       60.0      60.0     16.0:1
esxi-94-163_Data                   24        0.0       0.0     16.0:1
NDM_SRC_253_TGT_217_*               1        0.0       0.0        N/A
NDM_SRC_253_TGT_217_*               1        0.0       0.0        N/A
<not_in_sg>                        25        0.0       0.0        N/A

Utilization Alerts
• Utilization alerts are enabled by default


To display Pool Threshold Alerts, click the Settings icon from the top banner area. Expand the Alerts
section, and choose Symmetrix Threshold and Alerts. Alert Thresholds can be set on Storage Resource
Pools (SRPs), System Meta Data, Local Replication Utilization, and Backend and Frontend Meta Data
Usage. These utilization alerts are enabled by default with the default threshold policies shown. The
default threshold policies cannot be modified or deleted.

To set up customized thresholds, click the Create button. In the Create Symmetrix Threshold Alert dialog,
select the category from the dropdown menu. To create a threshold alert for an SRP, add it to the
Instances to Enable field, and set the threshold levels for Warning, Critical, and Fatal. For all other alerts,
choose the category and set the levels for Warning, Critical, and Fatal. To enable the threshold alert, click
the OK button.

Lesson: Monitor Compliance
This lesson covers the following topics:

• Service Levels

• Storage Group compliance

• Storage Group performance


This lesson covers the monitoring of Storage Group compliance and performance.

SL – Expected Average Response Times


The available Service Levels and the expected average response time for each SL are displayed as shown.
Clicking the Service Levels link in the Storage section brings up this view. For compliance, the response
time of a Storage Group must lie within the compliance range of its SL.
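
Assuming Solutions Enabler access to the array, the available Service Levels can also be listed with SYMCLI:

C:\>symcfg list -slo -detail -sid 217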

Renaming Service Levels


To rename a Service Level, hover the mouse pointer over the Service Level and click the pencil icon.
Type the new name over the existing name and click the checkmark.

Dashboard – SG Compliance


Selecting SG Compliance from the Dashboard gives a summary of Storage Group compliance. The
Compliance panel displays the number of SGs that are Critical, Marginal, Stable, and SGs that have No
Status. To view the detailed Compliance Report for all SGs, click the link on the bottom of the panel.

Compliance Report


To assess the performance of a Storage Group, the weighted response time for the past 4 hours and for
the past 2 weeks is calculated. The two values are then compared to the maximum response time
associated with the given SL for the SG. If both calculated values fall within or under the SL-defined
response time band, the compliance state is STABLE. If one of them is in compliance and the other is out
of compliance, the compliance state is MARGINAL. If both are out of compliance, the compliance state is
CRITICAL. For example, an SG whose 4-hour weighted response time is within its SL band but whose
2-week value falls outside the band is reported as MARGINAL.

This report can be exported to a .pdf file using the Export button. It can be set to run as a report and
scheduled to be distributed by email on a user-defined schedule using the Schedule button.

Storage Group Compliance

• Critical and Marginal – SGs which do not meet SL
• Stable – SGs which meet SL
• No Status – SGs with no explicit SL
• Total – All SGs


Icons for Critical, Marginal, Stable, No Status, and Total are displayed on the Dashboard. Clicking the icon
directs you to the appropriate listing.

For example, clicking the Total icon directs you to a listing of all the Storage Groups configured on the
array. Clicking Stable directs you to the listing of Storage Groups which are performing within the SL
target. Marginal indicates that the performance is below the SL target, while Critical indicates performance
well below the SL target. No Status is the listing of SGs on which an SL has not been explicitly set.

Stable Storage Groups


Here is an example of a listing of Stable Storage Groups. To see the details of the compliance of a specific
Storage Group, select the Storage Group from the listing. Details of the SG are shown on the right. For
more details, click the VIEW ALL DETAILS link.

Stable SG – View All Details


The detailed view of the Storage Group shows detailed information about the SG, and includes tabs for
Compliance, Volumes, and Performance information for the SG.

Stable SG Example – Compliance Tab


The Compliance tab of a Storage Group shows details of Compliance for the SG. The display shows
Response Time, Workload Skew, IOPS, and I/O Mixture for the SG.

Response Time Details


To view details of the Response Time for the SG, click the VIEW DETAILS link in the Response Time
panel.

Stable SG Example – Volumes Tab


To view volumes in the SG, click the Volumes tab. To view details of a volume, select the volume. Details
are shown on the right of the screen.

Stable SG Example – Performance Tab


Performance for the SG is displayed using the Performance tab. You can see the graphs in more detail by
maximizing each graph individually.

Performance Dashboard


The Performance Dashboard includes array-level performance information, along with SG, Hosts, and
components such as FE, BE, and SRDF directors. Performance information about Disk Technology is also
included. You can see the graphs in more detail by maximizing each graph individually. Custom
Dashboards can be created from this screen.

Performance Dashboard – Storage Groups


The Storage Group Performance Dashboard displays an overview of the performance for all Storage
Groups.

Performance Dashboard – SG Workload


To view details of a Storage Group, select a Storage Group. Here is an example of the Performance
Dashboard for a Storage Group showing the SG Workload. You can see the graphs in more detail by
maximizing each graph individually.

Performance Dashboard – SG IO Profile

Information


Here is an example of the Performance Dashboard for a Storage Group showing its IO Profile. You can
see the graphs in more detail by maximizing each graph individually, and get more details by clicking the
Information icon.

Performance Dashboard – SG Performance Thresholds


Here is an example of the Performance Thresholds view for a Storage Group. To view more details, click
any of the categories.

Performance Dashboard – SG Noisy Neighbor


The SG Noisy Neighbor tab helps identify potential issues with a Storage Group. This Dashboard charts
key performance metrics and details the relationship between the SG and the associated front-end
directors and ports. It also shows other SGs that are sharing ports that could potentially contribute to
performance issues.

Storage Group – Performance Analyze View


To analyze performance on a Storage Group, choose Analyze from the Performance section. Drill down
to the Storage Group by selecting the array and then the Storage Group. Real Time, Diagnostic, and
Historical tabs are available for viewing performance information about the SG.

Compliance Alert


You can configure Unisphere for PowerMax to alert you when the performance of a Storage Group,
relative to its SL target, changes. Once configured, Unisphere for PowerMax assesses the performance of
the storage every 30 minutes and delivers the appropriate alert level.

To open the Compliance Alert Policies list view, choose Settings from the banner area, and select
Compliance Alert Policies under Alerts. Click Create to open the Create Compliance Alert Policy dialog
box. Select the storage system on which the Storage Groups are located. Select one or more Storage
Groups and click Add. By default, compliance policies are configured to generate alerts for all compliance
states.

• Critical: Storage group performing well below SL target

• Marginal: Storage group performing below SL target

• Stable: Storage group performing within the SL target

In this example, the DemoHostSG is selected, and compliance alerts are enabled for Stable, Marginal,
and Critical. To change this default behavior, clear the box for any of the states for which you do not want
to generate alerts.

Lesson: Monitor Data Reduction
This lesson covers the following topics:

• Monitoring Data Reduction in PowerMax and VMAX All Flash arrays


This lesson covers the monitoring of overall efficiency and data reduction in PowerMax and VMAX All
Flash arrays.

Overall Efficiency


Overall Efficiency is reported on the Dashboard screen. The Overall Efficiency in this example is 11.0:1.
The Overall Efficiency ratio is the ratio of the sum of all TDEV and Snapshot sizes to the physical used
storage. This ratio is calculated based on the 128K track size and the compressed pool track size. For
example, if the TDEVs and snapshots together represent 110 TB of logical capacity that is stored in 10 TB
of physical capacity, the Overall Efficiency ratio is 11.0:1.

The Data Reduction Ratio is the ratio of the sum of all TDEVs and Replication Data Pointers (RDPs)
logical backend storage to the TDEVs and RDPs physical used storage. The TDEVs and RDPs logical
backend storage is calculated based on the 128K track size. The TDEVs and RDPs physical used storage
is calculated based on the compressed pool track size. RDP is a metadata object in cache that is used by
TimeFinder SnapVX to track snapshot deltas. If no free RDP space is available, snapshots fail. Unisphere
for PowerMax tracks local replication cache threshold alerts so that users do not unknowingly run out of
RDP space. If RDP critical or fatal alerts are triggered, contact Dell EMC, as this setting can only be
changed by Dell EMC Engineering.

The Virtual Provisioning Savings is based on the TDEV configured storage and the TDEV Logical Backend
Storage, calculated based on TDEV allocated tracks without shared unowned. The Snapshot Savings is
based on the sum of all Snapshot sizes and the RDP Logical Backend Storage.

Compression Ratio – Storage Group


To view the Compression Ratios on Storage Groups in Unisphere, view the Storage Group Demand
Report from the Capacity section of the Dashboard.

View Compression – Volume


To view the Compression Ratio of a given volume, choose Storage Groups under the Storage section.
Select the SG, and click the Volumes tab. Select the volume and the right panel displays details on the
volume, including the Compression Ratio.

View SRP Compression – Solutions Enabler

symcfg show -srp SRP_1 -sid 217 | more

Symmetrix ID : 000197600217

Name : SRP_1
Description :
Default SRP : FBA
Effective Used Capacity (%) : 11
Usable Capacity (GB) : 12518.0
Used Capacity (GB) : 1419.6
Free Capacity (GB) : 11098.4
User Subscribed Capacity (GB) : 1264.5
Reserved Capacity (%) : 10
Compression State : Enabled
Data Reduction Ratio : 1.4:1
Usable by RDFA DSE : Yes

...


Use the symcfg show command to display SRP compression settings with Solutions Enabler. An
example of the command syntax and output is shown here.

View SG Compression – Solutions Enabler
symsg -sid 217 show esxi-94-161_Data

Name: esxi-94-161_Data

Symmetrix ID : 000197600217
Last updated at : Mon Jun 18 16:47:46 2018
Masking Views : Yes
FAST Managed : Yes
Service Level Name : Diamond
Workload : <none>
SRP Name : SRP_1
VP Saved (%) : 75.0
Compression Enabled : Yes
Compression Ratio : 1.5:1
Host I/O Limit : None
Host I/O Limit MB/Sec : N/A
Host I/O Limit IO/Sec : N/A
Dynamic Distribution : N/A
Number of Storage Groups : 1
Storage Group Names : esxi-94-161_parent_SG (IsParent)
Number of Gatekeepers : 0
...


To view the compression settings for a particular Storage Group, use the symsg show command. An
example of the command syntax and output is shown here.

Lab: Monitoring SRP and Compliance with Unisphere

This lab covers:
• Monitoring SRP with Unisphere for PowerMax
• Monitoring Compliance with Unisphere for PowerMax


This lab covers the monitoring of SRP and Compliance with Unisphere for PowerMax.

Lesson: Workload Planning
This lesson covers the following topics:

• Data Exclusion Windows

• SRP Headroom

• Suitability Check


This lesson covers the workload planning features of Unisphere for PowerMax: Data Exclusion Windows,
SRP Headroom, and Suitability Check.

Workload Planning – Unisphere for PowerMax
Simple, Automated, Workload-Aware, Service Level Based Sizing

Workload Planner

Features Supported by WLP


• Headroom Indicator
• Suitability Check


Planning is greatly assisted by a software layer in the system that is known as Workload Planner, or WLP.
The main function of WLP is to abstract the array as a provider of services. Think of it as a mediator that
converts array status—disks, ports, directors—into array capabilities. WLP helps Storage Administrators
plan for workloads being considered.

The workload planning features supported by Unisphere for PowerMax are Headroom Indicator and
Suitability Check. These features enable you to plan based on Service Level and workload.

In some environments, the array abruptly degrades in performance as new workloads are provisioned.
Headroom Indicator gauges the remaining capacity per Service Level so you can plan how many more
workloads can be provisioned.

When you are ready to provision, a Suitability Check can be run. The Suitability Check determines if the
capacity and the Service Level request can be met by the array with its current workload. Using the
workload planning features requires that the arrays are registered for performance data collection in
Unisphere for PowerMax.

Data Exclusion Windows


The Data Exclusion Windows feature enables excluding specified performance stats that affect reporting
such as headroom, suitability, and compliance. Peaks in storage system statistics can occur due to
anomalies or unusual events, or recurring maintenance during off-hours that fully loads the storage
system.

Using Data Exclusion Windows enables Storage Administrators to focus on specific performance for
planning purposes.

Data Exclusion Windows Settings

Blue shaded area indicates Included
Gray shaded area indicates Excluded


The Data Exclusion Windows page enables users to view and set One-Time Exclusion Periods and
Recurring Exclusion Periods for a selected storage system. It consists of two panels. The One-Time
Exclusion Periods panel displays 84 component utilizations—two weeks worth of data—in a single chart.
This chart enables you to set the one-time exclusion period value from a given time slot. This exclusion
results in all time slots, prior to the selected time slot, being ignored for the purposes of calculating
compliance and admissibility values. The Recurring Exclusion Periods panel displays the same data in a
one-week format. This format enables selection of repeating recurring exclusion periods during which any
collected data is ignored.

One-Time Exclusions can be set up to a maximum of two weeks, however, they are cleared automatically
when the selected time runs off the cycle. Recurring Exclusions remain set until removed.

To set exclusions, click the data point. When the data point is clicked, it changes to a gray color indicating
the point is excluded. If that same gray excluded data point is clicked again, it is added back to be included
in the reporting. Included data points show as a blue color. Once complete, choose either Set One-Time
Exclusion or Set Recurring Exclusions and save the selections. Exclusions can be set on all components
for suitability, or on back-end components only for headroom.

SRP Headroom
• Useful for workload planning
– Displays the available headroom for an SL
– Assumes all the remaining capacity is this type


The SRP Headroom indicator in the Dashboard is useful for workload planning. It displays the space
available for a particular SL, assuming that all the remaining capacity is provisioned at that SL.

The capacity for an SL indicates the amount that you can provision and be assured that the array is able to
meet the SL compliance requirements.

Suitability Check

• Part of Provision Storage wizard


• Suitability Check can be performed when:
– Provisioning storage to a new host with Provision Storage to Host
– Modifying an existing Storage Group in a Masking View
› Adding more storage
› Modifying Service Level

• Determines if the array can handle the capacity and Service Level request
• Optional
• If check fails, does not prevent provisioning


The Suitability Check is an optional step that can be performed. This check can be performed by the
Provision Storage wizard when provisioning storage to a host. It can also be run when modifying an
existing SG which is part of a Masking View. An example of modification to an SG is the addition of more
storage or a change to the Service Level.

The Suitability Check determines if the array can handle the changes to the capacity and Service Level.
The provisioning process can be continued even if the Suitability Check fails.

Suitability Check – Provision Storage to Host


The Provision Storage wizard includes a Run Suitability Check button on the Summary page.

Suitability Check – Provision Storage Wizard


The Suitability Check is an optional step on the Summary page of the Provision Storage wizard. Click the
Run Suitability Check button to see if the array can meet the Service Levels for the provisioning request.
In order for the Suitability Check to work, the arrays must be registered for performance data collection. In
this example, the green check mark indicates that the Service Level for the provisioning request will be met.

Suitability Check – Modify SG: Add Volumes

Added more volumes
Hover for existing and additional workloads


The Suitability Check can be run when modifying a Storage Group that is a part of a Masking View. Any
changes to the number of volumes or the Service Level enables you to run the Suitability Check. In this
example, volumes have been added to an SG. Results of the Suitability Check are displayed in a bar
chart, showing Front End, Back End, and Cache. To see the existing and additional workload of each
component, hover the mouse pointer over the bar associated with the component.

Suitability Check – Modify SG Service Level

Changed Service Level


In this example, only the Service Level of the SG was changed. The number of Volumes is unchanged.
The Suitability Check returns that the modifications to this SG should meet the Service Level.

Lab: Workload Planning with Unisphere

This lab covers:
• Monitoring for available headroom
• Running Suitability Check when allocating more storage to a host


This lab covers the workload planning features of Unisphere for PowerMax.

Module Summary

Key points covered in this module:

• Monitoring SRP Reports

• Monitoring Compliance

• Monitoring Compression

• Workload Planning
– SRP Headroom
– Suitability Check


This module covered monitoring and workload planning with Unisphere for PowerMax. Unisphere for
PowerMax was used to monitor SRP, SGs, Compliance, and Compression. The SRP Headroom and
Suitability Check workload planning features were also covered.

Module: Introduction to Business Continuity

Upon completion of this module, you should be able to:

• Describe the various PowerMax and VMAX Family Business Continuity features and integrated
solutions.


This module describes the Business Continuity features and integrated solutions of the PowerMax and
VMAX Family of arrays.

Local Replication with TimeFinder SnapVX Overview

• Creates local point-in-time copies of production data
• Target device required to mount replica
• Highly scalable
– Single source volume can have up to 256 snapshots
– Up to 1024 target volumes can be linked to the snapshots of a single volume
• Highly efficient
• Snapshots share point-in-time tracks called snapshot deltas


TimeFinder SnapVX is a local replication technology that was introduced with the VMAX3 platform running
HYPERMAX OS 5977, and is supported on all VMAX All Flash and PowerMax arrays as well. SnapVX
creates local point-in-time copies of data without requiring pairing between source and target volumes.
Targets are not required to create the snapshot and are only required to mount and use a replica.
TimeFinder SnapVX is highly scalable. A single source volume can have up to 256 snapshots. Up to 1024
target volumes can be linked to the snapshots of a single volume. The snapshots are made as efficient as
possible by sharing point-in-time tracks which are called snapshot deltas. SnapVX also provides emulation
modes for the classic Dell EMC local replication software options of TimeFinder Mirror, Clone, and VP
Snap.
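
A minimal SYMCLI sketch of the snapshot lifecycle follows; the group names AppProd_SG and AppMount_SG and the snapshot name daily are hypothetical:

C:\>symsnapvx -sid 217 -sg AppProd_SG establish -name daily
C:\>symsnapvx -sid 217 -sg AppProd_SG -snapshot_name daily -lnsg AppMount_SG link
C:\>symsnapvx -sid 217 -sg AppProd_SG list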

Local Replication Suite
(Slide graphic: Local Replication Suite options for Mainframe and Open Systems)


The Local Replication Suite includes TimeFinder SnapVX with cloud-scalable snaps and clones to protect
your data. For file-level local replication, SnapSure is available, and for mainframe environments,
Compatible Flash is provided.

Dedicated target devices are no longer required. TimeFinder SnapVX offers shorter create and terminate
times and removes the dependency on cache when scaling. It provides zero-impact and cloud-scalable
snaps to protect the data. TimeFinder SnapVX works in open systems and mainframe environments. It
provides the underlying technology which supports Data Protector for z Systems (zDP).

This course covers TimeFinder SnapVX snapshots with open systems hosts.

Remote Replication Overview

• SRDF/Synchronous and
SRDF/Asynchronous
• SRDF Concurrent, SRDF Cascaded, and
SRDF/Star
• SRDF/Data Mobility (SRDF/DM)
• SRDF/Consistency Groups (SRDF/CG)
• SRDF/Cluster Enabler (SRDF/CE)
• SRDF/Automated Replication (SRDF/AR)
• SRDF/Metro
• Non-Disruptive Migration


Dell EMC Symmetrix Remote Data Facility (SRDF) is a replication technology that enables the mirroring of
a data center with minimal impact to the performance of the production application. SRDF provides
disaster recovery and data mobility solutions for the PowerMax and VMAX Family storage arrays in both
open systems and mainframe data centers. SRDF enables storage systems to be in the same room,
different buildings, or hundreds to thousands of kilometers apart. SRDF can operate in many modes and
can integrate with other products such as Microsoft Failover Clusters, VMware vCenter Site Recovery
Manager (SRM), and Dell EMC TimeFinder.

SRDF/Synchronous
(Diagram: Application Host at the Primary site; R1 synchronously replicated over limited distance to R2 at the Secondary site)


SRDF/Synchronous (SRDF/S) maintains a real-time—synchronous—mirrored copy of production data at
a physically separated storage system. The production volumes are labeled R1s and the copies are
labeled R2s. Host writes are written simultaneously to both arrays in real time before the application I/O
completes. Acknowledgments are not sent to the host until the data is stored in cache on both arrays.
SRDF/S can be used only for limited distance—up to 125 miles or 200 km.
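
As a hedged sketch, an SRDF/S pairing can be created at the Storage Group level with symrdf createpair; the group names and RDF group number here are hypothetical:

C:\>symrdf createpair -sid 217 -rdfg 10 -sg R1_SG -remote_sg R2_SG -type R1 -establish
C:\>symrdf -sid 217 -rdfg 10 -sg R1_SG query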

SRDF/Asynchronous
(Diagram: Application Host at the Primary site; R1 asynchronously replicated over unlimited distance to R2 at the Secondary site)


SRDF/Asynchronous (SRDF/A) mirrors data from the R1 devices while maintaining a dependent-write
consistent copy of the data on the R2 devices at all times. SRDF/A can be used for unlimited distance.
Host writes are collected for a configurable interval into delta sets. Delta sets are transferred to the remote
array in timed cycles. The copy of the data at the secondary site is typically only seconds behind the
primary site.
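
Assuming the storage group based pairing sketched earlier, the replication mode can be switched to asynchronous with set mode:

C:\>symrdf -sid 217 -rdfg 10 -sg R1_SG set mode async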

Concurrent SRDF

(Diagram: Application Host at the Primary site; R1 concurrently replicated to two R2 devices)


Concurrent SRDF is a disaster recovery solution where data is mirrored from the primary site concurrently
to two R2 devices. Usually, one copy running in SRDF/S mode is maintained at a nearby location and
offers zero data loss if the primary site fails. The second copy, operating in SRDF/A mode, offers an
out-of-region recovery site with a Recovery Point Objective (RPO) of seconds to minutes.

Cascaded SRDF

(Diagram: Application Host at the Primary site; data replicated to the Secondary site and on to the Tertiary site)


Cascaded SRDF is a three-site configuration that uses a bunker site and combines synchronous and
asynchronous modes. Data from a primary site is synchronously replicated to a secondary site and then
asynchronously replicated to a tertiary site. The major benefit of a cascaded configuration is its inherent
capability to continue replicating from the secondary site to the tertiary site when the primary site goes
down.

SRDF/Star
(Diagram: Application Host at the Production site; SRDF/Star triangle across the Production, Secondary, and Tertiary sites)

SRDF/Star is a three-site disaster recovery solution consisting of primary, secondary, and tertiary sites.
The secondary site synchronously mirrors the data from the primary site, and the tertiary site
asynchronously mirrors the production data. When an outage occurs at the primary site, SRDF/Star allows
the user to quickly move operations and re-establish remote mirroring between the remaining sites.

SRDF/Automated Replication

(Diagram: Hosts at Site A and Site B; R1 at Site A replicated to R2 at Site B by SRDF background copy)


SRDF/Automated Replication (SRDF/AR) is an automated remote replication solution that uses both
SRDF and TimeFinder to provide a long-distance disaster restart solution for UNIX and Windows
environments. SRDF/Automated Replication is the most affordable of the listed solutions because it can
be configured to run on lower bandwidth networks than the other solutions.

SRDF/AR can be deployed over 2 or 3 sites:

• In 2-site configurations, SRDF/DM is deployed with TimeFinder

• In 3-site configurations, SRDF/DM is deployed with a combination of SRDF/S and TimeFinder

In a 2-site configuration, as shown here, data on the SRDF R1 TimeFinder target device is replicated
across the SRDF links to the SRDF R2 device. The SRDF R2 device is also a TimeFinder source device.
TimeFinder replicates this device to a TimeFinder target device. You can map the TimeFinder target
device to the host connected to the secondary array at Site B.

SRDF/Metro
(Diagram: SRDF/Metro deployed with a single host running multi-path software, and with clustered hosts; in both cases the hosts have R/W access to R1 at Site A and R2 at Site B)


SRDF/Metro allows both R1 and R2 devices to be Read/Write accessible to hosts. Hosts can
write to both the R1 and R2 side of the device pair, and R2 devices assume the same
external device identity as their R1 mate. This shared identity causes the R1 and R2 devices
to appear to hosts as a single virtual device across the two arrays. SRDF/Metro can be
deployed with either a single multi-path host or with a clustered host environment. For
single host configurations, multi-pathing software directs parallel reads and writes to each
array. For clustered host configurations, host I/Os can be issued by multiple hosts accessing
both sides of the SRDF device pair. In both configurations, writes to the R1 or R2 devices
are synchronously copied to the paired device. Any write conflicts are resolved by the
SRDF/Metro software to maintain consistent images on the SRDF device pairs.
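
For orientation, SRDF/Metro device pairs are created with the symrdf createpair command and the -metro option. The following is a minimal sketch, not taken from this course's lab environment: the RDF group number and the contents of the device pairs file are assumptions.

C:\>rem RDF group 10 and the device pair below are hypothetical
C:\>type metro_pairs.txt
00D1 00AD

C:\>symrdf createpair -sid 217 -rdfg 10 -file metro_pairs.txt -type R1 -metro -establish -nop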

Non-Disruptive Migration
(Diagram: Hosts with multi-path software accessing a Source Array and a Target Array. Pass-through mode: VMAX (5876) source to VMAX3 (5977), VMAX All Flash (5977 or 5978), or PowerMax (5978) target. Metro-based mode: VMAX3 or VMAX All Flash (5977) source to VMAX All Flash (5978) or PowerMax (5978) target.)

330 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

Non-Disruptive Migration (NDM) is a method for migrating data without application downtime. NDM is
supported across SRDF/S distances. However, because of the requirement that the host sees both the
source and target storage, migrations are typically performed between arrays within a data center. There
are two supported NDM implementations that are dependent mainly on the source array software version.
If migrating from a VMAX array running Enginuity 5876 to a PowerMax running PowerMaxOS 5978, Pass-
through mode is used. If migrating from a VMAX3 or VMAX All Flash array running HYPERMAX OS 5977
or later to a PowerMax running PowerMaxOS 5978, Metro-based mode is used.

Open Replicator

(Diagram: Application Host and a SAN connecting an array running PowerMaxOS with a Third Party Array)

331 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

Open Replicator enables full or incremental copies of data to be made between qualified third-party arrays and arrays running PowerMaxOS within a SAN infrastructure. Open Replicator uses the Solutions Enabler
SYMCLI symrcopy command.

Use Open Replicator to:

• Pull from source volumes on qualified remote arrays to a volume on an array running PowerMaxOS

• Perform online data migrations from qualified storage to an array running PowerMaxOS

For pull operations, the volume can be in a live state during the copy process. The local hosts and
applications can begin to access the data as soon as the session begins, even before the data copy
process has completed. The pull can also be performed in cold mode to a static volume.
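
As a hedged sketch of the symrcopy syntax for a hot pull: the session below pairs a control device on the PowerMaxOS array with a remote LUN identified by WWN. The device number, WWN, and session name are hypothetical values for illustration only.

C:\>rem Each line of the device file pairs a control device with a remote LUN (values hypothetical)
C:\>type or_pairs.txt
symdev=000197600217:00C5 wwn=60060160B4A01E00A6B2359E4CD8E511

C:\>symrcopy create -copy -pull -hot -name migrate1 -file or_pairs.txt -nop
C:\>symrcopy activate -file or_pairs.txt -nop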

ProtectPoint Integration

(Diagram: Application Host attached to a PowerMax or VMAX Family array; SnapVX preserves production data, and only changed data is copied via FAST.X to backup images on a Data Domain system)

332 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

The PowerMax and VMAX Family Business Continuity integrated solutions include ProtectPoint,
RecoverPoint, and AppSync.

ProtectPoint is an integration between PowerMax, VMAX All Flash, or VMAX3 arrays and Data Domain
storage systems to back up production data. TimeFinder SnapVX is used to create a replica—or a
snapshot—of a LUN. ProtectPoint copies the snapshot to a vdisk on the Data Domain system. The vdisk
is seen by the source array as a FAST.X encapsulated LUN. Change tracking is enabled for the replica,
and therefore, only changes made are copied, providing performance increases and space savings.
ProtectPoint eliminates the performance impact on applications, provides faster backup and recovery, and
reduces the costs and complexity of traditional backup solutions.

RecoverPoint Integration

(Diagram: Application Host attached to an array in New York; SnapVX snapshots feed local RPAs, which replicate over FC/WAN to remote RPAs and an array in London)

333 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

The RecoverPoint solution also leverages native PowerMax and VMAX Family snap
capabilities to create point-in-time consistent snaps of production volumes in a consistency
group. The snapshots are used to synchronize the production volumes with the copy
volumes. The data path is through the RecoverPoint Appliances (RPAs). The solution
supports manual, continuous, and periodic snapshots. Replication is asynchronous.

AppSync

334 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

Dell EMC AppSync offers a simple, SLA-driven, self-service approach for protecting, restoring, and
cloning critical Microsoft and Oracle applications and VMware environments. After defining service plans,
application owners can protect, restore, and clone production data quickly with item-level granularity by
using the underlying Dell EMC replication technologies.

On PowerMax arrays, the Essentials software package contains AppSync in a starter bundle. The
AppSync Starter Bundle provides the license for a scale-limited, yet fully functional version of AppSync.
The Pro software package contains the AppSync Full Suite.

AppSync supports the following applications:

• Oracle

• Microsoft SQL Server

• Microsoft Exchange

• VMware VMFS

• NFS datastores and File systems

Module Summary

Key points covered in this module:

• Described the various PowerMax and VMAX Family Business Continuity features and integrated
solutions.

335 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

This module described the Business Continuity features and integrated solutions of the PowerMax and VMAX Family of arrays.

Module: TimeFinder SnapVX Operations

Upon completion of this module, you should be able to:

• Describe TimeFinder SnapVX concepts

• Perform TimeFinder SnapVX operations using SYMCLI and Unisphere for PowerMax

• Replicate a VMFS datastore using TimeFinder SnapVX

336 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

This module focuses on TimeFinder SnapVX local replication technology on PowerMax and the VMAX family of arrays. Concepts, terminology, and operational details of creating snapshots and presenting them to target hosts are discussed. Performing TimeFinder SnapVX operations using SYMCLI and Unisphere for PowerMax is presented. Use of TimeFinder SnapVX for replication in a virtualized environment is also presented.

Lesson: TimeFinder SnapVX Concepts and Operations
This lesson covers the following topics:

• TimeFinder SnapVX concepts

• Performing TimeFinder SnapVX operations

337 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

This lesson covers the concepts of TimeFinder SnapVX. Operational examples using SYMCLI are
presented in detail.

TimeFinder SnapVX Overview

• Create local point-in-time copies (snapshots) of data without requiring pairing between source and target volumes
– Target volumes are required only if host access to point-in-time data is wanted
• Highly scalable
– Up to 256 snapshots per source volume
• Highly space-efficient
– Sharing of point-in-time tracks among different snapshots

(Diagram: Source volume with three snapshots, Backup 6 AM, Backup 7 AM, and Testing 8 AM)

338 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

TimeFinder SnapVX provides a highly efficient mechanism for taking periodic point-in-time copies of
source data without the need for target devices. Target devices are required only for presenting the point-
in-time data to another host. Up to 1024 target volumes can be linked per source volume. Sharing
allocations between multiple snapshots makes it highly space efficient. A write to the source volume will
only require one snapshot delta to preserve the original data for multiple snapshots.

If a source track is shared with one or more targets, a write to this track preserves the original data as a snapshot delta that is shared by all the targets. A write to a target is applied only to that specific target.

TimeFinder SnapVX Terminology

• Source Volume
– A device whose point-in-time copy is to be preserved
• Snapshot
– Preserved point-in-time image of a source volume
• Snapshot Delta
– Original source volume tracks at the point-in-time of the snapshots that were
preserved during host writes to the source volume
• Linked Target Volume
– A device that is used to provide access to point-in-time data by linking it to a
snapshot
• Storage Resource Pool
– A collection of data pools that provide physical storage for thin devices

339 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

The terminology used in SnapVX is described here. Note that all host accessible devices in a PowerMax
or VMAX Family array are thin devices.

Host writes to source volumes will create snapshot deltas in the SRP. Snapshot deltas are the original
point-in-time data of tracks that have been modified after the snapshot was established.

SRP configuration must be specified when ordering the system, prior to installation. The source and target
volumes can be associated with the same SRP or different SRPs. Snapshot deltas will always be stored in
the SRP of the source volume.
• Allocations owned by the source will be managed by its Service Level (SL).
• Allocations for the target will be managed by the SL of the target.
• Snapshot deltas will be managed by the Optimized SL.

Redirect-on-Write – Preserving Point-in-Time Data
(Diagram: Redirect-on-Write. Top: with host access, the Source and its snapshot point to the same tracks in the Storage Resource Pool. Bottom: after a host write, the new data is redirected to a new location in the SRP, and the snapshot continues to point to the original tracks.)

340 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

When the snapshot is created, both the source device and the snapshot point to the location of data in the
SRP. When a source track is written to, the new write is asynchronously written to a new location in the
SRP. The source volume will point to the new data. The snapshot will continue to point to the location of
the original data. The preserved point-in-time data becomes the snapshot delta. This is the Redirect-on-
Write (ROW) technology.

Under some circumstances, SnapVX uses Asynchronous Copy on First Write (ACOFW) on arrays other than PowerMax and VMAX All Flash (that is, on hybrid arrays). This is done to prevent degradation of performance for the source device. For example, if the original track was allocated on a Flash drive, it is better to copy the original data down to a lower tier and accommodate the new write on the Flash drive.

Snapshot Generations and Time-to-Live

• Generation Numbers
– Any snapshot of a source volume with a unique name is assigned Generation Number 0 (most recent)
• Time-to-Live (TTL)
– At time of creation or later, an expiration time for the snapshot can be set
– Snapshots automatically terminate when the TTL expires

(Diagram: Source volume with snapshots Backup 6 AM (generation 1), Backup 7 AM (generation 0), and Testing 8 AM (generation 0))

341 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

Each snapshot is assigned a generation number. If the name assigned to the snapshot is reused, then the
generation numbers are incremented. The most recent snapshot with the same name will be designated
as generation 0, the one prior as generation 1, and so on. If each snapshot is given a unique name, they
will all be generation 0. Terminating a snapshot will result in reassignment of generation numbers.

Snapshots are kept until they are terminated, unless a Time-to-Live (TTL) is set. TTL is used to
automatically terminate a snapshot at a set time. This can be specified at the time of snapshot creation or
can be modified later. PowerMaxOS will terminate the snapshot at the set time. If a snapshot has linked
targets, it will not be terminated. It will be terminated only when the last target is unlinked. TTL can be set
as a specific date or as a number of days from creation time.
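
A TTL can also be supplied at creation time. A minimal sketch, using the snapvx_sg storage group from the examples later in this module:

C:\>rem Snapshot terminates automatically after 2 days, provided it has no linked targets
C:\>symsnapvx -sid 217 -sg snapvx_sg establish -name backup -ttl -delta 2 -nop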

Secure Snapshots

• Prevent snapshot data deletion
– Date/time delta from current time or absolute date
• Snapshot automatically deleted
– Expiration date has passed, and snapshot has no links
• Secure snapshot expiration date can be extended
– Cannot be shortened
• Existing snapshot can be converted to a secure snapshot
– Secure snapshot cannot be converted into a traditional snapshot
• Expired secure snapshots with links are not deleted
– No longer considered secure

342 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

Secure snapshots prevent administrators or other high-level users from intentionally or unintentionally
deleting snapshot data. When creating a secure snapshot, you assign it an expiration date/time either as a
delta from the current date or as an absolute date. Once the expiration date passes, and if the snapshot
has no links, PowerMaxOS automatically deletes the snapshot. Prior to its expiration, administrators can
only extend the expiration date—they cannot shorten the date or delete the snapshot. A snapshot can be
converted to a secure snapshot, but a secure snapshot may not be converted to a traditional snapshot. If a
secure snapshot expires, and it has a volume linked to it, or an active restore session, the snapshot is not
deleted. However, it is no longer considered secure.
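
Converting an existing snapshot to a secure snapshot uses the set secure action, analogous to the set ttl action shown later in this module. A minimal sketch; the generation and delta values are illustrative:

C:\>rem Secure the snapshot for 4 days and 12 hours from the current host time
C:\>symsnapvx -sid 217 -sg snapvx_sg -snapshot_name backup -generation 0 set secure -delta 4:12 -nop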

Accessing Point-in-Time Data – Linked Targets

• Snapshots should be linked to a target volume if host access to the point-in-time data is required
• Linked targets can be in:
– No Copy Mode (default)
– Copy Mode
• Mode can be specified at the time of linking the snapshot to target or can be modified later

(Diagram: Source volume with snapshots Backup 6 AM, Backup 7 AM, and Testing 8 AM; one snapshot is linked to a Target volume)

343 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

A snapshot has to be linked to a target volume to provide access to point-in-time data to a host. The link
can be in No Copy or Copy mode. Copy mode linked targets will provide full volume copies of the point-in-
time data of the source volumes—similar to full copy clones. Copy mode linked targets will have a useable
copy of the data even after termination of the snapshot—provided the copy has completed.

A snapshot can have both No Copy mode and Copy mode linked targets. Default is to create No Copy
mode linked targets. This can be changed later if desired.

Writing to a linked target will not affect the snapshot. The target can be re-linked to the snapshot to revert
to the original point-in-time.

A snapshot can be linked to multiple targets. But a target volume can be linked to only one snapshot.

There is no benefit to having No Copy mode linked targets in an SRP different from the source SRP. Writes to the source volume only create snapshot deltas, which are stored in the SRP of the source volume. The writes do not initiate any copy to the target.

A target volume that is larger than the source can be linked to a snapshot. This is enabled by default. The
environment variable SYMCLI_SNAPVX_LARGER_TGT can be set to DISABLE to prevent this.
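
As a minimal sketch of the mode options described above, using the storage group names from the examples later in this module:

C:\>rem Link in Copy mode instead of the default No Copy mode
C:\>symsnapvx -sid 217 link -sg snapvx_sg -snapshot_name backup -lnsg snapvx_tgt_sg -copy -nop

C:\>rem Convert an existing No Copy link to Copy mode later
C:\>symsnapvx -sid 217 -sg snapvx_sg -lnsg snapvx_tgt_sg -snapshot_name backup set mode copy -nop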

Linked Target – Undefined/Defined Tracks

(Diagram: A Backup snapshot of a Source volume linked to an undefined Target; data locations resolve through tracks in the Storage Resource Pool)

• Undefined Tracks: Location of data for the target has to be resolved through snapshot pointers
• Defined Tracks: Location of data for the target points directly to the appropriate tracks in the SRP

344 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

When a snapshot is linked to a target, the process of defining the tracks for the target is initiated internally.
In the undefined state, the location of data for the target has to be resolved through the pointers for the
snapshot. In the defined state, data for the target points directly to the corresponding locations in the SRP.

Relinking and Unlinking Targets

• Relink operation unlinks the target from the current snapshot and links it to a
different snapshot
– Relink must be between the same source and target devices

• Relink to the same snapshot refreshes the target with original point-in-time
– Useful if target has been modified and you want to revert to the original snapshot

• Unlink operation disassociates the linked target from the snapshot


– Copy mode linked targets will contain a full and usable copy of the source
devices, after copy completes
– No Copy mode linked targets behavior depends on array OS
› PowerMaxOS, HYPERMAX OS 5977.810.184 and later allow access to unlinked target
– Refer to the Dell EMC TimeFinder SnapVX Local Replication Technical Note for
the complete details

345 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

Relink provides a convenient way of checking different snapshots to select the appropriate one to access.
A link between the snapshot of the source volume and the target must exist for the relink operation. Relink
can also be performed with a different snapshot of the same source volume or a different generation of the
same snapshot of the source volume.

The Unlink operation removes the relationship between a snapshot and the corresponding target. Copy
mode linked targets can be unlinked after the copying completes. This will provide a full, independent,
useable point-in-time copy of the source data on the target device.

No Copy mode linked targets can be unlinked at any time. The unlinked target behavior depends on the storage system OS. Prior to HYPERMAX OS 5977.810.184, users could not access the data on a No Copy mode target after it was unlinked. With PowerMaxOS and HYPERMAX OS 5977.810.184 and later, users can access the data on fully defined No Copy mode targets after they are unlinked. This functionality is made possible by shared allocations.

When a target is unlinked, the allocation sharing remains in place. Even after unlinked, termination of a
snapshot results in the target owning the snapshot delta. And an updated write to the source track results
in the target owning the original track. The target also takes ownership of any shared source tracks if the
source is deallocated after unlink.

This enhanced functionality allows the user to continue to access the target data after unlink in the same
way that previously required full copy targets, but without duplicating the entire back-end data from the
point-in-time to the target.
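
For reference, the unlink mirrors the link syntax shown later in this module; a minimal sketch using the same storage group names:

C:\>symsnapvx -sid 217 unlink -sg snapvx_sg -snapshot_name backup -gen 1 -lnsg snapvx_tgt_sg -nop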

Restore to Source

• Restore from snapshot


– Snapshots can be directly restored to the source volume
– Source volume data is set back to the point-in-time of the snapshot
– Only changed data has to be restored from the snapshot delta—an inherently differential operation

• Restore from linked target


– Two-step process:
1. Create a snapshot of the linked target
2. Link this snapshot with the source volume—which will now be a linked target

346 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

Because the data on the source volume changes from the perspective of the host, the source volume should be unmounted prior to the restore operation and re-mounted afterward. To restore from a linked target, a snapshot of the target must be established, and this snapshot is then linked to the source volume, which now acts as a linked target. The source volume cannot be unlinked until the copy completes, so the link should be created in Copy mode.

Cascading Snapshots

(Diagram: A Backup snapshot of the Source is linked to a first Target; a host process obfuscates sensitive data on that Target; a backup_test snapshot of the Target is then linked to a second Target)

347 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

Presenting sensitive data to test or development environments often requires that the source of the data
be disguised beforehand. Cascaded snapshots provide this separation and disguise. There is no limit to
the number of cascaded hops that can be created as long as the overall limit for SnapVX is maintained. If
no change to the data is required before linking the snapshots to the test or development environments,
there is no need to create a cascaded relationship.
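
A cascaded relationship is built by snapshotting a linked target and linking that snapshot to a second target. A sketch with hypothetical storage group names tgt1_sg and tgt2_sg:

C:\>symsnapvx -sid 217 -sg snapvx_sg establish -name backup -nop
C:\>symsnapvx -sid 217 link -sg snapvx_sg -snapshot_name backup -lnsg tgt1_sg -nop

C:\>rem After the host process has obfuscated the data on the tgt1_sg devices:
C:\>symsnapvx -sid 217 -sg tgt1_sg establish -name backup_test -nop
C:\>symsnapvx -sid 217 link -sg tgt1_sg -snapshot_name backup_test -lnsg tgt2_sg -nop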

Reserved Capacity

• Reserved Capacity is a percentage of the Storage Resource Pool (SRP) that can only be allocated to new host writes
• When the SRP gets to where only Reserved Capacity is left:
– Snapshots fail when new allocations for snapshot deltas are required
› The snapshot has to be terminated
– Copies to targets halt
› Copy resumes if free space is made available in the SRP or if the Reserved Capacity is lowered

348 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

Reserved capacity ensures that there will be sufficient capacity available in the SRP to accommodate new
host writes. When the allocated capacity reaches the point where only reserved capacity remains, SnapVX
allocations for snapshot deltas and copy processes will be affected.

Expanding Storage Groups with Active Snapshots

• New volumes can be added to Storage Groups based on the growth of the
application
• Existing snapshots do not include the new volumes
• New snapshots include the newly added volumes
• If the SG is restored from an earlier snapshot, the new volumes are set as
Not Ready to the host
– The volumes remain Not Ready even after the restore session is terminated. The user must decide the best course of action to include these volumes for the application and host.
• Likewise, if the SG of the linked targets has been expanded after the
snapshot, these volumes will be set as Not Ready to the host

349 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

Care needs to be exercised when expanding SGs with existing snapshots. If there are more volume(s) in
the SG than are contained in the snapshot, then a restore from the snapshot will set these additional
volume(s) to Not Ready. This is because these volume(s) were not present when the snapshot was taken.

Of course subsequent snapshots, after the SG expansion, will contain all the volumes. Similarly, if the
linked target SG has been expanded and has more devices than the snapshot, then the additional
volumes in the linked target SG will be set to Not Ready.

Online Device Expansion with SnapVX

• Expand SnapVX source or target devices
• Snapshot data remains the same size
• The ability to restore a smaller snapshot to an expanded source device
• Target link and relink operations depend on the size of the source device when the snapshot was taken, not its size after expansion
• Key User Benefits:
– Expand while retaining local and remote protection
– Reduces need to heavily overprovision

350 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

PowerMaxOS 5978 supports online device expansion for Local Replication (LREP) configurations. As with
standalone and SRDF devices, this means an administrator can increase the capacity of thin devices that
are part of an LREP relationship without any service disruption. In the past, you would need to delete an existing snapshot before you could expand a device. ODE removes that restriction and reduces the need for customers to heavily overprovision TDEVs to avoid having to expand later. Devices eligible for expansion are those that are part of SnapVX sessions and legacy sessions that use CCOPY, SDDF, or Extent.

After a source device expansion, the snapshot data remains the same size. ODE also enables the ability
to restore a smaller snapshot to an expanded source device. Note that the target link and relink operations
depend on the size of the source device when the snapshot was taken, not its size after expansion.
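
The expansion itself is a symdev operation. A minimal sketch, assuming the 10 GB device 0D1 used in the surrounding examples is grown to 15 GB:

C:\>rem Expand thin device 0D1 online; existing snapshots remain in place
C:\>symdev -sid 217 modify 0D1 -tdev -cap 15 -captype gb -nop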

Creating Snapshots

C:\>symsnapvx -sid 217 -sg snapvx_sg establish -name backup

Execute Establish operation for Storage Group snapvx_sg (y/[n]) ? y

Establish operation execution is in progress for the storage group snapvx_sg. Please wait...

Polling for Establish.............................................Started.


Polling for Establish.............................................Done.
Polling for Activate..............................................Not Needed.

Establish operation successfully executed for the storage group snapvx_sg

351 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

The most convenient and preferred way of performing TimeFinder SnapVX operations is using Storage
Groups. In this example, we are creating a snapshot named backup for the devices in the Storage Group
snapvx_sg.

Listing Snapshots

C:\>symsnapvx -sid 217 list -sg snapvx_sg -detail

Storage Group (SG) Name : snapvx_sg


SG's Symmetrix ID : 000197600217 (Microcode Version: 5978)

----------------------------------------------------------------------------------------------
Snapshot Total
Sym Flags Dev Size Deltas Non-Shared
Dev Snapshot Name Gen FLRG TSEB Snapshot Timestamp (Tracks) (Tracks) (Tracks)
----- --------------- ---- --------- ------------------------ ---------- ---------- ----------
000D1 backup 0 .... .... Wed Nov 28 14:26:48 2018 81930 0 0
backup 1 .... .... Wed Nov 28 14:24:48 2018 81930 820 20
backup 2 .... .... Wed Nov 28 14:22:45 2018 81930 3552 0
---------- ----------
4372 20

352 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

We have created three successive snapshots using the same name. Note that each snapshot is given a
generation number. As discussed earlier, the most recent snapshot is designated as generation 0. As
there is workload on the source devices, the changes are accumulated in snapshot deltas. The non-
shared tracks are unique to the specific snapshot.

These are the tracks that will be returned to the SRP if the snapshot is terminated.

Note that the output has been edited to fit the slide. The Expiration Date is not shown. As we did not
specify a time-to-live during the establish operation, the Expiration Date is NA.

Setting Time-to-Live

C:\>symsnapvx -sid 217 -sg snapvx_sg -snapshot_name backup -generation 0 set ttl -delta 2

SetTimeToLive operation successfully executed for the storage group snapvx_sg

C:\>symsnapvx -sid 217 list -sg snapvx_sg -detail

Storage Group (SG) Name : snapvx_sg


SG's Symmetrix ID : 000197600217 (Microcode Version: 5978)

----------------------------------------------------------------------------------------------
Snapshot
Sym Flags Dev Size
Dev Snapshot Name Gen FLRG TSEB Snapshot Timestamp (Tracks) Expiration Date
----- ------------- ---- --------- ------------------------ --------- ------------------------
000D1 backup 0 .... .... Wed Nov 28 14:26:48 2018 81930 Fri Nov 30 14:42:44 2018
backup 1 .... .... Wed Nov 28 14:24:48 2018 81930 NA
backup 2 .... .... Wed Nov 28 14:22:45 2018 81930 NA

353 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

We can set the time-to-live even after creating the snapshot. The parameter –delta specifies the number of days until expiration, measured from the current time; in this output, the expiration lands two days after the set ttl command was issued. Note that the output has been edited to fit the slide.

Creating Secure Snapshots

C:\>symsnapvx -sid 217 -sg snapvx_sg establish -name secure_backup -secure -delta 4:12 -nop

Establish operation successfully executed for the storage group snapvx_sg

C:\>symsnapvx -sid 217 list -sg snapvx_sg -detail

Storage Group (SG) Name : snapvx_sg


SG's Symmetrix ID : 000197600217 (Microcode Version: 5978)

---------------------------------------------------------------------------------------------

Sym Flags
Dev Snapshot Name Gen FLRG TSEB Snapshot Timestamp Expiration Date
----- -------------------------------- ---- --------- ----------------------- ---------------
000D1 secure_backup 0 .... .X.. Wed Nov 28 15:08:17 2018 Mon Dec 03
backup 0 .... .... Wed Nov 28 14:26:47 2018 Fri Nov 30
backup 1 .... .... Wed Nov 28 14:24:47 2018 NA
backup 2 .... .... Wed Nov 28 14:22:45 2018 NA

354 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

A secure snapshot is an optional setting that prevents the accidental or intentional deletion of snapshots.
The –secure option creates a snapshot with a secure expiration time either as a number of days plus
hours from the current host time or an absolute date and hour in the future. In this example, the secure
snapshot cannot be terminated until four days and 12 hours have passed from the current host time. The
secure expiration time is set using the –delta option as shown or using the –absolute <Date:Hour>
option.

Flags:

(F)ailed     : X = General Failure, . = No Failure, S = SRP Failure, R = RDP Failure
(L)ink       : X = Link Exists, . = No Link Exists
(R)estore    : X = Restore Active, . = No Restore Active
(G)CM        : X = GCM, . = Non-GCM
(T)ype       : Z = zDP snapshot, . = normal snapshot
(S)ecured    : X = Secured, . = Not Secured
(E)xpanded   : X = Source Device Expanded, . = Source Device Not Expanded
(B)ackground : X = Background define in progress, . = No Background define

Linking Snapshot to Target

C:\>symsnapvx -sid 217 link -sg snapvx_sg -snapshot_name backup -gen 2 -lnsg snapvx_tgt_sg -
nop

Link operation successfully executed for the storage group snapvx_sg

C:\>symsnapvx -sid 217 list -sg snapvx_sg -detail

Storage Group (SG) Name : snapvx_sg


SG's Symmetrix ID : 000197600217 (Microcode Version: 5978)
------------------------------------------------------------------------------------------
Snapshot
Sym Flags Dev Size
Dev Snapshot Name Gen FLRG TSEB Snapshot Timestamp (Tracks)
----- -------------------------------- ---- --------- ------------------------ ----------
000D1 secure_backup 0 .... .X.. Wed Nov 28 15:08:17 2018 81930
backup 0 .... .... Wed Nov 28 14:26:47 2018 81930
backup 1 .... .... Wed Nov 28 14:24:47 2018 81930
backup 2 .X.. .... Wed Nov 28 14:22:45 2018 81930

355 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

The first command shown here links the generation 2 backup snapshot to a Storage Group named
snapvx_tgt_sg using the –lnsg flag. The target device is contained in the snapvx_tgt_sg Storage Group.
The default for linking is the No Copy mode. After issuing the list detail command for the snapped Storage
Group, we see that the generation 2 snapshot now has the Link Exists Flag set.

When Target is Modified

C:\>symsnapvx -sid 217 list -sg snapvx_sg -detail -linked

Storage Group (SG) Name : snapvx_sg


SG's Symmetrix ID : 000197600217 (Microcode Version: 5978)

---------------------------------------------------------------------------------------------
Sym Link Flgs Remaining Done
Dev Snapshot Name Gen Dev FCMDS Snapshot Timestamp (Tracks) (%)
----- ----------------------------- ---- ----- ----- ------------------------ ---------- ----
000D1 backup 2 000DD ..XX. Wed Nov 28 14:22:44 2018 80141 0
----------
80141

356 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

When a target is written to, the original point-in-time snapshot is unaffected—it remains pristine. The
Modified Flag is set to Modified Target Data, as shown here. The % Done and Remaining (Tracks) columns indicate the tracks that have been changed by the writes.

Expanded Snapshot Source Device

C:\>symsnapvx -sid 217 list -sg snapvx_sg -detail

Storage Group (SG) Name : snapvx_sg


SG's Symmetrix ID : 000197600217 (Microcode Version: 5978)

-----------------------------------------------------------------------------------------
Snapshot
Sym Flags Dev Size
Dev Snapshot Name Gen FLRG TSEB Snapshot Timestamp (Tracks)
----- -------------------------------- ---- --------- ------------------------ ----------
000D1 backup 0 .X.. ..X. Fri Jan 04 10:55:43 2019 81930

Flags:

(E)xpanded : X = Source Device Expanded, . = Source Device Not Expanded

357 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

If the source device of a snapshot has been expanded, the (E)xpanded Flag is set. In this case, the device
00D1 was expanded from 10 GB to 15 GB while the snapshot was linked to a target device. The snapshot
size remains the same.

Relinking Snapshot to Target

C:\>symsnapvx -sid 217 relink -sg snapvx_sg -snapshot_name backup -gen 1 -lnsg snapvx_tgt_sg

Polling for Relink................................................Started.


Polling for Relink................................................Done.

Relink operation successfully executed for the storage group snapvx_sg

C:\>symsnapvx -sid 217 list -sg snapvx_sg -detail -linked

Storage Group (SG) Name : snapvx_sg


SG's Symmetrix ID : 000197600217 (Microcode Version: 5978)

---------------------------------------------------------------------------------------------
Sym Link Flgs Remaining Done
Dev Snapshot Name Gen Dev FCMDS Snapshot Timestamp (Tracks) (%)
----- ----------------------------- ---- ----- ----- ------------------------ ---------- ----
000D1 backup 1 000DD ...X. Wed Nov 28 14:24:41 2018 81930 0
----------
81930

358 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

In this example, we are relinking the target to snapshot generation 1 of the same source volume. The
target volumes should be unmounted prior to relinking and then mounted back again to ensure that the
host accesses correct data. We can link/relink different snapshots to target volumes to select the desired
point-in-time.

Restoring from Snapshots
C:\>symsnapvx -sid 217 restore -sg snapvx_sg -snapshot_name backup -gen 1 -nop

Restore operation execution is in progress for the storage group snapvx_sg. Please wait...

Restore operation successfully executed for the storage group snapvx_sg

C:\>symsnapvx -sid 217 list -sg snapvx_sg -detail

Storage Group (SG) Name : snapvx_sg


SG's Symmetrix ID : 000197600217 (Microcode Version: 5978)

----------------------------------------------------------------------------------------------
Snapshot Total
Sym Flags Dev Size Deltas
Dev Snapshot Name Gen FLRG TSEB Snapshot Timestamp (Tracks) (Tracks)
----- ------------------------- ---- --------- ------------------------ ---------- ----------
000D1 secure_backup 0 .... .X.. Wed Nov 28 15:08:11 2018 81930 81930
backup 0 .... .... Wed Nov 28 14:26:42 2018 81930 66362
backup 1 .XX. .... Wed Nov 28 14:24:42 2018 81930 827
backup 2 .... .... Wed Nov 28 14:22:39 2018 81930 826

359 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

Any available snapshot can be restored to the source volume. This will revert the data on the source
volume to that specific point-in-time. In this example, the generation 1 snapshot is now displaying the
Restore Active Flag.

Since the data on the disk will be changing from the perspective of the host, it is recommended to
unmount the source volume prior to performing a restore operation.

After the restore, the source volume can be mounted again to access the correct data. Terminating the
restored session will leave the original snapshot intact. It will only terminate the restored session.

Terminating Snapshots
C:\>symsnapvx -sid 217 terminate -sg snapvx_sg -snapshot_name backup -gen 2 -nop

Polling for Terminate.............................................Started.


Polling for Terminate.............................................Done.

Terminate operation successfully executed for the storage group snapvx_sg

C:\>symsnapvx -sid 217 list -sg snapvx_sg -detail

Storage Group (SG) Name : snapvx_sg


SG's Symmetrix ID : 000197600217 (Microcode Version: 5978)

------------------------------------------------------------------------------------------
Snapshot
Sym Flags Dev Size
Dev Snapshot Name Gen FLRG TSEB Snapshot Timestamp (Tracks)
----- -------------------------------- ---- --------- ------------------------ ----------
000D1 secure_backup 0 .... .X.. Wed Nov 28 15:08:11 2018 81930
backup 0 .... .... Wed Nov 28 14:26:41 2018 81930
backup 1 .XX. .... Wed Nov 28 14:24:41 2018 81930

360 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

Snapshots that have linked targets cannot be terminated. You must first unlink the target before
terminating. Terminating a snapshot that has a restored session would require terminating the restored
session first, followed by terminating the snapshot.
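
The required ordering for both cases, as a sketch using the names from these examples:

C:\>rem Linked target: unlink first, then terminate
C:\>symsnapvx -sid 217 unlink -sg snapvx_sg -snapshot_name backup -gen 1 -lnsg snapvx_tgt_sg -nop
C:\>symsnapvx -sid 217 terminate -sg snapvx_sg -snapshot_name backup -gen 1 -nop

C:\>rem Active restore: terminate the restored session first, then the snapshot
C:\>symsnapvx -sid 217 terminate -sg snapvx_sg -snapshot_name backup -gen 1 -restored -nop
C:\>symsnapvx -sid 217 terminate -sg snapvx_sg -snapshot_name backup -gen 1 -nop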

SnapVX Emulation Support

symclone command                                     symsnapvx command
---------------------------------------------------  ---------------------------------------------------
symclone establish -full A -> B                      symsnapvx establish A + symsnapvx link -copy A -> B
symclone establish A -> B                            symsnapvx establish A + symsnapvx link -copy A -> B
symclone create A -> B + symclone activate A -> B    symsnapvx establish A + symsnapvx link -copy A -> B
symclone restore A <- B                              symsnapvx restore A

361 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

TimeFinder automatically maps TimeFinder/Clone, TimeFinder VP Snap, and TimeFinder/Mirror


commands to the executable of the equivalent SnapVX command. TimeFinder/Mirror commands are first
converted to TimeFinder/Clone using legacy Clone Emulation, and the TimeFinder/Clone commands are
then converted to SnapVX. In some cases, there is no exact equivalent because SnapVX snapshots are
targetless. The plus sign (+) indicates that the command is followed by the next command listed in that
table cell. The letters A and B indicate devices, and the arrow symbols indicate data direction.

Lesson: TimeFinder SnapVX Operations Using
Unisphere for PowerMax
This lesson covers the following topics:

• Performing TimeFinder SnapVX operations using Unisphere for PowerMax

362 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

This lesson covers performing TimeFinder SnapVX operations using Unisphere for PowerMax.

Create Storage Group (1 of 2)

363 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

TimeFinder SnapVX operations are performed on Storage Groups in Unisphere for PowerMax. Unisphere
for PowerMax does not support Device Group or Device Files for SnapVX operations. In this example, a
number of devices are already in a Storage Group and Masking view for host access. If you want to take a
snapshot of just one of the devices, you have to create a new Storage Group.

Navigate to the STORAGE > Storage Groups page and select Create.

Create Storage Group (2 of 2)

364 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

The Provision Storage Wizard opens. Enter a Storage Group name. If the device is in another Storage
Group which has a Service Level associated with it, you must select None for the Storage Resource Pool.
This will limit the Service Level to None as well as disable the Enable Compression selection box. Select
Run Now. This will create an empty Storage Group named snapvx_uni_sg.

Add Device to Storage Group

365 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

Navigate to the STORAGE > Storage Groups details page of the particular SG and select the Volumes
tab. Click the Add Volumes To SG button.

Add Volumes to Storage Group Wizard (1 of 2)

366 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

The Add Volumes to SG Wizard opens. Specify the device you want to add to the SG—00D1 in our
example. As noted previously, this device belongs to another SG. You must deselect the Exclude
Volumes in use selection box to make it available. Click NEXT to continue.

Add Volumes to Storage Group Wizard (2 of 2)

367 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

Select the volume or volumes and click OK.

Create Snapshot

368 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

Navigate to the STORAGE > Storage Groups page and select the SG. Click Protect. This will launch the
Protect Storage Group wizard.

Protect Storage Group Wizard (1 of 4)

369 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

The Protect Storage Group wizard opens. Select Point in Time using SnapVX and then click NEXT to
continue.

Protect Storage Group Wizard (2 of 4)

370 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

In step two, choose an existing snapshot or enter a new snapshot name. In this example, the name of the
snapshot is backup.

The Entity Type can be None—as shown here—or Time to Live. If we leave the Entity Type as None, the
Advanced Options include Enable Force Flag and Enable Secure Snaps.

Protect Storage Group Wizard (3 of 4)

371 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

If you select the Time to Live Entity Type, you can set the Time to Live using the Days and Hours drop
down menus. The Advanced Options include the Enable Force Flag. The Enable Secure Snaps option is
not available when a Time to Live is set.

Protect Storage Group Wizard (4 of 4)

372 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

Shown here is the Protect Storage Group Wizard Summary page. Select Run Now from the ADD TO JOB
LIST drop down menu to create the snapshot now. Note that you can also set a schedule to run the job or
set up a recurring schedule to run the job.

Link Snapshot to Target

373 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

To link a snapshot to a target device, navigate to the DATA PROTECTION > Storage Groups > SG >
SnapVX Snapshots page. Select the snapshot, and click the Link button. In this example, the first
generation snapshot—gen 0—is selected.

Select Target Storage Group

374 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

The next step is to select the target SG. The default setting is to select a new SG name for the target SG.
The source SG name is automatically appended as shown. You can leave the default name or change it.
You can also select an existing target Storage Group as shown on the right. In this case, there is already a target SG named snapvx_tgt_sg configured with a single PowerMax device in it. The Advanced Options
include Copy, Force, Star, and Remote.

After the job is run, a host can mount the TimeFinder SnapVX Linked target.

Other SnapVX Operations

375 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

The generation zero backup snapshot displays a Linked status. Other SnapVX operations can be
performed by selecting the snapshot name and then selecting the More Actions link.

Lesson: Replicating VMFS Datastore
This lesson covers the following topics:

• Using TimeFinder SnapVX to create a snapshot of a VMFS datastore presented to the Primary
ESXi server

• Linking the snapshot to a target and presenting it to the Secondary ESXi server

376 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

This lesson covers replicating a VMware VMFS datastore using TimeFinder SnapVX. A snapshot of the
VMFS datastore presented to the Primary server will be created and linked to a target device. The target is
then accessed on a Secondary ESXi server.

Primary ESXi Datastore

377 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

Using the vSphere Web Client, you find that the Primary ESXi server—esxi-88-67—has access to the
Datastore named Production_Datastore. Click on the Production_Datastore link to view the Datastore
details.

Datastore Details

378 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

Note the naa number of the Extent. You will use this number to correlate the device with the PowerMax
volume, using Unisphere for PowerMax. Click on the Datastore browser tab to view the contents of the
Datastore.

VM Resident on Production Datastore

379 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

Browsing the Production_Datastore shows that it contains the folder StudentVM. This folder contains the
StudentVM.vmx and other files pertaining to the VM StudentVM.

Datastore Browser

380 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

You can open a console to the StudentVM and examine the data. For the purposes of this example, a
folder named Production_data has been created on StudentVM. The objective is to use TimeFinder
SnapVX to take a snapshot of the PowerMax device hosting the Production_Datastore.

You have to identify a suitable target device accessible to a Secondary ESXi server. Then you can link the
snapshot to the target device. Subsequently, you should be able to power on a copy of the StudentVM on the Secondary ESXi Server.

Identifying Device Hosting Production Datastore

381 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

In Unisphere for PowerMax, navigate to the STORAGE > Storage Groups > Volumes page for the
Primary ESXi Server Storage Group. In this example, the Storage Group named primary_esxi_88_67 is
masked to the ESXi server. The Storage Group contains one volume. The volume 000B1 is the PowerMax
device that our Datastore is located on. A listing of the volume details shows the WWN for it.

This matches with the naa number shown previously in the vSphere Web Client. This confirms that the
Primary ESXi Server has access to device 000B1. This device is in SID:217 and its capacity is 10 GB. In
order to take a snapshot of this device and link it to a target, you have to identify a 10 GB device on
SID:217 that has been masked to the Secondary ESXi Server.
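
If no vCenter Server is available, the naa identifiers can also be listed from the ESXi shell for this correlation. A sketch; the host prompt matches this example, and the command output is omitted here:

# Run on the Primary ESXi server; match the naa values against the volume WWNs in Unisphere
[root@esxi-88-67:~] esxcfg-scsidevs -c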

Identifying Target LUN

382 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

Once again using Unisphere for PowerMax, navigate to the Storage Group for the Secondary ESXi Server
and identify the devices it has access to. We will use device 000AD. Listing the details of this device
shows the WWN for it.

You can match this WWN with the naa number reported in the vSphere Client for the Secondary ESXi
Server.

LUNs Accessible to Secondary ESXi Server

383 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

Using the vSphere Web Client we find that the Secondary ESXi server has access to a few devices. Note
the naa number of the highlighted device. This correlates with the WWN of the PowerMax device 000AD.

Create Snapshot of Production Device

384 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

In Unisphere for PowerMax, SnapVX operations can only be performed on Storage Groups. As this is the
first time you will be creating a snapshot for the Production Device, navigate to the STORAGE > Storage
Groups page and select the Storage Group.

The Storage Group primary_esxi_88_67 was created when the production device was masked to the
Primary ESXi Server. To proceed, click the Protect button.

Protect Storage Group Wizard (1 of 3)

385 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

The Protect Storage Group Wizard opens. Select the Point in Time using SnapVX radio button and then
click NEXT.

Protect Storage Group Wizard (2 of 3)

386 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

In this example, the snapshot is named datastore_backup and a 5 day Time To Live is set. To proceed,
click NEXT.

Protect Storage Group Wizard (3 of 3)

387 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

Review the SnapVX Summary page and then select Run Now from the ADD TO JOB LIST drop down
menu.

Link Snapshot to Target

388 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

To link the snapshot to the target Storage Group, navigate to the DATA PROTECTION > Storage
Groups > SnapVX Snapshots page, select the snapshot, and click the Link button. The Link Snapshot
dialog box opens. In our example, we select an Existing Target Storage Group. The target device is in the
Storage Group named secondary_esxi_88_68.

Mount Datastore

[root@esxi-88-68:~] esxcfg-volume -l
Scanning for VMFS-3/VMFS-5 host activity (512 bytes/HB, 2048 HBs).
VMFS UUID/label: 5c018873-d2208d22-7d8f-005056955e4a/Production_Datastore
Can mount: Yes
Can resignature: Yes
Extent name: naa.60000970000197600217533030304144:1 range: 0 - 9983 (MB)

[root@esxi-88-68:~] esxcfg-volume -r 5c018873-d2208d22-7d8f-005056955e4a


Resignaturing volume 5c018873-d2208d22-7d8f-005056955e4a
[root@esxi-88-68:~]

389 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

Mounting the datastore can be done using the vSphere Web Client when a vCenter Server is configured. In our example, there is no vCenter Server deployed, so we used ESXi shell commands. Open a PuTTY session to the ESXi server and issue the commands shown. The esxcfg-volume -l command lists unresolved VMFS volume copies; we then used the -r option to resignature and mount the datastore using its UUID.

View Datastore Using Web Client

390 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

Now that the datastore has been mounted, we can view the resignatured datastore in the Datastores tab of the Web Client Storage page. Select the Register a VM link to proceed.

Register VM

391 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

The Register VM dialog box opens. Select the datastore and then select the Student VM. Right click on
the StudentVM.vmx file and select Register VM.
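
Because no vCenter Server is deployed in this example, the registration could equally be done from the ESXi shell with vim-cmd. The datastore path below is illustrative only, since resignaturing typically renames the datastore with a snap- prefix:

# Check the actual datastore name after resignaturing before using this path
[root@esxi-88-68:~] vim-cmd solo/registervm /vmfs/volumes/snap-1a2b3c4d-Production_Datastore/StudentVM/StudentVM.vmx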

Power On VM

392 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

The VM is now available on the secondary ESXi server. Select the VM and click the Power on link.
Answer the Virtual Machine Message question. Choose I Copied It.

VM on Secondary ESXi Server

393 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

You can open a console to the VM on the Secondary ESXi server and verify that this VM has the same
data as the VM on the Primary ESXi server at the point in time of the snapshot.

Lab: TimeFinder SnapVX Operations

This lab covers


• Creating snapshots
• Accessing snapshot data from a secondary host
• Restoring to source from snapshots
• Restoring to source from modified target device

394 © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.

This lab covers creating and linking TimeFinder SnapVX snapshots to target devices. It also covers
restoring snapshot data to the source device as well as restoring modified target data back to the source
device.

Lab: TimeFinder SnapVX Replication of VMFS Datastore

This lab covers


• Identifying and correlating source and target devices
with devices accessible by primary and secondary
ESXi servers
• Creating a VMFS datastore on the source device
and deploying a Virtual Machine
• Using Unisphere for PowerMax to create a SnapVX
snapshot of the datastore and linking the snapshot
to the target
• Accessing the linked target from the secondary ESXi
server and powering on the Virtual Machine on the
secondary ESXi server


This lab covers performing TimeFinder SnapVX replication of a VMFS Datastore using Unisphere for
PowerMax and VMware vSphere client.

Module Summary

Key points covered in this module:

• TimeFinder SnapVX concepts

• Performing TimeFinder SnapVX operations using SYMCLI and Unisphere for PowerMax

• Replicating a VMFS datastore using TimeFinder SnapVX


This module covered TimeFinder SnapVX local replication technology on PowerMax arrays. Concepts,
terminology, and operational details of creating snapshots and presenting them to target hosts were
discussed. Performing TimeFinder SnapVX operations using SYMCLI and Unisphere for PowerMax was
presented. Use of TimeFinder SnapVX for replication in a virtualized environment was also presented.

Module: SRDF/Synchronous Operations

Upon completion of this module, you should be able to:

• Create Dynamic SRDF Groups and Dynamic SRDF Pairs

• Perform SRDF/S operations using SYMCLI and Unisphere for PowerMax

• Perform Disaster Recovery of a VMFS datastore


This module focuses on SRDF operations in synchronous mode. Use of SYMCLI and Unisphere for
PowerMax to perform SRDF operations is presented in detail. A method for performing DR operations on
a VMFS datastore in a virtualized environment is discussed.

Lesson: SRDF Initial Setup Operations
This lesson covers the following topics:

• Listing the SRDF environment

• Creating Dynamic SRDF Groups

• Creating Dynamic SRDF Pairs

• Grouping SRDF devices into SYMCLI device groups

• Changing SRDF mode and suspending and resuming SRDF links


This lesson covers initial SRDF setup operations. Creating dynamic SRDF groups and SRDF pairs using
SYMCLI is presented in detail. Basic SRDF operations are also discussed.

Listing Environment

C:\>symcfg list

S Y M M E T R I X

Mcode Cache Num Phys Num Symm

SymmID Attachment Model Version Size (MB) Devices Devices

000197600217 Local PowerMax_8000 5978 457728 14 296

000196802253 Remote VMAX100K 5977 207872 0 213

C:\>


In our example, a PowerMax—SID:217—and a VMAX 100K array—SID:253—have been configured with
RF emulation. The remote adapters of each array are zoned to access the remote adapters of the other
array.

The commands shown are executed from a host attached to SID:217, the Local PowerMax 8000 array.
The Num Phys Devices column indicates that the host from which the command was executed has
physical access to 14 devices on SID:217. The Num Symm Devices column indicates the total number of
devices that have been configured on the storage arrays.

Listing Remote Adapters

C:\>symcfg -sid 217 list -ra all

Symmetrix ID: 000197600217 (Local)

         S Y M M E T R I X   R D F   D I R E C T O R S

              Remote         Local     Remote     Status
 Ident  Port  SymmID         RA Grp    RA Grp    Dir    Port
 -----  ----  ------------  --------  --------  --------------
 RF-1F     6  000196802253    1 (00)    1 (00)  Online Online
           6  000196802253    2 (01)    2 (01)  Online Online
           6  000196802253    3 (02)    3 (02)  Online Online
           6  000196802253  250 (F9)  250 (F9)  Online Online
 RF-2F     6  000196802253    1 (00)    1 (00)  Online Online
           6  000196802253    2 (01)    2 (01)  Online Online
           6  000196802253    3 (02)    3 (02)  Online Online
           6  000196802253  250 (F9)  250 (F9)  Online Online

C:\>symcfg -sid 253 list -ra all

Symmetrix ID: 000196802253 (Remote)

         S Y M M E T R I X   R D F   D I R E C T O R S

              Remote         Local     Remote     Status
 Ident  Port  SymmID         RA Grp    RA Grp    Dir    Port
 -----  ----  ------------  --------  --------  --------------
 RF-1F    11  000197600217    1 (00)    1 (00)  Online Online
          11  000197600217    2 (01)    2 (01)  Online Online
          11  000197600217    3 (02)    3 (02)  Online Online
          11  000197600217  250 (F9)  250 (F9)  Online Online
 RF-2F    11  000197600217    1 (00)    1 (00)  Online Online
          11  000197600217    2 (01)    2 (01)  Online Online
          11  000197600217    3 (02)    3 (02)  Online Online
          11  000197600217  250 (F9)  250 (F9)  Online Online


Shown here are the outputs of the symcfg list -ra all command from both storage arrays.
Currently there are four SRDF groups configured. SID:217 uses port 6 on RF-1F and RF-2F, and SID:253
uses port 11 on RF-1F and RF-2F.

Listing RDF Groups

C:\>symcfg -sid 217 list -rdfg all

Symmetrix ID : 000197600217

S Y M M E T R I X R D F G R O U P S

Local Remote Group RDFA Info


------------ --------------------- --------------------------- ---------------
LL Flags Dir Flags Cycle
RA-Grp sec RA-Grp SymmID ST Name LPDS CHTM Cfg CSRM time Pri
------------ --------------------- --------------------------- ----- ----- ---
1 ( 0) 10 1 ( 0) 000196802253 OD RDFG1 X... ..X. F-S -IS- 15 33
2 ( 1) 10 2 ( 1) 000196802253 OD esx65_75 X... ..X. F-S -IS- 15 33
3 ( 2) 10 3 ( 2) 000196802253 OD esx67_77 X... ..X. F-S -IS- 15 33
250 (F9) 10 250 (F9) 000196802253 OD M_02172253 X... ..X. F-S -IS- 15 33


Shown here is the symcfg -sid 217 list -rdfg all command output confirming the four SRDF
Groups.

Group Flags :

Prevent Auto (L)ink Recovery : X = Enabled, . = Disabled

Prevent RAs Online Upon (P)ower On: X = Enabled, . = Disabled

Link (D)omino : X = Enabled, . = Disabled

(S)TAR/SQAR mode : N = Normal, R = Recovery, . = OFF

S = SQAR Normal, Q = SQAR Recovery

RDF Software (C)ompression : X = Enabled, . = Disabled, - = N/A

RDF (H)ardware Compression : X = Enabled, . = Disabled, - = N/A

RDF Single Round (T)rip : X = Enabled, . = Disabled, - = N/A

RDF (M)etro : X = Configured, . = Not Configured

RDFA Flags :

(C)onsistency : X = Enabled, . = Disabled, - = N/A

(S)tatus : A = Active, I = Inactive, - = N/A

(R)DFA Mode : S = Single-session, M = MSC, - = N/A

(M)sc Cleanup : C = MSC Cleanup required, - = N/A

Dynamic Configuration State

C:\>symcfg -sid 217 list -v|more

Symmetrix ID: 000197600217 (Local)


Time Zone : Eastern Standard Time

Product Model : PowerMax_8000


Symmetrix ID : 000197600217

Microcode Version (Number) : 5978 (175A0000)


Microcode Registered Build : 1
-----Output Truncated-----
Switched RDF Configuration State : Enabled
Concurrent RDF Configuration State : Enabled
Dynamic RDF Configuration State : Enabled
Concurrent Dynamic RDF Configuration : Enabled


The Dynamic RDF Configuration State is Enabled by default on PowerMax and VMAX Family arrays. A
verbose listing of SID:217 shows the Dynamic RDF Configuration State as Enabled; this should be
verified for SID:253 as well. The ability to dynamically create SRDF groups, combined with dynamic
device pairs, enables you to create, delete, and swap SRDF R1-R2 pairs. PowerMax and VMAX Family
arrays support 250 SRDF groups per port.

Display SRDF Connectivity – symsan Command

C:\>symsan -sid 217 list -sanrdf -dir all

Symmetrix ID: 000197600217

Flags Remote
------ ----------- ------------------------------------
Dir Prt Lnk
Dir:P CS S S Symmetrix ID Dir:P WWN
------ --- --- --- ------------ ------ ----------------
01F:06 SO O C 000196802253 01F:11 500009735823340B
01F:06 SO O C 000196802253 02F:11 500009735823344B
02F:06 SO O C 000196802253 01F:11 500009735823340B
02F:06 SO O C 000196802253 02F:11 500009735823344B


The symsan command can be used to discover and display SRDF connectivity between the arrays. It
helps in situations where SRDF groups have not yet been created between the storage array pairs, and
is particularly useful for determining the local and remote RDF directors, as well as the full serial number
of the remote array. The full serial number of the remote array is required to create the first Dynamic
SRDF group; subsequent SRDF groups can be created by specifying just the last few digits of the remote
array's serial number. The output verifies that the RF directors on SID:217 can indeed access the RF
directors on SID:253 over the SAN.

Legend:

Director:

(C)onfig : S = Fibre-Switched, H = Fibre-Hub

G = GIGE, - = N/A

(S)tatus : O = Online, F = Offline, D = Dead, - = N/A

Port:

(S)tatus : O = Online, F = Offline, - = N/A

Link:

(S)tatus : C = Connected, P = ConnectInProg

D = Disconnected, I = Incomplete, - = N/A

Creating Dynamic SRDF Group

C:\>symrdf addgrp -label SRDF_Sync -sid 217 -remote_sid 253 -rdfg 101 -remote_rdfg 101 -dir 1F:06,2F:06 -remote_dir 1F:11,2F:11

Execute a Dynamic RDF Addgrp operation for group


'SRDF_Sync' on Symm: 000197600217 (y/[n]) ? y

Successfully Added Dynamic RDF Group 'SRDF_Sync' for Symm: 000197600217

C:\>


The symrdf addgrp command creates an empty Dynamic SRDF group on the source and the target
arrays and logically links them. The directors and the respective ports for the arrays are specified in the
command. The physical links and communication between the two arrays must exist for this command to
succeed.

Note that if this was the first SRDF group between these two arrays, the full 12-digit serial numbers of the
two arrays would need to be specified. Otherwise, an error message would be displayed.

The SRDF group number in the command (-rdfg and -remote_rdfg) is in decimal. In the PowerMax or
VMAX Family array, it is converted to hexadecimal. The decimal group numbers start at 01 but the
hexadecimal group numbers start at 00. Hence the hexadecimal group numbers will be off by one.
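If no SRDF group existed yet between these two arrays, the same command would use the full 12-digit
serial numbers. A minimal sketch, with a hypothetical label and group number:

C:\>symrdf addgrp -label First_Grp -sid 000197600217 -remote_sid 000196802253 -rdfg 110 -remote_rdfg 110 -dir 1F:06,2F:06 -remote_dir 1F:11,2F:11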

Listing Configured SRDF Groups

C:\>symcfg -sid 217 list -rdfg all

Symmetrix ID : 000197600217

S Y M M E T R I X R D F G R O U P S

Local Remote Group RDFA Info


------------ --------------------- --------------------------- ---------------
LL Flags Dir Flags Cycle
RA-Grp sec RA-Grp SymmID ST Name LPDS CHTM Cfg CSRM time Pri
------------ --------------------- --------------------------- ----- ----- ---
1 ( 0) 10 1 ( 0) 000196802253 OD RDFG1 X... ..X. F-S -IS- 15 33
2 ( 1) 10 2 ( 1) 000196802253 OD esx65_75 X... ..X. F-S -IS- 15 33
3 ( 2) 10 3 ( 2) 000196802253 OD esx67_77 X... ..X. F-S -IS- 15 33
101 (64) 10 101 (64) 000196802253 OD SRDF_Sync X... ..X. F-S -IS- 15 33
250 (F9) 10 250 (F9) 000196802253 OD M_02172253 X... ..X. F-S -IS- 15 33


The command shown gives detailed information on the currently configured SRDF Groups. The SRDF
Group we have just created is listed.

We have created an SRDF Group with the label SRDF_Sync and the SRDF Group number 101 in
decimal; the hexadecimal value 64 is shown in parentheses. It is convenient, though not a requirement,
for the SRDF Group numbers on the local and remote arrays to be identical.

Legend:

Group (S)tatus : O = Online, F = Offline

Group (T)ype : S = Static, D = Dynamic, W = Witness

Director (C)onfig : F-S = Fibre-Switched, F-H = Fibre-Hub

G = GIGE, E = ESCON, T = T3, - = N/A

Creating SRDF Device Pairs
C:\>symrdf createpair -sid 217 -f rdf_device_pairs.txt -type R1 -rdfg 101 -establish -nop

An RDF 'Create Pair' operation execution is in progress for device


file 'rdf_device_pairs.txt'. Please wait...

Create RDF Pair in (0217,101)....................................Started.


Create RDF Pair in (0217,101)....................................Done.
Mark target device(s) in (0217,101) for full copy from source....Started.
Devices: 00D2-00D3 in (0217,101).................................Marked.
Mark target device(s) in (0217,101) for full copy from source....Done.
Merge track tables between source and target in (0217,101).......Started.
Devices: 00D2-00D3 in (0217,101).................................Merged.
Merge track tables between source and target in (0217,101).......Done.
Resume RDF link(s) for device(s) in (0217,101)...................Started.
Resume RDF link(s) for device(s) in (0217,101)...................Done.

The RDF 'Create Pair' operation successfully executed for device


file 'rdf_device_pairs.txt'.


The symrdf createpair command takes the dynamic-capable device pairs listed in the text file—
rdf_device_pairs.txt—and makes them R1-R2 pairs. Devices are created as dynamic capable by
default. By specifying -establish, the newly created R2 devices are synchronized with the data from
the newly created R1 devices. In this example, the file contains the following entries:

rdf_device_pairs.txt

0D2 06C

0D3 06D

The command has been executed from the host attached to SID:217. The first column in the file lists the
devices on the PowerMax on which the command is executed. The second column lists the
corresponding devices on the remote VMAX 100K—SID:253. Specifying -type R1 makes the devices in
the first column R1s, and the devices in the second column become their corresponding R2s. The mode
of operation for newly created SRDF pairs is set to Adaptive Copy Disk Mode by default. Adaptive Copy
Disk Mode is discussed later in this module.

Confirming Device Pairs
C:\>symrdf query -sid 217 -rdfg 101 -f rdf_device_pairs.txt

Symmetrix ID : 000197600217 (Microcode Version: 5978)


Remote Symmetrix ID : 000196802253 (Microcode Version: 5977)
RDF (RA) Group Number : 101 (64)

Source (R1) View Target (R2) View FLAGS


--------------------------------- ------------------------ ----- ------------
ST LI ST
Standard A N A
Logical Sym T R1 Inv R2 Inv K Sym T R1 Inv R2 Inv RDF Pair
Device Dev E Tracks Tracks S Dev E Tracks Tracks MCES STATE
--------------------------------- -- ------------------------ ----- ------------
N/A 000D2 RW 0 0 RW 0006C WD 0 0 D..E Synchronized
N/A 000D3 RW 0 0 RW 0006D WD 0 0 D..E Synchronized
Total ------- ------- ------- -------
Track(s) 0 0 0 0
MB(s) 0.0 0.0 0.0 0.0


As noted earlier, the SRDF mode is set to Adaptive Copy Disk Mode by default. The establish operation
synchronizes data from the new R1 device to the new R2 device. This command has been executed from
the host attached to SID:217. The R1 devices are created on SID:217 and their corresponding R2 devices
are created on SID:253. R1 device 0D2 is paired with R2 device 06C, and R1 device 0D3 is paired with
R2 device 06D.

Legend for FLAGS:

(M)ode of Operation : A = Async, S = Sync, E = Semi-sync, D = Adaptive Copy


Disk Mode

: W = Adaptive Copy WP Mode, M = Mixed, T = Active

(C)onsistency State : X = Enabled, . = Disabled, M = Mixed, - = N/A

(E)xempt : X = Enabled, . = Disabled, M = Mixed, - = N/A

R1/R2 Device (S)ize : E = R1 EQ R2, 1 = R1 GT R2, 2 = R2 GT R1, - = N/A

Deleting Device Pairing

• Removes the pairing information from the array


• Must suspend SRDF links before issuing symrdf deletepair command
– Link state must be NR, and the pair state must be Suspended, Split, or FailedOver

• Canceling dynamic SRDF pairings changes the type of the device group
from R1 or R2 to Regular
• Devices in the device group are changed from SRDF devices to SRDF-
capable standard devices, and the SYMAPI database is updated


The symrdf deletepair command cancels the SRDF pairs listed in the specified device file. Before
deletepair can be invoked, the pairs must be suspended. The SRDF Group is not deleted by this
operation. If the SRDF Group is to be deleted, the symrdf removegrp command is used after the
deletepair operation.

Example:

C:\>symrdf suspend -sid 97 -file grp5.txt -rdfg 5

C:\>symrdf deletepair -sid 97 -file grp5.txt -rdfg 5
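If the now-empty SRDF group is no longer needed, it can then be removed. A sketch continuing the
example above:

C:\>symrdf removegrp -sid 97 -rdfg 5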

Identify SRDF Devices Accessible to Host (1 of 2)
C:\>symrdf list pd

Symmetrix ID: 000197600217

Local Device View


-----------------------------------------------------------------------------
STATUS FLAGS RDF S T A T E S
Sym Sym RDF --------- ----- R1 Inv R2 Inv ----------------------
Dev RDev Typ:G SA RA LNK MTES Tracks Tracks Dev RDev Pair
----- ----- -------- --------- ----- -------- -------- --- ---- -------------

000D2 0006C R1:101 RW RW RW D1.E 0 0 RW WD Synchronized


000D3 0006D R1:101 RW RW RW D1.E 0 0 RW WD Synchronized

Total -------- --------


Track(s) 0 0
MB(s) 0.0 0.0


The SYMCLI command symrdf list pd lists all SRDF devices accessible to the host. The command
has been executed on the host attached to SID:217. In this example, the host has access to two SRDF
devices—0D2 and 0D3—on SID:217. As can be seen under the RDF Typ:G column, the devices are type
R1 and were created in SRDF Group 101. The mode of SRDF operation for these pairs is Adaptive Copy
Disk Mode, and currently all the R1-R2 pairs are in a Synchronized state. The local R1 devices—the Sym
Dev column of the output—0D2 and 0D3 are paired with corresponding R2 devices—the Sym RDev
column of the output—06C and 06D.

Legend for FLAGS:

(M)ode of Operation : A = Async, S = Sync, E = Semi-sync, D = Adaptive Copy


Disk Mode

: W = Adaptive Copy WP Mode, M = Mixed, T = Active

Mirror (T)ype : 1 = R1, 2 = R2

(E)xempt : X = Enabled, . = Disabled, M = Mixed, - = N/A

R1/R2 Device (S)ize : E = R1 EQ R2, 1 = R1 GT R2, 2 = R2 GT R1, - = N/A

Identify SRDF Devices Accessible to Host (2 of 2)
C:\>sympd list

Symmetrix ID: 000197600217

Device Name Dir Device


---------------------------- ------- -------------------------------------
Cap
Physical Sym SA :P Config Attribute Sts (MB)
---------------------------- ------- -------------------------------------
\\.\PHYSICALDRIVE1 00001 01D:005 TDEV N/Grp'd ACLX WD 6
\\.\PHYSICALDRIVE2 000A2 01D:005 TDEV N/Grp'd RW 6
\\.\PHYSICALDRIVE3 000A3 01D:005 TDEV N/Grp'd RW 6
\\.\PHYSICALDRIVE4 000D1 01D:005 TDEV N/Grp'd RW 15360
\\.\PHYSICALDRIVE5 000D2 01D:005 RDF1+TDEV N/Grp'd RW 10241
\\.\PHYSICALDRIVE6 000D3 01D:005 RDF1+TDEV N/Grp'd RW 10241
\\.\PHYSICALDRIVE7 000D4 01D:005 TDEV N/Grp'd RW 10241
-----Output Truncated-----


The sympd list command gives the list of all the devices that the host can access on the array. This
command is used to correlate the host physical device name with the array device number. We see that
the host addresses the R1 devices as PHYSICALDRIVE5 and PHYSICALDRIVE6. To format the devices,
create partitions, and create and mount file systems, the physical device names of the host are used.

Creating SRDF Device Groups – SYMCLI

• All devices in a device group must be in the same Symmetrix array


• All devices must be of the same type (RDF1, RDF2, RDF21, or Regular)

C:\>symdg create -type R1 syncsrcdg

C:\>set SYMCLI_DG=syncsrcdg

C:\>symdg addall dev -range 0D2:0D3


A device group is a user-created object for viewing and managing related array devices. All devices in a
device group should be on the same array. There are five types of device groups: RDF1, RDF2, RDF21,
ANY, or REGULAR. If a type is not specified explicitly when creating a device group, a device group of
type REGULAR is created by default. A device group of type REGULAR cannot contain SRDF devices; a
device group of type RDF1 cannot contain R2 devices; and a device group of type RDF2 cannot contain
R1 devices. When performing SRDF/S operations, SYMCLI commands can be executed for ALL devices
in the device group or a subset of them. For SRDF/A operations, the commands should be executed for
ALL devices in the SRDF Group.

Storage Administrators must create a device group of type RDF1 or RDF2 for SRDF operations, as
appropriate. In this example, a device group of type R1—RDF1—is created so that the R1 devices 0D2
and 0D3 can be added to it. Note that the environment variable SYMCLI_DG has been set to the device
group that was created. When this variable is set, subsequent commands to manage the device group do
not need the -g <device_group_name> flag in the command syntax.

By default, the device group definition is stored on the host where the symdg create command was
executed. To manage the device group from other hosts connected to the same array, the Group Name
Services (GNS) daemon is used.
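Devices can also be added to a device group individually by device name rather than by range. A minimal
sketch using the symld command with one of the devices from this example:

C:\>symld -g syncsrcdg add dev 0D2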

Device Group Details (1 of 2)
C:\>symdg show syncsrcdg |more

Group Name: syncsrcdg

Group Type : RDF1 (RDFA)


Device Group in GNS : No
Valid : Yes
Symmetrix ID : 000197600217
Group Creation Time : Tue Jan 08 14:45:37 2019
Vendor ID : EMC Corp
Application ID : SYMCLI

Number of STD Devices in Group : 2


Number of Locally-associated BCV's : 0
Number of Locally-associated VDEV's : 0
Number of Locally-associated TGT's : 0
-----Output Truncated-----


The symdg show command displays detailed group information for any specific device group. The
device group syncsrcdg contains 2 local standard devices. The device group type is RDF1. The
Symmetrix serial number is also displayed. If there are any BCVs, VDEVs, etc. associated with or added
to the group, this information is also displayed.

Device Group Details (2 of 2)

Standard (STD) Devices (2):


{
-----------------------------------------------------------------------------------
Sym Device Cap
LdevName PdevName Dev Config Att. Sts (MB)
-----------------------------------------------------------------------------------
DEV001 \\.\PHYSICALDRIVE5 000D2 RDF1+TDEV RW 10241
DEV002 \\.\PHYSICALDRIVE6 000D3 RDF1+TDEV RW 10241
}

-----Output Truncated-----


Further information on the devices in the group, as well as relevant SRDF information is also presented
with the symdg show command. The two devices have been assigned the Logical Device Names of
DEV001 and DEV002 by default. The host—Windows OS in this example—addresses the two devices as
PHYSICALDRIVE5 and PHYSICALDRIVE6.

symrdf Command Syntax

symrdf -g <device_group_name> <Action> [Options]

Actions: suspend, resume, set mode, failover, update,
         failback, split, establish, restore


Users can perform a number of SRDF operations using host-based SYMCLI commands. Major SRDF
operations or actions include: suspend, resume, set mode, failover, update, failback, split, establish, and
restore.

SRDF Modes

• Synchronous—up to 200 kilometers


– Write acknowledged after target device has received and checked the data
– R1 and R2 devices always contain identical data

• Asynchronous—unlimited distance
– Writes from production host are acknowledged immediately by local array
– Maintains a dependent-write consistent copy between the R1 and R2 devices

• Adaptive Copy Disk


– Designed to transfer large amounts of data without loss of performance
– New data accumulates on the R1, marked as invalid tracks, no guarantee of R2
data consistency if not synchronized


In SRDF/Synchronous mode, the array responds to the host that issued a write operation to the source
device only after the array containing the target device acknowledges that it has received and checked the
data. Synchronous mode ensures that the source R1 and target R2 devices contain identical data.

SRDF/Asynchronous (SRDF/A) is a long-distance disaster restart solution with fast application response
times. SRDF/A maintains a dependent-write consistent copy between the R1 and R2 devices across any
distance with no impact to the application.

Adaptive copy disk mode is designed to transfer large amounts of data without loss of performance.
Adaptive copy mode allows the R1 and R2 devices to be more than one I/O out of synchronization. Unlike
the asynchronous mode, adaptive copy mode does not guarantee a dependent-write consistent copy of
data on R2 devices. Because the array cannot fully guard against data loss should a failure occur, Dell
EMC recommends:

1. Use adaptive copy disk mode to transfer the bulk of your data to target devices.

2. Then switch to synchronous mode to ensure full data protection or asynchronous mode to ensure full
data consistency.

The amount of data out of synchronization between the R1 and the R2 devices at any given time is
determined by the maximum skew value.

In adaptive copy disk mode (acp_disk), new data accumulates on the R1 until it can be transferred to the
R2.
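A minimal sketch of that recommended two-step approach for this module's device group: bulk-copy in
adaptive copy disk mode, confirm synchronization, then switch to synchronous mode (the set mode
syntax is shown in detail later in this lesson):

C:\>symrdf -g syncsrcdg set mode acp_disk -nop
C:\>symrdf -g syncsrcdg verify -synchronized
C:\>symrdf -g syncsrcdg set mode sync -nop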

Mixed SRDF Modes on Remote Adapters
C:\>symqos -sid 217 -ra -dir 1F set io -sync 50 -async 40 -copy 10

C:\>symqos -sid 217 list -ra -io

RA IO State : Enabled

System Defaults:

Synchronous IOs (%) : 70
Asynchronous IOs (%) : 20
Copy IOs (%) : 10

RDF Directors:

Flg IO Percent
Ident R Sync Async Copy
------ --- ---- ----- ----
RF-1F X 50 40 10
RF-2F . 70 20 10


RA CPU resource distribution for Synchronous, Asynchronous, and Copy modes can be set either system
wide—affecting all RAs—or on a subset of RAs. The resource distribution can be enabled or disabled.
The system defaults, as seen here, are 70/20/10 for Sync/Async/Copy modes.

For purposes of illustration, the distribution can be changed for one of the directors if necessary. In this
case, RF-1F has been changed to 50/40/10 for Sync/Async/Copy modes.

Legend for Flg:

(R)A IO Set: X = Set, . = Default, - = N/A

Changing SRDF Mode of Operation
C:\>symrdf set mode sync -nop

C:\>symrdf query
-----Output Truncated-----

Source (R1) View Target (R2) View FLAGS


--------------------------------- ------------------------ ----- ------------
ST LI ST
Standard A N A
Logical Sym T R1 Inv R2 Inv K Sym T R1 Inv R2 Inv RDF Pair
Device Dev E Tracks Tracks S Dev E Tracks Tracks MCES STATE
--------------------------------- -- ------------------------ ----- ------------
DEV001 000D2 RW 0 0 RW 0006C WD 0 0 S..E Synchronized
DEV002 000D3 RW 0 0 RW 0006D WD 0 0 S..E Synchronized

Total ------- ------- ------- -------


Track(s) 0 0 0 0
MB(s) 0.0 0.0 0.0 0.0


The symrdf set mode command changes the SRDF mode of operation. In this example, the mode has
been changed to Synchronous for these two R1-R2 pairs, indicated by the S in the M column of the
output. In normal SRDF operations, the R1 device presents a Read Write (RW) status to its host and
the corresponding R2 device presents a Write Disabled (WD) status to its host. Data written to the R1
device is sent over the links to the R2 storage system. The meaning of the R1/R2 Inv(alid) Tracks
columns is discussed throughout this module.

Suspending SRDF Links
C:\>symrdf suspend -nop

C:\>symrdf query
-----Output Truncated-----

Source (R1) View Target (R2) View FLAGS


--------------------------------- ------------------------ ----- ------------
ST LI ST
Standard A N A
Logical Sym T R1 Inv R2 Inv K Sym T R1 Inv R2 Inv RDF Pair
Device Dev E Tracks Tracks S Dev E Tracks Tracks MCES STATE
--------------------------------- -- ------------------------ ----- ------------
DEV001 000D2 RW 0 7278 NR 0006C WD 0 0 S..E Suspended
DEV002 000D3 RW 0 7232 NR 0006D WD 0 0 S..E Suspended

Total ------- ------- ------- -------


Track(s) 0 14510 0 0
MB(s) 0.0 1813.8 0.0 0.0


Suspend is a singular operation. Data transfer from the source devices to the target devices is stopped.
The links for these devices are logically set to Not Ready (NR). This operation only affects the targeted
devices in the device group. SRDF device pairs in other device groups and other SRDF Groups are not
affected even if they share the same Remote Directors. Physical links and the RA communication paths
are still available. New writes to the source devices accumulate as invalid tracks to the R2 in the R2 Inv
Tracks column. The R1s continue to be Read Write enabled and the R2s continue to be Write Disabled.

Resuming SRDF Links
C:\>symrdf resume -nop

C:\>symrdf query
-----Output Truncated-----

Source (R1) View Target (R2) View FLAGS


--------------------------------- ------------------------ ----- ------------
ST LI ST
Standard A N A
Logical Sym T R1 Inv R2 Inv K Sym T R1 Inv R2 Inv RDF Pair
Device Dev E Tracks Tracks S Dev E Tracks Tracks MCES STATE
--------------------------------- -- ------------------------ ----- ------------
DEV001 000D2 RW 0 2 RW 0006C WD 0 0 S..E SyncInProg
DEV002 000D3 RW 0 0 RW 0006D WD 0 0 S..E Synchronized

Total ------- ------- ------- -------


Track(s) 0 2 0 0
MB(s) 0.0 0.3 0.0 0.0


Resume is a singular operation. To invoke it, the RDF pair(s) must already be in the Suspended state.
Data transfer from R1 to R2 is resumed. The pair state remains SyncInProg until all accumulated invalid
tracks for the pair have been transferred. Invalid tracks are transferred to the R2 in any order, so write
serialization is not maintained. The link is set to Read Write. The R1s continue to be Read Write enabled
and the R2s continue to be Write Disabled.

Manage SRDF Operations Using Storage Groups

• symrdf command can be executed on storage groups using the -sg option

symrdf -sg <storagegroup> -sid <SymmID> -rdfg <grpNum> <Action>

• Create SRDF pairs using storage groups

For example:

symrdf createpair -sid <SymmID> -sg <storagegroup> -rdfg <grpNum> -type <r1|r2> -remote_sg <storagegroup>


Starting with Solutions Enabler 8.0.2/HYPERMAX OS Q1 2015 SR, you can now manage SRDF
operations using Storage Groups. Storage Groups (SGs) are a collection of devices stored on the array
that are used by an application, a server, or a collection of servers. Refer to Dell EMC Solutions Enabler
Array Controls and Management CLI User Guide for more information on Storage Groups.
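For example, a hedged sketch that pairs the devices of a hypothetical local Storage Group prod_sg with
a remote Storage Group dr_sg (the SG names and group number are illustrative, not from this lab):

C:\>symrdf createpair -sid 217 -sg prod_sg -rdfg 102 -type R1 -remote_sg dr_sg -establish -nop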

Lesson: SRDF Disaster Recovery Operations
This lesson covers the following topics:

• SRDF Failover

• SRDF Update

• SRDF Failback


This lesson covers SRDF Disaster Recovery operations. Device and link states under different conditions
are presented in detail. Host considerations when performing DR operations are also discussed.

SRDF Disaster Recovery Operations

• Failover:
– Makes a copy of the data on target devices—R2s—available to the host accessing these
devices on the target array
– Invoked after a disaster—host, storage array, or site failure
– Can be used for maintenance operations on the source site: Provides data availability from
the target devices, during host, storage array, or site maintenance

• Update:
– Begins transfer of accumulated invalid tracks from the R2s to the R1s, while production work
continues on the R2s

• Failback:
– Resumes operation back on the primary host accessing the source devices—R1s. All
changes that are made to the R2s when failed over are transferred back to the source devices
– Primary host can access the R1 devices when the command completes without waiting for the
data transfer to complete


The SRDF disaster recovery operations are:

• Failover from the source side to the target side, switching data processing to the target side

• Update the source side after a failover while the target side is still used for applications

• Failback from the target side to the source side by switching data processing to the source side

SRDF Failover

C:\>symrdf -g syncsrcdg failover

Execute an RDF 'Failover' operation for device


group 'syncsrcdg' (y/[n]) ? y

An RDF 'Failover' operation execution is


in progress for device group 'syncsrcdg'. Please wait...

Write Disable device(s) on SA at source (R1)..............Done.


Suspend RDF link(s).......................................Done.
Read/Write Enable device(s) on RA at target (R2)..........Done.

The RDF 'Failover' operation successfully executed for


device group 'syncsrcdg'.


The failover operation can be executed from the source-side host or from the target side; this is true for
all symrdf commands. To perform operations from the target side, a device group of type RDF2 should be
created and the R2 devices added to it, as sketched at the end of these notes. In the event of an actual
disaster, this is helpful, as there would be no way of communicating with the source array. The operation
assumes there is a disaster situation and makes all efforts to enable data access on the target array:
• Will proceed if possible
• Will give a message for any potential data integrity issue

As can be seen in the output, the R1 devices are Write Disabled, the SRDF links between the device pairs
are logically suspended, and the R2 devices are Read Write enabled. The host accessing the R2 devices
can now resume processing the application.

In a true disaster situation, when the source host, storage array, or site may be unreachable, it is not
possible to perform a graceful shutdown on the source side prior to a failover. However, if the failover is
for testing or for a maintenance operation, a graceful shutdown is recommended. A failover sets the R1
devices to a Write Disabled state. If a device suddenly becomes Write Disabled from a Read Write state
while in use, the reaction of the host can be unpredictable. Hence the recommendation is to stop
applications, unmount the filesystem, or unassign the drive letter prior to performing a failover for
maintenance operations.

For a clean, consistent, coherent point-in-time copy which can be used with minimal recovery on the target
side, some or all of the following steps may have to be taken on the source side:
• Stop all applications
• Unmount the file system—unmount or unassign the drive letter to flush the filesystem buffers from
the host memory down to the storage array
• Deactivate the Volume Group
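Because symrdf commands can also be run from the target side, the same failover could be initiated
from the remote host against an RDF2 device group. A sketch using the synctgtdg group that is
introduced on the next page:

C:\>symrdf -g synctgtdg failover -nop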

Query After Failover
C:\>symrdf -g synctgtdg query

Device Group (DG) Name : synctgtdg


DG's Type : RDF2
DG's Symmetrix ID : 000196802253 (Microcode Version: 5977)
Remote Symmetrix ID : 000197600217 (Microcode Version: 5978)
RDF (RA) Group Number : 101 (64)

Target (R2) View Source (R1) View FLAGS


--------------------------------- ------------------------ ----- ------------
ST LI ST
Standard A N A
Logical Sym T R1 Inv R2 Inv K Sym T R1 Inv R2 Inv RDF Pair
Device Dev E Tracks Tracks S Dev E Tracks Tracks MCES STATE
--------------------------------- -- ------------------------ ----- ------------
DEV001 0006C RW 6322 0 NR 000D2 WD 0 0 S..E Failed Over
DEV002 0006D RW 6280 0 NR 000D3 WD 0 0 S..E Failed Over

Total ------- ------- ------- -------


Track(s) 12602 0 0 0
MB(s) 1575.3 0.0 0.0 0.0


As seen in the output, the R1s are now Write Disabled, the links are set to Not Ready, and the R2s are
Read Write enabled. As application processing has been started using the R2 devices, you see that there
are invalid tracks accumulating—R1 Inv Tracks—on the target storage array. These are the changes
being made to the R2 devices. When it is time to return to the source storage array, the invalid tracks will
be incrementally synchronized back to the source. The pair state is reflected as Failed Over.

Note that we have created a device group named synctgtdg on the remote host that is accessing the R2
devices. The device group type is RDF2 and the R2 devices—06C:06D from SID:253—have been added
to the device group. The query was executed on the remote host and shows the state from the perspective
of the R2 devices.
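A minimal sketch of how such an RDF2 device group could be built on the remote host, mirroring the
symdg syntax used earlier in this lesson (this assumes SID:253 is the array visible to that host; otherwise
a SID qualifier would be needed when adding the devices):

C:\>symdg create -type R2 synctgtdg
C:\>set SYMCLI_DG=synctgtdg
C:\>symdg addall dev -range 06C:06D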

SRDF Update

C:\>symrdf -g synctgtdg update -nop

An RDF 'Update R1' operation execution is


in progress for device group 'synctgtdg'. Please wait...

Suspend RDF link(s).......................................Done.


Merge device track tables between source and target.......Started.
Devices: 00D2-00D3 in (2253,101)..........................Merged.
Merge device track tables between source and target.......Done.
Resume RDF link(s)........................................Started.
Resume RDF link(s)........................................Done.

The RDF 'Update R1' operation successfully initiated for


device group 'synctgtdg'.


While the target R2 device is still operational—Read Write Enabled to its local host—an incremental data
copy from the target R2 device to the source R1 device can be initiated. This is done to update the R1
mirror with changed tracks from the target R2 device. After an extended outage on the R1 side, a
substantial amount of invalid tracks could have accumulated on the R2. If a failback is now performed,
production starts from the R1. New writes to the R1 have to be transferred to the R2 synchronously. Any
track requested on the R1 that has not yet been transferred from the R2 has to be read from across the
links. This could lead to performance degradation on the R1 devices. The update operation helps to
minimize this impact.

When performing an update, the R1 devices are still Write Disabled. The links are set to Read Write so
that the accumulated tracks can be copied back to the R1. The target devices remain Read Write during
the update process.

The update operation can be used with the -until flag, which specifies a skew value for the update
process. For example, we can choose to update until the accumulated invalid tracks are down to 30000,
and then execute a failback operation.
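A sketch of that example, using the -until skew value described above:

C:\>symrdf -g synctgtdg update -until 30000 -nop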

Query After Update
C:\>symrdf -g synctgtdg query

Device Group (DG) Name : synctgtdg


DG's Type : RDF2
DG's Symmetrix ID : 000196802253 (Microcode Version: 5977)
Remote Symmetrix ID : 000197600217 (Microcode Version: 5978)
RDF (RA) Group Number : 101 (64)

Target (R2) View Source (R1) View FLAGS


--------------------------------- ------------------------ ----- ------------
ST LI ST
Standard A N A
Logical Sym T R1 Inv R2 Inv K Sym T R1 Inv R2 Inv RDF Pair
Device Dev E Tracks Tracks S Dev E Tracks Tracks MCES STATE
--------------------------------- -- ------------------------ ----- ------------
DEV001 0006C RW 5009 0 RW 000D2 WD 0 0 S..E R1 UpInProg
DEV002 0006D RW 4990 0 RW 000D3 WD 0 0 S..E R1 UpInProg

Total ------- ------- ------- -------


Track(s) 9999 0 0 0
MB(s) 1249.9 0.0 0.0 0.0


When the update operation is performed after a failover, the links become Read Write enabled, but the
Source devices are still Write Disabled. Production work continues on the R2 devices.

SRDF Failback

C:\>symrdf -g syncsrcdg failback -nop

An RDF 'Failback' operation execution is


in progress for device group 'syncsrcdg'. Please wait...

Write Disable device(s) on RA at target (R2)..............Done.


Suspend RDF link(s).......................................Done.
Merge device track tables between source and target.......Started.
Devices: 00D2-00D3 in (0217,101)..........................Merged.
Merge device track tables between source and target.......Done.
Resume RDF link(s)........................................Started.
Resume RDF link(s)........................................Done.
Read/Write Enable device(s) on SA at source (R1)..........Done.

The RDF 'Failback' operation successfully executed for


device group 'syncsrcdg'.


When the source site has been restored, or if maintenance is completed, you can return production to the
source site. The symrdf failback command will set the R2s to Write Disabled, the link to Read Write,
and the R1s to Read Write enabled. Merging of the device track tables between the source and target is
done. The SRDF links are resumed. The accumulated invalid tracks are transferred to the source devices
from the target devices. So all changes made to the data when in a failed over state will be preserved. As
noted earlier, the primary host can access the R1 devices and start production work as soon as the
command completes. If a track that has not yet been sent over from the R2 is required on the R1, SRDF
can preferentially read that track from across the links.

As the R2s will be set to Write Disabled, it is important to shut down the applications using the R2
devices and perform the appropriate host-dependent steps to unmount filesystems or deactivate volume
groups. If applications are still actively accessing the R2s when they are set to Write Disabled, the
reaction of the host accessing the R2s is unpredictable. In a true disaster, the failover process may not
give an opportunity for a graceful shutdown, but a failback should always be planned and performed
gracefully.

Query After Failback
C:\>symrdf -g syncsrcdg query

Device Group (DG) Name : syncsrcdg


DG's Type : RDF1
DG's Symmetrix ID : 000197600217 (Microcode Version: 5978)
Remote Symmetrix ID : 000196802253 (Microcode Version: 5977)
RDF (RA) Group Number : 101 (64)

Source (R1) View Target (R2) View FLAGS


--------------------------------- ------------------------ ----- ------------
ST LI ST
Standard A N A
Logical Sym T R1 Inv R2 Inv K Sym T R1 Inv R2 Inv RDF Pair
Device Dev E Tracks Tracks S Dev E Tracks Tracks MCES STATE
--------------------------------- -- ------------------------ ----- ------------
DEV001 000D2 RW 7347 0 RW 0006C WD 0 0 S..E SyncInProg
DEV002 000D3 RW 7324 0 RW 0006D WD 0 0 S..E SyncInProg

Total ------- ------- ------- -------


Track(s) 14671 0 0 0
MB(s) 1833.9 0.0 0.0 0.0


As can be seen in the output, the R1s are set to Read Write, the R2s are set to Write Disabled, and the
links are set to Read Write. The pair states go into SyncInProg. The accumulated invalid tracks have been
transferred from the target array to the source array. Once all accumulated invalid tracks have been
transferred, the pair state will go to Synchronized. Because a failback operation sets the R2 devices to
Write Disabled, applications accessing the R2 devices must be stopped before the failback operation.
When a host suddenly loses RW access to a device while still actively accessing it, the results are
unpredictable.

Lesson: SRDF Decision Support Operations
This lesson covers the following topics:

• SRDF Establish

• SRDF Restore

• Concurrent SRDF


This lesson covers SRDF Decision Support operations. Considerations for performing these operations
are presented in detail. Concurrent SRDF where one R1 device is simultaneously paired with two R2
devices is also discussed.

SRDF Decision Support Operations

• Split
– Enables accessing both the R1 and R2 devices by their respective hosts
– Suspends the links between the R1-R2 pairs
– Read-Write enables the R2 device

• Establish
– Resumes normal SRDF mirroring—source RW and target WD, link RW
– Save source R1 data—changes made to the R1 during split state are propagated to the R2. Changes made
to the R2 are discarded

• Restore
– Resumes normal SRDF mirroring—source RW and target WD, link RW
– Save target R2 data—changes made to the R2 during split state are propagated to the R1 and changes made
to the R1 are discarded


The decision support operations for SRDF devices are:

Split an SRDF pair which stops mirroring for the SRDF pairs in a device group.

Establish an SRDF pair by initiating a data copy from the source side to the target side. The operation
can be full or incremental.

Restore remote mirroring, which initiates a data copy from the target side to the source side. The
operation can be full or incremental.

As noted in the title, these are decision support operations and are not disaster recovery/business
continuance operations. In these situations, both the source and target sites are healthy and available.
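The full variants are requested explicitly with the -full option; the incremental forms shown on the
following pages are the default when a previous full copy exists. A minimal sketch of each:

C:\>symrdf -g syncsrcdg establish -full -nop
C:\>symrdf -g syncsrcdg restore -full -nop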

SRDF Split
C:\>symrdf -g syncsrcdg split -nop

C:\>symrdf -g syncsrcdg query

-----Output Truncated-----
Device Group (DG) Name : syncsrcdg
DG's Type : RDF1

Source (R1) View Target (R2) View FLAGS


--------------------------------- ------------------------ ----- ------------
ST LI ST
Standard A N A
Logical Sym T R1 Inv R2 Inv K Sym T R1 Inv R2 Inv RDF Pair
Device Dev E Tracks Tracks S Dev E Tracks Tracks MCES STATE
--------------------------------- -- ------------------------ ----- ------------
DEV001 000D2 RW 0 7227 NR 0006C RW 6256 0 S..E Split
DEV002 000D3 RW 0 7252 NR 0006D RW 6237 0 S..E Split

Total ------- ------- ------- -------


Track(s) 0 14479 12493 0
MB(s) 0.0 1809.9 1561.6 0.0


The split command suspends the links between source—R1—and target—R2—volumes. The source
devices continue to be Read Write enabled. The target devices are set to Read Write enabled. Writes to
the R1 devices accumulate as R2 Inv(alid) Tracks—these are the tracks now owed to the R2 devices.
Writes to the R2 devices accumulate as R1 Inv(alid) Tracks—these are the tracks owed to the R1 devices.
The RDF Pair state is displayed as Split.

SRDF Establish
C:\>symrdf -g syncsrcdg establish

Execute an RDF 'Incremental Establish' operation for device


group 'syncsrcdg' (y/[n]) ? y

An RDF 'Incremental Establish' operation execution is


in progress for device group 'syncsrcdg'. Please wait...

Write Disable device(s) on RA at target (R2)..............Done.


Suspend RDF link(s).......................................Done.
Resume RDF link(s)........................................Started.
Merge device track tables between source and target.......Started.
Devices: 00D2-00D3 in (0217,101)..........................Merged.
Merge device track tables between source and target.......Done.
Resume RDF link(s)........................................Done.

The RDF 'Incremental Establish' operation successfully initiated for


device group 'syncsrcdg'.


The Establish operation will resume SRDF remote mirroring. Changes made to the source while in a split
state are transferred to the target. Changes made to the target are overwritten. The R2 devices are set to
Write Disabled. Hence applications should stop accessing the R2 devices prior to performing an establish
operation. The links are resumed.

Query After SRDF Establish
C:\>symrdf -g syncsrcdg query

-----Output Truncated-----
RDF (RA) Group Number : 101 (64)

Source (R1) View Target (R2) View FLAGS


--------------------------------- ------------------------ ----- ------------
ST LI ST
Standard A N A
Logical Sym T R1 Inv R2 Inv K Sym T R1 Inv R2 Inv RDF Pair
Device Dev E Tracks Tracks S Dev E Tracks Tracks MCES STATE
--------------------------------- -- ------------------------ ----- ------------
DEV001 000D2 RW 0 2283 RW 0006C WD 0 7710 S..E SyncInProg
DEV002 000D3 RW 0 2397 RW 0006D WD 0 7879 S..E SyncInProg

Total ------- ------- ------- -------


Track(s) 0 4680 0 15589
MB(s) 0.0 585.0 0.0 1948.6


As can be seen in the query output, the devices are reverted to their normal states—R1 RW, R2 WD—
and the links are resumed (RW). Changes made to the R2 devices during the split state are discarded.
Changes made to the R1 devices during the split state are propagated to the R2 devices.

SRDF Restore
C:\>symrdf -g syncsrcdg restore -nop

An RDF 'Incremental Restore' operation execution is


in progress for device group 'syncsrcdg'. Please wait...

Write Disable device(s) on SA at source (R1)..............Done.


Write Disable device(s) on RA at target (R2)..............Done.
Suspend RDF link(s).......................................Done.
Merge device track tables between source and target.......Started.
Devices: 00D2-00D3 in (0217,101)..........................Merged.
Merge device track tables between source and target.......Done.
Resume RDF link(s)........................................Started.
Resume RDF link(s)........................................Done.
Read/Write Enable device(s) on SA at source (R1)..........Done.

The RDF 'Incremental Restore' operation successfully initiated for


device group 'syncsrcdg'.


The Restore operation resumes SRDF remote mirroring. Changes made to the target while in a split state
are transferred to the source, and changes made to the source are overwritten. The R2 devices are set
to Write Disabled; hence, applications should stop accessing the R2 devices prior to performing a restore
operation. The links are resumed. As data on the R1 devices changes without the knowledge of the host,
access to the R1 devices should also be stopped prior to performing a restore operation. As soon as the
command completes, the R1 devices can be accessed again without waiting for synchronization to
complete. Any required track on the R1 that has not yet been received from the R2 is read across the
links.

Query After SRDF Restore
C:\>symrdf -g syncsrcdg query

-----Output Truncated-----
RDF (RA) Group Number : 101 (64)

Source (R1) View Target (R2) View FLAGS


--------------------------------- ------------------------ ----- ------------
ST LI ST
Standard A N A
Logical Sym T R1 Inv R2 Inv K Sym T R1 Inv R2 Inv RDF Pair
Device Dev E Tracks Tracks S Dev E Tracks Tracks MCES STATE
--------------------------------- -- ------------------------ ----- ------------
DEV001 000D2 RW 8205 0 RW 0006C WD 1938 0 S..E SyncInProg
DEV002 000D3 RW 8205 0 RW 0006D WD 1871 0 S..E SyncInProg

Total ------- ------- ------- -------


Track(s) 16410 0 3809 0
MB(s) 2051.3 0.0 476.1 0.0


As can be seen in the query output, the devices are reverted to their normal states—R1 RW, R2 WD—
and the links are resumed (RW). Changes made to the R1 devices during the split state are discarded.
Changes made to the R2 devices during the split state are propagated to the R1 devices.

R1/R2 Personality Swap
• Changes the personality of the SRDF devices
– Current R1 becomes new R2. Current R2 becomes new R1
• Data flow is from the new R1—old R2—to the new R2—old R1
• Can be performed in one of two ways:
– symrdf swap
– symrdf failover -establish

• Useful for:
– Disaster Recovery drills
– Datacenter relocation
– Maintenance operations on local site hosts while continuing production work WITH
Disaster Recovery protection
– Selective load balancing for certain applications by swapping their device personalities and
moving their workload to the other array


An R1/R2 personality swap—or R1/R2 swap—swaps the SRDF device designations of a specified device
group. The source R1 devices become target R2 devices, and the target R2 devices become source R1
devices.

Sample scenarios for R1/R2 Swap:

Symmetrix Load Balancing:

In our rapidly changing computing environments, it is often necessary to redeploy applications and storage
on a different storage array without having to give up disaster protection. An R1/R2 swap can enable this
redeployment with minimal disruption, while offering the benefit of load balancing across two storage
arrays.

Primary Data Center Relocation:

Sometimes a primary data center needs to be relocated to accommodate business practices. Businesses
might want to test their Disaster Recovery readiness without sacrificing DR protection. R1/R2 swaps allow
these customers to move their primary applications to their DR centers and continue to SRDF mirror back
to their Primary data center.

Post-Failover Temporary Protection Measure:

If the hosts on the source side are down for maintenance, an R1/R2 swap permits the relocation of
production computing to the target site without giving up the security of remote data protection. When all
problems on the local storage array hosts have been resolved, you fail over again and swap the device
personalities to return to the original configuration.
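As a hedged illustration using the device group from the earlier examples (the group name and the -refresh choice are assumptions, and the pairs must be in a Suspended or Failed Over state before a swap is allowed), a planned swap might look like this:

C:\>symrdf -g syncsrcdg suspend -nop
C:\>symrdf -g syncsrcdg swap -refresh R1 -nop
C:\>symrdf -g syncsrcdg establish -nop

The -refresh R1 option indicates that the data on the side that is currently R1 should be refreshed from its partner as part of the swap; the establish then resumes mirroring in the new direction.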

Concurrent SRDF Devices

• An SRDF device with two SRDF mirrors is called a Concurrent SRDF device
• There are three different types of Concurrent SRDF devices:
– R11 – Each R1 mirror is paired with a different R2 mirror on two different remote
storage arrays.
– R21 – This device is the R2 mirror for an R1 device and also acts as an R1 mirror
for another R2 device. This device is used at the secondary site of a Cascaded
SRDF configuration.
– R22 – Each R2 mirror is paired with a different R1 mirror on two different remote
storage arrays. Only one of the R2 mirrors can be Read Write on the links at a
time.


An SRDF device with two SRDF mirrors is called a Concurrent SRDF device. R11 devices operate as the
R1 device for two R2 devices. Links to both R2 devices are active. R21 devices are configured and used
for Cascaded SRDF environments. R22 devices are used in SRDF/Star environments. An R22 device has
two R1 mirrors. However, it can receive data from just one of the R1 mirrors at a time.

Concurrent SRDF – R11 Devices

• One R1 can be paired with two R2 devices, concurrently

• Each of the two concurrent mirrors must belong to different SRDF groups—RA groups

[Diagram: An R11 device at Site A replicates to an R2 at Site B via SRDF Group 1 and to an R2 at Site C via SRDF Group 2.]

Concurrent SRDF allows two remote SRDF mirrors of a single R1 device. A concurrent R1 device has two
R2 devices associated with it. Each of the R2 devices is usually in a different array. Any combination of
SRDF modes is allowed:

R11 -> R2 (Site B) in Synchronous mode and R11 -> R2 (Site C) in Asynchronous mode

R11 -> R2 (Site B) in Synchronous mode and R11 -> R2 (Site C) in Adaptive Copy Disk mode

R11 -> R2 (Site B) in Synchronous mode and R11 -> R2 (Site C) in Synchronous mode

R11 -> R2 (Site B) in Asynchronous mode and R11 -> R2 (Site C) in Asynchronous mode

Each of the R1 -> R2 pairs is created in a different SRDF Group.

Two Synchronous remote mirrors: A write I/O from the host to the R11 device cannot be acknowledged to
the host as completed until both remote arrays signal the local array that the SRDF I/O is in cache at the
remote side.

SRDF swap is not allowed in this configuration.

Concurrent SRDF – R11 Example
C:\>symrdf createpair -sid 217 -f rdf_device_pairs_conc.txt -type R1 -rdfg 102 -establish -nop

An RDF 'Create Pair' operation execution is in progress for device


file 'rdf_device_pairs_conc.txt'. Please wait...

Create RDF Pair in (0217,102)....................................Started.


Create RDF Pair in (0217,102)....................................Done.
Mark target device(s) in (0217,102) for full copy from source....Started.
Devices: 00D2-00D3 in (0217,102).................................Marked.
Mark target device(s) in (0217,102) for full copy from source....Done.
Merge track tables between source and target in (0217,102).......Started.
Devices: 00D2-00D3 in (0217,102).................................Merged.
Merge track tables between source and target in (0217,102).......Done.
Resume RDF link(s) for device(s) in (0217,102)...................Started.
Resume RDF link(s) for device(s) in (0217,102)...................Done.

The RDF 'Create Pair' operation successfully executed for device


file 'rdf_device_pairs_conc.txt'.


For the purpose of illustration, we show the R1 devices paired with two R2 devices on the same remote
array. In real use, R11 devices would be paired with R2 devices on two different remote arrays,
perhaps at two different locations.

In this example, R1 devices 0D2 and 0D3 on SID:217 are paired with R2 devices 06C and 06D on
SID:253, as well as concurrently paired with R2 devices 06E and 06F on SID:253. This was accomplished
by the following two commands:

C:\>symrdf addgrp -label SRDF_CONC -sid 217 -remote_sid 253 -dir 1F:06,2F:06 -remote_dir 1F:11,2F:11 -rdfg 102 -remote_rdfg 102

A new RDF group—number 102—has been created.

C:\>symrdf createpair -sid 217 -f rdf_pairs_conc.txt -type R1 -rdfg 102 -establish

Where the file rdf_pairs_conc.txt contains:

0D2 06E

0D3 06F

This specifies that R1 devices 0D2 and 0D3 should now be concurrently paired with R2 devices 06E and
06F as well.

Listing Concurrent SRDF Devices
C:\>symrdf -sid 217 list -concurrent

Symmetrix ID: 000197600217

Local Device View


-----------------------------------------------------------------------------
STATUS FLAGS RDF S T A T E S
Sym Sym RDF --------- ----- R1 Inv R2 Inv ----------------------
Dev RDev Typ:G SA RA LNK MTES Tracks Tracks Dev RDev Pair
----- ----- -------- --------- ----- -------- -------- --- ---- -------------

000D2 0006C R1:101 RW RW RW S1.E 0 0 RW WD Synchronized


0006E R1:102 RW RW RW D1.E 0 0 RW WD Synchronized
000D3 0006D R1:101 RW RW RW S1.E 0 0 RW WD Synchronized
0006F R1:102 RW RW RW D1.E 0 0 RW WD Synchronized

Total -------- --------


Track(s) 0 0
MB(s) 0.0 0.0


The output shows that R1 device 0D2 is now concurrently paired with R2 device 06C in RDF Group 101
as well as with R2 device 06E in RDF Group 102. Note that one leg {0D2 -> 06C} is in Synchronous
mode and the other leg {0D2 -> 06E} is in Adaptive Copy Disk mode; likewise for the device
pairs {0D3 -> 06D} and {0D3 -> 06F}. To change the second leg to Synchronous mode as well,
use the symrdf set mode sync command directed at RDF group 102.

The way to address the two legs individually is to call them out with the -rdfg flag, explicitly specifying
which leg you want to operate on.
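For example, a minimal sketch of setting the second leg to Synchronous mode with an explicit device file (the file name is an assumption; it would list the 0D2/06E and 0D3/06F pairs):

C:\>symrdf -sid 217 -f rdf_pairs_conc.txt -rdfg 102 set mode sync -nop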

SRDF Consistency Protection (1 of 2)

• An SRDF consistency group is a composite group of SRDF devices that are enabled for consistency
– If a source R1 device in the consistency group cannot propagate data to its
corresponding R2 device, SRDF consistency suspends data propagation from all
the R1 devices in the group.
• The SRDF daemon storrdfd provides consistency protection for:
– SRDF/Synchronous RDF-Enginuity Consistency Assist (ECA) consistency groups
– SRDF/Asynchronous Multi-session Consistency (MSC) consistency groups
– Concurrent SRDF


SRDF consistency preserves the dependent-write consistency of devices within a consistency group by
monitoring data propagation from source devices to their corresponding target devices. If a source R1
device in the consistency group cannot propagate data to its corresponding R2 device, SRDF consistency
suspends data propagation from all the R1 devices in the group.

A Composite group must be created using the RDF consistency protection option (-rdf_consistency) and
must be enabled using the symcg enable command for the SRDF daemon to begin monitoring and
managing the consistency group. Devices in a consistency group can be from multiple arrays or from
multiple SRDF groups in the same array.

Consistency protection is managed by the SRDF daemon which is a Solutions Enabler process that runs
on a host with Solutions Enabler and connectivity to the array. Consistency protection is available for
SRDF/S, SRDF/A and Concurrent SRDF modes. The storrdfd daemon ensures that there will be a
consistent R2 copy of the database at the point in time in which a data flow interruption occurs.
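A minimal SYMCLI sketch of creating and enabling such a consistency group (the composite group name and the device number are assumptions):

C:\>symcg create ConsisGrp -type RDF1 -rdf_consistency
C:\>symcg -cg ConsisGrp add dev 0D2 -sid 217
C:\>symcg -cg ConsisGrp enable

Once enabled, the SRDF daemon monitors the group and suspends data propagation from all R1 devices in the group if any member cannot propagate data to its R2.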

SRDF Consistency Protection (2 of 2)

• For SRDF/S RDF-ECA, the SRDF daemon
– Continuously polls SRDF/S sessions for data flow interruptions
› If an R1 cannot propagate data to its R2, it suspends the SRDF links for all devices in the consistency group

• For SRDF/A MSC, the SRDF daemon
– Performs cycle switching and cache recovery for all SRDF/A sessions within a consistency group
– Manages the R1 -> R2 commits for SRDF/A sessions in multi-cycle mode


RDF-ECA provides consistency protection for synchronous mode devices by performing suspend
operations across all SRDF/S devices in a consistency group. SRDF/A MSC will be discussed in the next
module.

Lesson: SRDF/S Operations Unisphere for PowerMax
This lesson covers the following topics:

• Creating Dynamic SRDF Groups and SRDF Pairs

• Creating Device Groups

• Performing SRDF/S Operations


This lesson covers performing SRDF operations using Unisphere for PowerMax.

Creating Dynamic SRDF Groups


To list the currently configured SRDF Groups, navigate to SID > Data Protection > SRDF Groups. We
see that there are currently six SRDF Groups created on SID:217. Click Create SRDF Group to launch
the wizard.

Create SRDF Group Wizard (1 of 4)


The Unisphere for PowerMax Create SRDF Group dialog is shown here. Choose the desired
communication protocol (FC or GigE), select the Remote Array ID, and enter an SRDF group label. Click
NEXT to proceed.

Create SRDF Group Wizard (2 of 4)


The Configure Local array dialog is next. Enter or select the SRDF Group Number. Choose the RA
directors and ports for the local array.

The Advanced Options include Local Link Domino and Local Auto Link Recovery.

Under certain conditions, for example when host I/Os cannot be delivered across the SRDF link, the
SRDF devices can be forced into the Not Ready state to the host. The domino attribute is used to stop all
subsequent write operations to both R1 and R2 devices to avoid data corruption.

If, during normal operation, all SRDF links fail, the array will store the SRDF states of the affected SRDF
devices. The Local Auto Link Recovery attribute enables the array to restore the devices to these states
automatically when the SRDF links become operational.

If selecting Advanced Options, click OK to proceed. Click NEXT to continue to the Configure Remote
dialog.

Create SRDF Group Wizard (3 of 4)


The Configure Remote array dialog is next. Enter or select the SRDF Group Number. Choose the SRDF
directors and ports for the remote array. Click NEXT to proceed.

Create SRDF Group Wizard (4 of 4)


Shown here is the Review Summary. The Use Software Compression option is available for SRDF traffic
over Fibre Channel and GigE SRDF links. If software compression is enabled, PowerMaxOS compresses
the data before sending it across the SRDF links. The arrays at both sides of the SRDF links must support
software compression and must have the software compression feature enabled in the configuration file.

Select Run Now from the ADD TO JOB LIST drop-down menu to continue.

Creating Dynamic SRDF Pairs


To create Dynamic SRDF pairs in Unisphere for PowerMax, navigate to the SRDF Groups page. Click
the SRDF group that you want to create SRDF Pairs in and then click the Create Pairs button to launch
the Create Pairs dialog.

Create Pairs Wizard (1 of 5)


In the dialog, choose the Mirror Type (R1 or R2) and select the SRDF Mode; the options include Adaptive
Copy, Synchronous, Asynchronous, and Active. Select the Establish radio button. Optionally, you can
choose to bypass the check that ensures the target of the operation is not writable by the host. Click NEXT to
continue.

Create Pairs Wizard (2 of 5)


The next step is to choose the Local Volumes. In this example, we chose two volumes manually using the
Select Volumes Wizard (not shown). Click NEXT to proceed.

Create Pairs Wizard (3 of 5)


Next, select the Remote Volumes. In this case, we selected two volumes manually using the Select
Volumes Wizard (not shown). Click NEXT to continue.

Create Pairs Wizard (4 of 5)


In the Sort Pairs dialog window, you can reorder the volume pairing by dragging and dropping remote
volumes. Here we can see that the Local Volumes selected are volumes 0D4 and 0D5. The Remote
Volumes are volumes 070 and 071. Click NEXT to proceed.

Create Pairs Wizard (5 of 5)


Review the Pair Summary and select Run Now from the ADD TO JOB LIST drop-down menu.

SRDF Group Operations


From the SRDF Groups page, select the SRDF Group and click the More Actions button. Attributes that
can be set and other actions that can be performed on this SRDF Group are displayed.

Creating Device Group


As mentioned earlier, a device group is a user-created object for viewing and managing related array
devices. All devices in a device group should be on the same array. To create a device group, navigate to
SID > DATA PROTECTION > Device Groups > Device Groups tab, and click Create.

Creating Device Group Wizard (1 of 5)


Enter a name for the device group. Select the Device Group Type. Options include REGULAR, RDF1,
RDF2, RDF21, and ANY. Click NEXT to proceed.

Creating Device Group Wizard (2 of 5)


Select the Source. Options include Storage Group or Volumes. Select the Source Volume Type. Options
include STD and BCV. Select the volumes and click the Add button to continue.

Creating Device Group Wizard (3 of 5)


Here we can see the volumes 0D4 and 0D5 have been added to the volumes list. Click NEXT to proceed.

Creating Device Group Wizard (4 of 5)


In the Select Target Volumes field, select Automatically match source volumes or Manually select target
volumes. The Select Target Volume Type field options include TGT and BCV. In this example, we are
manually selecting target volumes 070 and 071. Click the Add button to add the volumes to the volumes
list and then click NEXT to continue.

Creating Device Group Wizard (5 of 5)


Review the Summary dialog window and click FINISH to create the device group.

Device Group SRDF Operations


All SRDF operations are performed from the SID > DATA PROTECTION > Device Groups page. Select
the device group to be managed. Clicking the More Actions button shows the list of operations that can be
performed from Unisphere for PowerMax.

Lesson: SRDF/S VMFS Datastore Disaster Recovery
This lesson covers the following topics:

• Performing SRDF/S Disaster Recovery of a VMFS datastore

• Performing SRDF Failback and resuming operations on the Primary ESXi server


This lesson covers performing an SRDF/S DR of a VMFS datastore.

Datastore Presented to Primary ESXi Server


For this example, we will replicate the Production_Datastore using SRDF/Synchronous mode. This
datastore provides the storage for the VM StudentVM. The Production_Datastore is created on the
PowerMax array device 0B3. This was the setup we used for our TimeFinder SnapVX example for locally
replicating a VMFS Datastore.

Device 0B3 on SID:217 has been SRDF-paired with device 058 on SID:253, which is a VMAX 100K.

Identifying Device Hosting Production Datastore


In Unisphere for PowerMax, navigate to the Masking View for the Primary ESXi Server and identify the
device it has access to. In this example, it is device 0B3. Listing the details of this device shows that it is
an RDF1+TDEV, so it is an SRDF R1 device.

The WWN of device 0B3 is also listed. We can match this WWN with the naa number shown
previously in this lesson and conclude that the Primary ESXi Server has access to device 0B3. The
device is in SID:217 and its capacity is 10 GB. To replicate this device using SRDF, we must
identify the R2 device with which 0B3 is paired and ensure that the corresponding R2 device has
been masked to the Secondary ESXi Server.
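Where SYMCLI access is available, the same check can be made from the command line; a minimal sketch (look for the Device WWN and RDF fields in the output):

C:\>symdev -sid 217 show 0B3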

Browse Datastore


Browse the Production_Datastore to determine the VM resident on it. This shows that StudentVM is the
VM.

Data on StudentVM


Shown here is an open console to StudentVM. We have created a folder named Production_Data. What we
have determined so far:

1. The Primary ESXi Server has access to Production_Datastore

2. Production_Datastore has been created on device 0B3 in SID:217

3. Device 0B3 is an SRDF R1 device

4. StudentVM uses Production_Datastore for its storage

5. StudentVM contains Production_Data

The objective is to perform an SRDF Failover of device 0B3, access the corresponding R2 device from
the Remote ESXi Server, and power on the VM on the Remote ESXi Server.

Identifying R2 of Production R1 Device


The SRDF Storage Group named esx67_r1_sg contains the R1 device 0B3. Navigating to SID > DATA
PROTECTION > Storage Groups > esx67_r1_sg > SRDF Pairs shows that the R1 device 0B3 is paired
with R2 device 058 in SID:253. The SRDF Pair state is Synchronized.

LUNs Presented to Remote ESXi Server


Configuration information for the Remote ESXi Server shows the accessible LUNs. As in the case of
TimeFinder SnapVX, you have to correlate the naa name shown here with the WWN using Unisphere for
PowerMax.

Unisphere for PowerMax R2 Device Detail


Use Unisphere for PowerMax to confirm that the correct R2 device has been presented to the Remote
ESXi Server. Here we can see the details of the Remote ESXi Server Storage Group esxi-88-77_sg. The
Storage Group contains device 058. The device details on the right show the WWN of device 058 with the
naa number that matches the naa number displayed in the vSphere Web Client. Note that the device
Status is listed as Write Disabled.

Shut Down and Unregister VM on Primary ESXi Server


To illustrate SRDF functionality with VMFS, we perform a graceful failover, although in a true disaster this
would not be possible. For this example, shut down StudentVM and unregister it from the Primary ESXi
server.

Unmount Datastore on Primary ESXi Server


To continue the graceful shutdown, unmount the Production_Datastore from the Primary ESXi server.

SRDF Failover


Navigate to SID > DATA PROTECTION > Storage Groups > SRDF. Select the SRDF Storage Group
that contains the production device 0B3, click the More Actions link, and select Failover.

Rescan Remote ESXi Server


After the Failover, rescan the Remote ESXi Server for all storage. The process for accessing the R2
device from the Remote ESXi server is identical to the one used for accessing the linked Target on the
Secondary ESXi server.
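Where no vSphere client is at hand, or the step is scripted, the rescan can also be issued from the ESXi shell; a minimal sketch using a standard esxcli call:

[root@esxi-88-77:~] esxcli storage core adapter rescan --all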

Mount R2 Datastore on Remote ESXi Server

[root@esxi-88-77:~] esxcfg-volume -l
Scanning for VMFS-6 host activity (4096 bytes/HB, 1024 HBs).
VMFS UUID/label: 5c3dfabf-cb3b383c-4b68-005056955e4a/Production_Datastore
Can mount: Yes
Can resignature: Yes
Extent name: naa.60000970000196802253533030303538:1 range: 0 - 9983 (MB)

[root@esxi-88-77:~] esxcfg-volume -r 5c3dfabf-cb3b383c-4b68-005056955e4a


Resignaturing volume 5c3dfabf-cb3b383c-4b68-005056955e4a
[root@esxi-88-77:~]


Mounting the datastore can be done using the vSphere Web Client when a vCenter Server is configured.
In our example, there is no vCenter Server deployed, so we used the same ESXi shell commands used to
mount the SnapVX Snapshot. Open a PuTTY session to the ESXi server and issue the commands shown.
As with the SnapVX session, we used the -r option to mount and resignature the datastore using the
UUID.

Select Datastore


As in the SnapVX example, a new signature has been assigned, so the datastore name is prefixed
with snap-xxxxx. Click the Register a VM link to proceed.

Register VM


Browse to and select the StudentVM.vmx file and click the Register button.

Power on VM


Navigate to the VM display and click the Power on link. As the VM is a replica, answer I Copied It.

VM on Remote ESXi Server


We can open a console to the VM on the Remote ESXi server and verify that it has the same data that
was available on the Primary ESXi server at the time of the graceful failover.

Add Data to VM on Remote ESXi Server


To simulate resuming production work from the R2 device, we add more data to the VM. Next, we
perform the steps necessary to fail back the SRDF pair, resume production work on the R1 device, and
verify that the data added in the failed-over state is available back on the R1 device.

Prior to performing an SRDF failback and resuming production work from the R1 device, shut down the
VM on the Remote ESXi server. Remember that the R2 device will be Write Disabled after a failback
operation. Navigate to the VM Actions menu in the vSphere Web Client, and select Unregister.

Unmount Datastore from Remote ESXi Server


Next, unmount the datastore from the Remote ESXi server.

SRDF Failback


In Unisphere for PowerMax, navigate to DATA PROTECTION > Storage Groups > SRDF. Select the
Storage Group named esx67_r1_sg and click on the More Actions link. Select Failback.

Failback Task Details


Shown here are the Task Details.

Mount Datastore on Primary ESXi Server

[root@esxi-88-67:~] esxcfg-volume -l
Scanning for VMFS-6 host activity (4096 bytes/HB, 1024 HBs).
VMFS UUID/label: 5c3f29e4-bb0ea8f8-5a8b-005056950057/snap-68bedab8-Production_Datastore
Can mount: Yes
Can resignature: Yes
Extent name: naa.60000970000197600217533030304233:1 range: 0 - 9983 (MB)

[root@esxi-88-67:~] esxcfg-volume -r 5c3f29e4-bb0ea8f8-5a8b-005056950057


Resignaturing volume 5c3f29e4-bb0ea8f8-5a8b-005056950057
[root@esxi-88-67:~]


After rescanning the primary ESXi server as before, we issue the same ESXi shell commands to the
primary ESXi server to remount the datastore with a resignature after the failback.

Register VM on Primary ESXi Server


Browse to the datastore to register the VM on the primary ESXi server. Remember that we had shut down
the VM, removed it from inventory, and unmounted the datastore in order to perform a graceful failover, so
we now have to re-register the VM on the Primary ESXi server. Select the StudentVM.vmx file and click
Register.

Data on VM on Primary ESXi Server


Here we can see that the data added to the VM when it was running on the remote ESXi server is
available on the VM after performing an SRDF Failback operation.

Lab: SRDF/Synchronous Operations

This lab covers:

• SRDF Initial Setup and Basic Operations – Creating Dynamic SRDF Groups, SRDF pairs

• SRDF/S Disaster Recovery Operations – Failover, Failback

• SRDF/S Decision Support/Concurrent Access Operations – Split, Establish, Restore


This lab covers creating dynamic SRDF groups, SRDF pairs, basic SRDF operations, as well as SRDF
Disaster Recovery and Decision Support operations.

Lab: SRDF/S Disaster Recovery for VMFS Datastore

This lab covers:

• Identifying and correlating SRDF R1 and R2 devices with devices accessible by the Primary and Remote ESXi servers

• Creating a VMFS Datastore on the R1 device and deploying a Virtual Machine

• Using Unisphere for PowerMax to create a Device Group, add the R1 device, and perform an SRDF Failover operation

• Accessing the SRDF R2 device from the Remote ESXi server and powering on the Virtual Machine on the Remote ESXi server


This lab covers performing SRDF/S Disaster Recovery for a VMFS datastore.

Module Summary

Key points covered in this module:

• Creating Dynamic SRDF Groups and Dynamic SRDF Pairs

• Performing SRDF/S Operations using SYMCLI and Unisphere for PowerMax

• Performing Disaster Recovery of a VMFS datastore


This module covered SRDF operations in Synchronous mode. The use of SYMCLI and Unisphere for
PowerMax to perform SRDF operations was presented in detail. A method for performing DR operations
on a VMFS datastore in a virtualized environment was discussed.

Module: SRDF/Asynchronous Operations

Upon completion of this module, you should be able to:

• Describe SRDF/Asynchronous remote replication on PowerMax and VMAX Family arrays

• Perform SRDF/A operations

• Describe SRDF/A resiliency features

• Manage SRDF/A multi-session consistency


This module focuses on the SRDF/Asynchronous mode of remote replication. Concepts and operations for
SRDF/A in single and multi-session modes are presented. SRDF/A resiliency features are also discussed.

Lesson: SRDF/A Concepts and Operations
This lesson covers the following topics:

• Multi-cycle mode for SRDF/A on PowerMax arrays and VMAX Family arrays

• SRDF/A – System-level and group-level attributes

• Adding and removing device pairs to and from active SRDF/A sessions


This lesson covers SRDF/A multi-cycle mode on PowerMax and VMAX Family arrays. The attributes that
can be set for SRDF/A at a system and group level are discussed in detail. Methods for adding and
removing RDF device pairs to and from active SRDF/A sessions are presented.

SRDF/A Multi-cycle Mode
[Diagram: SRDF/A multi-cycle mode. The R1 side holds multiple capture delta sets (n, n-1, n-2) plus a transmit queue; the R2 side holds the receive and apply delta sets. Steps shown: 1. Minimum cycle time has elapsed. 2. Capture set added to transmit queue. 3. New capture set created. 4. Capture set added to transmit queue.]


SRDF/A Multi-Cycle Mode (MCM) allows more than two capture cycles on the R1 side.

When the minimum_cycle_time has elapsed, the data from the capture cycle will be added to a
transmit queue and a new capture cycle will occur. The transmit queue is a feature of SRDF/A. It provides
a location for R1 captured cycle data to be placed so a new capture cycle can occur.

Capture cycles occur even if no data is transmitted across the link; in that case, the capture set is simply
added to the transmit queue, which holds the data until it can be sent. The transmit cycle transfers the
data in the oldest capture set to the R2 first and then repeats the process.

The benefit is that controlled amounts of data are captured on the R1 side: each capture cycle occurs at
regular intervals and does not accumulate large amounts of data while waiting for a cycle switch to occur.

Another benefit is that the data sent across the SRDF link is smaller and should not overwhelm the R2
side. The R2 side still has only two delta sets, the receive and the apply.

SRDF/A System Attributes
C:\>symcfg -sid 217 list -v|more

Symmetrix ID: 000197600217 (Local)


Time Zone : Eastern Standard Time

Product Model : PowerMax_8000


Symmetrix ID : 000197600217

Microcode Version (Number) : 5978 (175A0000)


-----Output Truncated-----
Symmetrix Configuration Checksum : F1226876
Switched RDF Configuration State : Enabled
Concurrent RDF Configuration State : Enabled
Dynamic RDF Configuration State : Enabled
Concurrent Dynamic RDF Configuration : Enabled
RDF Data Mobility Configuration State: Disabled
-----Output Truncated-----
SRDF/A Maximum Host Throttle (Secs) : 0
SRDF/A Maximum Cache Usage (Percent) : 75
SRDF/A DSE Maximum Capacity (GB) : NoLimit


The system attributes that pertain to SRDF/A are shown here. The use of Host Throttle, Maximum Cache
Usage, and DSE Maximum Capacity attributes will be explained later in this module.

SRDF Group – SRDF/A Attributes

C:\>symcfg -sid 217 list -rdfg 104 -rdfa

Symmetrix ID : 000197600217

S Y M M E T R I X R D F A G R O U P S

-------- ---------- -------- ----- --- --- --------- -----------------------


Write Pacing
RA-Grp Group Flags Cycle Pri Thr Transmit Delay Thr GRP DEV FLG
Name CSRM TDA time Idle Time (usecs) (%) SAU SAU P
-------- ---------- -------- ----- --- --- --------- ------- --- --- --- ---
104 (67) Async1 -IS- XIX 15 33 50 000:00:00 50000 60 I.- --- X


The SRDF/A attributes of an SRDF Group can be listed with the -rdfa option as shown here. Note that
all attributes displayed here are default values; the RDF Group has just been created and no attributes
have been modified.

Legend:

RDFA Flags :

(C)onsistency : X = Enabled, . = Disabled, - = N/A

(S)tatus : A = Active, I = Inactive, - = N/A

(R)DFA Mode : S = Single-session, M = MSC, - = N/A

(M)sc Cleanup : C = MSC Cleanup required, - = N/A

(T)ransmit Idle : X = Enabled, . = Disabled, - = N/A

(D)SE Status : A = Active, I = Inactive, - = N/A

DSE (A)utostart : X = Enabled, . = Disabled, - = N/A

Write Pacing Flags :

(GRP) Group-Level Pacing:

(S)tatus : A = Active, I = Inactive, - = N/A

(A)utostart : X = Enabled, . = Disabled, - = N/A

S(U)pported : X = Supported, . = Not Supported, - = N/A

List of Individual RDF Group

C:\>symrdf -sid 217 list -rdfg 104

Symmetrix ID: 000197600217

Local Device View


-----------------------------------------------------------------------------
STATUS FLAGS RDF S T A T E S
Sym Sym RDF --------- ----- R1 Inv R2 Inv ----------------------
Dev RDev Typ:G SA RA LNK MTES Tracks Tracks Dev RDev Pair
----- ----- -------- --------- ----- -------- -------- --- ---- -------------

000D6 00072 R1:104 RW RW RW D1.E 0 0 RW WD Synchronized


000D7 00073 R1:104 RW RW RW D1.E 0 0 RW WD Synchronized

Total -------- --------


Track(s) 0 0
MB(s) 0.0 0.0


The list of devices in SRDF Group 104 is displayed here. The two devices, 0D6 and 0D7, are R1 devices
on the local PowerMax 217. They have been paired with devices 072 and 073 on the remote VMAX 100K.
The SRDF mode is Adaptive Copy Disk, as denoted by the (M)ode of Operation flag, and the SRDF
pairs are currently Synchronized. As noted in the SRDF/S module, the default mode for newly created
SRDF pairs is Adaptive Copy Disk. The displays on this and the previous page are the results of the
following operations, seen earlier in the SRDF/S module:

Create RDF Group:

symrdf addgrp -label SRDF_Async1 -sid 217 -remote_sid 253 -rdfg 104 -remote_rdfg 104 -dir 1F:06,2F:06 -remote_dir 1F:11,2F:11

Create RDF device pairs:

symrdf createpair -sid 217 -rdfg 104 -f async1.txt -type R1 -establish -g asyncdg1

The -g asyncdg1 option adds the newly created RDF device pairs to a SYMCLI device group named
asyncdg1.

async1.txt

0D6 072

0D7 073

Transitioning to SRDF/A Mode

• From Synchronous mode:


– If the devices are in Synchronized state, the R2 data is already consistent.
Enabling SRDF/A immediately provides consistent data on the R2.

• From Adaptive Copy Disk mode:


– Any invalid tracks owed to the R2 are synchronized. Two cycle switches after
Synchronization, SRDF/A provides consistent data on the R2.


SRDF/A can be enabled when the device pairs are operating in any of the listed modes. In the case of the
Adaptive Copy to SRDF/A transition, it takes two additional cycle switches after resynchronization of the
data for the R2 devices to be consistent.

Example – Synchronous Mode to SRDF/A (1 of 2)
C:\>symrdf -g asyncdg1 query

Device Group (DG) Name : asyncdg1


DG's Type : RDF1
DG's Symmetrix ID : 000197600217 (Microcode Version: 5978)
Remote Symmetrix ID : 000196802253 (Microcode Version: 5977)
RDF (RA) Group Number : 104 (67)

Source (R1) View Target (R2) View FLAGS


--------------------------------- ------------------------ ----- ------------
ST LI ST
Standard A N A
Logical Sym T R1 Inv R2 Inv K Sym T R1 Inv R2 Inv RDF Pair
Device Dev E Tracks Tracks S Dev E Tracks Tracks MCES STATE
--------------------------------- -- ------------------------ ----- ------------
DEV001 000D6 RW 0 0 RW 00072 WD 0 0 S..E Synchronized
DEV002 000D7 RW 0 0 RW 00073 WD 0 0 S..E Synchronized

Total ------- ------- ------- -------


Track(s) 0 0 0 0
MB(s) 0.0 0.0 0.0 0.0


Any SRDF/A operation—with the exception of Consistency Exempt, discussed later in the module—must
be performed on all devices in an RA group. This means that all devices in an SRDF Group must be in the
same SRDF Device Group as well. This is in contrast with SRDF/S, where operations can be performed
on a subset of devices in an SRDF Group.

Example – Synchronous Mode to SRDF/A (2 of 2)
C:\>symrdf -g asyncdg1 set mode async -nop
C:\>symrdf -g asyncdg1 enable
C:\>symrdf -g asyncdg1 query -rdfa
---Output Truncated----
RDFA Session Status : Active - DSE
---Output Truncated----
R2 Data is Consistent : True
---Output Truncated----
Remote Symmetrix ID : 000196802253 (Microcode Version: 5977)

RDF (RA) Group Number : 104 (67)


Source (R1) View Target (R2) View FLAGS
--------------------------------- ------------------------ ----- ------------
ST LI ST
Standard A N A
Logical Sym T R1 Inv R2 Inv K Sym T R1 Inv R2 Inv RDF Pair
Device Dev E Tracks Tracks S Dev E Tracks Tracks MCES STATE
--------------------------------- -- ------------------------ ----- ------------
DEV001 000D6 RW 0 0 RW 00072 WD 0 0 AX.E Consistent
DEV002 000D7 RW 0 0 RW 00073 WD 0 0 AX.E Consistent

Total ------- ------- ------- -------


Track(s) 0 0 0 0
MB(s) 0.0 0.0 0.0 0.0


The mode of SRDF operation is set to Asynchronous for the device pairs in the device group asyncdg1,
and SRDF/A consistency is enabled. The symrdf query -rdfa command gives detailed information about
the SRDF/A state of the device group. As described earlier, the transition from Synchronous to
Asynchronous mode is immediate. The consistency state of the R2 devices is displayed in the query as
True, and the RDF pair state reflects Consistent.

Example – Adaptive Copy Disk Mode to SRDF/A (1 of 2)
C:\>symrdf -g asyncdg1 query

Device Group (DG) Name : asyncdg1


DG's Type : RDF1
DG's Symmetrix ID : 000197600217 (Microcode Version: 5978)
Remote Symmetrix ID : 000196802253 (Microcode Version: 5977)
RDF (RA) Group Number : 104 (67)

Source (R1) View Target (R2) View FLAGS


--------------------------------- ------------------------ ----- ------------
ST LI ST
Standard A N A
Logical Sym T R1 Inv R2 Inv K Sym T R1 Inv R2 Inv RDF Pair
Device Dev E Tracks Tracks S Dev E Tracks Tracks MCES STATE
--------------------------------- -- ------------------------ ----- ------------
DEV001 000D6 RW 0 57 RW 00072 WD 0 0 D..E SyncInProg
DEV002 000D7 RW 0 105 RW 00073 WD 0 0 D..E SyncInProg

Total ------- ------- ------- -------


Track(s) 0 162 0 0
MB(s) 0.0 20.3 0.0 0.0


In this example, the device pairs are in SRDF Adaptive Copy Disk Mode (D..E). There are R2 invalid
tracks.

Example – Adaptive Copy Disk Mode to SRDF/A (2 of 2)
C:\>symrdf -g asyncdg1 set mode async -nop
C:\>symrdf -g asyncdg1 enable
C:\>symrdf -g asyncdg1 query -rdfa
---Output Truncated------
RDFA Session Number : 103
RDFA Cycle Number : 4
RDFA Session Status : Active - DSE
---Output Truncated------
R2 Data is Consistent : False

Remote Symmetrix ID : 000196802253 (Microcode Version: 5977)


RDF (RA) Group Number : 104 (67)

Source (R1) View Target (R2) View FLAGS


--------------------------------- ------------------------ ----- ------------
ST LI ST
Standard A N A
Logical Sym T R1 Inv R2 Inv K Sym T R1 Inv R2 Inv RDF Pair
Device Dev E Tracks Tracks S Dev E Tracks Tracks MCES STATE
--------------------------------- -- ------------------------ ----- ------------
DEV001 000D6 RW 0 0 RW 00072 WD 0 0 AX.E SyncInProg
DEV002 000D7 RW 0 0 RW 00073 WD 0 0 AX.E SyncInProg


The transition into SRDF/A is immediate and the group has been enabled for consistency (AX.E).
However, the pair state is SyncInProg. The R2 devices do not have consistent data until the pair state
reaches Synchronized and at least two cycle switches have completed. Consistency of the R2 data is also
shown by the highlighted field in the output; the R2 Data is Consistent field currently displays
False.

Legend for FLAGS:

(M)ode of Operation : A = Async, S = Sync, E = Semi-sync, D = Adaptive Copy


Disk Mode

: W = Adaptive Copy WP Mode, M = Mixed, T = Active

(C)onsistency State : X = Enabled, . = Disabled, M = Mixed, - = N/A

(E)xempt : X = Enabled, . = Disabled, M = Mixed, - = N/A

R1/R2 Device (S)ize : E = R1 EQ R2, 1 = R1 GT R2, 2 = R2 GT R1, - = N/A
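Rather than repeatedly issuing the query, the transition can be monitored until the Consistent pair state is reached; a minimal sketch using the verify action (the 30-second polling interval is an assumption):

C:\>symrdf -g asyncdg1 verify -consistent -i 30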

Adding Devices to Active SRDF/A Session

• Create a new SRDF device pair in a temporary SRDF Group and synchronize them
– Synchronization can be done using the -establish option with the createpair operation

• Verify synchronization, and suspend the device pair

• Move the device pair from the temporary SRDF Group into the active SRDF/A Group
– Use the -cons_exempt flag with the movepair operation

• Resume the device pair that has been moved

• Wait for the pair state to change from Consistency Exempt to Consistent


With Consistency Exempt, the existing devices in an active SRDF/A session need not be suspended when
adding new devices to the session. Consistency is maintained for the existing devices. The new devices
are excluded from the consistency calculation until they are synchronized, moved into a consistent state,
and the Consistency Exempt attribute has been removed. PowerMaxOS automatically clears the
Consistency Exempt status. There is no CLI to do this. It is critical to wait for the new devices to go into a
Consistent RDF Pair state before using the R1 device for application data. As long as the Consistency
Exempt attribute is set, data on the R2 is not guaranteed to be consistent with the primary data on the R1.
Devices that have the Consistency Exempt attribute set can be controlled independently of the other
devices in the active SRDF/A session. The operations are limited to suspend, resume, and establish.

Query Active SRDF/A Session
C:\>symrdf -g asyncdg1 query -rdfa

--Output Truncated----

Remote Symmetrix ID : 000196802253 (Microcode Version: 5977)


RDF (RA) Group Number : 104 (67)

Source (R1) View Target (R2) View FLAGS


--------------------------------- ------------------------ ----- ------------
ST LI ST
Standard A N A
Logical Sym T R1 Inv R2 Inv K Sym T R1 Inv R2 Inv RDF Pair
Device Dev E Tracks Tracks S Dev E Tracks Tracks MCES STATE
--------------------------------- -- ------------------------ ----- ------------
DEV001 000D6 RW 0 0 RW 00072 WD 0 0 AX.E Consistent
DEV002 000D7 RW 0 0 RW 00073 WD 0 0 AX.E Consistent

Total ------- ------- ------- -------


Track(s) 0 0 0 0
MB(s) 0.0 0.0 0.0 0.0


RDF Group 104 has a pair of devices that are currently in an active SRDF/A session. The objective is to
add another SRDF pair to this group without affecting the consistency of the current SRDF/A session.

Create New SRDF Pair
C:\>symrdf addgrp -label temp -sid 217 -remote_sid 253 -rdfg 105 -remote_rdfg 105 -dir 1F:06,2F:06 -remote_dir 1F:11,2F:11

C:\>symrdf createpair -sid 217 -rdfg 105 -f newpair.txt -type R1 -rdf_mode sync -establish

C:\>symrdf -sid 217 -f newpair.txt -rdfg 105 query

Symmetrix ID : 000197600217 (Microcode Version: 5978)


Remote Symmetrix ID : 000196802253 (Microcode Version: 5977)
RDF (RA) Group Number : 105 (68)

Source (R1) View Target (R2) View FLAGS


--------------------------------- ------------------------ ----- ------------
ST LI ST
Standard A N A
Logical Sym T R1 Inv R2 Inv K Sym T R1 Inv R2 Inv RDF Pair
Device Dev E Tracks Tracks S Dev E Tracks Tracks MCES STATE
--------------------------------- -- ------------------------ ----- ------------
N/A 000D8 RW 0 0 RW 00074 WD 0 0 S..E Synchronized


A new SRDF device pair is created in a separate SRDF Group (Group 105), and the pair is in the
Synchronized state.

Move New Device Pair into Active SRDF/A Session

C:\>symrdf -sid 217 -f newpair.txt -rdfg 105 suspend

C:\>symrdf movepair -sid 217 -f newpair.txt -cons_exempt -rdfg 105 -new_rdfg 104

Execute an RDF 'Move Pair' operation for device file


'newpair.txt' (y/[n]) ? y

An RDF 'Move Pair' operation execution is in progress for device


file 'newpair.txt'. Please wait...

Move RDF Pair from (0217,105) to (0217,104)......................Started.


Move RDF Pair from (0217,105) to (0217,104)......................Done.

The RDF 'Move Pair' operation successfully executed for device


file 'newpair.txt'.


Next, suspend the link for the new device pair and move it from RDF group 105, where it was created, to
RDF group 104, which is currently in an active SRDF/A session. Use the -cons_exempt flag with the
movepair operation.

Query SRDF/A Session
C:\>symrdf -g asyncdg1 query -rdfa

Device Group (DG) Name : asyncdg1


DG's Type : RDF1
DG's Symmetrix ID : 000197600217 (Microcode Version: 5978)
--Output Truncated-----
RDFA Session Status : Active - DSE
RDFA Consistency Exempt Devices : Yes
--Output Truncated-----
Remote Symmetrix ID : 000196802253 (Microcode Version: 5977)
RDF (RA) Group Number : 104 (67)

Source (R1) View Target (R2) View FLAGS


--------------------------------- ------------------------ ----- ------------
ST LI ST
Standard A N A
Logical Sym T R1 Inv R2 Inv K Sym T R1 Inv R2 Inv RDF Pair
Device Dev E Tracks Tracks S Dev E Tracks Tracks MCES STATE
--------------------------------- -- ------------------------ ----- ------------
DEV001 000D6 RW 0 0 RW 00072 WD 0 0 AX.E Consistent
DEV002 000D7 RW 0 0 RW 00073 WD 0 0 AX.E Consistent
DEV003 000D8 RW 0 0 NR 00074 WD 0 0 AXXE Suspended


A query shows that the SRDF/A session is still active and that the group contains Consistency Exempt
devices. As can be seen in the flags (AXXE), the Consistency Exempt attribute is set for the new device
added to the SRDF/A session, while the existing devices remain in a Consistent state. Note that device
0D8 was added to the device group asyncdg1 after the createpair operation.

Legend for FLAGS:

(M)ode of Operation : A = Async, S = Sync, E = Semi-sync, D = Adaptive Copy


Disk Mode

: W = Adaptive Copy WP Mode, M = Mixed, T = Active

(C)onsistency State : X = Enabled, . = Disabled, M = Mixed, - = N/A

(E)xempt : X = Enabled, . = Disabled, M = Mixed, - = N/A

R1/R2 Device (S)ize : E = R1 EQ R2, 1 = R1 GT R2, 2 = R2 GT R1, - = N/A

Resume Link for New Device Pair
C:\>symrdf -g asyncdg1 resume DEV003

C:\>symrdf -g asyncdg1 query -rdfa


--Output Truncated-----
RDFA Session Status : Active - DSE
RDFA Consistency Exempt Devices : No
--Output Truncated-----
Remote Symmetrix ID : 000196802253 (Microcode Version: 5977)
RDF (RA) Group Number : 104 (67)

Source (R1) View Target (R2) View FLAGS


--------------------------------- ------------------------ ----- ------------
ST LI ST
Standard A N A
Logical Sym T R1 Inv R2 Inv K Sym T R1 Inv R2 Inv RDF Pair
Device Dev E Tracks Tracks S Dev E Tracks Tracks MCES STATE
--------------------------------- -- ------------------------ ----- ------------
DEV001 000D6 RW 0 0 RW 00072 WD 0 0 AX.E Consistent
DEV002 000D7 RW 0 0 RW 00073 WD 0 0 AX.E Consistent
DEV003 000D8 RW 0 0 RW 00074 WD 0 0 AX.E Consistent


Use the command shown to resume the link for the newly added device pair. The pair state must be
Consistent and the Consistency Exempt attribute must be cleared before this device is used for workload.

Removing Devices from Active SRDF/A Session

• Suspend the device pair in the active SRDF/A session
– Use the -cons_exempt flag with the suspend operation
– If Consistency has been enabled for the SRDF/A session, the -force option is required

• Move the device pair to a different SRDF Group
– Use the -cons_exempt flag again if moving to another group with an active SRDF/A session


To remove a device pair, use the -cons_exempt flag to first suspend the link for the devices. Then use
the movepair operation to move the devices out of the active SRDF/A session to a different SRDF
Group, as sketched below.
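A minimal end-to-end sketch of the removal, mirroring the add procedure shown earlier (the device file name and the target group number are assumptions):

C:\>symrdf -sid 217 -f removepair.txt -rdfg 104 suspend -cons_exempt -force
C:\>symrdf movepair -sid 217 -f removepair.txt -rdfg 104 -new_rdfg 105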

SRDF/A Array-Wide Parameters
• rdfa_cache_percent:
– Defaults to 75, with a range of valid values from 0 to 100 percent
– It is the percentage of the Max# of System Write Pending Slots available to SRDF/A. The purpose is to
ensure that other applications can use some of the WP limit
– When SRDF/A hits its WP cache limit, it is forced to drop SRDF/A sessions to free up cache
– Setting it lower reserves some WP limit for non-SRDF/A cache usage. Setting it higher enables SRDF/A to use more
of the cache WP limit, potentially creating performance problems for other applications

• rdfa_host_throttle_time:
– Defaults to 0, with a range of valid values from 0 to 65535
– If greater than 0, this value overrides the rdfa_cache_percent and session_priority settings
– When the System WP Limit is reached, throttling delays a write from the host until a cache slot becomes free
– The value is the number of seconds to throttle host writes before dropping SRDF/A sessions. A value of 65535 means
wait forever

• dse_max_cap:
  – Specifies the maximum number of GB in the SRP that DSE can use
  – Default is No Limit
  – Can be set to a value between 1 and 100000


The rdfa_cache_percent sets the percentage of write pending cache that can be used by SRDF/A.
The rdfa_cache_percent can range from 0 to 100 percent.

The rdfa_host_throttle_time sets the number of seconds to throttle host writes to SRDF/A devices
when cache is full, before dropping RDF/A sessions. Throttling delays a write from the host until a cache
slot becomes free. Values are from 0 to 65535.

The dse_max_cap specifies the maximum capacity that DSE (Delta Set Extension) can consume from the array's designated SRP (Storage Resource Pool). The Best Practices for Dell EMC SRDF/A Delta Set Extension Technical Note provides more information.

SRDF/A Configuration Parameters

• Array-wide parameters:
  – Maximum Cache Usage
    • symconfigure -sid 499 -cmd "set symmetrix rdfa_cache_percent=50;" commit
  – Maximum Host Throttle
    • symconfigure -sid 499 -cmd "set symmetrix rdfa_host_throttle_time=2;" commit
  – DSE Maximum Capacity
    • symconfigure -sid 499 -cmd "set symmetrix dse_max_cap=1000;" commit

• SRDF Group level parameters:
  – Cycle Time
    • symrdf -sid 499 -rdfg 20 set rdfa -cycle_time 3
  – Session Priority
    • symrdf -sid 499 -rdfg 20 set rdfa -priority 20


The array-wide parameters are set using the symconfigure command as shown here.

The Group parameters for SRDF/A can be set using the symrdf command.

The Cycle Time is the minimum time to wait before attempting an SRDF/A cycle switch. Values range
from 1 to 60 seconds. The default minimum cycle time is 15 seconds.

The Session priority is the priority used to determine which SRDF/A sessions to drop if cache becomes
full. Values range from 1 to 64, with 1 being the highest priority—the last to be dropped.

Lesson: SRDF/A Resiliency Features
This lesson covers the following topics:

• Transmit Idle

• Delta Set Extension

• Group-level Write Pacing

• Recovery after link loss


This lesson covers SRDF/A resiliency features such as Transmit Idle, Delta Set Extension, and Group-
level Write Pacing. A method for recovering after a link loss is also discussed.

Transmit Idle

• Transmit Idle is enabled by default when Dynamic SRDF Groups are created
• Keeps SRDF/A sessions active if a temporary link loss occurs
• When links are restored, data transmission resumes
• When all links are lost:
  – Data transmission from source to target is halted
  – Cycle switching continues
  – Transmit queue depth increases
  – Data accumulates in cache until SRDF/A cache usage reaches the DSE threshold
  – When maximum DSE capacity is reached, the session is dropped


SRDF/A Transmit Idle is a feature of SRDF/A that dynamically and transparently extends the Capture,
Transmit, and Receive phases of the SRDF/A cycle. Transmit Idle is enabled by default when Dynamic
SRDF groups are created. SRDF/A Transmit Idle is used to keep SRDF/A sessions active during
temporary link losses and mask the effects of an all SRDF links lost event. Without the SRDF/A Transmit
Idle feature, an all SRDF links lost event would normally result in the abnormal termination of SRDF/A.

Delta Set Extension

• Extends cache available for SRDF/A by off-loading cycle data from cache to
disk
• Arrays are preconfigured with one or more SRPs before installation
• One SRP is designated for DSE allocations and supports DSE for all
SRDF/A sessions in the array
– The default SRP for DSE is the default SRP for FBA devices
• Data is paged to disk when array Write Pending count crosses the DSE
threshold
– Default threshold is 50% of array Write Pending limit
• When conditions become normal, data is read back from disk to cache and
transmitted to the Target array


SRDF/A DSE extends the cache space available for SRDF/A session cycles by offloading cycle data from
cache to preconfigured pool storage. DSE helps SRDF/A ride through larger and longer throughput
imbalances than cache-based buffering alone.

Setting Maximum Cache Usage and DSE Capacity
C:\>symcfg -sid 217 -v list

-----Output Truncated-----
SRDF/A Maximum Host Throttle (Secs) : 0
SRDF/A Maximum Cache Usage (Percent) : 75
SRDF/A DSE Maximum Capacity (GB) : NoLimit
-----Output Truncated-----

C:\>symconfigure -sid 217 -cmd "set symmetrix rdfa_cache_percent=80;" commit

C:\>symconfigure -sid 217 -cmd "set symmetrix dse_max_cap=1000;" commit

C:\>symcfg -sid 217 -v list

-----Output Truncated-----
SRDF/A Maximum Host Throttle (Secs) : 0
SRDF/A Maximum Cache Usage (Percent) : 80
SRDF/A DSE Maximum Capacity (GB) : 1000
-----Output Truncated-----


The default SRDF/A Maximum Cache Usage is 75 percent. The default SRDF/A DSE Maximum Capacity
is NoLimit. Use the symconfigure commands shown to modify these two settings.

Verifying Designated SRP for DSE

C:\>symcfg -sid 217 list -srp -detail

STORAGE RESOURCE POOLS

Symmetrix ID : 000197600217
C A P A C I T Y
-------------------------------- --- -------------------------------------------
Flg Usable Allocated Free Subscribed
Name DR (GB) (GB) (GB) (GB)
-------------------------------- --- ---------- ---------- ---------- ----------
SRP_1 FX 12518.0 1419.7 11098.3 1264.5
---------- ---------- ---------- ----------
Total 12518.0 1419.7 11098.3 1264.5

Legend:
Flags:
(D)efault SRP : F = FBA Default, C = CKD Default, B = Both, . = N/A
(R)DFA DSE : X = Usable, . = Not Used


A listing of the SRP shows that SRP_1 is designated for DSE use.

Query After Temporary Link Loss (1 of 2)
C:\>symrdf -g asyncdg1 query -rdfa

Device Group (DG) Name : asyncdg1


DG's Type : RDF1
DG's Symmetrix ID : 000197600217 (Microcode Version: 5978)

RDFA Session Number : 103


RDFA Cycle Number : 570
RDFA Session Status : Active - DSE
RDFA Consistency Exempt Devices : No
RDFA Minimum Cycle Time : 00:00:15
RDFA Avg Cycle Time : 00:00:15
RDFA Avg Transmit Cycle Time : 00:00:15
Transmit Queue Depth on R1 Side : 42
Tracks not Committed to the R2 Side: 869312
Time that R2 is behind R1 : 00:10:18
R2 Image Capture Time : Fri Jan 25 14:33:03 2019
R2 Data is Consistent : True
RDFA R1 Side Percent Cache In Use : 0
RDFA R2 Side Percent Cache In Use : 0
R1 Side DSE Used Tracks : 30954
----Output Continued on Next Page-------


The SRDF/A session is still active. The transmit queue depth on the R1 side increases as cycle switches
continue in multi-cycle mode (MCM). DSE spillover has started as can be seen from the R1 Side DSE
Used Tracks.

Query After Temporary Link Loss (2 of 2)
----Output Continued from Previous Page-------
R2 Side DSE Used Tracks : 0
R1 Side Shared Tracks : 0
Transmit Idle Time : 00:10:07

Remote Symmetrix ID : 000196802253 (Microcode Version: 5977)


RDF (RA) Group Number : 104 (67)

Source (R1) View Target (R2) View FLAGS


--------------------------------- ------------------------ ----- ------------
ST LI ST
Standard A N A
Logical Sym T R1 Inv R2 Inv K Sym T R1 Inv R2 Inv RDF Pair
Device Dev E Tracks Tracks S Dev E Tracks Tracks MCES STATE
--------------------------------- -- ------------------------ ----- ------------
DEV001 000D6 RW 0 0 RW 00072 NA NA NA AX.E TransIdle
DEV002 000D7 RW 0 0 RW 00073 NA NA NA AX.E TransIdle
DEV003 000D8 RW 0 0 RW 00074 NA NA NA AX.E TransIdle

Total ------- ------- ------- -------


Track(s) 0 0 0 0
MB(s) 0.0 0.0 0.0 0.0


The session has been in transmit idle for a little over 10 minutes and the pair state is reflected as
TransIdle.

DSE Utilization
C:\>symcfg -sid 217 list -srp -demand -type sl

STORAGE RESOURCE POOLS

Symmetrix ID : 000197600217

Name : SRP_1
Usable Capacity (GB) : 12518.0
SRDF DSE Allocated (GB) : 8.6
Snapshots Allocated (GB) : 60.0

--------------------------------------------------
Service Level Subscribed Allocated
Name (GB) (GB) (%)
------------------------ ---------- --------------
<none> 20.1 0.0 0
Diamond 1162.4 116.6 10
Optimized 89.9 18.4 20
Platinum 20.0 0.0 0
Gold 8.0 0.0 0
---------- ---------- ---
Total 1300.5 135.0 10


A little over 8 GB has been allocated so far for DSE from the designated SRP—SRP_1 in this case.

Group-Level Write Pacing

• HYPERMAX OS introduced enhanced Group-level write pacing
• Paces host IOPS to the DSE transfer rate for an SRDF/A session
• Responds only to the spillover rate on the R1 side
  – Spillover on the R2 side does not affect Group-level write pacing


HYPERMAX OS introduced enhanced group-level pacing. Enhanced group-level pacing paces host I/Os
to the DSE spill-over rate for an SRDF/A session.

When DSE is activated for an SRDF/A session, host-issued write I/Os are throttled so their rate does not exceed the rate at which DSE can offload the SRDF/A session's cycle data. The system paces at the spillover rate until the configured usable DSE capacity on the SRP reaches its limit.

At that point, the system either drops SRDF/A or paces host writes to the link transfer rate; the choice between dropping and pacing is user definable.

All existing pacing features are supported and can be utilized to keep SRDF/A sessions active.

Activating Group-Level Write Pacing
C:\>symcfg -sid 217 list -rdfg 104 -rdfa
-----Output Truncated-----
-------- ---------- -------- ----- --- --- --------- -----------------------
Write Pacing
RA-Grp Group Flags Cycle Pri Thr Transmit Delay Thr GRP DEV FLG
Name CSRM TDA time Idle Time (usecs) (%) SAU SAU P
-------- ---------- -------- ----- --- --- --------- ------- --- --- --- ---
104 (67) Async1 XAS- XAX 15 33 50 000:00:00 50000 60 I.- --- X

C:\>symrdf -sid 217 -rdfg 104 activate -rdfa_wpace -nop

C:\>symrdf -sid 217 -rdfg 104 set rdfa_pace -wp_autostart on -nop

C:\>symcfg -sid 217 list -rdfg 104 -rdfa


-----Output Truncated-----
-------- ---------- -------- ----- --- --- --------- -----------------------
Write Pacing
RA-Grp Group Flags Cycle Pri Thr Transmit Delay Thr GRP DEV FLG
Name CSRM TDA time Idle Time (usecs) (%) SAU SAU P
-------- ---------- -------- ----- --- --- --------- ------- --- --- --- ---
104 (67) Async1 XAS- XAX 15 33 50 000:00:00 50000 60 AXX --- X


A list command issued to the SRDF Group 104 shows current Group-level Write Pacing flags. They are
Inactive and Disabled by default. The command examples illustrate how to activate Group-level Write
Pacing and set Write Pacing Autostart to on. A subsequent list command displays the result.

Legend:

Write Pacing Flags :

(GRP) Group-Level Pacing:

(S)tatus : A = Active, I = Inactive, - = N/A

(A)utostart : X = Enabled, . = Disabled, - = N/A

S(U)pported : X = Supported, . = Not Supported, - = N/A

(DEV) Device-Level Pacing:

(S)tatus : A = Active, I = Inactive, - = N/A

(A)utostart : X = Enabled, . = Disabled, - = N/A

S(U)pported : X = Supported, . = Not Supported, - = N/A

(FLG) Flags for Group-Level and Device-Level Pacing:

Devs (P)aceable : X = All Devices, . = Not all devices, - = N/A
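
For completeness, a sketch of turning Group-level Write Pacing back off for the same group; these are the inverse forms of the commands shown above:

C:\>symrdf -sid 217 -rdfg 104 deactivate -rdfa_wpace -nop

C:\>symrdf -sid 217 -rdfg 104 set rdfa_pace -wp_autostart off -nop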

Recovering After Extended Loss of Links

• Dell EMC recommends making a Gold Copy of the R2 before starting any
resynchronization
• If an extended loss of link occurs, many R2 invalid tracks can build up on
the R1 side
• Enable SRDF/A after the two sides are synchronized
• Resynchronization before enabling SRDF/A can be performed by setting
SRDF mode to Adaptive Copy Disk Mode


As noted earlier, during resynchronization, the R2 does not have consistent data. A copy of the consistent
R2 data prior to resynchronization can safeguard against unexpected failures during the resynchronization
process. When the link is resumed, if there are a large number of invalid tracks owed by the R1 to its R2, it
is recommended that SRDF/A not be enabled right away. Enabling SRDF/A right after link resumption
causes a surge of traffic on the link due to shipping of accumulated invalid tracks and the new data added
to the SRDF/A cycles. This could lead to SRDF/A consuming more cache and reaching the System Write
Pending limit. If this happens, SRDF/A would drop again. Like with SRDF/S, resynchronization should be
performed during periods of relatively low production activity.

Resynchronization in Adaptive Copy Disk mode minimizes the impact on the production host. New writes are buffered and these, along with the R2 invalids, are sent across the link. The tradeoff is that resynchronization takes longer.

Resynchronization in Synchronous mode impacts the production host. New writes have to be sent preferentially across the link while the R2 invalids are also shipped. Switching to Synchronous is possible only if the distances and other factors permit. For instance, a site might normally run in SRDF/S and toggle into SRDF/A for batch processing because of its higher bandwidth requirement; if a loss of links occurs during the batch processing, it might be possible to resynchronize in SRDF/S.

In either case, the R2 data is inconsistent until all the invalid tracks are sent over. Therefore, it is advisable
to enable SRDF/A after the two sides are completely synchronized.

Recovery Example (1 of 3)
C:\>symrdf -g asyncdg1 query -rdfa

RDFA Cycle Number : 0


RDFA Session Status : Inactive

Remote Symmetrix ID : 000196802253 (Microcode Version: 5977)


RDF (RA) Group Number : 104 (67)

Source (R1) View Target (R2) View FLAGS


--------------------------------- ------------------------ ----- ------------
ST LI ST
Standard A N A
Logical Sym T R1 Inv R2 Inv K Sym T R1 Inv R2 Inv RDF Pair
Device Dev E Tracks Tracks S Dev E Tracks Tracks MCES STATE
--------------------------------- -- ------------------------ ----- ------------
DEV001 000D6 RW 0 470727 NR 00072 NA NA NA AX.E Partitioned
DEV002 000D7 RW 0 465205 NR 00073 NA NA NA AX.E Partitioned

Total ------- ------- ------- -------


Track(s) 0 935932 NA NA
MB(s) 0 116992 NA NA


In this example, there is a workload on the devices in the SRDF/A enabled state. A permanent loss of links places the devices in a Partitioned state. Production work continues on the R1 devices, and the new writes arriving for the R1 devices are marked as invalid, or owed to the R2. At some point, SRDF/A is dropped and the session is marked Inactive. To get to this state, the maximum DSE capacity has been exceeded, so there is no choice but to drop SRDF/A.

Recovery Example (2 of 3)
C:\>symrdf -g asyncdg1 query -rdfa

RDFA Cycle Number : 0


RDFA Session Status : Inactive
Time that R2 is behind R1 : 00:09:31
R2 Image Capture Time : Mon Jan 28 14:16:45 2019
R2 Data is Consistent : True

Remote Symmetrix ID : 000196802253 (Microcode Version: 5977)


RDF (RA) Group Number : 104 (67)

Source (R1) View Target (R2) View FLAGS


--------------------------------- ------------------------ ----- ------------
ST LI ST
Standard A N A
Logical Sym T R1 Inv R2 Inv K Sym T R1 Inv R2 Inv RDF Pair
Device Dev E Tracks Tracks S Dev E Tracks Tracks MCES STATE
--------------------------------- -- ------------------------ ----- ------------
DEV001 000D6 RW 0 472615 NR 00072 WD 0 0 AX.E Suspended
DEV002 000D7 RW 0 467117 NR 00073 WD 0 0 AX.E Suspended


When the links are restored, the pair state moves to Suspended. Even though the flags indicate SRDF/A
mode, the session status is Inactive. Also note that the R2 Data is Consistent. This is because the data
would be consistent up to the last Apply cycle. However, there are accumulated R2 Invalid tracks that are
owed to the R2 side.

Recovery Example (3 of 3)
C:\>symrdf -g asyncdg1 disable -nop

C:\>symrdf -g asyncdg1 set mode acp_disk -nop

C:\>symrdf -g asyncdg1 resume -nop

C:\>symrdf -g asyncdg1 query -rdfa

Remote Symmetrix ID : 000196802253 (Microcode Version: 5977)


RDF (RA) Group Number : 104 (67)

Source (R1) View Target (R2) View FLAGS


--------------------------------- ------------------------ ----- ------------
ST LI ST
Standard A N A
Logical Sym T R1 Inv R2 Inv K Sym T R1 Inv R2 Inv RDF Pair
Device Dev E Tracks Tracks S Dev E Tracks Tracks MCES STATE
--------------------------------- -- ------------------------ ----- ------------
DEV001 000D6 RW 0 466384 RW 00072 WD 0 0 D..E SynchInProg
DEV002 000D7 RW 0 462423 RW 00073 WD 0 0 D..E SynchInProg


As mentioned, we will next place the device group in Adaptive Copy Disk mode. As consistency was
enabled when the links were lost, we have to first disable consistency before changing the mode to
Adaptive Copy Disk. The RDF pair state is still Suspended. Next we resume the links. Once the RDF pair
state moves to Synchronized, the mode can be changed to Asynchronous and Consistency Enabled.

symrdf -g asyncdg1 set mode async

symrdf -g asyncdg1 enable

Failover/Failback in SRDF/A Mode

• If the primary site fails, data on R2 is consistent up to the last Apply cycle
– Partial data in the Receive cycle is discarded

• SRDF failover procedure can then be executed, and the workload can be
started on the R2 devices
– Consistency protection should be disabled before issuing symrdf failover without the -force option

• Failback procedure after the primary site has been restored is identical to
Synchronous SRDF
– After symrdf failback command completion, workload can be restarted on
the R1 devices and SRDF/A can be enabled


Again, it is advisable to make a copy of the R2 prior to executing a failback operation. When the workload
is resumed on the R1 devices immediately after a failback, accumulated invalid tracks have to be
synchronized from the R2 to the R1, and new writes must be shipped from the R1 to R2. If there is an
interruption now, data on the R2 is not consistent. Even though SRDF/A can be enabled right after a
failback, it should be enabled after the SRDF pairs enter the Synchronized state.
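
As a sketch, assuming the device group asyncdg1 from the earlier examples with consistency enabled, the sequence might look like this; disabling consistency first avoids needing -force on the failover:

C:\>symrdf -g asyncdg1 disable -nop

C:\>symrdf -g asyncdg1 failover -nop

After the primary site is restored, fail back, then re-enable SRDF/A once the pairs reach the Synchronized state:

C:\>symrdf -g asyncdg1 failback -nop

C:\>symrdf -g asyncdg1 enable -nop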

Lesson: Independent Groups and Multi-Session Consistency
This lesson covers the following topics:

• Multiple independent SRDF/A groups

• Multi-Session Consistency (MSC)

• Managing MSC


This lesson covers managing independent SRDF/A groups and SRDF/A Multi-Session Consistency.

Independent SRDF/A Groups (1 of 4)
C:\>symrdf -g asyncdg1 query -rdfa

RDFA Cycle Number : 1421


RDFA Session Status : Active - DSE
RDFA Consistency Exempt Devices : No
RDFA Minimum Cycle Time : 00:00:15

Remote Symmetrix ID : 000196802253 (Microcode Version: 5977)


RDF (RA) Group Number : 104 (67)

Source (R1) View Target (R2) View FLAGS


--------------------------------- ------------------------ ----- ------------
ST LI ST
Standard A N A
Logical Sym T R1 Inv R2 Inv K Sym T R1 Inv R2 Inv RDF Pair
Device Dev E Tracks Tracks S Dev E Tracks Tracks MCES STATE
--------------------------------- -- ------------------------ ----- ------------
DEV001 000D6 RW 0 0 RW 00072 WD 0 0 AX.E Consistent
DEV002 000D7 RW 0 0 RW 00073 WD 0 0 AX.E Consistent
DEV003 000D8 RW 0 0 RW 00074 WD 0 0 AX.E Consistent


Devices in RDF Group 104 are in an active SRDF/A session. The pair state is Consistent. The current
cycle number for this group is 1421. The minimum cycle time is at the default of 15 seconds.

Independent SRDF/A Groups (2 of 4)
C:\>symrdf -g asyncdg2 query -rdfa

RDFA Cycle Number : 265


RDFA Session Status : Active - DSE
RDFA Consistency Exempt Devices : No
RDFA Minimum Cycle Time : 00:00:05

Remote Symmetrix ID : 000196802253 (Microcode Version: 5977)


RDF (RA) Group Number : 105 (68)

Source (R1) View Target (R2) View FLAGS


--------------------------------- ------------------------ ----- ------------
ST LI ST
Standard A N A
Logical Sym T R1 Inv R2 Inv K Sym T R1 Inv R2 Inv RDF Pair
Device Dev E Tracks Tracks S Dev E Tracks Tracks MCES STATE
--------------------------------- -- ------------------------ ----- ------------
DEV001 000D9 RW 0 0 RW 00075 WD 0 0 AX.E Consistent
DEV002 000DA RW 0 0 RW 00076 WD 0 0 AX.E Consistent


Devices in RDF Group 105 are in an active SRDF/A session. The pair state is Consistent. The current
cycle number for this group is 265. The minimum cycle time for this group has been set to five seconds.
The two groups switch cycles independently of each other.
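
As a sketch, the five-second minimum cycle time for this group could have been set with the group-level parameter covered earlier:

symrdf -sid 217 -rdfg 105 set rdfa -cycle_time 5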

Independent SRDF/A Groups (3 of 4)

C:\>symrdf -g asyncdg1 query -rdfa

RDFA Cycle Number : 1442


RDFA Session Status : Active - DSE

Remote Symmetrix ID : 000196802253 (Microcode Version: 5977)


RDF (RA) Group Number : 104 (67)

Source (R1) View Target (R2) View FLAGS


--------------------------------- ------------------------ ----- ------------
ST LI ST
Standard A N A
Logical Sym T R1 Inv R2 Inv K Sym T R1 Inv R2 Inv RDF Pair
Device Dev E Tracks Tracks S Dev E Tracks Tracks MCES STATE
--------------------------------- -- ------------------------ ----- ------------
DEV001 000D6 RW 0 0 RW 00072 NA NA 0 AX.E TransIdle
DEV002 000D7 RW 0 0 RW 00073 NA NA 0 AX.E TransIdle
DEV003 000D8 RW 0 0 RW 00074 NA NA 0 AX.E TransIdle


Loss of links for RDF Group 104 causes the pair states to go into Transmit Idle.

Independent SRDF/A Groups (4 of 4)
C:\>symrdf -g asyncdg2 query -rdfa

RDFA Cycle Number : 303


RDFA Session Status : Active - DSE
RDFA Consistency Exempt Devices : No
RDFA Minimum Cycle Time : 00:00:05

Remote Symmetrix ID : 000196802253 (Microcode Version: 5977)


RDF (RA) Group Number : 105 (68)

Source (R1) View Target (R2) View FLAGS


--------------------------------- ------------------------ ----- ------------
ST LI ST
Standard A N A
Logical Sym T R1 Inv R2 Inv K Sym T R1 Inv R2 Inv RDF Pair
Device Dev E Tracks Tracks S Dev E Tracks Tracks MCES STATE
--------------------------------- -- ------------------------ ----- ------------
DEV001 000D9 RW 0 0 RW 00075 WD 0 0 AX.E Consistent
DEV002 000DA RW 0 0 RW 00076 WD 0 0 AX.E Consistent


However, as the links for RDF Group 105 are still available, it is not affected by the loss of links for RDF
Group 104. So the devices in RDF Group 105 continue to be consistent and the cycle switches proceed as
usual.

SRDF/A Multi-Session Consistency (MSC)

• Manages multiple SRDF/A sessions logically as if they were a single session:
  – RDF Daemon for Open Systems
  – Sessions can be within or across arrays
  – Ensures a complete, re-startable point-in-time copy on the remote side

[Figure: delta sets from multiple SRDF/A sessions coordinated as a single unit]


If one or more source R1 devices in an SRDF/A Multi-Session Consistency (MSC) enabled SRDF
consistency group cannot propagate data to their corresponding target R2 devices, then the MSC process
suspends data propagation from all R1 devices in the consistency group. This halts all data flow to the R2
targets. The RDF Daemon—storrdfd—performs cycle-switching and cache recovery operations across
all SRDF/A sessions in the group. This ensures that a consistent R2 data copy of the database exists at
the point-in-time any interruption occurs. If a session has devices from multiple arrays, then the host
running storrdfd must have access to all the arrays to coordinate cycle switches. It is recommended to have more than one host with access to all the arrays running the storrdfd daemon; in the event one host fails, the surviving host can continue with MSC cycle switches.

A composite group must be created using the RDF consistency protection option—rdf_consistency—
and must be enabled using the symcg enable command. At this point, the RDF Daemon begins monitoring and managing the MSC consistency group.

SRDF/A MSC

• RDF Daemon coordinates cycle switching of the SRDF/A MSC group sessions as a single entity:
  – Responsible for detecting failure conditions that would cause data on the R2 side to become inconsistent
  – When a failure condition is detected, the cycle switching for all SRDF/A sessions in the group is stopped in a manner that leaves the R2 side with consistent data


The RDF process daemon maintains consistency for enabled composite groups across multiple arrays for
SRDF/A with MSC. For the MSC option—rdf_consistency—to work in an RDF consistency-enabled
environment, each locally-attached host performing management operations must run an instance of the
RDF Daemon—storrdfd. Each host running storrdfd must also run an instance of the base
daemon—storapid. Optionally, if the Group Naming Services (GNS) daemon is also running, it
communicates the composite group definitions back to the RDF Daemon. If the GNS daemon is not
running, the composite group must be defined on each host individually.

In MSC, the Transmit cycles on the R1 side of all participating sessions, as well as all the corresponding
Apply cycles on the R2 side, must be empty. The switch is coordinated and controlled by the RDF
Daemon.

All host writes are held for the duration of the cycle switch. This ensures dependent write consistency. If
one or more sessions in MSC complete their Transmit and Apply cycles ahead of other sessions, they
have to wait for all sessions to complete, prior to a cycle switch.

SRDF/A Operations
• Set SYMAPI_USE_RDFD = ENABLE in the options configuration file
• Create a Composite Group (CG) with the -rdf_consistency option
  – Group definition is passed to the RDF Daemon as a candidate group
  – If the Daemon is not already running, it is started automatically
• Add all the devices in the multiple SRDF/A sessions to the CG
• Put all CG devices into Async mode
  symrdf -cg <CGname> set mode async
• Enable CG devices for consistency protection
  symcg -cg <CGname> enable
  – The RDF Daemon is notified that the group should now be monitored
  – Enable command must be done after the devices are put into Async mode
• When the devices become RW on the link, the RDF Daemon:
  – Starts performing cycle switching
  – Actively monitors the health of the group to maintain R2 data consistency


To use the optional RDF Daemon, enable it in the SYMAPI options file and then start it. Managing MSC
requires the creation of Composite Groups. When the Composite Group is enabled, the cycle switching is
controlled by the RDF Daemon.
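
As a minimal sketch of the prerequisite, assuming a default Solutions Enabler installation (the options file resides in the SYMAPI config directory), the options file entry and a manual daemon start would look like this:

SYMAPI_USE_RDFD = ENABLE

C:\>stordaemon start storrdfd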

SRDF/A MSC

C:\>symrdf -g asyncdg1 disable -nop

C:\>symrdf -g asyncdg2 disable -nop

C:\>symdg dg2cg asyncdg1 msc_cg -rdf_consistency

C:\>symdg dg2cg asyncdg2 msc_cg -rdf_consistency -rename

C:\>symcg list

C O M P O S I T E G R O U P S

Number of Number of
Name Type Valid Symms RAGs DGs Devs BCVs VDEVs TGTs

msc_cg RDF1 Yes 1 2 0 4 0 0 0

C:\>symcg -cg msc_cg enable -nop


The objective is to manage two SRDF/A groups as a single entity using MSC. We first disable consistency for the two device groups and then add them to a composite group, created with the -rdf_consistency option, as shown here.

Issue the symcg list command to list the composite groups. The Number of RDF Groups (RAGs) displays 2.

Next, enable MSC for the CG.

Query of Composite Group
C:\>symrdf -cg msc_cg query -rdfa

Composite Group Name : msc_cg


Composite Group Type : RDF1
Number of Symmetrix Units : 1
Number of RDF (RA) Groups : 2
RDF Consistency Mode : MSC
RDFA MSC Info
{
MSC Session Status : Active
Consistency State : CONSISTENT
}

RDF (RA) Group Number : 105 (68)


RDFA Info:
{
Cycle Number : 79
Session Status : Active - MSC - DSE

RDF (RA) Group Number : 104 (67)


RDFA Info:
{
Cycle Number : 79
Session Status : Active - MSC - DSE


The cycle numbers for the two groups have been reset to be the same, and MSC has been enabled. The two groups now cycle switch in unison even though their minimum cycle times are different.

SRDF/A MSC at Work
C:\>symrdf -cg msc_cg query
RDF (RA) Group Number : 105 (68)

Source (R1) View Target (R2) View FLAGS


--------------------------------- ------------------------ ----- ------------
ST LI ST
Standard A N A
Logical Sym T R1 Inv R2 Inv K Sym T R1 Inv R2 Inv RDF Pair
Device Dev E Tracks Tracks S Dev E Tracks Tracks MCES STATE
--------------------------------- -- ------------------------ ----- ------------
DEV004 000D9 RW 0 130492 NR NA NA NA NA AX.E Partitioned

RDF (RA) Group Number : 104 (67)

Source (R1) View Target (R2) View FLAGS


--------------------------------- ------------------------ ----- ------------
ST LI ST
Standard A N A
Logical Sym T R1 Inv R2 Inv K Sym T R1 Inv R2 Inv RDF Pair
Device Dev E Tracks Tracks S Dev E Tracks Tracks MCES STATE
--------------------------------- -- ------------------------ ----- ------------
DEV001 000D6 RW 0 8205 NR 00072 WD 0 0 AX.E Suspended
DEV002 000D7 RW 0 8205 NR 00073 WD 0 0 AX.E Suspended
DEV003 000D8 RW 0 8205 NR 00074 WD 0 0 AX.E Suspended


A permanent loss of links for RDF Group 105 results in a Partitioned state for that group. But RDF Group
104 is suspended even though its links are still available. Note that the output is very verbose and has
been edited to show the relevant details for this example. When the failed links are restored, the RDF
Group moves from the Partitioned state to the Suspended state. Recovering from this state can be
accomplished with the command:

symrdf -cg msc_cg establish

Once the invalid tracks are marked, merged, and synchronized, MSC protection is automatically reinstated. The user does not have to issue symcg -cg msc_cg enable again.

MSC Cleanup

• If the link to the R2 side is available, the RDF Daemon performs the cleanup
automatically on the R1 side
• If the link is unavailable, then invocation of any SRDF command—such as
symrdf failover or split—from the R2 side performs the automatic
cleanup


Cleanup is automatically performed by the RDF Daemon if the link to the R2 side is available.
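
For example, a sketch assuming the links are down and the composite group msc_cg is defined on an R2-side management host; issuing a failover from that host triggers the automatic cleanup (adding -force if the current state requires it):

C:\>symrdf -cg msc_cg failover -nop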

Lab: SRDF/Asynchronous Operations

This lab covers


• Single Session SRDF/Asynchronous groups
• Configuring Concurrent SRDF
• Configuring and managing SRDF/A Multi-Session
Consistency


This lab covers setting SRDF/Asynchronous mode of operation for SRDF device pairs and enabling
consistency protection. It also covers configuring Concurrent SRDF with one leg in SRDF/Synchronous
mode and the other in SRDF/Asynchronous mode. Configuring and managing SRDF/A Multi-Session
Consistency is covered as well.

Module Summary

Key points covered in this module:

• SRDF/Asynchronous remote replication on PowerMax and VMAX Family arrays

• SRDF/A operations

• SRDF/A resiliency features

• SRDF/A Multi-Session Consistency


This module covered SRDF/Asynchronous mode of remote replication. Concepts and operations for
SRDF/A in Single and Multi-Session modes were presented. SRDF/A resiliency features were also
discussed.

Module: SRDF/Metro

Upon completion of this module, you should be able to:

• Describe SRDF/Metro

• Configure SRDF/Metro

• Create and view SRDF/Metro Device Pairs


This module provides an overview of SRDF/Metro. Configuration and monitoring of SRDF/Metro is also
covered.

Lesson: SRDF/Metro Introduction
This lesson covers the following topics:

• SRDF/Metro configurations

• Bias Facility

• SRDF Resiliency

• SRDF/Metro Extended Disaster Recovery (DR)


This lesson covers the concepts of SRDF/Metro. Configurations, resiliency and Disaster Recovery (DR)
options are also discussed.

Introduction to SRDF/Metro

• Application High Availability (HA)
  – At the PowerMax, VMAX All Flash, and VMAX3 level
• Features
  – R1 & R2 Read/Write
    > Read/Write to host
    > Read/Write on link
  – Synchronous replication
  – Metro distance 100 km
  – Bias and witness options
  – Single or clustered host
• Managed by Solutions Enabler or Unisphere for VMAX 8.1 or above, or Unisphere for PowerMax

[Figure: a production host with Read/Write access to both the R1 and R2 devices, synchronous replication between arrays up to 100 km apart, and a Witness]


SRDF/Metro provides High Availability (HA) for an application at the PowerMax, VMAX All Flash, and
VMAX3 array level. Typically, HA for an application is provided at the host level. SRDF/Metro provides the
host read/write capability to both R1 and R2 volumes while both volumes are read/write on the SRDF link.

Replication between the two sites is performed synchronously across the link with a Metro distance of 100
kilometers. A bias facility is used to determine which RDF volumes the host has access to if volumes in
the Metro configuration go Not Ready on the RDF link. This bias facility supports three methods for
determining which volumes to use; Device Bias, Array Witness—shown in this example—and Virtual
Witness(vWitness). SRDF/Metro is supported for single, as seen in this example, or clustered host
environments.

Solutions Enabler or Unisphere for VMAX 8.1 or above, or Unisphere for PowerMax 9.0 or above is
required to manage SRDF/Metro.

SRDF/Metro Host Configurations

[Figure: two configurations. Left: a single host running multi-pathing software with Read/Write access to both the R1 and R2 arrays over SRDF links. Right: cluster software, with each node having Read/Write access to its own array, again connected by SRDF links.]


On the left is an SRDF/Metro configuration with a standalone host. In this example, the configuration has visibility to a VMAX All Flash and a VMAX3 array—the R1 and R2 devices. It is using multi-pathing software, such as PowerPath, to enable parallel reads and writes to each array. The identity of the R1 device is federated, ensuring that the paired R2 device appears, through additional paths to the host, as a single virtualized device. On the right is a clustered host environment. Each cluster node has dedicated access to an individual array. In either case, writes to the R1 or R2 devices are synchronously copied to the paired SRDF device. Should a conflict occur between writes to paired SRDF/Metro devices, the conflicts are internally resolved. This resolution ensures that a consistent image between paired SRDF devices is maintained for the individual host or host cluster.

SRDF/Metro Configuration Resiliency

• Device Bias – configuration setting
• Array Witness – third physical array
• Virtual Witness (vWitness) – ESXi Server vApp

[Figure: each option shown for an R1/R2 pair; the Array Witness option uses a separate Witness Array]


Equipment or communication failures can make either device unavailable or break the SRDF link. In such
an event, SRDF/Metro uses a facility called Bias to determine which side remains accessible to the host
system. There are three methods for deciding which side remains available during a failure situation.
Device Bias uses a configuration setting of the device pair to specify which side remains available. Array
Witness uses a third physical array to determine which side is accessible. Virtual Witness (vWitness) runs
in a virtual appliance (vApp) on an ESXi server to determine which side is accessible to the host or hosts.

SRDF/Metro Resiliency – Device Bias

• No Witness configured
• Created with the -use_bias option
• R1 is always the winner in disasters
• If R2 bias is used, the system automatically performs an R1/R2 swap

[Figure: an R1/R2 pair with no Witness configured]


Device Bias is the simplest of the bias methods. When making device pairs available on the SRDF link,
indicate that the bias method should be used for the device pairs with the -use_bias option. By default, the
R1 side of the pair is configured as the bias side. However, if there is a failure on the array that contains
the bias device, the host loses device access. The Device Bias method provides no way to make the R2
device available to the host. When operating with Device Bias, the state of the device pair is ActiveBias.

If the witness options are not used, the establish and restore commands also require the -use_bias
option. Bias can be changed when all device pairs in the SRDF/Metro group have reached the
ActiveActive or ActiveBias pair state.

Bias applies only to RDF device pairs in an SRDF/Metro configuration.

If either Array Witness or vWitness options are used, but unavailable, Device Bias is used if the device
pair becomes Not Ready on the RDF link.
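
A sketch of creating Metro device pairs with the bias method; the device file name (metropairs.txt) is hypothetical, and the array and group numbers follow the examples later in this lesson:

C:\>symrdf -sid 217 -rdfg 4 -f metropairs.txt createpair -type R1 -metro -use_bias -establish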

SRDF/Metro Resiliency – Array Witness

• Third witness array: VMAX, VMAX3, VMAX All Flash, or PowerMax
  – VMAX running Enginuity 5876 with ePack
  – VMAX All Flash or VMAX3 running HYPERMAX OS 5977.810.784 with ePack
  – VMAX All Flash or VMAX3 running HYPERMAX OS 5977.945.890+
  – PowerMaxOS 5978.144.144+

[Figure: a Witness array connected to the R1 and R2 arrays through Witness R1 and Witness R2 SRDF groups]


With the Array Witness method, SRDF/Metro uses a third witness array to determine the bias side. The
witness array runs on a VMAX array running Enginuity, on a VMAX All Flash or VMAX3 running
HYPERMAX OS, or on a PowerMax or VMAX All Flash running PowerMaxOS. The VMAX array must
have Enginuity 5876 with an ePack containing fixes to support SRDF N-x connectivity. On VMAX All Flash
and VMAX3 arrays, HYPERMAX OS 5977.810.784 with an ePack containing fixes to support SRDF N-x
connectivity must be used. HYPERMAX OS 5977.945.890 or above contains all fixes and supports
SRDF/Metro on VMAX All Flash and VMAX3 arrays as well. For PowerMax or VMAX All Flash arrays,
PowerMaxOS 5978.144.144 or above supports SRDF/Metro. In the event of a failure, the witness decides
which side of the Metro group remains accessible to hosts, giving preference to the bias side. This method
chooses which side to continue operations on when the Device Bias method may not result in continued
host availability to a surviving non-biased array. When operating with Array Witness, the state of the
device pair is ActiveActive.

The witness array must have SRDF connectivity to both the R1-side array and R2-side array. SRDF
remote adapters (RAs) are required on the witness array with applicable network connectivity to both the
R1 side and R2 side arrays.

For complete redundancy, there can be multiple witness arrays. If the auto configuration process fails and
no other applicable witness arrays are available, SRDF/Metro uses the Device Bias method.

SRDF/Metro Resiliency – Virtual Witness (vWitness)

• Deployed as a vApp on an ESXi server
  – Requires HYPERMAX OS 5977.945.890 or above
  – Management with Solutions Enabler, Unisphere for VMAX 8.3 or above, or Unisphere for PowerMax

[Figure: a vWitness vApp with IP connectivity (vWitness R1 and vWitness R2) to both arrays]


vWitness is an additional resiliency option introduced in HYPERMAX OS 5977.945.890 and Solutions Enabler or Unisphere for VMAX 8.3. vWitness has similar capabilities to the Array Witness method. The
difference is that it is packaged to run in a virtual appliance (vApp) on a VMware ESXi server, rather than
on an array. The vWitness and the Array Witness options are treated the same in the operating
environment, and can be deployed independently or simultaneously. When deployed simultaneously,
SRDF/Metro favors the Array Witness option over the vWitness option, as the Array Witness option has
better availability. For redundancy, you can configure up to 32 vWitnesses. When operating with vWitness,
the state of the device pair is ActiveActive.

SRDF/Metro Management

• SRDF/Metro pairs are managed in the unit of an RDF group
• All the device pairs in the SRDF/Metro group are consistent
• Devices can be added to/removed from an active SRDF/Metro group with -exempt*
• Can move between SRDF/Sync or Adaptive Copy RDF group pairs and the active SRDF/Metro group with the -exempt* option
• Only establish, suspend, and restore operations work with RDF pairs in Metro mode


All the devices in the RDF group should be managed together in Metro mode, similar to SRDF/A management. By nature, all the device pairs in the same RDF group in SRDF/Metro mode are consistent. Unlike the other SRDF modes, only the symrdf establish, suspend, and restore operations can be performed on Metro-built pairs, as sketched below.

*The exempt option is only available on arrays running PowerMaxOS 5978 using Solutions Enabler 9.x or
Unisphere 9.x.
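
As a sketch of the allowed group-level operations, assuming a hypothetical device-pair file pairs.txt and the bias method (no witness configured):

C:\>symrdf -sid 217 -rdfg 4 -f pairs.txt suspend

C:\>symrdf -sid 217 -rdfg 4 -f pairs.txt establish -use_bias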

Device Add/Remove to/from Active SRDF/Metro Group with -exempt

• Add existing (non-RDF) devices to an active SRDF/Metro group while a host is actively using the volumes being added
• Remove devices being protected using SRDF/Metro

Benefits
• Achieve zero RPO and RTO for existing applications using SRDF/Metro
• Parity with other SRDF modes

[Figure: non-RDF devices being added to an SRDF/Metro group between the R1 and R2 arrays]


Newly added devices will synchronize R1->R2 invalid tracks under a new SRDF/Metro consistency exempt status, similar in concept to the earlier SRDF/A consistency exempt functionality.

The ActiveActive pair state is reached for devices only after the volumes have been added to the
SRDF/Metro session and track synchronization for added devices completes.

Once synchronized, the exempt status for these devices will be cleared and SRDF/Metro operations for
all active devices will continue normally.

Exempt Support Restrictions

• Restore operations will be blocked while one or more devices in an SRDF/Metro group are in an exempt status
• At least one device within the SRDF/Metro session must be non-exempt
  – Management software will not allow all devices in the SRDF/Metro session to be removed with exempt deletepair or movepair


An active SRDF/Metro RDF group cannot be empty, so at least one device in the session must be non-exempt.

Create Pair Exempt – Unisphere (1 of 6)


Let us look at how to use Unisphere to add RDF pairs to an active RDF metro group. Navigate to the
DATA PROTECTION > SRDF Groups page. Select the RDF group. In this case, there is an existing RDF
group 2 with 8 active metro pairs selected.

1. Choose Create Pairs

2. For the Select SRDF Mode, choose the Mirror Type that matches the other pairs in the same RDF
group from the same array. Notice that SRDF Mode is Active and grayed out.

Create Pair Exempt – Unisphere (2 of 6)


Search the non-RDF devices on the array to be used for creating the pairs, and confirm the selection.

Create Pair Exempt – Unisphere (3 of 6)


The selected volumes may optionally be added to the existing storage group that the existing pairs belong to by checking the Add to Storage Group check box.

Create Pair Exempt – Unisphere (4 of 6)


Search for remote volumes and confirm the selection.

Create Pair Exempt – Unisphere (5 of 6)


The chosen remote volumes can also optionally be added to the existing storage group that the existing pairs belong to. Then sort the pairing as needed.

Create Pair Exempt – Unisphere (6 of 6)



Review the pair summary, then choose Run Now. Once finished, the Success message is displayed.

SRDF/Metro Device Pairs

C:\>symrdf -sid 217 list -rdfg 4

Symmetrix ID : 000197600217 (Microcode Version: 5978)


Remote Symmetrix ID : 000196802253 (Microcode Version: 5977)
RDF (RA) Group Number : 4 (03)

Source (R1) View Target (R2) View FLAGS


--------------------------------- ------------------------ ----- ------------
ST LI ST
Standard A N A
Logical Sym T R1 Inv R2 Inv K Sym T R1 Inv R2 Inv RDF Pair
Device Dev E Tracks Tracks S Dev E Tracks Tracks MCES STATE
--------------------------------- -- ------------------------ ----- ------------
N/A 000F1 RW 0 0 RW 00078 RW 0 0 TX.E ActiveBias
N/A 000F2 RW 0 0 RW 00079 RW 0 0 TX.E ActiveBias


Shown here is a list output issued to RDF Group 4. RDF Group 4 is an SRDF/Metro RDF Group with
two device pairs in the ActiveBias RDF Pair State. The (M)ode of Operation flag is displayed as T for
Active.

Legend for FLAGS:

(M)ode of Operation : A = Async, S = Sync, E = Semi-sync, D = Adaptive Copy
                      Disk Mode, W = Adaptive Copy WP Mode, M = Mixed,
                      T = Active
(C)onsistency State : X = Enabled, . = Disabled, M = Mixed, - = N/A
(E)xempt            : X = Enabled, . = Disabled, M = Mixed, - = N/A
R1/R2 Device (S)ize : E = R1 EQ R2, 1 = R1 GT R2, 2 = R2 GT R1, - = N/A

Viewing SRDF/Metro Group Details

C:\>symcfg -sid 217 list -rdfg 4 -metro

Symmetrix ID : 000197600217

S Y M M E T R I X R D F G R O U P S

Local Remote Group RDF Metro


------------ --------------------- --------------------------- -----------------
LL Flags Dir Witness
RA-Grp sec RA-Grp SymmID ST Name LPDS CHTM Cfg CE S Identifier
------------ --------------------- --------------------------- -- --------------
4 ( 3) 10 4 ( 3) 000196802253 OD Metro_SG X... ..XX F-S BB - -

Legend:
Group (S)tatus : O = Online, F = Offline
Group (T)ype : S = Static, D = Dynamic, W = Witness
Director (C)onfig : F-S = Fibre-Switched, F-H = Fibre-Hub
G = GIGE, E = ESCON, T = T3, - = N/A


Adding the -metro option displays more details. The RDF (M)etro flag is displayed as configured, and the (C)onfigured Type and (E)ffective Type are listed as B for Bias.

Group Flags:

Prevent Auto (L)ink Recovery : X = Enabled, . = Disabled

Prevent RAs Online Upon (P)ower On: X = Enabled, . = Disabled

Link (D)omino : X = Enabled, . = Disabled

(S)TAR/SQAR mode : N = Normal, R = Recovery, . = OFF

S = SQAR Normal, Q = SQAR Recovery

RDF Software (C)ompression : X = Enabled, . = Disabled, - = N/A

RDF (H)ardware Compression : X = Enabled, . = Disabled, - = N/A

RDF Single Round (T)rip : X = Enabled, . = Disabled, - = N/A

RDF (M)etro : X = Configured, . = Not Configured

RDF Metro Flags:

(C)onfigured Type : W = Witness, B = Bias, - = N/A

(E)ffective Type : W = Witness, B = Bias, - = N/A

Witness (S)tatus : N = Normal, D = Degraded,

F = Failed, - = N/A

Adding Devices to SRDF/Metro Group Using CLI

C:\>symrdf -sid 217 -rdfg 4 -f addpairtometro.txt createpair -type R1 -metro -exempt

C:\>symrdf -sid 217 list -rdfg 4

Symmetrix ID: 000197600217

Local Device View


-----------------------------------------------------------------------------
STATUS FLAGS RDF S T A T E S
Sym Sym RDF --------- ----- R1 Inv R2 Inv ----------------------
Dev RDev Typ:G SA RA LNK MTES Tracks Tracks Dev RDev Pair
----- ----- -------- --------- ----- -------- -------- --- ---- -------------

000F1 00078 R1:4 ?? RW RW T1.E 0 0 RW RW ActiveBias


000F2 00079 R1:4 ?? RW RW T1.E 0 0 RW RW ActiveBias
000F4 00081 R1:4 ?? RW RW T1.E 0 0 RW RW ActiveBias


The createpair command creates device pairs using the devices listed in the addpairtometro.txt
file and places them in the SRDF/Metro session. In this example, the device pair includes devices 0F4 and
081. The -exempt option indicates that data on the R1 side of the new RDF device pairs should be
preserved and host accessibility should remain on the R1 side.

After creating the new device pairs in RDF group 4, Solutions Enabler performs an establish on them,
setting the device pairs to RW on the RDF link with SyncInProg RDF pair state. Then the device pairs will
transition to the ActiveActive RDF pair state if the devices already in the group are using witness
protection. The pair state will be ActiveBias if configured using bias protection, as in this example. If the
devices already in the group are suspended, then the newly added devices will also be suspended.

Move Devices in/out of Active SRDF/Metro Group with -exempt

• Take an existing RDF pair and move it into an SRDF/Metro group
• Take an existing RDF pair in an SRDF/Metro group and move it to an SRDF/S or SRDF/Adaptive Copy Mode group
• Not applicable to devices using SRDF/A

Benefits
• Minimize initial sync and time to consistency
• Parity with other SRDF modes

[Figure: device pairs moved from an SRDF/S group into an SRDF/Metro group]


The Source Movepair SRDF group and devices must be Adaptive Copy or Synchronous modes (not
SRDF/A).

The Target SRDF/Metro group may now be Active with device pair states of Suspended, SyncInProg,
ActiveActive, or ActiveBias.

R1 devices will remain host accessible and R2 will remain inaccessible to the host until these devices
reach active mode.

Devices added to an SRDF/Metro configuration must meet the following criteria:

• The R2 cannot be larger than the R1
• The R2 device cannot have device inactive set if it is mapped to a host
• The R1 device cannot be device inactive
• Devices cannot have User Not Ready set
• Devices cannot have User Geometry set
• ORS RCopy is not supported
• Devices cannot be BCVs
• Devices cannot be CKD
• Devices cannot be RP
• Devices can be used as the source of a TimeFinder data copy
• Devices cannot be used as the target of a TimeFinder data copy when the SRDF devices are RW on the RDF link with either a SyncInProg, ActiveBias, or ActiveActive RDF pair state

Move Pair Exempt – Unisphere (1 of 5)


RDF pairs in SRDF/S or SRDF/Adaptive Copy Disk mode can be moved into an active Metro group. In this example, we move some existing Synchronous RDF pairs.

Move Pair Exempt – Unisphere (2 of 5)


Highlight the pairs to be moved, and choose Move.

Move Pair Exempt – Unisphere (3 of 5)


Choose the active Metro RDF group that the pairs are to be moved to. If a storage group is used as the replication management unit, all the devices in that storage group are replicated with consistency, so choose the storage group to remove the devices from, if necessary.

Move Pair Exempt – Unisphere (4 of 5)


We can optionally add the device pairs to a new storage group, review the summary, and run the job.

Move Pair Exempt – Unisphere (5 of 5)


After the task succeeds, notice that the device pairs have moved into Metro RDF group 2: now 16 pairs versus 12 prior to the move.
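
The new pair count can also be confirmed from the CLI; a sketch, assuming the local array ID from the earlier examples:

C:\>symrdf -sid 217 list -rdfg 2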

SRDF/Metro Extended DR with SRDF/A (1 of 3)
• Primary (bias) Site protection, best practice

[Diagram: SRDF/Metro between the R11 (Bias) and the R2, with SRDF/A only from the R11 to a tertiary R2]

Extended Disaster Recovery (DR) using SRDF/A in an SRDF/Metro environment includes protecting the
Primary, Secondary, or both sites. The best practice recommendation is to protect the Primary (bias) site
with SRDF/A to a tertiary site. In this case, the Primary device acts as an R11.
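
From the CLI, the SRDF/A leg from the Metro R1 devices to the tertiary array could be created in a separate RDF group; a hedged sketch, with the group number and device file hypothetical:

C:\>symrdf -sid 217 -rdfg 20 -f dr_pairs.txt createpair -type R1 -rdf_mode async -establish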

SRDF/Metro Extended DR with SRDF/A (2 of 3)
• Secondary (non-bias) Site protection, not recommended

[Diagram: SRDF/Metro between the R1 (Bias) and the R21, with SRDF/A only from the R21 to a tertiary R2]

Disaster Recovery in an SRDF/Metro configuration using SRDF/A from the Secondary site can be achieved; however, it is not recommended. In this configuration, the Secondary device acts as an R21.

SRDF/Metro Extended DR with SRDF/A (3 of 3)
• Primary and Secondary Site protection

[Diagram: SRDF/Metro between the R11 (Bias) and the R21, with SRDF/A only from each to its own tertiary R2]

It is also possible to protect both the Primary and Secondary sites with SRDF/A to their respective tertiary
sites, as shown here.

Concurrent SRDF device restrictions

Criteria required of concurrent devices in an SRDF/Metro configuration:


• The non-Metro RDF mirror cannot be in Synchronous mode
• A device cannot have two Metro RDF mirrors
• The non-Metro RDF mirror of the SRDF/Metro devices must be an R1 (R11, R21)
  – The R1 device in an SRDF/Metro configuration can be an R11 device, but it cannot be an R21 device, and
  – The R2 device in an SRDF/Metro configuration can be an R21 device, but it cannot be an R22 device
• A device cannot simultaneously be RW on the RDF link on the Metro RDF mirror and the target of a data copy from the non-Metro RDF mirror
• A device cannot be Write Disabled (WD) to the host if the Metro SRDF mirror of the device is RW on the RDF link


Restrictions may change with each code release; refer to the release notes for up-to-date information.

Lesson: Implementing SRDF/Metro
This lesson covers the following topics:

• SRDF/Metro implementation using Unisphere for PowerMax

• Monitoring of SRDF/Metro with Unisphere for PowerMax


This lesson covers SRDF/Metro implementation. Unisphere for PowerMax is used to set up, manage, and
monitor SRDF/Metro.

SRDF/Metro Implementation Steps

1. Identify the implementation configuration: Bias or Witness
   – If Witness, decide whether the witness is physical or virtual
   – If a physical witness, verify that an empty RDF group exists from the witness array to both the Source and the Target arrays
   – If a virtual witness, verify that a vWitness is available
2. Create RDF groups for the application to use SRDF/Metro
3. Mask the future R1 (currently non-RDF) devices to the host
4. Create RDF pairs using the RDF groups from step 2 with -metro (adding -use_bias for bias protection)
5. Mask the now-R2 devices to the host

To implement SRDF/Metro, the future R1 devices can remain accessible to the host throughout the implementation; the future R2 devices can be made accessible to the host only once the pairs are active.
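
From the CLI, steps 2 and 4 might look like the following; a hedged sketch, with the group numbers, director ports, and device file all hypothetical:

C:\>symrdf addgrp -label metro01 -sid 217 -rdfg 4 -dir 1E:28 -remote_sid 253 -remote_rdfg 4 -remote_dir 1E:28

C:\>symrdf -sid 217 -rdfg 4 -f pairs.txt createpair -type R1 -metro -use_bias -establish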

SRDF Group Details


Here is an example of building RDF pairs with storage groups in Unisphere, using bias protection. An empty RDF group has been created. To view details of the SRDF Group, select it in the list; details are shown on the right of the screen.

Protect Storage Group


Now we select a non-RDF storage group. To protect a Storage Group using SRDF/Metro, select the SG in
the Storage Groups list, and click the Protect button.

Select Technology – SRDF/Metro


Choose High Availability using SRDF/Metro and click NEXT.

Configure Metro


The Remote Array ID is auto-populated with the remotely attached array. Alternatively, choose the Scan button to scan for remote arrays.

Choosing the SRDF Group can be done automatically or manually. To choose a specific SRDF Group, choose Manual and click the Select button. Select the SRDF Group from the Select Group listing and click OK. Establish SRDF Pairs is enabled by default; to disable this setting, uncheck the box. Establishing SRDF Pairs initiates a copy from the source R1 to the target R2. Choose the type of protection, either Bias or Witness, for the SRDF/Metro configuration. Leave the default settings for Remote Storage Group Name and Remote Service Level unless changes need to be made. Click NEXT to continue.

Review Metro


Review the settings for SRDF/Metro and choose Run Now from the ADD TO JOB LIST dropdown.

Protect Storage Group Task Details


Once the task has completed, choose the Show Task Details link to display the steps that were taken to
protect the Storage Group using SRDF/Metro. Click CLOSE when done reviewing the details.

SRDF Protected Storage Groups


From the DATA PROTECTION menu, select Storage Groups. To display Storage Groups protected by
SRDF, select the SRDF tab. Storage Group BusInApp shows a State of ActiveBias, indicating
SRDF/Metro using Bias. Selecting the Storage Group in the list displays details of the Storage Group on
the right of the screen.

SRDF/Metro – Create Physical Witness Group


A Physical Witness requires the use of a third VMAX, VMAX All Flash, or PowerMax array. From the
DATA PROTECTION menu, choose SRDF Groups, and click Create SRDF Group. This process must
be done on both the R1 and R2 sides of the Metro configuration. In this example, the R1 side configuration
is shown.
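
The CLI equivalent uses the symrdf addgrp action with the witness flag; a hedged sketch run between one of the Metro arrays and the witness array (the label, group number, witness array SID, and director ports are all hypothetical):

C:\>symrdf addgrp -label witns01 -sid 217 -rdfg 9 -dir 1E:28 -remote_sid 456 -remote_rdfg 9 -remote_dir 1E:28 -witness

The same group must then be created between the other Metro array and the witness array.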

Physical Witness Group – Select Remote


In the Create SRDF Group dialog, choose the Communication Protocol, Remote Array ID, and provide a
name for the Group. Select the SRDF/Metro Witness Group checkbox, and click NEXT.

Physical Witness Group – Configure Local


Choose an SRDF Group Number and select the ports from the Local array. Click NEXT to continue.

Physical Witness Group – Configure Remote


Select the SRDF Group Number and ports for the Remote array. Click NEXT to continue.

Physical Witness Group – Review Summary


Select Run Now from the ADD TO JOB LIST dropdown to create the Physical Witness Group on the
Local array. You must create a Witness Group on the Remote array as well, not shown here.

Physical Witness Group Task Details


View the details of the Physical Witness SRDF Group creation. Click CLOSE when finished.

SRDF/Metro – Configure Virtual Witness


To create a Virtual Witness, choose Virtual Witness from the DATA PROTECTION menu, and select the
Create button.

Create Virtual Witness


The vWitness has to be added to each of the two arrays in the SRDF/Metro configuration. Navigate to the DATA PROTECTION > Virtual Witness page. Click the Create button to open the Create Virtual Witness wizard. Enter a vWitness name, enter the IP address of the SE vApp, and select Add Virtual Witness to remote arrays. Select Run Now from the ADD TO JOB LIST dropdown to complete the task.

Lesson: Configure SRDF/Metro Device Pairs
This lesson covers the following topics:

• Creating SRDF/Metro Device Pairs

• Viewing SRDF/Metro Device Pairs with Unisphere for PowerMax and Solutions Enabler


This lesson covers SRDF/Metro device pairs. Solutions Enabler and Unisphere for PowerMax are used to create and view SRDF/Metro device pairs. Unlike the last lesson, RDF device pairs can also be created without storage groups: with a device file from the CLI, by manually selecting the device pairs in Unisphere, or by using a device group.

Create SRDF/Metro Device Pairs


From the DATA PROTECTION menu, choose SRDF Groups. Create SRDF/Metro Device Pairs within an
SRDF Group by choosing the SRDF Group and clicking the Create Pairs button. In this example, SRDF
Group MetroStG has no SRDF Group Volumes.

With Solutions Enabler, an SRDF/Metro configuration is created from a set of non-SRDF devices with the symrdf createpair command and the -metro option. The -metro option indicates that the device pair will operate in an SRDF/Metro configuration with a Witness array available. Devices must be added to an empty RDF group or to an RDF group containing device pairs that are Not Ready on the link. If the RDF group already contains devices, they must be devices that are part of an SRDF/Metro configuration.
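
For example, a hedged sketch that creates and establishes Metro pairs in an empty group, relying on a configured witness (the group number and device file are hypothetical):

C:\>symrdf -sid 217 -rdfg 5 -f metro_pairs.txt createpair -type R1 -metro -establish

With a witness available, the pairs transition from SyncInProg to ActiveActive once synchronized.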

Create SRDF/Metro Device Pairs – Continued


Select Active for the SRDF Mode and then click NEXT. In this example, Device Bias is used.

Select Local Volumes


Select the local volumes for the Create Pairs operation. Devices meeting the requested criteria (in this example, one 2 GB volume) will be auto-selected. Manual Selection allows you to choose specific volumes from a listing based on the size and configuration of the volume. The Add to Storage Group selection creates a new SG for the volume or volumes. In this example, no SG is being used. Click NEXT to continue.

Select Remote Volumes


Select the Remote Volumes to be used for the Create Pairs operation. Click NEXT to continue.

Review Pair Summary


To complete the Create Pair operation, choose Run Now from the ADD TO JOB LIST dropdown on the
Review Pair Summary screen.

Create Pairs Task Details


View the Create Pairs Task Details, and click CLOSE.

SRDF/Metro Device Pairs


The SRDF Group MetroStG displays an SRDF Mode of Active. View SRDF Metro Pairs in an SRDF
Group by clicking the SRDF Group Volumes link from the details panel of the SRDF Group. In this
example, device 00F3 is the source, or R1 volume, and 007F is the remote, or R2 volume. The device pair
is in an ActiveBias state.

SRDF/Metro Device Pairs

[Screenshots: volume details for the R1 and the R2 in Unisphere]


In SRDF/Metro configurations, the External Identity WWN of the R1 is the same as the External Identity
WWN of the R2.

SRDF/Metro Device Pairs
R1:

C:\>symdev -sid 217 show f3

Device Physical Name  : Not Visible
Device Symmetrix Name : 000F3
Symmetrix ID          : 000197600217
.
Device WWN            : 60000970000197600217533030304633
.
Device External Identity
{
  Device WWN          : 60000970000197600217533030304633

R2:

C:\>symdev -sid 253 show 7f

Device Physical Name  : Not Visible
Device Symmetrix Name : 0007F
Symmetrix ID          : 000196802253
.
Device WWN            : 60000970000196802253533030303746
.
Device External Identity
{
  Device WWN          : 60000970000197600217533030304633


To display the Device External Identity of SRDF/Metro devices in Solutions Enabler, use the symdev
show command. The Device External Identity of the R1 and R2 is the same in SRDF/Metro
configurations.

Module Summary
Key points covered in this module:

• SRDF/Metro Overview

• Configuring SRDF/Metro with Unisphere for PowerMax

• SRDF/Metro Device Pairs


This module covered SRDF/Metro: an overview, configuration with Unisphere for PowerMax, and device pairs. Solutions Enabler and Unisphere for PowerMax are used to configure, monitor, and view SRDF/Metro details.

Course Summary

Key points covered in this course:


• PowerMax and VMAX Family configuration overview
• Storage provisioning concepts
• Managing ports and port characteristics
• Performing service provisioning to hosts
• Overview of storage management in a virtualized environment
• Using Unisphere for PowerMax for Compliance Monitoring and Workload Planning
• Local and remote replication offerings in PowerMax and VMAX Family arrays


This concludes the training.
