
Technical white paper

MSA Remote Snap Software



Contents
Introduction ........................................................................................................................................................................................................................................................................................................................................ 4
Remote Snap product overview ................................................................................................................................................................................................................................................................................ 4
What’s new? ....................................................................................................................................................................................................................................................................................................................................... 5
Replication models and documentation conventions ................................................................................................................................................................................................................................. 5
Benefits of Remote Snap........................................................................................................................................................................................................................................................................................................ 6
Disaster recovery .................................................................................................................................................................................................................................................................................................................... 6
Backup .............................................................................................................................................................................................................................................................................................................................................. 6
Development .............................................................................................................................................................................................................................................................................................................................. 6
Components of Remote Snap............................................................................................................................................................................................................................................................................................ 7
Components common to linear and virtual replication........................................................................................................................................................................................................................ 7
Components of linear replication ............................................................................................................................................................................................................................................................................. 7
Components of virtual replication ........................................................................................................................................................................................................................................................................... 7
How the technology works .................................................................................................................................................................................................................................................................................................. 8
Linear replication process............................................................................................................................................................................................................................................................................................... 8
Virtual replication process ...........................................................................................................................................................................................................................................................................................10
Comparison of linear replications versus virtual replications.............................................................................................................................................................................................................12
Types of replications...............................................................................................................................................................................................................................................................................................................12
Local replication....................................................................................................................................................................................................................................................................................................................12
Remote replication .............................................................................................................................................................................................................................................................................................................12
Physical media transfer ..................................................................................................................................................................................................................................................................................................12
Remote Snap requirements ..............................................................................................................................................................................................................................................................................................13
Setup requirements ...........................................................................................................................................................................................................................................................................................................13
Snapshot space .....................................................................................................................................................................................................................................................................................................................14
Network requirements ....................................................................................................................................................................................................................................................................................................14
Remote Snap basic functions..........................................................................................................................................................................................................................................................................................16
General notes about using the SMU and CLI ............................................................................................................................................................................................................................................16
Preparing the systems ....................................................................................................................................................................................................................................................................................................18
Creating a replication set..............................................................................................................................................................................................................................................................................................24
Scheduling replications ..................................................................................................................................................................................................................................................................................................30
Deleting a replication set ..............................................................................................................................................................................................................................................................................................35
Accessing the secondary volume’s data.........................................................................................................................................................................................................................................................37
Setting the primary volume (linear replications only)........................................................................................................................................................................................................................40
Verifying replication data links ................................................................................................................................................................................................................................................................................41
Ports connected for replication ..............................................................................................................................................................................................................................................................................47
CHAP settings and Remote Snap ........................................................................................................................................................................................................................................................................51
Examples of replication types and operations ................................................................................................................................................................................................................................................52
Remote replication .............................................................................................................................................................................................................................................................................................................52
Local replication and physical media transfer (for linear replications only) ..................................................................................................................................................................53
Disaster recovery operations ....................................................................................................................................................................................................................................................................................55
Use cases ...........................................................................................................................................................................................................................................................................................................................................62


Single office with a remote site for backup and disaster recovery using iSCSI to replicate data ...............................................................................................................62
Single office with local site disaster recovery and backup using iSCSI and host access using FC ............................................................................................................66
Single office with a local site disaster recovery and backup using FC (only for linear replications or virtual replications when both the primary and secondary volumes reside on MSA 2050 or MSA 1050 arrays) ....................................................................67
Initial local replication—linear ..................................................................................................................................................................................................................................................................................68
Initial local replication—virtual ................................................................................................................................................................................................................................................................................69
Two branch offices with disaster recovery and backup ...................................................................................................................................................................................................................70
Single office with a target model using FC and iSCSI ports ..........................................................................................................................................................................................................71
Multiple local offices with a centralized backup (only for linear replications or virtual replications when the Central Office array is an MSA 2050) ....................................................................72
Replication of application-consistent snapshots (only for linear replications or virtual replications when the primary volume resides on an MSA 2050 or MSA 1050) ....................................................................73
Replication of Microsoft VSS-based application-consistent snapshots (only for linear replications or virtual replications when the primary volume resides on an MSA 2050 or MSA 1050) ....................................................................78
Best practices ................................................................................................................................................................................................................................................................................................................................81
Fault tolerance .......................................................................................................................................................................................................................................................................................................................81
Volume size and policy...................................................................................................................................................................................................................................................................................................81
License ..........................................................................................................................................................................................................................................................................................................................................82
Scheduling .................................................................................................................................................................................................................................................................................................................................82
Physical media transfer (linear replications only) ..................................................................................................................................................................................................................................83
Replication setup wizard (linear replications only) ...............................................................................................................................................................................................................................84
Application-consistent snapshots (linear replications only) ........................................................................................................................................................................................................84
Max. volume limits ..............................................................................................................................................................................................................................................................................................................84
Replication limits ..................................................................................................................................................................................................................................................................................................................86
Monitoring..................................................................................................................................................................................................................................................................................................................................86
Performance tips ..................................................................................................................................................................................................................................................................................................................87
Troubleshooting .........................................................................................................................................................................................................................................................................................................................88
FAQs ......................................................................................................................................................................................................................................................................................................................................................89
Summary ............................................................................................................................................................................................................................................................................................................................................91
Glossary ..............................................................................................................................................................................................................................................................................................................................................91
For more information..............................................................................................................................................................................................................................................................................................................92
Introduction
This document provides information for using the MSA Remote Snap Software (Remote Snap). The following topics are covered:
• Benefits
• Components
• How the technology works
• Types of replications
• Requirements
• Basic functions
• Use cases
• Best practices
• Troubleshooting
• Frequently asked questions

Remote Snap product overview


Remote Snap is array-based functionality that provides remote replication on HPE MSA 2050 Storage, HPE MSA 1050 Storage, HPE MSA
2040 Storage, HPE MSA 1040 Storage, and HPE P2000 G3 arrays. The following array controllers support Remote Snap:
• HPE MSA 2050 SAN Controller
• HPE MSA 1050 SAN FC Controller
• HPE MSA 1050 1 Gb iSCSI Controller
• HPE MSA 1050 10 Gb iSCSI Controller
• HPE MSA 2040 SAN Controller
• HPE MSA 1040 FC Controller
• HPE MSA 1040 1 Gb iSCSI Controller
• HPE MSA 1040 10 Gb iSCSI Controller
• HPE P2000 G3 MSA Fibre Channel Controller
• HPE P2000 G3 MSA FC/iSCSI Combo Modular Smart Array Controller
• HPE P2000 G3 10GbE iSCSI MSA Array System Controller
• HPE P2000 G3 iSCSI MSA Array System Controller

Please visit hpe.com/storage/MSA2050, hpe.com/storage/MSA1050, hpe.com/storage/MSA2040, hpe.com/storage/MSA1040, or hpe.com/storage/P2000 for more information on these controllers.

Note
Remote Snap is not supported on the HPE P2000 G3 SAS MSA Array System Controller, the HPE MSA 1040 SAS Controller, the HPE MSA
2040 SAS Controller, the HPE MSA 1050 SAS Controller, or the HPE MSA 2050 SAS Controller.
Remote Snap is a form of asynchronous replication that uses the snapshot functionality of the array to replicate block-level or page-level
data from a volume on a primary system to a volume on a secondary system. The secondary system may be at the same location as the first,
or it may be located at a remote site. Remote Snap only replicates blocks or pages that have changed since the last replication, thereby
providing efficient replication.

What’s new?
This section describes new enhancements and support added with the release of the VL100 firmware for MSA 2050 Storage and the
VE100 firmware for MSA 1050 Storage. Since neither the MSA 2050 nor the MSA 1050 supports linear storage, they do not support
replication of linear volumes; they only support replication of virtual volumes.
• The MSA 2050 and the MSA 1050 require authentication when creating or modifying peer connections.
• Increased peer connections to four per array for the MSA 2050; remains at one for the MSA 1050.
• When both the primary and secondary volumes reside on MSA 2050 or MSA 1050 arrays:
– Use FC for the peer connection for replication.
– Change the protocol used for the peer connection between FC and iSCSI for the MSA 2050.
– Queue replications—up to one replication can be queued.
– Up to 16 replication history snapshots available for replication sets of volumes (not for replication sets of volume groups).
• When the primary volume resides on an MSA 2050 or an MSA 1050 array:
– Reduced the minimum interval between scheduled replications from one hour to 30 minutes.
– Added the ability to optionally use an existing snapshot of the primary volume as the current snapshot when comparing against the
previous snapshot to determine data to transfer for replication. The existing snapshot may be created manually, automatically via a
schedule or snapshot history, or using the VSS hardware provider. Previously, the current snapshot of the primary volume was always
created as part of the replication process.

Replication models and documentation conventions


The term linear replication is used when Remote Snap replicates a linear volume to another linear volume; the term virtual replication is used when Remote Snap replicates a virtual volume to another virtual volume. Remote Snap cannot replicate from a linear volume to a virtual volume or from a virtual volume to a linear volume. The HPE P2000 G3 arrays support only linear volumes, and so support only linear replication; the HPE MSA 2050 and HPE MSA 1050 support only virtual volumes, and so support only virtual replication. Consequently, an HPE P2000 G3 array cannot replicate to or from an HPE MSA 2050 or HPE MSA 1050 array. The HPE MSA 2040 and HPE MSA 1040 support both linear volumes and virtual volumes, and so support either linear or virtual replications; however, creating one replication model (linear or virtual) prevents creation of the other. If linear replications exist and you want to use virtual replications, delete all replications first, then create a peer connection and virtual replications. If virtual replications exist and you want to use linear replications, delete all replications and peer connections first, then create linear replications.
Benefits of Remote Snap


Remote Snap is a licensed replication feature for disaster recovery. It has a robust, fault-tolerant design that allows replication to continue in
the event of some system failures involving communication, controllers, ports, hard drives (depending on the RAID configuration), or
temporary power failure. Remote Snap can use Ethernet (iSCSI) for any replication; for linear replications, and for virtual replications where both the primary and secondary volumes reside on MSA 2050 or MSA 1050 arrays, it also supports Fibre Channel (FC) technology.

Remote Snap technology enables the following key data management and protection capabilities:
• Continuity of business systems in the event of a failure on the primary site
• Access to data at a remote site, for either dispersed operations or development activities
• Multiple recovery points using snapshots

Disaster recovery
Remote Snap provides access to data at a secondary site when the primary site experiences a critical failure. It allows several data volumes
(limits determined by model and volume type replicated) to be replicated. Replicating at regular intervals helps to protect the data. Recovery
time is reduced because the data is available at the secondary site; applications can switch to the secondary site with minimal downtime
using data from the last replication point. The data stored at the secondary site can then be used to restore the primary location once it is
back online, or the data can be exported and used by users at the secondary site.

Backup
Remote Snap can replicate volumes with marginal impact on server performance. It can be used by small businesses as a primary backup
tool and by large businesses as a secondary backup tool at data centers. Remote Snap can be used as interim storage for backing up to
removable media such as tape.

Alternatively, remote offices can replicate to central data centers where backups occur. The software reduces the overall backup time by
replicating only data that has been modified. Because linear volume replication, and virtual volume replication where both the primary and secondary volumes reside on MSA 2050 or MSA 1050 arrays, support either FC or Ethernet interconnects, businesses have the flexibility to use the technology that best matches their current environment.

Development
Remote Snap enables different development use cases:
• An application administrator can test patches or changes in the primary system by switching the applications to the secondary site. Once
the testing of the patch update is completed, the administrator can switch the applications back to the primary site.
• A database application development team can have access to regularly scheduled snapshots of the replicated database volumes by
exporting the snapshots or, when both the primary and secondary volumes reside on MSA 2050 or MSA 1050 arrays, using replication
history snapshots on the secondary system. When the exported or replication history snapshot is no longer needed, it can be deleted.
Components of Remote Snap


Components common to linear and virtual replication
• Primary volume—the volume that is the source of the replication. This volume can be mapped to hosts. For virtual replications, the
primary can be a virtual volume, a virtual snapshot, or a virtual volume group.
• Secondary volume—the volume that is the destination or target of the replication. This volume cannot be mapped to hosts, but it can be
snapped, and the snapshot can be mapped to hosts. For virtual volumes, this can be either a virtual volume or virtual volume group.
• Primary and secondary volume replication snapshots—snapshots of the primary or secondary volume used in replication. Replication
snapshots do not count against license limits.
For linear replications, these snapshots are listed in the output of the user interfaces; though they cannot be directly mapped to hosts,
they can be exported as regular snapshots, and those snapshots can be mapped to hosts. As you’ll see below, linear replications can have
a number of replication snapshots—if a replication snapshot of a primary volume has a matching secondary volume replication snapshot,
the two replication snapshots are together considered a replication image.
For virtual replication, there are two snapshots for the MSA 2040 and the MSA 1040: the current snapshot and the previous snapshot.
There are three snapshots for the MSA 2050 and the MSA 1050: the current snapshot and the previous snapshot, and a snapshot used
when queuing is enabled. Though these snapshots are not listed, they do consume space and count against volume count limits for
snapshot trees and pools.
• Replication set—a primary and secondary volume pair connected together for the purposes of replication, along with their associated
snapshots.
For linear replications, the replication set includes the host I/O ports used for replication for the set. Each replication set can use multiple
redundant host ports of one protocol only, but a different replication set may use host ports of a different protocol. The direction of a
linear replication set can be changed—the secondary becomes the primary and the primary becomes the secondary.
For virtual replication, the replication set includes the peer connection used. See below for more information on the peer connection. The
direction of a virtual replication cannot be changed—the secondary cannot become the primary and the primary cannot become the
secondary. Also, if the replication set includes a virtual volume group, you cannot add or remove volumes from the volume group.

Components of linear replication


• Remote system—a representation of another array; it contains the management IP addresses and login credentials of a management level
user on that array. The local array may communicate using the management ports to the remote array to obtain storage configuration of
the remote array to assist in creating and managing replication sets.
• Replication image—a conceptual term for a pair of primary and secondary snapshots that represent a synchronized point-in-time
representation of the data.

Components of virtual replication


• Peer connection—defines the host ports used between two arrays involved in replications. These host ports are used for
managing replications as well as transferring data for replication; the management ports of the array are not used for virtual
replication. Peer connections are bidirectional, and can use FC or iSCSI host ports when both the primary and secondary
volumes reside on MSA 2050 or MSA 1050 arrays, but can use only iSCSI host ports if either the primary or secondary volume
resides on an MSA 2040 or an MSA 1040 array. All replications that share a peer connection use the same host ports. The
MSA 2040, the MSA 1040, and the MSA 1050 may have only one peer connection, while the MSA 2050 can have up to four
peer connections. The MSA 2050 and MSA 1050 require authentication when creating or modifying peer connections, but neither the MSA 2040 nor the MSA 1040 can obtain the required authentication information from the user and provide it to a remote MSA 2050 or MSA 1050. Therefore, when one of the arrays is an MSA 2050 or MSA 1050 and the other is an MSA 2040 or MSA 1040, create or modify the peer connection from the MSA 2050 or MSA 1050.
How the technology works


Remote Snap is based on the existing snapshot technology offered by MSA 2050 Storage, MSA 1050 Storage, MSA 2040 Storage,
MSA 1040 Storage, and P2000 G3 arrays. Snapshots are used to track the data to be replicated by determining the differences in data
updated on the master volume, minimizing the amount of data to be transferred.

Remote Snap enables snapshots of data to reside on another array at a location other than the primary site. To perform a replication, the
system takes a snapshot of the volume to be replicated, creating a point-in-time image of the data. The system replicates this point-in-time
image to the destination volume by copying only the differences in the data between the current snapshot and the previous one via TCP/IP
(iSCSI) or FC.

The snapshot occurs at the volume level and is block-based or page-based. The software functions independently of the vdisk or disk group
configuration, so the secondary volume in a given set may have different underlying RAID levels, drive counts, drive sizes or drive types than
the primary volume, though the volume sizes are identical. Since the software functions at the raw block-level or page-level, it has no
knowledge of the volume’s operating system configuration, the file system, or any data that exists on the volume.

Linear replication uses a pull model while virtual replication uses a push model. In a pull model, the secondary volume’s system requests data
from the appropriate snapshot on the primary volume; in a push model the primary volume’s system writes data to the appropriate snapshot
on the secondary volume.

Linear replication process


The linear replication process includes the following steps:
1. The primary array creates a snapshot of the primary volume. This snapshot is a component of a replication image.
2. The primary array sends a notification to the designated secondary array that a replication operation has been started for a given
storage volume.
3. The secondary array requests a difference list from the primary array. This list contains only the changed storage blocks of the primary
volume since the last replication command (previous sync point snapshot). For the first replication, the difference list contains all storage
blocks of the primary volume.
4. The primary array sends a difference list to the secondary array.
5. The secondary array requests all blocks in the difference list from the primary array.
6. When the transfer is complete, the secondary array creates a new snapshot to track the newly acquired storage blocks. The secondary
array creates snapshots of the secondary volume for each replication.

Linear replication repeats step 1 and queues steps 2–6 for each new replication command issued to the same replication set until the prior
replication command is complete. As long as the sync points are maintained, new replication commands to the same primary volume can be
performed while one or more previously executed replication commands are still in process. This enables you to take snapshots at discrete
intervals without waiting for any previous replications to complete.
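
To make the pull model concrete, here is a minimal, runnable Python sketch of the cycle above. It models volumes and snapshots as plain dictionaries of block data; the function names and data structures are illustrative assumptions and do not correspond to any MSA API or firmware behavior.

def take_snapshot(volume):
    """Step 1: create a point-in-time copy of the primary volume (a sync point)."""
    return dict(volume)

def difference_list(current_snap, previous_snap):
    """Steps 3-4: list the blocks changed since the previous sync point.
    For the first replication there is no previous snapshot, so every block is listed."""
    if previous_snap is None:
        return list(current_snap)
    return [b for b, data in current_snap.items() if previous_snap.get(b) != data]

def replicate(primary_volume, previous_snap, secondary_volume):
    """Steps 2-6: the secondary pulls only the listed blocks, then takes its own snapshot."""
    current_snap = take_snapshot(primary_volume)
    for block in difference_list(current_snap, previous_snap):
        secondary_volume[block] = current_snap[block]     # step 5: pull each changed block
    secondary_snap = dict(secondary_volume)               # step 6: new sync point on the secondary
    return current_snap, secondary_snap

# The first replication copies every block; the second copies only the changed block.
primary = {0: "a", 1: "b", 2: "c"}
secondary = {}
snap1, _ = replicate(primary, None, secondary)
primary[2] = "c2"
snap2, _ = replicate(primary, snap1, secondary)
print(secondary)    # {0: 'a', 1: 'b', 2: 'c2'}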
Figure 1: Linear replication image creation cycle


Virtual replication process


The process for virtual replication includes the following steps:
1. Save the previous snapshots of the primary and secondary volumes.
2. Create current snapshots of the primary and secondary volumes; if the queue policy is set to queue latest, copy the queued replication
snapshot, if it exists, to the current primary snapshot.
3. Compare the current primary snapshot to the previous primary snapshot.
4. Copy the differences to the current secondary snapshot. For the initial replication, only allocated pages are copied—empty, unallocated pages are not. For subsequent replications, only the differences between the current primary snapshot and the previous primary snapshot are copied.
5. Roll back the secondary volume to the current secondary snapshot.

For the MSA 2040 and the MSA 1040, virtual replication does not keep more than the current and previous snapshots, and those snapshots
are not accessible. For the MSA 2050 and MSA 1050, virtual replication has three internal snapshots that are not accessible—the current
snapshot, the previous snapshot, and the queued snapshot. While these snapshots are not accessible, they can be listed with their storage
consumption using the show snapshots type replication CLI command. In addition, if both the primary and secondary volumes
reside on MSA 2050 or MSA 1050 arrays, replication snapshot history is available, and can exist for the secondary volume only, or for both
the secondary and primary volumes.

For the MSA 2040 and the MSA 1040, virtual replication does not queue replications; if a new replication request occurs while a previous
replication is in progress, the new request fails. For the MSA 2050 and MSA 1050, if the queue policy is set to queue latest, then if a new
replication request occurs while a previous replication is in progress, the system takes a snapshot and puts it in the queued internal
replication snapshot, replacing any previous snapshot with the new one, thus queuing at most one snapshot.
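
The push-model cycle and the queue-latest behavior can be sketched the same way. The following runnable Python fragment is an illustration under the same assumptions as the previous sketch (volumes and snapshots as dictionaries of pages); it is not the array's implementation.

class VirtualReplicationSet:
    """Illustrative model of the push-model replication cycle."""

    def __init__(self):
        self.previous = None    # previous primary snapshot
        self.current = None     # current primary snapshot
        self.queued = None      # at most one queued snapshot (queue-latest policy)

    def request(self, primary, secondary, in_progress=False):
        # If a replication is already running, queue-latest keeps only the newest snapshot.
        if in_progress:
            self.queued = dict(primary)
            return
        self.run(primary, secondary)

    def run(self, primary, secondary):
        self.previous = self.current                   # step 1: save the previous snapshot
        self.current = self.queued or dict(primary)    # step 2: take (or reuse the queued) snapshot
        self.queued = None
        if self.previous is None:
            changed = dict(self.current)               # initial replication: allocated pages only
        else:
            changed = {p: d for p, d in self.current.items()
                       if self.previous.get(p) != d}   # step 3: compare current vs. previous
        secondary.update(changed)                      # steps 4-5: push differences, roll back secondary

primary = {0: "a", 1: "b"}
secondary = {}
rset = VirtualReplicationSet()
rset.run(primary, secondary)    # initial replication copies the allocated pages
primary[1] = "b2"
rset.run(primary, secondary)    # later replications copy only changed pages
print(secondary)                # {0: 'a', 1: 'b2'}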
Figure 2: Virtual replication cycle


Comparison of linear replications versus virtual replications


Table 1. Comparison of linear replications versus virtual replications
Feature/Attribute | Linear replication | Virtual replication
Replication protocol | iSCSI or FC | iSCSI for MSA 2040 and MSA 1040; iSCSI or FC for MSA 2050 and MSA 1050
Change replication protocol | No | Only for MSA 2050
Initial replication | All blocks are copied | Only allocated pages are copied
Queued replication | Yes | MSA 2050 and MSA 1050 only; at most one replication queued
Multiple replication images/replication history | Up to the maximum number of snapshots | Up to 16 replication history snapshots available on the MSA 2050 and MSA 1050; not applicable for the MSA 2040 and MSA 1040—only the current replication is retained
Modify primary volume/reverse replication direction | Allowed | Not allowed
Replicate a snapshot of the primary volume using an existing replication set | Allowed | Allowed for the MSA 2050 and MSA 1050; not allowed for the MSA 2040 and MSA 1040
Create replication set using a snapshot or a volume group as the source | Not allowed | Allowed
Can replicate to or from more than one system | Allowed | Allowed for the MSA 2050; not allowed for the MSA 2040, MSA 1050, or MSA 1040

Types of replications
A replication from a volume on a system to another volume on the same system is a local replication; a replication from a volume on one
system to a volume on another, separate, system is a remote replication, no matter the location of the primary or secondary system. Virtual
replication can perform remote replications, while linear replication can perform both local and remote replications, and can be reconfigured
between the two.

Local replication
Local replication occurs when the primary and secondary volumes reside on the same system. When creating the replication set, ensure the
primary volume resides on one vdisk and the secondary volume resides on another vdisk. Once the set is created, replications can be
initiated. Local replication still uses the host ports for replication, and so the host ports need to be configured and connected to a switch.

Remote replication
Remote replication occurs when the primary and secondary volumes reside on different systems. When creating the replication set, ensure
the primary volume resides on a vdisk or pool of the local, or primary, system and the secondary volume resides on a vdisk or pool on the
remote, or secondary, system. Once the set is created, replications can be initiated.

Physical media transfer


Another option for linear replication is to perform an initial local replication before converting to a remote replication as part of a disaster
recovery setup or backup. The initial replication between a primary linear volume and a secondary linear volume requires a full data copy
between the two volumes; every block on the volume is copied. If the primary volume is very large, even if there is very little actual data
written to it, the initial remote replication can take a long time to complete and can cause bandwidth congestion. Perform a local replication
to avoid congestion, and then transfer the secondary volume to the remote system by transferring the physical media that the secondary
volume resides on to the remote system. This process is known as physical media transfer.

Note that when using Full Disk Encryption (FDE) on an MSA 2040 Storage array, it is a best practice to move media between systems that
are identically configured with FDE enabled or disabled. That is, move secured Self-Encrypting Drives (SEDs) to a secured FDE system, and
unsecured SEDs or non-SEDs to an unsecured FDE system or non-FDE system.
Remote Snap requirements


Setup requirements
To set up Remote Snap, observe the following requirements:
Linear replications
• One dual-controller array (for local replications) or two dual-controller arrays (for remote replications) connected via a switch; direct connection between systems is not supported. For optimum results, configure and connect all ports.
• The management ports on the two arrays must be on the same network when using the SMU to create replication sets, or when
specifying a vdisk and having the remote system create the secondary volume.
• The Remote Snap license must be enabled on the local and, if applicable, remote systems:
– To explore Remote Snap, enable the 60-day temporary license available on the P2000 G3 SMU on the system’s Tools > Install
License page, or obtain a 180-day temporary license.
– To permanently enable Remote Snap, purchase a license.
• Remote Snap supports up to 16 replication sets per array (up to 8 replication sets for the MSA 1040 Storage array). If a volume on the
system is participating in a replication set, either as a primary volume or as a secondary volume, it counts against the replication set limit.
• For the combo controller and iSCSI controller arrays, for optimal results connect and configure all of the iSCSI ports with valid IP addresses.
• At least one volume on the primary system and one vdisk on the secondary system are required to create a replication set.
• For linear replication, creating replication sets with the SMU requires adding the remote system to the local, or primary, system. Creating replication sets with the CLI is easier if the remote system is added, but adding it is not necessary.
• If using an existing replication-prepared volume, it must be the exact same size as the primary volume.

Virtual replications
• Two arrays are required. All arrays should have all reachable ports configured and connected via a switch; direct connection between
systems is not supported.
• The Remote Snap license must be enabled on both systems:
– To explore Remote Snap on the MSA 1040 or MSA 2040, obtain a 180-day temporary license.
– To permanently enable Remote Snap, purchase a license.
• Remote Snap supports up to 32 replication sets per array. If a volume on the system is participating in a replication set, either as a primary
volume or as a secondary volume, it counts against the replication set limit.
• At least one virtual pool on each system is required to create a peer connection between the two systems.
• At least one volume on the primary system is required to create a replication set.
• If one of the arrays is an MSA 2050 or MSA 1050 and the other array is an MSA 2040 or MSA 1040, create or modify the peer
connection from the MSA 2050 or MSA 1050.

Restrictions common to both linear and virtual replications


Remote Snap does not support SAS. Refer to the Remote Snap product overview section in the Introduction (page 4).

Note
For more information on controller types and additional specifications, see the HPE MSA 2050 Storage QuickSpecs, the HPE MSA 1050
Storage QuickSpecs, the HPE MSA 2040 Storage QuickSpecs, the HPE MSA 1040 Storage QuickSpecs, or the HPE MSA P2000 G3 Modular
Smart Array Systems QuickSpecs.
Snapshot space
Since Remote Snap compares snapshots to determine data to transfer for replication, care must be given to provide sufficient space for the
snapshots. While overall space requirements depend on the rate of data change and frequency of replication, and determining this is beyond
the scope of this document, some information on managing snapshot space is provided here. To see the storage consumed by the
snapshots, use the show snapshots type replication CLI command.

For linear replications, set the initial snap pool size in a number of ways: by creating a snap pool and associating it with the volume during creation, or when converting a standard volume to a master volume; by specifying the reserve (snap pool) size when creating a volume or replication set; or by allowing the size to default to 20% of the volume size or the minimum snap pool size of 5.37 GB, whichever is greater, when creating the volume or the replication set.
snap pool, delete oldest snapshots, delete all snapshots, halt writes, or just to notify when various thresholds are reached. The defaults are to
only notify for the warning threshold, to automatically attempt to expand the snap pool for the error threshold, and to delete snapshots for
the critical threshold. The warning and error thresholds can be set using the set snap-pool-thresholds CLI command. The defaults
are 75% for the warning threshold, and 90% for the error threshold; the critical threshold is set at 98% and cannot be changed. Snapshot
deletion will follow the retention priority set for the pool—see the online help for the set priorities CLI command for more information
and the default priorities.
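
As a quick illustration of the defaults described above, the following Python sketch computes the default snap pool size (assuming the 20% default refers to the volume size) and the action triggered at each default threshold; it is illustrative only and not how the firmware behaves internally.

def default_snap_pool_size_gb(volume_size_gb):
    # Default: 20% of the volume size, with a 5.37 GB minimum.
    return max(0.20 * volume_size_gb, 5.37)

def default_threshold_action(percent_used):
    if percent_used >= 98:     # critical threshold (fixed, cannot be changed)
        return "delete snapshots"
    if percent_used >= 90:     # error threshold (default)
        return "attempt to expand the snap pool"
    if percent_used >= 75:     # warning threshold (default)
        return "notify only"
    return "no action"

print(default_snap_pool_size_gb(500))    # 100.0
print(default_threshold_action(92))      # attempt to expand the snap pool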

For virtual replications, if overcommit is disabled for the pool, snapshots are fully provisioned, including the internal replication snapshots—storage equal to the size of the volume is allocated to each snapshot. If overcommit is enabled, the snapshot space has a soft limit as a percentage of the pool's physical space; the default is 10%. Set the snapshot space limit, and the snapshot space policy to only notify or to delete snapshots, using the set snapshot-space CLI command. If the policy is set to only notify, more snapshot space may be allocated, if available. Note that adding or removing disk groups from the pool changes the size of the snapshot space; if removing a disk group causes the snapshot space to exceed the limit, and the policy is set to delete snapshots, auto-deletion of snapshots occurs. Attempts to lower the limit below the current allocated size of the snapshots will fail, as this may otherwise cause auto-deletion of snapshots.

Only a snapshot that is a leaf (that is, there are no snapshots of the snapshot) and is unmapped is considered for auto-deletion. Retention priority is also considered when auto-deleting; the retention priority is inherited from the snapshot's parent, or can be reset on the snapshot itself, but changing the parent's priority does not propagate to existing children. Snapshot deletion considers priority first, then date, oldest first. Snapshots are deleted one at a time until the snapshot space drops below the limit. If no eligible snapshots exist, and snapshot space is still above the limit, no new snapshots can be created, but existing snapshots can continue to consume space; to allow new snapshots to be created, change the priority of one or more snapshots from never-delete to another priority, unmap a leaf snapshot, change the limit to a higher value, or add one or more disk groups to the pool.

To be notified of snapshot space usage, set thresholds for the snapshot space; the default for the low threshold is 75% of the snapshot space, the default for the middle threshold is 90%, and the default for the high threshold is 99%.
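
The auto-deletion selection described above can be summarized in a short, runnable Python sketch. The snapshot attributes and the relative ordering of the retention priorities shown here are assumptions for illustration; only the overall rule (unmapped leaf snapshots, never-delete excluded, lowest priority then oldest first, one at a time until under the limit) comes from the text above.

from dataclasses import dataclass

@dataclass
class Snap:
    name: str
    size: int
    age: int            # larger = older
    is_leaf: bool       # no snapshots of this snapshot
    is_mapped: bool
    priority: str       # e.g., "low", "medium", "high", "never-delete"

PRIORITY_ORDER = {"low": 0, "medium": 1, "high": 2}    # assumed ordering; lowest deleted first

def auto_delete(snapshots, used, limit):
    eligible = [s for s in snapshots
                if s.is_leaf and not s.is_mapped and s.priority != "never-delete"]
    eligible.sort(key=lambda s: (PRIORITY_ORDER[s.priority], -s.age))
    deleted = []
    for snap in eligible:
        if used <= limit:
            break
        used -= snap.size
        deleted.append(snap.name)
    return deleted, used

snaps = [
    Snap("snapA", size=10, age=5, is_leaf=True,  is_mapped=False, priority="medium"),
    Snap("snapB", size=10, age=9, is_leaf=True,  is_mapped=False, priority="low"),
    Snap("snapC", size=10, age=2, is_leaf=False, is_mapped=False, priority="low"),
]
print(auto_delete(snaps, used=95, limit=90))    # (['snapB'], 85)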

Network requirements
The following is a guideline for setting up iSCSI controllers with 1 Gb and 10 Gb ports to use with Remote Snap. The two arrays do not have
to be in the same subnet, but must be connected to a network whose route tables and firewall or port-blocking settings allow iSCSI traffic
(port 3260) to pass between the two systems. CHAP must be configured appropriately on both arrays. If jumbo frames are enabled on
either array, it must be enabled on the other array and all network devices in the path between the arrays.
System or environment variables
Hardware type: 10 Gb or 1 Gb
Priority (set using the priority parameter of the set replication-volume-parameters CLI command): Low, medium, or high
Number of concurrent inbound replications (Rp) (from the primary system’s view): User-configured
Number of inbound channels (Cp) (from the primary system’s view): User-configured
Number of concurrent outbound replications (Rs) (from the secondary system’s view): User-configured
Number of outbound channels (Cs) (from the secondary system’s view): User-configured
Packet loss rate (PL): You may get this from a switch or router, or use a tool such as PathPing or MTR
Round trip time (RTT) in ms: Get this from ping
Bandwidth (BW) in kilobytes per second (KB/s): Use a bandwidth speed test available from many websites
Congestion Avoidance Loss (CAL): This is difficult to obtain. It is generally around 30% for a WAN, but higher as distance increases
Throughput requirements
Data Transfer Pending (DTP) depends on the Priority: 1280 for low, 2816 for medium, or 4096 for high.
Primary system calculations:
Primary timeout (TOp): 30 ms

Throughput required (Tp) = DTP * Rp/TOp

Minimum throughput required (MTp) = 13 KB/s


Secondary system calculations:
Secondary timeout (TOs): 40 ms

1 Gb throughput required (Ts): DTP * Rs (up to 8)/TOs

10 Gb throughput required: N/A

Minimum throughput required (MTs) = 9.6 KB/s


Network throughput
Maximum segment size (MSS): 8960 if Jumbo frames are enabled, 1460 otherwise

TCP window size (TWS) in Kilobytes: 32 KB for 1 Gb controllers, 64 KB for 10 Gb controllers


Throughput limit by packet loss (Bps): If PL = 0, then 0, else MSS/(RTT/1000) * (1/SQRT [PL])

Throughput limit by RTT (Bps): TWS/(RTT/1000) * (1-CAL)

Throughput limit by Bandwidth (Bps): BW * 1024

Network throughput limit (NTL): Minimum of throughput limit by packet loss (if non-zero), throughput limit by RTT and throughput limit
by bandwidth
Results
Primary system throughput required to avoid timeout (TOAp) (KB/s) = Tp/Cp

For the primary system


• If the NTL is greater than the TOAp, the replication should not timeout.
• If the NTL is greater than the MTp, timeouts will occur, but the replication should succeed.
• If the NTL is less than the MTp, the replication will fail.

Secondary system throughput required to avoid timeout (TOAs) (KB/s) = Ts/Cs for 1 Gb controllers, N/A for 10 Gb controllers.

For the secondary system with 1 Gb controllers


• If the NTL is greater than the TOAs, the replication should not timeout.
• If the NTL is greater than the MTs, timeouts will occur, but the replication should succeed.
• If the NTL is less than the MTs, the replication will fail. If the secondary system contains 10 Gb controllers, then the replication should
not timeout.
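
The calculations above can be combined into a small throughput estimator. The Python sketch below follows the formulas literally; converting the TCP window size to bytes and expressing the final network throughput limit in KB/s for comparison with the primary and secondary requirements are assumptions made here for unit consistency, so treat the results as rough estimates only.

import math

DTP_BY_PRIORITY = {"low": 1280, "medium": 2816, "high": 4096}

def network_throughput_limit_bps(mss, tws_kb, pl, rtt_ms, bw_kbps, cal):
    rtt_s = rtt_ms / 1000.0
    limits = []
    if pl > 0:
        limits.append(mss / rtt_s * (1 / math.sqrt(pl)))   # limit by packet loss
    limits.append(tws_kb * 1024 / rtt_s * (1 - cal))       # limit by RTT
    limits.append(bw_kbps * 1024)                          # limit by bandwidth
    return min(limits)

def evaluate(priority, rp, cp, rs, cs, jumbo, ten_gb, pl, rtt_ms, bw_kbps, cal=0.30):
    dtp = DTP_BY_PRIORITY[priority]
    tp = dtp * rp / 30.0                      # primary throughput required, TOp = 30 ms
    toa_p = tp / cp                           # primary throughput required to avoid timeout
    mss = 8960 if jumbo else 1460
    tws_kb = 64 if ten_gb else 32
    ntl_kb = network_throughput_limit_bps(mss, tws_kb, pl, rtt_ms, bw_kbps, cal) / 1024
    if ntl_kb > toa_p:
        primary = "should not time out"
    elif ntl_kb > 13:                         # MTp = 13 KB/s
        primary = "timeouts expected, but the replication should succeed"
    else:
        primary = "the replication will fail"
    if ten_gb:
        secondary = "not applicable for 10 Gb controllers"
    else:
        ts = dtp * min(rs, 8) / 40.0          # secondary throughput required, TOs = 40 ms
        toa_s = ts / cs
        if ntl_kb > toa_s:
            secondary = "should not time out"
        elif ntl_kb > 9.6:                    # MTs = 9.6 KB/s
            secondary = "timeouts expected, but the replication should succeed"
        else:
            secondary = "the replication will fail"
    return primary, secondary

print(evaluate("medium", rp=2, cp=2, rs=2, cs=2,
               jumbo=False, ten_gb=False, pl=0.001, rtt_ms=20, bw_kbps=10000))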
Remote Snap basic functions


The following functions are highlighted in this section:
• Preparing the systems:
– For linear replication, adding a remote system
– For virtual replication, creating a peer connection
• Creating a replication set
• Scheduling replications
• Deleting a replication set
• Accessing the secondary volume’s data
– Exporting a replication image of a linear replication set
– Creating a snapshot of a secondary virtual volume
– Using a replication history snapshot of a secondary virtual volume
• For linear replication, setting the primary volume
• Verifying replication data links

For more information on Remote Snap functions, see the HPE MSA 2050 SMU reference guide, the HPE MSA 1050 SMU reference guide,
the HPE MSA 2040 SMU reference guide, the HPE MSA 1040 SMU reference guide, or the HPE P2000 G3 MSA System SMU reference guide.

General notes about using the SMU and CLI


Linear replications are created and managed using the v2 version of the SMU and CLI; virtual replications are created and managed
using the v3 version of the SMU and CLI. The P2000 G3 supports only linear replications, and the MSA 2050 and MSA 1050 support
only virtual replications. The MSA 1040 and MSA 2040 support both linear and virtual replications, and so support both the v2 and
v3 versions of the SMU and CLI. For the CLI, see the current version (v2 or v3) using the show cli-parameters command and
set it using the management-mode parameter of the set cli-parameters command. For the SMU, specify v2 or v3 in the URL
(e.g., https://10.10.0.10/v2), or change the version on the login screen as necessary—click the Click to launch previous version link on the v3 login screen to access the v2 interface, or click the Click to launch new version of the application link on the v2 login screen to access the v3 interface.
Figure 3: Selecting the v2 interface from the v3 login

Figure 4: Selecting the v3 interface from the v2 login

Performing tasks using the v2 SMU or the P2000 G3 SMU


To perform a task, select the component in the Configuration View panel and then either right-click on the component and use the context
menu, or click a task category from the list at the top of the main panel and select the specific task to perform. The system is the top-most
component of the Configuration View panel and shows the system’s name and its model in parentheses.

Performing tasks using the v3 SMU or the MSA 2050 or MSA 1050 SMU
To perform a task, select the topic from the topic tabs on the left side of the interface, select the component in the topic pane, then select
the action from the Action menu. While some tasks may be performed in the Volumes topic, all tasks can be performed in the
Replications topic.
Preparing the systems


Adding a remote system for linear replication
This operation adds the remote system to the local system. The remote system is listed in the navigation tree. This operation is required
when using the SMU to create a replication set. This operation is required when using the CLI only when specifying a remote vdisk and
having the remote system create the secondary volume; however, adding a remote system generally makes it easier to create a replication
set using the CLI. Use the system’s Configuration > Remote Systems > Add Remote System SMU page or the create remote-system
CLI command.

Figure 5: Creating a remote system using the SMU

# create remote-system username manage password !manage 10.10.5.170


Success: Command completed successfully. (10.10.5.170) - The remote system was created.

Command example 1: Creating a remote system using the CLI



Creating a peer connection for virtual replication


This operation creates a connection between two systems for the purpose of replication. The connection uses host ports for both
communication and I/O. For the MSA 2050 and the MSA 1050, the connection can use either the iSCSI or FC host ports; for the MSA 2040
and the MSA 1040, the connection uses only the iSCSI host ports. If both systems of a peer connection are MSA 2050 arrays operating in a
combination FC and iSCSI mode (two ports per controller are configured for FC and two ports are configured for iSCSI), the connection can
change protocols from iSCSI to FC or from FC to iSCSI. The connection includes all paths between the systems—it uses up to two paths for
load balancing. The connection is bidirectional—the concepts of primary and secondary do not apply to the connection. For the MSA 2050,
up to four peer connections can be created; for the MSA 2040, the MSA 1050, and the MSA 1040, only one peer connection can exist on
a system.
Checking for connectivity with another system
To determine if there is connectivity to a remote system and provide some information about that system, use the query peer connection
functionality. For the MSA 2050 and MSA 1050, from the Replications topic, select Action > Query Peer Connection.

Figure 6: Querying a peer connection on an MSA 2050 or MSA 1050 using the SMU

Figure 7: Peer connection query results on an MSA 2050 or MSA 1050 using the SMU

The MSA 2040 and MSA 1040 do not have an SMU option for this; use the query peer-connection CLI command instead, which is also available on the MSA 2050 and MSA 1050.

# query peer-connection 10.20.5.172


Info: This may take a few minutes to ping all port combinations...
Peer Connection Info
--------------------
System Name: SanAntonio
System Contact: Joe Admin (555.0698 joe.admin@hpe.com)
System Location: Rack5
System Information: Remote Snap - 2050 Combo
Midplane Serial Number: DHSIFTJ-161927CD55
Vendor Name: HPE
Product ID: MSA 2050 SAN
License Key: 890816fc54afdded2d3a0b77431743fa
Licensing Serial Number: 27CD55
Maximum Licensable Snapshots: 1024
Base Maximum Snapshots: 64
Licensed Snapshots: 512
In-Use Snapshots: 0
Snapshots Expire: Never
Virtualization: Enabled
Virtualization Expires: Never
Performance Tier: Enabled
Performance Tier Expires: Never
Volume Copy: Enabled
Volume Copy Expires: Never
Replication: Enabled
Replication Expires: Never
VDS: Disabled
VDS Expires: Never
VSS: Disabled
VSS Expires: Never
SRA: Enabled
SRA Expires: Never

Peer Controllers
----------------
Controller: A
Storage Controller Code Version: VLS100R03-01
Management Controller Code Version: VXM100R004-01
IP Address: 10.10.5.172

Port  Type   Port Health  Port Address  Reachable Local Links
--------------------------------------------------------------------------------------------
A3    iSCSI  Up           10.20.5.172   A3,B3
A4    iSCSI  Up           10.30.5.172   A4,B4

Peer Controllers
----------------
Controller: B
Storage Controller Code Version: VLS100R03-01
Management Controller Code Version: VXM100R004-01
IP Address: 10.10.5.173

Port  Type   Port Health  Port Address  Reachable Local Links
--------------------------------------------------------------------------------------------
B3    iSCSI  Up           10.20.5.173   A3,B3
B4    iSCSI  Up           10.30.5.173   A4,B4

Success: Command completed successfully.


Command example 2: Querying a peer connection using the CLI

Creating the peer connection


In the Replications topic, select Action > Create Peer Connection. The Create Peer Connection panel opens. Enter a name for the
connection and the iSCSI address or, when both systems are MSA 2050 or MSA 1050, the FC WWN of one of the ports of the other system.
Alternatively, use the create peer-connection CLI command.

Figure 8: Creating an iSCSI peer connection from the MSA 2040 or MSA 1040 using the SMU

# create peer-connection remote-port-address 10.20.5.162 PhxSea


Info: This may take a few minutes to ping all port combinations...
Success: Command completed successfully.

Command example 3: Creating a peer connection from the MSA 2040 or MSA 1040 using the CLI

The MSA 2050 and MSA 1050 require authentication when creating or modifying peer connections, but neither the MSA 2040 nor the MSA 1040 can collect the required authentication information from the user and provide it to a remote MSA 2050 or MSA 1050. Therefore, when one of the arrays is an MSA 2050 or MSA 1050 and the other array is an MSA 2040 or MSA 1040, create or modify the peer connection from the MSA 2050 or MSA 1050.

Figure 9: Creating an FC peer connection from the MSA 2050 or the MSA 1050 using the SMU

# create peer-connection remote-port-address 10.20.5.168 remote-username manage remote-password !manage KC-Phx


Info: This may take a few minutes to ping all port combinations...
Success: Command completed successfully.

Command example 4: Creating an iSCSI peer connection from the MSA 2050 or the MSA 1050 using the CLI

Creating a replication set


This operation creates the secondary volume if necessary and connects it to the primary volume. When creating a replication set, you must always select or specify the primary volume. The remaining parameters depend on the type of volume (linear or virtual), the user interface used (SMU or CLI), whether the replication is local (using only one array) or remote (using two arrays), and, for remote replications, whether the remote system has been added and is accessible.
Creating a linear replication set
Choose the link type carefully; it cannot be changed after the replication set is created. The primary volume does not have to be created as a master volume, because the process of creating the replication set converts a standard volume to a master volume. A secondary volume created manually must be created as a replication-prepared volume, using the prepare-replication-volume parameter of the create volume command in the CLI, or by checking the Replication Prepare box on the vdisk’s Provisioning > Create Volume page.
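
For example, a replication-prepared secondary volume might be created on the remote system with a command along the following lines. The vdisk and volume names here are hypothetical, the output is illustrative, and the exact placement of the prepare-replication-volume parameter may vary by firmware; check the CLI reference for your array.

# create volume vdisk vd-r5-b size 50GB prepare-replication-volume dst
Success: Command completed successfully. (dst) - The volume was created.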

Using the SMU

Note
If creating a remote replication, add the remote system first.

From the volume’s Provisioning > Replicate Volume page, select the secondary system (the local system is the default), and either the vdisk on which the secondary volume will automatically be created or the replication-prepared secondary volume. Then select the link type (FC or iSCSI). Finally, if you elect to initiate the replication, choose whether to initiate it now or on a schedule. Allowing the system to automatically create the secondary volume on the specified vdisk is the easiest and fastest way to create a replication set.

Figure 10: Creating a linear replication set using the SMU—automatic creation of secondary volume

Using the CLI

Note
If you want a secondary volume created for you on a vdisk on a remote system, you must add the remote system first. Even if you’re using a
replication-prepared volume on a remote system, adding the remote system first makes creating the replication set easier since you don’t
have to provide the addresses of the ports of the system that contains the secondary volume.

Use the create replication-set command, and specify the link-type (optional if supplying the primary-address), the
remote-system (for a remote replication, and if the remote system has previously been added), the remote-vdisk or replication-prepared
remote-volume, the secondary-address (optional for local replications or if the remote system has previously been added), and the
primary-volume.
# create replication-set link-type iSCSI secondary-address ip=10.20.5.170,10.30.5.171 remote-volume dst
primary-address ip=10.20.5.160,10.30.5.161 set src-dst src
Info: The volume was created. (spsrc)
Info: Converted the volume to a master volume. (src)
Info: The primary volume was prepared for replication. (src)
Info: Started adding the secondary volume to the replication set. (dst)
Info: Verifying that the secondary volume was added to the replication set. This may take a couple of
minutes... (dst)
Info: The secondary volume was added to the replication set. (dst)
Info: The primary volume is ready for replication. (src)
Success: Command completed successfully.

Command example 5: Creating a linear replication set using the CLI—specifying addresses and secondary volume when management ports
cannot communicate

Creating a virtual replication set


When creating a replication set using a virtual primary volume or volume group, the secondary volume or volume group is created
automatically; you cannot create a secondary volume or volume group manually. If the secondary pool name does not match the primary
pool name, specify the secondary pool name manually.

Note
Create the peer connection that is required as part of the replication set before attempting to create the replication set. If both systems
involved in the replication set are MSA 2050 arrays operating in a combination FC and iSCSI mode (two ports per controller are configured
for FC and two ports are configured for iSCSI), the connection can change protocols from iSCSI to FC or from FC to iSCSI.

Using the SMU


From the Replications topic, select the peer connection, then select the Create Replication Set action. From the Create Replication Set
panel, provide the Replication Set Name, select Single Volume or Volume Group to display the appropriate choices for the source, select
the volume or volume group to replicate, and modify the Secondary Volume Name and Peer System Pool as desired. For the MSA 2050
and the MSA 1050, select the Queue Policy desired, and select Secondary Volume Snapshot History and associated parameters if you
wish to retain replication history snapshots. Note that changing the Queue Policy to the Queue Latest option or enabling Secondary
Volume Snapshot History is only successful when both the primary and secondary volumes reside on MSA 2050 or MSA 1050 arrays.
Finally, choose whether to schedule replications. Once the replication set has been created, you’ll have a chance to initiate the initial replication if replications were not scheduled.

Figure 11: Creating a virtual replication set from the MSA 2040 or MSA 1040 for a volume from the Replications topic in the SMU

Figure 12: Creating a virtual replication set from the MSA 2050 or MSA 1050 for a volume from the Replications topic in the SMU

Alternatively, from the Volumes topic, select the volume or a member of the volume group to replicate, then select the Create Replication
Set action. From the Create Replication Set panel, select Single Volume or Volume Group, provide the Replication Set Name, and modify
the Secondary Volume Name and Secondary Pool as desired. For the MSA 2050 and the MSA 1050, select the Queue Policy desired, and
select Secondary Volume Snapshot History and associated parameters if you wish to retain replication history snapshots. Note that setting
the Queue Policy to Queue Latest or enabling Secondary Volume Snapshot History is only successful when both the primary and
secondary volumes reside on MSA 2050 or MSA 1050 arrays. Finally, choose whether to schedule replications. Once the replication set has been created, you’ll have a chance to initiate the initial replication if replications were not scheduled.

Figure 13: Creating a virtual replication set from the MSA 2040 or MSA 1040 for a volume group from the Volumes topic in the SMU

Figure 14: Creating a virtual replication set from the MSA 2050 or MSA 1050 for a volume from the Volumes topic in the SMU

Using the CLI


Use the create replication-set command and specify, at a minimum, the peer-connection, the primary volume or volume group, and the replication set name. You may also choose the remote pool (A or B), and change the remote volume name if replicating a volume rather than a volume group. When both the primary and secondary volumes reside on MSA 2050 or MSA 1050 arrays, you can also specify the queue policy and replication snapshot history options.
# create replication-set peer-connection PhxSea primary-volume Src secondary-pool B secondary-volume-name Dst
SrcDst
Success: Command completed successfully.

Command example 6: Creating a virtual replication set from the MSA 2040 or MSA 1040 using the CLI

# create replication-set peer-connection PhxSea primary-volume Src secondary-pool B secondary-volume-name Dst queue-policy queue-latest snapshot-history both snapshot-basename SrcDstHist snapshot-count 4 SrcDst
Success: Command completed successfully.

Command example 7: Creating a virtual replication set for the MSA 2050 or MSA 1050 using the CLI

Scheduling replications
The scheduler can be used to replicate data from a primary volume to the remote system at regular intervals. Creating a replication schedule consists of two parts: creating a replication task, which indicates the action of the task (in this case replication) and the parameters associated with the task, and creating a schedule for running the task. The CLI requires two commands to perform this, while the SMU creates the task and schedule in one operation.
Common schedule parameters
Parameters common to all schedules are the start time and date, which must be in the future; the recurrence or repetition interval (if not set or selected, the replication occurs only once); the end time and date or count limit (the number of times to run the task); and time and date constraints, which only constrain when the task starts and do not define a window in which the task must complete.
Scheduling a linear replication
Notes on parameters for scheduled linear replication tasks
When a scheduled replication occurs, the name of a replication image (the name of the primary snapshot) created by the scheduled task begins with a prefix you specify, followed by “_R” and then a four-digit number, starting with 0001; for example, if the prefix was “Data,” the first replication image will have the name “Data_R0001.”

To control space usage, specify the number of images (replication snapshots) to retain; this is the retention count. It is a maximum: fewer images (snapshots) may be retained due to snapshot space limitations.

There are two replication modes: in one, the system creates a new snapshot and replicates it; in the other, the system replicates the most recent existing snapshot. The second mode is especially useful when another application, such as VSS, performs the actual snapshot creation.

Using the SMU


Defaults are provided for the Replication image prefix, the Replication Mode, and the Replication Images to Retain; change them as needed. See the Notes on parameters for scheduled linear replication tasks and the Common schedule parameters sections for more information.
Option 1: Create the schedule while creating the replication set. Check the Initiate Replication box and select the Scheduled radio button.

Figure 15: Creating a linear replication schedule when the replication set is created using the SMU

Option 2: Create the schedule for an existing replication set. Select the primary volume’s Provisioning > Replicate Volume page, and select
the Scheduled radio button.

Figure 16: Creating a linear replication schedule for an existing replication set using the SMU

Using the CLI


First, create the task. For linear replication tasks, you must specify the task type (ReplicateVolume), the source volume, the snapshot prefix, and the retention count. You may also specify the replication mode; the default is to take a new snapshot. See the Notes on parameters for scheduled linear replication tasks section for more information.
# create task type ReplicateVolume source-volume Data snapshot-prefix Data_image retention-count 3
replication-mode newsnapshot DataRepTask
Success: Command completed successfully. (DataRepTask) - The task was created.

Command example 8: Creating a linear replication task using the CLI

Then, create the schedule. You must provide the task to run and the schedule’s name; while you must also provide a schedule specification,
only the start time is required. See the Common schedule parameters section for more information. The minimum interval is 30 minutes.
# create schedule schedule-specification "start 2016-02-22 22:00 every 2 hours count 5" task-name DataRepTask DataRepSchedule
Success: Command completed successfully. (DataRepSchedule) - The schedule was created.

Command example 9: Creating a linear replication schedule using the CLI



Scheduling a virtual replication


Using the SMU
While creating the replication set, check the Scheduled box on the Create Replication Set panel, or, after the replication set has been
created, from the Replications topic, select the replication set and select the Schedule Replications action. On the Schedule Replications
panel, you must provide the Task Name. By default, when the scheduled replication occurs, it creates a snapshot as part of the process. For
the MSA 2050 and the MSA 1050, select Last Snapshot to use the existing last snapshot of the volume for replication, rather than creating
a new snapshot when replication begins. If Last Snapshot is selected and no snapshot exists for the volume when the scheduled replication
begins, the system generates an error with event code 362 and the replication fails. Snapshots used by the Last Snapshot option may be
created manually or by scheduling the snapshot. See the Common schedule parameters section for more information.

Figure 17: Creating a virtual replication schedule from the MSA 2040 or MSA 1040 using the SMU

Figure 18: Creating a virtual replication schedule from the MSA 2050 or MSA 1050 using the SMU

Using the CLI


First, create the task. For virtual replication tasks, you must specify the task type (Replicate), the replication set name or serial number, and
the task name. For the MSA 2050 and the MSA 1050, specify the last-snapshot parameter to use the existing last snapshot of the
volume for replication, rather than creating a new snapshot when replication begins.
# create task type Replicate replication-set SrcDst SrcDstTask
Success: Command completed successfully. (SrcDstTask) - The task was created.

Command example 10: Creating a virtual replication task using the CLI

Then, create the schedule. You must provide the task to run and the schedule’s name; while you must also provide a schedule specification,
only the start time is required. See the Common schedule parameters section for more information. For the MSA 2050 and the MSA 1050,
the minimum interval is 30 minutes; for the MSA 2040 and the MSA 1040, the minimum interval is 60 minutes.
# create schedule schedule-specification "start 2016-02-25 07:00 every 60 minutes only any weekday of year" task-name SrcDstTask SrcDstSchedule
Success: Command completed successfully. (SrcDstSchedule) - The schedule was created.

Command example 11: Creating a virtual replication schedule using the CLI

Deleting a replication set


The volumes associated with the replication set are not deleted, but are converted or changed to become accessible to hosts. Linear
replication snapshots are converted to standard snapshots. You can only delete linear replication sets from the system that contains the
primary volume; you can delete virtual replication sets from either system.
Deleting a linear replication set using the SMU
Use the primary volume’s Provisioning > Remove Replication Set page.

Figure 19: Deleting a linear replication set using the SMU



Deleting a virtual replication set using the SMU


Select the replication set from the Replications topic, then use the Delete Replication Set action.

Figure 20: Deleting a virtual replication set using the SMU

Deleting a replication set using the CLI


For either linear or virtual replications, use the delete replication-set command.

# delete replication-set SrcDst


If you delete the replication set, the primary volume and secondary volume will no longer be in a replication
relationship. Although the data in both volumes will remain unchanged, any future replications from the
primary volume must be to a different secondary volume.
Do you want to continue? (y/n) y
Success: Command completed successfully.

Command example 12: Deleting a replication set using the CLI



Accessing the secondary volume’s data


Exporting a replication image of a linear replication volume
To access a secondary linear volume’s data, export one of the replication images (snapshots) to a standard snapshot on the secondary
system; export the last replication image on the secondary volume’s system to effectively access the secondary volume’s current data. This
newly created snapshot can then be mounted and used to verify data or for any other purpose. Any change made in this new snapshot does
not affect the original replication snapshot or secondary volume. Snapshots exported from a replication image count against the snapshot
license limits. Use the replication image’s Provisioning > Export Snapshot page in the SMU or the export snapshot CLI command.

Figure 21: Exporting a snapshot of a replication image using the SMU

# export snapshot name init-snap init


Info: The exported snapshot will reside in the snap pool. If the snap pool reaches its critical threshold, the
snapshot may be deleted, even if it is mapped. To preserve the exported snapshot's data, create a volume copy
of the exported snapshot.
Info: The snapshot has been exported. (init-snap)
Success: Command completed successfully.

Command example 13: Exporting a snapshot of a replication image using the CLI

Creating a snapshot of a virtual replication volume


To access a secondary virtual volume’s data, create a snapshot of the secondary volume. This newly created snapshot can then be mounted
and used to verify data or for any other purpose. Any change made in this new snapshot does not affect the secondary volume. Snapshots
made from a secondary volume count against the snapshot license limits. In the SMU, select the secondary volume from the Volumes topic,
then select the Create Snapshots action, or use the create snapshots CLI command.

Figure 22: Creating a snapshot of a replication volume using the SMU

# create snapshots volumes Dst Dst-snap


Info: The virtual pool is overcommitted. Ensure that event notification is configured to receive warnings
before running out of physical storage.
Success: Command completed successfully. (Dst-snap) - Snapshot(s) were created.

Command example 14: Creating a snapshot of a replication volume using the CLI

Using a replication history snapshot of a secondary virtual volume


For the MSA 2050 or the MSA 1050, another method to access the secondary volume’s data is to use the replication snapshot history
feature. This applies only to replication sets of volumes, not to replication sets of volume groups. You can create replication history
snapshots on the secondary volume only, or on both the secondary and primary volumes. If enabled for the primary volume, the current
internal primary snapshot is copied to a user accessible snapshot prior to replication; the current internal secondary snapshot is copied to
a user accessible snapshot once the replication completes.

You can choose to retain up to 16 snapshots; once the snapshot count is exceeded, the oldest unmapped history snapshot is discarded automatically, irrespective of its retention priority, even if that priority is never-delete. Manually created snapshots do not count against the retention limit and are not managed by the snapshot history feature; snapshots created with this feature do count against the array-wide maximum of licensed snapshots. You can set the retention priority of replication history snapshots just as you do for any other volume snapshot; this specifies the retention priority with regard to snapshot space, not the snapshot history retention (see setting the snapshot retention priority for a volume for more information).

Snapshot names are of the form <basename>_nnnn, where nnnn is an integer that is incremented for each subsequent snapshot, and basename is a name you choose, up to 26 bytes. If primary snapshots are enabled, snapshots of the same name (including digits) will exist on both the primary and secondary arrays. If the system attempts to create a history snapshot but a volume with that name already exists, the history snapshot is not created; the replication operation itself is not affected. Every replication request increments the number in the history snapshot’s name, whether or not the replication actually occurs. For example, if history snapshot SrcDst_0004 is currently replicating and four more replications are attempted before it completes, the next history snapshot will be SrcDst_0008 (SrcDst_0005 is replaced by SrcDst_0006, which is replaced by SrcDst_0007, which is replaced by SrcDst_0008 in the queue). As another example, if a replication set is created without replication snapshot history enabled, five replications occur, and replication snapshot history is then enabled with the snapshot basename SrcDst, the first replication history snapshot will be SrcDst_0006.
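
If snapshot history was not enabled when the replication set was created, it can be enabled afterward. The following is a hedged sketch only: it assumes the set replication-set command accepts the same snapshot-history parameters shown for create replication-set in Command example 7; verify the exact syntax in the CLI reference for your firmware.

# set replication-set snapshot-history both snapshot-basename SrcDst snapshot-count 4 SrcDst
Success: Command completed successfully.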

Setting the primary volume (linear replications only)


By default, the source volume is designated as a primary volume and the destination volume is designated as a secondary volume. If any
failure occurs at the local site, the secondary volume on the remote system can be changed to a primary volume (the secondary volume now
appears as a primary volume in the remote system). The result is that this volume now can be mapped and is accessible to hosts. This is
necessary for applications to fail over to the remote site. Once your local site comes up again, you must change the original primary volume
to be a secondary volume for the following reasons:
1. If both the volumes participating in the replication set are primary volumes, replication will not complete.
2. Once the volume at the local system becomes secondary, you can replicate data back from the remote system to the local system. This
will synchronize the two systems.

Since a secondary volume cannot be mapped, unmap a primary volume before changing it to a secondary volume. Once the data is replicated back to the local system from the remote system, change the local system’s volume to a primary volume and change the remote system’s volume to a secondary volume. Note that while both volumes can temporarily be designated as primary, only one volume in a set can be a secondary volume at a time.
Use the secondary or primary volume’s Provisioning > Set Replication Primary Volume page, or the set replication-primary-volume
command.

Figure 23: Setting the primary volume using the SMU

# set replication-primary-volume primary-volume dst volume dst


Info: Started setting the primary volume of the replication set. (src-dst)
Info: Setting the primary volume of the replication set. This may take a couple of minutes... (src-dst)
Info: Successfully set primary volume: (dst)
Info: The primary volume of the replication set was changed. (src-dst)
Success: Command completed successfully.

Command example 15: Setting the primary volume using the CLI

Verifying replication data links


You may want to verify the data link between the local and remote system, or, for linear replications only, the link between ports on the same system.
For linear replications, use the remote system’s Tools > Check Remote System Link page of the SMU or the verify remote-link command in the CLI to verify the link connectivity between the local and remote systems. Run this check before creating a replication set. Sample output of the remote system link check in the SMU and CLI is provided here.

Figure 24: Check remote system link output with iSCSI connectivity using the SMU

# verify remote-link remote-system KansasCity link-type ALL

Port  Type   Links
---------------------------------------------
A1    FC
A2    FC
A3    iSCSI  A1,B1
A4    iSCSI  A2,B2
B1    FC
B2    FC
B3    iSCSI  A1,B1
B4    iSCSI  A2,B2
---------------------------------------------
Success: Command completed successfully.

Command example 16: Check remote system link CLI output where the remote system is iSCSI only

In the system’s Wizards > Replication Setup Wizard, you can also enable a remote link check by selecting the check box.

Figure 25: Check remote system link in the Replication Setup Wizard of the SMU

For the MSA 2050 and the MSA 1050, use the Query Peer Connection action in the Replications topic to verify the data links—note that
when displaying the data links to a remote MSA 2040 or MSA 1040 system, the FC ports of the remote system are shown even though they
cannot be used in a peer connection.

Figure 26: Query peer connection output from the MSA 2050 or MSA 1050 using the SMU

For virtual replications, use the CLI command show peer-connections with the verify-links parameter to check the data link.
# show peer-connections verify-links PhxSea
Info: This may take a few minutes to ping all port combinations...
Peer Connections
----------------
Peer Connection Name: PhxSea
Peer Connection Type: iSCSI
Connection Status: Online
Health: OK
Health Reason:
Health Recommendation:

Local Port  Port Address  Reachable Remote Links
------------------------------------------------------------------------------------
A3          10.20.5.168   A1,B1
A4          10.30.5.168   A2,B2
B3          10.20.5.169   A1,B1
B4          10.30.5.169   A2,B2

Remote Port  Port Address  Reachable Local Links
------------------------------------------------------------------------------------
A1           10.20.5.162   A3,B3
A2           10.30.5.162   A4,B4
B1           10.20.5.163   A3,B3
B2           10.30.5.163   A4,B4

Success: Command completed successfully.

Command example 17: Verify links for a peer connection for virtual replications using the CLI

To check links between ports for local linear replications, use the system’s Tools > Check Local System Link page in the SMU or run the verify links command in the CLI. This checks links from controller A ports to controller B ports regardless of which controller (A or B) you run the command from.

Figure 27: Check Local System Link using the SMU

# verify links
Port Type Links
---------------------------------------------
A1 FC B1
A2 FC B2
A3 iSCSI B3
A4 iSCSI B4
B1 FC A1
B2 FC A2
B3 iSCSI A3
B4 iSCSI A4
---------------------------------------------
Success: Command completed successfully.

Command example 18: Check local link using the CLI



In the CLI, you can use the same command to check remote system links for replication purposes; this tests the links to be used for
replication from one system to another system.
In the system’s Wizards > Replication Setup Wizard, you can also enable a local link check by selecting the check box.

Figure 28: Check local link in the Replication Setup Wizard of the SMU

Ports connected for replication


Connected port field in a linear replication set
For a remote primary or secondary volume, the connected ports field of the Replication Addresses table of the volume’s View > Overview
page shows the IDs of up to two ports from the remote array that are connected to ports in the local array. If two ports are connected but
only one is shown, this could mean that a problem is preventing half the available bandwidth from being used.

Note
This field shows N/A for a local primary or secondary volume.

Figure 29: Connected ports being used for a linear replication using the SMU

For the CLI, use the show replication-sets command.


# show replication-sets rsFSDATA
Replication Set [Name (rsFSDATA) Serial Number (00c0ffda02f30000b857ab5601000000) ] Primary Volume:
Name     Serial Number                     Status  Status-Reason  Monitor  Location  Primary-Volume  Primary-Volume-Serial             Primary-Volume-Status  MaxQueue  MaxRetryTime  On Error  Link Type  On Collision  Monitor Interval  Priority  Connection Status  Connection Time
--------------------------------------------------------------------------------------------------------------------------------------
FSDATA   00c0ffda02f300007956ab5601000000  Online  N/A            OK       Local     FSDATA          00c0ffda02f300007956ab5601000000  Online                 32        1800          Retry     iSCSI      Oldest        300               Medium    Not Attempted      N/A

  Connected Ports  Remote Address
  --------------------------------------------------------------
  N/A              IP=10.20.5.160:3260
  N/A              IP=10.30.5.160:3260
  N/A              IP=10.20.5.161:3260
  N/A              IP=10.30.5.161:3260

rFSDATA  00c0ffdadd5c0000a849ab5601000000  Online  N/A            OK       Remote    FSDATA          00c0ffda02f300007956ab5601000000  Online                 32        1800          Retry     iSCSI      Oldest        300               Medium    Online             2016-02-25 08:52:46

  Connected Ports  Remote Address
  --------------------------------------------------------------
  A3               IP=10.20.5.170:3260
  A4               IP=10.30.5.170:3260
  A3               IP=10.20.5.171:3260
  A4               IP=10.30.5.171:3260

Success: Command completed successfully.

Command example 19: Showing connected ports for a linear replication using the CLI

Connected ports listed for a peer connection for virtual replications


Hover over the peer connection in the Replications topic to see the ports connected for the peer connection.

Figure 30: Connected ports used in a peer connection for virtual replications using the SMU

For the CLI, use the show peer-connections command.


# show peer-connections
Peer Connections
----------------
Peer Connection Name: PhxSea
Peer Connection Type: iSCSI
Connection Status: Online
Health: OK
Health Reason:
Health Recommendation:

Local Port  Port Address
----------------------------------------------------------
A3          10.20.5.168
A4          10.30.5.168
B3          10.20.5.169
B4          10.30.5.169

Remote Port  Port Address
----------------------------------------------------------
A1           10.20.5.162
A2           10.30.5.162
B1           10.20.5.163
B2           10.30.5.163

Success: Command completed successfully.

Command example 20: Showing connected ports for a peer connection for virtual replications using the CLI

CHAP settings and Remote Snap


If you configure CHAP with Remote Snap, you can use CHAP to authenticate iSCSI login requests between the local system and a remote system:
• Create a one-way CHAP record on each system. On the local system, the CHAP record must refer to the node name of the remote system.
On the remote system, the CHAP record must refer to the node name of the local system. Both records must use the same secret.
Use the create chap-record command to create a CHAP record:
# create chap-record name iqn.1991-05.com.microsoft:myhost.domain secret 0D12x

Command example 21: Creating a CHAP record using the CLI
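
For example, assuming the local array’s iSCSI node name is iqn.1986-03.com.hp:storage.local and the remote array’s is iqn.1986-03.com.hp:storage.remote (both hypothetical; substitute each array’s actual node name), the pair of one-way records would be created with the same secret, each on the opposite system:

On the local system:
# create chap-record name iqn.1986-03.com.hp:storage.remote secret 0D12x

On the remote system:
# create chap-record name iqn.1986-03.com.hp:storage.local secret 0D12x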

• After the CHAP records are created, enable CHAP on the primary system, the secondary system, or both.
To enable CHAP, use the set iscsi-parameters command:
# set iscsi-parameters chap enabled

Command example 22: Enabling CHAP using the CLI


Table 2. CHAP settings and corresponding behavior with Remote Snap
• Local system: CHAP Disabled (Secret: No; CHAP record: No); Remote system: CHAP Disabled (Secret: No; CHAP record: No). Expected behavior: Remote Snap works fine; no iSCSI authentication.
• Local system: CHAP Enabled (Secret: SECRET1; CHAP record: Yes); Remote system: CHAP Enabled (Secret: SECRET1; CHAP record: Yes). Expected behavior: Remote Snap works fine.
• Local system: CHAP Enabled (Secret: SECRET1; CHAP record: Yes); Remote system: CHAP Enabled (Secret: SECRET2; CHAP record: Yes). Expected behavior: Remote Snap will fail; use the same secret for both the local and remote systems.
• Local system: CHAP Enabled (Secret: No; CHAP record: No); Remote system: CHAP Enabled (Secret: No; CHAP record: No). Expected behavior: Remote Snap will fail; enabling CHAP without specifying a secret for an iSCSI initiator effectively blocks that initiator.
• Local system: CHAP Disabled (Secret: SECRET1; CHAP record: Yes); Remote system: CHAP Enabled (Secret: SECRET1; CHAP record: Yes). Expected behavior: Remote Snap works fine.
• Local system: CHAP Disabled (Secret: SECRET1; CHAP record: Yes); Remote system: CHAP Enabled (Secret: SECRET2; CHAP record: Yes). Expected behavior: Remote Snap will fail; use the same secret for both the local and remote systems.

Note
If you are performing a local replication involving iSCSI ports, CHAP will not be used.

Disabling or enabling CHAP will cause the host ports to reset. If the CHAP records are not configured correctly (see Table 2 CHAP settings
and corresponding behavior with Remote Snap), then replication cannot occur.

Examples of replication types and operations


Remote replication
See the following for an illustration of remote replication.

Figure 31: Illustration of remote replication

Using FC
You can set up local and remote sites connected via an FC network and perform replications over FC using linear replication sets, or using virtual replication sets when both the primary and secondary volumes reside on MSA 2050 or MSA 1050 arrays. This is useful when the local and remote sites are in different blocks of a campus or building.

FC ports can be used to transfer regular data while they are being used for replication. However, this action will result in decreased
performance for both the datapath and the replication transfer.
Using iSCSI
When the local and remote systems are in different geographical regions, you can create replication sets using iSCSI to perform replications
over a WAN. For example, when the local system is in New York and you are planning to set up your backup system (remote system) in
Houston, you can create remote replication sets using iSCSI as the transfer media for performing replications.

For linear replications, perform a physical media transfer to overcome bandwidth and latency issues with the initial replication. These issues
can sometimes be caused by a large amount of data in the primary volume getting replicated to a remote system. (See more on these issues
in the Physical media transfer section.)

For virtual replications, you may want to co-locate the systems to overcome bandwidth and latency limitations of a WAN. However, since
only allocated data is transferred, network limitations may be acceptable when the systems are dispersed geographically.

Local replication and physical media transfer (for linear replications only)
See the following for an illustration of local replication and physical media transfer, resulting in a remote replication.

Figure 32: Illustration of local replication and physical media transfer

Important notes on physical media transfer


• If you intend to move a disk drive enclosure, add the enclosure at the end of the chain of connected enclosures.
• Make sure that the remote system supports the chosen link type (iSCSI or FC); the link type can’t be changed once the replication set has
been created.
• Make it possible to perform the physical media transfer by setting up a local replication such that the secondary volume resides on one of
the following:
– On the same array, but on different vdisks (so that disks can be removed and physically transferred to the remote site).
– In an attached drive enclosure. This drive enclosure will be attached to a remote system later.
• When using Full Disk Encryption (FDE) on an MSA 2040 Storage array, it is a best practice to move media between systems that are
identically configured with FDE enabled or disabled. That is, move secured Self-Encrypting Drives (SEDs) to a secured FDE system, and
unsecured SEDs or non-SEDs to an unsecured FDE system or non-FDE system.

Detailed steps for physical media transfer


1. Ensure the first (initial) replication is complete.
2. Detach the secondary volume, which resides on the local system. If the secondary volume’s vdisk contains any other secondary volumes,
detach those volumes also. Use the detach replication-volume command via the CLI or the secondary volume’s Provisioning >
Detach Replication Volume function in the SMU.
a. You must detach the replication volume before moving the secondary volume to the remote system.
b. Once the secondary volume is detached it remains part of the replication set but is not updated with any new data.
3. Ensure the detach operation is complete.
4. Stop the secondary volume’s vdisk and associated snap pool’s vdisk (if the secondary volume and its snap pool reside on separate
vdisks) using the stop vdisk command in the CLI or using the vdisk’s Provisioning > Stop vdisk function in the SMU.
5. If moving a drive enclosure, power off the enclosure. If moving only the disks, there is no need to power off the enclosure. After the drive enclosure is powered off, there may be unwritable cache data in the drive enclosure; use the clear cache command in the CLI to clear the unwritable cache.
6. Remove the disks or enclosure containing the disks and attach or move them into the remote system.
a. You should power down an enclosure before inserting disks.
7. If the secondary volume’s snap-pools are on a different vdisk from the volume itself, start the snap pool’s vdisk using the start vdisk
command in CLI or the vdisk’s Provisioning > Start vdisk function in the SMU.
8. Start the secondary volume’s vdisks. The secondary volume appears on the system at the remote site.
9. Reattach the secondary volume to add it back to the set. This operation makes the secondary volume a part of the original set. Use the
reattach replication-volume command or the volume’s Provisioning > Reattach Replication Volume page in the SMU.
10. Continue replicating from the local site.
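
As a hedged sketch of the CLI side of steps 2 through 9, using hypothetical vdisk and volume names: the first three commands run on the system that currently holds the secondary volume before the disks or enclosure are moved, and the last two run after the disks or enclosure have been installed in the remote system. Verify the exact syntax of each command in the CLI reference for your firmware.

# detach replication-volume rFSDATA
# stop vdisk vd-r5-b
# clear cache

# start vdisk vd-r5-b
# reattach replication-volume rFSDATA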

Disaster recovery operations


If the local site fails, the applications need to switch to the remote site.
Linear replications
To bring up the remote site, do the following:
1. Convert the remote volume (secondary volume) to a primary volume using the set replication-primary-volume CLI command
or by selecting the secondary volume’s Provisioning > Set Replication Primary Volume function from the SMU.
a. During the conversion to a primary volume, the volume is rolled back, or synced, to a replication snapshot. By default, the volume
syncs to the latest replication snapshot, but you can choose any of the previous replication snapshots.
b. Any data that has not been replicated is lost.
c. A secondary volume can be converted to a primary via the SMU or CLI.
2. Map the new primary volume, which resides on the remote site, to a LUN and use as you would use the original primary volume. Switch
the applications to this new primary volume and continue using the applications.
Once the failure has been addressed at the local site, complete the following steps:
1. Make the original primary volume a secondary volume using the set replication-primary-volume CLI command or by selecting
the volume’s Provisioning > Set Replication Primary Volume function from the SMU.

Important
A secondary volume cannot be mapped, so be sure to unmap the original primary volume before attempting to make it a secondary volume.

2. Replicate any data written to the remote volume (now acting as primary volume residing at remote system) to the volume residing at the
local system (now acting as secondary volume). This can be performed in a single replication or in multiple replications. This ensures that
all data has been transferred properly.
After all the data is replicated back to the local site, convert the volume at the local site to the primary volume and then convert the
remote volume to the secondary volume.
a. To convert a primary volume to a secondary volume, set the other volume of the replication set as the primary—perform this
operation on both systems. You can perform this operation using the CLI command set replication-primary-volume or
using the SMU via the volume’s Provisioning > Set Replication Primary Volume function.
Re-establish the replication set to the remote site. Continue using the scheduler to run remote replications at regular intervals.
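
As a hedged sketch of this failover and failback sequence using the CLI, with the src/dst volume names from Command example 15 (src is the original primary, dst the original secondary): mapping and unmapping of the volumes is omitted, the snapshot name is hypothetical, and the set replication-primary-volume syntax should be confirmed against the CLI reference for your firmware.

On the remote system, fail over by making the secondary volume the primary:
# set replication-primary-volume primary-volume dst volume dst

On the recovered local system, make its volume the secondary by designating the other volume as primary:
# set replication-primary-volume primary-volume dst volume src

On the remote system, replicate the changes back to the local system:
# replicate volume dst snapshot failback-0001

Once the data is synchronized, restore the original direction, on the local system and then the remote system:
# set replication-primary-volume primary-volume src volume src
# set replication-primary-volume primary-volume src volume dst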

Figure 33a: Disaster recovery operations for linear replications—Failover to remote site

Figure 33b: Disaster recovery operations for linear replications—Failback to local site

Virtual replications
To bring up the remote site, keep in mind that the secondary volume cannot be mapped, that you cannot reverse the direction of the replication set, and that a snapshot can be the primary volume of a replication set. Once at least one replication has completed, you can allow access to the secondary volume’s data in one of three ways: delete the replication set, which removes the secondary volume from the replication set and converts it to a standard base volume; create a snapshot of the secondary volume and access the snapshot rather than the volume itself; or enable replication snapshot history and use the snapshots created automatically by the system. If the replication set is deleted, the only way to include the volume that was the secondary volume in a replication set again is to create a new replication set with that volume as the primary, or source, volume.

The preferred method that provides the most flexibility is to use two snapshots of the secondary volume—one that is mapped read-write to
hosts and is intended for modification of the data, and one that, regardless of the type of mapping (read-only or read-write), is not to be modified.
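
For example, the two snapshots of the secondary volume might be created as follows (the snapshot names are hypothetical; the first snapshot is then mapped read-write to hosts through the normal volume-mapping workflow, while the second is kept unmodified as a reference copy):

# create snapshots volumes Dst Dst-dr-rw
Success: Command completed successfully. (Dst-dr-rw) - Snapshot(s) were created.

# create snapshots volumes Dst Dst-dr-ref
Success: Command completed successfully. (Dst-dr-ref) - Snapshot(s) were created.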

Figure 34: Virtual replication failover to remote site



When the primary site is recovered, if the primary volume and replication set are intact, you can determine from the two snapshots what has changed and copy those changes back to the primary site, either to the primary volume directly or through the primary server.

Figure 35: Resync to primary site by copying changes



If the primary volume or replication set cannot be recovered, or if the amount of modified data is significant, or if the changes cannot easily
be determined, create a new replication set with the read-write snapshot of the secondary volume as the new primary volume and replicate
back to the primary site. The reason for using the snapshot rather than deleting the replication set and using the secondary volume as the
primary volume for the new replication set is that you retain flexibility in case the primary volume becomes available and you want to copy
the changes back to the primary volume and leave the original replication set in place. Once you’ve performed the initial replication on the
“resynchronizing” replication set, create a snapshot of the secondary volume, copy that snapshot to a new, independent volume and create
a new replication set using the new volume as the primary volume for the new replication set. The reason for using a snapshot rather than
deleting the replication set and using the secondary volume as the new primary volume is to provide flexibility in case additional changes
need to be replicated from the backup site to the primary site. Copying the snapshot to a new, independent volume rather than using the
snapshot itself allows you to clean up the resynchronizing replication set without consuming space for both the volume and the snapshot.

Figure 36: Resync to primary site by creating replication sets



If the primary volume or replication set cannot be recovered, or if the amount of modified data is significant, or if the changes cannot easily
be determined, and you have limited space on the local and remote arrays, create a new replication set with the read-write snapshot of the
secondary volume as the new primary volume and replicate back to the primary site. Once you’ve performed the initial replication on the
“resynchronizing” replication set, delete the replication set and delete the original secondary volume and modified snapshot on the remote
array, and use what was the secondary volume on the local array as the primary volume of a new replication set.

Figure 37: Resync to primary site by creating replication sets when space is limited

Use cases
This white paper provides examples that demonstrate Remote Snap’s ability to replicate data in various situations.

Single office with a remote site for backup and disaster recovery using iSCSI to replicate data

Figure 38: Single office with a remote site for backup and disaster recovery (iSCSI)

# create vdisk disks 1.1-3 level raid5 vd-r5-a


Success: Command completed successfully.

# create master-volume reserve 20GB size 50GB vdisk vd-r5-a FSDATA


Info: The volume was created. (spFSDATA)
Info: The volume was created. (FSDATA)
Success: Command completed successfully.

# create remote-system user manage password !manage 10.10.5.170


Success: Command completed successfully. (10.10.5.170) - The remote system was created.

# create replication-set link-type iSCSI remote-system 10.10.5.170 remote-vdisk vd-r5-a FSDATA


Info: The secondary volume was created. (rFSDATA)
Info: The primary volume was prepared for replication. (FSDATA)
Info: Started adding the secondary volume to the replication set. (rFSDATA)
Info: Verifying that the secondary volume was added to the replication set. This may take a couple of
minutes... (rFSDATA)
Info: The secondary volume was added to the replication set. (rFSDATA)
Info: The primary volume is ready for replication. (FSDATA)
Success: Command completed successfully.

# replicate volume FSDATA snapshot init-FSDATA


Info: The replication has started. (init-FSDATA)
Success: Command completed successfully.

Command example 23: Single office with a remote site for backup and disaster recovery (iSCSI) CLI output—linear replication

# add disk-group disks 1.1-3 level raid5 pool a type virtual dg-r5-a
Success: Command completed successfully.

# create volume size 50GB pool a FSDATA


Success: Command completed successfully. (FSDATA) - The volume was created.

# create peer-connection remote-username manage remote-password !manage remote-port-address 10.20.5.170 Local-Remote

Info: This may take a few minutes to ping all port combinations...
Success: Command completed successfully.

# create replication-set peer-connection Local-Remote primary-volume FSDATA queue-policy discard snapshot-history both snapshot-count 4 snapshot-basename FSDATA-Rep-Snaps FSDATA-Replication
Success: Command completed successfully.

# replicate FSDATA-Replication
Success: Command completed successfully.

Command example 24: Single office with a remote site for backup and disaster recovery (iSCSI) CLI output—virtual replication (where both primary and secondary volumes reside on an MSA 2050 or MSA 1050, with replication snapshot history enabled)

To configure a single office with a remote site for backup and disaster recovery (iSCSI):
1. Set up a P2000 G3 FC/iSCSI combo or iSCSI controller array, an MSA 1050 or MSA 1040 iSCSI array, or an MSA 2050 or MSA 2040
SAN array with iSCSI-configured ports with enough disks (according to the application load and users), then configure the management
ports and iSCSI ports with IP addresses. Install the Remote Snap license if one has been purchased, or install the temporary license from
the system’s Tools > Install License page of the SMU (for the P2000 only). See the Setup requirements section for additional license
and other information.
2. Create the vdisks or disk groups and pools, then the master or base volumes FS Data and App A Data; if using linear replication, enable
snapshots when creating the volumes. For linear replication, if an existing snap pool is not specified, a snap pool is automatically created
with the default policy and size, or you can adjust the settings as necessary. Make sure the volumes are in different vdisks or pools and
that each vdisk or pool has enough space to expand the snap pool or snapshot space in the future.
3. Connect your array to the WAN. If using iSCSI over the WAN as part of your disaster recovery solution, connect your file server and
application server to the WAN. Connecting the management port of an array to the WAN helps you to manage the array remotely and is
necessary when using the SMU to create linear replication sets.
4. Map the volumes to the file server and the application server.
5. Identify a remote location and set up a second P2000 G3 FC/iSCSI combo or iSCSI controller array, an MSA 1050 or MSA 1040 iSCSI
array, or an MSA 2050 or MSA 2040 SAN array with iSCSI-configured ports and configure both management ports and the iSCSI ports.
This is the remote system. Configure the vdisks or pools to accommodate secondary volumes at a later stage.
6. Set up connection with the remote system:
a. For linear replications, in both the local system and the remote system, add the other system using the system’s Configuration >
Remote Systems > Add Remote System page of the SMU or the create remote-system command in the CLI.
b. For virtual replications, create a peer connection between the systems using the Create Peer Connection action in the Replications
topic of the SMU or the create peer-connection command in the CLI.
7. Verify the datapath between your local system and remote system. For linear replication, use the remote system’s Tools > Check Remote System Link page of the SMU, or the verify remote-link CLI command. For virtual replication, use the query peer-connection command in the CLI. For the MSA 2050 or the MSA 1050, use the Query Peer Connection action of the Replications topic of the SMU.
Always configure sufficient iSCSI ports to facilitate a working redundant connection to the WAN.
8. Set up the linear replication sets for the volumes FS Data and App A Data using the system’s Wizards > Replication Setup Wizard,
using the volume’s Provisioning > Replicate Volume, or using the create replication-set CLI command, and choose iSCSI as
the link type. Set up the virtual replication sets for the volumes FS Data and App A Data using the Create Replication Set action in the
Replication topic in the SMU, or the create replication-set command in the CLI.
9. After the setup is complete, schedule the replication in desired intervals, based on the application load, critical data, replication window
(the time it takes to perform a replication) and so on. This enables you to have a complete backup and disaster recovery setup.
10. Verify the progress of replications by checking the replication images for linear replications, or by checking the replication sets for virtual replications. This lists the progress or a completed message (see the CLI sketch following this list).
11. Verify the data at the remote location by exporting the linear replication image to a snapshot, or by creating a snapshot of the secondary
volume of the virtual replication set, or, when both the primary and secondary volumes reside on MSA 2050 or MSA 1050 arrays, by
enabling replication snapshot history, and mounting the snapshot to a host.
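
For step 10, the replication status can also be checked from the CLI. As a minimal sketch (arguments and output are omitted here because the exact fields vary by model and firmware): show replication-sets lists replication sets and their state, and, on systems that support linear replication, show replication-images lists the replication images of a linear replication set.

# show replication-sets
# show replication-images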

In case of a failure at the local site, it is possible to switch the application to the remote site data by employing the procedures defined in the
Disaster recovery operations section. Alternatives include the following:
Linear replications
• Move the remote array to the local site, convert the secondary volumes to primary or delete the replication sets, and map the volumes to
the servers.
• Move disks or enclosures that contain the secondary volumes to the local site, install or attach to the local array, convert the secondary
volumes to primary, and map them to the servers.
• Replace the local array with a new array, convert the remote secondary volumes to primary, and then replicate the data to the new array.
Once done, convert the volumes of the new array to primary, map them to the servers, and convert the volumes on the remote array back
to secondary.
• Convert the secondary volumes at the remote array to primary or delete the replication sets and map the volumes to the servers.

Virtual replications

• Move the remote array to the local site, use replication history snapshots, or create snapshots of the secondary volumes, or delete the
replication sets and map the volumes or snapshots to the servers.
• Replace the local array with a new array, delete the replication sets, create new replication sets using the original secondary volumes as
the primary volumes, and then replicate the data to the new array. Once done, delete the new replication sets, map the new volumes to
the servers, create replication sets using the volumes at the primary site as the primary volumes, and replicate to the remote secondary array.
• Use replication history snapshots, or create snapshots of the secondary volumes, or delete the replication sets on the remote array, and map the volumes to the servers.

For more information on disaster recovery, see Disaster recovery operations.
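As an illustration of the snapshot-based virtual alternatives above, the following is a hedged CLI sketch for exposing the replicated data at the remote site. The names FSData_sec (secondary volume) and FSData_dr (its snapshot) and the lun value are placeholders, and the map volume parameters shown are assumptions; see the CLI Reference Guide for the exact mapping syntax on your firmware.
# create snapshots volume FSData_sec FSData_dr
# map volume FSData_dr lun 20 access read-write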



Single office with local site disaster recovery and backup using iSCSI and host access using FC

Figure 39: Single office with local site disaster recovery and backup using iSCSI and host access using FC

To configure a single office with local site disaster recovery and backup using iSCSI and host access using FC:
1. Set up two P2000 G3 combo arrays or MSA 2050 or MSA 2040 SAN arrays with host ports set to a combination of FC and iSCSI at the
local site.
2. Connect the file servers and application servers to the arrays via an FC SAN.
3. Mount the volumes to the applications.
4. Create multiple replication sets with FS Data and App A Data as primary volumes and the secondary volumes on the second P2000 G3,
MSA 2050, or MSA 2040 system. Create a replication set with App B Data as the primary volume and the secondary volume on the first
system. These replication sets are created using the iSCSI link type. We recommend that the iSCSI host ports of both of the P2000 G3, MSA 2050, or MSA 2040 systems be connected by a dedicated Ethernet link (LAN). See also the Network requirements section.
Switch the applications to the other system if any failures occur on either of the two systems.

Single office with a local site disaster recovery and backup using FC (only for linear replications or virtual
replications when both the primary and secondary volumes reside on MSA 2050 or MSA 1050 arrays)

Figure 40: Single office with a local site disaster recovery and backup using FC

To configure a single office with a local site disaster recovery and backup using FC:
1. For linear replications, set up two MSA 2040 Storage, MSA 1040 Storage, or P2000 G3 FC arrays at the local site. You can use any
combination of the three models—linear replication can occur between the three models as long as the P2000 G3 array has FW version
TS250 or later. To use virtual replication, both the primary and secondary volume must reside on MSA 2050 or MSA 1050 arrays.
2. Connect the file and application servers to these arrays via an FC SAN.
3. Mount the volumes to the applications.
4. Create multiple replication sets with FS Data and App A Data as primary volumes and the secondary volumes on the second Storage
system. Create a replication set with App B Data as the primary volume and the secondary volume on the first system. These sets are
created using the FC link type. For virtual replications, use FC port(s) for the peer connection.

Switch the applications to the other system if any failures occur on either of the two systems.

Initial local replication—linear

Figure 41: Initial local replication—linear

Due to network bandwidth limitations, it may be beneficial to perform an initial replication as a local replication and then reconfigure it as a remote replication.

Review the Local replication and physical media transfer (for linear replications only) section. A brief overview of the steps:
1. Create a local replication on the local system.
2. Perform the initial replication.
3. Once the initial replication is complete, detach the secondary volume, which resides on the local system.
4. Once the detach operation is complete, stop the secondary volume’s vdisk and associated snap pool’s vdisk (if the secondary volume and
its snap pool reside on separate vdisks).
5. If moving a drive enclosure, power off the enclosure. If moving only the disks there is no need to power off the enclosure.
6. Remove the disks or enclosure containing the disks and attach or move them into the remote system.
7. If the secondary volume's snap pool is on a different vdisk from the volume itself, start the snap pool's vdisk.
8. Start the secondary volume’s vdisks. The secondary volume appears on the system at the remote site.
9. Reattach the secondary volume to add it back to the set. This operation makes the secondary volume a part of the original set again.
10. Continue replicating from the local site.
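A hedged CLI sketch of the detach, stop, start, and reattach operations follows. The secondary volume name rFSData and the vdisk names vdSEC and vdSNAP are placeholders; if the snap pool shares the secondary volume's vdisk, the separate snap-pool vdisk commands are not needed.
On the local system:
# detach replication-volume rFSData
# stop vdisk vdSEC
# stop vdisk vdSNAP
Move the disks or enclosure to the remote system, then on the remote system:
# start vdisk vdSNAP
# start vdisk vdSEC
# reattach replication-volume rFSData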

Initial local replication—virtual

Figure 42: Initial local replication—virtual

If using two MSA 2050 systems, both with half their host ports configured for FC and half for iSCSI, with the ultimate intention to use iSCSI
for remote replication, it may be beneficial to perform an initial replication with the remote array at the local site, then move the remote array
to the remote site and reconfigure as a remote replication. Because FC replication may be faster than iSCSI replication, especially if using
1GbE iSCSI SFPs, it may be more efficient to perform the initial replication using FC, then, after moving the remote array to the remote site,
change the protocol for the peer connection by setting the peer connection to use the iSCSI address.
1. With the remote system at the local site, and with both systems able to reach each other via FC, create a peer connection between the
local system and the remote system specifying an FC WWN.
2. Create the replication set(s) from the local, primary system using the peer connection.
3. Complete the initial replication(s).
4. Suspend all replications using the peer connection.
5. Move the remote system to the remote site.
6. Change the peer connection to use iSCSI using the set peer-connection CLI command on either system, specifying one of the
iSCSI host addresses of the other system.
7. Resume the replication set(s).
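A hedged CLI sketch of the suspend, address change, and resume sequence follows. The replication set name rsFSData, peer connection name Peer1, and the iSCSI address are placeholders, and the exact parameter names may differ by firmware release; consult the CLI Reference Guide.
# suspend replication-set rsFSData
Move the remote system to the remote site, then from either system:
# set peer-connection remote-port-address 192.0.2.50 Peer1
# resume replication-set rsFSData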

Two branch offices with disaster recovery and backup

Figure 43: Two branch offices with disaster recovery and backup

1. Set up two P2000 G3 FC/iSCSI combo controller arrays or MSA 2050 SAN or MSA 2040 SAN arrays with host ports set to a
combination of FC and iSCSI with enough disks (according to the application load, users and secondary volumes) then configure the
management ports and iSCSI ports with IP addresses. Install the Remote Snap licenses if they have been purchased, or install the
temporary licenses from the system’s Tools > Install License page of the SMU (for the P2000 only).
2. On the array at site A, create the master or base volumes FS A Data and App A Data; if using linear replication, enable snapshots when
creating the volumes. For linear replications, if an existing snap pool is not specified, a snap pool is automatically created with the default
policy and size, or you can adjust the settings as necessary. Make sure the volumes are in different vdisks or pools and that each vdisk or
pool has enough space to expand the snap pool or snapshot space in the future.
3. On the array at site B, create the master or base volumes FS B Data and App B Data similar to the instructions above.
4. Connect both arrays to the WAN. If using iSCSI over the WAN as part of your disaster recovery solution, connect your file servers and
application servers to the WAN. Connecting the management ports of the arrays to the WAN helps you to manage either array remotely
and is necessary when using the SMU to create linear replication sets.
5. Map the volumes to the file servers and application servers.
6. At site A, create remote replication sets using the primary volumes FS A and App A. Corresponding secondary volumes are created
automatically on the array at site B.
7. Schedule replications at regular intervals. This ensures that data at the local site is backed up to the array at site B.
8. At site B, create remote replication sets using the primary volumes FS B and App B. Corresponding secondary volumes are created
automatically on the array at site A.
9. Schedule replications at regular intervals so that all data at site B is backed up to site A.

In case of failure at either site, you can fail over the application and file servers to the available site.

Single office with a target model using FC and iSCSI ports

Figure 44: Single office with target model using FC and iSCSI ports

To configure a single office with a target model using FC and iSCSI ports:
1. Set up a P2000 G3 FC/iSCSI combo controller array or an MSA 2050 or MSA 2040 SAN array with host ports set to a combination of
FC and iSCSI with enough disks, according to the application load and number of users, and configure the management ports and iSCSI
ports with IP addresses.
2. Create master or base volumes App A Data, App B Data, and FS Data in the array.
3. Map FS Data to the iSCSI port so that the file server can use this volume via the iSCSI interface.
4. Map App A Data and App B Data volumes to the FC port so that the application servers can access these volumes via the FC SAN.
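As a rough CLI sketch of steps 3 and 4, assuming the volumes were created with the CLI-friendly names FSData, AppAData, and AppBData (the LUN numbers and port selections are illustrative, and the map volume parameter order can vary by firmware generation; see the CLI Reference Guide):
# map volume FSData lun 1 ports a3,b3
# map volume AppAData lun 2 ports a1,b1
# map volume AppBData lun 3 ports a1,b1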

Using the P2000 G3 FC/iSCSI combo controllers or MSA 2050 or MSA 2040 SAN arrays with host ports set to a combination of FC and
iSCSI ports provides several advantages:
• You can leverage both the FC and iSCSI ports for target-mode operations.
• You can connect file servers and other application servers that are not part of the FC SAN to the array using the iSCSI ports via the LAN
or WAN.
• You can connect new servers with FC connectivity directly through the FC SAN.

Note
Accessing a volume from a host through both iSCSI and FC is not supported.

Multiple local offices with a centralized backup (only for linear replications or virtual replications when the
Central Office array is an MSA 2050)

Figure 45: Multiple local offices with a centralized backup

1. Set up P2000 G3 FC/iSCSI combo controller arrays or MSA 2050 or MSA 2040 SAN arrays with host ports set to a combination of FC
and iSCSI with sufficient storage and configure the management and iSCSI ports with valid IP addresses. Install the Remote Snap license
at remote sites 1, 2, and 3.
2. For virtual replications, create peer connections between the remote sites and the central office.
3. Create Primary Volume #1, Primary Volume #2, and Primary Volume #3 on the corresponding remote site.
4. Set up a P2000 G3 FC/iSCSI controller array or an MSA 2050 or MSA 2040 SAN array with host ports set to a combination of FC and
iSCSI at the centralized location and make sure that it has enough disks to accommodate data coming from remote sites 1, 2, and 3 and
install the Remote Snap license.
5. Connect sites 1, 2, and 3 with the central site using the WAN and make sure iSCSI ports are configured and connected to this WAN.
6. Make sure the iSCSI ports of the arrays at site 1, 2, and 3 can access the iSCSI ports of the array at the central site.
7. Create replication sets for volume Primary Volume #1, specifying the central system and vdisks on it (for linear replications) or the peer
connection (for virtual replications) to allow automatic creation of secondary volumes at the central site.
Repeat step 7 for sites 2 and 3.

Schedule the replication in regular intervals so that data from sites 1, 2, and 3 replicates to the central site.

Replication of application-consistent snapshots (only for linear replications or virtual replications when the
primary volume resides on an MSA 2050 or MSA 1050)
You can replicate application-consistent snapshots on a local array to a remote array. Use the SMU for manual operation and the CLI for
scripted operation. Both options require you to establish a mechanism that enables all application I/O to be suspended (quiesced) before the
snapshot is taken and resumed afterwards. Many applications enable this via a scripting method. For an illustration of the following steps,
see Figure 46.
To create application-consistent snapshots for any supported OS and any application:
1. Create the application volume. When defining the volume names, use a string name variant that will help identify the volumes as a larger
managed group:
For linear replications:
Using the SMU
Use the system’s Wizards > Provisioning Wizard to create the necessary vdisks and volumes.
Using the CLI
Use the create vdisk command.
Use the create master-volume command.
For virtual replications:
Using the SMU
Use the Add Disk Group action of the Pools topic to create or expand the necessary pools.
Use the Create Virtual Volumes action of the Volumes topic to create the necessary volumes.
Using the CLI
Use the add disk-group command.
Use the create volume command.
2. Create a replication set for each volume used by the application. Use a string name variant when defining the replication set name. This
helps identify each replication set as part of a larger managed group.
For linear replications:
Using the SMU
Use the systems’ Wizards > Replication Setup Wizard for each volume defined in step 1.
Using the CLI
Use the create replication-set command.
For virtual replications
Using the SMU
Use the Create Peer Connection action from the Replications topic.
Use the Create Replication Set action from the Volumes or Replications topic for each volume defined in step 1. Create
replication sets for volumes, not volume groups, since the last-snapshot option (used later) is not supported for replication sets
of volume groups.
Using the CLI
Use the create peer-connection command.
Use the create replication-set command.

3. When the application and its volumes are in a quiesced state, you can create I/O-consistent snapshots across all volumes at the same time.
For linear replications:
Using the SMU
Use the Provisioning > Create Multiple Snapshots operation of the system or vdisk.
Using the CLI
Use the create snapshots command.
For virtual replications:
Using the SMU
Select multiple volumes in the Volumes topic and then select the Create Snapshot action.
Using the CLI
Use the create snapshots command.
The SMU also enables scheduling snapshots one volume at a time. For application-consistent snapshots across multiple volumes, we recommend server-based scheduling, as explained in step 4.
4. For an automated solution, schedule scripts on the application server that coordinate the quiescing of I/O, invoking of the CLI snapshot
commands, and resuming I/O. Verify that you defined the desired snapshot retention count. See Command example 25 for an example of
CLI snapshot commands.
The time interval between these snapshot groups will be utilized in the following steps.
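The following is a minimal shell sketch of the step 4 workflow. It assumes SSH access to the array CLI as the manage user, and the app_quiesce and app_resume commands are placeholders you must replace with your application's own freeze and thaw mechanisms; the volume names match Command example 25.
#!/bin/sh
# Hypothetical coordination script: quiesce the application, take array
# snapshots of all application volumes in one operation, then resume I/O.
ARRAY=192.0.2.10                 # array management IP (placeholder)
STAMP=$(date +%Y%m%d%H%M)        # timestamp used in the snapshot names

app_quiesce                      # substitute your application's freeze command
ssh manage@"$ARRAY" "create snapshots volume FSDATA,APPDATA,LOG fs-$STAMP,app-$STAMP,log-$STAMP"
app_resume                       # substitute your application's thaw command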

Note
To achieve application-consistent snapshots, you must ensure application I/O to all volumes at the server level is suspended prior to taking
snapshots, and then resumed after the snapshots are taken. The array firmware will only create point-in-time consistent snapshots of
indicated volumes.

Figure 46: Setup steps for replication of application-consistent snapshots



To replicate application-consistent snapshots:


1. Ensure all volumes have established recurring snapshots as detailed above.
2. Schedule each replication set. See the Scheduling replications section for an example of CLI replication commands.
For linear replications:
Using the SMU
a. After the Replication Setup Wizard creates the replication set, the SMU will display the volume’s Provisioning > Replicate Volume
page. For the Initiate Replication option, select the Scheduled option.
b. Select Replicate Most Recent Snapshot for the Replication Mode so that the latest primary volume snapshot will be used.
c. Choose the remaining options on the page.
Using the CLI
a. Use the create task type ReplicateVolume command and specify last-snapshot for the replication-mode parameter.
b. Use the create schedule command.
For virtual replications:
Using the SMU
a. After the replication set is created, select the replication set from the Replications topic, choose the Replicate action, select the
Scheduled option, and provide a Task Name of your choosing.
b. Select the Last Snapshot option so that the latest primary volume snapshot will be used.
c. Choose the remaining options on the dialog.
Using the CLI
a. Use the create task type Replicate command and specify the last-snapshot option.
b. Use the create schedule command. (A CLI sketch of these two commands follows this list.)
3. Ensure the recurring schedule for each replication set coordinates with the scheduled snapshots.
a. Calculate an appropriate time that falls between the application volumes' snapshot times created in step 4.
b. See the Best practices section for additional information.
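A hedged CLI sketch of the virtual scheduling commands referenced in step 2 is shown below. The task, schedule, and replication set names are illustrative, and the exact parameter order for create task and create schedule may differ by firmware release; consult the CLI Reference Guide.
# create task type Replicate replication-set rsAPPDATA last-snapshot taskReplApp
# create schedule schedule-specification "start 2016-02-02 01:00 every 6 hours" task-name taskReplApp schedReplApp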

Create your own naming scheme (see the online help for create volume, create snapshots, and create replication-set
for name limitations) to manage your application data volumes, snapshot volumes, and replication sets. In your naming scheme, include
the ability to establish a recognizable grouping of multiple replication sets. This will help with managing the instances of your
application-consistent snapshots and the application-consistent replication sets when restore or export operations are used.
# create snapshots volume FSDATA,APPDATA,LOG fs1-snap,app1-snap,log1-snap
Success: Command completed successfully. (fs1-snap,app1-snap,log1-snap) - Snapshot(s) were created. (2016-02-01
17:24:33)
# show snapshots
vdisk    Serial Number                     Name       Creation Date/Time   Status     Status-Reason  Source Volume  Snappool Name
         Snap Data  Unique Data  Shared Data  Priority  User Priority  Type
----------------------------------------------------------------------------------------------------------------------------------
vd-r5-a  00c0ffda02f30000cc94af5602000000  app1-snap  2016-02-01 17:24:29  Available  N/A            APPDATA        spAPPDATA
         0B         0B           0B           0x6000    0x0000         Standard snapshot
vd-r5-a  00c0ffda02f30000cc94af5601000000  fs1-snap   2016-02-01 17:24:29  Available  N/A            FSDATA         spFSDATA
         0B         0B           0B           0x6000    0x0000         Standard snapshot
vd-r5-a  00c0ffda02f30000cc94af5603000000  log1-snap  2016-02-01 17:24:29  Available  N/A            LOG            spLOG
         0B         0B           0B           0x6000    0x0000         Standard snapshot
----------------------------------------------------------------------------------------------------------------------------------
Success: Command completed successfully. (2016-02-01 17:24:39)

(For linear replications)


# replicate snapshot name repapp1 app1-snap
Info: The replication has started. (repapp1)
Success: Command completed successfully. (2016-02-01 17:26:06)

(For virtual replications)


# replicate snapshot app1-snap repapp1
WARNING: The replication source is changing to a user snapshot. This may significantly change the contents of
secondary volumes and create an abrupt discontinuity within a collection of historical snapshots. Do you want
to continue? (y/n) y
Success: Command completed successfully. (2016-02-01 17:26:06)

Command example 25: Examples of using the CLI for replication of application-consistent snapshots

Replication of Microsoft® VSS-based application-consistent snapshots (only for linear replications or virtual
replications when the primary volume resides on an MSA 2050 or MSA 1050)
You can replicate the Microsoft VSS-based application-consistent snapshots on a local array to a remote array.
To create application-consistent snapshots using VSS:
1. Create the volumes for your application. When defining the volume names, use a string name variant that helps identify the volumes as a larger managed group.
For linear replications:
With the SMU
Use the system’s Wizards > Provisioning Wizard to create the necessary vdisks and volumes.
With the CLI
Use the create vdisk command.
Use the create master-volume command.
For virtual replications:
With the SMU
Use the Add Disk Group action of the Pools topic to create or expand the necessary pools.
Use the Create Virtual Volumes action of the Volumes topic to create the necessary volumes.
With the CLI
Use the add disk-group command.
Use the create volume command.
With a VDS client tool, refer to the vendor documentation to create the necessary volumes.
2. Create a replication set for each volume in the application. When defining the replication set name, use a string name variant that will
help identify each replication set as part of a larger managed group.
For linear replications:
With the SMU
Use the Replication Setup Wizard for each volume defined in step 1.
With the CLI
Use the create replication-set command.
For virtual replications:
With the SMU
Use the Create Peer Connection action from the Replications topic.
Use the Create Replication Set action from the Volumes or Replications topic for each volume defined in step 1. Create
replication sets for volumes, not volume groups, since the last-snapshot option (used later) is not supported for replication sets
of volume groups.
With the CLI
Use the create peer-connection command.
Use the create replication-set command.

3. Determine an appropriate Microsoft VSS backup application, or VSS requestor, that is certified to manage your VSS-compliant
application.
The P2000 G3, MSA 1040, MSA 2040, MSA 1050, and MSA 2050 VSS hardware providers are compatible with Microsoft Windows®
certified backup applications.
For a general scripted solution, see the Microsoft VSS documents for usage of the Windows Server® Diskshadow (Windows 2008 and
later) or VShadow (applicable for Windows 2003 and beyond) tools.
4. Configure your VSS backup application to perform VSS snapshots for all of your application’s volumes. The VSS backup application uses
the Microsoft VSS framework for managed coordination of quiescence of VSS-compatible applications and the creation of volume
snapshots through the VSS hardware provider.
Establish a recurring snapshot schedule with your VSS backup application.
The time interval between these snapshot groups will be used in the following steps.

Note
The VSS framework, the VSS Backup application (requestor), the VSS-compliant Application writer, and the VSS hardware provider achieve
application-consistent snapshots. The MSA 2050 Storage, MSA 1050 Storage, MSA 2040 Storage, MSA 1040 Storage, or P2000 G3
firmware only creates point-in-time snapshots of indicated volumes.

To replicate VSS-generated, application-consistent snapshots:


1. Ensure all volumes have established recurring snapshots as detailed above.
2. Schedule each replication set.
For linear replications:
With the SMU do the following:
a. After the Replication Setup Wizard creates the replication set, the SMU will display the volume’s Provisioning > Replicate Volume
page. For the Initiate Replication option, select the Scheduled option.
b. Select Replicate Most Recent Snapshot for the Replication Mode so that the latest primary volume snapshot will be used.
c. Choose the remaining options on the page.
With the CLI
a. Use the create task type ReplicateVolume command and specify last-snapshot for the replication-mode parameter.
b. Use the create schedule command.
For virtual replications:
Using the SMU
a. After the replication set is created, select the replication set from the Replications topic, choose the Replicate action, select the
Scheduled option, and provide a Task Name of your choosing.
b. Select the Last Snapshot option so that the latest primary volume snapshot will be used.
c. Choose the remaining options on the dialog.
Using the CLI
a. Use the create task type Replicate command and specify the last-snapshot option.
b. Use the create schedule command.
3. Ensure the recurring schedule for each replication set coordinates with the scheduled snapshots.
a. Calculate an appropriate time that falls between the application volumes' snapshot times created in step 4.
b. See the Best practices section for additional information.

Figure 47: Setup steps for replication of the VSS-based application-consistent snapshots

Best practices
Fault tolerance
To achieve fault tolerance for Remote Snap setup, we recommend the following:
• For FC and iSCSI replications, the ports must be connected to at least one switch, but for excellent protection it is recommended that half
of the ports be connected to one switch (for example, the first port or first pair of ports on each controller) and the other half of the ports
be connected to a second switch, with both switches connected to a single SAN. This avoids having a single point of failure at the switch.
– Direct Attach configurations are not supported for replication over FC or iSCSI.
– The iSCSI ports must all be routable on a single network space.
In case of link failure, the replication operation will re-initiate within a specified amount of time. For linear replications, the amount of time is
defined by the max-retry-time parameter of the set replication-volume-parameters command; the default value is 1800 seconds.
Set this time to a preferred value according to your setup. Once the retry time has passed, replication goes into a suspended state and then
needs user intervention to resume. For virtual replications, the system will attempt to resume the replication every 10 minutes for the first
hour, then every hour until the replication resumes. You can attempt to resume the virtual replication manually, or abort it, as desired.
• For linear replications, during a single replication, we recommend setting the maximum replication retry time on the secondary volume to either 0 (retry forever), or 60 minutes for every 10 GB increment in volume size, to prevent a replication set from suspending when multiple errors occur. This can be done in the CLI by issuing the following command (a worked example follows this list):
set replication-volume-parameters max-retry-time <# in seconds>
• Replication services are supported on both single-controller and dual-controller environments for P2000 G3 arrays and MSA 2040
Storage. For the P2000 G3 array replication is supported only between similar environments. That is, a single-controller system can
replicate to a single-controller system or a dual-controller system can replicate to a dual-controller system. Replication between a
single-controller system and a dual-controller system is not supported. For MSA 2040 Storage this restriction does not apply; replication
is supported between a single-controller system and a dual-controller system. Only dual controller mode is supported for MSA 1040
Storage, MSA 1050 Storage, and MSA 2050 Storage. We recommend using a dual-controller array to try to avoid a failure of one
controller. If one controller fails, replication continues through the second controller.
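For example, for a 100 GB secondary volume, the 60-minutes-per-10-GB guideline works out to 600 minutes, or 36,000 seconds. The replication volume name rFSData is a placeholder, and the parameter form shown is an assumption; check the CLI Reference Guide for your firmware.
# set replication-volume-parameters max-retry-time 36000 rFSData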

Volume size and policy


For linear replications
• While setting up the master volumes, ensure the size of the vdisk and the volume is sufficient for current and future use. Once part of
a replication set, the sizes of the primary/secondary volume cannot be changed.
• Every master volume must have a snap pool associated with it. If no space exists on the primary volume’s virtual disk to create the snap
pool, or insufficient space is available to create or maintain a sufficiently large snap pool for the snapshots to be retained, the snap pool
should be created on a separate vdisk that does have sufficient space.
To help you accurately set a snap pool’s size, consider the following:
– What is the master volume size, and how much will the master volume data change between snapshots?
The amount of space needed by a snapshot in a snap pool depends on how much data changes in the master volume and the interval at which snapshots are taken. The longer the interval, the more data will be written to the snap pool.
– How many snapshots will be retained?
The more snapshots retained, the more space is occupied in the snap pool.
– How many snapshots will be modified?
Regular snapshots mounted with read/write access will add more data to the snap pool.
– How much modified (write) data will the snapshots have?
The more data modified for mounted snapshots, the more space is occupied in the snap pool.
• Although the array allows for more, it is recommended that no more than four volumes (master volume or snap pools) be created on a
single vdisk when used for snapshots or replication.

• By default, a snap pool is created with a size equal to 20% of the volume size. An option is available to expand the snap pool size to the
desired value. By default, the snap pool policy is set to automatically expand when it reaches a threshold value of 90%. Note that the
expansion of a snap pool may take up the entire volume or vdisk, limiting the ability to put additional data on that vdisk. We recommend
that you set the auto expansion size to a value so that snap pools are not expanded too often. It is also important that you monitor the
threshold errors and ensure that you have free space to grow the snap pool as more data is retained.
• Snapshots can be manually deleted when they are no longer needed or automatically deleted through a snap pool policy. When a
snapshot is deleted, all data uniquely associated with that snapshot is deleted and associated space in the snap pool is freed for use.
To stay within the volumes-per-vdisk limit, delete unnecessary snapshots.
• Scheduled replications have a retention count—setting this appropriately can help maintain the snap pool size and expansion. Once the
retention count has been met, new snapshots displace the oldest snapshot for the replication set.

For virtual replications


Review the Snapshot space section. When setting up the pools for the primary and secondary volumes, consider the impact of the
overcommit flag for the pool.
• If overcommit is disabled, the volume and associated two (for MSA 2040 and MSA 1040) or three (for MSA 2050 and MSA 1050)
replication snapshots are fully provisioned, meaning the pool must be slightly over three (for MSA 2040 and MSA 1040) or four
(for MSA 2050 and MSA 1050) times the size of the volume.
• If overcommit is enabled, then the size of the pool should be sufficient for the amount of data in the volume, and for two sets of data
changes. The snapshot space can be managed using the set snapshot-space CLI command. The default snapshot space limit is 10%
of the pool, and the default limit policy is notify only.

The size of the primary volume can be increased after creating the replication set, and the size of the pools that contain the volumes can
change as well, increasing by adding disk groups, or decreasing by removing disk groups.
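For example, a hedged CLI sketch for raising the snapshot space limit on pool A while keeping a notify-only policy; the exact parameter names and accepted values shown are assumptions that may differ by firmware release, so check the CLI Reference Guide.
# set snapshot-space pool A limit 20% limit-policy notify-only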

License
• Use a temporary license to enable Remote Snap and get a hands-on experience. For live disaster recovery setups, we recommend
upgrading to a permanent license. A temporary license expires after 60 or 180 days, disabling further replications. If you choose not to
install a permanent license after the temporary license expires, you can access the data of the secondary volume by deleting the
replication set.
• With a temporary license, test local replications and gain experience before setting up remote replications with live systems.
• To set up remote replication, you must have a Remote Snap license for both the remote and local systems.
• Updating to a permanent license at a later stage preserves the replication images.
• By default, there is a 64-snapshot limit that can be upgraded to a maximum number of 512 snapshots.
• Exporting a replication image to a standard snapshot is subject to the licensed limit; replication snapshots are not counted against the
licensed limit. Install a license that allows for the appropriate number of snapshots.
• Enabling the temporary license directly from the SMU is available only on the P2000 G3 arrays.

Scheduling
Linear replications
• In order to ensure that replication schedules are successful, we recommend scheduling no more than three volumes to start replicating
simultaneously, although as many as 16 (as many as 8 for the MSA 1040 Storage array) can replicate at the same time. These and other
replications should not be scheduled to start or recur less than one hour apart. If you schedule replications more frequently, some
scheduled replications may not have time to start.
• The Replicate most recent snapshot option on the primary volume’s Provisioning > Replication Volume page or specifying
last-snapshot for the replication-mode parameter of the create task command can be used when standard snapshots are
manually taken of the primary volume or when using any other tool such as Microsoft VSS and you want to replicate these snapshots.
This helps in achieving application-consistent snapshots.
• The retention count applies to both the primary and secondary system.
• You can set the replication image retention count to a preferred value. A best practice is to set the count such that deleting replication
images beyond the retention count is acceptable.

• Schedules are only associated with the primary volume. You can see the status of a schedule by selecting it from the primary volume’s
View > Overview panel.
• You can modify an existing schedule to change any of the parameters such as interval time and retention count using the system’s or
primary volume’s Provisioning > Modify Schedule page of the SMU.
• For linear replications, when standard snapshots are taken at the primary volume in regular intervals (manually or using VSS), select the
proper time interval for the replication scheduler so that the latest snapshot is always replicated to the remote system. The system
restricts the minimum time interval between replications to 30 minutes.
The following table provides a summary example.
Table 3. Tabulation of resources used with replication of application-consistent snapshots
Suppose the application uses two MSA volumes, snapshots are taken every 2 hours with a retention of 32 instances for each volume, and replications are taken every 6 hours with a retention of 32 instances for each replication set.
Total hours before snapshot rollover: 64 hours (2 days 16 hours)
Total hours before replication rollover: 192 hours (8 days)
Total snapshots used by replication: 32 (per array)
Total volumes used by replication: 4 (per array)
Total vdisks used by replication: 2 (per array)

Virtual replications
• Replications can be scheduled at most once per hour when the primary volume resides on an MSA 2040 or MSA 1040, and once per half hour when the primary volume resides on an MSA 2050 or MSA 1050. Replications are not queued on the MSA 2040 and MSA 1040: a replication is discarded if it starts while an existing replication on that replication set is still running. On the MSA 2050 and MSA 1050, at most one replication can be queued. In either case, consider the rate of data change, the network capability, the number of replications, and the host I/O rate when scheduling replications so that no replication is discarded.
• Schedules are only associated with the primary volume. You can see the status of a schedule by selecting the primary volume from the
Volumes topic and hovering over the schedule in the Schedules tab.
• You can modify an existing schedule using the Manage Schedules action of the replication selected in the Replications topic.

Physical media transfer (linear replications only)


• When creating the secondary volumes and vdisks they reside on, ensure that the remote system does not have volumes or vdisks with the
same names as volumes or vdisks on the disks you will be transferring. Volumes or vdisks with the same name will cause conflicts in the
remote systems that are difficult to resolve.
• Power down the enclosure or shut down the controllers before inserting the disks at the remote system.
• After you’ve stopped the vdisk(s), you don’t need to power down the local system to remove the disks while performing physical media
transfer; however, after pressing the drive ejector button, wait approximately 30 seconds or until the media stops rotating before
removing a disk drive.
• While performing physical media transfer, it’s easier to have the snap pool of the secondary volume on the same vdisk as the secondary
volume. If the snap pool is on a different vdisk, you should first detach the secondary volume and stop the vdisk containing the secondary
volume before stopping the vdisk with the snap pool.
• Always check that the initial replication is completed before detaching the secondary volume from the replication set.
• After reattaching the secondary volume, initiate a replication from the primary volume to continue syncing the data between the local and
remote systems.
• When using Full Disk Encryption (FDE) on an MSA 2040 Storage array, it is a best practice to move media between systems that are
identically configured with FDE enabled or disabled. That is, move secured Self-Encrypting Drives (SEDs) to a secured FDE system, and
unsecured SEDs or non-SEDs to an unsecured FDE system or non-FDE system.

Replication setup wizard (linear replications only)


• The system’s Wizards > Replication Setup Wizard helps set up remote or local replication sets. Enable Check Links when performing
remote replication. This will validate the links between the local and remote systems.
• When prompted, manually initiate or schedule the replication after the setup wizard is completed.

Application-consistent snapshots (linear replications only)


• When snapshots are taken manually, no I/O quiescing is done at the array level.
• Use the create snapshots command to take time-consistent snapshots across multiple volumes once the associated applications
have been quiesced.
• Use the Scheduled option of the primary volume’s Provisioning > Replicate Volume page to initiate the replication of these snapshots.
Select the Replicate most recent snapshot option.
• Software such as the Microsoft VSS framework enables quiescing applications and taking application-consistent snapshots.
Max. volume limits
• Replication images for linear replications and the two or three replication snapshots for virtual replications count against the volume per
vdisk or pool limit. Monitor the number of replication images created to avoid unexpectedly reaching this limit.
• For linear replications, you can restrict the number of replication images at the local system, where the primary volume is residing, by
using the retention count in the scheduler.
• For linear replications, delete older replication images at the remote system as needed. Older replication images are deleted automatically
once the volume count of the vdisk reaches the maximum volume limit. The retention count applies to both the primary volume snapshots
and the secondary volume snapshots.
• For virtual replications where both the primary and secondary volumes reside on MSA 2050 or MSA 1050 arrays, restrict the number of
replication history snapshots using the snapshot count parameter of the replication set.
• For linear replications, and virtual replications where both the primary and secondary volumes reside on MSA 2050 or MSA 1050 arrays,
you can take snapshots of up to 16 volumes at a single operation by using the create snapshots command.
• For linear replications, a vdisk can accommodate only 128 volumes, including the replication images in that vdisk.

Table 4. Configuration limits for the P2000 G3 array


Property Value

Maximum vdisks 32
Maximum Volumes 512
Maximum Volumes per vdisk 128
Maximum Snapshots per volume 127
Maximum LUNs 512
Maximum Disks 149
Number of Host Ports 8

Table 5. Configuration limits for the MSA 2040 array


Property Linear value Virtual value

Maximum vdisks/Disk Groups 64 32


Maximum Volumes 512 1024
Maximum Volumes per vdisk/Pool 128 N/A
Maximum Snapshots per Volume 127 254
Maximum LUNs 512 1024
Maximum Disks 199 199
Number of Host Ports 8 8

Table 6. Configuration limits for the MSA 1040 array


Property Linear value Virtual value

Maximum vdisks/Disk Groups 64 32


Maximum Volumes 512 1024
Maximum Volumes per vdisk/Pool 128 N/A
Maximum Snapshots per Volume 127 254
Maximum LUNs 512 1024
Maximum Disks 99 99
Number of Host Ports 4 4

Table 7. Configuration limits for the MSA 2050 array


Property Value

Maximum Disk Groups 32


Maximum Volumes 1024
Maximum Volumes per Pool 512
Maximum Snapshots per Volume 254
Maximum LUNs 1024
Maximum Disks 192
Number of Host Ports 8

Table 8. Configuration limits for the MSA 1050 array


Property Value

Maximum Disk Groups 32


Maximum Volumes 1024
Maximum Volumes per Pool 512
Maximum Snapshots per Volume 254
Maximum LUNs 1024
Maximum Disks 96
Number of Host Ports 4

Replication limits
Table 9. Replication configuration limits for the P2000 G3 array
Property Value

Remote Systems 3
Replication Sets 16

Table 10. Replication configuration limits for the MSA 2040 array
Property Linear value Virtual value

Remote Systems/Peer Connections 3 Remote Systems 1 Peer Connection


Replication Sets 16 32
Volumes per Volume Group Replicated N/A 16

Table 11. Replication configuration limits for the MSA 1040 array
Property Linear value Virtual value

Remote Systems/Peer Connections 3 Remote Systems 1 Peer Connection


Replication Sets 8 32
Volumes per Volume Group Replicated N/A 16

Table 12. Replication configuration limits for the MSA 2050 array
Property Value

Peer Connections 4
Replication Sets 32
Volumes per Volume Group Replicated 16

Table 13. Replication configuration limits for the MSA 1050 array
Property Value

Peer Connections 1
Replication Sets 32
Volumes per Volume Group Replicated 16

Monitoring
Replication
For linear replications, you can monitor the progress of an ongoing replication by selecting the replication image listed in the navigation tree.
The right panel displays the status and percentage of progress. When the replication is completed, the status appears as Completed.

For virtual replications, you can check the status of an ongoing replication in the Replications topic and monitor its progress by hovering over the replication set: the Replication Set Information panel shows the current run's progress, the current and last run times, and the amount of data transferred.

Events
When monitoring the progress of ongoing replication, view the event log for the following events:
• Event code 316—Replication license expired—This indicates that the temporary license has expired. Remote Snap will no longer be
available until a permanent license is installed. All the replication data will be preserved even after the license has expired, but you cannot
create a new replication set or perform more replications. If you choose not to install a permanent license after the temporary license
expires, you can access the data of the secondary volume by deleting the replication set.
• Event codes 229, 230, and 231—Snap pool threshold—The snap pool can fill up when there is steady I/O and replication snapshots are taken at regular intervals. When the warning threshold is crossed (event code 229), consider taking action: either remove the older snapshots or expand the vdisk.
• Event codes 418, 431, and 581—Replication suspended—If the ongoing replication is suspended, an event is received. Any further
linear replication initiated is queued. Once the problem is identified and fixed, you can manually resume the replications.

For more related events, see the HPE MSA 2050 Event Descriptions Reference Guide, the HPE MSA 1050 Event Descriptions Reference
Guide, the HPE MSA 2040 Event Descriptions Reference Guide, the HPE MSA 1040 Event Descriptions Reference Guide, or the
HPE P2000 G3 MSA System Event Descriptions Reference Guide.
SNMP traps and email (SMTP) notifications
You can set up the array to send SNMP traps and email notifications for the events described above. Using the v2 SMU, use the system’s
Configuration > Services > SNMP Notification or Configuration > Services > Email Notification pages. Using the v3 SMU, select the
Set Up Notifications action from the Home topic. For the CLI, use the set snmp-parameters and set email-parameters
commands.
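For example, a hedged CLI sketch follows; the addresses, community string, and recipient are placeholders, and the parameter names shown are assumptions that may vary slightly by firmware release, so check the CLI Reference Guide.
# set snmp-parameters enable crit add-trap-host 192.0.2.200 read-community public
# set email-parameters server 192.0.2.25 domain example.com email-list admin@example.com notification-level warn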

Performance tips
For a gain in replication and host I/O performance of up to 20%, enable jumbo frames on all infrastructure components (if supported by all)
in the path and on iSCSI controllers. Jumbo frames are disabled by default for the iSCSI host ports. You can enable them using either the
SMU or CLI.

Note
If your infrastructure does not support jumbo frames, enabling them only on your controllers may actually lower performance or even
prevent the creation of replication sets or replications.

With the v2 SMU, enable jumbo frames by going to the system’s Configuration > System Settings > Host Interfaces.

With the v3 SMU, select the Set Up Host Ports action from the System topic, then select the Advanced Settings tab of the Host Ports
Settings panel.
With the CLI, enable jumbo frames by using the command set iscsi-parameters jumbo-frames enabled.

Troubleshooting
Issue

Replication enters a suspended state.


Recommended actions

If performing a local linear replication, ensure all the ports are configured and connected via the switch.

Check the Remote Snap license status at the local and remote site. If you are running a temporary license, the license may have expired.
Install a permanent license and manually resume replication.

The connectivity link may be broken:


For linear replications

Use the remote system’s Tools > Check Remote System Link in the SMU to check the link connectivity between the local and remote systems.

For virtual replications

Use the CLI command show peer-connections with the verify-links parameter to check the data link. Repair the link and make
sure all links are available between the systems, then manually resume the replication.

For virtual replications, the overcommit flag for the pool may be enabled and the pool’s high threshold has been exceeded. Hover over the
pool in the Pools topic in the SMU or use the show pools CLI command to see if overcommit is enabled, the percent the high threshold is
set at, and the available space to see if there is insufficient available space to continue. Add disk-groups to the pool or remove volumes or
snapshots if necessary to increase the available size of the pool.

The CHAP settings are not correct—check that the CHAP records exist and the secrets are correct.
Issue

You cannot perform an action such as changing the schedule for a replication set.
Recommended actions

For linear replications

Actions performed on a replication set, such as schedule creation or modification and adding or removing a secondary volume, must be
performed on the system where the primary volume resides.

Changing the primary volume is a coordinated effort between the local and remote systems. It must first be performed on the remote
system, and then on the local system. To help remember this, the secondary volume pulls data from the primary volume. To avoid a potential
conflict, do not attempt to have two secondary volumes.
Since the secondary volume cannot be mapped to the hosts, unmap a primary volume before converting it to a secondary volume.
For virtual replications

Actions that control replications, such as scheduling, initiating, suspending, resuming, or aborting a replication, must be performed on the
system where the primary volume resides.
Deleting a replication set or changing its name can be performed on either the primary volume’s system or the secondary volume’s system.
Issue

You can’t delete a linear replication set involving a secondary volume.


Recommended actions

Convert the secondary volume to a primary volume. You can now delete the replication set.

FAQs
1. Do we support port failover?
Answer: Yes. See examples below to understand how it works.
Example
A dual controller system where the primary volume is owned by controller A and ports A1, A2, B1, and B2 are connected and part of the replication set's primary addresses. For linear replications, see the output of the show replication-sets command or the Replication Addresses of the primary volume's View > Overview to verify that a port is a primary address; for virtual replications, see the output from show peer-connections or hover over the peer connection in the Replications topic of the SMU.
a. If port A1 fails, replication will go through A2 without any issues.
b. If port A1 and A2 fail, the replication will continue using the B1 and B2 ports of controller B.
2. Do we support load balancing with multiple replications in progress?
Answer: Yes.
Example
Four primary volumes owned by controller A and both ports (A1 and A2) are connected and used for replication.
a. All four sets will try to use both ports A1 and A2, unless the array doesn’t have sufficient resources to use both ports.
3. Why are replication history snapshots not being taken?
Answer: If a volume or a snapshot already exists with the name the replication history snapshot will use, the replication history snapshot
will not occur. Ensure the names of all existing volumes and snapshots do not conflict with the replication history snapshot naming
convention, basename_nnnn, where basename is something you’ve set when creating or modifying the replication set, and nnnn is a
number with leading zeroes, starting at 0.
4. Why didn’t a queued virtual replication begin once the previously ongoing replication completed or was aborted?
Answer: Check that the pool that contains the replication set is not full. If it is, add disk groups to the pool or remove volumes or
snapshots to provide more space. Once space is available, the queued replication will begin.
5. Can CHAP be added to a replication set at any time after it is created? For instance, if you have a local linear replication set for
doing an initial replication and then media transfer, do you need to set up CHAP before the set creation?
Answer: CHAP is specific to a system and not specific to the replication set. CHAP is specific to the local-to-remote system
communication path and vice versa. For linear replications, once you are done with the initial replication and physical media transfer, you
can enable CHAP before reattaching the secondary volume from the remote system; the reattach operation should go through fine.
6. Does using CHAP affect replication performance?
Answer: CHAP is just for initial authentication across nodes. Once a login is successful with another system, CHAP will not be involved in
further data transfer, so replication performance should not be affected.
7. I changed the CHAP settings on the array that the secondary volume resides on, but it did not affect an ongoing replication. How
do I get the CHAP settings to take effect?
Answer: Use the reset host-link CLI command to reset the SCSI nexus, or connection, between the primary and secondary arrays.
When the connection is re-established, the new CHAP settings will be used. Note that this means that if the array the primary volume resides on does not have matching CHAP settings, ongoing and new replications will suspend. Update the CHAP settings on the array the primary volume resides on so that they match, then reset its host links to allow replications to continue.
8. I created a master volume as the primary and did a local linear replication. Can I now do a remote replication with the same
primary volume?
Answer: A volume can only be part of one replication set. You need to delete the set and create a new set or remove the secondary
volume from the set and add the other remote secondary volume to the set.

9. I initiated a remote linear replication, and now I am not getting an option to suspend the replication/abort the replication in the
local system.
Answer: By design, suspend and abort operations can only be performed on the secondary volume for linear replications. You can access
the secondary volume on the remote system; it has an option to suspend/resume replication.
10. I deleted the linear replication set using remove replication and all my replication images disappeared.
Answer: All the replication images are converted to standard snapshots and can be viewed under the volume in the Snapshots section
of the Configuration View panel of the SMU.
11. I see an option called Enable Snapshot when attempting to create a linear volume.
Answer: By selecting the box Enable Snapshot, you automatically create a snap pool for the volume. The created volume is now a
master volume.
12. I am not able to map to the secondary volume.
Answer: Secondary volumes cannot be presented to any hosts. For linear replications, you can export a snapshot of the secondary
volume, and for virtual replications you can create a snapshot of the secondary volume. Then, map the snapshot to hosts.
13. I cannot remove the primary volume from a linear replication set.
Answer: Only a secondary volume can be removed. If you want to remove the primary volume, first make the other volume the primary
volume, and then make the original primary volume a secondary volume. You can then remove the volume.
14. I can’t expand a primary or secondary volume in the linear replication set.
Answer: Master volumes cannot be expanded, because both the primary and the secondary volumes are master volumes; they can’t be
expanded even in a prepared state.
In the context of Remote Snap, volume expansion also causes problems because both the primary and the secondary volumes must be
identical in size. This further prohibits expanding volumes which are part of the set.
Note that for virtual replications, you can expand a primary volume—the secondary volume’s size will change on the next replication of
the set.
15. I expanded the primary virtual volume, but the secondary virtual volume’s size hasn’t changed.
Answer: The secondary volume’s size will increase on the next replication.
16. What is the Maximum Retry Time?
Answer: The Maximum Retry Time in the SMU and MaxRetryTime in the CLI refers to the maximum time in seconds to retry a single
linear replication. If this value is zero, there will be an infinite number of retries. This is valid only when on-error policy is set to retry.
Use the set replication-volume-parameters command to change these parameters.
A retry for a replication occurs every five minutes if an error is encountered. That is, a five-minute delay occurs in between retry attempts.
If the delta time from the current time to the initial retry time is greater than the Maximum Retry Time, the replication is suspended.
17. How does a virtual replication recover from a temporary peer connection failure?
Answer: The replication will be suspended and will attempt to resume every 10 minutes for the first hour, then once every hour until successful or aborted by the user. If the host ports on both systems have a status of Up and OK health and are accessible to each other, but the peer connection does not show OK health, restart the Storage Controllers (SCs) of both systems; the peer connection should then come up.
18. Can the iSCSI controller ports run concurrent Remote Snap and I/O?
Answer: You can use iSCSI host ports for both Remote Snap and I/O with all supported FW versions of the P2000 G3 iSCSI arrays,
MSA 2050 Storage, MSA 2040 Storage, MSA 1050 Storage, and MSA 1040 Storage.
19. How do I delete a virtual replication set when the peer connection is down?
Answer: Use the local-only option of the delete replication-set CLI command.
20. I cannot delete a peer connection even though there is no virtual replication set present in the system. What can I do?
Answer: You may have deleted the replication set on the local system using the local-only option of the delete replication-set CLI command while the peer connection was down, and the peer connection is now back up. Delete the replication set from the remote system to allow deleting the peer connection.

21. I am about to perform an upgrade to an MSA 2050 or MSA 1050 by transferring disks from the old array to the new array. The
old array has virtual replication sets that I want to keep. What sort of preparation do I need to do?
Answer: Abort replications and suspend all replication sets on the old array. Remember that you must abort replications and suspend
replication sets from the array the primary volume of the replication set resides on. Also, if you plan on using the same iSCSI host port IP
addresses on the new array that were on the old array, ensure the old array is powered down and removed from the network before
powering up the new array, configuring its IP addresses, and connecting it to the network.

Summary
Remote Snap provides array-based remote replication with a flexible architecture and simple management, and supports both Ethernet and
Fibre Channel connectivity. The software limits the impact of replication on application performance, while its snapshot-based replication
technology minimizes the amount of data transferred. Remote Snap enables multiple recovery points for daily backups (for linear
replications), access to data at remote sites, and business continuity when critical failures occur.

Glossary
Peer connection—A logical connection for virtual replications that defines the ports used to connect two systems. Virtual replication sets
use a peer connection to replicate from the primary to the secondary system.
Primary volume—The replication volume residing on the local system. It is the source from which replication snapshots are taken and
copied. It is externally accessible to host(s) and can be mapped for host I/O.
Replication-prepared volume—For linear volumes, the replication volume residing on a remote system that has not been added to a
replication set. You can create the replication-prepared volume using the SMU/CLI and can then use it as the secondary volume when
creating a linear replication set.
Remote system—A representation of a system that is added to the local system and that contains the address and authentication
tokens needed to access the remote system for linear replication management. The remote system can be queried for lists of vdisks, volumes, and
host I/O ports (used to replicate data) to aid in creating a linear replication set, for example.
Replication image (linear replications only)—The representation of a replication snapshot at both the local and remote systems. In
essence, it is the pair of replication snapshots that represents a point-in-time replication. In the SMU, clicking the table shown in the right
pane displays both the primary and secondary volume snapshots associated with a particular replication image. In the Configuration View
pane of the SMU, the image name is the time at which the image was created; in the CLI and elsewhere in the SMU, the image name is the name of the
primary volume replication snapshot.
Replication set—The association between the source volume (primary volume) and the destination volume (secondary volume). A replication set is
a set of volumes associated with one another for the purpose of replicating data. To replicate data from one volume to another, you must
create a replication set that associates the two volumes. A replication set is a concept that spans systems: the volumes in a replication set
are usually not located on the same system, and for virtual replications they cannot be. A replication set is not a volume but an association
of volumes. A volume belongs to exactly one replication set.
Replication snapshot—Replication snapshots are a special form of the existing snapshot functionality. They are explicitly used in replication
and do not count against a snapshot license.
Secondary volume—The replication volume residing on a remote system. For linear replications, this volume is also a normal master volume
and appears as a secondary volume once it is part of a replication set. For virtual replications, it is a base volume. It is the destination for the
replication snapshot copies. It cannot be mapped to any hosts.
Sync points (linear replications only)—Replication snapshots are retained on both the primary volume and the secondary volume. When a
matching pair of snapshots is retained on both the primary and secondary volumes, they are referred to as sync points. There are four types of
sync points:
• Only sync point: the only replication snapshot that is copy-complete on any secondary system.
• Current sync point: the latest replication snapshot that is copy-complete on any secondary system.
• Common sync point: the latest replication snapshot that is copy-complete on all secondary systems.
• Old common sync point: a common sync point that has been superseded by a new common sync point.
VSS HW Provider—A software driver supplied by the storage array vendor that enables the vendor’s storage array to interact with the
Microsoft Volume Shadow Copy Service (VSS) framework on Windows Server.
VSS Requestor—A software tool or application that manages the execution of user VSS commands.

VSS Writer—A software driver supplied by the Windows Server Application vendor that enables the application to interact with the
Microsoft VSS framework.

For more information


• HPE MSA 2050 Storage
• HPE MSA 2050 Storage QuickSpecs
• HPE MSA 2050 CLI Reference Guide
• HPE MSA 2050 SMU Reference Guide
• HPE MSA 2050 Event Descriptions Reference Guide
• HPE MSA 1050 Storage
• HPE MSA 1050 Storage QuickSpecs
• HPE MSA 1050 CLI Reference Guide
• HPE MSA 1050 SMU Reference Guide
• HPE MSA 1050 Event Descriptions Reference Guide
• HPE MSA 2040 Storage
• HPE MSA 2040 Storage QuickSpecs
• HPE MSA 2040 CLI Reference Guide
• HPE MSA 2040 SMU Reference Guide
• HPE MSA 2040 Event Descriptions Reference Guide
• HPE MSA 1040 Storage
• HPE MSA 1040 Storage QuickSpecs
• HPE MSA 1040 CLI Reference Guide
• HPE MSA 1040 SMU Reference Guide
• HPE MSA 1040 Event Descriptions Reference Guide
• HPE P2000 G3 MSA Array Systems
• HPE MSA P2000 G3 Modular Smart Array Systems QuickSpecs
• HPE P2000 G3 MSA System CLI Reference Guide
• HPE P2000 G3 MSA System SMU Reference Guide
• HPE P2000 G3 MSA System Event Descriptions Reference Guide
© Copyright 2010–2011, 2013–2014, 2016–2018 Hewlett Packard Enterprise Development LP. The information contained
herein is subject to change without notice. The only warranties for Hewlett Packard Enterprise products and services are
set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as
constituting an additional warranty. Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions
contained herein.

Microsoft, Windows, and Windows Server are either registered trademarks or trademarks of Microsoft Corporation in the
United States and/or other countries. All other third-party marks are property of their respective owners.

4AA1-0977ENW, October 2018, Rev. 8
