Contents
Introduction  4
    Remote Snap product overview  4
    What’s new?  5
    Replication models and documentation conventions  5
Benefits of Remote Snap  6
    Disaster recovery  6
    Backup  6
    Development  6
Components of Remote Snap  7
    Components common to linear and virtual replication  7
    Components of linear replication  7
    Components of virtual replication  7
How the technology works  8
    Linear replication process  8
    Virtual replication process  10
    Comparison of linear replications versus virtual replications  12
Types of replications  12
    Local replication  12
    Remote replication  12
    Physical media transfer  12
Remote Snap requirements  13
    Setup requirements  13
    Snapshot space  14
    Network requirements  14
Remote Snap basic functions  16
    General notes about using the SMU and CLI  16
    Preparing the systems  18
    Creating a replication set  24
    Scheduling replications  30
    Deleting a replication set  35
    Accessing the secondary volume’s data  37
    Setting the primary volume (linear replications only)  40
    Verifying replication data links  41
    Ports connected for replication  47
    CHAP settings and Remote Snap  51
Examples of replication types and operations  52
    Remote replication  52
    Local replication and physical media transfer (for linear replications only)  53
    Disaster recovery operations  55
Technical white paper
Introduction
This document provides information for using the MSA Remote Snap Software (Remote Snap). The following topics are covered:
• Benefits
• Components
• How the technology works
• Types of replications
• Requirements
• Basic functions
• Use cases
• Best practices
• Troubleshooting
• Frequently asked questions
Note
Remote Snap is not supported on the HPE P2000 G3 SAS MSA Array System Controller, the HPE MSA 1040 SAS Controller, the HPE MSA
2040 SAS Controller, the HPE MSA 1050 SAS Controller, or the HPE MSA 2050 SAS Controller.
Remote Snap product overview
Remote Snap is a form of asynchronous replication that uses the snapshot functionality of the array to replicate block-level or page-level
data from a volume on a primary system to a volume on a secondary system. The secondary system may be at the same location as the first,
or it may be located at a remote site. Remote Snap only replicates blocks or pages that have changed since the last replication, thereby
providing efficient replication.
What’s new?
This section describes new enhancements and support added with the release of the VL100 firmware for MSA 2050 Storage and the
VE100 firmware for MSA 1050 Storage. Since neither the MSA 2050 nor the MSA 1050 supports linear storage, they do not support
replication of linear volumes; they only support replication of virtual volumes.
• The MSA 2050 and the MSA 1050 require authentication when creating or modifying peer connections.
• Increased peer connections to four per array for the MSA 2050; remains at one for the MSA 1050.
• When both the primary and secondary volumes reside on MSA 2050 or MSA 1050 arrays:
– Use FC for the peer connection for replication.
– Change the protocol used for the peer connection between FC and iSCSI for the MSA 2050.
– Queue replications—up to one replication can be queued.
– Up to 16 replication history snapshots available for replication sets of volumes (not for replication sets of volume groups).
• When the primary volume resides on an MSA 2050 or an MSA 1050 array:
– Decreased the minimum interval between scheduled replications from one hour to thirty minutes.
– Added the ability to optionally use an existing snapshot of the primary volume as the current snapshot when comparing against the
previous snapshot to determine data to transfer for replication. The existing snapshot may be created manually, automatically via a
schedule or snapshot history, or using the VSS hardware provider. Previously, the current snapshot of the primary volume was always
created as part of the replication process.
Benefits of Remote Snap
Remote Snap technology enables the following key data management and protection capabilities:
• Continuity of business systems in the event of a failure on the primary site
• Access to data at a remote site, for either dispersed operations or development activities
• Multiple recovery points using snapshots
Disaster recovery
Remote Snap provides access to data at a secondary site when the primary site experiences a critical failure. It allows several data volumes
(limits determined by model and volume type replicated) to be replicated. Replicating at regular intervals helps to protect the data. Recovery
time is reduced because the data is available at the secondary site; applications can switch to the secondary site with minimal downtime
using data from the last replication point. The data stored at the secondary site can then be used to restore the primary location once it is
back online, or the data can be exported and used by users at the secondary site.
Backup
Remote Snap can replicate volumes with marginal impact on server performance. It can be used by small businesses as a primary backup
tool and by large businesses as a secondary backup tool at data centers. Remote Snap can be used as interim storage for backing up to
removable media such as tape.
Alternatively, remote offices can replicate to central data centers where backups occur. The software reduces the overall backup time by
replicating only data that has been modified. Because linear volume replication (and virtual volume replication where both the primary and
secondary volumes reside on MSA 2050 or MSA 1050 arrays) supports either FC or Ethernet interconnects, businesses have the flexibility
to use the technology that best matches their current environment.
Development
Remote Snap enables different development use cases:
• An application administrator can test patches or changes in the primary system by switching the applications to the secondary site. Once
the testing of the patch update is completed, the administrator can switch the applications back to the primary site.
• A database application development team can have access to regularly scheduled snapshots of the replicated database volumes by
exporting the snapshots or, when both the primary and secondary volumes reside on MSA 2050 or MSA 1050 arrays, using replication
history snapshots on the secondary system. When the exported or replication history snapshot is no longer needed, it can be deleted.
How the technology works
Remote Snap enables snapshots of data to reside on another array at a location other than the primary site. To perform a replication, the
system takes a snapshot of the volume to be replicated, creating a point-in-time image of the data. The system replicates this point-in-time
image to the destination volume by copying only the differences in the data between the current snapshot and the previous one via TCP/IP
(iSCSI) or FC.
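The delta-transfer step can be sketched as follows. This is a minimal illustration in Python, not the array's implementation; the block-map representation and function names are assumptions:

```python
# Sketch of snapshot-delta replication: only blocks that differ between the
# previous and current point-in-time images are sent to the secondary volume.
# The dict-of-blocks representation is illustrative, not the MSA on-disk format.

def changed_blocks(previous: dict[int, bytes], current: dict[int, bytes]) -> dict[int, bytes]:
    """Return only the blocks added or modified since the last snapshot."""
    return {
        lba: data
        for lba, data in current.items()
        if previous.get(lba) != data
    }

def replicate(previous, current, secondary: dict[int, bytes]) -> int:
    """Apply the delta to the secondary volume; return blocks transferred."""
    delta = changed_blocks(previous, current)
    secondary.update(delta)
    return len(delta)

prev_snap = {0: b"AAAA", 1: b"BBBB", 2: b"CCCC"}
curr_snap = {0: b"AAAA", 1: b"bbbb", 2: b"CCCC", 3: b"DDDD"}  # block 1 changed, block 3 new
secondary = dict(prev_snap)  # the secondary is in sync with the previous snapshot

sent = replicate(prev_snap, curr_snap, secondary)
print(sent)                     # 2: only the changed and new blocks travel
print(secondary == curr_snap)   # True
```

Because the secondary already holds everything up to the previous sync point, transferring the two-block delta is enough to bring it fully current.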
The snapshot occurs at the volume level and is block-based or page-based. The software functions independently of the vdisk or disk group
configuration, so the secondary volume in a given set may have different underlying RAID levels, drive counts, drive sizes or drive types than
the primary volume, though the volume sizes are identical. Since the software functions at the raw block-level or page-level, it has no
knowledge of the volume’s operating system configuration, the file system, or any data that exists on the volume.
Linear replication uses a pull model while virtual replication uses a push model. In a pull model, the secondary volume’s system requests data
from the appropriate snapshot on the primary volume; in a push model the primary volume’s system writes data to the appropriate snapshot
on the secondary volume.
Linear replication repeats step 1 and queues steps 2–6 for each new replication command issued to the same replication set until the prior
replication command is complete. As long as the sync points are maintained, new replication commands to the same primary volume can be
performed while one or more previously executed replication commands are still in process. This enables you to take snapshots at discrete
intervals without waiting for any previous replications to complete.
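That pipelining behavior can be sketched roughly as follows; the class and method names are hypothetical, and the numbered replication steps are reduced to "take the snapshot now, queue the transfer behind any in-progress one":

```python
# Sketch of linear replication pipelining: each replicate command captures its
# sync point (snapshot) immediately, while the data transfer waits its turn.
from collections import deque

class LinearReplicationSet:
    def __init__(self):
        self.snapshots = []      # sync points, captured at command time
        self.pending = deque()   # transfers queued behind the active one

    def replicate(self, volume_state: dict) -> None:
        snap = dict(volume_state)   # point-in-time image taken right away
        self.snapshots.append(snap)
        self.pending.append(snap)   # remaining steps run after prior transfers

    def complete_next_transfer(self, secondary: dict) -> None:
        if self.pending:
            secondary.update(self.pending.popleft())

rs = LinearReplicationSet()
rs.replicate({"lba0": "v1"})
rs.replicate({"lba0": "v2"})   # issued before the first transfer completed
print(len(rs.pending))          # 2: both sync points are preserved
```

The point the sketch makes is that snapshots are taken at discrete intervals regardless of transfer progress, so no replication command has to wait for the previous one to finish.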
Technical white paper Page 9
For the MSA 2040 and the MSA 1040, virtual replication does not keep more than the current and previous snapshots, and those snapshots
are not accessible. For the MSA 2050 and MSA 1050, virtual replication has three internal snapshots that are not accessible—the current
snapshot, the previous snapshot, and the queued snapshot. While these snapshots are not accessible, they can be listed with their storage
consumption using the show snapshots type replication CLI command. In addition, if both the primary and secondary volumes
reside on MSA 2050 or MSA 1050 arrays, replication snapshot history is available, and can exist for the secondary volume only, or for both
the secondary and primary volumes.
For the MSA 2040 and the MSA 1040, virtual replication does not queue replications; if a new replication request occurs while a previous
replication is in progress, the new request fails. For the MSA 2050 and MSA 1050, if the queue policy is set to queue latest, then if a new
replication request occurs while a previous replication is in progress, the system takes a snapshot and puts it in the queued internal
replication snapshot, replacing any previous snapshot with the new one, thus queuing at most one snapshot.
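A rough sketch of the queue-latest policy follows; the class and its fields are hypothetical stand-ins for the array's internal replication snapshots:

```python
# Sketch of the "queue latest" policy on the MSA 2050/1050: while a
# replication runs, a new request snapshots the volume into the single queued
# slot, replacing any snapshot already waiting there, so at most one
# replication is ever queued.

class VirtualReplicationQueue:
    def __init__(self):
        self.running = None   # snapshot currently being replicated
        self.queued = None    # at most one queued snapshot

    def request(self, volume_state: dict) -> None:
        snap = dict(volume_state)
        if self.running is None:
            self.running = snap
        else:
            self.queued = snap   # replaces any previously queued snapshot

    def finish(self) -> None:
        # The in-progress replication completes; the queued snapshot (if any)
        # becomes the next running replication.
        self.running, self.queued = self.queued, None

q = VirtualReplicationQueue()
q.request({"page0": "v1"})   # starts immediately
q.request({"page0": "v2"})   # queued
q.request({"page0": "v3"})   # replaces v2 in the queue
q.finish()
print(q.running)              # {'page0': 'v3'}: only the latest queued snapshot runs
```

On an MSA 2040 or MSA 1040 the second `request` would simply fail instead, since those models do not queue replications.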
Comparison of linear replications versus virtual replications

Replication protocol
  Linear replication: iSCSI or FC
  Virtual replication: iSCSI for MSA 2040 and MSA 1040; iSCSI or FC for MSA 2050 and MSA 1050

Change replication protocol
  Linear replication: No
  Virtual replication: Only for MSA 2050

Initial replication
  Linear replication: All blocks are copied
  Virtual replication: Only allocated pages are copied

Queued replication
  Linear replication: Yes
  Virtual replication: MSA 2050 and MSA 1050, at most one queued

Multiple replication images/replication history
  Linear replication: Up to the maximum number of snapshots
  Virtual replication: Not applicable for the MSA 2040 and MSA 1040—only the current replication is retained; up to 16 replication history snapshots available on the MSA 2050 and MSA 1050

Modify primary volume/reverse replication direction
  Linear replication: Allowed
  Virtual replication: Not allowed

Replicate snapshot of the primary volume using existing replication set
  Linear replication: Allowed
  Virtual replication: Allowed for the MSA 2050 and the MSA 1050, not allowed for the MSA 2040 and MSA 1040

Create replication set using a snapshot or a volume group as the source
  Linear replication: Not allowed
  Virtual replication: Allowed

Can replicate to or from more than one system
  Linear replication: Allowed
  Virtual replication: Allowed for the MSA 2050, not allowed for the MSA 2040, MSA 1050, or MSA 1040
Types of replications
A replication from a volume on a system to another volume on the same system is a local replication; a replication from a volume on one
system to a volume on another, separate system is a remote replication, regardless of the location of the primary or secondary system.
Virtual replication supports only remote replications, while linear replication supports both local and remote replications and can be
reconfigured between the two.
Local replication
Local replication occurs when the primary and secondary volumes reside on the same system. When creating the replication set, ensure the
primary volume resides on one vdisk and the secondary volume resides on another vdisk. Once the set is created, replications can be
initiated. Local replication still uses the host ports for replication, and so the host ports need to be configured and connected to a switch.
Remote replication
Remote replication occurs when the primary and secondary volumes reside on different systems. When creating the replication set, ensure
the primary volume resides on a vdisk or pool of the local, or primary, system and the secondary volume resides on a vdisk or pool on the
remote, or secondary, system. Once the set is created, replications can be initiated.
Note that when using Full Disk Encryption (FDE) on an MSA 2040 Storage array, it is a best practice to move media between systems that
are identically configured with FDE enabled or disabled. That is, move secured Self-Encrypting Drives (SEDs) to a secured FDE system, and
unsecured SEDs or non-SEDs to an unsecured FDE system or non-FDE system.
Remote Snap requirements
Setup requirements
Virtual replications
• Two arrays are required. Both arrays should have all reachable ports configured and connected via a switch; a direct connection between
systems is not supported.
• The Remote Snap license must be enabled on both systems:
– To explore Remote Snap on the MSA 1040 or MSA 2040, obtain a 180-day temporary license.
– To permanently enable Remote Snap, purchase a license.
• Remote Snap supports up to 32 replication sets per array. If a volume on the system is participating in a replication set, either as a primary
volume or as a secondary volume, it counts against the replication set limit.
• At least one virtual pool on each system is required to create a peer connection between the two systems.
• At least one volume on the primary system is required to create a replication set.
• If one of the arrays is an MSA 2050 or MSA 1050 and the other array is an MSA 2040 or MSA 1040, create or modify the peer
connection from the MSA 2050 or MSA 1050.
Note
For more information on controller types and additional specifications, see the HPE MSA 2050 Storage QuickSpecs, the HPE MSA 1050
Storage QuickSpecs, the HPE MSA 2040 Storage QuickSpecs, the HPE MSA 1040 Storage QuickSpecs, or the HPE MSA P2000 G3 Modular
Smart Array Systems QuickSpecs.
Snapshot space
Since Remote Snap compares snapshots to determine data to transfer for replication, care must be given to provide sufficient space for the
snapshots. While overall space requirements depend on the rate of data change and frequency of replication, and determining this is beyond
the scope of this document, some information on managing snapshot space is provided here. To see the storage consumed by the
snapshots, use the show snapshots type replication CLI command.
For linear replications, set the initial snap pool size in any of the following ways:
• Create a snap pool and associate it with the volume during creation, or when converting a standard volume to a master volume.
• Specify the reserve (snap pool) size when creating a volume or replication set.
• Allow the size to default to 20% of the volume size or the minimum snap pool size of 5.37 GB, whichever is greater, when creating the
volume or the replication set.
Use the set snap-pool-policy CLI command to set the policy to automatically attempt to expand the snap pool, delete oldest snapshots,
delete all snapshots, halt writes, or just to notify when various thresholds are reached. The defaults are to only notify for the warning
threshold, to automatically attempt to expand the snap pool for the error threshold, and to delete snapshots for the critical threshold. The
warning and error thresholds can be set using the set snap-pool-thresholds CLI command. The defaults are 75% for the warning
threshold and 90% for the error threshold; the critical threshold is fixed at 98% and cannot be changed. Snapshot deletion follows the
retention priority set for the pool—see the online help for the set priorities CLI command for more information and the default priorities.
For virtual replications, if overcommit is disabled for the pool, snapshots are fully provisioned, including the internal replication snapshots:
storage is allocated to each snapshot equal to the size of the volume. If overcommit is enabled, the snapshot space has a soft limit expressed
as a percentage of the pool’s physical space; the default is 10%. Set the snapshot space limit, and the snapshot space policy to only notify or
to delete snapshots, using the set snapshot-space CLI command. If the policy is set to only notify, more snapshot space may be allocated,
if available.
Note that adding or removing disk groups from the pool changes the size of the snapshot space; if removing a disk group causes the
snapshot space to exceed the limit, and the policy is set to delete snapshots, auto-deletion of snapshots occurs. Attempts to lower the limit
below the current allocated size of the snapshots will fail, as this might otherwise cause auto-deletion of snapshots.
Only a snapshot that is a leaf (that is, one with no snapshots of it) and that is unmapped is considered for auto-deletion. Retention priority
is also considered when auto-deleting; the retention priority is inherited from the snapshot’s parent, or can be reset on the snapshot itself,
but changing the parent’s priority does not propagate to existing children. Snapshot deletion considers priority first, then date, oldest first.
Snapshots are deleted one at a time until the snapshot space drops below the limit.
If no eligible snapshots exist and snapshot space is still above the limit, no new snapshots can be created, but existing snapshots can
continue to consume space. To allow new snapshots to be created, change the priority of one or more snapshots from never-delete to
another priority, unmap a leaf snapshot, raise the limit, or add one or more disk groups to the pool.
To be notified of snapshot space usage, set thresholds for the snapshot space. The default for the low threshold is 75% of the snapshot
space, the default for the middle threshold is 90%, and the default for the high threshold is 99%.
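The eligibility and ordering rules above can be sketched as follows; the Snap fields and the NEVER_DELETE sentinel are illustrative stand-ins for the array's retention-priority settings:

```python
# Sketch of the auto-deletion rules described above: only unmapped leaf
# snapshots whose retention priority is not "never-delete" are eligible, and
# they are deleted lowest-priority-first, then oldest-first, until snapshot
# space drops below the limit.
from dataclasses import dataclass

NEVER_DELETE = 99   # illustrative sentinel for the never-delete priority

@dataclass
class Snap:
    name: str
    size_gb: float
    created: int          # timestamp; smaller means older
    is_leaf: bool = True  # True when no snapshots exist of this snapshot
    mapped: bool = False
    priority: int = 1     # larger values are retained longer

def auto_delete(snaps: list[Snap], used_gb: float, limit_gb: float) -> list[str]:
    eligible = sorted(
        (s for s in snaps
         if s.is_leaf and not s.mapped and s.priority != NEVER_DELETE),
        key=lambda s: (s.priority, s.created),   # priority first, then oldest
    )
    deleted = []
    for s in eligible:
        if used_gb <= limit_gb:
            break                 # deleted one at a time, stop once under the limit
        snaps.remove(s)
        used_gb -= s.size_gb
        deleted.append(s.name)
    return deleted

pool = [
    Snap("old-low", 4, created=1, priority=1),
    Snap("new-low", 4, created=2, priority=1),
    Snap("mapped", 4, created=0, mapped=True),
    Snap("keep", 4, created=0, priority=NEVER_DELETE),
]
print(auto_delete(pool, used_gb=16, limit_gb=10))   # ['old-low', 'new-low']
```

The mapped snapshot and the never-delete snapshot survive even though they are older, which is exactly why changing a priority or unmapping a leaf snapshot frees the pool to create new snapshots again.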
Network requirements
The following is a guideline for setting up iSCSI controllers with 1 Gb and 10 Gb ports to use with Remote Snap. The two arrays do not have
to be in the same subnet, but must be connected to a network whose route tables and firewall or port-blocking settings allow iSCSI traffic
(port 3260) to pass between the two systems. CHAP must be configured appropriately on both arrays. If jumbo frames are enabled on
either array, it must be enabled on the other array and all network devices in the path between the arrays.
System or environment variables
• Hardware type: 10 Gb or 1 Gb
• Priority (set with the set replication-volume-parameters CLI command’s priority parameter): Low, medium, or high
• Number of concurrent inbound replications (Rp) (from the primary system’s view): User-configured
• Number of inbound channels (Cp) (from the primary system’s view): User-configured
• Number of concurrent outbound replications (Rs) (from the secondary system’s view): User-configured
• Number of outbound channels (Cs) (from the secondary system’s view): User-configured
• Packet loss rate (PL): Obtain this from a switch or router, or use a tool such as PathPing or MTR
• Round-trip time (RTT) in ms: Obtain this from ping
• Bandwidth (BW) in kilobytes/second (KB/s): Use a bandwidth speed test available from many websites
• Congestion Avoidance Loss (CAL): This is difficult to obtain. It is generally around 30% for a WAN, but increases with distance
Throughput requirements
Data Transfer Pending (DTP) depends on the Priority: 1280 for low, 2816 for medium, or 4096 for high.
Primary system calculations:
Primary timeout (TOp): 30 ms
Network throughput limit (NTL): Minimum of throughput limit by packet loss (if non-zero), throughput limit by RTT and throughput limit
by bandwidth
Results
Primary system throughput required to avoid timeout (TOAp) (KB/s) = Tp/Cp
Secondary system throughput required to avoid timeout (TOAs) (KB/s) = Ts/Cs for 1 Gb controllers, N/A for 10 Gb controllers.
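The combination steps can be sketched as follows. The per-limit formulas and the Tp/Ts aggregates are not spelled out in this excerpt, so the sketch takes them as inputs and computes only the steps stated above:

```python
# Sketch of the throughput bookkeeping described above. Only the documented
# combination steps are computed; the individual throughput limits and the
# Tp/Ts aggregates are supplied by the caller.

DTP_BY_PRIORITY = {"low": 1280, "medium": 2816, "high": 4096}  # Data Transfer Pending

def network_throughput_limit(limit_by_packet_loss: float,
                             limit_by_rtt: float,
                             limit_by_bandwidth: float) -> float:
    """NTL: minimum of the three limits; the packet-loss limit counts only if non-zero."""
    limits = [limit_by_rtt, limit_by_bandwidth]
    if limit_by_packet_loss:          # skipped when zero
        limits.append(limit_by_packet_loss)
    return min(limits)

def throughput_to_avoid_timeout(total_throughput_kbps: float, channels: int) -> float:
    """TOAp = Tp/Cp on the primary side (TOAs = Ts/Cs on the secondary side)."""
    return total_throughput_kbps / channels

print(DTP_BY_PRIORITY["medium"])                 # 2816
print(network_throughput_limit(0, 5000, 8000))   # 5000: packet-loss limit ignored
print(throughput_to_avoid_timeout(12000, 4))     # 3000.0
```

The split by channel count reflects that each configured channel must individually sustain its share of the required throughput to stay under the 30 ms primary timeout.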
Remote Snap basic functions
General notes about using the SMU and CLI
For more information on Remote Snap functions, see the HPE MSA 2050 SMU reference guide, the HPE MSA 1050 SMU reference guide,
the HPE MSA 2040 SMU reference guide, the HPE MSA 1040 SMU reference guide, or the HPE P2000 G3 MSA System SMU reference guide.
Performing tasks using the v3 SMU or the MSA 2050 or MSA 1050 SMU
To perform a task, select the topic from the topic tabs on the left side of the interface, select the component in the topic pane, then select
the action from the Action menu. While some tasks may be performed in the Volumes topic, all tasks can be performed in the
Replications topic.
Figure 6: Querying a peer connection on an MSA 2050 or MSA 1050 using the SMU
Figure 7: Peer connection query results on an MSA 2050 or MSA 1050 using the SMU
The MSA 2040 and MSA 1040 do not have an SMU option for this; use the query peer-connection CLI command, which is also
available on the MSA 2050 and MSA 1050.
Peer Controllers
----------------
Controller: A
Storage Controller Code Version: VLS100R03-01
Management Controller Code Version: VXM100R004-01
IP Address: 10.10.5.172
Peer Controllers
----------------
Controller: B
Storage Controller Code Version: VLS100R03-01
Management Controller Code Version: VXM100R004-01
IP Address: 10.10.5.173
Figure 8: Creating an iSCSI peer connection from the MSA 2040 or MSA 1040 using the SMU
Command example 3: Creating a peer connection from the MSA 2040 or MSA 1040 using the CLI
The MSA 2050 and MSA 1050 require authentication when creating or modifying peer connections, and neither the MSA 2040 nor the
MSA 1040 can obtain the required authentication information from the user to provide to the remote MSA 2050 or MSA 1050. Therefore,
when one of the arrays is an MSA 2050 or MSA 1050 and the other array is an MSA 2040 or MSA 1040, create or modify the peer
connection from the MSA 2050 or MSA 1050.
Figure 9: Creating an FC peer connection from the MSA 2050 or the MSA 1050 using the SMU
Command example 4: Creating an iSCSI peer connection from the MSA 2050 or the MSA 1050 using the CLI
Note
If creating a remote replication, add the remote system first.
From the volume's Provisioning > Replicate Volume page, select the secondary system (the local system is the default), and select either the vdisk on which the secondary volume will automatically be created or a replication-prepared secondary volume. Then select the link type (FC or iSCSI). Finally, if you elected to initiate the replication, choose whether to initiate it now or on a schedule. Allowing the system to automatically create the secondary volume on the specified vdisk is the easiest and fastest way to create a replication set.
Figure 10: Creating a linear replication set using the SMU—automatic creation of secondary volume
Note
If you want a secondary volume created for you on a vdisk on a remote system, you must add the remote system first. Even if you’re using a
replication-prepared volume on a remote system, adding the remote system first makes creating the replication set easier since you don’t
have to provide the addresses of the ports of the system that contains the secondary volume.
Use the create replication-set command, and specify the link-type (optional if supplying the primary-address), the
remote-system (for a remote replication, and if the remote system has previously been added), the remote-vdisk or replication-prepared
remote-volume, the secondary-address (optional for local replications or if the remote system has previously been added), and the
primary-volume.
# create replication-set link-type iSCSI secondary-address ip=10.20.5.170,10.30.5.171 remote-volume dst
primary-address ip=10.20.5.160,10.30.5.161 set src-dst src
Info: The volume was created. (spsrc)
Info: Converted the volume to a master volume. (src)
Info: The primary volume was prepared for replication. (src)
Info: Started adding the secondary volume to the replication set. (dst)
Info: Verifying that the secondary volume was added to the replication set. This may take a couple of
minutes... (dst)
Info: The secondary volume was added to the replication set. (dst)
Info: The primary volume is ready for replication. (src)
Success: Command completed successfully.
Command example 5: Creating a linear replication set using the CLI—specifying addresses and secondary volume when management ports
cannot communicate
Note
Create the peer connection that is required as part of the replication set before attempting to create the replication set. If both systems
involved in the replication set are MSA 2050 arrays operating in a combination FC and iSCSI mode (two ports per controller are configured
for FC and two ports are configured for iSCSI), the connection can change protocols from iSCSI to FC or from FC to iSCSI.
Figure 11: Creating a virtual replication set from the MSA 2040 or MSA 1040 for a volume from the Replications topic in the SMU
Figure 12: Creating a virtual replication set from the MSA 2050 or MSA 1050 for a volume from the Replications topic in the SMU
Alternatively, from the Volumes topic, select the volume or a member of the volume group to replicate, then select the Create Replication
Set action. From the Create Replication Set panel, select Single Volume or Volume Group, provide the Replication Set Name, and modify
the Secondary Volume Name and Secondary Pool as desired. For the MSA 2050 and the MSA 1050, select the Queue Policy desired, and
select Secondary Volume Snapshot History and associated parameters if you wish to retain replication history snapshots. Note that setting
the Queue Policy to Queue Latest or enabling Secondary Volume Snapshot History is only successful when both the primary and
secondary volumes reside on MSA 2050 or MSA 1050 arrays. Finally, choose whether to schedule replications. Once the replication set has
been created, you’ll have a chance to initiate the initial replication if replications were not scheduled.
Figure 13: Creating a virtual replication set from the MSA 2040 or MSA 1040 for a volume group from the Volumes topic in the SMU
Figure 14: Creating a virtual replication set from the MSA 2050 or MSA 1050 for a volume from the Volumes topic in the SMU
Command example 6: Creating a virtual replication set from the MSA 2040 or MSA 1040 using the CLI
Command example 7: Creating a virtual replication set for the MSA 2050 or MSA 1050 using the CLI
Scheduling replications
The scheduler can be used to replicate data from a primary volume to the remote system at regular intervals. Creating a replication schedule consists of two parts: creating a replication task, which specifies the action to perform (in this case, replication) and the parameters associated with it; and creating a schedule for running the task. The CLI requires two commands to do this, while the SMU creates the task and schedule in one operation.
Common schedule parameters
Parameters common to all schedules are the start time and date, which must be in the future; the recurrence or repetition interval (if not set, the replication occurs only once); the end time and date or count limit (the number of times to run the task); and time and date constraints, which constrain only when the task starts and do not define a window in which the task must complete.
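As an illustration only, these common parameters can be modeled in a few lines of Python; the occurrences helper and its argument names are hypothetical and not part of the array CLI:

```python
from datetime import datetime, timedelta

def occurrences(start, interval=None, count=None, end=None):
    """Expand the common schedule parameters into concrete run times.
    start: first run (must be in the future on the array); interval:
    recurrence (no interval means the task runs once); count/end: stop
    conditions. The real scheduler can recur indefinitely; this sketch
    requires a stop condition so the list stays finite."""
    runs = [start]
    if interval is None:
        return runs                     # no recurrence: the task runs once
    if count is None and end is None:
        raise ValueError("recurring sketch needs a count limit or end time")
    t = start + interval
    while (count is None or len(runs) < count) and (end is None or t <= end):
        runs.append(t)
        t += interval
    return runs

# "start 2016-02-22 22:00 every 2 hours count 5" expands to five runs,
# the last at 2016-02-23 06:00
runs = occurrences(datetime(2016, 2, 22, 22, 0), timedelta(hours=2), count=5)
```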
Scheduling a linear replication
Notes on parameters for scheduled linear replication tasks
When a scheduled replication occurs, the name of a replication image (the name of the primary snapshot) created by the scheduled task will
begin with a prefix you specify, followed by “_R” and then a four digit number, starting with 0001; for example, if the prefix was “Data,” the
first replication image will have the name “Data_R0001.”
To control space usage, specify the number of images (replication snapshots) to retain; this is the retention count. The number is the
maximum number—fewer images (snapshots) may be retained due to snapshot space limitations.
There are two replication modes: in one, the system creates a new snapshot and replicates it; in the other, the system replicates the most recent existing snapshot. The second mode is highly useful when another application, such as VSS, performs the actual snapshot creation.
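The naming and retention rules above can be sketched as follows; image_name and retained are hypothetical helpers for illustration, not array commands:

```python
def image_name(prefix, n):
    """Replication images created by a scheduled task are named
    <prefix>_Rnnnn, with a four-digit counter that starts at 0001."""
    return f"{prefix}_R{n:04d}"

def retained(images, retention_count):
    """Keep at most retention_count of the newest images; on a real
    array fewer may survive if snapshot space runs low (not modeled)."""
    return images[-retention_count:]

# With the prefix "Data", the first image is Data_R0001
names = [image_name("Data", i) for i in range(1, 6)]
```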
Figure 15: Creating a linear replication schedule when the replication set is created using the SMU
Option 2: Create the schedule for an existing replication set. Select the primary volume’s Provisioning > Replicate Volume page, and select
the Scheduled radio button.
Figure 16: Creating a linear replication schedule for an existing replication set using the SMU
Then, create the schedule. You must provide the task to run and the schedule’s name; while you must also provide a schedule specification,
only the start time is required. See the Common schedule parameters section for more information. The minimum interval is 30 minutes.
# create schedule schedule-specification "start 2016-02-22 22:00 every 2 hours count 5" task-name DataRepTask
DataRepSchedule
Success: Command completed successfully. (DataRepSchedule) - The schedule was created.
Figure 17: Creating a virtual replication schedule from the MSA 2040 or MSA 1040 using the SMU
Figure 18: Creating a virtual replication schedule from the MSA 2050 or MSA 1050 using the SMU
Command example 10: Creating a virtual replication task using the CLI
Then, create the schedule. You must provide the task to run and the schedule’s name; while you must also provide a schedule specification,
only the start time is required. See the Common schedule parameters section for more information. For the MSA 2050 and the MSA 1050,
the minimum interval is 30 minutes; for the MSA 2040 and the MSA 1040, the minimum interval is 60 minutes.
# create schedule schedule-specification "start 2016-02-25 07:00 every 60 minutes only any weekday of year"
task-name SrcDstTask SrcDstSched
Success: Command completed successfully. (SrcDstSched) - The schedule was created.
Command example 11: Creating a virtual replication schedule using the CLI
Command example 13: Exporting a snapshot of a replication image using the CLI
Command example 14: Creating a snapshot of a replication volume using the CLI
You can choose to retain up to 16 snapshots; once the snapshot count is exceeded, the oldest unmapped history snapshot is discarded automatically, irrespective of its retention priority, even if that priority is never-delete. Manually created snapshots do not count against the retention limit and are not managed by the snapshot history feature; snapshots created by this feature do count against the array-wide maximum number of licensed snapshots. You can set the retention priority of replication history snapshots just as you do for any other volume snapshot; this specifies the retention priority with regard to snapshot space, not the snapshot history retention (see setting the snapshot retention priority for a volume for more information).
Snapshot names are of the form <basename>_nnnn, where nnnn is an integer incremented for each subsequent snapshot and basename is a name you choose, up to 26 bytes. If primary snapshots are enabled, snapshots of the same name (including digits) will exist on both the primary and secondary arrays. If a history snapshot would be created but a volume with that name already exists, the history snapshot is not created; the replication operation itself is not affected. Every replication request increments the number in the history snapshot's name, whether or not the replication actually occurs. For example, if history snapshot SrcDst_0004 is currently replicating and four more replications are attempted before it completes, the next history snapshot will be SrcDst_0008 (SrcDst_0005 is replaced by SrcDst_0006, which is replaced by SrcDst_0007, which is replaced by SrcDst_0008 in the queue). As another example, if a replication set is created without replication snapshot history enabled, five replications occur, and replication snapshot history is then enabled with the snapshot basename SrcDst, the first replication history snapshot will be SrcDst_0006.
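A toy simulation of this counter behavior (the SnapshotHistory class is illustrative only, not an array interface) reproduces the SrcDst_0004 to SrcDst_0008 example:

```python
class SnapshotHistory:
    """Toy model of replication snapshot-history naming: every
    replication request bumps the counter, whether or not that
    replication actually runs, so queued-and-replaced requests
    leave gaps in the retained names."""
    def __init__(self, basename, limit=16):
        self.basename, self.limit, self.n = basename, limit, 0
        self.snapshots = []              # oldest first

    def request(self, runs=True):
        self.n += 1                      # counter moves on every request
        if runs:
            self.snapshots.append(f"{self.basename}_{self.n:04d}")
            if len(self.snapshots) > self.limit:
                self.snapshots.pop(0)    # oldest history snapshot discarded
        return self.n

h = SnapshotHistory("SrcDst")
for _ in range(4):                       # SrcDst_0001 .. SrcDst_0004 replicate
    h.request()
for _ in range(3):                       # three requests queue and are replaced
    h.request(runs=False)
h.request()                              # next snapshot that runs is SrcDst_0008
```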
Since a secondary volume cannot be mapped, unmap a primary volume before changing it to a secondary volume. Once the data is
replicated back to the local system from the remote system, change the local system’s volume to a primary volume and change the remote
system’s volume to a secondary volume. Note that while both volumes can be designated as primary, only one volume in a set can be a
secondary volume.
Use the secondary or primary volume’s Provisioning > Set Replication Primary Volume page, or the set replication-primary-volume
command.
Command example 15: Setting the primary volume using the CLI
Figure 24: Check remote system link output with iSCSI connectivity using the SMU
Command example 16: Check remote system link CLI output where the remote system is iSCSI only
In the system’s Wizards > Replication Setup Wizard, you can also enable a remote link check by selecting the check box.
Figure 25: Check remote system link in the Replication Setup Wizard of the SMU
For the MSA 2050 and the MSA 1050, use the Query Peer Connection action in the Replications topic to verify the data links. Note that when displaying the data links to a remote MSA 2040 or MSA 1040 system, the FC ports of the remote system are shown even though they cannot be used in a peer connection.
Figure 26: Query peer connection output from the MSA 2050 or MSA 1050 using the SMU
For virtual replications, use the CLI command show peer-connections with the verify-links parameter to check the data link.
# show peer-connections verify-links PhxSea
Info: This may take a few minutes to ping all port combinations...
Peer Connections
----------------
Peer Connection Name: PhxSea
Peer Connection Type: iSCSI
Connection Status: Online
Health: OK
Health Reason:
Health Recommendation:
Command example 17: Verify links for a peer connection for virtual replications using the CLI
To check links between ports for local linear replications, use the system's Tools > Check Local System Link page in the SMU, or run the verify links CLI command. This checks links from controller A ports to controller B ports regardless of which controller (A or B) you run the command from.
# verify links
Port Type Links
---------------------------------------------
A1 FC B1
A2 FC B2
A3 iSCSI B3
A4 iSCSI B4
B1 FC A1
B2 FC A2
B3 iSCSI A3
B4 iSCSI A4
---------------------------------------------
Success: Command completed successfully.
In the CLI, you can use the same command to check remote system links for replication purposes; this tests the links to be used for
replication from one system to another system.
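Assuming the direct same-numbered-port cabling shown in the example output above, the expected link matrix can be sketched as a hypothetical helper (this is not the array's discovery logic):

```python
def expected_links(port_protocols):
    """port_protocols maps a port number to its protocol, e.g.
    {1: "FC", 2: "FC", 3: "iSCSI", 4: "iSCSI"}. In the direct-connect
    cabling of the example output, each controller A port links to the
    same-numbered controller B port, and vice versa."""
    rows = [(f"A{n}", proto, f"B{n}")
            for n, proto in sorted(port_protocols.items())]
    rows += [(f"B{n}", proto, f"A{n}")
             for n, proto in sorted(port_protocols.items())]
    return rows

# Reproduce the eight-row matrix from the verify links output
rows = expected_links({1: "FC", 2: "FC", 3: "iSCSI", 4: "iSCSI"})
```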
In the system’s Wizards > Replication Setup Wizard, you can also enable a local link check by selecting the check box.
Figure 28: Check local link in the Replication Setup Wizard of the SMU
Note
This field shows N/A for a local primary or secondary volume.
Figure 29: Connected ports being used for a linear replication using the SMU
Command example 19: Showing connected ports for a linear replication using the CLI
Figure 30: Connected ports used in a peer connection for virtual replications using the SMU
Command example 20: Showing connected ports for a peer connection for virtual replications using the CLI
• After the CHAP records are created, enable CHAP on the primary system, the secondary system, or both.
To enable CHAP, use the set iscsi-parameters command:
# set iscsi-parameters chap enabled
Table 2. CHAP settings and corresponding behavior with Remote Snap

Local system CHAP setting                          Remote system CHAP setting                         Behavior
CHAP Disabled (Secret: No; CHAP record: No)        CHAP Disabled (Secret: No; CHAP record: No)        Remote Snap works fine. No iSCSI authentication.
CHAP Enabled (Secret: SECRET1; CHAP record: Yes)   CHAP Enabled (Secret: SECRET1; CHAP record: Yes)   Remote Snap works fine.
CHAP Enabled (Secret: SECRET1; CHAP record: Yes)   CHAP Enabled (Secret: SECRET2; CHAP record: Yes)   Remote Snap will fail. Use the same secret for both the local and remote systems.
CHAP Enabled (Secret: No; CHAP record: No)         CHAP Enabled (Secret: No; CHAP record: No)         Remote Snap will fail. Enabling CHAP without specifying a secret for an iSCSI initiator effectively blocks that initiator.
CHAP Disabled (Secret: SECRET1; CHAP record: Yes)  CHAP Enabled (Secret: SECRET1; CHAP record: Yes)   Remote Snap works fine.
CHAP Disabled (Secret: SECRET1; CHAP record: Yes)  CHAP Enabled (Secret: SECRET2; CHAP record: Yes)   Remote Snap will fail. Use the same secret for both the local and remote systems.
Note
If you are performing a local replication involving iSCSI ports, CHAP will not be used.
Disabling or enabling CHAP will cause the host ports to reset. If the CHAP records are not configured correctly (see Table 2, CHAP settings and corresponding behavior with Remote Snap), then replication cannot occur.
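The outcomes in Table 2 can be summarized as a small decision function; remote_snap_ok is a hypothetical illustration of the rule, not an array command:

```python
def remote_snap_ok(local, remote):
    """Each side is (chap_enabled, secret), where secret is None when no
    CHAP record exists. Replication works when CHAP is fully disabled
    with no records, or when every side that participates has a CHAP
    record and the secrets match; enabling CHAP with a missing or
    mismatched secret blocks replication."""
    l_on, l_secret = local
    r_on, r_secret = remote
    if not l_on and not r_on and l_secret is None and r_secret is None:
        return True                      # no iSCSI authentication at all
    # CHAP in play on at least one side: both need matching records
    return l_secret is not None and l_secret == r_secret
```

For example, remote_snap_ok((True, "SECRET1"), (True, "SECRET2")) is False, matching the table row that calls for using the same secret on both systems.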
Using FC
You can set up local and remote sites connected via an FC network and perform replications over FC using linear replication sets or, when both the primary and secondary volumes reside on MSA 2050 or MSA 1050 arrays, virtual replication sets. This is useful when the local and remote sites are in different blocks of a campus or building.
FC ports can be used to transfer regular data while they are being used for replication. However, this action will result in decreased
performance for both the datapath and the replication transfer.
Using iSCSI
When the local and remote systems are in different geographical regions, you can create replication sets using iSCSI to perform replications
over a WAN. For example, when the local system is in New York and you are planning to set up your backup system (remote system) in
Houston, you can create remote replication sets using iSCSI as the transfer media for performing replications.
For linear replications, perform a physical media transfer to overcome bandwidth and latency issues with the initial replication. These issues
can sometimes be caused by a large amount of data in the primary volume getting replicated to a remote system. (See more on these issues
in the Physical media transfer section.)
For virtual replications, you may want to co-locate the systems to overcome bandwidth and latency limitations of a WAN. However, since
only allocated data is transferred, network limitations may be acceptable when the systems are dispersed geographically.
Local replication and physical media transfer (for linear replications only)
See the following for an illustration of local replication and physical media transfer, resulting in a remote replication.
Important
A secondary volume cannot be mapped, so be sure to unmap the original primary volume before attempting to make it a secondary volume.
2. Replicate any data written to the remote volume (now acting as primary volume residing at remote system) to the volume residing at the
local system (now acting as secondary volume). This can be performed in a single replication or in multiple replications. This ensures that
all data has been transferred properly.
After all the data is replicated back to the local site, convert the volume at the local site to the primary volume and then convert the
remote volume to the secondary volume.
a. To convert a primary volume to a secondary volume, set the other volume of the replication set as the primary—perform this
operation on both systems. You can perform this operation using the CLI command set replication-primary-volume or
using the SMU via the volume’s Provisioning > Set Replication Primary Volume function.
Re-establish the replication set to the remote site. Continue using the scheduler for running remote replications in regular intervals.
Figure 33a: Disaster recovery operations for linear replications—Failover to remote site
Figure 33b: Disaster recovery operations for linear replications—Failback to local site
Virtual replications
To bring up the remote site, keep three constraints in mind: the secondary volume cannot be mapped, the direction of the replication set cannot be reversed, and a snapshot can serve as the primary volume of a replication set. Once at least one replication has completed, you can allow access to the secondary volume's data in one of three ways: delete the replication set, which removes the secondary volume from the set and converts it to a standard base volume; create a snapshot of the secondary volume and access the snapshot rather than the volume itself; or enable replication snapshot history and use the snapshots the system creates automatically. If the replication set is deleted, the only way to include the former secondary volume in a replication set again is to create a new replication set with that volume as the primary, or source, volume.
The preferred method that provides the most flexibility is to use two snapshots of the secondary volume—one that is mapped read-write to
hosts and is intended for modification of the data, and one that, regardless of the type of mapping (read-only or read-write), is not to be modified.
When the primary site is recovered, if the primary volume and replication set is intact, you have the ability to determine from the two snapshots
what has changed, and copy those changes back to the primary site, either to the primary volume directly, or through the primary server.
If the primary volume or replication set cannot be recovered, or if the amount of modified data is significant, or if the changes cannot easily
be determined, create a new replication set with the read-write snapshot of the secondary volume as the new primary volume and replicate
back to the primary site. The reason for using the snapshot rather than deleting the replication set and using the secondary volume as the
primary volume for the new replication set is that you retain flexibility in case the primary volume becomes available and you want to copy
the changes back to the primary volume and leave the original replication set in place. Once you’ve performed the initial replication on the
“resynchronizing” replication set, create a snapshot of the secondary volume, copy that snapshot to a new, independent volume and create
a new replication set using the new volume as the primary volume for the new replication set. The reason for using a snapshot rather than
deleting the replication set and using the secondary volume as the new primary volume is to provide flexibility in case additional changes
need to be replicated from the backup site to the primary site. Copying the snapshot to a new, independent volume rather than using the
snapshot itself allows you to clean up the resynchronizing replication set without consuming space for both the volume and the snapshot.
If the primary volume or replication set cannot be recovered, or if the amount of modified data is significant, or if the changes cannot easily
be determined, and you have limited space on the local and remote arrays, create a new replication set with the read-write snapshot of the
secondary volume as the new primary volume and replicate back to the primary site. Once you’ve performed the initial replication on the
“resynchronizing” replication set, delete the replication set and delete the original secondary volume and modified snapshot on the remote
array, and use what was the secondary volume on the local array as the primary volume of a new replication set.
Figure 37: Resync to primary site by creating replication sets when space is limited
Use cases
This white paper provides examples that demonstrate Remote Snap’s ability to replicate data in various situations.
Single office with a remote site for backup and disaster recovery using iSCSI to replicate data
Figure 38: Single office with a remote site for backup and disaster recovery (iSCSI)
Command example 23: Single office with a remote site for backup and disaster recovery (iSCSI) CLI output—linear replication
# add disk-group disks 1.1-3 level raid5 pool a type virtual dg-r5-a
Success: Command completed successfully.
# replicate FSDATA-Replication
Success: Command completed successfully.
Command example 24: Single office with a remote site for backup and disaster recovery (iSCSI) CLI output—virtual replication (where both primary and secondary volumes reside on an MSA 2050 or MSA 1050, with replication snapshot history enabled)
To configure a single office with a remote site for backup and disaster recovery (iSCSI):
1. Set up a P2000 G3 FC/iSCSI combo or iSCSI controller array, an MSA 1050 or MSA 1040 iSCSI array, or an MSA 2050 or MSA 2040
SAN array with iSCSI-configured ports with enough disks (according to the application load and users), then configure the management
ports and iSCSI ports with IP addresses. Install the Remote Snap license if one has been purchased, or install the temporary license from
the system’s Tools > Install License page of the SMU (for the P2000 only). See the Setup requirements section for additional license
and other information.
2. Create the vdisks or disk groups and pools, then the master or base volumes FS Data and App A Data; if using linear replication, enable
snapshots when creating the volumes. For linear replication, if an existing snap pool is not specified, a snap pool is automatically created
with the default policy and size, or you can adjust the settings as necessary. Make sure the volumes are in different vdisks or pools and
that each vdisk or pool has enough space to expand the snap pool or snapshot space in the future.
3. Connect your array to the WAN. If using iSCSI over the WAN as part of your disaster recovery solution, connect your file server and
application server to the WAN. Connecting the management port of an array to the WAN helps you to manage the array remotely and is
necessary when using the SMU to create linear replication sets.
4. Map the volumes to the file server and the application server.
5. Identify a remote location and set up a second P2000 G3 FC/iSCSI combo or iSCSI controller array, an MSA 1050 or MSA 1040 iSCSI
array, or an MSA 2050 or MSA 2040 SAN array with iSCSI-configured ports and configure both management ports and the iSCSI ports.
This is the remote system. Configure the vdisks or pools to accommodate secondary volumes at a later stage.
6. Set up connection with the remote system:
a. For linear replications, in both the local system and the remote system, add the other system using the system’s Configuration >
Remote Systems > Add Remote System page of the SMU or the create remote-system command in the CLI.
b. For virtual replications, create a peer connection between the systems using the Create Peer Connection action in the Replications
topic of the SMU or the create peer-connection command in the CLI.
7. Verify the datapath between your local system and remote system. For linear replication, use the remote system’s Tools > Verify
Remote Link page of the SMU, or the verify links CLI command. For virtual replication, use the query peer-connection
command in the CLI. For the MSA 2050 or the MSA 1050, use the Query Peer Connection action of the Replications topic of the SMU.
Always configure sufficient iSCSI ports to facilitate a working redundant connection to the WAN.
8. Set up the linear replication sets for the volumes FS Data and App A Data using the system’s Wizards > Replication Setup Wizard,
using the volume’s Provisioning > Replicate Volume, or using the create replication-set CLI command, and choose iSCSI as
the link type. Set up the virtual replication sets for the volumes FS Data and App A Data using the Create Replication Set action in the
Replication topic in the SMU, or the create replication-set command in the CLI.
9. After the setup is complete, schedule the replication in desired intervals, based on the application load, critical data, replication window
(the time it takes to perform a replication) and so on. This enables you to have a complete backup and disaster recovery setup.
10. Verify the progress of replications by checking the replication images for linear replications, or by checking the replication sets for virtual
replications. This will list the progress or a completed message.
11. Verify the data at the remote location by exporting the linear replication image to a snapshot, or by creating a snapshot of the secondary
volume of the virtual replication set, or, when both the primary and secondary volumes reside on MSA 2050 or MSA 1050 arrays, by
enabling replication snapshot history, and mounting the snapshot to a host.
In case of a failure at the local site, it is possible to switch the application to the remote site data by employing the procedures defined in the
Disaster recovery operations section. Alternatives include the following:
Linear replications
• Move the remote array to the local site, convert the secondary volumes to primary or delete the replication sets, and map the volumes to
the servers.
• Move disks or enclosures that contain the secondary volumes to the local site, install or attach to the local array, convert the secondary
volumes to primary, and map them to the servers.
• Replace the local array with a new array, convert the remote secondary volumes to primary, and then replicate the data to the new array.
Once done, convert the volumes of the new array to primary, map them to the servers, and convert the volumes on the remote array back
to secondary.
• Convert the secondary volumes at the remote array to primary or delete the replication sets and map the volumes to the servers.
Virtual replications
• Move the remote array to the local site, use replication history snapshots, or create snapshots of the secondary volumes, or delete the
replication sets and map the volumes or snapshots to the servers.
• Replace the local array with a new array, delete the replication sets, create new replication sets using the original secondary volumes as
the primary volumes, and then replicate the data to the new array. Once done, delete the new replication sets, map the new volumes to
the servers, create replication sets using the volumes at the primary site as the primary volumes, and replicate to the remote secondary array.
• Use replication history snapshots, or create snapshots of the secondary volumes, or delete the replication sets on the remote array, and map the volumes to the servers.
Single office with local site disaster recovery and backup using iSCSI and host access using FC
Figure 39: Single office with local site disaster recovery and backup using iSCSI and host access using FC
To configure a single office with local site disaster recovery and backup using iSCSI and host access using FC:
1. Set up two P2000 G3 combo arrays or MSA 2050 or MSA 2040 SAN arrays with host ports set to a combination of FC and iSCSI at the
local site.
2. Connect the file servers and application servers to the arrays via an FC SAN.
3. Mount the volumes to the applications.
4. Create multiple replication sets with FS Data and App A Data as primary volumes and the secondary volumes on the second P2000 G3,
MSA 2050, or MSA 2040 system. Create a replication set with App B Data as the primary volume and the secondary volume on the first
system. These replication sets are created using the iSCSI link type. We recommend that the iSCSI host ports of both of the P2000 G3, MSA 2050, or MSA 2040 systems be connected by a dedicated Ethernet link (LAN). See also the Network requirements section.
Switch the applications to the other system if any failures occur on either of the two systems.
Single office with a local site disaster recovery and backup using FC (only for linear replications or virtual
replications when both the primary and secondary volumes reside on MSA 2050 or MSA 1050 arrays)
Figure 40: Single office with a local site disaster recovery and backup using FC
To configure a single office with a local site disaster recovery and backup using FC:
1. For linear replications, set up two MSA 2040 Storage, MSA 1040 Storage, or P2000 G3 FC arrays at the local site. You can use any
combination of the three models—linear replication can occur between the three models as long as the P2000 G3 array has FW version
TS250 or later. To use virtual replication, both the primary and secondary volume must reside on MSA 2050 or MSA 1050 arrays.
2. Connect the file and application servers to these arrays via an FC SAN.
3. Mount the volumes to the applications.
4. Create multiple replication sets with FS Data and App A Data as primary volumes and the secondary volumes on the second Storage
system. Create a replication set with App B Data as the primary volume and the secondary volume on the first system. These sets are
created using the FC link type. For virtual replications, use FC port(s) for the peer connection.
Switch the applications to the other system if any failures occur on either of the two systems.
Due to network bandwidth limitations, it may be beneficial to perform the initial replication as a local replication, then reconfigure it as a remote replication.
Review the Local replication and physical media transfer (for linear replications only) section. A brief overview of the steps:
1. Create a local replication on the local system.
2. Perform the initial replication.
3. Once the initial replication is complete, detach the secondary volume, which resides on the local system.
4. Once the detach operation is complete, stop the secondary volume’s vdisk and associated snap pool’s vdisk (if the secondary volume and
its snap pool reside on separate vdisks).
5. If moving a drive enclosure, power off the enclosure. If moving only the disks there is no need to power off the enclosure.
6. Remove the disks or enclosure containing the disks and attach or move them into the remote system.
7. If the secondary volume’s snap-pools are on a different vdisk from the volume itself, start the snap pool’s vdisk.
8. Start the secondary volume’s vdisks. The secondary volume appears on the system at the remote site.
9. Reattach the secondary volume to add it back to the set. This operation makes the secondary volume a part of the original set again.
10. Continue replicating from the local site.
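The array-side portion of the steps above can be sketched as an ordered runbook. This is a hedged illustration only: the command names (detach/reattach replication-volume, stop/start vdisk) are assumed from the linear-replication CLI, and the volume and vdisk names (SecVol, vdSec, vdSnap) are hypothetical placeholders.

```python
# Hypothetical runbook for the physical-media-transfer steps above.
# Command names are assumed from the linear-replication CLI; the
# volume/vdisk names are placeholders, not real objects.

def media_transfer_runbook(sec_vol, sec_vdisk, snap_pool_vdisk=None):
    """Return (local_cmds, remote_cmds): the ordered CLI commands to
    run on the local system before moving the disks, and on the
    remote system after the disks arrive."""
    local = [
        f"detach replication-volume {sec_vol}",    # step 3
        f"stop vdisk {sec_vdisk}",                 # step 4
    ]
    if snap_pool_vdisk and snap_pool_vdisk != sec_vdisk:
        # snap pool lives on its own vdisk: stop it too (step 4)
        local.insert(1, f"stop vdisk {snap_pool_vdisk}")
    remote = []
    if snap_pool_vdisk and snap_pool_vdisk != sec_vdisk:
        remote.append(f"start vdisk {snap_pool_vdisk}")  # step 7
    remote += [
        f"start vdisk {sec_vdisk}",                # step 8
        f"reattach replication-volume {sec_vol}",  # step 9
    ]
    return local, remote

local_cmds, remote_cmds = media_transfer_runbook("SecVol", "vdSec", "vdSnap")
```

The helper only builds command strings; physically moving the enclosure or disks (steps 5 and 6) happens between the two lists.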
If you are using two MSA 2050 systems, each with half of its host ports configured for FC and half for iSCSI, and you ultimately intend to use iSCSI for remote replication, it may be beneficial to perform the initial replication with the remote array at the local site, then move the remote array to the remote site and reconfigure the set as a remote replication. Because FC replication may be faster than iSCSI replication, especially with 1GbE iSCSI SFPs, it can be more efficient to perform the initial replication over FC and then, after moving the remote array to the remote site, change the protocol for the peer connection by setting it to use the iSCSI address.
1. With the remote system at the local site, and with both systems able to reach each other via FC, create a peer connection between the
local system and the remote system specifying an FC WWN.
2. Create the replication set(s) from the local, primary system using the peer connection.
3. Complete the initial replication(s).
4. Suspend all replications using the peer connection.
5. Move the remote system to the remote site.
6. Change the peer connection to use iSCSI using the set peer-connection CLI command on either system, specifying one of the
iSCSI host addresses of the other system.
7. Resume the replication set(s).
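Step 6 can be sketched as a small helper that builds the CLI string. Note the hedging: the source names the set peer-connection command, but the remote-port-address parameter name, the connection name peer1, and the IP address shown here are assumptions for illustration.

```python
# Sketch of step 6: repointing the peer connection at an iSCSI
# address. The parameter name "remote-port-address" and the example
# name/address are assumptions, not verified CLI syntax.

def set_peer_connection_cmd(name, remote_addr):
    """Build the CLI string that changes a peer connection's remote
    address (an iSCSI host-port IP here, an FC WWN originally)."""
    return f"set peer-connection remote-port-address {remote_addr} {name}"

cmd = set_peer_connection_cmd("peer1", "10.10.10.2")
```

Run on either system, specifying one of the other system's iSCSI host addresses, as described in step 6.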
Figure 43: Two branch offices with disaster recovery and backup
1. Set up two P2000 G3 FC/iSCSI combo controller arrays or MSA 2050 SAN or MSA 2040 SAN arrays with host ports set to a
combination of FC and iSCSI and with enough disks (according to the application load, users, and secondary volumes), then configure the
management ports and iSCSI ports with IP addresses. Install the Remote Snap licenses if they have been purchased, or install the
temporary licenses from the system's Tools > Install License page of the SMU (for the P2000 only).
2. On the array at site A, create the master or base volumes FS A Data and App A Data; if using linear replication, enable snapshots when
creating the volumes. For linear replications, if an existing snap pool is not specified, a snap pool is automatically created with the default
policy and size, or you can adjust the settings as necessary. Make sure the volumes are in different vdisks or pools and that each vdisk or
pool has enough space to expand the snap pool or snapshot space in the future.
3. On the array at site B, create the master or base volumes FS B Data and App B Data similar to the instructions above.
4. Connect both arrays to the WAN. If using iSCSI over the WAN as part of your disaster recovery solution, connect your file servers and
application servers to the WAN. Connecting the management ports of the arrays to the WAN helps you to manage either array remotely
and is necessary when using the SMU to create linear replication sets.
5. Map the volumes to the file servers and application servers.
6. At site A, create remote replication sets using the primary volumes FS A and App A. Corresponding secondary volumes are created
automatically on the array at site B.
7. Schedule replications at regular intervals. This ensures that data at the local site is backed up to the array at site B.
8. At site B, create remote replication sets using the primary volumes FS B and App B. Corresponding secondary volumes are created
automatically on the array at site A.
9. Schedule replications at regular intervals so that all data at site B is backed up to site A.
In case of failure at either site, you can fail over the application and file servers to the available site.
Figure 44: Single office with target model using FC and iSCSI ports
To configure a single office with a target model using FC and iSCSI ports:
1. Set up a P2000 G3 FC/iSCSI combo controller array or an MSA 2050 or MSA 2040 SAN array with host ports set to a combination of
FC and iSCSI with enough disks, according to the application load and number of users, and configure the management ports and iSCSI
ports with IP addresses.
2. Create master or base volumes App A Data, App B Data, and FS Data in the array.
3. Map FS Data to the iSCSI port so that the file server can use this volume via the iSCSI interface.
4. Map App A Data and App B Data volumes to the FC port so that the application servers can access these volumes via the FC SAN.
Using the P2000 G3 FC/iSCSI combo controllers or MSA 2050 or MSA 2040 SAN arrays with host ports set to a combination of FC and
iSCSI ports provides several advantages:
• You can leverage both the FC and iSCSI ports for target-mode operations.
• You can connect file servers and other application servers that are not part of the FC SAN to the array using the iSCSI ports via the LAN
or WAN.
• You can connect new servers with FC connectivity directly through the FC SAN.
Note
Accessing a volume from a host through both iSCSI and FC is not supported.
Multiple local offices with a centralized backup (only for linear replications or virtual replications when the
Central Office array is an MSA 2050)
1. Set up P2000 G3 FC/iSCSI combo controller arrays or MSA 2050 or MSA 2040 SAN arrays with host ports set to a combination of FC
and iSCSI with sufficient storage and configure the management and iSCSI ports with valid IP addresses. Install the Remote Snap license
at remote sites 1, 2, and 3.
2. For virtual replications, create peer connections between the remote sites and the central office.
3. Create Primary Volume #1, Primary Volume #2, and Primary Volume #3 on the corresponding remote site.
4. Set up a P2000 G3 FC/iSCSI controller array or an MSA 2050 or MSA 2040 SAN array with host ports set to a combination of FC and
iSCSI at the centralized location and make sure that it has enough disks to accommodate data coming from remote sites 1, 2, and 3 and
install the Remote Snap license.
5. Connect sites 1, 2, and 3 with the central site using the WAN and make sure iSCSI ports are configured and connected to this WAN.
6. Make sure the iSCSI ports of the arrays at site 1, 2, and 3 can access the iSCSI ports of the array at the central site.
7. Create replication sets for volume Primary Volume #1, specifying the central system and vdisks on it (for linear replications) or the peer
connection (for virtual replications) to allow automatic creation of secondary volumes at the central site.
Repeat step 7 for sites 2 and 3.
Schedule replications at regular intervals so that data from sites 1, 2, and 3 replicates to the central site.
Replication of application-consistent snapshots (only for linear replications or virtual replications when the
primary volume resides on an MSA 2050 or MSA 1050)
You can replicate application-consistent snapshots on a local array to a remote array. Use the SMU for manual operation and the CLI for
scripted operation. Both options require you to establish a mechanism that enables all application I/O to be suspended (quiesced) before the
snapshot is taken and resumed afterwards. Many applications enable this via a scripting method. For an illustration of the following steps,
see Figure 46.
To create application-consistent snapshots for any supported OS and any application:
1. Create the application volume. When defining the volume names, use a string name variant that will help identify the volumes as a larger
managed group:
For linear replications:
Using the SMU
Use the system’s Wizards > Provisioning Wizard to create the necessary vdisks and volumes.
Using the CLI
Use the create vdisk command.
Use the create master-volume command.
For virtual replications:
Using the SMU
Use the Add Disk Group action of the Pools topic to create or expand the necessary pools.
Use the Create Virtual Volumes action of the Volumes topic to create the necessary volumes.
Using the CLI
Use the add disk-group command.
Use the create volume command.
2. Create a replication set for each volume used by the application. Use a string name variant when defining the replication set name. This
helps identify each replication set as part of a larger managed group.
For linear replications:
Using the SMU
Use the systems’ Wizards > Replication Setup Wizard for each volume defined in step 1.
Using the CLI
Use the create replication-set command.
For virtual replications
Using the SMU
Use the Create Peer Connection action from the Replications topic.
Use the Create Replication Set action from the Volumes or Replications topic for each volume defined in step 1. Create
replication sets for volumes, not volume groups, since the last-snapshot option (used later) is not supported for replication sets
of volume groups.
Using the CLI
Use the create peer-connection command.
Use the create replication-set command.
3. When the application and its volumes are in a quiesced state, you can create I/O-consistent snapshots across all volumes at the same time.
For linear replications:
Using the SMU
Use the Provisioning > Create Multiple Snapshots operation of the system or vdisk.
Using the CLI
Use the create snapshots command.
For virtual replications:
Using the SMU
Select multiple volumes in the Volumes topic and then select the Create Snapshot action.
Using the CLI
Use the create snapshots command.
The SMU also enables scheduling snapshots one volume at a time. For application-consistent snapshots across multiple volumes, we
recommend server-based scheduling, as explained in step 4.
4. For an automated solution, schedule scripts on the application server that coordinate the quiescing of I/O, invoking of the CLI snapshot
commands, and resuming I/O. Verify that you defined the desired snapshot retention count. See Command example 25 for an example of
CLI snapshot commands.
The time interval between these snapshot groups will be utilized in the following steps.
Note
To achieve application-consistent snapshots, you must ensure application I/O to all volumes at the server level is suspended prior to taking
snapshots, and then resumed after the snapshots are taken. The array firmware will only create point-in-time consistent snapshots of
indicated volumes.
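The server-side coordination described in step 4 can be sketched as follows. This is a minimal illustration under stated assumptions: the quiesce/resume hooks stand in for the application's own freeze/thaw mechanism, and run_cli stands in for however the script delivers the CLI command to the array (for example, over SSH); only the create snapshots command itself comes from the source.

```python
# Minimal sketch of the quiesce -> snapshot -> resume coordination in
# step 4. quiesce/resume/run_cli are placeholders for real hooks.
import datetime

def build_snapshot_cmd(volumes, suffix=None):
    """Build one create snapshots CLI string covering all volumes,
    so the snapshots form an I/O-consistent group."""
    suffix = suffix or datetime.datetime.now().strftime("%Y%m%d%H%M")
    names = ",".join(f"{v}-{suffix}" for v in volumes)
    return f"create snapshots volume {','.join(volumes)} {names}"

def consistent_snapshot(volumes, quiesce, resume, run_cli):
    """Suspend application I/O, snapshot every volume at once, then
    resume I/O even if the snapshot command fails."""
    quiesce()
    try:
        run_cli(build_snapshot_cmd(volumes))
    finally:
        resume()

# Dry run: capture the command instead of sending it to an array.
sent = []
consistent_snapshot(["FSDATA", "APPDATA", "LOG"],
                    quiesce=lambda: None, resume=lambda: None,
                    run_cli=sent.append)
```

The try/finally mirrors the note above: I/O must be resumed after the snapshots are taken, no matter what, because the array firmware only guarantees point-in-time consistency of the indicated volumes.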
Create your own naming scheme (see the online help for create volume, create snapshots, and create replication-set
for name limitations) to manage your application data volumes, snapshot volumes, and replication sets. In your naming scheme, include
the ability to establish a recognizable grouping of multiple replication sets. This will help with managing the instances of your
application-consistent snapshots and the application-consistent replication sets when restore or export operations are used.
# create snapshots volume FSDATA,APPDATA,LOG fs1-snap,app1-snap,log1-snap
Success: Command completed successfully. (fs1-snap,app1-snap,log1-snap) - Snapshot(s) were created. (2016-02-01 17:24:33)
# show snapshots
vdisk    Serial Number                     Name       Creation Date/Time   Status     Status-Reason
  Source Volume  Snappool Name  Snap Data  Unique Data  Shared Data  Priority  User Priority  Type
---------------------------------------------------------------------------------------------------
vd-r5-a  00c0ffda02f30000cc94af5602000000  app1-snap  2016-02-01 17:24:29  Available  N/A
  APPDATA        spAPPDATA      0B         0B           0B           0x6000    0x0000         Standard snapshot
vd-r5-a  00c0ffda02f30000cc94af5601000000  fs1-snap   2016-02-01 17:24:29  Available  N/A
  FSDATA         spFSDATA       0B         0B           0B           0x6000    0x0000         Standard snapshot
vd-r5-a  00c0ffda02f30000cc94af5603000000  log1-snap  2016-02-01 17:24:29  Available  N/A
  LOG            spLOG          0B         0B           0B           0x6000    0x0000         Standard snapshot
---------------------------------------------------------------------------------------------------
Success: Command completed successfully. (2016-02-01 17:24:39)
Command example 25: Examples of using the CLI for replication of application-consistent snapshots
Replication of Microsoft® VSS-based application-consistent snapshots (only for linear replications or virtual
replications when the primary volume resides on an MSA 2050 or MSA 1050)
You can replicate the Microsoft VSS-based application-consistent snapshots on a local array to a remote array.
To create application-consistent snapshots using VSS:
1. Create the volumes for your application. When defining the volume names, use a string name variant that helps identify the volumes as
part of a larger managed group.
For linear replications:
With the SMU
Use the system’s Wizards > Provisioning Wizard to create the necessary vdisks and volumes.
With the CLI
Use the create vdisk command.
Use the create master-volume command.
For virtual replications:
With the SMU
Use the Add Disk Group action of the Pools topic to create or expand the necessary pools.
Use the Create Virtual Volumes action of the Volumes topic to create the necessary volumes.
With the CLI
Use the add disk-group command.
Use the create volume command.
With a VDS client tool, refer to the vendor documentation to create the necessary volumes.
2. Create a replication set for each volume in the application. When defining the replication set name, use a string name variant that will
help identify each replication set as part of a larger managed group.
For linear replications:
With the SMU
Use the Replication Wizard for each volume defined in step 1.
With the CLI
Use the create replication-set command.
For virtual replications:
With the SMU
Use the Create Peer Connection action from the Replications topic.
Use the Create Replication Set action from the Volumes or Replications topic for each volume defined in step 1. Create
replication sets for volumes, not volume groups, since the last-snapshot option (used later) is not supported for replication sets
of volume groups.
With the CLI
Use the create peer-connection command.
Use the create replication-set command.
3. Determine an appropriate Microsoft VSS backup application, or VSS requestor, that is certified to manage your VSS-compliant
application.
The P2000 G3, MSA 1040, MSA 2040, MSA 1050, and MSA 2050 VSS hardware providers are compatible with Microsoft Windows®
certified backup applications.
For a general scripted solution, see the Microsoft VSS documentation for usage of the Windows Server Diskshadow (Windows Server 2008
and later) or VShadow (Windows Server 2003 and later) tools.
4. Configure your VSS backup application to perform VSS snapshots for all of your application’s volumes. The VSS backup application uses
the Microsoft VSS framework for managed coordination of quiescence of VSS-compatible applications and the creation of volume
snapshots through the VSS hardware provider.
Establish a recurring snapshot schedule with your VSS backup application.
The time interval between these snapshot groups will be used in the following steps.
Note
The VSS framework, the VSS Backup application (requestor), the VSS-compliant Application writer, and the VSS hardware provider achieve
application-consistent snapshots. The MSA 2050 Storage, MSA 1050 Storage, MSA 2040 Storage, MSA 1040 Storage, or P2000 G3
firmware only creates point-in-time snapshots of indicated volumes.
Figure 47: Setup steps for replication of the VSS-based application-consistent snapshots
Best practices
Fault tolerance
To achieve fault tolerance for Remote Snap setup, we recommend the following:
• For FC and iSCSI replications, the ports must be connected to at least one switch. For better protection, we recommend connecting half
of the ports (for example, the first port or first pair of ports on each controller) to one switch and the other half to a second switch, with
both switches connected to a single SAN. This avoids a single point of failure at the switch.
– Direct Attach configurations are not supported for replication over FC or iSCSI.
– The iSCSI ports must all be routable on a single network space.
In case of link failure, the replication operation will re-initiate within a specified amount of time. For linear replications, the amount of time is
defined by the parameter max-retry time of the set replication-volume-parameters command; the default value is 1800 seconds.
Set this time to a preferred value according to your setup. Once the retry time has passed, replication goes into a suspended state and then
needs user intervention to resume. For virtual replications, the system will attempt to resume the replication every 10 minutes for the first
hour, then every hour until the replication resumes. You can attempt to resume the virtual replication manually, or abort it, as desired.
• For linear replications, during a single replication, we recommend setting the maximum replication retry time on the secondary volume to
either 0 (retry forever), or 60 minutes for every 10 GB increment in volume size, to prevent a replication set from suspending when
multiple errors occur. This can be done in the CLI by issuing the following command:
set replication-volume-parameters max-retry-time <# in seconds>
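The sizing rule above (60 minutes of retry time per 10 GB increment in volume size, or 0 to retry forever) can be sketched as a small calculator. The volume name SecVol in the example command is a hypothetical placeholder.

```python
# Sketch of the max-retry-time sizing rule above: 60 minutes for
# every 10 GB increment in volume size, 0 meaning "retry forever".
# Output is in seconds, as the CLI parameter expects.
import math

def max_retry_seconds(volume_gb, retry_forever=False):
    if retry_forever:
        return 0
    increments = max(1, math.ceil(volume_gb / 10))  # 10 GB steps
    return increments * 60 * 60  # 60 minutes per step, in seconds

# For a hypothetical 25 GB secondary volume: 3 increments -> 10800 s.
cmd = f"set replication-volume-parameters max-retry-time {max_retry_seconds(25)} SecVol"
```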
• Replication services are supported on both single-controller and dual-controller environments for P2000 G3 arrays and MSA 2040
Storage. For the P2000 G3 array replication is supported only between similar environments. That is, a single-controller system can
replicate to a single-controller system or a dual-controller system can replicate to a dual-controller system. Replication between a
single-controller system and a dual-controller system is not supported. For MSA 2040 Storage this restriction does not apply; replication
is supported between a single-controller system and a dual-controller system. Only dual controller mode is supported for MSA 1040
Storage, MSA 1050 Storage, and MSA 2050 Storage. We recommend using a dual-controller array to try to avoid a failure of one
controller. If one controller fails, replication continues through the second controller.
• By default, a snap pool is created with a size equal to 20% of the volume size. An option is available to expand the snap pool size to the
desired value. By default, the snap pool policy is set to automatically expand when it reaches a threshold value of 90%. Note that the
expansion of a snap pool may consume the entire vdisk, limiting the ability to put additional data on that vdisk. We recommend setting the
auto-expansion size so that snap pools are not expanded too often. It is also important that you monitor the
threshold errors and ensure that you have free space to grow the snap pool as more data is retained.
• Snapshots can be manually deleted when they are no longer needed or automatically deleted through a snap pool policy. When a
snapshot is deleted, all data uniquely associated with that snapshot is deleted and associated space in the snap pool is freed for use.
To stay within the volumes-per-vdisk limit, delete unnecessary snapshots.
• Scheduled replications have a retention count—setting this appropriately can help maintain the snap pool size and expansion. Once the
retention count has been met, new snapshots displace the oldest snapshot for the replication set.
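The retention-count behavior above (once the count is met, each new snapshot displaces the oldest) can be modeled as a fixed-size queue. The snapshot names here are illustrative only.

```python
# Sketch of retention-count rollover: a bounded queue where each new
# replication snapshot pushes the oldest one out once the count is met.
from collections import deque

def retained_snapshots(names, retention_count):
    """Return the snapshots still kept after applying the count."""
    kept = deque(maxlen=retention_count)  # oldest entries fall off
    for name in names:
        kept.append(name)
    return list(kept)

snaps = [f"rep_{i:04d}" for i in range(10)]
print(retained_snapshots(snaps, 4))  # only the 4 most recent survive
```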
The size of the primary volume can be increased after creating the replication set, and the size of the pools that contain the volumes can
change as well, increasing by adding disk groups, or decreasing by removing disk groups.
License
• Use a temporary license to enable Remote Snap and get a hands-on experience. For live disaster recovery setups, we recommend
upgrading to a permanent license. A temporary license expires after 60 or 180 days, disabling further replications. If you choose not to
install a permanent license after the temporary license expires, you can access the data of the secondary volume by deleting the
replication set.
• With a temporary license, test local replications and gain experience before setting up remote replications with live systems.
• To set up remote replication, you must have a Remote Snap license for both the remote and local systems.
• Updating to a permanent license at a later stage preserves the replication images.
• By default, there is a 64-snapshot limit that can be upgraded to a maximum number of 512 snapshots.
• Exporting a replication image to a standard snapshot is subject to the licensed limit; replication snapshots are not counted against the
licensed limit. Install a license that allows for the appropriate number of snapshots.
• Enabling the temporary license directly from the SMU is available only on the P2000 G3 arrays.
Scheduling
Linear replications
• To help ensure that replication schedules are successful, we recommend scheduling no more than three volumes to start replicating
simultaneously, although as many as 16 (8 for the MSA 1040 Storage array) can replicate at the same time. These and other replications
should not be scheduled to start or recur less than one hour apart; if you schedule replications more frequently, some scheduled
replications may not have time to start.
• The Replicate most recent snapshot option on the primary volume’s Provisioning > Replication Volume page or specifying
last-snapshot for the replication-mode parameter of the create task command can be used when standard snapshots are
manually taken of the primary volume or when using any other tool such as Microsoft VSS and you want to replicate these snapshots.
This helps in achieving application-consistent snapshots.
• The retention count applies to both the primary and secondary system.
• You can set the replication image retention count to a preferred value. A best practice is to set the count such that deleting replication
images beyond the retention count is acceptable.
• Schedules are only associated with the primary volume. You can see the status of a schedule by selecting it from the primary volume’s
View > Overview panel.
• You can modify an existing schedule to change any of the parameters such as interval time and retention count using the system’s or
primary volume’s Provisioning > Modify Schedule page of the SMU.
• For linear replications, when standard snapshots are taken at the primary volume in regular intervals (manually or using VSS), select the
proper time interval for the replication scheduler so that the latest snapshot is always replicated to the remote system. The system
restricts the minimum time interval between replications to 30 minutes.
The following table provides a summary example.
Table 3. Tabulation of resources used with replication of application-consistent snapshots
Suppose the application uses two MSA volumes.
Suppose snapshots are taken every 2 hours with a retention of 32 instances for each volume. Suppose replications are taken every 6 hours
with a retention of 32 instances for each replication set.
Total hours before snapshot rollover: 64 hours (2 days 8 hours)
Total hours before replication rollover: 192 hours (8 days)
Total snapshots used by replication: 32 (per array)
Total volumes used by replication: 4 (per array)
Total vdisks used by replication: 2 (per array)
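The rollover figures in Table 3 follow directly from interval times retention; this sketch reproduces them for the two-volume example.

```python
# Reproduce the Table 3 rollover arithmetic: interval x retention
# gives the hours of history kept before the oldest instance rolls off.

def rollover_hours(interval_hours, retention_count):
    """Hours of history kept before the oldest instance is displaced."""
    return interval_hours * retention_count

snapshot_window = rollover_hours(2, 32)     # 64 hours (2 days 8 hours)
replication_window = rollover_hours(6, 32)  # 192 hours (8 days)
```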
Virtual replications
• Replications can be scheduled at most once per hour when the primary volume resides on an MSA 2040 or MSA 1040, and at most once
per half hour when it resides on an MSA 2050 or MSA 1050. Replications are not queued on the MSA 2040 and MSA 1040: a replication
is discarded if it starts while an existing replication on that replication set is still running. The MSA 2050 and MSA 1050 can queue up to
one replication. In either case, consider the rate of data change, the network capability, the number of replications, and the host I/O rate
when scheduling replications, to avoid discarding a replication.
• Schedules are only associated with the primary volume. You can see the status of a schedule by selecting the primary volume from the
Volumes topic and hovering over the schedule in the Schedules tab.
• You can modify an existing schedule using the Manage Schedules action of the replication selected in the Replications topic.
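The discard-versus-queue behavior described in the first bullet can be modeled with a small state function: a queue depth of 0 stands for the MSA 2040/1040 behavior and a depth of 1 for the MSA 2050/1050. This is an illustrative model, not firmware logic.

```python
# Model of the virtual-replication scheduling behavior above:
# queue_depth=0 ~ MSA 2040/1040 (discard), queue_depth=1 ~ MSA 2050/1050.

def schedule_replication(running, queued, queue_depth):
    """Return (running, queued, outcome) after a scheduled start fires."""
    if not running:
        return True, queued, "started"
    if queued < queue_depth:
        return running, queued + 1, "queued"
    return running, queued, "discarded"

# A start that fires while a replication is already running:
state = schedule_replication(True, 0, queue_depth=0)   # discarded
state2 = schedule_replication(True, 0, queue_depth=1)  # queued
```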
Property Value
Maximum vdisks 32
Maximum volumes 512
Maximum volumes per vdisk 128
Maximum snapshots per volume 127
Maximum LUNs 512
Maximum disks 149
Number of host ports 8
Replication limits
Table 9. Replication configuration limits for the P2000 G3 array
Property Value
Remote Systems 3
Replication Sets 16
Table 10. Replication configuration limits for the MSA 2040 array
Property Linear value Virtual value
Table 11. Replication configuration limits for the MSA 1040 array
Property Linear value Virtual value
Table 12. Replication configuration limits for the MSA 2050 array
Property Value
Peer Connections 4
Replication Sets 32
Volumes per Volume Group Replicated 16
Table 13. Replication configuration limits for the MSA 1050 array
Property Value
Peer Connections 1
Replication Sets 32
Volumes per Volume Group Replicated 16
Monitoring
Replication
For linear replications, you can monitor the progress of an ongoing replication by selecting the replication image listed in the navigation tree.
The right panel displays the status and percentage of progress. When the replication is completed, the status appears as Completed.
For virtual replications, you can check the status of an ongoing replication in the Replications topic and monitor its progress by hovering
over the replication set: the Replication Set Information panel shows the current run's progress, the current and last run times, and the
data transferred.
Events
When monitoring the progress of ongoing replication, view the event log for the following events:
• Event code 316—Replication license expired—This indicates that the temporary license has expired. Remote Snap will no longer be
available until a permanent license is installed. All the replication data will be preserved even after the license has expired, but you cannot
create a new replication set or perform more replications. If you choose not to install a permanent license after the temporary license
expires, you can access the data of the secondary volume by deleting the replication set.
• Event codes 229, 230, and 231—Snap pool threshold—The snap pool can fill up when there is steady I/O and replication snapshots
are taken at regular intervals. When the warning threshold is crossed (event code 229), consider taking action: remove older snapshots or
expand the vdisk.
• Event codes 418, 431, and 581—Replication suspended—If the ongoing replication is suspended, an event is received. Any further
linear replication initiated is queued. Once the problem is identified and fixed, you can manually resume the replications.
For more related events, see the HPE MSA 2050 Event Descriptions Reference Guide, the HPE MSA 1050 Event Descriptions Reference
Guide, the HPE MSA 2040 Event Descriptions Reference Guide, the HPE MSA 1040 Event Descriptions Reference Guide, or the
HPE P2000 G3 MSA System Event Descriptions Reference Guide.
SNMP traps and email (SMTP) notifications
You can set up the array to send SNMP traps and email notifications for the events described above. Using the v2 SMU, use the system’s
Configuration > Services > SNMP Notification or Configuration > Services > Email Notification pages. Using the v3 SMU, select the
Set Up Notifications action from the Home topic. For the CLI, use the set snmp-parameters and set email-parameters
commands.
Performance tips
For a gain of up to 20% in replication and host I/O performance, enable jumbo frames on all infrastructure components in the path (if all
of them support jumbo frames) and on the iSCSI controllers. Jumbo frames are disabled by default for the iSCSI host ports. You can enable
them using either the SMU or the CLI.
Note
If your infrastructure does not support jumbo frames, enabling them only on your controllers may actually lower performance or even
prevent the creation of replication sets or replications.
With the v2 SMU, enable jumbo frames by going to the system’s Configuration > System Settings > Host Interfaces.
With the v3 SMU, select the Set Up Host Ports action from the System topic, then select the Advanced Settings tab of the Host Ports
Settings panel.
With the CLI, enable jumbo frames by using the command set iscsi-parameters jumbo-frames enabled.
Troubleshooting
Issue
A replication does not start, or an ongoing replication is suspended.
Recommended actions
If performing a local linear replication, ensure all the ports are configured and connected via the switch.
Check the Remote Snap license status at the local and remote site. If you are running a temporary license, the license may have expired.
Install a permanent license and manually resume replication.
Use the remote system’s Tools > Check Remote System Link in the SMU to check the link connectivity between the local and remote systems.
Use the CLI command show peer-connections with the verify-links parameter to check the data link. Repair the link and make
sure all links are available between the systems, then manually resume the replication.
For virtual replications, the pool's overcommit flag may be enabled and its high threshold exceeded. Hover over the pool in the Pools topic
of the SMU, or use the show pools CLI command, to see whether overcommit is enabled, what the high threshold is set to, and whether
there is insufficient available space to continue. If necessary, add disk groups to the pool, or remove volumes or snapshots, to increase the
pool's available space.
The CHAP settings may not be correct: check that the CHAP records exist and that the secrets are correct.
Issue
You cannot perform an action such as changing the schedule for a replication set.
Recommended actions
Actions performed on a replication set, such as schedule creation or modification and adding or removing a secondary volume, must be
performed on the system where the primary volume resides.
Changing the primary volume is a coordinated effort between the local and remote systems. It must first be performed on the remote
system, and then on the local system. (To help remember the order, note that the secondary volume pulls data from the primary volume.)
To avoid a potential conflict, do not attempt to have two secondary volumes.
Since the secondary volume cannot be mapped to the hosts, unmap a primary volume before converting it to a secondary volume.
For virtual replications
Actions that control replications, such as scheduling, initiating, suspending, resuming, or aborting a replication, must be performed on the
system where the primary volume resides.
Deleting a replication set or changing its name can be performed on either the primary volume’s system or the secondary volume’s system.
Issue
You cannot delete a replication set while its secondary volume remains a secondary volume.
Recommended actions
Convert the secondary volume to a primary volume. You can then delete the replication set.
Technical white paper Page 89
FAQs
1. Do we support port failover?
Answer: Yes. See examples below to understand how it works.
Example
A dual-controller system where the primary volume is owned by controller A and ports A1, A2, B1, and B2 are connected and part of the
replication set’s primary addresses. (For linear replications, see the output of the show replication-sets command, or the
Replication Addresses in the primary volume’s View > Overview, to verify that a port is a primary address; for virtual replications, see
the output of show peer-connections, or hover over the peer connection in the Replications topic of the SMU.)
a. If port A1 fails, replication will continue through A2 without any issues.
b. If ports A1 and A2 both fail, the replication will continue using ports B1 and B2 of controller B.
2. Do we support load balancing with multiple replications in progress?
Answer: Yes.
Example
Four primary volumes are owned by controller A, and both ports (A1 and A2) are connected and used for replication.
a. All four sets will try to use both ports A1 and A2, unless the array lacks sufficient resources to use both ports.
3. Why are replication history snapshots not being taken?
Answer: If a volume or snapshot already exists with the name that the replication history snapshot would use, the replication history
snapshot will not be taken. Ensure the names of all existing volumes and snapshots do not conflict with the replication history snapshot
naming convention, basename_nnnn, where basename is a name you set when creating or modifying the replication set, and nnnn is a
number with leading zeroes, starting at 0.
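The naming convention and the conflict check above can be sketched in a few lines. This is an illustrative sketch, not array firmware; the four-digit zero padding is an assumption inferred from the “nnnn” placeholder, and the function names are hypothetical.

```python
# Sketch of the replication history snapshot naming convention basename_nnnn,
# plus a conflict check against existing volume/snapshot names.
# Assumption: "nnnn" means a four-digit, zero-padded counter starting at 0.
def history_snapshot_name(basename: str, n: int, width: int = 4) -> str:
    """Build a history snapshot name: basename plus a zero-padded counter."""
    return f"{basename}_{n:0{width}d}"

def name_conflicts(basename: str, existing: set[str], count: int = 100) -> list[str]:
    """Return existing names that collide with the first `count` history
    snapshot names the array would generate."""
    return [name for n in range(count)
            if (name := history_snapshot_name(basename, n)) in existing]
```

For example, a basename of sales would produce sales_0000, sales_0001, and so on, so a pre-existing volume named sales_0007 would block that history snapshot.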
4. Why didn’t a queued virtual replication begin once the previously ongoing replication completed or was aborted?
Answer: Check that the pool that contains the replication set is not full. If it is, add disk groups to the pool or remove volumes or
snapshots to provide more space. Once space is available, the queued replication will begin.
5. Can CHAP be added to a replication set at any time after it is created? For instance, if you have a local linear replication set for
doing an initial replication and then media transfer, do you need to set up CHAP before the set creation?
Answer: CHAP is specific to the system-to-system communication path (local to remote, and vice versa), not to the replication set. For
linear replications, once you have completed the initial replication and physical media transfer, you can enable CHAP before reattaching the
secondary volume on the remote system; the reattach operation should complete without issues.
6. Does using CHAP affect replication performance?
Answer: CHAP is used only for initial authentication between systems. Once a login with another system succeeds, CHAP is not involved in
further data transfer, so replication performance should not be affected.
7. I changed the CHAP settings on the array that the secondary volume resides on, but it did not affect an ongoing replication. How
do I get the CHAP settings to take effect?
Answer: Use the reset host-link CLI command to reset the SCSI nexus, or connection, between the primary and secondary arrays.
When the connection is re-established, the new CHAP settings will be used. Note that if the array the primary volume resides on does not
have matching CHAP settings, ongoing and new replications will suspend. Update the CHAP settings on that array to match, and reset its
host links, to allow replications to continue.
8. I created a master volume as the primary and did a local linear replication. Can I now do a remote replication with the same
primary volume?
Answer: No. A volume can be part of only one replication set. Either delete the set and create a new one, or remove the local secondary
volume from the set and add the remote secondary volume in its place.
9. I initiated a remote linear replication, and now I am not getting an option to suspend or abort the replication on the
local system.
Answer: By design, suspend and abort operations for linear replications can only be performed on the secondary volume. You can access
the secondary volume on the remote system; it has an option to suspend/resume replication.
10. I deleted the linear replication set using remove replication and all my replication images disappeared.
Answer: All the replication images are converted to standard snapshots and can be viewed under the volume in the Snapshots section
of the Configuration View panel of the SMU.
11. I see an option called Enable Snapshot when attempting to create a linear volume.
Answer: By selecting the Enable Snapshot checkbox, you automatically create a snap pool for the volume. The created volume is then a
master volume.
12. I am not able to map to the secondary volume.
Answer: Secondary volumes cannot be presented to any hosts. For linear replications, you can export a snapshot of the secondary
volume, and for virtual replications you can create a snapshot of the secondary volume. Then, map the snapshot to hosts.
13. I cannot remove the primary volume from a linear replication set.
Answer: Only a secondary volume can be removed. If you want to remove the primary volume, first make the other volume the primary
volume, and then make the original primary volume a secondary volume. You can then remove the volume.
14. I can’t expand a primary or secondary volume in the linear replication set.
Answer: Master volumes cannot be expanded, and both the primary and secondary volumes in a linear replication set are master volumes;
they cannot be expanded even in a replication-prepared state.
In the context of Remote Snap, volume expansion is also problematic because the primary and secondary volumes must be identical in
size, which further prohibits expanding volumes that are part of a set.
Note that for virtual replications, you can expand a primary volume—the secondary volume’s size will change on the next replication of
the set.
15. I expanded the primary virtual volume, but the secondary virtual volume’s size hasn’t changed.
Answer: The secondary volume’s size will increase on the next replication.
16. What is the Maximum Retry Time?
Answer: The Maximum Retry Time in the SMU and MaxRetryTime in the CLI refer to the maximum time, in seconds, to retry a single
linear replication. If this value is zero, retries continue indefinitely. This setting applies only when the on-error policy is set to retry.
Use the set replication-volume-parameters command to change these parameters.
A retry occurs every five minutes when an error is encountered; that is, there is a five-minute delay between retry attempts.
If the elapsed time from the initial retry to the current time exceeds the Maximum Retry Time, the replication is suspended.
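The suspend decision just described can be sketched as follows. This is assumed logic inferred from the description above, not firmware source; times are in seconds.

```python
RETRY_INTERVAL_S = 300  # a retry occurs every five minutes after an error

def should_suspend(now_s: float, first_retry_s: float,
                   max_retry_time_s: float) -> bool:
    """Sketch of the MaxRetryTime behavior described above: a value of 0
    means retry forever; otherwise the replication is suspended once the
    elapsed time since the first retry exceeds the maximum."""
    if max_retry_time_s == 0:
        return False  # infinite retries
    return (now_s - first_retry_s) > max_retry_time_s
```

With a MaxRetryTime of 900 seconds, for example, retries at the 5- and 10-minute marks proceed, and the replication is suspended only after the 15-minute window has elapsed.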
17. How does a virtual replication recover from a temporary peer connection failure?
Answer: The replication will be suspended and will attempt to resume every 10 minutes for the first hour, then once every hour until it
succeeds or is aborted by the user. If the host ports on both systems have a status of Up and OK health and are accessible to each other,
but the peer connection does not show OK health, restart the Storage Controller (SC) of both systems; the peer connection should then come up.
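The resume schedule described above can be sketched as a small function. This is an illustrative sketch of the stated behavior, not firmware source; the function name is hypothetical.

```python
def next_resume_delay_s(elapsed_s: float) -> int:
    """Seconds until the next automatic resume attempt after a peer
    connection failure, per the behavior described above: every 10 minutes
    during the first hour, then once every hour until the replication
    resumes or is aborted."""
    return 600 if elapsed_s < 3600 else 3600
```

So a replication suspended at time zero would attempt to resume at 10, 20, ... 60 minutes, and hourly thereafter.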
18. Can the iSCSI controller ports run concurrent Remote Snap and I/O?
Answer: You can use iSCSI host ports for both Remote Snap and I/O with all supported FW versions of the P2000 G3 iSCSI arrays,
MSA 2050 Storage, MSA 2040 Storage, MSA 1050 Storage, and MSA 1040 Storage.
19. How do I delete a virtual replication set when the peer connection is down?
Answer: Use the local-only option of the delete replication-set CLI command.
20. I cannot delete a peer connection even though there is no virtual replication set present in the system. What can I do?
Answer: You may have deleted the replication set on the local system using the local-only option of the delete replication-
set CLI command while the peer connection was down, and the peer connection is now back up. Delete the replication set from the
remote system to allow the peer connection to be deleted.
21. I am about to perform an upgrade to an MSA 2050 or MSA 1050 by transferring disks from the old array to the new array. The
old array has virtual replication sets that I want to keep. What sort of preparation do I need to do?
Answer: Abort replications and suspend all replication sets on the old array. Remember that you must abort replications and suspend
replication sets from the array the primary volume of the replication set resides on. Also, if you plan on using the same iSCSI host port IP
addresses on the new array that were on the old array, ensure the old array is powered down and removed from the network before
powering up the new array, configuring its IP addresses, and connecting it to the network.
Summary
Remote Snap provides array-based, remote replication with a flexible architecture and simple management, and supports both Ethernet and
FC technology. The software protects against detrimental impacts to application performance, while the snapshot-based replication
technology minimizes the amount of data transferred. Remote Snap enables the use of multiple recovery points for daily backups (for linear
replications), access to data in remote sites, and business continuity when critical failures occur.
Glossary
Peer connection—A logical connection for virtual replications that defines the ports used to connect two systems. Virtual replication sets
use a peer connection to replicate from the primary to the secondary system.
Primary volume—The replication volume residing on the local system. It is the source from which replication snapshots are taken and
copied. It is externally accessible to host(s) and can be mapped for host I/O.
Replication-prepared volume—For linear volumes, the replication volume residing on a remote system that has not been added to a
replication set. You can create the replication-prepared volume using the SMU/CLI and can then use it as the secondary volume when
creating a linear replication set.
Remote system—A representation of a system that is added to the local system and that contains the address and authentication
tokens needed to access the remote system for linear replication management. The remote system may be queried for lists of vdisks,
volumes, and host I/O ports (used to replicate data) to aid in creating a linear replication set, for example.
Replication image (linear replications only)—The representation of a replication snapshot at both the local and remote systems. In
essence, it is the pair of replication snapshots that represents a point-in-time replication. In the SMU, clicking the table shown in the right
pane displays both the primary and secondary volume snapshots associated with a particular replication image. In the Configuration View
pane of the SMU, the image name is the time at which it was created; in the CLI and elsewhere in the SMU, the image name is the name of
the primary volume replication snapshot.
Replication set—The association between the source volume (primary volume) and the destination volume (secondary volume). A
replication set is a set of volumes associated with one another for the purpose of replicating data. To replicate data from one volume to
another, you must create a replication set to associate the two volumes. A replication set is a concept that spans systems: the volumes in a
replication set are not necessarily on the same system (and typically are not; for virtual replications, they are not allowed to be). It is not a
volume, but an association of volumes. A volume is part of exactly one replication set.
Replication snapshot—Replication snapshots are a special form of the existing snapshot functionality. They are explicitly used in replication
and do not count against a snapshot license.
Secondary volume—The replication volume residing on a remote system. For linear replications, this volume is also a normal master volume
and appears as a secondary volume once it is part of a replication set. For virtual replications, it is a base volume. It is the destination for the
replication snapshot copies. It cannot be mapped to any hosts.
Sync points (linear replications only)—Replication snapshots are retained on both the primary volume and the secondary volume. When a
matching pair of snapshots is retained on both the primary and secondary volumes, they are referred to as sync points. There are four types
of sync points: the only replication snapshot that is copy-complete on any secondary system is the “only sync point”; the latest replication
snapshot that is copy-complete on any secondary system is the “current sync point”; the latest replication snapshot that is copy-complete
on all secondary systems is the “common sync point”; and a common sync point that has been superseded by a new common sync point is
an “old common sync point.”
VSS HW Provider—A software driver supplied by the storage array vendor that enables the vendor’s storage array to interact with the
Microsoft Server Volume Shadow Copy Service framework (VSS).
VSS Requestor—A software tool or application that manages the execution of user VSS commands.
VSS Writer—A software driver supplied by the Windows Server Application vendor that enables the application to interact with the
Microsoft VSS framework.
© Copyright 2010–2011, 2013–2014, 2016–2018 Hewlett Packard Enterprise Development LP. The information contained
herein is subject to change without notice. The only warranties for Hewlett Packard Enterprise products and services are
set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as
constituting an additional warranty. Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions
contained herein.
Microsoft, Windows, and Windows Server are either registered trademarks or trademarks of Microsoft Corporation in the
United States and/or other countries. All other third-party marks are property of their respective owners.