SnapView Foundations
SnapView Foundations - 1
Course Objectives
Upon completion of this course, you will be able to:
Identify SnapView business uses
Identify SnapView terminology
Identify SnapView products for CLARiiON
Identify SnapView snapshot functions and operation
Identify SnapView clone functions and operation
SnapView Foundations - 2
The objectives for this course are shown here. Please take a moment to read them.
SnapView Foundations - 3
EMC SnapView products (snapshots and clones) enable users to perform various tasks on a given data set as required in a typical business environment without having to compromise data access. SnapView allows parallel processing without impacting the performance of the production application. Tasks can include disk-based backup and recovery, decision support and data testing, data warehousing, data reporting, and data movement operations.
SnapView Foundations - 4
SnapView is an array software product that runs on the EMC CLARiiON. Having the software resident on the array has several advantages over host-based products. Since SnapView executes on the storage system, no host processing cycles are spent managing information. EMC SnapView allows companies to make more effective use of their most valuable resource, information, by enabling parallel information access. SnapView allows multiple business processes to have concurrent, parallel access to information. SnapView creates logical point-in-time views of production information using snapshots, and point-in-time copies using clones. Snapshots use only a fraction of the original disk space, while clones require the same amount of disk space as the source.
SnapView Terminology
Production host
Server where customer applications execute
Source LUNs are accessed from production host
Admsnap utility
An executable program that runs interactively or with a script to manage clones and snapshots
Resides on the servers connected to the storage system
SnapView Foundations - 5
Some SnapView terms are defined here. These terms are referred to throughout the training. The production host is where customer production applications are executed. The backup or secondary host is where the snapshot is accessed from. Any host may have only one view of a LUN active at any time. It may be the Source LUN itself, or one of the 8 permissible snapshots. No host may ever have a source LUN and a snapshot accessible to it at the same time. If the snapshot is to be used for testing or for backup using filesystem access, then the production host and secondary host must be running the same operating system. If raw backups are being performed, then the filesystem structure is irrelevant, and the backup host need not be running the same operating system as the production host. The admsnap utility is an executable program that runs interactively or with a script to manage clones and snapshots. The admsnap utility resides on the servers connected to the storage system.
Activate
Maps a snapshot to an available snapshot session
Snapshot
Snapshot is a frozen in time copy of a source LUN
SnapView Foundations - 6
The Source LUN is the production LUN that will be snapped. This is the LUN that is in use by the application, and is not visible to secondary hosts. When a snapshot is activated, it is made available to a SnapView session. The snapshot is a point-in-time view of the LUN and can be made accessible to a secondary host, but not to the primary host, once a SnapView session has been started on that LUN. The Reserved LUN Pool holds all the original data from the source LUN when the host writes to a chunk for the first time. The area may be grown if extra space is needed, or, if it has been configured as too large an area, it may be reduced in size. Note that the total number of reserved LUNs is limited and is model-dependent.
Chunk
Granularity at which data is copied from the source LUN to a reserved area
SnapView Foundations - 7
To start the tracking mechanism and create a virtual copy that has the potential to be seen by a host, we need to start a session. A session is associated with one or more snapshots, each of which is associated with a unique source LUN. Once a session has been started, data is moved to the Reserved LUN Pool as required by the COFW (Copy on First Write) mechanism. To make the snapshot appear online to the host, it is necessary to activate the snapshot. Sessions are identified by a session name, which should identify the session in a meaningful way. These names may be up to 64 characters long and consist of any mix of characters. Chunks are an aggregate of multiple disk blocks that SnapView uses to perform COFW operations. The chunk size is fixed at 64 KB (128 blocks). You cannot change this value.
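The fixed chunk geometry determines how many chunks a given write touches. A minimal sketch of the arithmetic (the 64 KB chunk and 512-byte block values come from the text; the mapping function itself is our illustration):

```python
# Chunk geometry used by SnapView's COFW tracking. 64 KB chunks of
# 512-byte blocks are the fixed values stated above; chunk_index() is
# an illustrative helper, not EMC code.
BLOCK_SIZE = 512                            # bytes per disk block
CHUNK_BLOCKS = 128                          # blocks per chunk (fixed)
CHUNK_SIZE = BLOCK_SIZE * CHUNK_BLOCKS      # 65536 bytes = 64 KB

def chunk_index(byte_offset: int) -> int:
    """Return the chunk a given byte offset on the source LUN falls into."""
    return byte_offset // CHUNK_SIZE

# A 1 MB write starting at offset 0 touches 16 chunks:
touched = chunk_index(1024 * 1024 - 1) - chunk_index(0) + 1
```

Because COFW operates at chunk granularity, even a one-block write causes a full 64 KB chunk to be copied to the reserved area on the first modification.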
Clone group
Contains a source LUN and all of its clones
SnapView Foundations - 8
When you fracture a clone, you break it off from its source LUN. Any server I/O requests made to the source LUN, after you fractured the clone, are not copied to the clone unless you manually perform a synchronization. Synchronizing a fractured clone unfractures the clone and updates the contents on the clone with its source LUN. A clone group contains a source LUN and all of its clones. When you create a clone group you specify a LUN to be cloned. This LUN is referred to as the source LUN. Once you create the clone group, the SnapView software assigns a unique ID to the group. No clones of the specified source LUN exist until you add a clone to the clone group. The purpose of creating a clone group is to designate a source LUN that you want to clone at some time. Clone private LUNs record information that identifies modified data chunks made to the source LUN and clone LUN after you have fractured the clone. A modified data chunk is a chunk of data that a server changes by writing to the clone or source LUN. A log in the clone private LUN records this information. This log reduces the time it takes to synchronize or reverse synchronize a clone with its source LUN since the software copies only modified (changed) chunks.
SnapView Foundations - 9
Requirements for sizing the RLP vary with the environment. As a guide, use small LUNs, keeping in mind that at least one LUN must be allocated for every source LUN that has a running session. SnapView write activity must also be considered: the R/W ratio and the locality of the data affect the amount of data that must be copied to the reserved areas. If data has a high locality of reference, each chunk is copied only once, and subsequent writes incur no further COFWs. Multiple sessions require additional resources in the RLP. The longer a session is active, the more likely COFW activity will continue consuming additional space.
SnapView Foundations - 10
Due to the dynamic nature of reserved LUN assignment per source LUN, it may be better to have many smaller LUNs that can be used as a pool of individual resources. A limiting factor is that the total number of reserved LUNs allowed varies by storage-system model. Each reserved LUN can be a different size; allocation to a source LUN is based on which reserved LUN is next available, without regard to size. This means that there is no mechanism to ensure that a specified reserved LUN will be allocated to a specified source LUN. Because of the dynamic nature of the SnapView environment, assignment may be regarded as a random event (though, in fact, there are rules governing the assignment of reserved LUNs).
Example
LUNs to be snapped: 10 GB, 20 GB, 30 GB, 100 GB
Average LUN size = 160 GB / 4 = 40 GB
Make each reserved LUN 4 GB in size
Make 8 reserved LUNs
2008 EMC Corporation. All rights reserved. SnapView Foundations - 11
The COFW factor mentioned is a rough rule of thumb. It is expected that 10% of the data on the source LUN changes while the session is active. The overflow LUN factor of 2 is a safety margin. It allows twice the expected size, for a total of 20%. This example shows a total of 160 GB to be snapped, with eight reserved LUNs totaling 32 GB. Note that the calculation shown here is a compromise. Different results are obtained if the goal is to minimize the number of reserved LUNs or to minimize the wasted space in the reserved LUN pool. Also note that the Snapshot Wizard creates larger reserved LUNs. It is dealing with a potentially less experienced user and leaves more overhead for safety.
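The sizing arithmetic in this example can be worked through directly. A small sketch using the 10% COFW factor and overflow factor of 2 described above (the variable names are ours):

```python
# Reserved LUN Pool sizing per the rule of thumb in the example:
# assume ~10% of the source data changes while a session is active
# (COFW factor), doubled as a safety margin (overflow factor of 2).
source_luns_gb = [10, 20, 30, 100]            # LUNs to be snapped
cofw_factor = 0.10                            # expected change rate
overflow_factor = 2                           # safety margin

total_gb = sum(source_luns_gb)                # 160 GB to be snapped
avg_lun_gb = total_gb / len(source_luns_gb)   # 40 GB average LUN size
reserved_lun_gb = avg_lun_gb * cofw_factor    # 4 GB per reserved LUN
pool_gb = total_gb * cofw_factor * overflow_factor   # 32 GB pool total
num_reserved_luns = round(pool_gb / reserved_lun_gb) # 8 reserved LUNs
```

Changing the COFW or overflow assumptions (for longer sessions or write-heavy workloads) scales the pool size proportionally.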
Snapshot Prerequisites
Source LUNs must be bound
Source LUN must be presented to the Virtual Machine
Assign the snapshot to a storage group
Enable data access control
SnapView Foundations - 12
Source LUNs must be bound. For a client or production server to access a source LUN, you must assign the source LUN to a storage group and connect the storage group to the production server. To do this, you must enable data access control on the storage system. For VMware ESX Servers, verify that the source LUN is presented to the Virtual Machine (guest operating system running on the virtual machine). The storage group must be connected to the secondary server that will activate the snapshot. You must assign the snapshot to a storage group other than the storage group that holds the source LUN. EMC supports placing a snapshot in the same storage group as its source LUN only if you use RM/Local, RM/SE, or VMware ESX Server to put the snapshot in the storage group. This software or server provides same host access to the snapshot and the source LUN. You must add a reserved LUN to the global reserved LUN pool. Event monitor is part of the Navisphere Agent and is available on many operating systems. Once configured, the event monitor runs continuously as a service or daemon, observing the state of all specified storage systems and notifying you when selected events occur.
SnapView Sessions
Point-in-time copy of a source LUN
Keeps track of how the source LUN looks at a particular point in time
Software stores a copy of the original data in the reserved LUN pool in chunks
Secondary server can then activate (map) a snapshot to the SnapView session
Must give each session a name
SnapView Foundations - 13
A SnapView session is a point-in-time copy of a source LUN. The session keeps track of how the source LUN looks at a particular point in time. After you start a SnapView session and as the production server writes to the source LUN(s), the software stores a copy of the original data in the reserved LUN pool in chunks. This copy is referred to as copy-on-first-write and occurs only once, which is when the server first modifies a data chunk on the source LUN(s). A secondary server then activates (maps) a snapshot to the SnapView session. The snapshot views the original source LUN data chunks that have been modified since you started the session from the reserved LUN pool and unmodified data chunks from the source LUN(s). You must give each session a name when you start the session. The name persists throughout the session and is viewable through Navisphere Manager. Use the name to determine session status and to stop the session.
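The way a snapshot resolves reads can be modeled simply: chunks preserved by COFW come from the reserved LUN pool, and everything else is read from the source LUN. A minimal sketch (the data structures are our illustration, not EMC code):

```python
# Minimal model of how a snapshot resolves reads: chunks saved by COFW
# come from the reserved LUN pool; all other chunks are read straight
# from the (possibly since-modified) source LUN.
def snapshot_read(chunk, source, reserved_pool):
    """Return the point-in-time contents of `chunk` as the snapshot sees it."""
    if chunk in reserved_pool:        # original data was preserved by COFW
        return reserved_pool[chunk]
    return source[chunk]              # chunk unmodified since session start

source = {0: "A", 1: "B2", 2: "C"}    # chunk 1 was rewritten mid-session
reserved_pool = {1: "B"}              # so its original contents live in the RLP
view = [snapshot_read(c, source, reserved_pool) for c in (0, 1, 2)]
# view == ["A", "B", "C"], the LUN as it looked when the session started
```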
SnapView Foundations - 14
Once the Reserved LUN Pool is configured and snapshots are created on the selected source LUNs, the user can start a Snapshot session. That procedure may be performed from the Navisphere GUI, CLI, or admsnap utility on the production host. The user must supply a session name. This name is used later to activate a snapshot. Sessions can be created on either a single source LUN or, for consistency purposes, on multiple source LUNs. When sessions are running, they may be viewed from the GUI, or information may be gathered by using the CLI. All sessions are displayed under the sessions container in the GUI.
Persistent
Enabled on a session by default
Survives failures and trespasses
Available per session
Consistent
Preserves the point-in-time restartable copy across a set of source LUNs
Available on a per-session basis
Counts as one of the eight sessions per source LUN limit
Not available on AX Series storage systems
SnapView Foundations - 15
Navisphere Manager enables persistent mode as the default. Persistent mode creates a session that can withstand the following failures and trespasses:
SP reboot or failure
Storage-system reboot or power failure
A server I/O trespassing to the peer SP
The persistence feature is available on a per-session basis (not per snapshot or source LUN). In the event of a failure, source LUNs trespass to the other SP. Depending on your failover software, once the failed SP is running, you may need to issue a restore command in order to restore the proper source LUNs back to their original SP. For the appropriate restore command, refer to the documentation that shipped with your failover software. Consistent mode preserves the point-in-time restartable copy across a set of source LUNs. SnapView delays any I/O request to the set of source LUNs until the session has started on all LUNs. In the event of a failure, the software does not start the session on any source LUN and displays an error message. Consistent mode also prevents you from adding other LUNs to the session. It is not available on AX Series storage systems.
Access to SnapView
SnapView Foundations - 16
SnapView Foundations - 17
The Copy on First Write mechanism involves saving an original data block into the Reserved LUN Pool when that data block in the active filesystem is changed for the first time. The chunk is saved only once per snapshot. SnapView allows multiple snapshots of the same LUN. This ensures that the view of the LUN is consistent and, unless writes are made to the snapshot, is always a true indication of what the LUN looked like at the time it was snapped. Saving only chunks that have been changed allows efficient use of the available disk space, whereas a full copy of the LUN would use additional space equal in size to the active LUN. A snap may use as little as 10% of the space, on average.
Access to SnapView
SnapView uses the Copy on First Write process: the original chunk data is copied to the Reserved LUN Pool before the new write is applied.
SnapView Foundations - 18
SnapView uses a process called Copy on First Write (COFW) when handling writes to the production data during a running session. The example shows that a snapshot is active on the production LUN. When a host attempts to write to the data on the production LUN, the original Chunk C is first copied to the Reserved LUN Pool, then the write is processed against the production LUN. This maintains the consistent, point-in-time copy of the data for the ongoing snapshot. A view of the snapshot points to both the unchanged data on the active LUN, plus the original data that was copied to the RLP.
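The COFW sequence above can be sketched as a small model: the original chunk is saved to the reserved pool only on the first production write, and later writes to the same chunk pass straight through (an illustration, not EMC code):

```python
# Sketch of the Copy on First Write sequence: before the first production
# write to a chunk, the original contents are copied to the reserved LUN
# pool; later writes to the same chunk go straight through, so each chunk
# is copied at most once per session.
def cofw_write(chunk, new_data, source, reserved_pool):
    if chunk in source and chunk not in reserved_pool:
        reserved_pool[chunk] = source[chunk]   # save original, first write only
    source[chunk] = new_data                   # then apply the production write

source = {"A": "a0", "B": "b0", "C": "c0"}
reserved_pool = {}
cofw_write("C", "c1", source, reserved_pool)   # first write: original saved
cofw_write("C", "c2", source, reserved_pool)   # second write: no further copy
# reserved_pool == {"C": "c0"} and source["C"] == "c2"
```

A snapshot view of this LUN would combine the unchanged chunks A and B from the source with the original chunk C preserved in the pool.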
Deactivating a snapshot
Makes it inaccessible (off-line) to secondary host
Does not flush host buffers (unless performed with admsnap)
Keeps COFW process active
SnapView Foundations - 19
Once you have at least one SnapView session running and a snapshot created for a given source LUN, you can activate a snapshot to a session. To make the snapshot visible to the host as a LUN, the snapshot must be activated. Activation may be performed from the GUI, from the CLI, or via admsnap on the backup host. The activate function, when administered from the admsnap server utility, also scans the secondary server's system buses for storage-system devices and determines whether any device is part of a SnapView session. You can then access these devices from the secondary server, but you must specify the session name, and the session must be active. Deactivation of a snapshot makes it inaccessible to the backup host. Normal data tracking continues, so if the snapshot is reactivated at a later stage, it will still be valid for the time that the session was started.
SnapView Rollback
Process to recover data on the source LUN
Reverses the pointer-and-copy process
Non-destructive to the source LUN's other sessions/snapshots
Original data is copied back to the source LUN
Independent of other sessions and snapshots
Restores the contents of the source LUN to the point in time at which any session for that source LUN was started
SnapView Foundations - 20
In addition to providing point-in-time views of data for concurrent access, these sessions also provide a way to recover data in the event of data corruption on the source LUN. Rollbacks copy the original data on the reserved LUN back to the source LUN in a non-destructive manner. Rollbacks operate independently of any other sessions or snapshots that may be defined on the source LUN. The SnapView rollback feature allows the user to restore the contents of a source LUN almost instantaneously to the point in time that any of the SnapView sessions for that source LUN were started.
During a rollback, the source LUN remains available for reads and writes while the rollback process takes place in the background. At the start of the rollback, you should flush the host buffers as well as the activated snapshot with the admsnap utility. The user must take the source off-line and bring it back online after the rollback session starts. In addition, if a snapshot is to remain activated to the session that is used for the rollback, that snapshot must be taken off-line until the rollback completes, to prevent data inconsistency due to writes from the secondary server. Reads to areas that have been identified for copying back to the source are redirected to the reserved LUN and read directly from that LUN. Writes to areas that have been identified for copying back to the source generate a process by which the corresponding chunks are immediately copied from the reserved LUN to the source. The write is then satisfied from the source LUN. This way, new data written by the host supersedes the data being rolled back. All sessions and snapshots continue as normal.
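The redirection rules during a rollback can be modeled as follows: reads of chunks still pending copy-back are served from the reserved LUN, while writes force an immediate copy-back before the new data supersedes it (a sketch using our own data structures, not EMC code):

```python
# Sketch of I/O redirection during a background rollback: chunks in
# `pending` still await copy-back from the reserved LUN to the source.
def rollback_read(chunk, source, reserved, pending):
    # Reads of not-yet-restored chunks are served from the reserved LUN.
    return reserved[chunk] if chunk in pending else source[chunk]

def rollback_write(chunk, data, source, reserved, pending):
    if chunk in pending:
        source[chunk] = reserved[chunk]   # copy the chunk back on demand
        pending.discard(chunk)            # (matters if the write is partial)
    source[chunk] = data                  # the new write then supersedes it

source = {1: "new1", 2: "new2"}           # contents being rolled back
reserved = {1: "old1", 2: "old2"}         # point-in-time data in the RLP
pending = {1, 2}                          # chunks not yet restored
first_read = rollback_read(1, source, reserved, pending)  # served from RLP
rollback_write(2, "host2", source, reserved, pending)
# source[2] == "host2" and chunk 2 is no longer pending
```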
SnapView Foundations - 21
SAN Copy
Snapshots can be used as a source for a full SAN Copy session
Source LUN can remain online during the copy operation
Snapshot of the destination SAN Copy LUN for incremental SAN Copy
MirrorView
Snapshots can be used with both primary and secondary MirrorView images
Rollback provides extra protection
SnapView Foundations - 22
SnapView snapshots can be used in conjunction with clones. Users can replicate a source production LUN using a SnapView clone to create a full recoverable copy of their production data. The clone can be used as a source for starting a SnapView session. Multiple copies can be taken. A snapshot activated on a SnapView session of the source LUN can be used as a source for a full SAN Copy session. This allows for the source LUN to remain online during the copy operation. SnapView snapshots can be used with both MirrorView primary and secondary images. SnapView rollback feature provides an extra level of protection for MirrorView. If a failure occurs on the primary image, SnapView sessions running on the primary or secondary images can restore the image by performing a rollback.
SnapView Foundations - 23
A clone is a complete copy of a source LUN that uses pointer-and-copy based technology. You specify a source LUN when you create a clone group. The copy of the source LUN begins when you add a clone LUN to the clone group. The software assigns each clone a clone group ID. This ID remains with the clone until you remove the clone from its group. While the clone is part of the clone group and unfractured, any production write requests made to the source LUN are simultaneously copied to the clone. Once the clone contains the desired data, you can fracture the clone. Fracturing the clone separates it from its source LUN, after which you can make it available to a secondary server. This technology allows users to perform additional storage management functions with minimal impact to the production host.
SnapView Foundations - 24
Clones offer several advantages in certain situations. Because copies are physically separate, and reside on different disks and RAID groups from the source LUN, there is no impact from competing I/Os. Different I/O characteristics, such as database applications with highly random I/O patterns or backup applications with highly sequential I/O patterns running at the same time, do not compete for spindles. Physical or logical (human or application error) loss of one does not affect the data contained in the other.
Removing Clones
Cannot be in active sync or reverse-sync process
SnapView Foundations - 25
When you add a clone to a group, the source LUN and its clones must be exactly the same size. A clone cannot be removed from its clone group while it is synchronizing or reverse synchronizing. Once the clone is removed from the group, the source and clone revert to independent LUNs; this process does not affect the data on the LUNs. Any LUN is eligible to be cloned, except for the following:
Hot spare LUNs
Remote mirror LUNs (LUNs participating as either a primary or secondary image)
Clone LUNs (LUNs participating in any clone group as either a source LUN or a clone LUN)
Snapshot LUNs
Private LUNs (LUNs reserved as clone private LUNs or for the reserved LUN pool)
Host access
Source can accept I/O at all times
Clone cannot accept I/O during sync
SnapView Foundations - 26
Clone synchronization copies source data to the clone. Any data on the clone is overwritten with source data. Clone private LUNs record information that identifies data chunks on the source LUN and clone LUN that have been modified after you fractured the clone. These LUNs are a minimum of 250,000 blocks and are created one per SP. A modified data chunk is a chunk of data that a production or secondary server changes by writing to the source LUN or clone. A log in the clone private LUN records this information, but no actual data is written to the clone private LUN. This log reduces the time it takes to synchronize or reverse synchronize a clone and its source LUN since the software copies only modified chunks. Source LUN access is allowed during sync with use of mirroring. The clone, however, is inaccessible during sync. Any attempted host I/Os are rejected. Clones must be manually fractured following synchronization. This allows the administrator to pick the time that the clone should be fractured, depending on the data state. Once fractured, the clone is available to the secondary host.
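The role of the clone private LUN log can be illustrated with a small model: only chunk identifiers are recorded, never data, and synchronization copies just the chunks modified since the fracture (names and structures are ours, not EMC code):

```python
# Sketch of the fracture log kept in the clone private LUN: only chunk
# numbers are logged, and synchronization copies just the chunks marked
# as modified since the fracture.
fracture_log = set()                       # a modified-chunk bitmap, in effect

def write_after_fracture(chunk, data, lun):
    lun[chunk] = data
    fracture_log.add(chunk)                # record which chunk changed

def synchronize(source, clone):
    """Copy only modified chunks from source to clone, then clear the log."""
    for chunk in fracture_log:
        clone[chunk] = source[chunk]
    fracture_log.clear()

source = {0: "s0", 1: "s1", 2: "s2"}
clone = dict(source)                       # clone was in sync at fracture time
write_after_fracture(1, "s1-new", source)  # production write after fracture
synchronize(source, clone)                 # copies chunk 1 only
```

Because only chunk 1 is in the log, the synchronization above moves one chunk instead of recopying the whole LUN, which is why the log shortens sync and reverse-sync times.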
SnapView Foundations - 27
The purpose of reverse synchronizing a fractured clone is to replace the data on the source LUN with the data on the clone. This allows you to revert to an earlier copy of the source LUN if, for instance, the source becomes corrupted or if new source LUN writes are not desired. To ensure that there is no data corruption on the source LUN, you have to take the source LUN offline before you initiate the reverse synchronization. Once the operation begins, you can bring the source LUN back online. During a reverse synchronization, the software automatically copies to the clone any incoming server writes to the source LUN.
SnapView Foundations - 28
Reverse synchronization has the effect of making the source appear as if it is identical to the clone at the beginning of the synchronization. Since this copy-on-demand mechanism is designed to coordinate the host I/Os to the source (rather than to the clone), host I/Os cannot be received by the clone during synchronization.
Protected restore
Host source writes not mirrored to clone
Configure via individual clone property
Non-protected restore
Host source writes mirrored to clone
SnapView Foundations - 29
The Protected Restore feature protects the data on a clone during a reverse synchronization. When you select this feature during a reverse synchronization, the software will not copy to the clone any server writes made to the source LUN. Instead, the software records information in the clone private LUN to identify the source LUN writes for subsequent synchronizations. Once you initiate a reverse synchronization, the software immediately unfractures the clone that initiated the reverse synchronization. Then the software fractures any other clone in the clone group to protect it from corruption should the reverse synchronization operation fail. The software then begins to copy its data to its source LUN. After the reverse synchronization is complete, the software fractures the clone that initiated the reverse synchronization. You enable the Protected Restore feature on a per-clone basis and not on a per-clone-group basis. During a reverse synchronization, the software automatically copies to the clone any incoming server writes to the source LUN under a non-protected restore scenario. If you do not want source writes copied to the clone during a reverse synchronization, you must check the Use the Protected Restore option in the Add Clone dialog box before initiating a reverse synchronization.
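The difference between protected and non-protected restore comes down to how incoming source writes are handled during the reverse synchronization. A minimal sketch (our own names and structures, not EMC code):

```python
# Sketch of how incoming source-LUN writes are handled during a reverse
# synchronization, contrasting protected and non-protected restore.
def source_write_during_reverse_sync(chunk, data, source, clone,
                                     private_log, protected):
    source[chunk] = data
    if protected:
        # Protected restore: the write is only noted in the clone private
        # LUN log, for use by a subsequent synchronization.
        private_log.add(chunk)
    else:
        # Non-protected restore: the write is mirrored to the clone.
        clone[chunk] = data

source, clone, log = {0: "corrupt"}, {0: "good"}, set()
source_write_during_reverse_sync(0, "new", source, clone, log, protected=True)
# clone[0] is still "good"; chunk 0 is merely recorded in the private-LUN log
```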
(Diagram: the source LUN is restored to the Clone 1 state, while the other clones, Clone 2 through Clone 8, are fractured from the source LUN. The production server and backup server are also shown.)
SnapView Foundations - 30
Reverse synchronization copies clone data to the source LUN. Data on the source is overwritten with clone data. As soon as the reverse sync begins, the source LUN appears identical to the clone. This feature is known as an instant restore. Note: You can have up to eight clones per source LUN.
SnapView Foundations - 31
A consistent fracture is when you fracture more than one clone at the same time in order to preserve the point-in-time restartable copy across the set of clones. A restartable copy is a data state that has dependent-write consistency and in which all internal database/application control information is consistent with the Database Management System/application image. Consistent operations can be initiated from either the Navisphere Management GUI or CLI. The clones you want to fracture must be in different clone groups. You cannot perform a consistent fracture between different storage systems. If there is a failure on any of the clones, the consistent fracture fails on all of the clones. If any clones within the group were fractured prior to the failure, the software re-synchronizes those clones.
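The all-or-nothing behavior described here can be sketched as follows: if fracturing fails on any clone in the set, clones already fractured are re-synchronized so that no partial point-in-time image remains (an illustrative model, not EMC code):

```python
# Illustrative model of a consistent fracture's all-or-nothing semantics:
# if any clone in the set fails to fracture, the clones fractured so far
# are re-synchronized so that no partial point-in-time image is left.
def consistent_fracture(states, order, failing):
    """Fracture every clone in `order`, or roll the whole operation back."""
    fractured = []
    for clone in order:
        if clone in failing:                 # failure on any clone...
            for done in fractured:           # ...undo the partial fracture
                states[done] = "synchronized"
            return False                     # the operation fails as a whole
        states[clone] = "fractured"
        fractured.append(clone)
    return True

states = {"c1": "synchronized", "c2": "synchronized", "c3": "synchronized"}
ok = consistent_fracture(states, ["c1", "c2", "c3"], failing={"c3"})
# ok is False and every clone is back in the "synchronized" state
```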
SnapView Foundations - 32
Since a consistent fracture fractures a set of clones whose source LUNs are write-order dependent, the associated source LUN for each clone must be unique; the user cannot perform a consistent fracture on multiple clones belonging to the same source LUN. Once fractured, each clone appears under the clone properties container in Navisphere Manager as administratively fractured.
SAN Copy
Copy from the source LUN or clone
SnapView Foundations - 33
Clones can be used with SnapView snapshots, SAN Copy, and both MirrorView/S and MirrorView/A (with some restrictions). Users can create clones and snapshots of the same source LUN, and they can also create snapshots of clones. This functionality provides users with multiple copies of their data. In addition to having a copy of data within the same storage system, users gain the added benefit of having another copy on another storage system using SAN Copy. Users also have the ability to create a clone of a mirror at either the local or remote site. Users may:
Clone a MirrorView primary image, with the primary image serving as the clone source on the local storage system
Clone a secondary image, with the secondary image serving as the clone source on the remote storage system
Clone both a primary and a secondary image
Note: You may not mirror a clone.
SnapView Foundations - 34
SnapView can be configured and managed using the management GUI, Secure CLI, Taskbar Wizard, or the admsnap command. The Secure CLI uses the same security features embodied in Navisphere Manager. Users are authenticated via a username/password/scope combination associated with each CLI command sent to the storage system. A mechanism exists to bypass this requirement, which is especially useful in scripting. The admsnap command is host-resident; there is a different version for each platform.
Course Summary
Key points covered in this course:
SnapView addresses business needs
SnapView terminology
SnapView products for CLARiiON
SnapView snapshot functions and operation
SnapView clone functions and operation
SnapView Foundations - 35
These are the key points covered in this training. Please take a moment to review them. This concludes the training. Please proceed to the Course Completion slide to take the assessment.