ORACLE DATABASE BACKUP AND RECOVERY WITH VMAX3
EMC® VMAX® Engineering White Paper

ABSTRACT

With the introduction of the third generation VMAX disk arrays and local replication
and enhanced remote replication capabilities, Oracle database administrators have a
new way to protect their Oracle databases effectively and efficiently with
unprecedented ease of use and management.

June 2015


To learn more about how EMC products, services, and solutions can help solve your business and IT challenges, contact your local
representative or authorized reseller, visit www.emc.com, or explore and compare products in the EMC Store.

Copyright © 2015 EMC Corporation. All Rights Reserved.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without
notice.

The information in this publication is provided “as is.” EMC Corporation makes no representations or warranties of any kind with
respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a
particular purpose.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com.

All other trademarks used herein are the property of their respective owners.

Part Number H14232


TABLE OF CONTENTS

EXECUTIVE SUMMARY .............................................................................. 6
Audience ........................................................................................................... 7
Terminology ...................................................................................................... 7
VMAX3 Product Overview .................................................................................... 8
VMAX3 SnapVX Local Replication Overview ........................................................... 8
VMAX3 SRDF Remote Replication Overview ........................................................... 9

ORACLE DATABASE REPLICATION WITH TIMEFINDER AND SRDF
CONSIDERATIONS .................................................................................. 11
Number of Snapshots, Frequency, and Retention ................................................. 11
Snapshot Link Copy vs. No Copy Option .............................................................. 11
Oracle Database Restart vs. Recovery Solutions ................................................... 11
RMAN and VMAX3 Storage Replication ................................................................ 12
Command Execution and Host Users................................................................... 12
Oracle DBaaS SnapClone Integration on VMAX3 ................................................... 12
Storage Layout and ASM Disk Group Considerations ............................................. 12
Remote Replication Considerations ..................................................................... 13

ORACLE BACKUP AND DATA PROTECTION TEST CASES........................... 14
Test Configuration ............................................................................................ 14
Test Overview.................................................................................................. 14
Test Case 1: Creating a local restartable database replica for database clones ......... 15
Test Case 2: Creating a local recoverable database replica for backup and recovery . 17
Test Case 3: Performing full or incremental RMAN backup from a SnapVX replica .. 19
Test Case 4: Performing database recovery of Production using a recoverable
snapshot ......................................................................................................... 20
Test Case 5: Using SRDF/S and SRDF/A for database disaster recovery .................. 21
Test Case 6: Creating remote restartable copies .................................................. 23
Test Case 7: Creating remote recoverable database replicas.................................. 24
Test Case 8: Parallel recovery from remote backup image ..................................... 26
Test Case 9: Leveraging Access Control List replications for storage snapshots ........ 27


CONCLUSION .......................................................................................... 27
APPENDIX I - CONFIGURING ORACLE DATABASE STORAGE GROUPS FOR
REPLICATION ......................................................................................... 28
Creating snapshots for Oracle database storage groups ........................................ 28
Linking Oracle database snapshots for backup offload or repurposing ..................... 29
Restoring Oracle database using storage snapshot ............................................... 30
Creating a cascaded snapshot from an existing snapshot ...................................... 31

APPENDIX II – SRDF MODES AND TOPOLOGIES ..................................... 31
SRDF modes .................................................................................................... 31
SRDF topologies ............................................................................................... 32

APPENDIX III – SOLUTIONS ENABLER CLI COMMANDS FOR TIMEFINDER
SNAPVX MANAGEMENT ........................................................................... 33
Creation of periodic snaps ................................................................................. 33
Listing details of a snap .................................................................................... 33
Linking the snap to a storage group ................................................................... 33
Verifying current state of the snap ..................................................................... 34
Listing linked snaps .......................................................................................... 34
Restore from a snap ......................................................................................... 35

APPENDIX IV – SOLUTIONS ENABLER CLI COMMANDS FOR SRDF
MANAGEMENT ......................................................................................... 35
Listing local and remote VMAX SRDF adapters ..................................................... 35
Creating dynamic SRDF groups .......................................................................... 35
Creating SRDF device pairs for a storage group ................................................... 35
Listing the status of SRDF group ........................................................................ 36
Restoring SRDF group....................................................................................... 37

APPENDIX V - SOLUTIONS ENABLER ARRAY BASED ACCESS CONTROL
MANAGEMENT ......................................................................................... 38
Identifying the unique ID of the Unisphere for VMAX and database management
host ............................................................................................................... 38
Adding the UNIVMAX host to AdminGrp for ACL management ................................ 38
Authenticate UNIVMAX host for ACL management ................................................ 39
Create access group for database host ................................................................ 40

Create database device access pool ................................................................ 41
Install Solutions Enabler on database hosts using non-root user ......................... 42
REFERENCES ........................................................................................... 43

EXECUTIVE SUMMARY

Many applications are required to be fully operational 24x7x365, and the data for these applications continues to grow. At the same time, their RPO and RTO requirements are becoming more stringent. DBAs require the ability to create local and remote database replicas in seconds, without disruption of Production host CPU or I/O activity, for purposes such as testing patches, creating development sandbox environments, publishing data to analytic systems, offloading backups from Production, running reports, developing a Disaster Recovery (DR) strategy, and more.

Traditional solutions rely on host-based replication. The disadvantages of this approach are the additional host I/O and CPU cycles consumed to create such replicas, especially across multiple servers; the complexity of monitoring and maintaining incremental refresh relationships; and the elongated time and complexity associated with their recovery. As a result, there is a large gap between the requirements for fast and efficient protection and replication, and the ability to meet these requirements without overhead or operations disruption. EMC and Oracle have made the creation of such replicas more efficient, integrated, easier to create, faster to restore, and very robust in features.

TimeFinder® local replication values to Oracle include:

• The ability to create instant and consistent database replicas for repurposing across a single database or multiple databases, including external data or message queues, and across multiple VMAX3 storage systems.

• VMAX3 TimeFinder SnapVX snapshots are consistent by default. The point of consistency is created before a disaster strikes, rather than taking hours to achieve afterwards when using replications that are not consistent across applications and databases.

• The ability to perform TimeFinder replica creation or restore operations in seconds, regardless of database size. The target devices (in case of a replica) or source devices (in case of a restore) are available immediately with their data, even as incremental data changes are copied in the background.

• Each source device can have up to 256 space-efficient snapshots that can be linked to up to 1024 target devices. The linked targets can remain space-efficient, or a background copy of all the data can take place, making them a full copy. SnapVX allows an unlimited number of cascaded snapshots.

• The ability to create valid backup images without hot-backup mode that can be taken, and more importantly restored, in seconds – regardless of the database size. Prior to Oracle 12c, such valid backup images were achieved by using hot backup mode for only a few seconds; starting with Oracle 12c, this leverages the Oracle 12c Storage Snapshot Optimization feature instead.

• The ability to utilize the RMAN Block Change Tracking (BCT) file from a TimeFinder replica when offloading backups from Production to a backup host.

• With Oracle 12c Cloud Control DBaaS Snap Clone, DBAs can perform storage provisioning and replications directly from Enterprise Manager (TimeFinder is called via APIs).

• With Oracle VM 3.3 Storage Connect, storage devices can be provisioned to VMs, or VMs with their physical and virtual storage can be cloned (TimeFinder is called via APIs).

SRDF® remote replication values to Oracle include:

• Synchronous and asynchronous consistent replication of a single or multiple databases, including external data or message queues, across multiple VMAX3 storage array systems if necessary, where synchronous replication provides RPO=0 – no data loss of committed transactions.

• Disaster Recovery (DR) protection for two or three sites, including cascaded or triangular relationships.

• SRDF and TimeFinder are integrated. For example, TimeFinder can be used on the remote site to create writable snapshots or backup images of the database while SRDF replicates the data remotely. This allows DBAs to perform remote backup operations or create remote database copies.

• SRDF and TimeFinder can work in parallel to restore remote backups. For example, while a remote TimeFinder backup is being restored to the remote SRDF devices, SRDF copies the restored data to the local site in parallel, as SRDF always maintains incremental updates between source and target devices. This parallel restore capability provides DBAs with faster accessibility to remote backups and shortens recovery times. Recovery operations can take place on either the Backup or Production host.

VMAX replication and Silent Data Corruption: Both VMAX and VMAX3 arrays protect all user data with T10-DIF from the moment it enters the storage until it is retrieved by the host, including for local and remote replication. Internally, both TimeFinder and SRDF use T10-DIF to validate all replicated data. Oracle and EMC integrated the T10-DIF standard for end-to-end data integrity validation. With Oracle ASMlib (Oracle 10g and above), each read or write between Oracle and VMAX storage is validated. Starting with Oracle 12c, Oracle ASM Filter Driver (AFD) provides wider host OS support for T10-DIF with VMAX, and includes other features such as the ability to reclaim deleted ASM files in VMAX storage, protection from non-Oracle writes to Oracle data, and more. Together, Oracle ASM and VMAX3 offer the most protected and robust platform for database storage and replication, maintaining data integrity for each database read or write as well as its replications.

By utilizing VMAX3 local and remote replications, DBAs gain the ability to protect and repurpose their databases quickly and easily, without the time and complexity associated with host-based replications.

AUDIENCE

This white paper is intended for database and system administrators, storage administrators, and system architects who are responsible for implementing, managing, and maintaining Oracle database backup and replication on VMAX3 storage arrays. It is assumed that readers have some familiarity with Oracle and the EMC VMAX3 family of storage arrays, and are interested in achieving higher database availability, performance, and ease of storage management.

TERMINOLOGY

The following list explains important terms used in this paper.

• Restartable vs. Recoverable database: Oracle distinguishes between a restartable and a recoverable state of the database. In a restartable state, Oracle can be simply started, performing automatic crash/instance recovery without user intervention – just as it does after a host crash or an Oracle 'shutdown abort'. A restartable state requires all log, data, and control files to be consistent (see 'Storage consistent replication'). A recoverable state, on the other hand, requires database media recovery, rolling the transaction logs forward to achieve data consistency before the database can be opened.

• RTO and RPO: Recovery Time Objective (RTO) refers to the time it takes to recover a database after a failure. Recovery Point Objective (RPO) refers to the amount of data loss after the recovery completes, where RPO=0 means no data loss of committed transactions.

• Storage consistent replication: Storage consistent replication refers to storage replications (local or remote) in which the target devices maintain write-order fidelity. That means that for any two dependent I/Os that the application issues, such as a log write followed by a data update, either both will be included in the replica or only the first. To the Oracle database, the replica data appears in a state from which Oracle can simply recover by performing crash/instance recovery when starting. Starting with Oracle 11g, Oracle allows database recovery from storage consistent replications without the use of hot-backup mode (details in Oracle support note 604683.1). The feature has become more integrated with Oracle 12c and is called Oracle Storage Snapshot Optimization.

• VMAX3 HYPERMAX OS: HYPERMAX OS is the industry's first open converged storage hypervisor and operating system. It enables VMAX3 arrays to embed storage infrastructure services like cloud access, data mobility, and data protection directly on the array, and delivers the ability to perform real-time and non-disruptive data services.

• VMAX3 Storage Group: A collection of host-addressable VMAX3 devices. A Storage Group can be used to (a) present devices to a host (LUN masking), (b) specify FAST Service Levels (SLOs) for a group of devices, and (c) manage device grouping for replication software such as SnapVX and SRDF. Storage Groups can be cascaded, such as the child storage groups used for setting FAST® Service Level Objectives (SLOs) and the parent storage group used for LUN masking of all the database devices to the host.

• VMAX3 TimeFinder SnapVX: TimeFinder SnapVX is the latest generation of TimeFinder local replication software, offering higher scale and a wider feature set while maintaining the ability to emulate legacy behavior. Previous generations of TimeFinder referred to a Snapshot as a space-saving copy of the source device, where capacity was consumed only for data changed after the snapshot time; a Clone, on the other hand, referred to a full copy of the source device. SnapVX snapshots are always space-efficient. When they are linked to host-addressable target devices, the user can choose to keep the target devices space-efficient or perform a full copy.
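As an illustration of cascaded Storage Groups, the following is a minimal Solutions Enabler sketch that creates child groups for data and redo devices and a parent group that contains them. The array ID, device IDs, and SLO/SRP names are hypothetical, and exact syntax may vary by Solutions Enabler version; Appendix I shows the storage group configuration actually used in this paper.

# symsg -sid 123 create DATA_SG -slo Diamond -srp SRP_1
# symsg -sid 123 -sg DATA_SG add dev 0100        (repeat per data device)
# symsg -sid 123 create REDO_SG -slo Diamond -srp SRP_1
# symsg -sid 123 -sg REDO_SG add dev 0104        (repeat per redo device)
# symsg -sid 123 create FINDB_SG
# symsg -sid 123 -sg FINDB_SG add sg DATA_SG,REDO_SG

The parent group (FINDB_SG) can then be used for LUN masking and for consistent SnapVX/SRDF operations across all database devices, while the child groups carry the FAST Service Levels.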

VMAX3 Product Overview

The EMC VMAX3 family of storage arrays is built on the strategy of simple, intelligent, modular storage. It incorporates a Dynamic Virtual Matrix interface that connects and shares resources across all VMAX3 engines, allowing the storage array to seamlessly grow from an entry-level configuration into the world's largest storage array. It provides the highest levels of performance and availability, featuring new hardware and software capabilities.

The newest additions to the EMC VMAX3 family, VMAX 100K, 200K and 400K, deliver the latest in Tier-1 scale-out multi-controller architecture with consolidation and efficiency for the enterprise. They offer dramatic increases in floor tile density, with high capacity flash and hard disk drives in dense enclosures for both 2.5" and 3.5" drives, and support both block and file (eNAS). This delivers new levels of data center efficiency and consolidation by reducing footprint and energy requirements. The new VMAX3 hardware architecture comes with more CPU power, larger persistent cache, and a new Dynamic Virtual Matrix dual InfiniBand fabric interconnect that creates an extremely fast internal memory-to-memory and data-copy fabric.

The VMAX3 family of storage arrays comes pre-configured from the factory to simplify deployment at customer sites and minimize time to first I/O. Each array uses Virtual Provisioning™ to allow the user easy and quick storage provisioning. While VMAX3 can ship as an all-flash array, with the combination of EFD (Enterprise Flash Drives) and large persistent cache that accelerates both writes and reads even further, it can also ship as hybrid, multi-tier storage that excels in providing FAST (Fully Automated Storage Tiering) enabled performance management based on Service Level Objectives (SLO), even as new data is added and access patterns continue to change over time. FAST allows VMAX3 storage to automatically and dynamically manage performance service level goals across the available storage resources to meet the application I/O demand.

Figure 1 (VMAX3 storage array) shows possible VMAX3 components:
• 1 – 8 redundant VMAX3 Engines
• Up to 4 PB usable capacity
• Up to 256 FC host ports
• Up to 16 TB global memory (mirrored)
• Up to 384 cores, 2.7 GHz Intel Xeon E5-2697-v2
• Up to 5,760 drives
• SSD Flash drives 200/400/800/1,600 GB 2.5"/3.5"
• 300 GB – 1.2 TB 10K RPM SAS drives 2.5"
• 300 GB 15K RPM SAS drives 2.5"/3.5"
• 2 TB/4 TB SAS 7.2K RPM drives 3.5"

Refer to EMC documentation and release notes to find the most up to date supported components. To learn more about VMAX3 and FAST best practices with Oracle databases, refer to the white paper: Deployment Best Practices for Oracle Database with VMAX3 Service Level Objective Management.

VMAX3 SnapVX Local Replication Overview

EMC TimeFinder SnapVX software delivers instant and storage-consistent point-in-time replicas of host devices that can be used for purposes such as the creation of gold copies, patch testing, backup and recovery, data warehouse refreshes, reporting and test/development environments, or any other process that requires parallel access to, or preservation of, the primary storage devices. The replicated devices can contain the database data, or data that is external to the database (e.g. image files, message queues, Oracle home directories, and so on). VMAX3 TimeFinder SnapVX combines the best aspects of previous TimeFinder offerings and adds new functionality, scalability, and ease-of-use features.

Some of the main SnapVX capabilities related to native snapshots (emulation mode for legacy behavior is not covered) include:

• With SnapVX, snapshots are natively targetless. They relate only to a group of source devices and cannot be otherwise accessed directly. Instead, snapshots can be restored back to the source devices, or linked to another set of target devices which can be made host-accessible.

• Snapshots are taken using the establish command. When a snapshot is established, a snapshot name is provided, together with an optional expiration date. Snapshots also get a 'generation' number (starting with 0). The generation is incremented with each new snapshot, even if the snapshot name remains the same. The snapshot time is saved with the snapshot and can be listed.

• Snapshot operations are performed on a group of devices. This group is defined by using either a text file specifying the list of devices, a 'device-group' (DG), a 'composite-group' (CG), a 'storage group' (SG), or by simply specifying the devices. The recommended way is to use a storage group.

• SnapVX snapshots are always consistent. That means that snapshot creation always maintains write-order fidelity. Snapshot operations such as establish and restore are also consistent – the operation either succeeds or fails for all the devices as a unit. This allows easy creation of restartable database copies, or Oracle recoverable backup copies based on Oracle Storage Snapshot Optimization.

• Each source device can have up to 256 snapshots that can be linked to up to 1024 targets. SnapVX allows an unlimited number of cascaded snapshots.

• SnapVX snapshots themselves are always space-efficient, as they are simply a set of pointers pointing to the source data when it is unmodified, or to the original version of the data when the source is modified. Multiple snapshots of the same data utilize both storage and memory savings by pointing to the same location and consuming very little metadata.

• SnapVX provides the ability to create either space-efficient replicas or full-copy clones when linking snapshots to target devices. Use the "-copy" option to copy the full snapshot point-in-time data to the target devices during link; this makes the target devices a stand-alone copy. If the "-copy" option is not used, the target devices provide the exact snapshot point-in-time data only until the link relationship is terminated, saving capacity and resources by providing space-efficient replicas.

• Linked-target devices cannot 'restore' changes directly to the source devices. Instead, a new snapshot can be taken from the target devices and linked back to the original source devices.

• FAST Service Levels apply to either the source devices or to snapshot linked targets, but not to the snapshots themselves. SnapVX snapshot data resides in the same Storage Resource Pool (SRP) as the source devices, and linked targets acquire an 'Optimized' FAST Service Level Objective (SLO) by default.

For more information on SnapVX, refer to the TechNote: EMC VMAX3™ Local Replication, and the EMC Solutions Enabler CLI Guides. See Appendix III for a list of basic TimeFinder SnapVX operations.
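A minimal lifecycle sketch of the operations described above, using illustrative array, group, and snapshot names (the optional time-to-live flag is an assumption that may vary by Solutions Enabler version; Appendix III lists the full command set):

Take a named, consistent snapshot, optionally expiring after two days:
# symsnapvx -sid 123 -sg FINDB_SG -name Daily_0800 establish -ttl -delta 2
List the snapshot with its timestamp and generation numbers:
# symsnapvx -sid 123 -sg FINDB_SG -snapshot_name Daily_0800 list -detail
Link it to a target storage group so a host can access the point-in-time data:
# symsnapvx -sid 123 -sg FINDB_SG -lnsg FINDB_MNT -snapshot_name Daily_0800 link
Unlink and terminate when no longer needed:
# symsnapvx -sid 123 -sg FINDB_SG -lnsg FINDB_MNT -snapshot_name Daily_0800 unlink
# symsnapvx -sid 123 -sg FINDB_SG -snapshot_name Daily_0800 terminate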

VMAX3 SRDF Remote Replication Overview

The EMC Symmetrix Remote Data Facility (SRDF) family of software is the gold standard for remote replication in mission critical environments. Built for the industry leading high-end VMAX storage array, the SRDF family is trusted for disaster recovery and business continuity. SRDF offers a variety of replication modes that can be combined in different topologies, including two, three, and even four sites. SRDF and TimeFinder are closely integrated to offer a combined solution for local and remote replication.

Some of the main SRDF capabilities include:

• SRDF modes of operation:
o SRDF Synchronous (SRDF/S) mode is used to create a solution with no data loss of committed transactions. The target devices are an exact copy of the source devices (Production).
o SRDF Asynchronous (SRDF/A) mode is used to create consistent replicas at unlimited distances, without a write response time penalty to the application. The target devices are typically seconds to minutes behind the source devices (Production).
o SRDF Adaptive Copy (SRDF/ACP) mode allows bulk transfers of data between source and target devices without write-order fidelity and without write performance impact to source devices. SRDF/ACP is typically used for data migrations as a point-in-time data transfer. It is also used to catch up after a long period in which replication was suspended and many changes are owed to the remote site. SRDF/ACP can be set to continuously send changes in bulk until the delta between source and target is reduced to a specified "skew"; at that time SRDF/S or SRDF/A mode can resume.

• SRDF groups:
o An SRDF group is a collection of matching devices in two VMAX3 storage arrays, together with the SRDF ports that are used to replicate these devices between the arrays. The source devices in the SRDF group are called R1 devices, and the target devices are called R2 devices.
o SRDF operations are performed on a group of devices contained in an SRDF group. This group is defined by using either a text file specifying the list of devices, a 'device-group' (DG), a 'composite/consistency-group' (CG), or a 'storage group' (SG). The recommended way is to use a storage group.
o HYPERMAX OS allows up to 250 SRDF groups per SRDF director.
o SRDF replication sessions can go in either direction (bi-directional) between the two arrays, where different SRDF groups can replicate in different directions.

• SRDF consistency:
o An SRDF Consistency Group is an SRDF group for which consistency was enabled. Consistency can be enabled for either synchronous or asynchronous replication mode.
o An SRDF consistency group always maintains write-order fidelity (also called dependent-write consistency) to make sure that the target devices always provide a restartable replica of the source application.
o SRDF consistency also implies that if a single device in a consistency group cannot replicate, then the whole group will stop replicating to preserve target device consistency.
o Note: Even when consistency is enabled, the remote devices may not yet be consistent while the SRDF state is sync-in-progress. This happens when SRDF initial synchronization is taking place, before it enters a consistent replication state.
o Multiple SRDF groups set in SRDF/A mode can be combined within a single array or across arrays. Such grouping of consistency groups is called multi-session consistency (MSC). MSC maintains dependent-write consistent replication across all the participating SRDF groups.

• SRDF sessions:
o An SRDF session is created when replication starts between R1 and R2 devices in an SRDF group.
o An SRDF session can establish replication between R1 and R2 devices, moving only changed data across the links. Only the first establish requires a full copy between R1 and R2 devices; any subsequent establish (for example, after an SRDF split or suspend) is incremental, only passing changed data.
o During replication, the devices to which data is replicated are write-disabled (read-only).
o An SRDF session can be suspended, temporarily halting replication until a resume command is issued.
o An SRDF session can be split, which not only suspends the replication but also makes the R2 devices read-writable.
o An SRDF checkpoint command does not return the prompt until the content of the R1 devices has reached the R2 devices. This option helps in creating remote database backups when SRDF/A is used.
o An SRDF failover makes the R2 devices writable. The R1 devices, if still accessible, change to Write_Disabled (read-only). The SRDF session is suspended and application operations proceed on the R2 devices.
o An SRDF failback copies changed data from the R2 devices back to R1 and makes the R1 devices writable. The R2 devices are made Write_Disabled (read-only).
o An SRDF swap changes the R1 and R2 personalities, and with them the replication direction of the session.
o An SRDF session can restore the content of the R2 devices back to R1, for example to bring back a remote backup image. A restore is incremental, moving only changed data. When TimeFinder holds the backup image at the remote site, TimeFinder and SRDF can restore in parallel.

For more information on SRDF, refer to the VMAX3 Family with HYPERMAX OS Product Guide. Appendix II, SRDF Modes and Topologies, and Appendix IV, Solutions Enabler CLI Commands for SRDF Management, provide additional information.
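The session operations above map directly to symrdf action verbs. A hedged sketch using a storage group, with an illustrative array ID and RDF group number (exact option placement may vary by Solutions Enabler version; see Appendix IV):

# symrdf -sid 123 -sg FINDB_SG -rdfg 10 suspend      (halt replication temporarily)
# symrdf -sid 123 -sg FINDB_SG -rdfg 10 resume       (resume a suspended session)
# symrdf -sid 123 -sg FINDB_SG -rdfg 10 split        (suspend and make R2 devices read-writable)
# symrdf -sid 123 -sg FINDB_SG -rdfg 10 establish    (incremental re-establish after split/suspend)
# symrdf -sid 123 -sg FINDB_SG -rdfg 10 checkpoint   (return only when R1 data has reached R2)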

ORACLE DATABASE REPLICATION WITH TIMEFINDER AND SRDF CONSIDERATIONS

Number of Snapshots, Frequency, and Retention

VMAX3 TimeFinder SnapVX allows up to 256 snapshots per source device with minimal cache and capacity impact. They can be either restored to the source devices or linked to up to 1024 sets of target devices. SnapVX minimizes the impact of Production host writes by using intelligent Redirect-on-Write and Asynchronous-Copy-on-First-Write. Both methods allow Production host I/O writes to complete without delay due to background data copy, while Production data is modified and the snapshot data preserves its point-in-time consistency.

If snapshots are used as part of a disaster protection strategy, the frequency of creating snapshots can be determined based on the RTO and RPO needs. For example, if snapshots are taken every 30 seconds, there will be no more than 30 seconds of data loss if it becomes necessary to restore the database without recovery. Frequent snapshots also keep the RTO short, as less data needs recovery during roll forward of logs to the current time: rolling the data forward from the last snapshot is much faster than rolling forward from a nightly backup or hourly snapshots. Because snapshots consume storage capacity based on the database change rate, old snapshots should be terminated when no longer needed.

Snapshot Link Copy vs. No Copy Option

SnapVX snapshots cannot be directly accessed by a host. When linking any snapshot to target devices, SnapVX allows using the copy or no-copy option, where no-copy is the default. Targets created using either of these options can be presented to the mount host, and all the Test Cases described in this document can be executed on them. Linked targets for existing snapshots can be further used to create additional point-in-time snapshots for repurposing or backups.

No-copy option: No-copy linked targets remain space efficient by sharing pointers with Production and the snapshot. Only changes to either the linked targets or the Production devices consume additional storage capacity, to preserve the original data. No-copy linked targets are useful for storage capacity efficiency due to the shared pointers, and suit short term, light weight access. However, reads to the linked targets may affect Production performance, as they share their storage via pointers to unmodified data. When a longer retention period of the linked targets is anticipated, or a heavy workload, it can be better to perform a link-copy and have them use independent pointers to storage. Another by-product of no-copy linked targets is that they do not retain their data after they are unlinked: when the snapshot is unlinked, the target devices no longer provide a coherent copy of the snapshot point-in-time data, though they can be relinked later, in which case they are incrementally refreshed from the snapshot (usually after the snapshot is refreshed).

Copy option: Alternatively, the linked targets can be made a stand-alone copy of the source snapshot point-in-time data by using the copy option. A no-copy link can also be changed to copy on demand to create a full-copy linked target. When the background copy is complete, the linked targets have their own copy of the point-in-time data of the snapshot and no longer share pointers with Production. If at that point the snapshot is unlinked, the target devices maintain their own coherent data. It should be noted that during the background copy the storage backend utilization will increase, and the operator may want to time such copy operations to periods of low system utilization to avoid any application performance overhead.

Oracle Database Restart vs. Recovery Solutions

TimeFinder SnapVX creates consistent snapshots by default, which are well suited for a database restart solution. Oracle also allows database recovery from storage consistent replication:

• For a "restart" solution, where no roll-forward is planned, all data, control, and redo log files must participate in the consistent snapshot. A restartable database replica can simply be opened, and it will perform crash or instance recovery just as if the server had rebooted or the DBA had performed a shutdown abort. Archive logs are not required and are not used for a database restart.

• For a "recovery" solution, the snapshot can include just the data files; redo logs are not required in the snapshot. A recoverable database replica can perform database recovery to a desired point in time using archive and redo logs. Traditionally, hot backup mode is used to create a database recoverable solution. However, Oracle database 12c enhanced the ability to create a database recoverable solution based on storage replications by leveraging storage consistency instead of hot-backup mode. This Oracle 12c feature is called Oracle Storage Snapshot Optimization and is demonstrated in Test Case 2. For a snapshot that will be recovered on the Production host, and therefore relies on the logs and archive logs already available there, no separate archive log replica is required; however, if the snapshot will be recovered on another host (such as when using linked targets), an additional snapshot of the archive logs should be taken.

It is possible to create a hybrid replica that can be used for either recovery or restart. This can be done by including all data, control, and redo logs in the first replica, and the archive logs in the second (following the best practice for a recoverable database replica). If a restartable solution is performed, the archive log replica is simply not used. If a recoverable solution is used, the replica of the online logs is not restored (especially since we do not want to overwrite Production's redo logs if those are still available).
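As an illustration of these considerations, the following is a minimal sketch that creates a consistent snapshot suitable for a restart solution and links it to a mount storage group, first space-efficiently and then as a full copy. The group and snapshot names reuse those of the later Test Cases; the generation number in the terminate example is illustrative:

Create a consistent snapshot of the data, control, and redo log devices:
# symsnapvx -sg FINDB_SG -name FINDB_Restart establish
Link space-efficiently (no-copy is the default):
# symsnapvx -sg FINDB_SG -lnsg FINDB_MNT -snapshot_name FINDB_Restart link
Or link as a full, stand-alone copy:
# symsnapvx -sg FINDB_SG -lnsg FINDB_MNT -snapshot_name FINDB_Restart link -copy
Terminate an old snapshot generation once it is no longer needed:
# symsnapvx -sg FINDB_SG -snapshot_name FINDB_Restart terminate -generation 5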

RMAN and VMAX3 Storage Replication

Oracle Recovery Manager (RMAN) is integrated with the Oracle server for tighter control and integrity of Oracle database backups and recovery. RMAN validates every block prior to the backup to ensure data integrity, and provides several options for higher efficiency, such as parallelism, cataloging, detailed history, and backup file retention policies. A typical Oracle database backup strategy involves running full database backups on a periodic basis. As the size of the database grows, however, running incremental backups frequently backs up only the changes since the prior backup. To further improve the efficiency of incremental backups, RMAN allows the use of block change tracking to maintain metadata for the changes in the backup. Once enabled, RMAN incremental backups use the block change tracking (BCT) file to quickly identify the changed blocks.

RMAN backups can be offloaded to an alternate host by using a linked SnapVX target and mounting an Oracle database instance from this copy, following the best practice described in Test Case 2 for recoverable replicas. The RMAN block change tracking mechanism can also be deployed when offloading backups to alternate hosts using VMAX3 snapshots. RMAN based backups are described in Test Case 3. As RMAN backups use the DBID when storing the backup catalog information, such backups can be restored directly to the production database. Alternatively, an Oracle database mounted using the VMAX3-based recoverable snapshots can be registered in an RMAN catalog database for proper tracking of such snapshot-based backup images, for use in a production database recovery using RMAN.

Command Execution and Host Users

Typically, an Oracle host user account is used to execute Oracle RMAN or SQL commands, and a storage admin host user account is used to perform storage management operations (such as TimeFinder SnapVX, SRDF control, or multipathing commands) for Oracle database replication or backup purposes. A different host user account may be used to set up and manage Data Domain systems. This type of role and security segregation is common and often helpful in large organizations, where each group manages their respective infrastructure with a high level of expertise. There are different ways to allow a database backup operator to execute the storage commands their task requires:

• Allow the database backup operator (commonly a DBA) controlled access to commands in Solutions Enabler and Data Domain. Solutions Enabler has a robust set of Access Controls that fit this situation, leveraging VMAX Access Controls (ACLs). Similarly, additional user accounts other than 'sysadmin' can be created that can manage such processes appropriately. Oracle also allows setting up a backup user and providing them only the specific set of authorizations appropriate for their task.

• Use SUDO, allowing the DBA to execute specific commands for the purpose of their backup (possibly in combination with Access Controls). A minimal sketch follows this section.

It is beyond the scope of this paper to document how access controls are executed; however, it is important to mention that Solutions Enabler can be installed for a non-root user, as described in Test Case 9.
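A minimal sketch of the SUDO approach mentioned above. The dedicated backup account name (orabkup) is an assumption for illustration; /usr/symcli/bin is the customary SYMCLI installation path but should be verified on the host. Edit the sudoers file with visudo and allow the backup account to run only the required binary:

orabkup ALL=(root) NOPASSWD: /usr/symcli/bin/symsnapvx

The DBA can then take the backup snapshot without a root login:

$ sudo /usr/symcli/bin/symsnapvx -sg FINDB_SG -name Snapshot_Backup establish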

Oracle DBaaS SnapClone Integration on VMAX3

Oracle Database as a Service (DBaaS) provides self-service deployment of Oracle databases and resource pooling to cater to multi-tenant environments. Oracle DBaaS SnapClone is a storage agnostic, self-service approach to creating rapid and space efficient clones, orchestrated through Oracle Enterprise Manager Cloud Control (EM12c). SnapClone is developed by Oracle and integrated with the VMAX3 SMI-S provider for management and storage provisioning of TEST and DEV copies from a Test Master database using VMAX-based storage snapshots. It also offers a storage ceiling – capacity quotas per thin pool – to give DBAs control over database storage consumption. At this point SnapClone is available on the VMAX platform, and development of VMAX3 support is underway. Please refer to the Oracle note https://docs.oracle.com/cd/E24628_01/doc.121/e28814/cloud_db_portal.htm#EMCLO619 for more information about SnapClone functionality.

Storage Layout and ASM Disk Group Considerations

The storage design principles for Oracle on VMAX3 are documented in the white paper: Deployment Best Practices for Oracle Database with VMAX3 Service Level Objective Management. Two key points are described below.

• ASM Disk Groups and Oracle files:
o A minimum of 3 sets of database devices should be defined for maximum flexibility: data/control files, redo logs, and FRA (archive logs), each in its own Oracle ASM disk group (for example, +DATA, +REDO, +FRA).
o The separation of data, redo, and archive log files allows backup and restore of only the appropriate file types at the appropriate time. For example, during restore we can restore only the data files, without overwriting Production's redo logs, and Oracle backup procedures require the archive logs to be replicated at a later time than the data files.
o If only a database restart solution is required, then the data and log files can be mixed and replicated together (although there may be other reasons to separate them, such as for better performance management).
o When Oracle RAC is used, it is recommended to use a separate ASM disk group for the Grid infrastructure (for example, +GRID). The +GRID ASM disk group should not contain user data. Since RAC uses shared storage, by virtue of replicating all the database components (data, log, and control files) the target database can be started in cluster or single-instance mode. However, it is not recommended to replicate the cluster layer (voting disks or cluster configuration devices), since these contain local host and subnet information; the cluster information is not part of a database backup, and a recovery may be performed on another clustered server. It is best practice that if a cluster layer is required at the mount hosts, it should be configured ahead of time, based on the mount hostnames and subnets. In this case it already has its own +GRID ASM disk group configured, and is therefore ready to bring up the database when needed.

• Partition alignment on x86 based systems:
o Oracle recommends on Linux and Windows systems to create at least one partition on each storage device. Due to the legacy BIOS issue, by default such partitions are rarely aligned. It is therefore strongly recommended to move the beginning of the first partition, using fdisk or parted, to an offset of 1MB (2048 blocks).
o With the beginning of the partition aligned, I/O to VMAX3 will be aligned with storage tracks and FAST extents, achieving the best performance. (A short parted sketch follows this list.)

Remote Replication Considerations

It is recommended for an SRDF/A solution to always use Consistency Enabled, to ensure that if a single device cannot replicate, the entire SRDF group stops replicating, maintaining a consistent database replica on the target devices.

SRDF is a restart solution, and since database crash recovery never uses archive logs, there is no need to include FRA (archive logs) in the SRDF replication. However, there are two reasons why they could be included:
• If Flashback database functionality is required for the target. Replicating the flashback logs in the same consistency group as the rest of the database allows the use of Flashback database on the target.
• To allow offload of backup operations to the remote site, as archive logs are required to create a stand-alone backup image of the database. In this case, the archive logs can use a different SRDF group and mode, potentially leveraging SRDF/A, even if the data, control, and log files are replicated with SRDF/S.

It is always recommended to have a database replica available at the SRDF remote site as a gold copy protection from rolling disasters. Rolling disaster is a term used when a first interruption to normal replication activity is followed by a secondary database failure on the source, leaving the database without an immediately available valid replica. For example, if SRDF replication was interrupted for a while for any reason (planned or unplanned) and changes accumulated on the source, then once synchronization resumes, and until the target is synchronized (SRDF/S) or consistent (SRDF/A), the target is not a valid database image. For that reason it is best practice, before such a resynchronization, to take a TimeFinder gold copy replica at the target site. This preserves the last valid image of the database as a safety measure against rolling disasters.

ORACLE BACKUP AND DATA PROTECTION TEST CASES

Test Configuration

This section provides examples of Oracle database backup and data protection on VMAX3 arrays, using Oracle database 12c and ASM. Figure 2 (Oracle local and D/R test configuration) depicts the overall test configuration used to describe these Test Cases.

Test Overview

General test notes:
• The FINDB database was configured to run an industry standard OLTP workload with a 70/30 read/write ratio and 8KB block size. No special database tuning was done, as the focus of the test was not on achieving maximum performance, but rather on comparative differences of a standard database workload.
• The DATA and REDO storage groups (and ASM disk groups) were cascaded into a parent storage group (FINDB_SG) for ease of provisioning. Production Oracle database SnapVX and SRDF operations were run on FINDB_SG.
• Storage groups were created on the local and remote VMAX arrays for linking point-in-time snapshots for the various Test Cases.

Database configuration details

The following tables show the Test Case environment. Table 1 shows the VMAX3 storage environment, Table 2 shows the host environment, and Table 3 shows the databases storage configuration.

Table 1: Test storage environment
• Storage array: Single engine VMAX 200K
• HYPERMAX OS: 5977.596
• Drive mix (including spares): 17 x EFDs – RAID5 (3+1); 66 x 15K HDD – RAID1; 34 x 1TB 7K HDD – RAID6 (6+2)

Table 2: Test host environment
• Oracle: Oracle Grid and Database release 12.1.0.2
• Linux: Oracle Enterprise Linux 6
• Multipathing: Linux DM Multipath
• Hosts: 2 x Cisco C240, 96 GB memory
• Volume Manager: Oracle ASM

Table 3: Test database configuration (database FINDB)
• +DATA ASM disk group: 4 x 1 TB thin LUNs, storage group DATA_SG
• +REDO ASM disk group: 4 x 150 GB thin LUNs, storage group REDO_SG
• DATA_SG and REDO_SG are cascaded into the parent storage group FINDB_SG (1.5 TB); local linked target SG: FINDB_MNT; SRDF R2 SG: FINDB_R2; SRDF R2 linked target SG: FINDB_R2_TGT
• +FRA ASM disk group: 4 x 100 GB thin LUNs, storage group FINFRA_SG (no parent SG); local linked target SG: FINFRA_MNT; SRDF R2 SG: FINFRA_R2; SRDF R2 linked target SG: FINFRA_R2_TGT

High level test cases:
1. Creating a local restartable database replica for database clones
2. Creating a local recoverable database replica for backup and recovery
3. Performing full or incremental RMAN backups from a SnapVX replica (including Block Change Tracking)
4. Performing database recovery of Production using a recoverable snapshot
5. Using SRDF/S and SRDF/A for database Disaster Recovery
6. Creating remote restartable copies
7. Creating remote recoverable database replicas
8. Parallel recovery from remote backup image
9. Leveraging self-service replications for DBAs

Test Case 1: Creating a local restartable database replica for database clones

Objectives: The purpose of this Test Case is to demonstrate the use of SnapVX to create a local restartable database copy, also referred to as a database clone. The database clone can be started on a Mount host for purposes such as logical error detection or the creation of Test, Development, and Reporting environments. These environments can be periodically refreshed from Production.

Note: A restartable database replica must include all database control, data, and redo log files, and therefore the cascaded storage group FINDB_SG was used.

High level steps:
1. Create a snapshot of the Production database containing all control, data, and redo log files.
2. Link the snapshot to target devices and present them to the Mount host.
3. Start the Oracle database on the Mount host.

Groups used:
• Production Host: storage group FINDB_SG; ASM disk groups DATA, REDO
• Mount Host: storage group FINDB_MNT; ASM disk groups DATA, REDO

Detailed steps:

On Production host:
• Create a snapshot of the Production database containing all control, data, and redo log files.
# symsnapvx -sg FINDB_SG -name FINDB_Restart establish

On Mount host:
• Complete the pre-requisites:
o Grid infrastructure and Oracle binaries should be installed ahead of time on the Mount host.
o The storage group FINDB_MNT contains the linked-target devices of Production's snapshot. It should be added to a masking view to make the target devices accessible to the Mount host.
o If RAC is used on the Mount host, it should be pre-configured so the ASM disk groups from the snapshots can simply be mounted into the existing cluster. If RAC is not used on the Mount host, see the steps later to bring up Oracle High Availability Services (HAS).
• If refreshing an earlier snapshot, shut down the database instance that will be refreshed and dismount its ASM disk groups:
o Log in to the database instance and shut it down.
SQL> shutdown immediate;
o Log in to the ASM instance and dismount the ASM disk groups.
SQL> alter diskgroup DATA dismount;
SQL> alter diskgroup REDO dismount;
• Link the Production snapshot based on FINDB_SG to the target storage group FINDB_MNT. For the first link use the 'link' option; for all subsequent links use the 'relink' option.
# symsnapvx -sg FINDB_SG -lnsg FINDB_MNT -snapshot_name FINDB_Restart relink
• If RAC is used on the Mount host, it should already be configured and running using a separate ASM disk group, and therefore +DATA and +REDO can simply be mounted; skip to the next step. If RAC is not used, an ASM instance may not be running yet. Bring it up following the procedure below before mounting the +DATA and +REDO ASM disk groups.
o As the Grid infrastructure user (ASM instance user), start the Oracle High Availability Services.
$ crsctl start has
CRS-4123: Oracle High Availability Services has been started.
o Log in to the ASM instance and update the ASM disk string before mounting the ASM disk groups.
$ sqlplus "/ as sysasm"
SQL> alter system set asm_diskstring='/dev/mapper/ora*p1';
• Mount the ASM disk groups that now contain the snapshot point-in-time data.
SQL> alter diskgroup DATA mount;
SQL> alter diskgroup REDO mount;
• Log in to the database instance and start up the database (do not perform database recovery). The database performs crash (or instance) recovery and opens.
SQL> startup

Note: Since there is no roll forward of transactions, the creation of database clones using SnapVX is very fast. The time it takes Oracle to complete crash recovery and open depends on the amount of transactions in the redo log since the last checkpoint.
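Before a refresh, it can be useful to confirm the snapshot's state, timestamp, and generation. A usage sketch (Appendix III lists the full command set):

# symsnapvx -sg FINDB_SG -snapshot_name FINDB_Restart list -detail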

Test Case 2: Creating a local recoverable database replica for backup and recovery

Objectives: The purpose of this Test Case is to demonstrate the use of SnapVX to create a local recoverable database replica. Such a database replica can be used to recover Production, or it can be mounted on a Mount host and used for RMAN backup and running reports.

Note: As long as the database replica is only mounted, or opened in read-only mode, it can be used to recover Production. See Test Case 3 for details on how the snapshot can be used to perform RMAN backups.

High level steps:
1. For Oracle databases prior to 12c, place the Production database in hot-backup mode.
2. Create a consistent snapshot of the Production control, data, and redo log files (which are contained in the +DATA and +REDO ASM disk groups).
3. For Oracle databases prior to 12c, end hot-backup mode.
4. As the database user, archive the current log, switch logs, and save a backup control file to the +FRA ASM disk group.
5. If the replica is used to offload RMAN incremental backups to a Mount host, switch the RMAN Block Change Tracking file manually.
6. Create a snapshot of the Production archive logs contained in the +FRA ASM disk group.
7. Link both snapshots to target devices and present them to the Mount host.
8. Mount the ASM disk groups on the Mount host.
9. Mount the database instance on the Mount host (do not open it).
10. Optionally, catalog the database backup with RMAN.

Groups used:
• Production Host: storage groups FINDB_SG, FINFRA_SG; ASM disk groups DATA, REDO, FRA
• Mount Host: storage groups FINDB_MNT, FINFRA_MNT; ASM disk groups DATA, REDO, FRA

Detailed steps:

On Production host:
• Pre-Oracle 12c, place the Production database in hot-backup mode.
SQL> alter database begin backup;
• Create a snapshot of the Production control, data, and redo log files contained in the +DATA and +REDO ASM disk groups.
# symsnapvx -sg FINDB_SG -name Snapshot_Backup establish
• Pre-Oracle 12c, end hot-backup mode.
SQL> alter database end backup;
• As the database user, archive the current log and switch logs.
SQL> alter system archive log current;
SQL> alter system switch logfile;
• Save a backup control file to the +FRA ASM disk group.
SQL> alter database backup controlfile to '+FRA/CTRLFILE_BKUP' reuse;
• If the replica is not used to offload RMAN incremental backups to a Mount host (Test Case 3), skip to the next step. Otherwise, switch the RMAN Block Change Tracking file manually.
SQL> execute dbms_backup_restore.bctswitch();

Note: When RMAN incremental backups are taken using RMAN Block Change Tracking (BCT), RMAN switches the version of the file with each backup automatically. However, when the RMAN backup is offloaded to a Mount host, RMAN updates the BCT file on the Mount host. Oracle provides an API for such cases that switches the BCT file manually on Production after incremental backups are taken from the Mount host.

Note: By default Oracle only keeps 8 versions in the BCT file for incremental backups. That means that if more than 8 incremental backups are taken before another level 0 (full) backup takes place, RMAN will not be able to use the BCT file and will revert to scanning the whole database. To increase the number of versions in the BCT file, use the init.ora parameter _bct_bitmaps_per_file (see Oracle support notes 1192652.1 and 1528510.1).

• Create a snapshot of the Production archive logs contained in the +FRA ASM disk group.
# symsnapvx -sg FINFRA_SG -name FRA_Backup establish

On Mount host:
• Complete the pre-requisites:
o Grid infrastructure and Oracle binaries should be installed ahead of time on the Mount host.
o The storage groups FINDB_MNT and FINFRA_MNT contain the linked-target devices of Production's snapshots. They should be added to a masking view to make the target devices accessible to the Mount host.
o If RAC is used on the Mount host, it should be pre-configured so the ASM disk groups from the snapshots can simply be mounted into the existing cluster. If RAC is not used on the Mount host, see the steps in Test Case 1 to bring up Oracle High Availability Services (HAS).
• If refreshing an earlier snapshot, shut down the database instance and dismount the ASM disk groups:
o Log in to the database instance and shut it down.
SQL> shutdown immediate;
o Log in to the ASM instance and dismount the ASM disk groups that will be refreshed.
SQL> alter diskgroup DATA dismount;
SQL> alter diskgroup REDO dismount;
SQL> alter diskgroup FRA dismount;
• Link the Production snapshots based on FINDB_SG and FINFRA_SG to the target storage groups FINDB_MNT and FINFRA_MNT respectively. For the first link use the 'link' option; for all subsequent links use the 'relink' option.
# symsnapvx -sg FINDB_SG -lnsg FINDB_MNT -snapshot_name Snapshot_Backup relink
# symsnapvx -sg FINFRA_SG -lnsg FINFRA_MNT -snapshot_name FRA_Backup relink

Note: By default a SnapVX link uses no-copy mode. To have a stand-alone copy with all the data from the source, copy mode can be used by adding '-copy' to the command.

• If the ASM instance is not running, follow the steps in Test Case 1 to start the ASM instance and update the ASM disk string.
• Mount the ASM disk groups that now contain the snapshot point-in-time data.
SQL> alter diskgroup DATA mount;
SQL> alter diskgroup REDO mount;
SQL> alter diskgroup FRA mount;
• Log in to the database instance and mount the database (but do not open it with resetlogs).
SQL> startup mount
• Optionally, as a database user, catalog the backup data files (all in the +DATA disk group) with RMAN.
RMAN> catalog start with '+DATA' noprompt;

Test Case 3: Performing full or incremental RMAN backup from a SnapVX replica

Objectives: The purpose of this Test Case is to offload RMAN backups to a Mount host using a SnapVX snapshot. The RMAN backup can be full or incremental. In incremental backups, RMAN Block Change Tracking (BCT) is used from the Mount host.

High level steps:
1. If RMAN incremental backups are used, enable block change tracking on Production.
2. Perform Test Case 2 to create a recoverable replica of Production and mount it to the Mount host.
3. Perform an RMAN full or incremental backup from the Mount host.

Groups used:
• Production Host: storage groups FINDB_SG, FINFRA_SG; ASM disk groups DATA, REDO, FRA
• Mount Host: storage groups FINDB_MNT, FINFRA_MNT; ASM disk groups DATA, REDO, FRA

Detailed steps:

On Production host:
• If RMAN incremental backups are used, enable block change tracking on Production. Make sure that the block change tracking file is created in the +FRA ASM disk group. For example:
SQL> alter database enable block change tracking using file '+FRA/BCT/change_tracking.f' reuse;
• Perform Test Case 2 to create a recoverable replica of Production and mount it to the Mount host.
o If RMAN incremental backups are used, make sure to switch the BCT file manually after the step that archives the current log file, as described in Test Case 2. At the end of this step, the +FRA ASM disk group will be mounted to the Mount host with the Block Change Tracking file included, and Production's BCT file will start tracking block changes with a new version.

On Mount host:
• If no RMAN incremental backups are used, simply run an RMAN backup script and perform a full database backup.
o Example for creating a full backup (simplest form):
RMAN> run {
Backup database;
}
• If RMAN incremental backups are used, perform a full backup (also called a 'level 0' backup) periodically, followed by level 1 backups – for example, a weekly level 0 backup and daily level 1 backups. The DBA can determine an incremental backup strategy between Differential and Cumulative incremental backups (refer to Oracle documentation for more details).
o Example for creating the first full backup as part of an incremental backup strategy:
RMAN> run {
Backup incremental level 0 database;
}
o Example for creating an incremental backup:
RMAN> run {
Backup incremental level 1 database;
}
o Verify for level 1 backups that the BCT file was used:
SQL> select count(*) from v$backup_datafile where used_change_tracking='YES';
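For the Differential vs. Cumulative choice mentioned above: a cumulative level 1 backup copies all blocks changed since the last level 0, so a restore applies at most one incremental, at the cost of larger backups. A usage sketch:

RMAN> run {
Backup incremental level 1 cumulative database;
}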

Test Case 4: Performing database recovery of Production using a recoverable snapshot

Objectives: The purpose of this Test Case is to leverage a previously taken recoverable snapshot to perform a database recovery of the Production database. The Test Case demonstrates full and point-in-time recovery. It also demonstrates how to leverage the Oracle 12c Storage Snapshot Optimization feature during database recovery.

High level steps:
1. Perform Test Case 2 to create a recoverable replica of Production, though there is no need to mount it to the Mount host.
2. Restore the SnapVX snapshot of the +DATA ASM disk group alone to Production. (Do not restore the +REDO ASM disk group, to avoid overwriting the current redo logs on Production if they survived.)
3. Recover the Production database using the archive logs and optionally the current redo log.

Groups used:

Server            Storage Group        ASM Disk Group
Production Host   (Parent) FINDB_SG    (DATA, REDO)
                  (Child) DATA_SG      DATA
                  (Child) REDO_SG      REDO
Production Host   FINFRA_SG            FRA

Detailed steps:

On Production host – during backup:
• Perform Test Case 2 to create a recoverable replica of Production. The linked target (or Mount host) will not be used in this scenario, only the original snapshot of Production.

On Production host – during restore:
• Restore the SnapVX snapshot of the +DATA ASM disk group alone to Production (do not restore the +REDO ASM disk group, to avoid overwriting the current redo logs on Production if they survived).
o If Production's +DATA ASM disk group was still mounted, then as Grid user, use 'asmcmd' or SQL to dismount it (repeat on all nodes if RAC is used):
SQL> alter diskgroup DATA dismount;
o Restore the SnapVX snapshot of the +DATA ASM disk group alone. To do so, use the child storage group DATA_SG instead of the cascaded storage group FINDB_SG that was used to create the original snapshot:
# symsnapvx -sg DATA_SG -snapshot_name Snapshot_Backup restore
o It is not necessary to wait for the snapshot restore to complete; however, at some point after it has completed, terminate the snapshot-restore session as a best practice:
# symsnapvx -sg DATA_SG -snapshot_name Snapshot_Backup verify -restored
# symsnapvx -sg DATA_SG -snapshot_name Snapshot_Backup terminate -restored
• SnapVX allows using the source devices as soon as the restore is initiated, even while the copy of the changed data is still taking place in the background. Once the restore starts, the +DATA ASM disk group can be mounted on the Production host to the ASM instance.
o As Grid user, using 'asmcmd' or SQL, mount the DATA ASM disk group (repeat on all nodes if RAC is used):
SQL> alter diskgroup DATA mount;
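Put together, the restore portion amounts to the short sequence below. This is a minimal illustrative sketch using the names from this Test Case; RAC handling and error checking are omitted.

#!/bin/sh
# Illustrative restore of +DATA from the recoverable snapshot.
sqlplus -s "/ as sysasm" <<EOF
alter diskgroup DATA dismount;
EOF

# Restore only the child storage group that backs +DATA.
symsnapvx -sg DATA_SG -snapshot_name Snapshot_Backup restore -nop

# The disk group is usable immediately; remount while the copy completes.
sqlplus -s "/ as sysasm" <<EOF
alter diskgroup DATA mount;
EOF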

• Recover the Production database using the archive logs and optionally the current redo log.
o When performing full recovery (using the current redo log if it is still available), follow the standard Oracle database recovery procedures. For example:
SQL> recover automatic database;
SQL> alter database open;
o When performing incomplete recovery while leveraging the Oracle 12c Storage Snapshot Optimization feature, provide the time of the snapshot during the recovery. If the backup was taken using hot-backup mode instead, remove the "snapshot time <time>" reference. An example of using Storage Snapshot Optimization:
SQL> alter session set NLS_DATE_FORMAT="YYYY-MM-DD HH24:MI:SS";
SQL> recover database until time '2015-03-21 12:00:00' snapshot time '2015-03-21 11:50:40';
SQL> alter database open RESETLOGS;
Note: It might be necessary to point to the location of the online redo logs or archive logs if the recovery process didn't locate them automatically (common in RAC implementations with multiple online or archive log locations). The goal is to apply any necessary archive logs, as well as the online logs, fully.
• If the recovery process requires archive logs that are no longer available on the server but exist in the +FRA snapshot, use the snapshot to retrieve the missing archive logs.
o Create a new snapshot of +FRA prior to restoring an old one. It is recommended that a new snapshot of +FRA be taken prior to retrieving the old +FRA snapshot with the missing archive logs: the new snapshot will contain any additional archive logs that currently exist on the host but were created after the old +FRA snapshot was taken, and that would therefore be lost when it is restored.
# symsnapvx -sg FINFRA_SG -name FRA_Backup establish
o Dismount the +FRA ASM disk group:
SQL> alter diskgroup FRA dismount;
o List the FRA snapshots to choose which snapshot generation to restore (generation 0 is always the latest for a given snapshot name):
# symsnapvx -sg FINFRA_SG -snapshot_name FRA_Backup list -detail
o Restore the appropriate +FRA snapshot and use its archive logs as necessary during the database recovery process:
# symsnapvx -sg FINFRA_SG -snapshot_name FRA_Backup restore -generation <gen_number>
o Mount the +FRA ASM disk group:
SQL> alter diskgroup FRA mount;
o It is not necessary to wait for the snapshot restore to complete; however, at some point after it has completed, terminate the snapshot-restore session as a best practice:
# symsnapvx -sg FINFRA_SG -snapshot_name FRA_Backup verify -restored
# symsnapvx -sg FINFRA_SG -snapshot_name FRA_Backup terminate -restored
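The archive-log retrieval steps above combine into the following illustrative sketch. The generation number is an example (generation 1 here is the snapshot taken before the new one created by the script); names are from this paper.

#!/bin/sh
# Illustrative retrieval of archive logs from an older +FRA snapshot generation.

# Preserve the logs currently in +FRA before overwriting the disk group.
symsnapvx -sg FINFRA_SG -name FRA_Backup establish -nop

sqlplus -s "/ as sysasm" <<EOF
alter diskgroup FRA dismount;
EOF

# Restore an older generation (generation 0 is the snapshot just taken above).
symsnapvx -sg FINFRA_SG -snapshot_name FRA_Backup restore -generation 1 -nop

sqlplus -s "/ as sysasm" <<EOF
alter diskgroup FRA mount;
EOF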

Test Case 5: Using SRDF/S and SRDF/A for database disaster recovery

Objectives: The purpose of this test case is to leverage VMAX3 SRDF to create remote restartable copies and use them for production database disaster recovery.

High level steps:
1. Set up SRDF between the production and D/R sites.
2. Perform a full establish and set up the appropriate replication mode (synchronous and/or asynchronous).
3. In the event of a disaster at the production site, start the application on the D/R site.
4. Perform application restart.

Groups used:

SRDF Site   Server            Storage Group         ASM Disk Group
R1          Production Host   (Parent) FINDB_SG     (DATA, REDO)
                              (Child) DATA_SG       DATA
                              (Child) REDO_SG       REDO
R1          Production Host   FINFRA_SG             FRA
R2          D/R host          (Parent) FINDB_R2     (DATA, REDO)
                              (Child) DATA_R2_SG    DATA
                              (Child) REDO_R2_SG    REDO
R2          D/R host          FINFRA_R2             FRA

Detailed steps:

SRDF replication setup example (performed from the local storage management host):
• Create a dynamic SRDF group between the production and D/R sites:
# symrdf addgrp -label FINDB -rdfg 20 -dir 1H:10 -remote_sid 536 -remote_dir 1E:7 -remote_rdfg 20
• Pair the SRDF devices between the Production and remote storage groups that include the database data, control, and redo log files:
# symrdf -sg FINDB_SG -rdfg 20 createpair -type R1 -remote_sg FINDB_R2 -establish
• Optionally pair SRDF devices for the FRA storage group if archive logs and flashback logs are also replicated to the remote site. If the FRA ASM disk group includes only archive logs, SRDF/A can be used for more efficient use of bandwidth. If FRA also includes flashback logs, it should be kept consistent with FINDB_SG and use the same SRDF group and mode.
# symrdf -sg FINDB_FRA -rdfg 21 createpair -type R1 -remote_sg FINFRA_R2 -establish
• Set the SRDF mode to synchronous or asynchronous:
# symrdf -rdfg 20 set mode synchronous
• Enable replication consistency when using SRDF asynchronous (SRDF/A):
# symrdf -rdfg 20 enable

SRDF replication failover example (performed from the remote storage management host):
• In the event of an outage on the Production site, the SRDF link will fail and replication stops. The R2 devices are consistent but not yet read-writable. Perform an SRDF failover to make the R2 devices write-enabled. The commands below describe a planned failover; in the event of a disaster this is done automatically by SRDF.
[Failover DATA and REDO logs]
# symrdf -sid 535 -rdfg 20 failover
[Failover FRA logs]
# symrdf -sid 535 -rdfg 21 failover
• Mount the ASM disk groups on the remote site:
SQL> alter diskgroup DATA mount;
SQL> alter diskgroup REDO mount;
SQL> alter diskgroup FRA mount;
• Start an Oracle instance on the remote site:
SQL> startup mount
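The one-time SRDF setup portion above can also be scripted. The sketch below is illustrative only, reusing the SIDs, directors, and group numbers from this paper and keeping the same command forms; mode and consistency settings should be chosen per site policy.

#!/bin/sh
# Illustrative one-time SRDF setup (values from this paper's examples).
symrdf addgrp -label FINDB -rdfg 20 -sid 535 -dir 1H:10 \
       -remote_sid 536 -remote_dir 1E:7 -remote_rdfg 20

# Pair the devices and start the initial full establish.
symrdf -sid 535 -sg FINDB_SG -rdfg 20 createpair -type R1 \
       -remote_sg FINDB_R2 -establish -nop

# Choose the replication mode, then enable consistency protection (SRDF/A).
symrdf -sid 535 -sg FINDB_SG -rdfg 20 set mode async -nop
symrdf -sid 535 -sg FINDB_SG -rdfg 20 enable -nop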

On production host – for disaster recovery:
• Split the SRDF link and initiate the restore operation:
# symrdf -sid 535 -sg FINDB_SG -rdfg 20 split
# symrdf -sid 535 -sg FINDB_SG -rdfg 20 restore
• When the restore operation is initiated, restart the production site application. Repeat the steps for FRA as well if it is restored.
<Use SQLPLUS>
<For ASM instance>
SQL> alter diskgroup DATA mount;
SQL> alter diskgroup REDO mount;
SQL> alter diskgroup FRA mount;
<For Oracle instance>
SQL> startup mount
• When the restore operation completes, fail back SRDF to revert the roles to the original:
[Failback DATA and REDO logs]
# symrdf -sid 535 -rdfg 20 failback
[Failback FRA logs]
# symrdf -sid 535 -rdfg 21 failback

Test Case 6: Creating remote restartable copies

Objectives: The purpose of this test case is to leverage VMAX3 SRDF and SnapVX to create remote restartable copies and use them for production database disaster recovery. The gold copy snapshot created on the D/R site is linked to a separate target storage group for D/R testing.

High level steps:
1. Use Test Case 5 to set up the remote site.
2. Use SnapVX to create snapshots using the R2 devices.
3. Start an application for D/R testing using the snapshots.

Groups used:

SRDF Site   Server            Storage Group            ASM Disk Group
R1          Production Host   (Parent) FINDB_SG        (DATA, REDO)
                              (Child) DATA_SG          DATA
                              (Child) REDO_SG          REDO
R2          D/R host          (Parent) FINDB_R2        (DATA, REDO)
                              (Child) DATA_SG_R2       DATA
                              (Child) REDO_SG_R2       REDO
R2          D/R host          (Parent) FINDB_R2_TGT    (DATA, REDO)
                              (Child) DATA_SG_R2TGT    DATA
                              (Child) REDO_SG_R2TGT    REDO

Detailed steps:

On production host – during normal operation:
• Use Test Case 5 to set up the remote site.
• Pre-Oracle 12c, use database backup mode prior to the snapshot of DATA and REDO. The snapshots generated this way also work with the Oracle 12c snapshot optimization feature.

On D/R host – during normal operation:
• Create periodic point-in-time snapshots from the R2 devices for D/R testing:
<Create snapshot for DATA and REDO on R2 site to be used for periodic D/R testing>
# symsnapvx -sid 536 -sg FINDB_R2 -name FINDB_R2Gold -nop -v establish

On D/R host – during normal operation, for D/R testing (see the sketch after these steps):
• Link the snapshot to the target storage group on the R2 site. For subsequent D/R testing, relink can be used to refresh the existing targets.
# symsnapvx -sid 536 -sg FINDB_R2 -lnsg FINDB_R2TGT -snapshot_name FINDB_R2Gold link
• Mount the ASM disk groups:
SQL> alter diskgroup DATA mount;
SQL> alter diskgroup REDO mount;
• Restart the Oracle database:
SQL> startup

On production host – for disaster recovery using the restartable snapshot:
• Split the SRDF link to prepare for the restore, and link the gold copy snapshot to the target storage group:
# symrdf -sid 535 -sg FINDB_SG -rdfg 20 split
# symsnapvx -sid 536 -sg FINDB_R2 -lnsg FINDB_R2TGT -snapshot_name FINDB_R2Gold link
• Follow the rest of the steps in Test Case 5 for production host disaster recovery to perform the SRDF restore.
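A refresh of the D/R test environment can be condensed into the following illustrative sketch, using the names from this test case (relink refreshes targets created by an earlier link); error handling is omitted.

#!/bin/sh
# Illustrative D/R test refresh on the D/R host.

# Refresh the gold copy and (re)link it to the test target storage group.
symsnapvx -sid 536 -sg FINDB_R2 -name FINDB_R2Gold establish -nop
symsnapvx -sid 536 -sg FINDB_R2 -lnsg FINDB_R2TGT \
          -snapshot_name FINDB_R2Gold relink -nop

# Mount the disk groups and restart the database for testing.
sqlplus -s "/ as sysasm" <<EOF
alter diskgroup DATA mount;
alter diskgroup REDO mount;
EOF
sqlplus -s "/ as sysdba" <<EOF
startup
EOF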

Test Case 7: Creating remote recoverable database replicas

Objectives: The purpose of this test case is to leverage VMAX3 SRDF and SnapVX to create remote recoverable copies to use for remote backups, or for recovery of the production database from remote backups. This test case uses the snapshot created off the R2 devices, linked to a separate storage group for further backup. This is useful, for example, when production is placed in hot backup mode before the remote clone is taken.

Test execution steps:
1. Use Test Case 5 to set up the remote site.
2. Use SnapVX to create snapshots using the R2 devices.
3. Create control file copies and an FRA snapshot.
4. Link the snapshots to the target storage groups on the R2 site.
5. Mount the snapshots and the Oracle instance to prepare for backups, as described in Test Case 2.

Groups used:

SRDF Site   Server            Storage Group            ASM Disk Group
R1          Production Host   (Parent) FINDB_SG        (DATA, REDO)
                              (Child) DATA_SG          DATA
                              (Child) REDO_SG          REDO
R1          Production Host   FINDB_FRA                FRA
R2          D/R host          (Parent) FINDB_R2        (DATA, REDO)
                              (Child) DATA_SG_R2       DATA
                              (Child) REDO_SG_R2       REDO
R2          D/R host          FINDB_FRA_R2             FRA
R2          D/R host          (Parent) FINDB_R2_TGT    (DATA, REDO)
                              (Child) DATA_SG_R2TGT    DATA
                              (Child) REDO_SG_R2TGT    REDO
R2          D/R host          FRA_R2_TGT               FRA

Detailed steps (a combined sketch follows these steps):

On production host:
• Pre-Oracle 12c, put the database in hot backup mode:
SQL> alter database begin backup;
• Also save a backup control file to the FRA disk group, so it will be available in the FRA snap along with the archived logs, to use with the RMAN backup:
SQL> alter database backup controlfile to '+FRA/CTRLFILE_BKUP' REUSE;
a) If SRDF asynchronous mode (SRDF/A) is used for SRDF replication of FINDB_SG, use the SRDF checkpoint command to make sure that the remote target data files are also updated with backup mode:
<Issue SRDF checkpoint command>
# symrdf -sid 535 -sg FINDB_SG checkpoint
Note: The SRDF checkpoint command returns control to the user only after the source device content has reached the SRDF target devices (SRDF will wait two delta sets).
b) No special action is needed when using SRDF synchronous mode (SRDF/S) on FINDB_SG.

On D/R host:
• Create a snapshot for the DATA and REDO disk groups on the remote target. Name the snap to identify it as the backup image.
Note: Every time "establish" is used with the same snapshot name, the generation number is incremented while the older generation is kept as well. This can be avoided by terminating the snap prior to recreating it.
<Create snapshot for DATA and REDO to be used for backup on remote VMAX3>
# symsnapvx -sid 536 -sg FINDB_R2 -name Snapshot_Backup_R2 -nop -v establish

On production host:
• Pre-Oracle 12c, take the database out of backup mode:
<Use SQLPLUS to take database out of backup mode>
SQL> alter database end backup;
• Perform a log switch and archive the current log:
<Use SQLPLUS>
SQL> alter system switch logfile;
SQL> alter system archive log current;
a) If SRDF asynchronous mode (SRDF/A) is used for SRDF replication of FINDB_FRA, use the SRDF checkpoint command to make sure that the remote FRA disk group is updated with the necessary archived logs generated during backup mode:
<Issue SRDF checkpoint command>
# symrdf -sid 535 -sg FINDB_FRA checkpoint
b) No special action is needed when using SRDF synchronous mode (SRDF/S).
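The production-host steps above, together with the two remote snapshots (DATA/REDO here, and the FRA snapshot created in the next step), can be orchestrated from one host. The sketch below is illustrative only: it assumes SYMCLI on this host can address both arrays, uses the pre-Oracle 12c hot-backup path, and omits error handling.

#!/bin/sh
# Illustrative orchestration of a remote recoverable snapshot (pre-12c path).
sqlplus -s "/ as sysdba" <<EOF
alter database begin backup;
alter database backup controlfile to '+FRA/CTRLFILE_BKUP' REUSE;
EOF

# SRDF/A only: drain the delta sets so backup mode reaches the R2 devices.
symrdf -sid 535 -sg FINDB_SG checkpoint

# Snapshot DATA/REDO on the remote array while production is in backup mode.
symsnapvx -sid 536 -sg FINDB_R2 -name Snapshot_Backup_R2 -nop establish

sqlplus -s "/ as sysdba" <<EOF
alter database end backup;
alter system archive log current;
EOF

# SRDF/A only: make sure the archived logs reached the remote FRA, then snap it.
symrdf -sid 535 -sg FINDB_FRA checkpoint
symsnapvx -sid 536 -sg FINDB_FRA_R2 -name FINDB_FRABackup_R2 -nop establish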

On D/R host:
• Create a snapshot of the FRA disk group on the remote target:
<Create VMAX3 Snapshot for FRA disk group>
# symsnapvx -sid 536 -sg FINDB_FRA_R2 -name FINDB_FRABackup_R2 -nop -v establish
• Link the snapshots Snapshot_Backup_R2 to FINDB_R2_TGT, and FINDB_FRABackup_R2 to FRA_R2_TGT, to continue with the rest of the steps and provision the storage to the D/R host.
• Use the backup operations described for the Mount host in Test Case 2 to continue with further backup.

Test Case 8: Parallel recovery from a remote backup image

Objectives: The purpose of this test case is to demonstrate parallel recovery from a remote backup image, by initiating a restore of the remote target from a remote snapshot and simultaneously starting an SRDF restore. This test case is similar to Test Case 4 except that it uses a remote recoverable copy.

Test scenario: Use a remote snapshot to restore the SRDF R2 devices, and initiate the SRDF restore simultaneously.

Test execution steps:
1. Use Test Case 7 to create a remote database snapshot.
2. Use SnapVX to restore the R2 devices.
3. Start the SRDF restore.
4. Start production data recovery.

Groups used:

SRDF Site   Server            Storage Group         ASM Disk Group
R1          Production host   (Parent) FINDB_SG     (DATA, REDO)
                              (Child) DATA_SG       DATA
                              (Child) REDO_SG       REDO
R1          Production host   FINDB_FRA             FRA
R2                            (Parent) FINDB_R2     (DATA, REDO)
                              (Child) DATA_SG_R2    DATA
                              (Child) REDO_SG_R2    REDO
R2                            FINFRA_R2             FRA

Detailed steps:

On production host – during normal operation:
• Use Test Case 7 to create a recoverable image on the remote site.
• Shut down the Production database and dismount the ASM disk groups:
a) Shut down the Oracle database:
SQL> shutdown immediate;
b) Dismount the DATA, REDO, and FRA disk groups:
SQL> alter diskgroup DATA dismount;
SQL> alter diskgroup REDO dismount;
SQL> alter diskgroup FRA dismount;
• Split the SRDF groups:
# symrdf -sid 535 -sg FINDB_SG -rdfg 20 split
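The parallel recovery itself, walked through command by command below, can be condensed into a short script. This is an illustrative sketch using the names from this test case; the polling loop is an assumption (symrdf verify returns success once the pairs reach the requested state) and error handling is omitted.

#!/bin/sh
# Illustrative parallel recovery: restore R2 from the remote snapshot,
# then immediately start the SRDF R2-to-R1 incremental restore.
symsnapvx -sid 536 -sg FINDB_R2 -snapshot_name FINDB_R2TGT restore -nop

# The SRDF restore can start while SnapVX still copies data in the background.
symrdf -sid 536 -sg FINDB_R2 -rdfg 20 restore -nop

# Wait until the RDF pairs report the Synchronized state.
until symrdf -sid 536 -sg FINDB_R2 -rdfg 20 verify -synchronized >/dev/null 2>&1
do
    sleep 60
done
echo "SRDF restore complete; proceed with database recovery on R1"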

• Restore the remote target snapshot to the R2 devices:
<Restore SnapVX remote snapshot>
# symsnapvx -sid 536 -sg FINDB_R2 -snapshot_name FINDB_R2TGT restore
<Verify the completion of the restore>
# symsnapvx -sid 536 -sg FINDB_R2 -snapshot_name FINDB_R2TGT verify -summary
<Terminate once restore completes>
# symsnapvx -sid 536 -sg FINDB_R2 -snapshot_name FINDB_R2TGT terminate -restored
• Restore the FRA disk group from the target snap if it is needed for Production database recovery:
# symsnapvx -sid 536 -sg FINDB_FRA_R2 -snapshot_name FINDB_FRA_R2TGT restore
<Verify the completion of the restore>
# symsnapvx -sid 536 -sg FINDB_FRA_R2 -snapshot_name FINDB_FRA_R2TGT verify -summary
<Terminate once restore completes>
# symsnapvx -sid 536 -sg FINDB_FRA_R2 -snapshot_name FINDB_FRA_R2TGT terminate -restored
• As soon as the restore from the snap is initiated, the SRDF restore can be started. SRDF will start performing an incremental restore from R2 to R1. The devices will show "SyncInProg" to indicate that the restore is in progress; the "Synchronized" state indicates completion of the restore.
# symrdf -sid 536 -sg FINDB_R2 -rdfg 20 restore
<Verify the completion of the restore>
# symrdf -sid 536 list
• Mount the ASM disk groups on the R1 side.
• Start up the database.

Test Case 9: Leveraging Access Control Lists for storage snapshots

Objectives: The purpose of this test case is to demonstrate self-service orchestration of Oracle database snapshots for DBAs. Symmetrix Access Control Lists are used to grant the appropriate privileges to the Oracle user to perform self-service database snapshots. Once Symmetrix Access Control is set up, Oracle DBAs can run snapshot operations as a non-root user, and all the test cases described earlier in this white paper can be executed that way.

Test execution steps:
1. Configure the Symmetrix Access Control List as described in Appendix V to create Symmetrix access control groups and pools with the Oracle database devices.
2. Grant BASE, BASECTRL, and SNAP privileges to these entities.
3. Install Solutions Enabler as the non-root user of choice that will manage the Oracle database backups.

CONCLUSION

VMAX3 provides a platform for Oracle databases that is easy to provision, manage, and operate with the application's performance needs in mind. This paper provides guidance on the latest VMAX3 features for local and remote data protection, along with commonly deployed use cases including backup, D/R, and repurposing for TEST/DEV. It also covers self-service database replication that can be leveraged by database administrators to deploy additional copies under their control.

APPENDIX I - CONFIGURING ORACLE DATABASE STORAGE GROUPS FOR REPLICATION

VMAX3 TimeFinder SnapVX and SRDF can use VMAX3 Auto-Provisioning Groups (Storage Groups) both for provisioning storage to Oracle database clusters and for creating Enginuity Consistency Assist based write-order consistent snapshots. This simplifies configuring and provisioning Oracle database storage for data protection, and allows easy deployment of Oracle 12c database recovery optimization from storage-based snapshots. This appendix shows how to provision storage for the Oracle DATA, REDO, and FRA disk groups to ensure that database recovery SLAs are achievable.

Figure 1 shows an Oracle server provisioning storage using cascaded storage groups. Cascading DATA and REDO into a parent SG allows creation of restartable copies of the database, while separating the archive logs from this group allows independent management of data protection for the archived logs. Changes to Oracle database provisioning using these storage groups are reflected in any new snapshots created afterwards, making it very easy to manage database growth. Following this provisioning model, along with the Test Cases described earlier, gives database and storage administrators proper deployment guidelines for Oracle databases on VMAX3, while providing the desired control over SLO management, availability, and recoverability.

Figure 1 Oracle database cascaded storage groups

Creating snapshots for Oracle database storage groups

Figure 2 shows how to create a snapshot for the Oracle database storage groups. A new named storage snapshot can be created, or an existing snapshot can be refreshed, using this screen. The screen also allows setting a time-to-live on the snapshot, for automatic expiration after a user-provided period in number of days. Additional snapshots from the linked target can also be created in the same way.

Figure 2 Unisphere create snapshot

Linking Oracle database snapshots for backup offload or repurposing

Figure 3 shows how to select an existing snapshot to link to a target storage group for backup offloading or repurposing. One snapshot can be linked to multiple target storage groups. By default, snapshots are linked in space-saving no-copy mode, in which the copy operation is deferred until the source tracks are written to. If a full copy is desired, the copy check box can be used. If a relink to the same target storage group is desired, select the existing target storage group option.

Figure 3 Unisphere creating linked target

Restoring an Oracle database using a storage snapshot

Figure 4 shows how to select an existing snapshot to restore a source storage group.

Figure 4 Unisphere restore from snapshot

Creating a cascaded snapshot from an existing snapshot

TimeFinder SnapVX allows creating snaps from an existing snapshot, repurposing the same point-in-time copy for other uses. Figure 5 shows how to use an existing snapshot to create additional point-in-time cascaded snapshots.

Figure 5 Unisphere creating cascaded snapshot

APPENDIX II – SRDF MODES AND TOPOLOGIES

SRDF modes

SRDF modes define SRDF replication behavior. These basic modes can be combined to create different replication topologies, described later in this appendix.

• SRDF Synchronous (SRDF/S) is used to create a solution with no data loss of committed transactions.
o In SRDF/S, each host write to an R1 device is acknowledged only after the I/O has been copied to the R2 storage system's persistent cache.
o SRDF/S makes sure that the data on the source and target devices is exactly the same.
o Host I/O latency is affected by the distance between the storage arrays.
• SRDF Asynchronous (SRDF/A) is used to create consistent replicas at unlimited distances.
o In SRDF/A, each host write to an R1 device is acknowledged immediately after it is registered with the local VMAX3 persistent cache, preventing any write response time penalty to the application.
o Writes to the R1 devices are grouped into cycles. The capture cycle is the cycle that accepts new writes to the R1 devices while it is open. The transmit cycle is a cycle that has been closed for updates and whose data is being sent from the local to the remote array. The receive cycle on the remote array receives the data from the transmit cycle, and the destage cycle on the remote array destages the data to the R2 devices. SRDF software only destages full cycles to the R2 devices.
o The default time for the capture cycle to remain open for writes is 30 seconds, though it can be set differently. In legacy mode (at least one of the arrays is not a VMAX3), the cycle time can increase during peak workloads as more data needs to be transferred over the links; after the peak, the cycle time goes back to its set time (default of 30 seconds). In multi-cycle mode (both arrays are VMAX3), the cycle time remains the same, though during peak workloads more than one cycle can be waiting on the R1 array to be transmitted.

o While the capture cycle is open, only the latest update to a given storage location is sent to the R2, saving bandwidth. This feature is called write-folding.
o Write-order fidelity is maintained between cycles: two dependent I/Os will either be in the same cycle, or the first of the I/Os will be in one cycle and the dependent I/O in the next.
o Consistency should always be enabled when protecting databases and applications with SRDF/A, to make sure the R2 devices create a consistent "restartable" replica.
o The R2 target devices maintain a consistent replica of the R1 devices, though slightly behind, depending on how fast the links can transmit the cycles and on the cycle time. For example, when cycles are received every 30 seconds at the remote storage array, its data will be 15 seconds behind production (if the transmit cycle was fully received) or 1 minute behind (if the transmit cycle was not fully received, it is discarded during failover to maintain R2 consistency).
o To limit VMAX3 cache usage by the capture cycle during peak workload times, and to avoid stopping replication due to too many outstanding I/Os, VMAX3 offers a Delta Set Extension (DSE) pool: local storage on the source side that can help buffer outstanding data to the target during peak times.
• SRDF Adaptive Copy (SRDF/ACP) mode allows bulk transfers of data between source and target devices without maintaining write-order fidelity and without write performance impact to the source devices.
o While SRDF/ACP is not valid for ongoing consistent replication, it is a good way of transferring changed data in bulk between source and target devices after replication has been suspended for an elongated period of time, accumulating many changes on the source. ACP mode can be maintained until a certain skew of leftover changes to transmit is reached; once the amount of changed data has been reduced, the SRDF mode can be changed to Sync or Async as appropriate.
o SRDF/ACP is also good for migrations (also referred to as SRDF Data Mobility), as it allows a point-in-time data push between source and target devices.

SRDF topologies

A two-site SRDF topology includes SRDF sessions in SRDF/S, SRDF/A, and/or SRDF/ACP between two storage arrays, where each RDF group can be set in a different mode and each array may contain R1 and R2 devices of different groups.

Three-site SRDF topologies include:
• Concurrent SRDF: A three-site topology in which replication takes place from site A simultaneously to site B and site C. Source R1 devices are replicated simultaneously to two different sets of R2 target devices on two different remote arrays. In this topology, one SRDF group can be set as SRDF/S, replicating to a near site, and the other as SRDF/A, replicating to a far site.
• Cascaded SRDF: A three-site topology in which replication takes place from site A to site B, and from there to site C. R1 devices in site A replicate to a set of devices in site B called R21, and site B holds the full capacity of the replicated data. If site A fails, site B can become the DR site for site C.
• SRDF/EDP: Extended Data Protection is a topology similar to cascaded SRDF, in which site A replicates to site B, and from there to site C. However, in EDP, site B doesn't hold R21 devices with real capacity. Instead, the R21 devices behave as R2 to site A and as R1 to site C, and site C has the R2 devices. This topology offers capacity and cost savings, as site B only uses cache to receive the replicated data from site A and transfer it to site C.
• SRDF/AR: SRDF Automatic Replication (AR) can be set as either a two-site or a three-site replication topology. AR uses TimeFinder to create a PiT replica of production on site A, then uses SRDF to replicate it to site B. In site B, TimeFinder is used to create a replica, which is then replicated to site C, where another TimeFinder replica is created as a gold copy. Then the process repeats itself. It offers slower replication when network bandwidth is limited, without performance overhead.
• SRDF/STAR: SRDF/STAR offers an intelligent three-site topology similar to concurrent SRDF, where site A replicates simultaneously to site B and site C. However, if site A fails, site B and site C can communicate to merge the changes and resume DR until site A can come back. For example, if the SRDF/STAR replications between sites A and B use SRDF/S and those between sites A and C use SRDF/A, then when site A fails, site B can send the remaining changes to site C for a no-data-loss solution at any distance; if Production operations continue on site C, site B can become the DR site for site C afterwards.

There are also 4-site topologies, though they are beyond the scope of this paper. For full details on SRDF modes, topologies, and other considerations, refer to the VMAX3 Family with HYPERMAX OS Product Guide.

APPENDIX III – SOLUTIONS ENABLER CLI COMMANDS FOR TIMEFINDER SNAPVX MANAGEMENT

Creation of periodic snaps

This command allows creation of periodic snaps from a database storage group. All the objects associated with that storage group are included in the snap, and a consistent point-in-time snap is created. A newer snapshot with the same name can be created at any time; the generation number is incremented, with generation 0 identifying the most recent snapshot.

# symsnapvx -sid 535 -sg FINDB_SG -name FINDB_Snap_1 establish [-ttl -delta <# of days>]

Execute Establish operation for Storage Group FINDB_SG (y/[n]) ? y

Establish operation execution is in progress for the storage group FINDB_SG. Please wait...

    Polling for Establish.......................Started.
    Polling for Establish.......................Done.
    Polling for Activate........................Started.
    Polling for Activate........................Done.

Establish operation successfully executed for the storage group FINDB_SG

Listing details of a snap

This command shows the details of a snapshot, including delta tracks, non-shared tracks, and expiration time. The command lists all the snaps for the given storage group. The difference between the delta tracks and the non-shared tracks gives the tracks shared by this snap. Similar syntax can also be used for linked target storage groups.

# symsnapvx -sid 535 -sg FINDB_SG -name FINDB_Snap_1 list -detail

Storage Group (SG) Name : FINDB_SG
SG's Symmetrix ID       : 000196700535 (Microcode Version: 5977)

                                                            Total
Sym                         Flgs                            Deltas    Non-Shared
Dev   Snapshot Name    Gen  FLRG  Snapshot Timestamp        (Tracks)  (Tracks)   Expiration Date
----- ---------------- ---- ----  ------------------------  --------  ---------  ------------------------
000BC FINDB_Snap_1        0 ....  Tue Mar 31 10:12:51 2015         3          3  Wed Apr  1 10:12:51 2015

Flgs:
  (F)ailed  : X = Failed,         . = No Failure
  (L)ink    : X = Link Exists,    . = No Link Exists
  (R)estore : X = Restore Active, . = No Restore Active
  (G)CM     : X = GCM,            . = Non-GCM

Linking the snap to a storage group

This command shows how to link a snap to a target storage group. By default, linking is done using no_copy mode.

# symsnapvx -sid 535 -sg FINDB_SG -snapshot_name FINDB_Snap_1 -lnsg FINDB_MNT link [-copy]

Execute Link operation for Storage Group FINDB_SG (y/[n]) ? y

Link operation execution is in progress for the storage group FINDB_SG. Please wait...

    Polling for Link............................Started.
    Polling for Link............................Done.

Link operation successfully executed for the storage group FINDB_SG

Verifying the current state of the snap

This command provides the current summary of the given snapshot: it shows the number of devices included in the snap and the total number of tracks protected but not yet copied, indicates whether modified target tracks exist, and provides other useful information. By default, all snaps are created with -nocopy. When the link is created using the -copy option, 100% copy is indicated by the "Total Remaining" count being reported as 0. The same command can be used to check the remaining tracks to copy during a restore operation.

# symsnapvx -sid 535 -sg FINDB_SG -snapshot_name FINDB_Snap_1 verify -summary

Storage Group (SG) Name : FINDB_SG

Snapshot State           Count
-----------------------  ------
Established                   8
EstablishInProg               0
NoSnapshot                    0
Failed                        0
-----------------------  ------
Total                         8

               Track(s)
            -----------
Total
Remaining      38469660

All devices in the group 'FINDB_SG' are in 'Established' state.

Listing linked snaps

This command lists the named linked snap and specifies the status of the copy or define operation.

# symsnapvx -sid 535 -sg FINDB_SG -snapshot_name FINDB_Snap_1 list -linked

Storage Group (SG) Name : FINDB_SG
SG's Symmetrix ID       : 000196700535 (Microcode Version: 5977)

-------------------------------------------------------------------
Sym                          Link   Flgs
Dev   Snapshot Name     Gen  Dev    FCMD  Snapshot Timestamp
----- ----------------- ---- -----  ----  ------------------------
000BC FINDB_Snap_1         0 00053  ...X  Tue Mar 31 10:12:52 2015
000BD FINDB_Snap_1         0 00054  ...X  Tue Mar 31 10:12:52 2015
000BE FINDB_Snap_1         0 00055  ...X  Tue Mar 31 10:12:52 2015
000BF FINDB_Snap_1         0 00056  ...X  Tue Mar 31 10:12:52 2015
000C0 FINDB_Snap_1         0 00057  ...X  Tue Mar 31 10:12:52 2015
000C1 FINDB_Snap_1         0 00058  ...X  Tue Mar 31 10:12:52 2015
000C2 FINDB_Snap_1         0 00059  ...X  Tue Mar 31 10:12:52 2015
000C3 FINDB_Snap_1         0 0005A  ...X  Tue Mar 31 10:12:52 2015
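When links are created with -copy, scripts often need to wait for the background copy to finish before treating the targets as a standalone copy. The loop below is an illustrative sketch only: it parses the "Remaining" line of the verify -summary output shown above, and console-output parsing is brittle and release-dependent.

#!/bin/sh
# Illustrative wait-for-copy loop based on the verify -summary output above.
while :
do
    REMAINING=$(symsnapvx -sid 535 -sg FINDB_SG -snapshot_name FINDB_Snap_1 \
                verify -summary | awk '/Remaining/ {print $2}')
    [ "${REMAINING:-0}" -eq 0 ] && break
    sleep 60
done
echo "Copy complete; linked targets now hold a full standalone copy"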

Flgs:
  (F)ailed   : F = Force Failed, X = Failed, . = No Failure
  (C)opy     : I = CopyInProg, C = Copied, D = Copied/Destaged, . = NoCopy Link
  (M)odified : X = Modified Target Data, . = Not Modified
  (D)efined  : X = All Tracks Defined, . = Define in progress

Restore from a snap

This command shows how to restore a storage group from a point-in-time snap. Once the restore operation completes, the restore session can be terminated while keeping the original point-in-time snap for subsequent use.

# symsnapvx -sid 535 -sg FINDB_SG -snapshot_name FINDB_Snap_1 restore
# symsnapvx -sid 535 -sg FINDB_SG -snapshot_name FINDB_Snap_1 verify -summary
# symsnapvx -sid 535 -sg FINDB_SG -snapshot_name FINDB_Snap_1 terminate -restored

APPENDIX IV – SOLUTIONS ENABLER CLI COMMANDS FOR SRDF MANAGEMENT

Listing local and remote VMAX SRDF adapters

This command shows how to list the existing SRDF directors, available ports, and dynamic SRDF groups. The command should be run on both the local and the remote VMAX to obtain the full listing needed for the subsequent commands.

# symcfg -sid 535 list -ra all

Symmetrix ID: 000196700535 (Local)

                    S Y M M E T R I X   R D F   D I R E C T O R S

              Remote               Local     Remote
 Ident  Port  SymmID               RA Grp    RA Grp    Dir     Port    Status
 -----  ----  -------------------  --------  --------  ------  ------  ------
 RF-1H    10  000197200056          1 (00)    1 (00)   Online  Online
          10  000197200056         10 (09)   10 (09)   Online  Online
 RF-2H    10  000197200056          1 (00)    1 (00)   Online  Online
          10  000197200056         10 (09)   10 (09)   Online  Online

Creating dynamic SRDF groups

This command shows how to create a dynamic SRDF group. Based on the output generated by the prior command, a new dynamic SRDF group can be created with the proper director ports and group numbers.

# symrdf addgrp -label FINDB -rdfg 20 -sid 535 -dir 1H:10 -remote_sid 536 -remote_dir 1E:7 -remote_rdfg 20

Execute a Dynamic RDF Addgrp operation for group 'FINDB_1' on Symm: 000196700535 (y/[n]) ? y

Successfully Added Dynamic RDF Group 'FINDB_1' for Symm: 000196700535

Creating SRDF device pairs for a storage group

This command shows how to create SRDF device pairs between the local and remote VMAX arrays, identify the R1 and R2 devices, and start synchronizing the tracks from R1 to R2 for remote protection.

# symrdf -sid 535 -sg FINDB_SG -rdfg 20 createpair -type R1 -remote_sg FINDB_R2 -establish

Execute an RDF 'Create Pair' operation for storage

...Done. Merge track tables between source and target in (0535........ Resume RDF link(s) for device(s) in (0535... = ACp off (Mirror) T(ype) : 1 = R1.---.........Started..--------.. Resume RDF link(s) for device(s) in (0535......--................ = Disabled A(daptive Copy) : D = Disk Mode.1.020)......... E = Semi-sync..020)...1........... Create RDF Pair in (0535....020).......... .. 2 = R2 36 .... Mark target device(s) in (0535.....------..........020) for full copy from source.Done. Devices: 00BC-00C3 in (0535. Devices: 00BC-00C3 in (0535. Merge track tables between source and target in (0535..Started....----.020).........020) for full copy from source.Started...........020)...Marked..020).. 0 3058068 RW RW Split … Total ------.. C = Adaptive Copy : M = Mixed D(omino) : X = Enabled.------- Track(s) 0 13141430 MB(s) 0 1642679 Legend for MODES: M(ode of Operation) : A = Async. # symrdf -sid 535 list -rdfg 20 Symmetrix ID: 000196700535 Local Device View --------------------------------------------------------------------------- STATUS MODES RDF S T A T E S Sym Sym RDF --------...020)......020).. Listing the status of SRDF group This command shows how to get information about the existing SRDF group.....R1 Inv R2 Inv ---------------------- Dev RDev Typ:G SA RA LNK MDATE Tracks Tracks Dev RDev Pair ----.. Create RDF Pair in (0535......Merged........... The RDF 'Create Pair' operation successfully executed for storage group 'FINDB_SG'...----..... .. Mark target device(s) in (0535......Done.... S = Sync........Started...... 0 3058067 RW RW Split 000BD 00069 R1:20 RW RW NR S......----.------. Please wait.....-------........Done..------------- 000BC 00068 R1:20 RW RW NR S..group 'FINDB_SG' (y/[n]) ? y An RDF 'Create Pair' operation execution is in progress for storage group 'FINDB_SG'. W = WP Mode.

.. Mark source device(s) in (0536.Started... 00C3-00C3 in (0536....020) on SA at source (R1)...... Merge track tables between source and target in (0536....... Resume RDF link(s) for device(s) in (0536...Started.......... Resume RDF link(s) for device(s) in (0536............... Suspend RDF link(s) for device(s) in (0536.............Started....Done....020) to refresh from target...........Merged. (Consistency) E(xempt): X = Enabled.........020)... ..... ...........020).. M = Mixed.Done...... Devices: 00BC-00C3 in (0536..... Devices: 00BC-00C0.Done.......Done..020).........Done..... Please wait......Done....Marked........... Merge track tables between source and target in (0536... = Disabled..... Mark Copy Invalid Tracks in (0536....... Mark source device(s) in (0536... Read/Write Enable device(s) in (0536........................ Write Disable device(s) in (0536. 37 .. Mark Copy Invalid Tracks in (0536...020). The RDF 'Incremental Restore' operation successfully initiated for storage group 'FINDB_R2'...020).020)....Done.. Devices: 0068-006B in (0536....020)..020) to refresh from target...Done..Started. Write Disable device(s) in (0536.020).............020) on RA at target (R2). # symrdf -sid 536 -sg FINDB_R2 -rdfg 20 restore Execute an RDF 'Incremental Restore' operation for storage group 'FINDB_R2' (y/[n]) ? y An RDF 'Incremental Restore' operation execution is in progress for storage group 'FINDB_R2'..020).= N/A Restoring SRDF group This command shows how to restore an SRDF group from R2 to R1....020) on SA at source (R1).......Marked....020)...........

APPENDIX V - SOLUTIONS ENABLER ARRAY BASED ACCESS CONTROL MANAGEMENT

VMAX Solutions Enabler array-based Access Control (ACL) allows DBA users to perform VMAX management from the database host, or from a host under DBA control. By setting ACLs on database devices, DBAs can perform data protection operations with better control, isolation, and security.

The components of Array Based Access Controls are:
• Access Groups: Groups that contain the unique Host ID and descriptive Host Name of non-root users. The Host ID is provided by running the 'symacl -unique' command on the appropriate host.
• Access Pools: Pools that specify the set of devices for operations.
• Access Control Entry (ACE): Entries in the Access Control Database that specify the permission level for the Access Control Groups and the pools on which they can operate.

This appendix illustrates ACL management using Unisphere, but array-based Access Control can also be performed using the Solutions Enabler Command Line interface with the general syntax: symacl -sid <SymmID> -file <file> preview | prepare | commit. With this syntax, preview verifies the syntax, prepare runs preview and checks whether the execution is possible, and commit performs the prepare operations and executes the command.

The high-level steps to set ACLs are:
1. Identify the unique IDs of the UNIVMAX and database management hosts: use SYMCLI for this step.
2. Initialize the SYMACL database.
3. Add the UNIVMAX host to AdminGrp for ACL management: use SYMCLI for this step.
4. Create an access group for the database host.
5. Create an access pool for the database devices.
6. Grant base management and data protection management privileges to the access group.
7. Install Solutions Enabler as the non-root Oracle user, and run SnapVX operations as the Oracle user on the devices to which that user has been granted access.

Note: On VMAX3, the SymmACL database comes pre-initialized with the AdminGrp group, which has to be initialized first if that has not already been done. For Unisphere access, the VMAX SymmWin procedure wizard must be used to add the "Host_Based" access ID of the Unisphere host to the SymmACL database. Once Unisphere is added to the SymmACL database, and identified as an admin host with host access to run SYMACL commands, the following commands can also be run from the Unisphere graphical user interface instead of the Solutions Enabler Command Line on a host with granted access.

Identifying the unique ID of the Unisphere for VMAX and database management hosts

Run this SYMCLI command on both the UNIVMAX and database management hosts to retrieve their unique IDs:

# symacl -sid 535 -unique

The unique id for this host is: XXXXXXXX-XXXXXXXX-XXXXXXXX

Adding the UNIVMAX host to AdminGrp for ACL management

VMAX3 contains a pre-created AdminGrp, which allows full VMAX administrative control of ACLs. Adding the UNIVMAX host to this group allows management of access groups and pools from UNIVMAX. The Storage Admin PIN can be set in an environment variable, SYMCLI_ACCESS_PIN, or entered manually. A PIN can also be set up using the SymmWin procedure wizard.

<Set the Environment variable to specify the SYMCLI access PIN>
# export SYMCLI_ACCESS_PIN=<XXXX>

<Add the host access ID to the AdminGrp>
# symacl -sid 535 commit << DELIM

add host accid <XXXXXXXX-XXXXXXXX-XXXXXXXX> name <UNIVMAX Host Name> to accgroup AdminGrp;
DELIM

Command file: (stdin)

    PREVIEW..............................................Started.
    PREVIEW..............................................Done.
    PREPARE..............................................Started.
    PREPARE..............................................Done.
    Starting COMMIT......................................
      Adding Host access id DSIB1134 to group AdminGrp...Done.

Authenticate the UNIVMAX host for ACL management

Enter the PIN to authenticate UNIVMAX for SYMACL management.

Figure 6 Unisphere enabling access control using PIN
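The group, pool, and privilege steps that follow are shown through Unisphere, but they can also be expressed as a single symacl command file, following the same pattern as the AdminGrp example above. The sketch below is illustrative only: the group and pool names are hypothetical, the device range follows this paper's examples, and the exact command-file grammar should be verified against the Solutions Enabler Array Controls documentation for your release.

<Illustrative only: create the DBA access group and pool and grant privileges>
# symacl -sid 535 commit << DELIM
create accgroup OraDbaGrp;
add host accid <XXXXXXXX-XXXXXXXX-XXXXXXXX> name <DB Host Name> to accgroup OraDbaGrp;
create accpool OraDbaPool;
add dev 000BC:000C3 to accpool OraDbaPool;
grant access=BASE,BASECTRL,SNAP to accgroup OraDbaGrp for accpool OraDbaPool;
DELIM

As with the AdminGrp example, the file can be checked first with preview and prepare before committing.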

Create an access group for the database host

Create a database host management access group using the database management host access ID.

Figure 7 Unisphere creating access group using host unique ID

Create a database device access pool

Create a database device pool by selecting the devices in FINDB_SG.

Figure 8 Unisphere creating access pool

Grant base and snap privileges

Create an Access Control Entry for the database access group and database access pool created in the earlier steps. Assign the BASE, BASECTRL, and SNAP privileges to the access pool to allow running snap operations.

Figure 9 Unisphere grant base and snap privileges to the group

Install Solutions Enabler on database hosts using a non-root user

Typically Solutions Enabler is installed as the root user, though the option of allowing a non-root user is part of the installation. The installation itself has to be performed as the root user; however, with VMAX3, Solutions Enabler can also be installed so that certain daemons start as a non-root user, and management operations can be run from that account. Once ACLs are set as described above, non-root users can run Snap operations on the access groups for their host. The following examples illustrate running SnapVX operations from the Oracle user account.

On the Application Management host, install Solutions Enabler for the Oracle user:

# ./se8020_install.sh -install
...
Install root directory of previous Installation : /home/oracle/SE
Working root directory [/usr/emc] : /home/oracle/SE
...
Do you want to run these daemons as a non-root user? [N]:Y
Please enter the user name : oracle
...

#-----------------------------------------------------------------------------

...... Polling for Establish.. # su – oracle # symcfg disc # sympd list –gb • Below is a test showing snapshot operations run as Oracle user.......2.... • To allow the Oracle user to run symcfg discover and list commands.....................0... <Running SYMSNAPVX command on a storage group with devices in access group> # symsnapvx -sid 535 -sg FINDB_SG -name DSIB0122_FINDB_Oracle establish Execute Establish operation for Storage Group FINDB_SG (y/[n]) ? y Establish operation execution is in progress for the storage group FINDB_SG.... Symmetrix access control denied the request REFERENCES • EMC VMAX3 Family with HYPERMAX OS Product Guide • Unisphere for VMAX3 Documentation Set • EMC VMAX3 TM Local Replication TechNote • Deployment Best Practices for Oracle Database with VMAX3 SLO management 43 ..Done....... permission is required to use the Solutions Enabler daemons.Started... Please wait..... Polling for Activate.Started................... Polling for Establish............................................. Please wait........Done. Update the daemon_users file.....# The following HAS BEEN INSTALLED in /home/oracle/SE via the rpm utility..... Polling for Activate.......0 RT KIT #----------------------------------------------------------------------------- Establish operation execution is in progress for the storage group FINDB_FRA........ Please wait.................... Establish operation successfully executed for the storage group FINDB_SG <Running SYMSNAPVX command on a storage group with devices NOT in access group> # symsnapvx -sid 535 -sg FINDB_FRA -name FINDB_FRA_SNAP establish Execute Establish operation for Storage Group FINDB_FRA (y/[n]) ? y Establish operation execution is in progress for the storage group FINDB_FRA...... # cd /var/symapi/config # vi daemon_users # Add entry to allow user access to base daemon oracle storapid oracle storgnsd • Test Oracle user access... #----------------------------------------------------------------------------- ITEM PRODUCT VERSION 01 EMC Solutions Enabler V8...............