PROTECTPOINT FILE SYSTEM AGENT

3
WITH VMAX – BACKUP & RECOVERY
BEST PRACTICE FOR ORACLE ON ASM

EMC® VMAX® Engineering White Paper

ABSTRACT
The integration of ProtectPoint™ File System Agent, TimeFinder® SnapVX and the
Data Domain system allows Oracle database backup and restore to take place entirely
within the integrated system. This capability not only reduces host I/O and CPU
overhead, allowing the host to focus on servicing database transactions, but also
provides higher efficiency for the backup and recovery process.

April, 2015

EMC WHITE PAPER

To learn more about how EMC products, services, and solutions can help solve your business and IT challenges, contact your local
representative or authorized reseller, visit www.emc.com, or explore and compare products in the EMC Store.

Copyright © 2015 EMC Corporation. All Rights Reserved.
EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without
notice.

The information in this publication is provided “as is.” EMC Corporation makes no representations or warranties of any kind with
respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a
particular purpose.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.
For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com.
Part Number <H14777>

2

TABLE OF CONTENTS

EXECUTIVE SUMMARY .............................................................................. 5
AUDIENCE ......................................................................................................... 5

PRODUCT OVERVIEW ................................................................................ 5
Terminology ...................................................................................................... 5
VMAX3 Product Overview ..................................................................................... 7
Data Domain Product Overview ............................................................................ 9
ProtectPoint Product Overview ........................................................................... 10

ORACLE AND PROTECTPOINT FILE SYSTEM AGENT CONSIDERATIONS ... 13
RMAN Backup integrations with Backup Media Managers ....................................... 13
RMAN and ProtectPoint File System Agent integration points with ASM ................... 14
ProtectPoint File System Agent and Oracle Real Application Clusters (RAC) ............. 14
ProtectPoint file System Agent and Remote Replications with SRDF ........................ 14
Backup to disk with SnapVX .............................................................................. 15
Command Execution Permissions for Oracle Users ................................................ 15
ProtectPoint Configuration File, Backup Devices, and ASM changes......................... 15

ORACLE BACKUP AND RECOVERY USE CASES WITH PROTECTPOINT FILE
SYSTEM AGENT ....................................................................................... 16
Oracle Database Backup/Recovery Use Cases – The Big Picture ............................. 16
Backup and Recovery Use Cases Setup ............................................................... 19
Backup Oracle ASM database using ProtectPoint .................................................. 19
Using Mount host to pick a backup-set to copy to production (4a, Recoverable) ....... 20
Using Mount host and Database Clone for logical recovery (4a, RestarTable) ........... 22
RMAN Recovery of Production Without SnapVX Copy (4b) ..................................... 23
RMAN Recovery of Production After Copy, overwriting Production data devices (4c).. 26

CONCLUSION .......................................................................................... 28
APPENDIXES........................................................................................... 29
Appendix I – ProtectPoint System Setup ............................................................. 29
Appendix II – Sample CLI Commands: SnapVX, ProtectPoint, Data Domain ............. 41
Appendix III – Providing Solutions Enabler Access to non-root Users ...................... 43
3

Appendix IV – Scripts Used in the Use Cases ....................................................... 45

REFERENCES ........................................................................................... 51

4

EXECUTIVE SUMMARY
Many applications are required to be fully operational 24x7x365 and the data for these applications continues to grow. At the same
time, their RPO and RTO requirements are becoming more stringent. As a result, there is a large gap between the requirement for
fast and efficient protection, and the ability to meet this requirement without disruption. Traditional backup is unable to meet this
requirement due to the inefficiencies of reading and writing all the data during full backups. More importantly, during recovery the
recovery process itself (‘roll forward’) cannot start until the initial image of the database is fully restored, which can take a very long
time. This has led many datacenters to use snapshots for more efficient protection. However, snapshot data is typically left within
the primary storage array together with its source, risking the loss of both in the event of data center failure or storage
unavailability. Also, often there is no strong integration between the database backup process, managed by the database
administrator, and the snapshot operations, managed by the storage administrator. Finally, it is more advantageous to store the
backups in media that does not consume primary storage and also benefits from deduplication, compression, and remote replication
such as the Data Domain system offers.
EMC ProtectPoint addresses these gaps by integrating best-in-class EMC products, the VMAX3 storage array and the Data Domain
system, making the backup and recovery process more automated, efficient, and integrated.

The integration of ProtectPoint, TimeFinder SnapVX and the Data Domain system allows Oracle ASM database backup and restore to
take place entirely within the integrated system! This capability not only reduces host I/O and CPU overhead, allowing the host to
focus on servicing database transactions, but also provides higher efficiency for the backup and recovery process. Backup efficiencies
are introduced by not requiring any read or write I/Os of the data files by the host. Instead, TimeFinder SnapVX creates a snapshot
which is a valid backup of the database, and then copies it directly to the Data Domain system, leveraging VMAX3 Federated Tier
Storage (FTS). For Oracle databases prior to 12c Hot Backup mode is used, though for only a few seconds – regardless of the size of
the database. As soon as the snapshot is created, Hot Backup mode is ended immediately. The snapshot is then incrementally copied
to the Data Domain system in the background, while database operations continue as normal. Oracle 12c offers a new feature:
Oracle Storage Snapshot Optimization that allows database backups without the need for Hot-Backup mode, leveraging storage
snapshot consistency, which is an inherent feature of SnapVX. The combination of Oracle 12c, VMAX3, and Data Domain allows the
highest backup efficiency.
Restore efficiencies are introduced in a similar way by not requiring any read or write I/Os of the data files by the host. Instead, Data
Domain places the required backup-set on its restore encapsulated devices. The restore devices can be directly mounted to a Mount
host for small-scale data retrievals, or mounted to the Production host and cataloged with RMAN so RMAN recover functionality can
be used for production-database recovery (e.g. fixing physical block corruption, missing datafiles, etc.). A third option is available,
where the restore devices content is copied by SnapVX, overwriting the native VMAX3 Production devices. This option is best used
when the Production database requires a complete restore from backup, or for a large-scale recovery that should not be performed
from the encapsulated Data Domain devices.

Note: This white paper addresses the values and best practices of ProtectPoint File System Agent v1.0 and VMAX3 where the Oracle
database resides on ASM. It does not cover ProtectPoint for Oracle databases residing on file systems.

Note: At this time ProtectPoint File System Agent requires an approved RPQ for Oracle ASM deployments. By itself, TimeFinder is
fully supported to create Oracle ASM recoverable and restartable replicas and backups, as has been the case for many years.

AUDIENCE
This white paper is intended for database and system administrators, storage administrators, and system architects who are
responsible for implementing, managing, and maintaining Oracle databases backup and recovery strategy with VMAX3 storage
systems. It is assumed that readers have some familiarity with Oracle and the EMC VMAX3 family of storage arrays, and are
interested in achieving higher database availability, performance, and ease of storage management.

PRODUCT OVERVIEW
TERMINOLOGY
The following table explains important terms used in this paper.

5

and agility for applications. ensuring high availability. either both will be included in the replica. (b) specify FAST Service Levels (SLOs) to a group of devices. rolling forward transaction log to achieve data consistency before the database can be opened. 3 VMAX TimeFinder SnapVX TimeFinder SnapVX is the latest generation in TimeFinder local replication software. Clone. RTO and RPO Recovery Time Objective (RTO) refers to the time it takes to recover a database after a failure.Term Description Oracle Automatic Storage Oracle ASM is a volume manager and a file system for Oracle database files that supports single- Management (ASM) instance Oracle Database and Oracle Real Application Clusters (Oracle RAC) configurations. where RPO=0 means no data loss of committed transactions. A Storage Group can be used to (a) present devices to host (LUN masking). When they are linked to host-addressable target devices. where capacity was consumed only for data changed after the snapshot time. In addition. performing automatic crash/instance recovery without user intervention. Recoverable state on the other hand requires a database media recovery. scalability. data mobility and data protection directly on the array. That means that for any two dependent I/Os that the application issue. A restartable database state requires all log. 6 . such as log write followed by data update. Recoverable Oracle distinguishes between a restartable and recoverable state of the database. Clone device. or only the first. Storage consistent Storage consistent replications refer to storage replications (local or remote) in which the target replications devices maintain write-order fidelity. Oracle can be simply started. data. or perform a full copy. It enables VMAX3 to embed storage infrastructure services like cloud access. file systems. Starting with Oracle 11g. Restartable vs. HYPERMAX OS delivers the ability to perform real-time and non-disruptive data services. To the Oracle database the snapshot data looks like after a host crash. This delivers new levels of data center efficiency and consolidation by reducing footprint and energy requirements. Recovery Point Objective (RPO) refers to any amount of data-loss after the recovery completes. Oracle allows database recovery from storage consistent replications without the use of hot-backup mode (details in Oracle support note: 604683. The feature has become more integrated with Oracle 12c and is called Oracle Storage Snapshot Optimization. offering higher scale and a wider feature set while maintaining the ability to emulate legacy behavior. VMAX3 Storage Group A collection of host addressable VMAX3 devices. 3 VMAX TimeFinder SnapVX Previous generations of TimeFinder referred to snapshot as a space-saving copy of the source Snapshot vs. a state from which Oracle can simply recover by performing crash/instance recovery when starting. 3 VMAX Federated Tiered Federated Tiered Storage (FTS) is a feature of VMAX3 that allows an external storage system to be Storage (FTS) connected to the VMAX3 backend and provides physical capacity that is managed by VMAX3 software. and control files to be consistent (see ‘Storage consistent replications’). Oracle Real Application Oracle Real Application Clusters (RAC) is a clustered version of Oracle Database based on a Clusters (RAC) comprehensive high-availability stack that can be used as the foundation of a database cloud system as well as a shared infrastructure. and raw devices. Storage Groups can be cascaded. With VMAX3.1). 
TimeFinder SnapVX snapshots are always space-efficient. and (c) manage grouping of devices for replications software such as SnapVX and SRDF®. the user can choose to keep the target devices space-efficient. such as the child storage groups used for setting FAST Service Level Objectives (SLOs) and the parent used for LUN masking of all the database devices to the host. Oracle ASM is Oracle’s recommended storage management solution that provides an alternative to conventional volume managers. 3 VMAX HYPERMAX OS HYPERMAX OS is the industry’s first open converged storage hypervisor and operating system. on the other hand referred to full copy of the source device. or Oracle ‘shutdown abort’.

Refer to EMC documentation and release notes to find the latest supported components.5” • 300 GB – 1.600 GB 2. VMAX3 Federated Tiered Storage Federated Tiered Storage (FTS) is a feature of VMAX3 that allows external storage to be connected to the VMAX3 backend and provide physical capacity that is managed by VMAX3 software. Contact your EMC representative for more details. and a new Dynamic Virtual Matrix dual InfiniBand fabric interconnect that creates an extremely fast internal memory-to-memory and data-copy fabric. and incorporates a Dynamic Virtual Matrix interface that connects and shares resources across all VMAX3 engines. Figure 1 shows possible VMAX3 components.5" drives. and access patterns continue to change over time. where HYPERMAX OS will initialize them and create native VMAX3 device structures. VMAX3 new hardware architecture comes with more CPU power. and data migration.760 drives • SSD Flash drives 200/400/800/1. The external storage devices can be encapsulated by VMAX3. 7 .5”/3. The newest additions to the EMC VMAX3 family. including cache optimizations.5" and 3. or presented as raw disks to VMAX3. multi-tier storage that excels in providing FAST 1 (Fully Automated Storage Tiering) enabled performance management based on Service Level Objectives (SLO). 2 Additional drive types and capacities may be available. While VMAX3 can ship as an all-flash array with the combination of EFD (Enterprise Flash Drives) and large persistent cache that accelerates both writes and reads even further. Figure 1 VMAX3 storage array 2 • 1 – 8 VMAX3 Engines • Up to 4 PB usable capacity • Up to 256 FC host ports • Up to 16 TB global memory (mirrored) • Up to 384 Cores.7 GHz Intel Xeon E5-2697-v2 • Up to 5.2 TB 10K RPM SAS drives 2. modular storage. high capacity flash and hard disk drives in dense enclosures for both 2. It offers dramatic increases in floor tile density. allowing the storage array to seamlessly grow from an entry-level configuration into the world’s largest storage array. even as new data is added.5” • 2 TB/4 TB SAS 7.5”/3.2K RPM 3. 200K and 400K. and supports both block and file (eNAS).VMAX3 PRODUCT OVERVIEW Introduction to VMAX3 The EMC VMAX3 family of storage arrays is built on the strategy of simple.5” To learn more about VMAX3 and FAST best practices with Oracle databases refer to the white paper: Deployment best practice for Oracle database with VMAX3 Service Level Object Management. and therefore their data preserved and independent of VMAX3 specific structures. deliver the latest in Tier-1 scale-out multi-controller architecture with consolidation and efficiency for the enterprise. The VMAX3 family of storage arrays comes pre-configured from factory to simplify deployment at customer sites and minimize time to first I/O. data management. 1 Fully Automated Storage Tiering (FAST) allows VMAX3 storage to automatically and dynamically manage performance service level goals across the available storage resources to meet the application I/O demand. Attaching external storage to a VMAX3 enables the use of physical disk capacity on a storage system that is not a VMAX3 array. local and remote replications. while gaining access to VMAX3 features.5”/3. larger persistent cache. 2. Each array uses Virtual Provisioning to allow the user easy and quick storage provisioning.5” • 300 GB 15K RPM SAS drives 2. intelligent. it can also ship as hybrid. It provides the highest levels of performance and availability featuring new hardware and software capabilities. VMAX 100K.

VMAX3 SnapVX Local Replication Overview EMC TimeFinder SnapVX software delivers instant and storage-consistent point-in-time replicas of host devices that can be used for purposes such as the creation of gold copies.FTS is implemented entirely within HYPERMAX OS and does not require any additional hardware besides the VMAX3 and the external storage.g. They only relate to a group of source devices and cannot be otherwise accessed directly. SnapVX snapshot data resides in the same Storage Resource Pool (SRP) as the source devices. Multiple snapshots of the same data utilize both storage and memory savings by pointing to the same location and consuming very little metadata. • SnapVX snapshots are always consistent. a snapshot name is provided. This allows easy creation of restartable database copies. The snapshot time is saved with the snapshot and can be listed. but not to the snapshots themselves. or to snapshot linked targets. data that is external to the database (e. patch testing. The generation is incremented with each new snapshot. and an optional expiration date. When a snapshot is established. • SnapVX provides the ability to create either space-efficient replicas or full-copy clones when linking snapshots to target devices. reporting and test/dev environments. ‘composite-group’ (CG). • SnapVX snapshots themselves are always space-efficient as they are simply a set of pointers pointing to the data source when it is unmodified. VMAX3 and Data Domain become an integrated system in which TimeFinder SnapVX local replication technology operates in coordination with Data Domain using ProtectPoint File System Agent software. or simply specifying the devices. Connectivity with the external array is established using fiber channel ports. By leveraging FTS. If “-copy” option is not used. • Linked-target devices cannot ‘restore’ any changes directly to the source devices. a ‘storage group’ (SG). In this way. a new snapshot can be taken from the target devices and linked back to the original source devices. image files). See Appendixes for a list of basic TimeFinder SnapVX operations. • FAST Service Levels apply to either the source devices. Instead. SnapVX allows unlimited number of cascaded snapshots. • Each source device can have up to 256 snapshots that can be linked to up to 1024 targets. the assumption is that the external storage provides storage protection and therefore VMAX3 will not add its own RAID to the external storage devices. saving capacity and resources by providing space-efficient replicas. or preservation of the primary storage devices. and so on. Some of the main SnapVX capabilities related to native snapshots (emulation mode for legacy behavior is not covered): • With SnapVX. This group is defined by using either a text file specifying the list of devices. or linked to another set of target devices which can be made host-accessible. backup and recovery. or any other process that requires parallel access to. or to the original version of the data when the source is modified. scalability. snapshots are natively targetless. and ease-of-use features. data warehouse refreshes. The recommended way is to use a storage group. and acquire an ‘Optimized’ FAST Service Level Objective (SLO) by default. Note: While the external storage presented via FTS is managed by VMAX3 HYPERMAX OS and benefits from many of the VMAX3 features and capabilities. That means that snapshot creation always maintains write-order fidelity. The replicated devices can contain the database data. 
the target devices provide the exact snapshot point-in-time data only until the link relationship is terminated. providing a powerful Oracle database backup and recovery solution. snapshots can be restored back to the source devices. 8 . message queues. Instead. • Snapshots are taken using the establish command. This will make the target devices a stand-alone copy. • Snapshot operations are performed on a group of devices. a ‘device-group’ (DG). Oracle home directories. Snapshot operations such as establish and restore are also consistent – that means that the operation either succeeds or fails for all the devices as a unit. even if the snapshot name remains the same. Snapshots also get a ‘generation’ number (starting with 0). VMAX3 TimeFinder SnapVX combines the best aspects of previous TimeFinder offerings and adds new functionality. Use the “-copy” option to copy the full snapshot point-in-time data to the target devices during link. or Oracle recoverable backup copies based on Oracle Storage Snapshot Optimization.

it is important to mention a few basic Data Domain components that are used in this integration. and other resiliency features transparent to the application. After the incremental copy completes. the Data Domain system can be used with a different VMAX3 storage array if necessary. However. Data on disk is available online and onsite for longer retention periods. Understanding Data Domain device encapsulation and SnapVX relationship With Data Domain being the external storage behind FTS. Data Domain uses it to create a static-image for each. overwriting them. DATA DOMAIN PRODUCT OVERVIEW Introduction to Data Domain Data Domain deduplication storage systems offer a cost-effective alternative to tape that allows users to enjoy the retention and recovery benefits of inline deduplication. Figure 2 EMC Data Domain deduplication storage systems Data Domain systems reduce the amount of disk storage needed to retain and protect data by 10 to 30 times. continuous fault detection and self-healing. All Data Domain systems are built as the data store of last resort. benefiting from deduplication. In that way. and therefore SnapVX will copy the backup data to them. Since the backup and restore devices are overwritten with each new backup or restore respectively. • A static-image is created for each backup device within the Data Domain system once the devices received all their data from SnapVX.For more information on SnapVX refer to the TechNote: EMC VMAX3 TM Local Replication. and restore devices. and restores become fast and reliable. which is enabled by the EMC Data Domain Data Invulnerability Architecture – end-to-end data verification. The ability to encapsulate Data Domain devices as VMAX3 devices allows TimeFinder SnapVX to operate on them. and together the static-images create a backup-set. With the industry’s fastest deduplication storage controller. and remote replications capabilities. Understanding Data Domain backup and restore devices The VMAX3 integration with Data Domain uses two identical sets of encapsulated devices: backup devices. the Data Domain devices are encapsulated to preserve their data structures. 9 . and the EMC Solutions Enabler Product Guides. • The encapsulated restore devices are used for database restore operations. They can be mounted directly to Production or a Mount host. • The encapsulated backup devices are used as a backup target. it is the static-images that are kept as distinct backups in the Data Domain system and presented via ProtectPoint catalog. Understanding Data Domain static-images and backup-sets A full overview of Data Domain system is beyond the scope of this paper. Static-images benefit from Data Domain File System capabilities of deduplication and compression and can add metadata to describe their content. or their data can be copied with SnapVX link-copy to VMAX3 native devices. Data Domain systems allow more backups to complete faster while putting less pressure on limited backup windows. Storing only unique data on disk also means that data can be cost-effectively replicated over existing networks to remote sites for DR. as well as network-efficient replication over the wide area network (WAN) for disaster recovery (DR). compression.

which is also shown in Figure 3. Maximum of 1024 device groups per pool. VTL.com/data-protection/data-domain/index.emc. Note: Data Domain can replicate the backups to another Data Domain system by using Data Domain Replicator (separately licensed). Table 1 depicts the basic Data Domain vdisk block device object hierarchy. and now also a block device service that enables it to expose devices as FC targets. Overall. more efficient backup while eliminating the backup impact on application servers.4 and above.• Static images with matching backup id are called backup-sets. ProtectPoint reduces cost and complexity by eliminating traditional backup applications while still providing the benefits of native backups. 10 . it is recommended that all matching Data Domain backup and restore devices belong to the same Data Domain device group. where future releases of Data Domain OS may offer additional capabilities. By integrating industry leading primary storage and protection storage. NFS. Maximum of 32 pools with DD OS 5. Some key values of ProtectPoint are: • Non-intrusive data protection: Eliminate backup impact on the application by removing the server from the data path and minimizing the traditional backup window. Maximum of 2048 devices Figure 3 Data Domain block device hierarchy Note: When preparing Data Domain devices. ProtectPoint provides the performance of snapshots with the functionality of backups. Table 1 Data Domain block device hierarchy Name Description Pool Similar to a ‘Department’ level. PROTECTPOINT PRODUCT OVERVIEW ProtectPoint Product Overview EMC ProtectPoint provides faster.htm Data Domain Block Device Service Data Domain supports a variety of protocols. ProtectPoint maintains and lists the backup-sets with their metadata to help selecting the appropriate backup-set to restore. Device Group Similar to the ‘Application’ level. Device Host device equivalent. The block device service in Data Domain is called vdisk and allows the creation of backup and restore Data Domain devices that can be encapsulated by VMAX3 FTS and used by ProtectPoint. keep in mind that the replication granularity is currently for a single backup-set. While Data Domain file system structure is not covered in this paper. For more information on Data Domain visit: http://www. including CIFS.

This empowers application owners with the control they desire without additional cost and complexity. ProtectPoint File System Agent Product Overview EMC ProtectPoint is a software product that takes advantage of the integration between best-in-class EMC products. By eliminating traditional backup applications.• Fast backup and instant access: Meet stringent application protection SLAs with ProtectPoint by backup up directly from primary storage to a Data Domain system. The following discussion is focused on ProtectPoint File System Agent integration with VMAX3 for Oracle databases deployed on ASM. • Simple and efficient: ProtectPoint eliminates the complexity of traditional backup and introduces unparalleled efficiency by minimizing infrastructure requirements. and self-healing ensure that backup data is accurately stored. ProtectPoint eliminates backup impact on the local area network (LAN) and minimizes storage area network (SAN) bandwidth requirements by sending only unique data from primary storage to protection storage. and recoverable throughout its lifecycle on a Data Domain system. It can also copy individual files to Data Domain via the network when necessary. recovery and replication directly from their native application utilities. VMAX3 with FTS and Data Domain. continuous fault detection. ProtectPoint includes both Application Agent and File System Agent. • Reliable protection: Since ProtectPoint backs up data to a Data Domain system it is protected with the Data Domain Data Invulnerability Architecture that provides the industry’s best defense against data integrity issues. Inline write-and-read verification. Figure 4 ProtectPoint Underlying Technology Figure 4 illustrates ProtectPoint underlying technology components. • Application owner control: ProtectPoint provides application owners and database administrators with complete control of their own backup. By protecting all data on the Data Domain system. The ProtectPoint Agent includes two components: the ProtectPoint File System Agent. such as RMAN. Data Domain then stores the full backup image with compression and deduplication on its file system. The image shows the storage snapshot of the Production devices. retained. ProtectPoint enables faster. VMAX3 ProtectPoint benefits for Oracle databases deployed on ASM: 11 . When the Application Agent executes the backup. it is fully managed by the application. ProtectPoint reduces backup storage requirements by 10 to 30 times. and the Application Agent. Since TimeFinder tracks changed data it only sends the changes to Data Domain. which always represents a full backup (point-in-time copy of Production’s data). The snapshot data is sent incrementally over the FC links from the VMAX3 storage array directly to the Data Domain system as a full backup. ProtectPoint File System Agent is a command line tool available when the full application integration is not (for example. leveraging RMAN Proxy Copy functionality yet utilizing the power of storage snapshots. to provide a backup offload optimization and automation. more frequent backup for enterprise applications. as RMAN does not support Proxy Copy for Oracle ASM).

+GRID) that does not contain any application data. Alternatively they can be mounted to the Production host and cataloged with RMAN so the full scale of RMAN functionality can be used for production database recovery. leveraging VMAX3 Federated Tier Storage (FTS). o For Oracle databases starting with release 12c Oracle offers a new feature: Oracle Storage Snapshot Optimization that allows database backups without the need for Hot-Backup mode. It can also manage Data Domain replication of backups to a remote Data Domain system for additional protection. ProtectPoint File System Agent Components ProtectPoint. Hot Backup mode is ended immediately. which means that no additional time will be spent during recovery to apply incremental backups. • ProtectPoint database backup efficiencies: o Backup does not require host resources or any read or write I/Os of the data files by the host. • ProtectPoint database restore efficiencies: o Restoring the data from Data Domain does not require any read or write I/Os of the data files by the host. It maintains a catalog of the backups. as described in Figure 4. • Production host (or hosts in the case of Oracle RAC) with VMAX3 native devices hosting the Oracle database. Therefore. Data Domain places the required backup-set on its restore encapsulated devices in seconds.• ProtectPoint allows Oracle databases backup and restore operations to take place entirely within the integrated system of VMAX3 and Data Domain. Instead. A third option exists where the encapsulated restore devices are used by SnapVX to copy their data over the native Production VMAX3 devices (if the Production database requires a complete restore from backup). TimeFinder SnapVX creates an internal point-in-time consistent snapshot of the database. Note: When Oracle RAC is used it is recommended to use a dedicated ASM disk group for Grid (e. which is an inherent feature of SnapVX. In this way. Instead. Also. o ProtectPoint utilizes Data Domain deduplication and optional compression to reduce the size of backup-sets. +FRA). The ASM disk groups from the backup can simply be mounted to the ready cluster and used. 3 Oracle 11gR2 also allows database recovery from a storage consistent replica that was taken without hot-backup mode. EMC ProtectPoint: Solutions Guide and ProtectPoint: Implementation Guide. o TimeFinder SnapVX snapshots are incremental.1. if the Mount host or another remote server that will perform the restore are clustered. where Hot Backup mode is still needed. As soon as the snapshot is created. For more information on ProtectPoint see: EMC ProtectPoint: A Detailed Review. it is only required for a few seconds – regardless of the size of the database. is based on the following key components. o The encapsulated restore devices can be mounted to a Mount host for surgical recovery and small-scale data retrievals.g. o Although only changed data is copied to Data Domain. they will have their own dedicated +GRID ASM disk group set up ahead of time. and then copies it directly to the Data Domain system. Oracle is leveraging storage snapshot consistency.g. during restore. 12 . etc. After initial full-copy during system setup. This backup-offload capability reduces host I/O overhead and CPU utilization. and FRA (archive logs). each snapshot (backup) is a full point-in-time image of the production data files. each in its own Oracle ASM disk group (e. you can restore only data files without overwriting the Production’s redo logs. For example. However. 
Oracle backup procedures require the archive logs to be copied later than the data files. Instead. a prior scan of the data files is needed. +DATA. all future copies (backups) between source database devices to the Data Domain backup devices will only copy changed data. o A minimum of 3 sets of database devices should be defined for maximum flexibility: data/control files. if the recovery is not full. if the redo logs are still available on Production. redo logs. when deployed with VMAX3 and Oracle ASM. For more details refer to Oracle support note: 604683. and archive log files allows ProtectPoint to backup and restore only the appropriate file type at the appropriate time. and allows the host to focus its resources on servicing database transactions. redo. Data Domain backup-sets are always full (level 0). o The separation of data. o For Oracle databases prior to 12c 3. +REDO.

it provides the media manager software a list of database files to backup or restore via the proxy-copy API. and RMAN Catalog. RMAN is not aware of which host a backup was performed from. it allows the media manager software to intercept the writes and add its own optimizations. and optionally Unisphere for VMAX3 software is installed. The management host does not require its own VMAX3 storage devices. The media manager 13 . host operating systems and more. RMAN can also maintain a Catalog with a repository of backups it either performed on its own or were performed outside of RMAN. MML based backup can be executed from the Production host. Note: Refer to ProtectPoint release notes for details on supported Data Domain systems. • An optional Mount host. but cataloged with RMAN (so RMAN can utilize them during database recovery). Besides performing the backup or recovery operations. database block verification. RMAN can restore to the Production host. and can leverage RMAN Block Change Tracking during incremental backups. and restore devices. For that reason. or a database clone. RMAN backups can be taken directly from the Production host. or to which host the database is restored. RMAN can be used to backup Oracle databases on file systems or ASM. In this case the encapsulated restore devices can be mounted to the Mount host to review the backup content before copying it over to the Production host. where ProtectPoint. Each of them is identical to the database production devices it will backup or restore. As with normal RMAN backup. Data Domain Boost for Oracle RMAN is a deployment of this backup model. Some of the features RMAN provides are: • Database block integrity validation • Granular restores of tablespaces and data files • Recovery from database block corruptions • Backup efficiency with Block Change Tracking (bitmap) for incremental backups RMAN backup and recovery operations rely on the database ID and database file location (on file systems or ASM disk groups). 3 • VMAX storage array with Federated Tier Storage (FTS) and encapsulated Data Domain backup and restore device sets. Solutions Enabler. and writing it back to the backup media. typically from the Production host. Note: For I/O intensive recoveries it is recommended to first copy the backup-set to native VMAX3 devices. or from a physical standby database. backup and restore times increase as database size grows. This allows DBAs flexibility when planning their backup and recovery strategies. • RMAN proxy-copy backup is an integration where RMAN initiates the backup or restore. Mount host. • RMAN SBT based backup (also known as MML) is an RMAN integration with a 3rd party media manager such as Data Domain. standby-database. restore and recovery operations of the Oracle database. Mount host is used when the DBA prefers to mount the backup not on the production environment. • Data Domain system leveraging vdisk service and with two identical sets of devices: backup devices. where like normal RMAN backup. However. RMAN typically uses one of the following integrations with 3rd party backup media managers such as Data Domain:: • RMAN disk backup is fully executed by RMAN. Instead. however backup and restore times are still affected by the database size. RMAN first reads the backup data from primary storage to the host. ORACLE AND PROTECTPOINT FILE SYSTEM AGENT CONSIDERATIONS RMAN BACKUP INTEGRATIONS WITH BACKUP MEDIA MANAGERS Oracle Recovery Manager (RMAN) is used by many Oracle DBAs to perform comprehensive backup. 
or a physical standby database. o The backup and restore Data Domain devices are created in Data Domain and exposed as VMAX3 encapsulated devices via FTS. from a Mount host where a TimeFinder database replica is mounted (such as when using SnapVX to create database recoverable replicas). a standby-database. o Not shown is a remote Data Domain system if Data Domain Replicator is used to replicate backups to another system. but does not perform it. or a database clone such as SnapVX can create. That also means that regardless of where the backup was taken.• Management host. MML based backups are fully integrated with RMAN. As a result. though it requires a few tiny devices called gatekeepers for communication with the VMAX3 storage array. or for extracting small data sets (‘logical’ recovery). Normal RMAN backup is based on reading all the data that requires backup from primary storage to the host.

With TimeFinder SnapVX the backup (snapshot) takes seconds. database recovery. the backup or restore are initiated from a shell script. RMAN proxy copy backup is not covered here. and others. As soon as it is available (possibly in seconds when mounting the encapsulated restore devices to Production) regardless of database size. A proxy-copy backup utilizing storage snapshots increases backup efficiencies as backup time no longer depends on the database size. Since this paper is focused on ASM deployment. therefore this model cannot be used for ASM until Oracle does provide support. The reason is that RAC requires all database files to be shared across all nodes. and/or +FRA disk groups from the backup as necessary. though RMAN performs the actual database recovery after it catalogs the backup-set. 14 . Therefore. • MML based backup with Data Domain boost for Oracle RMAN. cascaded. Data Domain offers the following types of integrations with Oracle RMAN: • RMAN disk backup where Data Domain is the backup target. each backup is full (level 0). consider performing the ProtectPoint backup from an SRDF target. This initial disk group is typically named +GRID. Oracle deploys the first ASM disk group when installing Grid Infrastructure. After this the DBA can use the breadth of RMAN recover commands and functionality to perform database recovery procedures. and the +FRA will include the archive logs from all the nodes. such as data block recovery. Note that starting with Oracle database release 11gR2. and if the replica/backup is mounted to a different cluster. data file recovery. Simply mount the +DATA. especially on x86_64 and Linux. this first ASM disk group remains separate from any database data. Therefore. The encapsulated restore devices only contain a single specific set of static-images once they are utilized. but not all the prior backup content that is saved within Data Domain as static-images. The restore is also initiated by a shell script (and not RMAN) which is used to bring back the right backup-set from Data Domain. including Synchronous. RMAN AND PROTECTPOINT FILE SYSTEM AGENT INTEGRATION POINTS WITH ASM Since RMAN does not support proxy-copy with ASM. potentially using storage snapshots. In summary. • A partial integration with RMAN for backup offload when the database is deployed on ASM. whether the storage snapshots are for a single database server or a clustered database. For more information about Oracle and SRDF see: EMC Symmetrix VMAX using EMC SRDF/TimeFinder and Oracle. even if data is later copied incrementally in the background. It is initiated and managed by RMAN. consider using ProtectPoint with Data Domain Replicator. +REDO. the +DATA disk group will include the shared data files (and control files). PROTECTPOINT FILE SYSTEM AGENT AND ORACLE REAL APPLICATION CLUSTERS (RAC) Oracle Real Application Clusters (RAC) offers improved high-availability and load balancing and is very popular. In this case. To execute the backup remotely. further reducing recovery time. three-site (SRDF/STAR). some of the RMAN functionality cannot be leveraged. EMC recommends that when using storage replications (local or remote). Also. software is responsible for the actual data copy. From backup and replication perspective for storage snapshots. PROTECTPOINT FILE SYSTEM AGENT AND REMOTE REPLICATIONS WITH SRDF SRDF provides a robust set of remote replication capabilities between VMAX and VMAX3 storage arrays. The reason is that the encapsulated backup devices only contain the latest backup. 
then the cluster can be pre-installed ahead of time. the +REDO disk group will include all the redo logs files for all the nodes. Asynchronous. RMAN can be used to catalogs as image copies. This model only works when the database is on file systems. Note: Oracle does not support RMAN proxy-copy backups with ASM. Using a set of ASM disk groups similar to those provided in the examples in this paper. there is no need to replicate the encapsulated backup or restore devices. it makes no difference whether the database is clustered or not. When using SRDF to replicate the Production database remotely. To replicate backups taken locally to a remote Data Domain system. Since the cluster does not contain any user data it does not need to be part of the replications. and more. the backup is initiated by a shell script (and not RMAN). the replica will include all database files.

If. To easily execute the integrated solution described in this white paper. it is critical to make sure it is up to date and correct. test/dev environments. In a similar way. Again. having a gold copy nearby provides increased availability. as database capacities continue to increase. Therefore. This is fine for logical recovery or when the old backup is cataloged with RMAN. it is recommended to perform a new database backup after making ASM changes. renaming it appropriately. the ASM changes will have to be redone. AND ASM CHANGES ProtectPoint relies on a configuration file that contains vital information such as the Data Domain systems information (local. the ASM changes will have to be re-done. and regardless of the size of the database. they are taken within seconds. and many other use cases. the ability to execute specific commands in Oracle. ProtectPoint. patch tests. Oracle also allows setting up a backup user and only providing them a specific set of authorizations appropriate for their task. Since the list of devices is hard-coded into the configuration file and is critical to the validity of the backup (it must include the correct ASM devices for the different ASM disk groups). protection.BACKUP TO DISK WITH SNAPVX You can use TimeFinder SnapVX to easily and quickly create local database backups. • Use SUDO. leveraging VMAX Access Controls (ACLs). Production ASM disk groups remain intact. 15 . For more information about Oracle and TimeFinder see: EMC Symmetrix VMAX using EMC SRDF/TimeFinder and Oracle. It is highly recommended to perform a new backup after making ASM changes. or multipathing commands). allowing the DBA to execute specific commands for the purpose of their backup (possibly in combination with Access Controls). other than ‘sysadmin’ that can manage the Data Domain system appropriately. PROTECTPOINT CONFIGURATION FILE. the device IDs should also be added to the appropriate ProtectPoint configuration file along with their new matching backup and restore devices. and a different host user account may be used to setup and manage Data Domain system. TimeFinder is also integrated with SRDF to offload backups to a remote site or restore from a remote site. Since the DBA may use any of the older backups. a storage admin host user account is used to perform storage management operations (such as TimeFinder SnapVX. Restoring snapshots back to their source devices is also very quick. the old backup is used to overwrite Production devices. and peace of mind. Also in the configuration file is a list of the native VMAX3 devices and Data Domain backup and restore devices. COMMAND EXECUTION PERMISSIONS FOR ORACLE USERS Typically. Solutions Enabler. TimeFinder SnapVX replicas can create valid database backup images or database clones. however. an Oracle host user account is used to execute Oracle RMAN or SQL commands. the old backup is used to overwrite Production devices. and Data Domain is required. gold copies. A ProtectPoint restore from an older backup will have the older ASM disk group structure. • A less common case is when ASM devices are removed from the ASM disk groups. BACKUP DEVICES. Data Domain can create additional user accounts. It can be used to restore the older backups and therefore enough backup and restore devices should be maintained for them. reporting instances. however. An example for setting up VMAX3 Access Controls is provided in Appendix III. 
There are two ways to address this: • Allow the database backup operator (commonly a DBA) controlled access to commands in Solutions Enabler. if the old backup is used for logical recovery or by RMAN after it is cataloged. and remote if used) and the location of the ProtectPoint logs and Catalog. • When changes are made to ASM such as adding devices to the ASM disk group. As in the previous point. This type of role and security segregation is common and often helpful in large organizations where each group manages their respective infrastructure with a high level of expertise. If. it is recommended to keep the old ProtectPoint configuration file. SnapVX snapshots can be taken at any time.

regardless of database size.ORACLE BACKUP AND RECOVERY USE CASES WITH PROTECTPOINT FILE SYSTEM AGENT ORACLE DATABASE BACKUP/RECOVERY USE CASES – THE BIG PICTURE This section provides a high level overview of Oracle ASM database backup and recovery use cases with ProtectPoint File System Agent integration. 16 . Following the overview. • Step (2): Perform two ProtectPoint backup create with an appropriate description: one using database configuration file. o SnapVX will create a consistent snapshot of +FRA ASM disk group (archive logs). each use case is described in detail. • Step (1b): Perform ProtectPoint snapshot create using fra configuration file. • For Oracle databases prior to 12c: End hot-backup mode for the database. or prelude to 4c) 3) Logical recovery on Mount host using the backup as a database-clone (4a restartable) 4) RMAN recovery of Production data files without overwriting them with a backup (4b) 5) RMAN recovery after overwriting Production data files with a backup (4c) Figure 5 ProtectPoint Workflow Backup Oracle ASM database using ProtectPoint File System Agent • For Oracle databases prior to 12c: Begin hot-backup mode for the database. • Step (1a): Perform ProtectPoint snapshot create using database configuration file o SnapVX will create a consistent snapshot of both +DATA and +REDO ASM disk groups together within seconds. as described in Figure 4. The following use cases are shown: 1) Backup Oracle ASM database using ProtectPoint 2) Read-only inspection of the backup on Mount host (4a recoverable. • In Oracle database: switch and archive the current logs. Note: Remote replications of backups using a secondary Data Domain system is not described in the use cases but can be used as part of the solution. the other using fra configuration file.

However. However. 17 . If there are no plans to copy the backup-set to Production. No roll-forward recovery is allowed on that database clone. open the database read-write with resetlogs instead of read-only. • Review the data. Oracle performs crash-recovery. only the data files and occasionally archive logs will be used to recover production. when searching for a valid version of a database block after it was found corrupted). +REDO. then open the database READ ONLY. after the roll-forward recovery. ProtectPoint will create a new backup-set. If found suitable. and SnapVX uses storage consistency by default. o Data Domain places the content of the backup-set on the encapsulated restore devices. or prelude to 4c) • Purpose: Use this method to inspect backup-sets quickly. the other using the fra configuration file and matching backup-id from the same backup time. creating a database clone with all committed transactions up to the time of the snapshot. the time it takes Oracle to perform crash-recovery depends on the amount of transactions since the last checkpoint. control. and +FRA disk group is not required. as shown by ProtectPoint backup show list. dismount the ASM disk groups. • Add the +DATA and +FRA encapsulated restore devices to the Mount host masking view. o When the DBA is not sure which +DATA backup-set to use for SnapVX link-copy that overwrites Production data devices (4c). o It is possible to use this method for ‘logical’ recovery. • Step (3): Perform two ProtectPoint backup restores. At the end of this process Data Domain will add two backup-sets: the first with a consistent image of the data. and log files). prior to performing a SnapVX copy overwriting Production (4c). as no SnapVX copy is performed and no archives are applied. leveraging the latest archive and redo logs from production. it can be copied with SnapVX link-copy to Production. • Add the +DATA and +REDO encapsulated restore devices to the Mount host masking view. so they become visible to the host (+REDO ASM disk group is not used while the database is not opened read/write) • Step (4a): Mount the 3 ASM disk groups on the Mount host. • Perform minimal database media recovery using the available archive logs in the +FRA. and redo log files. and the second with the minimum set of archive logs required to recover the data files. • Step (3): Perform ProtectPoint backup restore using the database configuration file and a backup-id. It goes through minimal recovery (just enough so it can be opened read-only and inspected). control. so they can become visible to the host. o Because +DATA and +LOG snapshot was taken in a single operation. bring another backup-set and repeat. if the Production database is truly lost. Typically. the result is a consistent and restartable database replica (the snapshot must include all data. o The time to access this database clone is relatively fast. Instead. without the need to apply any archive logs. use the Mount host to quickly browse through the backup-sets (for example. The DBA can open the database on the Mount host normally for read/write. simply start the database. one using the database configuration file and a matching backup-id. • Do not perform database media recovery. dismount the ASM disk groups and perform SnapVX link-copy to Production (4c). these two backup-sets are self-contained and are enough to restore the entire database. Read-only inspection of the backup on Mount host (4a recoverable. o Data Domain places the content of the backup-sets (+DATA. 
o Data Domain places the content of the backup-set (+DATA, +REDO, and +FRA) on the encapsulated restore devices.
o The database and FRA backups are mounted on the Mount host. If appropriate, perform database media recovery before picking the backup-set to copy to Production.
o SnapVX link-copy will send incremental changes to the Data Domain encapsulated backup devices.

Logical recovery on Mount host using the backup as database-clone (4a restartable)
• Purpose: Use this method if the DBA needs to retrieve a small amount of data and wants to use one of the backup-sets as a database ‘clone’.
• Step (3): Perform ProtectPoint backup restore using the database configuration file and a backup-id.
o Data Domain places the content of the backup-set on the encapsulated restore devices.
• Step (4a): Mount the 2 ASM disk groups on the Mount host.
• Review the data and extract the appropriate records using Oracle Data Pump, Database Links, or other methods to perform logical recovery of the Production data files.

RMAN recovery of Production data files without overwriting them with a backup (4b)
• Purpose: Use this method to recover the existing Production data files.
o This recovery method is best utilized for small corruptions, such as database block corruptions, a few missing data files, etc. It allows the recovery to start within minutes, as the encapsulated restore devices are mounted directly to the Production host and not copied first to native VMAX3 devices. If the Production host sustained a complete loss of its data files, follow use case (4c).
• Step (3): Perform ProtectPoint backup restore using the database configuration file and a backup-id.
o Data Domain will place the content of the backup-set on the matching encapsulated restore devices.
• Step (4b): Rename the encapsulated ASM disk group to +RESTORED_DATA, mount it to ASM and catalog it with RMAN.
Note: If ASMlib is used the Oracle ASM disks will need to be renamed as well. Similarly, if another volume manager is used the volumes should be renamed so they do not conflict with the existing Production volume groups of the volume manager.
o The ASM +DATA disk group on the encapsulated restore devices is renamed to +RESTORED_DATA and then cataloged with RMAN.
o After the catalog operation RMAN can use this backup-set for normal RMAN recovery operations on Production, allowing the DBA to use normal RMAN recover commands to recover the Production database.
• If RMAN requires missing archive logs, repeat a similar process for older +FRA backup-sets:
o ProtectPoint backup restore using the fra configuration file and a backup-id.
o Add the +FRA encapsulated restore devices to the Production host masking view (only needed the first time).
o Rename the encapsulated +FRA ASM disk group to +RESTORED_FRA, mount it to ASM and use its archive logs.
o If more than one +FRA backup-set is required, dismount the +RESTORED_FRA ASM disk group and bring in the next, repeating this step as necessary.

RMAN recovery after overwriting Production data files with a backup (4c)
• Purpose: Use this method if it is clear that the Production host is completely lost and it is better to overwrite its data files with the backup content and roll it forward rather than perform targeted recovery as described in use case (4b). If the DBA is not sure which +DATA backup-set they need, consider browsing the content of the backup-sets first by using the Mount host, as described in use case (4a, prelude to 4c). However, database recovery will only start after the copy completes.
• Step (3): Perform ProtectPoint backup restore using the database configuration file and a backup-id.
o Data Domain places the content of the backup-set on the encapsulated restore devices.
• Dismount the Production +DATA ASM disk group.
• Add only the +DATA encapsulated restore devices to the Production host masking view.
Note: Do not include the +REDO encapsulated restore devices in the Production masking view as they are not used in this scenario.
• Use SnapVX to link-copy the +DATA encapsulated restore devices, overwriting the Production native VMAX3 +DATA ASM disk group with their content.
o The SnapVX link-copy from the encapsulated restore devices to the native VMAX3 devices is performed inside the integrated system without using host I/Os.
• Step (4c): When the data changes are fully copied, mount the restored +DATA disk group and perform database media recovery (using RMAN or SQL).
Note: If the Production host’s +REDO ASM disk group survived, do not overwrite it with a backup-set of the logs. The content of the backup +REDO logs will not be used for recovery. If +REDO was lost, consider creating a new +REDO disk group and log files; it may be quicker than using SnapVX to copy an older version of +REDO from backup.
• If RMAN requires missing archive logs, repeat a similar process from (4b) for older +FRA backup-sets: perform a ProtectPoint backup restore using the fra configuration file and a backup-id, rename the encapsulated +FRA ASM disk group to +RESTORED_FRA, mount it to ASM and use its archive logs. If more than one +FRA backup-set is required, dismount the +RESTORED_FRA ASM disk group and bring in the next, repeating this step as necessary.
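To make the (4b) flow concrete, the following is a minimal sketch of the catalog-and-recover sequence once +RESTORED_DATA is mounted. These are standard RMAN commands; the datafile number is illustrative only, and the full worked example appears later in this paper:

RMAN> connect target /
RMAN> catalog start with '+RESTORED_DATA/ORCL/DATAFILE' noprompt;
RMAN> restore datafile 6;    # restores from the cataloged datafile copy (illustrative datafile number)
RMAN> recover datafile 6;    # rolls the restored file forward using available logs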

Summary
While this guide cannot cover all possible backup and recovery scenarios, which vary based on the circumstances and type of failure, it provides an overview and examples of key scenarios that can be implemented leveraging the ProtectPoint File System Agent integration.

BACKUP AND RECOVERY USE CASES SETUP
• Perform a system setup as described in Appendix I – ProtectPoint System Setup. At the end of the setup you will have:
o An initial snapshot and a one-time full link-copy between the Production database devices (+DATA, +REDO, and +FRA ASM disk group devices) and the encapsulated backup devices.
o Two ProtectPoint configuration files: one with the devices of the +DATA and +REDO ASM disk groups, containing all the database data, control, and log files; the other with all the devices of the +FRA ASM disk group, containing the archive logs.
• The following use cases deploy simple Linux shell scripts to simplify their execution. The content of the scripts is in Appendix IV – Scripts. Running any of the scripts without parameters will display the required parameters.
Note: Scripts starting with “se_” reference Solutions Enabler commands, scripts starting with “pp_” reference ProtectPoint commands, and scripts starting with “ora_” reference Oracle SQL or RMAN commands.
Note: When using the Oracle 12c Storage Snapshot Optimization feature instead of hot backup mode for a backup solution, Oracle requires the time of the snapshot during database recovery. In ProtectPoint File System Agent v1.0 the protectpoint backup show list command lists a different time than the snapshot time. Therefore, as a work-around, the script ‘pp_backup.sh’ attaches the TimeFinder SnapVX snapshot time to the user-provided backup description. However, if the DBA believes that the Management host clock (from which the SnapVX snapshot time is taken) is not coordinated with the Database servers’ clock, the DBA may prefer to modify the ‘pp_backup.sh’ script to capture the current time from the database server or the database itself.
• When presenting the encapsulated restore devices to a host (either Mount or Production), a rescan of the SCSI bus may be required for the host to recognize them. This can obviously be achieved by a host reboot; however, there are ways to do it online without rebooting, depending on the operating system and HBA type. The script ‘os_rescan.sh’ is used to perform this activity online in the following examples. When using ASMlib, use the ‘oracleasm scandisks’ command. This topic is beyond the scope of this white paper, though you can review your HBA and host operating system documentation.
• Prior to using protectpoint restore prepare, the restore devices need to be in read-write (RW) state. However, prior to performing the SnapVX link-copy, or prior to creating a snapshot of the restore devices (use case 4c), they need to be in not_ready (NR) state. You can use the se_devs.sh script to check the restore devices state and change it between ready and not_ready, as sketched below.
Script note: The script ‘se_devs.sh’ takes a storage group (SG) name and a command option: show, ready, or not_ready. It will show the status of the SG devices, or make them ready or not_ready, respectively.
[root@dsib1141 scripts]# ./se_devs.sh rstr_data_sg show
[root@dsib1141 scripts]# ./se_devs.sh rstr_redo_sg show
[root@dsib1141 scripts]# ./se_devs.sh rstr_fra_sg show
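For reference, a minimal sketch of what ‘se_devs.sh’ could look like (the authoritative version is in Appendix IV – Scripts). The symaccess listing is a standard Solutions Enabler command; the symsg group-level ready/not_ready control is an assumption about this Solutions Enabler release, and per-device symdev ready/not_ready commands can be substituted if it is not available:

#!/bin/bash
# se_devs.sh <sg_name> <show|ready|not_ready> -- illustrative sketch only
SG=$1; ACTION=$2
export SYMCLI_SID=000196700531
if [ "$ACTION" = "show" ]; then
    symaccess -sid $SYMCLI_SID show $SG -type storage      # list the SG and its devices
else
    # assumption: symsg supports SG-wide device ready/not_ready control;
    # otherwise loop over the SG devices with: symdev -sid $SYMCLI_SID $ACTION <SymDevID>
    symsg -sid $SYMCLI_SID -sg $SG $ACTION -nop
fi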

BACKUP ORACLE ASM DATABASE USING PROTECTPOINT
• For Oracle databases prior to 12c place the Production database in hot-backup mode.
SQL> alter database begin backup;
• Step (1a): Use the ProtectPoint snapshot create command to create a snapshot for both the +DATA and +REDO ASM disk groups.
[root@dsib1141 scripts]# ./pp_snap.sh database
• For Oracle databases prior to 12c end hot-backup mode.
SQL> alter database end backup;
• In Oracle (using SQL or RMAN) switch and archive the logs.
[root@dsib1141 scripts]# ./ora_switchandarchive.sh
• Step (1b): Execute ProtectPoint snapshot create using the fra configuration file to create a snapshot of the +FRA ASM disk group (archive logs).
[root@dsib1141 scripts]# ./pp_snap.sh fra
• Step (2): Link-copy the snapshot PiT data incrementally to Data Domain using two ProtectPoint backup create commands with an appropriate user-description for the backup: one using the database configuration file, the other using the fra configuration file. This way, both “database” and “fra” backup copies to Data Domain can take place simultaneously.
[root@dsib1141 scripts]# ./pp_backup.sh database “Nightly backup”
[root@dsib1141 scripts]# ./se_snap_show.sh database    <- monitor progress
[root@dsib1141 scripts]# ./pp_backup.sh fra “Nightly backup”
[root@dsib1141 scripts]# ./se_snap_show.sh fra         <- monitor progress
Script note: If running in the foreground, the ‘protectpoint backup create’ command will only return the prompt when the copy finishes and the backup-set is created. It can take a while, with no progress indication. Instead, the script ‘pp_backup.sh’ runs it in the background, and the ‘se_snap_show.sh’ script can be used to monitor the copy progress. Note also that ‘pp_backup.sh’ adds the actual snapshot time to the end of the description in brackets.
• During the backup process a ‘test’ table was used to insert known records. These records will be used during the recovery use cases 4a and 4c as a reference to how much recovery is performed. Use case 4b demonstrates recovery from physical block corruption and has a new set of values in the ‘test’ table included in that use case.
SQL> select * from test;

TS                                                 REC
-------------------------------------------------- --------------------------------
25-MAR-15 09.36.51.342907 AM -04:00                before db snapshot
25-MAR-15 09.45.52.580309 AM -04:00                after db snapshot
25-MAR-15 09.46.13.848576 AM -04:00                after log switch
25-MAR-15 09.46.40.174418 AM -04:00                after fra snapshot
25-MAR-15 09.48.08.435286 AM -04:00                after pp backup database started
25-MAR-15 09.49.15.663259 AM -04:00                after pp backup fra started
25-MAR-15 09.59.37.514299 AM -04:00                both backups completed

7 rows selected.
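Putting the backup steps together, a nightly backup could be driven by a single wrapper similar to the sketch below. It reuses only the scripts and SQL shown above (the scripts themselves are in Appendix IV); the wrapper name and the sqlplus connection are assumptions for illustration:

#!/bin/bash
# nightly_backup.sh -- sketch of the documented backup flow (pre-12c hot-backup variant)
set -e
sqlplus -s "/ as sysdba" <<EOF
alter database begin backup;
EOF
./pp_snap.sh database                        # Step (1a): snapshot +DATA and +REDO
sqlplus -s "/ as sysdba" <<EOF
alter database end backup;
EOF
./ora_switchandarchive.sh                    # switch and archive the logs
./pp_snap.sh fra                             # Step (1b): snapshot +FRA
./pp_backup.sh database "Nightly backup"     # Step (2): copy to Data Domain;
./pp_backup.sh fra "Nightly backup"          # both copies run simultaneously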

USING MOUNT HOST TO PICK A BACKUP-SET TO COPY TO PRODUCTION (4A, RECOVERABLE)
• Perform ProtectPoint backup list to choose a backup-id to restore, using either of the configuration files.
[root@dsib1141 scripts]# ./pp_list_backup.sh database
---------- List ProtectPoint backups ----------
Backup id    Backup start time    Duration     Status    Description
                                  (hh:mm:ss)
----------   -------------------  ----------   --------  ----------------------------------------
35cc4…72fe   2015-03-25 09:47:37  00:02:49     complete  database Nightly backup (2015_03_25_09:45:11)
bc020…d08c   2015-03-25 09:50:51  00:02:10     complete  fra Nightly backup (2015_03_25_09:46:28)
----------   -------------------  ----------   --------  ----------------------------------------
Backups found: 2
• Step (3): Perform two ProtectPoint backup restore commands: one using the database configuration file and a backup-id, and one using the fra configuration file and a backup-id:
[root@dsib1141 scripts]# ./pp_restore.sh database 35cc4c92-9ab5-09a4-a6f6-b832494372fe
[root@dsib1141 scripts]# ./pp_restore.sh fra bc0202f6-52f1-ce09-a46e-ebb9ae61d08c
• Add the +DATA, +REDO, and +FRA encapsulated restore devices to the Mount host masking view:
[root@dsib1141 scripts]# symaccess -type storage -name mount_sg add sg rstr_data_sg,rstr_redo_sg,rstr_fra_sg
• Step (4a, recoverable): Mount the 3 ASM disk groups on the Mount host.
o If RAC is running on the Mount host then it should be already configured and running, using a separate ASM disk group (+GRID). However, in the case of a single instance, Oracle High-Availability Services may need to be started first.
[root@dsib1136 ~]# su - oracle
[oracle@dsib1136 ~]$ TOGRID
[oracle@dsib1136 ~]$ crsctl start has
CRS-4123: Oracle High Availability Services has been started.
o Mount the +DATA, +REDO, and +FRA ASM disk groups.
[oracle@dsib1136 ~]$ sqlplus "/ as sysasm"
SQL> alter system set asm_diskstring='/dev/mapper/ora*p1';
SQL> alter diskgroup data mount;
SQL> alter diskgroup redo mount;
SQL> alter diskgroup fra mount;
• Perform minimal database media recovery using the available archive logs in the +FRA, then open the database READ ONLY.
o In the example, RMAN is used to copy the backup control file to its right place, and then SQL is used with automatic media recovery. Alternatively, RMAN can be used for the media recovery (as in use case 4c).
[oracle@dsib1136 ~]$ TODB
[oracle@dsib1136 ~]$ rman
RMAN> connect target /
RMAN> startup nomount;
RMAN> restore controlfile from '+FRA/CTRL.BCK';
RMAN> exit
[oracle@dsib1136 ~]$ sqlplus "/ as sysdba"
SQL> alter database mount;
SQL> recover database until cancel using backup controlfile snapshot time 'MAR-25-2015 09:45:11';
ORA-00279: change 783215 generated at 03/25/2015 09:43:21 needed for thread 1
ORA-00289: suggestion : +FRA/ORCL/ARCHIVELOG/2015_03_25/thread_1_seq_52.308.875267149
ORA-00280: change 783215 for thread 1 is in sequence #52
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
AUTO
ORA-00279: change 982703 generated at 03/25/2015 09:45:49 needed for thread 1
ORA-00289: suggestion : +FRA/ORCL/ARCHIVELOG/2015_03_25/thread_1_seq_53.309.875267155
ORA-00280: change 982703 for thread 1 is in sequence #53
ORA-00278: log file '+FRA/ORCL/ARCHIVELOG/2015_03_25/thread_1_seq_52.308.875267149' no longer needed for this recovery
ORA-00279: change 989138 generated at 03/25/2015 09:45:54 needed for thread 1
ORA-00289: suggestion : +FRA
ORA-00280: change 989138 for thread 1 is in sequence #54
ORA-00278: log file '+FRA/ORCL/ARCHIVELOG/2015_03_25/thread_1_seq_53.309.875267155' no longer needed for this recovery
ORA-00308: cannot open archived log '+FRA'    <- no more archive logs left to use!
ORA-17503: ksfdopn:2 Failed to open file +FRA
ORA-15045: ASM file name '+FRA' is not in reference form
SQL> alter database open read only;
Database altered.

Committed transactions have been recovered up to the point of the log switch and FRA backup. This recovery only included the minimum archives required to open the database.
Note: Since the database was in archive log mode and you did not bring back +FRA from backup, Oracle will only log its inability to write new archive logs in the database alert.log (as long as archives are optional). If a single instance database is used on the Mount host (even if Production is clustered), consider disabling archive log mode before opening the database, or leaving it in place.
• For reference, review the ‘test’ table.
SQL> select * from test;

TS                                                 REC
-------------------------------------------------- --------------------------------
25-MAR-15 09.36.51.342907 AM -04:00                before db snapshot
25-MAR-15 09.45.52.580309 AM -04:00                after db snapshot

2 rows selected.

USING MOUNT HOST AND DATABASE CLONE FOR LOGICAL RECOVERY (4A, RESTARTABLE)
• Perform ProtectPoint backup list to choose a backup-id to restore. Either configuration file will show the same list.
[root@dsib1141 scripts]# ./pp_list_backup.sh database
• Step (3): Perform ProtectPoint backup restore using the database configuration file and a backup-id.
[root@dsib1141 scripts]# ./pp_restore.sh database 35cc4c92-9ab5-09a4-a6f6-b832494372fe
• Add the +DATA and +REDO encapsulated restore devices to the Mount host masking view.
[root@dsib1141 scripts]# symaccess -type storage -name mount_sg add sg rstr_data_sg,rstr_redo_sg
• Step (4a): Mount the 2 ASM disk groups on the Mount host.
o If RAC is used on the Mount host then it should be already configured and running, using a separate ASM disk group (+GRID). However, in the case of a single instance, Oracle High-Availability Services may need to be started first.
[root@dsib1136 ~]# su - oracle
[oracle@dsib1136 ~]$ TOGRID
[oracle@dsib1136 ~]$ crsctl start has
CRS-4123: Oracle High Availability Services has been started.
o Mount the +DATA and +REDO ASM disk groups.
[oracle@dsib1136 ~]$ sqlplus "/ as sysasm"
SQL> alter system set asm_diskstring='/dev/mapper/ora*p1';
System altered.
SQL> alter diskgroup data mount;
Diskgroup altered.
SQL> alter diskgroup redo mount;
Diskgroup altered.
• Do not perform database media recovery. Instead, simply start the database.
[oracle@dsib1136 ~]$ TODB
[oracle@dsib1136 ~]$ sqlplus "/ as sysdba"
SQL> startup
ORACLE instance started.
Total System Global Area 1325400064 bytes
Fixed Size                  3710112 bytes
Variable Size            1107297120 bytes
Database Buffers           67108864 bytes
Redo Buffers              147283968 bytes
Database mounted.
Database opened.
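If, as the note above suggests, archive log mode should be disabled on a single-instance Mount host clone, a minimal sketch using standard Oracle SQL (run before opening the database for use) is:

SQL> shutdown immediate
SQL> startup mount
SQL> alter database noarchivelog;
SQL> alter database open;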

• For reference, review the ‘test’ table. Since the database was only started from the time of the backup, without any roll forward of logs, the only transaction reported is the one prior to the snapshot, which is the backup time.
SQL> select * from test;

TS                                                 REC
-------------------------------------------------- --------------------------------
25-MAR-15 09.36.51.342907 AM -04:00                before db snapshot

RMAN RECOVERY OF PRODUCTION WITHOUT SNAPVX COPY (4B)
• First, simulate a block corruption to demonstrate this use case.4
o Before the corruption, perform a backup (as described in use case 1a) and introduce a new set of known records to the ‘test’ table during the backup process, as before:
SQL> select * from test;

TS                                                 REC
-------------------------------------------------- --------------------------------
30-MAR-15 12.39.22.026100 PM -04:00                before database snapshot
30-MAR-15 12.41.43.751072 PM -04:00                after database snapshot
30-MAR-15 12.44.48.932923 PM -04:00                after log switch
30-MAR-15 12.49.41.262434 PM -04:00                after fra snapshot

o Introduce a physical block corruption to one of the data files in ASM, and then query the table.
4 The method to corrupt a database block in ASM is introduced in this blog.
SQL> select * from corrupt_test where password='P7777';
select * from corrupt_test where password='P7777'
*
ERROR at line 1:
ORA-01578: ORACLE data block corrupted (file # 6, block # 151)
ORA-01110: data file 6: '+DATA/ORCL/DATAFILE/bad_data'
SQL> exit
[oracle@dsib1141 oracle]$ dbv file='+DATA/ORCL/DATAFILE/bad_data' blocksize=8192
...
Total Pages Marked Corrupt : 1
...
• Perform ProtectPoint backup list using either of the configuration files to choose a backup-id to restore.
[root@dsib1141 scripts]# ./pp_list_backup.sh database
--------- List ProtectPoint backups ---------
Backup id  Backup start time    Duration     Status    Description
                                (hh:mm:ss)
---------  -------------------  ----------   --------  ----------------------------------------------------------------
9df…620    2015-03-30 13:31:28  00:01:48     complete  fra Nightly before block corruption (2015_03_30_12:47:04)
e7e…5a0    2015-03-30 13:38:17  00:14:39     complete  database Nightly before block corruption (2015_03_30_12:39:58)
---------  -------------------  ----------   --------  ----------------------------------------------------------------
Backups found: 2
• Make sure that the +DATA encapsulated restore devices are in “ready” state. If not, make them ready.
[root@dsib1141 scripts]# ./se_devs.sh rstr_data_sg show

Symmetrix ID: 000196700531

        Device Name           Dir                  Device
---------------------------- ------- -------------------------------------
                                                                       Cap
Sym   Physical      SA :P     Config        Attribute    Sts       (MB)
---------------------------- ------- -------------------------------------
00027 Not Visible   ***:***   TDEV          N/Grp'd      NR      102401
00028 Not Visible   ***:***   TDEV          N/Grp'd      NR      102401
00029 Not Visible   ***:***   TDEV          N/Grp'd      NR      102401
0002A Not Visible   ***:***   TDEV          N/Grp'd      NR      102401

[root@dsib1141 scripts]# ./se_devs.sh rstr_data_sg ready
• Step (3): Perform ProtectPoint backup restore using the database configuration file and a backup-id (the operation takes a few seconds).
[root@dsib1141 scripts]# ./pp_restore.sh database e7e13abe-204c-3838-8c22-ff2bb9a225a0
• Step (4b): Mount the +DATA encapsulated restore devices ASM disk group to Production, renaming it to +RESTORED_DATA. Then use it for RMAN recovery, in this case – recover a block corruption.
o Add the +DATA encapsulated restore devices to the Production host masking view.
[root@dsib1141 scripts]# symaccess -type storage -name prod_database_sg add sg rstr_data_sg
o If necessary, scan the host SCSI bus for new devices and, if using DM-multipath, restart the service.
[root@dsib1141 scripts]# ./os_rescan.sh
[root@dsib1141 scripts]# service multipathd restart
[root@dsib1141 scripts]# multipath -ll | grep dd_data
ora_dd_data4 (360000970000196700531533030303241) dm-38 EMC,SYMMETRIX
ora_dd_data2 (360000970000196700531533030303238) dm-37 EMC,SYMMETRIX
ora_dd_data3 (360000970000196700531533030303239) dm-39 EMC,SYMMETRIX
ora_dd_data1 (360000970000196700531533030303237) dm-36 EMC,SYMMETRIX
o Rename the encapsulated +DATA ASM disk group to +RESTORED_DATA, and mount it to ASM. Note that if ASMlib is used the encapsulated restore devices will need to be renamed using ASMlib commands (oracleasm renamedisk) prior to the next step of renaming the ASM disk group.
[oracle@dsib1141 oracle]$ cat ./ora_rename_DATA.txt
/dev/mapper/ora_dd_data1p1 DATA RESTORED_DATA
/dev/mapper/ora_dd_data2p1 DATA RESTORED_DATA
/dev/mapper/ora_dd_data3p1 DATA RESTORED_DATA
/dev/mapper/ora_dd_data4p1 DATA RESTORED_DATA
[oracle@dsib1141 oracle]$ TOGRID
[oracle@dsib1141 oracle]$ renamedg dgname=DATA newdgname=RESTORED_DATA config=./ora_rename_DATA.txt asm_diskstring='/dev/mapper/ora_dd*p1'
[oracle@dsib1141 oracle]$ sqlplus "/ as sysasm"
SQL> select name, state from v$asm_diskgroup;

NAME                           STATE
------------------------------ -----------
REDO                           MOUNTED
RESTORED_DATA                  DISMOUNTED
FRA                            MOUNTED
DATA                           MOUNTED

SQL> alter diskgroup restored_data mount;
Diskgroup altered.
• Catalog the +RESTORED_DATA ASM diskgroup on the encapsulated restore devices with RMAN. If additional backups of +DATA are needed, unmount the +RESTORED_DATA disk group and repeat the process with another backup-set of +DATA.
RMAN> connect catalog rco/oracle@catdb    <- if RMAN catalog is available (optional)
RMAN> catalog start with '+RESTORED_DATA/ORCL/DATAFILE' noprompt;
...
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: +RESTORED_DATA/ORCL/DATAFILE/system.258.875207543
File Name: +RESTORED_DATA/ORCL/DATAFILE/sysaux.259.875207561
File Name: +RESTORED_DATA/ORCL/DATAFILE/undotbs1.260.875207563
File Name: +RESTORED_DATA/ORCL/DATAFILE/sys_undots.261.875216445
File Name: +RESTORED_DATA/ORCL/DATAFILE/iops.262.875208249
File Name: +RESTORED_DATA/ORCL/DATAFILE/bad_data.263.875708743
• Perform RMAN recovery using commands based on the situation (in this case – physical block corruption recovery).
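As an alternative to reading the trace file shown next, the corrupt blocks found by validate are also recorded in the v$database_block_corruption view, and RMAN can repair everything listed there in one command. This is standard RMAN/SQL syntax, shown as a sketch rather than the exact sequence used in this example:

RMAN> validate check logical database;
SQL> select file#, block#, blocks from v$database_block_corruption;
RMAN> recover corruption list;    # repairs all blocks listed in the view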

o First, find where the corruption is by scanning the database and then checking the trace log.
RMAN> validate check logical database;
...
File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
6    FAILED 0              1121         1280            1675452
File Name: +DATA/ORCL/DATAFILE/bad_data
Block Type Blocks Failing Blocks Processed
---------- -------------- ----------------
Data       0              27
Index      0              0
Other      1              132

validate found one or more corrupt blocks
See trace file /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_ora_8556.trc for details
[oracle@dsib1141 oracle]$ grep Corrupt /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_ora_8556.trc
Corrupt block relative dba: 0x01800097 (file 6, block 151)
o Recover the corrupt block from the cataloged backup-set.
RMAN> recover datafile 6 block 151;
...
channel ORA_DISK_1: restoring block(s) from datafile copy +RESTORED_DATA/ORCL/DATAFILE/bad_data.263.875708743
...
Finished recover at 30-MAR-15
[oracle@dsib1141 oracle]$ dbv file='+DATA/ORCL/DATAFILE/bad_data' blocksize=8192
...
Total Pages Marked Corrupt : 0
...
• If RMAN requires missing archive logs during the recovery (when performing a different type of RMAN recovery than block corruption), choose an appropriate +FRA backup-set from Data Domain and mount the +FRA encapsulated restore devices to Production, renaming the disk group to +RESTORED_FRA. Use the archives, and if more are necessary, unmount this disk group and repeat the process.
o ProtectPoint backup restore using the fra configuration file and a backup-id.
[root@dsib1141 scripts]# ./pp_restore.sh fra <backup-id>
o Add the +FRA encapsulated restore devices to the Production host masking view (remove them from the Mount host masking view if they were associated with it previously).
[root@dsib1141 scripts]# symaccess -type storage -name mount_sg remove sg rstr_fra_sg
[root@dsib1141 scripts]# symaccess -type storage -name prod_database_sg add sg rstr_fra_sg
o On Production, rescan the SCSI bus and, if using DM-multipath, restart the service.
[root@dsib1141 scripts]# ./os_rescan.sh
[root@dsib1141 scripts]# service multipathd restart
[root@dsib1141 scripts]# multipath -ll | grep dd_fra
o Rename the encapsulated +FRA ASM disk group to +RESTORED_FRA, mount it to ASM, and use its archive logs.
[oracle@dsib1141 oracle]$ cat ora_rename_FRA.txt
/dev/mapper/ora_dd_fra1p1 FRA RESTORED_FRA
/dev/mapper/ora_dd_fra2p1 FRA RESTORED_FRA
/dev/mapper/ora_dd_fra3p1 FRA RESTORED_FRA
/dev/mapper/ora_dd_fra4p1 FRA RESTORED_FRA
[oracle@dsib1141 oracle]$ TOGRID
[oracle@dsib1141 oracle]$ renamedg dgname=FRA newdgname=RESTORED_FRA config=./ora_rename_FRA.txt verbose=yes asm_diskstring='/dev/mapper/ora_dd*p1'
[oracle@dsib1141 oracle]$ sqlplus "/ as sysasm"
SQL> select name, state from v$asm_diskgroup;

NAME                           STATE
------------------------------ -----------
REDO                           MOUNTED
FRA                            MOUNTED
DATA                           MOUNTED
RESTORED_DATA                  DISMOUNTED
RESTORED_FRA                   DISMOUNTED

SQL> alter diskgroup RESTORED_FRA mount;
Diskgroup altered.

RMAN RECOVERY OF PRODUCTION AFTER COPY, OVERWRITING PRODUCTION DATA DEVICES (4C)
• If a backup-set was picked on the Mount host (following use case 4a, recoverable), then the encapsulated restore devices already have the required backup-set, and likely the database was even rolled forward and opened read-only. In that case, you can skip the next few steps of performing a ProtectPoint restore and jump right to step (4c) to perform the SnapVX link-copy after dismounting the Production ASM +DATA disk group.
• Perform ProtectPoint backup list using either of the configuration files to choose a backup-id to restore.
[root@dsib1141 scripts]# ./pp_list_backup.sh database
• Step (3): Perform ProtectPoint backup restore using the database configuration file and a backup-id.
[root@dsib1141 scripts]# ./pp_restore.sh database 35cc4c92-9ab5-09a4-a6f6-b832494372fe
• Step (4c): Perform SnapVX Establish followed by a link-copy from the +DATA encapsulated restore devices back to the Production devices, overwriting them.
o If the Mount host had the encapsulated +DATA or +FRA restore devices mounted, close the database and unmount the ASM disk groups.
o To perform a link-copy from the encapsulated restore devices, they need to be made not-ready and have a snapshot that can be link-copied.
[root@dsib1141 scripts]# ./se_devs.sh rstr_data_sg not_ready
'Not Ready' Device operation successfully completed for the storage group.
[root@dsib1141 scripts]# ./se_devs.sh rstr_data_sg show

Symmetrix ID: 000196700531

        Device Name           Dir                  Device
---------------------------- ------- -------------------------------------
                                                                       Cap
Sym   Physical      SA :P     Config        Attribute    Sts       (MB)
---------------------------- ------- -------------------------------------
00027 Not Visible   ***:***   TDEV          N/Grp'd      NR      102401
00028 Not Visible   ***:***   TDEV          N/Grp'd      NR      102401
00029 Not Visible   ***:***   TDEV          N/Grp'd      NR      102401
0002A Not Visible   ***:***   TDEV          N/Grp'd      NR      102401
o Create a snapshot of the +DATA encapsulated restore devices.
[root@dsib1141 scripts]# ./se_snap_create.sh rstr data
Script note: The script ‘se_snap_create.sh’ requires two parameters. The first specifies whether the snapshot is of the Production or Restore devices (prod|rstr). The second specifies which set of devices to snap: data files only (data), data and logs together (database), or archive logs (fra).
o Make sure that the Production database is down, and that the +DATA disk group on the Production host is dismounted.
[root@dsib1141 scripts]# su - oracle
[oracle@dsib1141 ~]$ TODB
[oracle@dsib1141 ~]$ sqlplus "/ as sysdba"
SQL> shutdown abort;
SQL> exit
[oracle@dsib1141 ~]$ TOGRID
[oracle@dsib1141 ~]$ sqlplus "/ as sysasm"
SQL> alter diskgroup DATA dismount;
Diskgroup altered.
o Link-copy the snapshot from the +DATA encapsulated restore devices to the Production +DATA devices.
[root@dsib1141 scripts]# ./se_snap_link.sh rstr data
o Monitor the progress of the copy.
[root@dsib1141 scripts]# ./se_snap_show.sh rstr data
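For reference, the Solutions Enabler commands these wrapper scripts drive are of the following form. This is a sketch inferred from the unlink command echoed by ‘se_snap_unlink.sh’ below and the list command shown in Appendix I; the snapshot name rstr_data follows the scripts’ naming convention:

# establish a snapshot of the restore devices (se_snap_create.sh rstr data)
symsnapvx -sid 531 -sg rstr_data_sg -name rstr_data establish -nop
# link-copy it onto the Production +DATA devices (se_snap_link.sh rstr data)
symsnapvx -sid 531 -sg rstr_data_sg -lnsg prod_data_sg -snapshot_name rstr_data link -copy -nop
# monitor the copy at 15-second intervals (se_snap_show.sh rstr data)
symsnapvx -sid 531 -sg rstr_data_sg -snapshot_name rstr_data list -linked -detail -i 15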

o When the copy is done, unlink the session (there is no need to keep it).
[root@dsib1141 scripts]# ./se_snap_unlink.sh rstr data
+ symsnapvx -sg rstr_data_sg -lnsg prod_data_sg -snapshot_name rstr_data unlink
o If necessary, scan the host SCSI bus for new devices.
[root@dsib1141 scripts]# ./os_rescan.sh
[root@dsib1141 scripts]# service multipathd restart
• Mount the copied +DATA ASM disk group on the Production host.
o If RAC is running on the Production host, it should be already configured and running, using a separate ASM disk group (+GRID). However, in the case of a single instance, Oracle High-Availability Services may need to be started first.
[root@dsib1136 ~]# su - oracle
[oracle@dsib1136 ~]$ TOGRID
[oracle@dsib1136 ~]$ crsctl start has
CRS-4123: Oracle High Availability Services has been started.
o Mount the +DATA ASM disk group, or start ASM and make sure +DATA is mounted.
[oracle@dsib1141 ~]$ srvctl start asm
[oracle@dsib1136 ~]$ sqlplus "/ as sysasm"
SQL> select name, state from v$asm_diskgroup;

NAME                           STATE
------------------------------ -----------
REDO                           MOUNTED
FRA                            MOUNTED
DATA                           MOUNTED

• Perform database media recovery using the available archive logs in Production, bringing any missing archive logs from backup. Alternatively, RMAN or SQL can be used to perform incomplete media recovery if necessary.
o In the following example RMAN is used to copy the backup control file to its right place and then recover the database.
[oracle@dsib1141 ~]$ TODB
[oracle@dsib1141 ~]$ rman

Recovery Manager: Release 12.1.0.2.0 - Production on Mon Mar 30 09:42:29 2015
Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

RMAN> connect target /
connected to target database (not started)
RMAN> startup nomount;
RMAN> restore controlfile from '+FRA/CTRL.BCK';
RMAN> alter database mount;
RMAN> recover database;
archived log file name=+FRA/ORCL/ARCHIVELOG/2015_03_25/thread_1_seq_70.326.875267627 thread=1 sequence=70
archived log file name=+FRA/ORCL/ARCHIVELOG/2015_03_25/thread_1_seq_71.327.875267629 thread=1 sequence=71
archived log file name=+REDO/ORCL/ONLINELOG/group_2.257.875207521 thread=1 sequence=72
media recovery complete, elapsed time: 00:02:10
Finished recover at 30-MAR-15
RMAN> alter database open resetlogs;
Statement processed
• If RMAN requires missing archive logs during the recovery, choose an appropriate +FRA backup from Data Domain and mount the +FRA encapsulated restore devices to the Production host, renaming the disk group to +RESTORED_FRA. Use the archives, and if more are necessary, unmount this disk group and repeat the process.
o ProtectPoint backup restore using the fra configuration file and a backup-id.
[root@dsib1141 scripts]# ./pp_restore.sh fra bc0202f6-52f1-ce09-a46e-ebb9ae61d08c
o Add the +FRA encapsulated restore devices to the Production host masking view (remove them first from the Mount host masking view if they were associated with it previously).

[root@dsib1141 scripts]# symaccess -type storage -name mount_sg remove sg rstr_fra_sg
[root@dsib1141 scripts]# symaccess -type storage -name prod_database_sg add sg rstr_fra_sg
o On Production, rescan the SCSI bus and, if using DM-multipath, restart the service.
[root@dsib1141 scripts]# ./os_rescan.sh
[root@dsib1141 scripts]# service multipathd restart
[root@dsib1141 scripts]# multipath -ll | grep dd_fra
o Rename the encapsulated +FRA ASM disk group to +RESTORED_FRA, mount it to ASM, and use its archive logs.
[oracle@dsib1141 oracle]$ cat ora_rename_FRA.txt
/dev/mapper/ora_dd_fra1p1 FRA RESTORED_FRA
/dev/mapper/ora_dd_fra2p1 FRA RESTORED_FRA
/dev/mapper/ora_dd_fra3p1 FRA RESTORED_FRA
/dev/mapper/ora_dd_fra4p1 FRA RESTORED_FRA
[oracle@dsib1141 oracle]$ TOGRID
[oracle@dsib1141 oracle]$ renamedg dgname=FRA newdgname=RESTORED_FRA config=./ora_rename_FRA.txt verbose=yes asm_diskstring='/dev/mapper/ora_dd*p1'

renamedg operation: dgname=FRA newdgname=RESTORED_FRA config=./ora_rename_FRA.txt verbose=yes asm_diskstring=/dev/mapper/ora_dd*p1
Executing phase 1
...
Completed phase 1
Executing phase 2
...
Completed phase 2

[oracle@dsib1141 oracle]$ sqlplus "/ as sysasm"
SQL> select name, state from v$asm_diskgroup;

NAME                           STATE
------------------------------ -----------
REDO                           MOUNTED
FRA                            MOUNTED
DATA                           MOUNTED
RESTORED_FRA                   DISMOUNTED

SQL> alter diskgroup RESTORED_FRA mount;
Diskgroup altered.
• For reference, review the ‘test’ table. All the committed transactions have been recovered, including those from after the backup.
[oracle@dsib1141 oracle]$ TODB
[oracle@dsib1141 oracle]$ sqlplus "/ as sysdba"
SQL> select * from test;

TS                                                 REC
-------------------------------------------------- --------------------------------
25-MAR-15 09.36.51.342907 AM -04:00                before db snapshot
25-MAR-15 09.45.52.580309 AM -04:00                after db snapshot
25-MAR-15 09.46.13.848576 AM -04:00                after log switch
25-MAR-15 09.46.40.174418 AM -04:00                after fra snapshot
25-MAR-15 09.48.08.435286 AM -04:00                after pp backup database started
25-MAR-15 09.49.15.663259 AM -04:00                after pp backup fra started
25-MAR-15 09.59.37.514299 AM -04:00                both backups completed

7 rows selected.

CONCLUSION
ProtectPoint File System Agent offers a solution to the growing challenge of maintaining backup and recovery SLAs, even in the face of growing database capacities and increased workload. With ProtectPoint File System Agent the backup time is no longer dependent on the size of the database: only changed data is sent to the Data Domain system, yet all backups are full. Restore operations are fast, leveraging the direct connectivity between Data Domain and the VMAX3 storage array. Both backup and recovery provide great savings in host CPU and I/Os, allowing the Production host to service application transactions more efficiently. In addition, Data Domain offers deduplication, compression, and remote replications to protect the backups.

APPENDIXES

APPENDIX I – PROTECTPOINT SYSTEM SETUP

Setup Steps Overview
To prepare the system for ProtectPoint, complete the following steps:
1. Set up physical connectivity
2. Set up Management host
3. Set up Production host
4. Set up Mount host (optional)
5. Set up Data Domain system
6. Set up encapsulated vdisks
7. Set up initial SnapVX sessions
8. Set up ProtectPoint software

Table 2 describes the native devices and vdisk configuration (after steps 3, 5, and 6). Remember to use the command: symdev list -encapsulated -wwn_encapsulated to identify Data Domain device WWNs.

Table 2 Devices and SG configuration
ASM DG | Prod: Dev, SG | DD backup vdisks: Dev, WWN (shortened), SG, DDR | DD restore vdisks: Dev, WWN (shortened), SG, DDR
REDO | 013 prod_redo_sg | 01B 6002…AD00000 bkup_redo_sg vdisk-dev0 | 023 6002…AD00008 rstr_redo_sg vdisk-dev8
REDO | 014 prod_redo_sg | 01C 6002…AD00001 bkup_redo_sg vdisk-dev1 | 024 6002…AD00009 rstr_redo_sg vdisk-dev9
REDO | 015 prod_redo_sg | 01D 6002…AD00002 bkup_redo_sg vdisk-dev2 | 025 6002…AD0000A rstr_redo_sg vdisk-dev10
REDO | 016 prod_redo_sg | 01E 6002…AD00003 bkup_redo_sg vdisk-dev3 | 026 6002…AD0000B rstr_redo_sg vdisk-dev11
DATA | 017 prod_data_sg | 01F 6002…AD00004 bkup_data_sg vdisk-dev4 | 027 6002…AD0000C rstr_data_sg vdisk-dev12
DATA | 018 prod_data_sg | 020 6002…AD00005 bkup_data_sg vdisk-dev5 | 028 6002…AD0000D rstr_data_sg vdisk-dev13
DATA | 019 prod_data_sg | 021 6002…AD00006 bkup_data_sg vdisk-dev6 | 029 6002…AD0000E rstr_data_sg vdisk-dev14
DATA | 01A prod_data_sg | 022 6002…AD00007 bkup_data_sg vdisk-dev7 | 02A 6002…AD0000F rstr_data_sg vdisk-dev15
FRA  | 037 prod_fra_sg  | 03B 6002…AD00010 bkup_fra_sg vdisk-dev16 | 03F 6002…AD00014 rstr_fra_sg vdisk-dev20
FRA  | 038 prod_fra_sg  | 03C 6002…AD00011 bkup_fra_sg vdisk-dev17 | 040 6002…AD00015 rstr_fra_sg vdisk-dev21
FRA  | 039 prod_fra_sg  | 03D 6002…AD00012 bkup_fra_sg vdisk-dev18 | 041 6002…AD00016 rstr_fra_sg vdisk-dev22
FRA  | 03A prod_fra_sg  | 03E 6002…AD00013 bkup_fra_sg vdisk-dev19 | 042 6002…AD00017 rstr_fra_sg vdisk-dev23

Set up Physical Connectivity
The assumption is that the physical system connectivity was done as part of system installation by EMC personnel. Make sure that:
• SAN connectivity exists between the switch(es) and:
o Data Domain
o VMAX3
o Management host
o Production host
o Mount host (optional)
• SAN zones are created for:
o Data Domain FC ports with VMAX3 FTS DX ports

o Management host and VMAX3 front-end ports
o Production host and VMAX3 front-end ports
o Mount host (optional) and VMAX3 front-end ports

Set up Management Host Software and Masking Views
The Management host is where all user commands and scripts are initiated. Note that although in this white paper the same host (dsib1141) was used for both the Production and Management hosts, in a real deployment they should be two separate hosts. Perform the following operations to set up the Management host:
• Install Solutions Enabler (SE) CLI software.
• If Solutions Enabler Access Controls (ACL) are to be used to limit the set of devices and operations that the Management host can perform, make sure that the EMC personnel also initialize the ACL database in the VMAX3 (requires EMC personnel).
• Install Unisphere for VMAX3 (optional).
• Post SE installation:
o Update the path to include the SE binaries (e.g.: export PATH=$PATH:/usr/symcli/bin).
o If a single VMAX3 is managed, add its ID to the environment variable so it will not be needed during SE CLI execution (e.g. export SYMCLI_SID=000196700531).
• Refresh the SE database: symcfg discover. Then list the available storage devices: symdev list -mb.
• If there are available gatekeepers they can be used. Otherwise create additional small communication devices:
symconfigure -sid 531 -cmd "create gatekeeper count=8;" commit
• Update /etc/hosts with references to the Production host, Mount host, and Data Domain.
[root@dsib1141 scripts]# cat /etc/hosts
127.0.0.1      localhost.localdomain localhost
::1            localhost6.localdomain6 localhost6
10.108.245.136 dsib1136.lss.emc.com dsib1136 Mount
10.108.245.141 dsib1141.lss.emc.com dsib1141 Prod
10.108.244.18  dsib0018.lss.emc.com dsib0018 DDS
• Create a masking view for the Management host with the gatekeepers. For example:
#!/bin/bash
# To find HBA port WWNs run the following command:
# cat /sys/class/fc_host/host?/port_name
set -x
export SYMCLI_SID=000196700531
symaccess -type storage -name mgmt_gk create devs 2D:31    # gatekeepers for management host
symaccess -type initiator -name mgmt_ig create
symaccess -type initiator -name mgmt_ig add -wwn <hba1_port1_wwn>
symaccess -type initiator -name mgmt_ig add -wwn <hba2_port1_wwn>
symaccess -type port -name mgmt_pg create -dirport 1D:8,2D:8
symaccess create view -name mgmt_mv –sg mgmt_gk –ig mgmt_ig –pg mgmt_pg

Set up Production host
Production database devices may already exist. If not, they can be created using Solutions Enabler CLI or Unisphere.
Note: If Grid Infrastructure is used (Oracle RAC), its devices do not need to be backed up as they do not contain any user data.
• The following example creates via CLI: 4 x 10GB devices for the +REDO ASM disk group, 4 x 100GB devices for the +DATA ASM disk group, and 4 x 200GB devices for the +FRA ASM disk group.
[root@dsib1141 scripts]# symconfigure -sid 531 -cmd "create dev count=4, size=10 GB, emulation=FBA, config=tdev;" commit
[root@dsib1141 scripts]# symconfigure -sid 531 -cmd "create dev count=4, size=100 GB, emulation=FBA, config=tdev;" commit
[root@dsib1141 scripts]# symconfigure -sid 531 -cmd "create dev count=4, size=200 GB, emulation=FBA, config=tdev;" commit
• Create a masking view for the Production host, as shown in the following example.

• Example masking views for the Production host:
#!/bin/bash
# To find HBA port WWNs run the following command:
# cat /sys/class/fc_host/host?/port_name
set -x
export SYMCLI_SID=000196700531
symaccess -type port -name prod_pg create -dirport 1D:8,2D:8,3D:8,4D:8              <- Port group
symaccess -type initiator -name prod_ig create                                      <- Initiator group
symaccess -type initiator -name prod_ig add -wwn 21000024ff3de26e
symaccess -type initiator -name prod_ig add -wwn 21000024ff3de26f
symaccess -type initiator -name prod_ig add -wwn 21000024ff3de19c
symaccess -type initiator -name prod_ig add -wwn 21000024ff3de19d
symaccess -type storage -name prod_redo_sg create devs 13:16                        <- REDO SG
symaccess -type storage -name prod_data_sg create devs 17:1A                        <- DATA SG
symaccess -type storage -name prod_fra_sg create devs 37:3A                         <- FRA SG
symaccess -type storage -name prod_database_sg create sg prod_redo_sg,prod_data_sg  <- Cascaded SG
symaccess create view -name prod_database_mv –sg prod_database_sg –ig prod_ig –pg prod_pg  <- DB masking view
symaccess create view -name prod_fra_mv –sg prod_fra_sg –ig prod_ig –pg prod_pg            <- FRA masking view
Note: Each ASM disk group type (+DATA, +REDO, and +FRA) gets its own storage group (SG) and therefore can have its own FAST SLO, and can be used separately for device masking to hosts. This example also creates a cascaded SG that includes ALL the database devices for data, control, and log files (but not archive logs / FRA). This cascaded SG (called prod_database_sg) is used to create a storage-consistent replica of the database for local replications (SnapVX) or remote replications (SRDF). It will also be used for logical recoveries on the Mount host.
Note: For RAC deployments, where multiple nodes need access to the shared production database storage devices, use a cascaded Initiator Group that includes all the Production servers' initiators.
• Example of the Oracle user .bash_profile script from the Production host (including the aliases TODB and TOGRID):
[oracle@dsib1141 ~]$ cat ~/.bash_profile
# Source global definitions
if [ -f /etc/bashrc ]; then
        . /etc/bashrc
fi
export BASE_PATH=$PATH
export DB_BASE=/u01/app/oracle
export GRID_BASE=/u01/app/grid
export DB_HOME=/u01/app/oracle/12.1/db
export GRID_HOME=/u01/app/grid/product/12.1.0/grid/
export DB_SID=orcl
export GRID_SID=+ASM
export OMS_HOME=/u01/app/oracle/middleware/oms
#export ORACLE_BASE=$DB_BASE
export ORACLE_HOME=$DB_HOME
export ORACLE_SID=$DB_SID
export PATH=$BASE_PATH:$ORACLE_HOME/bin:.
alias TODB="export ORACLE_HOME=$DB_HOME; export ORACLE_BASE=$DB_BASE; export ORACLE_SID=$DB_SID; export PATH=$DB_HOME/bin:$PATH"
alias TOGRID="export ORACLE_HOME=$GRID_HOME; export ORACLE_BASE=$GRID_BASE; export ORACLE_SID=$GRID_SID; export PATH=$GRID_HOME/bin:$PATH"
alias DH="cd $DB_HOME"
alias GH="cd $GRID_HOME"
alias OT="tail -200f /u01/app/oracle/diag/rdbms/orcl/orcl/trace/alert_orcl.log"
alias AT="tail -200f /u01/app/grid/diag/asm/+asm/+ASM/trace/alert_+ASM.log"
#export PATH=$PATH:.:$ORACLE_HOME/bin
#export LD_LIBRARY_PATH=/u01/app/grid/12.1/grid/lib
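Returning to the masking views created above, they can be verified with standard symaccess listing commands before moving on; a short sketch:

[root@dsib1141 scripts]# symaccess -sid 531 list view                            # all masking views
[root@dsib1141 scripts]# symaccess -sid 531 show view prod_database_mv          # IG, PG, and SG contents
[root@dsib1141 scripts]# symaccess -sid 531 show prod_database_sg -type storage # devices in the cascaded SG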

• If using dm-multipath make sure that not only the Production devices are accounted for in /etc/multipath.conf, but that it is also updated with entries for the +DATA and +FRA encapsulated restore devices, in case they are mounted to Production for recovery. In the following example the Production devices are called “ora_dataNN” and “ora_fraNN”, and the encapsulated restore devices are called “ora_dd_dataNN” and “ora_dd_fraNN”:
...
multipath {
    wwid 360000970000196700531533030303433
    alias ora_data1
}
multipath {
    wwid 360000970000196700531533030303434
    alias ora_data2
}
...
multipath {
    wwid <WWN of encapsulated restore device>
    alias ora_dd_data1
}
...

Set up Mount host (optional)
The Mount host is optional and can be used for logical recoveries or to browse through Data Domain backup-sets (using the encapsulated restore devices) before starting a SnapVX link-copy operation that can take some time to complete.
• If RAC is used on the Mount host, it is recommended to configure Grid Infrastructure (+GRID ASM disk group) in advance with the cluster configuration and quorum devices. This way, when a backup is ready to be mounted to the Mount host, the encapsulated restore devices will be masked to the host (made visible), and ASM can simply mount (open) the ASM disk groups.
• If RAC is not used on the Mount host, then ASM will not be able to start until it has access to the initial ASM disk group (which will not be available until a backup-set is mounted to the Mount host). To prepare, perform the following steps:
o Install the Grid and Oracle database binaries only, with the same version as Production (do not create disk groups or a database).
o Extract the ASM init.ora file from Production, and then copy it to the Mount host.
[oracle@dsib1141 dbs]$ TOGRID
[oracle@dsib1141 dbs]$ sqlplus “/ as sysasm”
SQL> create pfile='/tmp/initASM.ora' from spfile;
[oracle@dsib1141 dbs]$ scp /tmp/initASM.ora dsib1136:/download/scripts/oracle/initASM.ora
• From the Management host, create a masking view for the Mount host.
o Since a masking view cannot be created without a storage group, you can add a storage group with a few gatekeepers.
o Create a cascaded storage group that for now only includes the gatekeepers SG. Later, when restore devices should be mounted, their SGs will simply be added to the cascaded SG and they will be accessible by the Mount host.
symaccess -type initiator -name mount_ig create
symaccess -type initiator -name mount_ig add -wwn 21000024ff3de192
symaccess -type initiator -name mount_ig add -wwn 21000024ff3de193
symaccess -type initiator -name mount_ig add -wwn 21000024ff3de19a
symaccess -type initiator -name mount_ig add -wwn 21000024ff3de19b
symaccess -type storage -name mount_gk create devs 32:36
symaccess -type storage -name mount_sg create sg mount_gk    <- cascaded SG for use later
symaccess -type port -name mount_pg create -dirport 1D:8,4D:8
symaccess view create -name mount_mv -sg mount_sg -ig mount_ig -pg mount_pg
• If using dm-multipath make sure to add entries in /etc/multipath.conf for the +DATA, +REDO, and +FRA encapsulated restore devices.

o Register and start ASM on the Mount host using the copied init.ora file.
[oracle@dsib1136 ~]$ srvctl add asm -p /download/scripts/oracle/initASM.ora
[oracle@dsib1136 ~]$ srvctl start asm
[oracle@dsib1136 ~]$ srvctl status asm
• The following is an example of the Oracle user .bash_profile script from the Mount host (including the aliases TODB and TOGRID):
[oracle@dsib1136 ~]$ cat ~/.bash_profile
# Source global definitions
if [ -f /etc/bashrc ]; then
        . /etc/bashrc
fi
export BASE_PATH=$PATH
export DB_BASE=/u01/app/oracle
export GRID_BASE=/u01/app/grid
export DB_HOME=/u01/app/oracle/12.1/db
export GRID_HOME=/u01/app/grid/product/12.1.0/grid/
export DB_SID=orcl
export GRID_SID=+ASM
alias TODB="export ORACLE_HOME=$DB_HOME; export ORACLE_BASE=$DB_BASE; export ORACLE_SID=$DB_SID; export PATH=$DB_HOME/bin:$PATH"
alias TOGRID="export ORACLE_HOME=$GRID_HOME; export ORACLE_BASE=$GRID_BASE; export ORACLE_SID=$GRID_SID; export PATH=$GRID_HOME/bin:$PATH"
alias ls="ls -Fa"
alias ll="ls -Fla"
alias llt="ls -Flart"
alias df="df -H"
alias DH="cd $DB_HOME"
alias GH="cd $GRID_HOME"
alias OT="tail -200f /u01/app/oracle/diag/rdbms/oltp/orcl/trace/alert_orcl.log"
alias AT="tail -200f /u01/app/grid/diag/asm/+asm/+ASM/trace/alert_+ASM.log"
#export PATH=$PATH:.:$ORACLE_HOME/bin
#export LD_LIBRARY_PATH=/u01/app/grid/12.1/grid/lib
• When preparing for the encapsulated restore devices on the Mount host:
o It is highly recommended that the +DATA, +REDO, and +FRA encapsulated restore devices be presented to the Mount host during setup, and the Mount host rebooted once, so the devices can be registered with the host. That way the host will not need to be rebooted again later; when the ASM disk group devices become visible to the Mount host, it can simply rescan for the new storage devices. This step can only take place later, after the vdisks have been encapsulated.
o Match the Mount host ASM disk string and device names with Production:
- If Production uses ASMlib, then during the recovery use cases on the Mount host ASM will find its own labels. No further work is required.
- If Production uses EMC PowerPath (without ASMlib), then during the recovery use cases on the Mount host the devices are discovered by a rescan. No further work is required.
o If dm-multipath is used, the file /etc/multipath.conf should contain aliases similar to the +DATA, +REDO, and +FRA devices from Production, only using the WWNs of the matching encapsulated restore devices.

Set up Data Domain system
Licenses and SSH
• License the Data Domain system for the vdisk service, remote replications, etc.
[root@dsib1141]# ssh sysadmin@DDS license show
[root@dsib1141]# ssh sysadmin@DDS license add <license-key>
• Set secure SSH between the Management host and the Data Domain system. That way, Data Domain will not ask for a password with each set of commands. Note that “DDS” is an entry in /etc/hosts with the IP address of the Data Domain system.
Note: Only follow this step if the Data Domain CLI will be scripted from the Management host. There is no need to follow this step when ProtectPoint CLIs are used exclusively to communicate with the Data Domain system.
[root@dsib1141]# ssh-keygen -t rsa
[root@dsib1141]# ssh sysadmin@DDS adminaccess add ssh-keys < ~/.ssh/id_rsa.pub
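Once the keys are in place, scripted Data Domain commands should no longer prompt for a password. A quick sanity check reuses the vdisk status command shown in the next section:

[root@dsib1141]# ssh sysadmin@DDS "vdisk status"    # should return immediately, with no password prompt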

Set up vdisk service and devices
• Enable the FC service (probably already enabled).
sysadmin@dsib0018# scsitarget enable
sysadmin@dsib0018# scsitarget status
• Enable the vdisk service.
sysadmin@dsib0018# vdisk enable
sysadmin@dsib0018# vdisk status
• Create a vdisk Pool (for example, ERP).
sysadmin@dsib0018# vdisk pool create ERP user sysadmin
sysadmin@dsib0018# vdisk pool show list
• Create a vdisk device group (for example, OLTP).
sysadmin@dsib0018# vdisk device-group create OLTP pool ERP
sysadmin@dsib0018# vdisk device-group show list
• Create 2 identical groups of vdisk devices, one for backup and one for restore devices, matching in capacity the Production host +REDO, +DATA, and +FRA devices.
sysadmin@dsib0018# vdisk device create pool ERP device-group OLTP count 8 capacity 10 GiB
sysadmin@dsib0018# vdisk device create pool ERP device-group OLTP count 8 capacity 100 GiB
sysadmin@dsib0018# vdisk device create pool ERP device-group OLTP count 8 capacity 200 GiB
sysadmin@dsib0018# vdisk device show list pool ERP
Device       Device-group   Pool    Capacity   WWNN
                                    (MiB)
-----------  -------------  ------  ---------  -----------------------------------------------
vdisk-dev0   OLTP           ERP     10241      60:02:18:80:00:08:a0:24:19:05:48:90:7a:d0:00:00
vdisk-dev1   OLTP           ERP     10241      60:02:18:80:00:08:a0:24:19:05:48:90:7a:d0:00:01
vdisk-dev2   OLTP           ERP     10241      60:02:18:80:00:08:a0:24:19:05:48:90:7a:d0:00:02
vdisk-dev3   OLTP           ERP     10241      60:02:18:80:00:08:a0:24:19:05:48:90:7a:d0:00:03
...
Note: If the VMAX3 native devices were created using capacity notation of “MB” or “GB” you can also create the vdisks with matching “MiB” and “GiB” capacities. Otherwise, inspect the geometry of the VMAX3 native device using a symdev show command, and create the vdisk devices with a matching geometry, using <heads>, <cylinders>, and <sectors> instead of MiB or GiB. For example:
# On Management host:
[root@dsib1141 ~]# symdev show 013
...
    Geometry  : Native {
        Sectors/Track        :       256  => Equivalent to vdisk “Sectors per track”
        Tracks/Cylinder      :        15  => Equivalent to vdisk “Heads”
        Cylinders            :      5462  => Equivalent to vdisk “Cylinders”
        512-byte Blocks      :  20974080
        MegaBytes            :     10241
        KiloBytes            :  10487040
    }
# On Data Domain:
sysadmin@dsib0018# vdisk device create count <count> heads <head-count> cylinders <cylinder-count> sectors-per-track <sector-count> pool <pool-name> device-group <device-group-name>
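To sanity-check that a vdisk geometry matches the native device, multiply the three geometry values into a block count and compare it with the symdev output; for the 10GB +REDO device above:

# 15 heads x 5462 cylinders x 256 sectors/track = 20,974,080 blocks of 512 bytes
[root@dsib1141 ~]# echo $((15 * 5462 * 256))           # -> 20974080, matches "512-byte Blocks"
[root@dsib1141 ~]# echo $((15 * 5462 * 256 / 2048))    # -> 10241 MB, matches "MegaBytes"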

Create Access Group for Data Domain device masking
Note: Data Domain uses a concept similar to VMAX Auto-provisioning Groups (device masking), with an Access Group containing the initiators (the VMAX backend ports in this case, which will only be visible after the zoning was done correctly) and the vdisk devices to be encapsulated.
• Create a Data Domain Access Group and add the vdisks to it.
sysadmin@dsib0018# scsitarget group create ERP service vdisk        <- Create Access Group
sysadmin@dsib0018# scsitarget initiator show list                   <- List VMAX DX ports
Initiator    System Address           Group   Service
-----------  -----------------------  ------  -------
initiator-1  50:00:09:73:50:08:4c:05  n/a     n/a
initiator-2  50:00:09:73:50:08:4c:49  n/a     n/a
initiator-3  50:00:09:73:50:08:4c:09  n/a     n/a
initiator-4  50:00:09:73:50:08:4c:45  n/a     n/a
-----------  -----------------------  ------  -------
sysadmin@dsib0018# scsitarget group add ERP initiator initiator-*   <- Add DX ports to Access Group
sysadmin@dsib0018# scsitarget endpoint show list                    <- List DD FC ports
Endpoint       System Address  Transport     Enabled  Status
-------------  --------------  ------------  -------  ------
endpoint-fc-0  0a              FibreChannel  Yes      Online   <- Check their status
endpoint-fc-1  0b              FibreChannel  Yes      Online
endpoint-fc-2  1a              FibreChannel  Yes      Online
endpoint-fc-3  1b              FibreChannel  Yes      Online
-------------  --------------  ------------  -------  ------
sysadmin@dsib0018# vdisk device show list pool ERP                  <- List vdisk devices
sysadmin@dsib0018# scsitarget group add ERP device vdisk-dev*       <- Add vdisks to Access Group
sysadmin@dsib0018# scsitarget group show detailed ERP               <- List Access Group details

Set up encapsulated vdisks
• After the vdisks are created and added to a Data Domain Access Group they will become visible to the VMAX3 and therefore can be encapsulated.
• In the following example a script on the Management host is used with these steps:
o ssh to the Data Domain system and capture the list of vdisks in pool ERP with their WWNs. Save the output in file ‘vdisk_wwn.txt’.
o Remove everything but the WWN of each vdisk, and remove the colons from the WWNs. Save the result in file ‘vdisk_wwn_only.txt’.
o Create a command line for symconfigure that creates encapsulated external disks. Save it in file ‘CMD.txt’.
o Execute the command file from a symconfigure CLI command.
#!/bin/bash
set -x
# Get vdisk WWNs from DDS
##########################
# DDS output looks like this:
# Device       Device-group  Pool  Capacity  WWNN
#                                  (MiB)
# ----------   ------------  ----  --------  -----------------------------------------------
# vdisk-dev0   OLTP          ERP   10241     60:02:18:80:00:08:a0:24:19:05:48:90:7a:d0:00:00
# vdisk-dev1   OLTP          ERP   10241     60:02:18:80:00:08:a0:24:19:05:48:90:7a:d0:00:01
# vdisk-dev2   OLTP          ERP   10241     60:02:18:80:00:08:a0:24:19:05:48:90:7a:d0:00:02
# ...
ssh sysadmin@DDS "vdisk device show list pool ERP" | grep vdisk > ./vdisk_wwn.txt

# Leave only WWNs and remove the colon
######################################
rm -f ./vdisk_wwn_only.txt
while read line; do
    stringarray=($line)
    echo ${stringarray[4]} | sed 's/[\:_-]//g' >> ./vdisk_wwn_only.txt
done < ./vdisk_wwn.txt
# Create a symconfigure command file
####################################
rm -f ./CMD.txt
while read line; do
    CMD="add external_disk wwn=$line, encapsulate_data=yes;"
    echo $CMD >> ./CMD.txt
done < ./vdisk_wwn_only.txt
# Execute the command file
##########################
symconfigure -sid 531 -nop -v -file ./CMD.txt commit
• To list encapsulated devices:
[root@dsib1141 scripts]# symdev list -encapsulated -gb
[root@dsib1141 scripts]# symdev list -encapsulated -wwn_encapsulated
• Now that the vdisks are encapsulated, build the storage groups for them:
symaccess -type storage -name bkup_redo_sg create devs 1B:1E
symaccess -type storage -name bkup_data_sg create devs 1F:22
symaccess -type storage -name bkup_fra_sg create devs 3B:3E
symaccess -type storage -name bkup_database_sg create sg bkup_data_sg,bkup_redo_sg
symaccess -type storage -name rstr_redo_sg create devs 23:26
symaccess -type storage -name rstr_data_sg create devs 27:2A
symaccess -type storage -name rstr_fra_sg create devs 3F:42

Set up initial SnapVX sessions
• To create the initial snapshot, use the SnapVX establish command. This example uses two Storage Groups (SGs) for the SnapVX sessions between the native VMAX3 devices and the Data Domain encapsulated backup devices: prod_database_sg (which includes all data, log, and control files), and prod_fra_sg (which includes the archive logs). Use the following scripts to create the initial snapshots.
Script note: The ‘se_snap_create.sh’ first parameter specifies on which devices to operate: Production (prod) or Restore (rstr). The second parameter indicates whether to snap data (just +DATA), database (+DATA and +REDO), or archive logs (+FRA).
[root@dsib1141 ~]# ./se_snap_create.sh prod database
[root@dsib1141 ~]# ./se_snap_create.sh prod fra
• Use the SnapVX link-copy command to copy the snapshot data to the Data Domain encapsulated backup devices. This example uses the SGs bkup_database_sg and bkup_fra_sg as the target SGs for the copy.
Note: The first link-copy is a full copy. Subsequent links will only send changed data to the encapsulated backup devices. Use the following scripts to perform it.
Script note: The ‘se_snap_link.sh’ first parameter specifies on which devices to operate: Production (prod) or Restore (rstr). The second parameter indicates whether to link-copy data (just +DATA), database (+DATA and +REDO), or archive logs (+FRA).
Script note: The ‘se_snap_verify.sh’ first parameter specifies on which devices to operate: Production (prod) or Restore (rstr). The second parameter indicates whether to verify data (just +DATA), database (+DATA and +REDO), or archive logs (+FRA). The third uses ‘0’ to indicate this is the initial link (i.e. the initial snapshot created by the admin user) or ‘1’ for monitoring later snapshots created by ProtectPoint commands.

[root@dsib1141 ~]# ./se_snap_link.sh prod database
[root@dsib1141 ~]# ./se_snap_verify.sh prod database 0    <- monitor the copy progress
[root@dsib1141 ~]# ./se_snap_link.sh prod fra
[root@dsib1141 ~]# ./se_snap_verify.sh prod fra 0         <- monitor the copy progress
• When monitoring a link-copy to Data Domain, wait until the copy has changed to ‘D’ (destaged) state, which means all write-pending tracks from the VMAX3 cache were sent to the Data Domain backend devices.
[root@dsib1141 scripts]# symsnapvx -sg prod_fra_sg -snapshot_name prod_fra list -linked -detail -i 15

Storage Group (SG) Name   : prod_fra_sg
SG's Symmetrix ID         : 000196700531 (Microcode Version: 5977)
-----------------------------------------------------------------------------------------------
Sym                                  Link  Flgs                          Remaining  Done
Dev    Snapshot Name      Gen  Dev   FCMD  Snapshot Timestamp            (Tracks)   (%)
-----  ----------------   ---  ----  ----  ------------------------      ---------  ----
00037  prod_fra           0    003B  .D.X  Sun Mar 22 21:13:04 2015      0          100
00038  prod_fra           0    003C  .D.X  Sun Mar 22 21:13:04 2015      0          100
00039  prod_fra           0    003D  .D.X  Sun Mar 22 21:13:04 2015      0          100
0003A  prod_fra           0    003E  .D.X  Sun Mar 22 21:13:04 2015      0          100
                                                                         ----------
                                                                         0
Flgs:
(F)ailed    : F = Force Failed, X = Failed, . = No Failure
(C)opy      : I = CopyInProg, C = Copied, D = Copied/Destaged, . = NoCopy Link
(M)odified  : X = Modified Target Data, . = Not Modified
(D)efined   : X = All Tracks Defined, . = Define in progress

Set up ProtectPoint File System Agent software
• Copy the ProtectPoint File System Agent v1.0 software to the Management host where Solutions Enabler is installed, then untar the kit and run the installation.
[root@dsib1141 ProtectPoint]# tar xvf protectpoint-1.0.0.1-linux-x86-64.tar
[root@dsib1141 ProtectPoint]# ./protectpoint_install.sh –install
...
Provide information to generate config file
Application name : OLTP_database
Application version : 12c
Application information : Oracle 12c OLTP database files
Please manually edit /opt/emc/protectpoint-1.0.0.1/config/protectpoint.config file for remaining configuration
Installation complete
• Update the user PATH to include the location of the software. For example: /opt/emc/protectpoint-1.0.0.1/bin
• Update /etc/hosts to include IPv6 and localhost.
[root@dsib1141 config]# cat /etc/hosts
127.0.0.1 localhost.localdomain localhost
::1       localhost6.localdomain6 localhost6
• Update or create the ProtectPoint configuration files. Each backup session (database or fra in this case) will have its own ProtectPoint configuration file with its unique devices. While working with the configuration file is cumbersome, once it is set it can be reused with every backup without a change.
Note: Remember to update the configuration file if the Oracle database or FRA devices change!
Note: To simplify updating the configuration file(s), always refer to the configuration in Table 2 (and make sure to keep it up-to-date).

• During the ProtectPoint installation the first ProtectPoint configuration file is created. Find it in the installation base directory under ./config/. The following is an example of an updated configuration file for FRA.

FRA configuration file: PP_fra.config:

######################################################################
# this is just template file
# Indentation just made for readability
######################################################################
[GENERAL]
    # APP_NAME is optional
    APP_NAME = "OLTP_database"
    # APP_VERSION is optional
    APP_VERSION = "12c"
    # APP_INFO is optional
    APP_INFO = "Oracle 12c OLTP database files"
    # give absolute path of base directory where catalog, log & lockbox
    # files should be generated by default
    BASE_DIR = "/opt/emc/protectpoint-1.0.0.1"
    # CATALOG_DIR is optional, default value is ${[GENERAL].BASE_DIR}/catalog
    # CATALOG_DIR = <catalog dir>
    # LOCKBOX_DIR is optional, default value is ${[GENERAL].BASE_DIR}/lockbox
    # LOCKBOX_DIR = <RSA Lock Box dir>
    # LOG_DIR is optional, default value is ${[GENERAL].BASE_DIR}/log
    # LOG_DIR = <log dir>
    # LOGLEVEL is optional, default value is 2.
    # 2: error + warning, 3: error + warning + info,
    # 4: error + warning + info + debug
    # LOGLEVEL = <Log Level 2, 3 or 4>
    # LOGFILE_SIZE is optional, default value is 4 MB
    # LOGFILE_SIZE = <Log file size in MB>
    # LOGFILE_COUNT is optional, by default 16 files will be kept
    # LOGFILE_COUNT = <Number of log files>

##################### Primary System #################################
# VMAX Devices will be backed up to this System
[PRIMARY_SYSTEM]
    # VMAX Devices will be backed up to this DD System
    # DD_SYSTEM = <host/IP>
    DD_SYSTEM = 10.108.244.18
    # DD_PORT is optional, default value is 3009
    # DD_PORT = <Port number to connect DD System>
    # The Data Domain user - owner of the DD_POOL
    # DD_USER = <user>
    DD_USER = sysadmin
    # DD_POOL is optional, used just for validation that all devices
    # belong to this pool
    # DD_POOL = <pool name>
    # DD_DEVICE_GROUP is optional, used just for validation that all
    # devices belong to this device group
    # DD_DEVICE_GROUP = <device group name>
    # System ID of the VMAX system with production devices
    # SYMID = <VMAX SymID>
    SYMID = 000196700531

########### Primary Devices on Primary System ########################
# All section names starting with PRIMARY_DEVICE_ will be backed up on
# Primary DD i.e. [PRIMARY_SYSTEM]
[PRIMARY_DEVICE_1]
    # SRC_SYMID is optional, default value is ${[PRIMARY_SYSTEM].SYMID}
    # SRC_SYMID = <SymID for Source VMAX Device>
    SRC_SYMDEVID = 00037
    # this is optional, default is ${[PRIMARY_SYSTEM].SYMID}
    # FTS_SYMID = <SymID for FTS encapsulated DD Device>
    FTS_SYMDEVID = 0003B
    # WWN of the DD VDISK device for backup
    DD_WWN = 600218800008A024190548907AD00010

[PRIMARY_DEVICE_2]
    # SRC_SYMID is optional, default value is ${[PRIMARY_SYSTEM].SYMID}
    # SRC_SYMID = <SymID for Source VMAX Device>
    SRC_SYMDEVID = 00038
    # this is optional, default is ${[PRIMARY_SYSTEM].SYMID}
    # FTS_SYMID = <SymID for FTS encapsulated DD Device>
    FTS_SYMDEVID = 0003C
    # WWN of the DD VDISK device for backup
    DD_WWN = 600218800008A024190548907AD00011

[PRIMARY_DEVICE_3]
    # SRC_SYMID is optional, default value is ${[PRIMARY_SYSTEM].SYMID}
    # SRC_SYMID = <SymID for Source VMAX Device>
    SRC_SYMDEVID = 00039
    # this is optional, default is ${[PRIMARY_SYSTEM].SYMID}
    # FTS_SYMID = <SymID for FTS encapsulated DD Device>
    FTS_SYMDEVID = 0003D
    # WWN of the DD VDISK device for backup
    DD_WWN = 600218800008A024190548907AD00012

[PRIMARY_DEVICE_4]
    # SRC_SYMID is optional, default value is ${[PRIMARY_SYSTEM].SYMID}
    # SRC_SYMID = <SymID for Source VMAX Device>
    SRC_SYMDEVID = 0003A
    # this is optional, default is ${[PRIMARY_SYSTEM].SYMID}
    # FTS_SYMID = <SymID for FTS encapsulated DD Device>
    FTS_SYMDEVID = 0003E
    # WWN of the DD VDISK device for backup
    DD_WWN = 600218800008A024190548907AD00013

######################################################################
############### Restore Devices on Primary System ####################
# All section names starting with PRIMARY_SYSTEM_RESTORE_DEVICE will be
# used to restore on Primary DD i.e. [PRIMARY_SYSTEM]
# Total number of restore devices should be greater than or equal to
# number of static images in backup & should have exact geometry as
# static image in backup
[PRIMARY_SYSTEM_RESTORE_DEVICE_1]
    # FTS_SYMID is optional, default is ${[PRIMARY_SYSTEM].SYMID}
    # FTS_SYMID = <SymID for FTS encapsulated DD Device>
    FTS_SYMDEVID = 03F
    # WWN of the DD VDISK device for Restore
    DD_WWN = 600218800008A024190548907AD00014

[PRIMARY_SYSTEM_RESTORE_DEVICE_2]
    # FTS_SYMID is optional, default is ${[PRIMARY_SYSTEM].SYMID}
    # FTS_SYMID = <SymID for FTS encapsulated DD Device>
    FTS_SYMDEVID = 040
    # WWN of the DD VDISK device for Restore
    DD_WWN = 600218800008A024190548907AD00015

[PRIMARY_SYSTEM_RESTORE_DEVICE_3]
    # FTS_SYMID is optional, default is ${[PRIMARY_SYSTEM].SYMID}
    # FTS_SYMID = <SymID for FTS encapsulated DD Device>
    FTS_SYMDEVID = 041
    # WWN of the DD VDISK device for Restore
    DD_WWN = 600218800008A024190548907AD00016

[PRIMARY_SYSTEM_RESTORE_DEVICE_4]
    # FTS_SYMID is optional, default is ${[PRIMARY_SYSTEM].SYMID}
    # FTS_SYMID = <SymID for FTS encapsulated DD Device>
    FTS_SYMDEVID = 042
    # WWN of the DD VDISK device for Restore
    DD_WWN = 600218800008A024190548907AD00017

######################################################################
################## Secondary System ##################################
# Backup will be replicated/copied from Primary DD i.e.
# [PRIMARY_SYSTEM] to Secondary System [SECONDARY_SYSTEM]
# [SECONDARY_SYSTEM]
#     # Hostname/IP of the DD system for DD Replication
#     # DD_SYSTEM = <host/IP>
#     # DD_PORT is optional, default value is 3009
#     # DD_PORT = <Port number to connect DD System>
#     # The Data Domain user - owner of the DD_POOL
#     # DD_USER = <user>
#     # DD vdisk pool on the remote DD system
#     # DD_POOL = <pool name>
#     # DD device-group where the replicated images will be available
#     # DD_DEVICE_GROUP = <device group name>
#     # <SymID for FTS encapsulated Restore DD Device>
#     # this is optional if no restore device or FTS_SYMID mentioned in
#     # each restore device
#     # SYMID = <VMAX SymID>

########### Restore Devices on Secondary System ######################
# All section names starting with SECONDARY_SYSTEM_RESTORE_DEVICE will
# be used to restore on Secondary DD i.e. [SECONDARY_SYSTEM]
# Total number of restore devices should be greater than or equal to
# number of static images in backup & should have exact geometry as
# static image in backup
# [SECONDARY_SYSTEM_RESTORE_DEVICE_1]
#     # FTS_SYMID is optional, default is ${[SECONDARY_SYSTEM].SYMID}
#     # FTS_SYMID = <SymID for FTS encapsulated Restore DD Device>
#     # FTS_SYMDEVID = <SymDevID for FTS encapsulated Restore DD Device>
#     # WWN of the DD VDISK device for Restore
#     # DD_WWN = <WWN for Restore DD Device>
# [SECONDARY_SYSTEM_RESTORE_DEVICE_2]
#     # FTS_SYMID is optional, default is ${[SECONDARY_SYSTEM].SYMID}
#     # FTS_SYMID = <SymID for FTS encapsulated Restore DD Device>
#     # FTS_SYMDEVID = <SymDevID for FTS encapsulated Restore DD Device>
#     # WWN of the DD VDISK device for Restore
#     # DD_WWN = <WWN for Restore DD Device>
######################################################################

Note: ProtectPoint does not compare the content of the SnapVX sessions (or storage groups) with the devices listed in the configuration file. ProtectPoint will only operate on the devices that appear in the configuration file.

• Validate the configuration file using the ProtectPoint command: config validate

Note: Validate the configuration file only after SSH credentials are established, the configuration file is updated with the devices information, and the initial SnapVX sessions are created and linked.

[root@dsib1141 config]# protectpoint config validate config-file /opt/emc/protectpoint-1.0.0.1/config/PP_fra.config
Validating host requirements...........................[OK]
Validating Primary System:
  Connection Information...............................[OK]
  Backup Devices are in same Data Domain Device Group..[OK]
  Backup Devices are unique............................[OK]
  Backup Device's VMAX & DD Device Configuration.......[OK]
  Restore Devices are in same Data Domain Device Group.[OK]
  Restore Devices are unique...........................[OK]
  Restore Device's VMAX & DD Device Configuration......[OK]
  Replication License..................................[N/A]
  Replication Device Group.............................[N/A]
Validating Secondary System:
  Connection Information...............................[N/A]
  Restore Devices are in same Data Domain Device Group.[N/A]
  Restore Devices are unique...........................[N/A]
  Restore Device's VMAX & DD Device Configuration......[N/A]
  Replication License..................................[N/A]
Validating Primary and Secondary System are different..[N/A]
Configuration is valid.
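The database configuration file (PP_database.config) follows the same pattern, with one PRIMARY_DEVICE_ section per production database device and one PRIMARY_SYSTEM_RESTORE_DEVICE_ section per restore device. The following hypothetical excerpt assumes the device pairing implied by the storage groups defined earlier (production database devices 00013:0001A backed by encapsulated devices 0001B:00022); the DD_WWN value must come from the actual vdisk listing:

[PRIMARY_DEVICE_1]
    # first production +REDO device paired with its encapsulated backup device
    SRC_SYMDEVID = 00013
    FTS_SYMDEVID = 0001B
    # WWN of the matching DD VDISK device (from 'vdisk device show list')
    DD_WWN = <WWN of the matching DD vdisk>

Since each backup session has its own configuration file, both files can then be validated in one pass. A minimal sketch, assuming the file names and installation path used in this paper:

for t in database fra; do
    protectpoint config validate config-file /opt/emc/protectpoint-1.0.0.1/config/PP_${t}.config
done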

APPENDIX II – SAMPLE CLI COMMANDS: SNAPVX, PROTECTPOINT, DATA DOMAIN

This section includes:
1) Basic TimeFinder SnapVX operations and commands
2) Basic ProtectPoint File System Agent commands
3) Basic Data Domain commands

Basic TimeFinder SnapVX operations

The following are basic SnapVX operations that can be executed via Unisphere or the Solutions Enabler command line (CLI). The following examples use storage group (-sg) syntax.

Establish: Snapshots are taken using the establish command. Example:
# symsnapvx -sg prod_data_sg -name prod_data establish

List: Lists the available snapshots. The output can show all snapshots or be limited to a specific storage group. Based on the options, the list can include only source devices and their snapshots, restored snapshots, or also linked targets. Based on the options used, the output can also show how much storage is consumed by the snapshots, and more. For example:
# symsnapvx -sg prod_data_sg list

Restore: Restores a specific snapshot to its source devices. The snapshot name and optionally the generation number are provided. For example:
# symsnapvx -sg prod_data_sg -snapshot_name prod_data restore

Link / Relink / Unlink: SnapVX link makes the snapshot point-in-time data available to another set of host-addressable devices. The group of target devices can be specified using a storage group as well, pointed to by the option '-lnsg' (linked storage group). To perform an incremental refresh of linked-target devices, use the 'relink' option. To remove a linked-target relationship, use the 'unlink' option. See the previous section regarding the use of '-copy' during the link operation. For example:
# symsnapvx -sg prod_data_sg -lnsg test_data_sg -snapshot_name prod_data link -copy -exact

Verify: Verifies that a SnapVX operation completed or is in a certain state. For example, verify can be used to determine whether a background copy operation was fully completed AND destaged from VMAX persistent cache. This is especially useful when FTS is used, to make sure that all the data was copied to the external storage. For example:
# symsnapvx -sg prod_data_sg -name prod_data verify -copied -destaged

Terminate / Terminate-restored: Terminates a snapshot, or terminates a restored state for a snapshot (allowing another one to be restored to the source devices). For example:
# symsnapvx -sg prod_data_sg -snapshot_name prod_data terminate

Basic ProtectPoint File System Agent Commands

ProtectPoint File System Agent uses a Command Line Interface (CLI) to perform the following main functions:

• Add/remove ssh credentials: Establishes or removes ssh credentials between the host and the Data Domain system.
Syntax: protectpoint security add dd-credentials [dd-system {primary | secondary}] [config-file <file-path>]

• Update Data Domain catalog: Creates or refreshes the backup catalog on the primary or secondary Data Domain system.
Syntax: protectpoint catalog update [dd-system {primary | secondary}] [config-file <file-path>]
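These two commands are typically run right after installation and before the first backup; for example (shown only as a sketch, using the FRA configuration file created earlier; the same commands would be repeated for the database file):

[root@dsib1141 config]# protectpoint security add dd-credentials config-file /opt/emc/protectpoint-1.0.0.1/config/PP_fra.config
[root@dsib1141 config]# protectpoint catalog update config-file /opt/emc/protectpoint-1.0.0.1/config/PP_fra.config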

• Snapshot create: Creates a SnapVX snapshot of the database. If Oracle Hot-Backup mode is required (database releases older than 12c), begin hot-backup before the snapshot and end it after the snapshot is taken.
Syntax: protectpoint snapshot create

• Backup create: After a snapshot of the database was taken, as shown in the previous step, and hot-backup mode ended (if it was used for database releases lower than 12c), backup create performs the following operations: it relinks the snapshot to the Data Domain encapsulated backup devices, waits for the copy to be fully done ('destaged'), creates a new static-image in the Data Domain system for the copied data, and assigns it the appropriate metadata. At the end of this step a new backup-set exists in Data Domain.
Syntax: protectpoint backup create description "<backup-description>" [config-file <file-path>]

• Backup delete: Deletes a backup.
Syntax: protectpoint backup delete backup-id <backup-id> [dd-system {primary | secondary}] [config-file <file-path>]

• Backup show: Lists available backups based on different criteria.
Syntax: protectpoint backup show list [dd-system {primary | secondary}] [{last <n> {count | days | weeks | months}} | {from <MMDDhhmm> [[<CC>]<YY>] [to <MMDDhhmm> [[<CC>]<YY>]]}] [status {complete | in-progress | failed | partial}] [config-file <file-path>]

• Restore: Places a backup-set's data on the encapsulated restore devices.
Syntax: protectpoint restore prepare backup-id <backup-id> [dd-system {primary | secondary}] [config-file <file-path>]

• Manage replications: Replicates data from one Data Domain system to another (primary or secondary), views replication status and history, or stops the replication.

Note: Data Domain OS 5.5 only allows ProtectPoint to remote-replicate a single backup-id (a backup-set) at a time; only one active replication session can exist for a backup-set.

Syntax:
o protectpoint replication run backup-id <backup-id> [source-dd-system {primary | secondary}] [config-file <file-path>]
o protectpoint replication show list [source-dd-system {primary | secondary}] [{last <n> {count | days | weeks | months}} | {from <MMDDhhmm> [[<CC>]<YY>] [to <MMDDhhmm> [[<CC>]<YY>]]}] [config-file <file-path>]
o protectpoint replication abort [source-dd-system {primary | secondary}] [config-file <file-path>]

• Validate ProtectPoint configuration file: Provides information and validation of the ProtectPoint configuration file.
Syntax: protectpoint config validate [config-file <file-path>]

A short end-to-end example combining these commands is sketched below.
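Putting the main commands together, one complete backup cycle driven from the ProtectPoint CLI might look like the following sketch (the config-file path assumes the installation directory used in this paper; for pre-12c databases, hot-backup mode would bracket the snapshot step):

CONF=/opt/emc/protectpoint-1.0.0.1/config/PP_database.config
protectpoint snapshot create config-file $CONF                                       # point-in-time SnapVX snapshot
protectpoint backup create description "nightly database backup" config-file $CONF  # relink, destage, create the static-image
protectpoint backup show list config-file $CONF                                      # confirm the new backup-set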

Basic Data Domain Commands

The Data Domain system can be managed by using a comprehensive set of CLIs or by using the Data Domain System Manager graphical user interface (GUI). The full description of managing a Data Domain system is beyond the scope of this white paper; however, a list of relevant CLI commands is provided in the Setup Data Domain system section. For more information on Data Domain command lines, refer to the EMC Data Domain Operating System Command Reference Guide.

Note: The default management user for the Data Domain system is 'sysadmin'. While other users with lower permissions can be configured, the 'sysadmin' user is used in this white paper.

Note: The following examples execute Data Domain commands by using an ssh remote login and execution of commands. The syntax is 'ssh sysadmin@DDS <command>', where sysadmin is the Data Domain admin user and DDS is a notation of the Data Domain system IP address set in the /etc/hosts file (i.e., 'DDS' can be replaced by the IP address or a different name). Some of the examples are shown as executed by the sysadmin user directly on the Data Domain system, and are not preceded by 'ssh'.

APPENDIX III – PROVIDING SOLUTIONS ENABLER ACCESS TO NON-ROOT USERS

The following appendix describes how to provide DBAs controlled access to Solutions Enabler so they can perform their TimeFinder replication, backup, and recovery procedures without needing root access. The feature is called Solutions Enabler Array Based Access Controls and can be configured from Unisphere or Solutions Enabler.

The components of Array Based Access Controls are:
• Access Groups: These groups contain the unique Host ID and descriptive Host Name of the non-root users. The host ID is provided by running the 'symacl -unique' command on the appropriate host.
• Access Pools: Specify the set of devices for operations.
• Access Control Entry (ACE): Entries in the Access Control Database specifying the permission level for the Access Control Groups and on which pools they can operate.

Array Based Access Control commands are executed using 'symacl -sid <SID> -file <filename> preview | prepare | commit', where preview verifies the syntax, prepare runs preview and checks if the execution is possible, and commit performs the prepare operations and executes the command. The Storage Admin PIN can be set in an environment variable, SYMCLI_ACCESS_PIN, or entered manually.

Install Solutions Enabler for non-root user

• On the Application Management host, install Solutions Enabler for the Oracle user. The installation has to be performed as the root user, though the option for allowing a non-root user is part of the installation.

[root@dsib1136 SE]# ./se8020_install.sh -install
...
Install root directory of previous Installation : /home/oracle/SE
Working root directory [/usr/emc] : /home/oracle/SE
...
Do you want to run these daemons as a non-root user? [N]:Y
Please enter the user name : oracle
...
#-----------------------------------------------------------------------------
# The following HAS BEEN INSTALLED in /home/oracle/SE via the rpm utility.
#-----------------------------------------------------------------------------
ITEM  PRODUCT                   VERSION
01    EMC Solutions Enabler     V8.0.2.0 RT KIT
#-----------------------------------------------------------------------------

• To allow the Oracle user to run symcfg discover and list commands, permission is required to use the Solutions Enabler daemons. Update the daemon_users file:

[root@dsib1136 ~]# cd /var/symapi/config/
[root@dsib1136 config]# vi daemon_users
# Add entry to allow user access to base daemon
oracle storapid
oracle storgnsd

• Test Oracle user access:

[root@dsib1136 config]# su - oracle
[oracle@dsib1136 ~]$ symcfg disc
[oracle@dsib1136 ~]$ sympd list -gb

Set Management Host in Access Controls Database

• EMC support personnel will run a wizard on SymmWin. First, they enter the Storage Admin management host unique ID, then the Admin user and PIN (password). After that, Access Controls can be created from the Storage Management host. The unique ID is provided by running 'symacl -unique' on the Storage Management host. The Storage Admin should provide the DBAs with the PIN and unique ID.

Create Array Based Access Controls for ProtectPoint Management host

• Create the Application Access Control Group:

[root@dsib1100 ~]# echo "create accgroup protectpoint;" > acl_pp_create_accgrp.cmd
[root@dsib1100 ~]# symacl commit -file ./acl_pp_create_accgrp.cmd

• On the Application Management host (where the DBA executes backup and recovery operations), get the unique host ID:

[root@dsib1141 ~]# symacl -sid 531 -unique
The unique id for this host is: 2F5A05AC-50498CC9-9C38777E

• Add the Application Management host to the Application Access Control group:

[root@dsib1100 ~]# echo "add host accid 2F5A05AC-50498CC9-9C38777E name protectpoint_mgmt to accgroup protectpoint;" > acl_pp_add_host.cmd
[root@dsib1100 ~]# symacl commit -file ./acl_pp_add_host.cmd

• Create the Application Access Control pool:

[root@dsib1100 ~]# echo "create accpool protectpoint;" > acl_pp_create_pool.cmd
[root@dsib1100 ~]# symacl commit -file ./acl_pp_create_pool.cmd

• Add the Application storage devices to the pool (including target devices):

[root@dsib1100 ~]# echo "add dev 13:2A to accpool protectpoint;" > acl_pp_add_devs.cmd
[root@dsib1100 ~]# echo "add dev 3B:42 to accpool protectpoint;" >> acl_pp_add_devs.cmd
[root@dsib1100 ~]# symacl commit -file ./acl_pp_add_devs.cmd

• Have the Oracle user try to run the command from the ProtectPoint Management host prior to granting access:

[oracle@dsib1141 ~]$ sympd list
Symmetrix ID: 000196700531
Symmetrix access control denied the request

• Grant permissions to the Application Access Group (choose appropriately based on documentation):

[root@dsib1100 ~]# echo "grant access=BASE,SNAP,BASECTRL to accgroup protectpoint for accpool protectpoint;" > acl_pp_grant_access.cmd
[root@dsib1100 ~]# symacl commit -file ./acl_pp_grant_access.cmd

• Have the Oracle user run the command again after access was granted:

[oracle@dsib1141 ~]$ sympd list
Symmetrix ID: 000196700531

        Device Name           Dir                 Device
---------------------------- ------- -------------------------------------
                                                                     Cap
Physical              Sym  SA :P     Config    Attribute    Sts     (MB)
---------------------------- ------- -------------------------------------

• Review the created Access Controls:

[root@dsib1100 ~]# symacl list -accgroup
[root@dsib1100 ~]# symacl list -accpool
[root@dsib1100 ~]# symacl list -acl
[root@dsib1100 ~]# symacl show accpool protectpoint
[root@dsib1100 ~]# symacl show accgroup protectpoint
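Because every change above follows the same echo-to-file-then-commit pattern, it can be wrapped in a small helper that also previews each command file before committing it, per the 'symacl -sid <SID> -file <filename> preview | prepare | commit' syntax described earlier. A sketch; the function name and temp-file path are arbitrary:

# usage: acl_run "create accpool protectpoint;"
acl_run() {
    echo "$1" > /tmp/acl.cmd                      # write the access-control command
    symacl -sid 531 -file /tmp/acl.cmd preview && \
    symacl -sid 531 -file /tmp/acl.cmd commit     # commit only if the syntax previews cleanly
}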

then echo "options: 1) database|fra 2) backup-descriptsion" exit fi OPT=$1 DESC="$2" case $OPT in database|fra) PIT_DATE=`.sh #!/bin/bash set -x if [ "$#" -ne 2 ].. Note that it does not matter which control file is used. [root@dsib1141 scripts]# cat pp_backup.sh’ – Performs a SnapVX link-copy to Data Domain.. performs switch log files. then echo "options: 1) database|fra 2) backup-id" exit fi OPT=$1 case $OPT in database|fra) CONF=$PP_CONF_LOC/PP_${OPT}.sh’ – Deletes a ProtectPoint backup-id.config protectpoint backup delete backup-id $2 config-file $CONF .sh $OPT` # get the snapshot time DESC_WITH_DATE="$OPT $DESC $PIT_DATE" CONF=$PP_CONF_LOC/PP_${OPT}.. If Oracle database is on another host.sh’ – Logs in to Oracle. [root@dsib1141 scripts]# more ora_switchandarchive. Esac o ‘pp_delete_backup. It uses for parameters a ProtectPoint control file type (‘database’ or ‘fra’) and a backup-id.sh #!/bin/bash set -x if [ "$#" -ne 2 ]. alter system archive log current. *) echo "options: 1) database|fra 2) backup-id" 45 . ssh first to the host. then creates a new backup-set with a description.config protectpoint backup create description "$DESC_WITH_DATE" config-file $CONF & PID=$! echo "protectpoint backup create is running in the background with PID: $PID and description: $DESC_WITH_DATE" ./se_prod_snaptime.sh #!/bin/bash set -x su . The scripts add to the description the source devices for the backup (‘database’ or ‘fra’) as well as the SnapVX snapshot (which is when the database backup was created as a snapshot).oracle << ! set -x sqlplus "/ as sysdba" << EOF alter system switch logfile. *) echo "options: 1) database|fra 2) backup-descriptsion" exit . [root@dsib1141 scripts]# cat pp_delete_backup. and archives the current log(s). ALTER DATABASE BACKUP CONTROLFILE TO '+FRA/CTRL.BCK' REUSE.APPENDIX IV – SCRIPTS USED IN THE USE CASES Oracle scripts o ‘ora_switchandarchive. EOF ! ProtectPoint scripts o ‘pp_backup.

o 'pp_list_backup.sh' – Lists the ProtectPoint backup-sets.

[root@dsib1141 scripts]# cat pp_list_backup.sh
#!/bin/bash
#set -x
if [ "$#" -ne 1 ]; then
    echo "options: database|fra"
    exit
fi
OPT=$1
case $OPT in
database|fra)
    CONF=$PP_CONF_LOC/PP_${OPT}.config
    protectpoint backup show list config-file $CONF
    ;;
*)
    echo "options: database|fra"
    exit
    ;;
esac

o 'pp_restore.sh' – Places a Data Domain backup-set (using a ProtectPoint backup-id) on the appropriate restore devices.

[root@dsib1141 scripts]# cat pp_restore.sh
#!/bin/bash
set -x
if [ "$#" -ne 2 ]; then
    echo "options: 1) database|fra 2) backup-id"
    exit
fi
OPT=$1
case $OPT in
database|fra)
    CONF=$PP_CONF_LOC/PP_${OPT}.config
    protectpoint restore prepare backup-id $2 config-file $CONF
    ;;
*)
    echo "options: 1) database|fra 2) backup-id"
    exit
    ;;
esac

o 'pp_snap.sh' – Creates a new SnapVX snapshot for the Production devices: 'database' or 'fra'.

[root@dsib1141 scripts]# cat pp_snap.sh
#!/bin/bash
set -x
if [ "$#" -ne 1 ]; then
    echo "options: database|fra"
    exit
fi
OPT=$1
case $OPT in
database|fra)
    CONF=$PP_CONF_LOC/PP_${OPT}.config
    protectpoint snapshot create config-file $CONF
    echo "Backup time: "; date
    ;;
*)
    echo "options: database|fra"
    exit
    ;;
esac
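Taken together, these scripts allow a complete backup cycle to be driven from one place. The following is a hypothetical nightly wrapper, shown only as a sketch (the ordering, snapshotting the database first and then forcing a log switch so the FRA backup contains the latest archive logs, is an assumption based on the backup use case described earlier):

#!/bin/bash
# Hypothetical nightly wrapper around the paper's scripts - a sketch, not part of the use cases.
./pp_snap.sh database                            # SnapVX snapshot of +DATA/+REDO
./pp_backup.sh database "nightly $(date +%F)"    # relink, destage, create the backup-set
./ora_switchandarchive.sh                        # switch and archive the current logs
./pp_snap.sh fra                                 # SnapVX snapshot of +FRA (archive logs)
./pp_backup.sh fra "nightly $(date +%F)"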

then echo "options: <sg_name> operation: show|ready|not_ready" exit fi SG_NAME=$1 OP=$2 if [ $OP == "show" ]./vdisk_wwn_only./vdisk_wwn. then symdev list -sg ${SG_NAME} exit fi if [ $OP == "ready" ] || [ $OP == "not_ready" ]. [root@dsib1141 scripts]# cat se_devs.sh’ – Encapsulates with VMAX3 backup and restore devices.txt # Remove irrelevant lines and remove colon from WWNs #################################################### rm -f ./CMD. -----------. [root@dsib1141 scripts]# cat se_prod_snaptime. ---. [root@dsib1141 scripts]# cat se_encapsulate. do CMD="add external_disk wwn=$line.sh #!/bin/bash set -x # Get vdisk WWN's from DDS ########################## # DDS output looks like this: # Device Device-group Pool Capacity WWNN # (MiB) # ----------. devices: database (+DATA and +REDO)./vdisk_wwn_only. -------. ----------------------------------------------- # vdisk-dev0 OLTP ERP 10241 60:02:18:80:00:08:a0:24:19:05:48:90:7a:d0:00:00 # vdisk-dev1 OLTP ERP 10241 60:02:18:80:00:08:a0:24:19:05:48:90:7a:d0:00:01 # vdisk-dev2 OLTP ERP 10241 60:02:18:80:00:08:a0:24:19:05:48:90:7a:d0:00:02 # .Solutions Enabler scripts: o ‘se_devs.$6.sh’ – Captures the timestamp of the last snapshot taken for database or fra.$8." echo $CMD >> .. then symsg -sg ${SG_NAME} $OP fi o ‘se_encapsulate./CMD. exit}')) MM=`date -d "${PIT[1]} 1" "+%m"` # Replace MMM with numeric MM PIT_DATE="(${PIT[0]}_${MM}_${PIT[2]}_${PIT[3]})" echo $PIT_DATE o ‘se_snap_create./CMD.txt done < .txt # Create and execute a symconfigure command file ################################################ rm -f . data (just +DATA)./vdisk_wwn.txt while read line.sh #!/bin/bash #set -x if [ $# -ne 1 ].sh’ script.sh #!/bin/bash #set -x if [ "$#" -ne 2 ]. do stringarray=($line) echo ${stringarray[4]} | sed 's/[\:_-]//g' >> .txt symconfigure -sid 531 -nop -v -file .txt commit o ‘se_prod_snaptime.sh’ – create a SnapVX snapshot for production (prod) or restore (rstr) storage group./vdisk_wwn_only.sh’ – Shows an SG device status and updates it to READY or NOT_READY. 47 .. or fra (+FRA). exit.txt done < . fi SG=$1 # Capture snapvx snapshot time into an array PIT=($(symsnapvx -sg prod_${SG}_sg list | awk '/NSM/ {print $9.$7. ssh sysadmin@DDS "vdisk device show list pool ERP" | grep vdisk > . encapsulate_data=yes. then echo "options: database|fra". It is used by ‘pp_backup.txt while read line.

o 'se_snap_create.sh' – Creates a SnapVX snapshot for the production (prod) or restore (rstr) storage groups.

[root@dsib1141 scripts]# cat se_snap_create.sh
#!/bin/bash
set -x
if [ "$#" -ne 2 ]; then
    echo "options: prod|rstr database|data|fra"
    exit
fi
DEV_ORIGIN=$1
FILE_TYPE=$2
if [ $DEV_ORIGIN != "prod" ] && [ $DEV_ORIGIN != "rstr" ]; then
    echo "specify production primary devices, or encapsulated restore devices"
    echo "options: prod|rstr database|data|fra"
    exit
fi
if [ $FILE_TYPE != "database" ] && [ $FILE_TYPE != "data" ] && [ $FILE_TYPE != "fra" ]; then
    echo "specify database or fra devices"
    echo "options: prod|rstr database|data|fra"
    exit
fi
symsnapvx -sg ${DEV_ORIGIN}_${FILE_TYPE}_sg -name ${DEV_ORIGIN}_${FILE_TYPE} establish -v
symsnapvx list

o 'se_snap_link.sh' – Similar to 'se_snap_create.sh', except that it performs a link-copy instead of an establish.

[root@dsib1141 scripts]# cat se_snap_link.sh
#!/bin/bash
set -x
if [ "$#" -ne 2 ]; then
    echo "options: prod|rstr database|data|fra"
    exit
fi
DEV_ORIGIN=$1
FILE_TYPE=$2
if [ $DEV_ORIGIN != "prod" ] && [ $DEV_ORIGIN != "rstr" ]; then
    echo "specify link source: production devices, or encapsulated restore devices"
    echo "options: prod|rstr database|data|fra"
    exit
fi
if [ $FILE_TYPE != "database" ] && [ $FILE_TYPE != "data" ] && [ $FILE_TYPE != "fra" ]; then
    echo "specify which devices to link: database, data or fra devices"
    echo "options: prod|rstr database|data|fra"
    exit
fi
if [ $DEV_ORIGIN == "prod" ]; then
    SRS_SG=prod_${FILE_TYPE}_sg
    TGT_SG=bkup_${FILE_TYPE}_sg
fi
if [ $DEV_ORIGIN == "rstr" ]; then
    SRS_SG=rstr_${FILE_TYPE}_sg
    TGT_SG=prod_${FILE_TYPE}_sg
fi
if [ $DEV_ORIGIN == "rstr" ] && [ $FILE_TYPE == "database" ]; then
    echo "This is a block to prevent overwriting the production redo logs unintentionally."
    exit
fi
SNAP_NAME=${DEV_ORIGIN}_${FILE_TYPE}
symsnapvx -sg ${SRS_SG} -lnsg ${TGT_SG} -snapshot_name ${SNAP_NAME} link -copy -exact
symsnapvx list

sh’ – Unlinks a snapshot (such as at the end of use case 4c). [root@dsib1141 scripts]# cat se_snap_terminate. then SG=rstr_${FILE_TYPE}_sg fi #SNAP_NAME=${DEV_ORIGIN}_${FILE_TYPE} symsnapvx -sg $SG list -linked -copied -detail -i 30 o ‘se_snap_terminate. then SG=prod_${FILE_TYPE}_sg fi if [ $DEV_ORIGIN == "rstr" ]. or encapsulated restore devices" echo "options: prod|rstr database|data|fra" exit fi if [ $FILE_TYPE != "database" ] && [ $FILE_TYPE != "data" ] && [ $FILE_TYPE != "fra" ]. data or fra devices" echo "options: prod|rstr database|data|fra" exit fi if [ $DEV_ORIGIN == "prod" ].sh’ – Terminates a snapshot. fi symsnapvx -sg ${DEV_ORIGIN}_${FILE_TYPE}_sg -snapshot_name ${DEV_ORIGIN}_${FILE_TYPE} terminate -v symsnapvx list o ‘se_snap_unlink. [root@dsib1141 scripts]# cat se_snap_unlink. then echo "specify which devices to link: database. remove if truely necessary. then exit # Blocked to prevent terminating Prod snap. then echo "specify database or fra devices" echo "options: prod|rstr database|data|fra" exit fi if [ $DEV_ORIGIN == "prod" ].sh #!/bin/bash set -x if [ "$#" -ne 2 ]. then echo "options: prod|rstr database|data|fra" exit fi DEV_ORIGIN=$1 FILE_TYPE=$2 if [ $DEV_ORIGIN != "prod" ] && [ $DEV_ORIGIN != "rstr" ] || [ $FILE_TYPE != "database" ] && [ $FILE_TYPE != "data" ] && [ $FILE_TYPE != "fra" ] then echo "options: prod|rstr database|data|fra" exit fi if [ $DEV_ORIGIN == "prod" ]. then SRS_SG=prod_${FILE_TYPE}_sg TGT_SG=bkup_${FILE_TYPE}_sg fi if [ $DEV_ORIGIN == "rstr" ]. then SRS_SG=rstr_${FILE_TYPE}_sg TGT_SG=prod_${FILE_TYPE}_sg fi SNAP_NAME=${DEV_ORIGIN}_${FILE_TYPE} symsnapvx -sg ${SRS_SG} -lnsg ${TGT_SG} -snapshot_name ${SNAP_NAME} unlink 49 . then echo "specify production primary devices.if [ $FILE_TYPE != "database" ] && [ $FILE_TYPE != "data" ] && [ $FILE_TYPE != "fra" ]. then echo "options: prod|rstr database|data|fra" exit fi DEV_ORIGIN=$1 FILE_TYPE=$2 if [ $DEV_ORIGIN != "prod" ] && [ $DEV_ORIGIN != "rstr" ].sh #!/bin/bash set -x if [ "$#" -ne 2 ].

o 'se_snap_verify.sh' – Waits until a snapshot link-copy is fully in the 'copied' or 'destaged' state, while showing progress.

[root@dsib1141 scripts]# cat ./se_snap_verify.sh
#!/bin/bash
set -x
if [ "$#" -ne 3 ]; then
    echo "options: prod|rstr database|data|fra NSM=0|1"
    exit
fi
DEV_ORIGIN=$1
FILE_TYPE=$2
NSM=$3
if [ $DEV_ORIGIN != "prod" ] && [ $DEV_ORIGIN != "rstr" ]; then
    echo "specify link source: production devices, or encapsulated restore devices"
    echo "options: prod|rstr database|data|fra NSM=0|1"
    exit
fi
if [ $FILE_TYPE != "database" ] && [ $FILE_TYPE != "data" ] && [ $FILE_TYPE != "fra" ]; then
    echo "specify which devices to link: database, data or fra devices"
    echo "options: prod|rstr database|data|fra NSM=0|1"
    exit
fi
if [ $NSM -eq 1 ]; then
    SNAPSHOT_NAME="NSM_SNAPVX"
else
    SNAPSHOT_NAME=${DEV_ORIGIN}_${FILE_TYPE}
fi
SG_NAME=${DEV_ORIGIN}_${FILE_TYPE}_sg
STAT=1
while [ $STAT -ne 0 ]
do
    symsnapvx -sg $SG_NAME -snapshot_name $SNAPSHOT_NAME list -detail -linked
    symsnapvx -sg $SG_NAME -snapshot_name $SNAPSHOT_NAME verify -copied -destaged
    STAT=$?
    if [ $STAT -ne 0 ]; then
        sleep 30
    fi
done
date

o 'se_aclx.sh' – Combines the different device masking commands for the system setup.

[root@dsib1141 scripts]# cat ./aclx.sh
#!/bin/bash
# To find HBA port WWNs run the following command:
# cat /sys/class/fc_host/host?/port_name
set -x
export SYMCLI_SID=000196700531
symaccess -type storage -name prod_gk create devs 2D:31
symaccess -type storage -name mount_gk create devs 32:36
symaccess -type storage -name backup_sg create devs 1B:22
symaccess -type storage -name prod_redo_sg create devs 13:16
symaccess -type storage -name prod_data_sg create devs 17:1A
symaccess -type storage -name prod_fra_sg create devs 37:3A
symaccess -type storage -name prod_database_sg create sg prod_redo_sg,prod_data_sg
symaccess -type storage -name bkup_redo_sg create devs 1B:1E
symaccess -type storage -name bkup_data_sg create devs 1F:22
symaccess -type storage -name bkup_database_sg create sg bkup_data_sg,bkup_redo_sg
symaccess -type storage -name bkup_fra_sg create devs 3B:3E
symaccess -type storage -name rstr_redo_sg create devs 23:26
symaccess -type storage -name rstr_data_sg create devs 27:2A
symaccess -type storage -name rstr_fra_sg create devs 3F:42
symaccess -type storage -name prod_sg create sg prod_gk,prod_redo_sg,prod_data_sg,prod_fra_sg
symaccess -type storage -name mount_sg create sg mount_gk
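When a backup is taken through ProtectPoint commands (rather than through the initial admin-created snapshots), the script monitors the NSM_SNAPVX snapshot name, so the third parameter of 'se_snap_verify.sh' is set to '1'. For example:

[root@dsib1141 scripts]# ./se_snap_verify.sh prod fra 1    <- monitor a link-copy triggered by a ProtectPoint backup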

symaccess -type initiator -name prod_ig create
symaccess -type initiator -name prod_ig add -wwn 21000024ff3de26e
symaccess -type initiator -name prod_ig add -wwn 21000024ff3de26f
symaccess -type initiator -name prod_ig add -wwn 21000024ff3de19c
symaccess -type initiator -name prod_ig add -wwn 21000024ff3de19d
symaccess -type initiator -name mount_ig create
symaccess -type initiator -name mount_ig add -wwn 21000024ff3de192
symaccess -type initiator -name mount_ig add -wwn 21000024ff3de193
symaccess -type initiator -name mount_ig add -wwn 21000024ff3de19a
symaccess -type initiator -name mount_ig add -wwn 21000024ff3de19b
symaccess -type port -name prod_pg create -dirport 1D:8,2D:8,3D:8,4D:8
symaccess -type port -name mount_pg create -dirport 1D:8,2D:8,3D:8,4D:8
symaccess view create -name mgmt_mv -sg mgmt_sg -ig mgmt_ig -pg mgmt_pg
symaccess view create -name prod_database_mv -sg prod_database_sg -ig prod_ig -pg prod_pg
symaccess view create -name prod_fra_mv -sg prod_fra_sg -ig prod_ig -pg prod_pg
symaccess view create -name mount_mv -sg mount_sg -ig mount_ig -pg mount_pg

REFERENCES

• EMC VMAX3 Family with HYPERMAX OS Product Guide
• EMC Symmetrix VMAX using EMC SRDF/TimeFinder and Oracle
• EMC VMAX3™ Local Replication
• http://www.emc.com/data-protection/protectpoint/index.htm
• http://www.emc.com/data-protection/data-domain/index.htm
