White Paper

PROTECTPOINT FOR FILE SYSTEMS
WITH VMAX3 AND VPLEX – OFFLOADING BACKUP
AND RECOVERY

Abstract
This white paper describes a solution combining ProtectPoint and VPLEX that
allows file system backup and restore to take place
between VMAX3 and Data Domain within the virtual
storage infrastructure. This capability not only reduces
host I/O and CPU overhead, allowing the host to focus on
processing and applications, but also provides higher
efficiency for the backup and recovery process itself,
while allowing VPLEX to continue to deliver non-disruptive
mobility and continuous availability.

Copyright © 2015 EMC Corporation. All Rights Reserved.
EMC believes the information in this publication is accurate as of its publication date. The
information is subject to change without notice.
The information in this publication is provided “as is”. EMC Corporation makes no
representations or warranties of any kind with respect to the information in this
publication, and specifically disclaims implied warranties of merchantability or fitness for
a particular purpose.
Use, copying, and distribution of any EMC software described in this publication requires
an applicable software license.
For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on
EMC.com.
All other trademarks used herein are the property of their respective owners.
Part Number H14720


Table of Contents
Executive summary
Document scope and limitations
Audience
Introduction
Product Overview
VPLEX
EMC VPLEX Virtual Storage
EMC VPLEX Architecture
EMC VPLEX Family
ProtectPoint
EMC ProtectPoint
EMC ProtectPoint
ProtectPoint for File System Features and Functions
VMAX3 Product Overview
VMAX3 FAST.X
VMAX3 SnapVX Local Replication Overview
Data Domain Product Overview
Section 1: ProtectPoint for File System with VMAX3 & VPLEX
Introduction
ProtectPoint for File Systems Components
ProtectPoint Backups using VPLEX with VMAX3 Overview
ProtectPoint for File Systems Backup Procedure Scope
ProtectPoint for File Systems Pre-checks and Backup Procedure
Using ProtectPoint Recovery Device Images for Granular Recovery with VPLEX
ProtectPoint for File Systems Pre-checks and Production Restore Procedure
ProtectPoint Restore Procedure for RAID-0 VPLEX devices
ProtectPoint 2.0 Full Restore of VPLEX RAID-1 Virtual Volumes
Conclusion
References


Executive summary

Continuous availability is increasingly a baseline requirement for enterprise applications. At the same time, the data for these applications continues to grow rapidly. These two factors, combined with ever-tightening RPO and RTO requirements, have placed a premium on traditional backup windows. Conventional backup is unable to meet this requirement due to the inefficiencies of reading and writing all the data during full backups. This has led many datacenters to use snapshots for more efficient protection. However, often there is no strong integration between the file system backup and the snapshot operations, and snapshot data is typically confined to a storage array along with its source data, risking the loss of both in the event of a data center failure or storage unavailability. More importantly, during recovery the recovery process itself ('roll forward') cannot start until the initial file system image has been fully restored, which can take a very long time. Finally, there is a large gap between the requirement for fast and efficient data protection and the ability to deliver that protection without disruption.

EMC ProtectPoint addresses these gaps by enabling best-in-class EMC products, the VMAX3 storage array and the Data Domain system, to provide storage-based backup and recovery in an automated, efficient, and integrated manner. The enabling technology for ProtectPoint is the integration between VMAX3 and Data Domain systems, which allows file system backup and restore to take place entirely within the storage infrastructure. This capability not only reduces host I/O and CPU overhead, allowing the host to focus on servicing applications and transactions, but also provides higher efficiency for the backup and recovery process itself.

Backup efficiencies are introduced by not requiring any read or write I/Os of the data files by the host. Instead, ProtectPoint creates an internal point-in-time consistent copy of the data and then copies it directly to the Data Domain system. ProtectPoint for File Systems allows for scripting to carry out any host-side activities to ensure an application-consistent backup is taken. As soon as the snapshot is created, file system operation is returned to normal, and the snapshot is then incrementally copied to the Data Domain system in the background while file system operations continue as normal. Because it is much more efficient to store the backups on media that does not consume primary storage, the backups also benefit from the de-duplication, compression, and remote replication that the Data Domain system offers.

Restore efficiencies are introduced in a similar way, by not requiring any read or write I/Os of the data files by the host. The restore devices can be directly mounted to a Mount host for small-scale data retrievals, managed by the host administrator, or mounted to the Production host and cataloged with RMAN so the full scale of RMAN functionality can be used for production-database recovery (e.g. fixing physical block corruption, missing data files, etc.). A third option, managed by the storage administrator, is available when the production file system requires a complete restore from backup, or for a large-scale recovery that should not be performed from the encapsulated Data Domain devices: Data Domain places the required backup-set on its encapsulated restore devices, and the restore device content is copied by SnapVX, overwriting the native VMAX3 production devices. The combination of ProtectPoint, VMAX3, and Data Domain allows the highest backup efficiency.

Document scope and limitations

Both ProtectPoint and VPLEX are constantly evolving platforms. The procedures and technology discussed in this white paper are only applicable to the VPLEX Local and VPLEX Metro products with ProtectPoint.

This solution is specific to the use of ProtectPoint for File Systems, which allows the orchestration between ProtectPoint and VPLEX on top of the integration of VMAX3 and Data Domain systems. In particular, the procedures described only apply to versions of VPLEX running GeoSynchrony version 5.2 or higher. Please consult with your local EMC support representatives if you are uncertain as to the applicability of these procedures to your ProtectPoint, VPLEX, VMAX3, and Data Domain environment.

Audience

This white paper is intended for technology architects, storage administrators, and system administrators who are responsible for implementing, managing, and maintaining a file system backup and recovery strategy with VMAX3 storage systems, and who are interested in achieving improved file system availability, performance, and ease of storage management. It is assumed that readers have some familiarity with VPLEX and the EMC VMAX3 family of storage arrays.

Introduction

This white paper addresses the following administrative tasks for ProtectPoint operations consisting of VPLEX, VMAX3, and Data Domain:
 Pre-backup checks
 ProtectPoint for File Systems backups
 Pre-recovery checks
 ProtectPoint for File Systems file recovery

This paper examines the technical impact of VPLEX on ProtectPoint for VMAX3 using ProtectPoint for File Systems backup and recovery. The necessary pre-backup considerations, logical checks, process changes, and use case examples are provided. The examples illustrate the correct way to account for VPLEX and for potential storage conditions that could impact the consistency of a backup.

Product Overview

TERMINOLOGY

The following table explains important terms used in this paper.

Term: Description

Application Consistent vs. Application Aware: A snapshot or backup is only considered application consistent if it is taken in a way specifically supported by a given application. This may require taking the application in and out of backup mode, even while the application is running. A snapshot or backup is considered application aware if it is taken through direct integration with a given application's API.

RTO and RPO: Recovery Time Objective (RTO) refers to the time it takes to recover a database after a failure. Recovery Point Objective (RPO) refers to the amount of data loss after the recovery completes, where RPO=0 means no data loss of committed transactions.

Storage consistent replications: Storage consistent replications refer to storage replication (local or remote) that maintains write-order fidelity at the target devices. To the host, the replicated file system image is crash consistent.

Storage Consistent vs. Crash Consistent file system: File systems make a distinction between a crash consistent and a fully consistent state of the file system. A fully consistent state requires all file system journal data, file data, and file metadata to be consistent (see 'storage consistent replications') on the storage media; in this state, file systems can simply be mounted without user intervention. A crash consistent state, on the other hand, requires file system recovery, often relying on the file system journal and/or logging to achieve file system consistency before the file system can be mounted by the host.

Storage Groups: Storage Groups can be used to (a) present devices to a host (LUN masking) on VMAX3, (b) specify FAST Service Levels (SLOs) for a group of devices on VMAX3, and (c) manage groups of devices used by VPLEX and/or by replication software such as VMAX3 SnapVX and SRDF. VMAX3 Storage Groups can be cascaded, such that the child storage groups are used for setting FAST Service Level Objectives (SLOs) and the parent is used for LUN masking of all the database devices to the host.

VMAX3 HYPERMAX OS: HYPERMAX OS is the industry's first open converged storage hypervisor and operating system. It enables VMAX3 to embed storage infrastructure services like cloud access, data mobility, and data protection directly on the array, and delivers the ability to perform real-time and non-disruptive data services. This delivers new levels of data center efficiency and consolidation by reducing footprint and energy requirements.

VMAX Federated Tiered Storage (FTS): Federated Tiered Storage (FTS) is a feature of VMAX3 that allows external storage to be connected to the VMAX3 backend and provide physical capacity that is managed by VMAX3 software.

VMAX3 FAST.X: FAST.X is the latest development in FTS.

VMAX3 TimeFinder Snapshot vs. Clone: Previous generations of TimeFinder referred to a snapshot as a space-saving copy of the source device, where capacity was consumed only for data changed after the snapshot time. A clone, on the other hand, referred to a full copy of the source device.

VMAX3 TimeFinder SnapVX: TimeFinder SnapVX is the latest development in TimeFinder local replication software, offering higher scale and a wider feature set, yet maintaining the ability to emulate legacy behavior. TimeFinder SnapVX snapshots are always space-efficient. When they are linked to host-addressable target devices, the user can choose whether to keep the target devices space-efficient or to perform a full copy.

Storage Array: An intelligent, hyper-consolidated, multi-tiered block storage frame. For example, EMC CX4, VNX, VMAX3, VMAX, Symmetrix DMX4, and XtremIO.

Storage volume: A LUN or unit of storage presented by the back-end arrays and used by VPLEX.

Extent: All or part of a storage volume used by VPLEX.

VPLEX device: The logical layer where a protection scheme is applied to an extent or group of extents used by VPLEX.

Virtual volume: Unit of storage presented from VPLEX front-end ports to hosts.

Front-end port: VPLEX director port connected to host initiators (acts as a target).

Back-end port: VPLEX director port connected to storage arrays (acts as an initiator).

VPLEX Director: The central processing and intelligence of the VPLEX solution. There are redundant (A and B) directors in each VPLEX Engine.

VPLEX Engine: Consists of two directors and is the unit of scale for the VPLEX solution.

VPLEX cluster: A collection of VPLEX engines in one rack, using redundant, highly available, private Fibre Channel connections as the cluster interconnect.

VPLEX

EMC VPLEX Virtual Storage

EMC VPLEX virtualizes block storage from arrays such as EMC CLARiiON, VNX, VNX2, Symmetrix, VMAX, VMAX3, and XtremIO (plus over 65 3rd party array products) by providing an abstraction layer between the storage array and the host. VPLEX physically resides between the servers and the block storage arrays. The logical components of each VPLEX virtual volume are shown below in Figure 1.

Figure 1: EMC VPLEX virtual volume components

EMC VPLEX Architecture

VPLEX is a virtual storage solution designed for use with heterogeneous host operating system environments and with both EMC and non-EMC block storage arrays. It introduces a virtual storage layer which provides VPLEX with the following desirable storage characteristics:
 Scale-out clustering hardware, which lets customers start small and grow big with predictable service levels
 Advanced data caching utilizing large-scale SDRAM cache to improve performance and reduce I/O latency and array contention
 Distributed cache coherence for automatic sharing, balancing, and failover of I/O across the cluster

 Consistent view of one or more LUNs across VPLEX clusters separated either by a few feet within a data center or across synchronous distances (up to 10ms), enabling new models of high availability and workload relocation

Figure 2: High level VPLEX architecture

EMC VPLEX Family

VPLEX Local: This solution is appropriate for customers who desire non-disruptive and fluid mobility, high availability, and a single point of management for heterogeneous block storage systems within a single data center.

VPLEX Metro: This solution is for customers who desire non-disruptive and fluid mobility, high availability, and a single point of management for heterogeneous block storage systems within and across data centers at synchronous distance. The VPLEX Metro offering also includes the unique capability to remotely export virtual storage across datacenters without the need for physical storage at the remote site.

ProtectPoint

EMC ProtectPoint

EMC ProtectPoint provides storage-integrated data protection that complements existing EMC data protection and availability solutions and demonstrates the latest proof point of the protection storage architecture. The EMC protection storage architecture is a blueprint for data protection transformation and investment protection (Figure 3) that focuses on three key areas: data management services, data source integration, and protection storage. It also illustrates the decoupling of the management of protection (data management services) from protection storage and data source integration, which is critical for software-defined IT that requires the decoupling of the management, control, and data planes.

Figure 3: The EMC Protection Storage Architecture

Specifically, ProtectPoint addresses two aspects of data source integration: the integration with primary storage and the integration with applications. ProtectPoint is neither adding application integration to snapshots nor adding snapshot support to backup software. Both of these approaches would bring some benefits, but would not fully address the problems with backup and would inevitably experience many of the limitations of snapshots. Rather, ProtectPoint is an industry-first solution that protects data by copying it directly from its source (primary storage) to the protection storage via the most efficient path and without application impact. To achieve this, ProtectPoint leverages key technologies within the primary storage and protection storage and introduces new protection software. This protection software is a data protection agent that drives the backup process and supports integration with the application being protected. This agent also enables the application administrator to control his own backup and recovery operations. ProtectPoint was designed by decoupling the data plane from the control plane to directly drive the underlying capabilities on the primary and protection storage.

EMC ProtectPoint

The data plane carries the data from source to destination. With ProtectPoint, the data plane (Figure 4) is the connection between the primary storage and the Data Domain system. ProtectPoint data movement is handled by separate resources of the primary storage that are dedicated to protection workflows, unlike other backup mechanisms that consume valuable host-side resources on the primary storage. The direct data movement from primary to protection storage eliminates the local area network impact by isolating all data traffic to the SAN. Since ProtectPoint leverages primary storage change block tracking technology, the primary storage knows exactly what has changed since the last backup and, unlike a traditional backup application, only has to send those unique blocks across the network, which minimizes data sent on the data plane.

ProtectPoint is very different from snap and replication solutions thanks to the efficient way the data is processed and stored by the protection storage system. One of the benefits of leveraging Data Domain protection storage is its industry-leading inline deduplication technology.

When a backup is triggered and the Data Domain system receives the changed blocks from the primary storage, it segments the incoming data stream, uniquely identifies each segment, and compares each segment to all previously stored data to determine whether it is unique. If the segment is unique, it is compressed inline and stored on the Data Domain system; if the segment is not unique, the system simply uses a pointer and does not store the segment again. This inline deduplication enables a 10x to 30x reduction in storage requirements. After the data is ingested and de-duplicated, the Data Domain system creates a new full independent backup image. This backup image is independent from all previous backups, but is de-duplicated against all other known data, which still enables simple recovery. In addition, as with all data on a Data Domain system, ProtectPoint backups are protected against data integrity issues by the Data Domain Data Invulnerability Architecture, with continuous fault detection and self-healing, which ensures data remains recoverable throughout its lifecycle on the Data Domain system. ProtectPoint for VMAX3 leverages the underlying technology of SnapVX and FAST.X, and the choice of integration is intended to ensure the simplest overall deployment of end-to-end protection for the application.

Figure 4: ProtectPoint Data Plane for VMAX3 (backup data flow and restore data flow between VMAX3 and Data Domain)

While the data plane carries out the data movement and processing, the control plane coordinates each of the steps along with other related activities.

The control plane coordinates all the activities for the backup and recovery (full and granular) workflows. To control backup operations, the control plane is governed by two key functions within the ProtectPoint agent that runs on the application server being protected. First is the application layer that supports or controls the application and file system integration. Second is the ProtectPoint controller, which controls the processes described in the data plane section above. The ProtectPoint controller has the necessary configuration data and credentials to provide connectivity and authentication to the primary and protection storage. The ProtectPoint agent stores the credentials in an RSA secure lockbox. In addition, the agent stores configuration data (mapping the LUNs on the primary storage to the storage devices on the Data Domain system) to orchestrate backup (the transfer of changed blocks and creation of backup images) and recovery operations. ProtectPoint backups are recorded in a catalog on the Data Domain system along with the backup data. EMC ProtectPoint for File Systems uses an agent with a simple command line interface to drive the ProtectPoint flows, with simple scripting to orchestrate with any application, along with any additional orchestration that may be needed, such as additional local snapshots and, in the case of this white paper, VPLEX.

The control plane carries out the sequencing that provides one of the most critical benefits of ProtectPoint: eliminating the backup impact on the application being protected. With ProtectPoint, the application only needs to be in backup mode for the moment that the backup is triggered, which is just the time it takes to create a ProtectPoint snapshot. Since this is a fairly simple process and decoupled from the actual movement of data, the application only needs to be in backup mode for a brief instant. For mission-critical applications this is vitally important, as the longer the application is in backup mode, the more I/Os are queued in the logs and the heavier the impact on the application when exiting backup mode. The ProtectPoint control plane ensures that these ProtectPoint backup and recovery operations seamlessly coexist with traditional primary storage availability workflows.

Figure 5: ProtectPoint Control Plane

ProtectPoint for File System Features and Functions

ProtectPoint File System Agent includes the following features:
 Provides a CLI interface that you can use to trigger the primary storage to Data Domain workflow for backup and restore operations.
 Triggers backup and restore operations on the primary storage system and Data Domain system through the use of primary storage system features and Data Domain block services for ProtectPoint management libraries.
 Operates on the device level, not with file system objects. ProtectPoint works with primary storage LUNs and Data Domain block services for ProtectPoint devices.
 Provides commands for lifecycle management of the backups.
 Provides an interface to replicate backups to a secondary Data Domain system for disaster recovery.

You can use ProtectPoint for File System to complete the following tasks:
 Create a snapshot of the production application LUNs on the primary storage system.
 Trigger the movement of data created from the backups on the primary storage system to the Data Domain devices.
 Create a static-image for each LUN in the data set on the Data Domain system.
 Manage the lifecycles of the data backups by listing and optionally deleting existing backups.
 Manage replication from the source Data Domain system to a Data Domain system in the data recovery site.
 Manage the ProtectPoint backup and restore catalog.
 Securely manage the credentials for the Data Domain systems.
 Validate the content and format of the configuration files.
 Show the ProtectPoint version number.

VMAX3 Product Overview

The EMC VMAX3 family of storage arrays is built on the strategy of simple, intelligent, modular storage. It provides the highest levels of performance and availability, featuring new hardware and software capabilities, and incorporates a Dynamic Virtual Matrix interface that connects and shares resources across all VMAX3 engines, allowing the storage array to seamlessly grow from an entry-level configuration into the world's largest storage array. The newest additions to the EMC VMAX3 family, VMAX 100K, 200K, and 400K, deliver the latest in Tier-1 scale-out multi-controller architecture with consolidation and efficiency for the enterprise. VMAX3 offers dramatic increases in floor tile density, with high-capacity flash and hard disk drives in dense enclosures for both 2.5" and 3.5" drives, and supports both block and file (eNAS). VMAX3 arrays come pre-configured from the factory to simplify deployment at customer sites and minimize time to first I/O. Each array uses Virtual Provisioning to allow the user easy and quick storage provisioning.

VMAX3's new hardware architecture comes with more CPU power, larger persistent cache (including cache optimizations), and a new Dynamic Virtual Matrix dual InfiniBand fabric interconnect that creates an extremely fast internal memory-to-memory and data-copy fabric. While VMAX3 can ship as an all-flash array, with the combination of EFD (Enterprise Flash Drives) and a large persistent cache that accelerates both writes and reads even farther, it can also ship as hybrid, multi-tier storage that excels in providing FAST1 (Fully Automated Storage Tiering) enabled performance management based on Service Level Objectives (SLO). Figure 6 shows possible VMAX3 components.

 1 – 8 redundant VMAX3 Engines
 Up to 4 PB usable capacity
 Up to 256 FC host ports
 Up to 16 TB global memory (mirrored)
 Up to 384 cores, 2.7 GHz Intel Xeon E5-2697-v2
 Up to 5,760 drives
 SSD Flash drives 200/400/800 GB 2.5"
 300 GB 15K RPM SAS drives 2.5"/3.5"
 300 GB – 1.2 TB 10K RPM SAS drives 2.5"/3.5"
 2 TB/4 TB SAS 7.2K RPM 3.5"

Figure 6: VMAX3 storage array

VMAX3 FAST.X

Previously named Federated Tiered Storage (FTS), FAST.X is a feature of VMAX3 that allows external storage to be connected to the VMAX3 backend and provide physical capacity that is managed by VMAX3 software. Attaching external storage to a VMAX3 enables the use of physical disk capacity on a storage system that is not a VMAX3 array, while gaining access to VMAX3 features such as local and remote replications, data management, and data migration. The external storage devices can be encapsulated by VMAX3, and therefore have their data preserved and independent of VMAX3-specific structures, or presented as raw disks to VMAX3, where HYPERMAX OS will initialize them and create native VMAX3 device structures. FTS is implemented entirely within HYPERMAX OS and doesn't require any additional hardware besides the VMAX3 and the external storage. Connectivity with the external array is established using Fibre Channel ports. Refer to EMC documentation and release notes to find the most up-to-date supported components.

1 Fully Automated Storage Tiering (FAST) allows VMAX3 storage to automatically and dynamically manage performance service level goals across the available storage resources to meet the application I/O demand, even as new data is added and access patterns continue to change over time.

Note: While the external storage presented via FTS is managed by VMAX3 HYPERMAX OS and benefits from many of the VMAX3 features and capabilities, the assumption is that the external storage provides its own storage protection, and therefore VMAX3 will not add its own RAID to the external storage devices.

VMAX3 SnapVX Local Replication Overview

EMC TimeFinder® SnapVX software delivers instant and storage-consistent point-in-time replicas of host devices that can be used for purposes such as the creation of gold copies, backup and recovery, data warehouse refreshes, patch testing, reporting and test/dev environments, or any other process that requires parallel access to, or preservation of, the primary storage devices. VMAX3 TimeFinder SnapVX combines the best aspects of previous TimeFinder offerings and adds new functionality, scalability, and ease-of-use features.

Some of the main SnapVX capabilities related to native snapshots are listed below (emulation mode for legacy behavior is not covered):
 With SnapVX, snapshots are natively targetless. They relate only to their source devices and can't be otherwise accessed directly. Instead, snapshots can be restored back to the source devices, or linked to another set of target devices which can be made host-accessible.
 Snapshot operations are performed on a group of devices. This group is defined by using either a text file specifying the list of devices, a 'device-group' (DG), a 'composite-group' (CG), or a 'storage group' (SG). The recommended way is to use a storage group.
 Snapshots are taken using the establish command. When a snapshot is established, a snapshot name is provided, along with an optional expiration date. The snapshot time is saved with the snapshot and can be listed. Snapshots also get a 'generation' number (starting with 0), such that the generation of older snapshots is incremented with each new snapshot, even if the snapshot name remains the same.
 SnapVX snapshots are always consistent, meaning that snapshot creation always maintains write-order fidelity. Snapshot operations such as establish and restore are also consistent: the operation either succeeds or fails for all the devices as a unit.
 Each source device can have up to 256 snapshots, and each snapshot can be linked to up to 1024 targets.
 SnapVX snapshots themselves are always space-efficient, as they are simply a set of pointers pointing to the original data when it is unmodified, or to the original version of the data when it is modified. Multiple snapshots of the same data utilize both storage and memory savings by pointing to the same location and consuming very little metadata.
 SnapVX provides the ability to create either space-efficient or full-copy replicas when linking snapshots to target devices. Use the "-copy" option to copy the full snapshot point-in-time data to the target devices during link; this makes the target devices a stand-alone copy. If the "-copy" option is not used, the target devices provide the exact snapshot point-in-time data only until the link relationship is terminated, saving capacity and resources by providing space-efficient replicas.
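The following Solutions Enabler sketch illustrates a typical SnapVX establish and link sequence using a storage group. The array ID, storage group names, snapshot name, and retention value are hypothetical, and the exact options should be verified against the Solutions Enabler documentation for your release:

# Create a consistent snapshot of all devices in the production storage group
symsnapvx -sid 0123 -sg prod_fs_sg establish -name daily_fs_bkup -ttl -delta 2

# Link the snapshot to a target storage group with a full copy of the point-in-time data
symsnapvx -sid 0123 -sg prod_fs_sg -snapshot_name daily_fs_bkup link -lnsg backup_tgt_sg -copy

# List snapshots and link/copy progress for the storage group
symsnapvx -sid 0123 -sg prod_fs_sg list -detail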

 SnapVX snapshot data resides in the same Storage Resource Pool (SRP) as the source devices.
 FAST Service Levels apply either to the source devices or to snapshot linked targets, but not to the snapshots themselves. Linked targets acquire an 'Optimized' FAST Service Level Objective (SLO) by default.
 Linked-target devices can't 'restore' any changes directly to the source devices. Instead, a new snapshot can be taken from the target devices and linked back to the original source devices. In this way, SnapVX allows an unlimited number of cascaded snapshots.

Data Domain Product Overview

Introduction to Data Domain

Data Domain deduplication storage systems offer a cost-effective alternative to tape that allows users to enjoy the retention and recovery benefits of inline deduplication, as well as network-efficient replication over the wide area network (WAN) for disaster recovery (DR).

Figure 7: EMC Data Domain deduplication storage systems

Data Domain systems reduce the amount of disk storage needed to retain and protect data by 10 to 30 times. Data on disk is available online and onsite for longer retention periods, and restores become fast and reliable. Storing only unique data on disk also means that data can be cost-effectively replicated over existing networks to remote sites for DR. With the industry's fastest deduplication storage controller, Data Domain systems allow more backups to complete faster while putting less pressure on limited backup windows. All Data Domain systems are built as the data store of last resort, which is enabled by the EMC Data Domain Data Invulnerability Architecture: end-to-end data verification, continuous fault detection and self-healing, and other resiliency features transparent to the application. For more information on Data Domain visit: http://www.emc.com/data-protection/data-domain/index.htm

Data Domain Block Device Service

Data Domain supports a variety of protocols, including CIFS, NFS, and VTL, and now also a block device service that enables it to expose devices as FC targets. The block device service in Data Domain is called vdisk and allows the creation of Data Domain based backup and restore devices that can be encapsulated by VMAX3 FTS and used by ProtectPoint. With Data Domain being external storage, it can be used behind FTS, and the ability to encapsulate Data Domain devices as VMAX3 devices allows TimeFinder SnapVX to operate on them.

The Data Domain devices are encapsulated to preserve their data structures. In that way, the Data Domain system can be used with a different VMAX3 if necessary. Remember that there will be two identical sets of devices: Data Domain backup devices and Data Domain restore devices. Table 1 depicts the basic Data Domain vdisk block device object hierarchy, which is also shown in Figure 8.

Table 1: Data Domain block device hierarchy

Name: Description
Pool: Similar to a 'Department' level. Maximum of 32 pools.
Device Group: Similar to the 'Application' level. Maximum of 1024 device groups per pool.
Device: Host device equivalent. Maximum of 2048 devices.

Figure 8: Data Domain block device hierarchy

Note: Backup and restore operations of backup-sets are performed at a Data Domain device group granularity. Therefore, when preparing Data Domain devices, it is critical that all the Data Domain devices that participate in the same backup-set belong to the same Data Domain device group.

Note: Data Domain can replicate the backups to another Data Domain system by using Data Domain Replicator (separately licensed). While the Data Domain file system structure is not covered in this paper, the reader should be aware that the replication granularity is either a single backup-set or a vdisk pool2.

2 This feature is not currently available with Data Domain OS 5.5. Refer to Data Domain OS release notes for available features.
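For illustration only, the vdisk objects above might be created on the Data Domain system with commands along the following lines. The pool, device-group, user, and sizing values are hypothetical, and the exact vdisk command syntax varies by DD OS release, so it should be confirmed against the DD OS Command Reference before use:

# Enable the vdisk block service (one time)
vdisk enable

# Create a pool and a device group to hold one backup-set
vdisk pool create pp_pool user sysadmin
vdisk device-group create fs_backup_dg pool pp_pool

# Create the devices that VMAX3 FTS will encapsulate (one per production LUN)
vdisk device create count 4 capacity 100 GiB pool pp_pool device-group fs_backup_dg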

Understanding Data Domain backup and restore devices for ProtectPoint

The VMAX3 integration with Data Domain uses two identical sets of encapsulated devices: backup devices and restore devices. The encapsulated backup devices are used as a backup target, and therefore SnapVX will copy the backup data to them, over-writing them with each new backup. The encapsulated restore devices are used for restore operations: they can be mounted directly to the Production or a Mount host, or their data can be copied with SnapVX link-copy to VMAX3 native devices.

Understanding Data Domain static-images and backup-sets for ProtectPoint

A full overview of the Data Domain system is beyond the scope of this paper. However, it is important to mention a few basic Data Domain components that are used in this integration.
 A static-image is created for each backup device within the Data Domain system once the devices have received all their data from SnapVX. After the incremental copy completes, Data Domain uses it to create a static-image for each backup device, and together the static-images create a backup-set. Static-images with matching metadata, including user-provided information, are called backup-sets, and metadata can be added to describe their content. Static-images utilize de-dup and optional compression, and benefit from Data Domain remote replication capabilities.
 Since the backup and restore devices are overwritten with each new backup or restore respectively, it is the static-images that are kept as distinct backups in the Data Domain catalog. The Data Domain catalog maintains and lists backup-sets with their metadata to help select the appropriate backup-set to restore.

Section 1: ProtectPoint for File System with VMAX3 & VPLEX

Introduction

From a mechanics perspective, creating backups and carrying out restores using ProtectPoint for File Systems with VMAX3 and VPLEX works very much like it does when VMAX3 is directly connected to the host and VPLEX is not in use. In order to ensure a consistent backup of VPLEX virtual volumes created using VMAX3, there are several pre-checks that should be performed before every backup with ProtectPoint. These checks include items such as the VPLEX device geometry, extent configuration, VMAX3 storage volume status, and overall VPLEX cluster status. The following sections provide a complete listing of the pre-check items that an administrator must add to his script or must manually confirm when using ProtectPoint with VPLEX and VMAX3-based virtual volumes.

ProtectPoint for File Systems Components

ProtectPoint, when deployed with VPLEX and VMAX3, is based on the following key components:
 Production host (or hosts) with VPLEX virtual volumes based on VMAX3 storage volumes containing the file system.
 VMAX3 storage array with Federated Tiered Storage (FTS) and encapsulated Data Domain backup and restore device sets.
 Data Domain system leveraging block services, with two identical sets of devices: backup devices and restore devices. Each of them is identical to the production devices.
o The backup and restore Data Domain devices are created in Data Domain and exposed as VMAX3 encapsulated devices via FTS.
o Not shown is a remote Data Domain system if Data Domain Replicator is used to replicate backups to another system.
 Management host, where optionally Unisphere for VMAX3 and Data Domain DDMC may be installed. The ProtectPoint FSA will also be installed on this host. The ProtectPoint agent and Solutions Enabler are typically installed on a storage management (non-production) host to reduce the host infrastructure requirements. It is, however, acceptable to install the PP FSA and EMC Solutions Enabler on a production host as a way to simplify the configuration and to facilitate direct connectivity to the application controls; remote scripting can also be used to control the application being protected. The management host doesn't require its own VMAX3 storage devices, though it requires a few tiny devices called Gatekeepers for communication with the VMAX3 storage array.
 An optional Recovery or Mount host. A Mount host is used when the administrator prefers to use the backup outside the production environment, or for extracting small data sets ('logical' recovery). In that case the encapsulated restore devices can be mounted to a test/dev host to review the backup content before restoring to production.
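As a quick sanity check of the management host component described above, Solutions Enabler can be used to confirm that the host sees the VMAX3 through its gatekeeper devices. The commands below are standard SYMCLI; any array IDs returned will depend on the environment:

# Discover and list locally visible Symmetrix/VMAX3 arrays
symcfg discover
symcfg list

# Verify gatekeepers and other devices visible to this host
syminq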

ProtectPoint Backups using VPLEX with VMAX3 Overview

VMAX3 with SnapVX provides near-instant copies of production volumes. These copies can be presented back through VPLEX to the production host, or to a QA/Test/Dev host. In the case of ProtectPoint, the same SnapVX and FAST.X technologies are leveraged to allow backups to be created on the Data Domain: to create a ProtectPoint backup, the VMAX3 leverages SnapVX point-in-time copies onto a Data Domain appliance. This capability eliminates backup I/O from the host while providing space-efficient backup images. By using ProtectPoint for File Systems this backup process can be automated, making the entire solution simple, repeatable, and time efficient. This section provides a procedure for creating consistent backups of VPLEX virtual volumes using ProtectPoint.

The primary requirement of this solution is the VMAX3 storage mapping through the VPLEX virtualization layer: the VPLEX virtual volumes must consist of one-to-one mapped VMAX3 storage volumes. This means that the VPLEX device capacity = extent capacity = VMAX3 device capacity. This restricts the VPLEX device geometries to single-extent RAID-0 or to two-extent RAID-1. Potential host, VPLEX, and storage array connectivity combinations for ProtectPoint backups are illustrated in Figures 9 and 10. Figure 9 shows a one-to-one mapped VMAX3-based virtual volume (RAID-0 geometry) presented through VPLEX to a host.

Figure 9: ProtectPoint Backup of a RAID-0 VPLEX Virtual Volume

In Figure 10, the VPLEX virtual volume (indicated in orange) sits on top of a RAID-1 device (indicated in green). Each RAID-1 device has two mirror legs, each based on a single extent (indicated in blue and dark blue). Each of the two extents is equal in size to the underlying storage volume, one from the VMAX3 and one from a second (possibly non-VMAX3) array. Though it is possible that both legs could be from the same VMAX3 or from two different VMAX3 arrays, the best practice configuration would only leverage ProtectPoint on one mirror leg.

Figure 10: ProtectPoint Backup of a RAID-1 VPLEX Virtual Volume

The objective remains the same as with Figure 9: to create a backup of a VPLEX virtual volume based on a 1:1 mapped VMAX3 extent. The key difference in this topology is the mirror legs; only one of them will be backed up using ProtectPoint. Regardless of the chosen RAID-1 configuration, the determination of which mirror leg is being used for ProtectPoint backup becomes pivotal for a successful backup (and restore). The required checks to determine the VPLEX volume device geometry, mirror legs, and mirror leg health are provided in the following procedure sections.

Note: Applications that span multiple VMAX3 arrays are able to use SnapVX to achieve consistent snapshots, but currently ProtectPoint does not support this configuration.

Note: It is also possible to design complex multi-site backup solutions using VPLEX distributed RAID-1 devices and VMAX3 storage at two data centers. These solutions would provide local backup and restore capabilities at each data center without the need to replicate the backups from one Data Domain to the other. For restore, however, only one site can restore to a distributed device, and a full rebuild would be required for the non-restored mirror leg (as shown later in this document).

ProtectPoint Configuration File

The default configuration file, protectpoint.config, contains information about the source devices, the restore target devices, the backup target devices, and the relationships between these devices for both the primary storage system and the Data Domain system. When you set up ProtectPoint for File Systems on the AR host, you will be modifying the default configuration file to include the specific details about your devices. The required configuration information for the modified configuration file is shown in Table 2 below.

Configuration information: Purpose

VPLEX virtual volume VPD ID: Identifier for the VPLEX virtual volumes exposed to hosts. Used to identify the production VPLEX virtual volumes that the file systems and/or data reside on.

VPLEX storage-volume VPD ID: Identifier for the underlying storage array devices consumed by VPLEX. Used to map the VPLEX virtual volumes to VMAX3 storage volumes.

Symdev-ID and SYMM-ID of the VMAX3 source devices: Identifiers for the VMAX3 production devices. Used to identify the production devices from which ProtectPoint creates a snapshot on the VMAX3 system.

Symdev-ID and Data Domain World Wide Number (WWN) of the encapsulated backup target devices: VMAX3 and Data Domain identifiers for the encapsulated backup devices. Snapshots are copied to these devices and picked up on the Data Domain system.

Data Domain WWN of the encapsulated restore target devices: Identifies the encapsulated restore devices onto which the static-image that contains the backup image is placed for restore.

Primary Data Domain system: Information for the primary Data Domain system. Used for hostname or IP address, username, and password for control path operations on the Data Domain system, plus the block services for ProtectPoint pool and device-group.

Secondary Data Domain system: Information for the secondary Data Domain system (hostname or IP address, username, and password), used to copy the static-image that contains the backup image from the primary Data Domain system to the secondary Data Domain system via static-image or MTree replication, plus the block services for ProtectPoint pool and device-group on the secondary Data Domain system.

Table 2: Required configuration file information

ProtectPoint for File Systems Backup Procedure Scope

The following procedure is intended to supplement the standard best practices for ProtectPoint file system backups on VMAX3, taking into account VPLEX with RAID-0 extents from VMAX3. See the ProtectPoint File System Agent 2.0 Installation and Administration Guide available at https://support.emc.com for comprehensive details on supported Data Domain, VMAX3, Solutions Enabler, and ProtectPoint for File Systems versions and SAN connectivity requirements.

ProtectPoint for File Systems Pre-checks and Backup Procedure

1. Identify the host file system and the corresponding VMAX3 Symdev-ID for the volumes to be backed up. The VPLEX storage-view will contain all VPLEX volumes that are visible to a given host or group of hosts. To determine the VPLEX to VMAX3 mapping, use one of the following:
 From the VPLEX CLI, use the 'show-use-hierarchy' or 'device drill-down' command for each virtual volume. The output provides the virtual volume to VMAX3 mapping for each VPLEX volume.
 From the GUI, navigate to the virtual volumes context, select (mouse-over) the virtual volume(s) and then click the 'View Map' hyperlink. The underlying storage volume details will be visible.
The underlying VMAX3 volume(s) will be shown for each of the above methods. The VMAX3 Symdev-ID can then be determined from the storage-volumes context in the VPLEX CLI or GUI, using the VMAX3 VPD ID or the VMAX3 Storage Volume Name to locate the corresponding device within the VMAX3.
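For example, the mapping lookup from the VPLEX CLI might look like the following. The cluster and volume names are hypothetical, and the output shown is abbreviated:

VPlexcli:/> show-use-hierarchy /clusters/cluster-1/virtual-volumes/vv_prod_fs_01
  virtual-volume: vv_prod_fs_01
    device: device_prod_fs_01 (raid-0)
      extent: extent_VPD83T3_6000097..._1
        storage-volume: VPD83T3:60000970000196701234533030374100
          storage-array: EMC-SYMMETRIX-196701234

VPlexcli:/> ll /clusters/cluster-1/storage-elements/storage-volumes/VPD83T3:60000970000196701234533030374100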

The VMAX3 Symdev-ID (i.e. 007A in this example) will be used to build the configuration mapping file used by ProtectPoint to perform backup operations. The configuration mapping file is covered in detail within the ProtectPoint File System Agent 2.0 Command Reference Guide available at https://support.emc.com.

The configuration file for ProtectPoint 2.0 is modified to include the VMAX3 information in order to make SnapVX snapshots of the VMAX3 volume using the FTS-mapped Data Domain vdisk. Within the ProtectPoint FSA 2.0 Command Reference Guide, every ProtectPoint 2.0 configuration file parameter is defined and explained, starting in Table 5 on page 15 and continuing through Table 10 on page 19. In this case, we are gathering information to use for Table 7 (and possibly Table 8). Here the Symdev-ID information captured in the section above is used to specify which VMAX3 device (exposed through VPLEX) to make a ProtectPoint-enabled backup of; the virtual volume mapping output is used to determine the VMAX3 Symdev-ID.
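Before editing the configuration file, it can be worth cross-checking from the management host that the Symdev-ID derived through VPLEX really is the intended production device. A minimal Solutions Enabler check, with a hypothetical array ID and device, might be:

# Show details (capacity, WWN, masking) for the device identified through VPLEX
symdev -sid 0123 show 007A

# Confirm the reported device WWN matches the VPLEX storage-volume VPD ID (with the VPD83T3: prefix removed)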

2. Using the VPLEX CLI or REST API, check for valid backup conditions to ensure data consistency. Note: each of the following checks is typically scripted or built into code that orchestrates the overall backup process (see the sketch following this list).
a. Confirm a VPLEX code upgrade is not in progress. Using the VPLEX CLI, issue the 'ndu status' command and confirm that the response is 'No firmware, BIOS/POST, or SSD upgrade is in progress.'
b. Confirm the device is healthy. Issue the 'll' command from the /clusters/<cluster name>/virtual-volumes/<virtual volume name> context for each volume to be copied:
   i. Confirm Health Status is 'ok'
   ii. Confirm Operational Status is 'ok'
c. Confirm the source VPLEX virtual volume device geometry is not RAID-C. Confirm each volume is 1:1 mapped (single extent) RAID-0 or 1:1 mapped (two extent) local RAID-1. Distributed RAID-1 device legs must be a combination of RAID-0 (single extent) and/or RAID-1 (two extent) device geometries. Device geometry can be determined by issuing the 'll' command at the /clusters/<cluster name>/devices/<device name> context.
d. Confirm the VPLEX volumes to be copied do not have 'remote' locality (from the same VPLEX cluster). Issue the 'll' command against the /clusters/<local cluster name>/virtual-volumes/<virtual volume name> context and confirm locality is 'local' or 'distributed'. The goal here is to ensure the host, VMAX3, and Data Domain are all in the same data center.
e. Confirm the device is not being protected by RecoverPoint (Note: a ProtectPoint restore to a RecoverPoint-protected device would lead to data inconsistency). Issue the 'll' command from the /clusters/<cluster name>/virtual-volumes/<virtual volume name> context and check that 'recoverpoint-protection-at' is set to [] and 'recoverpoint-usage' is set to '-'.
f. Confirm the underlying device status is not marked 'out of date' or in a 'rebuilding' state. Issue the 'll' command from the /clusters/<cluster name>/devices context or from the /distributed-storage/distributed-devices/<distributed device name>/components context.
g. For RAID-1 or distributed RAID-1 based virtual volumes, confirm the underlying VMAX3 storage volume status is not failed or in an error state.
h. Ensure the virtual volumes are members of the same VPLEX consistency group. Consistency group membership can be determined by issuing 'll' from the /clusters/<cluster name>/consistency-groups/<consistency group name> context. In most cases all members of the consistency group should be backed up together. Note: VPLEX consistency group membership should align with any VMAX3 storage group membership whenever possible.
i. For distributed RAID-1, confirm WAN link status. Issue the 'cluster status' command and confirm 'wan-com' status is 'ok' or 'degraded'. If WAN links are completely down, confirm the VMAX3 backup is being made at the winning site. Here you are trying to avoid backing up a stale VMAX3 mirror leg. Note: best practice is to set the VPLEX consistency group detach rule (winning site) to match the site where VMAX3 backups are performed.
j. If any of the above conditions fail, the backup should be aborted. If the conditions are being checked by a script, an error message or alert should be generated.
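As an illustration of how these checks map to VPLEX CLI queries, the sequence below shows the commands a pre-check script would typically drive. The cluster, volume, and consistency-group names are hypothetical:

VPlexcli:/> ndu status
VPlexcli:/> ll /clusters/cluster-1/virtual-volumes/vv_prod_fs_01
VPlexcli:/> ll /clusters/cluster-1/devices/device_prod_fs_01
VPlexcli:/> ll /clusters/cluster-1/consistency-groups/cg_prod_fs
VPlexcli:/> cluster status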

i. perform VMAX3 to VPLEX masking/storage group modification to add the backup images. Note: Best practice is to set the VPLEX consistency group detach rule (winning site) to match site where VMAX3 backups are performed. PROTECTPOINT FOR FILE SYSTEMS WITH VPLEX AND VMAX3 27 . If any of the above conditions fail. If WAN links are completely down. perform these additional VPLEX specific steps: 1. If the conditions are being checked by a script an error message or alert should be generated.0 administration guide available on https://support. Confirm the VMAX3 Recovery Device images are visible to VPLEX BE ports. Using ProtectPoint Recovery Device Images for Granular Recovery with VPLEX Once a backup is created it can be presented (zoned and lun masked) directly from the VMAX3 back to a test or development host. confirm VMAX3 backup is being made at winning site.emc.com for details on selecting the correct Recovery Image (backup) and making it available through VMAX3. App OS Production Production Point In Unique Backup Data Data Time Blocks Images Copy Ingested SAN SAN Recovery Recovery Device Device Granular Recovery Mapping Dedupe Index Figure 11: Granular Recovery For ProtectPoint Recovery Device images presented back through VPLEX. Issue the ‘cluster status’ command and confirm ‘wan-com’ status is ‘ok’ or ‘degraded’. Figure 11 below shows the granular restore process using ProtectPoint for File Systems with VMAX3 and VPLEX. 4. Here you are trying to avoid backing up a stale VMAX3 mirror leg. 3. The ProtectPoint File System Agent 2. In some situations the administrator may wish to present the backup image through VPLEX to the production host to perform a granular restore. For distributed RAID-1 confirm WAN link status.0 Installation and Administration guide located at https://support.com contains the detailed steps. Follow standard ProtectPoint for File Systems procedures to create a backup of the host file system(s). See the ProtectPoint 2. As necessary.emc. the backup should be aborted.

Using ProtectPoint Recovery Device Images for Granular Recovery with VPLEX

Once a backup is created it can be presented (zoned and LUN masked) directly from the VMAX3 back to a test or development host. In some situations the administrator may wish to present the backup image through VPLEX to the production host to perform a granular restore. See the ProtectPoint 2.0 administration guide available on https://support.emc.com for details on selecting the correct Recovery Image (backup) and making it available through VMAX3. Figure 11 below shows the granular restore process using ProtectPoint for File Systems with VMAX3 and VPLEX.

Figure 11: Granular Recovery

For ProtectPoint Recovery Device images presented back through VPLEX, perform these additional VPLEX-specific steps (a CLI sketch follows this list). This section assumes RAID-0 (single extent only) device geometry, which means that the VPLEX device capacity = extent capacity = storage volume capacity = Symdev-ID capacity.
1. Confirm the VMAX3 Recovery Device images are visible to the VPLEX BE ports. As necessary, perform VMAX3 to VPLEX masking/storage group modification to add the backup images.
2. Perform one-to-one encapsulation through the Unisphere for VPLEX UI or the VPLEX CLI:
a. Claim the storage volumes from the VMAX3 containing the backup images.
b. Create a single-extent, single-member RAID-0 geometry device.
c. Create VPLEX virtual volumes from the backup images using the VPLEX UI or CLI:
   i. Use the 'Create Virtual Volumes' button from the Arrays context within the UI, or
   ii. Identify the Symdev-ID from the VMAX3 of the backup images and create the extent, device, and virtual volume(s) from the VPLEX CLI.
3. Present the VPLEX virtual volumes built from the backup images to the host(s):
a. As necessary, create VPLEX storage view(s).
b. Add the virtual volumes built from the backup images to the storage view(s).
c. As necessary, perform zoning of virtual volumes to hosts following traditional zoning best practices.
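A minimal VPLEX CLI sketch of the encapsulation in step 2 is shown below. The storage-volume VPD ID, device, view, and volume names are hypothetical, and the commands should be validated against the VPLEX CLI guide for your GeoSynchrony release:

VPlexcli:/> storage-volume claim -d VPD83T3:60000970000196701234533030384200
VPlexcli:/> extent create -d VPD83T3:60000970000196701234533030384200
VPlexcli:/> local-device create -g raid-0 -e extent_recdev_007B_1 -n dev_recdev_007B
VPlexcli:/> virtual-volume create -r dev_recdev_007B
VPlexcli:/> export storage-view addvirtualvolume -v mount_host_view -o dev_recdev_007B_vol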

Figure 12 (below) illustrates the case where data is being written within a storage array from the ProtectPoint backup image to a VMAX3 device that is used by VPLEX.

Figure 12: ProtectPoint Restore with VPLEX RAID-0 Production Volume

ProtectPoint Restore Procedure for RAID-0 VPLEX devices

1. Confirm the production VMAX3 storage volume(s) you wish to restore is/are not RecoverPoint protected. Issue the 'll' command from the /clusters/<cluster name>/virtual-volumes/<virtual volume name> context. Check 'recoverpoint-protection-at' is set to [] and 'recoverpoint-usage' is set to '-'. For 5.2 code, the consistency group context will indicate if it is RecoverPoint protected or not as well.

2. Confirm consistency group membership and dependent virtual volume relationships. If the virtual volume is a member of a consistency group, confirm that you are restoring all of the volumes that the application requires to run and not a subset of all volumes. You will need this information in step 5 below.

3. Shut down any host applications using the source VPLEX volume(s) / file systems that will be restored. As necessary, unmount the associated virtual volumes on the host. Using the VPLEX CLI, remove the source virtual volume(s) from all storage views. Make note of the virtual volume LUN numbers within the storage view prior to removing them. The objectives here are to prevent host access and to clear the host's read cache.

4. Confirm a VPLEX code upgrade is not in progress. Using the VPLEX CLI, issue the 'ndu status' command and confirm that the response is 'No firmware, BIOS/POST, or SSD upgrade is in progress.'

5. Invalidate VPLEX read cache on the source virtual volume(s). This can be done concurrently with Step 3. There are several options to achieve VPLEX read cache invalidation depending on your VPLEX GeoSynchrony code version:

A. For pre-5.2 code:
   1. Use virtual-volume cache-invalidate to invalidate the individual source volume(s):
      VPlexcli:/> virtual-volume cache-invalidate <virtual volume>
      or use consistency-group cache-invalidate to invalidate an entire VPLEX consistency group of source volumes:
      VPlexcli:/> consistency-group cache-invalidate <consistency group>
   2. Follow each command in step 1 with the command virtual-volume cache-invalidate-status to confirm the cache invalidation process has completed:
      VPlexcli:/> virtual-volume cache-invalidate-status <virtual volume>

      Example output for a cache invalidation job in progress:
      cache-invalidate-status
      -----------------------
      director-1-1-A
        status: in-progress
        result: -
        cause: -

      Example output for a cache invalidation job completed successfully:
      cache-invalidate-status
      -----------------------
      director-1-1-A
        status: completed
        result: successful
        cause: -

   Note: For pre-5.2 code, it is recommended to wait a minimum of 30 seconds to ensure that the VPLEX read cache has been invalidated for each virtual volume. With GeoSynchrony 5.3 P3 and higher there is no longer a need to wait 30s. The invalidation process can take a non-trivial amount of time, so the use of the virtual-volume cache-invalidate-status command is recommended to confirm completion of invalidation tasks. This is the case even when the consistency group command is used. If you are scripting this step, a polling sketch follows this procedure.

   Note: The virtual-volume cache-invalidate commands operate on a single virtual-volume at a time. The consistency-group cache-invalidate command will fail if any member virtual volume doesn't invalidate. Invalidation will fail if host I/O is still in progress to the virtual volumes.

B. With GeoSynchrony 5.3 P3+ and 5.4 SP1+ code, the cache-invalidate command runs in the foreground and does not require a follow-up status check. The cache-invalidate command now runs as a foreground process and not as an asynchronous background request. Once the command completes, the invalidation is complete.

Note: The cache-invalidate command must not be executed on RecoverPoint-enabled virtual volumes. This means using either RecoverPoint or array-based copies, but not both, for any given virtual volume.

Note: For pre-5.2 code, the VPLEX clusters should not be undergoing an NDU while this command is being executed.

6. Confirm the IO Status of the production VMAX3 storage volumes within VPLEX is "alive" by doing a long listing against the storage volumes context for your cluster. For example:

Figure 13: Storage Volumes Long Listing

In addition, confirm VPLEX back-end paths are healthy by issuing the "connectivity validate-be" command from the VPLEX CLI. Ensure that there are no errors or connectivity issues to the back-end storage devices. Resolve any error conditions with the back-end storage before proceeding. Example output showing desired back-end status:

Figure 14: Connectivity Validation

7. Follow standard ProtectPoint for File Systems procedures to perform a full restore of the host file system(s). The ProtectPoint File System Agent 2.0 Installation and Administration guide located at https://support.emc.com contains the detailed full restore steps.

8. As necessary, restore access to the virtual volume(s) based on source devices for the host(s). Add each virtual volume back to the view, specifying the original LUN number (noted in step 3), using the VPLEX CLI:

/clusters/<cluster name>/exports/storage-views> addvirtualvolume -v storage_view_name/ -o (lun#, virtual_volume_name) -f

9. As necessary, rescan devices and restore paths (for example, powermt restore) on hosts.

10. Mount devices.

11. Restart applications.
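For older GeoSynchrony releases where cache invalidation runs asynchronously, step 5 of the procedure above is commonly scripted as an invalidate followed by a status poll. The sketch below (referenced from step 5) shows only the polling side; it parses output in the same form as the cache-invalidate-status examples shown earlier, and the run_vplexcli helper is a placeholder you would wire to your own VPlexcli access method.

import time

def run_vplexcli(command):
    """Placeholder: return the text output of one VPlexcli command.
    How the command is issued (a scripted VPlexcli session, an SSH wrapper,
    or the VPLEX management REST interface) is environment specific."""
    raise NotImplementedError("connect this to your VPlexcli access method")

def invalidation_complete(status_output):
    """Parse 'virtual-volume cache-invalidate-status' output (in the format
    shown in the procedure above) and return True only when every director
    reports status: completed."""
    statuses = [line.split(":", 1)[1].strip()
                for line in status_output.splitlines()
                if line.strip().startswith("status:")]
    return bool(statuses) and all(s == "completed" for s in statuses)

def wait_for_invalidation(volume, timeout_s=300, poll_s=10):
    """Poll cache-invalidate-status for one virtual volume until it completes,
    returning False if the timeout expires first."""
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        output = run_vplexcli("virtual-volume cache-invalidate-status " + volume)
        if invalidation_complete(output):
            return True
        time.sleep(poll_s)
    return False

if __name__ == "__main__":
    # Exercise the parser with sample output in the documented format.
    sample = ("cache-invalidate-status\n"
              "-----------------------\n"
              "director-1-1-A\n"
              " status: completed\n"
              " result: successful\n"
              " cause: -\n")
    print("parsed as complete:", invalidation_complete(sample))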

ProtectPoint 2.0 Full Restore of VPLEX RAID-1 Virtual Volumes

When VPLEX virtual volumes have RAID-1 geometry, the ProtectPoint restore process must account for the second (non-restored) mirror leg. The typical VMAX3 production device restore procedure restores only the VMAX3 mirror leg of a VPLEX RAID-1 device. In order to ensure data consistency across both mirror legs after a full restore has occurred, ProtectPoint / backup administrators must add a step to resynchronize the second VPLEX mirror leg. This applies for both local RAID-1 and distributed (VPLEX Metro) RAID-1 VPLEX devices. These steps are critical to ensure data consistency on the second VPLEX RAID-1 mirror leg (the one that is not being restored by the VMAX3) and to ensure each virtual volume's read cache is consistent.

Figure 15 (below) illustrates the workflow when a ProtectPoint restore is performed to VPLEX local RAID-1 or distributed RAID-1 device(s) based on VMAX3 production device(s).

Figure 15: ProtectPoint 2.0 Full Restore of VPLEX RAID-1 Virtual Volumes

Technical Note: This same set of steps can be applied to remote array-based copy products like SRDF or MirrorView. For example, an SRDF R2 or MirrorView Secondary Image is essentially identical in function to a local array-based copy. The remote copy, in this case, can be used to do a restore to a production (R1/Primary Image) volume.

Prerequisites

This section assumes VPLEX virtual volumes are based on distributed or local RAID-1 VPLEX virtual volumes built from at least one VMAX3 array with ProtectPoint. In addition, the VPLEX virtual volumes must possess both of the following attributes:

 The VPLEX virtual volumes must be one-to-one mapped storage volumes from at least one VMAX3 storage array.
 The volumes must have a single-extent RAID-1 geometry (two single extents being mirrored locally or remotely).

ProtectPoint 2.0 Restore Procedure for VPLEX RAID-1 Virtual Volumes

1. Using the VPLEX CLI or REST API, check for valid restore conditions to ensure data consistency.

Note: Each of the following checks is typically scripted or built into code that orchestrates the overall restore process on the array.

   a. Confirm the VMAX3 production device is not being protected by RecoverPoint. Issue the 'll' command from the /clusters/<cluster name>/virtual-volumes/<virtual volume name> context and check 'recoverpoint-protection-at' is set to [] and 'recoverpoint-usage' is set to '-'.
   b. Confirm a VPLEX ndu is not in progress. Using the VPLEX CLI, issue the 'ndu status' command and confirm that the response is 'No firmware, BIOS/POST, or SSD upgrade is in progress.'
   c. Confirm the VPLEX volumes being restored have the same locality (from the same VPLEX cluster) and are in the same array.
   d. Ensure the virtual volumes are members of the same VPLEX consistency group. In most cases all members of the consistency group should be copied together. Consistency group membership can be determined by issuing ll from the /clusters/<cluster name>/consistency-groups/<consistency group name> context.
   e. Confirm each volume is 1:1 mapped (single extent) RAID-0 or 1:1 mapped (two extent) local RAID-1. Device geometry can be determined by issuing 'll' at the /clusters/<cluster name>/devices/<device name> context. The VPLEX volume configuration is such that device capacity = extent capacity = storage volume capacity = Symdev-ID capacity.
   f. Confirm the underlying VPLEX device geometry is not RAID-c. Distributed RAID-1 device legs must be a combination of RAID-0 (single extent) and/or RAID-1 (two extent) device geometries.
   g. Confirm the restore target device is healthy. Issue the 'll' command from the /clusters/<cluster name>/virtual-volumes/<virtual volume name> context for each volume to be copied.
      i. Confirm the underlying VMAX3 storage volume status is not failed or in an error state.
      ii. Confirm Health Status is 'ok'.
      iii. Confirm Operational Status is 'ok'.
   h. Confirm the underlying device status is not marked 'out of date' or in a 'rebuilding' state. Issue the 'll' command from the /clusters/<cluster name>/devices context or from the /distributed-storage/distributed-devices/<distributed device name>/components context.

   i. For distributed RAID-1, confirm VPLEX WAN links are not down. Issue the 'cluster status' command and confirm 'wan-com' status is 'ok' or 'degraded'. If WAN links are completely down, confirm the array-based restore is being done at the winning site.

Note: Best practice is to set the VPLEX consistency group detach rule (winning site) to match the site where the VMAX3 and Data Domain systems reside.

2. Shut down any host applications using the production VPLEX virtual volume(s) or file system that will be restored. As necessary, unmount the associated virtual volumes on the host. Depending on how things have been scripted, ProtectPoint for File System will handle taking the filesystem offline / unmounting prior to restore. Remove the source virtual volume(s) from all storage views. Make note of the virtual volume LUN numbers within the storage view prior to removing them. You will need this information in step 8 below. The objectives here are to prevent host access and to clear the host's read cache.

3. Invalidate VPLEX read cache on the virtual volume(s) being restored. There are several options to achieve VPLEX read cache invalidation depending on your VPLEX GeoSynchrony code version:

A. For pre-5.2 code:
   1. Use virtual-volume cache-invalidate to invalidate the individual source volume(s):
      VPlexcli:/> virtual-volume cache-invalidate <virtual volume>
      or use consistency-group cache-invalidate to invalidate an entire VPLEX consistency group of source volumes:
      VPlexcli:/> consistency-group cache-invalidate <consistency group>
   2. Follow each command in step 1 with the command virtual-volume cache-invalidate-status to confirm the cache invalidation process has completed:
      VPlexcli:/> virtual-volume cache-invalidate-status <virtual volume>

      Example output for a cache invalidation job in progress:
      cache-invalidate-status
      -----------------------
      director-1-1-A
        status: in-progress
        result: -
        cause: -

      Example output for a cache invalidation job completed successfully:
      cache-invalidate-status
      -----------------------
      director-1-1-A
        status: completed
        result: successful
        cause: -

   Note: For pre-5.2 code, it is recommended to wait a minimum of 30 seconds to ensure that the VPLEX read cache has been invalidated for each virtual volume. The invalidation process can take a non-trivial amount of time, so the use of the virtual-volume cache-invalidate-status command is recommended to confirm completion of invalidation tasks. This is the case even when the consistency group command is used.

   Note: The virtual-volume cache-invalidate commands operate on a single virtual-volume at a time. The consistency-group cache-invalidate command will fail if any member virtual volume doesn't invalidate. Invalidation will fail if host I/O is still in progress to the virtual volumes.

   Note: The cache-invalidate command must not be executed on RecoverPoint-enabled virtual volumes. This means using either RecoverPoint or array-based copies, but not both, for a given virtual volume. The VPLEX clusters should not be undergoing an NDU while this command is being executed.

B. With GeoSynchrony 5.3 P3+ and 5.4 SP1+ code, the cache-invalidate command runs in the foreground and does not require a follow-up status check. Once the command completes, the invalidation is complete.

4. Detach the VPLEX RAID-1 or distributed RAID-1 mirror leg that will not be restored during the array-based restore process. This will typically be the non-VMAX3 mirror leg. This can be done concurrently with Step 3. Use the detach-mirror command to detach the mirror leg(s):

device detach-mirror -m <device_mirror_to_detach> -d <distributed_device_name> -i -f

Note: Depending on the RAID geometry for each leg of the distributed device, it may be necessary to detach both the local mirror leg and the remote mirror leg. For example, if the VPLEX distributed device mirror leg being restored is, itself, a RAID-1 device, then both the non-restored local leg and the remote leg must be detached. This is because only one storage volume is being restored and there are up to three additional mirrored copies maintained by VPLEX (one local and one or two remote). If the virtual volume is a member of a consistency group, in some cases the virtual volume may no longer have storage at one site, which may cause the detach command to fail. In this case the virtual volume will need to be removed from the consistency group *before* the mirror leg is detached. (A scripting sketch for the detach and reattach commands follows this procedure.)

5. Confirm the IO Status of the production VMAX3 storage volumes within VPLEX is "alive" by doing a long listing against the storage volumes context for your cluster. For example:

Figure 16: Storage Volumes Long Listing

In addition, confirm VPLEX back-end paths are healthy by issuing the "connectivity validate-be" command from the VPLEX CLI. Ensure that there are no errors or connectivity issues to the back-end storage devices. Resolve any error conditions with the back-end storage before proceeding. Example output showing desired back-end status:

Figure 17: Connectivity Validation

6. Follow standard ProtectPoint for File System procedures to perform a full restore of the host file system(s). The ProtectPoint File System Agent 2.0 Installation and Administration guide located at https://support.emc.com contains the detailed full restore steps.

7. Reattach the second mirror leg:

RAID-1:
device attach-mirror -m <2nd mirror leg to attach> -d /clusters/<local cluster name>/devices/<existing RAID-1 device>

Distributed RAID-1:
device attach-mirror -m <2nd mirror leg to attach> -d /clusters/<cluster name>/devices/<existing distributed RAID-1 device>

Note: The device you are attaching in this step will be overwritten with the data from the newly restored source device.

Note: Full mirror synchronization is not required prior to restoring access to virtual volumes. VPLEX will synchronize the second mirror leg in the background while using the first mirror leg as necessary to service reads to any unsynchronized blocks.

8. As necessary, restore host access to the VPLEX volume(s). Use the information you recorded in step 2 to ensure the same LUN numbering when you restore the virtual volume access.

If the virtual volume is built from a local RAID 1 device:
/clusters/<local cluster name>/exports/storage-views> addvirtualvolume -v storage_view_name/ -o (lun#, device_Symm0191_065_1_vol/) -f

If the virtual volume is built from a distributed RAID 1 device:
/clusters/<remote cluster name>/exports/storage-views> addvirtualvolume -v storage_view_name/ -o (lun#, distributed_device_name_vol) -f
/clusters/<local cluster name>/exports/storage-views> addvirtualvolume -v storage_view_name/ -o (lun#, distributed_device_name_vol) -f

The lun# is the previously recorded value from step 2 for each virtual volume.

Note: Some hosts and applications are sensitive to LUN numbering changes.

Note: For pre-5.2 code, EMC recommends waiting at least 30 seconds after removing access from a storage view before restoring access. This is done to ensure that the VPLEX cache has been cleared for the volumes. The array-based restore will likely take more than 30 seconds, but if you are scripting, be sure to add a pause.

9. Rescan devices and restore paths (powermt restore) on hosts.

10. Mount devices (if mounts are used).

11. Restart applications.
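Steps 4 and 7 above are the only RAID-1-specific additions to the RAID-0 workflow, and they lend themselves to scripting because the same device and mirror-leg names are used in both. The sketch below simply builds the detach and attach command strings in the order the procedure requires; the device names are placeholders, and executing the commands (and handling any prompts) is left to whatever VPlexcli access method your environment uses.

from dataclasses import dataclass

@dataclass
class Raid1RestoreTarget:
    """One VPLEX RAID-1 (or distributed RAID-1) device being restored."""
    device_context: str     # e.g. /clusters/cluster-1/devices/<existing RAID-1 device>
    mirror_to_detach: str   # the leg that is NOT backed by the restored VMAX3 volume

def detach_commands(targets):
    """Commands to run after cache invalidation and before the array-based
    restore (step 4). The -i and -f options discard the detached leg and
    suppress the confirmation prompt, as in the procedure above."""
    return ["device detach-mirror -m {0} -d {1} -i -f".format(
                t.mirror_to_detach, t.device_context) for t in targets]

def attach_commands(targets):
    """Commands to run after the ProtectPoint full restore completes (step 7).
    The reattached leg is overwritten and resynchronized in the background."""
    return ["device attach-mirror -m {0} -d {1}".format(
                t.mirror_to_detach, t.device_context) for t in targets]

if __name__ == "__main__":
    # Placeholder names only - substitute your own device and mirror-leg names.
    targets = [Raid1RestoreTarget(
        device_context="/clusters/cluster-1/devices/device_Symm0191_065_1",
        mirror_to_detach="device_Symm0345_0A1_1")]
    for command in detach_commands(targets) + attach_commands(targets):
        print(command)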

Conclusion

ProtectPoint for File Systems continues to provide important backup efficiencies and time savings when EMC VPLEX is added to a ProtectPoint environment supporting VMAX3. While modifications to existing procedures are necessary, the changes are relatively simple to implement. In addition to using ProtectPoint technology for backup and restore, VPLEX customers can also access Recovery Images and perform QA, Test, and Development operations through VPLEX. The VPLEX UI, CLI, and RESTful API provide a variety of methods to allow ProtectPoint to deliver high business value as the premier backup and restore offload technology on the market.

References

The following reference documents are available at Support.EMC.com:
 White Paper: Workload Resiliency with EMC VPLEX
 VPLEX Administrators Guide
 VPLEX Configuration Guide
 VPLEX SolVe Desktop from http://support.emc.com
 EMC ProtectPoint: A Technical Overview
 EMC ProtectPoint: Solutions Guide
 ProtectPoint: Implementation Guide
 EMC ProtectPoint File System Agent with VMAX3 – Backup & Recovery Best Practices for Oracle on ASM
 ProtectPoint File System Agent 2.0 Installation and Administration Guide