
SAN

CLARiiON
Agenda:
Introduction
Hardware overview
Software overview
Clariion Management
Clariion Configuration
Clariion Objects
Clariion Applications

Clariion Timeline


All members of the CX family have a similar architecture. The main differences are the number of front-end and back-end ports, the CPU types and speeds, and the amount of memory per SP.

Clariion Hardware

CLARiiON Hardware Architecture


Delivering Data and Application Availability
Fully redundant architecture: SPs, cooling, data paths, SPS
Non-stop operation: online software upgrades, online hardware changes, no single points of failure

Tiered capacity: FC and ATA disks, from five to 480 disks

Flexibility: mix drive types and RAID levels (0, 1, 1+0, 3, 5), up to 16 GB memory, dual I/O paths with non-disruptive failover

Continuous diagnostics: data and system diagnostics, CLARalert

Advanced data integrity: mirrored write cache, write cache de-staged to disk (the vault) on power failure

Clariion Architecture

CLARiiON architecture is based on intelligent Storage Processors (SPs) that manage the physical drives on the back-end and service host requests on the front-end. Depending on the model, each Storage Processor includes either one or two CPUs. The Storage Processors communicate with each other over the CLARiiON Messaging Interface (CMI). Both the front-end connection to the host and the back-end connection to the physical storage are 2 Gb/s or 4 Gb/s Fibre Channel.

CLARiiON Features
Data Integrity: how CLARiiON keeps data safe (mirrored write cache, vault, etc.)
Data Availability: ensuring uninterrupted host access to data (hardware redundancy, path failover software (PowerPath), error reporting capability)
CLARiiON Performance: what makes CLARiiON a great performer (cache, dual SPs, dual/quad back-end FC buses)
CLARiiON Storage Objects: a first look at LUNs and access to them (RAID Groups, LUNs, MetaLUNs, Storage Groups)

Modular Building Blocks in the Storage System


The CLARiiON storage system is based upon a modular architecture. There are four building blocks in a Clariion.

DPE - Disk Processor Enclosure: contains both disks and storage processors

DAE - Disk Array Enclosure: contains disks only

SPE - Storage Processor Enclosure: contains storage processors only

SPS - Standby Power Supply: provides battery backup protection

The DPE houses the Storage Processor(s) and the first set of Fibre Channel disks. The DPE includes:
Two power supplies, each with a power input connector fed by an SPS
Two Storage Processors that include the SP and LCC functionality; each SP has memory and one or more processors
Back-end ports, front-end ports, a serial port, and an Ethernet management port

DAE (Disk Array Enclosure)

Disk status LEDs: green for connectivity (blinks during disk activity), amber for fault
Enclosure status LEDs: green = power, amber = fault


SPE (Storage Processor Enclosure)


Front View of SPA

SPE
Rear view of SPE

SPS (Standby Power Supplies)

The CLARiiON is powered on or off using the switch on the SPS. The RJ11 connection goes to the Storage Processor and is used to communicate loss of AC power, signaling the SP to begin the vault operation. Once the vault operation is complete, the SP signals the SPS that it is OK to remove AC power.
Note: Until the batteries are fully charged, write caching will be disabled.

DAE-OS Front view

The DAE-OS contains slots for 15 dual-ported Fibre Channel disk drives. The first five drives are referred to as the vault drives.
Disks 0-3 are required to boot the Storage Processors
Disks 0-4 are required to enable write caching
These disks must remain in their original slots!
The DAE-OS enclosure must be connected to bus zero and assigned enclosure address 0.

Private Space

Private space on Vault/Code Drives


The first five drives in the DAE are called the code drives; they are also used for vaulting. 6.5 GB of each code drive is reserved to store the FLARE image, the SP A and SP B boot images, the PSM LUN, and the vault area. FLARE is triple mirrored; the PSM LUN is triple mirrored.
Vault: the vault is a reserved area found on the first nine disks of the DPE in the FC series and the first five disks of the DPE in the CX series. Data in the write cache is dumped to the vault area in a power-failure emergency. Once the system is powered back on, the vault transfers the dumped data back to the cache.
PSM LUN: the Persistent Storage Manager LUN, created at the time of initialization by Navisphere. The PSM is a hidden LUN where configuration information and the Access Logix database are stored. It resides on the first three disks of the code drives. Both SPs can access the single PSM and update themselves with new configurations via the CLARiiON Messaging Interface (CMI).

Clariion Operating Environment

The CLARiiON array's boot operating system is either Windows NT or Windows XP, depending on the processor model. After booting, each SP executes the FLARE software. FLARE manages all functions of the CLARiiON storage system (provisioning, resource allocation, memory management, etc.).
Access Logix is optional software that runs within the FLARE operating environment on each Storage Processor (SP); it is used for LUN masking.
Navisphere provides a centralized tool to monitor, configure, and analyze the performance of CLARiiON storage systems. A CLARiiON can also be managed as part of EMC ControlCenter, allowing full end-to-end management.
Other array software includes SnapView, MirrorView, and SAN Copy.

Clariion Management

Basic Clariion Management


EMC Navisphere Manager
Browser-based
Manages multiple storage systems and multiple hosts
Manages RAID Groups, LUNs, and advanced functionality (Storage Groups, metaLUNs, SnapView, MirrorView, SAN Copy, etc.)
Relies on the host agent and SP agent
Single Point of Management (SPoM)
EMC Navisphere CLI / Secure CLI
Manages the storage system, RAID Groups, LUNs, and advanced functionality (examples below)
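As an illustration, a few Secure CLI commands of the kind used for day-to-day management; replace <SP_IP>, the credentials, and the LUN number with values for your array:
# naviseccli -h <SP_IP> -user admin -password <password> -scope 0 getagent          (report SP model, revision, and agent information)
# naviseccli -h <SP_IP> -user admin -password <password> -scope 0 getlun 0          (show properties of LUN 0)
# naviseccli -h <SP_IP> -user admin -password <password> -scope 0 storagegroup -list (list Storage Groups and their LUN/host assignments)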

Software Components
Array software: Base (FLARE) code (with or without Access Logix), array agent, Management Server, Management UI, SnapView, MirrorView, SAN Copy
Management station software: Internet Explorer or Netscape, Java, Navisphere Management UI, ClarAlert
Host software: Navisphere Host Agent, HBA drivers, PowerPath
Note: The Navisphere UI may run either on the management station or on the array.

Initializing a CLARiiON
Initializing an array refers to setting the TCP/IP network parameters and establishing domain security. The array can be initialized using a serial connection and a point-to-point network (default IP: http://192.168.1.1/setup). Here we set the network parameters (IP address, hostname, subnet mask, gateway, and the peer SP IP for SP A/B).
Further array configuration is performed using either the GUI or the CLI after the array has been initialized:
Array name, access control, Fibre Channel link speeds, etc.
Additional domain users and Privileged User Lists
Read and write cache parameters
Storage objects: RAID Groups, LUNs, Storage Groups
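Once the array is reachable over IP, the network settings can be checked from the Secure CLI. The networkadmin command shown here is a sketch; verify its switches against the Navisphere CLI reference for your FLARE release:
# naviseccli -h <SP_IP> -user admin -password <password> -scope 0 networkadmin -get   (display the SP's IP address, subnet mask, and gateway)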

Component Communication in managing the Clariion

Clariion Management
In-band management: the Navisphere GUI/CLI on a management host sends requests to the Navisphere host agent on an attached server, which converts them between TCP/IP and SCSI and passes them over the Fibre Channel fabric to FLARE on the array.

Out-of-band management: the Navisphere GUI/CLI on a management host communicates over TCP/IP (the RJ-45 management port) directly with the management server and Navisphere agent running alongside FLARE on each SP.

Clariion Management

A domain contains one master storage system; the other storage systems in the domain are treated as slaves.
The storage domain can be given a name (default name: Domain Default).
Each storage system can be a member of only one domain.
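For reference, domain membership can be listed from the Secure CLI; the domain command is believed to be part of the standard CLI, but check it against the reference for your release:
# naviseccli -h <SP_IP> -user admin -password <password> -scope 0 domain -list   (list the master and member systems of the storage domain)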

Navisphere Users
There are three user roles:
Administrator: can do anything, including creating and deleting users
Manager: can fully manage the array but cannot create, modify, or delete other users
Monitor: can only view
There are two scopes: Local and Global

Classic Navisphere CLI used a privileged user list to authenticate user requests. In this example the Array Agent's privileged user list does not include user1, and therefore the request is denied.

The privileged user list now includes user1 as a privileged user when logged in at IP address 10.128.2.10.

The Host Agent also uses its own privileged user list. This illustrates an attempt by the Management Server to restart the Host Agent on a computer whose IP address is 10.128.2.10. The Host Agent will refuse the command unless the array is listed as a privileged user in agent.config.

While an SP does not have a login user ID, the default user name system is used for the SP. The format of a privileged user entry in the Host Agent's agent.config file is system@<IP Address>.
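A minimal sketch of such entries in agent.config; the file path (commonly /etc/Navisphere/agent.config on UNIX hosts) and the second entry are assumptions for illustration:
user system@10.128.2.10          (allows the SPs at this address to issue commands to the Host Agent)
user root@adminhost              (hypothetical entry allowing root on a management host)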

Clariion configuration
Introduction to Navisphere Manager
Configure the CLARiiON
CLARiiON security (domain configuration, creating user accounts, etc.)
Configure cache, verify installed software, Access Logix, network configuration, verify SP WWNs, set SP agent privileged users, etc.
Create RAID Groups
Bind LUNs and MetaLUNs
Initiator records and host registration
Access Logix
Create Storage Groups

RAID Groups and LUNs


RAID Group: a RAID Group is a collection of physical drives from which an administrator may bind one or more LUNs. Once the first LUN is bound within a RAID Group, all other LUNs in that RAID Group share the same protection scheme. Using the Navisphere GUI and/or CLI we can administer RAID Groups (create, expand, destroy, etc.).
LUN: a LUN is a Logical Unit. The process of creating a LUN is called binding. When presented to a host it is assigned a Logical Unit Number and appears to the host as a disk drive. Using the Navisphere GUI and/or CLI we can administer LUNs (bind a LUN, change LUN properties, unbind a LUN, etc.); a CLI sketch follows.
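A minimal Secure CLI sketch of creating a RAID Group and binding a LUN; the RAID Group ID, disk positions (bus_enclosure_disk), LUN number, and size are example values, and credentials are omitted for brevity:
# naviseccli -h <SP_IP> createrg 0 0_0_5 0_0_6 0_0_7 0_0_8 0_0_9     (create RAID Group 0 from five disks)
# naviseccli -h <SP_IP> bind r5 10 -rg 0 -sq gb -cap 50 -sp a        (bind LUN 10 as RAID 5, 50 GB, default owner SP A)
# naviseccli -h <SP_IP> getlun 10                                     (verify the new LUN's state and properties)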

RAID groups and LUNs


MetaLUN: a collection of individual LUNs that act together and are presented to a host or application as a single storage entity. Created by taking new and/or pre-existing LUNs and logically connecting them together. Allows existing volumes to be expanded while online: concatenated, striped, or a combination of striped and concatenated.

MetaLUN Terminology
FLARE LUN (FLU): a logical partition of a RAID Group; the basic logical unit managed by FLARE, and the building block for MetaLUN components.
MetaLUN: a storage volume consisting of two or more FLUs whose capacity grows dynamically by adding FLUs to it.
Component: a group of one or more FLARE LUNs that is concatenated to a MetaLUN as a single or striped unit.
Base LUN: the original FLARE LUN from which the MetaLUN is created. The MetaLUN is created by expanding the base LUN's capacity.
Note: The MetaLUN is presented to the host in exactly the same way it was before the expansion, i.e. the name, LUN ID, SCSI ID, and WWN are the same; only the capacity increases. To expand a LUN, right-click the LUN and select Expand; this invokes the Storage Wizard (a CLI sketch follows).
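LUN expansion can also be driven from the CLI. The metalun command exists in Navisphere CLI, but the switches below are from memory and should be treated as an assumption to verify against the CLI reference:
# naviseccli -h <SP_IP> metalun -expand -base 10 -lus 11 12     (expand base LUN 10 with FLARE LUNs 11 and 12; additional switches select striped vs. concatenated expansion)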

LUN Mapping


At the SCSI level, Fibre Channel allows multiple LUNs behind a single target. On Solaris, for the sd driver to see them, the LUNs must be mapped in the /kernel/drv/sd.conf file and the driver updated with # update_drv -f sd
Example sd.conf entries:
name="sd" parent="lpfc" target=0 lun=1;
name="sd" parent="lpfc" target=0 lun=2;
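After updating sd.conf, the new devices can usually be discovered without a reboot; a short Solaris sketch, assuming the standard devfsadm and format utilities:
# devfsadm          (create device nodes for the newly mapped LUNs)
# echo | format     (list disks non-interactively; the new LUNs should appear)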

Access Logix

Access Logix
What Access Logix is
Why Access Logix is needed
Configuring Access Logix
Storage Groups
Configuring Storage Groups

Access Logix
Access Logix is a licensed software package that runs on each Storage Processor. SAN switches allow multiple hosts physical access to the same SP ports; without Access Logix, all hosts would see all LUNs. Access Logix solves this problem with LUN masking, implemented through Storage Groups. It controls which hosts have access to which LUNs and allows multiple hosts to effectively share a CLARiiON array.

Initiator Records

Initiator records are created during Fibre Channel login: the HBA performs a port login to each SP port during initialization. Initiator registration records are stored persistently on the array. LUNs are masked to all records for a specific host. Access Control Lists map LUN UIDs to the set of initiator records associated with a host.

Manual and Auto Registration


Automatic registration: registration is performed automatically when an HBA is connected to an array.
There are two parts to the registration process:
Fibre Channel port login (PLOGI), where the HBA logs in to the SP port; this creates initiator records for each connection, which can be viewed in Navisphere under Connectivity Status
Host Agent registration, where the host agent completes the initiator record information with host information
Manual registration: the Group Edit button on the Connectivity Status main screen allows manual registration of a host that is logged in. On the FC series, manual registration is required. On the CX series, registration is done automatically if the host agent is installed on the fabric hosts.

Storage Groups
Managing Storage Groups (a CLI sketch follows):
Creating Storage Groups
Viewing and changing Storage Group properties
Adding and removing LUNs
Connecting and disconnecting hosts
Destroying Storage Groups
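A minimal Secure CLI sketch of Storage Group management; the group name, host name, and LUN numbers are example values, and credentials are omitted for brevity:
# naviseccli -h <SP_IP> storagegroup -create -gname Prod_SG                       (create a Storage Group)
# naviseccli -h <SP_IP> storagegroup -addhlu -gname Prod_SG -hlu 0 -alu 10        (present array LUN 10 to the group as host LUN 0)
# naviseccli -h <SP_IP> storagegroup -connecthost -host prodhost -gname Prod_SG   (connect a registered host to the group)
# naviseccli -h <SP_IP> storagegroup -list -gname Prod_SG                         (verify the group contents)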

LUN Masking with Access logix


All LUNs are accessible through all SP ports
LUN ownership is active/passive
LUNs are assigned to Storage Groups
When a host is connected to a Storage Group, it has access to all LUNs within that Storage Group

Access Logix Switch Zoning


Zoning determines which hosts see which ports on a storage system (fabric-level access control); multiple hosts may be zoned to share the same ports
Access Logix determines which LUNs are accessible to which host (LUN-level access control)
Zoning and Access Logix are used together (a zoning sketch follows)
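As an illustration only, a single-initiator zone on a Brocade switch pairing one host HBA with SP A port 0; the zone name, configuration name, and WWPNs are hypothetical:
zonecreate "prod_hba1_spa0", "10:00:00:00:c9:aa:bb:cc; 50:06:01:60:41:e0:12:34"
cfgadd "fabric_cfg", "prod_hba1_spa0"
cfgenable "fabric_cfg"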

Access Logix Limits


A host may be connected to only one Storage Group per array; if there are multiple arrays in the environment, the host may be connected to one Storage Group in each array
The number of hosts per storage system varies based on the number of connections
Maximum of 256 LUNs in a Storage Group
A Storage Group is local to one storage system
The host agent must be running; if not, manually register the initiators

User must be authorized to manage Access Logix

Persistent Binding

The c# refers to the HBA instance, the t# refers to the target instance (an SP's front-end port), and the d# is the SCSI address assigned to the LUN. The HBA number and the SCSI address are static, but the t# by default is assigned in the order in which the targets are identified during the configuration process at system boot, and the order in which a target is discovered can differ between reboots. Persistent binding binds the WWN of an SP port to a t# so that every time the system boots, the same SP port on the same array has the same t#.

Persistent Binding
HBA configuration files: /kernel/drv/<driver>.conf (lpfc.conf for Emulex)
Persistent binding maps an SP port WWPN to a controller/target address, e.g. 500601604004b0c7:lpfc0t2
Disable auto-mapping in lpfc.conf (automap=0); a sketch follows
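A sketch of the corresponding lpfc.conf entries; the parameter names reflect the common Emulex driver syntax but should be checked against the documentation for your lpfc driver version:
automap=0;
fcp-bind-WWPN="500601604004b0c7:lpfc0t2";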

Power path

What is PowerPath

Host-based software that resides between the application and the SCSI device driver
Provides intelligent I/O path management, transparent to the application
Automatic detection of, and recovery from, host-to-array path failures

The Value of Power path


Supports Windows and UNIX servers
Improves SAN performance
Provides path failover: allows applications to continue to access LUNs if any component in the I/O path fails; requires careful planning and design to eliminate single points of failure (multiple HBAs, fabric zoning to provide multiple paths)
Provides load balancing: balances I/O requests across HBAs and paths, but does not balance I/O across Storage Processors
Supports EMC Symmetrix, CLARiiON, and some third-party storage systems
PowerPath creates a path set for each LUN and creates a pseudo-device that may be used in place of the native device

EMC PowerPath
(Diagram: the PowerPath pseudo-device, e.g. emcpower0, sits between the application and the SCSI device driver and presents LUN 0 to the host over multiple paths.)

EMC Power path


Clariion Architecture
CLARiiON supports an active-passive architecture: LUNs are owned by a Storage Processor. When LUNs are bound, a default LUN owner is assigned. In the event of an SP or path failure, LUNs can be trespassed to the peer Storage Processor.
A trespass is a temporary change in ownership: when the storage system is powered on, LUN ownership returns to the default owner (a CLI sketch follows).
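As an illustration, LUN ownership can also be moved and restored manually. The trespass command is part of Navisphere CLI, though the exact forms below should be verified for your release; the LUN number is an example:
# naviseccli -h <SPB_IP> trespass lun 10     (force SP B to take ownership of LUN 10)
# naviseccli -h <SPA_IP> trespass mine       (return to SP A all LUNs for which it is the default owner)
# powermt restore                            (on the host, restore paths after a failed path has been repaired)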

Path Failover

EMC power path

Power path utility kit


The PowerPath Utility Kit is intended for host environments where there is only a single HBA and a need to perform SP failover, but no load balancing or HBA failover. Zoning configuration example: HBA 1 to SP A port 0, HBA 1 to SP B port 0.

Power Path Administration


PowerPath settings on the CLARiiON for each host: Tools -> Failover Setup Wizard (enable arraycommpath and set the failover mode to 1 for PowerPath).
PowerPath administration provides both a GUI (Windows) and a CLI (all platforms).
CLI administration:
1. Install the PowerPath package on the hosts
2. Update the PATH variable so the PowerPath commands installed under /etc are found
3. Add the PowerPath license:
# /etc/emcpreg -add <license key>
# /etc/emcpreg -list     (list the installed PowerPath license details)
4. Verify that PowerPath devices are configured on the host:
# powermt display dev=all
5. Configure any missing logical devices:
# powermt config
6. Check for (and optionally remove) dead paths:
# powermt check
7. Restore paths after they have been repaired:
# powermt restore

Clariion Applications

Clariion Applications
SnapView Snapshots
SnapView Clones
SAN Copy
Mirror Copy (MirrorView)

SnapView Overview

SnapView helps create point-in-time copies of data, providing support for consistent online backup and data replication. The copies can also be used for purposes other than backup (testing, decision-support scenarios).
SnapView components:
Snapshots: use pointer-based replication and copy-on-first-write technology; make use of a reserved LUN pool to save data chunks; have three managed objects: snapshot, session, and reserved LUN pool
Clones: make full copies of the source LUN; track changes to the source LUN and clones in the fracture log; have three managed objects: clone group, clone, and clone private LUN
Clones and snapshots are managed by Navisphere Manager and Navisphere CLI.

Snapview Snapshots

Snapshot definition: a SnapView snapshot is an instantaneous, frozen, virtual copy of a LUN on a storage system.
Instantaneous: snapshots are created instantly; no data is copied at creation time
Frozen: the snapshot will not change unless the user writes to it; the original view is available by deactivating a changed snapshot
Virtual copy: not a real LUN, but made up of pointers, original blocks, and saved blocks; uses a copy-on-first-write (COFW) mechanism; requires a save area, the reserved LUN pool

Snapview Snapshot
SnapView snapshot components: reserved LUN pool, SnapView snapshot, SnapView session, production host, backup host, source LUN, copy on first write (COFW), rollback

Snapview Snapshot Components


Reserved LUN pool: a collection of LUNs that supports the pointer-based design of SnapView. The total number of reserved LUNs is limited; the limit is model-dependent.
SnapView snapshot: a defined virtual device that is presented to a host and provides visibility into a running session. A snapshot is defined under a source LUN and can be assigned to only a single session at a time.
Snapshot session: the process of defining a point-in-time designation by invoking copy-on-first-write activity for updates to the source LUN. Starting a session assigns a reserved LUN to the source LUN. Until a snapshot is activated against the session, the point-in-time copy is not visible to any server; at any time a snapshot can be activated for the session to present the point-in-time image to a host. Each source LUN can have up to eight sessions.

Snapview Snapshot Components


Production host: the server where the customer application executes; source LUNs are accessed from the production host.
Backup host: the host where the backup process occurs; the backup media is attached to the backup host, and snapshots are accessed from it.
Source LUN: the LUN containing the production data on which we want to start a SnapView session and, optionally, activate a snapshot for that session.
COFW: the copy-on-first-write mechanism saves the original data area from the source LUN into the reserved LUN area when that data block in the active file system is about to be changed.
Rollback: enables recovery of the source LUN by copying data from the reserved LUN back to the source LUN.

Once a session starts, the SnapView mechanism tracks changes to the LUN, and reserved LUN pool space is required.
Source LUNs cannot share reserved (private) LUNs.

Managing Snapshots
Procedure to create and manage snapshots (a Navisphere CLI sketch follows):
1. Configure the reserved LUN pool: Reserved LUN Pool -> Configure; add LUNs for both SPs
2. Create a Storage Group for the production host and add the source LUN
3. Create a file system on the source LUN and add data
4. Create a snapshot of LUN 0: Storage Group -> Source LUN -> SnapView -> Create Snapshot
5. Start a snap session on LUN 0: Storage Group -> Source LUN -> SnapView -> Start SnapView Session
6. Activate the snapshot: SnapView -> Snapshots -> select the snapshot -> Activate Snapshot (select a session for that snapshot)
7. Create a Storage Group for the backup host and add the snapshot (virtual LUN)
8. Mount the emc device of the snapshot LUN on the backup host
9. Verify the data
10. Make some modifications from the production host
11. Unmount the production LUN
12. Perform a rollback of the SnapView session: SnapView -> Sessions -> select the session -> Start Rollback
13. Remount the production LUN and observe the old data
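A rough Secure CLI equivalent of steps 4-6; the snapview commands exist in Navisphere CLI, but the switch names shown are from memory and should be verified against the CLI reference. The snapshot and session names are examples:
# naviseccli -h <SP_IP> snapview -createsnapshot 0 -snapshotname lun0_snap            (create a snapshot of source LUN 0)
# naviseccli -h <SP_IP> snapview -startsession lun0_sess -lun 0                       (start a SnapView session on LUN 0)
# naviseccli -h <SP_IP> snapview -activatesnapshot lun0_sess -snapshotname lun0_snap  (present the point-in-time image via the snapshot)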

Snapview Clones

SnapView Clones


SnapView Clones overview: a SnapView clone is a full copy of a LUN, internal to a storage system. Clones take time to populate (synchronize); once synchronization is complete, the clone is independent of the source.

Two-way synchronization:
Clones may be incrementally updated from the source LUN
Source LUNs may be incrementally updated from a clone
A clone must be exactly the same size as the source LUN

Snapshots and Clone Features

Managing Snapview Clones


Procedure to create clones:
1. Prepare the clone private LUN (CPL) and fracture log: Storage System -> SnapView -> Clone Feature Properties (add the private LUNs)
Fracture log: a bitmap located in SP memory that tracks modified extents between the source LUN and each clone; allows incremental resynchronization in either direction
Clone private LUN: one private LUN for each SP; must be 128 MB (250,000 blocks) or greater; used for all clones owned by the SP; no clone operations are allowed until the CPL is created; contains the persistent fracture logs

Managing Snapview Clones


2. Create a Storage Group for a host and add the source LUN
3. Create a file system on the emc device and add data
4. Create a clone group for the source LUN: Storage System -> SnapView -> Create Clone Group
5. Add a clone to the clone group and wait for it to reach the Synchronized state: SnapView -> Clones -> Clone Group -> Add Clone
6. Fracture the clone: SnapView -> Clones -> clone -> Fracture
7. Add the clone to the backup host's Storage Group
8. Mount the emc device of the clone on the backup host and check the data
9. Add some data to the clone through the backup host
10. Initiate a reverse synchronization and observe the updated data on the production side
(A Navisphere CLI sketch follows.)
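A very rough CLI sketch of the clone operations; treat the switch names and the clone ID value as assumptions to verify against the SnapView CLI reference:
# naviseccli -h <SP_IP> snapview -createclonegroup -name lun0_cg -luns 0                   (create a clone group for source LUN 0)
# naviseccli -h <SP_IP> snapview -addclone -name lun0_cg -luns 21                          (add LUN 21 as a clone; synchronization starts)
# naviseccli -h <SP_IP> snapview -fractureclone -name lun0_cg -cloneid 0100000000000000    (fracture the clone once synchronized)
# naviseccli -h <SP_IP> snapview -reversesyncclone -name lun0_cg -cloneid 0100000000000000 (copy the clone's contents back to the source)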

Mirror Copy

MirrorView
Agenda:
Types of mirror copy: synchronous (MirrorView/S) and asynchronous (MirrorView/A)
How MirrorView makes remote copies of LUNs
The required steps in MirrorView administration
MirrorView with SnapView

Mirror Copy overview


Optional storage-system-based software designed as a disaster recovery (DR) solution for mirroring local production data to a remote/disaster recovery site.
MirrorView/S is a synchronous product that mirrors data between local and remote storage systems.
MirrorView/A is an asynchronous product that offers extended-distance replication based on a periodic incremental-update model.
Business requirements determine the structure of the DR solution: the business decides how much data loss is tolerable and how soon the data must be accessible again in the event of a disaster.

Mirror copy overview


It is a requirement that critical business information always be available. To protect this information, a DR plan must be in place to safeguard against any disaster.
Recovery objectives: recovery objectives are service levels that must be met to minimize the loss of information and revenue in the event of a disaster. The criticality of the business application and information defines the recovery objectives. The terms commonly used to define them are:
Recovery point objective (RPO)
Recovery time objective (RTO)

Recovery point objective: the RPO defines the amount of data loss that is acceptable in the event of a disaster, typically expressed as a duration of time. Some applications may have zero tolerance for data loss (example: financial applications).

Mirror copy overview


Recovery time objective (RTO): the RTO is the amount of time required to bring the business application back online after a disaster occurs. Mission-critical applications may be required to be back online in seconds, without any noticeable impact to end users.

Replication Models
Replication solutions can be broadly categorized as synchronous and asynchronous.
Synchronous replication model: each server write on the primary side is written concurrently to the secondary site. The RPO is zero, since each I/O is transferred to the secondary before the acknowledgement is sent to the server, so data at the secondary site is exactly the same as data at the primary site at the time of the disaster.

Asynchronous replication model


Asynchronous replication models decouple the remote replication of the I/O from the acknowledgement to the server. This allows longer-distance replication because application write response time is not dependent on the latency of the link. Periodic updates happen from the primary to the secondary at a user-determined frequency.

Bidirectional Mirroring

MirrorView Terminology and Data States

Primary image: the LUN that contains production data and whose contents are replicated to the secondary image.
Secondary image: a LUN that contains the mirror of the primary image LUN; it must reside on a different CLARiiON than the primary image.
Fracture: a condition in which I/O is not mirrored to the secondary image (admin fracture, system fracture).
Promote: the operation by which the administrator changes an image's role from secondary to primary; as part of this operation the previous primary becomes the secondary.
Data states:
Out-of-sync: a full synchronization is needed
In-sync: the primary LUN and secondary LUN contain identical data
Consistent: the secondary image is a byte-for-byte duplicate of the primary image, either now or at some point in the past; transitions from this state go to either the synchronizing state or the in-sync state
Synchronizing: a mirror synchronization operation is in progress

MirrorView/S Fracture Log and Write Intent Log
Fracture log: resident in SP memory, hence volatile. Tracks changed regions of the primary LUN while the secondary is unreachable; when the secondary becomes reachable again, the fracture log is used to resynchronize the data incrementally. The fracture log is not persistent unless the write intent log is used.

Write intent log: optional, enabled per mirrored primary LUN. Persistently stored on private LUNs and used to minimize recovery in the event of a failure on the primary storage system. Two LUNs of at least 128 MB each.

Comparison between SnapView clones and MirrorView/Synchronous

MirrorView Mirror Creation


Connect the storage systems: physically by zoning, logically via the Manage MirrorView Connections dialog
Create the remote mirror: designate a LUN to be the primary LUN, and specify a mirror name and a mirror type

Add secondary image(s)

The mirror is created in the inactive state and quickly changes to active

Remote mirror view connection

Configure and Manage Mirror view/S


1. Add the source LUN to a storage group, create a file system on it, and store some data
2. Manage mirror connections: Storage System -> MirrorView -> Manage Mirror Connections
3. Allocate the write intent log: Storage System -> MirrorView -> Allocate Write Intent Log
4. Create the remote mirror: Storage System -> MirrorView -> Create Remote Mirror
5. Add the secondary image: Remote Mirrors -> select the mirror -> Add Secondary Image
6. Promote the secondary, add the LUN to any DR storage group, and verify the data
(A Navisphere CLI sketch follows.)
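A very rough CLI sketch of the MirrorView/S steps; the mirror -sync commands exist in Navisphere CLI, but the switches and the image UID placeholder below are assumptions to verify against the MirrorView CLI reference:
# naviseccli -h <primary_SP> mirror -sync -create -name lun0_mirror -lun 0                                 (create the remote mirror on the primary LUN)
# naviseccli -h <primary_SP> mirror -sync -addimage -name lun0_mirror -arrayhost <secondary_SP_IP> -lun 40 (add the secondary image on the remote array)
# naviseccli -h <secondary_SP> mirror -sync -promoteimage -name lun0_mirror -imageuid <secondary_image_UID> (promote the secondary image at the DR site)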

Mirror view with Snap view


SnapView is CLARiiON's storage-system-based replication software for local replicas; it supports both snapshots and clones. When used with MirrorView, SnapView can provide local replicas of primary and/or secondary images. This allows secondary access to data at the primary location, the secondary location, or both, without taking production data offline for backup, testing, etc.

SAN COPY

EMC SAN COPY


What is SAN COPY?

SAN Copy is optional software available on the storage system. It enables the storage system to copy data at the block level directly across the SAN, from one storage system to another or within a single CLARiiON system.
SAN Copy can move data from one source to multiple destinations concurrently. SAN Copy connects through a SAN, and also supports protocols that let you use an IP WAN to send data over extended distances.

SAN Copy is designed as a multipurpose replication product for data migrations, content distribution, and disaster recovery (DR).
SAN Copy does not provide the complete end-to-end protection that MirrorView provides.

EMC SAN COPY

SAN COPY overview


Bulk data transfer: CLARiiON to/from CLARiiON, Symmetrix, and other vendors' storage
The source LUN may be a point-in-time copy: SnapView clone, SnapView snapshot, or a Symmetrix point-in-time copy
SAN-based data transfer: offloads host traffic (no host-to-host data transfer), higher performance with less traffic, OS independent
Full or incremental copies
The SAN Copy CLARiiON ports must be zoned to the attached storage system's ports, and the LUNs must be made available to the SAN Copy ports
SAN Copy cannot share an SP port with MirrorView

SAN COPY features and Benefits


Features:
Multiple sessions can run concurrently
A session may have multiple destinations
SAN Copy can be implemented over extended distances

SAN Copy has several benefits over host-based replication options: Performance is optimal because data is moved directly across the SAN. No host software is required for the copy operation because SAN Copy executes on a CLARiiON storage system. SAN Copy offers interoperability with many non-CLARiiON storage systems.

SAN Copy Creation Process Flow

SAN Copy Creation Process Flow


On the source storage system:
1. Prepare the source LUN with file system data
2. Create the reserved LUN pool

3. Configure SAN Copy connections: Storage System -> SAN Copy Connections; register each selected SAN Copy port with the ports of the peer storage systems
4. Once the registration process is complete, connect the SAN Copy port to a Storage Group on the peer CLARiiON storage system

On the destination storage system:
6. Create a LUN of the same size and create a storage group (e.g. SANCOPY)
7. Add the LUN to the storage group

SAN Copy Creation Process Flow


8. Assign the SAN Copy connections to a storage group: Storage Group -> SAN Copy Connections. Each SAN Copy port acts like a host initiator and can therefore connect to only one Storage Group at a time in a storage system.
On the source storage system:
9. Create a session: Storage System -> SAN Copy -> Create Session; select the source LUN from the local storage system and the destination LUN from the other storage system, and select a full session
10. Start the session
11. Add the destination LUN to any host storage group and verify the data by mounting it
12. Update the source LUN, create an incremental session, and verify the data

Thank You
