CLARiiON
Agenda
- Introduction
- Hardware overview
- Software overview
- CLARiiON Management
- CLARiiON Configuration
- CLARiiON Objects
- CLARiiON Applications
CLARiiON Timeline
All members of the CX family have a similar architecture. The main differences are the number of front-end and back-end ports, the CPU types and speeds, and the amount of memory per SP.
CLARiiON Hardware
CLARiiON Architecture
The CLARiiON architecture is based on intelligent Storage Processors (SPs) that manage the physical drives on the back end and service host requests on the front end. Depending on the model, each Storage Processor includes either one or two CPUs. The Storage Processors communicate with each other over the CLARiiON Messaging Interface (CMI). Both the front-end connection to the host and the back-end connection to the physical storage are 2 Gb/s or 4 Gb/s Fibre Channel.
CLARiiON Features
- Data Integrity: how CLARiiON keeps data safe (mirrored write cache, vault, etc.)
- Data Availability: ensuring uninterrupted host access to data (hardware redundancy, path failover software (PowerPath), error reporting capability)
- CLARiiON Performance: what makes CLARiiON a great performer (cache, dual SPs, dual/quad back-end FC buses)
- CLARiiON Storage Objects: a first look at LUNs and access to them (RAID Groups, LUNs, MetaLUNs, Storage Groups)
The DPE houses the Storage Processor(s) and the first set of Fibre Channel disks. The DPE includes:
- Two power supplies, each with a power input connector fed by an SPS
- Two Storage Processors that include SP and LCC functionality; each SP has memory and one or more processors
- Back-end ports, front-end ports, a serial port, and an Ethernet management port
Disk Status LEDs:
- Green = connectivity; blinks during disk activity
- Amber = fault
Enclosure Status LEDs:
- Green = power
- Amber = fault
DAE (Disk Array Enclosure)
SPE (Storage Processor Enclosure)
Rear view of SPE
The CLARiiON is powered on or off using the switch on the SPS (Standby Power Supply). The RJ11 connection to the Storage Processor is used to communicate loss of AC power and signals the SP to begin the vault operation. Once the vault operation is complete, the SP signals the SPS that it is OK to remove AC power. Note: until the batteries are fully charged, write caching is disabled.
The DAE-OS contains slots for 15 dual-ported Fibre Channel disk drives. The first five drives are referred to as the vault drives:
- Disks 0-3 are required to boot the Storage Processors
- Disks 0-4 are required to enable write caching
These disks must remain in their original slots! The DAE-OS enclosure must be connected to bus zero and assigned enclosure address 0.
Private Space
The CLARiiON array's boot operating system is either Windows NT or Windows XP, depending on the processor model. After booting, each SP executes FLARE software, which manages all functions of the CLARiiON storage system (provisioning, resource allocation, memory management, etc.). Access Logix is optional software that runs within the FLARE operating environment on each Storage Processor (SP); it is used for LUN masking. Navisphere provides a centralized tool to monitor, configure, and analyze the performance of CLARiiON storage systems. A CLARiiON can also be managed as part of EMC ControlCenter, allowing full end-to-end management. Other array software includes SnapView, MirrorView, and SAN Copy.
CLARiiON Management
Software Components
Array Software:
- Base (FLARE) code (with or without Access Logix)
- Array Agent
- Management Server
- Management UI
- SnapView, MirrorView, SAN Copy
Management Station Software:
- Internet Explorer or Netscape
- Java
- Navisphere Management UI
- ClarAlert
Host Software:
- Navisphere Host Agent
- HBA drivers
- PowerPath
Note: the Navisphere UI may run either on the management station or on the array.
Initializing a CLARiiON
Initializing an array refers to setting its TCP/IP network parameters and establishing domain security. Initialization can be done over a serial connection or a point-to-point network connection (default address: http://192.168.1.1/setup), where we set the network parameters: IP address, hostname, subnet mask, gateway, and peer IP (SP A/B). Further array configuration is performed with either the GUI or the CLI after the array has been initialized:
- Array name, access control, Fibre Channel link speeds, etc.
- Additional domain users and privileged user lists
- Read and write cache parameters
- Storage objects: RAID Groups, LUNs, Storage Groups
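As a quick sanity check after initialization, the array can be queried from a management host with Navisphere CLI. A minimal sketch, assuming the default IP 192.168.1.1 and placeholder admin credentials:

  # Confirm the SP answers management requests; reports agent and array info.
  naviseccli -h 192.168.1.1 -user admin -password password -scope 0 getagent
  # Review the network settings applied during initialization
  # (the networkadmin verb is assumed present in this CLI release).
  naviseccli -h 192.168.1.1 -user admin -password password -scope 0 networkadmin -get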
Clariion Management
In-Band Management (diagram): the Navisphere GUI on a management host communicates with FLARE on the array through the FC fabric.
Out-of-Band Management (diagram): the Navisphere GUI on a management host connects over TCP/IP to the SP's RJ-45 management port, where the management server and Navi agent sit in front of FLARE; the Navi agent converts SCSI calls to TCP/IP and TCP/IP to SCSI.
CLARiiON Management
A domain contains one master storage system; the other storage systems are treated as slaves. The storage domain can be given a name (default name: Domain Default). Each storage system can be a member of only one domain.
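Domain membership can also be checked from the CLI. A minimal sketch, assuming the domain verb of naviseccli and a placeholder SP address and credentials:

  # List the master and slave storage systems in this domain.
  naviseccli -h 10.127.1.10 -user admin -password password -scope 0 domain -list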
Navisphere Users
There are three user roles:
- Administrator: can do anything, including creating and deleting users
- Manager: can fully manage the array but cannot create, modify, or delete other users
- Monitor: can only view
There are two scopes: Local and Global.
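For illustration, creating a global monitor account from the CLI might look roughly like the sketch below; the security -adduser verb and its flag names are recalled from classic naviseccli and may differ by release, and every name shown is a placeholder:

  # Create a global-scope user with the monitor role (flag names are assumptions).
  naviseccli -h 10.127.1.10 -user admin -password password -scope 0 security -adduser -user jsmith -password secret -scope global -role monitor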
Classic Navisphere CLI used a privileged user list to authenticate user requests. If the Array Agent's privileged user list does not include user1, the request is denied.
The privileged user list now includes user1 as a privileged user when logged in at IP address 10.128.2.10.
The Host Agent also uses its own privileged user list. This illustrates an attempt by the Management Server to restart the Host Agent on a computer whose IP address is 10.128.2.10. The Host Agent will refuse the command unless the array is listed as a privileged user in agent.config.
While an SP does not have a login user ID, the default user name system is used for the SP. The format of the privileged user list in the Host Agent's agent.config file is system@<IP Address>.
CLARiiON Configuration
- Introduction to Navisphere Manager
- Configure the CLARiiON: security (domain configuration, creating user accounts, etc.), cache, verifying installed software, Access Logix, network configuration, verifying SP WWNs, setting SP agent privileged users
- Create RAID Groups
- Access Logix
- Create Storage Groups
MetaLUN Terminology
FLARE LUN (FLU): a logical partition of a RAID Group. FLUs are the basic logical units managed by FLARE and serve as the building blocks for MetaLUN components.
MetaLUN: a storage volume consisting of two or more FLUs whose capacity grows dynamically by adding FLUs to it.
Component: a group of one or more FLARE LUNs that are concatenated to a MetaLUN as a single or striped unit.
Base LUN: the original FLARE LUN from which the MetaLUN is created; the MetaLUN is created by expanding the base LUN's capacity.
Note: the MetaLUN is presented to the host exactly as it was before the expansion, i.e. the name, LUN ID, SCSI ID, and WWN are the same; only the capacity has increased.
To expand a LUN, right-click the LUN and select Expand; this invokes the Storage Wizard.
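Expansion can also be scripted. A minimal sketch with classic navicli, where the metalun flags are from memory and may differ by FLARE release, and the SP address and LUN numbers are placeholders:

  # Expand base LUN 5 with FLARE LUN 9 (expansion-type flags omitted;
  # defaults and flag names vary by release).
  navicli -h 10.127.1.10 metalun -expand -base 5 -lus 9
  # Show MetaLUN details, including components and capacity.
  navicli -h 10.127.1.10 metalun -list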
LUN Mapping
At the SCSI level, Fibre Channel allows multiple LUNs behind a single target. To make Solaris see them, map the LUNs in /kernel/drv/sd.conf and reload the driver with update_drv -f sd. Example:
  name="sd" parent="lpfc" target=0 lun=1;
  name="sd" parent="lpfc" target=0 lun=2;
Access Logix
- What Access Logix is
- Why Access Logix is needed
- Configuring Access Logix
- Storage Groups
- Configuring Storage Groups
Access Logix
Access Logix is a licensed software package that runs on each Storage Processor. SAN switches allow multiple hosts physical access to the same SP ports; without Access Logix, all hosts would see all LUNs. Access Logix solves this problem with LUN masking, implemented by creating Storage Groups. It controls which hosts have access to which LUNs, allowing multiple hosts to effectively share a CLARiiON array.
Initiator Records
Initiator records are created during Fibre Channel login:
- The HBA performs a port login to each SP port during initialization
- Initiator-registration records are stored persistently on the array
- LUNs are masked to all records for a specific host
- Access Control Lists map LUN UIDs to the set of initiator records associated with a host
There are two parts to the registration process:
- Fibre Channel port login (PLOGI), where the HBA logs in to the SP port. This creates initiator records for each connection, viewed in Navisphere under Connectivity Status.
- Host Agent registration, where the host agent completes the initiator record information with host information.
Manual Registration:
The Group Edit button on the Connectivity Status main screen allows manual registration of a host that is logged in. On FC-series arrays, manual registration is required; on CX-series arrays, registration is done automatically if the Host Agent is installed on the fabric hosts.
Storage Groups
Managing Storage Groups:
- Creating Storage Groups
- Viewing and changing Storage Group properties
- Adding and removing LUNs
- Connecting and disconnecting hosts
- Destroying Storage Groups
A CLI sketch of these operations appears below.
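A minimal sketch with classic navicli, where the SP address, group name, host name, and LUN numbers are placeholders:

  # Create the Storage Group.
  navicli -h 10.127.1.10 storagegroup -create -gname SG_oracle
  # Present array LUN (ALU) 5 to the group as host LUN (HLU) 0.
  navicli -h 10.127.1.10 storagegroup -addhlu -gname SG_oracle -hlu 0 -alu 5
  # Connect a registered host to the group.
  navicli -h 10.127.1.10 storagegroup -connecthost -host dbhost1 -gname SG_oracle
  # Review the group's LUNs and connected hosts.
  navicli -h 10.127.1.10 storagegroup -list -gname SG_oracle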
Persistent Binding
The c# refers to the HBA instance, the t# to the target instance (the SP's front-end port), and the d# is the SCSI address assigned to the LUN. The HBA number and the SCSI address are static, but the t# is, by default, assigned in the order in which targets are identified during the configuration process of a system boot, and the order in which a target is discovered can differ between reboots. Persistent binding binds the WWN of an SP port to a t# so that every time the system boots, the same SP port on the same array has the same t#.
Persistent Binding
HBA configuration files: /kernel/drv/<driver>.conf (lpfc.conf for Emulex). Persistent binding maps an SP port WWPN to a controller/target address, e.g. binding 500601604004b0c7 to lpfc0t2 (in lpfc.conf this is typically an fcp-bind-WWPN entry):
  fcp-bind-WWPN="500601604004b0c7:lpfc0t2";
Disable auto-mapping in lpfc.conf (automap=0).
PowerPath
PowerPath is host-based software that resides between the application and the SCSI device driver. It provides intelligent I/O path management, is transparent to the application, and automatically detects and recovers from host-to-array path failures.
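On the host, path state can be inspected and repaired with the powermt utility. A minimal sketch, assuming PowerPath is installed and the array is zoned and masked to this host:

  # Show every PowerPath device and the state of each path to SP A / SP B.
  powermt display dev=all
  # Re-test dead paths and restore any that have recovered.
  powermt restore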
A trespass is a temporary change in LUN ownership. When the storage system is powered on, LUN ownership returns to the default owner.
Path Failover
CLARiiON Applications
- SnapView Snapshots
- SnapView Clones
- SAN Copy
- MirrorView
SnapView Snapshots
Snapshot definition: a SnapView Snapshot is an instantaneous, frozen, virtual copy of a LUN on a storage system.
- Instantaneous: snapshots are created instantly; no data is copied at creation time
- Frozen: the snapshot will not change unless the user writes to it; the original view is available by deactivating a changed snapshot
- Virtual copy: not a real LUN; made up of pointers, original blocks, and saved blocks
- Uses a copy-on-first-write (COFW) mechanism
- Requires a save area: the reserved LUN pool
SnapView Snapshot
SnapView Snapshot components: reserved LUN pool, SnapView Snapshot, SnapView session, production host, backup host, source LUN, copy on first write (COFW), rollback.
Once a session starts, the SnapView mechanism tracks changes to the source LUN, and reserved LUN pool space is required.
Source LUNs cannot share reserved (private) LUNs.
Managing Snapshots
Procedure to create and manage snapshots:
1. Configure the reserved LUN pool (Reserved LUN Pool > Configure) and add LUNs for both SPs
2. Create a Storage Group for the production host and add the source LUN
3. Create a file system on the source LUN and add data
4. Create a snapshot of the source LUN
5. Start a SnapView session on the source LUN
6. Activate the snapshot to the session
Managing Snapshots
7. Create a Storage Group for the backup host and add the snapshot virtual LUN
8. Mount the EMC device of the snapshot LUN on the backup host
9. Verify the data
10. Make some modifications from the production host
11. Unmount the production LUN
12. Perform a rollback of the SnapView session: SnapView > Sessions > select session > Start Rollback
13. Remount the production LUN and observe the old data
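The same flow can be driven from the CLI. A hedged sketch with classic navicli, where the snapview verbs are recalled from memory (exact flags vary by FLARE release) and the SP address, names, and LUN number are placeholders:

  # Create a snapshot (virtual LUN) of source LUN 5.
  navicli -h 10.127.1.10 snapview -createsnapshot 5 -snapshotname snap_db
  # Start a session that tracks COFW activity on the source LUN.
  navicli -h 10.127.1.10 snapview -startsession sess_db -lun 5
  # Activate the snapshot against the session so the backup host can mount it.
  navicli -h 10.127.1.10 snapview -activatesnapshot sess_db -snapshotname snap_db
  # Later: roll the source LUN back to the session's point in time.
  navicli -h 10.127.1.10 snapview -startrollback sess_db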
SnapView Clones
- Two-way synchronization: clones may be incrementally updated from the source LUN, and source LUNs may be incrementally updated from a clone
- A clone must be EXACTLY the same size as its source LUN
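A hedged sketch of the clone workflow with classic navicli; the clone verbs and the clone ID format are recalled from memory and may differ by release, and the SP address, group name, and LUN numbers are placeholders:

  # Create a clone group around source LUN 5, then add same-size LUN 12 as a clone.
  navicli -h 10.127.1.10 snapview -createclonegroup -name cg_db -luns 5
  navicli -h 10.127.1.10 snapview -addclone -name cg_db -luns 12
  # Fracture the clone to present a point-in-time copy; resynchronize it later.
  navicli -h 10.127.1.10 snapview -fractureclone -name cg_db -cloneid 0100000000000000
  navicli -h 10.127.1.10 snapview -syncclone -name cg_db -cloneid 0100000000000000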
MirrorView
Agenda
- Types of mirror copy: synchronous (MirrorView/S) and asynchronous (MirrorView/A)
- How MirrorView makes remote copies of LUNs
- The required steps in MirrorView administration
- MirrorView with SnapView
Recovery Point Objective (RPO): the amount of acceptable data loss in the event of a disaster, typically expressed as a duration of time. Some applications have zero tolerance for data loss in a disaster (example: financial applications).
Replication Models
Replication solutions can be broadly categorized as synchronous and asynchronous.
Synchronous replication model: each server write on the primary side is written concurrently to the secondary site. RPO is zero, since the transfer of each I/O to the secondary occurs before acknowledgement is sent to the server. Data at the secondary site is exactly the same as data at the primary site at the time of a disaster.
Bidirectional Mirroring
MirrorView/S uses a Fracture Log and a Write Intent Log.
Fracture Log:
- Resident in SP memory, hence volatile
- Tracks changed regions on the primary LUN while the secondary is unreachable
- When the secondary becomes reachable again, the fracture log is used to resynchronize the data incrementally
- Not persistent unless the Write Intent Log is used
Write Intent Log:
- Optional; allocated per mirrored primary LUN
- Persistently stored, using private LUNs
- Used to minimize recovery after a failure on the primary storage system
- Two LUNs of at least 128 MB each
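For orientation, creating and monitoring a MirrorView/S mirror from the CLI might look like the sketch below; the mirror -sync verbs are recalled from memory and may differ by release, and the SP addresses, mirror name, and LUN numbers are placeholders:

  # Create the mirror on the primary array around LUN 5.
  navicli -h 10.127.1.10 mirror -sync -create -name db_mirror -lun 5
  # Add the secondary image on the remote array.
  navicli -h 10.127.1.10 mirror -sync -addimage -name db_mirror -arrayhost 10.127.2.10 -lun 5
  # Check mirror and image state.
  navicli -h 10.127.1.10 mirror -sync -list -name db_mirror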
SAN Copy
SAN Copy is optional software available on the storage system. It enables the storage system to copy data at the block level directly across the SAN, from one storage system to another or within a single CLARiiON system.
SAN Copy can move data from one source to multiple destinations concurrently. It connects through a SAN and also supports protocols that let you use an IP WAN to send data over extended distances.
SAN Copy is designed as a multipurpose replication product for data migrations, content distribution, and disaster recovery (DR). It does not provide the complete end-to-end protection that MirrorView provides.
SAN Copy has several benefits over host-based replication options:
- Performance is optimal because data is moved directly across the SAN
- No host software is required for the copy operation, because SAN Copy executes on a CLARiiON storage system
- SAN Copy offers interoperability with many non-CLARiiON storage systems
3. Configure SAN Copy connections: on the storage system's SAN Copy connections, register each selected SAN Copy port with the ports of the peer storage systems.
4. Once the registration process is complete, connect the SAN Copy port to a Storage Group on the peer CLARiiON storage system.
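A hedged sketch of creating and running a full SAN Copy session with classic navicli; the sancopy verbs and flags are recalled from memory and may differ by release, and the session name, SP address, and LUN WWN placeholders are assumptions:

  # Define a full-copy session from a source LUN WWN to a destination LUN WWN.
  navicli -h 10.127.1.10 sancopy -create -full -name mig_01 -srcwwn <source LUN WWN> -destwwn <destination LUN WWN>
  # Start the session and monitor its progress.
  navicli -h 10.127.1.10 sancopy -start -name mig_01
  navicli -h 10.127.1.10 sancopy -info -name mig_01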
Thank You