Day1 03
Topics
- Basic Architecture
- Hardware Components
Basic Architecture
Fibre Channel
Data Protection
Modular Approach
- All arrays start with a minimum configuration, containing the Storage Processors (SPs) and some disk drives.
  – CX600 has a minimum requirement of an SPE and a DAE-OS
  – CX400 / CX200 have a minimum requirement of a DPE
- Capacity can be added at any time by attaching additional Disk Array Enclosures (DAEs)
Standard DAE
The CLARiiON Storage Processor (SP) - Intro
[Figure: SP memory layout – Read Cache, Write Cache, System memory]
*Read and Write cache sizes are tunable. System memory is fixed.
CLARiiON Storage Processor (SP) – CX600
CX600 Storage Processor Enclosure (SPE) - BACK
CX600 Storage Processor Enclosure (SPE) - FRONT
CX600/400/200 Standby Power Supply (SPS)
The CLARiiON Disk Array Enclosure (DAE) - Intro
The Disk Array Enclosure is the “capacity building block” of the array. It contains slots for 15 dual-ported Fibre Channel disk drives, numbered 0 through 14.
Disk Array Enclosure (DAE) – Rear & Front
[Figure: DAE rear and front views with numbered callouts; legend below]

REAR
3U Rack Mount Chassis
Two Link Control Cards (LCCs)
(1) Status Indicator LEDs
  – Green indicates Power
  – Amber indicates Fault
(2) EXP port
  – HSSDC Connector
  – Link LED (lit for receive signal)
(3) Bus Indicator LEDs
  – 0 or 1 will light depending on which back-end port it originates from
  – LEDs marked 2-7 are not used on CLARiiON CX arrays
(4) PRI port
  – HSSDC Connector
  – Link LED (lit for receive signal)
Two DAE Power Supplies
(5) Two integrated fans per power supply
  – Same rules as SPE Power Supply fans
(6) Fan Fault LEDs (amber)
(7) Power Supply Status LEDs
  – Green = Power
  – Amber = Fault
(8) AC Input & Power switch
  – The DAE-OS should be powered on/off using the SPS switch

FRONT
Slots for 15 Dual-Ported Fibre Channel disk drives
(10) Disk Status LEDs
  – Green for connectivity; blinks during disk activity
  – Amber for Fault
  – Integrated onto the disk chassis (not part of the physical enclosure)
(11) Enclosure Status LEDs
  – Green = Power
  – Amber = Fault
(12) DAE-OS Vault Drives (CX-2GBDAE-OS only)
  – Disks 0-3 required to boot the Storage Processors
  – Disks 0-4 required to enable write caching
  – These disks must remain in their original slots!
CX600 Configuration & Cabling
[Figure: CX600 cabling – the minimum configuration is a CX600 SPE, a CX-2G DAE-OS, and an SPS; additional CX-2G DAEs add capacity]
CX400 – Disk Processor Enclosure (DPE)
The CX400 DPE uses the same chassis, disks, and power supplies as a standard DAE. It is, in a sense, a DAE-OS running CX400 code instead of CX600 code, and with Storage Processors where the LCCs would normally go. The enclosure address switch must be set to “0” for operation.
CX400 – Configuration and Cabling
[Figure: CX400 minimum and maximum configurations and cabling]
CX200 – Disk Processor Enclosure (DPE)
The CX200 DPE uses the same chassis, disks, and power supplies as a standard DAE. It is, in a sense, a DAE-OS running CX200 code instead of CX600 code, and with Storage Processors where the LCCs would normally go. The enclosure address switch must be set to “0” for operation.
CX200 – Configuration and Cabling
[Figure: CX200 minimum and maximum configurations and cabling]
Using a CLARiiON – the RAID Group
[Figure: physical disks grouped into a RAID Group, accessible to both SPA and SPB]
A RAID Group on a CLARiiON is not the unit that gets exported to hosts. The disks will work together to provide data, but will be sliced up into individual Logical Units (LUNs) to be served to host systems.
Using a CLARiiON – the Logical Unit (LUN)
[Figure: a RAID Group bound into 4 LUNs plus free space – LUNs 0 and 1 owned by SPB, LUNs 2 and 3 owned by SPA]
One or more LUNs will be striped (bound) across a RAID Group. These LUNs will have a RAID Type assigned to them. All LUNs on a RAID Group will use the same RAID Type. LUNs will also be assigned to only one SP, and will be served up to a host via that SP, appearing as physical disk drives.
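As a rough illustration of the relationship just described, the sketch below models a RAID Group as a pool of capacity from which LUNs are bound, all sharing one RAID Type and each owned by exactly one SP. The class and method names are hypothetical, chosen only to mirror the prose; this is not CLARiiON software.

    # Hypothetical model of RAID Group / LUN binding; illustration only.
    class RaidGroup:
        def __init__(self, disk_count, capacity_gb):
            self.disk_count = disk_count      # physical drives working together
            self.capacity_gb = capacity_gb
            self.raid_type = None             # fixed by the first LUN bound
            self.luns = []                    # (lun_id, size_gb, owning SP)

        def bind_lun(self, lun_id, size_gb, raid_type, owner_sp):
            # All LUNs on a RAID Group must use the same RAID Type.
            if self.raid_type is None:
                self.raid_type = raid_type
            elif raid_type != self.raid_type:
                raise ValueError("all LUNs on a RAID Group share its RAID Type")
            # A LUN can only be bound into the RAID Group's free space.
            if sum(size for _, size, _ in self.luns) + size_gb > self.capacity_gb:
                raise ValueError("not enough free space on the RAID Group")
            # Each LUN is assigned to exactly one SP and served via that SP.
            self.luns.append((lun_id, size_gb, owner_sp))

    rg = RaidGroup(disk_count=5, capacity_gb=360)
    rg.bind_lun(0, 90, "RAID5", "SPB")    # served to hosts by SPB
    rg.bind_lun(1, 90, "RAID5", "SPA")    # served to hosts by SPA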
Using a CLARiiON – Memory Types
- System – used for Array Operations
  – Used for the operating system, Base software, Navisphere components, and advanced features
  – Also used as a buffer for I/O
- Read Cache, Write Cache – used for host data (the settings are sketched after this list)
  – All RAID types except RAID-3
  – Writes: mirrored to the peer SP, single setting
  – Reads: non-mirrored, individual settings
- RAID-3 – used for RAID-3 host data
  – Non-mirrored, no cache-like controls (more like “buffering”)
  – Do not mix RAID-3 with other RAID types in the same array
  – Available only on legacy (non-CX) products
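The asymmetry in the cache settings above (a single write-cache setting mirrored between the SPs, but individual read-cache settings per SP) can be captured in a minimal sketch. The names here are hypothetical, not Navisphere parameters.

    # Hypothetical sketch of the cache-setting rules above; not Navisphere code.
    class CacheSettings:
        def __init__(self):
            self.write_cache_mb = 0                    # one setting, mirrored to the peer SP
            self.read_cache_mb = {"SPA": 0, "SPB": 0}  # individual, non-mirrored settings

        def set_write_cache(self, mb):
            self.write_cache_mb = mb     # a single value applies to both SPs

        def set_read_cache(self, sp, mb):
            self.read_cache_mb[sp] = mb  # tunable independently on each SP

    settings = CacheSettings()
    settings.set_write_cache(2048)
    settings.set_read_cache("SPA", 1024)
    settings.set_read_cache("SPB", 512)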
CLARiiON Data Flow – WRITE (Cache Hit)
[Figure: host write entering the owning SP’s write cache, mirrored to the peer SP over the Peer Bus (CMI), then acknowledged]
If write cache is enabled, host writes to a logical device will enter cache on the owning Storage Processor. Since all write cache is mirrored between SPs, the I/O must be copied into the write cache on the peer SP via the peer bus (CMI). When that operation is complete, an acknowledgement is sent back through the CMI channel to the owning SP, and is passed back to the host, completing the write operation.
CLARiiON Data Flow – WRITE (Cache Miss)
[Figure: host write bypassing write cache and going directly to the disk drives before acknowledgement]
If write cache is disabled, host writes to a logical device will proceed directly to the destination drives. When that operation is complete, an acknowledgement is sent back through the owning SP, and is passed back to the host, completing the write operation.
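The ordering is the essential difference between the two write paths on this slide and the previous one: with write cache enabled the host is acknowledged as soon as the mirrored cache copy completes, and with it disabled only after the disk write. A minimal sketch follows; the function and object names are invented for illustration, and this is not array firmware.

    from types import SimpleNamespace

    # Illustrative sketch of the two write paths; not array firmware.
    def handle_write(io, owning_sp, peer_sp, disks, write_cache_enabled):
        if write_cache_enabled:
            owning_sp.write_cache.append(io)  # I/O enters cache on the owning SP
            peer_sp.write_cache.append(io)    # mirrored copy crosses the CMI peer bus
            return "ACK"                      # host acknowledged now; drives updated later by flushing
        disks.append(io)                      # cache disabled: write goes straight to the drives
        return "ACK"                          # acknowledged only after the disk write completes

    spa = SimpleNamespace(write_cache=[])
    spb = SimpleNamespace(write_cache=[])
    drives = []
    handle_write("0101101", spb, spa, drives, write_cache_enabled=True)
    handle_write("0101101", spb, spa, drives, write_cache_enabled=False)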
CLARiiON Data Flow – Cache Flushing - Idle
[Figure: write cache with High and Low Watermarks, draining to the disk drives over the back-end Fibre]
With write cache enabled, all writes will enter the write cache on the owning SP, then get mirrored to the other SP. As the write cache fills, the SP attempts to empty it to the destination disk drives. This process is called “Flushing”.
We can control this activity by setting two watermarks for write cache, a Low and a High. Until cache fullness reaches the Low Watermark (default value = 40%), the SP is in a state called “Idle Flushing”. The SP will attempt to clear cache lazily and during idle periods of the array. The array is at its peak cache performance during idle flushing.
CLARiiON Data Flow – Cache Flushing - Watermark
[Figure: write cache filled past the Low Watermark, flushing to the disk drives over the back-end Fibre]
At the low water mark, the flushing process is given a higher priority by assigning a single thread to the write cache flushing process for each scheduler cycle. As the amount of unflushed data in write cache increases beyond the low water mark, higher priorities are assigned to the flushing process at regular intervals by assigning additional threads to each scheduler cycle, until a maximum of 4 threads is assigned at the high water mark.
CLARiiON Data Flow – Cache Flushing - Forced
[Figure: write cache completely full – incoming writes halted until space is cleared]
If the data in the write cache continues to increase, forced flushing will occur at the point where there is not enough room in write cache for the next write I/O to fit. Write I/Os to the array will be halted until enough data is flushed to make sufficient room available for the next I/O. This process will continue until the forced flushing plus the scheduled (watermark) flushing creates enough room for normal caching to continue. Throughout this process, write cache remains enabled.
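The three flushing states described on the last three slides (idle, watermark-scheduled, and forced) can be sketched as a simple schedule. The 40% low watermark default comes from the text; the 80% high watermark here is an assumed placeholder, and the rest is illustrative rather than actual array code.

    # Sketch of the flushing schedule; thread counts follow the text
    # (1 thread at the low watermark, scaling to 4 at the high watermark).
    def flush_threads(fullness, low=0.40, high=0.80):
        if fullness < low:
            return 0      # idle flushing: lazy cleaning during idle periods
        if fullness >= high:
            return 4      # maximum flushing priority at the high watermark
        # Additional threads are assigned at regular intervals between watermarks.
        return 1 + int(3 * (fullness - low) / (high - low))

    def accept_write(cached_bytes, io_bytes, cache_bytes):
        # Forced flushing: if the next I/O cannot fit, the write is halted
        # (False) until flushing clears sufficient room; cache stays enabled.
        return cached_bytes + io_bytes <= cache_bytes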
CLARiiON Data Flow – Cache DUMP
[Figure: write cache contents dumped over the back-end Fibre to THE VAULT – a stripe on drives 0-4 of the DPE/DAE-OS]
When problems occur, such as a power failure or an SP failure, the contents of write cache are copied into a special area of the first 5 drives on the array called THE VAULT.
CLARiiON Data Flow – READ (Cache Hit)
[Figure: host read request satisfied directly from the SP’s cache, without touching the disk drives]
When data is read-requested from the CLARiiON, the SP first checks the contents of READ and WRITE cache for the data. If the data has been recently written to the array, and not yet flushed to the disks and overwritten by new cached data, it can be read directly from WRITE cache. It also may be in READ cache from an earlier request. If it is in cache, the data will immediately be retrieved from cache for the host, and flagged as Most Recently Used. The Least Recently Used (LRU) contents of Read and Write cache are the first to be overwritten.
A Read Cache Hit is considerably faster than retrieving the data directly from the drives.
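The Most/Least Recently Used behavior described above is classic LRU replacement. Below is a minimal LRU read-cache sketch; the structure is hypothetical, not CLARiiON internals.

    from collections import OrderedDict

    # Minimal LRU read cache; hypothetical structure, not CLARiiON internals.
    class ReadCache:
        def __init__(self, capacity):
            self.capacity = capacity
            self.blocks = OrderedDict()          # ordered least- to most-recently used

        def get(self, lba):
            if lba in self.blocks:
                self.blocks.move_to_end(lba)     # cache hit: flag as Most Recently Used
                return self.blocks[lba]
            return None                          # cache miss: caller must read the drives

        def put(self, lba, data):
            if lba not in self.blocks and len(self.blocks) >= self.capacity:
                self.blocks.popitem(last=False)  # evict the Least Recently Used block
            self.blocks[lba] = data
            self.blocks.move_to_end(lba)

    cache = ReadCache(capacity=2)
    cache.put(100, "0101101")
    cache.get(100)    # hit: served from cache, far faster than a drive read
    cache.get(200)    # miss: returns None, so the data comes from the drives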
CLARiiON Data Flow – READ (Cache miss)
[Figure: host read request missing cache and being fetched from the disk drives]
If the requested data is not currently in READ or WRITE cache, the data will be retrieved from the appropriate disk drives. At that point a copy will be kept in READ cache to expedite subsequent reads of the same data. This data fetched to READ cache will overwrite the Least Recently Used (LRU) data currently in Read cache.
The Storage Processor also has the ability to anticipate subsequent reads, and will pre-fetch data that has not been requested. We can decide when to begin prefetching, and how much data to prefetch.
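One simple way to picture the pre-fetch decision is a sequential-run detector: once the host has read enough consecutive blocks, fetch the next several into READ cache ahead of time. The trigger count and prefetch depth below are illustrative placeholders, not CLARiiON defaults.

    # Sequential-read detection for prefetch; settings are placeholders.
    class Prefetcher:
        def __init__(self, trigger=2, depth=8):
            self.trigger = trigger    # consecutive reads needed before prefetching
            self.depth = depth        # how many blocks to fetch ahead of the host
            self.last_lba = None
            self.run_length = 0

        def on_read(self, lba):
            if self.last_lba is not None and lba == self.last_lba + 1:
                self.run_length += 1  # the host is reading sequentially
            else:
                self.run_length = 1   # a random access resets the run
            self.last_lba = lba
            if self.run_length >= self.trigger:
                # Blocks to pull into READ cache before the host asks for them.
                return list(range(lba + 1, lba + 1 + self.depth))
            return []

    pf = Prefetcher()
    pf.on_read(10)    # []               - a single access proves nothing
    pf.on_read(11)    # [12, 13, ... 19] - sequential run detected, prefetch ahead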
CLARiiON Value Add – Access Logix
CX200/400/600, FC4500/4700
CLARiiON Value Add – SnapView
CX400/600, FC4700
[Figure: source LUN (DB) and its Snapshot or Clone (DB’) being backed up to a tape device]
SnapView includes two separate types of internal data replication: Snapshots and Clones.
Snapshots: Point-in-time virtual copies of LUNs. Allow for on-line back-up of databases while only using 10-20% of the source disk resources.
Clones: Full synchronous copies of LUNs within the same array. Can be used as a point-in-time, FULL copy of a LUN through the fracture process, or as a data recovery solution in the event of a failure.
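The reason a Snapshot needs only 10-20% of the source resources is that only blocks changed after the point-in-time are preserved. Below is a minimal copy-on-first-write sketch; it is illustrative only, not SnapView internals.

    # Copy-on-first-write sketch of a snapshot-style point-in-time copy.
    class Snapshot:
        def __init__(self, source_lun):
            self.source = source_lun    # live LUN, modeled as block -> data
            self.saved = {}             # original blocks, preserved on first overwrite

        def host_write(self, block, data):
            if block not in self.saved:
                self.saved[block] = self.source.get(block)  # save the old data once
            self.source[block] = data                       # then let the write proceed

        def read_snapshot(self, block):
            # Point-in-time view: the saved copy if the block changed, else the live block.
            return self.saved[block] if block in self.saved else self.source.get(block)

    lun = {0: "A", 1: "B"}
    snap = Snapshot(lun)
    snap.host_write(0, "X")                # the live LUN changes...
    assert snap.read_snapshot(0) == "A"    # ...but the snapshot still sees the old data

Unchanged blocks are read straight from the source LUN, so the snapshot's space cost tracks the amount of changed data rather than the size of the LUN.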
CLARiiON Value Add – MirrorView
CX400/600, FC4700
[Figure: real-time synchronization of DB to DB’ on a remote array across the FABRIC]
MirrorView is the CLARiiON remote disaster recovery solution. Two arrays have a dedicated connection over Fibre Channel or T3 lines. It is synchronous mirroring, meaning that all writes to the source must be completed to the Secondary array before the acknowledgement is sent to the source host.
During a disaster, the Secondary Image can be promoted and mounted by a standby host, minimizing downtime.
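The defining property of synchronous mirroring (the source host sees completion only after the Secondary confirms the write) can be sketched as follows. The interfaces are hypothetical, not MirrorView code.

    # Synchronous remote mirror sketch; hypothetical interfaces.
    class RemoteArray:
        def __init__(self):
            self.image = []             # the Secondary Image on the remote array

        def replicate(self, io):
            self.image.append(io)       # stand-in for the Fibre Channel / T3 link
            return True                 # acknowledgement from the Secondary

    def mirrored_write(io, primary_image, secondary):
        primary_image.append(io)        # the write lands on the source LUN
        if not secondary.replicate(io):
            raise IOError("Secondary did not acknowledge the write")
        return "ACK"                    # only now is the source host acknowledged

    primary = []
    mirrored_write("0101101", primary, RemoteArray())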
CLARiiON Value Add – SAN Copy
CX400/600, FC4700
[Figure: SAN Copy bi-directional data migration across the FABRIC – DB to DB’’, Mail to Mail’]
SAN Copy allows for data replication to heterogeneous storage arrays through Fibre Channel topology, without involving hosts or LAN topologies.
CX-Series Comparison
Summary
- The CLARiiON CX Arrays use Fibre Channel, industry-standard RAID, and High-Availability hardware.
- The CLARiiON Storage units are called LUNs, which are slices of RAID Groups, which are collections of physical disk drives.
- CLARiiON-Host data flow performance is dependent upon READ and WRITE Caching.
- Flushing is the technique a CLARiiON employs to migrate data from write cache to the LUNs.
- A Cache DUMP is one way the CLARiiON protects the contents of WRITE Cache.
- The CX400/600 have software available which adds additional capabilities.