
CLARiiON CX-Series

Introduction to Hardware and Operation

© 2003 EMC Corporation. All rights reserved. 1

1
Topics

• Basic Architecture
• Modular, “Building Block” Approach
• Hardware Components
• Basic Functionality – Data Flow
• Introduction of advanced capabilities

© 2003 EMC Corporation. All rights reserved. 2

2
Basic Architecture

• External high-availability storage device using the Fibre Channel protocol
  – Fibre Channel connectivity to host systems
  – Fibre Channel internal architecture for disks

© 2003 EMC Corporation. All rights reserved. 3

3
Data Protection

• RAID protection to protect against drive failure
  – RAID 1, 5, 1/0, 3
  – Global hot spares
  – Non-protected RAID 0 and single disk also supported
• Redundant Field Replaceable Units (FRUs) to protect against component failures
• Support for remote replication / disaster recovery via the optional MirrorView software

© 2003 EMC Corporation. All rights reserved. 4

4
Modular Approach
• All arrays start with a minimum configuration, containing the Storage Processors (SPs) and some disk drives.
  – CX600 has a minimum requirement of an SPE and a DAE-OS
  – CX400 / CX200 have a minimum requirement of a DPE
• Capacity can be added at any time by attaching additional Disk Array Enclosures (DAEs).

[Diagram: two example stacks – a CX600 SPE + DAE-OS and a CX400/200 DPE, each expanded with standard DAEs above]

© 2003 EMC Corporation. All rights reserved. 5

5
The CLARiiON Storage Processor (SP) - Intro

• The I/O and functionality center of the array

[Diagram: SP block layout – front-end Fibre ports (to hosts or switches), memory* (write cache, read cache, system), and back-end Fibre to the disk drives]
© 2003 EMC Corporation. All rights reserved. 6

*Read and Write cache sizes are tunable. System memory is fixed.

6
CLARiiON Storage Processor (SP) – CX600

[Photo: CX600 Storage Processor with numbered callouts 1–8, identified below]

© 2003 EMC Corporation. All rights reserved. 7

SPE – 4U Rack Mount Chassis (each SP = 2U)


(1) 4 Front End 2Gb ports per Storage Processor
4 unique WWPNs per Storage Processor
Individually configurable for 1Gb or 2Gb operation
(2) Dual Back End 2Gb loops
120 drives per loop maximum
Dual 2Gb CMI channels (not shown)
SP Communication Path – internal connection between SPA-SPB
Main use is in write cache mirroring
2GB or 4GB of memory per Storage Processor (not shown)
512MB DDR DIMMs
2 way interleaved
Same amount must be installed on both Storage Processors
2GB Max write cache
Dual 2GHz Intel processors (Pentium 4 server chip, Prestonia; not shown)
ServerWorks Grand Champion Low-End chipset (GC-LE)
(3) 10/100 LAN Management Port
(4) SP Status LEDs
Used for determining SP Health
Used for Boot monitoring
(5) Port80h Card
Used for POST monitoring
On the mainboard, visible through the grate of the chassis
(6) SPS Monitoring Port
(7) Serial/PPP Port
For array initialization and maintenance
Monitoring SP start-up (through HyperTerminal or other terminal emulation software)
(8) Dual Auxiliary Ports (AUX0 & AUX1)
Not for field use on CX-series SAN arrays

7
CX600 Storage Processor Enclosure (SPE) - BACK

© 2003 EMC Corporation. All rights reserved. 8

•Two Storage Processors (active/active)


•Two hot-swappable power supplies per enclosure (active/active, n+1)
•Each has two integrated fans
•If you remove a power supply, the fans in the remaining power supply will
speed up and cool the disk drives and power supply indefinitely.
•If there is a single fan fault in a supply, the remaining fan in the power
supply speeds up
•If there is a power supply failure, the fans are powered from the remaining
power supply and all power supply fans in the enclosure continue to spin at
normal speed.
•Power Cable Lock
•AC Power Input and switch
•Power cable connects to the SPS
•Power off/on should occur at the SPS
•Active LED (Green)

8
CX600 Storage Processor Enclosure (SPE) - FRONT

[Photo: SPE front with numbered callouts 1–3, identified below]

© 2003 EMC Corporation. All rights reserved. 9

(1) Three hot-swappable fan packs


•Each has two fans (6 total)
•Enclosure requires 5 fans spinning to remain powered.
•If one or more fans fail, the remaining fans spin faster
•If two or more fans fail, or a single fan pack is removed, the array will power
down in 2 minutes
(2) Amber fan fault lights
•Not visible through front panel (not shown)
(3) Chassis Power Indicator (green)
Chassis Fault Indicator (amber)

9
CX600/400/200 Standby Power Supply (SPS)

[Photo: SPS rear with numbered callouts 1–5, identified below]

© 2003 EMC Corporation. All rights reserved. 10

(1) Power Switch


Determines whether the SPS is distributing power to the SPE Power Supply
(2) AC Power Input
Accepts power from an AC power source.
(3) SPE & DAE-OS AC Power Output
Provides power to an SPE and DAE-OS power supplies
(4) Monitoring Port
Provides information about the status of the SPS to the SPE storage
processor.
(5) Status LEDs (from bottom to top)
Fault LED- Amber when a fault is detected on the standby power supply.
Battery Discharged LED- Amber when the SPS battery is discharged
On Battery LED- Amber when the SPS battery is providing the power to the
SPE Power Supply.
Power LED- Green when the SPS is active and fully charged.
Flashing Green when the SPS battery is being charged.

10
The CLARiiON Disk Array Enclosure (DAE) - Intro

• Adds storage to a CLARiiON array

[Diagram: a DAE with disk slots 0–14; Link Control Card (LCC) A above the drives and LCC B below; each LCC has a PRI port (from the previous DAE or the SP) and an EXP port (expansion to the next DAE)]

© 2003 EMC Corporation. All rights reserved. 11

The Disk Array Enclosure is the “capacity building block” of the array. It contains:

•From 0 to 15 disk drives of varying size
  •Dual-ported to both LCCs
•Two Link Control Cards
  •Act as the Fibre Channel hubs for the drives in that enclosure
  •Can expand to other DAEs through the EXP port
  •Connect to previous enclosures via the PRI port
•Power supplies with integrated fan packs

11
Disk Array Enclosure (DAE) – Rear & Front
[Photos: DAE rear and front with numbered callouts 1–12, identified below]

© 2003 EMC Corporation. All rights reserved. 12

REAR
3U rack-mount chassis
Two Link Control Cards
(1) Status Indicator LEDs
    Green indicates power
    Amber indicates fault
(2) EXP port
    HSSDC connector
    Link LED (lit for receive signal)
(3) Bus Indicator LEDs
    0 or 1 will light depending on which back-end port the loop originates from
    LEDs marked 2-7 are not used on CLARiiON CX arrays
(4) PRI port
    HSSDC connector
    Link LED (lit for receive signal)
Two DAE power supplies
(5) Two integrated fans per power supply
    Same rules as SPE power supply fans
(6) Fan Fault LEDs (amber)
(7) Power Supply Status LEDs
    Green = Power
    Amber = Fault
(8) AC input & power switch
    DAE-OS should be powered on/off using the SPS switch
(9) Enclosure Address Switch
    Must be set to 0 for DAE-OS

FRONT
Slots for 15 dual-ported Fibre Channel disk drives
(10) Disk Status LEDs
    Green for connectivity; blinks during disk activity
    Amber for fault
    Integrated onto the disk chassis (not part of the physical enclosure)
(11) Enclosure Status LEDs
    Green = Power
    Amber = Fault
(12) DAE-OS Vault Drives (CX-2G DAE-OS only)
    Disks 0-3 required to boot the Storage Processors
    Disks 0-4 required to enable write caching
    These disks must remain in their original slots!

12
CX600 Configuration & Cabling

[Diagram: CX600 rack stack – CX600 SPS at the bottom, CX600 SPE and CX-2G DAE-OS above it (the minimum configuration), with an additional CX-2G DAE on top]

© 2003 EMC Corporation. All rights reserved. 13

Cabling for Power


•Connect SPE and DAE-OS Power Supply “A” to the SPS “A” (on the right)
•Connect SPE and DAE-OS Power Supply “B” to the SPS “B” (on the left)
•Connect SPS “A” monitoring cable to SPA SPS Port.
•Connect SPS “B” monitoring cable to SPB SPS Port.
•Connect SPS “A” to the PDU on the right
•Connect SPS “B” to the PDU on the left.
•Connect all DAEs above the DAE-OS directly to the PDU.

Cabling Back-end Loops


•Connect SPA back-end Port 0 to the PRI port on the DAE-OS, LCC-A (right).
•Connect SPB back-end Port 0 to the PRI port on the DAE-OS, LCC-B (left).
•Connect SPA back-end Port 1 to the PRI port on next DAE, LCC-A (right).
•Connect SPB back-end Port 1 to the PRI port on next DAE, LCC-B (left).

8 total DAEs are allowed per bus (loop)

•120 drives per loop
•240 drives total
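
To make that loop arithmetic concrete, a minimal Python sketch (illustrative only; the enclosure and drive counts come from the notes above):

```python
# Illustrative capacity math for a CX600 back end, using the figures above:
# each back-end loop holds up to 8 DAEs of 15 drives (120 drives per loop),
# and the CX600 has 2 back-end loops, for 240 drives total.

DRIVES_PER_DAE = 15
MAX_DAES_PER_LOOP = 8
LOOPS_PER_ARRAY = 2          # CX600 back-end ports 0 and 1

drives_per_loop = DRIVES_PER_DAE * MAX_DAES_PER_LOOP   # 120
max_drives = drives_per_loop * LOOPS_PER_ARRAY          # 240

print(f"Drives per loop: {drives_per_loop}")
print(f"Maximum drives per CX600: {max_drives}")
```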

13
CX400 – Disk Processor Enclosure (DPE)

[Photo: CX400 Storage Processor with numbered callouts 1–7, identified below]

© 2003 EMC Corporation. All rights reserved. 14

The CX400 DPE uses the same chassis, disks, and power supplies as a standard DAE. It is, in a sense, a DAE-OS
running CX400 code instead of CX600 code, with Storage Processors where the LCCs would normally go. The
enclosure address switch must be set to “0” for operation.

2 CX400 Storage Processors

Same form factor as LCCs
Includes SP and LCC functionality
(1) BE0 – Back End Bus Port 0
Extends Bus 0 beyond the drives in the DPE2
Should be connected to the second DAE2 (not the first)
Uses HSSDC connector
Upper link LED indicates receive traffic
(2) BE1 – Back End Bus Port 1
Originates Bus 1 when connected to the 1st DAE2
Uses HSSDC connector
Lower link LED indicates receive traffic
(3) 2 Front End 2Gb ports per Storage Processor
Configurable for 1 or 2 Gbit connectivity (defaults to 2 Gbit)
(4) Enclosure Status LEDs
Green Indicates Power
Amber indicates Fault
Can be used to monitor the boot process
(5) 10/100 Ethernet Management Port
Operates the same as CX600
(6) Serial/PPP Port
Operates the same as CX600
(7) SPS Monitoring Port
Dual 2Gb CMI channels (not shown)
1GB of memory per Storage Processor (not shown)
512MB DDR DIMMs
2 way interleaved
Single low-power 800 MHz mobile Pentium III processor (not shown)

14
CX400 – Configuration and Cabling

[Diagram: CX400 minimum and maximum configurations]

© 2003 EMC Corporation. All rights reserved. 15

Cabling for Power


•Connect DPE Power Supply “A” to the SPS “A” (on the right)
•Connect DPE Power Supply “B” to the SPS “B” (on the left)
•Connect SPS “A” monitoring cable to SPA SPS Port.
•Connect SPS “B” monitoring cable to SPB SPS Port.
•Connect SPS “A” to the PDU on the right
•Connect SPS “B” to the PDU on the left.
•Connect all DAEs directly to the PDU.

Cabling Back-end Loops


•Connect SPA back-end Port 1 to the PRI port on first DAE, LCC-A (right).
•Connect SPB back-end Port 1 to the PRI port on first DAE, LCC-B (left).
•Connect SPA back-end Port 0 to the PRI port on second DAE, LCC-A.
•Connect SPB back-end Port 0 to the PRI port on second DAE, LCC-B.
•Connect LCC-A, EXP Port on First DAE, to LCC-A PRI Port on Third DAE.
•Connect LCC-B, EXP Port on First DAE, to LCC-B PRI Port on Third DAE.

Enclosure addresses, from the DPE on up: 0, 0, 1, 1

15
CX200 – Disk Processor Enclosure (DPE)

[Photo: CX200 Storage Processor with numbered callouts 1–6, identified below]

© 2003 EMC Corporation. All rights reserved. 16

The CX200 DPE uses the same chassis, disks, and power supplies as a standard DAE. It is, in a sense, a
DAE-OS running CX200 code instead of CX600 code, with Storage Processors where the LCCs would
normally go. The enclosure address switch must be set to “0” for operation.

2 CX200 Storage Processors

Same form factor as LCCs
Includes SP and LCC functionality
(1) BE – Back End Bus Port
Allows addition of a single DAE to array
Uses HSSDC connector
LED indicates receive traffic
(2) A & B Front End 2Gb ports
A & B ports are connected internally via a Fibre Channel hub and have only a single WWPN. Both
ports are in use only for a cluster connection.
Configurable for 1 or 2 Gbit connectivity (defaults to 2 Gbit)
(3) Enclosure Status LEDs
Green Indicates Power
Amber indicates Fault
Can be used to monitor the boot process
(4) 10/100 Ethernet Management Port
Operates the same as CX600
(5) Serial/PPP Port
Operates the same as CX600
(6) SPS Monitoring Port
Dual 2Gb CMI channels (not shown)
512MB of memory per Storage Processor (not shown)
512MB DDR DIMMs
2 way interleaved
Single low-power 800 MHz mobile Pentium III processor (not shown)

16
CX200 – Configuration and Cabling

[Diagram: CX200 minimum and maximum configurations]

© 2003 EMC Corporation. All rights reserved. 17

Cabling for Power


•Connect DPE Power Supply “A” to the SPS “A” (on the right)
•Connect DPE Power Supply “B” to the PDU on the left.
•Connect SPS “A” monitoring cable to SPA SPS Port.
•Connect SPS “A” to the PDU on the right
•Connect the DAE directly to the PDUs.

Cabling Back-end Loops


•Connect SPA back-end Port to the PRI port on first DAE, LCC-A (right).
•Connect SPB back-end Port to the PRI port on first DAE, LCC-B (left).

Enclosure addresses, from the DPE on up: 0, 1

17
Using a CLARiiON – the RAID Group

• A RAID Group is a logical collection of disks

[Diagram: SPA and SPB attached to a set of disks bound as RAID Group 0x00]
© 2003 EMC Corporation. All rights reserved. 18

A RAID Group on a CLARiiON is not the unit that gets exported to hosts. The
disks work together to provide data, but are sliced up into individual Logical
Units (LUNs) that are served to host systems.

18
Using a CLARiiON – the Logical Unit (LUN)

[Diagram: a RAID Group carved into 4 LUNs (0–3) plus free space; each LUN is owned by either SPA or SPB]

© 2003 EMC Corporation. All rights reserved. 19

One or more LUNs will be striped (bound) across a RAID Group. Each LUN is assigned a
RAID type, and all LUNs on a RAID Group use the same RAID type. Each LUN is also
assigned to only one SP and is presented to the host via that SP as if it were a physical
disk drive.
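
As a rough illustration of those relationships, a minimal Python sketch (the class and field names are hypothetical, not CLARiiON software or Navisphere objects):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class LUN:
    lun_id: int
    size_gb: int
    raid_type: str        # inherited from the RAID Group it is bound on
    owning_sp: str        # "SPA" or "SPB" -- a LUN is owned by exactly one SP

@dataclass
class RaidGroup:
    rg_id: int
    disks: List[str]
    raid_type: str
    luns: List[LUN] = field(default_factory=list)

    def bind_lun(self, lun_id: int, size_gb: int, owning_sp: str) -> LUN:
        # All LUNs bound on this RAID Group share its RAID type.
        lun = LUN(lun_id, size_gb, self.raid_type, owning_sp)
        self.luns.append(lun)
        return lun

rg = RaidGroup(rg_id=0x00, disks=[f"0_0_{n}" for n in range(5)], raid_type="RAID5")
rg.bind_lun(0, 100, "SPA")
rg.bind_lun(1, 100, "SPB")
print([(l.lun_id, l.raid_type, l.owning_sp) for l in rg.luns])
```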

19
Using a CLARiiON – Memory Types
• System – used for array operations
  – Used for the operating system, Base software, Navisphere components, and advanced features
  – Also used as a buffer for I/O
• Read Cache, Write Cache – used for host data
  – All RAID types except RAID-3
  – Writes: mirrored to the peer SP, single setting
  – Reads: non-mirrored, individual settings
• RAID-3 – used for RAID-3 host data
  – Non-mirrored, no cache-like controls (more like “buffering”)
  – Do not mix RAID-3 with other RAID types in the same array
  – Available only on legacy (non-CX) products
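
Purely as an illustration of those rules, a small, hypothetical configuration check in Python (the per-SP memory and system-memory figures are placeholders, and the 2GB write-cache cap is borrowed from the CX600 notes earlier; this is not a product specification):

```python
def validate_cache_config(write_cache_mb: int, read_cache_a_mb: int, read_cache_b_mb: int,
                          sp_memory_mb: int = 4096,        # placeholder per-SP memory
                          system_mb: int = 1024,           # placeholder fixed system memory
                          max_write_cache_mb: int = 2048) -> list:
    """Check a hypothetical cache layout against the rules in this deck:
    write cache is a single, mirrored setting (capped at 2GB on the CX600),
    read cache is set per SP, and everything must fit beside fixed system memory."""
    problems = []
    if write_cache_mb > max_write_cache_mb:
        problems.append("write cache exceeds the 2GB maximum")
    # Write cache is mirrored, so (simplified) each SP holds a full copy of it.
    for sp, read_mb in (("SPA", read_cache_a_mb), ("SPB", read_cache_b_mb)):
        if system_mb + write_cache_mb + read_mb > sp_memory_mb:
            problems.append(f"{sp}: system + write + read cache exceeds SP memory")
    return problems

print(validate_cache_config(write_cache_mb=2048, read_cache_a_mb=512, read_cache_b_mb=1024))
```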

© 2003 EMC Corporation. All rights reserved. 20

20
CLARiiON Data Flow – WRITE (Cache Hit)

[Diagram: a host WRITE lands in SPA's write cache, is mirrored to SPB's write cache over the peer bus (CMI), and an ACK returns to the host; the cached data is written to the disk drives later]

© 2003 EMC Corporation. All rights reserved. 21

If write cache is enabled, host writes to a logical device enter cache on the owning
Storage Processor. Since all write cache is mirrored between SPs, the I/O must also be
copied into the write cache on the peer SP via the peer bus (CMI). When that operation
is complete, an acknowledgement is sent back through the CMI channel to the owning SP
and is passed back to the host, completing the write operation.
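
A minimal, illustrative Python sketch of this mirrored write path (the class and method names are invented for the example, not CLARiiON code):

```python
class StorageProcessor:
    def __init__(self, name: str):
        self.name = name
        self.write_cache = {}      # block address -> data
        self.peer = None           # set once both SPs exist (stands in for the CMI link)

    def host_write(self, block: int, data: bytes) -> str:
        """Accept a host write on the owning SP with write cache enabled."""
        self.write_cache[block] = data          # 1. land the write in local write cache
        self.peer.write_cache[block] = data     # 2. mirror it to the peer SP over the CMI
        return "ACK"                            # 3. acknowledge the host; disks are updated later by flushing

spa, spb = StorageProcessor("SPA"), StorageProcessor("SPB")
spa.peer, spb.peer = spb, spa
print(spa.host_write(0x10, b"0101101"))         # -> "ACK"
```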

21
CLARiiON Data Flow – WRITE (Cache Miss)

[Diagram: with write cache disabled, a host WRITE passes through the owning SP directly to the disk drives, and the ACK returns only after the disk write completes]

© 2003 EMC Corporation. All rights reserved. 22

If write cache is disabled, host writes to a logical device proceed directly to the
destination drives. When that operation is complete, an acknowledgement is sent back
through the owning SP and is passed back to the host, completing the write operation.

22
CLARiiON Data Flow – Cache Flushing - Idle

[Diagram: write cache shown with High and Low Watermark levels; cached data drains over the back-end Fibre to the disk drives]

© 2003 EMC Corporation. All rights reserved. 23

With write cache enabled, all writes enter the write cache on the owning SP and are then
mirrored to the other SP. As the write cache fills, the SP attempts to empty it to the
destination disk drives. This process is called “flushing”.
We can control this activity by setting two watermarks for write cache, a Low and a High.
Until cache fullness reaches the Low Watermark (default value = 40%), the SP is in a state
called “Idle Flushing”: it clears cache lazily, during idle periods of the array. The array
is at its peak cache performance during idle flushing.

23
CLARiiON Data Flow – Cache Flushing - Watermark

[Diagram: write cache filled between the Low and High Watermarks; flushing over the back-end Fibre to the disk drives becomes more aggressive as fullness rises]

© 2003 EMC Corporation. All rights reserved. 24

At the low watermark, the flushing process is given a higher priority by assigning a
single thread to write cache flushing for each scheduler cycle. As the amount of
unflushed data in write cache increases beyond the low watermark, progressively higher
priorities are assigned to the flushing process by adding threads to each scheduler cycle,
until a maximum of 4 threads is assigned at the high watermark.
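
As an illustration of that scheduling rule, a small Python sketch; the 40% low watermark and the 1-to-4 thread ramp come from the notes above, while the high-watermark default and the linear ramp shape are assumptions:

```python
def flush_threads(cache_fullness: float,
                  low_watermark: float = 0.40,    # default from the notes above
                  high_watermark: float = 0.80,   # assumed value; not given in the notes
                  max_threads: int = 4) -> int:
    """Return how many flushing threads to schedule for one cycle.

    Below the low watermark the SP is idle-flushing (modeled here as 0 dedicated
    threads); at the low watermark 1 thread is assigned, ramping up to
    max_threads at the high watermark. The linear ramp is an assumption."""
    if cache_fullness < low_watermark:
        return 0
    if cache_fullness >= high_watermark:
        return max_threads
    fraction = (cache_fullness - low_watermark) / (high_watermark - low_watermark)
    return 1 + int(fraction * (max_threads - 1))

for fullness in (0.30, 0.40, 0.60, 0.80, 0.95):
    print(f"{fullness:.0%} full -> {flush_threads(fullness)} flush thread(s)")
```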

24
CLARiiON Data Flow – Cache Flushing - Forced
[Diagram: write cache full beyond the High Watermark; incoming writes are halted until space is cleared, while data drains over the back-end Fibre to the disk drives]

© 2003 EMC Corporation. All rights reserved. 25

If the data in the write cache continues to increase, forced flushing will occur at a
point where there is not enough room in write cache for the next write I/O to fit.
Write I/Os to the array will be halted until enough data is flushed to make sufficient
room available for the next I/O. This process will continue until the forced flushing
plus the scheduled (watermark) flushing creates enough room for normal caching to
continue. All during this process write cache remains enabled.
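
A minimal sketch of the forced-flush behavior just described (the cache size and flush granularity are made up for illustration):

```python
class WriteCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.used = 0

    def flush_some(self, amount: int) -> None:
        """Destage 'amount' of cached data to disk (simplified)."""
        self.used = max(0, self.used - amount)

    def host_write(self, size: int) -> None:
        # Forced flushing: if the next write does not fit, the write is held
        # while cache is flushed until there is room. Cache remains enabled.
        while self.used + size > self.capacity:
            self.flush_some(amount=8)
        self.used += size

cache = WriteCache(capacity=64)
for _ in range(20):
    cache.host_write(size=6)        # later writes trigger forced flushes
print(f"cache in use after writes: {cache.used}/{cache.capacity}")
```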

25
CLARiiON Data Flow – Cache DUMP

• SPS notifies the SP of a power failure
• SP disables cache and copies cache content into the vault

[Diagram: on a POWER FAILURE, the write cache contents are written over the back-end Fibre to THE VAULT – a stripe on drives 0-4 of the DPE/DAE-OS]

© 2003 EMC Corporation. All rights reserved. 26

When problems occur, such as a power or SP Failure, the contents of write cache
are copied into a special area of the first 5 drives on the array called THE VAULT.
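
A toy Python sketch of that dump path (the vault layout and function name are illustrative, not the actual firmware behavior):

```python
def dump_cache_to_vault(write_cache: dict, vault_drives: list) -> None:
    """On an SPS power-fail signal, copy the write-cache image to the vault drives.

    The vault is modeled as five lists standing in for drives 0-4; a real array
    stripes the cache image across a reserved area of those drives."""
    image = sorted(write_cache.items())                     # snapshot the dirty cache
    for i, entry in enumerate(image):
        vault_drives[i % len(vault_drives)].append(entry)   # spread entries across drives 0-4

vault = [[] for _ in range(5)]                              # drives 0-4 of the DPE/DAE-OS
dump_cache_to_vault({0x10: b"0101101", 0x20: b"1101001"}, vault)
print([len(d) for d in vault])                              # cache entries landed on each drive
```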

26
CLARiiON Data Flow – READ (Cache Hit)

[Diagram: a host READ request is satisfied from SPA's read or write cache without touching the disk drives]

© 2003 EMC Corporation. All rights reserved. 27

When data is read-requested from the CLARiiON, the SP first checks the contents of READ
and WRITE cache for the data. If the data has been recently written to the array, and has
not yet been flushed to the disks and overwritten by new cached data, it can be read
directly from WRITE cache. It may also be in READ cache from an earlier request. If it is
in cache, the data is immediately retrieved from cache for the host and flagged as Most
Recently Used. The Least Recently Used (LRU) contents of read and write cache are the
first to be overwritten.

A Read Cache Hit is considerably faster than retrieving the data directly from the drives.
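
To illustrate the LRU bookkeeping, a compact Python sketch built on OrderedDict (the capacity and block addressing are invented; it models read cache only):

```python
from collections import OrderedDict

class ReadCache:
    """Tiny LRU read cache: hits are re-flagged Most Recently Used,
    and the Least Recently Used entry is evicted when the cache is full."""

    def __init__(self, capacity: int = 4):
        self.capacity = capacity
        self.entries = OrderedDict()          # block -> data, ordered oldest -> newest

    def read(self, block: int, fetch_from_disk) -> bytes:
        if block in self.entries:             # cache hit
            self.entries.move_to_end(block)   # flag as Most Recently Used
            return self.entries[block]
        data = fetch_from_disk(block)         # cache miss: go to the drives
        self.entries[block] = data            # keep a copy for subsequent reads
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict the Least Recently Used entry
        return data

cache = ReadCache(capacity=2)
disk = lambda b: f"data@{b}".encode()
cache.read(1, disk); cache.read(2, disk); cache.read(1, disk); cache.read(3, disk)
print(list(cache.entries))                    # block 2 (LRU) was evicted -> [1, 3]
```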

27
CLARiiON Data Flow – READ (Cache miss)

[Diagram: on a read cache miss, the data is retrieved from the disk drives, returned to the host, and a copy is kept in SPA's read cache]

© 2003 EMC Corporation. All rights reserved. 28

If the requested data is not currently in READ or WRITE cache, the data is retrieved from
the appropriate disk drives. A copy is then kept in READ cache to expedite subsequent
reads of the same data; this newly fetched data overwrites the Least Recently Used (LRU)
data currently in read cache.

The Storage Processor also has the ability to anticipate subsequent reads and will
prefetch data that has not yet been requested. We can decide when to begin prefetching
and how much data to prefetch.
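
A minimal sketch of sequential prefetch under assumed, tunable parameters (the trigger length and prefetch count here are placeholders, not CLARiiON defaults):

```python
def blocks_to_fetch(requested: int, recent_reads: list,
                    trigger_length: int = 2, prefetch_count: int = 4) -> list:
    """Return the requested block plus any speculatively prefetched blocks.

    If the last 'trigger_length' reads were sequential and lead up to this
    request, also fetch the next 'prefetch_count' blocks."""
    history = recent_reads[-trigger_length:] + [requested]
    sequential = all(b + 1 == nxt for b, nxt in zip(history, history[1:]))
    if sequential and len(history) > trigger_length:
        return [requested] + [requested + i for i in range(1, prefetch_count + 1)]
    return [requested]

print(blocks_to_fetch(12, recent_reads=[10, 11]))   # sequential -> [12, 13, 14, 15, 16]
print(blocks_to_fetch(42, recent_reads=[10, 11]))   # random read -> [42]
```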

28
CLARiiON Value Add – Access Logix
CX200/400/600, FC4500/4700

© 2003 EMC Corporation. All rights reserved. 29

Access Logix is array-based software which facilitates a shared-storage environment.
It segments the storage on a CLARiiON so that hosts see only the LUNs we intend for them
to see. The generic term for this is LUN masking, as the array is technically hiding
unwanted LUNs from the hosts.
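
A rough Python sketch of the LUN-masking idea (the WWNs and group layout are invented; this is not the Access Logix interface):

```python
# Each host (identified by its HBA WWN) is granted access to an explicit set of LUNs;
# every other LUN on the array stays hidden from it.
storage_groups = {
    "20:00:00:00:c9:11:22:33": {0, 1},     # hypothetical host A sees LUNs 0 and 1
    "20:00:00:00:c9:44:55:66": {2},        # hypothetical host B sees only LUN 2
}

def visible_luns(initiator_wwn: str, all_luns: set) -> set:
    """Return the LUNs this initiator is allowed to see (empty set if unknown)."""
    return all_luns & storage_groups.get(initiator_wwn, set())

array_luns = {0, 1, 2, 3}
print(visible_luns("20:00:00:00:c9:11:22:33", array_luns))   # {0, 1}
print(visible_luns("20:00:00:00:c9:99:99:99", array_luns))   # set() -- unlisted host sees nothing
```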

29
CLARiiON Value Add – SnapView
CX400/600, FC4700

[Diagram: a production host uses the data source LUN (DB); a backup host mounts the snapshot or clone (DB') and writes it to a tape device]

© 2003 EMC Corporation. All rights reserved. 30

SnapView includes two separate types of internal data replication: snapshots and clones.
Snapshots: point-in-time virtual copies of LUNs. They allow on-line backup of databases
while using only 10-20% of the source disk resources.
Clones: full, synchronized copies of LUNs within the same array. A clone can be used as a
point-in-time, full copy of a LUN through the fracture process, or as a data recovery
solution in the event of a failure.
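
For intuition, a toy copy-on-write snapshot in Python (this models only the snapshot concept, not SnapView's reserved LUN pool or clone machinery):

```python
class SnapshotLUN:
    """Point-in-time virtual copy: reads come from the source unless the
    original block was preserved (copied on first write) after the snapshot."""

    def __init__(self, source: dict):
        self.source = source
        self.saved = {}                       # original blocks preserved on first write

    def source_write(self, block: int, data: bytes) -> None:
        if block in self.source and block not in self.saved:
            self.saved[block] = self.source[block]   # copy-on-first-write
        self.source[block] = data

    def read(self, block: int) -> bytes:
        return self.saved.get(block, self.source.get(block))

lun = {0: b"alpha", 1: b"beta"}
snap = SnapshotLUN(lun)
snap.source_write(0, b"ALPHA!")               # production host keeps writing
print(lun[0], snap.read(0))                   # b'ALPHA!' b'alpha' -- snapshot stays point-in-time
```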

30
CLARiiON Value Add – MirrorView
CX400/600, FC4700

[Diagram: a production host writes to the primary (source) LUN; writes are synchronized in real time across the fabric to the secondary (mirror) LUN, which a standby host can take over]

© 2003 EMC Corporation. All rights reserved. 31

MirrorView is the CLARiiON remote disaster recovery solution. Two arrays have a
dedicated connection over Fibre Channel or T3 lines. Mirroring is synchronous, meaning
that every write to the source must be completed on the secondary array before the
acknowledgement is returned to the source host.
During a disaster, the secondary image can be promoted and mounted by a standby host,
minimizing downtime.
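
A minimal sketch of the synchronous-write rule (illustrative names; real MirrorView runs in the array firmware, not in host code):

```python
class MirroredLUN:
    """Synchronous remote mirror: the host write is acknowledged only after
    both the primary and the secondary copies have been updated."""

    def __init__(self):
        self.primary = {}
        self.secondary = {}                       # lives on the remote array

    def write(self, block: int, data: bytes) -> str:
        self.primary[block] = data                # write locally...
        self.secondary[block] = data              # ...and to the secondary over the link
        return "ACK"                              # only now acknowledge the source host

mirror = MirroredLUN()
print(mirror.write(7, b"payroll"), mirror.primary == mirror.secondary)   # ACK True
```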

31
CLARiiON Value Add – SAN Copy
CX400/600, FC4700

[Diagram: SAN Copy sessions across the fabric – bi-directional data migration between LUN pairs (DB/DB', Mail/Mail') and one-way data migration to DB'' on another array]

© 2003 EMC Corporation. All rights reserved. 32

SAN Copy allows data replication to heterogeneous storage arrays over a Fibre
Channel topology, without involving hosts or LAN topologies.

32
CX-Series Comparison

© 2003 EMC Corporation. All rights reserved. 33

33
Summary
• The CLARiiON CX arrays use Fibre Channel, industry-standard RAID, and high-availability hardware.
• The CLARiiON storage units are called LUNs, which are slices of RAID Groups, which in turn are collections of physical disk drives.
• CLARiiON-to-host data flow performance is dependent upon READ and WRITE caching.
• Flushing is the technique a CLARiiON employs to migrate data from write cache to the LUNs.
• A cache DUMP is one way the CLARiiON protects the contents of WRITE cache.
• The CX400/600 have optional software available which adds further capabilities.
© 2003 EMC Corporation. All rights reserved. 34

34
