Interview Questions 1
1. What is a Fabric?
Ans: A fabric is a virtual space in which all storage nodes communicate with each other over
distances. It can be created with a single switch or a group of switches connected together. Each
switch contains a unique domain identifier which is used in the address schema of the fabric.
Nodes in a fabric are identified using 24-bit Fibre Channel addressing.
Fabric services: When a device logs into a fabric, its information is maintained in a database. The
common services found in a fabric are:
Login Service
Name Service
Fabric Controller
Management Server
Fabric Management: Monitoring and managing the switches is a daily activity for most SAN
administrators. Activities include accessing the specific management software for monitoring
purposes and zoning.
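The 24-bit Fibre Channel address mentioned above splits into three 8-bit fields: Domain (the switch), Area (a port group on the switch), and Port (the device). A minimal sketch of decoding one, with an illustrative address value:

```python
# Sketch (not EMC code): splitting a 24-bit Fibre Channel address
# into its Domain/Area/Port fields.

def parse_fc_address(addr: int) -> dict:
    """Decode a 24-bit FC address ID into its three 8-bit fields."""
    if not 0 <= addr <= 0xFFFFFF:
        raise ValueError("FC address must fit in 24 bits")
    return {
        "domain": (addr >> 16) & 0xFF,  # identifies the switch in the fabric
        "area":   (addr >> 8) & 0xFF,   # identifies a port group on the switch
        "port":   addr & 0xFF,          # identifies the device (N_Port)
    }

print(parse_fc_address(0x010400))  # {'domain': 1, 'area': 4, 'port': 0}
```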
2. What is ISL?
Ans: Switches are connected to each other in a fabric using Inter-Switch Links (ISLs).
3. What is a switched fabric?
Ans: Switched Fabric - Each device has a unique dedicated I/O path to the device it is
communicating with. This is accomplished by implementing a fabric switch.
4. Lun Migration?
Ans: LUN Migration Information:
LUN Migration provides the ability to migrate data from one LUN to another dynamically.
The target LUN assumes the identity of the source LUN.
The source LUN is unbound when migration process is complete.
Host access to the LUN can continue during the migration process.
The target LUN must be the same size or larger than the source.
The source and target LUNs do not need to be the same RAID type or disk type (FC<->ATA).
Both LUNs and metaLUNs can be sources and targets.
Individual component LUNs in a metaLUN cannot be migrated independently - the entire
metaLUN must be migrated as a unit.
The migration process can be throttled.
Reserved LUNs cannot be migrated.
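The migration rules above lend themselves to a simple pre-flight check. A hypothetical sketch (the field names are illustrative, not a Navisphere API):

```python
# Hypothetical sketch: pre-flight checks mirroring the LUN Migration
# rules listed above. Field names are illustrative only.

def can_migrate(source: dict, target: dict) -> tuple:
    if source.get("reserved") or target.get("reserved"):
        return False, "Reserved LUNs cannot be migrated"
    if target["size_gb"] < source["size_gb"]:
        return False, "Target must be the same size or larger than the source"
    # RAID type and disk type (FC vs ATA) may differ, so no check is needed.
    return True, "OK"

ok, reason = can_migrate({"size_gb": 100, "reserved": False},
                         {"size_gb": 200, "reserved": False})
print(ok, reason)  # True OK
```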
5. What is heterogeneous?
Ans: A network that includes computers and other devices from different manufacturers. For
example, local-area networks (LANs) that connect PCs with Apple Macintosh computers are
heterogeneous.
6. What is zoning? What are the different types of zoning?
Ans: There are several configuration layers involved in granting nodes the ability to
communicate with each other:
Members - Nodes within the SAN which can be included in a zone.
Zones - A zone contains a set of members that can access each other. A port or a node can be a
member of multiple zones.
Zone Sets - A group of zones that can be activated or deactivated as a single entity in either a
single unit or a multi-unit fabric. Only one zone set can be active at one time per fabric. Can also
be referred to as a Zone Configuration.
In general, zoning can be divided into three categories:
WWN zoning - WWN zoning uses the unique identifiers of a node, which have been recorded
in the switches, to either allow or block access. A major advantage of WWN zoning is flexibility:
the SAN can be re-cabled without having to reconfigure the zone information, since the WWN
stays with the node regardless of which switch port it is cabled to.
Port zoning - Port zoning uses physical ports to define zones. Access to data is determined by
what physical port a node is connected to. Although this method is quite secure, should recabling
occur, the zoning configuration must be updated.
Mixed zoning - Mixed zoning combines the two methods above. Using mixed zoning allows a
specific port to be tied to a node WWN. This is not a typical method.
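As a sketch of the zoning rules above: two nodes may communicate only if they share at least one zone in the active zone set. The data model and WWNs below are illustrative, not from any real switch configuration:

```python
# Sketch (assumed data model): deciding whether two WWNs may communicate
# under the active zone set. Two nodes can talk if they share any zone.

active_zone_set = {
    "zone_hostA": {"10:00:00:00:c9:aa:bb:01", "50:06:01:60:10:60:08:24"},
    "zone_hostB": {"10:00:00:00:c9:aa:bb:02", "50:06:01:60:10:60:08:24"},
}

def can_communicate(wwn1: str, wwn2: str) -> bool:
    return any(wwn1 in zone and wwn2 in zone
               for zone in active_zone_set.values())

# The two HBAs share no zone, even though both can reach the storage port:
print(can_communicate("10:00:00:00:c9:aa:bb:01", "10:00:00:00:c9:aa:bb:02"))  # False
print(can_communicate("10:00:00:00:c9:aa:bb:01", "50:06:01:60:10:60:08:24"))  # True
```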
7. What is Single HBA Zoning?
Ans: Under single-HBA zoning, each HBA is configured with its own zone. The members of the
zone consist of the HBA and one or more storage ports with the volumes that the HBA will use.
Two reasons for Single HBA Zoning include:
Cuts down on the reset time for any change made in the state of the fabric.
Only the nodes within the same zone will be forced to log back into the fabric after a RSCN
(Registered State Change Notification).
8. What is LUN Masking?
Ans: Device (LUN) Masking ensures that volume access to servers is controlled appropriately.
This prevents unauthorized or accidental use in a distributed environment.
A zone set can have multiple host HBAs and a common storage port. LUN Masking prevents
multiple hosts from trying to access the same volume presented on the common storage port.
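LUN Masking can be pictured as a per-host visibility table that is checked independently of zoning. A minimal sketch with hypothetical host names:

```python
# Sketch (illustrative names): LUN Masking as a lookup of which LUNs a
# host may see on a shared storage port, independent of zoning.

masking_table = {
    "hostA": {0, 1},   # LUNs visible to hostA
    "hostB": {2},      # LUNs visible to hostB
}

def host_can_access(host: str, lun: int) -> bool:
    return lun in masking_table.get(host, set())

print(host_can_access("hostA", 2))  # False: LUN 2 is masked from hostA
print(host_can_access("hostB", 2))  # True
```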
9. What is iSNS (Internet Storage Name service)?
Ans: Each Fibre Channel Name Service message has an equivalent iSNS message. This mapping
is transparent, allowing iFCP fabrics with iSNS support to provide the same services that Fibre
Channel fabrics can.
When an iFCP or iSCSI gateway receives a Name Service ELS, it is directly converted to the
equivalent iSNS Name Service message. The gateway intercepts the response and maps any
addressing information obtained from queries to its internal address translation table before
forwarding the Name Service ELS response to the original Fibre Channel requester.
10. What is Replication? Replication Software?
Ans: Local replication is a technique for ensuring Business Continuity by making exact copies of
data. With replication, data on the replica will be identical to the data on the original at the
point-in-time that the replica was created.
Replica - An exact copy (in all details)
Replication - The process of reproducing data
Examples:
Copy a specific file
Copy all the data used by a database application
Copy all the data in a UNIX Volume Group (including underlying logical volumes, file
systems, etc.)
Copy data on a storage array to a remote storage array
EMC Symmetrix Arrays
EMC TimeFinder/Mirror
Full volume mirroring
EMC TimeFinder/Clone
Full volume replication
EMC TimeFinder/Snap
Pointer-based snapshots
When LUNs have been bound, they are assigned to hosts. Normal host procedures, such as
partitioning, formatting, and labeling, will then be performed to make the LUN usable. The
CLARiiON software that controls host access (LUN Masking) to LUNs is Access Logix.
15. How will you check server compatibility when installing a new box?
Ans: Use the Compatibility Matrix in the E-Lab Interoperability Navigator on powerlink.emc.com.
16. What is Hot Spare?
Ans: A hot spare is an idle component (often a drive) in a RAID array that becomes a temporary
replacement for a failed component.
For example:
The hot spare takes the failed drive's identity in the array.
Data recovery takes place. How this happens is based on the RAID implementation:
If parity was used, data will be rebuilt onto the hot spare from the parity and data on the
surviving drives.
If mirroring was used, data will be rebuilt using the data from the surviving mirrored drive.
The failed drive is replaced with a new drive at some time later.
One of the following occurs:
The hot spare replaces the failed drive permanently, meaning that it is no longer a hot spare
and a new hot spare must be configured on the system.
When the new drive is added to the system, data from the hot spare is copied to the new drive.
The hot spare returns to its idle state, ready to replace the next failed drive.
Note: The hot spare drive needs to be large enough to accommodate the data from the failed
drive
Hot spare replacement can be:
Automatic - when a disk's recoverable error rates exceed a predetermined threshold, the disk
subsystem tries to copy data from the failing disk to a spare one. If this task completes before the
damaged disk fails, the subsystem switches to the spare and marks the failing disk unusable. (If
not, it uses parity or the mirrored disk to recover the data, as appropriate.)
User initiated - the administrator tells the system when to do the rebuild. This gives the
administrator control (e.g., rebuild overnight so as not to degrade system performance); however,
the system is vulnerable to another failure because the hot spare is now unavailable. Some
systems implement multiple hot spares to improve availability.
16. What is Hot Swap?
Ans: Like hot spares, hot swaps enable a system to recover quickly in the event of a failure. With
a hot swap the user can replace the failed hardware (such as a controller) without having to shut
down the system.
17. Data Availability at the host?
Ans:
Multiple HBAs: Redundancy can be implemented using multiple HBAs. HBAs are the host's
connection to the storage subsystem.
Multi-pathing software: Multi-pathing software is a server-resident, availability enhancing,
software solution. It utilizes the available HBAs on the server to provide redundant
communication paths between host and storage devices. It provides multiple path I/O
capabilities and path failover, and may also provide automatic load balancing. This assures
uninterrupted data transfers even in the event of a path failure.
Clustering: Clustering uses redundant host systems connected together. In the event that one of
the hosts in the cluster fails, its functions will be assumed by the surviving member(s). Cluster
members can be configured to transparently take over each other's workload, with minimal or no
impact to the user.
18. What is HBA?
Ans: The host connects to storage devices using special hardware called a Host Bus Adapter
(HBA). HBAs are generally implemented as either an add-on card or a chip on the motherboard
of the host. The ports on the HBA are the host's connection to the storage subsystem. There may
be multiple HBAs in a host. The HBA has the processing capability to handle some storage
commands, thereby reducing the burden on the host CPU.
19. What is a Volume Manager?
Ans: The Volume Manager is an optional intermediate layer between the filesystem and the
physical disks. It can aggregate several smaller disks to form a larger virtual disk and make this
virtual disk visible to higher level programs and applications. It optimizes access to storage and
simplifies the management of storage resources.
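As an illustration of the aggregation described above, a concatenating volume manager must translate a logical block address on the virtual disk into a (physical disk, offset) pair. A minimal sketch:

```python
# Sketch: how a volume manager might map a logical offset on a
# concatenated virtual disk back to a physical disk and offset.

def locate(logical_block: int, disk_sizes: list) -> tuple:
    """Return (disk_index, block_within_disk) for a concatenated volume."""
    for i, size in enumerate(disk_sizes):
        if logical_block < size:
            return i, logical_block
        logical_block -= size
    raise ValueError("offset beyond end of virtual disk")

# Three small disks concatenated into one 600-block virtual disk:
print(locate(250, [100, 200, 300]))  # (1, 150): block 250 lives on disk 1
```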
20. DAS, NAS, SAN?
DAS: In a Direct Attached Storage (DAS) environment, servers connect directly to the disk array,
typically via a SCSI interface. The same connectivity port on the disk array cannot be shared
between multiple servers. Clients connect to the servers through the Local Area Network. The
distance between the Server and the Disk array is governed by the SCSI limitations. With the
advent of Storage Area Networks and Fibre Channel interface, this method of Disk array access is
becoming less prevalent.
NAS: In a Network Attached Storage (NAS) environment, NAS Devices access the disks in an
array via direct connection or through external connectivity. The NAS heads are optimized for
file serving. They are set up to export/share file systems. Servers called NAS clients access these
file systems over the Local Area Network (LAN) to run applications. The clients connect to these
servers also over the LAN.
SAN: In a Storage Area Network (SAN) environment, servers access the disk array through a
dedicated network, the SAN. The SAN consists of Fibre Channel switches that
provide connectivity between the servers and the disk array. In this model, multiple servers can
access the same Fibre Channel port on the disk array. The distance between the server and the
disk array can also be greater than that permitted in a direct attached SCSI environment. Clients
communicate with the servers over the Local Area Network (LAN).
21. What is DPE2?
Ans: All CX-series models now ship with the new UltraPoint Disk Array Enclosure (DAE2P).
22. What is CRU signature?
Ans: The CRU Signature defines what Enclosure/Bus/Slot the disk is in. It also has an entry that
is unique for each RAID group, so a disk must match not only the Bus/Enclosure/Slot but also
the RAID group in order to be included as part of a LUN. If you pull one disk, that slot is
marked for rebuild. If you pull a second disk from a RAID group, the RAID group shuts down. If
you then insert a new disk for the second disk that you pulled, Flare (the array operating system)
will try to bring up the LUN in "n -1 disks mode" but since it is not the original disk, it cannot
bring the LUN online and, instead, returns a CRU signature error. If you insert the right disk
(that is, the one you pulled from that slot), the LUN will come back online. If you were to insert a
new disk into the first slot you pulled or insert the original disk that you pulled from that slot, the
disk will be rebuilt because that first slot is marked as needing a rebuild. When a slot requires a
rebuild, the CRU signature of the disk does not matter. A rebuilt disk is assigned a new CRU
signature.
The CRU Signature is created when a LUN is bound and is stored on the private area of each
disk.
23. What is CMI and its function?
Ans: The SPs communicate via the CMI (CLARiiON Messaging Interface); newer models use the
faster PCI Express bus instead.
A Fibre Channel connection between the SPs, called the CMI, carries data between the SPs, and
also carries status information. On FC-series arrays, the CMI is a single connection, running at
100 MB/s. On CX-series arrays, the CMI is a dual connection, with each connection running at
200 MB/s. On newer CX3 series arrays, the communication path between SPs will use the faster
PCI express bus
On FC-series arrays, there is an internal mechanism to determine where the fault lies if the CMI
should fail: it uses a backup CMI, which is a low-speed serial connection between the SPs. On
the CX arrays, the dual CMI itself allows this determination.
24. Explain raid levels?
Ans:
RAID 0 - Striped array with no fault tolerance
RAID 1 - Disk mirroring
RAID 1+0 - Mirroring and striping
RAID 3 - Parallel access array with dedicated parity disk
RAID 4 - Striped array with independent disks and a dedicated parity disk
RAID 5 - Striped array with independent disks and distributed parity
RAID 6 - Striped array with independent disks and dual distributed parity
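The single-parity levels above (RAID 3, 4, and 5) rely on XOR: the parity block is the XOR of the data blocks in a stripe, so any one lost block can be recomputed from the survivors. A small sketch:

```python
# Sketch: single parity is the XOR of the data blocks in a stripe,
# so any single lost block can be rebuilt from the surviving blocks.

from functools import reduce

def xor_blocks(blocks):
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

d0, d1, d2 = b"\x01\x02", b"\x10\x20", b"\xAA\xBB"
parity = xor_blocks([d0, d1, d2])

# Drive holding d1 fails: rebuild it from parity + surviving data blocks.
rebuilt = xor_blocks([d0, d2, parity])
print(rebuilt == d1)  # True
```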
25. What is PSM LUN?
Ans: The PSM (Persistent Storage Manager) LUN stores the array's configuration data. It is
created during installation, or immediately after a CLARiiON FC4700 array is installed.
The PSM LUN was not properly planned or the customer did not want the type of RAID Groups
or LUNs that were configured. Destroying and recreating the PSM LUN assumes that no data has
been stored on the array and that the procedure takes place during the initial installation of the
array. While RAID Group and LUN information is unaffected, all Access Logix configuration
information (including host mappings, storage groups, host information, etc.) and all
configuration information for optional SnapView and MirrorView software are lost.
26. How do you create RAID Groups, bind LUNs, and create Storage Groups?
Ans: In Navisphere, right-click the array: Create RAID Group, Bind LUN, and Create Storage
Group.
27. What is MirrorView?
Ans: MirrorView is the CLARiiON remote disaster recovery solution. Two arrays have dedicated
connection over Fibre Channel or T3 lines. It is synchronous mirroring, meaning that all writes to
the source must be completed to the Secondary array before the acknowledgement is sent to the
source host. During a disaster, the Secondary Image can be promoted and mounted by a Standby
host, minimizing downtime.
28. What is Remote replication ?
Ans: Remote Replication:
Replica is available at a remote facility
Could be a few miles away or half way around the world
Backup and Vaulting are not considered remote replication
Synchronous Replication
Replica is identical to source at all times - zero RPO
Asynchronous Replication
Replica is behind the source by a finite margin - small RPO
Connectivity
Network infrastructure over which data is transported from source site to remote site
29. What is DWDM?
Ans: Dense Wavelength Division Multiplexing (DWDM)
DWDM is a technology that puts data from different sources together on an optical fiber, with
each signal carried on its own separate light wavelength (commonly referred to as a lambda, λ).
Up to 32 protected and 64 unprotected separate wavelengths of data can be multiplexed into a
light stream transmitted on a single optical fiber.
30. What all storage Arrays & SAN Devices supported by EMC ECC?
Ans:
Storage arrays:
EMC Symmetrix
EMC CLARiiON
EMC Centera
EMC Celerra and Network Appliances NAS servers
EMC Invista
Hitachi Data Systems (including the HP and Sun resold versions)
HP Storageworks
IBM ESS
SMI-S (Storage Management Initiative Specification) compliant arrays
SAN Devices:
EMC Connectrix
Brocade
McData
Cisco
Inrange (CNT)
IBM Blade Server (IBM-branded Brocade models only)
Dell Blade Server (Dell-branded Brocade models only)
31. What is reserved pool?
Ans: The CLARiiON storage system must be configured with a Reserved LUN Pool in order to
use SnapView Snapshot features. The Reserved LUN Pool consists of 2 parts: LUNs for use by
SPA and LUNs for use by SPB. Each of those parts is made up of one or more Reserved LUNs.
The LUNs used are bound in the normal manner. However, they are not placed in storage groups
and allocated to hosts; instead, they are used internally by the storage system software. These are
known as private LUNs because they cannot be used, or seen, by attached hosts.
Like any LUN, a Reserved LUN will be owned by only one SP at any time and they may be
trespassed if the need should arise (i.e., if an SP should fail).
Just as each storage system model has a maximum number of LUNs it will support, each also has
a maximum number of LUNs which may be added to the Reserved LUN Pool.
The first step in SnapView configuration will usually be the assignment of LUNs to the Reserved
LUN Pool. Only then will SnapView Sessions be allowed to start. Remember that as snapable
LUNs are added to the storage system, the LUN Pool size will have to be reviewed. Changes may
be made online.
LUNs used in the Reserved LUN Pool are not host-visible, though they do count towards the
maximum number of LUNs allowed on a storage system.
32. Explain DASD, JBOD, Disk Array?
Ans: DASD - Direct Access Storage Device (originally introduced by IBM in 1956) is the oldest
of the techniques for accessing disks from a host computer. Disks are directly accessed from the
host (historically a mainframe system) and tightly coupled to the host environment. A hard drive
in a personal computer is an example of a DASD system. Typically, you can view the DASD as a
one-to-one relationship between a server/computer and its disk drive.
JBOD: JBOD is an acronym for just a bunch of disks. The drives in a JBOD array can be
independently addressed and accessed by the Server.
DISK ARRAY : Disk arrays extend the concept of JBODs by improving performance and
reliability. They have multiple host I/O ports. This enables connecting multiple hosts to the same
disk array. Array management software allows the partitioning or segregation of array resources,
so that a disk or group of disks can be allocated to each of the hosts. Typically they have
controllers that can perform RAID (Redundant Array of Independent Disks) calculations.
32. What is BCV?
Ans: The most fundamental element of TimeFinder/Mirror is a specially defined volume called a
Business Continuity Volume. A BCV is a Symmetrix volume with special attributes that allows it
to be attached to another Symmetrix Logical Volume within the same Symmetrix as the next
available mirror. It must be of the same size, type, and emulation (for mainframe 3380/3390) as
the device which it will mirror. Each BCV has its own host address and Symmetrix device
number.
33. How does SRDF work?
Ans: SRDF is used for data mirroring between physically separate Symmetrix systems.
34. Explain FCIP, IFCP, ISCSI?
Ans:
FCIP tunnels Fibre Channel frames between fabrics over a TCP/IP network. iFCP is a
gateway-to-gateway protocol that carries Fibre Channel traffic over TCP/IP. iSCSI carries SCSI
commands directly over TCP/IP, requiring no Fibre Channel at all.
A fabric provides scalability and dedicated bandwidth between any given pair of
interconnected devices. It uses a 24-bit address (called the Fibre Channel Address) to route
traffic, and can accommodate as many as 15 million devices in a single fabric.
41. What is a cluster?
Ans: Multiple servers act like a single server; when one server goes down, end users see little or
no effect.
42. What is JBOD?
Ans: JBOD is an acronym for just a bunch of disks. The drives in a JBOD array can be
independently addressed and accessed by the server.
Bidirectional mirroring
Integration with EMC SnapView LUN copy software
Replication over long distances
CLARiiON MirrorView/A Environment
MirrorView/A operates in a highly available environment, leveraging the dual-SP design of
CLARiiON systems. If one SP fails, MirrorView/A running on the other SP will control and
maintain the mirrored LUNs. If the server is able to fail over I/O to the remaining SP, then
periodic updates will continue. The high-availability features
of RAID protect against disk failure, and mirrors are resilient to an SP failure in the primary or
secondary storage system.
53. What Is EMC SAN Copy?
EMC SAN Copy is storage-system-based software for copying data directly from a logical unit on
one storage system to destination logical units on supported remote systems without using host
resources. SAN Copy connects through a storage area network (SAN), and also supports
protocols that let you use the IP WAN to send data over extended distances.
SAN Copy runs in the SPs of a supported storage system (called a SAN Copy storage system),
and not on the host servers connected to the storage systems. As a result, the host processing
resources are free for production applications while the SPs copy data.
SAN Copy can copy data between logical units as follows:
Within a CLARiiON storage system
Between CLARiiON storage systems
Between CLARiiON and Symmetrix storage systems
Between CLARiiON and qualified non-EMC storage systems
SAN Copy can use any CLARiiON SP ports to copy data, provided the port is not being used for
MirrorView connections. Multiple sessions can share the same port. You choose which ports SAN
Copy sessions use through switch zoning.
SAN Copy Features and Benefits
SAN Copy adds value to customer systems by offering the following
features:
Storage-system-based data mover application that offloads the host, thereby improving host
performance
Software that you can use in conjunction with replication software, allowing I/O with the
source logical unit to continue during the copy process
Simultaneous sessions that copy data to multiple CLARiiON and Symmetrix storage systems
Easy-to-use, web-based application for configuring and managing SAN Copy
SAN Copy/E runs only on a CX300 or AX-Series storage system and can only copy data to
CX-Series storage systems that are running SAN Copy.
SAN Copy/E can use any CLARiiON SP ports to copy data, provided the port is not being used
for MirrorView connections. Multiple sessions can share the same port. You choose which ports
SAN Copy sessions use through switch zoning.
SAN Copy/E Features and Benefits
The SAN Copy/E adds value to customer systems by offering the following features:
Incremental copy sessions from AX-Series or CX300 storage systems to SAN Copy systems
located in the data center.
Storage-system-based data mover application that uses storage area network (SAN) rather than
host resources to copy data resulting in a faster copy process.
Easy-to-use, web-based application for configuring and managing SAN Copy/E.
Software that you can use in conjunction with replication software, allowing I/O with the
source logical unit to continue during the copy process
If power fails while there is write cache data, that data is dumped to the vault area. It is
protected there by the nonvolatile nature of disk.
58. What is flushing and how many levels?
Ans:
Three levels of flushing:
Idle - Low I/Os to the LUN; user I/Os continue
Watermark - Priority depends on cache fullness; user I/Os continue
Forced - Cache has no free space; user I/Os queue
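The three levels above can be sketched as a threshold check on write-cache fullness. The watermark percentages here are illustrative, not CLARiiON defaults:

```python
# Sketch (thresholds illustrative): choosing a flush mode from write-cache
# fullness, mirroring the three levels described above.

def flush_mode(dirty_pct: float, low_wm: float = 40.0, high_wm: float = 80.0) -> str:
    if dirty_pct >= 100.0:
        return "forced"      # cache has no free space: host I/Os queue
    if dirty_pct >= low_wm:
        return "watermark"   # flush priority depends on cache fullness
    return "idle"            # background flushing only; host I/Os continue

print(flush_mode(10))   # idle
print(flush_mode(85))   # watermark
print(flush_mode(100))  # forced
```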
59. Explain Meta LUNs?
a metaLUN is created by combining LUNs
Dynamically increase LUN capacity
Can be done on-line while host I/O is in progress
A LUN can be expanded to create a metaLUN and a metaLUN can be further expanded by
adding additional LUNs
Striped or concatenated
Data is restriped when a striped metaLUN is created
Appears to host as a single LUN
Added to storage group like any other LUN
Can be used with MirrorView, SnapView, or SAN Copy
Supported only on CX family with Navisphere 6.5+
60. What is Trespassing?
If the Storage Processor, Host Bus Adapter, cable, or any component in the I/O path fails,
ownership of the LUN can be moved to the surviving SP
Process is called LUN Trespassing
61. What is PowerPath?
Host Connectivity Redundancy PowerPath Failover Software
Host resident program for automatic detection and management of failed paths
Host will typically be configured with multiple paths to LUN
If an HBA, cable, or switch fails, PowerPath will redirect I/O over the surviving path.
If Storage Processor fails, PowerPath will Trespass LUN to surviving Storage Processor and
redirect I/O
Dynamic load balancing across HBAs and fabrics, but not across Storage Processors
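The failover behavior described above can be sketched as path selection over a path table. This is an illustration of the idea, not PowerPath internals:

```python
# Sketch (not PowerPath internals): redirecting I/O to a surviving path
# when the preferred HBA/fabric path fails.

paths = [
    {"hba": "hba0", "sp": "SPA", "alive": False},  # failed path
    {"hba": "hba1", "sp": "SPA", "alive": True},
    {"hba": "hba1", "sp": "SPB", "alive": True},
]

def pick_path(paths, owner_sp="SPA"):
    # Prefer live paths to the owning SP; otherwise trespass to the peer SP.
    live = [p for p in paths if p["alive"]]
    if not live:
        raise IOError("all paths dead")
    to_owner = [p for p in live if p["sp"] == owner_sp]
    return to_owner[0] if to_owner else live[0]

print(pick_path(paths))  # {'hba': 'hba1', 'sp': 'SPA', 'alive': True}
```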
62. Explain FLARE Operating Environment?
FLARE software manages all functions of the CLARiiON storage system. Each storage system
ships with a complete copy of FLARE software installed. When you power up the storage system,
each SP boots and executes FLARE software.
FLARE performs provisioning and resource allocation
Memory budgets for caching and for snap sessions, mirrors, clones, copies
Process Scheduling
Boot Management
Access Logix software is optional software that runs within the FLARE operating environment
on each storage processor (SP). Access Logix provides access control and allows multiple hosts to
share the storage system. This LUN Masking functionality is implemented using Storage
Groups. A Storage Group is one or more LUNs within a storage system that are reserved for one
or more hosts and are inaccessible to other hosts. When you power up the storage system, each
SP boots and executes its Access Logix software.
Port Zoning - Disadvantages: Reconfiguration
WWPN Zoning - Advantages: Flexibility, Reconfiguration, Troubleshooting;
Disadvantages: Spoofing, HBA replacement
67. What are hard and soft zoning?
Ans:
WWN zoning - WWN zoning uses the unique identifiers of a node, which have been recorded in
the switches, to either allow or block access. A major advantage of WWN zoning is flexibility. The
SAN can be re-cabled without having to reconfigure the zone information since the WWN is
static to the port.
Port zoning - Port zoning uses physical ports to define zones. Access to data is determined by
what physical port a node is connected to. Although this method is quite secure, should recabling
occur, the zoning configuration must be updated.
68. What is the difference between Hub and switch?
Ans: A switch is much faster than a hub and reduces collisions/retransmissions. Switches send
traffic only to the destination device/port, while hubs broadcast the data to all ports. If you have
a choice, a switch will provide more reliable and faster transfers of data.
CX700/CX500/CX300 Model Comparison
            CX700     CX500     CX300
Disks       240       120       60
SPs
CPU/SP
FE/SP       4@2Gb     2@2Gb     2@2Gb
BE/SP       4@2Gb     2@2Gb     1@2Gb
IOPS        200K      120K      50K
MB/sec      1520      760       680
Mem         8GB       4GB       2GB
Cache       3224MB    1515MB    571MB
W Cache     3072MB    1515MB    571MB
HA Hosts    256       128       64
Min Size    8U        4U        4U
LUNs        2048      1024      512
RGs         240       120       60
LUNs/RG     128       128       128
SGs         512       256       128
LUNs/SG     256       256       256
CMI         2x2Gb     2x2Gb     2x2Gb
Rsrvd LUNs  100       50        25
CX600/CX400/CX200 Model Comparison
            CX600     CX400     CX200     CX200LC
Disks       240       60        30        15
SPs
CPU/SP
FE/SP       4@2Gb     2@2Gb     2@2Gb     2@2Gb
BE/SP       2@2Gb     2@2Gb     1@2Gb     1@2Gb
IOPS        150K      60K       40K       20K
MB/sec      1300      680       200       100
Mem         8GB       2GB       1GB       512MB
Cache       3470MB    619MB     237MB     237MB
W Cache     3072MB    619MB     237MB     237MB
HA Hosts    128       64        15        NA
Min Size    8U        4U        4U        4U
LUNs        1024      512       256       256
LUNs/RG     128       128       128       128
RGs         240       60        30        15
SGs         256       128       30
CMI         2x2Gb     2x2Gb     2x2Gb     NA
Rsrvd LUNs  100       50        25        NA
CX500i/300i Model Comparison
            CX500i    CX300i
Disks       120       60
SPs
CPU/SP      2x1.6GHz  1x800MHz
iSCSI/SP    2@1Gb     2@1Gb
BE/SP       2@2Gb     1@2Gb
Mem         4GB       2GB
Cache       1515MB    571MB
W Cache     1515MB    571MB
Connects    128       64
Min Size    4U        4U
LUNs        1024      512
RGs         120       60
LUNs/RG     128       128
SGs         512       128
LUNs/SG     256       256
CMI         2x2Gb     2x2Gb
Rsrvd LUNs  50        25
'Connects' is the max number of NICs/HBAs that can connect to the array, regardless of
the number of hosts.
Max FC Capacity Comparison
Disk    CX700   CX500   CX300
36GB    8.6TB   4.2TB   2.1TB
73GB    17.5TB  8.6TB   4.4TB
146GB   35TB    17.4TB  8.6TB
300GB   72TB    36TB    18TB
The max number of FC drives per RAID Group is 16, making the largest possible current
FC RAID group size 4.8 TB (raw) (16 x 300GB on a CX-Series array).
All sizes are raw.
Max ATA Capacity Comparison
Disk CX700 CX500 CX300
250GB 56TB 26.2TB 11.2TB
320GB 72TB 33.6TB 14.4TB
ATA Disk Drive Notes:
The max number of ATA drives per RAID Group is 16, making the largest possible
current ATA RAID group size 5.12TB (raw) (16 x 320GB on a CX-Series array).
All sizes are raw and assume that all DAEs, other than the first one, are ATA.
The 250GB drives are 7200RPM SATA - the 320GB drives are 5400RPM PATA.
All sizes do not include the FC drives in the first DAE.
The first DAE must be FC and contains at least 5 drives. All other DAEs can be ATA or a
mix of ATA & FC.
250GB SATA and 320GB PATA drives can be mixed in the same DAE.
iSCSI CLARiiON Information
iSCSI arrays have 1Gb copper Ethernet FE ports instead of Fibre Channel FE ports.
You cannot mix Fibre Channel and iSCSI ports on the same array.
Refer to the EMC Support Matrix for supported host iSCSI connectivity.
The iSCSI ports and the 10/100 host management ports can be on the same IP subnet.
iSNS is supported.
IPSEC is not supported natively on the arrays.
Both standard NICs as well as iSCSI HBAs (e.g. QLogic QLA4010) are supported for host
access.
PowerPath (V4.3.1 or later) supports multi-pathing and load balancing for iSCSI arrays.
MirrorView/S/A and SAN Copy are not supported on iSCSI arrays.
metaLUN Information
A metaLUN is an abstract LUN that is presented to the host as a single piece of storage
but consists of 2 or more 'back end' LUNs.
The use of metaLUNs is optional. The capability is available in the base FLARE upgrade
R12.
metaLUNs and traditional LUNs can be mixed on the same array.
metaLUNs are created from an initial LUN referred to as the 'base' LUN.
The metaLUN takes on the characteristics of the base LUN when it is created (WWN,
Nice name, etc.), which can be modified by the user.
Creation of a metaLUN is dynamic - the creation process is functionally transparent to
any hosts accessing the base LUN.
FC and ATA LUNs cannot be mixed in the same metaLUN.
Ownership of the back-end LUNs that make up a metaLUN will all be moved to the
same SP as the base LUN.
All LUNs that make up a metaLUN become private.
Destroying a metaLUN destroys all the LUNs that make up that metaLUN.
If a LUN uses SnapView, MirrorView or SAN Copy it must be removed from those
applications before it can be expanded using metaLUNs.
metaLUN components do not count against the max LUN count for an array; however,
they have their own limits (see below).
metaLUNs can be striped or concatenated.
Striping Considerations
o All striped LUNs must be the same size and RAID type.
o Striping will generally provide better performance since more spindles are
available.
o If a new LUN is added to a striped metaLUN, all data on the existing LUNs will
be restriped.
o The new space will not be available until re-striping occurs.
o For optimal performance LUNs should be in different RAID groups (spindles).
Concatenation Considerations
o Any LUN types can be concatenated together except for R0 LUNs.
o R0 LUNs can only be concatenated with other R0 LUNs.
o Concatenation occurs by adding components to a base or existing metaLUN
LUN.
o A component is a collection of one or more LUNs identical in RAID type and size
that are striped together.
o The space added by concatenating a LUN is available immediately for use.
o You can only add LUNs to the last component in a metaLUN. You cannot insert
LUNs into the chain of component LUNs.
metaLUN Configuration
Item                     CX700   CX500   CX300
Max metaLUNs             1024    512     256
LUNs/Cmpnt               32      32      16
Concat. Cmpnts/metaLUN   16
LUNs in metaLUNs         512     256     128
LUN Migration Information
LUN Migration provides the ability to migrate data from one LUN to another
dynamically.
The target LUN assumes the identity of the source LUN.
The source LUN is unbound when the migration process is complete.
Host access to the LUN can continue during the migration process.
The target LUN must be the same size as or larger than the source.
The source and target LUNs do not need to be the same RAID type or disk type (FC<->ATA).
Both LUNs and metaLUNs can be sources and targets.
Individual component LUNs in a metaLUN cannot be migrated independently - the
entire metaLUN must be migrated as a unit.
The migration process can be throttled.
Reserved LUNs cannot be migrated.
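The preconditions above can be gathered into a single check. This is a sketch with hypothetical field names; the array enforces the real rules itself:

```python
# Sketch of the LUN-migration preconditions listed above (field names are
# hypothetical; the array enforces these rules itself).
def migration_allowed(source: dict, target: dict) -> bool:
    if source.get("reserved"):
        return False                  # reserved LUNs cannot be migrated
    if source.get("metalun_component"):
        return False                  # migrate the whole metaLUN instead
    # the target must be the same size as, or larger than, the source
    return target["size_gb"] >= source["size_gb"]
```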
Reliability/Availability Features
All components are dual-redundant and hot swappable (no single point of failure).
Write cache is protected by a 'vault' area on disk. On a failure, the cache
contents are written to the vault disks (de-staged or dumped). When the failure is
corrected, the contents are restored to the back-end disks and write cache is re-enabled.
The de-stage process is supported by batteries during power failures. Write cache will
not be re-enabled until the batteries are sufficiently recharged to support another cache
de-stage.
The following conditions must be met for write-cache to be enabled:
o There must be a standby power supply present, and it must be fully charged.
o At least 4 vault drives must be present (all 5 if the 'Non-HA' option is not
selected); they cannot be faulted or rebuilding.
o The ability to keep write cache enabled when a single vault drive fails is optional
under R12 and later.
o Both storage processors must be present and functional.
o Both power supplies must be present in the DPE/SPE.
o Both fan packs must be present in the DPE/SPE.
o The DPE/SPE and all DAEs must have two non-faulted link control cards (LCC)
each.
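The checklist above amounts to a single enable condition. A sketch with hypothetical state fields (the FLARE firmware performs the real check internally):

```python
# Write cache may only be (re)enabled when every condition below holds.
# State fields are hypothetical illustrations of the checklist above.
def write_cache_can_enable(state: dict, non_ha: bool = False) -> bool:
    healthy_vault = sum(
        1 for d in state["vault_drives"]
        if not d["faulted"] and not d["rebuilding"])
    required_vault = 4 if non_ha else 5   # all 5 unless 'Non-HA' is selected
    return (state["sps_present_and_charged"]   # standby power supply
            and healthy_vault >= required_vault
            and state["both_sps_functional"]   # both storage processors
            and state["both_power_supplies"]   # in the DPE/SPE
            and state["both_fan_packs"]        # in the DPE/SPE
            and state["all_lccs_ok"])          # two good LCCs per enclosure
```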
Each data block on a CLARiiON contains 8 bytes of error-checking data.
o The 8 bytes consist of an LRC, shed stamp, write stamp and time stamp.
The sniffer (background verify) process runs in the background and continuously
checks all data blocks for errors.
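One of those error-checking fields is an LRC (longitudinal redundancy check). A minimal, illustrative XOR-based LRC over a sector is sketched below; the actual CLARiiON stamp layout and checksum algorithm are not modeled here:

```python
# Illustrative XOR-based longitudinal redundancy check (LRC) over a sector.
# The shed/write/time stamps mentioned above are not modeled here.
def lrc(sector: bytes) -> int:
    check = 0
    for byte in sector:
        check ^= byte
    return check

def block_ok(sector: bytes, stored_lrc: int) -> bool:
    """What a background verify pass would test for each block."""
    return lrc(sector) == stored_lrc
```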
Updates to the array software are non-disruptive from a host perspective.
Failure of an SP results in all LUNs owned by that SP being trespassed to the other SP
(assuming PowerPath is running on the host(s) accessing those LUNs).
Slightly higher (0.0025%) reliability can be achieved using vertical RAID groups rather
than horizontal ones.
Striping a RAID1 RG across multiple DAEs that include the first DAE (the one containing
the vault drives) is not recommended.
Back To Index
Security Features
Navisphere
o Arrays can be configured into domains to control who can manage them.
o Named role-based accounts.
o Roles are Read Only (Monitor), Manager and Security Manager.
o All management communications with array are encrypted with 128-bit SSL.
o All actions performed on an array are logged by username@hostname.
NaviCLI
o username@hostname is authenticated against the privileged list of the Navi
agent (on the host for pre-FC4700 arrays, on the SP for all others).
o No encryption.
o Password is sent in the clear.
o Communicates on TCP/IP port 6389.
Back To Index
Software
Navisphere
NaviCLI
Navisphere Integrator
Navisphere Analyzer
LUN Masking
SnapView
MirrorView/S
MirrorView/A
SAN Copy
PowerPath
CLARalert/OnAlert
Back To Index
Navisphere (6.19)
Array-based package for managing CLARiiON arrays.
Back To SW Index
NaviCLI (6.19)
Host-based utility that provides command line control of CLARiiON arrays.
                Classic    Java                    Secure
Implemented     C++        Java                    C++
Comm            Port 6389  Port 443 or 2163 (SSL)  Port 443 or 2163 (SSL)
Security        Basic      Navi 6.x                Navi 6.x
Addtl. SW Reqd. None       Java JRE                None
OS's            -          -                       -
Back To SW Index
Navisphere Integrator (V6.19)
Host-based package for integrating CLARiiON management into 3rd party packages.
Back To SW Index
Navisphere Analyzer (V6.19)
Provides detailed performance metrics, including:
o SnapView sessions
o SAN Copy sessions
Back To SW Index
LUN Masking
FLARE provides LUN masking to control host access to LUNs.
Allows hosts with different OSs to connect to the same array port (through a switch).
Hosts and LUNs are combined into Storage Groups.
Allows assignment of a Storage Group to more than one host (for clustering).
A Storage Group can be assigned up to 256 LUs.
Supports multiple paths to the Storage Group (in conjunction with host-based
PowerPath).
Disallows changing or deleting the hidden Management storage group.
Disallows deleting a storage group that has hosts assigned to it.
Disallows deleting a storage group that has LUNs in it.
Disallows unbinding a LUN that is in a storage group.
When activated, changes Default Storage Group to Management Storage Group.
o Management Storage Group is a communications mechanism only (LUN 0 or
LUN Z).
o It never contains any actual LUNs.
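The Storage Group rules above can be modeled in a few lines. This is a hypothetical sketch; Navisphere enforces the real rules on the array:

```python
# Models the Storage Group rules listed above (hypothetical class and names).
class StorageGroup:
    MAX_LUS = 256                      # up to 256 LUs per Storage Group

    def __init__(self, name: str):
        self.name = name
        self.hosts = set()
        self.luns = set()

    def add_lun(self, lun_id: int) -> None:
        if len(self.luns) >= self.MAX_LUS:
            raise ValueError("Storage Group is limited to 256 LUs")
        self.luns.add(lun_id)

    def can_delete(self) -> bool:
        # deleting is disallowed while hosts or LUNs are assigned
        return not self.hosts and not self.luns

    def can_unbind(self, lun_id: int) -> bool:
        # unbinding a LUN that is in a storage group is disallowed
        return lun_id not in self.luns
```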
Back To SW Index
SnapView (V2.19)
Provides Snapshots (point-in-time, pointer-based copies of LUNs with copy-on-first-write
functionality) and BCVs (point-in-time full copies, a.k.a. clones).
Limits per array model (largest to smallest):
Snaps/LUN         |   -   |   -   |   -
Snap Srcs/Array   |  100  |   50  |   25
Snaps/Array       |  300  |  150  |  100
Sessions/LUN      |   -   |   -   |   -
Reserved LUNs     |  100  |   50  |   25
BCVs/Source       |   -   |   -   |   -
BCV Groups        |   50  |   25  |   25
BCV Sources/Array |  100  |   50  |   25
BCV Images/Array  |  200  |  100  |   50
Snaps/BCV         | 2 req | 2 req | 2 req
SnapView Notes:
Back To SW Index
MirrorView/S (V2.19)
Provides full-copy synchronous remote mirroring of LUNs between 2 or more arrays.
Back To SW Index
MirrorView/A (V2.19)
Provides full-copy asynchronous remote mirroring of LUNs between 2 or more arrays.
Utilizes delta-set technology to track changes between transfer cycles; whatever
changes between cycles is what is transferred.
Available for CX400/500/600/700.
Supports mirroring between different CLARiiON models (CX400/500/600/700).
Supports consistency groups (Note: all LUNs in a consistency group are consistent
relative to each other, not necessarily to the application's view of the data).
Once mirrors are in a Consistency Group, you cannot fracture, synchronize, promote, or
destroy individual mirrors that are in the Consistency Group.
All secondary images in a Consistency Group must be on the same remote storage
system.
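The delta-set cycle described above can be sketched as follows (hypothetical names; only the idea of shipping per-cycle changes is shown):

```python
# Sketch of MirrorView/A-style delta-set tracking: writes between transfer
# cycles are accumulated, and only the changed blocks are shipped at cycle end.
class AsyncMirror:
    def __init__(self, primary: dict):
        self.primary = dict(primary)
        self.secondary = dict(primary)   # starts synchronized
        self.delta = set()               # blocks changed this cycle

    def write(self, block_id, data):
        self.primary[block_id] = data
        self.delta.add(block_id)

    def transfer_cycle(self) -> int:
        """Ship only blocks changed since the last cycle; return the count."""
        for block_id in self.delta:
            self.secondary[block_id] = self.primary[block_id]
        shipped = len(self.delta)
        self.delta.clear()
        return shipped
```

Repeated writes to the same block within a cycle collapse into a single transfer, which is the bandwidth advantage of the delta set over shipping every write synchronously.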
Back To SW Index
SAN Copy (V2.19)
SAN Copy - LUN copy across SAN between arrays.
Back To SW Index
PowerPath (4.4)
Provides host-based path load-balancing and failover.
Back To SW Index
CLARalert (6.2)
Provides remote support for CLARiiON arrays.
Components:
o CLARalert
o Navisphere host agent or NaviCLI
OnAlert is no longer used for CLARalert dial-home.
Notifications can be sent via dial-up or email.
Dial-up requires a Windows NT/2000 management station with a modem.
Email can be sent from either a Windows or Sun/Solaris station.
Max monitored systems per central monitor is 1000.
Back To SW Index
Definitions