SC8000 Controller
Deployment Guide
Storage Center SC8000 Controller Deployment Guide
Document Number: 680‐087‐001
Caution: Indicates either potential damage to hardware or loss of data and tells you how to avoid
the problem.
Warning: Indicates that failure to follow directions could result in property damage, personal
injury, or death.
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Storage Center Hardware Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
Controllers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
Switches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
Enclosures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
Storage Center Architecture Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Communication Links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
Front End . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
Back End . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
System Administration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
SC8000 Controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
SC8000 Front-Panel Features and Indicators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
SC8000 Back-Panel Features and Indicators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
SC280 SAS Enclosures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
SC280 Enclosure Front Panel Features and Indicators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
SC280 Drawer Front-Panel Features and Indicators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
SC280 Back-Panel Features and Indicators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
SC280 EMM Features and Indicators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
SC280 Power Supply Unit Features and Indicators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
Fan Module Features and Indicators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
SC280 Enterprise Plus Drives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
SC280 Drive Numbering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
SC200/220 SAS Enclosures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
SC200/220 Front-Panel Features and Indicators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
SC200/220 Back-Panel Features and Indicators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
SC200/220 EMM Features and Indicators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
SC200/220 Enterprise Plus Drives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
SC200/220 Drive Numbering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
EN-SASx6x12/24 Enclosures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
EN-SASx6x12/24 Front-Panel Features and Indicators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
EN-SASx6x12/24 Back-Panel Features and Indicators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
EN-SASx6x12/24 SBB Module Features and Indicators . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
EN-SASx6x12/24 Standard SAS Drives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
EN-SASx6x12/24 Drive Numbering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
Dell Compellent v
Contents
B Troubleshooting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
Troubleshooting the Serial Connection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
Troubleshooting Enclosures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
Troubleshooting Licenses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
Troubleshooting Updates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
This preface introduces the Storage Center documentation. It states the purpose and audience of
this document and lists related publications.
Purpose
The Storage Center SC8000 Controller Deployment Guide provides cabling instructions for Storage Center
controllers, switches, and enclosures and provides instructions for configuring a new Dell Compellent
Storage Center using the System Manager Startup wizard.
Audience
Target audiences for this document are Dell installers and Dell business partners. The intended reader has
a working knowledge of storage concepts.
Related Publications
The following documents comprise the core Storage Center documentation set:
Storage Center Release Notes
Contains information about features and open and resolved issues for a particular product version.
Storage Center System Manager Administrator’s Guide
Describes the Storage Center System Manager software that manages an individual Storage Center.
Storage Center Software Update Guide
Describes how to upgrade Storage Center software from an earlier version to the current version.
Storage Center Maintenance CD Instructions
Describes how to install Storage Center software on Storage Center controllers. Installing Storage Center
software using the Storage Center Maintenance CD is intended for use only by sites that cannot update
Storage Center using the standard update options available through the Storage Center System
Manager.
Storage Center Command Utility Reference Guide
Provides instructions for using the Storage Center Command Utility. The Command Utility provides a
command‐line interface (CLI) to enable management of Storage Center functionality on Windows,
Linux, Solaris, and AIX platforms.
Storage Center Command Set for Windows PowerShell
Provides instructions for getting started with Windows PowerShell cmdlets and scripting objects that
interact with the Storage Center via the PowerShell interactive shell, scripts, and PowerShell hosting
applications. Help for individual cmdlets is available online.
Dell TechCenter
Provides technical white papers, best practice guides, and frequently asked questions about Dell
Storage products. Go to: http://en.community.dell.com/techcenter/storage/ and select Dell Compellent
in the Table of Contents.
The Storage Center SC8000 Controller Deployment Guide describes how to install and
configure Dell Compellent Storage Center. The information provided in this Deployment
Guide is intended to be used by Dell installers and Dell business partners.
Contents
Storage Center Hardware Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
Storage Center Architecture Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Communication Links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
SC8000 Controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
SC280 SAS Enclosures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
SC200/220 SAS Enclosures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
EN‐SASx6x12/24 Enclosures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
Chapter 1 Introduction
Controllers
A Storage Center controller provides the central processing capability for running the Storage
Center Operating System (OS) and application software (Storage Center System Manager) and for
managing RAID storage. A Storage Center can be configured with a single controller
(single-controller Storage Center) or a pair of controllers (dual-controller Storage Center).
In a dual-controller Storage Center configuration, both controllers must be the same
controller model.
IO cards in the controller provide communication with drive enclosures and servers that
use the storage. Controllers provide two types of IO ports:
• Front-end ports: Hosts, servers, or Network Attached Storage (NAS) appliances access
storage by connecting to controller Fibre Channel IO cards, FCoE IO cards, or iSCSI IO
cards through one or more Fibre Channel or Ethernet network switches. Ports for these
connections are located on the back of the controller, but are designated as front-end
ports.
• Back-end ports: Enclosures, which hold the physical drives that provide back-end
storage, connect directly to the controller. Fibre Channel and SAS transports are
supported through ports designated as back-end ports. Back-end ports are in their own
private network between the controllers and the drive enclosures.
Switches
Dell Compellent offers Fibre Channel (FC) and iSCSI switches and IO cards as part of the
total Storage Center product package. Switches provide robust connectivity to servers,
allowing for the use of multiple controllers and redundant transport paths. Cabling
between the controller IO cards and switches (and servers) is referred to as front‐end
connectivity.
Enclosures
Enclosures house and control drives that provide storage. Enclosures are connected
directly to controller IO cards. These connections are referred to as back‐end connectivity.
New Storage Center deployments use 6 Gb Serial Attached SCSI (SAS) disk enclosures.
Fibre Channel Switched Bunch of Disks (SBOD) and Serial Advanced Technology
Attachment (SATA) enclosures are supported for existing Storage Centers and for
controller migrations only (for example, upgrading an existing Storage Center by
migrating from a CT‐SC040 controller to an SC8000 controller).
Communication Links
A Storage Center controller uses multiple types of communication links for both data
transfer and administrative functions. Communication links are classified into three types:
front end, back end, and system administration.
Front End
Front‐End connectivity provides connections from servers to controllers using either
Ethernet (iSCSI) or Fibre Channel switches. Front‐End connectivity also supports all
replication traffic.
iSCSI connections provide reads and writes from a server to a controller through an
Ethernet switch.
Fibre Channel over Ethernet (FCoE) uses the FCoE protocol for data on the front end for
lossless transmission of data.
Fibre Channel (FC) connections on the front end are used to provide reads and writes
from a server to a controller.
Back End
Back‐end connectivity is strictly between the controllers and enclosures. The following
protocols are supported to communicate with the enclosures.
SAS uses a point‐to‐point switched top topology on four lanes. Each lane can perform
concurrent IO transactions at 6 Gb/sec. (Transfer speeds for earlier versions are 3 Gb/
sec.)
Fibre Channel connections were used on the back end for CT‐SC040 controllers. Fibre
Channel IO cards can be migrated from CT‐SC040 controllers to SC8000 controllers.
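The per-lane SAS rates above translate directly into aggregate bandwidth for a four-lane wide port. The sketch below is illustrative arithmetic only (Python is our choice here, not part of the guide), using the lane counts and line rates stated in the text:

```python
def sas_wide_port_bandwidth_gbps(lanes: int = 4, lane_gbps: float = 6.0) -> float:
    """Raw aggregate line rate of a wide SAS port.

    Each of the four physical lanes carries concurrent IO at the
    per-lane rate: 6 Gb/s for current enclosures, 3 Gb/s for
    earlier versions.
    """
    return lanes * lane_gbps

# Four 6 Gb/s lanes:
print(sas_wide_port_bandwidth_gbps())               # 24.0 Gb/s
# Earlier 3 Gb/s generation:
print(sas_wide_port_bandwidth_gbps(lane_gbps=3.0))  # 12.0 Gb/s
```

Note that these are raw line rates; usable payload throughput is lower once protocol and encoding overhead are accounted for.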
System Administration
The following connections are used for communicating with computers outside of Storage
Center:
• Serial: Used for initial configuration and other service-related tasks.
• Ethernet: Used for administration and IPC between the controllers.
  • ETH 0 port: Configuration, administration, and management of a Storage Center.
  • ETH 1 port: Inter-Process Communications (IPC) between controllers.
  Note: The Ethernet ports on the back of the controller can operate at 10 Mb, 100 Mb,
  or 1 Gb. These ports are not used for server or enclosure connectivity. For optimal
  performance, connect the ports at 1 Gb.
• iDRAC: Provides access to the Integrated Dell Remote Access Controller (iDRAC) for
integrated peer controller management through a dedicated 1 Gb Ethernet port.
SC8000 Controller
The Storage Center SC8000 controller ships with two CPUs, a cache card, an Integrated Dell
Remote Access Controller 7 (iDRAC7), and a four‐port Intel network daughter card. Up to
six IO cards can be installed: three full‐height and three low‐profile. The Storage Center OS
is installed on an embedded Secure Digital (SD) card.
Component                      Description
Processor                      Dual 6-core, 2.50 GHz
Bus Speed                      1333 MHz
System Cache                   16 GB (expandable to 64 GB)
Cache Card                     512 MB, PCIe (Slot 7)
Expansion Slots                Gen 3 PCIe: three full-height, three low-profile
Transport Speeds               Fibre Channel: 8 Gb or 16 Gb
                               FCoE: 10 Gb
                               SAS: 3 Gb, 6 Gb
                               iSCSI: 1 Gb, 10 Gb
Maximum Front-End (FC) Ports   14 (using two 4-port IO cards plus three 2-port IO cards)
Maximum Back-End Ports         20 (using five 4-port IO cards)
Maximum Back-End SAS Chains    6
Maximum 6 Gb SAS Drives        168 drives/chain, up to 1008 total drives
Maximum 3 Gb SAS Drives        48 drives/chain, up to 288 total drives
Form Factor                    2U
Power Supplies                 2
Fans                           6

Note: 3 Gb SAS is supported for migrations only; 3 Gb SAS enclosures are supported
using a 6 Gb IO card.
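The drive maxima in the specification table follow directly from the per-chain limits multiplied across the six supported back-end SAS chains. A minimal arithmetic sketch (Python used purely for illustration, not part of the guide):

```python
def max_drives(chains: int, drives_per_chain: int) -> int:
    """Total drive capacity given the number of back-end SAS chains
    and the per-chain drive limit."""
    return chains * drives_per_chain

# Six chains of 6 Gb SAS at 168 drives per chain:
print(max_drives(6, 168))  # 1008
# Six chains of 3 Gb SAS at 48 drives per chain:
print(max_drives(6, 48))   # 288
```

The same pattern explains the port maxima: two 4-port plus three 2-port IO cards give the 14 front-end FC ports, and five 4-port IO cards give the 20 back-end ports.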
2 NMI button — For the SC8000, this button is disabled in the BIOS.
3 System identification button — Used to locate a particular controller within a rack.
When a System ID button on the front or back panel is pressed, the front LCD panel and
the back system status indicator flash until one of the buttons is pressed again.
• Press to toggle the system ID on and off.
• If the controller stops responding during POST, press and hold the system ID button
for more than five seconds to enter BIOS progress mode.
4 Video connector — Allows you to connect a VGA monitor to the controller.
5 LCD menu buttons — Allows you to navigate the control panel LCD menu.
6 LCD panel — Displays system ID, status information, and system error messages.
• The LCD lights blue during normal controller operation.
• The LCD lights amber when the controller needs attention, and the LCD panel displays
an error code followed by descriptive text.
Note: If the controller is connected to a power source and an error is detected, the
LCD lights amber regardless of whether the controller is turned on or off.
8 USB connectors (2) — Allows you to connect USB devices to the controller. The ports
are USB 2.0-compliant.
9 EST panel — A slide-out panel containing an information tag that lists system
information including the Express Service Tag, embedded NIC port 1 MAC address, and
iDRAC7 Enterprise card MAC address.
2 Select — Selects the menu item highlighted by the cursor.
3 Right — Moves the cursor forward in one-step increments. During message scrolling:
• Press once to increase scrolling speed
• Press again to stop
• Press again to return to default scrolling speed
• Press again to repeat the cycle
Home Screen
The Home screen displays user‐configurable information about the controller. This screen
is displayed during normal operation when there are no status messages or errors.
To navigate to the Home screen from another menu, continue to select the up arrow
until the Home icon is displayed, and then select the Home icon.
From the Home screen, press the Select button to enter the main menu.
Setup Menu
The Setup menu displays user‐configurable defaults for iDRAC, LCD error messages, and
the Home screen.
Note: When you select an option in the Setup menu, you must confirm the option
before proceeding to the next action.
Option     Description
iDRAC      Select DHCP or Static IP to configure the network mode. If Static IP is
           selected, the available fields are IP, Subnet (Sub), and Gateway (Gtw).
           Select Setup DNS to enable DNS and to view domain addresses. Two separate
           DNS entries are available.
Set error  Select SEL to display LCD error messages in a form that matches the
           description in the System Event Log. This is useful when trying to match an
           LCD message with a System Event Log entry. Select Simple to display LCD
           error messages in a simplified, user-friendly description. See the SC8000
           Service Guide for a list of messages in this format.
Set home   Select the default information to display on the LCD Home screen. See View
           Menu for the options and option items that can be set as the default on the
           Home screen.
View Menu
The View menu displays iDRAC, asset, power, and temperature information.
Note: When you select an option in the View menu, you must confirm the option
before proceeding to the next action.
Option       Description
iDRAC IP     Displays the IPv4 or IPv6 addresses for the iDRAC7. Addresses include DNS
             (Primary and Secondary), Gateway, IP, and Subnet (IPv6 does not have
             Subnet).
MAC          Displays the MAC addresses for iDRAC, iSCSI, or Network devices.
Name         Displays the name of the Host, Model, or User String for the controller.
Number       Displays the Asset tag or the Service Tag for the controller.
Power        Displays the power output of the controller in BTU/hr or Watts. The
             display format can be configured in the Set home submenu of the Setup menu.
Temperature  Displays the temperature of the controller in Celsius or Fahrenheit. The
             display format can be configured in the Set home submenu of the Setup menu.
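The Power and Temperature options are alternate unit displays of the same readings. The conversions behind them are standard formulas, sketched below for reference (this code is illustrative only, not part of the LCD firmware; the 750 W figure is the rating of one controller power supply):

```python
def watts_to_btu_per_hr(watts: float) -> float:
    """Convert power output from Watts to BTU/hr (1 W = 3.412 BTU/hr)."""
    return watts * 3.412

def celsius_to_fahrenheit(c: float) -> float:
    """Convert a temperature reading from Celsius to Fahrenheit."""
    return c * 9 / 5 + 32

print(watts_to_btu_per_hr(750))   # ~2559 BTU/hr for one 750 W supply at full load
print(celsius_to_fahrenheit(25))  # 77.0
```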
1 System identification button — Used to locate a particular controller within a rack.
When a System ID button on the front or back panel is pressed, the front LCD panel and
the back system status indicator flash until one of the buttons is pressed again.
• Press to toggle the system ID on and off.
• If the controller stops responding during POST, press and hold the system ID button
for more than five seconds to enter BIOS progress mode.
2 System identification connector — Connects the optional system status indicator
assembly through the optional cable management arm.
3 iDRAC port — Connects to the Ethernet switch in the rack.
4 Serial port — Allows you to connect to the controller using a serial interface.
5 Video connector — Allows you to connect a VGA monitor to the controller.
6 USB connectors — Allows you to connect USB devices to the controller. The ports are
USB 2.0-compliant.
7 Embedded Ethernet connectors — Connects the controller to the Ethernet switch and to
other controllers in the rack. The ports function as follows:
• Port 1: ETH1 — 10GbE capable, configured for 1GbE (connect to second controller for
IPC).
• Port 0: ETH0 — 1GbE (connects to the Ethernet switch for system login, email, alerts,
SNMP traps, Phone Home data, and access for software).
Note: Ports 2 and 3 are not used.
8 Power Supply 1 — Hot-swappable, 750 W AC, 100-240 VAC, auto-ranging, 50/60 Hz.
9 Power Supply 2 — Hot-swappable, 750 W AC, 100-240 VAC, auto-ranging, 50/60 Hz.
2 Activity indicator
• Flashing: Indicates transmit/receive activity.
• Steady: Indicates valid network connection.
Caution: The power supplies must be of the same input type and have the same
maximum output power. Mismatched power supplies result in an error condition.
Caution: AC power supplies support both 220 V and 110 V input voltages. When
two identical power supplies receive different input voltages, they can output
different wattages and trigger a mismatch.
Caution: When correcting a power supply mismatch, replace only the power
supply with the flashing indicator. Swapping the opposite power supply to make a
matched pair can result in an error condition and unexpected controller shutdown.
2 Input switch — Not currently used. (Although the switch can be used to set the Unit
ID display, the ID is automatically reset by the Storage Center.)
3 System power indicator
• Amber: Enclosure is in standby (not operational).
• Green: Enclosure is on (operational).
4 Module fault indicator — Amber: Hardware fault (PCM fault, fan fault, or SBB module
fault; check indicators on individual modules).
5 Logical status indicator — Amber: Change of status or fault from something other than
the enclosure itself. (This may be from an internal or external RAID controller or HBA,
but is usually associated with a disk drive, as indicated by its own fault LED.)
6 Drawer Fault 1 — Amber: A drive, cable, or sideplane fault has occurred in drawer 1.
7 Drawer Fault 2 — Amber: A drive, cable, or sideplane fault has occurred in drawer 2.
2 Drawer Fault — Amber: Sideplane card fault or drive failure causing loss of
availability or redundancy.
3 Logical fault indicator
• Amber: Host-indicated drive fault.
• Flashing Amber: Array(s) in impacted state.
4 Cable Fault — Amber: Cable fault.
5 Activity Bar Graph — Six variable-intensity LEDs dynamically display drive access in
the enclosure drawer.
Item  Name
1     Optional cable retention positions (4).
2     IO enclosure management modules (2). See SC280 EMM Features and Indicators on
      page 16.
3     Fan modules (5).
4     Power supply units (2). See SC280 Power Supply Unit Features and Indicators on
      page 17.
5     Power switches (2).
6     Optional cable retention positions (2).
Each port has four separately arbitrated physical lanes. A green LED indicates the status of
each lane. The IO Module and Fault indicators provide status as described below.
Figure 12. IO Module Indicators
2 IO Module fault indicator — Amber: IO module error.
3 Console — Factory use only.
4 Lane LEDs
• Solid Green: Connected, no activity.
• Flashing: Activity.
5 SAS port A (in) — Connects to other enclosures.
6 SAS port B (out) — Connects to other enclosures.
7 SAS port C — The C-port always connects to the controller.
2 PSU fault indicator
• Amber (steady) — PSU fault; PSU not supplying power.
• Amber (flashing) — PSU firmware is downloading.
3 AC power indicator
• Amber (steady) — AC power is not detected.
• Amber (flashing) — PSU firmware is downloading.
4 Power OK indicator
• Green (steady) — This PSU is providing power.
• Green (flashing) — AC power is present, but this PSU is in standby mode (the other
PSU is providing power).
5 Power switch — Controls power for the enclosure.
Separate and unique conditions are indicated when all three LEDs are in the same state:
• If all three LEDs are off, there is no AC power to either PSU.
• If all three LEDs are on, the General Enclosure Management (GEM) software has lost
communication with the PSU.
2 Module OK — Green: Module OK.
3 Battery fault — Not currently used.
4 Fan fault — Amber: Loss of communication with the fan module, or reported fan speed
is out of tolerance.
2 Power status indicator — Lights when at least one power supply is supplying power to
the enclosure.
• Off: Both power supplies are off.
• On steady green: At least one power supply is providing power to the enclosure.
3 Hard drives
• SC200: Up to 12 3.5-inch hard drives.
• SC220: Up to 24 2.5-inch hard drives.
Figure 19. SC200/SC220 Back Panel Features and Indicators
2 Power supply/cooling fan indicator
• Amber: Power supply/cooling fan fault is detected.
• Off: Normal operation.
3 AC power indicator
• Green: Power supply module is connected to a source of AC power, whether or not the
power switch is on.
• Off: Power supply module is disconnected from a source of AC power.
4 Power switches (2) — Controls power for the enclosure. There is one switch for each
power supply/cooling fan module.
5 Power supply/cooling fan modules (2) — Contains a 700 W power supply and fans that
provide cooling for the enclosure.
6 Enclosure Management Modules (2) — EMMs provide the data path and enclosure
management functions for the enclosure.
2 Debug port — For engineering use only.
3 SAS port A (in) — Connects to the controller or to other enclosures. SAS ports A and
B can be used for either input or output; however, for cabling consistency, use port A
as an input port.
4 Port A link status
• Green: All the links to the port are connected.
• Amber: One or more links are not connected.
• Off: Enclosure is not connected.
5 SAS port B (out) — Connects to the controller or to other enclosures. SAS ports A and
B can be used for either input or output; however, for cabling consistency, use port B
as an output port.
6 Port B link status
• Green: All the links to the port are connected.
• Amber: One or more links are not connected.
• Off: Enclosure is not connected.
7 EMM status indicator
• Steady Green: Normal operation.
• Amber: Enclosure did not boot or is not properly configured.
• Flashing Green: Automatic update in process.
• Flashing Amber (twice): Enclosure is unable to communicate with other enclosures.
• Flashing Amber (four times): Firmware update failed.
• Flashing Amber (five times): Firmware versions are different between the two EMMs.
2 Hard drive status indicator
• Steady green: Normal operation.
• Flashing green (on 1 sec/off 1 sec): Drive or enclosure indicator is set to On in
System Manager. (With the enclosure indicator, all drives' fault LEDs flash.)
• Off: No power to the drive.
EN-SASx6x12/24 Enclosures
SAS enclosures hold drives for data storage and connect to the controllers through
back-end ports. EN-SASx6x12 enclosures can hold up to 12 3.5-inch drives, and
EN-SASx6x24 enclosures can hold up to 24 2.5-inch drives.
EN-SASx6x12/24 enclosures cannot be connected to the same chain as SC200 or SC220
enclosures.
EN-SASx6x24 Enclosure
2 System power indicator
• Green: Power is on.
• Amber: Unit is on standby.
3 Module fault indicator — Amber: Enclosure module failure (PCM fault, fan fault, or
SBB module fault; check indicators on individual modules).
4 Logical fault indicator
• Amber: Drive failure.
• Flashing Amber: Unexpected enclosure ID (Module fault also flashes amber).
5 LED display — Indicates the enclosure ID number. The software sets this number during
installation, and it matches the Disk ID displayed in the Storage Center System
Manager.
Figure 26. SAS Enclosure Rear View — Controls and Indicators
2 Fan indicator
• Green: Normal operation.
• Amber: Fan failure.
3 PCM status indicator
• Green: Normal operation.
• Amber: Power supply failure.
4 DC power indicator
• Green: Normal operation.
• Amber: No DC power.
5 Power switch — Controls power for the enclosure.
6 Power Cooling Module (PCM), one of two — Contains fans that provide cooling for the
enclosure.
7 Storage Bridge Bay (SBB) module, one of two — See EN-SASx6x12/24 SBB Module Features
and Indicators on page 28.
Each port has four separately arbitrated physical lanes. A green LED indicates the status of
each lane. The IO Module and Fault indicators provide status as described below.
Figure 27. IO Module Indicators
2 IO Module fault indicator — On: IO module error.
3 SAS port A (in) — Connects to other enclosures.
4 Lane LEDs
• Solid Green: Connected, no activity.
• Flashing: Activity.
5 SAS port B (out) — Connects to other enclosures.
6 SAS port C — The C-port always connects to the controller.
Figure 28. EN-SASx6x12 Standard SAS Drive Indicators
2 Green LED
• Green: Connected, no activity.
• Flashing: Activity.
3 Anti-tamper indicator (when present)
• Red: Locked.
• Black: Unlocked.
4 Lock (when present) — Insert a key or T10 Torx driver and turn to lock/unlock.
Verify that the ordered equipment arrived safely, install IO cards in the controller(s)
and disks in the enclosures (as required), and mount the equipment in a rack.
Contents
Prepare for Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Safety Precautions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
Install IO Cards in the Controller(s) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
Label the Controllers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
Mount the Hardware in a Rack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
Install Drives in SC280 Enclosures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
Install Drives in EN‐SASx6x12/24 Enclosures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
Install Enterprise Plus Drives in SC200/220 Enclosures . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
Chapter 2 Install the Hardware
Connectivity: The rack must be wired for connectivity to the management network and
any networks that carry front‐end IO from the Storage Center to servers.
RS-232 serial cable and PC or laptop — Used to run commands and view console messages
during controller configuration.
Computer connected to the same network as the Storage Center — Used to connect to the
Storage Center System Manager (through a web browser) to complete the Storage Center
configuration.
Hardware Serial Number (HSN) and System Serial Number (SSN) — Used to configure the
controller(s).
Storage Architect or Business Partner supplied pre-installation document(s) —
(Optional) Provide site-specific settings used during deployment:
• List of hardware needed to support storage requirements
• Optional connectivity diagrams to illustrate cabling between the controllers,
enclosures, switches, and servers
• Optional network information, such as IP addresses, subnet masks, gateways
Storage Center Maintenance CD — (Optional) Used to update the Storage Center OS when
internet access is not available.
Safety Precautions
To avoid injury and damage to the equipment, always follow these safety precautions.
If equipment described in the document is used in a manner not specified by Dell
Compellent, the protection provided by the equipment may be impaired. For your safety
and protection, observe the following rules:
Note: See the safety and regulatory information that shipped with each Storage
Center component. Warranty information may be included within this document or
as a separate document.
• Do not work alone when working with high-voltage components.
• Do not use mats designed to decrease electrostatic discharge as protection from
electrical shock. Instead, use rubber mats that have been specifically designed as
electrical insulators.
• Do not remove covers from the Power Control Module (PCM). Disconnect the power
connection before removing a PCM from the enclosure.
• Do not remove a faulty PCM unless you have a replacement model of the correct type
ready for insertion. A faulty PCM must be replaced with a fully operational module
within 24 hours.
• Permanently unplug the enclosure before you move it or if you think it has become
damaged in any way. When powered by multiple AC sources, disconnect all supply power
for complete isolation.
Warning: Disconnect power from the controller when removing or installing
components that are not hot‐swappable. When disconnecting power, first power
down the controller using the Storage Center System Manager and then unplug the
power cords from all the power supply modules in the controller.
Warning: To avoid injury, do not attempt to lift a controller or enclosure by
yourself. Always get assistance when lifting the controller.
Caution: Do not operate the controller without the cover, except when replacing
cooling fans.
Warning: The memory modules may be hot for several minutes after the controller
has been powered down. Wait for the memory modules to cool before handling
them.
Dell Compellent 33
Chapter 2 Install the Hardware
Caution: To ensure proper controller cooling, memory module blanks must be
installed in any memory socket that is not occupied.
Warning: To prevent eye damage, do not stare into the laser beam aperture of an
SFP+ transceiver.
Caution: Do not slide the controller into a rack with the cover open. Riser 2
contains a chassis intrusion detection switch that can be easily damaged when
sliding a controller into the rack with the cover open.
1 Rotate the latch release lock counterclockwise to the unlocked position.
2 Lift the latch on top of the controller and slide the cover back.
3 Grasp the cover on both sides, and carefully lift the cover away from the controller.
Note: For proper seating of the cooling shroud in the chassis, ensure that the
cables inside the controller are routed along the chassis wall and secured using
the cable securing bracket.
Install IO Cards
Install the IO cards in the controller(s) based on the IO card priority rules. For dual‐
controller systems, install IO cards in the same locations in both controllers.
Note: Low‐profile IO cards are shipped with an adaptive full‐height bracket that
must be installed to use the IO card in a slot designed for a full‐height IO card. When
a low‐profile card is installed in a full‐height slot the ports are reversed, so port
number 1 is on the right.
Note: To use Multi‐VLAN Tagging (MVLAN), install a Chelsio 10 Gb iSCSI card.
MVLAN is managed in Enterprise Manager.
* These IO cards cannot be ordered for SC8000 controllers. An SC8000 controller can contain these IO cards only if they were
transferred to the SC8000 controller during an upgrade from a CT‐SC030 or CT‐SC040 controller.
See Also
SC8000 Back‐Panel Features and Indicators on page 10
2 Open the IO card latch by pressing the latch tab and rotating the latch down.
3 Install each low‐profile IO card.
a Locate the slot where the IO card is installed.
b Unpack the replacement IO card and prepare it for installation. For instructions, see
the documentation accompanying the IO card.
c Holding the IO card by its edges, position the card so that the card‐edge connector
aligns with the IO card connector on the riser.
d Insert the card‐edge connector firmly into the IO card connector until the card is
fully seated.
Caution: Make sure that the IO card is fully seated in the IO card connector.
IO cards that are not in full contact with the connector cause unpredictable
failures in the Storage Center.
e Close the IO card latch.
4 Insert riser 1.
a Align the riser with the connector and the riser guides.
b Lower the riser into place until the riser is fully seated in the connector.
2 Lift the IO card latch out of the slot.
3 Unpack the replacement IO card and prepare it for installation. For instructions, see the
documentation accompanying the IO card.
4 Hold the IO card by its edges and position the card so that the card‐edge connector
aligns with the IO card connector on the riser.
5 Insert the card‐edge connector firmly into the IO card connector until the card is fully
seated. Use the supports provided on the cooling shroud.
Caution: Make sure that the IO card is fully seated in the IO card connector. IO
cards that are not in full contact with the connector cause unpredictable failures
in Storage Center.
2 Apply two labels to each controller.
a Apply one label to the front of the controller on the top‐left corner.
b Apply one label to the rear of the controller on the middle of the handle.
2U and 3U Enclosures
Install 2U and 3U enclosures in the rack to allow for expansion and so that the rack does
not become top‐heavy. This can be accomplished in one of two configurations:
• Install the controllers at the bottom of the rack and the enclosures above the controllers.
• Install the controllers in the middle of the rack and the enclosures above and below the
controllers.
Steps
1 Mount the controller(s) in a rack. See the instructions included with the rail kit for
detailed steps.
2 Mount the enclosure(s) in the rack. See the instructions included with the rail kit for
detailed steps.
5U Enclosures
The 5U enclosures can weigh up to 131 kg (288 lb) and should be installed in the lower
portion of the rack to ensure rack stability.
When installed in a rack, the SC280 enclosure extends 36 in. from the front rack posts to the
back of the chassis. Rack modifications may be required to avoid blocking PDUs or to allow
proper strain relief for SAS cables. Contact Dell Technical Support Services if site
preparation has not already been done.
Warning: A fully configured SC280 enclosure weighs up to 130.7 kg (287.5 lb). An
unpopulated enclosure weighs 62 kg (137 lb). To avoid injury, always lift with two
people, and use lifting straps when required.
If installed above the lower 20U of a rack, a customer‐provided mechanical lift must
be used.
Prerequisites
Make sure that the site has the following:
• 208 V power from an independent source or a rack power distribution unit with a UPS.
(110 V power will not work.)
• A 5U space in the lower 20U of the rack. If the enclosure is to be installed above the 20U
mark (not recommended), a customer‐provided mechanical lift is required.
• Power Distribution Unit (PDU) outlets of type IEC‐320‐C19, with a minimum output
amperage of 20 A per connection.
Steps
Mount the SC280 enclosure into the rack using mounting rails, and optionally secure with
brackets to prevent tipping.
1 Mount the controller(s) in a rack. See the instructions included with the rail kit for
detailed steps.
2 Mount the rails in the rack according to instructions included in the rail kit.
Warning: When unpacking the 5U enclosure, two people using lift straps are
required to avoid injury.
3 Mount the enclosure onto the rack rails, using two people to lift the enclosure into
position.
4 Install Hold Down Brackets (HDBs).
a Secure the fixing screws to the rear of the enclosure.
b Secure the HDBs to the rear of the rack.
Caution: If the enclosure operates for too long (depending on altitude) with drive
drawers open, the enclosure can overheat, causing potential drive failure and data
loss. Such use may invalidate the warranty.
Installing Drives
Follow these requirements when installing drives in the enclosure:
• Any row with drives installed must be filled entirely (14 drives).
• The minimum number of drives in an enclosure is 14 (one row).
• The number of populated rows between drawers must not differ by more than one.
• Populate rows beginning at the front of the enclosure and moving to the rear.
For a new installation, follow these recommendations for populating 42‐drive enclosures:
• 14 drives in the top drawer, front row
• 14 drives in the bottom drawer, front row
• 14 drives in the top drawer, middle row
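The population rules above are mechanical enough to check programmatically. The sketch below is a hypothetical illustration, not a Dell Compellent tool: it models each drawer as three 14-slot rows (front to rear) and validates the constraints listed above. The function name and drawer layout are assumptions for this example.

```python
# Hypothetical validator for the SC280 drive-population rules described
# above. Each drawer is modeled as three rows of 14 slots, front to rear.
ROW_SIZE = 14

def valid_population(top_rows, bottom_rows):
    """top_rows/bottom_rows: drive counts per row, listed front to rear."""
    for rows in (top_rows, bottom_rows):
        # Any row with drives installed must be filled entirely.
        if any(n not in (0, ROW_SIZE) for n in rows):
            return False
        # Populate from the front: no filled row behind an empty one.
        filled = [n == ROW_SIZE for n in rows]
        if any(filled[i] and not all(filled[:i]) for i in range(len(rows))):
            return False
    top, bottom = sum(top_rows), sum(bottom_rows)
    # Minimum of one full row (14 drives) in the enclosure.
    if top + bottom < ROW_SIZE:
        return False
    # Populated rows between drawers must not differ by more than one.
    return abs(top // ROW_SIZE - bottom // ROW_SIZE) <= 1

# Recommended 42-drive layout: front rows of both drawers, then top middle.
print(valid_population([14, 14, 0], [14, 0, 0]))   # True
print(valid_population([14, 14, 14], [14, 0, 0]))  # False: rows differ by 2
```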
Steps
Caution: Removing a drive from the DDIC can damage the DDIC. Do not remove
drives from DDICs.
1 Lower the DDIC into the slot.
2 Push and hold down the DDIC and slide it toward the back of the enclosure until the
DDIC latches to the backplane.
3 After all drives are installed, close the drawer.
4 When the enclosure is powered on, make sure that the drive LEDs are not lit, which
indicates normal conditions.
Caution: When the enclosure is powered off, the disks continue to spin briefly after
losing power. To avoid damage to a disk, wait for the disk to stop spinning
(approximately 10 seconds after shutdown) before pulling the disk from the
enclosure.
See Also
SC280 Enterprise Plus Drives on page 19
5 Rotate the handle until it fully engages and clicks into place.
6 Make sure that all drive carriers are fully engaged in the enclosure by firmly pushing
each one into the slot.
Note: Dell Enterprise Plus drives and Enterprise Solid State Drives (ESSDs) can
only be installed in SC200/220 enclosures.
Enclosures are connected to controllers using SAS, a point‐to‐point switched topology with
four lanes per connection. Each lane can carry concurrent IO transactions at 6 Gb/sec. Dell
Compellent recommends connecting the SAS drive enclosures to the controller(s) using the
most redundant options that are available.
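As a rough sanity check on the figures above, a four-lane SAS connection aggregates 4 × 6 Gb/sec of raw line rate. The worked arithmetic below is illustrative only; the 8b/10b encoding overhead figure is a general SAS 2.0 property, not a number stated in this guide.

```python
# Rough SAS 2.0 four-lane bandwidth estimate (illustrative arithmetic,
# not a Dell Compellent specification).
LANES_PER_PORT = 4
LINE_RATE_GBPS = 6          # 6 Gb/sec per lane
ENCODING_EFFICIENCY = 0.8   # 8b/10b encoding: 8 data bits per 10 line bits

raw_gbps = LANES_PER_PORT * LINE_RATE_GBPS       # 24 Gb/sec raw line rate
usable_gbps = raw_gbps * ENCODING_EFFICIENCY     # ~19.2 Gb/sec of payload
usable_mb_s = usable_gbps * 1000 / 8             # ~2400 MB/sec

print(f"raw: {raw_gbps} Gb/s, usable: {usable_gbps:.1f} Gb/s "
      f"(~{usable_mb_s:.0f} MB/s)")
```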
Contents
SAS Enclosure Cabling Guidelines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
Connect IPC for Dual‐Controller Storage Centers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
Connect SC200/SC220 Enclosures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
Connect SC280 Enclosures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
Connect EN‐SASx6x12/24 Enclosures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
Label the Back‐End Cables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
SAS Redundancy
Redundant SAS cabling ensures that if an IO card or port fails, IO continues to flow
through the remaining functioning path(s). For dual‐controller Storage Centers,
redundant cabling also provides protection against controller failures. To achieve
redundancy, each SAS chain is made up of two paths, referred to as the A side and the B side.
Chapter 3 Connect the Back End
Figure 52. Controller SAS Port Types
1 Initiator Only (Ports 1–2)
2 Initiator/Target (Ports 3–4)
Note: Storage Center currently runs the 10 Gb ports at 1 Gb speed. Use a Cat. 6
cable to ensure that a hardware change is unnecessary when 10 Gb rates are
implemented.
c Wrap the label around the cable until it fully encircles the cable. The bottom of each
pre‐made label is clear so that it does not obscure the text.
Chain 1: Side B 1 Connect the first controller port 3 to the last enclosure, bottom EMM, port B.
2 Connect the remaining enclosures in series from A port to B port using the
bottom EMM.
3 If the Storage Center has dual‐controllers, connect the top enclosure, bottom
EMM, port A to the second controller, port 1.
Note: The cabling illustrations in this chapter refer to enclosures as Enclosure 1,
Enclosure 2, and so on. The numbers shown in the illustrations may not match the
Index number assigned by the System Manager. Storage Center assigns an Index
only after the system is powered up and at least one drive is assigned to a Disk
Folder.
Figure 60. Single SC8000 Controller, Single SAS Enclosure, Low Profile SAS IO Card
Path Connections
Chain 1: A Side Controller: slot 3, port 1 → Enclosure: top EMM, port A.
Chain 1: B Side Controller: slot 3, port 3 → Enclosure: bottom EMM, port B.
Path Connections
Chain 1: A Side Controller: slot 6, port 1 → Enclosure: top EMM, port A.
Chain 1: B Side Controller: slot 6, port 3 → Enclosure: bottom EMM, port B.
Figure 62. Single SC8000 Controller, Single SAS Chain, Low-profile SAS Card
Path Connections
Chain 1: A Side 1 Controller: slot 3, port 1 → Enclosure 1: top EMM, port A.
2 Enclosure 1: top EMM, port B → Enclosure 2: top EMM, port A.
3 Enclosure 2: top EMM, port B → Enclosure 3: top EMM, port A.
4 Enclosure 3: top EMM, port B → Enclosure 4: top EMM, port A.
Chain 1: B Side 1 Enclosure 1: bottom EMM, port B → Enclosure 2: bottom EMM, port A.
2 Enclosure 2: bottom EMM, port B → Enclosure 3: bottom EMM, port A.
3 Enclosure 3: bottom EMM, port B → Enclosure 4: bottom EMM, port A.
4 Enclosure 4: bottom EMM, port B → Controller: slot 3, port 3.
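The daisy-chain pattern in the table above is regular: the A side runs down the top EMMs from the controller, and the B side returns up the bottom EMMs to the controller. The sketch below generates that connection list for an arbitrary enclosure count; the function name is hypothetical, and the slot/port numbers follow the low-profile SAS card example (slot 3, ports 1 and 3) rather than any fixed rule.

```python
# Illustrative generator for a redundant single-controller SAS chain,
# following the A-side/B-side pattern shown in the table above.
def sas_chain(n_enclosures, slot=3):
    """Return (a_side, b_side) connection lists for one SAS chain."""
    # A side: controller port 1 to the first enclosure, then top EMMs in series.
    a_side = [f"Controller: slot {slot}, port 1 -> Enclosure 1: top EMM, port A"]
    for i in range(1, n_enclosures):
        a_side.append(f"Enclosure {i}: top EMM, port B -> "
                      f"Enclosure {i + 1}: top EMM, port A")
    # B side: bottom EMMs in series, then back to controller port 3.
    b_side = [f"Enclosure {i}: bottom EMM, port B -> "
              f"Enclosure {i + 1}: bottom EMM, port A"
              for i in range(1, n_enclosures)]
    b_side.append(f"Enclosure {n_enclosures}: bottom EMM, port B -> "
                  f"Controller: slot {slot}, port 3")
    return a_side, b_side

a, b = sas_chain(4)
for step in a + b:
    print(step)
```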
Path Connections
Chain 1: A Side 1 Controller: slot 6, port 1 → Enclosure 1: top EMM, port A.
2 Enclosure 1: top EMM, port B → Enclosure 2: top EMM, port A.
3 Enclosure 2: top EMM, port B → Enclosure 3: top EMM, port A.
4 Enclosure 3: top EMM, port B → Enclosure 4: top EMM, port A.
Chain 1: B Side 1 Enclosure 1: bottom EMM, port B → Enclosure 2: bottom EMM, port A.
2 Enclosure 2: bottom EMM, port B → Enclosure 3: bottom EMM, port A.
3 Enclosure 3: bottom EMM, port B → Enclosure 4: bottom EMM, port A.
4 Enclosure 4: bottom EMM, port B → Controller: slot 6, port 3.
Figure 64. Single SC8000 Controller, Four SAS Enclosures, Two IO Cards, Two Chains, Low-profile
SAS
Path Connections
Chain 1: A Side 1 Controller: slot 2, port 1 → Enclosure 1: top EMM, port A.
2 Enclosure 1: top EMM, port B → Enclosure 2: top EMM, port A.
Chain 1: B Side 1 Enclosure 1: bottom EMM, port B → Enclosure 2: bottom EMM, port A.
2 Enclosure 2: bottom EMM, port B → Controller: slot 3, port 3.
Chain 2: A Side 1 Controller: slot 2, port 3 → Enclosure 3: top EMM, port A.
2 Enclosure 3: top EMM, port B → Enclosure 4: top EMM, port A.
Chain 2: B Side 1 Enclosure 3: bottom EMM, port B → Enclosure 4: bottom EMM, port A.
2 Enclosure 4: bottom EMM, port B → Controller: slot 3, port 1.
Figure 65. Single SC8000 Controller, Four SAS Enclosures, Two IO Cards, Two Chains
Path Connections
Chain 1: A Side 1 Controller: slot 6, port 1 → Enclosure 1: top EMM, port A.
2 Enclosure 1: top EMM, port B → Enclosure 2: top EMM, port A.
Chain 1: B Side 1 Enclosure 1: bottom EMM, port B → Enclosure 2: bottom EMM, port A.
2 Enclosure 2: bottom EMM, port B → Controller: slot 5, port 3.
Chain 2: A Side 1 Controller: slot 6, port 3 → Enclosure 3: top EMM, port A.
2 Enclosure 3: top EMM, port B → Enclosure 4: top EMM, port A.
Chain 2: B Side 1 Enclosure 3: bottom EMM, port B → Enclosure 4: bottom EMM, port A.
2 Enclosure 4: bottom EMM, port B → Controller: slot 5, port 1.
Note: In a single‐chain configuration, use SAS ports 2 (initiator) and 4 (target) to
create a redundant IPC connection. When you configure these ports using the IO
port wizard, set the port Purpose to Unknown.
Figure 66. Dual SC8000 Controllers, One Enclosure, One Chain, Low Profile SAS
Path Connections
IPC 1 Controller A: slot 3, port 2 → Controller B: slot 3, port 4.
2 Controller B: slot 3, port 2 → Controller A: slot 3, port 4.
Chain 1: A Side 1 Controller A: slot 3, port 1 → Enclosure: top EMM, port A.
2 Enclosure: top EMM, port B → Controller B: slot 3, port 3.
Chain 1: B Side 1 Controller B: slot 3, port 1 → Enclosure: bottom EMM, port A.
2 Enclosure: bottom EMM, port B → Controller A: slot 3, port 3.
Note: In a single‐chain configuration, use SAS ports 2 (initiator) and 4 (target) to
create a redundant IPC connection. When you configure these ports using the IO
port wizard, set the port Purpose to Unknown.
Path Connections
IPC 1 Controller A: slot 6, port 2 → Controller B: slot 6, port 4.
2 Controller B: slot 6, port 2 → Controller A: slot 6, port 4.
Chain 1: A Side 1 Controller A: slot 6, port 1 → Enclosure: top EMM, port A.
2 Enclosure: top EMM, port B → Controller B: slot 6, port 3.
Chain 1: B Side 1 Controller B: slot 6, port 1 → Enclosure: bottom EMM, port A.
2 Enclosure: bottom EMM, port B → Controller A: slot 6, port 3.
Note: In a single‐chain configuration, use SAS ports 2 (initiator) and 4 (target) to
create a redundant IPC connection. When you configure these ports using the IO
port wizard, set the port Purpose to Unknown.
Figure 68. Dual SC8000 Controllers, Two Enclosures, One Chain, Low Profile SAS
Path Connections
IPC 1 Controller A: slot 3, port 2 → Controller B: slot 3, port 4.
2 Controller B: slot 3, port 2 → Controller A: slot 3, port 4.
Chain 1: A Side 1 Controller A: slot 3, port 1 → Enclosure 1: top EMM, port A.
2 Enclosure 1: top EMM, port B → Enclosure 2: top EMM, port A.
3 Enclosure 2: top EMM, port B → Controller B: slot 3, port 3.
Chain 1: B Side 1 Controller B: slot 3, port 1 → Enclosure 1: bottom EMM, port A.
2 Enclosure 1: bottom EMM, port B → Enclosure 2: bottom EMM, port A.
3 Enclosure 2: bottom EMM, port B → Controller A: slot 3, port 3.
Path Connections
IPC 1 Controller A: slot 6, port 2 → Controller B: slot 6, port 4.
2 Controller B: slot 6, port 2 → Controller A: slot 6, port 4.
Chain 1: A Side 1 Controller A: slot 6, port 1 → Enclosure 1: top EMM, port A.
2 Enclosure 1: top EMM, port B → Enclosure 2: top EMM, port A.
3 Enclosure 2: top EMM, port B → Controller B: slot 6, port 3.
Chain 1: B Side 1 Controller B: slot 6, port 1 → Enclosure 1: bottom EMM, port A.
2 Enclosure 1: bottom EMM, port B → Enclosure 2: bottom EMM, port A.
3 Enclosure 2: bottom EMM, port B → Controller A: slot 6, port 3.
Figure 70. Dual SC8000 Controllers, Two Enclosures, Two Chains, Low-profile SAS Card
Path Connections
Chain 1: A Side 1 Controller A: slot 2 port 1 → Enclosure 1: top EMM, port A.
2 Enclosure 1: top EMM, port B → Enclosure 2: top EMM, port A.
3 Enclosure 2: top EMM, port B → Controller B: slot 2, port 3.
Path Connections
Chain 1: A Side 1 Controller A: slot 6, port 1 → Enclosure 1: top EMM, port A.
2 Enclosure 1: top EMM, port B → Enclosure 2: top EMM, port A.
3 Enclosure 2: top EMM, port B → Controller B: slot 6, port 3.
Path Connections
Chain 2: B Side 1 Controller B: slot 5, port 3 → Enclosure 3: bottom EMM, port A.
2 Enclosure 3: bottom EMM, port B → Enclosure 4: bottom EMM, port A.
3 Enclosure 4: bottom EMM, port B → Controller A: slot 5, port 1.
Path Connections
IPC (Optional) Create IPC connections between both controllers on ports 2 and 4 of
each IO card.
Chain 1: A Side 1 Controller A: slot 6, port 1 → Enclosure 1: top EMM, port A.
2 Starting with Enclosure 1, daisy‐chain the A side of Chain 1 by cabling: top
EMM, port B → next enclosure down: top EMM, port A.
3 Finish with Enclosure 4: top EMM, port B → Controller B: slot 6, port 3.
Path Connections
Chain 1: B Side 1 Controller A: slot 5, port 3 → Enclosure 4: bottom EMM, port B.
2 Starting with Enclosure 4, daisy‐chain the B side of Chain 1 by cabling: bottom
EMM, port A → next enclosure up: bottom EMM, port B.
3 Finish with Enclosure 1: bottom EMM, port A → Controller B: slot 5, port 1.
Chain 2: A Side 1 Controller A: slot 6, port 3 → Enclosure 5: top EMM, port A.
2 Starting with Enclosure 5, daisy‐chain the A side of Chain 2 by cabling: top
EMM, port B → next enclosure down: top EMM, port A.
3 Finish with Enclosure 8: top EMM, port B → Controller B: slot 6, port 1.
Chain 2: B Side 1 Controller A: slot 5, port 1 → Enclosure 8: bottom EMM, port B.
2 Starting with Enclosure 8, daisy‐chain the B side of Chain 2 by cabling: bottom
EMM, port A → next enclosure up: bottom EMM, port B.
3 Finish with Enclosure 5: bottom EMM, port A → Controller B: slot 5, port 3.
Note: The cabling illustrations in this chapter refer to enclosures as Enclosure 1,
Enclosure 2, and so on. The numbers shown in the illustrations may not match the
Index number assigned by the System Manager. Storage Center assigns an Index
only after the system is powered up and at least one drive is assigned to a Disk
Folder.
See Also
SAS Enclosure Cabling Guidelines on page 55
Path Connections
Chain 1: A Side Controller: slot 3, port 1 → Enclosure: left EMM, port C.
Chain 1: B Side Controller: slot 3, port 3 → Enclosure: right EMM, port C.
Figure 74. Single SC8000 Controller, Two SC280 Enclosures, Single Chain
Path Connections
Chain 1: A Side 1 Controller: slot 3, port 1 → Enclosure 1: left EMM, port C.
2 Enclosure 1: left EMM, port B → Enclosure 2: left EMM, port A.
Chain 1: B Side 1 Controller: slot 3, port 3 → Enclosure 2: right EMM, port C.
2 Enclosure 2: right EMM, port B → Enclosure 1: right EMM, port A.
Note: In a single‐chain configuration, use SAS ports 2 (initiator) and 4 (target) to
create a redundant IPC connection. When you configure these ports using the IO
port wizard, set the port Purpose to Unknown.
Figure 75. Dual SC8000 Controllers, One SC280 Enclosure, One Chain
Path Connections
IPC 1 Controller A: slot 3, port 2 → Controller B: slot 3, port 4.
2 Controller B: slot 3, port 2 → Controller A: slot 3, port 4.
Chain 1: A Side 1 Controller A: slot 3, port 1 → Enclosure: left EMM, port C.
2 Enclosure: left EMM, port B → Controller B: slot 3, port 3.
Chain 1: B Side 1 Controller B: slot 3, port 1 → Enclosure: right EMM, port C.
2 Enclosure: right EMM, port B → Controller A: slot 3, port 3.
Note: In a single‐chain configuration, use SAS ports 2 (initiator) and 4 (target) to
create a redundant IPC connection. When you configure these ports using the IO
port wizard, set the port Purpose to Unknown.
Figure 76. Dual SC8000 Controllers, Two SC280 Enclosures, One Chain
Path Connections
IPC 1 Controller A: slot 3, port 2 → Controller B: slot 3, port 4.
2 Controller B: slot 3, port 2 → Controller A: slot 3, port 4.
Chain 1: A Side 1 Controller A: slot 3, port 1 → Enclosure 1: left EMM, port C.
2 Enclosure 1: left EMM, port B → Enclosure 2: left EMM, port A.
3 Enclosure 2: left EMM, port C → Controller B: slot 3, port 3.
Chain 1: B Side 1 Controller B: slot 3, port 1 → Enclosure 1: right EMM, port C.
2 Enclosure 1: right EMM, port A → Enclosure 2: right EMM, port B.
3 Enclosure 2: right EMM, port C → Controller A: slot 3, port 3.
Note: The maximum storage space is 1 petabyte, regardless of the number or type
of enclosures.
Path Connections
IPC (Optional) Create IPC connections between both controllers on ports 2 and 4 of
each SAS IO card.
SC280 Enclosures
Chain 1: A Side 1 Controller A: slot 2, port 1 → Enclosure 1: left EMM, port C.
2 Enclosure 1 left EMM, port B → Enclosure 2: left EMM, port A.
3 Finish with Enclosure 2: left EMM, port C → Controller B: slot 2, port 3.
Chain 1: B Side 1 Controller B: slot 3, port 1 → Enclosure 1: right EMM, port C.
2 Enclosure 1 right EMM, port A → Enclosure 2: right EMM, port B.
3 Finish with Enclosure 2: right EMM, port C → Controller A: slot 3, port 3.
SC200/220 Enclosures
Chain 2: A Side 1 Controller A: slot 2, port 3 → Enclosure 3: top EMM, port A.
2 Enclosure 3 top EMM, port B → Enclosure 4: top EMM, port A.
3 Finish with Enclosure 4: top EMM, port B → Controller B: slot 2, port 1.
Chain 2: B Side 1 Controller A: slot 3, port 1 → Enclosure 4: bottom EMM, port B.
2 Enclosure 4 bottom EMM, port A → Enclosure 3: bottom EMM, port B.
3 Finish with Enclosure 3: bottom EMM, port A → Controller B: slot 3, port 3.
Chain 1: Side B 1 Connect controller port 3 to the last enclosure, bottom SBB, port C.
2 Connect the remaining enclosures in series from B port to A port using the
bottom SBB.
3 If the Storage Center has dual‐controllers, connect the top enclosure, bottom
SBB, port C to the second controller, port 1.
Note: The cabling illustrations in this chapter refer to enclosures as Enclosure 1,
Enclosure 2, and so on. The numbers shown in the illustrations may not match the
Index number assigned by the System Manager. Storage Center assigns an
Index only after the system is powered up and at least one drive is assigned to a Disk
Folder.
See Also
SAS Enclosure Cabling Guidelines on page 55
Path Connections
Chain 1: A Side Controller: slot 6, port 1 → Enclosure: top SBB, port C.
Chain 1: B Side Controller: slot 6, port 3 → Enclosure: bottom SBB, port C.
Path Connections
Chain 1: A Side 1 Controller: slot 6, port 1 → Enclosure 1: top SBB, port C.
2 Enclosure 1: top SBB, port B → Enclosure 2: top SBB, port A.
3 Enclosure 2: top SBB, port B → Enclosure 3: top SBB, port A.
4 Enclosure 3: top SBB, port B → Enclosure 4: top SBB, port A.
Chain 1: B Side 1 Enclosure 1: bottom SBB, port A → Enclosure 2: bottom SBB, port B.
2 Enclosure 2: bottom SBB, port A → Enclosure 3: bottom SBB, port B.
3 Enclosure 3: bottom SBB, port A → Enclosure 4: bottom SBB, port B.
4 Enclosure 4: bottom SBB, port C → Controller: slot 6, port 3.
Figure 80. Single SC8000 Controller, Four 6 Gb Enclosures, Two IO Cards, Two Chains
Path Connections
Chain 1: A Side 1 Controller: slot 6, port 1 → Enclosure 1: top SBB, port C.
2 Enclosure 1: top SBB, port B → Enclosure 2: top SBB, port A.
Chain 1: B Side 1 Enclosure 1: bottom SBB, port A → Enclosure 2: bottom SBB, port B.
2 Enclosure 2: bottom SBB, port C → Controller: slot 5, port 3.
Chain 2: A Side 1 Controller: slot 6, port 3 → Enclosure 3: top SBB, port C.
2 Enclosure 3: top SBB, port B → Enclosure 4: top SBB, port A.
Chain 2: B Side 1 Enclosure 3: bottom SBB, port A → Enclosure 4: bottom SBB, port B.
2 Enclosure 4: bottom SBB, port C → Controller: slot 5, port 1.
Path Connections
IPC 1 Controller A: slot 6, port 2 → Controller B: slot 6, port 4.
2 Controller B: slot 6, port 2 → Controller A: slot 6, port 4.
Chain 1: A Side 1 Controller A: slot 6, port 1 → Enclosure 1: top SBB, port C.
2 Enclosure 1: top SBB, port B → Enclosure 2: top SBB, port A.
3 Enclosure 2: top SBB, port C → Controller B: slot 6, port 3.
Chain 1: B Side 1 Controller B: slot 6, port 1 → Enclosure 1: bottom SBB, port C.
2 Enclosure 1: bottom SBB, port A → Enclosure 2: bottom SBB, port B.
3 Enclosure 2: bottom SBB, port C → Controller A: slot 6, port 3.
Path Connections
Chain 1: A Side 1 Controller A: slot 6, port 1 → Enclosure 1: top SBB, port C.
2 Enclosure 1: top SBB, port B → Enclosure 2: top SBB, port A.
3 Enclosure 2: top SBB, port C → Controller B: slot 6, port 3.
Figure 83. Dual SC8000 Controllers, Four Enclosures, One IO Card, One Chain
Path Connections
IPC 1 Controller A: slot 6, port 2 → Controller B: slot 6, port 4.
2 Controller B: slot 6, port 2 → Controller A: slot 6, port 4.
Chain 1: A Side 1 Controller A: slot 6, port 1 → Enclosure 1: top SBB, port C.
2 Enclosure 1, top SBB, port B → Enclosure 2, top SBB, port A.
3 Continue to daisy‐chain enclosures: top SBB, port B → next enclosure down:
top SBB, port A.
4 Enclosure 4: top SBB, port C → Controller B: slot 6, port 3.
Path Connections
IPC (Optional) Create IPC connections between both controllers on ports 2 and 4 of
each IO card.
Chain 1: A Side 1 Controller A: slot 6, port 1 → Enclosure 1: top SBB, port C.
2 Starting with Enclosure 1, daisy‐chain the A side of Chain 1 by cabling: top
SBB, port B → next enclosure down: top SBB, port A.
3 Finish with Enclosure 4: top SBB, port C → Controller B: slot 6, port 3.
Chain 1: B Side 1 Controller A: slot 5, port 3 → Enclosure 4: bottom SBB, port C.
2 Starting with Enclosure 4, daisy‐chain the B side of Chain 1 by cabling: bottom
SBB, port B → next enclosure up: bottom SBB, port A.
3 Finish with Enclosure 1: bottom SBB, port C → Controller B: slot 5, port 1.
Path Connections
Chain 2: A Side 1 Controller A: slot 6, port 3→ Enclosure 5: top SBB, port C.
2 Starting with Enclosure 5, daisy‐chain the A side of Chain 2 by cabling: top
SBB, port B → next enclosure down: top SBB, port A.
3 Finish with Enclosure 8: top SBB, port C → Controller B: slot 6, port 1.
Chain 2: B Side 1 Controller A: slot 5, port 1 → Enclosure 8: bottom SBB, port C.
2 Starting with Enclosure 8, daisy‐chain the B side of Chain 2 by cabling: bottom
SBB, port B → next enclosure up: bottom SBB, port A.
3 Finish with Enclosure 5: bottom SBB, port C → Controller B: slot 5, port 3.
Note: Cables that connect one enclosure to another do not need to be labeled.
Prerequisites
If the Storage Center has EN‐SASx6x12/24 enclosures, you must have a label maker, or four
blank labels and a writing utensil.
Steps
1 Locate or create labels for the back‐end cables.
EN‐SASx6x12/24 enclosures: Create labels that identify the purpose (Backend), chain
number, and side (A or B).
Example: Backend 1A
SC200/220 enclosures: Locate the back‐end cable labels that shipped with the
enclosures. The pre‐made labels include the purpose (Backend), chain number, and
side (A or B).
2 Apply cable labels to both ends of each SAS cable that connects a controller to an
enclosure.
a Near the connector, align the label perpendicular to the cable and affix it starting
with the top edge of the label.
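The "Backend &lt;chain&gt;&lt;side&gt;" naming scheme above is simple to generate when many chains need labels. This short sketch (the function name is hypothetical) produces the full label set for a given chain count:

```python
# Sketch: generate back-end cable label text in the "Backend <chain><side>"
# form described above (e.g. "Backend 1A"). Each chain gets an A and B side.
def backend_labels(n_chains):
    return [f"Backend {chain}{side}"
            for chain in range(1, n_chains + 1)
            for side in ("A", "B")]

print(backend_labels(2))
# ['Backend 1A', 'Backend 1B', 'Backend 2A', 'Backend 2B']
```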
Front‐end cabling refers to connections between the controller and the server. Front‐end
connections can be made using iSCSI, Fibre Channel, or Fibre Channel over Ethernet
(FCoE). Dell Compellent recommends connecting servers to the controller(s) using the
most redundant options available.
Contents
Types of Redundancy for Front‐End Connections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
Multipath IO . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
Front‐End Connectivity Modes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
Fibre Channel Zoning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
Connect the Front End for a Single‐Controller Storage Center . . . . . . . . . . . . . . . . . . . . 103
Connect the Front End for a Dual‐Controller Storage Center . . . . . . . . . . . . . . . . . . . . . 105
Label the Front‐End Cables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
Chapter 4 Connect the Front End
Multipath IO
Multipath IO (MPIO) allows a server to use multiple paths for IO if they are available.
MPIO software offers redundancy at the path level. MPIO loads as a driver on the server,
and typically operates in a round‐robin manner by sending packets first down one path
and then the other. If a path fails, MPIO software continues to send packets down the
functioning path. MPIO is operating‐system specific.
Linux:
• Dell Compellent Storage Center Linux Best Practices
• Dell Compellent Best Practices: Storage Center with SUSE Linux Enterprise Server 11
VMware vSphere 5.x:
• Dell Compellent Storage Center Best Practices with vSphere 5.x
Windows Server 2003:
• Storage Center Multipath IO (MPIO) Manager for Microsoft Servers User Guide
Windows Server 2008, 2008 R2, and 2012:
• Dell Compellent Storage Center Microsoft Multipath IO (MPIO) Best Practices Guide
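The round-robin and failover behavior described above can be sketched in a few lines of Python. This is an illustration of the concept only, not MPIO driver code, and the path names are hypothetical.

```python
from itertools import cycle

class MpioSketch:
    """Toy model of MPIO round-robin path selection with path failover."""

    def __init__(self, paths):
        self.healthy = list(paths)      # paths currently usable
        self._rr = cycle(self.healthy)  # round-robin iterator over healthy paths

    def fail(self, path):
        # Drop a failed path; IO continues over the remaining path(s).
        self.healthy.remove(path)
        self._rr = cycle(self.healthy)

    def next_path(self):
        # Path chosen for the next IO packet.
        return next(self._rr)

mpio = MpioSketch(["path_A", "path_B"])
order = [mpio.next_path() for _ in range(4)]  # alternates between the two paths
mpio.fail("path_A")
after = [mpio.next_path() for _ in range(2)]  # all IO now uses the surviving path
```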
The front‐end connectivity mode is configured independently for Fibre Channel and iSCSI.
Both transport types can be configured to use the same mode or different modes to meet
the needs of the network infrastructure. For example, a Storage Center can be configured
to use virtual port mode for FC and legacy mode for iSCSI.
Note: Dell Compellent strongly recommends using virtual port mode unless the
network environment does not meet the requirements for virtual port mode.
Requirement Description
License The Storage Center must be licensed for virtual port mode.
Switches Front‐end ports must be connected to Fibre Channel or Ethernet switches;
servers cannot be directly connected to controller front‐end ports.
Requirement Description
Multipathing If multiple active paths are available to a server, the server must be
configured for MPIO to use more than one path simultaneously.
iSCSI networks • NAT must be disabled for iSCSI replication traffic.
• CHAP authentication must be disabled.
Fibre Channel fabrics • The FC topology must be switched fabric. Point‐to‐point and arbitrated
loop topologies are not supported.
• FC switches must be zoned to meet the virtual port mode zoning
requirements.
• FC switches must support N_Port ID Virtualization (NPIV).
• Persistent FCID must be disabled on FC switches.
Note: AIX servers are not supported.
See Also
Fibre Channel Zoning on page 102
[Figure: Virtual port mode. A server connects through an FC switch to active front-end ports on Controller A and Controller B.]
Note: To use both primary paths simultaneously, the server must be configured to
use MPIO.
The following table summarizes the failover behaviors for this configuration.
Controller A fails Virtual ports on controller A fail over by moving to physical ports
on controller B.
Controller B fails Virtual ports on controller B fail over by moving to physical ports on
controller A.
A single port fails The virtual port associated with the failed physical port moves to
another physical port in the fault domain.
Legacy Mode
Legacy mode provides controller redundancy for a dual‐controller Storage Center by
connecting multiple primary and reserved ports to each Fibre Channel or Ethernet switch.
In legacy mode, each primary port on a controller is paired with a corresponding reserved
port on the other controller. During normal conditions, the primary ports process IO and
the reserved ports are in standby mode. If a controller fails, the primary ports fail over to
the corresponding reserved ports on the other controller. This approach ensures that
servers connected to the switch do not lose connectivity if one of the controllers fails. For
optimal performance, the primary ports should be evenly distributed across both
controllers. When possible, front‐end connections should be made to separate controller IO
cards to improve redundancy.
Note: For a single‐controller Storage Center, only one fault domain is required for
each transport type (FC or iSCSI) because there are no reserved ports.
Requirement Description
Controller front‐end Each controller must have enough front‐end ports to connect two paths to
ports each Fibre Channel or Ethernet switch.
Multipathing If multiple active paths are available to a server, the server must be
configured for MPIO to use more than one path simultaneously.
Fibre Channel zoning FC switches must be zoned to meet the legacy mode zoning requirements.
See Also
Fibre Channel Zoning on page 102
[Figure: Legacy mode. Controller A and Controller B connect to an FC switch.]
Note: To use multiple paths simultaneously, the server must be configured to use
MPIO.
The following table summarizes the failover behaviors for this configuration.
Controller A fails In fault domain 1, primary port P1 fails over to reserved port R1.
Controller B fails In fault domain 2, primary port P2 fails over to reserved port R2.
A single port fails The port does not fail over because there was no controller failure. If
a second path is available, MPIO software on the server provides
fault tolerance by sending IO to the functioning port in the other
fault domain.
Note: In Legacy mode, reserved ports and primary ports reside on separate controllers, providing controller-level failover only. Legacy mode does not provide port-level failover.
Note: For virtual port mode, WWN zoning is recommended.
Virtual port mode:
• Include all Storage Center virtual WWNs in a single zone.
• Include all Storage Center physical WWNs in a single zone.
• Create server zones, each containing a single initiator (the server HBA WWN) and multiple targets (Storage Center virtual WWNs).
Fibre Channel replication:
• Include all Storage Center physical WWNs from System A and System B in a single zone.
• Include all Storage Center physical WWNs of System A and the virtual WWNs of System B on the particular fabric.
• Include all Storage Center physical WWNs of System B and the virtual WWNs of System A on the particular fabric.
Note: Some ports may not be used or dedicated for replication; however, ports that are used must be in these zones.
Fibre Channel replication:
• Include all Storage Center front-end ports from System A and System B in a single zone.
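Representing zones as sets of WWNs makes the virtual port mode rules above easy to check. The WWNs below are made-up placeholders, not values from any real system.

```python
# Hypothetical WWNs, for illustration only.
virtual_wwns = {"5000d31000aa0001", "5000d31000aa0002"}
physical_wwns = {"5000d31000ab0001", "5000d31000ab0002"}
server_hba = "2100001b32aa0001"

zones = {
    "sc_virtual": set(virtual_wwns),         # all virtual WWNs in one zone
    "sc_physical": set(physical_wwns),       # all physical WWNs in one zone
    "server1": {server_hba} | virtual_wwns,  # single initiator plus targets
}

# A server zone must contain exactly one initiator: the server HBA WWN.
initiators_in_zone = zones["server1"] - virtual_wwns - physical_wwns
```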
Note: The cabling instructions and diagrams in this section are valid for a particular
IO card configuration. If the controllers you are installing contain different IO cards,
adjust the cabling accordingly.
Note: iSCSI servers must be connected to Ethernet switches. An iSCSI server cannot
be directly connected to a front‐end iSCSI port on the controller.
Steps
1 Connect the FC fabrics.
a Connect a controller FC port to each FC fabric.
b Connect one or more FC servers to each FC fabric.
c Configure FC zoning to meet the zoning requirements.
If you are planning to configure FC in legacy mode, configure zoning to meet the
legacy mode zoning requirements.
If you are planning to configure FC in virtual port mode, configure zoning to
meet the virtual port mode zoning requirements.
2 Connect the iSCSI networks.
a Connect a controller iSCSI port to each iSCSI network.
b Connect one or more iSCSI servers to each iSCSI network.
3 During the software setup process, configure fault domains based on the connectivity
mode:
Legacy mode: Create one fault domain for each transport type (FC and iSCSI).
Virtual port mode: Create one fault domain for each fabric or directly connected FC
server.
Controller
Note: iSCSI servers must be connected to Ethernet switches. An iSCSI server cannot
be directly connected to a front‐end iSCSI port on the controller.
Steps
1 Connect the FC fabrics.
a Connect two controller FC ports to each FC fabric.
b Connect one or more FC servers to each FC fabric, making two connections from
each server to the fabric.
c Configure FC zoning to meet the zoning requirements.
If you are planning to configure FC in legacy mode, configure zoning to meet the
legacy mode zoning requirements.
If you are planning to configure FC in virtual port mode, configure zoning to
meet the virtual port mode zoning requirements.
2 Connect the iSCSI networks.
a Connect two controller iSCSI ports to each iSCSI network.
b Connect one or more iSCSI servers to each iSCSI network, making two connections
from each server to the network.
3 During the software setup process, configure fault domains based on the connectivity
mode:
Legacy mode: Create one fault domain for each transport type (FC and iSCSI).
Virtual port mode: Create one fault domain for each fabric or directly connected FC
server.
FC Server SCSI Server
Controller
Note: The cabling instructions and diagrams in this section are valid for a particular
IO card configuration. If the controllers you are installing contain different IO cards,
adjust the cabling accordingly.
Note: To prevent a port or cable failure from blocking access to volumes mapped to
a controller, connect two additional fault domains so that each controller has two
primary paths to the FC fabric.
Prerequisites
Each controller must have two available FC front‐end ports.
Steps
1 Connect each FC server to both FC fabrics.
2 Connect fault domain 1 (shown in orange) to fabric 1.
Primary port P1: Controller A, slot 1, port 1 → Fabric 1
Reserved port R1: Controller B, slot 1, port 1 → Fabric 1
3 Connect fault domain 2 (shown in blue) to fabric 1.
Primary port P2: Controller B, slot 2, port 1 → Fabric 1
Reserved port R2: Controller A, slot 2, port 1 → Fabric 1
4 Configure FC zoning to meet the legacy mode zoning requirements.
[Figure: Legacy mode. Fault domains 1 (P1/R1) and 2 (P2/R2) connect Controllers A and B to Fabric 1, which serves Servers 1 and 2.]
[Figure: Legacy mode. Four fault domains (P1/R1, P2/R2, P3/R3, P4/R4) connect Controllers A and B to Fabrics 1 and 2, which serve Servers 1 and 2.]
Controller B, slot 2, port 1 → Fabric 1
3 Configure FC zoning to meet the legacy mode zoning requirements.
[Figure: Controllers A and B connect to Fabric 1, which serves Servers 1 and 2.]
[Figure: Controllers A and B connect to Fabrics 1 and 2.]
See Also
Requirements for Virtual Port Mode on page 97
Fibre Channel Zoning on page 102
Note: To prevent a port or cable failure from blocking access to volumes mapped to
a controller, connect two additional fault domains so that each controller has two
primary paths to the iSCSI network.
Prerequisites
Each controller must have two available iSCSI front‐end ports.
Steps
1 Connect each iSCSI server to both Ethernet switches.
2 Connect fault domain 1 (shown in orange) to switch 1.
Primary port P1: Controller A, slot 3, port 1 → Switch 1
Reserved port R1: Controller B, slot 3, port 1 → Switch 1
3 Connect fault domain 2 (shown in blue) to switch 1.
Primary port P2: Controller B, slot 4, port 2 → Switch 1
Reserved port R2: Controller A, slot 4, port 2 → Switch 1
[Figure: Legacy mode. iSCSI fault domains 1 (P1/R1) and 2 (P2/R2) connect Controllers A and B to Ethernet Switch 1 on iSCSI Network 1, which serves the server.]
3 Connect fault domain 2 (shown in blue) to switch 1.
Primary port P2: Controller B, slot 4, port 2 → Switch 1
Reserved port R2: Controller A, slot 4, port 2 → Switch 1
4 Connect fault domain 3 (shown in green) to switch 2.
Primary port P3: Controller A, slot 4, port 1 → Switch 2
Reserved port R3: Controller B, slot 4, port 1 → Switch 2
5 Connect fault domain 4 (shown in purple) to switch 2.
Primary port P4: Controller B, slot 3, port 2 → Switch 2
Reserved port R4: Controller A, slot 3, port 2 → Switch 2
[Figure: Legacy mode. Four iSCSI fault domains (P1/R1, P2/R2, P3/R3, P4/R4) connect Controllers A and B to two Ethernet switches, which serve the server.]
[Figure: Controllers A and B connect to Ethernet Switch 1 on iSCSI Network 1.]
[Figure: Controllers A and B connect to the server.]
2 Apply cable labels to both ends of each iSCSI cable that connects a controller to a front‐
end switch.
a Prepare two labels by writing the controller number, slot, and port that the cable is
connected to.
b Near the connector, align the label perpendicular to the cable and affix it starting
with the top edge of the label.
3 Apply cable labels to both ends of each FC cable that connects a controller to a front-end fabric or server.
a Prepare two labels by writing the controller number, slot, and port that the cable is
connected to.
b Near the connector, align the dotted line on the label with the edge of the cable and
affix it starting with the top edge of the label.
Connect the Storage Center management interfaces and configure the Storage Center
software.
Contents
Prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
Connect the Ethernet Management and iDRAC Interfaces . . . . . . . . . . . . . . . . . . . . . . . 121
Turn on the Hardware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
Configure the Controller(s) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
Verify Software Support for Enclosures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
Start the Storage Center Startup Wizard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
Complete the Startup Wizard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
Label SC200 and SC220 Enclosures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
Prerequisites
Before the Storage Center software can be configured, the hardware must be connected,
and you must have the required materials and documents.
Hardware Configuration
All hardware must be installed and cabled before beginning the software setup process. If
server connectivity is through Fibre Channel (FC), the FC switches must be configured and
zoned before configuring the controller(s).
Required Materials
Remote connections to the Storage Center are required during different stages of
configuration.
Make sure that you have the items listed in the following table.
Computer connected to the same network as the Storage Center: Used to connect to the Storage Center System Manager (through a web browser) to complete the Storage Center configuration.
Storage Center license file Used to activate purchased features.
License files use the naming convention snxxx_35_date.lic, where:
• snxxx is the serial number of the Storage Center.
• 35 indicates that the system runs software higher than version 3.5.
• date shows the license generation date in YYMMDD format.
• .lic is the file extension.
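The naming convention can be expressed as a regular expression; the sketch below parses a hypothetical file name (the serial number and date are invented for the example).

```python
import re

# snxxx_35_date.lic: serial number, software-version marker (35), YYMMDD date.
LICENSE_RE = re.compile(r"^sn(?P<serial>\d+)_35_(?P<date>\d{6})\.lic$")

def parse_license_name(filename):
    """Return (serial, date) from a Storage Center license file name."""
    match = LICENSE_RE.match(filename)
    if match is None:
        raise ValueError(f"not a Storage Center license file name: {filename}")
    return match.group("serial"), match.group("date")

serial, date = parse_license_name("sn12345_35_140321.lic")  # hypothetical name
```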
Required Documents
A Storage Architect or Business Partner designed the components of Storage Center for the
specific site. Pre‐installation documentation contains detailed information from the design
that is used during the setup process.
Pre‐installation documents include:
List of hardware needed to support storage requirements
Hardware Serial Number (HSN) and System Serial Number (SSN) used to configure the
controller(s)
Optional connectivity diagrams to illustrate cabling between the controllers,
enclosures, switches, and servers
Optional network information, such as IP addresses, subnet masks, gateways
Refer to these documents for information about site‐specific settings used during controller
configuration.
SC200/220 (enclosures): 6.2.1
SC280 (enclosures): 6.4.1
[Figure: Rear-panel IPC, MGMT, and iDRAC Ethernet ports on Controller A and Controller B. The management connection uses the 1 Gb ETH 0 port.]
2 Label both ends of each Ethernet cable with the appropriate management or iDRAC
label.
a Near the connector, align the label perpendicular to the cable and affix it starting
with the top edge of the label.
Note: Make sure that all disks are up and spinning at full speed before turning
on any controllers.
3 Turn on each controller by pressing and holding the power button on the front of the
controller. The fans turn on as an indication that the controller is starting to come up.
Note: If configuring a dual‐controller Storage Center, perform this task for both
controllers. Use the lowest serial number for Controller 1.
1 Establish a serial connection to the controller.
a Use an RS‐232 serial cable to connect a PC or laptop to a Storage Center controller’s
RS‐232 serial port. (To connect from a USB port, use a USB to RS‐232 converter.)
b Turn on the PC.
c Open a terminal emulator or a command‐line interface. Configure the connection as
shown in the following table.
Setting Value
Emulation VT220
Column Mode 132
Line Wrapping Off
Connection Serial Port
Connection Type Direct
Baud Rate 115,200
Parity None
Data Bits 8
Stop Bits 1
Flow Control Hardware or default
Note: To facilitate troubleshooting, enable logging for the session.
d Press the Enter key several times to initiate the connection. The terminal echoes back
to indicate that connectivity has been established. If the prompt (>) is not displayed,
see Troubleshooting the Serial Connection on page 205.
2 Set the HSN and SSN using the SSN(s) provided in the pre‐installation documents.
Caution: The following commands are for new installations only. Running the
cs purge all command on an existing Storage Center controller deletes the
existing configuration and can make data inaccessible.
Controller 1: Set the HSN and SSN to the lower SSN.
shellaccess developer
platform init hsn set [lower SSN]
platform init ssn set [lower SSN]
cs purge all
-reset
Controller 2: (Dual‐controller only) Set the HSN and SSN to the higher SSN.
shellaccess developer
platform init hsn set [higher SSN]
platform init ssn set [higher SSN]
cs purge all
-reset
The controller takes several minutes to reboot. The controller may reboot more than
once — this is normal.
3 When the reboot is complete, verify that the table that is displayed ends with:
2 1 Finish
4 Verify the new settings by running the controller show command.
Note: Although entered in decimal, the new HSN and SSN values are returned
in hexadecimal.
5 Record the factory default eth1 address. For dual‐controller Storage Centers, the eth1
address from Controller 2 is needed during the setup process.
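The decimal-to-hexadecimal translation noted in step 4 can be checked ahead of time, for example with Python; the SSN below is made up.

```python
# An HSN/SSN is entered in decimal but reported in hexadecimal by `controller show`.
ssn_decimal = 12345                 # hypothetical lower SSN
ssn_hex = format(ssn_decimal, "X")  # value to expect in the console output
```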
Note: If configuring a dual‐controller Storage Center, perform this task for both
controllers.
1 Run the following command to configure the management interface (eth0).
controller ipconfig eth0 [IP address] [netmask] [gateway]
Example:
controller ipconfig eth0 172.31.1.101 255.255.0.0 172.31.0.1
2 Run the following command to specify one or more DNS servers.
controller dnsserver [DNS server 1 IP address] [DNS server 2 IP address]
Example:
controller dnsserver 172.31.0.50 172.31.0.60
3 Run the following command to specify the domain to which the Storage Center belongs.
controller domainname [domain name]
Example:
controller domainname corp.dom
4 Verify the changes to the network settings by running the following command:
controller show
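Before running controller ipconfig, the address, netmask, and gateway can be sanity-checked to confirm that the gateway sits on the management subnet. A sketch using Python's standard ipaddress module, with the values from the example above:

```python
import ipaddress

ip, netmask, gateway = "172.31.1.101", "255.255.0.0", "172.31.0.1"

# Derive the management subnet from the interface address and netmask.
network = ipaddress.ip_network(f"{ip}/{netmask}", strict=False)

# The gateway must be an address on that subnet.
gateway_ok = ipaddress.ip_address(gateway) in network
```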
Note: To use the iDRAC in a non‐DHCP environment, a static IP address must be
set.
To see if the iDRAC is set to use a static IP address or DHCP, run the following
command:
platform bmc get source
To see what IP address is assigned to the iDRAC, run the following command:
platform bmc show
To set the iDRAC to a static IP address, run the following commands:
platform bmc set ip [IP address]
platform bmc set netmask [netmask]
platform bmc set gateway [gateway]
Example:
platform bmc set ip 10.0.0.12
platform bmc set netmask 255.255.255.0
platform bmc set gateway 10.0.0.1
Note: On a dual‐controller Storage Center, change the iDRAC IP address on
both SC8000 controllers.
To set the iDRAC to use DHCP, run the following command:
platform bmc set source DHCP
Note that DHCP must be entered in all caps.
06 04 01
In the example above, the controller is running Storage Center OS 6.4.1.
3 If the controller is running a version older than the minimum version specified below,
upgrade Storage Center OS to the minimum version or later.
SC280: 6.4.1
a Use a web browser to connect to the management IP of the controller. The Storage
Center Startup Wizard appears.
b Accept the license agreement, then close the Startup Wizard.
c Update Storage Center OS to the latest version.
If the Storage Center has internet connectivity, use the System Manager to
download and install the latest Storage Center OS. For instructions, see the
Storage Center Software Update Guide.
If the Storage Center does not have internet connectivity, use the Storage Center
Maintenance CD to perform the upgrade. See the Storage Center Maintenance CD
Instructions document for details.
4 After updating the Storage Center OS, check for additional updates again. Enclosure
firmware updates may not install with the OS update, since Storage Center does not
recognize some new hardware until after the OS is installed.
Note: When using the network to upgrade Storage Center to OS 6.4.1 for an
SC280 enclosure, the enclosure firmware updates must be installed separately
after Storage Center OS 6.4.1 is installed.
Check the Update Details window for Enclosure updates marked Deferrable, and
install them now. The enclosure firmware updates may take 20‐30 minutes. Do not
interrupt the update after it has started.
5 If you are configuring a dual-controller Storage Center, perform steps 1–4 on the second controller. Firmware upgrades should have been installed when the first controller was updated.
6 After both controllers are upgraded to the latest version, log out of System Manager and
log back in to continue with the setup process (see Step 3 on page 128).
Note: Messages that appear may differ depending on the browser used. (Click
Continue to this website in Internet Explorer or add a security exception in
Firefox.)
The Storage Center Login page appears.
3 Enter the default user name and password:
User: Admin
Password: mmm
Note: The user name and password are case‐sensitive.
4 Click Login.
5 If prompted, click Yes to acknowledge certificate messages.
The Startup Wizard appears and displays the Software End User License Agreement
(EULA).
License Agreement
Use the License Agreement page to read and accept the Software End User License
Agreement (EULA).
1 Enter information for the required Approving Customer Name and Approving
Customer Title fields. The approving customer’s name and title and the approval date
are recorded and sent to Dell Technical Support Services using Phone Home.
Note: The End User License Agreement is also displayed the first time any new
user logs on to the Storage Center. When displayed for a new user, the EULA
does not require a customer name or title.
2 After reading the EULA, click Accept to continue with setup. The Load License page
appears.
3 Select the license file, then click Load License. The Startup Wizard displays a message
when the license is successfully loaded.
If the license loads successfully, the message The license submission has completed
successfully appears.
If the Error: The license file is not valid message is displayed, see Troubleshooting Licenses on page 206.
4 Click Continue.
If standard drives are detected, the Create Disk Folder page appears.
If Self‐Encrypting Drives (SEDs) that are Federal Information Processing Standard
(FIPS) 140‐2 certified are detected, Storage Center formats the drives as if Secure
Data will be used, and you are asked whether to create a Secure Data folder. See
About Secure Data on page 131.
If the No disks could be found message is displayed, see Troubleshooting Enclosures
on page 205.
Note: Because disks that contain user data cannot be moved from a Secure Data
folder, Storage Center does not crypto erase disks that contain user data.
To protect data at rest, all SEDs in a Secure Data disk folder lock when power is removed
(Lock on Reset enabled). When power is removed from the drive, the drive cannot be
unlocked without access to the authority credential stored in the key management server.
1 Select disks for the disk folder. By default, the Startup Wizard selects all available
disks.
Note: A Secure Data folder can only contain self‐encrypting FIPS drives, which
are indicated by a yellow lock icon.
a Scroll through the list looking at the Enclosure column and the Position or Name
column to verify that all the expected enclosures and disks are listed.
If enclosures or disks are missing, the issues might be fixed by following
Troubleshooting Enclosures on page 205.
b (Optional) From the list of disks, select disks to include in the disk folder. By default,
all disks are selected.
To exclude individual disks from the disk folder, clear the corresponding check
boxes.
To include individual disks in the disk folder, click Unselect All and then select
the individual disks to include.
To select all disks, click Select All.
c Click Continue. The Startup Wizard displays a prompt to select disks to designate
as hot spares.
2 Review the hot spare disks automatically selected by the Startup Wizard.
A hot spare disk is held in reserve until a disk fails, at which point the hot spare replaces
the failed disk. The hot spare disk must be as large or larger than the largest disk of its
type in the disk folder. For redundancy, there must be at least one hot spare for each
enclosure.
In general, System Manager uses the following best practices in designating hot spares:
For 2U enclosures, one spare disk for every disk class (10K, 7.5K, and so on).
For 5U enclosures, one spare disk for every 21 disks; however, no single row will contain more than one spare disk.
a (Optional) Change the selection by selecting or clearing disks to be used as hot
spares.
b Click Continue, and if prompted, click OK to confirm. The Startup Wizard displays
a summary of the disk folder that will be created.
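A literal reading of the spare-count rules in step 2 can be sketched in Python. The row-placement constraint for 5U enclosures (no more than one spare per row) affects where spares go, not how many there are, so it is not modeled here.

```python
import math

def spares_for_2u(disk_classes):
    """2U enclosures: one spare per disk class present (10K, 7.5K, and so on)."""
    return len(set(disk_classes))

def spares_for_5u(disk_count):
    """5U enclosures: one spare for every 21 disks."""
    return math.ceil(disk_count / 21)

two_u = spares_for_2u(["10K", "10K", "7.5K"])  # two classes present
five_u = spares_for_5u(84)                     # 84-disk example
```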
3 (Optional) Modify the default folder name and enter a description in the Notes field.
4 (Optional) Click Advanced to configure advanced disk folder options. The wizard
displays options for redundancy and datapage size.
Note: The default managed disk folder settings are appropriate for most sites. If
you are considering changing the default settings, contact Dell Technical
Support Services for advice.
a Select Prepare Disk Folder for redundant storage.
b Configure the Tier Redundancy for each tier.
The redundancy level for each tier defaults to either single redundant or dual
redundant depending upon the disks expected to be found in the tier. If a tier
contains at least six managed disks of which one is 900 GB or greater, then that tier
and all tiers below it are set to dual redundant storage by default. If a tier contains
at least six managed disks of 1.8 TB or greater, then that tier and all tiers below it are
set to dual redundant storage and cannot be changed.
Single redundant storage protects against the loss of any one drive.
RAID 10 (each disk is mirrored)
RAID 5‐5 (4 data segments/1 parity segment per stripe)
RAID 5‐9 (8 data segments/1 parity segment per stripe)
Dual redundant storage protects against the loss of any two drives:
RAID 10 Dual Mirror (data is written simultaneously to three separate disks)
RAID 6‐6 (4 data segments/2 parity segments per stripe)
RAID 6‐10 (8 data segments/2 parity segments per stripe)
c From the Datapage Size drop‐down menu, choose a datapage size.
2 MB: (Default) Recommended for most application needs.
512 KB: Appropriate for applications with high performance needs (such as
certain databases) or environments in which Replays are taken frequently under
heavy IO. Selecting this size reduces the amount of space the System Manager
can present to servers.
4 MB: Appropriate for systems that use a large amount of disk space with
infrequent Replays (such as video streaming).
Caution: When considering using either the 512 KB or 4 MB datapage
settings, it is recommended that you contact Dell Technical Support Services
for advice on balancing resources and to understand the impact on
performance.
d To configure the disk folder to use RAID 0, select the Prepare for Non‐Redundant
Storage check box. Non‐redundant storage does not protect data in the event of a
disk failure. Select this option only for data that is backed up some other way.
e Click Continue. The disk folder summary appears.
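The default tier-redundancy rules in step b can be sketched as a small Python function. Tiers are listed top to bottom, each as a list of managed-disk sizes in GB; this is an illustration of the stated rules, not Storage Center logic.

```python
def tier_redundancy_defaults(tiers):
    """Return (default_level, locked) for each tier, top to bottom.

    A tier with at least six managed disks, one of which is 900 GB or
    larger, defaults that tier and all tiers below it to dual redundancy.
    At least six disks of 1.8 TB (1800 GB) or larger also lock the setting.
    """
    results = []
    force_dual = False
    lock = False
    for disks in tiers:
        if len(disks) >= 6 and any(size >= 900 for size in disks):
            force_dual = True
        if sum(1 for size in disks if size >= 1800) >= 6:
            lock = True
        level = "dual" if (force_dual or lock) else "single"
        results.append((level, lock))
    return results

# Six 2 TB disks in tier 1 force dual redundancy (locked) on both tiers.
defaults = tier_redundancy_defaults([[2000] * 6, [450] * 12])
```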
5 Click Create Now, and if prompted, click OK to confirm. The Add Controller page
appears.
Add a Controller
Use the Add Controller pages to configure the second controller and to enter the IPv6
address, if used, on the leader controller.
1 Proceed based on the number of controllers in the Storage Center.
Single‐controller Storage Center: Click Continue Setup. The Time Settings page
appears.
Dual‐controller Storage Center: Click Add Controller. The wizard prompts you to
enter IPv6 address information for the first (Leader) controller.
2 Proceed based on whether you use IPv6 for controller addressing.
a If IPv6 is used, enter IPv6 information for the first controller and click Continue.
b If IPv6 is not used, click Skip IPv6 Configuration.
Note: Information displayed in the following figure is for illustration only.
The values displayed are unique to each Storage Center.
3 Configure the following fields:
Controller ID: If the HSN for Controller 2 is included in the license file, the add
controller process uses that value as the Controller ID and a different value cannot
be entered. If the HSN for Controller 2 is not in the license file, enter the HSN value.
The dimmed box in the preceding figure indicates that the HSN was in the license
file making the Controller ID box unavailable.
Ether 0 Interface: Enter the IP Address, Net Mask, and Gateway for the Ether 0
interface on Controller 2.
Ether 1 Interface: Enter the IP Address, Net Mask, and Gateway for the Ether 1
interface. Use the IP address from the controller show console command for
Controller 2. (Used for communication between controllers.)
Primary DNS Server: Enter the IP address of the primary DNS server.
Secondary DNS Server: (Optional) Enter the IP address of a secondary DNS server.
Domain Name: (Optional) Enter the domain name for the controller.
4 Click Continue.
The Startup Wizard displays a message that data and configuration information on the
second controller is lost and asks for confirmation.
5 Click Join Now. Wait for the process to complete and for the controller to reboot, which
can take a few minutes. When complete, the Time Settings page appears.
1 Set the time zone.
a From the Region drop‐down menu, select the geographical region in which the
Storage Center is located.
b From the Time Zone drop‐down menu, select the time zone in which the Storage
Center is located.
Note: For locations in the United States, either select US as the region and select
a time zone name, or select America as the region and select a city within the
same time zone.
2 Set the system time using one of the following methods.
To configure time manually, select Configure Time Manually, then enter the date
and time.
To configure time using a Network Time Protocol (NTP) server, select Use NTP
Time Server, then enter the fully qualified domain name (FQDN) or IP address of
an NTP server.
Note: Accurate time synchronization is critical for replications. Dell Compellent
recommends using NTP to set the system time. For more information, see:
support.ntp.org/bin/view/Support/WebHome.
3 When the system time has been set, click Continue. The System Setup page appears.
1 In the System Name field, enter a name for the Storage Center. This is typically the
serial number of Controller 1.
2 (Dual‐controller only) In the Management IP Address field, enter the management IP
address specified in the pre‐installation documents. (On single‐controller systems, this
field is not displayed and the message at the top of the pane reads simply, Set the
System’s name.)
The management IP address is distinct from the controller 1 and controller 2 addresses.
It is the address that manages a dual‐controller Storage Center as a whole. If either
controller fails, the management IP address remains valid.
3 Click Continue. The wizard prompts you to enable or disable the read and write cache.
4 Select or clear the check boxes to enable or disable read and write cache.
Note: Disable cache only if no volumes will ever use cache. If cache is left
enabled on this page, it can be disabled later for individual volumes. See the
Storage Center System Manager Administrator’s Guide for information on disabling
cache on a volume‐by‐volume basis.
5 Click Continue.
If a Secure Data folder is used with FIPS SEDs, you are prompted to configure a key
management server.
If standard drives are used, the Configure SMTP page appears.
Note: The key management server must be available in order to manage new self‐
encrypting FIPS disks into a Secure Data folder that is already in the Secure
state.
1 Enter the Host name or IP address of the key management server, the port number to
use, and a communication timeout value.
2 (Optional) Enter the name of the host or hosts to use if the primary host is unavailable.
3 If the key management server is configured to validate the client certificate, enter the
key management credentials in the Client Username and Client Password boxes.
4 Click Continue. The next page of the wizard provides information about the SSL
certificate files required to communicate with the key management server. The
certificate files are generated by your system administrator; they are not supplied by
Dell Compellent.
5 Click Upload Client Certs.
6 Navigate to and select the existing public key (*.pem) file.
7 Click Continue.
8 Repeat Step 5 through Step 7 for the second controller (if present).
9 Click Upload Certstore Cert.
10 Navigate to and select the existing public key (*.pem) file.
11 Click Continue. The last page of the wizard appears.
12 Click Finish Now. The Configure SMTP page appears.
Configure SMTP
Use the Configure SMTP page to configure the SMTP mail server and the sender email
address. This configuration enables alert message emails to be sent to users who have
specified a recipient address in their contact properties.
Note: To configure SMTP later, click Skip SMTP Configuration. SMTP settings can
be configured later in the System Manager through the Storage Management
menu.
1 In the SMTP Mail Server field, enter the IP address or fully qualified domain name of
the SMTP email server. Click Test server to verify connectivity to the SMTP server.
2 In the Sender E‐mail Address field, enter the email address of the sender. This address
is required by most SMTP servers and is used as the MAIL FROM address of email
messages.
3 (Optional) Click Advanced to configure additional SMTP settings for sites that use an
advanced SMTP configuration. The page for advanced options appears.
a Verify that the Enable SMTP E‐mail check box is selected.
b In the SMTP Mail Server field, verify the IP address or fully qualified domain name
of the SMTP mail server that you entered in Step 1 on page 142. Modify this field if
necessary.
c In the Backup SMTP Mail Server field, enter the IP address or fully qualified
domain name of the backup SMTP mail server.
d Click Test server to test the connection(s).
e In the Sender E‐mail Address (MAIL FROM) field, verify the email address of the
sender that you entered in Step 2 on page 142. Modify this field if necessary.
f In the Common Subject Line field, enter a common subject line for all emails from
Storage Center.
g (Optional) Select Send Extended Hello (EHLO) to configure use of extended hello
for mail system compatibility.
Instead of beginning the session with the HELO command, the receiving host issues
the HELO command. If the sending host accepts this command, the receiving host
then sends it a list of SMTP extensions it understands, and the sending host then
knows which SMTP extensions it can use to communicate with the receiving host.
Implementing Extended SMTP (ESMTP) requires no modification of the SMTP
configuration for either the client or the mail server.
h (Optional) Select Use Authorized Login (AUTH LOGIN) and enter the Login ID
and Password if the email system requires the use of an authorized login.
4 Click Continue. The Update Setup page appears.
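The connectivity check behind the Test server button can be approximated from any host that can reach the mail server. This is a hedged sketch using Python's standard smtplib; the host name and sender address are placeholders, and the check only verifies that the server answers EHLO and accepts the MAIL FROM address, without sending mail.

```python
import smtplib

def check_smtp(host: str, sender: str, port: int = 25) -> bool:
    """Return True if the mail server answers EHLO and accepts the sender address."""
    try:
        with smtplib.SMTP(host, port, timeout=10) as server:
            server.ehlo()                  # Extended Hello, as in the EHLO option above
            code, _ = server.mail(sender)  # MAIL FROM, as used for the sender address
            server.rset()                  # Abort the transaction without sending mail
            return code == 250
    except (OSError, smtplib.SMTPException):
        return False

# Example: check_smtp("smtp.example.com", "storagecenter@example.com")
```

A False result from a host on the same network as the Storage Center suggests the SMTP Mail Server or Sender E‐mail Address entries need correcting before alert email will work.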
1 Select an update option from the drop‐down menu:
Do not automatically check for software updates: Select this option to disable
automatic checking for updates. If this option is selected, the following options are
still available:
Manually check for updates by selecting Storage Management→ System→
Update→ Check for Update as described in the Storage Center System Manager
Administrator’s Guide.
Change update options by selecting Storage Management→ System→
Update→ Configure Automatic Updates as described in the Storage Center
System Manager Administrator’s Guide.
Notify me of a software update but do not download automatically: Select this
option to automatically check for updates and receive notification when an update
is available. Updates are not downloaded until you explicitly download the update.
(This is the default setting.)
Download software updates automatically and notify me: Select this option to
automatically download updates and receive notification when the download is
complete.
Never check for software updates (Phone Home not available): Select this option
to prevent the system from ever checking for updates. This option is for secure sites
at which Phone Home is not available.
2 Click Continue. The User Setup page appears.
3 From the Session Timeout drop‐down list, select the session timeout.
4 In the Email, Email 2, and Email 3 fields, enter email addresses to which the Storage
Center sends system alerts.
5 (Optional) Click Send test e‐mail to make sure that the email addresses are accurate.
Note: Make sure that the administrator receives the test email(s). Storage Center
uses email to send system alerts.
6 Click Continue.
If the Storage Center has iSCSI IO cards, the Configure IO Cards page appears.
If there are no iSCSI IO cards, the Configure Ports page appears.
Configure IO Cards
The Configure IO Cards page appears if the Storage Center detects iSCSI IO cards. Use this
page to configure network settings for iSCSI IO cards.
1 (Optional) Click Skip iSCSI IO Card Configuration to configure network settings for
the iSCSI IO cards later. The Configure Ports page appears.
Note: Dell Compellent recommends that you configure iSCSI IO cards during
setup.
2 For each iSCSI IO card that is used, complete the IP Address, Subnet Mask, and
Gateway fields. Uninitialized iSCSI IO cards have an IP Address of 0.0.0.0 and are
listed on a warnings page in the next step.
3 When you are finished, click Continue.
If no messages are generated, iSCSI IO card configuration is saved and the
Configure Ports page appears.
If there are issues, a message appears.
Click No to go back and correct the IP Address, Subnet Mask, and/or Gateway
address for cards that generated errors and/or warnings.
Click Yes to ignore the message and continue. The Configure Ports page appears.
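The wizard flags uninitialized cards (IP Address 0.0.0.0) and inconsistent entries. A quick way to sanity‐check an address/mask/gateway triple before typing it in is sketched below using Python's standard ipaddress module; this mirrors the obvious consistency checks and is not a Storage Center tool.

```python
import ipaddress

def valid_iscsi_settings(ip: str, netmask: str, gateway: str) -> bool:
    """Check that the address is initialized and the gateway lies in the same subnet."""
    if ip == "0.0.0.0":  # how the wizard reports an uninitialized iSCSI IO card
        return False
    try:
        network = ipaddress.IPv4Network(f"{ip}/{netmask}", strict=False)
        return ipaddress.IPv4Address(gateway) in network
    except ValueError:  # malformed address or mask
        return False
```

A triple that fails this check is likely to generate an error or warning on the wizard's next page.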
Configure Ports
Use the Configure Ports page to configure the front‐end and back‐end ports.
If Virtual Ports is not licensed: All transport types use legacy mode. In this case, the
Startup Wizard displays the Initial Port Configuration page. To continue, go to
Configure Ports When Virtual Port Mode Is Not Licensed on page 147.
If Virtual Ports is licensed: FC and iSCSI transport types can use legacy mode or
virtual port mode. In this case, the Startup Wizard displays an operational mode choice
page that allows a choice between modes for each supported transport type. To
continue, go to Configure Ports When Virtual Port Mode Is Licensed on page 149.
Note: To skip port initialization or if there is no FC switch set up, click Skip Port
Initialization.
1 Click Continue to generate the initial port configuration.
While the initial port configuration is being generated, the Startup Wizard displays a
page showing progress. After the configuration has been generated, it is automatically
validated.
If the validation is successful, a confirmation page appears stating that the port
configuration has been generated.
If the validation fails, a page displaying warnings appears.
2 Click Configure Local Ports. The wizard displays a tab for each type of IO card that is
installed in the Storage Center (FC, iSCSI, and SAS).
3 Configure the Purpose, Fault Domain, and User Alias for all transport types. See
Configure Local Ports on page 154.
4 When you are finished configuring the local ports, click Assign Now. The wizard
informs you that the port configuration has been generated.
5 Click Continue. The Generate SSL Cert page appears.
1 Select the front‐end connectivity mode(s).
a Select the operational mode for FC and iSCSI transports.
b Click Continue to configure the selected operational mode(s). The wizard informs
you that the operational modes have been configured.
c Click Continue to begin port initialization in the selected operational mode(s).
The wizard verifies the configuration, converts selected transports to the selected
mode, displays progress pages, and presents a confirmation page when operational
modes for transport types have been configured and initialized.
d Click Continue. The page that appears next depends on whether there are any iSCSI
transports in virtual port mode:
iSCSI in virtual port mode: A request is displayed to provide the IP address
information required to create the default iSCSI Fault Domain. Go to Step 2 on
page 151.
iSCSI in legacy port mode (or no iSCSI ports): While the initial port
configuration is being generated, the Startup Wizard displays a page showing
progress. After the configuration has been generated, it is automatically
validated. If the validation is successful, the confirmation page appears. If there
are issues, a page appears listing error messages. Go to Step 3 on page 152.
2 (iSCSI virtual port mode only) Configure the control port for the iSCSI fault domain.
a Enter the IP address, netmask, gateway, and port for the control port for the new
iSCSI fault domain. Check the pre‐installation documentation for this address.
b Click Continue.
The Startup Wizard generates the new iSCSI fault domain and the initial port
configuration. While the initial port configuration is being generated, the Startup
Wizard displays a page showing progress. After the configuration has been
generated, it is automatically validated.
If there are issues, an error message page appears.
If the validation is successful, a confirmation page appears.
3 Click Configure Local Ports. The wizard displays a tab for each type of IO card that is
installed in the Storage Center (FC, iSCSI, and SAS).
4 Configure the Purpose, Fault Domain, and User Alias for all transport types. See
Configure Local Ports on page 154.
5 When you are finished configuring the local ports, click Assign Now. The wizard
informs you that the port configuration has been generated.
6 Click Continue. The Generate SSL Cert page appears.
Note: An FC port with the Back End purpose is connected to disk enclosures.
2 Create the required fault domains. For a dual‐controller Storage Center, create a fault
domain for each pair of redundant FC ports. For a single‐controller Storage Center,
create a single fault domain for all FC ports.
a Click Edit Fault Domains. The wizard displays a list of the currently defined fault
domains.
b Click Create Fault Domain. A dialog box appears.
c In the Name field, type a name for the fault domain.
d From the Type drop‐down menu, select FC.
e (Optional) In the Notes field, type a description of the fault domain.
f Click Continue. The dialog box displays a summary.
g Click Create Now to create the fault domain.
h Repeat steps b–g as needed to create additional fault domains.
i When you are finished creating fault domains, click Return. The FC tab appears.
3 Configure each back‐end FC port.
a Set the Purpose field to Back End.
b Confirm that the Fault Domain field is set to <none>.
c (Optional) Type a descriptive name in the User Alias field.
4 Configure each front‐end FC port.
a Set the Purpose field to Front End Primary or Front End Reserved as appropriate.
b Set the Fault Domain field to the appropriate fault domain that you created.
c (Optional) Type a descriptive name in the User Alias field.
5 Configure each port that is unused or used for IPC between controllers.
a Set the Purpose field to Unknown.
b Confirm that the Fault Domain field is set to <none>.
c (Optional) Type a descriptive name in the User Alias field.
2 Create a fault domain for each FC fabric.
a Click Edit Fault Domains. The wizard displays a list of the currently defined fault
domains.
b Click Create Fault Domain. A dialog box appears.
c In the Name field, type a name for the fault domain.
d From the Type drop‐down menu, select FC.
e (Optional) In the Notes field, type a description of the fault domain.
f Click Continue. The dialog box displays a summary.
g Click Create Now to create the fault domain.
h Repeat steps b–g as needed to create additional fault domains.
i When you are finished creating fault domains, click Return. The FC tab appears.
3 Configure each back‐end FC port.
a Set the Purpose field to Back End.
b Confirm that the Fault Domain field is set to <none>.
c (Optional) Type a descriptive name in the User Alias field.
4 Configure each front‐end FC port.
a Set the Purpose field to Front End.
b Set the Fault Domain field to the appropriate fault domain that you created.
c (Optional) Type a descriptive name in the User Alias field.
5 (Optional) Change the preferred physical port for one or more virtual ports.
a Click Edit Virtual Ports. The wizard displays a list of virtual ports.
b For each virtual port that you want to modify, select the preferred physical port in
the Preferred Physical Port column.
c When you are finished, click Apply Changes. The iSCSI tab appears.
6 Configure each port that is unused or used for IPC between controllers.
a Set the Purpose field to Unknown.
b Confirm that the Fault Domain field is set to <none>.
c (Optional) Type a descriptive name in the User Alias field.
2 Create the required fault domains. For a dual‐controller Storage Center, create a fault
domain for each pair of redundant iSCSI ports. For a single‐controller Storage Center,
create a single fault domain for all iSCSI ports.
a Click Edit Fault Domains. The wizard displays a list of the currently defined fault
domains.
b Click Create Fault Domain. A dialog box appears.
c In the Name field, type a name for the fault domain.
d From the Type drop‐down menu, select iSCSI.
e (Optional) In the Notes field, type a description of the fault domain.
f Click Continue. The dialog box displays a summary.
g Click Create Now to create the fault domain.
h Repeat steps b–g as needed to create additional fault domains.
i When you are finished creating fault domains, click Return. The iSCSI tab appears.
3 Configure each front‐end iSCSI port.
a Set the Purpose field to Front End Primary or Front End Reserved as appropriate.
b Set the Fault Domain field to the appropriate fault domain that you created.
c (Optional) Type a descriptive name in the User Alias field.
4 Confirm that unused ports are not configured.
a Set the Purpose field to Unknown.
b Confirm that the Fault Domain field is set to <none>.
c (Optional) Type a descriptive name in the User Alias field.
2 Create a fault domain for each iSCSI network.
a Click Edit Fault Domains. The wizard displays a list of the currently defined fault
domains.
b Click Create Fault Domain. A dialog box appears.
c In the Name field, type a name for the fault domain.
d From the Type drop‐down menu, select iSCSI.
e (Optional) In the Notes field, type a description of the fault domain.
f Click Continue. The dialog box displays a summary.
g Click Create Now to create the fault domain.
h Repeat steps b–g as needed to create additional fault domains.
i When you are finished creating fault domains, click Return. The iSCSI tab appears.
3 Configure each front‐end iSCSI port.
a Set the Purpose field to Front End.
b Set the Fault Domain field to the appropriate fault domain that you created.
c (Optional) Type a descriptive name in the User Alias field.
4 (Optional) Change the preferred physical port for one or more virtual ports.
a Click Edit Virtual Ports. The wizard displays a list of virtual ports.
b For each virtual port that you want to modify, select the preferred physical port in
the Preferred Physical Port column.
c When you are finished, click Apply Changes. The iSCSI tab appears.
5 Confirm that unused ports are not configured.
a Set the Purpose field to Unknown.
b Confirm that the Fault Domain field is set to <none>.
c (Optional) Type a descriptive name in the User Alias field.
2 Configure each back‐end SAS port.
a Set the Purpose field to Back End.
b Confirm that the Fault Domain field is set to <none>.
c (Optional) Type a descriptive name in the User Alias field.
3 Configure each port that is unused or used for IPC between controllers.
a Set the Purpose field to Unknown.
b Confirm that the Fault Domain field is set to <none>.
c (Optional) Type a descriptive name in the User Alias field.
Caution: Do not click Skip to bypass this page. Clicking Skip can result in
connection disruptions to Storage Center System Manager.
Import a Certificate
If an SSL certificate has already been generated, import the certificate.
Prerequisites
The public key file must be in x.509 format.
Steps
1 Click Import. A file browser appears.
2 Browse to the location of the public key (*.pem) file and select it.
3 Click Next.
4 Browse to the location of the private key file (*.pem).
5 Click Next. A Summary page appears that identifies the key files selected.
6 Click Save to import the certificates.
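The wizard expects the public key in x.509 PEM format. A cheap pre‐check that a file at least contains a PEM‐framed block can save a failed upload. The sketch below is an assumption‐level helper written for this guide, not a full x.509 validator.

```python
import re
from pathlib import Path

# A PEM block: BEGIN line, base64 body, matching END line
PEM_BLOCK = re.compile(
    r"-----BEGIN (CERTIFICATE|[A-Z ]*PRIVATE KEY)-----\r?\n"
    r"[A-Za-z0-9+/=\r\n]+"
    r"-----END \1-----"
)

def looks_like_pem(path: str) -> bool:
    """Return True if the file contains at least one PEM-framed block."""
    try:
        return bool(PEM_BLOCK.search(Path(path).read_text()))
    except (OSError, UnicodeDecodeError):
        return False
```

A file that fails this check is probably in DER (binary) format or is not a key file at all, and should be converted or replaced before importing.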
2 In the text field, enter all DNS host name(s), IPv4 address(es), and IPv6 address(es) for
this Storage Center, separated by commas. The wizard prepopulates the field with
information it knows, but it is likely that more site‐specific host names and addresses
need to be entered.
Caution: The host name(s) and address(es) must match this Storage Center or
you will not be able to reconnect to it.
3 Click Generate Now. A new certificate is generated and the session ends. To continue,
refresh the browser and log on to the Storage Center again.
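Because a mismatch locks you out until the certificate is regenerated, it is worth confirming that each name you plan to enter actually resolves to one of the Storage Center's addresses. A sketch using Python's standard socket module; the example host name and address in the comment are placeholders.

```python
import socket

def names_match_addresses(names, addresses) -> bool:
    """Return True if every DNS name resolves to at least one listed address."""
    expected = set(addresses)
    for name in names:
        try:
            resolved = {info[4][0] for info in socket.getaddrinfo(name, None)}
        except socket.gaierror:  # the name does not resolve at all
            return False
        if not resolved & expected:
            return False
    return True

# Example: names_match_addresses(["sc8000.example.com"], ["10.10.5.30"])
```

Run this from a management workstation before clicking Generate Now; a False result means DNS and the entered addresses disagree.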
Note: If the enclosure is deleted from System Manager, and then added back in, it
is assigned a new index number, requiring a label change.
1 Use the Storage Center System Manager to map each enclosure ID to a Service Tag.
a From the System Tree, expand the Enclosures node.
b Select an enclosure.
c On the General tab, locate and record the Index and Service Tag.
2 Create a label for each SC200/SC220 enclosure with the enclosure ID number.
3 Apply an ID label to the left‐front of each enclosure.
(Figure: example enclosure ID label, “ID 01”.)
Perform connectivity and failover tests to make sure that the deployment was successful,
then change administrative passwords, Phone Home, and check for updates.
Contents
Verify Connectivity and Failover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
Configure a Phone Home Proxy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
Phone Home . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
Check for Storage Center Updates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
Next Steps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
f Click Continue. The wizard displays a summary.
g Click Create Now. The wizard displays a list of optional actions.
h Click Close. The wizard closes.
6 Create a 25 GB test volume called TestVol1.
a In the System Tree, select Volumes.
b In the Create Server wizard, click Create Volume. The Create Volume wizard
appears.
c Set the volume size to 25 GB, then click Continue.
d Click Continue to apply the default Replay Profile.
e Type TestVol1 in the Name field, then click Continue. The wizard displays a
summary.
f Click Create Now. The wizard displays a list of optional actions.
g Click Close.
7 (Dual‐controller only) Repeat Step 6 to create a second test volume named TestVol2.
8 Map TestVol1 to the first controller.
a In the System Tree, expand the Volumes node.
b Select TestVol1.
c Click Map Volume to Server. The Map Volume to Server wizard appears.
d Select the server, then click Continue. The wizard displays a summary.
e Click Advanced. The wizard displays advanced mapping options.
f In the Restrict Mapping Paths area, select the Map to controller check box, then
select the controller from the drop‐down menu.
g Click Continue. The wizard displays a summary.
h Click Create Now. The mapping is created and the wizard closes.
9 (Dual‐controller only) Repeat Step 8 to map TestVol2 to the second controller.
10 On the server, partition and format the test volume(s).
d After the copy finishes, turn on the controller.
e Wait 3 minutes and verify that the controller has finished starting.
f Rebalance the ports.
3 Manually shut down the second controller while copying data to TestVol2 to verify that
IO is not interrupted by the failover event.
a Copy the Test folder to the TestVol2 volume.
b During the copy process, shut down the second controller (the controller through
which the volume is mapped).
c Verify that the copy process continues after the controller is shut down.
d After the copy finishes, turn on the controller.
e Wait 3 minutes and verify that the controller has finished starting.
f Rebalance the ports.
Test MPIO
If the network environment and servers are configured for MPIO, perform tests to make
sure that failed paths do not interrupt IO.
1 Create a Test folder on the server and copy at least 2 GB of data into it.
2 Make sure that the FC or iSCSI server is configured to use load balancing MPIO (round‐
robin).
3 Manually disconnect a path while copying data to TestVol1 to verify that MPIO is
functioning correctly.
a Copy the Test folder to the TestVol1 volume.
b During the copy process, disconnect one of the paths and verify that the copy
process continues.
c Reconnect the port.
4 Repeat Step 3 as necessary to test additional paths.
5 (Dual‐controller Storage Center) Restart the controller that contains the active path
while IO is being transferred.
6 If the Storage Center is not in a production environment, restart the switch that contains
the active path while IO is being transferred.
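A convenient way to confirm that a copy survived a path failure intact is to compare checksums of the source and destination folders after each test. The sketch below is a generic helper using the Python standard library, not part of the Storage Center tooling; the folder paths you pass are up to your environment.

```python
import hashlib
from pathlib import Path

def folder_digest(root: str) -> str:
    """Order-stable SHA-256 over every file's relative path and contents."""
    digest = hashlib.sha256()
    base = Path(root)
    for path in sorted(base.rglob("*")):
        if path.is_file():
            digest.update(path.relative_to(base).as_posix().encode())
            digest.update(path.read_bytes())
    return digest.hexdigest()

# After a failover or MPIO test, compare the source folder to the copy, e.g.:
# folder_digest("/data/Test") == folder_digest("/mnt/testvol1/Test")
```

Equal digests confirm the copy completed without corruption despite the failed path or controller restart.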
3 Enter a new password in the Enter Password and Reenter Password fields.
4 Click OK.
5 Click Next.
6 (Optional) Enter a new user name in the User Name field.
7 Enter a new password in the New Password and Confirm New Password fields.
8 Click Apply.
2 Select Use Phone Home Proxy Server and enter the following:
Proxy Server Address: Enter the IP address of the proxy server.
Port: Enter the TCP port number of the proxy server.
Proxy User Name: Enter the user name for the proxy server.
Proxy Password/Confirm Password: Enter the password for the proxy server.
3 Click OK.
Phone Home
Complete setup by sending Storage Center configuration information to Dell Technical
Support Services with Phone Home.
Phone Home sends a copy of a Storage Center configuration to Dell Technical Support
Services to enable them to support a Storage Center. The initial configuration is sent to Dell
Technical Support Services when Storage Center is installed.
Prerequisites
TCP ports 22 and 443 must be allowed outbound to the internet.
If the network requires hosts to use a proxy server to reach the internet, the Storage
Center must be configured to use a Phone Home proxy.
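The outbound‐port prerequisite can be checked from the management network before running Phone Home. A sketch using Python's standard socket module; the destination host in the comment is a placeholder, since the actual Phone Home endpoint is site‐documented.

```python
import socket

def port_reachable(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, unreachable, or timed out
        return False

# Phone Home requires outbound TCP 22 and 443, for example:
# for p in (22, 443):
#     print(p, port_reachable("phonehome.example.com", p))
```

If either port is unreachable, involve the firewall administrator before attempting Phone Home Now.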
Steps
1 From the Storage Management menu, select System→ Phone Home→ Phone Home.
The Phone Home wizard starts, listing any previous Phone Home events.
2 Click Phone Home Now. The Phone Home In Progress dialog box is displayed.
3 Click OK.
4 When the State column lists all items with Success, click Close.
Next Steps
After installation is complete, you might want to perform some basic tasks to configure
Storage Center for your environment. These tasks are configuration‐dependent so some
might not apply to your site.
See the Storage Center Administrator’s Guide for detailed configuration instructions,
including how to:
Manage Unmanaged Disks
Add Storage Center users, including configuring Lightweight Directory Access
Protocol (LDAP)
Configure password rules for local users
Create servers
Configure User Volume Defaults
Add Storage Center volumes
Add a login message
This chapter describes how to add or remove Dell Compellent enclosures from a Storage
Center.
Contents
Adding SC280 Enclosures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
Adding SC200/220 Enclosures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
Removing Enclosures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
Note: Enclosures must be running the current version of firmware before
managing the drives in Storage Center System Manager.
7 Check for available updates:
a From the Storage Management menu, select System→ Update→ Update Status.
b Click Check Now. As Storage Center checks for updates, status appears in the
Update Status wizard.
c If an update is available, download and install the update as described in Storage
Center Software Update Guide.
8 The Storage Center System Manager informs you that you have new, unassigned disks.
From the Storage Management menu, select Disk→ Manage Unassigned Disks to
move them to the managed disk folder and add the space to the disk pool.
9 Redistribute data across all drives, by selecting Storage Management→
Disk→ Rebalance RAID.
(Figure: back‐end cabling — Controllers A and B, ports 1–4, to Enclosures 1 and 2, ports A–C.)
Caution: Before adding an enclosure to an existing chain, make sure that your data
is backed up. For maximum protection, add enclosures during a service outage.
Note: Enclosures must be running the current version of firmware before managing
the drives in Storage Center System Manager.
1 Check for available updates:
a From the Storage Management menu, select System→ Update→ Update Status.
b Click Check Now. As Storage Center checks for updates, status appears in the
Update Status wizard.
c If an update is available, download and install the update as described in Storage
Center Software Update Guide.
2 The Storage Center System Manager informs you that you have new, unassigned disks.
From the Storage Management menu, select Disk→ Manage Unassigned Disks to
move them to the managed disk folder and add the space to the disk pool. For more
information, see the Storage Center System Manager Administrator’s Guide.
3 View the list of enclosures in System Manager and verify that the new enclosures and
disks appear in the System Tree.
4 In the Storage Center System Manager, select Storage Management→ System→
Setup→ Configure Local Ports. Make sure that:
Slot and slot ports are Up and the Purpose is set to back‐end.
Target Count equals the number of added drives, demonstrating that the controller
recognizes the new enclosures.
5 To redistribute data across all drives, from the Storage Management menu, select
Disk→ Rebalance RAID.
(Figure: disconnect the controllers' A‐side cables and move the cable to Enclosure 2, port C.)
(Figure: add a cable to join the enclosures on the A side.)
(Figure: disconnect the controllers' B‐side cables and move the cable to Enclosure 2, port C.)
(Figure: add a cable to join the enclosures on the B side.)
(Figure: a controller chained to Enclosures 1 through 4.)
Caution: Before adding an enclosure to an existing chain, make sure that your data
is backed up. For maximum protection, add enclosures during a service outage.
(Figure: disconnect the A side and connect the new enclosures to the A‐side chain with added cables.)
(Figure: the completed A‐side chain — Controllers A and B with Enclosures 1 through 4.)
8 Disconnect the B‐side cables from the controllers as shown in Figure 120:
Controller A: slot 6, port 3.
Controller B: slot 6, port 1.
IO continues through the A‐side cables.
9 Move the cable from Enclosure 2: bottom, port B→ Enclosure 4: bottom, port B.
10 Use new cables to connect the new enclosures to the existing enclosures (B side):
Enclosure 2: bottom, port B→ Enclosure 3: bottom, port A.
Enclosure 3: bottom, port B→ Enclosure 4: bottom, port A.
Enclosures 3 and 4 are now part of the B‐side chain and can be connected to Controllers
A and B.
(Figure: disconnect the B side and connect the new enclosures to the B‐side chain with added cables.)
11 Connect Enclosure 4: bottom, port B→ Controller A: slot 6, port 3.
12 Reconnect Enclosure 1: bottom, port A→ Controller B: slot 6, port 1.
(Figure: the completed chain — Controllers A and B with Enclosures 1 through 4.)
Add an Enclosure
Add a new enclosure while maintaining redundancy.
1 Power on the enclosures being added. When the enclosures spin up, make sure that the
front panel enclosure and power status LEDs show normal operation.
2 Remove the two IPC cables from the Controllers, ports 2 and 4.
3 Connect: Controller A, port 2→ new Enclosure 3: top, port A.
4 Connect Enclosure 3: top, port B→ Controller B, port 4.
5 In the Storage Center System Manager, select Storage Management→ System→
Setup→ Configure Local Ports.
6 Make sure that:
Slot and slot ports are Up and the Purpose is set to back‐end.
Target Count equals the number of added drives, demonstrating that the controller
recognizes the new enclosures.
(Figure: Controllers A and B cabled to Enclosures 1, 2, and 3.)
7 Connect Controller A: port 4→ new Enclosure 3: bottom, port B.
8 Connect Enclosure 3: bottom, port A→ Controller B: port 2.
9 In the Storage Center System Manager, select Storage Management→ System→
Setup→ Configure Local Ports.
10 Make sure that:
Slot and slot ports are Up and the Purpose is set to back‐end.
Target Count equals the number of added drives, demonstrating that the controller
recognizes the new enclosures.
(Figure: Controllers A and B cabled to Enclosures 1, 2, and 3.)
Removing Enclosures
The following procedure describes how to use Storage Center System Manager to move
data off the disks in the enclosure to be removed. For more detailed instructions, see the
Storage Center System Manager Administrator’s Guide.
Caution: Before you remove an enclosure, make sure that your data is backed up
and that all disks to be removed are empty of data.
Release Disks
Release disks to remove them from the pool in preparation for removing the enclosure.
1 In the Storage Center tree, expand the Disks node to view the disks in the enclosure.
2 Select all the disks in the enclosure you are removing.
3 From the shortcut menu, select Release Disk. The Release Disk wizard starts.
4 Note the warning and click Yes to continue. The disks are released immediately, and a dialog box opens asking whether you want to rebalance the RAID devices now, rebalance later, or skip rebalancing.
5 Click Yes. Wait for the rebalance operation to finish.
Enclosures with all drives in the unassigned disk folder are now safe to remove.
Caution: To disconnect the A‐side cabling without a system outage, make sure that
you first disconnect the A‐side cable from the controller. Disconnecting any other
cable first disrupts IO to the enclosure, resulting in a system outage.
Figure: Disconnecting the SC280 Enclosure from the A-side Chain
Figure: Reconnecting the A-side Chain After Moving the Cable
Caution: To disconnect the B‐side cabling without a system outage, make sure that
you first disconnect the B‐side cable from the controller. Disconnecting any other
cable first disrupts IO to the enclosure, resulting in a system outage.
Figure 126. Disconnecting the SC280 Enclosure from the B-side Chain
Figure: Reconnecting the SC280 Enclosure to the B-side Chain
Caution: To disconnect the A‐side cabling without a system outage, make sure that
you first disconnect the A‐side cable from the controller. Disconnecting any other
cable first disrupts IO to the enclosure, resulting in a system outage.
Figure: Disconnecting and Reconnecting the SC200/220 Enclosure on the A-side Chain
Caution: To disconnect the B‐side cabling without a system outage, make sure that
you first disconnect the B‐side cable from the controller. Disconnecting any other
cable first disrupts IO to the enclosure, resulting in a system outage.
Figure 130. Disconnecting the SC200/220 Enclosure from the B-side Chain
This appendix contains troubleshooting steps for common problems.
Contents
Troubleshooting the Serial Connection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
Troubleshooting Enclosures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
Troubleshooting Licenses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
Troubleshooting Updates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
Troubleshooting Enclosures
Issue: No Disks Found by Startup Wizard — No disks could be found attached to this Dell Compellent Storage Center.
Possible Reason: The software and/or firmware must be updated to use the attached enclosure.
Solution:
1 Click Quit.
2 Check for Storage Center Updates on page 172.
3 Complete the Startup Wizard on page 129.

Issue: Enclosure firmware update fails.
Possible Reason: Cabling is not redundant.
Solution: Check the back-end cabling and ensure that redundant connections are used. See SAS Redundancy on page 55 for more information.
Troubleshooting Licenses
Issue: Error: The license file is not valid (could be old license file). Contact Dell Technical Support Services for a new license file.
Possible Reason: Can happen with a dual-controller Storage Center if there is an attempt to apply the license file before the controllers are joined, or if the license file is applied to the controller with the wrong serial number.
Solution: Make sure that all the steps in Chapter 5, Set up the Storage Center Software, on page 119 are followed in order, then try again.
• Make sure to connect to the IP address of the controller with the lower serial number.
• Make sure the serial number in the name of the license file matches the controller.
Troubleshooting Updates
Issue: Cannot complete the Startup Wizard and cannot check for updates.
Possible Reason: Your network might employ a proxy server. Configure the Phone Home proxy server using the Command Line Interface (CLI).
Solution:
1 Establish a serial connection to the Storage Center.
2 Run the following commands:
mc values set UsePhHomeProxy true
mc values set PhHomeProxyString [proxy server IP address]
mc values set PhHomeProxyUserName [proxy server user name]
mc values set PhHomeProxyPassword [proxy server password]
3 Check for Storage Center Updates on page 172.
4 Complete the Startup Wizard on page 129.
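As a sketch only, a complete serial-console session for the proxy setup might look like the following. The commands are the ones documented above; the address, user name, and password shown are placeholder example values that must be replaced with your site's actual proxy settings.

```shell
# Run these commands from a serial console session on the Storage Center.
# 192.0.2.10, proxyuser, and proxypass are placeholder values only —
# substitute your own proxy server address and credentials.
mc values set UsePhHomeProxy true
mc values set PhHomeProxyString 192.0.2.10
mc values set PhHomeProxyUserName proxyuser
mc values set PhHomeProxyPassword proxypass
```

After setting these values, retry the update check from the Startup Wizard.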