XtremIO X2, XMS 6.0.1/6.0.2, XIOS 6.0/6.0.1/6.0.2, Site Preparation Guide, P/N 302-004-390, Rev 03
Topics include:
  Overview (page 2)
  Preparing the Site for Installing the Cluster within an "EMC Titan-D" Rack (page 3)
  Preparing the Site for Installing the Cluster into a Non-"EMC Titan-D" Rack (page 17)
  Hardware Requirements (page 28)
  Physical XMS Requirements (page 29)
  Virtual XMS Requirements (page 29)
  Temperature Requirements (page 31)
  Shipping and Storage Requirements (page 32)
  Security Requirements (page 34)
  ESRS and Connect-Home Requirements (page 35)
  Ports and Protocols (page 37)
  Provisioning WWNs and IQNs (page 38)
Note: This document was accurate at publication time. Go to EMC Online Support
(https://support.emc.com) to ensure that you are using the latest version of this
document.
This document provides specific information on XtremIO clusters that are managed by
XMS version 6.0.1. For XtremIO clusters that are managed by other versions, refer to the
appropriate XtremIO product documents which are provided for the corresponding
versions.
Overview
Note: Before the XtremIO cluster is installed, the customer should verify that all site
preparation requirements in this guide (including, but not limited to, adequate HVAC,
power, floor space, and security) are completely met, and that the specified operating
environment is maintained for optimal system operation.
An EMC® XtremIO Storage Array requires a properly equipped computer room with
controlled temperature and humidity, proper airflow and ventilation, proper power and
grounding, cluster cable routing facilities, and fire equipment.
To verify that the computer room meets the requirements for the XtremIO Storage Array,
confirm that:
Any customer concerns have been addressed through planning sessions between
EMC and the customer.
The site meets the requirements described in this document.
The site contains LAN connections for remote service operation.
Note: Splitting an existing XtremIO cluster into multiple clusters, or merging multiple
existing XtremIO clusters into a single cluster, is not supported, due to the very high
logistical overhead that such non-standard procedures require.
Rack dimensions and clearances:
  Width:             24 in. (61 cm)
  Height:            75 in. (190.5 cm)
  Depth:             44 in. (111.76 cm)
  Power cord length: 15 ft. (4.57 m)
  Front access:      48 in. (121.92 cm)
  Rear access:       39 in. (99.1 cm)
(The accompanying figure also calls out an overall dimension of 81.00 in. (2.06 m).)
Make sure to leave approximately 96 inches (2.43 meters) clearance at the rack’s rear in
order to unload the unit and roll it off the pallet, as shown in Figure 3.
Rack Stabilizing
If you intend to secure the optional anti-tip bracket to your site floor, prepare the location
for the mounting bolts. The anti-tip bracket provides an extra measure of anti-tip security.
One or two kits may be used. For racks with components that slide, we recommend that
you use two kits.
Figure 4 Anti-Tip Bracket (bracket dimensions shown in the figure, in inches: 12.00, 7.00, 0.44, 21.25, 17.25, 3.39, 1.56, 2.78)
Note: The values shown in Table 1 are for X2-R configurations and do not include the rack
weight. For X2-S configurations of two, three, and four X-Bricks, subtract 18.7 lb
(8.5 kg). For X2-T configurations (single X-Brick), subtract 18 lb (8.16 kg).
Install the rack in a raised or non-raised floor environment capable of supporting at
least 2,600 lb (1,180 kg) per rack. Your system may weigh less than this, but the extra
floor support margin accommodates equipment upgrades and/or reconfiguration.
In a raised floor environment:
EMC recommends 24 x 24 in. (60 x 60 cm) heavy-duty, concrete-filled steel floor
tiles.
Use only floor tiles and stringers rated to withstand:
• Concentrated loads of two casters or leveling feet, each weighing up to 1,000 lb
(454 kg).
• Minimum static ultimate load of 3,000 lb (1,361 kg).
• Rolling loads of 1,000 lb (454 kg). On floor tiles that do not meet the 1,000 lb
rolling load rating, use coverings such as plywood to protect the floor during system
roll.
Position adjacent racks with no more than two casters or leveling feet on a single floor
tile.
Note: Cutting tiles per specifications as shown in Figure 5 on page 7 ensures the
proper caster wheel placement.
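The tile-rating checks above reduce to simple arithmetic, sketched below. The even four-caster weight split and the 2,600 lb planning figure are simplifying assumptions for illustration; use actual per-caster loads where they are available.

```python
# Sketch of the floor-tile load checks described above. Assumes the
# rack weight splits evenly across four casters (a simplification).
RACK_WEIGHT_LB = 2600        # per-rack planning figure from this guide
CASTERS = 4
MAX_CASTERS_PER_TILE = 2     # positioning rule from this guide

CONCENTRATED_RATING_LB = 1000   # rating per caster or leveling foot
ROLLING_RATING_LB = 1000        # rolling-load rating per caster

per_caster_lb = RACK_WEIGHT_LB / CASTERS              # 650 lb per caster
tile_load_lb = per_caster_lb * MAX_CASTERS_PER_TILE   # 1300 lb worst-case tile

print(f"per caster: {per_caster_lb:.0f} lb "
      f"({'OK' if per_caster_lb <= CONCENTRATED_RATING_LB else 'OVER'})")
print(f"worst-case tile load (two casters): {tile_load_lb:.0f} lb")
```

With these figures each caster stays under the 1,000 lb concentrated and rolling ratings, which is why the two-casters-per-tile positioning rule matters.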
Cable Routing
EMC recommends installing the equipment into a room with a raised floor, for
accommodation of the cabling under the floor.
If you are installing the rack onto a raised floor, cut a cable-access hole in one tile as
shown in Figure 5.
Figure 5 Floor Tile Cutout (a 24 in. (61 cm) square tile with a cutout 8 in. (20.3 cm) wide by 6 in. (15.2 cm) deep, located 9 in. (22.9 cm) from the front of the tile)
Cutouts in 24 x 24 in. tiles must be no more than 8 inches (20.3 cm) wide by 6 inches
(15.2 cm) deep, centered on the tiles, 9 inches (22.9 cm) from the front and rear and
8 inches (20.3 cm) from the sides. Since cutouts weaken the tile, you can minimize
deflection by adding pedestal mounts adjacent to the cutout. The number and placement
of additional pedestal mounts relative to a cutout must be in accordance with the floor tile
manufacturer's recommendations.
When positioning the rack, take care to avoid moving a caster into a floor tile cutout.
Make sure that the combined weight of any other objects in the data center does not
compromise the structural integrity of the raised floor and/or the sub-floor (non-raised
floor).
EMC recommends that a certified data center design consultant inspect your site to
ensure that the floor is capable of supporting the system and surrounding weight. Note
that the actual rack weight depends on your specific product configuration. You can
calculate your total using the tools available at:
https://powercalculator.emc.com/Main.aspx
A dimensioned drawing (bottom, top, and right side views; some items removed for clarity) locates the casters and leveling feet: the caster swivel diameter is 1.750 in.; caster wheel centers fall 3.62 in. from the outer surface (detail A, right front corner), 17.102 in. minimum to 20.580 in. maximum from the outer surface of the rear door (detail B), and 30.870 in. minimum to 32.620 in. maximum from the front, based on the swivel position of the caster wheel; other reference dimensions shown are 18.830 in., 31.740 in., 40.390 in., 20.700 in., and 20.650 in.
The customer is ultimately responsible for ensuring that the data center floor on which the
EMC system is to be configured is capable of supporting the system weight, whether the
system is configured directly on the data center floor, or on a raised floor supported by the
data center floor. Failure to comply with these floor-loading requirements could result in
severe damage to the EMC system, the raised floor, subfloor, site floor and the
surrounding infrastructure. Notwithstanding anything to the contrary in any agreement
between EMC and customer, EMC fully disclaims any and all liability for any damage or
injury resulting from customer's failure to ensure that the raised floor, subfloor and/or site
floor are capable of supporting the system weight as specified in this guide. The customer
assumes all risk and liability associated with such failure.
Power Requirements
Depending on the rack configuration and the input AC power source (single-phase or
three-phase, listed in Table 2 and Table 4 on page 11), the rack requires two to eight
independent power sources. To determine your site requirements, use the published
technical specifications and device rating labels to establish the current draw of each
non-EMC device in the rack, and then calculate the total current draw for each rack. For
EMC products, refer to the “EMC Power Calculator” located at
http://powercalculator.emc.com/XtremIO.aspx and select the calculator for the XtremIO
hardware currently in use.
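The per-rack bookkeeping described above can be sketched as follows. The device names and amp ratings are illustrative placeholders, not actual XtremIO figures; take real values from the device rating labels and the EMC Power Calculator. The 80% continuous-load derating is a common electrical practice assumed here, not a figure from this guide.

```python
# Illustrative sketch of per-zone current-draw totals for one rack.
# Device names, amp ratings, and zone assignments are placeholders.
devices = [
    # (name, amps_drawn, power_zone)
    ("storage-controller-1", 3.2, "A"),
    ("storage-controller-2", 3.2, "B"),
    ("dae-1",                2.5, "A"),
    ("dae-2",                2.5, "B"),
]

zone_draw = {}
for name, amps, zone in devices:
    zone_draw[zone] = zone_draw.get(zone, 0.0) + amps

BREAKER_A = 30   # North America single-phase breaker rating (Table 2)
DERATE = 0.8     # assumed 80% continuous-load derating

for zone, amps in sorted(zone_draw.items()):
    ok = amps <= BREAKER_A * DERATE
    print(f"zone {zone}: {amps:.1f} A ({'OK' if ok else 'OVER'})")
```

Each redundant power zone must independently carry the full load, so the check is applied per zone rather than per rack.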
Single-phase power requirements:

                         North America                    International / Australia
  Input nominal voltage  200-240 V AC +/-10% L-L nom      220-240 V AC +/-10% L-L nom
  Frequency              50-60 Hz                         50-60 Hz
  Circuit breakers       30 A                             32 A
  Plug / receptacle      NEMA L6-30P / NEMA L6-30R        (per local requirements)

Three-phase power requirements:

                         North America (Delta)            International (Wye)
  Input nominal voltage  200-240 V AC +/-10% L-L nom      220-240 V AC +/-10% L-L nom
  Frequency              50-60 Hz                         50-60 Hz
  Circuit breakers       50 A                             32 A

Note: The interface connector options for the Delta and Wye three-phase
PDUs are listed in Table 5.
Table 5 (excerpt of connector options):
  North America: APP connector or FLY lead (the CE can add the appropriate plug based on the customer receptacle)
  International, 3-phase Wye: part 038-004-778, black; mates with a Sursum K52S30A or Hubbell C530C6S (black)
Power Consumption
Table 8 and Table 9 show the cluster power consumption and heat dissipation.
Calculations in these tables are intended to provide typical and maximum power and heat
dissipations. Ensure that the installation site meets these typical and worst-case
requirements.
Note: For specific environmental conditions, refer to the “EMC Power Calculator” located
at http://powercalculator.emc.com/XtremIO.aspx and select the calculator for the
XtremIO hardware currently in use.
Note: The figures refer to cluster configurations not including a physical XMS. The figures
for an XMS are detailed separately.
Note: A cluster's DAE can have varying SSD configurations. The figures refer to cluster
configurations with fully-populated DAEs.
PDU Configuration
Factory-assembled racks are shipped in a “four PDU” configuration.
Table 9 describes the number of line drops that are required per zone according to the
amount of X-Bricks in the cluster.
Single X-Brick 1 1 1
Two X-Bricks
Three X-Bricks 2 1 1
Four X-Bricks
Note: The PDU configurations do not include a power On/Off switch. Make sure the (four)
circuit breaker switches on each PDU are UP, in the OFF (0) position until you are ready to
supply AC power. Make sure that the power is OFF before disconnecting power from a PDU.
The figure shows PDU 100-563-477 (Titan-D, single phase). Outlet groups serve, from top
to bottom, X-Bricks 7 and 8, X-Bricks 5 and 6, X-Bricks 3 and 4, and X-Bricks 1 and 2. An
XMS (if present) is connected to the console outlet at the rear of the PDU.
Each PDU provides circuit breakers CB1 through CB10, each rated 16 A. A second line cord supplies X-Bricks 5-8.
The figure also gives overall dimensions of 40.75 in. (103.5 cm), 33 in. (83.8 cm), and 42.5 in. (108 cm).
Rack Requirements
Non-"EMC Titan-D" racks must meet the following requirements, depending on PDU
orientation:
Figure 11 shows a non-"EMC Titan-D" rack with rear-facing PDUs.
The figure is a top view of the rack: dimensions a through j are measured between the front door, front NEMA rails, rack posts, chassis, rear NEMA rails (19 in. NEMA rail spacing), rear-facing PDUs, and rear door.
Figure 11 Rear of Rack with Rear-Facing PDUs Service Clearance (Top View)
Non-"EMC Titan-D" racks with rear-facing PDUs must meet the requirements shown in
Table 10.
a  Distance between the front surface of the rack and the front NEMA rail.
c  Distance from the rear surface of the chassis to the rear surface of the rack; minimum = 2.5 in. (63.5 mm).
d  If a front door exists, distance between the inner surface of the front door and the front NEMA rail; minimum = 2.5 in. (63.5 mm).
e  Distance between the inside surface of the rear post and the rear vertical edge of the chassis and rails; a minimum of 2.5 in. (63.5 mm) is recommended.
   Note: If there is no rack post, the minimum recommended distance is measured to the inside surface of the rack.
g  Minimum = distance between NEMA rails, 19 in. (482.6 mm), plus 2e; minimum = 24 in. (609.6 mm).
h  Minimum = distance between NEMA rails + 2.5 in. + "f" = 19 in. + 2.5 in. + "f" = 21.5 in. + "f".
If all of the requirements (described in Table 10) for a non-"EMC Titan-D" rack are not met,
and the customer wishes to continue with a non-compliant rack, an RPQ process must be
initiated.
The figure is a top view of the rack with center-facing PDUs: dimensions a through j are measured between the front door, front NEMA rails, rack posts, chassis, rear NEMA rails (19 in. NEMA rail spacing), PDUs, and rear door.
Figure 12 Rear of Rack with Center-Facing PDUs Service Clearance (Top View)
Non-"EMC Titan-D" racks with center-facing PDUs must meet the requirements shown in
Table 11.
a  Distance between the front surface of the rack and the front NEMA rail.
c  Distance from the rear surface of the chassis to the rear surface of the rack; minimum = 2.5 in. (63.5 mm).
d  If a front door exists, distance between the inner surface of the front door and the front NEMA rail; minimum = 2.5 in. (63.5 mm).
e  Distance between the inside surface of the rear post and the rear vertical edge of the chassis and rails; a minimum of 2.5 in. (63.5 mm) is recommended.
   Note: If there is no rack post, the minimum recommended distance is measured to the inside surface of the rack.
g  PDU depth + 3 in. (76.2 mm) of AC cable bend clearance. Racks equipped with center-facing PDUs that fail to meet this requirement are permitted, as long as the NEMA rail/product area is not compromised; in that case, some outlets may NOT be accessible. If those outlets are required, an RPQ request for in-rack PDUs should be submitted. In all cases, access to the DAEs' PSUs must remain unblocked.
j  Minimum rack depth = "i" + "c" = 36.5 in. (927.1 mm) + 2.5 in. (63.5 mm) = 39 in. (990.6 mm).
Note: A rear door need not be present. Regardless, all hardware must be within the
boundaries of the rack.
If all of the requirements (described in Table 11) for a non-"EMC Titan-D" rack are not met,
and the customer wishes to continue with a non-compliant rack, an RPQ process must be
initiated.
Essential Requirements:
The rack space requirements of the different XtremIO Storage Array configurations are
as follows:
• A single X-Brick cluster requires 5U of contiguous rack space.
• A two X-Brick cluster requires 11U of contiguous rack space.
• A three X-Brick cluster requires 16U of contiguous rack space.
• A four X-Brick cluster requires 20U of contiguous rack space.
Note: An optional physical XMS may occupy the upper-most U in the rack.
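The rack-space figures above can be captured in a small lookup, sketched below. Treating the optional physical XMS as one additional U on top of the cluster's contiguous space is an assumption for planning purposes; the guide states only that it may occupy the upper-most U in the rack.

```python
# Contiguous rack-space figures (in U) from this guide, keyed by the
# number of X-Bricks in the cluster.
RACK_SPACE_U = {1: 5, 2: 11, 3: 16, 4: 20}
XMS_U = 1  # optional physical XMS in the upper-most U (assumption: additive)

def required_u(x_bricks, physical_xms=False):
    """Rack units to reserve for a cluster of the given size."""
    return RACK_SPACE_U[x_bricks] + (XMS_U if physical_xms else 0)

print(required_u(4, physical_xms=True))  # 21
```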
AC power:
• 200-240 VAC +/- 10% single phase or three-phase 50-60 Hz, power connection.
Table 4 on page 11 shows the three-phase power connection requirements.
• Redundant power zones, one on each side of the rack. Each power zone should
have capacity for the maximum power load (refer to Table 14 on page 27).
• The provided power cables suit AC outlets located within 24 inches of each
component receptacle (this does not include the X2-R InfiniBand Switches, whose
AC inlets are located in the front panel and are therefore provided with long power
cables).
Note: If you are using longer power cables, make sure they are properly rated for the
load and meet the applicable electrical standards.
Cluster Weight
Table 12 shows the approximate weights of XtremIO X2-R, X2-S and X2-T configurations,
when fully populated.
  Configuration            X2-R Approx. Weight    X2-S Approx. Weight    X2-T Approx. Weight
  Single X-Brick cluster   190 lb (86 kg)         190 lb (86 kg)         176 lb (79.8 kg)
  Two X-Brick cluster      468.1 lb (212.7 kg)    431.4 lb (195.7 kg)    N/A
  Three X-Brick cluster    664.8 lb (301.9 kg)    629 lb (284.9 kg)      N/A
  Four X-Brick cluster     861.2 lb (391 kg)      824.5 lb (374 kg)      N/A
Note: The values shown in Table 12 do not include the rack weight. For configurations that
include a physical XMS, add 15 kg.
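For floor-loading planning, the table and its note combine into a simple lookup, sketched below. The weights are the approximate values from Table 12 (rack excluded), and the 15 kg physical-XMS adjustment comes from the note above; everything else is ordinary unit conversion.

```python
# Sketch of the total-weight lookup from Table 12 plus the
# physical-XMS note. Weights are approximate and exclude the rack.
APPROX_WEIGHT_LB = {
    # x_bricks: {model: pounds or None if not offered}
    1: {"X2-R": 190.0, "X2-S": 190.0, "X2-T": 176.0},
    2: {"X2-R": 468.1, "X2-S": 431.4, "X2-T": None},
    3: {"X2-R": 664.8, "X2-S": 629.0, "X2-T": None},
    4: {"X2-R": 861.2, "X2-S": 824.5, "X2-T": None},
}
XMS_KG = 15.0            # physical XMS, per the note above
KG_PER_LB = 0.45359237

def total_weight_kg(x_bricks, model, physical_xms=False):
    """Approximate cluster weight in kg, optionally including a physical XMS."""
    lb = APPROX_WEIGHT_LB[x_bricks][model]
    if lb is None:
        raise ValueError(f"{model} is not offered as a {x_bricks} X-Brick cluster")
    return lb * KG_PER_LB + (XMS_KG if physical_xms else 0.0)

print(round(total_weight_kg(4, "X2-R", physical_xms=True), 1))
```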
Component Dimensions
Table 13 shows the dimensions, weight and rack space of each major component in the
XtremIO Storage Array.
  Disk Array Enclosure (DAE): Height 88.9 mm (3.5 in.), Width 438 mm (17.25 in.),
  Depth 927.1 mm (36.5 in.); 2U; 97.0 lb (44 kg) (1)
Components Stacking
The following figures display the components stacking order according to the purchased
configuration.
A single X-Brick cluster requires 5U of contiguous rack space.
Figure 13 describes the stacking of a single X-Brick cluster.
Single X-Brick cluster stack:
  XMS (Optional)
  X1-DAE
  Place Holder

Two X-Brick cluster stack:
  XMS (Optional)
  IBSW-2
  IBSW-1
  X2-DAE
  X2-SC2
  X2-SC1
  Cable Management Duct
  X1-SC2
  X1-SC1
  X1-DAE
  Place Holder

Three X-Brick cluster stack:
  XMS (Optional)
  X3-DAE
  IBSW-2
  IBSW-1
  X2-DAE
  X2-SC2
  X2-SC1
  Cable Management Duct
  X1-SC2
  X1-SC1
  X1-DAE
  Place Holder

Four X-Brick cluster stack:
  XMS (Optional)
  X4-DAE
  X4-SC2
  X4-SC1
  Cable Management Duct
  X3-SC2
  X3-SC1
  X3-DAE
  IBSW-2
  IBSW-1
  X2-DAE
  X2-SC2
  X2-SC1
  Cable Management Duct
  X1-SC2
  X1-SC1
  X1-DAE
  Place Holder
Power Requirements
Table 14 details the power requirements for each configuration, as well as the number of
IEC 320-C13 outlets required on each power zone.
Note: For specific environmental conditions, refer to the “EMC Power Calculator” located
at http://powercalculator.emc.com/XtremIO.aspx and select the calculator for the
XtremIO hardware currently in use.
Note: The figures refer to cluster configurations, not including a physical XMS. The
requirements for an XMS are detailed separately.
XMS 200 1 1
Table 15 details the power consumption and socket related data for the cluster
components.
Hardware Requirements
Table 16 shows the hardware requirements for each XtremIO Storage Array configuration.
Note: For XtremIO virtual XMS pre-deployment requirements, refer to “Virtual XMS
Requirements”.
Note: For XtremIO physical XMS pre-deployment requirements, refer to “Physical XMS
Requirements”.
Table 17 Virtual XMS VM Configurations per the Expected Total Number of Volumes
Note: It is possible to initially configure the virtual XMS per the Regular configuration,
and at a later stage, adjust the virtual XMS to the Expanded configuration. For details
on expanding the virtual XMS configuration, refer to the XtremIO Storage Array User
Guide.
Note: Shared storage used in this case should not originate from the XtremIO cluster
managed by the Virtual XMS.
Network connectivity: The Virtual XMS should be located in the same Local Area
Network (LAN) as the XtremIO cluster.
Host: The virtual XMS VM should be deployed on a single host (or on more than one
host, if virtual XMS high availability is required), running ESX 5.x or 6.x.
Note: XtremIO Storage Array supports both ESX and ESXi. For simplification, all
references to ESX server/host apply to both ESX and ESXi, unless stated otherwise.
The host should be on VMware vSphere HCL approved hardware and meet the
following configuration requirements:
• Single-socket, dual-core CPU
• One 1 GbE NIC
• Redundant power supplies
Other Specifications
The OVA package from which the virtual XMS is deployed contains VMware tools.
Therefore, no VMware tools upgrade is required following virtual XMS deployment.
The deployed virtual XMS Shares memory resource allocation is set to High. Therefore,
the virtual XMS is given high priority on memory allocation when required.
Note: If a non-standard Shares memory resource allocation is used, adjust the virtual
XMS Shares memory resource allocation post-deployment.
For information pertaining to managing the virtual XMS, refer to the XtremIO Storage Array
User Guide.
Temperature Requirements
Table 18 shows the XtremIO Storage Array environmental operating range requirements.
Note: The environmental data shown in Table 18 complies with ASHRAE A3 standards.
Systems mounted on an approved EMC package have completed transportation testing
to withstand shock and vibration in the vertical direction only. Table 21 shows the
respective maximum shock and vibration values, which must not be exceeded.
Security Requirements
This section describes the security requirements in the data center.
Firewall Settings
Set the firewall rules prior to installation:
If the XMS is on a different subnet than the X-Bricks, open TCP, UDP and ICMP firewall
ports in both directions. Refer to Table 22 on page 37.
Open TCP ports between the XMS and the managing desktop running the XMS GUI.
Refer to Table 22 on page 37.
Open the services that you want to enable from the XMS to the relevant target systems.
Refer to Table 22 on page 37.
EMC® Secure Remote Support (ESRS) provides a secure, IP-based, distributed remote
support solution that enables command, control and visibility of remote support access.
The ESRS configuration options with XtremIO are the ESRS VE gateway, the legacy-type
ESRS gateway, and the ESRS IP Client. These three configuration options provide
Connect-In and Connect-Home functionality. The ESRS VE configuration option also
enables receiving cluster-related Advisory Notices, such as notifications of
newly-released XtremIO versions, documentation and support messages.
As these ESRS configuration options provide the best level of remote support to the
cluster, it is highly recommended that you select one of them when completing the XMS
installation.
If the customer declines to use ESRS as a connectivity solution, the XMS can be configured
for connect-home only. The connect-home only options with XtremIO are Email and FTPS.
These two options do not provide connect-in functionality to the XMS, but ensure that
EMC receives regular configuration reports and product alert information from the
customer’s XtremIO environment.
Note: ESRS IP Client can be used only in a single-cluster configuration. Consider using
ESRS VE, which provides a range of additional features, including multi-cluster support.
The following are preconditions for deploying ESRS on XtremIO, using the IP Client
configuration:
The customer should deploy ESRS integration as part of the XtremIO deployment,
using an IP Client agent that is configured on the XMS.
The customer should open an HTTPS connection between the XMS and the EMC ESRS
server at esrs3.emc.com. Refer to Table 22 on page 37.
The XtremIO cluster should be in the EMC Install Base (i.e., assigned a formal PSNT by
EMC Manufacturing).
When these preconditions are fully met, it is possible to proceed and deploy ESRS
integration with the IP Client configuration as part of the XtremIO installation.
TCP 25 (SMTP), XMS -> SYR SMTP Server: used when the Email connect-home only configuration is used.
TCP 443 & 9443 (HTTPS), XMS <-> ESRS GW Server (ESRS VE or legacy-based): to/from the ESRS Gateway (bi-directional connectivity required between the XMS and the ESRS GW).
TCP 443 & 8443 (HTTPS), XMS <-> ESRS (esrs3.emc.com for IP Client): used when the ESRS IP Client configuration is used (bi-directional connectivity required between the XMS and ESRS).
TCP 989 & 990, 28000-30000 (FTPS), XMS -> SYR FTPS Server (corpusfep3.emc.com for FTPS): used when the FTPS connect-home only configuration is used.
TCP 11000-11031 (XMLRPC), XMS -> XtremIO Storage Controllers: used for cluster expansion and FRU procedures.
TCP 22 (SSH), MGMT Desktop -> XMS: allows XMS shell access.
TCP 22000-22031 (SSH), XMS -> XtremIO Storage Controllers: used for cluster expansion and FRU procedures.
TCP 443 (HTTPS), MGMT Desktop -> XMS: used for XMCLI and the RESTful API.
TCP 3260 (iSCSI), Hosts -> Storage Controllers: the iSCSI TCP port can be altered if necessary.
TCP 23000-23031 (IPMI), XMS -> Storage Controllers: used for cluster expansion and FRU procedures.
TCP 443 (HTTPS), XtremIO Storage Controller -> XMS: used for service procedures with Technician Advisor.
TCP 22 (SSH)
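As a pre-installation sanity check, the reachability of a few of these TCP ports can be probed with a short script. The host names below are placeholders for your environment (only esrs3.emc.com appears in this guide); a successful connection shows only that the path and port are open, not that the service behind it is correctly configured.

```python
import socket

# Hypothetical subset of the TCP ports listed above; replace the hosts
# with the actual XMS and target addresses for your site.
REQUIRED = [
    ("xms.example.local", 443),   # XMCLI / RESTful API (HTTPS)
    ("xms.example.local", 22),    # XMS shell access (SSH)
    ("esrs3.emc.com", 443),       # ESRS IP Client (HTTPS)
]

def check_port(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host, port in REQUIRED:
    state = "open" if check_port(host, port) else "blocked/unreachable"
    print(f"{host}:{port} -> {state}")
```

Run the check from the management desktop and from the XMS side separately, since several rows in the table require connectivity in a specific direction.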
EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without
notice.
THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS IS.” EMC CORPORATION MAKES NO
REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION,
AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR
PURPOSE.
Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.
For the most up-to-date regulatory document for your product line, go to the Technical Documentation and Advisories section on EMC
Powerlink.
For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com.
All other trademarks used herein are the property of their respective owners.