HARDWARE CONCEPTS
PARTICIPANT GUIDE
Table of Contents
Installation Engagement............................................................................................ 7
Module Objectives ............................................................................................................... 8
Customer Engagement Responsibility ................................................................................. 9
Physical Tools Required .................................................................................................... 10
Installation and Implementation Phases ............................................................................. 12
SolVe ................................................................................................................................. 13
Safety Precautions and Considerations ............................................................................. 16
Onsite Do's and Don'ts....................................................................................................... 18
Appendix ................................................................................................. 79
Glossary .................................................................................................. 89
The graphic shows the PowerScale Solutions Expert certification track. You can leverage the Dell Technologies Proven Professional program to realize your full potential. A combination of technology-focused and role-based training and exams covers concepts and principles as well as the full range of Dell Technologies' hardware, software, and solutions. You can accelerate your career and your organization's capabilities.
PowerScale Solutions
(C) - Classroom
Prerequisite Skills
To understand the content and successfully complete this course, a student must
have a suitable knowledge base or skill set. The student must have an
understanding of:
• Current PowerScale hardware portfolio and the OneFS operating system
• PowerScale Concepts
• Isilon InfiniBand to Ethernet Backend Conversion
Course Objectives
Installation Engagement
Module Objectives
There are five steps, or phases, for acquiring a PowerScale cluster. Each phase has a separate team that engages with the customer. In the design phase, a Solution Architect (SA) works with the customer to determine their specific needs and documents what the solution looks like. After the product purchase, shipment, and delivery to the customer site, the install and implementation phase of a PowerScale cluster begins. The result of the SA engagement is the PowerScale Pre-Engagement Questionnaire (PEQ), which the Customer Engineer (CE) and Implementation Specialist (IS) use to install and configure the cluster. Before the install phase, all design decisions have been made.
1: The cables that are required are a single CAT5/CAT6 network patch cord, to directly connect your laptop to the node, and a USB-to-serial adapter, preferably one that uses the Prolific 2303 chipset.
4: Basic hand tools: screwdrivers (flat-head and Phillips), wire cutters, anti-static
wrist strap.
There are three distinct steps in the install and implementation phase: Install, Build,
and Implement.
1: During the install, the components are unpacked and racked, and the switches are rack mounted. Nodes are connected to the back-end switches, power is added, and front-end network cables are connected between the cluster and the customer network. The Customer Engineer (CE) performs these tasks.
2: Depending on the role, the CE may also perform the cluster build. The cluster build is achieved when the system is powered on, the PowerScale Configuration Wizard has been launched, and the information has been added.
3: In some regions, running the Configuration Wizard may be the sole responsibility
of the IS. After the cluster is built, the IS configures the features of OneFS as
written in the PEQ.
SolVe
Before you arrive at a client site, remember to read the call notes and follow the
processes that are detailed in them. Check if there are any special instructions from
PowerScale Technical Support that you must follow.
1: Download the SolVe Desktop application on the system. Go to the Tools and Sites section, choose SolVe, and select SolVe Desktop Executable. Depending on the browser used, you may be presented with security dialog boxes. Take the needed actions to launch the executable.
2: Click through the Setup wizard and then select Install. Clicking Finish launches the SolVe Desktop. SolVe must be authorized for use. Select OK. There are a few general items1 to note.
1 Notice the dialog in the lower left showing the version. This area also shows the progress when upgrading and downloading content. Also notice the service topics in the lower right. Once connected, many of the articles that are shown may not be
3: From the menu, select Authorize and download the list of available products. Adhere to the instructions shown: leave SolVe open, enter credentials (this uses SSO), and open the keychain file. Select OK, and then go to Downloads and open the keychain file.
4: Next are the Release Notes. Review and then close this window. Return to SolVe. Notice the dialog2 in the lower left indicating that the keychain is loaded, which means you are authorized and content is updated. Now, scroll down and click PowerScale to gather the PowerScale content.
5: Click OK. Again, note the progress in the lower left. Once the download is
complete, you see that the PowerScale image has changed. Tools that are
downloaded appear in the upper left corner of the screen without the green arrow
present.
6: Now you can click PowerScale and view the available procedures. If updates are
available for download, you see an information icon, click the icon, and approve the
updated content download.
2The icons with a green arrow indicate that the user must click the icon in order to
download the tool.
3The view is dependent upon Partner Type. A service partner sees what an
employee sees, a direct sales partner sees what a customer sees, and an
ASP/ASN partner sees products depending upon credentials.
When working with PowerScale equipment, it is critical to ensure you adhere to the
following precautions.
1: The AC supply circuit for PowerScale nodes must supply the total current that is
specified on the label of the node. All AC power supply connections must be
properly grounded. Connections that are not directly connected to the branch
circuit, such as nodes that are connected to a power strip, must also be properly
grounded. Do not overload the branch circuit of the AC supply that provides power
to the rack holding PowerScale nodes. The total rack load should not exceed 80%
of the branch circuit rating. For high availability, the left and right sides of any rack
must receive power from separate branch feed circuits. To help protect the system
from sudden increases or decreases in electrical power, use a surge suppressor,
line conditioner, or uninterruptible power supply (UPS).
2: To avoid personal injury or damage to the hardware, always use two people to
lift or move a node or chassis. A Gen 6 chassis can weigh more than 200 lbs. It is
recommended to use a lift to install the components into the rack. If a lift is not
available, you must remove all drive sleds and compute modules from the chassis
before lifting. Even when lifting an empty chassis, never attempt to lift and install
with fewer than two people.
3: Electrostatic Discharge
4: If you install PowerScale nodes in a rack that is not bolted to the floor, use both
front and side stabilizers. Installing PowerScale nodes in an unbolted rack without
these stabilizers could cause the rack to tip over, potentially resulting in bodily
injury. Use only approved replacement parts and equipment.
6: You can install racks in raised or nonraised floor data centers capable of supporting the system. It is your responsibility to ensure that the data center floor can support the weight of the system. A fully populated rack of A2000 chassis
weighs about 3,500 lbs (1,590 kg). If the floor is rated at less than 3,500 lbs, then
additional care and planning must be taken. Some data center floors have different
static load vs. dynamic (rolling) load specifications, and sectional weight and load
point limits. This becomes important while moving preracked solutions around the
data center.
4Failure to heed these warnings may also void the product warranty. Only trained
and qualified personnel should install or replace equipment. Select the button
options for specific information. Always refer to the current Site Preparation and
Planning Guide for proper procedures and environmental information.
When onsite, remember to represent Dell EMC and yourself in the best possible
light. Do not change the PEQ without the approval of the design team. Any
approved changes should be meticulously tracked and any appropriate change
control processes should be followed. Remember to bring your documentation and
copies to provide to the customer.
Tip: To make an Install Base entry, use the IB Status Change page
link.
Module Objectives
5The Gen 6 platform reduces the data center rack footprints with support for four
nodes in a single 4U chassis. It enables enterprises to take on new and more
demanding unstructured data applications. The Gen 6 can store, manage, and
protect massively large datasets with ease. With the Gen 6, enterprises can gain
new levels of efficiency and achieve faster business outcomes.
6 The ideal use cases for Gen 6.5 (F200 and F600) are remote office/back office, factory floors, IoT, and retail. Gen 6.5 also targets smaller companies in the core verticals, and partner solutions, including OEM. The key advantages are low entry price points and the flexibility to add nodes individually, as opposed to the chassis/two-node minimum for Gen 6.
The design goal for the PowerScale nodes is to keep the simplicity of NAS, provide the agility of the cloud, and deliver the cost of commodity hardware.
The Gen 6x family has different offerings that are based on the need for
performance and capacity. As Gen 6 is a modular architecture, you can scale out
compute and capacity separately. OneFS powers all the nodes.
Gen 6
• F-Series
• H-Series
• A-Series
Gen 6.5
Gen 6.5 requires a minimum of three nodes to form a cluster. You can add single
nodes to the cluster. The F600 and F200 are a 1U form factor and based on the
R640 architecture.
• F6007
• F2008
7 Mid-level all-flash array: 1U PE server with 10 (8 usable) x 2.5" drive bays, enterprise NVMe SSDs (RI, 1DWPD), and data reduction standard. Front-end networking options of 10/25 GbE or 40/100 GbE, with a 100 GbE back end. Also called Cobalt nodes.
8 Entry-level all-flash array: 1U PE server with 4 x 3.5" drive bays (with 2.5" drive trays), enterprise SAS SSDs (RI, 1DWPD), and data reduction standard. 10/25 GbE front-end and back-end networking. Also called Sonic nodes.
Gen 6 requires a minimum of four nodes to form a cluster. You must add nodes to
the cluster in pairs.
The chassis holds four compute nodes and 20 drive sled slots.
Both compute modules in a node pair power on immediately when one of the
nodes is connected to a power source.
Gen 6 chassis
1: The compute module bays of two nodes make up one node pair. Scaling out a cluster with Gen 6 nodes is done by adding more node pairs.
2: Each Gen 6 node provides two ports for front-end connectivity. The connectivity
options for clients and applications are 10 GbE, 25 GbE, and 40 GbE.
3: Each node can have 1 or 2 SSDs that are used as L3 cache, global namespace
acceleration (GNA), or other SSD strategies.
4: Each Gen 6 node provides two ports for back-end connectivity. A Gen 6 node
supports 10 GbE, 40 GbE, and InfiniBand.
5: Power supply unit - Peer node redundancy: When a compute module power
supply failure takes place, the power supply from the peer node temporarily
provides power to both nodes.
6: Each node has five drive sleds. Depending on the length of the chassis and type
of the drive, each node can handle up to 30 drives or as few as 15.
8: The sled can be either a short sled or a long sled.
9: The chassis comes in two different depths, the normal depth is about 37 inches
and the deep chassis is about 40 inches.
10: Large journals offer flexibility in determining when data should be moved to the
disk. Each node has a dedicated M.2 vault drive for the journal. A node mirrors its journal to its peer node. The node writes the journal contents to the vault when
a power loss occurs. A backup battery helps maintain power while data is stored in
the vault.
Gen 6.5 requires a minimum of three nodes to form a cluster. You can add single
nodes to the cluster. The F600 and F200 are a 1U form factor and based on the
R640 architecture.
1: Scaling out an F200 or an F600 node pool only requires adding one node.
3: Each F200 and F600 node provides two ports for backend connectivity. PCIe slot 1 is used.
4: Redundant power supply units - When a power supply fails, the secondary
power supply in the node provides power. Power is supplied to the system equally
from both PSUs when the Hot Spare feature is disabled. Hot Spare is configured
using the iDRAC settings.
5: Disks in a node are all the same type. Each F200 node has four SAS SSDs.
6: The nodes come in two different 1U models, the F200 and F600. You need
nodes of the same type to form a cluster.
7: The F200 front-end connectivity uses the rack network daughter card (rNDC).
Important: The F600 nodes have a 4-port 1 GbE NIC in the rNDC slot.
OneFS does not support this NIC on the F600.
Gen 6 Chassis
All Gen 6 chassis come with the front panel and the front panel display module.
The front panel covers the drive sleds while allowing access to the display.
Movie:
The web version of this content contains a movie.
Script: This demonstration takes a tour of the Gen 6 front panel display, drive
sleds, and an outside look at the node’s compute modules. We’ll focus on
identifying components and indicator function.
We’ll start the tour on the front panel display. This allows various administrative
tasks and provides alerts. There are 5 navigation buttons that let the administrator
select each node to administer. There are 4 node status indicators. If a node’s
status light indicator is yellow, it indicates a fault with the corresponding node. The
product badges indicate the types of nodes installed in the chassis. Only two
badges are necessary because nodes can only be installed in matched adjacent
node pairs. The front panel display is hinged to allow access to the drive sleds it
covers and contains LEDs to help the administrator see the status of each node.
Sleds
Now, take the front bezel off the chassis and you will see the drive sleds for the
nodes. The Gen 6 chassis has 20 total drive sled slots that can be individually
serviced, but only one sled per node can be safely removed at a time. The graphic
shows that each node is paired with 5 drive sleds. The status lights on the face of
the sled indicate whether the sled is currently in service, and whether the sled
contains a failing drive. The service request button informs the node that the sled
needs to be removed, allowing the node to prepare it for removal by moving key
boot information away from drives in that sled. This temporarily suspends the
drives in the sled from the cluster file system, and then spins them down. This is
done to maximize survivability in the event of further failures and protect the cluster
file system from the effect of having several drives temporarily go missing. The do-
not-remove light blinks while the sled is being prepared for removal, and then turns
off when it is ready. We’ll see this here. The sleds come in different types. First,
when configured for nodes that support 3.5" drives, there are 3 drives per sled, as
shown here, equaling 15 drives per node, making 60 drives per chassis. The
second type is a longer sled that holds four 3.5” drives. This is used in the deep
archive, deep rack chassis for A2000 nodes. The long sleds have 20 drives per
node, for up to 80 3.5" drives per chassis. In the 3.5" drive sleds, the yellow LED
drive fault lights are on the paddle cards attached to the drives, and they are also
visible through the cover of the drive sled as indicated here. The long sled has 4
LED viewing locations. The third type of sled applies to nodes supporting 2.5"
drives. The 2.5” drive sleds can have 3 or 6 drives per sled (as shown), 15 or 30
drives per node, making 60 or 120 drives per fully populated chassis. Internally to
the 2.5" sled, there are individual fault lights for each drive. The yellow LED
associated with each drive is visible through holes in the top cover of the sled so
that you can see which drive needs replacement. The LED will stay on for about 10
minutes while the sled is out of the chassis.
Compute
When we look at the back, we see the four nodes’ compute modules in the chassis’
compute bays. We also see the terra cotta colored release lever on each compute
module, secured by a thumb screw. As shown, compute module bay 1 and 2 make
up one node pair and bay 3 and 4 make up the other node pair. In the event of a
compute module power supply failure, the power supply from the peer compute
module in the node pair will temporarily provide power to both nodes. Let's move to the upper right of a compute module. The top light is a blue power LED, and below
that is an amber, fault LED. Each compute module has a ‘DO NOT REMOVE’
indicator light which is shaped like a raised hand with a line through it. To service
the compute module in question, shut down the affected node and wait until the
‘DO NOT REMOVE’ light goes out. Then it is safe to remove and service the unit in
question. The uHDMI port is used for factory debugging. The PCIE card on the
right is for external network connectivity and the left PCIE card is for internal
network connectivity. The compute module has a 1GbE management port, and the
DB9 serial console port. Each compute module has either a 1100W dual-voltage
(low and medium compute) or a 1450W high-line (240V) only (high and ultra-
compute) power supply unit. If high-line only nodes are being installed in a low-line
(120V) only environment, two 1U rack-mountable step-up transformers are required
for each Gen 6 chassis. Always keep in mind that Gen 6 nodes do not have power
buttons - both compute modules in a node pair will power on immediately when one
is connected to a live power source. There are also status indicator lights such as
the PSU fault light. All nodes have an ASHRAE (American Society of Heating, Refrigerating and Air-Conditioning Engineers) rating.
This hardware tour will take a deeper look inside the node’s compute module.
Movie:
The web version of this content contains a movie.
Script: This demonstration takes a tour of the inside of the Gen 6 compute module.
First, let's take a look at the back of the chassis. The chassis can have two or four
compute modules. Remember that a node is a ¼ of the chassis and consists of a
compute module and five drive sleds. Each node pairs with a peer node to form a
node pair. Shown here, nodes three and four form a node pair. Let’s start by
removing the node’s compute module to get a look inside. This demonstration does
not use a powered system. This tour does not highlight the steps for removing
components. Remember to always follow the proper removal and install
procedures from the SolVe Desktop.
WARNING: Only qualified Dell EMC personnel are allowed to open compute
nodes.
Let’s remove the node’s lid. This can be a bit tricky on the first time. Pull the blue
release handle without pressing down on the lid. Pressing down on the lid while
trying to open will keep the node lid from popping up. The lid portion of the compute
module holds the motherboard, CPU, and RAM. There are two different motherboard designs to accommodate different CPU types: the performance-based Broadwell-EP or the cost-optimized Broadwell-DE. Shown here is the Broadwell-DE
based board that the H400, A200, and A2000 use. Note the position of the four
DIMMs and their slot numbering. Here is the Broadwell-EP based board that the
F800, H600 and H500 use. Note the position of the four DIMMs and their slot
numbering. The DIMMs are field replaceable units. The CPU is not. Due to the
density and positioning of motherboard components around the DIMM slots,
damage to the motherboard is possible if care is not taken while removing and
installing DIMM modules.
Let’s turn to the lower portion of the compute module. First, we see the fan module.
This is a replaceable unit. Shown is the release lever for the fans.
The riser card, on the right side of the compute module, contains the PCIE card
slots, the NVRAM vault battery, and the M.2 card containing the NVRAM vault.
Let’s remove this to get a closer look. Removing the riser card can be tricky the first
time. Note the two blue tabs for removing the HBA riser, a sliding tab at the back
and a fixed tab at the front. At the same time, push the sliding tab in the direction of
the arrow on the tab and free the front end by pulling the riser away from the
locking pin on the side of the chassis with the fixed tab. Lift the tabs to unseat the
riser and pull it straight up. Try this at least once before going onsite to replace a
component. Here are the two PCIe slots and the ‘Pelican’ slot. They are x4 or x8
depending on the performance level of the node. The internal NIC for
communication between nodes is the PCI card shown on the left, the external PCI
card is on the right. The external NIC is used for client and application access.
Depending on the performance level of the node, the external NIC may either be a
full-size PCIe card facing left, or a ‘Pelican’ card connected to the smaller
proprietary slot between the two PCIe slots and facing right.
Next is the battery. The backup battery maintains power to the compute node while
journal data is being stored in the M.2 vault during an unexpected power loss
event. Note that because the riser card and the battery are paired, if the battery
needs to be replaced, it is replaced together with the riser card. Lastly, as seen
here, the M.2 vault disk is located under the battery. The M.2 vault disk is also a
field replaceable unit. This concludes the inside tour. Remember to review the
documentation on the SolVe Desktop for proper removal and replacement of the
node’s compute module components.
Generation 6 4U Node
New PowerScale F600 nodes with full NVMe support deliver massive performance
in a compact form factor. OneFS delivers up to 80% storage utilization for
maximum storage efficiency. Data deduplication can further reduce storage
requirements by up to 30% and inline data compression on the F200, F600, F810
all-flash platforms, and the H5600 hybrid platform can reduce the space that is
consumed.
Generation 6 Terminologies
Pre-Engagement Questionnaire
Module Objectives
Job Roles
There are four job roles that are associated with the PowerScale hardware installation and implementation process.
Pre-Engagement Questionnaire
The PowerScale PEQ is the replacement for the Configuration Guide. The stated
purpose of the PEQ is to document the Professional Services project installation
parameters and to facilitate the communication between the responsible resources.
The PEQ incorporates the process workflow and eases hand-off from Pre-Sales to
Delivery. It is a delivery document, which benefits other roles, helps define roles
and responsibilities and is not the same as the Qualifier.
PEQ Tour
Cover
To start the application, open the PEQ spreadsheet tool. The first tab that is
displayed is the Cover tab. The Cover tab contains the creation date and the
customer name.
Begin filling out the document from upper left to bottom right. The SE shares the customer contact information and describes at a high level what the project team is
expected to do at each site, using the provided drop-down menus. The SE also
provides general customer environment information, such as Operating Systems in
use, backup apps and protocols, and any specialty licenses sold. Accurate and
On the Solution Diagram tab, the SE provides the solution diagrams or topologies
that are used during the presales cycle.
Checklist (PM)
The Project Manager begins with the Engagement Checklist tab to help plan project tasks with a great deal of granularity.
It is also the responsibility of the Project Manager to maintain the Data Center readiness information on the Project Details tab. Here the PM focuses on verifying that each site has met the power, cooling, networking, and other prerequisites before scheduling resources. The PM should also complete the Administrative Details section with team member information, project ID details, and an optional timeline.
Hardware
The Hardware tab shows the physical connection parameters and some basic logical parameters necessary to "stand up" the cluster. When multiple node types are selected and defined on the Engagement Details tab, the Cluster Details section includes a complete listing of the extended Node Details and Front-End Switch details.
Cluster
The Cluster tab represents a single cluster and its logical configuration. Each section on the Cluster tab has a designated number (yellow chevron). The numbers represent the listed priority of that section and should be completed in order, starting with number one. This tab is split into sections that describe different features. These sections are enabled through the questions in the Licensing/Features section.
Reference
Note: The Solution Architect (SA) typically fills out the PEQ.
Module Objectives
Network: There are two types of networks that are associated with a cluster:
internal and external.
Ethernet
Clients connect to the cluster using Ethernet connections10 that are available on all
nodes.
9 In general, keeping the network configuration simple provides the best results with
the lowest amount of administrative overhead. OneFS offers network provisioning
rules to automate the configuration of additional nodes as clusters grow.
10Because each node provides its own Ethernet ports, the amount of network
bandwidth available to the cluster scales linearly.
OneFS supports a single cluster11 on the internal network. This back-end network,
which is configured with redundant switches for high availability, acts as the
backplane for the cluster.12
12 This enables each node to act as a contributor in the cluster and isolates node-to-node communication to a private, high-speed, low-latency network. This back-end network uses Internet Protocol (IP) for node-to-node communication.
The Gen 6x back-end topology in OneFS 8.2 and later supports scaling a PowerScale cluster to 252 nodes. The graphic shows a Leaf-Spine back-end topology with 27 uplinks per spine switch.
Leaf-Spine is a two-level hierarchy where nodes connect to leaf switches, and leaf switches connect to spine switches. Leaf switches do not connect to one another, and spine switches do not connect to one another. Each leaf switch connects with each spine switch, and all leaf switches have the same number of uplinks to the spine switches.
The new topology uses the maximum internal bandwidth and 32-port count of Dell Z9100 switches. When planning for growth, F800 and H600 nodes should connect over 40 GbE ports, whereas A200 nodes may connect using 4x1 breakout cables. Scale planning enables nondisruptive upgrades, meaning that as nodes are added, no recabling of the back-end network is required. Ideally, plan for three years of growth. The table shows the switch requirements as the cluster scales. In the table, Max Nodes indicates that each node is connected to a leaf switch using a 40 GbE port.
5. Create a cluster by using any four nodes on the first Leaf switch.
6. Confirm that OneFS 8.2 or later is installed on the cluster.
7. Add the remaining nodes to the cluster that was created in step 5.
8. Confirm the cluster installation by checking the CELOG events.
Legacy Connectivity
Three types of InfiniBand cable are used with currently deployed clusters. Older nodes and switches, which run at DDR or SDR speeds, use the legacy CX4 connector. In mixed environments (QDR nodes and a DDR switch, or conversely), a hybrid IB cable is used. This cable has a CX4 connector on one end and a QSFP connector on the other. However, QDR nodes are incompatible with SDR switches. The connector types on each cable identify the cable. As the graphic shows, the combination of the type of node and the type of InfiniBand switch port determines the correct cable type.
Node Interconnectivity
1: Backend ports int-a and int-b. The int-b port is the upper port. Gen 6 backend
ports are identical for InfiniBand and Ethernet and cannot be identified by looking at
the node. If Gen 6 nodes are integrated in a Gen 5 or earlier cluster, the backend
will use InfiniBand. Note that there is a procedure to convert an InfiniBand backend
to Ethernet if the cluster no longer has pre-Gen 6 nodes.
2: PowerScale nodes with different backend speeds can connect to the same
backend switch and not see any performance issues. For example, an environment
has a mixed cluster where A200 nodes have 10 GbE backend ports and H600
nodes have 40 GbE backend ports. Both node types can connect to a 40 GbE switch without affecting the performance of other nodes on the switch. The 40 GbE
switch provides 40 GbE to the H600 nodes and 10 GbE to the A200 nodes.
3: There are two speeds for the backend Ethernet switches, 10 GbE and 40 GbE.
Some nodes, such as archival nodes, might not need to use all of a 10 GbE port
bandwidth while other workflows might need the full utilization of the 40 GbE port
bandwidth. The Ethernet performance is comparable to InfiniBand so there should
be no performance bottlenecks with mixed performance nodes in a single cluster.
Administrators should not see any performance differences if moving from
InfiniBand to Ethernet.
Gen 6 nodes can use either an InfiniBand or Ethernet switch on the backend.
InfiniBand was designed as a high-speed interconnect for high-performance
computing, and Ethernet provides the flexibility and high speeds that sufficiently
support the PowerScale internal communications.
Gen 6.5 only supports Ethernet. All new, PowerScale clusters support Ethernet
only.
The graphic shows a closer look at the external and internal connectivity. Slot 1 is
used for backend communication on both the F200 and F600. Slot 3 is used for the
F600 2x 25 GbE or 2x 100 GbE front-end network connections. The rack network
daughter card (rNDC) is used for the F200 2x 25 GbE front-end network
connections.
Note: The graphic shows the R640 and does not represent the F200 and F600 PCIe and rNDC
configuration.
The external network provides connectivity for clients over standard file-based
protocols. It supports link aggregation, and network scalability is provided through
software in OneFS. A Gen 6 node has two front-end ports (10 GigE, 25 GigE, or 40 GigE) and one 1 GigE port for management. Gen 6.5 nodes have two front-end ports (10 GigE, 25 GigE, or 100 GigE).
Breakout Cables
The 40 GbE and 100 GbE connections are four individual lines of 10 GbE and 25 GbE, respectively. Most switches support breaking out a QSFP port into four SFP ports using a 1:4 breakout cable. On the backend, the breakout configuration is applied automatically when the switch detects the cable type as a breakout cable. The front end is often configured manually on a per-port basis.
Cabling Considerations
Module Objectives
• Serial Console
• Web Administration Interface (WebUI)
• Command Line Interface (CLI)
• OneFS Application Programming Interface (API)
• Front Panel Display
Movie:
Link: https://edutube.emc.com/Player.aspx?vno=KjBgi9m8LmZLw58klDHmOA==&autoplay=true
Script: Four options are available for managing the cluster. The web
administration interface (WebUI), the command-line interface (CLI), the serial
console, or the platform application programming interface (PAPI), also called the
OneFS API. The first management interface that you may use is a serial console to
node 1. A serial connection using a terminal emulator, such as PuTTY, is used to
initially configure the cluster. The serial console gives you serial access when you
cannot or do not want to use the network. Other reasons for accessing using a
serial connection may be for troubleshooting, site rules, a network outage, and so
on. Shown are the terminal emulator settings.
Configuration Manager
For initial configuration, access the CLI by establishing a serial connection to the
node designated as node 1. The serial console gives you serial access when you
cannot or do not want to use the network. Other reasons for accessing using a
serial connection may be for troubleshooting, site rules, a network outage, and so on.
Serial Port14
14 The serial port is usually a male DB9 connector. This port is called the service
port. Connect a serial null modem cable between a serial port of a local client, such
as a laptop, and the node service port. Connect to the node designated as node 1.
As most laptops today no longer have serial ports, you might need to use a USB-
to-serial converter. On the local client, launch a serial terminal emulator.
• Data bits = 8
• Parity = none
• Stop bits = 1
• Flow control = hardware
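The following is a minimal sketch of opening the serial console from a laptop running a UNIX-like OS with the screen utility. The device path /dev/ttyUSB0 and the 115,200 bps transfer rate are assumptions; confirm the correct settings in the SolVe procedure for the node.

    # Assumed settings: 115200 bps, 8 data bits, no parity, 1 stop bit, hardware flow control
    screen /dev/ttyUSB0 115200,cs8,-parenb,-cstopb,crtscts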
isi config (the prompt changes to >>>)
The isi config command, pronounced "izzy config," opens the configuration
console. The console contains configured settings from the time the Wizard started
running.
Use the console to change initial configuration settings. When in the isi config
console, other configuration commands are unavailable. The exit command is
used to go back to the default CLI.
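A brief sketch of the flow is shown below; the cluster prompt is hypothetical and the console output is omitted.

    cluster-1# isi config      # opens the configuration console; the prompt changes to >>>
    >>> ...                    # initial configuration settings are changed here
    >>> exit                   # returns to the default CLI
    cluster-1#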
Connecting to the WebUI:
• Connect to any node in the cluster over HTTPS on port 8080.
• The user must have logon privileges.
• The OneFS version is displayed.
The WebUI requires at least one IP address that is configured16 on one of the external Ethernet ports present in one of the nodes.
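For example, a browser on a client that can reach the cluster connects to a URL of the following form (the address is a placeholder for any configured external IP address):

    https://<node-ip-address>:8080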
If you choose option 1, the Configuration Wizard steps you through the process of creating a cluster. If you choose option 2, the Configuration Wizard ends after the node finishes joining the cluster. You can then configure the cluster using the WebUI or the CLI.
• Out-of-band17
• In-band18
Both methods are done using any SSH client such as OpenSSH or PuTTY. Access
to the interface changes based on the assigned privileges.
OneFS commands are code that is built on top of the UNIX environment and are specific to OneFS management. You can use commands together in compound command structures, combining UNIX commands with customer-facing and internal commands.
17Accessed using a serial cable that is connected to the serial port on the back of
each node. As many laptops no longer have a serial port, a USB-serial port adapter
may be needed.
5: The CLI command use includes the capability to customize the base command with the use of options, also known as switches and flags. A single command with multiple options results in many different permutations, and each combination results in different actions performed.
6: The CLI is a scriptable interface. The UNIX shell enables scripting and execution
of many UNIX and OneFS commands.
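As an illustration, a compound command can pipe the output of an isi command through standard UNIX tools. This is only a sketch; output and options vary by OneFS version.

    # Show cluster status and filter the output with a UNIX utility
    isi status | grep -i health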
CLI Usage
The man isi or isi --help command is an important command for a new
administrator. These commands provide an explanation of the available isi
commands and command options. You can also view a basic description of any
command and its available options by typing the -h option after the command.
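For example, using the commands described above (isi status is used here only as a sample command):

    man isi          # manual page for the isi commands
    isi --help       # lists the available isi commands and options
    isi status -h    # basic description and options for a specific command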
OneFS applies authentication and RBAC controls to API commands to ensure that
only authorized commands are run.
19A chief benefit of PAPI is its scripting simplicity, enabling customers to automate
their storage administration.
3: Some commands are not PAPI aware, meaning that RBAC roles do not apply.
These commands are internal, low-level commands that are available to
administrators through the CLI. Commands not PAPI aware: isi config, isi
get, isi set, and isi services
4: The number indicates the PAPI version. If an upgrade introduces a new version
of PAPI, some backward compatibility ensures that there is a grace period for old
scripts to be rewritten.
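The following is a minimal sketch of a PAPI call using curl. The node address and user are placeholders, the endpoint is shown only for illustration, and the "1" in the URI path is the PAPI version described above.

    # Query the cluster configuration over HTTPS on port 8080 (curl prompts for the password)
    curl -k -u admin "https://<node-ip-address>:8080/platform/1/cluster/config"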
The Gen 6 front panel display is an LCD screen with five buttons that are used for
basic administration tasks20.
The Gen 6.5 front panel has limited functionality21 compared to the Gen 6.
20Some of them include adding the node to a cluster, checking node or drive
status, events, cluster details, capacity, IP and MAC addresses.
21 You can join a node to a cluster, and the panel displays the node name after the node has joined the cluster.
Course Summary
Now that you have completed this course, you should be able to:
→ Discuss installation engagement actions.
→ Explain the use of PEQ in implementation.
→ Describe PowerScale nodes.
→ Identify the PowerScale node internal and external networking components.
→ Explain the PowerScale cluster management tools.
Electrostatic Discharge
Electrostatic Discharge is a major cause of damage to electronic components and
potentially dangerous to the installer. To avoid ESD damage, review ESD
procedures before arriving at the customer site and adhere to the precautions when
onsite.
Antistatic Packaging: Leave components in antistatic packaging until time to install.
PowerScale Nodes
Individual PowerScale nodes provide the data storage capacity and processing
power of the PowerScale scale-out NAS platform. All of the nodes are peers to
each other and so there is no single 'master' node and no single 'administrative
node'.
• No single master
• No single point of administration
Administration can be done from any node in the cluster as each node provides
network connectivity, storage, memory, non-volatile RAM (NVDIMM) and
processing power found in the Central Processing Units (CPUs). There are also
different node configurations, compute, and capacity. These varied configurations
can be mixed and matched to meet specific business needs.
Each node contains:
• Disks
• Processor
• Cache
Tip: Gen 6 nodes can exist within the same cluster. Every
PowerScale node is equal to every other PowerScale node of the
same type in a cluster. No one specific node is a controller or filer.
F-Series
The F-series nodes sit at the top of both performance and capacity with all-flash
arrays for ultra-compute and high capacity. The all flash platforms can accomplish
250-300k protocol operations per chassis and get 15 GB/s aggregate read
throughput from the chassis. Even when the cluster scales, the latency remains
predictable.
• F80022
• F81023
22 The F800 is suitable for workflows that require extreme performance and
efficiency. It is an all-flash array with ultra-high performance. The F800 sits at the
top of both the performance and capacity platform offerings when implementing the
15.4TB model, giving it the distinction of being both the fastest and densest Gen 6
node.
23 The F810 is suitable for workflows that require extreme performance and
efficiency. The F810 also provides high-speed inline data deduplication and in-line
data compression. It delivers up to 3:1 efficiency, depending on your specific
dataset and workload.
H-Series
After F-series nodes, next in terms of computing power are the H-series nodes.
These are hybrid storage platforms that are highly flexible and strike a balance
between large capacity and high-performance storage to provide support for a
broad range of enterprise file workloads.
• H40024
• H50025
• H560026
• H60027
25The H500 is a versatile hybrid platform that delivers up to 5 GB/s bandwidth per
chassis with a capacity ranging from 120 TB to 720 TB per chassis. It is an ideal
choice for organizations looking to consolidate and support a broad range of file
workloads on a single platform. H500 is comparable to a top of the line X410,
combining a high compute performance node with SATA drives. The whole Gen 6
architecture is inherently modular and flexible with respect to its specifications.
26The H5600 combines massive scalability – 960 TB per chassis and up to 8 GB/s
bandwidth in an efficient, highly dense, deep 4U chassis. The H5600 delivers inline
data compression and deduplication. It is designed to support a wide range of
demanding, large-scale file applications and workloads.
A-Series
The A-series nodes have less compute power than the other nodes and are designed for data archival purposes. The archive platforms can be combined with new or existing all-flash and hybrid storage systems into a single cluster that provides an efficient tiered storage solution.
• A20028
• A200029
28The A200 is an ideal active archive storage solution that combines near-primary
accessibility, value and ease of use.
29The A2000 is an ideal solution for high density, deep archive storage that
safeguards data efficiently for long-term retention. The A2000 can hold 80 10 TB drives for 800 TB of storage by using a deeper chassis with longer drive sleds that contain more drives in each sled.
OneFS CLI
The command-line interface runs "isi" commands to configure, monitor, and
manage the cluster. Access to the command-line interface is through a secure shell
(SSH) connection to any node in the cluster.
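For example, where the address is a placeholder for any node IP address and admin is an account with logon privileges:

    ssh admin@<node-ip-address>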
PAPI
The customer uses OneFS application programming interface (API) to automate
the retrieval of the most detailed network traffic statistics. It is divided into two
functional areas: One area enables cluster configuration, management, and
monitoring functionality, and the other area enables operations on files and
directories on the cluster.
Serial Console
The serial console is used for initial cluster configurations by establishing serial
access to the node designated as node 1.
WebUI
The browser-based OneFS web administration interface provides secure access
with OneFS-supported browsers. This interface is used to view robust graphical
monitoring displays and to perform cluster-management tasks.