You are on page 1of 94

POWERSCALE

HARDWARE CONCEPTS

PARTICIPANT GUIDE

PARTICIPANT GUIDE
Table of Contents

PowerScale Hardware Concepts .............................................................................. 2


Rebranding - Isilon is now PowerScale ................................................................................ 3
PowerScale Solutions Certification Journey Map ................................................................. 4
Prerequisite Skills ................................................................................................................ 5
Course Objectives................................................................................................................ 6

Installation Engagement............................................................................................ 7
Module Objectives ............................................................................................................... 8
Customer Engagement Responsibility ................................................................................. 9
Physical Tools Required .................................................................................................... 10
Installation and Implementation Phases ............................................................................. 12
SolVe ................................................................................................................................. 13
Safety Precautions and Considerations ............................................................................. 16
Onsite Do's and Don'ts....................................................................................................... 18

Introduction to PowerScale Nodes ......................................................................... 19


Module Objectives ............................................................................................................. 20
PowerScale Node Specifications ....................................................................................... 21
PowerScale Hardware Overview........................................................................................ 23
PowerScale Nodes Overview ............................................................................................. 24
PowerScale Node Types.................................................................................................... 25
Gen 6 Hardware Components............................................................................................ 27
Gen 6.5 Hardware Components......................................................................................... 29
PowerScale Node Tour - Generation 6 .............................................................................. 31
Advantages and Terminologies .......................................................................................... 35

Pre-Engagement Questionnaire ............................................................................. 36


Module Objectives ............................................................................................................. 37
Job Roles ........................................................................................................................... 38
Pre-Engagement Questionnaire ......................................................................................... 40
PEQ Tour ........................................................................................................................... 41

PowerScale Hardware Concepts

Page ii © Copyright 2020 Dell Inc.


Internal and External Networking ........................................................................... 48
Module Objectives ............................................................................................................. 49
PowerScale Networking Architecture ................................................................................. 50
Leaf-Spine Backend Network ............................................................................................. 52
Legacy Connectivity ........................................................................................................... 54
Node Interconnectivity ....................................................................................................... 55
F200 and F600 Network Connectivity ................................................................................ 57
PowerScale Architecture - External Network ...................................................................... 58
Breakout Cables ................................................................................................................ 59
Cabling Considerations ...................................................................................................... 60

Cluster Management Tools ..................................................................................... 61


Module Objectives ............................................................................................................. 62
OneFS Management Tools ................................................................................................ 63
Serial Console Video ......................................................................................................... 64
Configuration Manager ...................................................................................................... 65
isi config ..................................................................................................................... 67
Web Administration Interface (WebUI) ............................................................................... 68
Command Line Interface (CLI) ........................................................................................... 70
CLI Usage .......................................................................................................................... 72
OneFS Application Programming Interface (API) ............................................................... 73
Front Panel Display............................................................................................................ 75

Course Summary ..................................................................................................... 76


Course Summary ............................................................................................................... 77

Appendix ................................................................................................. 79

Glossary .................................................................................................. 89

PowerScale Hardware Concepts

© Copyright 2020 Dell Inc. Page iii


PowerScale Hardware Concepts

PowerScale Hardware Concepts

© Copyright 2020 Dell Inc. Page 1


PowerScale Hardware Concepts

PowerScale Hardware Concepts

PowerScale Hardware Concepts

Page 2 © Copyright 2020 Dell Inc.


PowerScale Hardware Concepts

Rebranding - Isilon is now PowerScale

Important: In mid-2020 Isilon launched a new hardware platform, the


F200 and F600 branded as Dell EMC PowerScale. Over time the
Isilon brand will convert to the new platforms PowerScale branding. In
the meantime, you will continue to see Isilon and PowerScale used
interchangeably, including within this course and any lab activities.
OneFS CLI isi commands, command syntax, and man pages may
have instances of "Isilon".
Videos associated with the course may still use the "Isilon" brand.
Resources such as white papers, troubleshooting guides, other
technical documentation, community pages, blog posts, and others
will continue to use the "Isilon" brand.
The rebranding initiative is an iterative process and rebranding all
instances of "Isilon" to "PowerScale" may take some time.

PowerScale Hardware Concepts

© Copyright 2020 Dell Inc. Page 3


PowerScale Hardware Concepts

PowerScale Solutions Certification Journey Map

The graphic shows the PowerScale Solutions Expert certification track. You can
leverage the Dell Technologies Proven Professional program to realize your full
potential. A combination of technology-focused and role-based training and exams
to cover concepts and principles as well as the full range of Dell Technologies'
hardware, software, and solutions. You can accelerate your career and your
organization’s capabilities.

PowerScale Solutions

A. PowerScale Advanced Administration (C, VC)

B. PowerScale Advanced Disaster Recovery (C, VC)

(Knowledge and Experience based Exam)

Implementation Specialist, PowerScale Technology Architect Specialist, Platform Engineer, PowerScale


PowerScale

A. PowerScale Concepts (ODC)


A. PowerScale Concepts (ODC) A. PowerScale Concepts (ODC) B. PowerScale Hardware Concepts (ODC)
C. PowerScale Hardware Installation (ODC)
B. PowerScale Administration (C,VC,ODC) B. PowerScale Solution Design (ODC) D. PowerScale Hardware Maintenance
(ODC)
E. PowerScale Implementation (ODC)

Information Storage and Management

Information Storage and Management (C, VC, ODC)

(C) - Classroom

(VC) - Virtual Classroom

(ODC) - On Demand Course

For more information, visit: http://dell.com/certification

PowerScale Hardware Concepts

Page 4 © Copyright 2020 Dell Inc.


PowerScale Hardware Concepts

Prerequisite Skills

To understand the content and successfully complete this course, a student must
have a suitable knowledge base or skill set. The student must have an
understanding of:
• Current PowerScale hardware portfolio and the OneFS operating system
• PowerScale Concepts
• Isilon InfiniBand to Ethernet Backend Conversion

PowerScale Hardware Concepts

© Copyright 2020 Dell Inc. Page 5


PowerScale Hardware Concepts

Course Objectives

After completion of this course, you will be able to:


→ Discuss installation engagement actions.
→ Explain the use of PEQ in implementation.
→ Describe PowerScale nodes.
→ Identify the PowerScale node internal and external networking components.
→ Explain the PowerScale cluster management tools.

PowerScale Hardware Concepts

Page 6 © Copyright 2020 Dell Inc.


Installation Engagement

Installation Engagement

PowerScale Hardware Concepts

© Copyright 2020 Dell Inc. Page 7


Installation Engagement

Module Objectives

After completing this lesson, you will be able to:


• Describe the Customer Engineer and Implementation Specialist roles and
responsibilities.
• Explain the customer engagement procedures.

PowerScale Hardware Concepts

Page 8 © Copyright 2020 Dell Inc.


Installation Engagement

Customer Engagement Responsibility

There are five steps or phases for acquiring a PowerScale cluster. Each phase has
a separate team that engages with the customer. In the design phase a Solution
Architect (SA) works with the customer, determine their specific needs, and
documents what the solution looks like. After the product purchase, shipment, and
delivery to the customer site the install and implementation phase of a PowerScale
cluster begins. The result of the SA engagement is PowerScale Pre-Engagement
Questionnaire (PEQ) that the Customer Engineers (CE) and Implementation
Specialist (IS) uses to install and configure the cluster. Before the install phase, all
design decisions have been made.

Note: The Pre-Engagement Questionnaire (PEQ) is now replacement


for PowerScale Configuration Guide.

PowerScale Hardware Concepts

© Copyright 2020 Dell Inc. Page 9


Installation Engagement

Physical Tools Required

Shown in the graphic are suggested tools for a typical installation.

3
2 4

1 5

1: The cables that are required are a single CAT5/CAT6 network patch cord, to
directly connect your laptop to the node. USB-to-serial adapter, preferably one that
uses the Prolific 2303 Chipset.

2: DB9-to-DB9 Null modem cable (female/female).

3: The software that is required or recommended is:

• Latest recommended OneFS release


• Latest cluster firmware
• Latest drive firmware package
• SolVe Online
• WinSCP - copies files to and from cluster
• PuTTy - serial access cluster via SSH

4: Basic hand tools: screwdrivers (flat-head and Phillips), wire cutters, anti-static
wrist strap.

5: Cable ties/Velcro strips for cable management and routing.

PowerScale Hardware Concepts

Page 10 © Copyright 2020 Dell Inc.


Installation Engagement

Resources: Links to download the WinSCP and PuTTy software.


Other software can be downloaded at support.emc.com.

PowerScale Hardware Concepts

© Copyright 2020 Dell Inc. Page 11


Installation Engagement

Installation and Implementation Phases

There are three distinct steps in the install and implementation phase: Install, Build,
and Implement.

1: During the install, the components are unpacked and racked, and switches are
rack that is mounted. Nodes are connected to the back-end switches, power is
added, and front-end network cables are connected between the cluster and
customer network. The Customer Engineer or CE performs these tasks.

2: Depending on the role, the CE may perform the cluster build also. The cluster
build is achieved when the system is powered on, the PowerScale Configuration
Wizard has been launched and the information added.

3: In some regions, running the Configuration Wizard may be the sole responsibility
of the IS. After the cluster is built, the IS configures the features of OneFS as
written in the PEQ.

PowerScale Hardware Concepts

Page 12 © Copyright 2020 Dell Inc.


Installation Engagement

SolVe

Before you arrive at a client site, remember to read the call notes and follow the
processes that are detailed in them. Check if there are any special instructions from
PowerScale Technical Support that you must follow.

SolVe Online is a revised and updated version of SolVe Desktop. It is a knowledge


management-led standard procedure for DELL-EMC field, service partners, and
customers.

1: Download SolVe Desktop application on the system. Go to the Tools and Sites
section, choose SolVe. And select SolVe Desktop Executable. Depending on the
browser used, you may be presented with security dialogue boxes. Take the
needed actions to launch the executable.

2:

Click through the Setup wizard and then select Install. Clicking Finish launches
the SolVe Desktop. SolVe must be authorized for use. Select OK. A few general
items1.

1 Notice the dialog in the lower left showing the version. This area also shows the
progress when upgrading and downloading content. Also notice in the lower right
the service topics. Once connected, many of articles that are shown may not be

PowerScale Hardware Concepts

© Copyright 2020 Dell Inc. Page 13


Installation Engagement

3: From the menu, select Authorize and download the list of available products.
Adhere to the instructions shown, that is to leave SolVe open, enter credentials,
this is using SSO, and open the keychain file. Select OK. And then go to
downloads and open the keychain file.

4: Next are the Release Notes. Review and then close this window. Bring back the
SolVe. Notice the dialog2 in the lower left indicating the keychain is loaded, that
means you are authorized, and content is updated. Now, scroll down, and click
PowerScale to gather the PowrScale content.

5: Click OK. Again, note the progress in the lower left. Once the download is
complete, you see that the PowerScale image has changed. Tools that are
downloaded appear in the upper left corner of the screen without the green arrow
present.

6: Now you can click PowerScale and view the available procedures. If updates are
available for download, you see an information icon, click the icon, and approve the
updated content download.

relevant to PowerScale. There is a filtering option in the menu to receive the


articles that pertain to a specific product.

2The icons with a green arrow indicate that the user must click the icon in order to
download the tool.

PowerScale Hardware Concepts

Page 14 © Copyright 2020 Dell Inc.


Installation Engagement

Resources: Partners3 can search through the Dell EMC partner


portal. SolVe Online can be downloaded from EMC support portal.
Access SolVe Online through SolVe Online portal. Click here for an
overview on SolVe Desktop/Online.

3The view is dependent upon Partner Type. A service partner sees what an
employee sees, a direct sales partner sees what a customer sees, and an
ASP/ASN partner sees products depending upon credentials.

PowerScale Hardware Concepts

© Copyright 2020 Dell Inc. Page 15


Installation Engagement

Safety Precautions and Considerations

When working with PowerScale equipment, it is critical to ensure you adhere to the
following precautions.

6
1

2
5
3

1: The AC supply circuit for PowerScale nodes must supply the total current that is
specified on the label of the node. All AC power supply connections must be
properly grounded. Connections that are not directly connected to the branch
circuit, such as nodes that are connected to a power strip, must also be properly
grounded. Do not overload the branch circuit of the AC supply that provides power
to the rack holding PowerScale nodes. The total rack load should not exceed 80%
of the branch circuit rating. For high availability, the left and right sides of any rack
must receive power from separate branch feed circuits. To help protect the system
from sudden increases or decreases in electrical power, use a surge suppressor,
line conditioner, or uninterruptible power supply or UPS.

2: To avoid personal injury or damage to the hardware, always use two people to
lift or move a node or chassis. A Gen 6 chassis can weigh more than 200 lbs. It is
recommended to use a lift to install the components into the rack. If a lift is not
available, you must remove all drive sleds and compute modules from the chassis
before lifting. Even when lifting an empty chassis, never attempt to lift and install
with fewer than two people.

3:

Electrostatic Discharge

PowerScale Hardware Concepts

Page 16 © Copyright 2020 Dell Inc.


Installation Engagement

4: If you install PowerScale nodes in a rack that is not bolted to the floor, use both
front and side stabilizers. Installing PowerScale nodes in an unbolted rack without
these stabilizers could cause the rack to tip over, potentially resulting in bodily
injury. Use only approved replacement parts and equipment.

5: Beyond precautions of working with electricity, it is also critical to ensure proper


cooling. Proper airflow must be provided to all PowerScale equipment. Gen 6
nodes have an ASHRAE (American Society of Heating, Refrigerating and Air-
Conditioning Engineers) designation of A3. The nodes can operate in environments
with ambient temperatures from five degrees, up to 40° Celsius for limited periods
of time.

6: You can install racks in raised or nonraised floor data centers capable of
supporting that system. It is your responsibility to ensure that data center floor can
support the weight of the system. A fully populated rack with A2000 chassis’
weighs about 3,500 lbs (1,590 kg). If the floor is rated at less than 3,500 lbs, then
additional care and planning must be taken. Some data center floors have different
static load vs. dynamic (rolling) load specifications, and sectional weight and load
point limits. This becomes important while moving preracked solutions around the
data center.

Caution: Failure to adhere to the safety precautions may result in


electric shock, bodily injury, fire, damage to PowerScale systems
equipment, or loss of data. Review the safety precautions and
considerations4 before the installation.

4Failure to heed these warnings may also void the product warranty. Only trained
and qualified personnel should install or replace equipment. Select the button
options for specific information. Always refer to the current Site Preparation and
Planning Guide for proper procedures and environmental information.

PowerScale Hardware Concepts

© Copyright 2020 Dell Inc. Page 17


Installation Engagement

Onsite Do's and Don'ts

When onsite, remember to represent Dell EMC and yourself in the best possible
light. Do not change the PEQ without the approval of the design team. Any
approved changes should be meticulously tracked and any appropriate change
control processes should be followed. Remember to bring your documentation and
copies to provide to the customer.

Before you leave a client site, ensure you:

• Test the device function and connectivity by following documented test


procedures in the training material and support guides.
• Escalate any client satisfaction issues or severity level 1 situations to the next
level of support.
• Follow up on any outstanding commitments that are made to the client.
• Contact PowerScale support to report the call status.
• Ensure that the product is registered and that the Install Base Record is
updated.

Tip: To make an Install Base entry, use the IB Status Change page
link.

PowerScale Hardware Concepts

Page 18 © Copyright 2020 Dell Inc.


Introduction to PowerScale Nodes

Introduction to PowerScale Nodes

PowerScale Hardware Concepts

© Copyright 2020 Dell Inc. Page 19


Introduction to PowerScale Nodes

Module Objectives

After completing this module, you will be able to:


• Describe node naming conventions.
• Identify each PowerScale node series.
• Identify PowerScale node components.

PowerScale Hardware Concepts

Page 20 © Copyright 2020 Dell Inc.


Introduction to PowerScale Nodes

PowerScale Node Specifications

Attribute F200, F800, H600 H5600 H500, A200 A2000


F600 F810 H400

Rack Units 1U 4 4 4 nodes 4 4 4 nodes


nodes nodes in 4U nodes nodes in 4U
in 4U in 4U in 4U in 4U

Nodes per N/A 4 4 4 4 4 4


Chassis

Per Node F200: F800: 18 TB– 200 30 TB– 30 TB– 200


Capacity 3.84 – 24 TB– 36 TB TB–240 180 TB 180 TB TB–240
15.36 231 TB TB TB
TB F810:
F600: 57.5 -
15.36 – 231 TB
61.4
TB

Storage Media F200: 15 30 20 15 15 15


per Node 4 SSDs SAS SATA SATA SATA SATA
SSDs drives drives drives drives drives
F600:
8
NVMe
SSDs

PowerScale Hardware Concepts

© Copyright 2020 Dell Inc. Page 21


Introduction to PowerScale Nodes

Storage Media F200: 3.84 600 10 TB 2 TB, 4 2 TB, 4 10 TB


Capacity 960 TB, GB or or 12 TB, 8 TB, 8 or 12
Options GB, 7.68 1.2 TB TB TB, or TB, or TB
1.92 TB, or SAS SATA 12 TB 12 TB SATA
TB, 15.36 drives drives SATA SATA drives
3.84 TB drive drives
TB SSDs
SSDs
F600:
1.92
TB,
3.84
TB,
7.68
TB
NVMe
SSDs

ECC Memory F200: 256 256 256 GB H400: 16 GB 16 GB


per Node 48 GB GB GB 64 GB
or 96 H500:
GB 128
F600: GB
128
GB,
192
GB, or
384
GB

OneFS 8.1.x F800 - 8.1.x 8.2.x 8.1.x 8.1.x 8.1.x


Compatibility 8.2.x 8.1.x, 8.2.x 9.0.x 8.2.x 8.2.x 8.2.x
9.0.x 8.2.x, 9.0.x 9.0.x 9.0.x 9.0.x
9.0.x
F810 -
8.1.3,
8.2.1,
9.0.x

PowerScale Hardware Concepts

Page 22 © Copyright 2020 Dell Inc.


Introduction to PowerScale Nodes

PowerScale Hardware Overview

Nodes combine to create a cluster. Each cluster


behaves as a single, central storage system.
PowerScale is designed for large volumes of
unstructured data. PowerScale has multiple servers
that are called nodes.

PowerScale includes all-flash, hybrid, and archive


storage systems.
Dual chassis, eight node
Generation 6 (or Gen 6) cluster
Gen 6 highlights5
Gen 6.5 highlights6

5The Gen 6 platform reduces the data center rack footprints with support for four
nodes in a single 4U chassis. It enables enterprise to take on new and more
demanding unstructured data applications. The Gen 6 can store, manage, and
protect massively large datasets with ease. With the Gen 6, enterprises can gain
new levels of efficiency and achieve faster business outcomes.

6 The ideal use cases for Gen 6.5 (F200 and F600) is remote office/back office,
factory floors, IoT, and retail. Gen 6.5 also targets smaller companies in the core
verticals, and partner solutions, including OEM. The key advantages are low entry
price points and the flexibility to add nodes individually, as opposed to a chassis/2
node minimum for Gen 6.

PowerScale Hardware Concepts

© Copyright 2020 Dell Inc. Page 23


Introduction to PowerScale Nodes

PowerScale Nodes Overview

Generation 6 (or Gen 6) chassis and Generation 6.5 nodes

The design goal for the PowerScale nodes is to keep the simple ideology of NAS,
provide the agility of the cloud, and the cost of commodity.

Storage nodes are peers.

The Gen 6x family has different offerings that are based on the need for
performance and capacity. As Gen 6 is a modular architecture, you can scale out
compute and capacity separately. OneFS powers all the nodes.

PowerScale Hardware Concepts

Page 24 © Copyright 2020 Dell Inc.


Introduction to PowerScale Nodes

PowerScale Node Types

Click each generation node type to learn more.

Gen 6

The Gen 6 platform provides following offerings. Previous generations of


PowerScale nodes come in 1U, 2U, and 4U form factors. Gen 6 has a modular
architecture, with four nodes fitting into a single 4U chassis.

• F-Series
• H-Series
• A-series

Double-click image for enlarged view.

PowerScale Hardware Concepts

© Copyright 2020 Dell Inc. Page 25


Introduction to PowerScale Nodes

Gen 6.5

Gen 6.5 requires a minimum of three nodes to form a cluster. You can add single
nodes to the cluster. The F600 and F200 are a 1U form factor and based on the
R640 architecture.

• F6007
• F2008

Double click image for enlarged view.

7Mid-level All Flash Array 1U PE server with 10 (8 usable) x 2.5” drive bays,
enterprise NVMe SSDs (RI, 1DWPD), data reduction standard. Front End
networking options for 10/25 GbE or 40/100 GbE and 100 GbE Back End. Also
called as Cobalt Nodes.

8 Entry-level All Flash Array 1U PE server with 4 x 3.5” drive bays (w/ 2.5” drive
trays), enterprise SAS SSDs (RI, 1DWPD), data reduction standard. 10/25 GbE
Front/Back End networking. Also called as Sonic Nodes.

PowerScale Hardware Concepts

Page 26 © Copyright 2020 Dell Inc.


Introduction to PowerScale Nodes

Gen 6 Hardware Components

Gen 6 requires a minimum of four nodes to form a cluster. You must add nodes to
the cluster in pairs.

The chassis holds four compute nodes and 20 drive sled slots.

Both compute modules in a node pair power-on immediately when one of the
nodes is connected to a power source.

Gen 6 chassis

1 10 9

2 8
4
6

3
5 7

1: The compute module bay of the two nodes make up one node pair. Scaling out a
cluster with Gen 6 nodes is done by adding more node pairs.

2: Each Gen 6 node provides two ports for front-end connectivity. The connectivity
options for clients and applications are 10 GbE, 25 GbE, and 40 GbE.

3: Each node can have 1 or 2 SSDs that are used as L3 cache, global namespace
acceleration (GNA), or other SSD strategies.

4: Each Gen 6 node provides two ports for back-end connectivity. A Gen 6 node
supports 10 GbE, 40 GbE, and InfiniBand.

5: Power supply unit - Peer node redundancy: When a compute module power
supply failure takes place, the power supply from the peer node temporarily
provides power to both nodes.

PowerScale Hardware Concepts

© Copyright 2020 Dell Inc. Page 27


Introduction to PowerScale Nodes

6: Each node has five drive sleds. Depending on the length of the chassis and type
of the drive, each node can handle up to 30 drives or as few as 15.

7: Disks in a sled are all the same type.

8: The sled can be either a short sled or a long sled. The types are:

• Long Sled - four drives of size 3.5"


• Short Sled - three drives of size 3.5"
• Short Sled - three or six drives of size 2.5"

9: The chassis comes in two different depths, the normal depth is about 37 inches
and the deep chassis is about 40 inches.

10: Large journals offer flexibility in determining when data should be moved to the
disk. Each node has a dedicated M.2 vault drive for the journal. A node mirrors
their journal to its peer node. The node writes the journal contents to the vault when
a power loss occurs. A backup battery helps maintain power while data is stored in
the vault.

PowerScale Hardware Concepts

Page 28 © Copyright 2020 Dell Inc.


Introduction to PowerScale Nodes

Gen 6.5 Hardware Components

Gen 6.5 requires a minimum of three nodes to form a cluster. You can add single
nodes to the cluster. The F600 and F200 are a 1U form factor and based on the
R640 architecture.

Graphic shows F200 or F600 node pool.

1
5

8 2

7 4

1: Scaling out an F200 or an F600 node pool only requires adding one node.

2: For front-end connectivity, the F600 uses the PCIe slot 3.

3: Each Gen F200 and F600 node provides two ports for backend connectivity. The
PCIe slot 1 is used.

4: Redundant power supply units - When a power supply fails, the secondary
power supply in the node provides power. Power is supplied to the system equally
from both PSUs when the Hot Spare feature is disabled. Hot Spare is configured
using the iDRAC settings.

5: Disks in a node are all the same type. Each F200 node has four SAS SSDs.

6: The nodes come in two different 1U models, the F200 and F600. You need
nodes of the same type to form a cluster.

7: The F200 front-end connectivity uses the rack network daughter card (rNDC).

8: Each F600 node has 8 NVMe SSDs.

PowerScale Hardware Concepts

© Copyright 2020 Dell Inc. Page 29


Introduction to PowerScale Nodes

Important: The F600 nodes have a 4-port 1 GB NIC in the rNDC slot.
OneFS does not support this NIC on the F600.

PowerScale Hardware Concepts

Page 30 © Copyright 2020 Dell Inc.


Introduction to PowerScale Nodes

PowerScale Node Tour - Generation 6

Gen 6 Chassis

All Gen 6 chassis come with the front panel and the front panel display module.
The front panel covers the drive sleds while allowing access to the display.

Movie:
The web version of this content contains a movie.

Script: This demonstration takes a tour of the Gen 6 front panel display, drive
sleds, and an outside look at the node’s compute modules. We’ll focus on
identifying components and indicator function.

Front Panel Display

We’ll start the tour on the front panel display. This allows various administrative
tasks and provides alerts. There are 5 navigation buttons that let the administrator
select each node to administer. There are 4 node status indicators. If a node’s
status light indicator is yellow, it indicates a fault with the corresponding node. The
product badges indicate the types of nodes installed in the chassis. Only two
badges are necessary because nodes can only be installed in matched adjacent
node pairs. The front panel display is hinged to allow access to the drive sleds it
covers and contains LEDs to help the administrator see the status of each node.

Sleds

Now, taking the front bezel off the chassis and you will see the drive sleds for the
nodes. The Gen 6 chassis has 20 total drive sled slots that can be individually
serviced, but only one sled per node can be safely removed at a time. The graphic
shows that each node is paired with 5 drive sleds. The status lights on the face of
the sled indicate whether the sled is currently in service, and whether the sled
contains a failing drive. The service request button informs the node that the sled
needs to be removed, allowing the node to prepare it for removal by moving key
boot information away from drives in that sled. This temporarily suspends the
drives in the sled from the cluster file system, and then spins them down. This is
done to maximize survivability in the event of further failures and protect the cluster
file system from the effect of having several drives temporarily go missing. The do-
not-remove light blinks while the sled is being prepared for removal, and then turns
off when it is ready. We’ll see this here. The sleds come in different types. First,

PowerScale Hardware Concepts

© Copyright 2020 Dell Inc. Page 31


Introduction to PowerScale Nodes

when configured for nodes that support 3.5" drives, there are 3 drives per sled, as
shown here, equaling 15 drives per node, making 60 drives per chassis. The
second type is a longer sled that holds four 3.5” drives. This is used in the deep
archive, deep rack chassis for A2000 nodes. The long sleds have 20 drives per
node, for up to 80 3.5" drives per chassis. In the 3.5" drive sleds, the yellow LED
drive fault lights are on the paddle cards attached to the drives, and they are also
visible through the cover of the drive sled as indicated here. The long sled has 4
LED viewing locations. The third type of sled applies to nodes supporting 2.5"
drives. The 2.5” drive sleds can have 3 or 6 drives per sled (as shown), 15 or 30
drives per node, making 60 or 120 drives per fully populated chassis. Internally to
the 2.5" sled, there are individual fault lights for each drive. The yellow LED
associated with each drive is visible through holes in the top cover of the sled so
that you can see which drive needs replacement. The LED will stay on for about 10
minutes while the sled is out of the chassis.

Compute

When we look at the back, we see the four nodes’ compute modules in the chassis’
compute bays. We also see the terra cotta colored release lever on each compute
module, secured by a thumb screw. As shown, compute module bay 1 and 2 make
up one node pair and bay 3 and 4 make up the other node pair. In the event of a
compute module power supply failure, the power supply from the peer compute
module in the node pair will temporarily provide power to both nodes. Let’s move
the upper right of a compute module. The top light is a blue, power LED and below
that is an amber, fault LED. Each compute module has a ‘DO NOT REMOVE’
indicator light which is shaped like a raised hand with a line through it. To service
the compute module in question, shut down the affected node and wait until the
‘DO NOT REMOVE’ light goes out. Then it is safe to remove and service the unit in
question. The uHDMI port is used for factory debugging. The PCIE card on the
right is for external network connectivity and the left PCIE card is for internal
network connectivity. The compute module has a 1GbE management port, and the
DB9 serial console port. Each compute module has either a 1100W dual-voltage
(low and medium compute) or a 1450W high-line (240V) only (high and ultra-
compute) power supply unit. If high-line only nodes are being installed in a low-line
(120V) only environment, two 1U rack-mountable step-up transformers are required
for each Gen 6 chassis. Always keep in mind that Gen 6 nodes do not have power
buttons - both compute modules in a node pair will power on immediately when one
is connected to a live power source. There are also status indicator lights such as
the PSU fault light. All nodes have an ASHRAE (American Society of Heating,

PowerScale Hardware Concepts

Page 32 © Copyright 2020 Dell Inc.


Introduction to PowerScale Nodes

Refrigerating and Air-conditioning Engineers) designation of A3, which enables the


nodes to operate in environments with ambient temperatures from 5 up to 40
degrees Celsius for limited periods of time. In closing, here are also 2 SSD bays on
each compute module, one or both of which are populated with SSDs (depending
on node configuration) that are used as L3 cache. This concludes the tour of the
Isilon Gen 6 front panel display, drive sleds, and an outside look at the node’s
compute modules.

Inside Gen 6 Node

This hardware tour will take a deeper look inside the node’s compute module.

Movie:
The web version of this content contains a movie.

Script: This demonstration takes a tour of the inside of the Gen 6 compute module.

First, let’s take at the back of the chassis. The chassis can have two or four
compute modules. Remember that a node is a ¼ of the chassis and consists of a
compute module and five drive sleds. Each node pairs with a peer node to form a
node pair. Shown here, nodes three and four form a node pair. Let’s start by
removing the node’s compute module to get a look inside. This demonstration does
not use a powered system. This tour does not highlight the steps for removing
components. Remember to always follow the proper removal and install
procedures from the SolVe Desktop.

WARNING: Only qualified Dell EMC personnel are allowed to open compute
nodes.

Let’s remove the node’s lid. This can be a bit tricky on the first time. Pull the blue
release handle without pressing down on the lid. Pressing down on the lid while
trying to open will keep the node lid from popping up. The lid portion of the compute
module holds the motherboard, CPU and RAM. There are two different
motherboard designs to accommodate different CPU types; the performance-based
Broadwell-EP or the cost optimized Broadwell-DE. Shown here is the Broadwell-DE
based board that the H400, A200, and A2000 use. Note the position of the four
DIMMs and their slot numbering. Here is the Broadwell-EP based board that the
F800, H600 and H500 use. Note the position of the four DIMMs and their slot

PowerScale Hardware Concepts

© Copyright 2020 Dell Inc. Page 33


Introduction to PowerScale Nodes

numbering. The DIMMs are field replaceable units. The CPU is not. Due to the
density and positioning of motherboard components around the DIMM slots,
damage to the motherboard is possible if care is not taken while removing and
installing DIMM modules.

Let’s turn to the lower portion of the compute module. First, we see the fan module.
This is a replaceable unit. Shown is the release lever for the fans.

The riser card, on the right side of the compute module, contains the PCIE card
slots, the NVRAM vault battery, and the M.2 card containing the NVRAM vault.
Let’s remove this to get a closer look. Removing the riser card can be tricky the first
time. Note the two blue tabs for removing the HBA riser, a sliding tab at the back
and a fixed tab at the front. At the same time, push the sliding tab in the direction of
the arrow on the tab and free the front end by pulling the riser away from the
locking pin on the side of the chassis with the fixed tab. Lift the tabs to unseat the
riser and pull it straight up. Try this at least once before going onsite to replace a
component. Here are the two PCIe slots and the ‘Pelican’ slot. They are x4 or x8
depending on the performance level of the node. The internal NIC for
communication between nodes is the PCI card shown on the left, the external PCI
card is on the right. The external NIC is used for client and application access.
Depending on the performance level of the node, the external NIC may either be a
full-size PCIe card facing left, or a ‘Pelican’ card connected to the smaller
proprietary slot between the two PCIe slots and facing right.

Next is the battery. The backup battery maintains power to the compute node while
journal data is being stored in the M.2 vault during an unexpected power loss
event. Note that because the riser card and the battery are paired, if the battery
needs to be replaced, it is replaced together with the riser card. Lastly, as seen
here, the M.2 vault disk is located under the battery. The M.2 vault disk is also a
field replaceable unit. This concludes the inside tour. Remember to review the
documentation on the SolVe Desktop for proper removal and replacement of the
node’s compute module components.

PowerScale Hardware Concepts

Page 34 © Copyright 2020 Dell Inc.


Introduction to PowerScale Nodes

Advantages and Terminologies

Generation 6 4U Node

Gen 6 provides flexibility. From a customer perspective, it allows for easier


planning. Each chassis requires 4U in the rack, with the same cabling and a higher
storage density in a smaller data center footprint. It should be noted that this also
means that there is four times as much cabling across the Gen 6 4U chassis
populated with four nodes. Customers can select the ideal storage to compute ratio
for their workflow.

New PowerScale F600 nodes with full NVMe support deliver massive performance
in a compact form factor. OneFS delivers up to 80% storage utilization for
maximum storage efficiency. Data deduplication can further reduce storage
requirements by up to 30% and inline data compression on the F200, F600, F810
all-flash platforms, and the H5600 hybrid platform can reduce the space that is
consumed.

Generation 6 Terminologies

PowerScale Hardware Concepts

© Copyright 2020 Dell Inc. Page 35


Pre-Engagement Questionnaire

Pre-Engagement Questionnaire

PowerScale Hardware Concepts

Page 36 © Copyright 2020 Dell Inc.


Pre-Engagement Questionnaire

Module Objectives

After completing this module, you will be able to:


• Identify the job roles of people involved in the implementation.
• Explain the use of PEQ in implementation.

PowerScale Hardware Concepts

© Copyright 2020 Dell Inc. Page 37


Pre-Engagement Questionnaire

Job Roles

There are four job roles that are associated with PowerScale hardware installation
and implementation process.

1: Customer Engineer (CE):

• Performs hardware installation and hardware upgrade services


• Creates PowerScale cluster
• Verifies that hardware installation is successful

2: Implementation Specialist (IS):

• Has knowledge of storage system


• Implements cluster

3: Project Manager (PM):

• First contact of customers for service engagement


• Builds delivery schedule
• Coordinates services delivery with customer and service personnel
• Monitors progress of service delivery

4: Solutions Architect (SA):

PowerScale Hardware Concepts

Page 38 © Copyright 2020 Dell Inc.


Pre-Engagement Questionnaire

• Develops implementation plan


• Designs configuration

PowerScale Hardware Concepts

© Copyright 2020 Dell Inc. Page 39


Pre-Engagement Questionnaire

Pre-Engagement Questionnaire

The PowerScale PEQ is the replacement for the Configuration Guide. The stated
purpose of the PEQ is to document the Professional Services project installation
parameters and to facilitate the communication between the responsible resources.
The PEQ incorporates the process workflow and eases hand-off from Pre-Sales to
Delivery. It is a delivery document, which benefits other roles, helps define roles
and responsibilities and is not the same as the Qualifier.

Click images for enlarged view.

PowerScale Hardware Concepts

Page 40 © Copyright 2020 Dell Inc.


Pre-Engagement Questionnaire

PEQ Tour

The PEQ is an Excel spreadsheet consisting of eight tabs: Cover, Engagement


Details (SE), Solution Diagram (SE), Checklist(PM), Project Details(PM),
Hardware, Cluster, and Reference.

Click each image for enlarged view.

Cover

To start the application, open the PEQ spreadsheet tool. The first tab that is
displayed is the Cover tab. The Cover tab contains the creation date and the
customer name.

Engagement Details (SE)

Begin filling out the document from upper left to bottom right. SE shares the
Customer contact information and describes at a high level what the project team is
expected to do at each site, using the provided drop-down menus. The SE also
provides general customer environment information, such as Operating Systems in
use, backup apps and protocols, and any specialty licenses sold. Accurate and

PowerScale Hardware Concepts

© Copyright 2020 Dell Inc. Page 41


Pre-Engagement Questionnaire

complete customer information is important to the smooth and efficient planning


process.

Solution Diagram (SE)

On the Solution Diagram tab, the SE provides the solution diagrams or topologies
that are used during the presales cycle.

PowerScale Hardware Concepts

Page 42 © Copyright 2020 Dell Inc.


Pre-Engagement Questionnaire

Checklist (PM)

Project Manager begins with the Engagement Checklist tab to help them plan
project tasks with a great deal of granularity.

Project Details (PM)

It is also the responsibility of the Project Manager to maintain the Data Center
readiness information about the Project Details tab. Here the PM focuses on
verifying that each site has met the power, cooling, networking, and other
prerequisites before scheduling resources. The PM should also complete the
Administrative Details section with team member information, project Id details, and
an optional timeline.

PowerScale Hardware Concepts

© Copyright 2020 Dell Inc. Page 43


Pre-Engagement Questionnaire

Hardware

The hardware tab shows the physical connections parameters and some basic
logical parameters necessary to “stand up” the cluster. When multiple node types
are selected and defined on the Engagement Details tab, the Cluster Details
section includes a complete listing of the extended Node Details and Front-End
Switch details.

PowerScale Hardware Concepts

Page 44 © Copyright 2020 Dell Inc.


Pre-Engagement Questionnaire

Cluster

The Cluster tab represents a single cluster and its logical configuration. Each
section on the Cluster Tab has a designated number (Yellow Chevron). The
numbers represent the listed priority of that section and should be completed in
order starting with number one. This tab is split into sections that describe different
features. These tabs are enabled through the questions in the Licensing \ Features
section.

PowerScale Hardware Concepts

© Copyright 2020 Dell Inc. Page 45


Pre-Engagement Questionnaire

Reference

The Reference Tab provides frequently used content, cross‐references, and


checklists and other items that assist the delivery resources throughout the delivery
engagement. It is intended quick reference not as the authoritative source of that
information.

PowerScale Hardware Concepts

Page 46 © Copyright 2020 Dell Inc.


Pre-Engagement Questionnaire

Click image for enlarged view.

Note: The Solution Architect (SA) typically fills out the PEQ.

PowerScale Hardware Concepts

© Copyright 2020 Dell Inc. Page 47


Internal and External Networking

Internal and External Networking

PowerScale Hardware Concepts

Page 48 © Copyright 2020 Dell Inc.


Internal and External Networking

Module Objectives

After completing this module, you will be able to:


• Explain the significance of internal and external networks in clusters.
• Describe InfiniBand switches and cables and identify Ethernet cabling.

PowerScale Hardware Concepts

© Copyright 2020 Dell Inc. Page 49


Internal and External Networking

PowerScale Networking Architecture

OneFS supports standard network communication protocols IPv4 and IPv6.


PowerScale nodes include several external Ethernet connection options, providing
flexibility for a wide variety of network configurations9.

Network: There are two types of networks that are associated with a cluster:
internal and external.

Front-end, External Network

Client/Application PowerScale Storage


Layer Layer

Ethernet

Protocols: NFS, SMB, S3, Ethernet Backend communication


HTTP, FTP, HDFS, SWIFT Layer (PowerScale internal)

F200 cluster showing supported front-end protocols.

Clients connect to the cluster using Ethernet connections10 that are available on all
nodes.

9 In general, keeping the network configuration simple provides the best results with
the lowest amount of administrative overhead. OneFS offers network provisioning
rules to automate the configuration of additional nodes as clusters grow.

10Because each node provides its own Ethernet ports, the amount of network
bandwidth available to the cluster scales linearly.

PowerScale Hardware Concepts

Page 50 © Copyright 2020 Dell Inc.


Internal and External Networking

The complete cluster is combined with hardware, software, networks in the


following view:

Back-end, Internal Network

Double click image to enlarge.

OneFS supports a single cluster11 on the internal network. This back-end network,
which is configured with redundant switches for high availability, acts as the
backplane for the cluster.12

11 All intra-node communication in a cluster is performed across a dedicated


backend network, comprising either 10 or 40 GbE Ethernet, or low-latency QDR
InfiniBand (IB).

12 This enables each node to act as a contributor in the cluster and isolating node-
to-node communication to a private, high-speed, low-latency network. This back-
end network utilizes Internet Protocol (IP) for node-to-node communication.

PowerScale Hardware Concepts

© Copyright 2020 Dell Inc. Page 51


Internal and External Networking

Leaf-Spine Backend Network

The Gen 6x back-end topology in OneFS 8.2 and later supports scaling a
PowerScale cluster to 252 nodes. See the participant guide for more details.

22 downlinks per leaf - 40 Gb ports 10 uplinks per


Dell Z9100 switches leaf - 100 Gb ports

27 uplinks per
spine switch

4 leaf switches = max


of 88 nodes

Max scale out to 132 nodes with


2 spine switches

Leaf-Spine topology for a PoweScale cluster with up to 88 nodes.

Leaf-Spine is a two-level hierarchy where nodes connect to leaf switches, and leaf
switches connects to spine switches. Leaf switches do not connect to one another,
and spine switches do not connect to one another. Each leaf switch connects with
each spine switch and all leaf switches have the same number of uplinks to the
spine switches.

The new topology uses the maximum internal bandwidth and 32-port count of Dell
Z9100 switches. When planning for growth, F800 and H600 nodes should connect
over 40 GbE ports whereas A200 nodes may connect using 4x1 breakout cables.
Scale planning enables for nondisruptive upgrades, meaning as nodes are added,
no recabling of the backend network is required. Ideally, plan for three years of
growth. The table shows the switch requirements as the cluster scales. In the table,
Max Nodes indicate that each node is connected to a leaf switch using a 40 GbE
port.

PowerScale Hardware Concepts

Page 52 © Copyright 2020 Dell Inc.


Internal and External Networking

Installing a New Leaf-Spine Cluster

If you install a new cluster or scale a cluster to include 32 performance nodes


(F800, H600, and H500 models) with 40 GbE back-end ports, or more than 96
archive nodes (H400, A200, A2000 models) with 10 GbE back-end ports, use the
Leaf-Spine topology to configure the back-end network.

To install a new Leaf-Spine cluster, follow this workflow.


1. Install the switch rails.
2. Install the Spine switches followed by the Leaf switches.
3. Cable the leaf switches to the spine switches and then to the nodes for both the
networks.
4. Make sure the switch operating system version is 10.4.1.4P4 or later.

Important: Do not connect Leaf to Leaf or Spine to Spine switches.

5. Create a cluster by using any four nodes on the first Leaf switch.
6. Confirm that OneFS 8.2 or later is installed on the cluster.
7. Add the remaining nodes to the cluster that was created in step 5.
8. Confirm the cluster installation by checking the CELOG events.

Important: The events reported can be related to links introduced


between two or more Leaf switches to node connections (downlinks)
or between two or more Leaves to Spine switch connections (uplinks).
Incorrect cabling is also reported in events.

PowerScale Hardware Concepts

© Copyright 2020 Dell Inc. Page 53


Internal and External Networking

Legacy Connectivity

Three types of InfiniBand cable are used with currently deployed clusters. Older
nodes and switches, which run at DDR or SDR speeds use the legacy CX4
connector. In mixed environments (QDR nodes and DDR switch, or conversely) a
hybrid IB cable is used. This cable has a CX4 connector on one end and a QSFP
connector on the other. However, QDR nodes are incompatible with SDR switches.
On each cable, the connector types identify the cables. The graphic shows, the
combination of the type of node and the type of InfiniBand switch port determines
the correct cable type.

PowerScale Hardware Concepts

Page 54 © Copyright 2020 Dell Inc.


Internal and External Networking

Node Interconnectivity

1: Backend ports int-a and int-b. The int-b port is the upper port. Gen 6 backend
ports are identical for InfiniBand and Ethernet and cannot be identified by looking at
the node. If Gen 6 nodes are integrated in a Gen 5 or earlier cluster, the backend
will use InfiniBand. Note that there is a procedure to convert an InfiniBand backend
to Ethernet if the cluster no longer has pre-Gen 6 nodes.

2: PowerScale nodes with different backend speeds can connect to the same
backend switch and not see any performance issues. For example, an environment
has a mixed cluster where A200 nodes have 10 GbE backend ports and H600
nodes have 40 GbE backend ports. Both node types can connect to a 40 GbE
switch without effecting the performance of other nodes on the switch. The 40 GbE
switch provides 40 GbE to the H600 nodes and 10 GbE to the A200 nodes.

3: There are two speeds for the backend Ethernet switches, 10 GbE and 40 GbE.
Some nodes, such as archival nodes, might not need to use all of a 10 GbE port
bandwidth while other workflows might need the full utilization of the 40 GbE port
bandwidth. The Ethernet performance is comparable to InfiniBand so there should
be no performance bottlenecks with mixed performance nodes in a single cluster.
Administrators should not see any performance differences if moving from
InfiniBand to Ethernet.

4: Gen 6.5 backend ports use the PCIe slot.

Gen 6 nodes can use either an InfiniBand or Ethernet switch on the backend.
InfiniBand was designed as a high-speed interconnect for high-performance

PowerScale Hardware Concepts

© Copyright 2020 Dell Inc. Page 55


Internal and External Networking

computing, and Ethernet provides the flexibility and high speeds that sufficiently
support the PowerScale internal communications.

Gen 6.5 only supports Ethernet. All new, PowerScale clusters support Ethernet
only.

Warning: With Gen 6, do not plug a backend Ethernet topology into a


backend InfiniBand NIC. If you plug Ethernet into the InfiniBand NIC,
it switches the backend NIC from one mode to the other and will not
come back to the same state.

PowerScale Hardware Concepts

Page 56 © Copyright 2020 Dell Inc.


Internal and External Networking

F200 and F600 Network Connectivity

The graphic shows a closer look at the external and internal connectivity. Slot 1 is
used for backend communication on both the F200 and F600. Slot 3 is used for the
F600 2x 25 GbE or 2x 100 GbE front-end network connections. The rack network
daughter card (rNDC) is used for the F200 2x 25 GbE front-end network
connections.

The F200 and F600 have no dedicated management port.

PCIe slot 1 - used for all BE


communication PCIe slot 3 - used for F600 FE

rNDC used for F200 FE

Note: The graphic shows the R640 and does not represent the F200 and F600 PCIe and rNDC
configuration.

Tip: Interfaces are named "25gige-N" or "100gige-N." Interface


names may not indicate the link speed. For example, the interface
name for NICs that are running at the lower speed such as 10 Gb do
not change to "10gige-1." You can use ifconfig to check the link
speed.

PowerScale Hardware Concepts

© Copyright 2020 Dell Inc. Page 57


Internal and External Networking

PowerScale Architecture - External Network

Eight node Gen 6 cluster showing supported protocols.

The external network provides connectivity for clients over standard file-based
protocols. It supports link aggregation, and network scalability is provided through
software in OneFS. A Gen 6 node has to 2 front-end ports - 10 GigE, 25 GigE, or
40 GigE, and one 1 GigE port for management. Gen 6.5 nodes have 2 front-end
ports - 10 GigE, 25 GigE, or 100 GigE.

In the event of a Network Interface Controller (NIC) or connection failure, clients do


not lose their connection to the cluster. For stateful protocols, such as SMB and
NFSv4, this prevents client-side timeouts and unintended reconnection to another
node in the cluster. Instead, clients maintain their connection to the logical interface
and continue operating normally. OneFS supports Continuous Availability (CA) for
stateful protocols like SMB, and NFSv4 is supported.

PowerScale Hardware Concepts

Page 58 © Copyright 2020 Dell Inc.


Internal and External Networking

Breakout Cables

Backend breakout cables

The 40 GbE and 100 GbE connections are 4 individual lines of 10 GbE and 25
GbE. Most switches support breaking out a QSFP port into four SFP ports using a
1:4 breakout cable. The backend is done automatically when the switch detects the
cable type as a breakout cable. The front end is often configured manually on a per
port basis.

PowerScale Hardware Concepts

© Copyright 2020 Dell Inc. Page 59


Internal and External Networking

Cabling Considerations

Listed here are some general cabling considerations.

• On a Gen 6 chassis, ensure that each member of a node pair is connected to a
  different power source13.
• Before creating the cluster, do a quick cable inspection.
• The front-end, client-facing network connections should be evenly distributed
  across patch panels in the server room. Distributing the connections may avoid
  single points of failure.
• Use care when handling and looping copper InfiniBand cables and any type of
  optical network cable. Bending or mishandling cables can result in damaged
  and unusable cables.
• Do not coil the cables to less than 10 inches in diameter to prevent damage.
  Never bend cables beyond their recommended bend radius.

13 The use of Y cables is not recommended because the node's power supply is no
longer redundant if all power is supplied through the same cable. Verify that all
cables are firmly seated and that the wire bails are firmly in place to keep the power
cables seated.



Cluster Management Tools


Module Objectives

After completing this module, you will be able to:


• Identify tools used to manage PowerScale.


OneFS Management Tools

The OneFS management interface is used to perform various administrative and
management tasks on the PowerScale cluster and nodes. Management capabilities
vary based on which interface is used. The different types of management
interfaces in OneFS are:

• Serial Console
• Web Administration Interface (WebUI)
• Command Line Interface (CLI)
• OneFS Application Programming Interface (API)
• Front Panel Display


Serial Console Video

Movie:

The web version of this content contains a movie.

Link:
https://edutube.emc.com/Player.aspx?vno=KjBgi9m8LmZLw58klDHmOA==&autopl
ay=true

Script: Four options are available for managing the cluster. The web
administration interface (WebUI), the command-line interface (CLI), the serial
console, or the platform application programming interface (PAPI), also called the
OneFS API. The first management interface that you may use is a serial console to
node 1. A serial connection using a terminal emulator, such as PuTTY, is used to
initially configure the cluster. The serial console gives you serial access when you
cannot or do not want to use the network. Other reasons for accessing using a
serial connection may be for troubleshooting, site rules, a network outage, and so
on. Shown are the terminal emulator settings.

The configuration Wizard automatically starts when a node is first powered on or
reformatted. If the Wizard starts, the menu and prompt are displayed as shown.
Choosing option 1 steps you through the process of creating a cluster. Option 2 will
exit the Wizard after the node finishes joining the cluster. After completing the
configuration Wizard, running the isi config command enables you to change
the configuration settings.


Configuration Manager

For initial configuration, access the CLI by establishing a serial connection to the
node designated as node 1. The serial console gives you serial access when you
cannot or do not want to use the network. Other reasons for accessing using a
serial connection may be troubleshooting, site rules, a network outage, and so on.

Serial Port14

Configure the terminal emulator utility to use the following settings (a connection example follows the list):

• Transfer rate = 115,200 bps

14 The serial port is usually a male DB9 connector. This port is called the service
port. Connect a serial null modem cable between a serial port of a local client, such
as a laptop, and the node service port. Connect to the node designated as node 1.
As most laptops today no longer have serial ports, you might need to use a USB-
to-serial converter. On the local client, launch a serial terminal emulator.


• Data bits = 8
• Parity = none
• Stop bits = 1
• Flow control = hardware
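A minimal connection sketch from a laptop, assuming a Linux or macOS client where the USB-to-serial adapter appears as /dev/ttyUSB0 (the device path varies by system, and flow control is set in the emulator itself):

# Open a 115,200 bps serial session with the 'screen' utility
# (8 data bits, no parity, 1 stop bit are the defaults)
screen /dev/ttyUSB0 115200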

More Information on Command Prompt15

15 Either a command prompt or a Configuration Wizard prompt appears. The
command prompt displays the cluster name, a dash (-), a node number, and either
a hash (#) symbol or a percent (%) sign. If you log in as the root user, a hash (#)
prompt appears. If you log in as another user, a % symbol appears. For example,
Cluster-1# or Cluster-1%. The prompt is the typical prompt that is found on most
UNIX and Linux systems. When a node first powers on or reformats, the
Configuration Wizard automatically starts. If the Configuration Wizard starts, the
prompt that is displayed is shown. There are four options: create a new cluster, join
an existing cluster, exit the wizard and configure manually, and reboot into SmartLock
Compliance mode. Choosing option 1 creates a cluster, while option 2 joins the
node to an existing cluster. If you choose option 1, the Configuration Wizard steps
you through the process of creating a cluster. If you choose option 2, the
Configuration Wizard ends after the node finishes joining the cluster. You can then
configure the cluster using the WebUI or the CLI.


isi config

• Edit Wizard settings
• Common commands - shutdown, status, name
• Changes the prompt to >>>
• Other "isi" commands are not available in the configuration console

The isi config command, pronounced "izzy config," opens the configuration
console. The console contains the settings configured from the time the Wizard
started running.

Use the console to change initial configuration settings. When in the isi config
console, other configuration commands are unavailable. The exit command is
used to go back to the default CLI. An annotated example session is sketched below.
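A brief sketch of a session, assuming a cluster named Cluster-1 and using only the commands listed above (prompts are illustrative and output is omitted):

Cluster-1# isi config
>>> status          (review the current configuration settings)
>>> name            (view or change the cluster name)
>>> exit            (return to the default CLI)
Cluster-1#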


Web Administration Interface (WebUI)

• OneFS version
• User must have logon privileges
• Connect to any node in cluster over HTTPS on port 8080
• Multiple browser support


The WebUI is a graphical interface that is used to manage the cluster.

The WebUI requires at least one IP address configured on one of the
external Ethernet ports present in one of the nodes.


Example browser URLs:


• https://192.168.3.11:8080
• https://engineering.dees.lab:8080

To access the web administration interface from another computer, an Internet
browser is used to connect to port 8080. The user must log in using the root
account, the admin account, or an account with log-on privileges. After opening the
web administration interface, there is a four-hour login timeout. In OneFS 8.2.0 and
later, the WebUI uses the HTML5 doc type, meaning it is HTML5 compliant in the
strictest sense but does not use any HTML5-specific features. Previous versions of
OneFS require Flash.


Command Line Interface (CLI)

The CLI can be accessed in two ways:

• Out-of-band17
• In-band18

In-band access uses an SSH client such as OpenSSH or PuTTY, while out-of-band
access uses a terminal emulator over the serial connection. Access
to the interface changes based on the assigned privileges.

OneFS commands are code that is built on top of the UNIX environment and are
specific to OneFS management. You can use commands together in compound
command structures, combining UNIX commands with customer-facing and internal
commands, as sketched below.
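A minimal sketch of a compound command, assuming an account with privileges to run isi status (output fields vary by OneFS version):

# Filter the cluster status output with the standard UNIX grep utility
isi status | grep -i health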


17Accessed using a serial cable that is connected to the serial port on the back of
each node. As many laptops no longer have a serial port, a USB-serial port adapter
may be needed.

18 Accessed using an external IP address that is configured for the cluster.

PowerScale Hardware Concepts

Page 70 © Copyright 2020 Dell Inc.


Cluster Management Tools

1: The default shell is zsh.

2: OneFS is built upon FreeBSD, enabling the use of UNIX-based commands, such as
cat, ls, and chmod. Every node runs OneFS, including the many FreeBSD kernel
and system utilities.

3: Connections make use of Ethernet addresses.

4: OneFS supports management isi commands. Not all administrative
functionalities are available using the CLI.

5: CLI command use includes the capability to customize the base command
with the use of options, also known as switches and flags. A single command with
multiple options results in many different permutations, and each combination
results in different actions performed.

6: The CLI is a scriptable interface. The UNIX shell enables scripting and execution
of many UNIX and OneFS commands.

Caution: Follow guidelines and procedures to implement scripts
appropriately so that they do not interfere with regular cluster operations.
Improper use of a command, or using the wrong command, can be
potentially dangerous to the cluster, the node, or to customer data.


CLI Usage

• Can use common UNIX tools
• "help" shows needed privileges
• Shows syntax and usage
• Option explanation

The man isi and isi --help commands are important for a new
administrator. These commands provide an explanation of the available isi
commands and command options. You can also view a basic description of any
command and its available options by typing the -h option after the command.
Examples are sketched below.
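A few illustrative help invocations (output is omitted; isi status is used only as an example command):

man isi           # manual page describing the isi command set
isi --help        # lists the available isi commands
isi status -h     # basic description and options for a specific command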


OneFS Application Programming Interface (API)

The OneFS Application Programming Interface, or OneFS API, is a secure and
scriptable19 interface for managing the cluster.

The API uses HTTPS to encrypt communications.

OneFS applies authentication and RBAC controls to API commands to ensure that
only authorized commands are run.

The example shows a description for https://<cluster IP address>:8080/platform/quota/quotas. A sketch of calling this endpoint follows the numbered callouts below.

1: PAPI conforms to the REST architecture. An understanding of HTTP/1.1 (RFC
2616) is required to use the API.

2: Requests are structured like URLs and can be executed in a browser that supports
authentication.

19A chief benefit of PAPI is its scripting simplicity, enabling customers to automate
their storage administration.


3: Some commands are not PAPI aware, meaning that RBAC roles do not apply.
These commands are internal, low-level commands that are available to
administrators through the CLI. Commands not PAPI aware: isi config, isi
get, isi set, and isi services

4: The number indicates the PAPI version. If an upgrade introduces a new version
of PAPI, some backward compatibility ensures that there is a grace period for old
scripts to be rewritten.
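A minimal sketch of calling the endpoint described above with curl, assuming the cluster is reachable at the example address 192.168.3.11 and that the account used has the required privileges (-k skips certificate verification and is suitable for lab use only; you are prompted for the password):

curl -k -u admin "https://192.168.3.11:8080/platform/quota/quotas"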


Front Panel Display

Front Panel Display of a Gen 6 chassis.

The Gen 6 front panel display is an LCD screen with five buttons that are used for
basic administration tasks20.

The Gen 6.5 front panel has limited functionality21 compared to the Gen 6.

20 These tasks include adding the node to a cluster and checking node or drive
status, events, cluster details, capacity, and IP and MAC addresses.

21 You can join a node to a cluster, and the panel displays the node name after the
node has joined the cluster.



Course Summary


Now that you have completed this course, you should be able to:
→ Discuss installation engagement actions.
→ Explain the use of PEQ in implementation.
→ Describe PowerScale nodes.
→ Identify the PowerScale node internal and external networking components.
→ Explain the PowerScale cluster management tools.



Appendix


Electrostatic Discharge
Electrostatic discharge is a major cause of damage to electronic components and
is potentially dangerous to the installer. To avoid ESD damage, review ESD
procedures before arriving at the customer site and adhere to the precautions when
onsite.

Clean Work Area: Clear the work area of items that naturally build up electrostatic
discharge.

Antistatic Packaging:
Leave components in
antistatic packaging until
time to install.


No ESD Kit Available:


• Before touching
component, put one
hand firmly on bare
metal surface.
• After removing
component from
antistatic bag, do
NOT move around
room or touch
furnishings,
personnel, or
surfaces.
• If you must move
  around or touch
  something, first put
  the component back in
  its antistatic bag.

ESD Kit: Always use an ESD kit when handling components.

Don't Move: Minimize movement to avoid buildup of electrostatic discharge.


PowerScale Nodes
Individual PowerScale nodes provide the data storage capacity and processing
power of the PowerScale scale-out NAS platform. All of the nodes are peers to
each other and so there is no single 'master' node and no single 'administrative
node'.

• No single master
• No single point of administration

Administration can be done from any node in the cluster, as each node provides
network connectivity, storage, memory, non-volatile RAM (NVDIMM), and
processing power found in the Central Processing Units (CPUs). There are also
different node configurations for compute and capacity. These varied configurations
can be mixed and matched to meet specific business needs.

Each node contains:

• Disks
• Processor
• Cache
• Front-end network connectivity


Tip: Different Gen 6 node types can exist within the same cluster. Every
PowerScale node is equal to every other PowerScale node of the
same type in a cluster. No one specific node is a controller or filer.


F-Series
The F-series nodes sit at the top of both performance and capacity, with all-flash
arrays for ultra-compute and high capacity. The all-flash platforms can accomplish
250,000-300,000 protocol operations per chassis and achieve 15 GB/s aggregate read
throughput from the chassis. Even when the cluster scales, the latency remains
predictable.

• F80022
• F81023

22 The F800 is suitable for workflows that require extreme performance and
efficiency. It is an all-flash array with ultra-high performance. The F800 sits at the
top of both the performance and capacity platform offerings when implementing the
15.4TB model, giving it the distinction of being both the fastest and densest Gen 6
node.

23 The F810 is suitable for workflows that require extreme performance and
efficiency. The F810 also provides high-speed inline data deduplication and in-line
data compression. It delivers up to 3:1 efficiency, depending on your specific
dataset and workload.


H-Series
After F-series nodes, next in terms of computing power are the H-series nodes.
These are hybrid storage platforms that are highly flexible and strike a balance
between large capacity and high-performance storage to provide support for a
broad range of enterprise file workloads.

• H40024
• H50025
• H560026
• H60027

24 The H400 provides a balance of performance, capacity and value to support a
wide range of file workloads. It delivers up to 3 GB/s bandwidth per chassis and
provides capacity options ranging from 120 TB to 720 TB per chassis. The H400
uses a medium compute performance node with SATA drives.

25The H500 is a versatile hybrid platform that delivers up to 5 GB/s bandwidth per
chassis with a capacity ranging from 120 TB to 720 TB per chassis. It is an ideal
choice for organizations looking to consolidate and support a broad range of file
workloads on a single platform. H500 is comparable to a top of the line X410,
combining a high compute performance node with SATA drives. The whole Gen 6
architecture is inherently modular and flexible with respect to its specifications.

26The H5600 combines massive scalability – 960 TB per chassis and up to 8 GB/s
bandwidth in an efficient, highly dense, deep 4U chassis. The H5600 delivers inline
data compression and deduplication. It is designed to support a wide range of
demanding, large-scale file applications and workloads.

27 The H600 is designed to provide high performance at value, delivering up to
120,000 IOPS and up to 12 GB/s bandwidth per chassis. It is ideal for high
performance computing (HPC) workloads that do not require the extreme
performance of all-flash. These are spinning-media nodes with various levels of
available computing power - the H600 combines turbo compute performance nodes
with 2.5" SAS drives for high-IOPS workloads.


A-Series
The A-series nodes have less compute power than the other nodes and are
designed for data archival purposes. The archive platforms can be
combined with new or existing all-flash and hybrid storage systems into a single
cluster that provides an efficient tiered storage solution.

• A20028
• A200029

28The A200 is an ideal active archive storage solution that combines near-primary
accessibility, value and ease of use.

29 The A2000 is an ideal solution for high-density, deep archive storage that
safeguards data efficiently for long-term retention. The A2000 is capable of
containing eighty 10 TB drives for 800 TB of storage by using a deeper chassis with
longer drive sleds that contain more drives in each sled.



Glossary
Front Panel Display
The Front Panel Display is located on the physical node or chassis. It is used to
perform basic administrative tasks onsite.

OneFS CLI
The command-line interface runs "isi" commands to configure, monitor, and
manage the cluster. Access to the command-line interface is through a secure shell
(SSH) connection to any node in the cluster.

PAPI
The customer uses OneFS application programming interface (API) to automate
the retrieval of the most detailed network traffic statistics. It is divided into two
functional areas: One area enables cluster configuration, management, and
monitoring functionality, and the other area enables operations on files and
directories on the cluster.

Serial Console
The serial console is used for initial cluster configurations by establishing serial
access to the node designated as node 1.

WebUI
The browser-based OneFS web administration interface provides secure access
with OneFS-supported browsers. This interface is used to view robust graphical
monitoring displays and to perform cluster-management tasks.



