Overview
April 14, 2021
Presentation sections
• Hardware Overview
• PowerStore 1000T/X – 9000T/X Hardware Overview
• PowerStore 500T Hardware Overview
• PowerStore T Architecture Overview
• PowerStore X Architecture Overview
• PowerStore Discovery
• PowerStore Clustering
• Resource Balancing
• PowerStore Manager GUI
• CLI
• REST API
• Data Path
• File
• NVMe-FC
• Storage Network Scaling
• Volumes & Volume Groups
• Protection Policies
• Snapshots
• Thin Clones
• Block Remote Replication
• PowerStore metro node
• Virtualization Integration
• Serviceability
• Security Overview
• Import External Storage
• Services
Cluster Overview
[Diagram: dual-node architecture – base enclosure, expansion storage enclosure for scale-up with extra drives, and clustered-mode appliances]
• Every appliance must contain a 4-Port Mezz card (Mezz 0)
– The 4-Port Card is used for connections such as the cluster interconnect and management of the appliance
– Customers will have 2 types of 4-Port Mezz cards to select from: Optical or Copper
• Each node has 2x I/O Module slots for configuring optional additional frontend ports
• Each node has a power supply unit (PSU) which can power both nodes if needed
Embedded Module
Overview
[Diagram: Embedded Module with the USB port, Mezz 0, Micro DB9 serial port, SAS ports, and 1GbE ports labeled]
The Embedded Module has the following components/connectors:
• One 4-port card (Mezz 0)
• One SAS controller with two external MiniSAS HD ports
• One Broadcom 1GbE device with two RJ45 ports; in the field, the service port is used for console access
• One Micro DB9 serial port for a direct-connect SP console (not used by customers)
• A USB port for system recovery in case of a dual M.2 failure and for password reset
4-Port Card
Overview
• Each node contains 2 I/O Module slots for added connectivity
• I/O Module configurations must match on both Nodes
• The following I/O Modules are optional for each appliance:
– When mixed, NVMe SCM drives will serve as a dedicated metadata tier, improving performance with the low latency capability of NVMe SCM drives
– When mixed, NVMe SSD drives will serve as the storage tier, and a minimum of 6 NVMe SSD drives is required
Drive configuration options:
– 6x drives minimum | up to 21x NVMe SCM drives | 2 or 4 NVRAM drives
– 6x drives minimum | up to 21x NVMe SSD drives | 2 or 4 NVRAM drives
– 25x SAS SSD drives (expansion enclosure) | up to 21x NVMe SSD drives | 2 or 4 NVRAM drives
• All 25 drive slots support NVMe drives; SAS SSD drives are not supported in any slot of the base enclosure
• SAS SSD drives are only supported in attached expansion enclosures
* Low Line voltages (110 – 120 volts) require the use of a step-up transformer
4-port 32Gb FC I/O Module (ColdspellX): FC and NVMe-FC supported across PowerStore T models
• PowerStore T models support 32Gb FC I/O Modules for SAN and NVMe-FC connectivity (New in PowerStoreOS 2.0)
– Traditional block and vVol storage presented to external hosts over FC
• Workflow:
1. Add additional IPs to the existing storage network (unless unused IPs exist)
2. Map unmapped ports to the additional IPs
• Management Network
– 1x Out of Band (OOB) Management switch
▪ 2x OOB Management is supported for HA
– Onboard 1GbE Ports
• Data Network
– 2x Top of Rack (ToR) Ethernet Switches
– Bonded 4-Port Card Ports 0 & 1
– Layer 2 interconnect
PowerStore X Architecture
Overview
(applicable to 1000X – 9000X models)
• Capabilities:
– SAN (FC/iSCSI)
– vVol (FC/iSCSI)
– Embedded Applications (Virtual Machines)
– Clustering of multiple X model appliances is supported (New in PowerStoreOS 2.0)
• Active-Active architecture
– Each node has access to the same storage
– Active-optimized/Active-unoptimized front end connectivity
• Customer Virtual Machines will leverage PowerStore storage and data services
Overview
[Diagram: the PowerStore Controller VM on each node uses passthrough access to the storage; together the nodes form the PowerStore ESXi cluster]
Port groups on the 4-Port Card (Ports 0 & 1 are required, Ports 2 & 3 are optional):
Port Group | PowerStore Network | Port 0 (Uplink 2) | Port 1 (Uplink 1) | Port 2 (Uplink 4) | Port 3 (Uplink 3)
PG_MGMT | Management, Initial Discovery Network | Active | Active | Standby | Standby
PG_MGMT_ESXi | Management | Active | Active | Standby | Standby
PG_Storage_INIT1 | Storage | Unused | Active | Unused | Unused
PG_Storage_INIT2 | Storage | Active | Unused | Unused | Unused
PG_Storage_TGT1 | Storage | Standby | Active | Standby | Standby
PG_Storage_TGT2 | Storage | Active | Standby | Standby | Standby
PG_Storage_TGT3 | Storage | Standby | Standby | Standby | Active
PG_Storage_TGT4 | Storage | Standby | Standby | Active | Standby
PG_vMotion1 | vMotion | Standby | Standby | Standby | Active
• Switch ports must support untagged native VLAN traffic for system discovery
• Data Network
– 2x Top of Rack (ToR) Ethernet Switches
– Bonded 4-Port Card Ports 0 & 1
– Layer 2 interconnect
1. Once the rack and stack of the appliance is completed
• Cabling completed (to be covered in a later presentation)
• Power on appliance(s)
2. Connect workstation to the same L2 network and discover the system
• Use the PowerStore Discovery Utility to discover appliance(s)
• Select appliance(s) and click Create Cluster
3. Deploy a new cluster
• Run through the Initial Configuration Wizard (ICW)
• Use cases:
– Initial discovery
▪ Discovery of unconfigured appliances
– Adding appliances
▪ Adding appliance to an existing configured cluster
– Finding already configured cluster(s)
▪ Once a cluster is configured, it can be discovered again from the tool
L2 Network
• Cabling required for discovery
– PowerStore T uses the 1Gb Onboard Management port
– PowerStore X uses the first two ports of the 4-Port Card
• There are now three methods to discover a PowerStore system and run through the Initial Configuration Wizard (ICW)
– Service port (recommended)
▪ Direct connect to PowerStore and launch the ICW
– Discovery tool
▪ Leverage the discovery tool to launch the ICW
[Diagram: PowerStore T rear view (Nodes A and B) – 1GbE management ports, 10GbE 4-Port Card ports, and 2200W power supplies connected to the discovery network]
• Each Cluster contains a Primary appliance that runs all control path services
• Primary appliance specific services include:
– Global Management IP
– Primary Management DB
– Cluster high availability (pacemaker)
• All other appliances are standby appliances from the primary control path perspective
– They run a subset of control path services to manage themselves and communicate with the primary
– These appliances still serve I/O; they are standby only in regard to the Primary appliance specific services
• File resources and services will always run on the primary appliance of a Unified Cluster
– File does not failover to a new Primary appliance
– File is highly available on the Primary appliance
Clustering Resources
• The ability of PowerStore to use analytics to balance storage resources (volumes)
– Which node to assign a new volume to on an appliance
– Which appliance to assign a new volume to in a multi-appliance cluster
[Diagram: new volumes placed by the Resource Balancer in a 1-appliance cluster and in a 4-appliance cluster]
• Appliance assignments are determined by:
– Current storage space utilization
▪ Storage trends and forecasts
– System limits
▪ Max volumes per appliance / volume group
– Appliance status and health
▪ Offline, failures, read-only (100% full)
– Performance metrics are not considered
– Resource Balancer does not proactively or automatically move existing volumes from one appliance to another
Overview
What is Resource Balancing?
• The ability to migrate storage resources between appliances in a cluster
– Manual migration
– Assisted migration
• Leverage capacity monitoring, forecasts, and alerts with suggested remediation options
[Diagram: 4-appliance cluster – Appliance 1 at 90%+ capacity, with candidate volumes evaluated for migration to Appliances 2–4]
• 2-appliance cluster
• Event is recorded as a major alert
• Remediation in this example: manually move a volume
• 2-appliance cluster
• Configure a new host server under Compute
• Provision and map a new volume to the host server under Storage
• Select iSCSI or Fibre Channel and click Next
• Review the summary and click Add Host
• New host is added
• Click Volumes List or Storage to configure a volume for the host
• In this example: click Storage, then launch the Create Volumes wizard from the drop-down to the right of Volumes
• Enter volume properties and click Next
• The default Performance policy is Medium, but it can be changed
• Select a host or host group
– Set a LUN ID or allow automatic generation
– Use LUN 0 for boot-from-SAN
• Click Next
• Review the summary and click Create
• Click Refresh
• The volume is now listed
• Complete steps on the host to initialize and format the new volume
• Configure MPIO settings on the host
• Standalone Client
– Translates CLI into REST API calls in the background
Additional help topics will be provided once you connect to the remote server. Please provide destination
address to obtain remote server command help.
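For example, a minimal sketch, assuming the standalone client is the pstcli utility from the PowerStore CLI package (destination address and credentials are placeholders, and option names should be verified against the CLI reference for your client version):

    # List volumes on the cluster; the client translates this into REST API calls
    pstcli -d <cluster_mgmt_ip> -u admin -p <password> volume show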
• The REST API allows you to interact with PowerStore Management functionality,
including:
– System settings and monitoring
– Host and remote system connections
– Network settings
– Storage management
– Data protection
– Support configuration
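As a minimal read-only sketch of such a call (assuming basic authentication against the default self-signed certificate; the volume resource and the select query parameter follow the public REST API reference and should be verified for your PowerStoreOS release):

    # List volume ids, names, and sizes over the REST API
    curl -k -u admin:<password> \
      "https://<cluster_mgmt_ip>/api/rest/volume?select=id,name,size"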
[Diagram: base enclosure drive slots 19–24 shared by Nodes A and B – NVRAM devices and data drives; the NVRAM devices are battery protected and covered by Data at Rest Encryption]
• The size of the fingerprint cache is based on the capacity of the appliance when
the software boots on the system
– The fingerprint cache does not expand as drives are added to the appliance
– This size is not exposed to the user
• The contents of the fingerprint cache within system memory are updated to a backup location on the data drives periodically
– This operation is not configurable and is hidden from the user
• The periodic update process is based on the number of entries that are different
than the backup location
– Only the new entries are updated to the backup location when an update occurs
– The threshold varies by model
• Within PowerStore’s Dynamic Resiliency Engine (DRE), all drives within the
system are automatically consumed within an appliance and the appropriate
amount of redundancy is applied
– Proprietary algorithms are used to store and protect data within the system
– Resiliency sets are used as fault domains to improve reliability while minimizing spare space overhead
▪ Resiliency sets are also known as fault resiliency sets in customer facing documentation
– Having multiple failure domains increases the reliability of the system
• Drive fault tolerance is the amount of concurrent drive failures, per resiliency set,
that a system can sustain without causing a Data Unavailable/Data Loss (DU/DL)
situation
– The protection scheme within the resiliency set defines how many failures can occur
• In the PowerStoreOS 2.0 release, the drive tolerance level can be set to single
drive failure or double drive failure during the initial configuration of an appliance
• Initial configuration could be initial cluster creation or when the appliance is being
added to an existing cluster
• Configuring the drive tolerance level sets the data protection tolerance level for all
resiliency sets created within the appliance
• The drive tolerance level is set for the lifetime of the appliance and cannot be
changed without a non-data-in-place factory reset
– Pre-2.0 systems utilize single drive failure protection
[Diagram: example DRE extent layouts across drives D0–D11 spanning Node 1 and Node 2 – an 8+1 layout and an 8+2 layout, each striping User Data 0–4 across the drives with a distributed spare row]
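As a rough worked example of the capacity trade-off (distributed spare space excluded): in an 8+1 layout, roughly 1 extent in every 9 carries parity, an overhead of 1/9 ≈ 11% of raw capacity; in an 8+2 layout the overhead is 2/10 = 20%, in exchange for tolerating two concurrent drive failures per resiliency set.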
• PowerStore performance policies allow a user to define the I/O priority for volumes
within an appliance
– Can be customized at time of creation or can be changed at any time
– This is supported only for block volumes at this time
– This is not a limits-based policy approach; rather, it is a share-based I/O prioritization mechanism
– When in effect, I/O for volumes set to High receives priority over resources set to Medium or Low
• The performance policy setting on a volume can be set to Low, Medium, or High
– Medium is the default setting at resource creation
• The performance policies only take effect when there is contention for resources
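To see which policy each volume currently carries, a read-only sketch (the performance_policy_id attribute name is taken from the REST API reference and should be verified for your release):

    # Report each volume's assigned performance policy
    curl -k -u admin:<password> \
      "https://<cluster_mgmt_ip>/api/rest/volume?select=name,performance_policy_id"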
• NFS
– NFSv3
– NFSv4 - 4.1
– Secure NFS
• FTP/SFTP
• In the event of a PowerStore node failure, NAS Servers automatically failover from
one NAS node to the other
– Failover generally completes within 30 seconds to avoid host timeouts
– NAS Servers are automatically moved to the peer node during NDU
– Failback is a manual process
• Current Node
– The node that the NAS server is currently running on
– Changing this moves the NAS server to run on a different node
• Preferred Node
– The node that the NAS server should be ideally running on
– Acts as a marker that is based on the round-robin algorithm
NDMP Backups
• Components
– Primary Storage – Source system to be backed up, such as PowerStore
– Data Management Application (DMA) – Backup application that orchestrates the backup sessions,
such as Dell EMC NetWorker
– Secondary Storage – The backup target, such as Data Domain
• Tree Quotas – Limits the capacity consumed on a specific directory on the file
system
– All files in the directory and subdirectories contribute towards the limit
[Diagram: PowerEdge host with HBAs connected through a switch to PowerStore]
• Fibre Channel front-end ports simultaneously support SCSI and NVMe access
– Users can choose whether to use both protocols on the same port or each protocol on separate ports
– Both protocols are always supported; there is no option to disable one of them
• Connectivity flow (a host-side example follows this list):
– Set up Fibre Channel front-end ports (zoning)
– Create a Host or Host Group and select NVMe as the protocol
▪ Add initiator(s)
▪ The nqn. is the NVMe identifier, similar to the iqn. for iSCSI
– Create a Volume/Thin Clone or Volume Group
▪ Not supported with vVols
– Map the NVMe Host to the Volume(s)
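A quick host-side check of the identifiers involved, assuming a Linux host with the nvme-cli and open-iscsi packages installed (file paths are the usual distribution defaults):

    # NVMe host identifier (NQN) used when adding the initiator to the PowerStore host object
    cat /etc/nvme/hostnqn
    # For comparison, the iSCSI initiator identifier (IQN)
    cat /etc/iscsi/initiatorname.iscsi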
– PowerStore supports up to 8 storage networks of IPv4 and/or IPv6 addresses per interface (New in PowerStoreOS 2.0)
– Improved GUI workflows to create networks and assign interfaces
Storage
• Volumes
• Volume Groups
• A Volume Group is a logical container for a group of volumes or volume thin clones
– Provides a single point of management for one or more resources
Volume Groups
Considerations
• Single volume restore operations are only allowed when “write order consistency”
is disabled on the Volume Group
• A Protection Policy is a set of user defined rules used to establish local or remote
data protection across storage resources
– Users do not configure snapshot schedules or replication sessions on a resource, but
rather assign a Protection Policy to it
• A Protection Policy consists of rules which define what level of protection to apply
• When a Protection Policy is assigned to a resource:
– The Snapshot Rule is automatically applied
– Replication is automatically configured
• Snapshot Rules:
– Hourly snapshots
– Daily snapshots
– Weekly snapshots
• Replication Rules:
– Asynchronous Replication
▪ 1 hour RPO
• Create one or more Protection Policies and use them across multiple resources
– E.g., create Gold, Silver, and Bronze service levels and assign them as needed
Example:
• Create a Protection Policy Containing:
– 3 Snapshot Rules:
▪ Hourly snapshots
▪ Daily snapshot at midnight
▪ Weekly snapshot taken on Sundays
– 1 Replication Rule:
▪ Remote Replication with 1 hour Recovery Point Objective (RPO)
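A policy like this can also be assembled over the REST API; the sketch below creates just the hourly rule, and the resource, field, and enum names are taken from the public REST API reference, so verify them against your PowerStoreOS release before use (write calls also require a login-session token, omitted here for brevity):

    # Create an hourly snapshot rule with a 24-hour retention (sketch only)
    curl -k -u admin:<password> -H "Content-Type: application/json" \
      -d '{"name": "Hourly Snapshots", "interval": "One_Hour", "desired_retention": 24}' \
      "https://<cluster_mgmt_ip>/api/rest/snapshot_rule"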
• Snapshots consume overall system storage capacity as changes to the source are
made
– Ensure that the system has enough capacity to accommodate snapshots
[Diagram: Application 1 volume blocks A–D with later changes A', B', D'; Monday and Tuesday snapshots preserve the original data]
Volume/Volume Group Snapshots
Overview
• Volume snapshot names are case insensitive and are limited to 128 characters
– Example of a valid snapshot name: SnApShOt`~!@#$%^&*()-_=+[]\;',./{}|:"<>?
– Names for existing snapshots can be edited
– Snapshot names need to be unique within the Volume family
[Diagram: snapshot family tree – a storage resource with Snaps 1–3, a thin clone created from Snap 2, and snapshots of the thin clones (S1TC Snap 1/2, S2TC Snap 1/2)]
• There are 2 different file snapshot types, both used for data protection:
– Protocol (Read-Only)
▪ Read-Only snapshot that can be exported as an SMB share and/or NFS export
• Access is provided by the parent NAS Server
▪ This is the default type created by a Snapshot Rule
– .Snapshot (Read-Only)
▪ Read-Only snapshot that can be accessed through Previous Versions or .snapshot
• Note: You may customize the name when taking a manual snapshot, but editing
the name or the access type of a file system snapshot is not possible
• When manually creating a snapshot, the user can customize how long the snapshot is retained
– To ensure a snapshot doesn’t expire, choose “No Automatic Deletion”
– Existing snapshots can be changed to “No Automatic Deletion” if needed
• The snapshot aging service runs hourly in the background and cleans up expired
snapshots as needed
– Snapshots are deleted in batches to improve performance and efficiency
• Need access when some amount of data has been deleted or corrupted?
1.Find a snapshot which contains the data needed
2.Create a Thin Clone of the snapshot
3.Provide host access to the Thin Clone to access the data
• Need access when some amount of data has been deleted or corrupted?
1.Find a snapshot which contains the data needed
2.Depending on the snapshot type:
a. Protocol (Read-Only) – Create an SMB share and/or NFS export
b. .Snapshot – Access the snapshot data via Previous Versions or .snapshot
• When creating a snapshot rule, the schedule is entered in your local time
– Stored within the system in UTC time format
• Snapshots created by a Protection Policy will have the following naming scheme:
– Block snapshots:
▪ Snapshot Rule Name.Resource Name.Timestamp with nano-time
• Example: Hourly Snapshots.Volume 1.2019-07-18T17:00:00Z 702319493
– File snapshots:
▪ Snapshot Rule Name_Resource Name_Timestamp with nano-time
• Spaces are automatically replaced with an underscore
• Example: Hourly_Snapshots_FS1_2019-10-30T07:00:01Z_614817807
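The timestamps in these names are the UTC values described above; on a Linux host the same format (minus the nano-time suffix) can be produced with:

    date -u +%Y-%m-%dT%H:%M:%SZ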
• A Snapshot Rule can be edited at any point in time, even while in use
– Storage resources will automatically inherit the new settings once the rule is updated
within the Protection Policy
– If the Retention is changed, all existing snapshots created by the rule will be updated
• The restore operation is used to replace the contents of a parent storage resource
with data from an associated snapshot
– Returns the parent resource to a previous point in time of itself
– Only pointer updates occur (near instantaneous)
– Snapshot/Thin Clone hierarchy is preserved
[Diagram: goal – restore Application 1 to the Monday snapshot; blocks A–D with later changes A', B', D']
• The refresh operation is used to replace the contents of a resource with data from
a resource within the same family
– Only pointer updates occur (near instantaneous)
– Snapshot/Thin Clone hierarchy is preserved
[Diagram: goal – refresh a thin clone from its parent resource; Host 1 accesses the parent volume, Host 2 accesses the Monday backup thin clone, with a protection policy and RPO applied]
Volume/Volume Group Refresh Operation
Volumes/Thin Clones - Supported Operations
• A file system or file system thin clone refresh operation replaces the contents of a
snapshot with the current view of the parent object
– Only pointer updates occur (near instantaneous)
– Snapshot/Thin Clone hierarchy is preserved
[Diagram: goal – refresh Snapshot 1 from the parent resource accessed by Client 1]
• A Thin Clone is a read-write copy of a resource that shares blocks with the parent
resource
• Thin Clones are not full copies of the original data
– Pointer based, redirect on write technology
– Creating a Thin Clone is near instantaneous
• A Thin Clone of a Volume, Volume Group (VG) or File System can be created
using the latest image of the Volume, File System, Snapshot, or another Thin
Clone
– The Thin Clone is automatically assigned to the same NAS Server
• When created, the Thin Clone is displayed in the same page as the parent
resource
• Parent resources can be deleted without deleting the associated Thin Clone
– No blocks used by the Thin Clone are removed
– The Thin Clone becomes its own resource
– Only unique blocks within the parent resource and snapshots will be reclaimed
• Parallel processing
– Ability to create and provide multiple copies of the same data to multiple servers/clients
for parallel processing
• System deployment
– Quickly utilize a common image to easily deploy new environments
• Redundancy
– Increase data redundancy and fault tolerance levels
– Failover to a secondary site
• Compliance
– Requires an additional footprint
– For instance, insurance companies, banks, and government sites
• Migration
– Migrate data between storage systems
– Tech Refresh
• Supports PowerStore T model Arrays and PowerStore X model Arrays – also mixed
• IP based only
• Policy based
• One-to-many (1:n)
– Different volumes replicate to multiple destination clusters
– No support for fan-out or cascaded/chained replication
• Many-to-one (n:1)
– Individual volumes from different clusters replicate to a single cluster
[Diagram: replication between Cluster 1 and Cluster 2 over the WAN – management and data connections; volume pairs V1–V3 managed as replication sessions; Manage Replication covers SSL certificates, latency, policy & rule, RPO, and systems]
• Replication traffic is routed via the storage network and its corresponding network interfaces
• One individual port or the system bond mapped for the storage network can be tagged for replication traffic. The default is bond0 (the first two ports on the Mezz card), which shares host and replication workloads
• Source and destination systems may use different configurations, but port tagging must be symmetric on the nodes within the same appliance
The mapping of the storage network and the replication tag must be the same on Node A and Node B
[Diagram: Nodes A and B with replication tagging across the 4-Port Mezz card, I/O Module 0, and I/O Module 1]
2: id = rep-ff83db99-025d-4f78-b33e-8a867df98ac7
last_sync_timestamp = 08/08/2019 09:05:21 AM
local_resource_id = ae2a5053-cdb6-4fe7-9a48-8276b0610f37
remote_resource_id = 854d9d80-5c9d-4827-b15f-6ff2f764a16b
progress_percentage =
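Session attributes like the ones shown above can also be pulled over the REST API; a minimal read-only sketch (field names should be verified against the REST reference for your release):

    # List replication sessions with their state and sync progress
    curl -k -u admin:<password> \
      "https://<cluster_mgmt_ip>/api/rest/replication_session?select=id,state,progress_percentage"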
[Tables: replication support by object type (Volume, Volume Group, Remote System, Appliance, Cluster) and allowed operations by replication session state on the source and on the destination (Resuming, Operating normally, Paused, Failing Over, Failed Over, Synchronizing); (1) new Source/Destination after Failover, (2) Forced Failover]
Replication Session
DR aspects
• Snapshots
– Snapshots on Source are replicated in chronological order during RPO sync
• Enables automated business continuity with zero Recovery Point Objective (RPO)
and Recovery Time Objective (RTO)
– Zero RPO means there’s an expectation for zero data loss
– Zero RTO means that there is no loss of data access
• 32 Gb/s Fibre Channel support (16Gb/s supported now, 32Gb/s expected FY 2022 Q2)
• 64TB virtual volume support
• Embedded management server
• Support for Ansible 1.1 automation
• Events, alerts, and support
– iDRAC-based hardware monitoring
– Dial-home support
• FRU support
Hardware components
Metro node Hardware Configuration
• NIC Cards
– Intel x710 10GbE Quad Port Base-T (PCIe slot 3 – CPU2)
– Intel x710 10GbE Quad Port SFP+ (rNDC slot – CPU1; for LOM connections)
• iDRAC Connection – Rear Port – DO NOT USE (We cross connect to peer iDRAC thru LCOM on RMII interface)
• Detailed Contents:
– Two 1U CMA’s (Cable Management Arms)
▪ CMA allows server to slide forward for service without disconnection
of any cables.
– Pre-dress cables:
▪ MGMT1 & MGMT2 Cat6 Shielded Ethernet Cables (Lime and Violet,
end labels)
▪ LCOM1 & LCOM2 SFP+ DAC (direct attach passive copper) cables
(black, end labels).
▪ Two Black Power cords (no labels)
▪ Two Gray Power cords (no labels)
– One red Cat6 shielded Ethernet cable (red, no labels, fastened to the loop)
[Image: assembled CMA arms]
– PortMap Tag (gets fastened to the cable loop)
– Expectations are for customers to run Fibre Channel and
Ethernet cables through here as well
• Benefits:
– Perform monitoring and active management from PowerStore Manager
– Monitor events without requiring continuous polling
– Automation for configuration process
• Native vSphere features can be leveraged between PowerStore and external ESXi hosts
– Easily integrate PowerStore into your vSphere environment
• Once the VASA Provider is registered, this Storage Container becomes accessible
– Exposes all of the storage available on the cluster as a vVol datastore
– Enables external hosts to use it for VM storage
• Block
– Atomic Test & Set (ATS) with Compare and Write (CAW) – Manages locking of files on
shared volumes
▪ Enables multiple ESXi hosts to write to the same volume without corruption
– Block Zero – Hardware-assisted zeroing using Write Same
▪ Speeds up VM provisioning
– Full Copy – Hardware-assisted copying using XCOPY
▪ Saves network traffic by offloading the copy operation to the storage
– Thin Provisioning and Unmap – Free capacity reporting and enables space reclaim
▪ Maximizes efficiency by reclaiming unused space
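These offloads can be confirmed from an ESXi host once a PowerStore volume is presented; the device identifier below is a placeholder:

    # Show VAAI primitive status (ATS, Clone/XCOPY, Zero/Write Same, Delete/Unmap) for one device
    esxcli storage core device vaai status get -d naa.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx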
• In PowerStore, the license will be installed automatically during the Initial Configuration Wizard
– All-inclusive license, e.g. replication, snapshots, migration, etc.
• A license will be issued to each Appliance in the cluster
– Management will be centralized for the customer to run from the primary Appliance
– The primary Appliance will be the point of contact for all the Appliances
[Diagram: license files delivered over HTTPS to Appliance 1 and Appliance 2, with the primary appliance as the point of contact]
• Customers can use SupportAssist to provide remote support from Dell EMC to the
PowerStore cluster in case of issues
– Gathering Data Collects
– Troubleshooting
• SupportAssist is rebranded from ESRS, with a new, simple Connect Home solution
– Same backend as the ESRS channel
– New and simple frontend
– Does NOT require any support account or user information
• Each Appliance will be connected directly to Dell EMC
– Can be enabled from the ICW or from the Settings page
– Centralized management
– Centralized management
• SupportAssist types:
– Direct Connect
– Gateway Connect
• It should also be noted that CloudIQ supports PowerStore
Support Materials
Overview
• An administrator can pause the alert notifications that are sent to Dell EMC Support during a maintenance window
– E.g. unplugging cables, swapping drives, etc.
• Disable Support Notifications can disable the call home for one or multiple Appliance(s) in the cluster
– Enable/disable at any time
• The Maintenance Window Duration can be changed at any time during the suspension
• Benefits:
– Execute service commands in the Appliances
– Run scripts and user CLI commands
• A PowerStore node may encounter hardware or software failures that leave the node unable to boot
– E.g. an I/O module (SLIC) removed
• A node is placed into the Service Mode state if failures occur
– E.g. a boot loop, or corrupted D@RE lockbox copies
• While in Service Mode, the appliance does NOT serve any customer I/O on that node
– I/O fails over to the other node
• SMTP (Simple Mail Transfer Protocol) is a TCP/IP protocol used for sending and receiving e-mail
• The SMTP server is used to send alert notifications from the PowerStore cluster to the user’s email address
• It can be enabled and disabled at any time
• In order to enable SMTP, provide:
– Server address
– From Email Address
– Port
• Send Test Email is available to check if the SMTP Server is set up correctly
– Send a test email by entering an email address
• An administrator has the ability to reboot and power down a single node in a PowerStore appliance
– Done through the GUI, REST API, or a service script
– Customers can perform maintenance on their nodes
▪ Replace failed hardware, such as an I/O module (SLIC) or power supply, etc.
• When the Primary node is rebooted or powered down, the docker containers fail over to the secondary node
– The Primary role switches over to the other node
• In order to bring a node back after a power down, a service script is required
– Running the service script svc_node power_up will power up the node (see the sketch below)
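As a minimal sketch of that step (run from the peer node's service shell as the service user; any additional arguments vary by PowerStoreOS release):

    # Power the halted node back up using the service script named above
    svc_node power_up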
• Non-Disruptive Upgrade
• Customer Driven
• Software, Hotfixes and Disk Firmware
• One Node at a Time for Availability
• PowerStore supports Data at Rest Encryption (D@RE) which encrypts all user
data that is written to a storage drive at 256-bit Advanced Encryption Standard
(AES)
– PowerStore utilizes Self Encrypting Drives (SEDs) to implement D@RE
• Supports array-based, self-managed keys with the option of downloading a user backup of the keys
• Benefits:
– Compliance with National Institute of Standards and Technology (NIST) Special Publication 800-111 and others
– Protects user data in the event of physical security breaches
• LDAP specifies the application protocol used by many different Directory Services
– Microsoft Active Directory (Windows™ oriented)
– OpenLDAP
[Screenshot: LDAP configuration example – domain lab5, bind account cn=readonly,dc=lab5 with a hidden password]
• Solution
– PowerStore allows for native migration of this client’s data from the old array to the new PowerStore
• Data migration
– Move your data to a future-proof infrastructure, PowerStore
• Application migration
– Move your existing applications to PowerStore
[Diagram: source system migrating to PowerStore]
Architecture for Non-disruptive Import
Connectivity
[Diagram: client connected to both the source system and PowerStore during import]
Overview for Non-disruptive Import
1. Setup
• Zoning for the front-end connectivity
• Add iSCSI connectivity between the source system and the PowerStore system
• Install the host plugin
2. Import
• Add the source system
• Select source storage resources to import
• Click Begin Import
• Import session is created
• Path flip to PowerStore
• Background copy from the source system starts
3. Cutover
• Cutover is allowed once the import session is in a Ready to Cutover state (source and PowerStore are in sync)
• Auto-cutover is available
• Once cut over, there is no rollback
• Source system
– Dell EMC’s Midrange arrays
– Upgrade may be applicable
• PowerStore cluster
– Orchestrator
▪ Native software that orchestrates the
import
[Diagram: the import orchestrator runs natively on PowerStore and communicates with the source system]
Architecture for Agentless Import
Connectivity
[Diagram: client connected to both the source system and PowerStore during import]
Overview for Agentless Import (legend: action by user; new for agentless import)
1. Setup
• Zoning for the front-end connectivity
• Add iSCSI connectivity between the source system and the PowerStore system
• Add host(s) to PowerStore
2. Import
• Add the source system
• Select source storage resources to import
• Map host(s)
• Click Begin Import
• Import session is created
• Path flip to PowerStore
• Enable destination volume
• Start copy
• Background copy from the source system starts
3. Cutover
• Cutover is allowed once the import session is in a Ready to Cutover state (source and PowerStore are in sync)
• Auto-cutover is available
• Once cut over, there is no rollback
**For the most up-to-date supported versions, refer to the PowerStore ESSM
Support Matrix**
[Table: host plugin support matrix – rows for VMware vSphere 6.7 and Linux SLES 15 showing the DC Array Plugin and EQL MPIO entries (NA where not applicable)]
• Name
– Only for XtremIO
• Management IP Address
• iSCSI IP Address
– Multiple comma separated iSCSI IP
addresses can be entered
• Username
• Password
• CHAP Setting (Optional)
Use IT providers to deploy new technologies 70% more quickly*
* Based on a September 2020 Principled Technologies Test Report commissioned by Dell Technologies comparing in-house deployment vs. Dell EMC ProDeploy for Enterprise deployment service. Full report: http://facts.pt/JPiIlWm
Deployment Services | For PowerStore
Get production-ready with less time and effort
• ProDeploy for Enterprise – Hardware & system software installation services (minimum for multi-appliance clustering)
• ProDeploy Plus – Recommended
• Add-on Services: 2-Host Addition, File Addition*, Local Protection (snaps or thin clones), Remote Replication, Hardware Components, Expansion Enclosures, Additional Deployment Time
• Additional services: Migration Services (ProDeploy Plus), Residency Services (ProDeploy & ProDeploy Plus), Data Sanitization for Enterprise, Data Destruction for Enterprise
Note: Customers with 1000+ Dell EMC infrastructure assets may be eligible for ProSupport One. Learn more.
PowerStore includes a 1-year limited warranty on hardware only.
Optimize for Storage
PowerStore technical experts work remotely
to optimize performance & efficiencies
1 Based on an August 2020 internal analysis of service requests from August 2019 to August 2020 for Dell EMC storage, data protection and hyperconverged products comparing service requests for connected products with ProSupport Plus for Enterprise vs. products without it. Connectivity is via Secure Remote Services. Actual results may vary.
Dell Technologies
Education Services