Hewlett Packard Enterprise
HPE 3PAR StoreServ Storage best practices guide
A reference and best practices guide for HPE 3PAR StoreServ Storage
Technical white paper
Contents
Typographical conventions
Advisories
Introduction
Audience
Overview
New hardware platforms for the StoreServ platform
What's new in HPE 3PAR OS version 3.2.2 (including MU2)?
Getting started with HPE 3PAR StoreServ Storage
FC hosts zoning
Hosts and host sets
Provisioning block storage from an HPE 3PAR StoreServ
Host-based volume managers
Adaptive Flash Cache
Considerations for provisioning virtual volumes
Provisioning file storage from an HPE 3PAR StoreServ
High availability
Persistent Ports
Priority Optimization
Virtual volumes
Virtual LUNs (exports) and volume sets
Remote Copy
Streaming Asynchronous Replication
Adaptive Optimization
Security
Naming conventions
Naming convention examples
Naming conventions with File Persona objects
External System Reporter, now EOL (End of Life)
System Reporter in the SSMC
Object selection
Ongoing management and growth
Storage Analytics on the Web
Autonomic rebalance
Appendix A. Supported host personas
Appendix B. Scalability limits
Block persona scalability limits
File Persona scalability limits
Appendix C. File Persona compatibility with HPE 3PAR block features
Appendix D. Using the StoreServ Management Console (SSMC) to administer File Persona
Appendix E. Using the StoreServ Management Console (SSMC) to manage snapshots
Summary
Typographical conventions
This guide uses the following typographical conventions.

Table 1. Typographical conventions

ABCDabcd        Used for dialog elements such as titles, button labels, and other screen elements: When prompted, click Finish to complete the installation.
ABCDabcd        Used for user input, command names, filenames, paths, and screen output.
<ABCDabcd>      Used for variables in user input, filenames, paths, and screen output.
Best practice   Used to highlight best practices for a particular topic or section.
Advisories
To avoid injury to people or damage to data and equipment, be sure to observe the cautions and warnings in this guide. Always be careful when handling any electrical equipment.

WARNING!
Warnings alert you to actions that can cause injury to people or irreversible damage to data or the OS.

CAUTION!
Cautions alert you to actions that can cause damage to equipment, software, or data.

Note
Notes are reminders, tips, or suggestions that supplement the procedures included in this guide.
Introduction
Audience
This guide is for system and storage administrators of all levels. Anyone who plans storage policies, configures storage resources, or monitors the storage usage of HPE 3PAR StoreServ Storage should read this guide.
User interfaces
Previously, two user interfaces were available for the administration of HPE 3PAR StoreServ: the HPE 3PAR OS CLI software and the HPE 3PAR IMC (InServ Management Console) software. With version 3.2.2, HPE 3PAR introduced the HPE 3PAR StoreServ Management Console (SSMC); all operations for File Persona are accomplished using the CLI or the SSMC.
Units of measure
All units of storage (capacity) are calculated base 2 (x 1024). Therefore:
+ 1 KiB = 2^10 bytes = 1,024 bytes
+ 1 MiB = 2^20 bytes = 1,048,576 bytes
+ 1 GiB = 2^30 bytes = 1,024 MiB = 1,073,741,824 bytes
+ 1 TiB = 2^40 bytes = 1,024 GiB = 1,099,511,627,776 bytes
All units of performance (speed) are calculated base 10 (x 1000). Therefore:
+ 1 KB = 1,000 bytes
+ 1 MB = 10^6 bytes = 1,000,000 bytes
+ 1 GB = 10^9 bytes = 1,000 MB = 1,000,000,000 bytes
+ 1 TB = 10^12 bytes = 1,000 GB = 1,000,000,000,000 bytes
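The two conventions above are easy to cross-check in code. The snippet below (an illustration only, not part of any HPE toolset) also shows why a base-10 "TB" figure comes out about 9 percent smaller than a TiB:

```python
# Capacity units: base 2 (x 1024)
KiB, MiB, GiB, TiB = 2**10, 2**20, 2**30, 2**40
# Performance units: base 10 (x 1000)
KB, MB, GB, TB = 10**3, 10**6, 10**9, 10**12

print(MiB)                 # 1048576
print(TiB)                 # 1099511627776
# A decimal terabyte is ~91% of a binary tebibyte:
print(f"{TB / TiB:.3f}")   # 0.909
```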
Table 2. Related documentation

For an overview and explanation of HPE 3PAR technology: HPE 3PAR concepts guide
For using the CLI: HPE 3PAR CLI Administrator's manual
For identifying storage system components and detailed alert information: HPE Guided Troubleshooting
For using the HPE 3PAR CIM API: HPE 3PAR CIM API Programming Reference

For identifying storage system configuration specifications and compatibility information, go to the Single Point of Connectivity Knowledge (SPOCK) website at hpe.com/storage/spock
Overview
HPE 3PAR StoreServ block storage concepts and terminology
The HPE 3PAR StoreServ array is comprised of the following logical data layers:
+ Physical disks (PDs)
+ Chunklets
+ Logical disks (LDs)
+ Common provisioning groups (CPGs)
+ Virtual volumes (VVs)
The relationship between system data layers is illustrated in figure 1. Each layer is created from elements of the layer above. Chunklets are drawn from physical disks. Logical disks are created from groups of chunklets. Common provisioning groups are groups of logical disks. And virtual volumes use storage space provided by CPGs. The virtual volumes are exported to hosts and are the only data layer visible to hosts.
Figure 1. HPE 3PAR StoreServ system data layers
Physical disks
A physical disk is a hard drive (spinning media or solid-state drive) located in an HPE 3PAR StoreServ drive enclosure.
Chunklets
Physical disks are divided into chunklets. Each chunklet occupies physically contiguous space on an FC or NL disk. On all current HPE 3PAR StoreServs, all chunklets are 1 GB. Chunklets are automatically created by the HPE 3PAR OS, and they are used to create logical disks. A chunklet is assigned to only one logical disk.
Logical disks
A logical disk is a collection of chunklets arranged as rows of RAID sets. Each RAID set is made up of chunklets from different physical disks. Logical disks are pooled together in common provisioning groups, which allocate space to virtual volumes.
The underlying logical disks are automatically created by the HPE 3PAR OS when you create VVs. The RAID type, space allocation, growth increments, and other logical disk parameters are specified when you create a CPG or can be modified later. The HPE 3PAR StoreServ supports the following RAID types:
+ RAID 1/RAID 10
+ RAID 5/RAID 50 (must be enabled from the CLI for NL drives)
+ RAID Multi-Parity (MP) or RAID 6
+ RAID 0 (must be enabled from the CLI and provides no data protection from failed drives)
Cage
Cage is a legacy HPE 3PAR term and is interchangeable with "Drive Enclosure," "Enclosure," and "Drive Shelf."
Virtual copy
Virtual Copy is a legacy HPE 3PAR term and is interchangeable with "Snapshot."
CPGs (Common provisioning groups)
A CPG is a template for the creation of logical disks that allocate space to virtual volumes on demand. A CPG allows up to 65,536 virtual volumes to share a CPG's assigned resources. You can create Fully Provisioned Virtual Volumes (FPVVs), Thinly Deduped Virtual Volumes (TDVVs), and Thinly Provisioned Virtual Volumes (TPVVs) that draw space from a CPG's logical disks. It is important to note that if no volumes of any type have been created in a CPG, it consumes no space.
VVs (Virtual volumes)
VVs draw their resources from the LDs in CPGs and are exported as LUNs (Logical Unit Numbers) to hosts. Virtual volumes are the only data layer visible to the hosts. You can create clones (previously known as full copies) or snapshots (previously known as virtual copies) of virtual volumes. Clones remain available if the original base volume becomes unavailable. VVs can be created using the CPGs created at installation time or user-defined CPGs.
Exporting virtual volumes
For a host to see a VV, the volume must be exported as a LUN. Volumes are exported by creating VV-LUN pairings (VLUNs) on the system. When you create VLUNs, the system produces both VLUN templates that establish export rules, and active VLUNs that the host sees as LUNs, as attached disk devices. A VLUN will be created for each path available to the host for each VV exported.
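Because one active VLUN is instantiated per path, per exported VV, the active VLUN count grows multiplicatively with path count. A quick illustrative calculation (the function name is ours, not an HPE API):

```python
def active_vluns(exported_vvs: int, paths_per_host: int) -> int:
    """One active VLUN is created for each path available to the host,
    for each VV exported to it."""
    return exported_vvs * paths_per_host

# A host zoned with 4 paths and 10 exported VVs sees 40 active VLUNs:
print(active_vluns(10, 4))  # 40
```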
FPVVs (Fully Provisioned Virtual Volumes)
An FPVV is a volume that uses logical disks that belong to a CPG. Unlike TPVVs or TDVVs, FPVVs have a set amount of user space that is allocated for user data. The fully provisioned volume size is allocated and consumed at the time of provisioning. Size limits range from 256 MiB to 16 TiB. The volume size can be increased at any time (provided free space is available) up to the maximum 16 TiB size without any downtime; however, the VV size cannot be decreased below the initial allocation.
Note
In previous versions of the HPE 3PAR OS there was a provisioning type termed CPVV (Copy Provisioned Virtual Volume), which simply meant that the provisioned VV had associated snapshot space assigned. As of HPE 3PAR OS version 3.2.2, all volumes created are associated with snapshot space in the same CPG. Using the additional menu options during VV creation in the SSMC, or using the -snp_cpg option of the createvv CLI command, a different CPG can be chosen for snapshot space. Also, snapshots are reservationless; if no snapshots are generated, no space is consumed.
TPVVs (Thinly Provisioned Virtual Volumes)
A TPVV is a volume that uses logical disks that belong to a CPG. TPVVs or TDVVs associated with the same CPG draw space from those CPG's LDs as needed, allocating space on demand in 16 KiB increments for each TPVV. As the volumes that draw space from the CPG require additional storage, the HPE 3PAR OS automatically creates additional logical disks or expands the size of existing LDs and adds them to the CPG, until the CPG reaches a user-defined growth limit, if one has been set, or the system runs out of space.
TDVVs (Thinly Deduped Virtual Volumes)
In addition to the features and functionality of TPVVs, TDVVs go through an additional process before allocating space on a disk. All data writes with a block size 16 KiB or greater have a 32-bit hash (CRC—Cyclic Redundancy Check) generated, and the resulting value is compared to a hash lookup table to determine if the data is redundant. If the data is redundant, there is only an entry added to the destination volume's lookup table; otherwise it is written to disk. For more information on this, refer to the HPE 3PAR Thin Technologies white paper located here.
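The write path described above can be sketched in a few lines. This is a simplified illustration, not HPE's implementation; a real array also verifies data on a hash match, since a 32-bit CRC alone can collide:

```python
import zlib

store = {}          # hash -> first physical copy of the block (simplified)
volume_table = {}   # (volume, lba) -> hash reference

def write_block(volume: str, lba: int, data: bytes) -> None:
    """Hash-based dedup sketch: identical blocks share one physical
    copy; a duplicate write only adds a lookup-table entry."""
    h = zlib.crc32(data)        # 32-bit hash, as described in the text
    if h not in store:
        store[h] = data         # new data: actually written to disk
    volume_table[(volume, lba)] = h

write_block("vv1", 0, b"A" * 16384)
write_block("vv2", 0, b"A" * 16384)   # duplicate: no new physical write
print(len(store))   # 1 physical copy backs both volume entries
```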
Clones (previously known as Full Copies)
A clone duplicates all the data from a base volume to a destination volume. The base volume is the original volume that is copied to the destination volume. The clone on the destination volume remains available if the original base volume becomes unavailable.
A clone requires the destination volume have usable capacity equal to or greater than the usable capacity of the base volume being cloned. As of HPE 3PAR OS 3.2.1, the clone can be exported immediately after creation, while the data copy continues in the background.
Snapshots (previously known and licensed as Virtual Copy)
Unlike a clone, which is a block-for-block duplicate of an entire volume, snapshots preserve a bitmap of a VV at a particular point in time. Updates to VVs are written to SD (Snap Data) space and the bitmap (Snap Admin space) of the VV.
Snapshots for FPVVs, TPVVs, clones, and other snapshots are created using copy-on-write techniques, available only with the HPE 3PAR snapshot software license (aka Virtual Copy); snapshots for TDVVs are created using ROW (Redirect On Write). Hundreds of snapshots of each virtual volume can be created, assuming that there is sufficient storage space available. It is worth noting that snapshots do not consume any space unless data on the base volume has been updated and the original data copied to the SD (Snap Data) space. Changed data is copied only once regardless of the number of snapshots taken. Snapshots are particularly useful for test/dev environments, as they can be created in seconds and exported while not affecting production data. Also, testers and developers can be granted the ability to create snapshots without having any other administrative privileges, lowering administration requirements (see the HPE 3PAR CLI guide for correct usage for granting the updatevv privilege). Snapshots can now be updated without the requirement to un-export and re-export the VV. For more information on snapshot/Virtual Copy technologies, refer to the white paper.
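The copy-on-write behavior described above — the original data copied once to snap-data space on the first overwrite, regardless of how many later writes hit the same block — can be modeled in a short sketch (illustrative only, not HPE code):

```python
class SnapshotVolume:
    """Copy-on-write sketch: a block's original content is copied to
    snap-data (SD) space only on the first overwrite after a snapshot."""

    def __init__(self, blocks: dict):
        self.base = dict(blocks)   # block id -> current data
        self.snap_data = {}        # preserved originals (SD space)

    def take_snapshot(self) -> None:
        self.snap_data = {}        # new snapshot: initially zero space

    def write(self, block: int, data: str) -> None:
        if block in self.base and block not in self.snap_data:
            self.snap_data[block] = self.base[block]  # copied only once
        self.base[block] = data

    def read_snapshot(self, block: int):
        return self.snap_data.get(block, self.base.get(block))

vv = SnapshotVolume({0: "orig"})
vv.take_snapshot()
vv.write(0, "new")
vv.write(0, "newer")          # second write: no additional SD copy
print(vv.read_snapshot(0))    # orig
```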
Note
Creating snapshots requires an HPE 3PAR Snapshot (aka Virtual Copy) software license.
Figure 2. HPE 3PAR File Persona logical view (File Shares, File Stores, Virtual File Servers, File Provisioning Groups)
File Provisioning Group
A File Provisioning Group (FPG) is an instance of the HPE intellectual property Adaptive File System. It controls how files are stored and retrieved. Each FPG is transparently constructed from one or multiple virtual volumes (VVs) and is the unit for replication and disaster recovery for the File Persona Software Suite. There are up to 16 FPGs supported on a node pair.
Virtual File Server
A Virtual File Server (VFS) is conceptually like a server; as such, it presents virtual IP addresses to clients, participates in User Authentication Services, and can have properties for such characteristics as user/group Quota Management and Antivirus policies.
File Stores
File Stores are the slice of a VFS and FPG where snapshots are taken, capacity Quota Management can be performed, and Antivirus Scan Services policies are customized.
File Shares
File Shares are what provide data access to clients via SMB, NFS, and the Object Access API, subject to the share permissions applied to them.
New hardware platforms for the StoreServ platform

Table 3. 8000 series (8200, 8400, 8440, 8450): per-model maximum drives, maximum raw capacity, FC 16 Gb/s host ports, iSCSI/FCoE host ports, FC and iSCSI/FCoE system limits, and Remote Copy maximum VVs (synchronous and asynchronous periodic, by node count)

Table 4. 20000 series (20450, 20800, 20840, 20850): the same per-model scalability metrics as Table 3
What's new in HPE 3PAR OS version 3.2.2 (including MU2)?
Remote Copy enhancements
+ Remote Copy now supports Asynchronous Streaming; in the industry this is also known as "True Async."
Asynchronous Streaming allows a source array to acknowledge a host write before the destination array has acknowledged the write back to the source array. Asynchronous Streaming Remote Copy is perfect for environments where very small RPOs are required, and environments where synchronous replication is desired but the replication link latencies exceed 10 ms (5 ms for FCIP), where using synchronous replication would result in unacceptable latency.
Note
Streaming Asynchronous Remote Copy is supported on FC and FCIP transports only.
Tuning and performance enhancements
+ Improvements in the tunesys (data rebalancing) behavior.
+ Tunesys now starts automatically after initiation of the admithw command on 7200 and 8400 platforms. Although the balancing algorithm takes care not to interfere with performance, care should be exercised not to execute the command during peak hours.
+ More granular reporting of data moved, now expressed in terms of GiB moved and GiB to be moved.
+ Tunesys raw speed has been increased.
+ Adaptive Optimization (AO) now supports "Premium" mode to keep SSD drives filled as much as possible.
+ Data deduplication performance enhancements.
+ Updating snapshots (updatevv) no longer requires un-export and re-export of virtual volumes.
Security
+ Increased number of LDAP servers from one to unlimited.
+ Support for subdomains in the same root domain.
+ Support for load balancers in front of LDAP (Active Directory) servers.
Resiliency
+ Persistent Checksum—support for end-to-end data integrity on the 8000 and 20000 series platforms.
Efficiency
+ New flash cache creation will be done with RAID 0 to increase available space for AFC.
Reporting with System Reporter in SSMC
SSMC comes with a fresh way of reporting capacity and performance data. Using Reports under the System Reporter main menu, users can launch the integrated reporting tool within SSMC. Reports enable users to run historical and real-time reports. This new approach offers great advantages over the previous approaches of querying historical data.
System Reporter in SSMC provides the following enhanced features:
+ Convenient access to configuration options for selecting systems to include for reporting, specifying sampling parameters, scheduling reports, and generating alerts
+ Extensive selection of reports for obtaining performance and storage utilization statistics on selected objects (i.e., hosts, ports, nodes, physical disks, virtual volumes, etc.)
+ Quick access to predefined reports that contain useful statistics for most common types of installations
+ Customization of reports using the standard Web interface that provides specifically selected and formatted reports for specified systems
+ Options for choosing the time and duration for the collection of reporting statistics, which can be initiated at a specific time, collected over a period, and/or compared between ranges of periods
+ Capability to isolate and zoom in and out of time periods
+ Performance alerts that can be configured via threshold alerts; once the criteria for an alert are met, a visual notification of the alert is displayed on the SSMC dashboard
+ Ability to edit generated reports and change object definitions
+ Report customization that allows hundreds of different report generations
+ A System Reporter database whose size is now tunable up to 1 TiB, allowing for very long data retention
+ Data from the System Reporter database can now be exported
General
+ VLAN tagging is now supported for iSCSI connectivity on the 8000 and 20000 platforms.
+ IPv6 is now supported for iSCSI connectivity on the 8000 and 20000 series arrays.
+ Deduplication can now be enabled and disabled globally using the "setsys" command.
Storage Federation
+ Online Import now supports 16 Gb and import priorities.
+ Online migration now supports up to four source arrays to one destination array (unidirectional).
Getting started with HPE 3PAR StoreServ Storage
Best practice: In addition to following the recommendations in the HPE 3PAR physical planning guide, it is important to keep the number and type of physical disks, as well as the number of drive enclosures, as evenly distributed as possible behind node pairs to facilitate maximum performance and load distribution.
Best practice: When tuning CPGs/VVs as a result of a hardware upgrade or changing requirements, use the default tunesys concurrent task level of two in order to limit processing overhead in a production environment.
Setting up your HPE 3PAR StoreServ system ports
Port locations and nomenclature
The HPE 3PAR CLI and SSMC display the controller node, FC, iSCSI, 1 Gigabit, and 10 Gigabit Ethernet port locations in the following format: <Node>:<Slot>:<Port>. For example: 2:4:1.
+ Node: Valid node numbers are 0-7, depending on the number of nodes installed in your system, when viewing a system from the rear of a cabinet.
+ In 7000 and 8000 series arrays, nodes are numbered 0-3 from bottom left to top right when facing the service side (rear) of the nodes.
+ In 10000 and 20000 series arrays, nodes are numbered 0-7 from bottom left to top right when facing the service side (rear) of the nodes.
+ Slot: The 7000 and 8000 series arrays have a single onboard slot in each node, numbered starting at 0.
+ The 10000 and 20000 arrays' slots are numbered 0-9, from left to right, bottom to top, in a node in the lower enclosure. In the upper enclosure, slots are numbered 0-9 from left to right, top to bottom.
+ Port: Valid node port numbers are 1-4 for all add-in adapters, counting from the bottom up.
+ 7000 and 8000 ports are horizontal and labeled beginning with 1 on the HBA or iSCSI adapter.
+ 10000 and 20000 ports are numbered from bottom to top in a node in the lower enclosure. In the upper enclosure, ports are numbered from top to bottom.
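The <Node>:<Slot>:<Port> notation splits mechanically on colons. The small validator below is illustrative only (names are ours) and encodes the ranges given above for add-in adapters:

```python
def parse_port_location(loc: str) -> tuple[int, int, int]:
    """Split an HPE 3PAR <Node>:<Slot>:<Port> string into integers,
    checking the documented ranges for add-in adapter ports."""
    node, slot, port = (int(x) for x in loc.split(":"))
    if not 0 <= node <= 7:
        raise ValueError("valid node numbers are 0-7")
    if not 0 <= slot <= 9:
        raise ValueError("valid slot numbers are 0-9")
    if not 1 <= port <= 4:
        raise ValueError("valid add-in adapter port numbers are 1-4")
    return node, slot, port

print(parse_port_location("2:4:1"))  # (2, 4, 1)
```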
Front-end port cabling
Each HPE 3PAR StoreServ controller node should be connected to two fabrics. This is to protect against fabric failures.
Ports of the same pair of nodes with the same ID should be connected to the same fabric. Example:
+ 0:2:3 and 1:2:3 on fabric 1
+ 0:2:4 and 1:2:4 on fabric 2
Best practice: Connect odd ports to fabric 1 and even ports to fabric 2, and so forth. Example with a 4-node 7400 with eight host ports:
Fabric 1: 0:2:3, 1:2:3, 2:2:3, 3:2:3
Fabric 2: 0:2:4, 1:2:4, 2:2:4, 3:2:4
Example with a 4-node HPE 3PAR 10400 with 32 host ports:
Fabric 1: 0:2:1, 0:2:3, 0:5:1, 0:5:3, 1:2:1, 1:2:3, 1:5:1, 1:5:3, 2:2:1, 2:2:3, 2:5:1, 2:5:3, 3:2:1, 3:2:3, 3:5:1, 3:5:3
Fabric 2: 0:2:2, 0:2:4, 0:5:2, 0:5:4, 1:2:2, 1:2:4, 1:5:2, 1:5:4, 2:2:2, 2:2:4, 2:5:2, 2:5:4, 3:2:2, 3:2:4, 3:5:2, 3:5:4
FC hosts zoning
Best practice: Use at least two separate SAN switches to enable availability in the event of a switch failure.
Best practice: One initiator to multiple targets per zone (zoning by HBA). This zoning configuration is recommended for the HPE 3PAR StoreServ Storage. Zoning by HBA is required for coexistence with other HPE Storage arrays such as the HPE EVA.
Best practice: Zoning should be done using Worldwide Port Names (WWPN, the WWN of each individual port on the HPE 3PAR StoreServ). Port Persistence is not compatible with DID zoning.
Best practice: Hosts should be mirrored to node pairs. For example: zoned to nodes 0 and 1, or nodes 2 and 3. Hosts should not be zoned to non-mirrored nodes, such as 0 and 3.
Best practice: Non-hypervisor host
+ A single non-hypervisor host port should be zoned with a minimum of two ports from the two nodes of the same pair. In addition, the ports from a host's zoning should be mirrored across nodes. In the case of hosts attaching with multiple host HBA ports attached to dual switches, each port should be zoned to at least the two mirrored nodes.
+ Non-hypervisor hosts do not need to be connected to all nodes because of the way the volumes are spread on all the nodes.
Best practice: Hypervisor host
+ A single hypervisor host should be zoned to a maximum of 4 nodes in two separate node pairs (for example, 0&1 and 2&3) to maximize bandwidth.
+ Each hypervisor HBA port should be zoned to a minimum of two ports from each node pair on two-node systems, and at least one (or more) port(s) from at least four nodes on 4-, 6-, and 8-node systems, in order to maximize throughput across multiple node busses.
Table 5. Examples of valid zoning

Host type                                                       Host HBA port    StoreServ ports
Non-hypervisor, single HBA, single port                         HBA1 port 1      0:2:1, 1:2:1
Non-hypervisor, single HBA, two HBA ports, separate switches    HBA1 port 1      0:2:1, 1:2:1
                                                                HBA1 port 2      0:2:2, 1:2:2
Hypervisor, two HBAs (two HBA ports each), connected to         HBA1 port 1      0:2:1, 1:2:1
separate switches, four-node StoreServ                          HBA1 port 2      2:2:1, 3:2:1
                                                                HBA2 port 1      0:2:2, 1:2:2
                                                                HBA2 port 2      2:2:2, 3:2:2
Best practice: Each HPE 3PAR StoreServ system has a maximum number of initiators supported, which depends on the model and configuration. In regard to this maximum, one initiator = one path from a host.
A single HBA zoned with two FC ports will be counted as two initiators.
A host with two HBAs, each zoned with two ports, will count as four initiators.
Best practice: No more than 256 connections (128 if using Persistent Ports) are supported per front-end host port.
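Since one initiator equals one path, the initiator count a host consumes against the array maximum is simply host ports times array ports zoned per host port. A sketch reproducing the two examples above (function name is ours):

```python
def initiator_count(host_hba_ports: int, array_ports_per_hba_port: int) -> int:
    """One initiator per path: each host HBA port zoned to N array
    ports contributes N initiators toward the supported maximum."""
    return host_hba_ports * array_ports_per_hba_port

print(initiator_count(1, 2))  # single HBA zoned with two FC ports -> 2
print(initiator_count(2, 2))  # two HBAs, each zoned with two ports -> 4
```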
Hosts and host sets
Best practice: When creating hosts, follow the implementation guide for each platform.
Selecting the correct host persona (specifying the host operating system) for each host is important. Implementation guides are available for download at the following address: hpe.com/go/storage.
Each physical server should have a different host defined, containing the WWNs or IQNs for this host.
Best practice for creating a new host (Windows®, SPARC Solaris®, VMware®, and Red Hat® Linux®):
1. Install the Host Explorer software on the host if available for the host platform (download here).
2. Zone in all the ports according to the zoning best practices.
3. From the host CLI, execute tpdhostagent -start; then tpdhostagent -push.
4. This will automatically create the host on the HPE 3PAR StoreServ Storage system.
Best practice for creating a new host manually:
1. Zone in host ports to the HPE 3PAR StoreServ using the zoning best practices, one host at a time.
2. For each host, select the host and then create the new host.
3. In the WWN selection screen, select the WWNs associated with the new host.
4. Zone in a single host and then create the host on the HPE 3PAR StoreServ to reduce the possibility of assigning incorrect WWNs to a host. Repeat until all hosts are zoned in.
Figure 3. Creating a host in the Management Console
Best practice: For clusters, create a host set containing all the hosts used by the cluster. This will allow export of shared VVs to all hosts of the cluster in a single operation.
Provisioning block storage from an HPE 3PAR StoreServ
Note
This also includes provisioning block storage for File Persona use.
Host-based volume managers
Use of external volume managers is often unnecessary because of the advanced volume layout algorithms already used by the HPE 3PAR OS. One notable exception to this best practice is large block sequential workloads, such as Statistical Analysis Software (SAS) and video streaming, as these workloads take advantage of the read-ahead algorithm used by the HPE 3PAR OS. The algorithm will instantiate up to 5 read-ahead threads per VV, effectively preloading data into data cache when sequential reads are detected.
Adaptive Flash Cache
HPE 3PAR Adaptive Flash Cache is included as part of the HPE 3PAR Operating System Suite version 3.2.1 and later, and is supported on all HPE 3PAR StoreServ Storage arrays that have a number of solid-state drives (SSDs) and hard disk drives (HDDs).
Note
AFC is not supported on the non-AFC 480 GB SSD drives (E7Y55A and E7Y56A).
Benefits
+ Reduced latency for random read-intensive workloads.
+ Responds dynamically, providing smart and adaptive data placement based on application and workload demands.
+ Enables HPE 3PAR Adaptive Flash Cache across the entire system, or selects particular workloads to accelerate.
Requirements
+ HPE 3PAR OS version 3.2.2 (license is bundled)
+ Four SSD drives in 7000/8000 series or eight SSD drives in the 10000/20000 series

Table 6. Supported configurations and maximum flash cache: maximum AFC per system, maximum AFC per node pair, and minimum SSD count, by model (7200, 7400, 7440c, 10400 old node, 10400 new node, and 10800; and the 8000 and 20000 series models)
Best practice: If SSD drives are not available, use the CLI command createflashcache -sim. This will simulate how much data could possibly move into the flash cache tier and the possible increase in performance.
Best practice: On systems with four or more SSD drives per node pair available, enable flash cache system-wide using the following commands:
createflashcache <size> (if there is insufficient space available for the maximum amount, the command will return the maximum amount available)
setflashcache enable sys:all (as an alternative to enabling flash cache for the entire system, VV sets can also be selected)
For more information, refer to the HPE 3PAR Adaptive Flash Cache white paper here.
Common provisioning groups
Notes
+ CPGs primarily are templates for the creation of LDs, but have other defining characteristics such as capacity limits.
+ If there are no volumes created in a CPG, it will consume no space.
+ CPGs define:
  - The RAID level for the LDs to be created.
  - Availability level (HA CAGE, HA PORT, or HA MAG).
  - Step size, as well as other characteristics such as drive geometry.
+ CPGs will only be created across drives of the same type (SSD, FC, or NL) and will include drives of different RPMs unless otherwise specified using a disk filter.
Best practice: CPGs using Fast Class (FC/SAS) or NL should use RAID 6; for SSD CPGs, use RAID 5.
Best practice: When creating CPGs, accept defaults according to performance/capacity requirements.
Exceptions include not having enough drive enclosures to achieve HA cage, requiring higher capacity utilization with the use of RAID 5, or providing protection against double disk failure with RAID 6.
Best practice: The number of CPGs should be kept to a minimum. Refer to Appendix B for pertinent limits.
Best practice: There are cases in which having more CPGs than the minimum will be required:
+ Using thin provisioned VVs while using Adaptive Optimization.
+ When using HPE 3PAR Virtual Domain software: a given CPG can only be in one domain.
+ When using Adaptive Optimization software: a given CPG can only be in one Adaptive Optimization policy.
+ When capacity reporting is required per customer or application: per-customer/application CPGs ease capacity reporting.
+ When snapshots are heavily used and the snapshot data is kept in a different tier than the source data: use a CPG that matches the production CPG performance characteristics as closely as possible to maximize performance.
Best practice: Do not set "growth limits" on CPGs.
If a warning threshold is required, set a "growth warning" (warning in terms of capacity), not an "allocation warning" (warning in percentage).
Best practice: Avoid creation of RAID 0 CPGs, as RAID 0 offers no protection from data loss from drive failures. (RAID 0 is disabled by default; consult the HPE 3PAR StoreServ CLI manual for instructions for enabling R0, as well as R5 on NL.)
Solid-state drive CPGs
Best practice: Solid-state drive (SSD) CPGs should be of the RAID 5 type with a "set size" of 3+1 by default. This will bring a superior performance/capacity ratio. If maximum performance is required, use RAID 1.
Best practice: The growth increment should be set to the minimum value, which is 8 GiB per node pair.
On two-node systems, set the value to 8 GiB; on four-node systems, to 16 GiB; on six-node systems, to 24 GiB; and on eight-node systems, to 32 GiB.
In order to set the CPG growth increment to a lower value than the default, the "show advanced option" box must be checked.
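The per-system minimum above is just 8 GiB multiplied by the number of node pairs, which can be captured in a one-line helper (illustrative, not an HPE tool):

```python
def ssd_cpg_growth_increment_gib(nodes: int) -> int:
    """Minimum SSD CPG growth increment: 8 GiB per node pair."""
    if nodes % 2 or not 2 <= nodes <= 8:
        raise ValueError("StoreServ systems have 2, 4, 6, or 8 nodes")
    return 8 * (nodes // 2)

for n in (2, 4, 6, 8):
    print(n, "nodes:", ssd_cpg_growth_increment_gib(n), "GiB")
```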
Best practice: Availability should be left at "cage level" availability (the default option) if the system's configuration allows for it. If not, it should be set to "magazine level" availability. This can be changed using the "advanced options" checkbox in the SSMC.
Other advanced settings such as "preferred chunklets" and "step size" should not be changed from their default values. Also, avoid using disk filters.
Fast Class (FC/SAS) CPGs
Best practice: FC CPGs should be of the RAID 6 type by default. This will bring the highest availability for modern high-capacity drives. The "set size" (data to parity ratio) can be changed from the default value of 6+2 if the system configuration supports it. If usable capacity is the primary concern, use a wider stripe.
Best practice: For applications that have a very high write ratio (over 50 percent of the access rate), create a CPG using RAID 1 if performance (as opposed to usable capacity) is the primary concern.
Best practice: The growth increment should be left at the default value (32 GiB per node pair).
Best practice: Availability should be left at "cage level" (the default option) if the system's configuration allows for it.
If not, it should be set to "magazine level" availability. This can be changed using the "advanced options" checkbox of the Management Console.
Best practice: Leave other advanced settings such as "preferred chunklets" and "step size" at the defaults.
NL CPGs
Best practice: NL CPGs should be RAID 6, which is the default.
The "set size" (data to parity ratio) can be changed from the default value of 8 (6+2) if the system configuration supports it. RAID 5 is not recommended with NL disks.
Best practice: The growth increment should be left at the default value (32 GiB per node pair).
Best practice: Availability should be left at "cage level" (the default option) if the system's configuration allows for it.
Note
HA cage requires that the number of cages behind each node pair be equal to or larger than the set size. For example, RAID 6 with a set size of 8 (6+2) requires 8 drive enclosures (cages) or more behind each node pair.
Select "magazine level" availability if HA cage is not possible. This can be changed using the "additional settings" checkbox of the SSMC during VV creation.
Other advanced settings such as "preferred chunklets" and "step size" should be left at default values.
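The HA cage rule in the note above reduces to a simple comparison: every member of a RAID set must be able to land in a different enclosure. A feasibility check (illustrative helper, not an HPE command):

```python
def ha_cage_possible(cages_per_node_pair: int, set_size: int) -> bool:
    """HA cage needs at least as many drive enclosures (cages) behind
    each node pair as the RAID set size, so that each chunklet of a
    RAID set can be placed in a different cage."""
    return cages_per_node_pair >= set_size

print(ha_cage_possible(8, 8))  # True: RAID 6 set size 8 (6+2), 8 cages
print(ha_cage_possible(4, 8))  # False: fall back to magazine-level HA
```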
Considerations for provisioning virtual volumes
Thinly Deduplicated Virtual Volumes (TDVVs)
+ TDVVs can only reside on SSD storage. Any system with an SSD tier can take advantage of thin deduplication. The option to provision TDVVs
is not available in the HPE 3PAR SSMC unless an SSD CPG has been selected.
+ The granularity of deduplication is 16 KiB, and therefore the efficiency is greatest when the I/Os are aligned to this granularity. For hosts that
use file systems with tunable allocation units, consider setting the allocation unit to a multiple of 16 KiB.
+ Deduplication is performed on the data contained within the virtual volumes of a CPG. For maximum deduplication, store data with duplicate
affinity on virtual volumes within the same CPG.
+ Thin deduplication is ideal for data that has a high level of redundancy. Data that has been previously deduplicated, compressed, or encrypted
is not a good candidate for deduplication and should be stored on thinly provisioned volumes.
+ AO (Adaptive Optimization) does not support TDVVs.
+ When using an HPE 3PAR array as external storage to a third-party array, deduplication may not function optimally.
Best practice: Use TDVVs when there is a high level of redundant data and the primary goal is capacity efficiency.
For more information, refer to the HPE 3PAR Thin Technologies white paper located here.
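As a hedged illustration of the guidance above, a TDVV can only be created against an SSD CPG; the CPG name, volume name, and size below are hypothetical:

```
# Create a 500 GiB thinly deduplicated VV on an SSD CPG
createvv -tdvv cpg_ssd_r5 vv_dedup01 500g

# Review raw vs. used space (and thus dedup efficiency) afterwards
showvv -space vv_dedup01
```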
Thin Provisioned Virtual Volumes (TPVVs)
The use of thin provisioning has minimal performance impact and has the significant operational benefit of reducing storage consumption.
However, there are certain workloads and applications for which thin provisioning may not be of benefit, such as:
+ Applications that write continuously to new space; an example of this is Oracle redo log files, which will not benefit from thin provisioning as space
will be consumed until the volume is full.
+ Environments that require host-encrypted volumes. Writing blocks of zeroes to a host-encrypted volume on a newly created HPE 3PAR
StoreServ thin-provisioned volume will cause space to be allocated on the TPVV because the encryption alters the content of the blocks.
Applying encryption to thin-provisioned volumes that already contain data, or rekeying them, also inflates the zero blocks, making the volume
consume space as if it was fully provisioned. Attempting to re-thin the volume by writing zeroes to allocated but unused space will not
decrease the space utilization. As a result, host encryption and thin provisioning do not cooperate.
+ Environments that require SAN-encrypted volumes; for example, encryption by a device in the data path (e.g., a SAN
switch) also alters the data stream so that blocks of zeroes written by the host are not passed on to the storage. A notable exception is
Brocade SAN switches. With the introduction of Fabric OS 7.1.0, the Fabric OS encryption switch can automatically detect whether a volume is a
thin-provisioned LUN. If a LUN is detected as thin-provisioned, the first-time encryption and rekey are done on the allocated blocks only. This
thin-provisioned LUN support requires no action by the user.
+ Copy-on-write file systems that write new blocks rather than overwrite existing data are not suitable for thin provisioning, as every write will
allocate new storage until the volume is fully allocated. An example of a copy-on-write file system is Oracle Solaris ZFS.
Best practice: Use TPVVs as a general practice, with exceptions as noted for TDVVs and FPVVs.
For more information, refer to the HPE 3PAR Thin Technologies white paper here.
Note
Copy Provisioned Virtual Volumes (CPVVs), an HPE 3PAR deprecated term, are simply VVs that are associated with a snapshot CPG. As of
HPE 3PAR OS 3.2.2, all volumes created are associated with snapshot space, in the same CPG, by default. A different snapshot CPG can be
selected when creating a VV by selecting the additional settings option in the SSMC or using the -snp_cpg option of the createvv CLI command.
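The -snp_cpg option mentioned in the note can be sketched as follows; the volume and CPG names are illustrative:

```
# Create a TPVV whose snapshot space is directed to a different CPG
createvv -tpvv -snp_cpg cpg_nl_r6 cpg_fc_r6 vv_app01 1t
```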
Fully Provisioned Virtual Volumes (FPVVs)
FPVVs allocate all space and create all LDs at initial provisioning.
+ An FPVV provides the highest performance of the three provisioning types.
+ With a Dynamic Optimization license, FPVVs can be converted to TPVVs, and the TPVV can then be converted to a TDVV provided
there is available SSD space.
+ Workloads that continuously write data in new extents instead of overwriting data, or that perform heavy sequential write workloads to RAID 5
or RAID 6, will benefit most from FPVVs.
Best practice: Use FPVVs when the highest performance is the priority.
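Assuming a Dynamic Optimization license, the conversion path described above can be sketched with the tunevv CLI command; the volume and CPG names are illustrative, and exact option spellings should be verified against the CLI reference for your release:

```
# Convert a fully provisioned VV to thin provisioned...
tunevv usr_cpg cpg_fc_r6 -tpvv vv_app01

# ...and later to thinly deduplicated, provided SSD space is available
tunevv usr_cpg cpg_ssd_r5 -tdvv vv_app01
```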
Provisioning file storage from an HPE 3PAR StoreServ
HPE 3PAR OS version 3.2.1 MU2 introduced the HPE 3PAR File Persona Software Suite, comprised of rich file protocols including SMB 3.0, 2.1, 2.0,
and 1.0, plus NFSv4.0 and v3.0, to support a broad range of client operating systems. It also includes the Object Access API that enables
programmatic data access via a REST API for cloud applications from virtually any device, anywhere.
The HPE 3PAR File Persona extends the spectrum of primary storage workloads natively addressed by the HPE 3PAR StoreServ platform,
from virtualization, databases, and applications via the Block Persona to also include client workloads such as home directory consolidation,
group/department shares, and corporate shares via the File Persona, all with truly converged controllers, agile capacity, and unified
management.
Note
Make sure that your HPE 3PAR StoreServ Storage is configured and provisioned for physical disk storage and ready to provision storage for File Persona.
HPE 3PAR File Persona has the following expanded and targeted use cases:
+ Home directory consolidation
+ Group/department and corporate shares
+ Custom cloud applications using the Object Access API
HPE 3PAR File Persona managed objects
The HPE 3PAR File Persona Software Suite is comprised of the following managed objects:
+ File provisioning groups (FPGs)
+ Virtual file servers (VFSs)
+ File Stores
Best practice: Limit the number of File Share objects created. For example, create home directory shares at the department level instead of at the
individual user level. This reduces the amount of data cache required while also reducing networking traffic.
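The managed-object hierarchy can be sketched top to bottom with the File Persona CLI; every name, address, and size below is illustrative, and option spellings vary by HPE 3PAR OS release:

```
createfpg cpg_fc_r6 fpg01 10t                          # file provisioning group
createvfs -fpg fpg01 192.0.2.10 255.255.255.0 vfs01    # virtual file server
createfstore -fpg fpg01 vfs01 homedirs                 # file store
createfshare smb -fpg fpg01 -fstore homedirs vfs01 hd  # SMB file share
```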
SSMC (StoreServ Management Console)
HPE 3PAR offers the new, streamlined SSMC and the HPE 3PAR CLI for the management of converged block, file, and object access on HPE 3PAR
StoreServ systems. See Appendix D for basic SSMC navigation for managing the File Persona feature.
Note
Make sure to apply the license for the HPE 3PAR File Persona Software Suite on the system in order to enable file and object access on the
HPE 3PAR StoreServ 7000c, 8000, and 20000 series converged controllers. File Persona will show up in SSMC only after enabling the license on
the system. For customers wishing to test File Persona, or any other HPE 3PAR feature, an NFR (Not For Resale) license is available for 180-day or
one-year trial periods. File Persona is licensed on a per-host-presented-TB basis.
SSMC offers two modes of management for File Persona for a streamlined management experience. Refer to Appendix D for the steps to enable
the advanced menu.
Normal mode
+ Hides more complex configuration and management options
+ Uses default values for hidden objects during creation
+ Simplifies admin choices for everyday operations
Advanced mode
+ Displays more complex configuration and management options
+ Allows the admin user to specify all values for all objects during creation
+ Provides greater control for less commonly performed operations
Best practices for HPE 3PAR File Persona deployment
Following are best practices to keep in mind while deploying File Persona with respect to networking, storage layout, authentication, snapshots,
and performance.
Networking
+ The HPE 3PAR File Persona supports a dual-port 10GbE NIC per node in network bond mode 6 (balance-alb) or network bond mode 1
(active/passive), or a quad-port 1GbE NIC in each node in network bond mode 6 (balance-alb) with the option to use network bond mode 1
(active/passive). Bonding is supported only on NICs on the same node.
+ File Persona requires an IP address per node in addition to at least one IP address per Virtual File Server.
Best practice: Ensure the File Persona configuration uses the same Network Time Protocol (NTP) server as other servers. In the case of Active
Directory, it can be an AD server. Kerberos authentication allows a maximum time drift of five minutes; if the drift exceeds this, Kerberos
authentication will fail. Publicly available servers include pool.ntp.org.
Best practice: The same NIC should be used in each node in the system, e.g., either 1GbE or 10GbE. It is best to use a dual-port 10GbE NIC for
File Persona to get more network bandwidth. It is also worth noting that with HPE 3PAR OS version 3.2.2 and beyond, an onboard RCIP
(Remote Copy over Internet Protocol) port may be used for File Persona.
Best practice: Make sure that there are multiple network connections from node pairs (at least one port per node), preferably connected to at
least two network switches for increased availability.
Authentication
Best practice: When configuring the authentication order, only enable valid authentication providers. For example, if Active Directory is not available,
leave it disabled.
Best practice: If using Active Directory, preconfigure the computer account (for each node running File Persona) before VFS creation to avoid
issues with computer account creation.
Storage layout
+ The HPE 3PAR File Persona is enabled per node pair for high availability purposes. Upon enablement of File Persona, a RAID 6 volume will be
created for each node in the array running File Persona. These are the system volumes and cannot be used for creating the File Stores to
share out to the clients.
+ The HPE 3PAR File Persona can share the same CPGs as the block volumes to create the File Provisioning Group (FPG).
Best practice: Use file quotas to ensure that free space remains at 10 percent or greater to avoid problems associated with full file systems,
including failed writes.
Best practice: Create new CPGs for FPGs or use existing CPGs (except fs_cpg) to enable the greatest administrative flexibility, e.g., remote copy,
migration to different performance tiers, or isolating the CPG for file. For File Persona, RAID 6 (FC) and RAID 6 (NL) CPGs provide the best
capacity and resiliency.
Best practice: For the best combination of resiliency and efficient capacity utilization, use RAID 5 for SSD and RAID 6 for FC and NL drives.
Best practice: Create at least one FPG/VFS/File Store/file share on each node running File Persona in the HPE StoreServ array. Distribute
users and group/department/corporate shares evenly between the nodes for maximum load distribution.
Note
The type of network interface (whether the onboard interface or an add-on NIC) for the ports used by File Persona must be the same.
File Persona cannot be enabled using both the onboard port and an add-on NIC at the same time.
Note
Any CPG being used by an FPG and VFS cannot be a member of a Virtual Domain.
Protocols
The HPE 3PAR File Persona supports SMB 3.0, 2.1, 2.0, and 1.0 and NFSv4.0 and v3.0, along with the Object Access API. This includes the advanced SMB 3.0
protocol feature of Transparent Failover; SMB opportunistic locks (oplocks) and leases (file and directory) for all SMB versions; crediting and
large maximum transmission unit (MTU) size for SMB 2.x and beyond; and Offloaded Data Transfer (ODX).
Best practice: For achieving transparent failover, leave continuous availability enabled and use SMB protocol version 3 at a minimum for
non-disruptive operations to the clients.
Authentication and authorization
The HPE 3PAR File Persona supports three types of name services for user and group authentication: Active Directory, LDAP, and a local database.
Best practice: Use Active Directory (if available) for the most flexible authentication and authorization for deployments with home
directories and corporate/group shares.
Best practice: Unless every user in your environment has an Active Directory User ID (UID), avoid enabling rfc2307.
Best practice: If rfc2307 has to be enabled, ensure that all users have UIDs and GIDs defined in AD, and then enable rfc2307 support in File
Persona before creating any FPGs.
Best practice: In order to maximize flexibility and manageability, use Active Directory, LDAP, and local, in that order, for the authentication stack
configuration on File Persona via SSMC.
Best practice: After a user directory or a group/department/corporate share has been created, review and set the folder permissions to reflect
the appropriate level of access and security for your organization.
Best practice: The File Persona supports the Windows continuously available shares functionality, which is enabled by default. Windows 8, 8.1, or
Server 2012, 2012 R2 clients are required for this functionality. It is recommended that the configuration be verified in a non-production
environment before being moved into production.
Snapshots
The HPE 3PAR File Persona provides point-in-time, space-efficient, redirect-on-write snapshots at the File Store level.
Best practice: Schedule snapshots with a specific retention count to provide a defined set of recovery point granularities over time. For instance:
+ Take daily snapshots and retain 7
+ Take weekly snapshots and retain 4
+ Take monthly snapshots and retain 12
Best practice: Create a schedule for snapshot clean tasks for every FPG. Snapshot clean tasks ensure that once snapshots are deleted, orphaned
blocks are recovered and made available for new files. Create a weekly cleanup task and monitor the amount of space returned to the system. If
the amount returned is over 10 percent of the total system capacity, schedule cleanup tasks every 3 days.
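The scheduling guidance above can be sketched with createsched driving snapshot and clean tasks; all names are illustrative, and option spellings should be verified against the CLI reference for your release:

```
# Daily File Store snapshot at 01:00
createsched "createfsnap -fpg fpg01 vfs01 homedirs daily" "0 1 * * *" snap_daily

# Weekly snapshot clean task for the FPG, Sundays at 03:00
createsched "startfsnapclean -fpg fpg01 -reclaimStrategy maxspeed" "0 3 * * 0" clean_weekly
```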
The HPE 3PAR File Persona supports network-share-based backup over the SMB or NFS protocol and NDMP-over-iSCSI-based backup for the user
data in file shares and the system configuration data in each VFS.
Best practice: Make sure to take a weekly backup of the system configuration data for each VFS.
Note
For antivirus scanning with File Persona, refer to the Technical overview of HPE 3PAR File Persona Software Suite.
Note
Appendix C covers File Persona compatibility with other HPE 3PAR data services.
High availability
Best practice: Hewlett Packard Enterprise encourages all HPE 3PAR StoreServ Storage customers to upgrade to the latest recommended
HPE 3PAR OS. Upgrading to the most current HPE 3PAR GA OS ensures that the storage system benefits from ongoing design
improvements and enhancements. For customers participating in the Get 6-Nines Guarantee Program, the program will identify the latest
HPE 3PAR OS version that is covered under the Guarantee program.
Best practice: Size the system appropriately so that all workloads and applications dependent on the HPE 3PAR system can perform as needed
under the conditions of a node being down. This may occur during an unplanned controller node failure or planned maintenance of a controller
node. In no situation should the maximum limits of the system, as defined in this document and product specifications, be exceeded.
In systems with four or more nodes, a resilience feature called Persistent Cache is automatically enabled. The Persistent Cache feature ensures
that no storage controller node is placed into performance-limiting "cache write-thru" mode as a result of losing its partner in the node pair. Any
node that loses its adjacent node can dynamically form a mirrored cache relationship with another storage controller node. This limits the
performance impact of unplanned downtime or controller node maintenance.
Persistent Ports
Persistent Ports functionality is supported on HPE 3PAR OS 3.1.2 and later only (with functionality restrictions on HPE 3PAR OS 3.1.2). Starting
with HPE 3PAR OS 3.1.3, support for FCoE-connected hosts and iSCSI-connected hosts has been added, as has the ability to detect an array node
suffering "loss_sync" (a physical layer problem occurring between the HPE 3PAR controller node and the switch it is connected to). There is no
Persistent Ports support on versions of the HPE 3PAR OS prior to 3.1.2.
For HPE 3PAR StoreServ FC host ports, the following requirements must be met:
+ The same host port on host-facing HBAs in the nodes in a node pair must be connected to the same FC fabric and preferably different FC
switches on the fabric (for example, 0:1:1 and 1:1:1).
+ The host-facing HBAs must be set to "target" mode.
+ The host-facing HBAs must be configured for point-to-point connection (no support for "loop").
+ The FC fabric being used must support NPIV and have NPIV enabled.
For HPE 3PAR StoreServ FCoE host ports, the following requirements must be met:
+ The same Converged Network Adapter (CNA) port on host-facing HBAs in the nodes in a node pair must be connected to the same FCoE
network and preferably different FCoE switches on the network (for example, 0:1:1 and 1:1:1).
+ The FCoE network being used must support NPIV and have NPIV enabled.
For HPE 3PAR StoreServ iSCSI host ports, the following requirements must be met:
+ The same host port on host-facing CNAs in the nodes in a node pair must be connected to the same IP network and preferably different
IP switches on the network (for example, 0:1:1 and 1:1:1).
Persistent Ports configuration considerations
Persistent Ports requires that corresponding "Native" and "Guest" host ports on a node pair be connected to the same FC fabric or IP network,
and the switches they are connected to must support and be configured for NPIV in the case of FC and FCoE. This means that for a minimum
configuration to provide Persistent Ports functionality, where the node pair is connected to redundant FC SAN fabrics, each node in a node pair
must have at least two FC host ports cabled, with one port connected to each fabric.
Best practice: Ensure that the same