DS8000 Technical Overview Advanced Technical Skills RatHay
Agenda
Design Principles
DS8000 Highlights
Storage Architecture and Components
Encryption
Copy Services: FlashCopy, Metro Mirror, Global Copy, Global Mirror, Metro/Global Mirror, z/OS Global Mirror (XRC)
Caching Algorithms
Storage Virtualization
DS8000 Family Comparison
Functions: Easy Tier, I/O Priority Manager, Thin Provisioning, Resource Groups/Multi-Tenancy, System z Synergy Functions
Design Principles
Deliver high-end block-access disk storage systems that satisfy customer requirements. Examples:
Maintain access to data
- Goal: greater than five 9s availability
- Redundant hardware, extensive error recovery, release-to-release code reuse
- Nondisruptive changes: repairs, hardware upgrades, microcode upgrades, logical configuration
Deliver high performance
- High random IOPS, low random I/O response time, high sequential throughput
- Efficiently run a mix of workloads concurrently (e.g., random and sequential)
Support flexible customization
- Separately scale cache, host ports and disk drives
- Scale from the smallest to the largest configuration nondisruptively
Support extensive functionality
- Internal copy, 2-3 site remote mirroring, thin provisioning, sub-volume tiering, wide striping, Quality of Service
- Extensive synergy with IBM i, p and z servers
Provide ease of management
- Friendly GUI, easy volume management
- Extensive self-tuning reduces manual tuning time and effort
DS8000 Highlights
2004: POWER5 (DS8100/DS8300)
2006: POWER5+ (DS8100/DS8300)
2009: POWER6 (DS8700)
2010: POWER6+ (DS8800)
Binary compatibility is maintained across generations.
DS8700
POWER6 controllers (2-way and 4-way); 4 Gb/s and 8 Gb/s host adapters; 2 Gb/s device adapters; 3.5-inch Enterprise Fibre Channel, SSD and SATA drives
DS8800
POWER6+ controllers (2-way and 4-way); 8 Gb/s host adapters; 8 Gb/s device adapters; 2.5-inch Enterprise SAS-2 and SSD drives; 3.5-inch Nearline drives
DS8700 Hardware
Model 941 base and model 94E expansion
941 with up to 4 x 94E expansion frames; Enterprise Fibre Channel, SSD and SATA drive options; up to five frames; up to 1,024 drives
Host adapters
Up to 128 ports; each port supports FCP, FC-AL or FICON, selectable at the port level; the base frame and first expansion frame allow 16 adapters per frame; both 4 Gb/s and 8 Gb/s host adapters available
DS8800 Hardware
Model 951 base and model 95E expansion
951 with up to 3 x 95E expansion frames
2.5-inch small-form-factor drives; 6 Gb/s SAS (SAS-2)
Host adapters
Up to 128 8 Gb/s ports; each port supports FCP, FC-AL or FICON, selectable at the port level; the base frame and first expansion frame allow 16 adapters per frame; both 4-port and 8-port host adapter cards available
Drive Options
146 GB/15k and 300 GB/15k, with FDE encryption options; 450 GB/10k, 600 GB/10k and 900 GB/10k, with FDE encryption options; 300 GB SSD; 3 TB/7.2k nearline
Logical configuration:
Intermix of CKD and Open (FB); extent rotation recommended to balance performance
DS8800 Business Class
2011 IBM Corporation
DS8700 vs. DS8800:
Disk technology: 3.5-inch (LFF) Fibre Channel vs. 2.5-inch (SFF) SAS
Throughput: 2 Gb/s FC interconnect backbone and 2 Gb/s FC to disks vs. 8 Gb/s FC interconnect backbone and 6 Gb/s SAS-2 to disks
Density: 16 disks per enclosure in 3.5U of vertical rack space vs. 12/24 disks per enclosure in 2U
Cabling: passive copper interconnect vs. optical short-wave multimode interconnect
Modularity: rack-level power and cooling vs. integrated power and cooling
The DS8800 uses the 3.5-inch enclosure for the larger 3 TB nearline drives.
* The maximum number of 8 Gb host adapters is 8 in the base frame and 8 in the expansion frame for a total of 16
Each port on card can be independently set to support FCP, FC-AL or FICON
Power Processors
Based on POWER6 server technology
DS8700 uses POWER6 at 4.7 GHz; DS8800 uses POWER6+ at 5.0 GHz
Writes are stored in both servers, with one copy in non-volatile storage (NVS) and the other in cache
Concurrent upgrade to all processor memory sizes. Processor memory options:
32 GB memory / 1 GB NVS
64 GB memory / 2 GB NVS
128 GB memory / 4 GB NVS
256 GB memory / 8 GB NVS
384 GB memory / 12 GB NVS
Business Class systems support 16 GB to 64 GB memory / 1 GB to 2 GB NVS
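A minimal sketch of the mirrored fast-write idea: a host write is acknowledged once one copy sits in the owning server's cache and a second copy sits in the other server's NVS. Class and function names are illustrative, not the DS8000 implementation.

```python
# Sketch: fast-write caching with a mirrored copy in the partner server's NVS.
# Names are illustrative only.
class Server:
    def __init__(self, name):
        self.name = name
        self.cache = {}   # volatile read/write cache
        self.nvs = {}     # battery-protected non-volatile store

def fast_write(owning, partner, track, data):
    """Complete a host write once two independent copies exist."""
    owning.cache[track] = data   # copy 1: owning server's cache
    partner.nvs[track] = data    # copy 2: partner server's NVS
    return "write complete"     # host is acknowledged before destage to disk

server0, server1 = Server("server0"), Server("server1")
fast_write(server0, server1, ("vol1", 42), b"payload")
# If server0 fails before destage, the data survives in server1's NVS.
```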
Host Adapters
Host adapters connect via FICON or Fibre Channel to attached servers
Host adapters are also used for replication. Up to 128 ports. Host adapters can be a mix of long wave and short wave (all ports on a card are the same type). Any port can be configured independently to support either FICON or FCP.
Concurrent upgrade/MES to add host adapters. DS8700 host adapter cards are all 4-port (either LW or SW):
4 Gb/s (maximum of 32 cards, or 128 4 Gb/s ports)
8 Gb/s (maximum of 16 cards, or 64 8 Gb/s ports)
DS8800
All SAS-2: 300 GB SSD; 146 GB/15,000 RPM (+ FDE option); 3 TB/7,200 RPM
DS8700 DS8800
Each full drive set consists of 16 DDMs. With the exception of Solid State Drives and nearline options, drives are always ordered in full sets of 16. SSD drives may be ordered in a single group of 8 if desired. Nearline drives on the DS8800 are available in groups of 8
300 GB SSD eight drive set 300 GB SSD sixteen drive set 146 GB/15K RPM 300 GB/15K RPM 450 GB/10K RPM 600 GB/10K RPM 900 GB/10K RPM 3 TB/7200 RPM eight drive set
600 GB SSD eight drive set 600 GB SSD sixteen drive set 300 GB/15K RPM 450 GB/15K RPM 600 GB/15K RPM 2 TB/7200 RPM
Device Adapters
Device adapters connect the I/O enclosures to the disk drives
Device adapters perform all RAID functions, including rebuilds in the event of a drive failure. Device adapters are configured in active/active pairs that provide redundant access to drives. Supported RAID levels are RAID-5, RAID-6 and RAID-10.
Device adapters on DS8700 are 2 Gb/second (Fibre Channel) Device adapters on DS8800 are 8 Gb/second Fibre Channel
Caching Algorithms
Should I keep this data? What data is needed next by the host?
Cache Algorithms
Sequential Adaptive Replacement Cache (SARC)
Self-tuning algorithm that supports a mix of sequential and random I/O streams. SARC determines:
- When data is copied into cache
- Which data is copied into the cache
- Which data is evicted when the cache becomes full
- How to adapt the algorithm to differing workloads
Considers not just how recently data was used, but also how frequently it is referenced. Resists the tendency of one-time-use sequential data to flood the cache.
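To illustrate the idea (this is a toy model in the spirit of SARC, not IBM's algorithm): keep data seen once on probation and promote it only on re-reference, so a one-pass sequential scan cannot evict repeatedly referenced random data.

```python
# Toy recency+frequency-aware cache illustrating the SARC idea described
# above; this is NOT the DS8000 implementation.
from collections import OrderedDict

class TieredCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.once = OrderedDict()      # seen a single time (e.g. a scan)
        self.frequent = OrderedDict()  # referenced more than once

    def access(self, key):
        if key in self.frequent:
            self.frequent.move_to_end(key)           # refresh recency
        elif key in self.once:
            self.frequent[key] = self.once.pop(key)  # promote on re-reference
        else:
            self.once[key] = True                    # new entry: probation
        while len(self.once) + len(self.frequent) > self.capacity:
            # prefer evicting one-time-use data, LRU within each list
            victim = self.once if self.once else self.frequent
            victim.popitem(last=False)

cache = TieredCache(capacity=4)
for k in ["a", "b", "a", "s1", "s2", "s3"]:   # "a" is hot, s* is a scan
    cache.access(k)
assert "a" in cache.frequent                  # hot data survives the scan
```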
Storage Virtualization
Storage Hierarchy
Disk
- Individual DDMs
Array Sites
- Logical grouping of 8 DDMs of the same speed and capacity
Arrays
- One 8-DDM array site is used to construct one RAID-5, RAID-6 or RAID-10 array
Ranks
- One array becomes one CKD or FB rank
- Available space in a rank is divided into extents
- An extent is the minimum allocation unit when a LUN or CKD volume is created (FB = 1 GB, CKD = 1113 cylinders)
Extent Pools
- 1 to N ranks form an extent pool
- Minimum of 2 pools, 1 each for server0 and server1; each rank belongs to at most 1 pool
- All extents in a pool are the same storage type (CKD/FB); the same RAID level is recommended
- Each pool is associated with server0 or server1
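The hierarchy can be sketched as a small model (class names and the extent count are illustrative, not actual DS8000 structures):

```python
# Sketch of the storage hierarchy: ranks of one storage type grouped into
# an extent pool owned by one server. Illustrative model only.
FB_EXTENT_GB = 1          # FB extents are 1 GB
CKD_EXTENT_CYLS = 1113    # CKD extents are 1113 cylinders (3390 Mod 1)

class Rank:
    def __init__(self, stgtype, extents):
        assert stgtype in ("fb", "ckd")
        self.stgtype, self.free_extents = stgtype, extents

class ExtentPool:
    """1..N ranks of the same storage type, owned by server0 or server1."""
    def __init__(self, server, stgtype):
        self.server, self.stgtype, self.ranks = server, stgtype, []

    def add_rank(self, rank):
        assert rank.stgtype == self.stgtype   # no CKD/FB intermix in a pool
        self.ranks.append(rank)

    def free_extents(self):
        return sum(r.free_extents for r in self.ranks)

p0 = ExtentPool(server=0, stgtype="fb")
p0.add_rank(Rank("fb", extents=779))   # a hypothetical RAID-5 rank
p0.add_rank(Rank("fb", extents=779))
print(p0.free_extents())               # 1558 extents = 1558 GB of FB capacity
```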
[Diagram: extent pools built from RAID-5 and RAID-10 arrays; a CKD pool holding 3390-1, 3390-3, 3390-9 and 3390-27 volumes, and an FB extent pool holding 12 GB, 40 GB, 50 GB and 101 GB LUNs]
Logical Volumes
Fixed Block LUNs
- Composed of one or more 1 GB extents from an extent pool
- LUNs cannot span multiple extent pools
- LUNs can have extents from different ranks within the same extent pool
- Maximum LUN size is 16 TB
- A DS8000 can contain up to 64K FB LUNs
CKD Volumes
- Composed of extents the size of a 3390 Model 1 (1113 cylinders)
- When defining, specify the number of cylinders, not extents
- Standard CKD volumes up to 65,520 cylinders (55.6 GB); with EAV, up to 1,182,006 cylinders (1 TB)
- A DS8000 can contain up to 65,280 CKD volumes
Considerations
- When creating FB LUNs, use sizes that are a multiple of 1 GB to avoid wasted space
- When creating CKD volumes, use sizes that are a multiple of 1113 cylinders to avoid wasted space
- Total of 64K volumes (CKD + FB) per system
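A worked example of the sizing guidance: a volume request is rounded up to whole extents (1 GB for FB, 1113 cylinders for CKD), so odd sizes strand space in the last extent. Function names are hypothetical.

```python
# How much space an odd-sized volume strands in its final extent.
import math

def fb_waste_gb(requested_gb):
    """FB extents are 1 GB: waste = rounded-up size minus requested size."""
    return math.ceil(requested_gb) - requested_gb

def ckd_waste_cyls(requested_cyls):
    """CKD extents are 1113 cylinders each."""
    extents = math.ceil(requested_cyls / 1113)
    return extents * 1113 - requested_cyls

print(fb_waste_gb(12.0))     # 0.0 -> multiple of 1 GB, no waste
print(fb_waste_gb(12.5))     # 0.5 GB stranded in the final extent
print(ckd_waste_cyls(3339))  # 0   -> 3 x 1113 cylinders (a 3390-3)
print(ckd_waste_cyls(3340))  # 1112 cylinders stranded
```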
A host is defined by its world-wide port names (WWPNs). A host must contain at least one WWPN, and many servers attached to DS8000s present multiple WWPNs.
Multiple host ports are balanced using MPIO or the Subsystem Device Driver (SDD).
Volumes are assigned to a host by connecting the volume group to the host.
Host Attachment
LSS performance is not tied to the underlying storage; the number of LSSs can be defined based on device number requirements
Thin provisioning allocates from the extent pool in 1 GB increments. The repository for track space efficient volumes used by FlashCopy SE exists in extent pools.
DDM interface, drive types and limits across the family (one row per model generation):
2 Gb/s FC-AL: FC 73/146/300/450 GB enterprise; 1 TB nearline; 73/146 GB SSD; RAID 5, 6, 10; 216 TiB max usable capacity; 2 GB/s max sequential bandwidth; 64K LUNs/CKD volumes total; 510 N-port logins/port; 2K process logins; 512 logical paths/CU; 2 TB max LUN size
2 Gb/s FC-AL: FC 73/146/300/450 GB enterprise; 1 TB nearline; 73/146 GB SSD; RAID 5, 6, 10; 586 TiB; 3.9 GB/s; 64K total; 510; 2K; 512; 2 TB
2 Gb/s FC-AL: FC 300/450/600 GB enterprise; 2 TB nearline; 600 GB SSD; RAID 5, 6, 10; 1158 TiB; 9.7 GB/s; 64K total; 510; 2K; 512; 16 TB
6 Gb/s SAS-2: SAS 146/300/450/600/900 GB enterprise; 3 TB nearline; 300 GB SSD; RAID 5, 6, 10; 1408 TiB; 11.8 GB/s; 64K total; 510; 2K; 512; 16 TB
Dynamic Provisioning (add/delete):
16-128 GB processor memory / 1-4 GB NVS; P5+ 2.2 GHz 2-way; ESCON x 2 ports and 4 Gb FC x 4 ports; 16 host adapter slots; 64 max host adapter ports; 600 MB/s single DA throughput; 8 DA slots
32-256 GB processor memory / 1-8 GB NVS; P5+ 2.2 GHz 4-way; ESCON x 2 ports and 4 Gb FC x 4 ports; 32 host adapter slots; 128 max host adapter ports; 600 MB/s single DA throughput; 16 DA slots
[Table: sequential bandwidth (GBps) and 4 KB I/O rates (K IOps, four workload columns) per generation: 2.1 GBps with 124/344/165/124 K IOps; 5.6 GBps with 159/423/201/174; 5.7 GBps with 175/440/204/181; improvements of 171% and 41%/28%/24%/46%]
DS8000 release timeline:
2004: 255 LCUs supported; RAID-5/RAID-10; RMC/zGM/PTC/PAV; 64K logical volumes; 2 Gb FCP/FICON; 73/146/300 GB drives
2006: Turbo models; 500 GB FATA; 4 Gb FCP/FICON; 242x machine types; synergy items
2007: SSPC support (new)
2-2008: SSPC support (upgrade); DS8000 M/T intermix; FICON extended distance
5-2008: Extended Address Volumes; Storage Pool Striping; variable LPAR; IPv6; FlashCopy Space Efficient; Dynamic Volume Expansion
10-2008: zHPF; zGM Incremental Resync
2-2009: Solid State Drives; 1 TB SATA; Intelligent Write Cache; Full Disk Encryption; Remote Pair FlashCopy
7-2009: Thin provisioning; quick init; zHPF multi-track support
10-2009: DS8700
4-2010: Easy Tier; thin provisioning; quick init; 600 GB 15K; 2 TB SATA; Multi-GM; zHPF multi-track
10-2010: DS8800
5-2011: Easy Tier 2; I/O Priority Manager; Open Resource Groups; ease of management; 16 TB LUN
11-2011: DS8800 new drives; DS8800 4th frame; Easy Tier 3; I/O Priority Manager for CKD; zHPF QSAM/BSAM/BPAM; 1 TB EAV; DB2 list prefetch
DS8100/DS8300 include all functional enhancements up to R4.3; DS8700/DS8800 include all functional enhancements up to R6.2.
[Table: power consumption comparison; DS8300 base frame with 128 disks, expansion frame and full 1,024-disk configuration (figures include 6.5, 12.3, 22,200 and 42,000); DS8700 base frame with 128 disks, expansion frame and full 1,024-disk configuration (7, 5.5, 26.6); DS8800 base frame with 4-way standard configuration and 240 disks, expansion frame, and full 1,056-disk configuration]
Functions
Easy Tier
I/O Priority Manager
Thin Provisioning
Resource Groups/Multi-Tenancy
System z Synergy Functions
Power Systems Synergy Functions
IBM i and DS8000 Synergy
Easy Tier
Automated drive utilization balancing removes hot spots and populates newly added empty drives
Performance monitoring and reporting track the I/O demand from applications and the I/O service time from the storage device. Performance data is collected over multiple durations: hours, days and weeks.
Extent-level relocation requires mixed technologies in a merged extent pool (any two or three tiers), for example:
SSD + Enterprise + Nearline, SSD + Nearline, or Enterprise + Nearline
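A toy sketch of heat-based extent placement across tiers in a merged pool, in the spirit of Easy Tier; the data, names and the simple "promote the hottest" rule are invented for illustration and are not IBM's algorithm:

```python
# Promote the hottest extents to the SSD tier until its capacity is used;
# everything else stays on spinning disk. Illustrative model only.
def place_extents(heat_by_extent, ssd_capacity):
    """Return (ssd_extents, hdd_extents) ranked by measured heat."""
    ranked = sorted(heat_by_extent, key=heat_by_extent.get, reverse=True)
    ssd = set(ranked[:ssd_capacity])
    hdd = set(ranked[ssd_capacity:])
    return ssd, hdd

heat = {"e1": 900, "e2": 15, "e3": 420, "e4": 3}   # IOPS observed per extent
ssd, hdd = place_extents(heat, ssd_capacity=2)
assert ssd == {"e1", "e3"} and hdd == {"e2", "e4"}
```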
Analysis Tools
Can offload reports on extent monitoring and obtain SSD capacity planning recommendations Can engage IBM for extended analysis and consulting
[Diagram: Easy Tier migration; hot extents are promoted to faster ranks and cold extents demoted, redistributing load across the ranks in a pool]
Rank Depopulation: the storage administrator can request that a rank be removed from an extent pool. The operation is automatic, non-disruptive and transparent to host access; used extents are reallocated to other ranks in the pool and the rank is freed.
Re-stripe extents
Rank Depopulation
Can use Easy Tier to depopulate a rank and remove from an extent pool Automatic, non-disruptive and transparent to host access
Extents cannot be migrated between extent pools on different storage images (0/1).
Copy services considerations: Easy Tier optimization of data on the primary system is not reflected at the secondary.
Easy Tier automatic mode is not supported on encryption-capable storage facilities; however, Easy Tier manual mode is supported.
Applies to Metro Mirror, Global Mirror and z/OS Global Mirror. After a failover, it will take time to reach an optimized environment: Easy Tier must analyze the production workload, relearn, and redistribute data accordingly.
Hot spots can vary over time such that they appear uniformly distributed over a long enough monitoring period.
A critical workload to be performance-optimized may be intermixed with other workloads, resulting in non-optimal extent placement.
Monitoring may be turned off in time windows where non-critical workloads would skew the statistics (e.g., batch windows, off-shift or weekend workloads, month-end processing).
Usage
Understand workload Plan automated disk tiering View results of automated disk tiering Plan manual volume migration or extent pool merge
Reporting
System Summary: tier status per pool (including rank IOPS/bandwidth overload and rank IOPS skew)
System Recommendations: SSD, Enterprise or nearline recommendations
Extent Pool Reports: tier status (including rank utilization), SSD/ENT/nearline recommendations, volume heat distribution
Requirements
Performance monitoring: supporting code level (R6.1 for new support), Storage Image Monitor setting, single-tier or multi-tier extent pools
Offload statistics via the DS8000 Storage Manager GUI or DSCLI
Download STAT (free) and run it on Windows
The Easy Tier licensed feature (no charge) is required for monitoring
Performance statistics and SNMP traps can be obtained for any performance group that is monitored or managed; I/O operations are actively managed only in performance groups that are managed.
The same policy applies to all I/O for a given volume, based on the performance policy of its associated performance group.
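The volume-to-policy mapping can be sketched as follows; the group names and policy fields are illustrative, not the actual I/O Priority Manager configuration:

```python
# Every volume belongs to exactly one performance group, and one policy
# governs all I/O against that volume. Illustrative model only.
PERFORMANCE_GROUPS = {
    "pg1": {"policy": "favored"},
    "pg4": {"policy": "best-effort"},
}
VOLUME_TO_GROUP = {"db_vol": "pg1", "batch_vol": "pg4"}

def policy_for_io(volume):
    """All I/O for a volume inherits its performance group's policy."""
    return PERFORMANCE_GROUPS[VOLUME_TO_GROUP[volume]]["policy"]

assert policy_for_io("db_vol") == "favored"
assert policy_for_io("batch_vol") == "best-effort"
```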
Users can iterate: set volume performance groups, analyze the statistics, and tune the result
[Charts: a favored DB-like workload vs. non-favored workloads, comparing throughput (IO/s) and response time (ms)]
Thin Provisioning
Thin provisioning allows a storage system to provide a volume to an application that is larger than the actual space consumed
When a thin provisioned volume is assigned to a host, the host sees the whole (virtual) capacity of the volume, as if it were a fully-provisioned volume
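A sketch of this virtual-vs-real distinction (class and property names are illustrative, not the DS8000 implementation): the host sees the full virtual size, but real 1 GB extents are allocated only when written.

```python
# Thin provisioning sketch: real extents are allocated on first write.
class ThinVolume:
    def __init__(self, virtual_gb):
        self.virtual_gb = virtual_gb   # what the host sees
        self.allocated = set()         # 1 GB extents actually backed by storage

    def write(self, offset_gb):
        self.allocated.add(int(offset_gb))  # allocate the extent on demand

    @property
    def real_gb(self):
        return len(self.allocated)

vol = ThinVolume(virtual_gb=100)
vol.write(0.5); vol.write(0.7); vol.write(42)
print(vol.virtual_gb, vol.real_gb)   # 100 2 -> 100 GB visible, 2 GB consumed
```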
Meets requirements for managing copy services in a multi-tenancy environment. Resource groups create resource domains that control copy services relationships and prevent copy services operator errors from escaping a given domain. Introduced in Release 6.1.
[Diagram: resource groups isolating two tenants ("Sharks" and "Jets") sharing DS8000s across Site 1 and Site 2]
Performance
Availability
Management/Growth
3 variations on PAVs:
Static: an alias is always bound to the same base device
Dynamic: uses Workload Manager to dynamically allocate aliases from a shared pool
HyperPAV: more efficient than dynamic PAVs; each System z image has its own pool
HyperPAV benefits: reduces the total number of aliases needed for a given workload; z/OS reacts more quickly to I/O workload changes; alias-management overhead is reduced because Workload Manager is not involved in assigning and moving aliases
z/VSE supports PAVs; z/OS, z/VM and Linux for System z support PAV and HyperPAV
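The HyperPAV pooling idea can be sketched as follows; the class, alias names and per-I/O grab-and-release rule are an illustration of the concept, not the z/OS implementation:

```python
# HyperPAV-style shared alias pool: aliases are grabbed per I/O and
# returned immediately, so fewer aliases cover the same workload.
class HyperPavPool:
    def __init__(self, aliases):
        self.free = list(aliases)

    def start_io(self, base_device):
        if not self.free:
            return None                 # I/O queues until an alias frees up
        alias = self.free.pop()
        return (base_device, alias)     # alias bound only for this I/O

    def end_io(self, binding):
        self.free.append(binding[1])    # alias returns to the shared pool

pool = HyperPavPool(aliases=["a0", "a1"])
io1 = pool.start_io("base00")
io2 = pool.start_io("base01")               # different base, same shared pool
assert pool.start_io("base02") is None      # pool exhausted: I/O waits
pool.end_io(io1)
assert pool.start_io("base02") is not None  # reuse the freed alias at once
```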
Dynamic PAVs
HyperPAV
Multiple Allegiance
MIDAWs
Originally introduced by IBM on the System z9 processor; improves FICON performance.
Allows ECKD channel programs to read and write many storage locations with one channel command. Improves the performance of sequential I/Os using 4K records, especially with extended format data sets. Eliminates the extended format penalty and shrinks the small DB2 page size performance penalty. MIDAWs are implemented by Media Manager.
Noticeable improvements for: extended format data sets accessed through Media Manager, DB2, and extended format VSAM files.
[Diagram: a 1 TB EAV is the equivalent of 1,062 3390 Model 1 volumes (Mod 1062)]
Extended Address Volume (EAV) is the Next Step in Larger z/OS Volumes
EAV: A volume with more than 65,520 cylinders
The HyperPAV function complements this design by scaling the I/O rates against a single volume
Maximum sizes (EAV introduced in z/OS V1R10):
3 GB: max 3,339 cylinders
9 GB: max 10,017 cylinders
27 GB: max 32,760 cylinders
54 GB: max 65,520 cylinders
EAV: currently limited to 1 TB (max 1,182,006 cylinders); the architecture allows for 100s of TBs
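The capacity figures can be checked from 3390 geometry: a cylinder holds 15 tracks of 56,664 bytes, i.e. 849,960 bytes. A small calculation (function name is illustrative):

```python
# Convert 3390 cylinder counts to decimal GB using standard 3390 geometry.
CYL_BYTES = 15 * 56_664   # 15 tracks of 56,664 bytes per cylinder

def cyls_to_gb(cyls):
    return cyls * CYL_BYTES / 10**9

print(round(cyls_to_gb(3_339)))      # ~3 GB (a 3390-3)
print(round(cyls_to_gb(65_520)))     # ~56 GB, the pre-EAV limit
print(cyls_to_gb(1_182_006) / 1000)  # ~1.0 TB, the EAV maximum
```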
zHPF may help reduce the infrastructure costs for System z I/O by efficiently utilizing I/O resources so that fewer CHPIDs, fibers, switch ports and control unit ports may be needed
zHPF also complements the System z EAV strategy for growth by increasing the I/O rate capability as volume sizes expand vertically
zHPF Evolution
2009: single-domain, single-track I/O; reads and update writes; Media Manager exploitation; z/OS R8 and above; multi-track but <= 64K; DS8100/DS8300 with R4.1 or above; z10 processor
2010: z196 processor; >64K transfers; multi-track of any size; Extended Distance; z/OS R11 and above, EXCP
2011: DS8700/DS8800 with R6.2; format writes, multi-domain I/O; 100% of DB2 I/O converted to zHPF; QSAM/BSAM exploitation; a typical client will have 90%+ of all DASD I/O converted to zHPF; z196 FICON Express8S; EXCPVR support; ISV exploitation
Use simpler protocols to encapsulate channel programs while preserving the enterprise class qualities of service of FICON Complex channel programs continue to use CCW Chains with base FICON protocols
Maximum I/O rate for a channel with a simple 4KB read hit benchmark doubles with zHPF
Realistic production workloads with a mix of data transfer sizes may see up to 30% savings in channel utilization compared to FICON Sequential workloads that transfer up to a single track (for example, 12 x 4KB per I/O) may also benefit OLTP Workloads that exploit zHPF could see up to 30% improvement in DS8000 throughput
Improved first failure data capture Additional channel and CU diagnostics for MIH conditions
Value
Reduce the number of channels, switch ports, control unit ports and optical cables required to balance CPU MIPS with I/O capacity. Reduce elapsed times (DB2, VSAM) by up to 2X.
[Diagram: zHPF link protocol; the channel sends the read command, the control unit returns CMR, 4K of data and status, and the Transport Response IU closes the exchange]
zHPF requires System z10 processor or higher zHPF provides a much simpler link protocol than FICON
ISMF is modified to support these SMS enhancements. Three new DFSMShsm commands:
FRBACKUP: creates a fast replication backup version for each volume in a specified copy pool
FRRECOV: uses fast replication to recover an entire copy pool from a disk copy, an individual volume from a disk or tape copy, or one or more data sets from a disk or tape copy
FRDELETE: deletes one or more unneeded fast replication backup versions
DB2 utilities use these HSM functions.
Auto-configuration
Compares newly discovered system with target IODF and proposes new configuration to user
zDAC Flow
DB2 on AIX exploits this function; the DS8000 host adapter gives preferential treatment to higher-priority I/O
Cooperative Caching
Cooperative caching allows a trusted host application to provide cache hints to the DS8000
For example, DB2 can tell the DS8000 that recently accessed data is unlikely to be accessed again soon, so the DS8000 can destage it and use the cache slots for data more likely to be re-accessed.
Currently supported by DB2 on AIX; System p servers with AIX, MPIO and the Path Control Module exploit this function. Requires raw file systems and the AIX 64-bit kernel.
IBM i OS-based hot data ASP Balancer and DB2 media preference support for DS8000 Solid State Drives (SSD) and Easy Tier
PowerHA NPIV Virtual I/O Server with DS8000: full support of Virtual I/O Server (VIOS) and PowerHA
PowerHA + DS8000 copy services end-to-end integration
Common Smart-IOA fiber for disk and tape
New 4 Gb / 8 Gb Smart fiber I/O adapter (IOPless) in 6.1 (++ performance)
Tagged Command Queuing and Header Strip Merge (+++ performance)
Common one-stop POWER/DS8000 support from Supportline experts
DS8800
Up to 40% better performance, with a reduction in both power consumption and floor space
SAN Infrastructure
Focus on end-to-end performance monitoring and investigation for an IBM i and DS8000 environment. IBM i 7.1 adds a new category to IBM i Collection Services:
*EXTSTG: new collection of performance metrics from the DS8000 (requires DS8000 R4 or later firmware)
Data can be presented in graphs using iDoctor today, and in Performance Data Investigator (PDI) in a future release (next semi-annual function update)
Eliminates the causes of out-of-sync situations: fussy applications, complex SQL, reorgs, deletes, heavy batch, etc. are no longer a replication headache. A scalable, robust and automated solution that is always ready to switch.
[Diagrams: PowerHA-DS8000 combinations; LPARs replicating across DS8000 pairs over the network]
Encryption
Security issues are both internal and external: how do you protect against the well-intentioned employee who mishandles information, as well as the malicious outsider?
Your business must comply with a growing number of corporate standards and government regulations, so you need tools that can document the status of your application security. With a growing number of regulatory mandates, you must also prove that your physical assets are secure.
Lease expiration
Encryption is an option
[Diagram: array site layouts showing data (D), parity (P) and spare (S) drives for RAID5 6+P and 7+P, and RAID10 3+3 and 4+4 configurations]
Drives perform AES 128-bit encryption at full data rate, so there is no impact to disk response times
Protection for disk removal (repair, replace or stolen) Protection for disk subsystem removal (retired, replaced or stolen)
(TKLM Version 1, hosted in the System Services Runtime Environment for z/OS)
Lifecycle functions
Notification of certificate expiry Automated rotation of groups of keys
Same TKLM can be used with IBM DS8000, DS5000, and IBM tape Products from Emulex, Brocade and LSI also work with TKLM
TKLM continued
The keys used by TKLM are a public/private asymmetric key pair, referred to as the public Key Encrypting Key (KEK) and the private KEK.
The key generation and propagation processes on the TKLM associate a key label with each wrap/unwrap pair. The key label is a user-specified text string retained with each pair. Key negotiation and authentication between TKLM and the DS8000 take place at DS8000 power-on.
One TKLM key server can easily handle multiple DS8000s and DS5000s; the network traffic requirement is small
Two TKLM servers are required to prevent a deadlock condition
Encryption Drives
These specialized drives include built-in encryption capability; encryption must be enabled before a drive is used. Current FDE drive technology available:
DS8700: 300 GB/15,000 RPM; 450 GB/15,000 RPM
DS8800: 146 GB/15,000 RPM; 300 GB/15,000 RPM; 450 GB/10,000 RPM; 600 GB/10,000 RPM; 900 GB/10,000 RPM
Copy Services
[Timeline: copy services evolution from 3990/3390 through RVA and ESS/Shark to DS6000/DS8000 Global Mirror V2]
FlashCopy
Reduced space snapshots for backups
Disk Backup
- Incremental option makes future backups efficient
- Multi-target allows checkpointing and versioning
- Examples: nightly mail DB restore point, nightly pre-batch restore point, nightly market results
Test backup
- Pre-testing restore point
D/R backup
- Maintain a consistent copy during resynchronization
- Create a consistent copy before replication
Analysis
- Data warehousing, data mining, business intelligence
- Reporting
Clones/instances
- For internal use
- For business partners and vendors
FlashCopy NoCopy
- Copy-on-write is performed as needed to preserve the point-in-time image
- Reads are satisfied from the source as necessary
- NoCopy may be changed to background copy
[Diagram: source and target volumes with track ID checklists; a write to the source triggers a copy-on-write to the target before the new data lands]
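The NoCopy semantics can be sketched as a per-track checklist: tracks not yet copied live only on the source, and a source write first preserves the point-in-time track on the target. Class and method names are illustrative, not the DS8000 implementation.

```python
# FlashCopy NoCopy sketch: copy-on-write preserves the point-in-time image.
class FlashCopyNoCopy:
    def __init__(self, source):
        self.source = dict(source)   # live volume, keeps changing
        self.target = {}             # physically copied target tracks
        self.pending = set(source)   # "track ID checklist": not yet copied

    def write_source(self, track, data):
        if track in self.pending:    # copy-on-write before the overwrite
            self.target[track] = self.source[track]
            self.pending.discard(track)
        self.source[track] = data

    def read_target(self, track):
        # uncopied tracks are still read from the source (PiT image intact)
        return self.target.get(track, self.source[track])

fc = FlashCopyNoCopy({"t1": "old1", "t2": "old2"})
fc.write_source("t1", "new1")
assert fc.read_target("t1") == "old1"   # point-in-time image preserved
assert fc.read_target("t2") == "old2"   # never copied, read via source
```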
If there is concern about the impact of FlashCopy target access on the source volume, and there is a window of low activity, FlashCopy with background copy may be completed during the window. After the copy is complete, there is no impact to the source volume when the target is accessed.
[Diagram: background copy drains the track ID checklist so target reads no longer touch the source]
FlashCopy Options
Incremental FlashCopy (refreshes the target volume)
Persistent FlashCopy
Data Set FlashCopy
Multiple relationships
Consistency group FlashCopy
Inband commands over the remote mirror link
FlashCopy Space Efficient
Copy and NoCopy
FlashCopy Manager
Local application FlashCopy data versions and FlashCopy backup
Simplified deployment; integrates with DS8000, SVC, XIV and DS 3/4/5*; optional TSM (Tivoli Storage Manager 6) integration for backup
Exchange
FlashCopy Restore* of Exchange storage groups File copy restore of a storage group or database from a mounted FlashCopy image
Restore into a Recovery Storage Group, alternate storage group, or relocated storage group
Oracle
FlashCopy restore of a Full database
SAP
FlashCopy Restore of a Full database
SQL
FlashCopy Restore* of a full database backup File copy restore of a full database from a mounted FlashCopy image
To an alternate database name To an alternate location
Remote Replication
Metro Mirror
Synchronous, continuous replication
Target access requires suspension of replication
Minimal RPO; designed for 0 data loss
System z, open systems and System i volume replication in one or multiple consistency groups
Metro Mirror
Synchronous
1. Write to local
2. Write copied to remote (placed in cache + persistent memory)
3. Write complete signaled from remote to local
4. Write complete to application
(Local DS8000 -> Remote DS8000)
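The synchronous sequence can be sketched in a few lines (function and variable names are illustrative): the application write is not acknowledged until the remote copy exists.

```python
# Metro Mirror sketch: the host ack waits for the remote copy.
def metro_mirror_write(local, remote, track, data):
    local[track] = data    # 1. write to local
    remote[track] = data   # 2. write copied to remote (cache + NVS)
    remote_ack = True      # 3. remote signals write complete to local
    return remote_ack      # 4. only now is the application acknowledged

local, remote = {}, {}
assert metro_mirror_write(local, remote, "t1", "data")
assert local["t1"] == remote["t1"] == "data"   # zero data loss by design
```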
Global Copy
RPO depends on procedures and the consistency creation interval
Unlimited global distances
Minimal application impact
System z, open systems and System i volume replication in the same or different consistency groups
Target access requires suspension of replication
Global Copy
Asynchronous; minimal application impact
1. Write to local
2. Track ID added to the checklist of tracks to be copied to the secondary
3. Write complete to application
4. At a later time, the write is copied to the remote
5. Write complete sent from remote to local
6. Track ID removed from the checklist
(Local DS8000 -> Global Copy -> Remote DS8000)
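The asynchronous sequence can be sketched with an out-of-sync checklist (class and method names are illustrative): the application is acknowledged immediately, and the checklist drives later transmission.

```python
# Global Copy sketch: a fuzzy, asynchronous copy driven by a checklist.
class GlobalCopy:
    def __init__(self):
        self.local, self.remote = {}, {}
        self.out_of_sync = set()       # checklist of tracks still to send

    def app_write(self, track, data):
        self.local[track] = data       # 1. write to local
        self.out_of_sync.add(track)    # 2. track added to checklist
        return "complete"              # 3. application acknowledged at once

    def drain(self):
        for track in list(self.out_of_sync):
            self.remote[track] = self.local[track]   # 4./5. copy + remote ack
            self.out_of_sync.discard(track)          # 6. checklist cleared

gc = GlobalCopy()
gc.app_write("t1", "v1")
assert "t1" in gc.out_of_sync and "t1" not in gc.remote  # remote lags behind
gc.drain()
assert gc.remote["t1"] == "v1" and not gc.out_of_sync
```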
Global Mirror
Global Mirror
Global Copy + FlashCopy
Minimal application response time impact. A single consistency group can include System z + open systems + System i.
1. Write to local
2. Write complete to application
3. Autonomically, or on a user-specified interval, a consistency group is formed at the local site
4. CG sent to the remote via Global Copy (drain)
5. After all consistent data for the CG is received at the remote, FlashCopy with two-phase commit (the target can be a FlashCopy SE volume)
6. Consistency complete signaled to local
7. Tracks changed after the CG are copied to the remote via Global Copy; FlashCopy copy-on-write preserves the consistent image
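The consistency-group cycle can be sketched as follows (class and method names are illustrative): Global Copy drains changes to the remote, then a FlashCopy preserves the last consistent image while the next cycle's writes keep arriving.

```python
# Global Mirror sketch: drain a consistency group, then FlashCopy it so a
# recoverable image always exists at the remote.
class GlobalMirror:
    def __init__(self):
        self.local, self.remote, self.consistent = {}, {}, {}

    def app_write(self, track, data):
        self.local[track] = data          # acknowledged immediately (async)

    def form_consistency_group(self):
        self.remote = dict(self.local)    # CG drained to remote (Global Copy)
        self.consistent = dict(self.remote)  # FlashCopy preserves the CG

gm = GlobalMirror()
gm.app_write("t1", "v1")
gm.form_consistency_group()
gm.app_write("t1", "v2")                  # next cycle's change arrives
assert gm.consistent["t1"] == "v1"        # recoverable image stays consistent
```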
Metro/Global Mirror
[Diagram: local site -> Metro Distance -> intermediate site -> Global Distance -> remote site]
Spans Metro Distance + Global Distance
RPO as low as 0 at the intermediate or remote site for a local failure
RPO as low as 3-5 seconds at the remote site for failure of both local and intermediate
Application response time impacted only by the distance between local and intermediate
Fast resynchronization of sites after failures and recoveries
A single consistency group may include open systems, System z and System i volumes
Metro Mirror leg (local -> intermediate DS8000): synchronous; minimizes data loss; RPO as low as 0; application response time affected by remote mirroring; Metro distance (300 km standard)
Global Mirror leg (intermediate -> remote DS8000, with FlashCopy): consistent data; RPO as low as 3-5 seconds depending on workload and bandwidth; global distance
[Diagrams: Metro/Global Mirror failover and recovery scenarios across the local, intermediate and remote DS8000s, with the application server switching sites]
DS8000 Performance
SPC-2 Benchmark
DS8800 host adapter performance measured using 8 Gb/second host adapter DS8300 and DS8700 host adapter performance measured using 4 Gb/second host adapter
Logical Configuration
SSPC Architecture
[Diagram: SSPC (or TPC) and encryption key servers on the corporate network, connected through a firewall to the primary HMC; storage administrators connect over the network, and remote support uses a phone line or the Internet]
Encryption key servers: provide key management for encryption of data at rest
Storage admin: maintains the storage configuration, establishes replication, receives alert messages via email, reviews system performance data; uses a web browser, DSCLI or TPC
Remote support: phone home and remote technical support, using an analog modem or a secure Internet VPN connection
TPC Disk: performance monitoring and management; alerting; advanced configuration and allocation
TPC Data: enterprise reporting and management of storage utilization and file systems; provides capacity management and automated storage provisioning
DS Storage Manager
The DS Storage Manager GUI is a web-based application for performing storage administration on the DS8000. Access it from:
- System Storage Productivity Center (SSPC)
- A remote desktop to the SSPC
- TPC on a workstation connected to the HMC
- A web browser connected to SSPC or TPC
DSCLI
The DS Command-Line Interface (DSCLI) is a full-function CLI that supports scripting and basic automation. DSCLI can be used to:
- Create and maintain authorized users
- Configure hosts, arrays, extent pools and ports
- Install activation license keys
- Define logical volumes
- Manage copy services such as FlashCopy and replication
CLI Examples
# Define I/O ports as FCP or FICON
setioport -topology scsi-fcp I0103
setioport -topology ficon I0230
# Create extent pools
mkextpool -rankgrp 1 -stgtype ckd p07
mkextpool -rankgrp 0 -stgtype fb p08
# Create an array and assign ranks to pools
mkarray -raidtype 5 -arsite S1
mkrank -array A0 -stgtype ckd -extpool P0
mkrank -array A1 -stgtype fb -extpool P1
# Create logical control units
mklcu -qty 1 -id 00 -ss 5000
mklcu -qty 1 -id 01 -ss 5100