Systems Architect, SEE IBM Croatia
9/27/2012
Agenda
- Introduction
- Virtualization: function and benefits
- IBM Storage Virtualization
- Virtualization Appliance: SAN Volume Controller
- Virtual Storage Platform Management
- Integrated Infrastructure System: Cloud Ready
- Summary
Smarter Computing
A new approach to designing IT infrastructures
Smarter Computing is realized through an IT infrastructure that is designed for data, tuned to the task, and managed in the cloud...
Virtualization: higher utilization, increased flexibility
Storage virtualization: SAN Volume Controller (the Storage Hypervisor) and ProtecTIER, industry-leading storage virtualization solutions
Four types of virtualization:
- Sharing of resources. Examples: LPARs, VMs, virtual disks, VLANs. Benefits: resource utilization, workload management, agility, energy efficiency.
- Aggregation of resources (many resources of type Y presented as one virtual resource). Examples: virtual disks, system pools. Benefits: management simplification, investment protection, scalability.
- Emulation (resources of type X presented as virtual resources of another type). Examples: architecture emulators, iSCSI, FCoE, virtual tape. Benefits: compatibility, software investment protection, interoperability, flexibility.
- Insulation (add, replace, or change physical resources without changing the virtual resources). Examples: compatibility modes, CUoD, appliances. Benefits: agility, investment protection, hiding of complexity and change.
Virtualization sits between virtual resources and physical resources: it hides some of the complexity and adds or integrates new function with existing services.
Today's SAN
[Diagram sequence: a virtualization layer is inserted between the hosts and the SAN. Virtual disks start as images of migrated non-virtual disks; later, striping, thin provisioning, and other attributes can be modified. The virtualization layer also enables non-disruptive upgrades.]
[Diagram: benefits of the virtualization layer include pooling and isolation, performance from cache + SSD, mirroring across arrays, and license cost savings.]
Virtual disks start in transparent Image Mode before being converted to fully striped volumes. This also works in reverse, so there is no vendor lock-in.
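A minimal sketch of this migration path on the SVC CLI (pool, MDisk, and volume names here are illustrative, not from the original deck):

  # Import an existing LUN as an image-mode virtual disk (contents preserved)
  svctask mkvdisk -mdiskgrp LEGACY_POOL -iogrp 0 -vtype image -mdisk mdisk5 -name vol_migrated
  # Later, migrate the volume into a striped pool while hosts stay online
  svctask migratevdisk -vdisk vol_migrated -mdiskgrp STRIPED_POOL

The reverse direction (striped back to image mode, for moving data off SVC) uses migratetoimage.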
Redundant SAN!
[Diagram: redundant SAN fabrics A and B, with the virtualization layer zoned into both fabrics.]
Storage Hypervisor
- Management: VMControl
- Replication: application-integrated FlashCopy, DR automation
- Data mobility
- High availability: stretch-cluster HA
6th Generation
Continuous development; firmware is backwards compatible (64-bit releases are not for 32-bit hardware).
Models:
- SVC 4F2: 4 GB cache, 2 Gb SAN (initial release)
- SVC 8F2: 8 GB cache, 2 Gb SAN (Rel. 3 / 2006, RoHS compliant)
- SVC 8F4: 8 GB cache, 4 Gb SAN; 155,000 SPC-1 IOPS
- SVC 8G4: adds dual-core processor; 272,500 SPC-1 IOPS
- SVC CF8: 24 GB cache, quad-core; 380,483 SPC-1 IOPS (6-node)
- SVC CG8: adds 10 GbE; approx. 640,000 SPC-1-like IOPS
[Diagram: an SVC cluster of SVC nodes (each with a UPS, not depicted) virtualizes array LUNs into storage pools.]
Virtual-Disk Types
- Image Mode: pass-through; virtual disk = physical LUN
- Sequential Mode: virtual disk mapped sequentially to a portion of a managed disk
- Striped Mode: virtual disk striped across multiple managed disks; the preferred mode
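As a sketch, the three modes map onto mkvdisk as follows (pool and MDisk names such as MDG1 and mdisk8 are placeholders):

  # Striped (preferred): extents spread across all managed disks in the pool
  svctask mkvdisk -mdiskgrp MDG1 -iogrp 0 -size 100 -unit gb -vtype striped -name vol_app
  # Sequential: extents allocated in order from one managed disk
  svctask mkvdisk -mdiskgrp MDG2 -iogrp 0 -size 100 -unit gb -vtype seq -mdisk mdisk3 -name vol_seq
  # Image: pass-through of an existing LUN
  svctask mkvdisk -mdiskgrp MDG3 -iogrp 0 -vtype image -mdisk mdisk8 -name vol_image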
Management interfaces
- SVC CLI: ssh scripting, complete command set (example below)
- Microsoft VDS & VSS hardware providers
- vCenter plug-in
- Storage Control
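Because the CLI is reachable over ssh, the complete command set scripts cleanly; a small sketch (cluster address and credentials are assumptions):

  # Query cluster, pool, and volume inventory from any management host
  ssh admin@svc-cluster 'svcinfo lscluster -delim :'
  ssh admin@svc-cluster 'svcinfo lsmdiskgrp -delim :'
  ssh admin@svc-cluster 'svcinfo lsvdisk -delim :'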
Feature summary
- E-mail, SNMP trap & syslog error-event logging
- Authentication service for single sign-on & LDAP
- Virtualize data without data loss
- Expand or shrink volumes online
- Thin-provisioned volumes: reclaim zero-write space; thick-to-thin, thin-to-thick & thin-to-thin migration

FlashCopy (point-in-time copy)
- Up to 256 targets per source volume
- Full (with background copy = clone) or partial (no background copy)
- Space-efficient (thin) targets
- Incremental, cascaded & reverse copies
- Consistency groups
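A sketch of an incremental FlashCopy pair grouped for consistency (volume, map, and group names are hypothetical):

  # Two incremental mappings in one consistency group
  svctask mkfcconsistgrp -name cg_nightly
  svctask mkfcmap -source vol0 -target vol0_copy -consistgrp cg_nightly -copyrate 50 -incremental
  svctask mkfcmap -source vol1 -target vol1_copy -consistgrp cg_nightly -copyrate 50 -incremental
  # Flush host caches, then trigger both maps at the same point in time
  svctask startfcconsistgrp -prep cg_nightly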
- Microsoft Virtual Disk Service & Volume Shadow Copy Service hardware provider

Remote Copy (optional)
- Synchronous & asynchronous remote replication between SVC clusters
- Consistency groups
[Diagrams: Volume Mirroring keeps volume copy 1 and volume copy 2 on separate source and target MDisks within one SVC cluster; Metro Mirror or Global Mirror relationships replicate from several SVC clusters to a consolidated DR site.]
VMware integration: Storage Replication Adapter for Site Recovery Manager, VAAI support & a vCenter Server management plug-in.
[Diagram: Easy Tier relieves hot spots by placing data across SSDs and HDDs for optimized performance and throughput.]
Volume Mirroring
- SVC maintains both copies in sync; it reads from the primary copy and writes to both copies
- If the disk supporting one copy fails, SVC provides continuous data access by using the other copy
- Copies are automatically resynchronized after repair
- Intended to protect critical data against failure of a disk system or disk array
- A local high-availability function, not a disaster-recovery function
- The user can configure the timeout behaviour per mirrored volume: with priority on redundancy, SVC waits until the write completes or finally times out; this has a performance impact, but active copies are always synchronized
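A sketch of adding a mirrored copy (pool and volume names are assumptions; -mirrorwritepriority is available on recent SVC code levels):

  # Add a second, synchronized copy of vol_db in a second storage pool
  svctask addvdiskcopy -mdiskgrp POOL_B vol_db
  # Favour redundancy over write latency for this volume
  svctask chvdisk -mirrorwritepriority redundancy vol_db
  # Watch the initial synchronization progress
  svcinfo lsvdisksyncprogress vol_db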
IBM Easy Tier
- Transparently reorganizes hot spots between SSDs and HDDs, with the goal of reducing response time
- Offers automatic and semi-automatic extent-based placement and migration management

Why it matters:
- Solid-state storage has orders-of-magnitude better throughput and response time for random reads
- Allocating full volumes to SSD benefits only a small number of volumes, portions of volumes, and use cases
- Dynamically moving the hottest extents to the highest-performance storage lets a small number of SSDs benefit the entire infrastructure
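As a sketch, Easy Tier is switched on per storage pool (pool name assumed; flag values as in SVC 6.x):

  # Enable automatic extent relocation in a pool that mixes SSD and HDD MDisks
  svctask chmdiskgrp -easytier auto POOL_HYBRID
  # Review the per-tier capacity and Easy Tier status of the pool
  svcinfo lsmdiskgrp POOL_HYBRID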
Thin Provisioning
- Traditional (fully allocated) virtual disks consume physical disk capacity for the entire virtual disk, even if it is unused; with thin provisioning, SVC allocates physical capacity only when data is written
- Dynamic growth: without thin provisioning, pre-allocated space is reserved whether the application uses it or not; with thin provisioning, applications can grow dynamically but consume only the space they actually use
- Supports all hosts supported with traditional volumes, and all advanced features (Easy Tier, FlashCopy, etc.)

Reclaiming unused disk space:
- When Volume Mirroring copies from a fully allocated volume to a thin-provisioned volume, SVC does not copy blocks that are all zeros
- When processing a write request, SVC detects whether all zeros are being written and does not allocate disk space for such writes on thin-provisioned volumes
- Helps avoid space-utilization concerns when formatting volumes
- Done at grain level (32/64/128/256 KiB): if a grain contains all zeros, it is not written
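A sketch of creating such a volume (names and sizes are illustrative):

  # 500 GB virtual capacity, 2% real capacity up front, auto-expand, 64 KiB grain,
  # warning event when the volume is 80% allocated
  svctask mkvdisk -mdiskgrp POOL_A -iogrp 0 -size 500 -unit gb -rsize 2% -autoexpand -grainsize 64 -warning 80% -name vol_thin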
Copy Services
Without virtualization, replication typically requires the target array to be the same as the source; each array needs different multipath drivers; and lower-cost disks offer primitive, or no, replication services.
[Diagram: array-native replication (e.g., EMC TimeFinder/SRDF) works only between like arrays; SVC copy services replicate across heterogeneous arrays such as IBM DS5000, EMC CLARiiON, HDS AMS, and HP EVA.]
Copy services comparison
- FlashCopy: point-in-time copy; works outside the box, across 2 close sites (<10 km). Warning: this is not real-time replication.
- Metro Mirror: synchronous mirror across 2 close sites (<300 km). Write I/O response time is doubled, plus distance latency; no data loss. Warning: production performance is impacted if inter-site links are unavailable, during microcode upgrades, etc.
- Global Mirror: consistent asynchronous mirror. Limited impact on write I/O response time; some data loss on failover. All write I/Os are sent to the remote site in the same order they were received on the source volumes. Only 1 source and 1 target volume per relationship.

- Source and target can have different characteristics and be from different vendors
- Source and target can be in the same cluster
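A sketch of a Metro Mirror setup between two clusters (cluster and volume names are assumptions; add -global to mkrcrelationship for Global Mirror):

  # Pair the clusters (run mkpartnership on both sides; bandwidth in MBps)
  svctask mkpartnership -bandwidth 200 svc_remote
  # Create and start a synchronous relationship from vol0 to its DR twin
  svctask mkrcrelationship -master vol0 -aux vol0_dr -cluster svc_remote -name rel_vol0
  svctask startrcrelationship rel_vol0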
[Diagram: a single SAN Volume Controller environment spanning Datacenter 1 through Datacenter 4.]
[Diagram: VM hosts at two sites access LUN1 and its mirror LUN1' through SVC cluster nodes A and B.]
SVC split cluster & VDM connectivity: below 10 km using passive DWDM
- You should always have 2 SAN fabrics (A & B) and 2 switches per fabric (one at each site)
- This diagram shows connectivity to a single fabric only; in reality connectivity is to a redundant SAN fabric, so everything should be doubled
[Diagram: each I/O group connects over shortwave (SW) or longwave (LW) links to SAN A switch 1 and SAN A switch 2, joined by an ISL.]
- Always connect the SVC nodes of a cluster to the same SAN switches
- Best practice is to connect each SVC node to SAN fabric A switches 1 & 2 as well as SAN fabric B switches 1 & 2; connecting all SVC nodes to switch 1 in fabric A and to switch 2 in fabric B is supported but not recommended
- To avoid fabric re-initialization after link hiccups on the ISL, consider creating a virtual SAN fabric on each site and using inter-VSAN routing
[Diagram: quorum placement: Pool 1 with a candidate quorum in production room A, Pool 2 with a candidate quorum in production room B, and Pool 3 with the primary quorum in production room C.]
SVC split cluster & VDM connectivity: up to 300 km using active DWDM (enhanced)
- A Brocade virtual fabric or a Cisco VSAN can be used to isolate the public and private SANs
- Dedicated ISLs/trunks for SVC inter-node traffic
[Diagram: each I/O group connects over shortwave (SW) or longwave (LW) links to switches at both sites.]
- 2 switches per SAN fabric (1 per site) when using Cisco VSANs or Brocade virtual fabrics to isolate the private and public SANs
- 4 switches per SAN fabric (2 per site) when the private and public SANs are on physically dedicated switches
- This diagram shows connectivity to fabric A only; in reality connectivity is to a redundant SAN fabric, so everything should be doubled, with connections to the B switches as well
- Up to 300 km between Data center 1 and Data center 2
- Improve availability, load balancing, and real-time remote data access by distributing applications and their data across multiple sites (server clusters 1 and 2 fail over between Data center 1 and Data center 2)
- Seamless server/storage failover when used in conjunction with server or hypervisor clustering (such as VMware or PowerVM)
- Up to 300 km between sites (3x EMC VPLEX)
- For combined high-availability and disaster-recovery needs, synchronously or asynchronously mirror data over long distances between two high-availability stretch clusters
High Availability
Advantages:
- No manual intervention required
- Automatic and fast handling of storage failures
- Volumes mirrored in both locations
- Transparent to servers and host-based clusters
- A perfect fit in a virtualized environment (e.g., VMware vMotion, AIX Live Partition Mobility)
Disadvantages:
- A mix between an HA and a DR solution, but not a true DR solution
- Non-trivial implementation; involve IBM Services
Non-IBM expansion
Auto-migration
Compatibility
- Broad host support (up to 1,024 hosts), including SGI IRIX, Apple Mac OS, and IBM BladeCenter
- VAAI
- Point-in-time copy: full volume, copy-on-write, 256 targets, incremental, cascaded, reverse, space-efficient, FlashCopy Manager
- Easy Tier with SSD

IBM System Storage SAN Volume Controller: supported storage includes
- IBM: DS series, XIV
- HDS: Thunder, Lightning, TagmaStore, WMS, USP, USP-V, AMS 2100, 2300, 2500, Virtual Storage Platform (VSP)
- HP: StorageWorks MA, EMA, MSA 2000, XP, EVA 6400, 8400, P9500, 3PAR
- EMC: CLARiiON, CX4-960, Symmetrix, VMAX, VNX
- Sun: StorageTek
- NetApp: FAS
- Pillar: Axiom
- Fujitsu: Eternus DX60, DX80, DX90, DX410, DX8100, DX8300, DX9700, 8000 models 2000 & 1200
- NEC: iStorage
- Bull: Storeway 4000 models 600 & 400, 3000
Storage Hypervisor
- Management: VMControl
- Replication: application-integrated FlashCopy, DR automation
- Data mobility
- High availability: stretch-cluster HA
TPC 5.1
- Single management console for heterogeneous storage
- Health monitoring, capacity management, provisioning
- Fabric management, FlashCopy support
- Storage system performance management, SAN fabric performance management, trend analysis
- DR & business continuity
- Applications & storage hypervisor (ESX, VIO), HyperSwap management

Storage networks: switches & directors, virtual devices
Storage: multi-vendor storage, storage array provisioning, virtualization / volume mapping, block + NAS, VMFS, tape libraries
Replication: FlashCopy, Metro Mirror, Metro Global Mirror

- TCR/Cognos-based reporting & analytics
- Enhanced management for virtual environments
- Integrated installer, simplified packaging
[Diagram: hypervisor, VMs, and SAN storage.]
- Helps avoid double-counting storage capacity in TPC reporting on VMware
- Associates storage not only with individual VMs and hypervisors but also with clusters
- vMotion awareness
IBM PureSystems
- Infrastructure & cloud: the Integrated Infrastructure System provides factory integration of compute, storage, networking, and management; broad support for x86 and POWER environments; cloud-ready infrastructure
- Application & cloud: the Integrated Application Platform provides factory integration of infrastructure plus middleware (DB2, WebSphere); application-ready (Power or x86, with workload deployment capability); a cloud-ready application platform
Integrated elements: compute, storage, networking, virtualization, security, management, tools, and applications.
IBM PureSystems: what's inside? An evolution in design, a revolution in experience.
Expert Integrated Systems:
- Compute nodes: Power 2S/4S*, x86 2S/4S
- Storage node: V7000, with expansion inside or outside the chassis
- Expansion: PCIe, storage
Summary
Internet Resources
Information Center
http://publib.boulder.ibm.com/infocenter/svc/ic/index.jsp
Thank you!