
Storage Scalability

Module 6

© 2014 VMware Inc. All rights reserved


You Are Here

This module, Storage Scalability, fits into the course as follows:
 Course Introduction
 VMware Management Resources
 Performance in a Virtualized Environment
 Network Scalability
 Network Optimization
 Storage Scalability (this module)
 Storage Optimization
 CPU Optimization
 Memory Performance
 Virtual Machine and Cluster Optimization
 Host and Management Scalability



Importance

As the enterprise grows, new scalability features in VMware vSphere® enable the infrastructure to handle the growth efficiently.
Datastore growth and balancing issues can be addressed automatically with VMware vSphere® Storage DRS™.



Module Lessons

Lesson 1: Storage APIs and Virtual Machine Storage Policies


Lesson 2: vSphere Storage I/O Control
Lesson 3: Datastore Clusters and vSphere Storage DRS



Lesson 1:
Storage APIs and Virtual Machine Storage
Policies



Learner Objectives

By the end of this lesson, you should be able to meet the following
objectives:
 Describe VMware vSphere® Storage APIs – Array Integration
 Describe VMware vSphere® API for Storage Awareness™
 Configure and use virtual machine storage policies



vSphere Storage APIs – Array Integration (1)

vSphere Storage APIs – Array Integration enables storage vendors to provide hardware assistance that accelerates vSphere I/O operations by offloading to the array the work that storage hardware performs more efficiently.
vSphere Storage APIs – Array Integration includes the following API subsets:
 Hardware Acceleration APIs
• Allow arrays to integrate with vSphere to transparently offload certain storage operations to the array (see the sketch after this list).
• Use of these APIs significantly reduces CPU overhead on the host.
 Array Thin Provisioning APIs
• Help manage space on thin-provisioned storage arrays.
• Help prevent out-of-space conditions and enable space reclamation.
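As an illustrative PowerCLI sketch (not part of the course material), you can check which hardware acceleration primitives each device supports. It assumes a PowerCLI version that supports Get-EsxCli -V2; the vCenter and host names are hypothetical.

# Hypothetical vCenter and host names; adjust for your environment.
Connect-VIServer -Server "vc.lab.local"
$esxcli = Get-EsxCli -VMHost (Get-VMHost "esxi01.lab.local") -V2

# Equivalent to "esxcli storage core device vaai status get": reports, per device,
# whether the ATS, Clone (full copy), Zero (block zeroing), and Delete (UNMAP)
# primitives are supported.
$esxcli.storage.core.device.vaai.status.get.Invoke()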



vSphere Storage APIs – Array Integration (2)

Hardware Acceleration for block storage devices supports the following array operations:
 Full copy: Also called clone blocks or copy offload
 Block zeroing: Also called write same
 Hardware-assisted locking: Also called atomic test and set (ATS)
Hardware Acceleration for NAS devices supports the following NAS operations:
 Full file clone: The entire file is cloned instead of file segments.
 Reserve space: Space for a virtual disk is allocated in thick format.
 Lazy file clone: VMware Horizon™ View™ can offload the creation of linked clones to a NAS array.
 Extended file statistics



vSphere Storage APIs – Array Integration (3)

Array Thin Provisioning APIs allow the host to integrate with physical
storage and become aware of space usage in thin-provisioned LUNs.
 A VMware vSphere® VMFS (VMFS) datastore that you deploy on the
thin-provisioned LUN can detect only the logical size of the LUN.
• For example, if an array reports 2TB of storage, but the array provides only
1TB, the datastore considers 2TB to be the LUN's size.
 Using thin provisioning integration, the host can perform these tasks:
• Monitor the use of space on thin-provisioned LUNs to avoid running out of physical space.
• Inform the array about datastore space that is freed when files are deleted or removed from the datastore by VMware vSphere® Storage vMotion®. The array reclaims the freed blocks of space.
 vSphere 5.5 introduces a command in the esxcli namespace that reclaims deleted blocks on thin-provisioned LUNs that support the VAAI UNMAP primitive, as shown in the sketch below.
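A minimal sketch of that esxcli command, invoked through PowerCLI; the datastore label and reclaim unit are hypothetical example values.

Connect-VIServer -Server "vc.lab.local"
$esxcli = Get-EsxCli -VMHost (Get-VMHost "esxi01.lab.local") -V2

# Equivalent to: esxcli storage vmfs unmap --volume-label=<label> --reclaim-unit=<blocks>
$unmapArgs = $esxcli.storage.vmfs.unmap.CreateArgs()
$unmapArgs.volumelabel = "Datastore01"   # thin-provisioned, UNMAP-capable datastore
$unmapArgs.reclaimunit = 200             # VMFS blocks reclaimed per iteration
$esxcli.storage.vmfs.unmap.Invoke($unmapArgs)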



VMware vSphere API for Storage Awareness

VMware vSphere API for Storage Awareness enables a storage vendor to develop a software component, called a storage provider, for its storage arrays.
 A storage provider gets information from the storage array about available storage topology, capabilities, and state.
 VMware® vCenter Server™ connects to a storage provider.
 Information from the storage provider is displayed in the VMware vSphere® Web Client.
(Diagram: the storage device communicates with the vSphere storage provider, which reports to vCenter Server; the information is shown in the vSphere Web Client.)



Benefits of Storage Providers

Storage providers benefit vSphere administrators by:
 Enabling administrators to be aware of the topology, capabilities, and state of the physical storage devices on which their virtual machines are located
 Enabling administrators to monitor the health and usage of their physical storage devices
 Assisting administrators in choosing the correct storage in terms of space, performance, and service-level agreement requirements:
• Performed by using virtual machine storage policies



Configuring a Storage Provider

Select Home > vCenter > Storage > Manage > Storage Providers.
After you add a storage provider, it appears in the Storage Providers pane.
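A hedged PowerCLI sketch of registering a provider from the command line, assuming the New-VasaProvider cmdlet from PowerCLI's storage module is available; the provider name and URL are hypothetical, so use the values your array vendor supplies.

Connect-VIServer -Server "vc.lab.local"
# Register the vendor's storage provider with vCenter Server:
New-VasaProvider -Name "ArrayProvider01" `
                 -Url "https://array01.lab.local:8443/vasa/version.xml" `
                 -Credential (Get-Credential)
# The new provider then appears in the Storage Providers pane.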



About Virtual Machine Storage Policies

Virtual machine storage policies help you ensure that virtual machines use storage that guarantees a specified level of capacity, performance, availability, redundancy, and so on.
Virtual machine storage policies help you meet the following goals:
 Categorize datastores based on certain levels of service
 Provision a virtual machine's disks on the correct storage
When you define a storage policy, you specify storage requirements for applications that run on the virtual machines.
Storage requirements can be of the following types:
 Vendor-specific storage capabilities
 User-defined, or tag-based, storage capabilities



Storage Capabilities

Storage providers present device characteristics as vendor-specific storage capabilities to vCenter Server.
You can use tags to create user-defined storage capabilities and apply those tags to datastores.
Use tags when your datastore is not represented by a storage provider.
(Diagram: the storage provider supplies vendor-specific storage capabilities to vCenter Server; tags supply user-defined storage capabilities; both feed virtual machine storage policies.)
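A minimal PowerCLI sketch of the tag-based path; the category, tag, and datastore names are hypothetical.

Connect-VIServer -Server "vc.lab.local"

# A tag category scoped to datastores; Single cardinality = one tier tag per datastore.
$tier = New-TagCategory -Name "StorageTier" -EntityType Datastore -Cardinality Single
$gold = New-Tag -Name "Gold" -Category $tier -Description "SSD-backed, replicated"

# Tag a datastore that no storage provider represents:
New-TagAssignment -Tag $gold -Entity (Get-Datastore "Datastore01")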



Storage Policies

Virtual machine storage policies:
 Contain one or more storage capabilities
 Are associated with one or more virtual machines
 Can be used to test that virtual machines reside on compliant storage
(Diagram: a storage policy is checked against the datastores backing each virtual machine; each virtual machine is reported as Compliant or Not Compliant.)
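As a sketch, a tag-based policy can also be built with PowerCLI's SPBM cmdlets (assuming that module is available); this reuses the hypothetical "Gold" tag from the earlier example.

Connect-VIServer -Server "vc.lab.local"
$gold = Get-Tag -Name "Gold" -Category "StorageTier"

# Rule: the datastore must carry the Gold tag. Rules in a rule set are ANDed;
# rule sets in a policy are ORed.
$rule    = New-SpbmRule -AnyOfTags $gold
$ruleSet = New-SpbmRuleSet -AllOfRules $rule
New-SpbmStoragePolicy -Name "Gold-Tier" -Description "Tier-1 datastores" `
                      -AnyOfRuleSets $ruleSet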



Using the Virtual Machine Storage Policy

When you create, clone, or migrate a virtual machine, you can apply
the storage policy to the virtual machine.



Checking Virtual Machine Storage Compliance

You can check whether virtual machines use datastores that are
compliant with the storage policy.
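A hedged PowerCLI sketch of the same check, assuming the SPBM cmdlets and the hypothetical "Gold-Tier" policy and "Web01" virtual machine from the previous examples.

Connect-VIServer -Server "vc.lab.local"
$vm     = Get-VM "Web01"
$policy = Get-SpbmStoragePolicy -Name "Gold-Tier"

# Associate the policy with the VM home object and every virtual disk:
$vm, (Get-HardDisk -VM $vm) | Get-SpbmEntityConfiguration |
    Set-SpbmEntityConfiguration -StoragePolicy $policy

# Re-read the configuration; the compliance status column reports whether each
# entity resides on storage that satisfies the policy.
$vm, (Get-HardDisk -VM $vm) | Get-SpbmEntityConfiguration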



Advanced Storage Options

Advanced storage options include the following:
 N_Port ID virtualization (NPIV)
 Software iSCSI port binding
 VMFS resignaturing
 Pluggable storage architecture (PSA)



N_Port ID Virtualization

NPIV assigns a virtual World Wide Name (WWN) and virtual N_Port ID to an individual virtual machine.
 NPIV gives each virtual machine an identity on the SAN.
NPIV benefits:
 Track storage traffic per virtual machine.
 Zone and mask LUNs per virtual machine.
 Leverage SAN quality of service per virtual machine.
 Improve I/O performance through per-virtual-machine array-level caching.
Configure NPIV if you have the following requirements:
 A management requirement to monitor SAN LUN usage at the virtual machine level
 A security requirement to be able to zone a specific LUN to a specific virtual machine
(Diagram: an ESXi host presents a virtual WWN for each virtual machine in addition to the HBA's physical WWN.)


N_Port ID Virtualization Requirements

NPIV requires the following:
 Virtual machines use RDMs.
 Fibre Channel HBAs support NPIV.
 Fibre Channel switches support NPIV.
 VMware® ESXi™ hosts have access to all LUNs used by their virtual machines.
NPIV cannot be used with virtual machines configured with VMware vSphere® Fault Tolerance.



Configuring Software iSCSI Port Binding

1. On the host, select Manage and click the Storage tab.
2. Select the Storage Adapters link.
3. Highlight the software iSCSI initiator and select Network Port Binding.
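A command-line sketch of the same binding, via PowerCLI and esxcli; the adapter and VMkernel port names are hypothetical, so look yours up first.

Connect-VIServer -Server "vc.lab.local"
$esxcli = Get-EsxCli -VMHost (Get-VMHost "esxi01.lab.local") -V2

# Equivalent to: esxcli iscsi networkportal add --adapter vmhba33 --nic vmk1
$bind = $esxcli.iscsi.networkportal.add.CreateArgs()
$bind.adapter = "vmhba33"   # software iSCSI initiator
$bind.nic     = "vmk1"      # VMkernel port carrying iSCSI traffic
$esxcli.iscsi.networkportal.add.Invoke($bind)

# Verify the binding:
$esxcli.iscsi.networkportal.list.Invoke()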



VMFS Resignaturing

(Diagram: the datastore VMFS_1, with UUID 4e26f26a-9fe2664c-c9c7-000c2988e4dd, is replicated by array replication from the protected site to the recovery site; after resignaturing, the copy is renamed snap-snapID#-VMFS_1 and receives the new UUID 4e26f26a-9fe2664c-c9c7-000c2999e4ca.)
Datastore resignaturing overwrites the original VMFS UUID:
 The LUN copy that contains the VMFS datastore that you resignature is no longer treated as a LUN copy. The LUN appears as an independent datastore with no relation to the source of the copy.
 A spanned datastore can be resignatured only if all its extents are online.
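A hedged sketch of resignaturing from the command line, mirroring "esxcli storage vmfs snapshot list" and "esxcli storage vmfs snapshot resignature"; the volume label is hypothetical.

Connect-VIServer -Server "vc.lab.local"
$esxcli = Get-EsxCli -VMHost (Get-VMHost "esxi01.lab.local") -V2

# List unresolved VMFS copies (snapshot or replica LUNs) visible to the host:
$esxcli.storage.vmfs.snapshot.list.Invoke()

# Resignature one copy by the label of the original volume:
$rs = $esxcli.storage.vmfs.snapshot.resignature.CreateArgs()
$rs.volumelabel = "VMFS_1"
$esxcli.storage.vmfs.snapshot.resignature.Invoke($rs)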



Fibre Channel Storage Array Types

ESXi supports different storage systems and arrays:
 Active-active storage system:
• Allows access to the LUNs simultaneously through all the storage ports.
• All the paths are active at all times, unless a path fails.
 Active-passive storage system:
• A system in which one storage processor is actively providing access to a given LUN.
• If access through the active storage port fails, one of the passive storage processors can be activated by the servers accessing it.
 Asymmetrical storage system:
• Supports Asymmetric Logical Unit Access (ALUA).
• ALUA-compliant storage systems provide different levels of access per port.
• Each LUN can be accessed through its primary storage processor at full speed, but is also concurrently active through other storage processors at a lower speed.
• ALUA allows hosts to determine the states of target ports and prioritize paths.


Managing Multiple Paths

VMware® provides these path selection choices:
 Most Recently Used (MRU):
• Selects the first working path discovered at system boot time. If this path becomes unavailable, the host switches to an alternate path.
• The host does not revert to the original path when that path becomes available again.
 Fixed:
• The host uses the designated preferred path, if one has been configured.
• Otherwise, the host uses the first working path discovered at system boot time.
• If the host cannot use the preferred path, it selects a random alternative available path.
• The host reverts to the preferred path as soon as that path becomes available.
 Round Robin (RR):
• Uses automatic path selection, rotating through all available paths and enabling load balancing across paths.
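A minimal PowerCLI sketch of inspecting and changing the path selection policy per LUN; the host name and NAA identifier are hypothetical.

Connect-VIServer -Server "vc.lab.local"
$vmhost = Get-VMHost "esxi01.lab.local"

# Current policy for every disk LUN on the host:
Get-ScsiLun -VmHost $vmhost -LunType disk |
    Select-Object CanonicalName, MultipathPolicy

# Switch one LUN to Round Robin:
Get-ScsiLun -VmHost $vmhost -CanonicalName "naa.600601605511160015e4*" |
    Set-ScsiLun -MultipathPolicy RoundRobin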


Pluggable Storage Architecture

PSA is a collection of Storage APIs that allows third-party hardware vendors to insert a plug-in directly into the SCSI middle layer.
 The PSA allows third-party software developers to design their own load-balancing techniques and failover mechanisms for particular storage array types.
 Third-party vendors can add support for new arrays into the SCSI middle layer without providing internal information or intellectual property about the array to VMware.
VMware provides a generic multipathing plug-in (MPP) called the Native Multipathing Plug-in (NMP).
 NMP is the default plug-in.
PSA coordinates the operation of the NMP and third-party MPPs.



VMware Default Multipathing Plug-in

The top-level plug-in is the MPP.
The VMware default MPP is the NMP, which includes the built-in Storage Array Type Plug-ins (SATPs) and Path Selection Plug-ins (PSPs).
(Diagram: under the PSA sits the VMware NMP, which pairs VMware SATPs with VMware PSPs; third-party SATPs and third-party PSPs can plug in alongside them.)


Overview of the MPP Tasks

The PSA:
 Discovers available storage (physical paths)
 Uses predefined claim rules to determine which multipathing module
should claim the paths to a particular device and to manage the device
An MPP claims a physical path and exports a logical device.
Details of path failover for a specific path are delegated to the SATP.
Details for determining which physical path is used to a storage
device (load balancing) are handled by the PSP.
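A sketch of observing these PSA pieces from PowerCLI; the host name is hypothetical.

Connect-VIServer -Server "vc.lab.local"
$esxcli = Get-EsxCli -VMHost (Get-VMHost "esxi01.lab.local") -V2

# Claim rules decide which MPP claims the paths to each device:
$esxcli.storage.core.claimrule.list.Invoke()

# For NMP-claimed devices, show the SATP and PSP in charge of each LUN:
$esxcli.storage.nmp.device.list.Invoke() |
    Select-Object Device, StorageArrayType, PathSelectionPolicy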



Path Selection Example

Information about all functional paths is forwarded by the SATP to the PSP. The PSP chooses which path to use.
(Diagram: within the PSA, the NMP sits between the VMkernel storage stack and the HBAs; the numbered flow shows an I/O entering the NMP, the SATP reporting functional paths, the PSP selecting a path, and the I/O leaving through HBA 1 or HBA 2.)


Lab 7: Policy-Based Storage

Use policy-based storage to create tiered storage:
1. Prepare for the Lab
2. Add Datastores for Use by Policy-Based Storage
3. Use VMware vSphere® vMotion® to Migrate a Virtual Machine to the Gold Datastore
4. Configure Storage Tags
5. Create Virtual Machine Storage Policies
6. Assign Storage Policies to Virtual Machines
7. Clean Up for the Next Lab



Review of Learner Objectives

You should be able to meet the following objectives:
 Describe vSphere Storage APIs – Array Integration
 Describe VMware vSphere API for Storage Awareness
 Configure and use virtual machine storage policies



Lesson 2:
vSphere Storage I/O Control



Learner Objectives

By the end of this lesson, you should be able to meet the following
objectives:
 Describe VMware vSphere® Storage I/O Control
 Configure vSphere Storage I/O Control



vSphere Storage I/O Control

vSphere Storage I/O Control allows cluster-wide storage I/O prioritization:
 Allows better workload consolidation
 Helps reduce extra costs associated with overprovisioning
 Can balance I/O load in a datastore cluster that is enabled for vSphere Storage DRS
(Diagram: during high I/O from a noncritical application, a data mining workload crowds out the print server, online store, and mail server without vSphere Storage I/O Control; with it enabled, I/O is shared according to priority.)


vSphere Storage I/O Control Configuration

The latency thresholds for vSphere Storage I/O Control can be set using injector-based models or can be set manually.
With injector-based models, the latency threshold is derived from measurements taken while peak throughput is measured.
 The benefit of using injector-based models is that vSphere Storage I/O Control determines the best threshold for each datastore.
• The latency threshold is set to the value observed by the injector when 90 percent of the peak throughput value is achieved.
 You can also set the latency threshold manually:
• The latency setting is 30 ms by default.
vSphere Storage I/O Control is set to stats-only mode by default.
 In this mode, vSphere Storage I/O Control does not enforce throttling but gathers statistics.
 vSphere Storage DRS therefore has statistics available in advance for new datastores added to the datastore cluster.
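A minimal PowerCLI sketch of enabling vSphere Storage I/O Control on a datastore with a manual congestion threshold (30 ms, the default); the datastore name is hypothetical.

Connect-VIServer -Server "vc.lab.local"
Get-Datastore "Datastore01" |
    Set-Datastore -StorageIOControlEnabled $true -CongestionThresholdMillisecond 30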



vSphere Storage I/O Control Automatic Threshold Detection

Through device modeling, vSphere Storage I/O Control determines the peak throughput of the device.
The injector-based models measure the peak latency value when the throughput is at its peak.
The threshold is then set (by default) to 90 percent of this value.
You can still do the following:
 Change the percentage value.
 Manually set the congestion threshold.
(Graphs: latency and throughput plotted against load; the congestion threshold is set relative to the latency observed at peak throughput.)



vSphere Storage I/O Control Requirements

vSphere Storage I/O Control is supported only on datastores that are managed by a single vCenter Server system.
vSphere Storage I/O Control is supported for Fibre Channel, iSCSI, and NFS storage.
vSphere Storage I/O Control does not support datastores with multiple extents.
Verify whether your automated tiered storage array is certified as compatible with vSphere Storage I/O Control.
vSphere Storage I/O Control is not supported for raw device mappings.



Configure vSphere Storage I/O Control

To configure vSphere Storage I/O Control:
1. Enable vSphere Storage I/O Control for the datastore.
2. Set the number of storage I/O shares and the upper limit of I/O operations per second (IOPS) for each virtual machine.

Example: Two virtual machines running Iometer (VM1: 1,000 shares, VM2: 2,000 shares)

          Without shares or limits      With shares or limits
          IOPS     Iometer Latency      IOPS     Iometer Latency
   VM1    1,500    20 ms                1,080    31 ms
   VM2    1,500    21 ms                1,900    16 ms
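A PowerCLI sketch of step 2 for VM1 in the example above: set 1,000 custom shares on a disk and, optionally, an IOPS cap (the 1,500 IOPS limit here is an illustrative value, not part of the example). VM and disk names are hypothetical.

Connect-VIServer -Server "vc.lab.local"
$vm   = Get-VM "VM1"
$disk = Get-HardDisk -VM $vm -Name "Hard disk 1"

Get-VMResourceConfiguration -VM $vm |
    Set-VMResourceConfiguration -Disk $disk -DiskSharesLevel Custom `
        -NumDiskShares 1000 -DiskLimitIOPerSecond 1500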



Review of Learner Objectives

You should be able to meet the following objectives:
 Describe vSphere Storage I/O Control
 Configure vSphere Storage I/O Control



Lesson 3:
Datastore Clusters and vSphere Storage DRS



Learner Objectives

By the end of this lesson, you should be able to meet the following
objectives:
 Create a datastore cluster
 Configure vSphere Storage DRS
 Explain how vSphere Storage I/O Control and vSphere Storage DRS
complement each other



Datastore Clusters

A datastore cluster is a collection of datastores that are grouped together; without vSphere Storage DRS, they do not function together.
A datastore cluster enabled for vSphere Storage DRS is a collection of datastores working together to balance:
 Capacity
 I/O latency
(Diagram: four 500GB datastores aggregated into a 2TB datastore cluster.)
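A PowerCLI sketch of creating a datastore cluster and populating it; the datacenter, cluster, and datastore names are hypothetical, and Move-Datastore assumes a PowerCLI version that includes that cmdlet.

Connect-VIServer -Server "vc.lab.local"
$dc  = Get-Datacenter "Training"
$dsc = New-DatastoreCluster -Name "DSC-Gold" -Location $dc

# Move similar, interchangeable datastores into the cluster:
Get-Datastore "Datastore01", "Datastore02" | Move-Datastore -Destination $dsc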



Datastore Cluster Rules

General rules for datastore clusters (with or without vSphere Storage DRS):
 Datastores from different arrays can be added to the same datastore cluster.
• LUNs from arrays of different types can adversely affect performance if they are not equally performing LUNs.
 Datastore clusters must contain similar or interchangeable datastores.
 Datastore clusters support only ESXi 5.x hosts.
Rules specific to datastore clusters enabled for vSphere Storage DRS:
 Do not mix VMFS and NFS datastores in the same datastore cluster.
 Do not mix replicated datastores with nonreplicated datastores.
 You can mix VMFS-3 and VMFS-5 datastores in the same datastore cluster.
 If datastores within the same datastore cluster have different block sizes, vSphere Storage vMotion cannot use VAAI.



Relationship of Host Cluster to Datastore Cluster

The relationship between a VMware vSphere® High Availability or a VMware vSphere® Distributed Resource Scheduler™ (DRS) cluster and a datastore cluster can be one to one, one to many, or many to many.
(Diagram: one host cluster mapped to one datastore cluster; one host cluster mapped to several datastore clusters; several host clusters mapped to several datastore clusters.)


vSphere Storage DRS Overview

vSphere Storage DRS provides the following functions:
 Initial placement of virtual machines based on storage capacity and, optionally, I/O latency
 Use of vSphere Storage vMotion to migrate virtual machines based on storage capacity and, optionally, I/O latency
 Configuration in either manual or fully automated modes
 Use of affinity and anti-affinity rules to govern virtual disk location
 Use of fully automated storage maintenance mode to clear a LUN of virtual machine files



Initial Disk Placement

When virtual machines are created, cloned, or migrated:
 You select a datastore cluster, rather than a single datastore.
• vSphere Storage DRS selects a member datastore based on capacity and, optionally, on IOPS load.
 By default, a virtual machine's files are placed on the same datastore in the datastore cluster.
• vSphere Storage DRS affinity and anti-affinity rules can be created to change this behavior.



Migration Recommendations

Migration recommendations are executed:
 When the I/O latency threshold is exceeded
 When the space utilization threshold is exceeded
Space utilization is checked every five minutes by default. IOPS load history is checked every eight hours by default (if I/O load balancing is enabled).
vSphere Storage DRS selects a datastore based on utilization and, optionally, I/O latency.
Load balancing is based on IOPS workload, which ensures that no datastore exceeds a particular VMkernel I/O latency level.



Datastore Correlation Detector

Datastore correlation refers to datastores that are created on the same physical set of spindles.
vSphere Storage DRS detects datastore correlation by doing the following:
 Measuring individual datastore performance
 Measuring combined datastore performance
If latency increases on multiple datastores when load is placed on one datastore, the datastores are considered to be correlated.
Correlation is determined by a long-running background process.
Anti-affinity rules can use correlation detection to ensure that virtual machines or virtual disks are on different spindles.
Datastore correlation is enabled by default.



Configuration of vSphere Storage DRS Migration Thresholds

In the Storage view, right-click the data center and select New Datastore Cluster.
(Screen: the wizard provides configuration settings for the utilized space threshold, an option for the utilization difference threshold, an option for setting the I/O latency threshold, and options controlling how to check for imbalances.)
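A PowerCLI sketch matching those wizard settings; the cluster name and threshold values are hypothetical examples.

Connect-VIServer -Server "vc.lab.local"
Get-DatastoreCluster "DSC-Gold" |
    Set-DatastoreCluster -SdrsAutomationLevel FullyAutomated `
        -SpaceUtilizationThresholdPercent 80 `
        -IOLoadBalanceEnabled $true `
        -IOLatencyThresholdMillisecond 15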



vSphere Storage DRS Affinity Rules

vSphere Storage DRS supports three types of affinity rules:
 Intra-VM VMDK affinity: Keep a virtual machine's VMDKs together on the same datastore. This rule maximizes virtual machine availability when all disks are needed in order to run, and is on by default for all virtual machines.
 Intra-VM VMDK anti-affinity: Keep a virtual machine's VMDKs on different datastores. The rule can be applied to all or a subset of a virtual machine's disks.
 VM anti-affinity: Keep virtual machines on different datastores. This rule is similar to the DRS anti-affinity rule and maximizes the availability of a set of redundant virtual machines.



Adding Hosts to a Datastore Cluster

Select the host cluster(s) and/or standalone hosts that will use the datastore cluster.
 Individual hosts within a cluster cannot be selected.



Adding Datastores to the Datastore Cluster

Select datastores to add to the datastore cluster.
 VMware recommends selecting datastores that all hosts can access.
 Select datastores with similar performance statistics.



vSphere Storage DRS Summary Information

To view Storage DRS settings, expand the vSphere Storage DRS item
in the Services panel.
Refer to the Datastore Cluster Resources to review cluster capacity.



vSphere Storage DRS Migration Recommendations

Use the Storage DRS tab to monitor migration recommendations.



vSphere Storage DRS Maintenance Mode

vSphere Storage DRS maintenance mode allows you to take a datastore out of use in order to service it.
vSphere Storage DRS maintenance mode evacuates virtual machines from a datastore placed in maintenance mode:
 Registered virtual machines (powered on or off) are moved.
 Templates and unregistered virtual machines are not moved.



Backups and vSphere Storage DRS

Backing up virtual machines can add latency to a datastore.
You can schedule a task to disable vSphere Storage DRS behavior for the duration of the backup, as sketched below.
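A hedged PowerCLI sketch of what such a scheduled task would run; the cluster name is hypothetical.

Connect-VIServer -Server "vc.lab.local"

# Before the backup window: stop automatic migrations.
Get-DatastoreCluster "DSC-Gold" | Set-DatastoreCluster -SdrsAutomationLevel Manual

# ... backups run ...

# After the backup window: restore full automation.
Get-DatastoreCluster "DSC-Gold" | Set-DatastoreCluster -SdrsAutomationLevel FullyAutomated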



vSphere Storage DRS and vSphere Technology Compatibility

Feature or Product                          Supported/Not Supported   Migration Recommendation
VMware snapshots                            Supported                 Fully Automated
Raw device mapping pointer files            Supported                 Fully Automated
VMware thin-provisioned disks               Supported                 Fully Automated
VMware vSphere linked clones                Supported                 Fully Automated
VMware vSphere Storage Metro Cluster        Supported                 Manual
VMware® vCenter™ Site Recovery Manager™     Supported                 Fully Automated (from protected site)
VMware® vCloud Director®                    Supported                 Fully Automated
VMware vSphere® Replication                 Supported                 Fully Automated (from protected site)



vSphere Storage DRS and Array Feature Compatibility

Feature or Product               Initial Placement   Migration Recommendations
Array-Based Snapshots            Supported           Manual
Array-Based Deduplication        Supported           Manual
Array-Based Thin Provisioning    Supported           Manual
Array-Based Auto-Tiering         Supported           Manual (only capacity load balancing)
Array-Based Replication          Supported           Fully Automated


vSphere Storage DRS and vSphere Storage I/O Control

vSphere Storage DRS and vSphere Storage I/O Control are complementary solutions:
 vSphere Storage I/O Control is set to stats-only mode by default.
• vSphere Storage DRS works to avoid I/O bottlenecks.
• vSphere Storage I/O Control manages unavoidable I/O bottlenecks.
 vSphere Storage I/O Control works in real time.
 vSphere Storage DRS does not use real-time latency to calculate load balancing.
 Together, vSphere Storage DRS and vSphere Storage I/O Control provide the performance that you need in a shared environment, without requiring you to significantly overprovision storage.



Lab 8: Managing Datastore Clusters

Create a datastore cluster and work with vSphere Storage DRS:
1. Prepare for the Lab
2. Create a Datastore Cluster That Is Enabled by vSphere Storage DRS
3. Evacuate a Datastore Using Datastore Maintenance Mode
4. Run vSphere Storage DRS and Apply Migration Recommendations
5. Clean Up for the Next Lab



Review of Learner Objectives

You should be able to meet the following objectives:
 Create a datastore cluster
 Configure vSphere Storage DRS
 Explain how vSphere Storage I/O Control and vSphere Storage DRS complement each other



Key Points

 vSphere Storage APIs – Array Integration consists of APIs for hardware acceleration and array thin provisioning.
 VMware vSphere API for Storage Awareness allows storage vendors to provide information about the capabilities of their storage arrays to vCenter Server.
 Policy-based storage is a feature that introduces storage compliance to vCenter Server.
 vSphere Storage I/O Control allows cluster-wide storage I/O prioritization.
 vSphere Storage DRS provides an easy way for an organization to balance its storage utilization and minimize the effect of I/O latency.
 A datastore cluster enabled for vSphere Storage DRS is a collection of datastores working together to balance storage capacity and I/O latency.
Questions?

