
White Paper

EMC STORAGE WITH MICROSOFT HYPER-V VIRTUALIZATION
Design and deployment considerations and best practices
using EMC storage solutions

EMC Solutions

Abstract
This white paper examines deployment and integration of a Microsoft Windows
Server Hyper-V virtualization solution on EMC storage arrays, with details on
integration, storage solutions, availability, and mobility options for Windows
Server 2008 R2, Windows Server 2012, and Windows Server 2012 R2 Hyper-V.

November 2013
Copyright © 2013 EMC Corporation. All Rights Reserved.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

The information in this publication is provided “as is.” EMC Corporation makes
no representations or warranties of any kind with respect to the information in
this publication, and specifically disclaims implied warranties of
merchantability or fitness for a particular purpose.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

For the most up-to-date listing of EMC product names, see EMC Corporation
Trademarks on EMC.com.

All trademarks used herein are the property of their respective owners.

Part Number H12557



Table of contents
Executive summary............................................................................................................................... 5
Business case .................................................................................................................................. 5

Introduction.......................................................................................................................................... 6
Purpose ........................................................................................................................................... 6
Scope .............................................................................................................................................. 6
Audience ......................................................................................................................................... 6

Technology overview ............................................................................................................................ 7


Microsoft Hyper-V ............................................................................................................................ 7

Storage connectivity options for virtual machines .............................................................................. 10


Virtual machine direct connectivity using iSCSI .............................................................................. 10
Virtual machine direct connectivity with virtual Fibre Channel ........................................................ 11
SMB 3.0 File Shares ....................................................................................................................... 16
Hyper-V Server managed connectivity ............................................................................................ 18
Virtual Hard Disks .......................................................................................................................... 20
Virtual hard disk types ................................................................................................................... 22
Windows Server 2012 R2 new VHD features ................................................................................... 25
Online VHD re-sizing.................................................................................................................. 25
Shared virtual hard disk ............................................................................................................ 25
Pass-through disks ........................................................................................................................ 26
Storage connectivity summary ....................................................................................................... 29

Availability and mobility for virtual machines..................................................................................... 30


Windows failover clustering for Hyper-V servers ............................................................................. 30
Windows failover clustering for virtual machines............................................................................ 32
Virtual machine live migrations within clusters .............................................................................. 32
Shared-nothing live migration ........................................................................................................ 34
Storage live migration .................................................................................................................... 34
Windows failover clustering with Cluster Shared Volumes.............................................................. 35
Sizing of CSVs ................................................................................................................................ 38
Site disaster protection with Hyper-V Replica ................................................................................. 38
Site disaster protection with Cluster Enabler .................................................................................. 39
Cluster Enabler CSV behavior ......................................................................................................... 41
EMC VPLEX ..................................................................................................................................... 42
VPLEX Local ............................................................................................................................... 42
VPLEX Metro with AccessAnywhere............................................................................................ 42
VPLEX Geo with AccessAnywhere............................................................................................... 43
VPLEX with Windows failover clustering ..................................................................................... 43



Manual or scripted disaster recovery with storage replication ........................................................ 44

Microsoft integration with EMC storage technologies ........................................................................ 45


Microsoft System Center Virtual Machine Manager......................................................................... 45
VNX operating environment file based SMI-S Provider .................................................................... 47
Windows Server 2012 Offloaded Data Transfer .............................................................................. 47
ODX support requirements ............................................................................................................. 49
Using ODX for virtual machine deployments with SCVMM 2012 R2 ................................................ 49
Windows Server 2012 thin provisioning space reclamation............................................................ 51
EMC Replication Manager .............................................................................................................. 54
EMC Storage Integrator .................................................................................................................. 54
EMC Solutions Enabler ................................................................................................................... 56

Conclusion ......................................................................................................................................... 59



Executive summary
Business case

For many customers, the number of physical servers deployed to service business needs has grown steadily. This growth has led to inefficiencies in several operational areas, including the overprovisioning of server CPU, memory, and storage. Power and cooling costs, and the requirements for floor space in data centers, grow with each added physical server, whether the resources are overprovisioned or not. Large numbers of physical servers, and the inefficiencies of overprovisioning those servers, result in high costs and a poor return on investment (ROI).

You can use Microsoft Windows Server 2008 R2, Windows Server 2012, and Windows
Server 2012 R2 Hyper-V to consolidate multiple physical server environments to
achieve significant space, power, and cooling savings, and maintain availability and
performance targets. EMC storage arrays provide additional value by allowing you to
consolidate storage resources, implement advanced high-availability solutions, and
provide seamless multi-site protection of data assets.

For consolidating data center operations, the Microsoft Hyper-V hypervisor provides a scalable virtualization solution for the Windows Server environment. Large-scale consolidation saves money when you optimize and consolidate storage resources into a single storage repository. Centralized storage also enhances the advanced features of Hyper-V.



Introduction
Purpose

This white paper examines deployment and integration of a Microsoft Windows Server Hyper-V virtualization solution on EMC storage arrays. The paper includes details of integration with storage solutions, and also includes availability and mobility options for Windows Server 2008 R2, Windows Server 2012, and Windows Server 2012 R2 Hyper-V.

You can use EMC storage arrays for large-scale consolidation efforts that support thousands of connected hosts, tens of thousands of logical units, and advanced internal mechanisms such as Offloaded Data Transfer (ODX) and virtual provisioning. EMC storage arrays are a central part of Windows Server consolidation efforts and provide thin reclamation support, snapshot and clone operations, and multi-site replication for disaster restart and recovery.

Scope

This white paper explains how to use Microsoft Hyper-V with EMC storage arrays to provide RAID protection and improve core system performance. This white paper also explains how to use complementary EMC technologies such as EMC Replication Manager, EMC Storage Integrator (ESI), and EMC Solutions Enabler for Hyper-V environments to improve dynamic placement capabilities for Hyper-V landscapes.

Audience

This white paper is for administrators who use Microsoft Windows Server 2008 R2, Windows Server 2012, and Windows Server 2012 R2. This white paper is also for administrators, storage architects, customers, and EMC field personnel who want to understand the implementation of Hyper-V solutions on EMC storage platforms.



Technology overview
Microsoft Hyper-V

Microsoft Windows Server 2008 R2, Windows Server 2012, and Windows Server 2012 R2 provide the Hyper-V server role. When a Windows Server instance has the Hyper-V role installed, the original operating system instance is called the “parent partition.”

Microsoft provides, supports, and recommends running a Hyper-V server in the minimal footprint of a Windows Server Core installation. Windows Server Installation Options, on Microsoft TechNet, provides more information about Windows Server installation options, including Server Core. Hyper-V is available only for the 64-bit (x64) releases of Microsoft Windows and requires server hardware that supports hardware-assisted virtualization (Intel VT or AMD-V).

When you install the Hyper-V server role, you also install the Windows Hyper-V virtualization hypervisor for the parent partition. Figure 1 shows the Hyper-V Manager Microsoft Management Console (MMC) snap-in that you can use to define virtual machine instances.
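
As an alternative to Server Manager, you can install the Hyper-V role from PowerShell on Windows Server 2012 and later. The following is a minimal sketch:

# Install the Hyper-V role and its management tools, then restart the server
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart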

Figure 1. Hyper-V Manager

You can use products such as Microsoft System Center Virtual Machine Manager
(SCVMM) in more complicated Hyper-V deployments that include many physical
servers and virtual machine instances. The SCVMM solution provides a
comprehensive management framework with centralized command and control
features. SCVMM also includes functionality in the Performance and Resource
Optimization (PRO) subsystem, and storage integration based on the Storage
Management Initiative Specification (SMI-S). Figure 2 shows these options.



The PRO functionality has a dependency on Microsoft System Center Operations
Manager (SCOM). You can use this functionality to build automatic and dynamic
management capabilities into a Hyper-V landscape. Storage management integration
has a dependency on a storage array with an SMI-S Provider. Such configurations
allow dynamic placement of virtual machine resources based on the changing
characteristics of the data center.

Note: The Storage Automation with System Center 2012 and EMC Storage Systems using
SMI-S white paper on EMC Online Support provides information about SCVMM 2012
integration with EMC storage using the SMI-S standard.

Figure 2. SCVMM console storage fabric

A virtual machine instance typically includes a configuration file that defines the virtual machine: its processor count, memory configuration, network connectivity, and other hardware details, plus one or more storage devices that represent the storage resources used by the operating system instance. These features are configurable through the virtual machine settings options in the Hyper-V MMC, as shown in Figure 3.



Figure 3. Virtual machine settings

With Windows Server 2008 R2, you can configure two types of storage devices for a virtual machine from the settings options. You can provision a storage device as a virtual hard disk (VHD) that is connected to one of the IDE or SCSI controller adapters, or you can provision a device that is connected as a physical hard disk (also called a pass-through storage device).

Windows Server 2012 includes VHD provisioning, pass-through support, and a new
virtual Fibre Channel option. Virtual Fibre Channel creates synthetic Fibre Channel
adapters that allow direct storage access using the Fibre Channel protocol. “Virtual
machine direct connectivity with virtual Fibre Channel” on page 11 provides more
information.



Storage connectivity options for virtual machines
Microsoft Hyper-V configurations support different deployment models for connectivity to storage arrays; however, there are two basic methods for a virtual machine to gain access to storage resources. Storage is either connected directly to the virtual machine, or it is provisioned through the Hyper-V server, which manages the storage devices that the virtual machines use. For direct connectivity, the virtual machine accesses block-based storage through either an iSCSI connection or a virtual Fibre Channel connection. In either case, the Hyper-V server does not physically manage the storage device. A virtual machine can also use network connectivity to access storage resources presented as shares, using the Server Message Block (SMB) protocol. With Windows Server 2012, you can also place virtual hard disk files on SMB 3.0 shares for virtual machine use.

Note: Although iSCSI and SMB are types of direct connectivity, the network connectivity is
really occurring over virtualized network adapters managed by the Hyper-V server.

Virtual machine direct connectivity using iSCSI

Virtual machine instances running Windows Server can use storage provided directly to the virtual machine as an iSCSI target. For this type of connectivity, the operating system of the virtual machine must implement the Microsoft iSCSI Initiator software and must access network resources through a virtual network interface.

Because the virtual machine itself directly accesses the iSCSI storage device through the network, the operating system within the virtual machine is responsible for all management of the disk device and subsequent volume management. An iSCSI target device must be appropriately configured for the virtual machine to access the iSCSI devices.
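
On a Windows Server 2012 virtual machine, the in-guest connection can be scripted with the iSCSI initiator cmdlets. The following is a minimal sketch; the portal address shown is a hypothetical example:

# Start the Microsoft iSCSI Initiator service and set it to start automatically
Set-Service -Name MSiSCSI -StartupType Automatic
Start-Service -Name MSiSCSI
# Register the array iSCSI portal and connect to the discovered targets with MPIO enabled
New-IscsiTargetPortal -TargetPortalAddress 192.168.1.50
Get-IscsiTarget | ForEach-Object { Connect-IscsiTarget -NodeAddress $_.NodeAddress -IsPersistent $true -IsMultipathEnabled $true }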

For channel redundancy, we recommend using EMC PowerPath or Microsoft Multipath I/O (MPIO) from the virtual machine instead of NIC teaming.¹ We also recommend using jumbo frames for I/O-intensive applications, by setting the Maximum Transmission Unit (MTU) to 9,000. The MTU should be the same on the storage array, the network infrastructure (switch or fabric interconnect), the Hyper-V server network cards servicing iSCSI traffic, and the virtual NICs of the virtual machine.

The following example sets the MTU on a NIC in Windows Server 2012 using PowerShell:
Set-NetAdapterAdvancedProperty -Name iSCSIA -RegistryKeyword "*JumboPacket" -RegistryValue "9014"

For clustered environments, disable the cluster network communication for any
network interfaces that you plan to use for iSCSI. As shown in Figure 4, you can
disable it by opening the iSCSI Properties dialog box for the iSCSI network that was
discovered by Windows failover clustering.
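
You can apply the same setting with the failover clustering cmdlets. The following sketch assumes the cluster network is named iSCSI:

# Prevent the cluster from using the iSCSI network for cluster communication (Role 0 = none)
(Get-ClusterNetwork -Name "iSCSI").Role = 0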

¹ In this white paper, "we" refers to the EMC engineering team that validated the solution.



Figure 4. Cluster network properties

Note: The EMC Host Connectivity Guide for Windows, available on EMC Online Support,
provides more details for configuring the Microsoft iSCSI Initiator.

One benefit of iSCSI storage is the support for clustering within virtual machines.
Shared storage clustering between virtual machines is supported with iSCSI for both
Windows 2008 R2 Hyper-V and Windows Server 2012 Hyper-V. “Windows failover
clustering for virtual machines” on page 32 provides more information.

For most configurations you must still provision a VHD device to support the
installation of the virtual machine operating system. “Hyper-V Server managed
connectivity” on page 18 provides details.

Note: Third-party hardware iSCSI solutions can support a boot from an iSCSI SAN solution;
however, these solutions are beyond the scope of this white paper due to the specific
details required for each implementation.

Virtual machine direct connectivity with virtual Fibre Channel

With Windows Server 2012, you can use the virtual Fibre Channel feature to provide direct Fibre Channel connectivity to storage arrays from virtual machines, giving optimal storage performance and full protocol access. Virtual Fibre Channel also supports guest-based clustering on Hyper-V servers that are running Windows Server 2012. Virtual machines must be running Windows Server 2008, Windows Server 2008 R2, or Windows Server 2012 to support the virtual Fibre Channel feature.

To support Hyper-V virtual Fibre Channel, you must use N_Port ID Virtualization (NPIV) capable FC host bus adapters (HBAs) and NPIV-capable FC switches. NPIV assigns World Wide Names (WWNs) to the virtual Fibre Channel adapters that are presented to a virtual machine. Zoning and masking can then be performed directly between the storage array front-end ports and the virtual WWNs that NPIV creates for the virtual machines. No zoning or masking is necessary for the Hyper-V server.

For the initial configuration of virtual FC, you must create a Fibre Channel SAN within
the Hyper-V Virtual SAN Manager, as shown in Figure 5. The Hyper-V SAN is a logical
construct where physical HBA ports are assigned. You can place one or multiple HBAs
within a Hyper-V SAN for port isolation or for deterministic fabric redundancy. Use the
same virtual SAN configuration and naming convention on all Hyper-V servers in a clustered environment. This ensures that each node can take ownership and host a highly available guest with virtual Fibre Channel.
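
You can also define the virtual SAN and assign virtual FC adapters to a virtual machine with the Hyper-V PowerShell cmdlets. The following is a hedged sketch in which the SAN name, virtual machine name, and WWNN/WWPN values are hypothetical and must match the physical HBA ports in your environment:

# Create a virtual SAN backed by a physical HBA port (WWNN/WWPN values are placeholders)
New-VMSan -Name SAN_A -WorldWideNodeName 20000090FA527F11 -WorldWidePortName 10000090FA527F12
# Add a virtual Fibre Channel adapter to the virtual machine and attach it to the virtual SAN
Add-VMFibreChannelHba -VMName VM01 -SanName SAN_A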

Figure 5. Hyper-V virtual SAN manager

After a virtual SAN is created, do the following to present virtual Fibre Channel (FC)
controllers to the virtual machine:
1. From within the virtual machine settings (with the virtual machine powered
off), click Add Hardware.
2. Click Fibre Channel Adapter.
3. Assign the virtual Fibre Channel adapter to a specific virtual SAN.
The virtual adapters are discovered as Microsoft Hyper-V Fibre Channel HBAs within the virtual machine operating system, as shown in Figure 6.



Figure 6. Virtual FC adapter discovered within virtual machine

We recommend that you add a minimum of two adapters on separate fabrics to ensure no single point of failure, the same as in a typical Fibre Channel topology. If you assign multiple virtual FC adapters to the same virtual SAN, each virtual adapter is pinned to a single physical adapter within the virtual SAN. If a virtual SAN has multiple physical adapters, you can assign each virtual adapter to a different physical adapter (in a round-robin fashion) within that SAN.

Configure separate SANs to guarantee redundancy across physical components. Assign a minimum of two virtual adapters across these redundant virtual SANs. Zone and register each virtual adapter (both sets of WWPNs), with devices unmasked across redundant storage controllers. The overall recommended base topology is shown in Figure 7.



Figure 7. Virtual Fibre Channel topology (virtual FC adapters A and B in the virtual machine, mapped through virtual SANs A and B and physical FC adapters A and B to fabrics A and B and storage directors A and B)

Two NPIV-based WWPNs, known as a Hyper-V address set, are associated with each virtual FC adapter, as shown in Figure 8. Both of these WWPNs must be zoned, masked, and registered to the appropriate storage for live migration to work for that virtual machine. When the virtual machine is powered on, only one of the two WWPN addresses is used by the guest at any given time. If you request a live migration, Hyper-V uses the inactive address to log in to the storage array and ensure connectivity before the live migration continues. After the live migration, the previously active WWPN becomes inactive. It is important to validate connectivity and live migration functionality before putting a virtual machine that uses the virtual Fibre Channel feature into production.



Figure 8. Virtual Fibre Channel adapters

You can also retrieve the WWPNs with PowerShell, either by viewing the FibreChannelHostBusAdapters property returned by the Get-VM cmdlet or by using the Get-VMFibreChannelHba cmdlet, as shown in the following example:
Get-VMFibreChannelHba -ComputerName MSTPM3035 -VMName FCPTSMIS |
ft SanName, WorldWidePortNameSetA, WorldWidePortNameSetB -AutoSize

SanName WorldWidePortNameSetA WorldWidePortNameSetB
------- --------------------- ---------------------
SAN_A   C003FF22E51D0000      C003FF22E51D0001
SAN_B   C003FF8A1F380000      C003FF8A1F380001

The virtual machine must manage multipathing when you use the virtual Fibre Channel feature. You can use either native MPIO or EMC PowerPath from within the virtual machine to control load balancing and path failover where multiple virtual FC adapters or multiple targets are configured.
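
If native MPIO is used inside the guest, the following sketch shows one way to enable it on Windows Server 2012. The vendor and product strings are examples only and must match the SCSI inquiry data reported by the array:

# Install the MPIO feature inside the virtual machine
Install-WindowsFeature -Name Multipath-IO
# Claim the array devices with the Microsoft DSM (inquiry strings shown are illustrative)
New-MSDSMSupportedHW -VendorId "EMC" -ProductId "SYMMETRIX"
Update-MPIOClaimedHW -Confirm:$false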

Both Microsoft and EMC recommend that you use virtual Fibre Channel instead of
pass-through devices when direct storage access is required by the virtual machine or
required by the layered software. If components in the environment do not support NPIV and cannot use virtual FC, pass-through devices can still be used and are supported.

SMB 3.0 File Shares

SMB 3.0 is the current iteration of the SMB protocol, also known as Common Internet File System (CIFS). The SMB protocol is often used to provide shared access to files over a TCP/IP-based network in Microsoft Windows-based environments. The SMB 3.0 protocol, which Microsoft supports for Hyper-V, provides new core performance and high-availability enhancements. Server Message Block overview, on Microsoft TechNet, provides more details about SMB 3.0.

SMB file shares can be presented directly to a virtual machine for storage use or, starting with Windows Server 2012, can be used as a target for the virtual hard disks used by virtual machines. Both the EMC VNX and VNXe families of storage arrays support SMB 3.0 in their latest software releases. The supported SMB 3.0 features include Multichannel, Continuous Availability, copy offload (Windows Server 2012 Offloaded Data Transfer, or ODX), and Directory Leasing.

Note: EMC VNX Series: Introduction to SMB 3.0 Support and EMC VNXe Series: Introduction to SMB 3.0 Support, on EMC Online Support, provide more details about SMB 3.0 support for VNX and VNXe arrays.

When configuring SMB 3.0-based storage for Hyper-V, it is important to include the following steps:
1. Ensure that the Hyper-V computer accounts, the SYSTEM account, and all Hyper-V administrators have full control permissions to the appropriate file share folder.
2. Enable the SMB 3.0 Continuous Availability (SMBCA) feature.
SMBCA is not enabled by default on either VNX or VNXe file system shares.
3. Enable CIFS Synchronous Writes.
Synchronous writes are not enabled by default on either VNX or VNXe shares.

To enable Continuous Availability on the VNX platform, use the CLI from the control station, as in the following example:
1. From an SSH client (such as PuTTY), connect to the VNX control station.
2. Run the server_mount command against the primary data mover that owns the file system.
Note the file system name and mount path; in the example shown in Figure 9, the file system SMB_FS is mounted on /SMB_FS.

Figure 9. VNX file system and path name



3. Mount the file system with the Continuous Availability (CA) option:
server_mount server_2 -o smbca SMB_FS
4. Export the file system with the CA option:
server_export <server_number> -P cifs -n <share_name> -o type=CA <mount_path>
Example: server_export server_2 -P cifs -n SMB_Share -o type=CA /SMB_FS

Note: For VNXe, SMBCA can be enabled within Unisphere, under Advanced Options in CIFS
Share Detail.

For VNX, synchronous writes can be enabled within Unisphere, with the following
steps:
1. From Storage Array > Storage Configuration > File Systems, click the Mounts
tab.
2. Select the mount associated with the targeted file system, and then click
Properties.
3. Select Set Advanced Options and ensure that Direct Writes Enabled and CIFS
Sync Writes Enabled are selected.
Notes:
• For VNX OE 8.x, we recommend not enabling direct writes. EMC VNX Unified
Best Practices For Performance on EMC Online Support provides additional
details.
• For VNXe, you can enable synchronous writes within Unisphere, under
Advanced Attributes in Shared Folder Detail.

In Windows, SMB shares can be used by specifying the Universal Naming Convention
(UNC) path within PowerShell cmdlets, as in the following example. You can also use
Hyper-V Manager, as shown in Figure 10.

PowerShell example with SMB storage:

New-VHD -Path \\SFSERVER00\SHARE00\VM00.VHDX -Dynamic -SizeBytes 100GB

ComputerName : EMCFT302
Path : \\SFSERVER00\SHARE00\VM00.VHDX
VhdFormat : VHDX
VhdType : Dynamic
FileSize : 4194304
Size : 107374182400
MinimumSize :
LogicalSectorSize : 512
PhysicalSectorSize : 4096
BlockSize : 33554432
ParentPath :
FragmentationPercentage : 0
Alignment : 1
Attached : False
DiskNumber :
IsDeleted : False
Number :

Figure 10. Universal Naming Convention to specify a VHD

Hyper-V Server managed connectivity

When you first deploy a virtual machine in Hyper-V, you must often provide the location for the VHD storage that represents the operating system image. When you format the volume to be used for virtual machine storage, we recommend using a 64 KB allocation unit (AU) size. The 64 KB AU helps to ensure that the VHD files within the file system are aligned with the boundaries of the underlying storage device.
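
On Windows Server 2012, you can apply the 64 KB allocation unit size when formatting the volume with PowerShell. This sketch assumes the new disk has already been brought online and initialized; the disk number and label are examples:

# Create a partition on the new disk and format it as NTFS with a 64 KB allocation unit size
New-Partition -DiskNumber 2 -UseMaximumSize -AssignDriveLetter |
Format-Volume -FileSystem NTFS -AllocationUnitSize 65536 -NewFileSystemLabel "HyperV-VMs"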

As shown in Figure 11, the initial configuration requires a name for the virtual machine, and also the location for the virtual machine configuration information. If you want to provide high availability for the virtual machine, this location should represent a SAN device that is available to all nodes within the cluster. In most high-availability cases, the storage location for the operating system VHD resides on a Cluster Shared Volume (CSV).



Figure 11. Create a virtual machine

Subsequent steps in the New Virtual Machine Wizard request sizing information for
memory allocation and network connectivity, which are beyond the scope of this
white paper. Microsoft Hyper-V online help provides information about these
parameters.

Use the New Virtual Machine Wizard to configure the Virtual Hard Disk (VHD or VHDX)
for the operating system installation as shown in Figure 12. The default location for
the VHD is based on the previous location specified in the Location field, and the VHD
Name field is based on the name provided for the virtual machine.



Figure 12. Default definition of the VHD

Size the VHD appropriately for the operating system being installed. When configured through the wizard in this manner, the VHD created is a dynamic VHD (or a dynamic VHDX file when using Windows Server 2012). You can manually configure VHD devices for the virtual machine to specify the VHD characteristics. To allow for manual configuration of VHD devices, select Attach a virtual hard disk later.

After the New Virtual Machine Wizard has completed, select the Settings option from
the Hyper-V Manager console for further modifications to the virtual machine
configuration. The configuration is stored in an XML document located in the
previously specified virtual machine directory. The name of the configuration file is
based on a Global Unique Identifier (GUID) for the virtual machine.
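
You can also create the virtual machine with PowerShell instead of the wizard. The following is a minimal sketch using the New-VM cmdlet, where the names, paths, and sizes are illustrative:

# Create a virtual machine with a new dynamic VHDX, storing its configuration on shared storage
New-VM -Name VM01 -MemoryStartupBytes 4GB -Path C:\ClusterStorage\Volume1 `
-NewVHDPath C:\ClusterStorage\Volume1\VM01\VM01.vhdx -NewVHDSizeBytes 60GB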

You can map manually configured VHDs to either the IDE or SCSI controllers defined
in the virtual machine configuration. At least one such VHD must exist to host and
install the operating system.

Virtual Hard Disks

You can define VHD devices that can later be assigned to virtual machines. You can create VHD devices outside of a virtual machine by selecting New > Hard Disk within Hyper-V Manager, as shown in Figure 13. In clustered environments, you can launch the same hard disk creation wizard from Failover Cluster Manager by right-clicking Roles and then selecting Virtual machines > New Hard Disk.



Figure 13. Virtual hard disk creation

Define and associate a VHD from within the Settings option for a virtual machine by
selecting the controller (IDE or SCSI) to which a VHD will be associated. To define and
map the VHD within the virtual machine, open the settings for the virtual machine as
shown in Figure 14. By selecting the hardware device (SCSI Controller in this
example), you can define the new VHD to be created and assigned to that controller.

Figure 14. Virtual machine hard disk settings

For Windows 2008 R2 and Windows Server 2012 VHDs, you must assign virtual
machine boot disks to an IDE controller.

Windows Server 2008 R2 and Windows Server 2012 support only two devices per IDE controller; because each virtual machine has two IDE controllers, you can configure only four IDE VHD devices for any given virtual machine. If additional VHD devices are required, you must define them as SCSI controller managed devices.



For Windows Server 2012 R2, you can configure two generations of virtual machines.
Generation 1 virtual machines are compatible with the previous version of Hyper-V,
and Generation 2 virtual machines add support for the following new functionality:
• Secure boot
• Boot from SCSI based VHDs
• Boot from SCSI based virtual DVD
• PXE boot using a standard network adapter
• UEFI firmware support

Generation 2 virtual machines also remove support for the legacy network adapter
and IDE drives.

SCSI controllers support multiple disk devices per controller, and are a more scalable
solution for configurations when multiple LUN devices or multiple VHD devices are
required. Each virtual machine can have four virtual SCSI controllers, with 64 disks
per controller. You can present 256 virtual SCSI disks to a virtual machine. For non-
boot devices, use SCSI controllers for presenting additional storage to the virtual
machine.

For I/O intensive workloads, especially with Windows 2008 R2, you may need to
allocate multiple virtual SCSI adapters to a virtual machine. With Windows 2008 R2,
each virtual SCSI controller has a single channel, with a maximum queue depth of
256 per adapter. Additionally, a single virtual CPU is used for storage I/O interrupt
handling. Due to these limits, you may need to use multiple virtual SCSI adapters to
reach the IOPS potential of the underlying storage.

Windows Server 2012 greatly reduced the need to present multiple virtual SCSI controllers to improve performance. Windows Server 2012 provides a minimum of one channel per virtual SCSI controller, and one channel is added for every 16 virtual CPUs presented to the virtual machine. The queue depth was also changed to 256 per device, and I/O interrupt handling is distributed across all virtual CPUs presented to the virtual machine. Because of these changes, you usually need only a single virtual SCSI controller for a virtual machine running Windows Server 2012 or Windows Server 2012 R2.

Virtual hard disk types

Two virtual hard disk formats are available natively with Hyper-V. Windows Server 2008 R2 uses the VHD hard disk format. Windows Server 2012 supports both the VHD and VHDX hard disk formats. The VHD format supports files up to 2 TB in size, while the VHDX format supports files up to 64 TB in size.

You can use three different types of VHD disks when you configure new or additional
storage devices, as shown in Figure 15. The choice between a fixed size and
dynamically expanding format is usually based on the storage utilization
requirements, as there is a difference in how storage is allocated for these two types.
For this reason, the two selections affect storage provisioning functionality such as
that provided by virtual/thin provisioning technologies within storage arrays.



Figure 15. Virtual Hard Disk Wizard

A fixed size VHD or VHDX device is fully written to at creation time; as a result, when
selecting this VHD type, all storage equal to the size of the VHD file is consumed
within the targeted thin pools. The creation of the fixed device can also take a long
time because of the requirement to write the full size of the file to the storage array.

To help address the time it takes to create a fixed VHD, Windows Server 2012 has a
feature called Offloaded Data Transfer, also referred to as ODX. ODX can offload the
writing of repeating patterns to a storage device. If ODX is supported by the target
storage array, the creation of fixed VHD files (either VHD or VHDX) offloads the series
of contiguous writes to be handled by the storage array. This increases the speed at
which the VHD is created.

Another benefit of the ODX write offload capability, as implemented by both the
VMAX and VNX storage arrays, is in virtual provisioning environments; the zeros that
represent the fixed VHD are not allocated within the thin pool. This makes a fixed VHD
file space efficient with virtual provisioning where ODX is available. The fixed VHD
continues to show its full size within the file system. For example, if a 100 GB fixed
VHD is created, 100 GB is consumed within the file system, but that space is not consumed within the thin pool.

You can have a fixed VHDX that is not allocated within a thin pool even where ODX is disabled or not supported by the array. Windows Server 2012 contains a native thin reclamation capability based on the TRIM and UNMAP SCSI commands. This reclaim functionality is supported from within virtual machines using the VHDX disk format. If you present a fixed VHDX to a virtual machine, and that virtual machine is running Windows Server 2012, a file system format causes Windows to issue reclaim requests for the entire size of the underlying VHDX to the storage device. As a result, the space for the fixed VHDX is no longer allocated in the thin pool. Both the VMAX and VNX support the UNMAP functionality native to Windows Server 2012.

Dynamically expanding VHD devices do not pre-allocate all storage that is defined for them, even if ODX is available in Windows Server 2012 environments; however, these devices can suffer a slight degradation in performance because storage must be allocated whenever the operating system or applications within the virtual machine require additional space. Also, the dynamic VHD file format has storage alignment concerns due to the internal constructs of the file as it grows, which causes additional storage performance overhead for both read and write operations.

Alignment is not a problem with the Windows Server 2012 VHDX format. The VHDX internal constructs ensure 1 MB alignment on the underlying storage device. Microsoft also made significant improvements to the storage performance of dynamic VHDX files compared to the legacy VHD format. For most workloads, dynamic VHDX files perform similarly to fixed VHDX files, except for sequential write workloads where the dynamic VHDX file must continually expand into the file system. After dynamic VHDX space has been allocated, there is no difference in performance compared to a fixed VHDX file.

When using dynamic VHD files, you can overprovision a file system. A dynamic VHD file has both a current size and a potential size. The current size is the space it consumes in the file system, and the potential size is the size specified during creation, which can eventually be consumed by the virtual machine. An administrator can assign VHD files with potential sizes that total more than the size of the file system. This is not necessarily a problem, but be careful to monitor file systems and ensure adequate free space where dynamic VHD files are used. You can use fixed VHD files to avoid the possibility of over-allocation.

You can use third-party tools in the public domain to create a fixed VHD device for which, like a dynamic VHD, pre-allocation is not performed. Be cautious when using public domain solutions; although such solutions work, they may not be supported by Microsoft. One such tool is provided at http://code.msdn.microsoft.com/vhdtool. Such third-party tools are unnecessary when ODX is available, or when reclamation support is available and the virtual machine is running Windows Server 2012, as the resulting fixed VHD file is already space efficient for virtual provisioning.

Based on the considerations in the previous paragraphs, we recommend the following when using VHD files:
• Windows 2008 R2:
- Where storage performance is important, use fixed VHDs over dynamic VHDs.
- If thin storage efficiency is more important than storage performance, use dynamic VHDs.
- Use fixed VHD files if the potential for over-allocating file system space is not desired.
• Windows Server 2012 and Server 2012 R2:
- Use the VHDX file format instead of the VHD format.
- Convert VHD files to VHDX files when migrating virtual machines to Hyper-V on Server 2012 (see the Convert-VHD sketch after this list) if:
− Performance is of high importance for that file.
− Advanced functionality such as ODX and TRIM/UNMAP from within a virtual machine running Windows Server 2012 is required.
− Backwards compatibility with Windows 2008 R2 is not required.
- Use dynamic VHDX files for general-purpose workloads where initial write performance is not required.
- Use fixed VHDX files if performance is the most important factor.
- Use fixed VHDX files if the potential for over-allocating file system space is not desired.
• Use VHD files instead of pass-through or virtual Fibre Channel storage unless there are specific requirements for these technologies by applications within the virtual machine.
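
As noted above, a VHD file can be converted to the VHDX format with the Convert-VHD cmdlet while the virtual machine is powered off. The following is a sketch with hypothetical paths; after conversion, attach the new VHDX file to the virtual machine in place of the original VHD:

# Convert a legacy VHD to the VHDX format (the virtual machine must be shut down first)
Convert-VHD -Path C:\ClusterStorage\Volume1\VM01\VM01.vhd `
-DestinationPath C:\ClusterStorage\Volume1\VM01\VM01.vhdx
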
The differencing disk is the third VHD type. This disk device is configured to provide an associated storage area that is created against a source VHD. You can use this style of configuration when, for example, you create a gold master VHD and then create multiple virtual machine instances from it. In such configurations, you must protect the gold master from being updated by any individual virtual machine; each virtual machine must write its own changes. In this case, the gold master VHD behaves as a read-only device, and all changes written by the virtual machine are saved to the differencing disk device. There is always an association between the differencing disk and the gold master: without the original gold master, the differencing disk only maintains changes and does not represent a fully independent copy.

Windows Server 2012 R2 new VHD features

Windows Server 2012 R2 offers several new features specific to the VHD format, including online VHD re-sizing and shared VHDs.

Online VHD re-sizing
In previous versions of Hyper-V, you had to power off the virtual machine before resizing a virtual hard disk. Windows Server 2012 R2 allows you to resize a VHDX-format virtual hard disk while the virtual machine is running, when the disk is presented through a virtual SCSI adapter. This feature is independent of any storage array-specific functionality. However, when combined with the ability to expand storage pools and LUNs within EMC storage arrays, the feature offers an end-to-end capability for adding capacity non-disruptively when using virtual hard disks. Online Virtual Hard Disk Resizing Overview, on Microsoft TechNet, provides more details about this feature.
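
The resize itself is performed with the Resize-VHD cmdlet. The following sketch grows a hypothetical VHDX that is attached to a virtual SCSI adapter of a running virtual machine:

# Grow the VHDX to 200 GB; the volume inside the guest must then be extended separately
Resize-VHD -Path C:\ClusterStorage\Volume1\VM01\Data01.vhdx -SizeBytes 200GB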

Shared virtual hard disk
Shared virtual hard disks enable multiple virtual machines to access the same VHDX file. The main benefit of this functionality is the support for shared storage within a virtual machine-based Windows failover cluster.

Microsoft supports the use of shared VHDX files within Cluster Shared Volumes or within SMB file shares that support certain parts of the SMB 3.0 protocol. You can use shared virtual hard disks with CSVs on EMC block-based storage. VNX and VNXe SMB 3.0-based file shares do not currently support the use of shared VHDs.

You can enable shared VHDs from Hyper-V Manager or PowerShell. You must power off the virtual machine to modify the required setting. You can use Hyper-V Manager, from within the settings of the virtual machine, to enable virtual hard disk sharing, as shown in Figure 16.

Figure 16. VHD sharing option

With PowerShell, you can enable hard disk sharing either when you add the virtual
disk to the virtual machine or after you add the disk.

To enable hard disk sharing when adding the virtual disk to a virtual machine:
Add-VMHardDiskDrive -VMName VM1 -Path
C:\ClusterStorage\Volume1\Shared.vhdx -ShareVirtualDisk

To enable hard disk sharing on a virtual disk already added to a virtual machine:
$Drive = get-vm VM1 | Get-VMHardDiskDrive | Where {$_.Path -like
"C:\ClusterStorage\Volume1\Shared.vhdx"}

Set-VMHardDiskDrive $Drive -SupportPersistentReservations $true

Virtual Hard Disk Sharing Overview, in Microsoft TechNet, provides more details
about this feature.

Pass-through disks

Because of the way that I/O to a VHD device located on a volume managed by the parent partition is processed, several levels of indirection are imposed. The operating system within the virtual machine services the I/O and passes it to the storage device. In turn, because the VHD is physically owned by the parent partition, the parent must receive the I/O and drive it to the physical disk that it owns. This multi-level indirection of I/O does not provide the best performance, although the overhead is relatively small. The best performance occurs when there are the fewest levels of indirection. For storage devices presented to a virtual machine, the best performance occurs when you use virtual Fibre Channel devices, pass-through devices, or iSCSI devices presented directly to the virtual machine. As previously mentioned, we recommend that you use virtual Fibre Channel instead of pass-through devices where it is supported in a given environment.

Pass-through devices must be configured as offline to the parent partition and are therefore inaccessible for any parent-managed functions, such as the creation or management of volumes. These offline disk devices are then configured as storage devices directly to the virtual machine. You can use the Disk Management console or, alternatively, the DISKPART command-line interface to transition SAN storage devices between online and offline status. The SAN policy setting, which is accessed through the DISKPART command-line interface, manages the default state of storage devices.
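
On Windows Server 2012, you can also take the candidate disk offline with the storage cmdlets before assigning it as a pass-through device. The disk number below is an example:

# Take the disk offline in the parent partition so it can be assigned as a pass-through device
Set-Disk -Number 4 -IsOffline $true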

You can use the Hyper-V MMC to configure pass-through devices as shown in Figure
17. Disks can be allocated against a SCSI controller that is configured for the virtual
machine. Each SCSI controller can map up to 64 pass-through devices, and up to four
discrete SCSI controllers may be configured to an individual virtual machine. This
provides support for up to 256 SCSI devices.

Figure 17. Configuration of pass-through disk devices for a virtual machine



After you configure the required disk devices as pass-through devices to the virtual machine, the operating system of the virtual machine detects and displays them as shown in Figure 18. In this instance, the virtual machine has been configured with a VHD device that is used as the boot device; the Virtual HD ATA Device is the boot device. The EMC SYMMETRIX SCSI Disk Device entries identify the two pass-through devices, as this is how the Windows Server 2008 R2 operating system of the virtual machine detects the storage devices.

Figure 18. Pass-through storage for an EMC Symmetrix VMAX array

Configure storage devices that are presented as pass-through devices to a virtual machine in the same way as storage devices presented to a physical server. Administrators should follow the recommendations provided for a physical environment, which can include the requirement to align partitions on applicable operating systems. Windows Server 2008 R2 and Windows Server 2012 do not require manual partition alignment, because partitions are automatically aligned at a 1 MB offset. When you create a New Technology File System (NTFS) volume, follow Microsoft SQL Server and Microsoft Exchange Server recommendations for such tasks as selecting an allocation unit size of 64 KB when formatting volumes.

You can deploy virtual machine instances that use a pass-through device as the boot device for the operating system. You must define the pass-through device before installing the operating system of the virtual machine, and then select the pass-through disk (configured through the IDE controller) as the install location.

In clustered environments, ensure that the proper resource dependencies are in place
for pass-through devices with their respective virtual machine, as shown in Figure 19.
The pass-through disk should be within the virtual machine group or role. The virtual
machine resource and virtual machine configuration resource must also be made
dependent on the pass-through devices. The Update-
ClusterVirtualMachineConfiguration PowerShell cmdlet can be used to help set the
proper dependencies for a clustered virtual machine.
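
The following sketch refreshes the clustered configuration for a hypothetical virtual machine after its storage has changed; the name shown must match the virtual machine resource name in your cluster:

# Refresh the cluster's copy of the virtual machine configuration and its storage dependencies
Update-ClusterVirtualMachineConfiguration -Name "Virtual Machine VM01"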



Figure 19. Pass-through disk dependencies

Storage connectivity summary

EMC storage arrays provide and support all forms of storage connectivity required by Windows Server 2008 R2 and Windows Server 2012 Hyper-V. You can deploy any form of Hyper-V server-managed or direct-to-virtual-machine connectivity, and you can even combine multiple forms of connectivity to satisfy application-level requirements.

Each form of storage connectivity provides different management or operational features. For example, storage that is provisioned directly to a virtual machine, using virtual Fibre Channel, iSCSI connectivity, or pass-through disks, restricts that storage volume to the virtual machine that owns it. Conversely, storage allocated as VHD devices created on volumes within the Hyper-V server allows a single LUN to be shared among any number of virtual machines, because the various VHD devices are collocated on the parent-managed volume.

When you use a common volume to collocate VHD devices, you can also affect some high-availability or mobility solutions, because a change to the single LUN affects all virtual machines located on the LUN. This can affect configurations using failover clustering. However, the implementation of Cluster Shared Volumes (CSV) with Windows Server 2008 R2 and Windows Server 2012 failover clustering addresses the need for high availability of consolidated VHD deployments.

When you collocate VHD devices onto a single storage LUN, consider how to address
the cumulative workload. In cases where an application such as Microsoft SQL Server
or Microsoft Exchange Server is deployed within a virtual machine, use sizing best
practices to ensure that the underlying storage is able to support the anticipated
workload. When collocated VHD devices are placed on a common storage volume
(LUN), provision the device to ensure that it can satisfy the cumulative workload of all
applications and operating systems located on the VHDs. When storage has been under-provisioned from a performance perspective, all collocated applications and virtual machines can be adversely affected.

Availability and mobility for virtual machines


After the initial deployment of a virtualized infrastructure, you must often provide high availability for the services running within the environment. In its simplest form, you can provide a high-availability solution by configuring multiple Windows servers into a failover clustering configuration. This style of configuration can cluster up to 16 physical servers with Windows Server 2008 R2 and up to 64 physical servers with Windows Server 2012. The virtual machines that are configured on shared SAN storage then become resources that can be moved among the nodes.
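
A basic two-node Hyper-V cluster can be validated and created with the failover clustering cmdlets; the node names and cluster IP address below are hypothetical:

# Validate the proposed configuration, then create the cluster
Test-Cluster -Node HV-NODE1, HV-NODE2
New-Cluster -Name HVCLUSTER -Node HV-NODE1, HV-NODE2 -StaticAddress 192.168.1.100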

Windows failover clustering for Hyper-V servers

You can implement Windows failover clustering for use with Hyper-V virtual machines in the same way as implementing the Windows cluster environment for other applications such as SQL Server or Exchange Server. Virtual machines become another form of application that failover clustering can manage and protect.

Use the failover cluster management wizard to configure a new application that converts an existing virtual machine instance into a highly available configuration. Use the option to configure a virtual machine, as shown in Figure 20. Shut down the virtual machine to configure it for high availability, and place all storage objects, including items such as ISO images that are mounted to the virtual machine, on SAN storage.

Failover clustering with Windows 2008 R2 assumes that access to storage objects
from all nodes within the cluster is symmetrical. This means that all drive mappings,
file locations, and mount points are identical, and during configuration, checks are
made to ensure that this condition is met.

With failover clustering with Windows Server 2012, you can have asymmetrical
storage configurations, where the same storage is not connected to all nodes in the
cluster. Such configurations are possible in many geographically dispersed cluster
scenarios. In this case, the cluster validation wizards only validate storage against
nodes in a common site. Wizard failure results when mandatory requirements are not
met. You will receive warnings when failover clustering is not able to verify some of
these aspects, or when failure is likely. Read the warnings for information about how
to fix the problems.



Figure 20. High Availability Wizard

After you import a virtual machine into failover clustering, manage and maintain the
virtual machine through the failover cluster management interface. Avoid starting and
stopping the virtual machine outside of the control of failover clustering. If the virtual
machine shuts down outside of the control of failover clustering, the clustering
software assumes that the virtual machine has failed and restarts the virtual
machine.

Where necessary, Failover Cluster Manager launches the required virtual machine management interfaces. Use failover clustering to manage all availability options and state changes for the virtual machine.

When you import a virtual machine instance into a high-availability configuration, the
machine must include all related storage disk devices so that you can manage the
virtual machine correctly. The High Availability Wizard fails if it is unable to include all
storage configured for the virtual machine within the cluster environment. Configure
all shared storage correctly across the cluster nodes. When you add disk storage
devices, correctly configure the devices as shared storage within the cluster.
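
You can also make an existing virtual machine highly available with PowerShell. This sketch assumes a hypothetical virtual machine named VM01:

# Add an existing virtual machine to the cluster as a highly available role
Add-ClusterVirtualMachineRole -VMName VM01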

The primary goal of Windows Server failover clustering is to maintain availability of the virtual machine when the virtual machine becomes unavailable due to unforeseen failures; however, this protection does not always maintain the virtual machine state through such transitions. As an example of this style of protection, consider the case of a physical node failure where one or more virtual machines were running. Windows failover clustering detects that the virtual machines are not operational and that a node is no longer available, and attempts to restart the virtual machines on a remaining node within the cluster configuration.



Windows failover clustering for virtual machines

Availability for virtual machine resources is ensured through the use of Windows failover clustering at the parent level; however, protection at the virtual machine level may not provide high availability for the applications running within the virtual machines. For example, a server instance cannot start if the virtual machine instance has corrupted files. The high-availability protection for the virtual machine can ensure that the virtual machine is running, but cannot ensure that the operating system itself, or the applications installed on the server, are accessible.

Windows failover clustering checks at the application level to ensure that services are
accessible. For example, a clustered SQL Server instance continually undergoes
“Look Alive” and “Is Alive” checks to ensure that the SQL Server instance is
accessible to user connections. Implementing clustering within the virtual machines
can provide this additional level of protection.

You cannot configure a Failover Cluster within virtual machines that are running
Windows Server 2008 R2 or Windows Server 2012 using virtual disks or pass-through
disks. This limitation is because of the filtering of the necessary SCSI-3 Persistent
Reservation commands. However, you can form Windows Cluster configurations with
virtual machines that are running Windows Server 2008 R2 with iSCSI shared storage
devices. In such configurations, the iSCSI initiator is implemented within the child
virtual machines, and the shared storage is defined on the iSCSI LUNs.

With Windows Server 2012 you can use both iSCSI and virtual Fibre Channel as
shared storage within a virtual machine cluster. You can also use SMB file share
storage with certain clustered applications, such as SQL Server. If you use SMB file
share storage, you should also use SMB 3.0 based file shares.

With Windows Server 2012 R2, you can also use VHDs as shared storage between
virtual machines that run Windows failover clustering. “Windows Server 2012 R2 new
VHD features” on page 25 provides more information about the shared virtual hard
disk feature.
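
For reference, the Windows Server 2012 R2 Hyper-V module exposes virtual hard disk sharing
as a switch on Add-VMHardDiskDrive. The following sketch attaches an existing VHDX on a CSV
to two guest-cluster nodes; the virtual machine names and the path are placeholders, and the
exact switch name should be verified against your Hyper-V module version.

# Attach the same VHDX to both guest-cluster nodes as a shared disk (placeholder names and path)
$sharedDisk = "C:\ClusterStorage\Volume1\GuestCluster\SharedData.vhdx"
foreach ($vm in "SQLNode1", "SQLNode2") {
    Add-VMHardDiskDrive -VMName $vm -Path $sharedDisk -ShareVirtualDisk
}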

Virtual machine live migrations within clusters
Movement of virtual machines within a cluster was different for systems before
Windows Server 2008 R2. When an administrator or an automated management tool
requested a move, the virtual machine state was saved to disk and then resumed
after disk resources were moved to the target node. This move, or quick migration
operation, took long enough that client outages often occurred, even though the
virtual machine state was preserved and then resumed.

With Windows Server 2008 R2 and Windows Server 2012 for failover cluster nodes,
you can use the live migration functionality available with the clustering environment.
Live migrations move virtual machines transparently between nodes. Unlike quick
migration move requests, there is no outage for a client application, and the
migration between nodes is completely transparent. To achieve this level of client
transparency, live migrations copy the memory state representing the virtual machine
from one server to another so as to mitigate any loss of service.
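
For example, you can start a live migration of a clustered virtual machine from PowerShell
with the FailoverClusters module; the virtual machine and node names below are placeholders.

# Live migrate a clustered virtual machine to another node with no client outage
Move-ClusterVirtualMachineRole -Name "SQLVM01" -Node "HyperVNode2" -MigrationType Live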

Live migration configurations require a robust network configuration between the
nodes within the cluster. This network configuration optimizes the memory copy
between the nodes and enables an efficient virtual machine transition. For such live
migration configurations, you must have at least one dedicated 1 Gb (or greater)
network between cluster nodes to enable the memory copy. We also recommend that
you dedicate specific private networks exclusively to live migration traffic, as shown
in Figure 21. Networks that are disabled for cluster communication can still be used
for live migration traffic. Deselect networks within the live migration settings if you do
not want to use them.

Figure 21. Live Migration Settings window

When you use a live migration, failover clustering replicates the virtual machine
configuration and memory state to the target node of the migration. Multiple cycles of
replicating the memory state occur to reduce the amount of changes that need to be
sent on subsequent cycles.

You can use live migration operations for virtual machines that contain virtual disks,
pass-through disks, virtual Fibre Channel storage, or iSCSI storage as presented
directly to the virtual machine. We recommend using CSVs for virtual disks, but they
are not required for live migrations. You can migrate a virtual machine with dedicated
storage devices that are used for virtual disk access. If you migrate a virtual machine,
the virtual disks transition from offline to online on the target cluster node during the
live migration process.

Network connectivity allows for the timely transfer of state. As a final phase, the
migration process momentarily suspends the machine instance and switches all disk
resources to the target node. After this step, the virtual machine immediately
resumes processing. The transition of the virtual machine must complete within a
TCP/IP timeout interval so that client applications experience no loss of connectivity.

Note: The live migration process is different from the quick migration process because no
suspension of virtual machine state to disk occurs. Failover clustering still provides support
for quick migrations.

If the migration of the virtual machine cannot execute successfully, the migration
process reverts the virtual machine back to the originating node. This reversion
maintains the availability of the virtual machine and ensures that client access is not
impacted. You can also terminate a live migration by using the Cancel in progress Live
Migration option in the Failover Cluster Manager console.

Shared-nothing live migration
Windows Server 2012 introduces a new type of live migration referred to as a shared-
nothing live migration. This form of live migration allows for the movement of non-
clustered virtual machines between Hyper-V hosts when there is no shared storage.
The migration can occur between hosts using local storage, SAN storage, or SMB 3.0
file shares. If both hosts have access to the SMB file share, then no storage
movement is necessary. When non-shared storage is used, Hyper-V uses these steps
to initiate a storage live migration:
1. Throughout most of the migration, reads and writes are serviced from the
source virtual disks, while the contents of the source are copied, over the
network, to the new destination VHDs.
2. Following the initial full copy of the source, writes are mirrored to the source
and destination VHDs. Outstanding changes to the source are also replicated
to the target.
3. When the source and target VHDs are synchronized, the virtual machine live
migration begins, following the same process used for shared storage live
migrations.
Offloaded Data Transfer can be used as a part of the migration. “Storage live
migration” on page 34 provides more details.

4. When the live migration completes, the virtual machine runs from the
destination server and the original source VHDs are deleted.

Virtual Machine Live Migration Overview, in Microsoft TechNet, provides more
information about shared-nothing live migrations.
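
As a hedged example, a shared-nothing live migration between two stand-alone Windows
Server 2012 hosts can be started with Move-VM; the host name and destination path shown
here are placeholders, and live migration must already be enabled on both hosts.

# Move the virtual machine and its storage to another host with no shared storage
Move-VM -Name "WebVM01" -DestinationHost "HyperVHost2" -IncludeStorage `
    -DestinationStoragePath "D:\Hyper-V\WebVM01"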

Storage live migration
Starting with Windows Server 2012, you can migrate the virtual hard disk storage of a
virtual machine between LUNs non-disruptively. You can migrate storage on stand-
alone hosts or on Hyper-V clusters where virtual hard disks reside or will reside on
CSVs or SMB 3.0 file shares. You can start the storage migration process from Hyper-V
Manager for stand-alone hosts, from Failover Cluster Manager for clustered hosts (as
shown in Figure 22), or from PowerShell, by using the Move-VMStorage cmdlet. If
SCVMM exists in the environment, you can start migrations from the SCVMM console
or from PowerShell.

If the virtual machine that is being migrated is offline, the machine remains offline
and the virtual hard disks are moved between the source and target. If the virtual
machine that is being migrated is online, a live storage migration occurs, using the
following process:
1. Throughout most of the migration, reads and writes are serviced from the
source virtual disks while the contents of the source are copied to the new
destination VHDs.
2. Following the initial full copy of the source, writes are mirrored to the source
and destination VHDs. Outstanding changes to the source are also replicated
to the target.
3. When the source and target VHDs are synchronized, the virtual machine
begins using the target VHDs.

4. The original source VHDs are then deleted.
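
For example, the following Move-VMStorage call migrates all of a running virtual machine's
files to another CSV; the virtual machine name and destination path are placeholders for
your environment.

# Non-disruptively move the VM's virtual hard disks and configuration to another CSV
Move-VMStorage -VMName "SQLVM01" -DestinationStoragePath "C:\ClusterStorage\Volume2\SQLVM01"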

Figure 22. Storage Migration within a Hyper-V cluster

You can accelerate the storage migration process with ODX. If the storage array where
the migration occurs supports ODX, the storage migration automatically uses ODX.
Using ODX greatly enhances the speed of the initial copy operation between the
source and target devices. For EMC Symmetrix® VMAX, EMC VNX® and EMC VNXe®
systems where ODX is supported, both the source and target must reside in the same
storage array. EMC environments also require a Windows hotfix for Windows Server
2012 support with ODX. The hotfix ensures that, if ODX copy operations are rejected,
the host-based copy engages and resumes from where the ODX copy left off. The hotfix
also corrects an issue with clustered storage live migration that can lead to data loss.
You can download the Update that improves cloud service provider resiliency in
Windows Server 2012 hotfix from Microsoft Support at:
http://support.microsoft.com/kb/2870270. “Windows Server 2012 Offloaded Data
Transfer” on page 47 provides more details.

Windows failover clustering with Cluster Shared Volumes
You can use Windows Server 2008 R2 and Windows Server 2012 to configure shared
SAN storage volumes so that all nodes within a given cluster configuration can access
the volume concurrently. In this configuration, the volume is mounted as read/write
to all nodes at the same time. The new model for allowing direct read/write access
from multiple cluster nodes is called Cluster Shared Volumes (CSVs). CSV supports
running multiple virtual machines on different nodes where the VHD storage devices
are located on a commonly accessible storage device.

CSVs help make the transition process for VHD ownership during live migrations more
efficient, as no transition of ownership and subsequent mounting is required, as is
typical for cluster storage devices. The SAN storage configured as CSVs is mounted
and accessible by all cluster nodes.

The CSV feature is enabled by default in Windows Server 2012. In Windows Server
2008 R2, you must enable the feature by selecting Enable Cluster Shared Volumes in
Failover Cluster Manager, as shown in Figure 23.

Figure 23. Windows 2008 R2 CSV from Failover Cluster Manager

After you enable CSV, in Windows 2008 R2, a new Cluster Shared Volumes option
appears in Failover Cluster Manager. In Windows Server 2012, you can access CSVs
at Storage > Disks in Failover Cluster Manager. As shown in Figure 24, you can use this
option to convert any disk within the available storage group to a CSV.

Figure 24. Add available storage to CSVs
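
You can perform the same conversion from PowerShell with the FailoverClusters module; the
disk resource name below is a placeholder for a disk currently listed under Available Storage.

# Convert an available storage disk to a Cluster Shared Volume and list the result
Get-ClusterResource -Name "Cluster Disk 2" | Add-ClusterSharedVolume
Get-ClusterSharedVolume | Select-Object Name, State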

For Windows Server 2008 R2 and Windows Server 2012, you must format a disk with
NTFS to be added as a CSV. Resilient File System (ReFS) is not supported for CSV use
on Windows Server 2012. For Windows Server 2012, the CSV file system is called
“CSVFS.” Although the name has changed, the underlying file system is still NTFS. If a
CSV is removed from a cluster, the file system designation returns to NTFS, with all
data on the file system remaining intact.

After you convert a SAN device to be used as a CSV volume, you can access the
storage device on all cluster nodes. The CSV volume is mounted to a common, but
local, location on all nodes, which ensures that the namespace to VHD objects is
identical on all cluster nodes. The namespace for each CSV volume is based on the
system drive location, which must be the same for all cluster nodes. The namespace
includes a ClusterStorage location, in which the volumes are physically mounted
on each node. The mount location is a sequentially generated name of the form
Volume1, where the appended numeric value is incremented for each subsequent
volume.

Note: You can rename the mount points assigned to CSVs. To rename the specified volume
based mount point, select Rename from Windows Explorer. The new name appears on all
nodes of the cluster.

All CSV devices list the current owner for the resource. The owner coordinates access
to the various VHD devices that represent virtual machine storage within the cluster.
Virtual machines continue to run on only a single physical server at any time. When a
virtual machine that is deployed on CSV storage is to be brought online, the node that
is starting the virtual machine communicates with the CSV owner to request
permission to generate I/O to the VHD device. The node that starts the virtual machine
locks the VHD device to ensure that no other process can write to the VHD from any
other node. If the VHD has already been locked by another node, the request is
denied. When the CSV owner grants permission, the node generates direct I/O to the
VHD on the storage device as needed by the virtual machine.

CSVs also protect against external failure scenarios, such as physical connectivity
loss from a given node. If connectivity from a node to the underlying storage is lost,
I/O operations are redirected over the CSV network to the current owning node. This
functionality prevents the failure of a virtual machine as a result of the loss of storage
connectivity. While this functionality allows the virtual machine to continue
operating, this indirection should not be relied on to provide ongoing access to the
virtual machine. Performance is affected when running in redirected mode; resolve
the loss of connectivity promptly, or execute a live migration to a node with direct
storage access.

Sizing of CSVs
CSVs are NTFS volumes and have the same limits as NTFS, with a theoretical
maximum volume size of 256 TB. You can determine appropriate sizing for CSV
volumes based on the cumulative workload expected from the VHD files located in
the CSV.

The CSV is physically represented by a single LUN presented from a storage array. The
LUN is supported by some number of physical disks within the array. Use the typical
sizing for both storage allocation and I/O capacity to ensure that both the storage
allocation for a given CSV and the I/O requirements are adequately met.

Undersizing the LUN for I/O load results in poor performance for all VHDs located on
the CSV, and for all applications installed in the virtual machines that use the VHDs.
We recommend adding multiple CSVs to distribute workloads across available
resources.

Site disaster protection with Hyper-V Replica
Windows Server 2012 includes Hyper-V Replica, a native replication technology for
virtual machines. You can use Hyper-V Replica to enable asynchronous host-based
replication of VHDs between standalone hosts or clusters. You can also use Hyper-V
Replica to enable virtual machine replication between sites without shared storage.
Hyper-V Replica is useful for branch offices and for replicating virtual machines to
hosted cloud providers.

When using Hyper-V Replica, you can enable or disable replication for each VHD. For
data that you do not want to replicate, such as an operating system page file, create
a separate virtual disk for that workload and exclude that disk from replication.
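
For illustration, the following sketch enables replication for a virtual machine while
excluding a page-file disk, and then sends the initial copy over the network; the server
names, port, and paths are placeholders.

# Enable replication but exclude the disk that holds the guest page file
Enable-VMReplication -VMName "AppVM01" -ReplicaServerName "replica.contoso.com" `
    -ReplicaServerPort 80 -AuthenticationType Kerberos `
    -ExcludedVhdPath "D:\VMs\AppVM01\PageFile.vhdx"
# Start the initial synchronization over the network
Start-VMInitialReplication -VMName "AppVM01"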

Note: You can only replicate VHDs. If you configure a virtual machine with pass-through or
virtual Fibre Channel storage, Hyper-V Replica is blocked.

When you replicate specific VHDs within a virtual machine, an initial full copy of the
data from the primary virtual machine is sent to the replica virtual machine location.
This replication can be over the network or you can manually copy the VHD files to the
replica site. If you manually copy the files, a file comparison is performed and
ensures that only incremental changes are replicated for the initial synchronization.

After the initial synchronization, changes in the source virtual machine are
transmitted over the network at periodic replication frequency intervals. The
replication frequency depends on the configured cycle time: Hyper-V Replica requires
cycle times of at least five minutes with Windows Server 2012, while with Windows
Server 2012 R2 you can configure the replication frequency at 30-second, 5-minute,
or 15-minute cycles.

Hyper-V Replica also allows for additional recovery points. Multiple recovery points
enable the ability to recover to an earlier point in time. Windows Server 2012
supports 16 hourly recovery points while Windows Server 2012 R2 supports 24 hourly
recovery points.

Windows Server 2012 R2 includes extended replication, a feature that enables
support for a second replica, where the replica server forwards changes that occur on
the primary virtual machines to a third server. This functionality can enable three-site
solutions, which provide additional disaster recovery protection in the event of a
single-site or regional disruption.

You can move a virtual machine to a replica server in a planned failover. With a
planned failover, any changes that have not been replicated are first copied to the
replica site, so that no data is lost. After data is moved to the replica site, you can
configure reverse replication to send changes back to the original site. For unplanned
failovers, you can bring the replica virtual machine online. You can lose some
data in an unplanned failover. When you use extended replication, replication
continues to the extended replica server if a planned or unplanned failover occurs.
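
The following is a minimal sketch of a planned failover sequence using the Hyper-V
replication cmdlets; the virtual machine name is a placeholder, and the primary virtual
machine must be shut down before you begin.

# On the primary server: replicate any pending changes
Start-VMFailover -VMName "AppVM01" -Prepare
# On the replica server: fail over, reverse the replication direction, and start the VM
Start-VMFailover -VMName "AppVM01"
Set-VMReplication -VMName "AppVM01" -Reverse
Start-VM -VMName "AppVM01"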

Note: Planning and configuration for Hyper-V Replica is outside the scope of this white
paper. Deploy Hyper-V Replica, on Microsoft TechNet, provides more details.

Site disaster protection with Cluster Enabler
EMC Cluster Enabler has been a supported product for many years under the
Windows Geographically Dispersed Clustering program. You can use the Cluster
Enabler product to seamlessly integrate multi-site storage replication into the
framework provided by Windows failover clustering. The Microsoft Windows Server
Catalog lists compatible solutions.

Cluster Enabler supports failover cluster configurations with multiple forms of
storage-based replication, including both synchronous and asynchronous replication
across sites. Cluster Enabler provides a plug-in architecture to support various EMC
replication products, including EMC RecoverPoint®, EMC Symmetrix Remote Data
Facility (SRDF®), and EMC MirrorView™.

Because of this tight integration with the Windows failover cluster framework, valid
supported failover cluster configurations and deployed applications are fully
supported under the Cluster Enabler solution set. This includes Windows Hyper-V
virtual machines.

Note: Steps for installing Cluster Enabler within a Microsoft cluster environment are beyond
the scope of this white paper. Cluster Enabler product guides on EMC Online Support
provide more details.

Cluster Enabler is transparent to the operations of the typical failover cluster
management framework. Installation of Cluster Enabler includes a Cluster Enabler
service and a cluster resource. Figure 25 shows the cluster resource for SRDF/CE. The
name assigned to the resource is preceded with “EMC_” and appends the name of
the specific resource group.

Figure 25. Failover Cluster Manager with an SRDF/CE resource

The clustered disks within the resource group have their dependencies modified to
include the Cluster Enabler resource defined for the group. This ensures that
transitions to other nodes are coordinated appropriately. For lateral movements, or
movement to nodes that are within the same site as the owning node, no transition of
the replication state is required. If you request that the resource be moved to a peer
node, or a node that is located in the remote site, the Cluster Enabler resource
coordinates with the underlying storage replication service to transition the remote
disk to a read/write state. The replication state of disk devices is fully managed by
the Cluster Enabler resource, and this functionality is transparent to the
administrator.

The Cluster Enabler environment includes the Cluster Enabler Manager Console to
configure and manage Cluster Enabler specific functionality, as shown in Figure 26.
You can use the console to identify resources used within the various groups
configured in the geographically dispersed cluster environment. The management
framework uses a logical construct of sites, and logically displays resources based on
this layout.

All movement, configuration of resources, and online/offline status changes of
resources within the cluster continue to be executed through the standard Failover
Cluster Management Console. The Cluster Enabler Manager Console is used to
configure newly created resource group integration, or to introduce new shared disk
resources into the cluster configuration.

Figure 26. EMC Cluster Enabler Manager

Due to the transparent implementation of Cluster Enabler, most of the configurations
that are supported by Windows failover clustering are also supported by Cluster
Enabler. Cluster Shared Volumes (CSVs) are an important exception. CSV functionality
provides equal access to the CSV volumes from all nodes, while most storage
replication technologies limit access to the target storage devices. Clustered
solutions for Hyper-V virtual machines that do not use CSV are fully supported with all
Cluster Enabler plug-ins. CSV support depends on the Windows version and the
replication technology. Refer to the appropriate Cluster Enabler plug-in product guide
for current CSV support statements.

Cluster Enabler CSV behavior
For Cluster Enabler with Windows Server 2008 R2, you can run virtual machines only
on the primary site where the CSV devices are online and read/write enabled. Cluster
Enabler, with Windows Server 2008 R2, blocks virtual machines from running on the
secondary site because the devices are read/write disabled.

This is different from failover cluster behavior without Cluster Enabler configured,
where virtual machines are allowed on the secondary site but run in redirected access
mode. Geo-clustering is the reason for this restriction, because site-to-site network
transfers would have higher network latencies and more expensive bandwidth
requirements. Cluster Enabler restricts virtual machines to remain on the site on
which they have direct access to the disk, and moves them only when the CSV disk
fails over to the secondary site.

For Windows Server 2012, virtual machines can run on any node regardless of where
the CSV disk is online. This means that the virtual machine can fail over to a node
where the CSV disk is marked as write-disabled and run in redirected access mode.

To avoid this state, a virtual machine can be restricted to a site by editing the
possible owners list and limiting the virtual machine resources to run from specific
nodes in the cluster.

EMC product guides on EMC Online Support provide configuration and management
details for supported Cluster Enabler replication technologies.

EMC VPLEX
EMC VPLEX technology is a scalable, distributed-storage federation solution that
provides non-disruptive, heterogeneous data movement and volume management
functionality.

EMC offers VPLEX in three configurations to provide high availability and data
mobility:
• EMC VPLEX Local®
• EMC VPLEX Metro®
• EMC VPLEX Geo®

Figure 27. VPLEX configurations

VPLEX Local
VPLEX Local provides seamless, non-disruptive data mobility and allows you to
manage multiple heterogeneous arrays from a single interface within a data center.
VPLEX Local also provides increased availability, simplified management, and
improved utilization across multiple arrays.

VPLEX Metro with AccessAnywhere
You can use VPLEX Metro with AccessAnywhere® technology to enable active/active,
block level access to data between two sites within synchronous distances. The
distance is limited to what synchronous behavior can withstand and also considers
host application stability. We recommend that, depending on the application,
replication latency for Metro be less than or equal to 5 ms RTT.

You can use the combination of virtual storage with VPLEX Metro and virtual servers
to provide transparent movement of virtual machines and storage across a distance.
This technology provides improved utilization across heterogeneous arrays and
multiple sites.

VPLEX Geo with AccessAnywhere
You can use VPLEX Geo with AccessAnywhere technology to enable active/active,
block level access to data between two sites within asynchronous distances. VPLEX
Geo enables cost-effective use of resources and power and allows primary and
secondary data centers to be active against the same logical storage. Geo provides
the same Distributed Virtual Volume flexibility as Metro but extends the distance up
to and within 50 ms RTT.

VPLEX with Windows failover clustering
Deployments of distributed Windows geographically dispersed cluster solutions with
EMC VPLEX Metro and Geo support Hyper-V live migrations across site boundaries.
You can use active/active solutions to implement load balancing in addition to the
core high availability/disaster recovery features of Windows failover clustering.

EMC VPLEX Metro and Geo provide active/active storage access across synchronous
and asynchronous distances by creating VPLEX distributed virtual volumes, each a
RAID 1 mirror with extents located across the two VPLEX clusters. After you create the
distributed virtual volumes, the volumes are placed into a consistency group for
management decisions made by EMC VPLEX Witness®.

Figure 28. VPLEX distributed virtual volumes (distributed devices are RAID-1 mirrors
across Site A and Site B, with Ethernet WAN links for host and VPLEX connectivity
between VPLEX Cluster 1 and VPLEX Cluster 2)

In an active/active presentation, hosts can perform read/write operations on the
distributed virtual volume while it is simultaneously presented to both sites. This
presentation is fundamentally different from traditional array replication solutions,
where the volume on the secondary cluster remains write-disabled until a failover
occurs.

VPLEX functionality is particularly powerful in environments that use CSVs. You can
distribute Hyper-V virtual machines residing on CSVs across sites where a VPLEX
distributed volume is used. This allows for load balancing of virtual machines across
storage arrays and sites, disaster avoidance with proactive virtual machine mobility
between data centers, and core disaster recovery for unplanned events.

With VPLEX distributed virtual volumes, you do not need to use manual failover
processes. The global cache coherency layer in VPLEX presents a consistent view of
data at any point in time. You can manage virtual machine mobility across sites with
quick or live migrations, just as with traditional shared storage solutions.

The configuration and use of VPLEX with Windows failover clustering and Hyper-V is
outside the scope of this document. Hyper-V Live Migration with VPLEX Geo on EMC
Online Support provides more information about VPLEX and Hyper-V.

Manual or scripted disaster recovery with storage replication
You can also use the Import-VM PowerShell cmdlet for disaster recovery with storage
replication and Hyper-V on Windows Server 2012. With this enhanced PowerShell
cmdlet, you can import virtual machines from their original configuration and VHD
files. You can replicate configuration files and data between sites, and then import
directly from the replicated data. You do not need to export the virtual machines as
required with Windows Server 2008 R2.

If all virtual switches are available with the same names and all mount point locations
are identical to the original configuration, you can import all virtual machine
configuration files in a specified directory hierarchy on a target host with the
following command:
Get-ChildItem .\*.xml -recurse | import-vm
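
If the virtual switch names or mount point locations might differ on the target host, you
can first generate a compatibility report for each replicated configuration file with
Compare-VM; this sketch simply lists any reported incompatibilities.

Get-ChildItem .\*.xml -Recurse | ForEach-Object {
    # Report missing switches, paths, or other incompatibilities before importing
    (Compare-VM -Path $_.FullName).Incompatibilities |
        Format-Table Message, MessageId -AutoSize
}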

Microsoft integration with EMC storage technologies
EMC storage is integrated with Microsoft technologies when using the following EMC
or Microsoft software and features:
• Microsoft System Center Virtual Machine Manager
• Windows Server 2012 Offloaded Data Transfer
• Windows Server 2012 Thin-Provisioning Space Reclamation
• EMC Replication Manager
• EMC Storage Integrator (ESI)
• EMC Solutions Enabler

Microsoft System Center Virtual Machine Manager
You can use Microsoft System Center Virtual Machine Manager (SCVMM) to efficiently
manage a Hyper-V environment that can incorporate hundreds of physical servers.
SCVMM integrates with the various availability products such as failover clustering for
a system that provides centralized management, reporting, and alerts. SCVMM also
provides management services for VMware servers and their virtual machine
resources.

You can use the centralized management console for a unified view of all managed
servers and resources. From the management console, you can discover, deploy, or
migrate existing virtual machines between managed physical servers. You can use
this functionality to dynamically manage physical and virtual resources within the
landscape, and to adapt to changing business demands.

SCVMM 2012 provides standards-based discovery and automation of iSCSI and Fibre
Channel block storage resources in a virtualized data center environment. These new
capabilities build on the Storage Management Initiative Specification (SMI-S) that
was developed by the Storage Networking Industry Association (SNIA). The SMI-S
standardized management interface enables an application such as SCVMM to
discover, assign, configure, and automate storage for heterogeneous arrays in a
unified way. An SMI-S Provider uses SMI-S to enable storage management. To take
advantage of this new storage capability, EMC updated the SMI-S Provider to support
the SCVMM 2012 RTM and SP1 releases.

EMC SMI-S Provider supports unified management of multiple types of storage arrays.
With the one-to-many model enabled by the SMI-S standard, a virtual machine
manager can interoperate, by using the EMC SMI-S Provider, with multiple disparate
storage systems from the same virtual machine manager console that is used to
manage all other private cloud components. Table 1 outlines some of the benefits of
centralized storage management with SCVMM.

Table 1. Centralized storage management with SCVMM

Reduce costs
• On-demand storage—Aligns IT costs with business priorities by synchronizing
storage allocation with fluctuating user demand. SCVMM elastic infrastructure
supports thin provisioning, expanding or contracting the allocation of storage
resources on EMC storage arrays in response to changing demand.
• Ease-of-use—Simplifies consumption of storage capacity, saves time, and lowers
costs by enabling the interaction of EMC storage arrays with, and the integration of
storage automation capabilities within, the SCVMM private cloud.

Simplify administration
• Private cloud GUI—Allows administration of private cloud assets, including
storage, through the SCVMM console, a single management UI for SCVMM or cloud
administrators.
• Private cloud CLI—Enables automation through the SCVMM comprehensive set of
Windows PowerShell cmdlets, including 25 new storage-specific cmdlets.
• Reduce errors—Tracks errors by using the SCVMM UI or CLI to view and request
storage.
• Private cloud self-service portal—Provides a web-based interface that permits
users to create virtual machines, as needed, with a storage capacity that is based on
predefined classifications.
• Simpler storage requests—Automates storage requests to eliminate delays of days
or weeks.

Deploy faster
• Deploy VMs faster and at scale—Supports rapid provisioning of virtual machines to
Hyper-V hosts or host clusters at scale. SCVMM can communicate directly with your
SAN arrays to provision storage for your virtual machines. SCVMM 2012 can provision
storage for a virtual machine in the following ways:
  - Create a new logical unit from an available storage pool—Controls the number
and size of each logical unit.
  - Create a writeable snapshot of an existing logical unit—Provisions many virtual
machines quickly by rapidly creating multiple copies of an existing virtual disk.
Snapshots cause minimal loads on hosts and use space on the array efficiently.
  - Create a clone of an existing logical unit—Offloads a full copy of a virtual disk
from the host to the array. Typically, clones do not use space as efficiently as
snapshots and take longer to create.
• Reduce load—Provisioning virtual machines quickly using SAN-based storage
resources takes full advantage of EMC array capabilities while placing no load on the
network.

Storage Automation with System Center 2012 and EMC Storage Systems using SMI-S
on EMC Online Support provides details about the integration between SCVMM and
the EMC SMI-S provider.

The EMC SMI-S Provider is based on EMC Solutions Enabler and supports block based
storage (FC/iSCSI) for both VMAX and VNX.
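
As a hedged example, registering the EMC SMI-S Provider with SCVMM can be done from the
SCVMM PowerShell console in a manner similar to the following; the provider address, port,
and run as account name are placeholders, and depending on the SCVMM release an
additional provider-type switch (for example, for an SMI-S CIM-XML provider) may be
required.

# Register the SMI-S Provider so SCVMM can discover VMAX or VNX block storage (placeholder values)
$runAs = Get-SCRunAsAccount -Name "SMISAdmin"
Add-SCStorageProvider -Name "EMC SMI-S Provider" -RunAsAccount $runAs `
    -NetworkDeviceName "https://smis01.contoso.com" -TCPPort 5989
# Rescan the provider to enumerate arrays and storage pools
Get-SCStorageProvider -Name "EMC SMI-S Provider" | Read-SCStorageProvider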

VNX operating environment file based SMI-S Provider
The VNX operating environment 8.1 or later supports SCVMM with an SMI-S Provider
that runs natively on the VNX control station. This SMI-S Provider is enabled by
default and supports file based (CIFS) storage. The following basic functionality
supports NAS storage within SCVMM and is supported by the VNX provider:

• Creating file systems and shares on VNX CIFS or NFS based servers
• Removing file systems and shares

Note: The file systems must be empty prior to removal.

• Updating SCVMM when creating new file systems or shares from management
applications other than SCVMM (for example, Unisphere).

Note: SCVMM is updated by rescanning from the Provider area of the SCVMM console.

Configuring the System Center Virtual Machine Manager Console for the NAS SMI-S
Provider, on EMC Online Support, provides updated instructions to configure SCVMM
for use with the VNX File SMI-S Provider.

Windows Server 2012 Offloaded Data Transfer
Offloaded Data Transfer (ODX) is a new feature of the Windows Server 2012 operating
system and the Windows 8 client. ODX enables Windows Server to offload data
transfers between LUNs, or offload the writing of repeating patterns, to the storage
area network (SAN). By offloading the data transfer or the repeating write pattern to
the SAN, client-server network usage, CPU utilization, and storage I/O operations are
reduced to nearly zero because the data movement is performed by the intelligent
storage array. These operations can take a fraction of the time compared to
conventional methods. ODX starts a copy request with an offload read operation and
retrieves a token representing the data from the storage device. ODX then uses an
offload write command, which includes the token, to request data movement from
the source disk to the destination disk. The storage system then performs the actual
data movement. Figure 29 illustrates the ODX process.

Figure 29. ODX process

You can use ODX based copy operations within a physical LUN or across multiple
LUNs from the same storage array. You can also use ODX copy operations across
multiple Windows Server 2012 hosts that have a source LUN on one server and a
target on the secondary server within the same array. In this latter case, SMB 3.0 is
required (and is implemented by Windows Server instances). Hyper-V virtual
machines also support ODX for the Windows Server 2012 operating system. ODX
supports virtual machine storage for VHDs (VHDX only), pass-through hard disks,
virtual Fibre Channel LUNs, or iSCSI LUNs presented directly to the virtual machine.

ODX is enabled by default within Windows Server 2012 and you can use it for any file
copy operation where the file is greater than 256 KB in size. Windows automatically
detects whether ODX is supported by a given storage device. If the storage device
does not support ODX, the device uses a standard host-based copy. If ODX is
supported, but an offload request is rejected by the storage array, Windows reverts to
a host based copy to complete the operation. In some cases, when ODX is rejected,
Windows waits three minutes before again attempting to use ODX against that
device. The copy operation that failed an ODX call can continue to use legacy copy
operations until completion.
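
You can confirm whether offload is currently allowed on a host by checking the
FilterSupportedFeaturesMode value documented in the Microsoft ODX overview; a value of 0
(or the absence of the value) means offload is enabled, and 1 disables it. The following is
a minimal sketch that reads and, if needed for troubleshooting, sets that value.

# Check whether ODX offload is allowed on this host (0 or missing = enabled, 1 = disabled)
Get-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem" `
    -Name "FilterSupportedFeaturesMode" -ErrorAction SilentlyContinue
# Disable ODX system-wide, for example while troubleshooting a copy issue
Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem" `
    -Name "FilterSupportedFeaturesMode" -Value 1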

ODX is especially useful for copying large files between file shares, deploying virtual
machines from templates, and performing storage live migrations of virtual machines
between LUNs. In addition to copy operations, ODX can be used for offloading the
writing of repeating patterns to a storage device. For example, Hyper-V with Windows
Server 2012 uses ODX to offload writing a range of zeros when creating fixed VHDs.
Windows Offloaded Data Transfers overview, on Microsoft TechNet, provides more
details.

ODX support requirements
The client that requests the copy operation must be ODX-aware. You must use
Windows Server 2012 or Windows 8 to initiate the copy operation for ODX to engage.
ODX also requires the storage arrays within the SAN to support the offload requests
from the operating system based on the T10 specifications (http://www.t10.org/).

For ODX to be leveraged in Hyper-V virtual machines, virtual SCSI adapters with VHDX
(the VHD format is not supported), pass-through disks, or virtual Fibre Channel
adapters are required.

If ODX is enabled on an EMC storage array, for example, following a code upgrade,
you must either reboot the Windows 2012 server or mask and unmask the devices so
that Windows can detect the change in ODX support. Windows Server 2012 discovers
device feature support characteristics only at the time of initial device discovery and
enumeration. Any change in device feature support characteristics for previously
discovered devices is not recognized without a host reboot or device re-discovery.

Table 2 lists ODX support for EMC storage arrays:

Table 2. ODX support for EMC storage arrays

VNX Block
Supported version: VNX OE for block version 05.32.000.5.201, released on 2/22/2013.
Notes: An ODX enabler must be installed on the VNX before the ODX feature can be
used. Where can the ODX enabler for VNX OE 05.32 be obtained? on EMC Online
Support provides details about the enabler and how to obtain it.

VNX File
Supported version: VNX OE for file version 7.1.65.8, released on 2/22/2013.
Notes: EMC VNX Series: Introduction to SMB 3.0 Support on EMC Online Support
provides details about support for ODX with SMB 3.0.

VNXe
Supported version: VNXe OE version 2.4.0.20932 (MR4), released on 1/7/2013.
Notes: EMC VNX Series: Introduction to SMB 3.0 Support on EMC Online Support
provides details about support for ODX with SMB 3.0.

VMAX
Supported version: Enginuity version 5876.229.145 (Q2 2013 SR).
Notes: Enabled by default.
Using ODX for virtual machine deployments with SCVMM 2012 R2
Starting with SCVMM 2012 R2, you can use ODX when you deploy virtual machines
from templates. When using the network transfer type, SCVMM 2012 R2
automatically attempts to use ODX to perform the virtual machine deployments if ODX
is supported in the environment.

For ODX to be used, the library server, Hyper-V hosts, and clusters need an
appropriate run as account for their host management credentials. You can assign
the credential by specifying a run as account, which has permissions to the servers to
be added, while adding the server or cluster into SCVMM. The run as account is then
assigned to the host management credentials as shown in Figure 30.

Figure 30. Host management credentials

For clustered hosts previously added to SCVMM, the ability to change the host
management credential can be disabled within the SCVMM console. To change
the credential, run the following PowerShell commands:
$Cluster = Get-SCVMHostCluster -Name HyperVR2Clus.contoso.com
$RunAs = Get-SCRunAsAccount -Name dcadmin
Set-SCVMHostCluster -VMHostCluster $Cluster -VMHostManagementCredential $RunAs

When ODX is automatically invoked, the create virtual machine job performing the
deployment displays a step called Deploy file (using Fast File Copy) as shown in
Figure 31.

Figure 31. Create virtual machine with ODX

If ODX fails or is not used when you create a virtual machine, the deployment
continues and completes by reverting to a traditional host based copy. The job
displays a status of Completed w/ Info which notes the failure to use ODX. Figure 32
shows an example.

Figure 32. SCVMM 2012 R2 failure to invoke ODX

Windows Server 2012 thin provisioning space reclamation
Storage arrays such as VMAX and VNX support a pooling and storage allocation on-
demand functionality called virtual (or thin) provisioning. You can use thin
provisioning to allocate storage for a specific device, within a thin pool, when a server
writes data for the first time. Resources are more efficiently used by only allocating
storage on demand. Over time, the data written by the server can be deleted, but
the space allocated within the thin pool persists, leading to inefficient storage
utilization. Windows Server 2012 includes a new feature that allows the operating
system to request that previously written, but now deleted, data is reclaimed. This
reclaim functionality frees the allocated, but no longer required, space from within a
thin pool.

Windows Server 2012 supports detecting thinly provisioned storage and issuing T10
standard UNMAP or TRIM based reclaim commands against that storage. Windows
Server 2012 uses the UNMAP specification for reclaim operations against EMC
storage. The following EMC storage supports detection and reclamation with UNMAP:
• VNX—Support for thin awareness and reclamation is provided in VNX OE for
block version 05.32.000.5.201 released on 2/22/2013.
• Symmetrix VMAX—Support for thin awareness and reclamation is available
starting in the Enginuity 5876 Q4 2012 Service Release. We recommend using
the newest Enginuity release (Q2 2013 SR or higher) prior to using Windows
Server 2012 reclamation support.

If a LUN is detected as a thin provisioned drive in Windows Server 2012, by default
reclaim operations are performed under the following scenarios:
• When a volume residing on a thin provisioned drive is formatted with the quick
option, the entire size of the volume is reclaimed in real-time.
• When the Optimize option is selected for a volume as a part of a regularly
scheduled operation, or when manually selected from the Optimize Drives
interface shown in Figure 33. By default, drives are automatically optimized on
a weekly basis. CSVs cannot be optimized unless they are in redirected mode.
• When the Optimize-Volume PowerShell cmdlet is used with the ReTrim option (see
the example after this list).
• When a file or groups of files are deleted from a file system, Windows
automatically issues reclaim commands for the area of the file system that was
freed based on the file deletion. This is also true for CSV volumes, even if they
are not in redirected mode. This automated method of reclamation reduces the
need of running optimize operations; however to achieve full efficiency, an
optimize drive operation may still need to be run.
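
For example, you can manually trigger a reclaim pass against a volume on a thin LUN with
the Optimize-Volume cmdlet; the drive letter below is a placeholder.

# Issue UNMAP (retrim) requests for the free space on volume E:
Optimize-Volume -DriveLetter E -ReTrim -Verbose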

Windows Server 2012 supports reclaim operations against both NTFS and ReFS
formatted volumes. The new VHDX virtual disk format, native to Windows Server
2012, also supports reclaim operations from within a Hyper-V virtual machine to a
virtual disk. You can perform all reclaim operations supported on a physical LUN
within and against a VHDX based virtual disk or against a pass-through disk
presented to a Hyper-V based virtual machine.

Figure 33. Windows Server 2012 Optimize Drives dialog box

You can globally disable the default behavior of issuing reclaim operations on a
Windows 2012 server. Modify the disabledeletenotify parameter to prevent
reclaim operations from being issued against all volumes on the server. This setting
can be changed with the Fsutil command line tool included with Windows Server
2012.

To disable reclaim operations run the following from an elevated command prompt:
Fsutil behavior set DisableDeleteNotify 1

To query the reclamation setting:
Fsutil behavior query DisableDeleteNotify
If DisableDeleteNotify = 0, this is the default value and reclamation is enabled.

If DisableDeleteNotify = 1, space reclamation is disabled.

Lab testing has shown that both automatic reclamation and reclaim when running
optimize volume operations are disabled when DisableDeleteNotify = 1. No
reboot is required and the change takes effect immediately.

For Windows Server 2012 environments that use space reclamation, install the
Update that improves cloud service provider resiliency in Windows Server 2012 hotfix
package from http://support.microsoft.com/kb/2870270. The hotfix contains a fix to
help prevent file system hangs while reclaim operations are being performed.

EMC Replication Manager
EMC Replication Manager simplifies the management of storage replication,
integrates with critical business applications, and creates, mounts, and restores
point-in-time replicas of databases or file systems residing on supported storage
arrays. You can also use it to perform automatic discovery of changes to the storage
or application environment and delegate tasks to appropriate resources. Replication
Manager includes the following benefits:
• Automated management of point-in-time replicas on EMC Symmetrix, EMC
CLARiiON®, EMC VNX™, EMC Celerra®, and EMC VNXe™ storage.
• Application-consistent replication of Microsoft, Oracle, and UDB applications.
• Reduced or eliminated need for scripting solutions for replication tasks.
• A single management console and wizards to simplify replication tasks.
• Improved recovery and restore features, including application recovery.
• Integration with physical, VMware, Hyper-V, or IBM AIX VIO virtual environments.

Replicas can be stored on EMC Symmetrix TimeFinder® mirrors, clones, or snapshots;
CLARiiON clones or snapshots; VNX snapshots; Celerra SnapSure™ local snapshots;
or EMC Celerra Replicator™ remote snapshots. Replication Manager also supports
data using the RecoverPoint Appliance storage service. Use Replication Manager to
perform local and remote replications using TimeFinder, Open Replicator, EMC SRDF,
EMC SAN Copy™, EMC Navisphere®, EMC Celerra iSCSI, Celerra NFS, and/or replicas
of EMC MirrorView™/A or MirrorView/S secondaries using EMC SnapView™ snapshot
and SnapView clone replication technologies where they are appropriate.

You can install Replication Manager on Hyper-V virtual machines and perform
replications, mounts, and restores of devices residing on Symmetrix, CLARiiON, VNX,
and Celerra storage. Replication Manager requires either iSCSI or pass-through
storage to support Hyper-V environments. Replication Manager product and
administrator guides on EMC Online Support provide information about Hyper-V
support.

EMC Storage Integrator
EMC Storage Integrator (ESI) for Windows Suite is a set of tools that integrate
Microsoft Windows and Microsoft applications with EMC storage arrays. The suite
includes: ESI for Windows, ESI PowerShell Toolkit, ESI Service, ESI SCOM
Management Packs, ESI SCO Integration Pack, and the ESI Service PowerShell Toolkit.

You can use ESI for Windows to view, provision, and manage block and file storage
for Microsoft Windows environments. ESI supports the EMC Symmetrix VMAX, EMC
VNX, EMC VNXe and EMC CLARiiON CX4 series of storage arrays.

In addition to physical environments, ESI also supports storage provisioning and
discovery for Windows virtual machines that run on Microsoft Hyper-V, as well as
other types of hypervisors. For Hyper-V, ESI supports the creation of VHDs and pass-
through disks, and also supports the creation of host disks and cluster shared
volumes.

Figure 34. ESI MMC mapping of Hyper-V VMs

The ESI PowerShell Toolkit is a powerful option for discovering and managing
Windows environments, including Hyper-V. ESI includes over 150 PowerShell cmdlets
for discovering and managing virtual machines, servers and storage arrays. For
example, the following script uses ESI to take all Hyper-V registered hosts, discover
all host volumes and map them to the underlying storage LUN and pool (the script
output is shown in Figure 35).
$myobj = @()
# Discover every Hyper-V host registered with ESI
$hypervsystem = Get-EmcHyperVSystem
foreach ($system in $hypervsystem) {
    $volumes = $system | Get-EmcHostVolume
    foreach ($vol in $volumes) {
        # Map each host volume to its backing LUN and storage pool
        $lun  = $vol | Get-EmcLun
        $pool = $lun | Get-EmcStoragePool

        $myobjtemp = New-Object System.Object
        $myobjtemp | Add-Member -Name ComputerName -Type NoteProperty -Value $system.Name
        $myobjtemp | Add-Member -Name VolumePath -Type NoteProperty -Value $vol.MountPath
        $myobjtemp | Add-Member -Name VNXLunName -Type NoteProperty -Value $lun.Name
        $myobjtemp | Add-Member -Name VNXLunCapacity -Type NoteProperty -Value $lun.Capacity
        $myobjtemp | Add-Member -Name PoolName -Type NoteProperty -Value $pool.Name
        $myobjtemp | Add-Member -Name PoolTotal -Type NoteProperty -Value $pool.TotalCapacity
        $myobjtemp | Add-Member -Name PoolAvailable -Type NoteProperty -Value $pool.AvailableCapacity
        $myobj += $myobjtemp
    }
}
$myobj | Out-GridView

Figure 35. ESI Script Output

ESI can be downloaded from EMC Online Support. ESI release notes and online help
provide more information about ESI.

EMC Solutions Enabler
EMC Solutions Enabler is a prerequisite for many layered product offerings from EMC.
Installation of Solutions Enabler at the parent level is fully supported and provides
the necessary support for configurations such as Cluster Enabler, when run at the
Hyper-V server level. Deployments of Solutions Enabler within virtual machines that
are using iSCSI storage devices are also fully supported.

For gatekeeper access, Solutions Enabler also supports a virtual machine where
gatekeepers are presented over virtual Fibre Channel. When gatekeepers are
presented to a virtual machine over virtual Fibre Channel, no additional steps are
required. We recommend using virtual Fibre Channel instead of pass-through devices
when presenting gatekeeper devices to Hyper-V virtual machines.

Note: Solutions Enabler cannot function against storage devices that are VHD devices, even
when the VHD devices are located on EMC Symmetrix storage. The underlying LUN
configuration for a storage device that is used for VHD placement cannot be detected from
the child partition.

In certain cases, you must implement Solutions Enabler within a virtual machine that
is using pass-through storage devices presented through the Hyper-V server and
intended as gatekeepers. EMC supports installing Solutions Enabler within a child
virtual machine using pass-through storage devices only when the parent is running
Windows Server 2008 R2 or Windows Server 2012, and when the appropriate settings
for the virtual machine have been made.

EMC Solutions Enabler implements extended SCSI commands, which are, by default,
filtered by the parent where virtual disks or pass-through disks are used. A bypass of
this filtering is provided with Windows Server 2008 R2 and Windows Server 2012
Hyper-V, and this pass-through must be enabled to allow for appropriate discovery
options from the virtual machine. Planning for Disks and Storage, on Microsoft
TechNet, provides information about full pass-through of SCSI commands. We
recommend allowing SCSI command pass-through only for those virtual machines
where it is necessary.

To disable the filtering of SCSI commands, you can run the following PowerShell
script on a Hyper-V parent partition. In this example, the name of the affected virtual
machine is passed to the PowerShell script when it is executed.
# Name of the target virtual machine, passed as the first script argument
$Target = $args[0]
$VSManagementService = gwmi Msvm_VirtualSystemManagementService -Namespace "root\virtualization"
foreach ($Child in Get-WmiObject -Namespace root\virtualization Msvm_ComputerSystem -Filter "ElementName='$Target'")
{
    # Retrieve the virtual machine's global settings data
    $VMData = Get-WmiObject -Namespace root\virtualization -Query "Associators of {$Child} Where ResultClass=Msvm_VirtualSystemGlobalSettingData AssocClass=Msvm_ElementSettingData"
    # Allow the full SCSI command set and apply the modified settings
    $VMData.AllowFullSCSICommandSet = $true
    $VSManagementService.ModifyVirtualSystem($Child, $VMData.PSBase.GetText(1)) | out-null
}

Figure 36 provides an example of how to run this script. In the example, the script is
first displayed, and then the virtual machine named ManagementServer is provided
as the target for disabling SCSI filtering.

The script is provided as-is, and includes no validation or error checking functionality.

Figure 36. Example of disabling SCSI filtering in a Virtual Machine

You can also check the current value of the SCSI filtering. The following PowerShell
script reports on the current SCSI filtering status. You must provide the name of the
virtual machine target to be reported on.
$Target = $args[0]
foreach ($Child in Get-WmiObject -Namespace root\virtualization Msvm_ComputerSystem -Filter "ElementName='$Target'")
{
    $VMData = Get-WmiObject -Namespace root\virtualization -Query "Associators of {$Child} Where ResultClass=Msvm_VirtualSystemGlobalSettingData AssocClass=Msvm_ElementSettingData"
    Write-Host "Virtual Machine:" $VMData.ElementName
    Write-Host "Currently ByPassing SCSI Filtering:" $VMData.AllowFullSCSICommandSet
}

Once set, the value persists because it is recorded in the virtual machine
configuration. For the setting to take effect, you must restart the virtual machine
after the setting has been changed.

Conclusion
EMC storage arrays provide an extremely scalable storage solution that gives
customers industry-leading capabilities to deploy, maintain, and protect Windows
Hyper-V environments. EMC storage provides scale-out solutions for applications
such as Microsoft Windows Hyper-V, allowing flexible data protection options to meet
different performance, availability, functionality, and economic requirements.
Support for a wide range of service levels with a single storage infrastructure provides
a key building block for implementing Information Lifecycle Management (ILM) by
deploying a tiered storage strategy.

EMC technologies provide an easier and more reliable way to provision storage in
Microsoft Windows Hyper-V environments, while enabling transparent, non-disruptive
data mobility between storage tiers. Industry-leading multi-site protection through
the use of VPLEX or Cluster Enabler allows customers to implement a complete end-
to-end solution for virtual machine management and protection.
