
This article was originally published in NetApp’s Tech OnTap Newsletter.

To receive the newsletter monthly and enjoy other great benefits, sign up
today at www.netapp.com/us/communities/tech-ontap

Top Five Hyper-V Best Practices


By Chaffie McKenna

Chaffie McKenna
Reference Architect, NetApp

Chaffie joined NetApp in early 2008 as part of NetApp's Microsoft® Alliance Engineering team based in Seattle, WA. She focuses heavily on virtualization, especially Microsoft Hyper-V™ and SCVMM. Her experience with virtualization goes back 10 years, when virtualization was in its infancy.

Read the Latest on Hyper-V and NetApp

Chaffie McKenna blogs regularly about all things Hyper-V and other topics related to Microsoft environments on her Microsoft Environments blog (http://blogs.netapp.com/msenviro/). Recent posts include:

• Is Hyper-V data center ready?
• Networking best practices
• Provisioning best practices

Microsoft Hyper-V virtualization technology has been shipping for more than a year. Tech OnTap profiled the use of Hyper-V with NetApp® technology in several past articles, including an overview article (www.netapp.com/us/communities/tech-ontap/hyperv.html) and a detailed case study of one customer's experiences (www.netapp.com/us/communities/tech-ontap/avanade-hyperv.html).

NetApp has been involved with hundreds of Hyper-V deployments and has developed a detailed body of best practices for Hyper-V deployments on NetApp. Tech OnTap asked me to highlight the top five best practices for Hyper-V on NetApp, with special attention to the recently released Hyper-V Server 2008 R2:

• Network configuration
• Setting the correct iGroup and LUN protocol type
• Virtual machine disk alignment
• Using cluster shared volumes (CSVs)
• Getting the most from NetApp storage software and tools

You can find full details on these items and much more in NetApp Storage Best Practices for Microsoft Virtualization, which has been updated to include Hyper-V R2 (www.netapp.com/us/library/technical-reports/tr-3702.html).

BP #1: Network Configuration in Hyper-V Environments

There are two important best practices to mention when it comes to network configuration:

• Be sure to provide the right number of physical network adapters on Hyper-V servers.
• Take advantage of the new network features that Hyper-V R2 supports if at all possible.

Physical network adapters

Failure to configure enough network connections can make it appear as though you have a storage problem, particularly when using iSCSI. Smaller environments require a minimum of two or three network adapters, while larger environments require at least four or five. You may require far more. Here's why:

• Management. Microsoft recommends a dedicated network adapter for Hyper-V server management.
• Virtual machines. Virtual network configurations of the external type require a minimum of one network adapter.
• IP storage. Microsoft recommends that IP storage communication have a dedicated network, so one adapter is required and two or more are necessary to support multipathing.
Complete Hyper-V Best Practices

If you're deploying Hyper-V with NetApp storage, access to the latest best practices information is indispensable. Download the latest detailed guides:

• NetApp Storage Best Practices for Microsoft Virtualization (www.netapp.com/us/library/technical-reports/tr-3702.html)
• NetApp Implementation Guide for Microsoft Virtualization (www.netapp.com/us/library/technical-reports/tr-3733.html)

Table 1) Standalone Hyper-V servers.

                    Mgmt.  VMs     iSCSI  Cluster  Migration  CSV  Total
Non-prod.   DAS     1      1       n/a    n/a      n/a        n/a  2
            FC      1      1       n/a    n/a      n/a        n/a  2
            iSCSI   1      1       1      n/a      n/a        n/a  3
Production  DAS     1      1 or 2  n/a    n/a      n/a        n/a  2 or 3
            FC      1      1 or 2  n/a    n/a      n/a        n/a  2 or 3
            iSCSI   1      1 or 2  2      n/a      n/a        n/a  4 or 5

Table 2) Clustered Hyper-V servers.

                    Mgmt.  VMs     iSCSI  Cluster  Migration  CSV  Total
Non-prod.   FC      1      1       n/a    1        n/a        n/a  3
            iSCSI   1      1       1      1        n/a        n/a  4
Production  FC      1      1 or 2  n/a    1        n/a        n/a  3 or 4
            iSCSI   1      1 or 2  2      1        n/a        n/a  5 or 6
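The adapter counts in Tables 1 through 4 follow a simple additive pattern. As a rough sketch only (the function name and its boolean knobs are illustrative, not part of any Microsoft or NetApp tool), the minimum counts can be expressed as:

```python
# Minimum physical NIC counts for a Hyper-V server, following the
# additive pattern of Tables 1 through 4. Illustrative sketch only.

def min_physical_nics(production: bool, protocol: str, clustered: bool = False,
                      live_migration: bool = False, csv: bool = False) -> int:
    count = 1                      # management: one dedicated adapter
    count += 1                     # virtual machines (production may want 2)
    if protocol == "iscsi":        # IP storage: 2 in production for multipathing
        count += 2 if production else 1
    if clustered:
        count += 1                 # private failover-cluster network
    if live_migration:
        count += 1                 # dedicated live-migration network
    if csv:
        count += 1                 # dedicated CSV network
    return count

# Standalone non-production server using iSCSI -> 3 (matches Table 1)
print(min_physical_nics(production=False, protocol="iscsi"))
# Clustered production server, iSCSI, live migration, and CSV -> 7
# (the low end of the "7 or 8" range in Table 4)
print(min_physical_nics(production=True, protocol="iscsi",
                        clustered=True, live_migration=True, csv=True))
```

The returned value is the low end of each table's range; production environments that dedicate a second VM adapter land at the high end.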

• Windows® failover cluster. Windows failover cluster requires a private network.
• Live migration. This new Hyper-V R2 feature supports the migration of running virtual machines between Hyper-V servers. Microsoft recommends configuring a dedicated physical network adapter for live migration traffic.
• Cluster shared volumes. Microsoft recommends a dedicated network to support the communications traffic created by this new Hyper-V R2 feature.

Tables 1 through 4 will help you choose the right number of physical adapters.

New network features

Windows Server® 2008 R2 supports a number of new networking features. NetApp recommends configuring these features on your Hyper-V servers and taking advantage of them whenever possible. Be aware that some or all of them may not be supported by your server and network hardware. (See the box on new networking features at the end of this article for details.)

BP #2: Selecting the Correct iGroup and LUN Protocol Type

When provisioning a NetApp LUN for use with Hyper-V, you must specify the correct initiator groups (iGroups) and the correct LUN type. Incorrect settings can make deployment difficult and hurt performance.

Initiator groups

FCP and iSCSI storage must be masked so that the appropriate Hyper-V servers and virtual machines (VMs) can connect to it. With NetApp storage, LUN masking is handled by iGroups.

• When dealing with individual Hyper-V servers or VMs, you should create an iGroup for each system and for each protocol (FC and iSCSI) that the system uses to connect to the NetApp storage system.
• When dealing with a cluster of Hyper-V servers or VMs, you should create an individual iGroup for each protocol that the cluster of systems uses to connect to the NetApp storage system.

It's easier to manage iGroups by using NetApp SnapDrive®. SnapDrive cuts down on the confusion because it knows which OS you are using and automatically configures that setting for your iGroups.

LUN types

The LUN Protocol Type setting determines the on-disk layout of the LUN. It is important to specify the correct LUN type to make sure that the LUN aligns properly with the file system it contains. (See the following tip for an explanation.) This issue is not unique to NetApp storage; any storage vendor or host platform may exhibit this problem.

Tip: The LUN type you specify depends on your OS, OS version, disk type, and Data ONTAP® version. For complete information on LUN types for different operating systems, refer to the Block Access Management Guide for your version of Data ONTAP.

Tables 5 and 6 will help you choose the correct LUN type.

BP #3: Virtual Machine Disk Alignment

Tip: This tip is closely tied to the previous one, since failure to follow the previous tip will result in misalignment. The problem of virtual machine disk alignment is not unique to Hyper-V, nor is it unique to NetApp storage. This problem exists in any virtual environment on any storage platform.
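The iGroup layout described above (one iGroup per system per protocol for standalone hosts, one iGroup per protocol for a cluster) can be sketched as a small naming helper. The host names and the `host_protocol` naming convention are hypothetical:

```python
# Sketch of the iGroup layout for standalone vs. clustered Hyper-V hosts.
# Naming convention (host_protocol / cluster_protocol) is hypothetical.

def igroup_names(hosts, protocols=("fcp", "iscsi"), cluster_name=None):
    if cluster_name:
        # Clustered: one iGroup per protocol holds every node's initiators.
        return [f"{cluster_name}_{proto}" for proto in protocols]
    # Standalone: one iGroup per host and per protocol.
    return [f"{host}_{proto}" for host in hosts for proto in protocols]

print(igroup_names(["hv1", "hv2"]))
# ['hv1_fcp', 'hv1_iscsi', 'hv2_fcp', 'hv2_iscsi']
print(igroup_names(["hv1", "hv2"], cluster_name="hvclus"))
# ['hvclus_fcp', 'hvclus_iscsi']
```

In practice SnapDrive creates and maintains these iGroups for you, which is why the article recommends it.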
Table 3) Clustered Hyper-V servers using live migration.

                    Mgmt.  VMs     iSCSI  Cluster  Migration  CSV  Total
Non-prod.   FC      1      1       n/a    1        1          n/a  4
            iSCSI   1      1       1      1        1          n/a  5
Production  FC      1      1 or 2  n/a    1        1          n/a  4 or 5
            iSCSI   1      1 or 2  2      1        1          n/a  6 or 7

Table 4) Clustered Hyper-V servers using live migration and CSV.

                    Mgmt.  VMs     iSCSI  Cluster  Migration  CSV  Total
Non-prod.   FC      1      1       n/a    1        1          1    5
            iSCSI   1      1       1      1        1          1    6
Production  FC      1      1 or 2  n/a    1        1          1    5 or 6
            iSCSI   1      1 or 2  2      1        1          1    7 or 8

Figure 1) Virtual disk misalignment. [Diagram: the MBR plus unused space offsets the host/guest file system's blocks 0 through 3 from the blocks of the underlying LUN, so each guest block straddles two LUN blocks.]
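The arithmetic behind the misalignment in Figure 1 is easy to check. Assuming 512-byte sectors and an illustrative 4 KiB storage block size, starting a partition at sector 63 puts it at a byte offset that is not a multiple of the block size, so every block-sized guest read touches two underlying blocks:

```python
# Why sector 63 misaligns: 63 * 512 = 32,256 bytes, which is not a
# multiple of the 4 KiB block size assumed here for illustration.

SECTOR = 512
BLOCK = 4096  # illustrative storage block size

def is_aligned(start_sector: int, block: int = BLOCK) -> bool:
    return (start_sector * SECTOR) % block == 0

print(is_aligned(63))    # False: legacy default, misaligned
print(is_aligned(2048))  # True: 1 MiB offset, aligned

def blocks_touched(start_sector: int, read_size: int = BLOCK,
                   block: int = BLOCK) -> int:
    """How many storage blocks one block-sized guest read spans."""
    start = start_sector * SECTOR
    first, last = start // block, (start + read_size - 1) // block
    return last - first + 1

print(blocks_touched(63))    # 2: each guest block costs two physical reads
print(blocks_touched(2048))  # 1: aligned reads map one-to-one
```

The same check applies at every layer: the partition inside the VHD, the VHD inside the Hyper-V server's file system, and the LUN itself all need to line up.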

This problem occurs because, by default, many guest operating systems, including Windows 2000 and 2003 and various Linux® distributions, start the first primary partition at sector (logical block) 63. This behavior leads to misaligned file systems because the partition does not begin at a block boundary. As a result, every time the virtual machine wants to read a block, two blocks have to be read from the underlying LUN, doubling the I/O burden (see Figure 1 above).

The situation becomes even more complicated when virtual machines are managed as files within the Hyper-V server's file system, because that introduces another layer that must be properly aligned. This is why selecting the LUN type is so critical.

• NetApp strongly recommends correcting the offset for all VM templates, as well as any existing VMs that are misaligned and are experiencing an I/O performance issue. (Misaligned VMs with low I/O requirements may not benefit from the effort to correct the misalignment.)
• When using virtual hard disks (VHDs), NetApp recommends using fixed-size VHDs in your Microsoft Hyper-V virtual environment wherever possible, especially in production environments, because proper file system alignment can be reliably achieved only on fixed-size VHDs. Avoid the use of dynamically expanding and differencing VHDs where possible, because file system alignment can never be reliably achieved with these VHD types.

The best practices guide provides complete procedures for identifying and correcting alignment problems (www.netapp.com/us/library/technical-reports/tr-3702.html).

BP #4: Using Cluster Shared Volumes

Cluster shared volumes are a completely new feature in Hyper-V R2. If you're familiar with VMware®, you can think of a CSV as being somewhat akin to VMFS (although there are significant differences).

A CSV is a "disk" that is connected to the Hyper-V parent partition and shared between multiple Hyper-V server nodes configured as part of a Windows failover cluster. A CSV can be created only from shared storage, such as a LUN provisioned on a NetApp storage system. All Hyper-V server nodes in the failover cluster must be connected to the shared storage system.

CSVs have many advantages, including:

• Shared namespace. CSVs do not need to be assigned a drive letter, reducing restrictions and eliminating the need to manage GUIDs and mount points.
• Simplified storage management. More VMs share fewer LUNs.
• Storage efficiency. Pooling VMs on the same LUN simplifies capacity planning and reduces the amount of space reserved for future growth, because it is no longer set aside on a per-VM basis.

CSV Dynamic I/O Redirection allows storage and network I/O to be redirected within a failover cluster if a primary pathway is interrupted. The following recommendations apply specifically to the use of CSVs and are intended to minimize the impact of I/O redirection:

• In addition to the NICs installed in the Hyper-V server for management, VMs, IP storage, and more (see Best Practice #1), NetApp recommends that you dedicate a physical network adapter to CSV traffic only. The physical network adapter should be a gigabit Ethernet (GbE) adapter at a minimum. If you are running large servers (16+ logical CPUs, 64GB+ memory), planning to use CSVs extensively, planning to dynamically balance VMs across the cluster by using SCVMM, and/or planning to use live migration extensively, you should consider 10 Gigabit Ethernet for CSV traffic.
• NetApp strongly recommends that you configure MPIO on all Hyper-V cluster nodes, to minimize the opportunity
Table 5) LUN types for use with Data ONTAP 7.3.1 and later.

Data ONTAP 7.3.1 or Later     Windows Server 2008     Windows Server 2008   Windows Server 2008
                              Physical Server         Hyper-V Server        Hyper-V Server
                              without Hyper-V         Physical Disk         Pass-through Disk
With SnapDrive Installed      windows_gpt             hyper_v               LUN type of child OS
Without SnapDrive Installed   windows_2008            windows_2008          LUN type of child OS

Table 6) LUN types for use with Data ONTAP 7.2.5 through 7.3.0.

Data ONTAP 7.2.5              Windows Server 2008     Windows Server 2008   Windows Server 2008
through 7.3.0                 Physical Server         Hyper-V Server        Hyper-V Server
                              without Hyper-V         Physical Disk         Pass-through Disk
With SnapDrive Installed      windows_gpt             windows_gpt           LUN type of child OS
Without SnapDrive Installed   windows_2008            windows_2008          LUN type of child OS
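Tables 5 and 6 can be condensed into a simple lookup keyed on the Data ONTAP branch, whether SnapDrive is installed, and the server role. The key strings below are illustrative shorthand; the LUN type values are taken directly from the tables, and pass-through disks always use the LUN type of the child OS:

```python
# Tables 5 and 6 as a lookup. Keys: (ONTAP branch, SnapDrive installed,
# role). Branch/role strings are illustrative shorthand.

LUN_TYPE = {
    # Data ONTAP 7.3.1 or later
    ("7.3.1+", True,  "physical"): "windows_gpt",
    ("7.3.1+", True,  "hyperv"):   "hyper_v",
    ("7.3.1+", False, "physical"): "windows_2008",
    ("7.3.1+", False, "hyperv"):   "windows_2008",
    # Data ONTAP 7.2.5 through 7.3.0
    ("7.2.5-7.3.0", True,  "physical"): "windows_gpt",
    ("7.2.5-7.3.0", True,  "hyperv"):   "windows_gpt",
    ("7.2.5-7.3.0", False, "physical"): "windows_2008",
    ("7.2.5-7.3.0", False, "hyperv"):   "windows_2008",
}

print(LUN_TYPE[("7.3.1+", True, "hyperv")])       # hyper_v
print(LUN_TYPE[("7.2.5-7.3.0", True, "hyperv")])  # windows_gpt
```

Note that the `hyper_v` LUN type exists only on Data ONTAP 7.3.1 and later with SnapDrive installed, which is why the two tables differ in exactly one cell.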

for CSV I/O redirection to occur. CSV I/O Redirection is not a substitute for multipathing or for proper planning of storage layout and networking, which will minimize single points of failure in production environments.
• Once you recognize that I/O redirection is occurring on a CSV, you may want to live migrate all affected VMs on the affected cluster node to another Hyper-V cluster node to restore optimal performance until any I/O pathway problems are diagnosed and repaired.

The best practices guide describes additional best practices that pertain specifically to backup and VM provisioning with CSVs.

BP #5: NetApp Storage Software and Tools

NetApp provides a variety of storage software and tools that can simplify operations in a Hyper-V environment. With the release of Hyper-V R2, minimum requirements have changed for many software elements:

• As a minimum, NetApp recommends using Data ONTAP 7.3 or later with Hyper-V virtual environments.
• The Windows Host Utilities Kit modifies system settings so that the Hyper-V parent or child OS operates with the highest reliability possible when connected to NetApp storage. NetApp strongly recommends that the Windows Host Utilities Kit be installed on all Hyper-V servers. Windows Server 2008 requires Windows Host Utilities Kit 5.1 or later. Windows Server 2008 R2 (Hyper-V R2) requires Windows Host Utilities Kit 5.2 or later.
• Highly available storage configurations require the appropriate version of the Data ONTAP DSM for Windows MPIO. Windows Server 2008 requires Data ONTAP DSM 3.2R1 or later. Windows Server 2008 R2 requires Data ONTAP DSM 3.3.1 or later. You should set the least queue depth policy when using MPIO. (This is the default setting.)
• NetApp recommends NetApp SnapDrive on all Hyper-V and SCVMM servers to enable maximum functionality and support of key features. For Microsoft Windows Server 2008 installations where the Hyper-V role is enabled and for Microsoft Hyper-V Server 2008, install NetApp SnapDrive for Windows 6.0 or later. For Microsoft Windows Server 2008 R2 installations where the Hyper-V role is enabled and for Microsoft Hyper-V Server 2008 R2:
  -- To support existing features (no new R2 features), install NetApp SnapDrive for Windows 6.1P2 or later.
  -- To support new features (all new R2 features), install NetApp SnapDrive for Windows 6.2 or later.
• NetApp SnapDrive for Windows 6.0 or later can also be installed in supported child operating systems, which include Microsoft Windows Server 2003, Microsoft Windows Server 2008, and Microsoft Windows Server 2008 R2.

For the latest information on supported software versions, refer to the NetApp Interoperability Matrix (http://now.netapp.com/matrix/mtx/login.do). You must have a NOW™ (NetApp on the Web) account to access this resource.

Conclusion

If you pay attention to the best practices I've outlined here, you can avoid most of the pitfalls of configuring your Hyper-V environment. For complete details on these procedures and much more, refer to the Hyper-V best practices guide (www.netapp.com/us/library/technical-reports/tr-3702.html) and the Hyper-V implementation guide (www.netapp.com/us/library/technical-reports/tr-3733.html).
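The version minimums in BP #5 are easy to get wrong across a fleet, so here is the same information recast as data with a small checker. This is a sketch: the component names are mine, and versions like 3.2R1 and 6.1P2 are simplified to plain numeric tuples, which is not NetApp's own versioning scheme.

```python
# BP #5 software minimums as data. Version tuples are a simplified stand-in
# for NetApp's real version strings (e.g. "3.2R1" -> (3, 2)).

MINIMUMS = {
    "ws2008": {
        "host_utilities": (5, 1),    # Windows Host Utilities Kit 5.1+
        "ontap_dsm":      (3, 2),    # Data ONTAP DSM 3.2R1+
        "snapdrive":      (6, 0),    # SnapDrive for Windows 6.0+
    },
    "ws2008r2": {
        "host_utilities": (5, 2),    # Kit 5.2+
        "ontap_dsm":      (3, 3, 1), # DSM 3.3.1+
        "snapdrive":      (6, 2),    # 6.2+ for all new R2 features
    },
}

def meets_minimums(os_key, installed):
    """Return the components that fall below the documented minimum."""
    return [name for name, minimum in MINIMUMS[os_key].items()
            if tuple(installed.get(name, ())) < minimum]

print(meets_minimums("ws2008r2", {"host_utilities": (5, 2),
                                  "ontap_dsm": (3, 3, 1),
                                  "snapdrive": (6, 1)}))
# ['snapdrive']  -> SnapDrive needs 6.2+ for the new R2 features
```

The NetApp Interoperability Matrix remains the authoritative source; a table like this only catches the obvious cases.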
New Windows Server 2008 R2 Networking Features
Windows Server 2008 R2 adds important new capabilities that you should use in your
Hyper-V environment if your servers and network hardware support them:

Large Send Offload (LSO) and Checksum Offload (CSO)


LSO and CSO are supported by the virtual networks in Hyper-V. In addition, if your
physical network adapters support these capabilities, the virtual traffic is offloaded to
the physical network as necessary. Most network adapters support LSO and CSO.

Jumbo frames
With Windows Server 2008 R2, jumbo frame support has been enhanced so that each packet can carry up to 6 times the standard payload. This makes a huge difference in overall throughput and
reduces CPU utilization for large file transfers. Jumbo frames are supported on physical
networks and virtual networks, including switches and adapters. For physical networks,
all intervening network hardware (switches and so on) must have jumbo frame support
enabled as well.
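The "6 times the payload" figure corresponds to a 9,000-byte jumbo MTU versus the standard 1,500-byte MTU. A quick sketch of the effect on frame count (headers are ignored for simplicity):

```python
# Rough effect of jumbo frames: a 9,000-byte MTU carries six times the
# payload of a standard 1,500-byte MTU, so a large transfer needs roughly
# one-sixth as many frames. Headers ignored for simplicity.
import math

def frames_needed(bytes_to_send: int, mtu: int) -> int:
    return math.ceil(bytes_to_send / mtu)

gib = 1 << 30  # one GiB
print(frames_needed(gib, 1500))  # 715,828 frames at the standard MTU
print(frames_needed(gib, 9000))  # 119,305 frames with jumbo frames
print(9000 // 1500)              # 6x payload per packet
```

Fewer frames means fewer per-packet interrupts and checksums, which is where the CPU savings for large file transfers comes from.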

TCP chimney
This allows virtual NICs in child partitions to offload TCP connections to physical
adapters that support it, reducing CPU utilization and other overhead.

Virtual machine queue


VMQ improves network throughput by distributing network traffic for multiple VMs
across multiple processors, while reducing processor utilization by offloading packet
classification to the hardware and avoiding both network data copy and route lookup
on transmit paths. VMQ is compatible with most other task offloads, and therefore it
can coexist with large send offload and jumbo frames. However, where TCP chimney
is supported by the NIC, VMQ takes precedence.
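The precedence note above can be summarized as a tiny selection rule. The capability names below are illustrative labels, not Windows API identifiers:

```python
# Sketch of the offload precedence described above: VMQ coexists with LSO,
# CSO, and jumbo frames, but where the NIC also supports TCP chimney,
# VMQ takes precedence. Capability names are illustrative.

def active_offloads(nic_caps: set) -> set:
    active = nic_caps & {"lso", "cso", "jumbo", "vmq", "tcp_chimney"}
    if "vmq" in active and "tcp_chimney" in active:
        active.discard("tcp_chimney")  # VMQ wins over TCP chimney
    return active

print(sorted(active_offloads({"lso", "jumbo", "vmq", "tcp_chimney"})))
# ['jumbo', 'lso', 'vmq']
```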


NetApp creates innovative storage and data management solutions that accelerate business breakthroughs and deliver outstanding cost efficiency. Discover our passion for helping companies around the world go further, faster at www.netapp.com.

© 2009 NetApp. All rights reserved. Specifications are subject to change without notice. NetApp, the NetApp logo, Go further, faster, Data ONTAP, NOW, and SnapDrive are trademarks or registered trademarks of NetApp in the U.S. and other countries. Linux is a registered trademark of Linus Torvalds. Microsoft, Hyper-V, Windows, and Windows Server are trademarks or registered trademarks of Microsoft Corporation. VMware is a registered trademark of VMware, Inc. All other brands or products are trademarks or registered trademarks of their respective holders and should be treated as such.