2012 Redmond Guide To Hyper-V
Microsoft Hyper-V
By Paul Schnackenburg
Part 1: Processors
Part 2: Memory, Storage, Networking
Part 3: 8 Tips and Tricks
Part 4: Monitoring Hyper-V the Right Way
WHITE PAPER
Hyper-V on Hyper-Drive
Part 1: Processors
In this first of a series on Hyper-V, Paul reviews tips for configuring
virtual and physical processors for optimum performance.
Figure 1. Easily pinpoint the VP to LP ratio on your Hyper-V hosts with this simple cmdlet.
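The cmdlet in Figure 1 isn't reproduced here, so the following is a hypothetical sketch of how you might compute the VP:LP ratio yourself. Note that Get-VM and Get-VMProcessor are from the Hyper-V PowerShell module that ships with Windows Server 2012; on 2008 R2 you'd query the root\virtualization WMI namespace instead.

```powershell
# Hypothetical sketch: total virtual processors vs. logical processors on a host
$vps = (Get-VM | Get-VMProcessor | Measure-Object -Sum Count).Sum
$lps = (Get-WmiObject Win32_Processor |
        Measure-Object -Sum NumberOfLogicalProcessors).Sum
"{0} VPs on {1} LPs - ratio {2:N1}:1" -f $vps, $lps, ($vps / $lps)
```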
Figure 2. Assigning Virtual Processors to a VM is easy; just pick from the list.
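The same assignment can also be scripted. As a hedge: Set-VMProcessor ships with the Hyper-V module in Windows Server 2012, and the VM name below is a placeholder, so treat this as a sketch rather than the tool shown in the figure.

```powershell
# Hypothetical sketch: give a VM four virtual processors
Set-VMProcessor -VMName "Web01" -Count 4
```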
Hyper-V on Hyper-Drive
Part 2: Memory, Storage, Networking
Now that we know how Hyper-V can take advantage of processors, it's a good
time to look at how we can take advantage of memory, networking and disk
resources without breaking the budget.
Memory
Before Service Pack 1 was released for Windows Server 2008
R2, assigning memory to VMs and architecting host machines
was difficult because only a fixed amount could be assigned
to each VM whether it needed it or not. Because Dynamic
Memory is such a game changer in the Hyper-V world, make
sure all your hosts are running Windows Server 2008 R2 SP1.
Figure 1. Assigning the right amount of memory to running virtual servers is a whole
lot easier with Dynamic Memory.
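For scripted environments, Dynamic Memory can be configured per VM. A hedge: Set-VMMemory ships with the Hyper-V module in Windows Server 2012; on 2008 R2 SP1 the same settings live on the VM's memory page in Hyper-V Manager, and the VM name and sizes below are placeholders.

```powershell
# Hypothetical sketch: switch a VM from static to Dynamic Memory
Set-VMMemory -VMName "FileServer01" -DynamicMemoryEnabled $true `
    -StartupBytes 1GB -MinimumBytes 512MB -MaximumBytes 4GB -Buffer 20
```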
Storage
Storage is always a tricky part of server design, and no less so in the virtual world. Ensuring that applications and VMs can access the required amount of IOPS (Input/Output Operations per Second) is crucial, and virtualization has made it even harder. In the old world, we could assess this need on a per-server basis, but now we may have many servers with different IOPS profiles running on the same physical host.
Some applications have specific storage enhancements (Exchange Server 2010, for instance, has several tricks to optimize the performance of the underlying disk subsystem for sequential IO, as does SQL Server). All of these optimizations are lost when you move to virtualized disks, as the disk the VM sees is actually just a large file on a drive or a SAN. There are a few ways to compensate for this; one is to use pass-through disks (raw disks in the VMware world), where the VM has full access to a physical disk. The drawback is that
there's no way of backing up the disk from outside of the VM. The other option is to decrease the latency and increase the speed of the disks, which generally means a more expensive SAN with more spindles and/or SSD disks. The latter have excellent performance for random read IO, making them eminently suited for storing VHD files, but of course their cost per gigabyte is high.

There are two types of VHD disks that you can attach to a VM, fixed size or dynamic. The former means that a 100 GB VHD is created as a 100 GB file initially; the latter starts off as a small file (while still appearing as a 100 GB drive to the VM) but grows as data is added. The benefit of the latter is better utilization of your storage hardware, as only the actual used storage is consumed, but you have to be careful that you don't oversubscribe the underlying storage and run out of space as virtual disks grow. The golden rule used to be that fixed disks gave better IO performance, but the gap is closing and in Hyper-V 2008 R2 the difference is minimal. For a more in-depth exploration, see this white paper from Microsoft; the relevant section starts at page 25. Be aware that some workloads aren't supported on dynamically expanding disks, such as Exchange.

Networking
To achieve a well-performing Hyper-V platform, don't forget the networking subsystem. If you have five, ten or more VMs on a host, don't expect them to fit all their connectivity needs through a couple of Gigabit NICs. As always, it pays to know your workloads. If you're going to virtualize busy file servers, make sure to allow enough virtual network cards for the task. Hyper-V supports up to eight synthetic network interfaces in each VM (along with four emulated NICs, but these are not recommended for performance). 10 Gb Ethernet is starting to become affordable and is a great way of increasing bandwidth.

Figure 2. Creating virtual networks is easy in Hyper-V, but we'll have to wait for Hyper-V in Server 8 for true virtual network switch functionality.

NIC teaming is another area where it pays to tread carefully. Officially, Microsoft doesn't support NIC teaming, but some of the vendors/OEMs do. Check with the manufacturer of your NIC to find out if they do support it.

If you're going to use iSCSI for your storage, make sure to allow network cards for this connectivity as well; use Jumbo Frames and disable File Sharing and DNS services on these NICs. Ensure that your NICs support performance-enhancing features that are supported in Windows Server 2008 R2, such as TCP Chimney Offload and Virtual Machine Queues (VMQ). When using TCP Chimney Offload, you have to enable it both in the OS and in the properties of the driver for each NIC.

Next time, we'll look at some tricks for improving the performance of your VMs.
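Before moving on: the OS half of the TCP Chimney Offload setting can be toggled with netsh on Windows Server 2008 R2; the driver half still has to be enabled in each NIC's advanced properties.

```
rem Enable TCP Chimney Offload at the OS level, then confirm the change
netsh int tcp set global chimney=enabled
netsh int tcp show global
```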
Hyper-V on Hyper-Drive
Part 3: 8 Tips and Tricks
Some well-known (and some more obscure) tips and tricks for enhancing Hyper-V.
Integration components
First and foremost, make sure that the latest version of the
Integration Components (IC) is loaded in every VM; System
Center Virtual Machine Manager will warn you when they're
out of date in a VM. This is the most important step for
improving VM performance. If you're unsure whether they're
installed, simply check under System Devices in Device
Manager in the VM. The presence of Virtual Machine Bus
indicates that the IC are installed.
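If you'd rather check from a script than from the UI, a hedged sketch using WMI inside the guest (the exact device name can vary between IC versions):

```powershell
# Sketch: look for the Virtual Machine Bus device from inside a guest VM
Get-WmiObject Win32_PnPEntity |
    Where-Object { $_.Name -match 'Virtual Machine Bus' } |
    Select-Object Name, Status
```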
Hyper-V Manager
If you're running the full GUI version of Windows on the host,
close Hyper-V Manager (see Fig. 1) when it's not being used,
as thumbnails of VM screens cost resources in both host and
guest, while monitoring of performance statistics causes WMI
activity in the parent partition. Another tip is to use Remote
Desktop sessions to connect to VMs instead of Virtual
Machine Connection (which is what's used when you
connect through Hyper-V Manager).
Host OS
In a lab environment, running Hyper-V on the full GUI
version of Windows Server 2008 R2 SP1 certainly simplifies
configuration and management and is an acceptable trade-off.
In production, however, Server Core or the free Hyper-V
Server are better choices, as they come with less overhead
(about 80 MB less commit charge). They also come with the
Novell benefit: that is, it's far less likely that someone will
muck around with them, as they're command-line only.
Guest OS
The rule here is simple: The newer the OS, the happier it is to
be virtualized. So, Windows Server 2008 R2 SP1 and Windows
7 are your best candidates. This is true even if youre trying to
squeeze an extra VM or two onto a crowded host. In the
physical world we generally look to older OSes as requiring
fewer resources, but in the virtual world the opposite applies.
Services
Limit the services that are running in the parent partition, not
just to give as much of the host's resources to your VMs as
possible, but also to maintain a supported configuration.
Microsoft's words are clear on this point: you can only run
management, backup and (if absolutely necessary)
antimalware agents in the parent partition, and nothing else.
Snapshots
Figure 1. Don't leave Hyper-V Manager running when you're not using it.
Network configuration
VMs can either use synthetic virtual NICs or legacy network
adapters. The latter is required if you need your VMs to be
able to PXE boot or if the guest OS doesn't support the
Integration Components. In all other cases, make sure
you use the synthetic NIC.
Hyper-V on Hyper-Drive
Part 4: Monitoring Hyper-V
The Right Way
It's time to put what you've learned into practice and then
make sure that Hyper-V is running at hyperspeed.
Figure 1. Really spend some time with Performance Monitor, getting to know the
different objects and counters and how your fabric and VMs are actually performing.
Committed Memory
While dynamic memory makes memory management a bit
fluid, do keep an eye on \Memory\Available MBytes for the
host. As long as there's 10 percent free you should be all right,
but when it goes under 10 percent free it's a warning sign. At
less than 100 MB it's definitely time to investigate.
Dynamic memory also brings a new set of counters. The
most important is the \Hyper-V Dynamic Memory Balancer\
Average Pressure counter, where healthy is less than 80. A
value between 80 and 100 deserves attention, while over 100
indicates a critical condition.
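Both counters above can be sampled with Get-Counter, which ships with PowerShell 2.0 on Server 2008 R2 (counter instance names may differ per host, so treat this as a sketch):

```powershell
# Sample the host memory and Dynamic Memory pressure counters once
Get-Counter '\Memory\Available MBytes',
            '\Hyper-V Dynamic Memory Balancer(*)\Average Pressure' |
    Select-Object -ExpandProperty CounterSamples |
    Format-Table Path, CookedValue -AutoSize
```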
Disk Latency
Keep an eye on your disks with the \LogicalDisk(*)\Avg. Disk sec/Read and \LogicalDisk(*)\Avg. Disk sec/Write counters, which indicate disk latency. A good rule of thumb is that OK is less than 10ms (0.010); 15ms or above (0.015) is a warning; at 25ms or above (0.025) the situation is critical.
Network monitoring
To monitor network usage, the counter \Network Interface(*)\Output Queue Length is your friend. Less than 1 on average is healthy; it's a warning when it's above 1 on average; it's critical when it's 2 or more on average.
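As a sketch, the disk-latency and queue-length rules of thumb above can be evaluated in one pass with Get-Counter (the threshold logic below is a direct translation of those rules, not a monitoring product):

```powershell
# Classify each sample against the rules of thumb above
$samples = (Get-Counter '\LogicalDisk(*)\Avg. Disk sec/Read',
                        '\Network Interface(*)\Output Queue Length').CounterSamples
foreach ($s in $samples) {
    $state = if ($s.Path -match 'logicaldisk') {
        if     ($s.CookedValue -ge 0.025) { 'critical' }
        elseif ($s.CookedValue -ge 0.015) { 'warning'  }
        else                              { 'OK'       }
    } else {
        if     ($s.CookedValue -ge 2) { 'critical' }
        elseif ($s.CookedValue -gt 1) { 'warning'  }
        else                          { 'OK'       }
    }
    "{0} = {1:N3} ({2})" -f $s.Path, $s.CookedValue, $state
}
```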
Conclusion
Figure 2. Set your buffer based on the file cache you expect the server to use.