

1. Multitenant security and isolation: Server virtualization provides a fully isolated network layer for the datacenter through programmatically managed and extensible capabilities. This enables virtual machines to connect to the network with policy enforcement for security and isolation, provides the flexibility to restrict access to a virtual machine on any node while keeping network and storage traffic isolated, and enhances the isolation of customers' networks from one another. In Windows Server 2008 R2, server virtualization provides isolation between virtual machines; however, the network layer of the datacenter is not fully isolated, and Layer 2 connectivity is implied between different workloads running over the same infrastructure.
2. Router guard: Router guard drops router advertisement and redirection messages from unauthorized virtual machines that are acting as routers.
3. The Hyper-V Extensible Switch is a Layer 2 virtual network switch that provides programmatically managed and extensible capabilities to connect virtual machines to the physical network. It is an open platform that lets vendors provide extensions written to standard Windows application programming interface (API) frameworks. Multiple monitoring and filtering extensions can be supported at the ingress and egress portions of the switch, which provides traffic visibility at different layers and enables statistical traffic data to be gathered.
4. Integration Services: The Integration Services are a set of drivers that allow a virtual machine to communicate directly with the hypervisor. For those who come from a VMware background, the Integration Services are comparable to the VMware Tools. Virtual machines can function without the Integration Services, but they will perform much better if the Integration Services are installed.

Virtual machines that do not use the Integration Services must rely on emulated hardware, which degrades the virtual machine's overall performance. Unfortunately, the Integration Services are only compatible with virtual machines that are running Windows operating systems.
5. Hyper-V replication: Starting in Windows Server 2012 R2, the frequency of replication, which previously was a fixed value, is configurable. You can set this value on the Configure Replication Frequency page of the Enable Replication wizard, or in the Replication section for the virtual machine in Hyper-V Manager. You can configure the replication frequency so that changes are sent every 30 seconds, every 5 minutes, or every 15 minutes.
6. Hyper-V Replica provides asynchronous replication of Hyper-V virtual machines between two hosting servers. It is simple to configure and requires neither shared storage nor any particular storage hardware. Any server workload that can be virtualized in Hyper-V can be replicated. Replication works over any ordinary IP-based network, and the replicated data can be encrypted during transmission. Hyper-V Replica works with standalone servers, failover clusters, or a mixture of both. The servers can be physically co-located or widely separated geographically. The physical servers do not need to be in the same domain, or even joined to any domain at all.
7. You can set up replication of Hyper-V virtual machines as long as you have any two physical Windows Server 2012 or Windows Server 2012 R2 servers that support the Hyper-V role. The two (or three, in the case of extended replication) servers can be physically co-located or in completely separate geographical locations. Either (or both) of the primary and Replica servers can be part of a failover cluster, and mixed standalone and clustered environments are supported.
8.
Live Migrations: Windows Server 2008 R2 introduced the Live Migration feature, which permits users to move a running virtual machine from one physical computer to another with no downtime, assuming that the virtual machine is clustered. Windows Server 2012 Hyper-V provides the ability to migrate virtual machines with support for simultaneous live migrations; that is, users can move several virtual machines at the same time.
9. Live migrations are not limited to a cluster. Virtual machines can be migrated across cluster boundaries, and between stand-alone servers that are not part of a cluster.
10. Live storage migration: Live storage migration allows users to move virtual hard disks that are attached to a running virtual machine. Users can transfer virtual hard disks to a new location for upgrading or migrating storage, performing back-end storage maintenance, or redistributing the storage load.
11. NUMA: A NUMA topology can be projected onto a virtual machine, and guest operating systems and applications can make intelligent NUMA decisions.
12. Windows Server 2012 Hyper-V enables support for SR-IOV-capable network devices and allows the SR-IOV virtual function of a physical network adapter to be assigned directly to a virtual machine. This reduces network latency and host CPU overhead (for processing network traffic), and increases network throughput.
13. Windows Server 2012 Hyper-V can reclaim the unused memory from virtual machines with a minimum memory value lower than their startup value.
14. Smart paging: If a virtual machine is configured with a lower minimum memory than its startup memory and Hyper-V needs additional memory to restart it, Hyper-V smart paging is used to bridge the gap between minimum and startup memory. This provides a reliable way to keep virtual machines running when there is not enough physical memory available.
15. Runtime memory configuration: Users can make configuration changes to Dynamic Memory (increase maximum memory or decrease minimum memory) while a virtual machine is running.
16. Resource metering: Resource Metering allows users to track how much CPU, memory, storage, and network resource is consumed by a virtual machine over time. This information is gathered automatically (without the need to constantly collect data from the virtual machine) and persists with the virtual machine through live migration and other mobility operations. Windows Server 2012 Hyper-V can track and report the amount of data transferred per IP address or virtual machine.
17. VHDX format: VHDX supports up to 64 TB of storage. It helps to provide protection from corruption due to power failures by logging updates to the VHDX metadata structures, and it helps to prevent performance degradation on large-sector physical disks by optimizing structure alignment.
18. Offloaded data transfer support: Windows Server 2012 Hyper-V uses SAN copy offload to copy large amounts of data from one location to another.
19. Data center bridging: Windows Server 2012 Hyper-V uses DCB-capable hardware to converge multiple types of network traffic onto a single network adapter, with a maximum level of service to each. This helps to reduce the cost and complexity of maintaining separate networks for management, live migration, and storage traffic, and makes it easy to change allocations to different traffic flows.
20. Virtual Fibre Channel in Hyper-V provides Fibre Channel ports within the guest operating system.
21. QoS: Windows Server 2012 Hyper-V uses minimum bandwidth to assign specific bandwidth to each type of traffic and to ensure fair sharing during congestion.
22.
Network Interface Card (NIC) Teaming for load balancing and failover (LBFO): Windows Server 2012 Hyper-V provides built-in support for NIC Teaming. A virtual machine can have virtual network adapters that are connected to more than one virtual switch; if a network adapter under one virtual switch is disconnected, the virtual machine still has connectivity. NIC Teaming supports up to 32 network adapters in a team.
23. Guest clustering: Workloads can be virtualized by having cluster guest operating systems access storage directly over Fibre Channel or through iSCSI. This provides the ability to connect to Fibre Channel directly from within virtual machines.
24. Affinity virtual machine rules: Administrators can configure partnered virtual machines to migrate simultaneously at failover.
25. Anti-affinity virtual machine rules: Administrators can specify that two virtual machines cannot coexist on the same node in a failover scenario.
26. NIC Teaming: NIC Teaming allows multiple network interfaces to work together as a team, preventing connectivity loss if one network adapter fails. NIC Teaming also allows you to aggregate bandwidth from multiple network adapters; for example, four 1-gigabit (Gb) network adapters can provide an aggregate of 4 Gb/s of throughput. In Windows Server 2012 R2, the load-balancing algorithms have been further enhanced with the goal of better utilizing all NICs in the team, significantly improving performance.
27. The advantages of a Windows NIC Teaming solution are that it works with all network adapter vendors, spares you from most of the potential problems that proprietary solutions cause, provides a common set of management tools for all adapter types, and is fully supported by Microsoft.
28. Memory ballooning: the technique used to reclaim unused memory from one virtual machine so that it can be given to another virtual machine that has memory needs.
29. SMB Direct: The SMB protocol provides support for Remote Direct Memory Access (RDMA) network adapters, which allows storage performance capabilities that rival Fibre Channel. RDMA network adapters enable this performance capability by operating at full speed with very low latency, due to the ability to bypass the kernel and perform write and read operations directly to and from memory. This capability is possible because reliable transport protocols are implemented on the adapter hardware and allow for zero-copy networking with kernel bypass. With this capability, applications, including SMB, can perform data transfers directly from memory, through the adapter, to the network, and then to the memory of the application requesting data from the file share.
30. Resource metering

Hyper-V in Windows Server 2012 R2 helps providers build a multitenant environment in which virtual machines can be served to multiple clients in a more isolated way. Because a single client may have many virtual machines, aggregating resource-use data can be a challenging task. However, Windows Server 2012 R2 simplifies this task by using resource pools, a Hyper-V feature that allows for resource metering. Resource pools are logical containers that collect the resources of the virtual machines that belong to one client, permitting single-point querying of the client's overall resource use. Resource Metering in Windows Server 2012 R2 can measure and track a series of important data points, including the following:
The average CPU, in megahertz, used by a virtual machine over a period of time.
The average physical memory, in megabytes, used by a virtual machine over a period of time.
The lowest amount of physical memory, in megabytes, assigned to a virtual machine over a period of time.
The highest amount of physical memory, in megabytes, assigned to a virtual machine over a period of time.
The highest amount of disk space capacity, in megabytes, allocated to a virtual machine over a period of time.
The total incoming network traffic, in megabytes, for a virtual network adapter over a period of time.
The total outgoing network traffic, in megabytes, for a virtual network adapter over a period of time.
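The resource-pool idea described above — per-VM counters rolled up into a single per-client query — can be sketched as a small data structure. This is a conceptual illustration only; the class and field names (`VmMetrics`, `ResourcePool`) are invented for the example and are not part of any Hyper-V API.

```python
from dataclasses import dataclass

@dataclass
class VmMetrics:
    """Per-VM counters of the kind Resource Metering tracks (illustrative)."""
    avg_cpu_mhz: float
    avg_memory_mb: float
    disk_capacity_mb: float
    net_in_mb: float
    net_out_mb: float

class ResourcePool:
    """Logical container aggregating the VMs that belong to one client."""
    def __init__(self, client):
        self.client = client
        self.vms = {}

    def add(self, name, metrics):
        self.vms[name] = metrics

    def totals(self):
        # Single-point query of the client's overall resource use.
        return {
            "avg_cpu_mhz": sum(m.avg_cpu_mhz for m in self.vms.values()),
            "avg_memory_mb": sum(m.avg_memory_mb for m in self.vms.values()),
            "disk_capacity_mb": sum(m.disk_capacity_mb for m in self.vms.values()),
            "net_in_mb": sum(m.net_in_mb for m in self.vms.values()),
            "net_out_mb": sum(m.net_out_mb for m in self.vms.values()),
        }

pool = ResourcePool("ClientA")
pool.add("vm1", VmMetrics(800, 2048, 40960, 120, 75))
pool.add("vm2", VmMetrics(400, 1024, 20480, 60, 30))
print(pool.totals()["avg_memory_mb"])  # 3072
```

In the real feature the per-VM data is collected by Hyper-V itself and queried with PowerShell; the point of the sketch is only the aggregation pattern: one container per client, one query for the client's totals.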

31. Cluster Shared Volume: You can create a Cluster Shared Volume on virtually any iSCSI or Fibre Channel accessible storage device. This can include a SAN, a physical NAS appliance, or even a server that is configured to act as a shared storage device.

32. Storage QoS is a new feature in Windows Server 2012 R2 that allows you to restrict disk throughput for overactive or disruptive virtual machines, and it can be configured dynamically while the virtual machine is running. For maximum bandwidth, it provides strict policies that throttle IO to a given virtual machine at a maximum IO threshold. For minimum bandwidth, it provides policies for threshold warnings that alert you to an IO-starved VM when the bandwidth does not meet the minimum threshold.
33. Moving virtual machines: Hyper-V gives you three different options:
Move the Virtual Machine's Data to a Single Location: This option places all of the virtual machine's components into a single location.
Move the Virtual Machine's Data by Selecting Where to Move Each Item: This option gives you the most flexibility because it allows you to control where each virtual machine component will be placed. This is usually the option that you will use when performing cluster-to-standalone host migrations.
Move Only the Virtual Machine: This option moves the virtual machine itself to a new host, but leaves the virtual hard disk in its original location on the Cluster Shared Volume.

When you move only the virtual machine's storage, the third option above changes to moving only the virtual machine's hard disk.
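The two Storage QoS behaviors from item 32 — a hard cap at the maximum threshold and a warning (not a boost) below the minimum threshold — can be illustrated with a small sketch. This is a conceptual model, not the Hyper-V implementation; the names (`apply_storage_qos`, `max_iops`, `min_iops`) are invented for the example.

```python
def apply_storage_qos(requested_iops, max_iops=None, min_iops=None):
    """Return (granted_iops, warning). Throttle at the maximum threshold;
    warn (but do not boost) when delivery falls below the minimum threshold."""
    granted = requested_iops
    if max_iops is not None and granted > max_iops:
        granted = max_iops  # strict policy: hard cap on the VM's IO
    warning = None
    if min_iops is not None and granted < min_iops:
        # QoS cannot create bandwidth; it can only alert that the VM is IO-starved.
        warning = f"VM is IO-starved: {granted} IOPS < minimum {min_iops} IOPS"
    return granted, warning

print(apply_storage_qos(5000, max_iops=2000))  # (2000, None) -- capped
print(apply_storage_qos(300, min_iops=500))    # (300, warning string)
```

The asymmetry is the important design point: the maximum is enforced, while the minimum only raises an alert, because a storage stack cannot manufacture throughput that the back end does not have.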

34. While creating a failover cluster, you can select multiple network adapters, but as a best practice you should reserve one network adapter for management traffic and another for cluster communications (such as cluster heartbeats and live migration traffic).
35. Failover clustering requirements:
Cluster name: The configuration process requires that a unique name be assigned to the cluster. The name you choose should be different from any of the computer names that are used within your Active Directory.

Cluster node hardware: Another important consideration is the cluster nodes themselves. The nodes do not have to use identical hardware, but they should all use the same CPU architecture, and ideally they should be equipped with comparable amounts of memory.
Domain membership: Domain membership isn't an absolute requirement for cluster nodes, but the configuration process is a lot easier if all of the cluster nodes are members of a common Active Directory domain. Domain membership allows Kerberos authentication to be used.
Node names: Just as the cluster requires a cluster name, each cluster node requires a unique computer name.
Network adapters: Hyper-V is very flexible in terms of the network adapter requirements for cluster nodes. However, it is generally recommended that each cluster node have a minimum of three network adapters. One adapter should be reserved for management traffic and another for cluster traffic; the third (and any additional adapters) are used for virtual machine traffic.
Node IP addresses: As a best practice, you should assign a static IP address to each cluster node's management NIC. However, you will also need to decide how you want to handle IP address assignment for the other NICs.
Cluster IP address: In addition to the IP addresses assigned to physical NICs, you must assign a static IP address to the cluster. This IP address is used to communicate with the cluster as a whole rather than with an individual cluster node.

36. What are server roles, role services, and features? This section defines the terms role, role service, and feature as they apply to Windows Server 2008 R2.
Roles: A server role is a set of software programs that, when they are installed and properly configured, lets a computer perform a specific function for multiple users or other computers within a network. Generally, roles share the following characteristics:
They describe the primary function, purpose, or use of a computer. A specific computer can be dedicated to performing a single role that is heavily used in the enterprise, or it may perform multiple roles if each role is only lightly used in the enterprise.
They provide users throughout an organization access to resources managed by other computers, such as Web sites, printers, or files that are stored on different computers.
They typically include their own databases that can queue user or computer requests, or record information about network users and computers that relates to the role. For example, Active Directory Domain Services includes a database for storing the names and hierarchical relationships of all computers in a network.
As soon as they are properly installed and configured, roles function automatically. This allows the computers on which they are installed to perform prescribed tasks with limited user commands or supervision.
Role services: Role services are software programs that provide the functionality of a role. When you install a role, you can choose which role services the role provides for other users and computers in your enterprise. Some roles, such as DNS Server, have only a single function, and therefore do not have available role services. Other roles, such as Remote Desktop Services, have several role services that can be installed, depending on the remote computing needs of your enterprise. You can consider a role as a grouping of closely related, complementary role services, for which, most of the time, installing the role means installing one or more of its role services.
Features: Features are software programs that, although they are not directly parts of roles, can support or augment the functionality of one or more roles, or improve the functionality of the server, regardless of which roles are installed. For example, the Failover Clustering feature augments the functionality of other roles, such as File Services and DHCP Server, by allowing them to join server clusters for increased redundancy and improved performance. Another feature, Telnet Client, lets you communicate remotely with a telnet server over a network connection, a functionality that enhances the communication options of the server.
37. In the Hyper-V installation, Allow this Server to Send and Receive Live Migrations of Virtual Machines lets you select the authentication protocol to be used by live migration traffic. If the cluster nodes reside in the same Active Directory domain, then you should use the Kerberos protocol; Kerberos is more secure than CredSSP, and the configuration process is easier.
Using the Disk Management Console to provision a storage array will work in Windows Server 2012, but that method is primarily suited for volumes that are stored on legacy versions of Windows Server. If a storage volume is stored on Windows Server 2012, Microsoft recommends using a new feature called Windows Storage Spaces. The main benefits of using Windows Storage Spaces instead of the Disk Management Console include:
Volumes created using Windows Storage Spaces can be thin provisioned.
You can add additional physical disk space as needed.
You can choose the type of redundancy that is most beneficial.
Storage resources can be provisioned much more quickly than they can through the Disk Management Console.
38. The first step in the process of configuring Windows Storage Spaces is to create a storage pool. A storage pool is a collection of physical disks that act as a pool of storage resources. After the storage pool is created, the next step is to create a virtual disk within the storage pool. This virtual disk will act as a repository for the virtual machine components that will be stored within the Cluster Shared Volume. Fixed-size provisioning delivers better performance, but thin provisioning is more flexible and makes more efficient use of storage space.
39. Microsoft supported configuring Windows Server 2008 R2 as an iSCSI target, but doing so required you to download an additional component. In Windows Server 2012, the iSCSI target software is built into the operating system, so there is nothing extra to download.
40. Most settings can be configured only while the virtual machine is turned off:
Adding or removing hardware components
Configuring memory, processor, and disk settings
A few settings are configurable while the virtual machine is running:
Connecting a network adapter to a virtual switch
Adding a virtual hard disk to a SCSI controller
Enabling or disabling Integration Services
41. What is Smart Paging? A memory-management technique that uses physical disk resources as temporary memory, ensuring that a virtual machine can always restart. It is used during virtual machine restart only, and only if:
Hyper-V is low on memory, and
The virtual machine has more startup than minimum RAM, and
Memory cannot be reclaimed from other virtual machines.
Smart paging temporarily degrades virtual machine performance; it is used only for a limited time and is then removed. It is not used when a virtual machine is started from the Off state, and virtual machine operating system paging is always preferred.
42. Integration Services make a guest operating system aware that it is running on a virtual machine. Many operating systems include integration services:
Install the latest integration services
VMBus and synthetic device support
Time synchronization, mouse release, VSS
Managed as virtual machine settings
43. Only Performance Monitor can monitor Hyper-V; many Hyper-V performance objects have been added. Other tools monitor only their own virtual environment. The parent partition is also considered a virtual machine.
44. Virtual machine tools monitor the virtual environment.
45. Heavy utilization in a virtual machine does not mean that the Hyper-V host is heavily utilized (and vice versa).
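The restart-only conditions listed in item 41 combine into a simple decision: smart paging applies only during a restart, only when there is a minimum/startup gap, and only when free plus reclaimable host memory cannot cover the startup allocation. The function below is a conceptual sketch of those rules, not Hyper-V's implementation; all names are invented for the example.

```python
def needs_smart_paging(is_restart, host_free_mb, reclaimable_mb,
                       startup_mb, minimum_mb):
    """Smart paging bridges the gap between minimum and startup memory,
    and only during a virtual machine restart."""
    if not is_restart:
        return False          # not used when starting from the Off state
    if startup_mb <= minimum_mb:
        return False          # no minimum/startup gap to bridge
    # Paging kicks in only when free + reclaimable host memory cannot
    # satisfy the startup allocation.
    return host_free_mb + reclaimable_mb < startup_mb

# Restart of a VM (startup 2048 MB, minimum 512 MB) on a starved host:
print(needs_smart_paging(True, host_free_mb=1024, reclaimable_mb=256,
                         startup_mb=2048, minimum_mb=512))   # True
```

Because disk-backed memory is slow, the real feature treats this strictly as a temporary bridge: once the guest boots and Dynamic Memory can shrink it back toward its minimum, the paged memory is removed.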

46. NUMA: non-uniform memory access. Non-uniform memory access (NUMA) is a computer memory design used in multiprocessing, where the memory access time depends on the memory location relative to a processor. Under NUMA, a processor can access its own local memory faster than non-local memory (memory local to another processor or memory shared between processors). The benefits of NUMA are limited to particular workloads, notably on servers where the data are often associated strongly with certain tasks or users. NUMA architectures logically follow in scaling from symmetric multiprocessing (SMP) architectures. They were developed commercially during the 1990s by Burroughs (later Unisys), Convex Computer (later Hewlett-Packard), Honeywell Information Systems Italy (HISI) (later Groupe Bull), Silicon Graphics (later Silicon Graphics International), Sequent Computer Systems (later IBM), Data General (later EMC), and Digital (later Compaq, now HP). Techniques developed by these companies later featured in a variety of Unix-like operating systems, and to an extent in Windows NT.
47. Layer 2 switching uses the media access control (MAC) address from the host's network interface cards (NICs) to decide where to forward frames. Layer 2 switching is hardware based, which means switches use application-specific integrated circuits (ASICs) to build and maintain filter tables (also known as MAC address tables or CAM tables). One way to think of a Layer 2 switch is as a multiport bridge.
48. Layer 2 switching provides the following:
Hardware-based bridging (MAC)
Wire speed
High speed
Low latency
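The CAM-table behavior behind item 47 — learn the source MAC on the ingress port, forward known destinations out one port, flood unknown destinations everywhere else — can be sketched in a few lines. This models the logic conceptually; a real switch does this in ASIC hardware, and the names here are invented for the example.

```python
class L2Switch:
    """Minimal learning switch: maps source MACs to ports, floods unknowns."""
    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.mac_table = {}            # MAC address -> port (the CAM table)

    def receive(self, in_port, src_mac, dst_mac):
        self.mac_table[src_mac] = in_port      # learn where src lives
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]   # forward out one known port
        # Unknown destination: flood out every port except the ingress port.
        return [p for p in range(self.num_ports) if p != in_port]

sw = L2Switch(num_ports=4)
print(sw.receive(0, "aa:aa", "bb:bb"))  # unknown dst -> flood: [1, 2, 3]
print(sw.receive(1, "bb:bb", "aa:aa"))  # aa:aa was learned on port 0 -> [0]
```

This is also why a Layer 2 switch is "a multiport bridge": the whole forwarding decision is a table lookup on MAC addresses, with no modification of the packet itself.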

49. Layer 2 switching is highly efficient because there is no modification to the data packet, only to the frame encapsulation of the packet, and only when the data packet is passing through dissimilar media (such as from Ethernet to FDDI). Layer 2 switching is used for workgroup connectivity and network segmentation (breaking up collision domains). This allows a flatter network design with more network segments than traditional 10BaseT shared networks. Layer 2 switching has helped develop new components in the network infrastructure.
50. Server farms: Servers are no longer distributed to physical locations, because virtual LANs can be used to create broadcast domains in a switched internetwork. This means that all servers can be placed in a central location, yet a certain server can still be part of a workgroup in a remote branch, for example.
51. Intranets: Allow organization-wide client/server communications based on Web technology.
52. In computer storage, a logical unit number, or LUN, is a number used to identify a logical unit, which is a device addressed by the SCSI protocol or by protocols which encapsulate SCSI, such as Fibre Channel or iSCSI. A LUN may be used with any device which supports read/write operations, such as a tape drive, but is most often used to refer to a logical disk as created on a SAN. Though not technically correct, the term "LUN" is often also used to refer to the logical disk itself.
53. To provide a practical example, a typical disk array has multiple physical SCSI ports, each with one SCSI target address assigned. An administrator may format the disk array as a RAID and then partition this RAID into several separate storage volumes. To represent each volume, a SCSI target is configured to provide a logical unit. Each SCSI target may provide multiple logical units and thus represent multiple volumes, but this does not mean that those volumes are concatenated. The computer that accesses a volume on the disk array identifies which volume to read or write with the LUN of the associated logical unit. In another example, a single disk drive has one physical SCSI port. It usually provides just a single target, which in turn usually provides just a single logical unit whose LUN is zero. This logical unit represents the entire storage of the disk drive.
54. A solid-state drive (SSD) (also known as a solid-state disk or electronic disk, though it contains no actual "disk" of any kind or motors to "drive" the disks) is a data storage device using integrated circuit assemblies as memory to store data persistently. SSD technology uses electronic interfaces compatible with traditional block input/output (I/O) hard disk drives, thus permitting simple replacement in common applications. Also, new I/O interfaces like SATA Express have been created to keep up with speed advancements in SSD technology.
55. SSDs have no moving mechanical components. This distinguishes them from traditional electromechanical magnetic disks such as hard disk drives (HDDs) or floppy disks, which contain spinning disks and movable read/write heads. Compared with electromechanical disks, SSDs are typically more resistant to physical shock, run silently, and have lower access time and less latency. However, while the price of SSDs continued to decline in 2012, SSDs are still about 7 to 8 times more expensive per unit of storage than HDDs.
56. Cluster Shared Volumes (CSV) enable multiple nodes in a failover cluster to simultaneously have read-write access to the same LUN (disk) that is provisioned as an NTFS volume. (In Windows Server 2012 R2, the disk can be provisioned as NTFS or Resilient File System (ReFS).) With CSV, clustered roles can fail over quickly from one node to another node without requiring a change in drive ownership, or dismounting and remounting a volume. CSV also helps simplify the management of a potentially large number of LUNs in a failover cluster.
57. CSV provides a general-purpose, clustered file system, which is layered above NTFS (or ReFS in Windows Server 2012 R2). CSV applications include:
Clustered virtual hard disk (VHD) files for clustered Hyper-V virtual machines
Scale-out file shares to store application data for the Scale-Out File Server clustered role
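The target/logical-unit addressing from items 52-53 is a two-level lookup: an initiator names a target, and the LUN then selects one logical unit behind that target. The sketch below models that conceptually; the names (`DiskArray`, `resolve`) are invented and do not correspond to any real SCSI API.

```python
class DiskArray:
    """Each SCSI target exposes logical units; a LUN selects one volume."""
    def __init__(self):
        # target address -> {LUN -> volume name}
        self.targets = {}

    def add_volume(self, target, lun, volume):
        self.targets.setdefault(target, {})[lun] = volume

    def resolve(self, target, lun):
        """An initiator addresses a volume by (target, LUN)."""
        return self.targets[target][lun]

array = DiskArray()
array.add_volume(target=0, lun=0, volume="raid-volume-A")
array.add_volume(target=0, lun=1, volume="raid-volume-B")  # one target, many LUNs
print(array.resolve(0, 1))   # raid-volume-B

# A plain disk drive: a single target, a single logical unit at LUN 0.
drive = DiskArray()
drive.add_volume(target=0, lun=0, volume="whole-disk")
```

The two example objects mirror the two cases in item 53: an array whose one target presents several volumes via different LUNs, and a bare drive whose only logical unit is LUN zero.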

Examples of the application data for this role include Hyper-V virtual machine files and Microsoft SQL Server data. (Be aware that ReFS is not supported for a Scale-Out File Server.) For more information about Scale-Out File Server, see Scale-Out File Server for Application Data Overview.
58. What is Offloaded Data Transfer? In the traditional data copy model, the server issues a read request to the SAN, the data is read and transferred into memory, and the data is then transferred and written from memory back to the SAN. The issues are CPU and memory utilization and increased traffic. In the offload-enabled data copy model, the server issues a read request and the SAN returns a token; the server issues a write request to the SAN using the token; and the SAN completes the data copy and confirms completion. The benefits are increased performance and reduced utilization. The SAN must support Offloaded Data Transfer.
59. A differencing disk is a virtual hard disk you use to isolate changes to a virtual hard disk or the guest operating system by storing them in a separate file. A differencing disk is similar to the Undo Disks feature because both offer a way to isolate changes in case you want to reverse them. However, Undo Disks is associated with a virtual machine and all disks assigned to it, while a differencing disk is associated with only one disk. In addition, Undo Disks is intended to be a shorter-term method of isolating changes. For more information, see Using Undo Disks.
60. A differencing disk is associated with another virtual hard disk that you select when you create the differencing disk. This means that the disk to which you want to associate the differencing disk must exist first. This virtual hard disk is called the "parent" disk and the differencing disk is the "child" disk. The parent disk can be any type of virtual hard disk. The differencing disk stores all changes that would otherwise be made to the parent disk if the differencing disk was not being used. The differencing disk provides an ongoing way to save changes without altering the parent disk. You can use the differencing disk to store changes indefinitely, as long as there is enough space on the physical disk where the differencing disk is stored. The differencing disk expands dynamically as data is written to it and can grow as large as the maximum size allocated for the parent disk when the parent disk was created.
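The parent/child relationship in items 59-60 is essentially a copy-on-write overlay: writes land in the child, reads fall through to the parent for unchanged blocks. The sketch below models that behavior conceptually; it is not the VHD on-disk format, and all names are invented for the example.

```python
class VirtualDisk:
    """Parent disk: a plain mapping of block number -> data."""
    def __init__(self, blocks):
        self.blocks = blocks

    def read(self, n):
        return self.blocks.get(n, b"\x00")

class DifferencingDisk(VirtualDisk):
    """Child disk: writes land here; reads fall through to the parent."""
    def __init__(self, parent):
        super().__init__({})
        self.parent = parent

    def write(self, n, data):
        self.blocks[n] = data          # the parent is never altered

    def read(self, n):
        if n in self.blocks:           # changed block: serve from the child
            return self.blocks[n]
        return self.parent.read(n)     # unchanged block: serve from the parent

parent = VirtualDisk({0: b"base0", 1: b"base1"})
child = DifferencingDisk(parent)
child.write(1, b"new1")
print(child.read(0))   # b'base0'  (falls through to the parent)
print(child.read(1))   # b'new1'   (overridden in the child)
print(parent.read(1))  # b'base1'  (the parent is unchanged)
```

This also shows why the child can grow to at most the parent's allocated size: in the worst case every block of the parent has been overwritten and therefore lives in the child file.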

61. Emulated hardware is a software construct that the hypervisor presents to the virtual machine as though it were actual hardware. This software component implements a unified, least-common-denominator set of instructions that are universal to all devices of that type. This all but guarantees that it will be usable by almost any operating system, even those that Hyper-V Server does not directly support. These devices can be seen even in minimalist pre-boot execution environments, which is why you can use a legacy network adapter for PXE functions and can boot from a disk on an IDE controller.

The drawback is that emulation can be computationally expensive and therefore slow to operate. The software component is a complete representation of a hardware device, which includes the need for IRQ and memory I/O operations. Within the virtual machine, all of the translation shown in the images above occurs. Once the virtual CPU has converted the VM's communication into that meant for the device, it is passed over to the construct that Hyper-V maintains. Hyper-V must then perform the exact same functions to interact with the real hardware. All of this happens in reverse as the device sends data back to the drivers and applications within the virtual machine.
62. Synthetic hardware is different from emulated hardware in that Hyper-V does not create a software construct to masquerade as a physical device. Instead, it uses a technique similar to the Windows HAL and presents an interface that functions more closely to the driver model. The guest still needs to send instructions through its virtual CPU, but it is able to use the driver model to pass these communications directly up into Hyper-V through the VMBus. The VMBus driver, and the drivers that depend on it, must be loaded in order for the guest to be able to use the synthetic hardware at all. This is why synthetic and SCSI devices cannot be used prior to Windows startup.
63. SR-IOV (single root I/O virtualization) eliminates the need for the hypervisor to act as an intermediary at all. SR-IOV devices expose virtual functions, which are unique pathways for communicating with a hardware device. A virtual function is assigned to a specific virtual machine, and the virtual machine manifests it as a device. No other device or computer system, whether in the host computer or inside that virtual machine, can use that virtual function at all. Whereas all traditional virtualization requires the hypervisor to manage resource sharing, SR-IOV is an agreement between the hardware and software to use any given virtual function for one and exactly one purpose.
64. UEFI (Unified Extensible Firmware Interface) is a standard firmware interface for PCs, designed to replace the BIOS (basic input/output system). The standard was created by more than 140 technology companies as part of the UEFI consortium, including Microsoft. It is designed to improve software interoperability and address the limitations of BIOS. Some advantages of UEFI firmware include:
Better security, by helping to protect the pre-startup (pre-boot) process against bootkit attacks.
Faster startup times and resuming from hibernation.
Support for drives larger than 2.2 terabytes (TB).
Support for modern, 64-bit firmware device drivers that the system can use to address more than 17.2 billion gigabytes (GB) of memory during startup.
Capability to use BIOS with UEFI hardware.

65. In Generation 2 virtual machines, emulated devices are removed. 66. N_Port ID Virtualization (NPIV) is a technology that defines how multiple virtual servers can share a single physical Fibre Channel port identification (ID). NPIV allows a single host bus adapter (HBA) or target port on a storage array to register multiple World Wide Port Names (WWPNs) and N_Port identification numbers. This allows each virtual server to present a
different World Wide Name to the storage area network (SAN), which in turn means that each virtual server will see its own storage -- but no other virtual server's storage. 67. Virtual Machine Connection is a tool that you can use to connect to the virtual machines that run on a local or remote server.
o Connects to virtual machines on local and remote Hyper-V hosts
o Uses port 2179 (can be modified in the registry)
o Connection allowed by Windows Firewall
o Installed as part of the Hyper-V role or the RSAT feature
68. Virtual switch: the parent partition has one or more physical network adapters, and each virtual machine (and the parent) has one or more virtual network adapters. Each virtual network adapter is connected to a virtual switch. The types of virtual switch are:
o External - connects to a physical or wireless adapter
o Internal - parent and virtual machine connections only
o Private - virtual machine connections only
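The three switch types above can be created with the Hyper-V module (a sketch; the switch and adapter names are illustrative):

```powershell
# External: bound to a physical NIC; -AllowManagementOS keeps host connectivity
New-VMSwitch -Name "External1" -NetAdapterName "Ethernet" -AllowManagementOS $true

# Internal: host and VMs only
New-VMSwitch -Name "Internal1" -SwitchType Internal

# Private: VM-to-VM only
New-VMSwitch -Name "Private1" -SwitchType Private
```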

69. Availability

70. The Windows Server 2012 Hyper-V role introduces a new capability, Hyper-V Replica, as a built-in replication mechanism at the virtual machine (VM) level. Hyper-V Replica can asynchronously replicate a selected VM running at a primary site to a designated replica site across a LAN/WAN. The following schematic presents this concept. Here both the primary site and the replica site are Windows Server 2012 Hyper-V hosts: the primary site runs the production (so-called primary) VMs, while the replica site stands by with the replicated VMs off, to be brought online should the primary site experience a planned or unplanned VM outage. Hyper-V Replica requires neither shared storage nor specific storage hardware. Once an initial copy is replicated to the replica site and replication is ongoing, Hyper-V Replica replicates only the changes of a configured primary VM, i.e. the deltas, asynchronously. 71. Hyper-V Replica
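Enabling replication for a VM can be sketched with the Hyper-V cmdlets (the server names and the 30-second frequency are illustrative; the -ReplicationFrequencySec parameter assumes Windows Server 2012 R2):

```powershell
# On the primary server: point the VM at the replica server
Enable-VMReplication -VMName "VM01" -ReplicaServerName "replica.contoso.com" `
    -ReplicaServerPort 80 -AuthenticationType Kerberos -ReplicationFrequencySec 30

# Send the initial copy over the network
Start-VMInitialReplication -VMName "VM01"
```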

72. A VLAN ID is used to isolate network traffic for nodes that are connected to the same physical network. A VLAN ID can be configured on:
o A virtual machine network adapter
o An external or internal virtual switch
73.
74. Virtual machine moving options:
o Virtual machine and storage migration - includes moving from Windows Server 2012 to Windows Server 2012 R2
o Quick migration - requires failover clustering
o Live migration - requires only network connectivity; improved performance in Windows Server 2012 R2
o Hyper-V Replica - asynchronously replicate virtual machines; configure replication frequency and extended replication
o Exporting and importing of a virtual machine - exporting is possible while the virtual machine is running; a virtual machine can be imported without a prior export
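Assigning a VLAN ID to a virtual machine's network adapter can be sketched as follows (the VM name and VLAN ID are illustrative):

```powershell
# Tag all traffic from this VM's adapter with VLAN 10 (access mode)
Set-VMNetworkAdapterVlan -VMName "VM01" -Access -VlanId 10

# Verify the setting
Get-VMNetworkAdapterVlan -VMName "VM01"
```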

75. Hyper-V Replica has the following components:
o Replication engine - manages the replication configuration and handles initial replication, delta replication, failover, and test failover
o Change tracking module - keeps track of the write operations in the virtual machine
o Network module - provides a secure and efficient channel to transfer data
o Hyper-V Replica Broker server role - provides seamless replication while a virtual machine is running on different failover cluster nodes
o Management tools - Hyper-V Manager, Windows PowerShell, Failover Cluster Manager
76. Failover types:
Failover
o Initiated at the replica virtual machine
o Used when the primary virtual machine has failed (turned off or unavailable)
o Data loss can occur
o Reverse the replication after the primary site is recovered
Test failover
o Non-disruptive testing, with zero downtime
o A new virtual machine is created in the recovery site from the replica checkpoint
o The test virtual machine is turned off and not connected
o Ended with Stop Test Failover
Planned failover
o Initiated at the primary virtual machine, which is turned off
o Sends data that has not yet been replicated
o Fails over to the replica server and starts the replica virtual machine
o Reverse the replication after the primary site is restored
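These operations map onto the failover cmdlets roughly as follows (a sketch run on the replica server; the VM name is illustrative):

```powershell
# Test failover: creates a disconnected test VM from the replica checkpoint
Start-VMFailover -VMName "VM01" -AsTest
Stop-VMFailover -VMName "VM01"          # removes the test VM when finished

# Unplanned failover at the replica, then commit it
Start-VMFailover -VMName "VM01"
Complete-VMFailover -VMName "VM01"

# After the primary site is recovered, reverse the replication direction
Set-VMReplication -VMName "VM01" -Reverse
```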

77. Failover cluster: shared storage using SMB, iSCSI, Fibre Channel, Fibre Channel over Ethernet (FCoE), or Serial Attached SCSI (SAS). 78. The SMB share Applications profile should be used when no access-based enumeration or share caching is needed.

79. Using Virtual Hard Disk Sharing as Shared Storage
A failover cluster runs inside virtual machines, with a shared virtual disk used as the shared storage:
o Virtual machines do not need access to an iSCSI or FC SAN
o Presented as a virtual SAS disk
o Can be used only for data
Requirements for a shared virtual disk:
o The virtual hard disk must be in .vhdx format
o Connected by using a virtual SCSI adapter
o Stored on a scale-out file server or CSV
Supported operating systems in a virtual machine:
o Windows Server 2012 or Windows Server 2012 R2
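Attaching a shared .vhdx to two guest-cluster VMs can be sketched as follows (the path and VM names are illustrative, and the -ShareVirtualDisk switch assumes a Windows Server 2012 R2 host):

```powershell
# Attach the same .vhdx, stored on a CSV, to both guest-cluster nodes
$disk = "C:\ClusterStorage\Volume1\Shared.vhdx"
Add-VMHardDiskDrive -VMName "GuestNode1" -ControllerType SCSI -Path $disk -ShareVirtualDisk
Add-VMHardDiskDrive -VMName "GuestNode2" -ControllerType SCSI -Path $disk -ShareVirtualDisk
```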


Preferred owners:
o A virtual machine will start on a preferred Hyper-V host
o It starts on a possible owner only if the preferred owners are unavailable
o If preferred and possible owners are unavailable, the virtual machine will move to another failover cluster node, but not start
80. App Controller: replaces the now-deprecated VMM self-service portal

81. Host groups allow collective management of physical hosts:
o Host groups can be nested; parent-child inheritance applies
o Configurable properties include naming and moving the group, and allowing unencrypted file transfers
o Placement rules: a virtual machine must, should, must not, or should not match the host
o Host reserves: various resources can be reserved for the host alone, including CPU, memory, disk I/O and space, and network I/O
o Dynamic optimization determines VM load; resource defaults are CPU 30%, RAM 512 MB, disk I/O 0%; power optimization is included
o Network: varied network resources can be assigned - IP pools, load balancers, logical networks, and MAC pools
o Storage: storage pools and logical unit resources can be assigned


82. Operations Manager provides:
o Application monitoring in both private and public clouds
o Dashboards
o Health monitoring
o Alerts
o Agent and agentless monitoring
o Fabric monitoring

83. With Service Manager, you can:
o Implement service management, as defined in ITIL and the Microsoft Operations Framework
o Use the built-in process management packs to provide processes for defining templates and workflows, implementing change requests and change request templates, manually designing activity templates, and enforcing compliance


84. Orchestrator provides the ability to:
o Automate processes across systems, platforms, and cloud services
o Automate best practices
o Connect different systems from different vendors
o Implement built-in integration packs
o Implement end-to-end automation across multiple System Center products


85. Storage solutions implementing block storage:
o Fibre Channel storage, using virtual Fibre Channel adapters
o iSCSI storage

86. Storage solutions implementing file storage:
SMB 3.0:
o Enables virtual machine storage on SMB 3.0 file shares
o Requires Windows Server 2012 file servers
o Requires fast network connectivity
o Provides redundancy and performance benefits
NFS:
o Enables you to use NFS shares to deploy VMware-based virtual machines
87. Snapshots: with this method, the SAN creates a writable snapshot of an existing logical unit. Cloning: with this method, the SAN creates an independent copy of an existing logical unit. 88. You can integrate VMM and Windows Server Update Services (WSUS) to provide scanning and compliance of your virtualization infrastructure. An update baseline is a set of required updates assigned to a scope of infrastructure servers within the private cloud. If you move a host or host cluster to a new host group, the object will inherit the baseline associated with the target host group. If you assign a baseline specifically to a standalone host or host cluster, the baseline will stay with the object when it moves from one host group to another. 89. When integrating WSUS and VMM:
o You must have WSUS 3.0 SP2 x64 or newer
o You should limit languages, products, and classifications in WSUS
o Integration with Configuration Manager is possible if the WSUS server is managed by Configuration Manager
o You can also use the reporting capabilities for compliance information

90. SMB 3.0:


Enables virtual machine storage on SMB 3.0 file shares; requires Windows Server 2012 file servers. 91. Virtual Machine Manager library:
o Hosted on library servers
o Stores resources used to create virtual machines
o A catalog of stored resources; some resources are stored in the VMM database
o Contains templates and profiles
o Contains library shares - shared folders on the library servers that can be organized into subfolders and are indexed for quick retrieval
o Data deduplication - variable chunking and compression of primary data to other storage areas
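On a Windows Server 2012 or later file server, enabling deduplication for the volume that backs a library share can be sketched as follows (the E: volume is illustrative):

```powershell
# The Data Deduplication feature must be installed first
Install-WindowsFeature -Name FS-Data-Deduplication

# Enable deduplication on the volume and run an optimization job
Enable-DedupVolume -Volume "E:" -UsageType Default
Start-DedupJob -Volume "E:" -Type Optimization

# Check space savings
Get-DedupStatus -Volume "E:"
```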

92. The Virtual Machine Manager library is hosted on the library servers and stores resources used to create virtual machines. 93. Library-stored resources include:
o File-based resources: answer files, driver files, virtual floppy and hard disks, ISO images, Windows PowerShell and SQL Server scripts, web deployment packages, and SQL DAC files
o Virtual machine templates and profiles
o Equivalent objects
o Cloud libraries
o Self-service user content
o Orphaned resources
o Update catalogs and baselines
o Stored virtual machines and services
94. Library servers can be associated with particular host groups.

95. Considerations for highly available library servers:
1. VMM management servers cannot be on the same cluster as library servers
2. When a cluster fails over, library shares on it go offline until the cluster comes back up
3. The SQL Server running the VMM database should also be clustered
4. As an alternative to failover clustering, you can add more library servers
5. VMM library servers do not replicate files; manually copy files using robocopy or another similar utility

96. Data deduplication is a specialized data compression technique for eliminating duplicate copies of repeating data. Related and somewhat synonymous terms are intelligent (data) compression and single-instance (data) storage. The technique is used to improve storage utilization and can also be applied to network data transfers to reduce the number of bytes that must be sent. In the deduplication process, unique chunks of data, or byte patterns, are identified and stored during a process of analysis. As the analysis continues, other chunks are compared to the stored copy, and whenever a match occurs the redundant chunk is replaced with a small reference that points to the stored chunk. Given that the same byte pattern may occur dozens, hundreds, or even thousands of times (the match frequency depends on the chunk size), the amount of data that must be stored or transferred can be greatly reduced.[1] This type of deduplication is different from that performed by standard file-compression tools, such as LZ77 and LZ78. Whereas those tools identify short repeated substrings inside individual files, the intent of storage-based data deduplication is to inspect large volumes of data and identify large sections, such as entire files or large sections of files, that are identical, in order to store only one copy of each. That copy may be additionally compressed by single-file compression techniques. For example, a typical email system might contain 100 instances of the same 1 MB (megabyte) file attachment. Each time the email platform is backed up, all 100 instances of the attachment are saved, requiring 100 MB of storage space. With data deduplication, only one instance of the attachment is actually stored; the subsequent instances are referenced back to the saved copy, for a deduplication ratio of roughly 100 to 1.
97. Virtual machine checkpoints:
a. Creating a checkpoint creates an .avhd/.avhdx file
b. The non-checkpointed .vhd/.vhdx becomes read-only
c. All subsequent changes are written to the .avhd/.avhdx file
d. When the virtual machine is reverted, the .avhd/.avhdx file is deleted
98. Virtual machine cloning:
a. A rapid way to deploy a virtual machine
b. Makes a copy of the .vhd/.vhdx, the configuration files, and the memory contents
c. The original can be online if you are using System Center 2012 R2 Virtual Machine Manager
d. The cloned virtual machine is an exact copy with the same identity
e. The cloned virtual machine has the same name and domain SID as the original virtual machine
f. After cloning, run Sysprep, or manually change to unique settings and values
g. Ensure sufficient disk space exists on the host
99. Virtual machine conversion:
a. Convert Citrix XenServer virtual machines to Hyper-V via a P2V conversion
b. Virtual-to-virtual machine conversion supports converting:
i. In System Center 2012 VMM: ESX/ESXi 3.5 Update 5, ESX/ESXi 4.0, ESX/ESXi 4.1, ESXi 5.1
ii. In System Center 2012 SP1 VMM and System Center 2012 R2 VMM: ESX/ESXi 4.1, ESXi 5.1
100. Types of cloud


101. Windows Server 2012 networking


102. Manageability comparison

103. Hardware requirements for installing Hyper-V: Hyper-V is available only in the 64-bit editions of Windows Server. The CPU must have the necessary virtualization extensions available and turned on in the BIOS; both major processor manufacturers (Intel and AMD) ship CPUs with these extensions. In addition, the CPU must support hardware-enforced Data Execution Prevention (DEP): for Intel processors this requires enabling the XD bit, and for AMD processors the NX bit. 104. Hyper-V requires a minimum of two physical network adapters: one for hypervisor management and one for VM-to-network connectivity. If you plan to cluster your hosts, install a third adapter. 105. Server Core is a minimal installation of Windows Server 2008, with only those applications and services required for operation. 106. Hyper-V is installed as a role. (A role is a predefined function for a server, such as DNS, Active Directory domain controller, or Hyper-V.) Before you can install the Hyper-V role, you must install the Hyper-V update packages for Windows Server 2008 (KB950050), as well as the Language Pack for Hyper-V (KB951636). Hyper-V v2 comes preinstalled in Server 2008 R2; you simply have to enable it. 107. Virtual networks
a. Virtual networks allow the virtual machine to communicate with the rest of your network, the host machine, and other virtual machines. With the Virtual Network Manager, you can create the following types of virtual networks:
b. A private network allows a virtual machine to communicate only with another virtual machine on the host.

c. An internal network sets up communication between the host system and the virtual machines on it. d. An external network connects virtual machines and the host physical network. This allows the virtual machine to communicate on the same network as the host, operating as any other node on the network. 108. Virtual Machine Reserve and Virtual Machine Limit refer to the amount of host resources reserved for a virtual machine's exclusive use and the maximum it can consume, respectively. Relative weight comes into play when multiple virtual machines are running on the same host and essentially gives a priority value to each virtual machine. 109. The Automatic Stop Action cannot take place if the host machine is shut down unexpectedly. 110. VMM not only consolidates the functionality built into Hyper-V but also adds to it. With VMM, physical-to-virtual (P2V) migrations are greatly simplified and can be done without service interruption. VMM will also convert your VMware machines to VHDs using a similar technique, called virtual-to-virtual (V2V) transfer. 111. Unlike most of the products, VMM is not included with Windows Server 2008 and must be purchased and licensed separately. VMM is available from the Microsoft Volume License Services Web site. 112. VMM stores all the resources used in the creation of virtual machines in the VMM library. Your enterprise must have at least one library server. In the default setup, each VMM server contains a library server and share (Figure 5.13). If you already have an existing library server, point this VMM server to it. 113. You cannot move or remove a library server or share once setup is complete, so give some thought to where you place them before completing the installation. 114. VMM Administrator Console. The Admin Console can be installed locally on the VMM server for convenience when you are standing at the console.
You can also install it on a workstation, allowing you to administer your virtual environment right from your desktop. 115. There are two parameters you configure in the Host Properties window (Figure 5.31). The first sets one or more default virtual machine paths; VM paths are locations to store the files associated with deployed virtual machines. Enter the paths you would like to set by clicking the Add button after entering each one; there is also a Remove button if you make a mistake. The second parameter is the port used for remote connection to your VMs via Virtual Machine Remote Control. You can accept the default port or configure it to meet the security policies at your company. 116. Virtual Machine Remote Control (VMRC) connections are made from within the VMM console by simply right-clicking the VM of choice and selecting Connect to virtual machine. VMRC provides you with remote access very similar to Remote Desktop; however, because it is granted via the host, you can connect to a VM before the guest OS has booted. 117. Network binding: select from the three types - private network, internal network, or physical network adapter. A private network allows the virtual machines to communicate with each other, but not with the host. An internal network allows the VMs to talk to each other and with the
host. You can further require that communication with the host be via a VLAN. Selecting Physical network adapter binds the VMs to a physical NIC, allowing them to communicate with each other, the host, and other machines on your network. This is the same as the External setting in the Hyper-V Manager.

118. There are two common types of clusters: failover and load-balanced. A failover cluster consists of a single node that typically handles all of the client requests, called the primary node, and one or more nodes that are largely inactive unless the primary node goes offline, called secondary nodes. In a load-balanced cluster, all of the nodes participate actively in serving client requests. In most cases, a load-balanced cluster can also serve as a failover cluster, since one or more nodes of the load-balanced cluster can typically fail without the other nodes being impacted. 119. In any cluster, two major challenges present themselves: determining the status of a member node (particularly in failover clusters), and determining which node of the cluster currently controls a clustered application and its data. The first challenge is met with a heartbeat network, typically a physically separate set of network cards that communicate a signal, or heartbeat, to determine the status of each node. Data ownership is tracked by a data partition called the quorum. The quorum is a partition separate from the shared data that also needs to be equally accessible to all nodes in the cluster; it tracks which node is the owner of a given set of applications or data.

120. Virtual machines that have VHDs on the same shared disk are placed in the same Service or Application Group. If one virtual machine needs to be moved, then all of the virtual machines in the same group will be moved. If you are using differencing disks for your VHDs, they also must be stored on shared storage, in the same Application or Services group as the original parent disk. As with the other situations above, failure to do so means the VM will not start up in a failover event. The total amount of physical system memory is shared by all the virtual machines; once allocated to a VM, memory cannot be used by another VM.

121. Dynamic VHD files start out small and grow up to the limit you assign. Hyper V will allow you to allocate more virtual disk space than physical space. Exercise caution when oversubscribing your hard disk or you might run out of space at runtime. Oversubscribing the C drive is strongly discouraged for this reason.
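Both disk types can be created ahead of time with the Hyper-V module (a sketch; the paths and sizes are illustrative):

```powershell
# Dynamically expanding: the file starts small and grows toward 500 GB as data is written
New-VHD -Path "D:\VMs\Data01.vhdx" -SizeBytes 500GB -Dynamic

# Fixed: allocates the full size up front, avoiding runtime growth and oversubscription risk
New-VHD -Path "D:\VMs\Data02.vhdx" -SizeBytes 100GB -Fixed
```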

122. If your virtual machine does not automatically launch, you may have missed checking the Start the virtual machine after it is created box in the previous step. If so, skip to the next section, Connecting to a Virtual Machine and Basic Hyper-V Commands, and follow the steps to start your VM. Once it is running, return here. 123. Note: If your VM is running Windows Server 2008, you may be prompted to confirm that you want to upgrade a previous version of the integration services. Windows Server 2008 was originally built with an integration services version that was compatible with the beta version of Hyper-V. If you see this prompt, click OK. Note: If you receive an error at this point referring to not being able to start the virtual machine, you may not have properly configured virtualization in the BIOS of the machine's motherboard. Consult your manufacturer's documentation for details. 124. A virtual machine template is an image of a virtual machine you can leverage to quickly create a new VM without stepping through the entire OS and application installation process. Using templates increases your agility when asked to stand up a new environment. But perhaps more importantly, it saves you time. There are many scenarios where a template can prove valuable. For example, you can create a template of each OS in use in your enterprise, complete with your standard applications already installed. Those standard applications might include your antivirus product, data backup software, or a monitoring agent. 125. SYSPREP removes any unique values assigned to the instance of the OS during installation. When SYSPREP finishes, your VM should shut down and be ready for export. 126. Virtual machine names in the Hyper-V Manager are not tied to the name given to the VM from within its operating system. However, we strongly encourage you to keep these names synchronized to reduce confusion whenever possible. 127.
If you are planning to virtualize your Exchange environment, you must use Exchange Server 2007 with Service Pack 1 or later; earlier Exchange versions are not fully supported on a virtual platform. The use of the Unified Messaging role is not supported if your Exchange server is running within a virtual platform. It is important to note that the base Exchange 2007 system requirement WITHOUT the use of virtualization is Windows 2003, but the use of this operating system as a host for Exchange 2007 is not supported in the virtualization world. We mention this because some may consider building Exchange 2007 in a nonvirtual environment with the plan to migrate the complete operating system into Hyper-V at a later date. If you plan to do this, you
must build your environment using Windows 2008 or later. Exchange 2007 on a virtual platform supports all of the most common forms of storage, including virtual hard drives (VHD), SCSI, and iSCSI storage. If you plan to use SCSI or iSCSI, you must configure it to be presented as block-level storage within the hardware virtualization software, and it must be dedicated to the Exchange guest machine. Exchange 2007 does not support the use of network-attached storage, but if the storage is attached at the host level, the guest will see it as local storage. Should you plan to use SCSI or iSCSI in your virtual Exchange environment, note that Hyper-V only supports VHDs up to 2040 gigabytes (GB) in size, and virtual IDE drives up to 127 GB; plan accordingly. 128. Use a synthetic network adapter provided by the Hyper-V integration tools instead of a legacy network adapter when configuring networking for the virtual machine. Avoid emulated devices for SQL Server deployments when possible; these devices can result in significantly more CPU overhead when compared to synthetic devices.

129. SQL Server is I/O intensive, so it is recommended that you use the pass-through disk option as opposed to fixed-size virtual hard disks (VHDs). Dynamic VHDs are not recommended, for performance reasons. 130. Do not use the Hyper-V snapshot feature on virtual servers that are connected to a SharePoint Products and Technologies server farm. This is because the timer services and the search applications might become unsynchronized during the snapshot process, and once the snapshot is finished, errors or inconsistencies can arise. Detach any server from the farm before taking a snapshot of that server.


131. Deploy Clustered Storage Spaces

By using Storage Spaces and Failover Clustering in Windows Server together, you get a resilient, highly available, and cost-efficient solution that you can scale from simple deployments to the needs of a large datacenter. You can build a failover cluster for your physical workloads or for virtual workloads that are made available through the Hyper-V role. The basic building block of a clustered storage spaces deployment is a small collection of servers, typically two to four, and a set of shared Serial Attached SCSI (SAS) just-a-bunch-of-disks (JBOD) enclosures. The JBOD enclosures should be connected to all the servers, with each server having redundant paths to all the disks in each JBOD enclosure. The following figure shows an example of the basic building block.

Figure 1: Example of clustered storage spaces
By using Cluster Shared Volumes (CSVs), you can unify storage access into a single namespace for ease of management. A common namespace folder is created at the path C:\ClusterStorage\ with all the CSVs in the failover cluster. All cluster nodes can access a CSV at the same time, regardless of the number of servers, the number of JBOD enclosures, or the number of provisioned virtual disks. This unified namespace enables highly available workloads to transparently fail over to another server if a server failure occurs. It also enables you to easily take a server offline for maintenance. Clustered storage spaces can help protect against the following risks:

Physical disk failures: When you deploy a clustered storage space, protection against physical disk failures is provided by creating storage spaces with the mirror or parity resiliency types. Additionally, mirror spaces use dirty region tracking (DRT) to track modifications to the disks in the pool. When the system resumes from a power fault or a hard reset event and the spaces are brought back online, DRT makes the disks in the pool consistent with each other.
Data access failures: If you have redundancy at all levels, you can protect against failed components, such as a failed cable from the enclosure to the server, a failed SAS adapter, power faults, or failure of a JBOD backplane. For example, in an enterprise deployment you should have redundant SAS adapters, SAS I/O modules, and power supplies. To protect against complete disk enclosure failure, you can use redundant JBOD enclosures.
Data corruption and volume unavailability: Both the NTFS file system and the Resilient File System (ReFS) help protect against corruption. For NTFS, improvements to the Chkdsk tool in Windows Server 2012 can greatly improve availability. For more information, see NTFS Health and Chkdsk. If you deploy highly available file servers (without using CSVs), you can use ReFS to enable high levels of scalability, high availability, and data integrity regardless of hardware or software failures. For more information about ReFS, see Resilient File System Overview.
Server node failures: Through the Failover Clustering feature, you can provide high availability for the underlying storage and workloads. This helps protect against server failure and enables you to take a server offline for maintenance without service interruption.
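The building block described above can be provisioned with the storage cmdlets along the following lines (a sketch; the pool, disk, and subsystem names are illustrative and vary by environment):

```powershell
# Pool all shared SAS disks that are eligible for pooling
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "Pool1" -StorageSubSystemFriendlyName "Clustered*" -PhysicalDisks $disks

# Create a mirrored space for resiliency against physical disk failure
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "VDisk1" `
    -ResiliencySettingName Mirror -UseMaximumSize

# After the disk is initialized, formatted, and added to the cluster,
# place it under C:\ClusterStorage as a CSV
Add-ClusterSharedVolume -Name "Cluster Disk 1"
```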

Notes - PowerShell commands:
Export-VM - Exports a virtual machine (VM) to disk.
Export-VMSnapshot - Exports a snapshot as a virtual machine (VM) and writes it to disk.
Checkpoint-VM - Creates a snapshot of a virtual machine (VM).
Compare-VM - Compares a virtual machine (VM) to a host and returns a compatibility report.
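Typical usage of these four cmdlets can be sketched as follows (the VM, snapshot, path, and host names are illustrative):

```powershell
Checkpoint-VM -Name "VM01" -SnapshotName "Before update"
Export-VM -Name "VM01" -Path "D:\Exports"
Export-VMSnapshot -VMName "VM01" -Name "Before update" -Path "D:\Exports"
Compare-VM -Name "VM01" -DestinationHost "HyperVHost02"   # pre-flight a migration
```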

132. DHCP Guard. With DHCP Guard, Microsoft has introduced a new way to look at security within virtual environments. The growth in virtual machines has introduced new security and network challenges. Dynamic Host Configuration Protocol (DHCP) servers are used to allocate network addresses and other key information to computers; part of that information is the DNS server that the computer will use to look up other machines on the Internet. This makes a DHCP server a very attractive target for hackers. With network infrastructure increasingly being locked down, hackers have started to ship malware that includes its own DHCP server. Additionally, at major conferences, and even in many hotels and city centres, it is easy to deploy a rogue DHCP server that will capture a lot of unprotected computers. Once those computers have been captured, their network traffic can be intercepted and examined, which can yield very sensitive information such as usernames and passwords for corporate networks, online banking, and other services. This mechanism also makes it easy for hackers to redirect valid user network requests to bogus sites that deploy malware onto the computers, compromising them. It is not just criminals that create bad DHCP servers: in any test and development environment there can be multiple servers created as part of a test programme. If these servers are routed to the wrong network segment, they can end up being seen by a live production environment and inadvertently cause problems. DHCP Guard is a per-virtual-network-adapter setting that drops DHCP server messages sent by a virtual machine, so an unauthorized (or accidental) DHCP server running inside a VM cannot answer DHCP requests on the network.
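Both DHCP Guard and the Router Guard setting described earlier are enabled per network adapter (a sketch; the VM name is illustrative):

```powershell
# Drop DHCP-server and router-advertisement traffic originating from this VM
Set-VMNetworkAdapter -VMName "VM01" -DhcpGuard On -RouterGuard On
```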


List of Hyper-V PowerShell cmdlets (cmdlet - description):
Add-VMDvdDrive - Installs a DVD drive in a virtual machine (VM).
Add-VMFibreChannelHba - Installs a virtual Fibre Channel host bus adapter in a virtual machine (VM).
Add-VMHardDiskDrive - Installs a hard disk drive in a virtual machine (VM).
Add-VMMigrationNetwork - Adds a network to the list of networks that can be used for virtual machine (VM) migration.
Add-VMNetworkAdapter - Installs a network adapter in a virtual machine (VM).
Add-VMNetworkAdapterAcl - Creates an access control list (ACL) to apply to the traffic sent or received by a virtual machine (VM) network adapter.
Add-VMNetworkAdapterExtendedAcl - Creates an extended ACL for a virtual network adapter.
Add-VMNetworkAdapterRoutingDomainMapping - Adds a routing domain and virtual subnets to a virtual network adapter.
Add-VMRemoteFx3dVideoAdapter - Installs a RemoteFX video adapter in a virtual machine (VM).
Add-VMScsiController - Installs a SCSI controller in a virtual machine (VM).
Add-VMStoragePath - Adds a path to a storage resource pool.
Add-VMSwitch - Adds a virtual switch to a resource pool.
Add-VMSwitchExtensionPortFeature - Adds a feature to a virtual network adapter in a virtual machine or the management operating system (which runs the Hyper-V role).
Add-VMSwitchExtensionSwitchFeature - Adds a feature to a virtual switch.
Checkpoint-VM - Creates a snapshot of a virtual machine (VM).
Compare-VM - Compares a virtual machine (VM) to a host and returns a compatibility report.
Complete-VMFailover - Completes the failover process of the virtual machine (VM).
Connect-VMNetworkAdapter - Connects a virtual network adapter to a virtual network.
Connect-VMSan - Associates a host bus adapter with a virtual storage area network (SAN).

Convert-VHD Copy-VMFile Debug-VM Disable-VMEventing Disable-VMIntegrationService Disable-VMMigration Disable-VMRemoteFXPhysicalVideoAdapter Disable-VMResourceMetering Disable-VMSwitchExtension Disconnect-VMNetworkAdapter Disconnect-VMSan Dismount-VHD Enable-VMEventing Enable-VMIntegrationService Enable-VMMigration Enable-VMRemoteFXPhysicalVideoAdapter Enable-VMReplication Enable-VMResourceMetering Enable-VMSwitchExtension Export-VM Export-VMSnapshot Get-VHD

Converts the format version and type of virtual hard disk file of a virtual machine (VM). Copies a file to a virtual machine. Debugs a virtual machine. Disables virtual machine eventing. Disables an integration service on a virtual machine (VM). Disables migration on one or more virtual machine hosts. Disables a particular RemoteFX physical graphics processing unit (GPU) adapter for use with a RemoteFX virtual machine (VM). Disables resource utilization data collection for a virtual machine (VM) or a resource pool. Disables one or more extensions and the feature sets associated with each extension for one or more specified switches. Disconnects a virtual network adapter from a virtual network or a network resource pool. Removes a host bus adapter from a virtual storage area network (VMSAN). Specifies the path to the files representing the virtual hard disks to be dismounted. Enables the automatic refresh of Hyper-V objects "live" objects for the current Windows PowerShell session. Enables an integration service on a virtual machine (VM). Enables migration on one or more Hyper-V hosts. Enables one or more RemoteFX physical video adapters for use with RemoteFX-enabled virtual machines. Enables replication of a virtual machine. Enables the collection of resource utilization data for one or more virtual machines (VM) or resource pools. Enables one or more extensions and the feature sets associated with each extension on one or more specified virtual switches. Exports a virtual machine (VM) to disk. Exports a snapshot as a virtual machine (VM) and writes it to disk. Creates a VHDObject for each virtual hard disk file specified by path or associated with a virtual machine (VM).

Get-VM Get-VMBios Get-VMComPort Get-VMConnectAccess Get-VMDvdDrive Get-VMFibreChannelHba Get-VMFirmware Get-VMFloppyDiskDrive Get-VMHardDiskDrive Get-VMHost Get-VMHostNumaNode Get-VMHostNumaNodeStatus Get-VMIdeController Get-VMIntegrationService Get-VMMemory Get-VMMigrationNetwork Get-VMNetworkAdapter Get-VMNetworkAdapterAcl Get-VMNetworkAdapterExtendedAcl

Retrieves a VMObject for each virtual machine (VM) on the Hyper-V host. Retrieves the BIOS configuration of a virtual machine (VM). Retrieves a list of the COM ports associated with a virtual machine (VM). Retrieves a list of users that have access to connect to a virtual machine (VM). Retrieves a list of DVD drives that are attached to a virtual machine (VM). Retrieves a list of all Fibre Channel host bus adapters associated with a virtual machine (VM). Gets the firmware configuration of a virtual machine. Retrieves a list of floppy disk drives that are attached to a virtual machine (VM). Retrieves a list of the hard disk drives that are attached to a virtual machine (VM). Retrieves the configuration of a Hyper-V host. Retrieves the NUMA topology of a Hyper-v host. Retrieves a list that associates each virtual machine (VM) with the allocated resources for each NUMA Node on the host. Retrieves a list of the IDE controllers associated with a virtual machine (VM). Retrieves the integration services configuration of a virtual machine (VM). Retrieves the memory configuration of a virtual machine (VM). Retrieves a list of the networks that have been added for migration on a Hyper-V host. Retrieves a list of the virtual network adapters of a virtual machine (VM), the management operating system, or both. Retrieves an access control list (ACL) configured for a virtual machine (VM) network adapter.

Gets extended ACLs configured for a virtual network adapter. Retrieves the Failover IP settings on a virtual machine Get-VMNetworkAdapterFailoverConfiguration (VM) network adaptor. Get-VmNetworkAdapterIsolation Gets isolation settings for a virtual network adapter. GetVMNetworkAdapterRoutingDomainMapping Get-VMNetworkAdapterVlan Gets members of a routing domain. Retrieves virtual local area network (VLAN) settings

configured on a virtual network adapter. Get-VMProcessor Get-VMRemoteFx3dVideoAdapter Get-VMRemoteFXPhysicalVideoAdapter Get-VMReplication Get-VMReplicationAuthorizationEntry Get-VMReplicationServer Get-VMResourcePool Get-VMSan Get-VMScsiController Get-VMSnapshot Get-VMStoragePath Get-VMSwitch Get-VMSwitchExtension Get-VMSwitchExtensionPortData Get-VMSwitchExtensionPortFeature Get-VMSwitchExtensionSwitchData Get-VMSwitchExtensionSwitchFeature Get-VMSystemSwitchExtension Get-VMSystemSwitchExtensionPortFeature Get-VMSystemSwitchExtensionSwitchFeature Grant-VMConnectAccess Retrieves the processor configuration of a virtual machine (VM). Retrieves the RemoteFX adapter of a virtual machine (VM). Retrieves a list of physical graphics processing unit (GPU) adapters in the server that can be used with RemoteFX. Retrieve a list of virtual machine (VM) replication plans or a specific replication plan and associated settings. Retrieve the authorization list or a specific authorization entry. Retrieves the authentication details of the recovery server. Retrieves the resource pools that meet the specified criteria. Retrieves a list of virtual storage area networks (VMSANs) available on a host. Retrieves the virtual SCSI controllers for a virtual machine (VM). Retrieves a list of the snapshots of a virtual machine (VM). Retrieves a list of the paths in a storage resource pool. Retrieves the virtual network from a virtual machine (VM) host. Retrieves a list of the switch extensions on one or more virtual switches. Retrieves the status or the statistics for the extension of a virtual switch (VMSwitch) applied on a virtual network adapter. Retrieve features configured on a virtual network adapter. Retrieves the status or the statistics for the extension of a virtual switch (VMSwitch) applied on a VMSwitch. Retrieves features configured on a virtual switch (VMSwitch). Retrieves a list of the switch extensions that are installed on a physical server at a system level. 
Retrieves the default instance of the port level feature or features available in an extension at a system level. Retrieves the default instance of the switch level feature or features available in an extension at a system level. Grants a user access to connect to a virtual machine (VM).

Import-VM Import-VMInitialReplication Measure-VM Measure-VMReplication Measure-VMResourcePool Merge-VHD Mount-VHD Move-VM Move-VMStorage New-VFD New-VHD New-VM New-VMReplicationAuthorizationEntry New-VMResourcePool New-VMSan New-VMSwitch Optimize-VHD Remove-VM Remove-VMDvdDrive Remove-VMFibreChannelHba Remove-VMHardDiskDrive Remove-VMMigrationNetwork Remove-VMNetworkAdapter Remove-VMNetworkAdapterAcl Remove-VMNetworkAdapterExtendedAcl

Imports a virtual machine (VM) from a folder. Imports the initial replication at a recovery site. Retrieves the resource utilization data of virtual machines (VMs). Retrieves statistics related to the replication of a virtual machine. Retrieves the resource utilization information for a resource pool. Merges virtual hard disks (VHDs) in a differencing virtual hard disk (VHD) chain. Mounts one or more virtual hard disks (VHDs) specified by one or more virtual hard disk (VHD) files. Migrates an offline virtual machine (VM) or Live Migrates a running virtual machine (VM). Moves the storage of a virtual machine (VM). Creates a virtual floppy disk. Creates one or more new virtual hard disk (VHD) files. Creates a new virtual machine (VM). Creates an authorization entry containing the allowed primary server and corresponding replica storage. Creates a resource pool. Creates a new virtual storage area network (VMSAN) on a Hyper-V host. Creates a new virtual network switch on a Hyper-V host. Compacts one or more dynamic or differencing virtual hard disk (VHD) files. Deletes the configuration file for a virtual machine. Deletes one or more virtual DVD drives from a virtual machine (VM). Deletes a Fibre Channel host bus adapter from a virtual machine (VM) configuration. Deletes one or more virtual hard disks (VHDs) from a virtual machine (VM). Deletes a network from the list of networks that can be used for virtual machine (VM) migration. Deletes one or more network adapters from a virtual machine (VM). Deletes an access control list (ACL) applied to a virtual machine (VM) network adapter for traffic that is sent or received. Removes an extended ACL for a virtual network adapter.

RemoveVMNetworkAdapterRoutingDomainMapping Remove-VMRemoteFx3dVideoAdapter Remove-VMReplication Remove-VMReplicationAuthorizationEntry Remove-VMResourcePool Remove-VMSan Remove-VMSavedState Remove-VMScsiController Remove-VMSnapshot Remove-VMStoragePath Remove-VMSwitch Remove-VMSwitchExtensionPortFeature Remove-VMSwitchExtensionSwitchFeature Rename-VM Rename-VMNetworkAdapter Rename-VMResourcePool Rename-VMSan Rename-VMSnapshot Rename-VMSwitch Repair-VM Reset-VMReplicationStatistics Reset-VMResourceMetering Resize-VHD Restart-VM Restore-VMSnapshot Resume-VM Resume-VMReplication

Removes a virtual subnet from a routing domain. Deletes a RemoteFX adapter from a virtual machine (VM). Delete the replication relationship for a virtual machine. Deletes an authorization entry. Deletes a resource pool. Deletes a virtual storage area network (VMSAN) from a Hyper-V host. Deletes the saved state of a saved virtual machine (VM). Deletes one or more SCSI controllers from a virtual machine (VM). Deletes a snapshot or snapshot tree. Deletes a path from a virtual machine (VM) storage resource pool. Deletes a virtual network. Deletes a flow sheet document (FSD) from a virtual network adapter. Deletes a flow sheet document (FSD) from a virtual switch. Renames a virtual machine (VM). Renames a virtual network adapter on a virtual machine or on the management operating system. Renames a resource pool. Renames an existing virtual machine storage area network (VMSan). Renames a snapshot. Renames a virtual network. Restores one or more virtual machines (VMs) to usable condition based upon adjustments fixes contained in each compatibility report. Resets the data collected about resource utilization for a virtual machine (VM) or a resource pool. Resets the resource utilization data collected by Hyper-V resource metering. Resize a virtual hard disk (VHD). Restarts a virtual machine (VM) immediately with shutting down the operating system. Restores a virtual machine (VM) snapshot. Resumes a paused (suspended) or saved (hibernated) virtual machine (VM). Resumes the replication of a virtual machine (VM).

Revoke-VMConnectAccess Save-VM Set-VHD Set-VM Set-VMBios Set-VMComPort Set-VMDvdDrive Set-VMFibreChannelHba Set-VMFirmware Set-VMFloppyDiskDrive Set-VMHardDiskDrive Set-VMHost Set-VMMemory Set-VMMigrationNetwork

Revokes the access assigned to a user for connections to a virtual machine (VM). Saves a virtual machine (VM). Modifies the differencing virtual hard disk (VHD) chain settings to assign the parent of a virtual hard disk file (VHD). Modifies the properties for a virtual machine (VM). Modifies the BIOS settings of a virtual machine (VM). Modifies the virtual COM port settings for a virtual machine (VM). Modifies the virtual DVD drive settings for a virtual machine (VM). Modifies the existing Fibre Channel host bus adapter settings for a virtual machine (VM). Sets the firmware configuration of a virtual machine. Modifies the virtual floppy drive (VFD) settings for a virtual machine (VM). Modifies the virtual hard disk (VHD) drive settings for a virtual machine (VM). Modifies the settings for a Hyper-V host. Modifies the memory settings for a virtual machine (VM).

Sets the subnet, subnet mask, and/or priority of a migration network. Modifies the network adapter settings for a virtual Set-VMNetworkAdapter machine (VM). Modifies the Failover IP settings of the network adapter Set-VMNetworkAdapterFailoverConfiguration for a virtual machine (VM). Set-VmNetworkAdapterIsolation SetVmNetworkAdapterRoutingDomainMapping Set-VMNetworkAdapterVlan Set-VMProcessor Set-VMRemoteFx3dVideoAdapter Set-VMReplication Set-VMReplicationAuthorizationEntry Set-VMReplicationServer Modifies isolation settings for a virtual network adapter. Sets virtual subnets on a routing domain. Modifies the virtual local area network (VLAN) settings configured on a virtual machine (VM) network adapter. Modifies the virtual processor settings for a virtual machine (VM). Modifies the RemoteFX adapter settings for a virtual machine (VM). Modifies the replication relationship settings for a virtual machine (VM). Modifies the authorization entry for a virtual machine (VM). Modifies the settings that specify the server authentication and the associated ports of the recovery

server for a virtual machine (VM). Set-VMResourcePool Set-VMSan Set-VMSwitch Modifies the settings that specify the relationship between two resource pools. Modifies the existing virtual machine storage area network (VMSAN) settings on the Hyper-V host. Modifies the virtual network settings for a virtual machine (VM). Modifies an existing flow sheet document (FSD) of a virtual machine network interface controller (VMNIC) or parent virtual network interface controller (VNIC) for a virtual machine (VM). Modifies an existing flow sheet document (FSD) of a vmswitch for a virtual machine (VM). Intiates a virtual machine (VM) that is shutdown off , hibernated saved, or suspended paused. Initiates the failover of a virtual machine (VM). Initiates the replication for a virtual machine (VM). Discontinues running an active virtual machine (VM). Discontinues an on-going failover for a virtual machine (VM). Discontinues an on-going replication for a virtual machine (VM). Discontinues an on-going resync operation for a virtual machine (VM). Pauses an active virtual machine (VM). Pauses replication for a virtual machine (VM). Tests the connection configured for replication traffic. Verifies the usability of one of more virtual hard disk (VHD) files.


Set-VMSwitchExtensionSwitchFeature Start-VM Start-VMFailover Start-VMInitialReplication Stop-VM Stop-VMFailover Stop-VMInitialReplication Stop-VMReplication Suspend-VM Suspend-VMReplication Test-VMReplicationConnection Test-VHD

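To illustrate how several of the cmdlets above combine in practice, the following sketch creates and starts a new virtual machine. The VM name, switch name, and file paths are placeholder values (not names from this guide), and the commands must run in an elevated session on a Hyper-V host.

```powershell
# Create a dynamic VHD, build a VM on it, connect it to an existing
# virtual switch, take a snapshot, and start it. All names and paths
# are examples only.
New-VHD -Path "D:\VHDs\LabVM01.vhdx" -SizeBytes 60GB -Dynamic
New-VM -Name "LabVM01" -MemoryStartupBytes 1GB -VHDPath "D:\VHDs\LabVM01.vhdx"
Connect-VMNetworkAdapter -VMName "LabVM01" -SwitchName "Lab Switch"
Checkpoint-VM -Name "LabVM01" -SnapshotName "Clean install"
Start-VM -Name "LabVM01"
```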

NIC Teaming:

Windows Server 2012 is a cloud-optimized OS, delivering the key capabilities for building scalable, mission-critical cloud environments. The new NIC Teaming capability in this release of Windows Server brings continuous network availability and increased network performance, supporting greater VM density and lower operational costs. Cloud environments need continuous availability to meet the demanding needs of high-density workloads. In particular, downtime in network connectivity creates ill will with users and may affect revenue. Service outages can result from network interface card (NIC) failures, network switch failures, or even something as mundane as an accidentally disconnected cable. The trend to increase the density of VMs on a physical server places even more pressure on the underlying network reliability because a single outage can

impact a variety of services. As a result, datacenter operators require solutions that can quickly and automatically recover from connectivity failures, while being easy to set up and manage. At the same time, greater VM densities and the push to virtualize demanding workloads such as media streaming place new requirements on bandwidth aggregation across the datacenter. In particular, having invested in redundant network connectivity, datacenter operators should be able to make use of that spare capacity without requiring special hardware or changes to application workloads. This aggregated network capacity ultimately reduces network infrastructure investment and improves resource utilization. Windows Server 2012 NIC Teaming provides transparent network failover and bandwidth aggregation. Uniquely, the Windows solution is hardware-independent and can be deployed under all existing workloads and applications on both physical and virtualized servers.

What is NIC Teaming?

A solution commonly employed to solve the network availability and performance challenges is NIC Teaming. NIC Teaming (also known as NIC bonding, network adapter teaming, or load balancing and failover) is the ability to operate multiple NICs as a single interface from the perspective of the system. In Windows Server 2012, NIC Teaming provides two key capabilities: 1. Protection against NIC failures by automatically moving the traffic to the remaining operational members of the team, i.e., failover, and 2. Increased throughput by combining the bandwidth of the team members as though they were a single larger-bandwidth interface, i.e., bandwidth aggregation. Many vendors have provided NIC Teaming solutions for Windows Server, but these solutions shared many limitations. Those solutions are typically tied to a particular NIC manufacturer, so you cannot always team together NICs from multiple vendors.
Many of the solutions do not integrate well with other networking features of Windows Server or with features such as Hyper-V. Finally, each of these NIC Teaming solutions is managed differently, and most cannot be managed remotely. As a result, it is not easy for an administrator to move from machine to machine in a heterogeneous environment and know how to configure NIC Teaming on each host.

NIC Teaming in Windows Server 2012

Windows Server 2012 includes an integrated NIC Teaming solution that is easy to set up and manage, is vendor-independent, and supports the performance optimizations provided by the underlying NICs. NIC Teaming is easily managed through PowerShell or a powerful, intuitive UI (the UI is layered on top of PowerShell). Teams can be created, configured, monitored, and deleted at the click of a mouse. Multiple servers can be managed at the same time from the same UI. Through the power of PowerShell remote management, the NIC Teaming UI can be run on Windows 8 clients to remotely manage servers even when those servers are running Windows Server 2012 Server Core. A team can include NICs from any vendor, and it can even include NICs from multiple vendors. This vendor-agnostic approach brings a common management model to even the most heterogeneous datacenter. New NICs can be added to systems as needed and effortlessly integrated into the existing NIC Teaming configuration.

Finally, the team supports all the networking features that the underlying NICs support, so you don't lose important performance functionality implemented by the NIC hardware. This no-compromise approach means that NIC Teaming can be deployed with confidence on all servers.

NIC Teaming Configuration Options

NIC Teaming in Windows Server 2012 supports two configurations that meet the needs of most datacenter administrators.
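As a sketch of how such a team is created with the built-in PowerShell management described above (the team and adapter names are assumed placeholders):

```powershell
# Create a switch-independent, active/active team from two physical
# adapters and verify its state. "Team1", "NIC1", and "NIC2" are
# example names.
New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm TransportPorts
Get-NetLbfoTeam -Name "Team1"
```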


Integration Services:

Windows Server 2012 Hyper-V Integration Services

In Windows Server 2012, Hyper-V Integration Services (IS) include six components that provide performance enhancements to a child partition (i.e., a virtual machine or guest) and additional interoperability between child and parent partitions. Integration Services are available in a child partition only after they are installed in a supported guest operating system. It is also possible to update Integration Services after the initial installation, and this is usually recommended when migrating a virtual machine from an older to a newer version of Hyper-V (e.g., Windows Server 2008 R2 to Windows Server 2012), or as new versions of the Integration Services are released.

Integration Services are installed as user mode components in the guest operating system, and are implemented in the following services:

Hyper-V Heartbeat Service (vmicheartbeat)
Hyper-V Guest Shutdown Service (vmicshutdown)
Hyper-V Data Exchange Service (vmickvpexchange)
Hyper-V Time Synchronization Service (vmictimesync)
Hyper-V Remote Desktop Virtualization Service (vmicrdv)
Hyper-V Volume Shadow-Copy Requestor Service (vmicvss)

Integration Services in a child partition communicate over a Virtual Machine Bus (VMBus) with components in the parent partition virtualization stack that are implemented as virtual devices (VDevs). The VMBus supports high-speed, point-to-point channels for secure inter-partition communication between child and parent partitions. A dedicated VDev manages each of the parent partition Integration Services functions, just as each dedicated service manages the different Integration Services functions in a child partition. Through this architecture, Integration Services components provide enhanced functionality

and performance for mouse, keyboard, display, network, and storage devices installed in a virtual machine.

Hyper-V Heartbeat Service

The Hyper-V Heartbeat Service provides a method for the parent partition to detect whether a guest operating system running in a child partition has become unresponsive. The parent partition sends regular heartbeat requests to a child partition and logs an event if a response is not received within a defined time boundary. If a heartbeat response is not received within the expected timeframe, the parent partition will continue to send heartbeat requests and generate events for missing replies.

Hyper-V Guest Shutdown Service

To cleanly shut down a virtual machine without needing to interact directly with the guest operating system through a virtual machine connection or Remote Desktop Protocol (RDP) session, the Hyper-V Guest Shutdown Service provides a virtual machine shutdown function. The shutdown request is initiated from the parent partition to the child partition using a Windows Management Instrumentation (WMI) call.

Hyper-V Data Exchange Service

The purpose of the Hyper-V Data Exchange Service is to provide a method to set, delete, and enumerate specific information about the virtual machine and guest operating system configuration running in a child partition. This allows the parent partition to set specific data values in the guest operating system and retrieve data for internal use or to provide to third-party management or other tools. Some examples of the information available through the data exchange component include:

The major version number of the guest operating system
The minor version number of the guest operating system
The build number of the guest operating system
The version of the guest operating system
The processor architecture identifier (e.g., Intel, AMD)
The fully qualified DNS name that uniquely identifies the guest operating system

Hyper-V Time Synchronization Service

The Hyper-V Time Synchronization Service provides a method to ensure that a virtual machine running in a child partition can use the parent partition as a consistent and reliable time synchronization source. In particular, this Integration Services component addresses two specific situations:

Keeping time synchronized in the guest operating system to account for time drift in the virtual machine
Restoring a virtual machine from a snapshot or saved state where a significant period has passed since the guest operating system last synchronized time

In the latter case, a standard network-based protocol could fail to synchronize successfully, because the maximum allowed time difference could commonly be exceeded for virtual machine snapshots or even after a saved state.
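Individual Integration Services components such as time synchronization can be inspected and toggled per virtual machine from the parent partition; a minimal sketch, assuming an example VM named SRV01:

```powershell
# List the integration services offered to the VM, then enable the
# Time Synchronization component. "SRV01" is an example VM name.
Get-VMIntegrationService -VMName "SRV01"
Enable-VMIntegrationService -VMName "SRV01" -Name "Time Synchronization"
```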

Hyper-V Remote Desktop Virtualization Service

The Hyper-V Remote Desktop Virtualization Service communicates with the Remote Desktop Virtualization Host (RDVH) component of Remote Desktop Services to provide a method to manage virtual machines that belong to a Virtual Desktop Infrastructure (VDI) collection.

Hyper-V Volume Shadow-Copy Requestor Service

For guest operating systems that support Volume Shadow Copy (VSS), the Hyper-V Volume Shadow-Copy Requestor Service allows the parent partition to request the synchronization and quiescence of a virtual machine running in a child partition.

Hyper-V Integration Services Support

Integration Services support a specific set of Windows server and client operating systems, as well as some Linux operating systems. In addition, only a subset of Integration Services components may be supported for some legacy Windows operating systems and Linux distributions. Tables 1 and 2 contain the Integration Services support matrix for supported server operating systems and client operating systems, respectively.

Windows Server 2012: Integration Services are built in to the operating system and do not require a separate installation.
Windows Server 2008 R2 with SP1: Datacenter, Enterprise, Standard, and Web editions are supported. Integration Services must be installed after the guest operating system installation in the virtual machine.
Windows Server 2008 R2: Datacenter, Enterprise, Standard, and Web editions are supported. Integration Services must be installed after the guest operating system installation in the virtual machine.
Windows Server 2008 with SP2: Datacenter, Enterprise, Standard, and Web editions are supported, both 32-bit and 64-bit. Integration Services must be installed after the guest operating system installation in the virtual machine.
Windows Home Server 2011: Integration Services must be installed after the guest operating system installation in the virtual machine.
Windows Small Business Server 2011: Essentials and Standard editions are supported. Integration Services must be installed after the guest operating system installation in the virtual machine.
Windows Server 2003 R2 with SP2: Datacenter, Enterprise, Standard, and Web editions are supported. Integration Services must be installed after the guest operating system installation in the virtual machine.
Windows Server 2003 with SP2: Datacenter, Enterprise, Standard, and Web editions are supported. Integration Services must be installed after the guest operating system installation in the virtual machine.
CentOS 5.7 and 5.8: Download Linux Integration Services Version 3.4 for Hyper-V. Integration Services must be installed after the guest operating system installation in the virtual machine.
CentOS 6.0 to 6.3: Download Linux Integration Services Version 3.4 for Hyper-V. Integration Services must be installed after the guest operating system installation in the virtual machine.
Red Hat Enterprise Linux 5.7 and 5.8: Download Linux Integration Services Version 3.4 for Hyper-V. Integration Services must be installed after the guest operating system installation in the virtual machine.
Red Hat Enterprise Linux 6.0 to 6.3: Download Linux Integration Services Version 3.4 for Hyper-V. Integration Services must be installed after the guest operating system installation in the virtual machine.
SUSE Linux Enterprise Server 11 SP2: Integration Services are built in to the operating system and do not require a separate installation.
Open SUSE 12.1: Integration Services are built in to the operating system and do not require a separate installation.
Ubuntu 12.04: Integration Services are built in to the operating system and do not require a separate installation.

Table 1: Integration Services Support Matrix for Server Guest Operating Systems

When Linux Integration Services Version 3.4 is installed for supported Linux guest operating systems, the following functionality is available:

Time synchronization
Guest shutdown
Heartbeat
Data exchange
Optimized network, IDE, and SCSI controllers
Symmetric multiprocessing (SMP)
Mouse support
Live migration
Jumbo frames
VLAN tagging and trunking

Features that are not supported in Linux Integration Services Version 3.4 include the Hyper-V Volume Shadow-Copy Requestor (since VSS is not supported in Linux) and TCP offload. For supported Linux guest operating systems, the data exchange component allows a virtual machine to communicate the following information to Hyper-V:

Fully qualified domain name of the virtual machine
Linux Integration Services version that is installed
IPv4 and IPv6 addresses for all virtual Ethernet adapters
Operating system information, including distribution and kernel versions
Processor architecture

Windows 8: Integration Services are built in to the operating system and do not require a separate installation.
Windows 7 with SP1: Ultimate, Enterprise, and Professional editions are supported, both 32-bit and 64-bit. Integration Services must be installed after the guest operating system installation in the virtual machine.
Windows 7: Ultimate, Enterprise, and Professional editions are supported, both 32-bit and 64-bit. Integration Services must be installed after the guest operating system installation in the virtual machine.
Windows Vista with SP2: Ultimate, Enterprise, and Business editions, including KN and N, are supported. Integration Services must be installed after the guest operating system installation in the virtual machine.
Windows XP with SP3: Professional edition is supported. Integration Services must be installed after the guest operating system installation in the virtual machine.
Windows XP x64 with SP2: Professional edition is supported. Integration Services must be installed after the guest operating system installation in the virtual machine.

Table 2: Integration Services Support Matrix for Client Guest Operating Systems

Hyper-V Integration Services Installation

The installation of Integration Services should be performed after the guest operating system loads for the first time. From the Hyper-V Manager console, launch the Virtual Machine Connection tool to connect to the guest operating system, and then log in with an account that has administrative privileges. After you are logged in, select the Insert Integration Services Setup Disk option from the Action menu in the Virtual Machine Connection window. This step attaches an ISO image named vmguest.iso to the virtual machine DVD drive. The installation of Integration Services may begin automatically, or you may have to start the installation manually, depending on the virtual machine guest operating system. After the installation is complete, you should restart the virtual machine. You can verify that the Integration Services are installed in the guest operating system by browsing for the Hyper-V services in the Services administration tool.

Conclusion

Hyper-V Integration Services provide a set of components that support important functionality and integration between the Hyper-V parent and child partitions, as well as performance enhancements to core virtual machine devices. You should install the Integration Services to ensure the availability of these services in virtual machine deployments. It is also important to install new versions of the Integration Services when updates are released.
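The verification step described above can also be scripted; inside the guest, the installed Integration Services register as Windows services whose names begin with vmic:

```powershell
# Run inside the guest operating system to list the Integration
# Services components and their status.
Get-Service vmic* | Select-Object Name, DisplayName, Status
```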


Private virtual LANs (PVLANs)

PVLANs allow Hyper-V administrators to isolate virtual machines from each other (for example, so that virtual machines cannot contact other virtual machines over the network) while still maintaining external network connectivity for all virtual machines. This feature was not available in Windows Server 2008 R2 Hyper-V.

Rogue DHCP servers

A rogue DHCP server is a DHCP server on a network that is not under the administrative control of the network staff. It is a network device, such as a modem or a router, connected to the network by a user who may be either unaware of the consequences of their actions or knowingly using it for network attacks such as man-in-the-middle attacks. Some computer viruses and other malicious software, especially those classified as rootkits, have been found to set up rogue DHCP servers. As clients connect to the network, both the rogue and the legitimate DHCP server will offer them IP addresses, as well as default gateway, DNS server, and WINS server settings, among others. If the information provided by the rogue DHCP server differs from the real one, clients accepting IP addresses from it may experience network access problems, including speed issues and the inability to reach other hosts because of an incorrect IP network or gateway. In addition, if a rogue DHCP server is set to provide as the default gateway the IP address of a machine controlled by a misbehaving user, that user can sniff all the traffic sent by the clients to other networks, violating network security policies as well as user privacy (see man in the middle). Virtual machine software such as VMware can also act as a rogue DHCP server inadvertently when run on a client machine joined to a network, handing out random IP addresses to the clients around it. The end result can be that large portions of the network are cut off from both the Internet and the rest of the domain without any access at all.


Disable guest DHCP with Hyper-V DHCP Guard

If you have any amount of test or development infrastructure, a rogue DHCP server has surely shown up. While I've been lucky to never have one show up on a production segment, I've seen many a network administrator track one down and shut down a port to try to find the offending virtual machine. As it turns out, virtual machines can be hard to find: they have different MAC address formats and may move around to different segments if on a laptop or a host with migration technology enabled. With Hyper-V 3.0, Hyper-V administrators can configure their virtual machine libraries so that DHCP server packets sent from a virtual machine are dropped. The DHCP Guard feature sets this property in the virtual machine network configuration. This step is shown in Figure A.

The selected area has the DHCP Guard option selected. There are a number of ways to tackle this problem, including switch configuration and possibly arcane server administrative practices. However, in any test or development capacity the rules seem to lighten, as requirements in these environments differ from their production counterparts.

One of Active Directory's keystone features is authorization for DHCP servers. Given that Hyper-V is well aware of Active Directory and other Windows environments, this works well in the grand scheme of things to protect against unauthorized DHCP advertisements. DHCP Guard is an example of a granular setting that should be set as part of the virtual machine library creation process, for both server and client operating systems. Should a user install something like VMware Workstation, Oracle VM VirtualBox, or another type 2 hypervisor, additional operating systems (including a DHCP server) could be added within.
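DHCP Guard can also be enabled from PowerShell rather than through the VM settings dialog. A hedged sketch (the VM name is illustrative):

```powershell
# Drop DHCP server packets originating from this VM's network adapters.
Set-VMNetworkAdapter -VMName "TestVM01" -DhcpGuard On

# The related Router Guard setting blocks router advertisements and
# redirection messages from the VM in the same way.
Set-VMNetworkAdapter -VMName "TestVM01" -RouterGuard On

# Verify the settings.
Get-VMNetworkAdapter -VMName "TestVM01" | Select-Object VMName, DhcpGuard, RouterGuard
```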

Customers who value route diversity (in order to withstand switch failures) can connect their hosts to different switches. In this Switch Independent mode, the switches are not aware that different interfaces on the server comprise a team. Instead, all the teaming logic is done exclusively on the server. Many customers choose to operate in an active/active mode, where traffic is spread across both NICs until a failure occurs. Historically, some customers prefer to operate in an active/standby mode, where all the traffic is on one team member until a failure occurs. When a failure is detected, all the traffic moves to the standby team member. This mode of operation also can be created in Switch Independent mode teaming. The real tradeoff is what happens when there is a failure. If you've configured active/standby, then you will have the same level of performance in a failure condition, whereas you'll have degraded performance if you go with the active/active mode. On the other hand, when you don't have a failure, you'll have much greater bandwidth using active/active. In addition to achieving reliability, customers can also choose to aggregate bandwidth to a single external switch using NIC Teaming. This is done by creating a team in a Switch Dependent mode, wherein all NICs that comprise the team are connected to the same switch. There are two common varieties of Switch Dependent teams: those that use no configuration protocol, a method often called static or generic teaming, and a mode that uses the IEEE 802.1AX Link Aggregation Control Protocol (LACP) to coordinate between the host and the switch. Both of these models are fully supported in Windows Server 2012. For more details on the modes of operation and load distribution schemes, please refer to the NIC Teaming User's Guide. Switch Dependent mode treats the members of the team as an aggregated big pipe with a minor restriction (explained below).
Each side balances the load between the team members independent of what the other side is doing. And, subject to the minor restriction, the pipe is kept full in both directions. What is that minor restriction? TCP/IP can recover from missing or out-of-order packets. However, out-of-order packets seriously impact the throughput of the connection. Therefore, teaming solutions make every effort to keep all the packets associated with a single TCP stream on a single NIC so as to minimize the possibility of out-of-order packet delivery. So, if your traffic load comprises a single TCP stream (such as a Hyper-V live migration), then having four 1 Gb/s NICs in an LACP team will still only deliver 1 Gb/s of bandwidth, since all the traffic from that live migration will use one NIC in the team. However, if you do several simultaneous live migrations to multiple destinations, resulting in multiple TCP streams, then the streams will be distributed amongst the teamed NICs.

Configuring NIC Teaming in Windows Server 2012

As mentioned previously, NIC Teaming provides a rich PowerShell interface for configuring and managing teams either locally or remotely. Moreover, for those who prefer a UI-based management model, the NIC Teaming UI is a complete management solution that runs PowerShell under the covers. Both PowerShell and UI administration are covered in depth in the NIC Teaming User's Guide. Below are some highlights that show just how easy it is to set up NIC Teaming. Suppose you have a server with four NICs: NIC1, NIC2, NIC3, and NIC4. In order to put NIC1 and NIC2 in a team, you can run this PowerShell command as an administrator:

New-NetLbfoTeam MyTeam NIC1,NIC2

When the command returns, you will have a team with the name MyTeam and team members NIC1 and NIC2, set up in Switch Independent mode. It is also simple to make more advanced changes. For

example, the PowerShell command below will create the team as an LACP team bound to the Hyper-V switch:

New-NetLbfoTeam MyTeam NIC1,NIC2 -TeamingMode Lacp -LoadBalancingAlgorithm HyperVPort

As noted earlier, you could use the UI instead to achieve the same results. The NIC Teaming UI can be invoked from Server Manager or by invoking lbfoadmin.exe at a command prompt. The UI is available on Windows Server 2012 configurations that have a local UI and on Windows Server 2012 or Windows 8 systems that run the Remote Server Administration Tools (RSAT). The UI can manage multiple servers simultaneously.

Now you can create a new team. Select the NICs you want to team (Ctrl-click each NIC), then right-click the group and click Add to New Team:

This will bring up the New Team dialog. Enter the team name.

You can configure the team further to support the teaming mode and other properties.

Now the team is set up. It is easy to make changes to the team through the Team TASKS dropdown or by right-clicking on the team.

If you want to modify the team to be an active/standby team, simply right-click on the team and select Properties.

This will bring up the Team Properties dialog. Click on the additional properties drop-down, then the Standby adapter drop-down, and select the standby NIC.

After you select OK to apply the change you will see that the NIC is now in Standby mode:
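The same active/standby change can be made from PowerShell instead of the dialog. A hedged sketch (the team and NIC names are the ones used in this example):

```powershell
# Place NIC2 into standby; traffic moves to it only if an active member fails.
Set-NetLbfoTeamMember -Name "NIC2" -Team "MyTeam" -AdministrativeMode Standby

# Confirm the member's administrative and operational state.
Get-NetLbfoTeamMember -Team "MyTeam" |
    Select-Object Name, AdministrativeMode, OperationalStatus
```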

Summary

NIC Teaming in Windows Server 2012 enables continuous network availability and increased network performance for all workloads, even when the team comprises NICs from multiple vendors. NIC Teaming can be managed easily using PowerShell or the built-in NIC Teaming UI. NIC Teaming enables greater workload density while reducing operational costs for your private, public, and hybrid cloud deployments.

139. Features that were not available in Windows Server 2008 R2 Hyper-V and only available in Windows Server 2012 Hyper-V:

DHCP Guard
Router Guard
Hyper-V Extensible Switch
Extension monitoring
Extension uniqueness
Extensions that learn the life cycle of virtual machines
Extensions that prohibit state changes
Multiple extensions on the same switch
IP address rewrite
Generic Routing Encapsulation
Live storage migration
NUMA support
SR-IOV
Smart paging
Runtime memory configuration
Resource Metering in Hyper-V
VHDX
Offloaded data transfer support

Data Center Bridging (DCB)
Virtual Fibre Channel in Hyper-V
MPIO
Support for 4 KB disk sectors in Hyper-V virtual hard disks
Quality of Service (QoS) minimum bandwidth
Encrypted cluster volumes
Cluster Shared Volume (CSV) 2.0
Application monitoring
In-box live migration queuing
Anti-affinity virtual machine rules (affinity virtual machine rules were partially supported)

140. Hyper-V host servers need to have at least one NIC that is dedicated to hosting management traffic. As a best practice, you should assign a static IP address to this NIC.

141. If your organization is currently running Hyper-V 2.0, it is usually possible to perform an in-place upgrade to Hyper-V 3.0. In preparation for upgrading a standalone Hyper-V 2.0 server, you must shut down any virtual machines that are currently running.

142. It is worth noting that Windows Server 2012 is designed to perform a Server Core deployment by default. However, you cannot perform an in-place upgrade of a full Windows Server deployment (with a GUI) to a Server Core deployment. If you want a Server Core deployment, you will have to upgrade to the full GUI version of Windows Server 2012 and then uninstall the GUI later.

143. One of the big disadvantages to performing an in-place upgrade is that it can cause virtual machines to be down for a significant amount of time. One way to reduce the amount of time during which virtual machines are unavailable is to perform a migration rather than an upgrade. A migration involves deploying Hyper-V 3.0 onto new hardware while your existing hardware continues to run Hyper-V 2.0. Once the deployment is complete, you can migrate the individual virtual machines from the Hyper-V 2.0 deployment to the Hyper-V 3.0 deployment.

144. Exporting a virtual machine creates the following folders:
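An export like the one described here can also be started from PowerShell. A hedged sketch (the VM name and destination path are illustrative):

```powershell
# Export the VM's configuration, virtual hard disks, and snapshots
# into a folder structure under D:\Exports.
Export-VM -Name "MyVM" -Path "D:\Exports"
```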

The Virtual Machines folder


This folder will contain a single .exp file, which will use the virtual machine ID for its name (in my case: "6D59FE56-6D20-4129-9BF3-2457DDB58A9A.exp"). The .exp file is the exported configuration of the virtual machine. There will also be another folder in this folder, which is also named using the virtual machine ID. If the virtual machine was in a saved state when it was exported, this subfolder will contain two saved state files (a .vsv and a .bin file); otherwise, it will be empty.

The Virtual Hard Disks folder


This folder contains copies of each of the virtual hard disks associated with the virtual machine. Note that if you have two virtual hard disks with the same name (but different locations) associated with a virtual machine, exporting the virtual machine will fail.

The Snapshots folder


This folder will contain: a .exp file for each snapshot the virtual machine had (named after the snapshot ID); a folder named after the snapshot ID that contains the saved state files for the snapshot in question; and a folder named after the virtual machine ID that contains the differencing disks (.avhd files) used by all of the snapshots associated with the virtual machine.


I will look at this file in more detail another day. It is not necessary for standard export / import usage.


Three types of virtual machine import

Restore and copy were available in previous versions of Hyper-V, but register is a new option. Let me quickly step you through what each of these options does:

Register: If you have a virtual machine where you have already put all of the virtual machine files exactly where you want them, and you just need Hyper-V to start using the virtual machine where it is, this is the option you want to choose.

Restore: If your virtual machine files are stored on a file share, removable drive, etc., and you want Hyper-V to move the files to the appropriate location for you and then register the virtual machine, this is the option for you.

Copy: If you have a set of virtual machine files that you want to import multiple times (e.g., you are using them as a template for new virtual machines), this is what you want to choose. This option will copy the files to an appropriate location, give the virtual machine a new unique ID, and then register the virtual machine.
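The three import types map onto switches of the Import-VM cmdlet. A hedged sketch (the configuration-file path is illustrative, and mapping "Restore" to -Copy without -GenerateNewId is my reading of the cmdlet, not wording from this document):

```powershell
# Register in place: use the files exactly where they already are.
Import-VM -Path "D:\Exports\MyVM\Virtual Machines\MyVM.xml"

# Restore: copy the files to the host's configured locations,
# keeping the original virtual machine ID.
Import-VM -Path "D:\Exports\MyVM\Virtual Machines\MyVM.xml" -Copy

# Copy: duplicate the files and assign a new unique virtual machine ID,
# useful when the export is being used as a template.
Import-VM -Path "D:\Exports\MyVM\Virtual Machines\MyVM.xml" -Copy -GenerateNewId
```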

146. In Hyper-V 3.0, shared storage is no longer required for failover clustering. A failover cluster can be built without the need for a Cluster Shared Volume. In those types of clusters, the

virtual machines can reside on local direct-attached storage, or they can even reside on certain types of file servers.

147. Failover Clustering is a feature that is added through Server Manager.


Cluster resource testing

The second type of cluster testing you can perform is called cluster resource testing. Cluster resource testing lets you see how your cluster would behave if a specific resource failed. Unfortunately, this type of testing is somewhat limited, and there isn't much documentation available from Microsoft pertaining to it. There are three important things that you need to know about cluster resource testing prior to trying it:

1. Cluster resource testing simulates the failure of a cluster resource, so if you do not perform the tests carefully you could cause an outage.
2. You can only test components that Microsoft defines as cluster resources. You will find out how to get a list of these components below.
3. Often the name that Windows assigns to a cluster resource is different from the name that you give to the resource. You can use cluster resource testing to find out what would happen if a virtual machine fails, but you won't be able to reference the virtual machine by its usual name.
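A hedged sketch of this kind of test with the FailoverClusters PowerShell module (the resource name is illustrative; note that the test really does fail the resource, so run it only where an outage is acceptable):

```powershell
# List the components the cluster defines as resources, along with
# the Windows-assigned names you must use to reference them.
Get-ClusterResource

# Simulate a failure of the resource that backs a virtual machine.
Test-ClusterResourceFailure -Name "Virtual Machine VM1"
```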

149. To see a list of the cluster group properties, enter the following command: Get-ClusterGroup <group name> | Select-Object *


Anti-affinity rules

Anti-affinity rules are not exactly easy to work with. These rules can only be established through PowerShell, and the process is not very intuitive. The key to understanding how the process works is to understand that for every clustered virtual machine there is a corresponding cluster group. Each cluster group uses the same name as the virtual machine for which it was created. Anti-affinity rules are applied to cluster groups, not to virtual machines. Normally you could modify this type of value by using a command like this:

Get-ClusterGroup <virtual machine name> | Set-ClusterGroup -AntiAffinityClassNames <value>

However, there is just one problem with the command shown above: there is no Set-ClusterGroup cmdlet. The fact that such a command does not exist is a safety precaution; if a Set-ClusterGroup cmdlet did exist, you could potentially destroy a virtual machine if you used the cmdlet incorrectly. Since you can't use Get-ClusterGroup and Set-ClusterGroup, you have to use a completely different approach to modifying the AntiAffinityClassNames property. Unfortunately, Windows PowerShell does not contain the code for managing the AntiAffinityClassNames property. However, you can download a PowerShell module that makes it possible to directly manipulate this property. In case you are not familiar with the concept of a PowerShell module, it is basically a collection of PowerShell cmdlets that can be bolted on to the core cmdlet set. Before you can import the module, you will have to configure your server's execution policy to allow PowerShell scripts to be run. The easiest way to do this is to enter the following command:

Set-ExecutionPolicy Unrestricted

This command allows PowerShell to run any PowerShell script, regardless of where it came from (Figure 4.20). Obviously, there are some security implications associated with allowing unrestricted access to scripts, as explained in the text shown below (Figure 4.20).
If you are concerned about security, you can set the execution policy back to Restricted when you finish working with the AntiAffinityClassNames property by using the Set-ExecutionPolicy Restricted cmdlet. You can use the following command to import the module:

Import-Module C:\Modules\AntiAffinityClassNames

If you would like to see a list of the new cmdlets that were added, enter the following command (Figure 4.22):

Get-Command -Module AntiAffinityClassNames

To apply an anti-affinity rule, use the following command:

Set-AntiAffinityClassNames -Cluster HyperVCluster -ClusterGroup VM3 -Value "Domain Controller"

Use the Get-AntiAffinityClassNames cmdlet to verify the success of the operation.
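As an alternative to the downloadable module, the property can also be set directly on the object that Get-ClusterGroup returns, whose AntiAffinityClassNames property is writable. A hedged sketch (the cluster group and class names are illustrative):

```powershell
# AntiAffinityClassNames is a StringCollection, so build one and assign it.
$classNames = New-Object System.Collections.Specialized.StringCollection
$classNames.Add("Domain Controller") | Out-Null

(Get-ClusterGroup -Name "VM3").AntiAffinityClassNames = $classNames

# Verify the assignment.
Get-ClusterGroup -Name "VM3" | Select-Object Name, AntiAffinityClassNames
```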


Preferred owners

Some organizations might prefer to have virtual machines running on certain hosts. For example, if you have a large database application, you would probably want that application to run on the host that has the fastest CPU. Hyper-V allows you to set preferences for which physical hosts your virtual machines run on.

Virtual machine prioritization

In any organization there are some virtual machines that are more important than others. For example, your mail server is probably more important than a redundant domain controller. By prioritizing your virtual machines, you can make sure that the most important virtual machines continue to function in a failover situation. Windows will start the highest-priority virtual machines first, then lower-priority virtual machines in sequence, until the server runs out of memory or other resources.
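Both settings can be configured with the FailoverClusters module. A hedged sketch (the cluster group and node names are illustrative; to my knowledge a Priority value of 3000 corresponds to "High" in the failover cluster UI):

```powershell
# Preferred owners: prefer Node1, then Node2, for the VM's cluster group.
Set-ClusterOwnerNode -Group "MailServerVM" -Owners Node1, Node2

# Priority: 3000 = High, 2000 = Medium (default), 1000 = Low, 0 = No Auto Start.
(Get-ClusterGroup -Name "MailServerVM").Priority = 3000
```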


153. You can modify the processor resources that are assigned to a virtual machine by using the Set-VMProcessor cmdlet. The settings are:

Limit: The maximum amount of time that a virtual machine is allowed to use a physical CPU. The default limit is 100% usage.
Reservation: A percentage of CPU time reserved solely for a specific virtual machine. By default, the reservation is set at 0%.
Weight: A relative weight that affects how much CPU time a virtual machine will receive. The default weight is 100.

154. New-VM cmdlet: it is possible to build a virtual machine by using the New-VM cmdlet. However, when you use a minimum set of switches with this cmdlet, the virtual machine is lacking, to say the least. In fact, the newly created virtual machine doesn't even have a virtual hard disk. There are a couple of ways to overcome this problem, the easiest being to provision the virtual machine while it is being created. Although this approach can complicate the New-VM command, it does allow you to create a fully provisioned virtual machine using a single command. For example, to create a new virtual machine named NewVM4 that has a 50-GB virtual hard drive and is located at F:\NewVM4, you could enter the following command:

New-VM -Name NewVM4 -NewVHDPath F:\NewVM4\disk1.VHDX -NewVHDSizeBytes 50GB -Path F:\NewVM4

This command (Figure 5.42) works really well for creating a virtual machine and its virtual hard disk. The problem is that the virtual machine is still a bit lacking. By default, this virtual machine is only equipped with 512 MB of memory and a single virtual processor, and the virtual network adapter is not connected to a virtual switch (Figure 5.43). Never mind the fact that you might need to create some additional virtual hard disks for the virtual machine.

155. Creating a new VM using PowerShell:

Create the virtual machine and its 50-GB boot drive:
New-VM -Name NewVM5 -NewVHDPath F:\NewVM5\disk1.VHDX -NewVHDSizeBytes 50GB -Path F:\NewVM5

Create the second 50-GB virtual hard disk. This creates the new disk but does not attach it to the virtual machine:
New-VHD -Path F:\NewVM5\Disk2.VHDX -SizeBytes 50GB

Join the newly created virtual hard disk to the virtual machine. You will have to choose a set of ports that are not in use; for example, IDE port 0:1:
Add-VMHardDiskDrive -VMName NewVM5 -ControllerType IDE -ControllerNumber 0 -ControllerLocation 1 -Path F:\NewVM5\Disk2.vhdx

Provision memory and CPU:
Set-VMMemory NewVM5 -StartupBytes 4GB
Set-VMProcessor NewVM5 -Count 4

Retrieve both the name of the virtual network adapter and the name of the virtual switch, then connect the virtual network adapter to the virtual switch:
$VMNic = Get-VMNetworkAdapter -VMName <virtual machine name>
Get-VMSwitch | Select-Object Name
Connect-VMNetworkAdapter -VMNetworkAdapter $VMNic -SwitchName <virtual switch name>


Snapshots using PowerShell

To take a snapshot of a virtual machine, you can use this command:

Get-VM <virtual machine name> | Checkpoint-VM

You can roll back a virtual machine by using the Restore-VMSnapshot cmdlet. To verify the snapshot through PowerShell, you can use the Get-VMSnapshot cmdlet shown below:

Get-VMSnapshot -VMName <virtual machine name>

If you decide instead to remove the most recently created virtual machine snapshot, the command syntax is exactly the same, except that you would use the Remove-VMSnapshot cmdlet (Figure 5.56) instead of the Restore-VMSnapshot cmdlet.

Live migration of a virtual machine

Move-VM NewVM3 Lab2

The problem is that the above command moves the virtual machine itself, but not the virtual machine's storage. Therefore, the command is only appropriate if the virtual machine's files reside on SMB storage. For other cases you need to specify the destination storage path as an additional switch. To see how this works, imagine that you want to move the virtual machine NewVM3 to server Lab2 and you want to store the virtual machine files in F:\VMs (Figure 5.57). You can perform the migration using this command:

Move-VM NewVM3 Lab2 -DestinationStoragePath F:\VMs

Upgrading the Integration Services

If you upgrade a server that was previously running an earlier version of Hyper-V to Hyper-V 3.0, the upgrade process only affects the hypervisor and the host operating system. The virtual machines themselves are not altered in the process and continue to run the same version of the Integration Services they were using prior to the upgrade. A virtual machine that is running an earlier version of the Integration Services will function in a Hyper-V 3.0 environment, but it will not achieve the level of performance that it would if it were running the current version of the Integration Services.



159. Microsoft's primary tool for performing P2V conversions is System Center Virtual Machine Manager (SC VMM).

160. System Center VMM offline conversion: an offline conversion differs from an online conversion in that System Center Virtual Machine Manager creates a Windows PE boot

environment on the physical server. The server is then booted into this environment so that the server's operating system and applications can be taken offline during the conversion process.

161. Using the Disk2VHD free physical disk conversion tool: as previously mentioned, System Center Virtual Machine Manager is the tool of choice for performing physical-to-virtual conversions in a Hyper-V environment. However, System Center Virtual Machine Manager is not the only tool for the job. A number of different vendors provide conversion tools, some of which are quite expensive. If a commercial P2V tool such as System Center Virtual Machine Manager is beyond your budget, you might consider using Disk2VHD, a free tool provided by Microsoft and Sysinternals to aid in P2V conversions. Unlike System Center Virtual Machine Manager, Disk2VHD does not perform machine-level P2V conversions. Instead, it converts a physical hard disk to a virtual hard disk. You can then manually create a virtual machine and tell it to use the newly created virtual hard disk. The nice thing about this tool is that it can be run while a server is online. You can even use it on physical computers that only have one hard drive. Disk2VHD creates a VSS snapshot prior to building the VHD file, and this allows the VHD file to be stored on the same disk that is being converted. Of course, the tool delivers much better performance if you are able to store the VHD file on a disk other than the one you are converting.

162. Replica vs. failover clustering: before you learn how to build a Hyper-V replica, you need to understand that the Hyper-V Replica feature is not intended to be a replacement for failover clustering. As explained in the earlier chapters on failover clustering, the purpose of a cluster is to make a virtual machine highly available. If a host server fails, the virtual machines that were running on that host server will instantly fail over to another cluster node.
Clustering also allows host servers to be taken offline for maintenance without disrupting the virtualized workload. Hyper-V's Replica feature provides capabilities that are somewhat similar to clustering, but replicas are intended for use in smaller environments and do not have the same capabilities as clusters. Failover clusters are designed to fail over automatically, but replicas have to be failed over manually. Another important difference is that Hyper-V replicas use asynchronous replication. This means that replication works really well over low-bandwidth, high-latency links. However, the use of asynchronous replication means that replication does not happen in real time, and a replica is therefore not a true mirror image of a virtual machine. Replication cycles are designed to occur every five minutes, but there are factors that can delay replication. As a general rule, larger organizations use clustering whenever possible. Replication is intended for use in smaller organizations and is a convenient way to create secondary copies of virtual machines rather than being a fault-tolerant solution.


163. Dynamic Optimization and Power Optimization: VMM can perform load balancing within host clusters that support live migration. Dynamic Optimization migrates virtual machines within a cluster according to settings you enter. In System Center 2012 Virtual Machine Manager, Dynamic Optimization replaces the host load balancing that was performed for Performance and Resource Optimization (PRO) by the PRO CPU Utilization and PRO Memory Utilization monitors in System Center Virtual Machine Manager (VMM) 2008 R2.

VMM can help to save power in a virtualized environment by turning off hosts when they are not needed and turning the hosts back on when they are needed. VMM supports Dynamic Optimization and Power Optimization on Hyper-V host clusters and on host clusters that support live migration in managed VMware ESX and Citrix XenServer environments. For Power Optimization, the computers must have a baseboard management controller (BMC) that enables out-of-band management

164. The VMM console is automatically installed when you install a VMM management server.

165. In System Center 2012 Service Pack 1 (SP1) and in System Center 2012 R2, you can take advantage of the AlwaysOn feature in Microsoft SQL Server 2012 to ensure high availability of the VMM database.

For System Center 2012 SP1 and System Center 2012 R2 only: To configure Microsoft SQL Server with AlwaysOn
1. Complete the VMM Setup wizard to install the VMM management server, as described in the previous procedure.
2. Add the VMM database to the availability group.
3. On the secondary SQL Server node, create a new login with the following characteristics:
   o The login name is identical to the VMM service account name.
   o The login has the user mapping to the VMM database.
   o The login is configured with the database owner credentials.
4. Initiate a failover to the secondary SQL Server node, and verify that you can restart the VMM service (scvmmservice).
5. Repeat the last two steps for every secondary SQL Server node in the cluster.
6. If this is a high availability VMM setup, continue to install other high availability VMM nodes.

166. The Hyper-V Management Pack monitors the major components of a Hyper-V deployment, including:

Hyper-V role
Virtual machines
Virtual networks
Virtual network adapters
Virtual hard disks

167. During Dynamic Optimization, VMM migrates virtual machines within a host cluster to improve load balancing among hosts and to correct any placement constraint violations for virtual machines. 168. Dynamic Optimization can be configured on a host group, to migrate virtual machines within host clusters with a specified frequency and aggressiveness. Aggressiveness determines the amount of load imbalance that is required to initiate a migration during Dynamic Optimization. By default, virtual machines are migrated every 10 minutes with medium aggressiveness. When configuring frequency and aggressiveness for Dynamic Optimization, an administrator should factor in the resource cost of additional migrations against the advantages of balancing load among hosts in a host cluster. By default, a host group inherits Dynamic Optimization settings from its parent host group.

169. Dynamic Optimization can be set up for clusters with two or more nodes. If a host group contains stand-alone hosts or host clusters that do not support live migration, Dynamic Optimization is not performed on those hosts. Any hosts that are in maintenance mode also are excluded from Dynamic Optimization. In addition, VMM only migrates highly available virtual machines that use shared storage. If a host cluster contains virtual machines that are not highly available, those virtual machines are not migrated during Dynamic Optimization. 170. On demand Dynamic Optimization also is available for individual host clusters by using the Optimize Hosts action in the VMs and Services workspace. On demand Dynamic Optimization can be performed without configuring Dynamic Optimization on host groups. After Dynamic Optimization is requested for a host cluster, VMM lists the virtual machines that will be migrated for the administrator's approval

171. To use Dynamic Optimization, VMM must be managing a host cluster that supports live migration. For information about configuring Hyper-V host clusters in VMM, see Adding and Managing Hyper-V Hosts and Host Clusters in VMM. For information about adding VMware ESX and Citrix XenServer environments to VMM, see Managing VMware and Citrix XenServer in VMM.

172. You can configure Dynamic Optimization and Power Optimization on any host group. However, the settings will not have any effect unless the host group contains a host cluster.

173. To use Power Optimization, the host computers must have a BMC that enables out-of-band management. For more information about the BMC requirements, see How to Configure Host BMC Settings.

174. To view Dynamic Optimization and Power Optimization in action, you must deploy and run virtual machines on the host cluster. For more information, see Creating and Deploying Virtual Machines in VMM.


Virtual Fibre Channel

First, the virtual machine must run a compatible operating system. Microsoft supports virtual Fibre Channel for virtual servers running Windows Server 2008, Windows Server 2008 R2, or Windows Server 2012. Second, the host server must contain at least one physical Fibre Channel adapter. The adapter must support N_Port ID Virtualization (NPIV), the NPIV feature must be enabled, and the adapter must be connected to an NPIV-enabled SAN. To use the feature, create a virtual SAN, configure it with the Fibre Channel host bus adapter, and then add the SAN to a virtual machine.
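These steps can be scripted with the Hyper-V module's New-VMSan and Add-VMFibreChannelHba cmdlets. A hedged sketch (the SAN and VM names are illustrative, and using Get-InitiatorPort to supply the host's HBA is one of several accepted parameter forms):

```powershell
# Create a virtual SAN backed by the host's NPIV-capable Fibre Channel HBA.
New-VMSan -Name "ProductionSAN" -HostBusAdapter (Get-InitiatorPort)

# Add a virtual Fibre Channel adapter to the VM and connect it to the SAN.
Add-VMFibreChannelHba -VMName "SQLVM01" -SanName "ProductionSAN"
```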

The Active Directory directory service is a distributed database that stores and manages information about network resources, as well as application-specific data from directory-enabled applications. Active Directory allows administrators to organize objects of a network (such as users, computers, and devices) into a hierarchical collection of containers known as the logical structure. The top-level logical container in this hierarchy is the forest. Within a forest are domain containers, and within domains are organizational units.

Storage QoS cannot be installed as a separate feature. It is only available when a user installs the Hyper-V role. For more information about storage QoS, see Storage Quality of Service for Hyper-V.

Storage QoS provides storage performance isolation in a multitenant environment and mechanisms to notify you when the storage I/O performance does not meet the defined threshold needed to efficiently run your virtual machine workloads.

Key benefits

Storage QoS provides the ability to specify a maximum input/output operations per second (IOPS) value for your virtual hard disk. An administrator can throttle the storage I/O to stop a tenant from consuming excessive storage resources that may impact another tenant. An administrator can also set a minimum IOPS value; they will be notified when the IOPS to a specified virtual hard disk falls below the threshold that is needed for its optimal performance. The virtual machine metrics infrastructure is also updated with storage-related parameters to allow the administrator to monitor performance and chargeback-related parameters. Maximum and minimum values are specified in terms of normalized IOPS, where every 8 KB of data is counted as an I/O.

Key features

Storage QoS allows administrators to plan for and gain acceptable performance from their investment in storage resources. Administrators can:

- Specify the maximum IOPS allowed for a virtual hard disk that is associated with a virtual machine.
- Receive a notification when the specified minimum IOPS for a virtual hard disk is not met.
- Monitor storage-related metrics through the virtual machine metrics interface.

Requirements

Storage QoS requires that the Hyper-V role is installed. The Storage QoS feature cannot be installed separately. When you install Hyper-V, the infrastructure is enabled for defining QoS parameters associated with your virtual hard disks.

Virtual hard disk maximum IOPS
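Since Storage QoS ships with the Hyper-V role, enabling it amounts to installing that role, for example:

```powershell
# Install the Hyper-V role (this also enables the Storage QoS infrastructure);
# -IncludeManagementTools adds Hyper-V Manager and the Hyper-V PowerShell module
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart
```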

Storage QoS provides the following features for setting maximum IOPS values (or limits) on virtual hard disks for virtual machines:

- You can specify a maximum setting that is enforced on the virtual hard disks of your virtual machines, with a separate maximum setting for each virtual hard disk.
- Virtual hard disk maximum IOPS settings are specified in terms of normalized IOPS; IOPS are measured in 8 KB increments.
- You can use the WMI interface to control and query the maximum IOPS value you set on your virtual hard disks for each virtual machine.
- Windows PowerShell enables you to control and query the maximum IOPS values you set for the virtual hard disks in your virtual machines.
- Any virtual hard disk that does not have a maximum IOPS limit defined defaults to 0, which means no limit is enforced.
- The Hyper-V Manager user interface is available to configure maximum IOPS values for Storage QoS.
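The PowerShell path mentioned above can be sketched as follows; the VM name and controller location are illustrative assumptions.

```powershell
# Cap a virtual hard disk at 500 normalized (8 KB) IOPS
Set-VMHardDiskDrive -VMName "TenantVM" -ControllerType SCSI `
    -ControllerNumber 0 -ControllerLocation 0 -MaximumIOPS 500

# Query the current limit (0 means no limit is enforced)
Get-VMHardDiskDrive -VMName "TenantVM" | Select-Object Path, MaximumIOPS
```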

Virtual hard disk minimum IOPS threshold notifications

Storage QoS provides the following features for setting minimum values (or reserves) on virtual hard disks for virtual machines:

- You can define a minimum IOPS value for each virtual hard disk, and an event-based notification is generated when the minimum IOPS value is not met.
- Virtual hard disk minimum values are specified in terms of normalized IOPS; IOPS are measured in 8 KB increments.
- You can use the WMI interface to query the minimum IOPS value you set on your virtual hard disks for each virtual machine.
- Windows PowerShell enables you to control and query the minimum IOPS values you set for the virtual hard disks in your virtual machines.
- Any virtual hard disk that does not have a minimum IOPS value defined will default to 0.
- The Hyper-V Manager user interface is available to configure minimum IOPS settings for Storage QoS.
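Setting a minimum (reserve) uses the same cmdlet as the maximum; again, the VM name and controller location are illustrative.

```powershell
# Request at least 100 normalized IOPS for this disk; Hyper-V raises an
# event-based notification if the disk cannot achieve this minimum
Set-VMHardDiskDrive -VMName "TenantVM" -ControllerType SCSI `
    -ControllerNumber 0 -ControllerLocation 0 -MinimumIOPS 100
```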

QoS technologies allow you to meet the service requirements of a workload or an application by measuring network bandwidth, detecting changing network conditions (such as congestion or availability of bandwidth), and prioritizing or throttling network traffic. For example, you can use QoS to prioritize traffic for latency-sensitive applications (such as voice or video streaming), and to control the impact of latency-insensitive traffic (such as bulk data transfers). QoS provides the following features.

- Bandwidth management
- Classification and tagging
- Priority-based flow control
- Policy-based QoS and Hyper-V QoS
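Hyper-V QoS bandwidth management can be sketched as follows. This is a hedged example with illustrative switch and VM names; note that weight-based settings require the virtual switch to have been created with the Weight minimum-bandwidth mode.

```powershell
# Create a switch in weight-based bandwidth mode, then share bandwidth by relative weight
# (switch and VM names are placeholders)
New-VMSwitch -Name "TenantSwitch" -NetAdapterName "Ethernet" -MinimumBandwidthMode Weight
Set-VMSwitch -Name "TenantSwitch" -DefaultFlowMinimumBandwidthWeight 10

# Guarantee a latency-sensitive VM a larger share of bandwidth
Set-VMNetworkAdapter -VMName "VoiceVM" -MinimumBandwidthWeight 40

# Throttle a bulk-transfer VM to an absolute cap (value is in bits per second)
Set-VMNetworkAdapter -VMName "BulkVM" -MaximumBandwidth 100000000
```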

Set-SCUserRole: modifies the settings for an existing VMM user role.

176. P2V requirements on the source machine. The physical computer to be converted must meet the following requirements:

- Must have at least 512 MB of RAM.
- Cannot have any volumes larger than 2040 GB.
- Must have an Advanced Configuration and Power Interface (ACPI) BIOS; Vista WinPE will not install on a non-ACPI BIOS.
- Must be accessible by VMM and by the virtual machine host.
- Cannot be in a perimeter network. A perimeter network, which is also known as a screened subnet, is a collection of devices and subnets that are placed between an intranet and the Internet to help protect the intranet from unauthorized Internet users. The source computer for a physical-to-virtual (P2V) conversion can be in any other network topology in which the VMM management server can connect to the source machine to temporarily install an agent and can make Windows Management Instrumentation (WMI) calls to the source computer.
- Should not have encrypted volumes.
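Some of the requirements above can be checked up front on the source machine via the same WMI interface that VMM uses. This is a hedged pre-flight sketch, not part of the official P2V tooling.

```powershell
# Check the 512 MB RAM minimum
$cs = Get-WmiObject -Class Win32_ComputerSystem
"RAM requirement met: " + ($cs.TotalPhysicalMemory -ge 512MB)

# Flag any volume over the 2040 GB P2V limit
Get-WmiObject -Class Win32_Volume |
    Where-Object { $_.Capacity -gt 2040GB } |
    ForEach-Object { "Volume too large for P2V: $($_.Name)" }
```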

Supported operating systems

The following restrictions apply to P2V operating system support:

- VMM does not support P2V conversion for computers with Itanium architecture-based operating systems.
- VMM does not support P2V on source computers that are running Windows NT Server 4.0.
- VMM does not support converting a physical computer running Windows Server 2003 with Service Pack 1 (SP1) to a virtual machine that is managed by Hyper-V. Hyper-V does not support Integration Components on computers running Windows Server 2003 with SP1; as a result, there is no mouse control when you use Remote Desktop Protocol (RDP) to connect to the virtual machine. To avoid this issue, update the operating system to Windows Server 2003 with Service Pack 2 (SP2) before you convert the physical computer.

In Virtual Machine Manager (VMM) in System Center 2012 Service Pack 1 (SP1) or System Center 2012 R2, you can consistently configure identical capabilities for network adapters across multiple hosts by using port profiles and logical switches. Port profiles and logical switches act as containers for the properties or capabilities that you want your network adapters to have. Instead of configuring individual properties or capabilities for each network adapter, you can specify the capabilities in port profiles and logical switches, which you can then apply to the appropriate adapters
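From the VMM PowerShell module, you can inspect the port profiles and logical switches that act as these containers. A minimal, read-only sketch (assuming a connection to the VMM management server is already established):

```powershell
# List the logical switches defined in VMM
Get-SCLogicalSwitch | Select-Object Name, Description

# List the native uplink port profiles available to apply to host adapters
Get-SCNativeUplinkPortProfile | Select-Object Name
```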

IT organizations need tools to charge back business units that they support while providing the business units with the right amount of resources to match their needs. For hosting providers, it is equally important to issue chargebacks based on the amount of usage by each customer. To implement advanced billing strategies that measure both the assigned capacity of a resource and its actual usage, earlier versions of Hyper-V required users to develop their own chargeback solutions that polled and aggregated performance counters. These solutions could be expensive to develop and sometimes led to loss of historical data.

To assist with more accurate, streamlined chargebacks while protecting historical information, Hyper-V in Windows Server 2012 introduces Resource Metering, a feature that allows customers to create cost-effective, usage-based billing solutions. With this feature, service providers can choose the best billing strategy for their business model, and independent software vendors can develop more reliable, end-to-end chargeback solutions on top of Hyper-V.
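Resource Metering is driven entirely from PowerShell; a minimal usage sketch (the VM name is illustrative):

```powershell
# Start collecting usage data for a VM
Enable-VMResourceMetering -VMName "TenantVM"

# Later: report aggregated CPU, memory, disk, and network usage for chargeback
Measure-VM -VMName "TenantVM"

# Reset the counters after the billing period has been recorded
Reset-VMResourceMetering -VMName "TenantVM"
```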
Hyper-V Extended Replication

With the Hyper-V Extend Replication feature in Windows Server 2012 R2, customers can keep multiple copies of data to protect themselves from different outage scenarios. For example, as a customer I might choose to keep my second DR site on the same campus or a few miles away, while keeping my third copy of data across the continent for added protection of my workloads. Hyper-V Replica Extend Replication addresses exactly this problem by providing one more copy of the workload at an extended site, apart from the replica site. As mentioned in What's New in Hyper-V Replica in Windows Server 2012 R2, the user can extend the replication from the replica site and continue to protect the virtualized workloads even in case of disaster at the primary site! This is so cool and exactly what I was looking for. But how do I enable this feature in Windows Server 2012 R2? Well, I will walk you through the different ways in which you can enable replication, and you will be amazed to see how similar the experience is to the Enable Replication wizard.

Extend Replication through UI:

Before you extend replication to a third site, you need to establish replication between a primary server and a replica server. Once that is done, go to the replica site and, from Hyper-V Manager, select the VM for which you want to extend replication. Right-click the VM and select Replication -> Extend Replication. This opens the Extend Replication wizard, which is similar to the Enable Replication wizard. A few points to take care of:

1. In the Configure Replication Frequency screen, note that Extend Replication only supports 5-minute and 15-minute replication frequencies. Also note that the replication frequency of the extended replication should be equal to or greater than that of the primary replication relationship.
2. In the Configure Additional Recovery Points screen, you can specify the recovery points you need on the extended replica server. Please note that you cannot configure App-Consistent snapshot frequency in this wizard.

Click Finish and you are done! Isn't it very similar to the Enable Replication wizard?

If you are working with clusters, go to the Failover Cluster Manager UI on the replica site and select the VM for which you want to extend replication from the Roles tab. Right-click the VM and select Replication -> Extend Replication. Configure the extended replica cluster/server in the same way as you did above.

Extend Replication using PowerShell:

You can use the same PowerShell cmdlet that you used for enabling replication to create the extended replication relationship. However, as stated above, you can only choose a replication frequency of either 5 minutes or 15 minutes.
Enable-VMReplication -VMName <vmname> -ReplicaServerName <extended_server_name> -ReplicaServerPort <Auth_port> -AuthenticationType <Certificate/Kerberos> -ReplicationFrequencySec <300/900> [other optional parameters if needed]

Status and Health of Extended Replication:

Once you extend replication from the replica site, you can check the Replication tab in the replica site's Hyper-V Manager, where you will see details about the extended replication alongside the primary relationship. You can also check the health statistics of the extended replication from Hyper-V Manager: go to the VM on the replica site, right-click it, and select Replication -> View Replication Health. Extended replication health statistics are displayed under a separate tab named Extended Replication. You can also query PowerShell on the replica site to see details about the extended replication relationship.
Measure-VMReplication -VMName <name> -ReplicationRelationshipType Extended | Select-Object *

Hyper-V Offloaded Data Transfer requires the following:

Offloaded Data Transfer-capable hardware to host the virtual hard disk files. The hardware needs to be connected to the virtual machine as virtual SCSI devices or directly attached physical disks (sometimes referred to as pass-through disks). This optimization is also supported for natively attached, VHDX-based virtual disks. VHD-based or VHDX-based virtual disks attached to a virtual IDE controller do not support this optimization, because IDE devices lack support for Offloaded Data Transfer.

With Port Mirroring, traffic sent to or from a Hyper-V Virtual Switch port is copied and sent to a mirror port. There is a range of applications for port mirroring: an entire ecosystem of network visibility companies exists with products designed to consume port mirror data for performance management, security analysis, and network diagnostics. With Hyper-V Virtual Switch port mirroring, you can select the switch ports that are monitored as well as the switch port that receives copies of all the traffic.

The following examples configure port mirroring so that all traffic that is sent and received by both MyVM and MyVM2 is also sent to the VM named MonitorVM.
Set-VMNetworkAdapter -VMName MyVM -PortMirroring Source
Set-VMNetworkAdapter -VMName MyVM2 -PortMirroring Source
Set-VMNetworkAdapter -VMName MonitorVM -PortMirroring Destination

What Hyper-V Replica Is Not Intended To Do. I know some people are thinking of this next scenario, and the Hyper-V product group anticipated this too. Some people will look at Hyper-V Replica and see it as a way to provide an alternative to clustered Hyper-V hosts in a single site. Although Hyper-V Replica could do this, it is not intended for this purpose.
The replication is designed for the low-bandwidth, high-latency networks that an SME is likely to use for inter-site replication. As you'll see later, there will be a delay between data being written on host/cluster A and being replicated to host/cluster B.

In Windows Server 2012, Server Message Block (SMB) 3.0 file shares can be used as shared storage for Hyper-V hosts, so that Hyper-V can store virtual machine files, which include configuration, virtual hard disk (.vhd and .vhdx) files, and snapshots on SMB file shares. By using Virtual Machine Manager (VMM), you can assign SMB file shares to stand-alone servers that are running Hyper-V and host clusters
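A minimal sketch of using an SMB 3.0 share as Hyper-V storage, assuming illustrative server, share, and domain names; the Hyper-V host computer accounts need full access to the share.

```powershell
# On the file server: create an SMB share and grant the Hyper-V host
# computer accounts (note the trailing $) full access
New-SmbShare -Name "VMStore" -Path "C:\Shares\VMStore" `
    -FullAccess "CONTOSO\HV01$", "CONTOSO\HV02$"

# On the Hyper-V host: place a new VM's files on the SMB share
New-VM -Name "TestVM" -Path "\\FS01\VMStore" -MemoryStartupBytes 1GB
```

In practice the share also needs matching NTFS permissions, and constrained delegation is required when managing such VMs remotely.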