
Hello. Today I’d like to talk to you about how Windows Server 2012 helps IT professionals get more out of their server infrastructure through Server Virtualization.



Traditional datacenters were built with physical servers running a dedicated workload. Each server in the datacenter was designed, purchased, deployed, and maintained for the sole purpose of running a single workload. If the workload was later retired or upgraded, the physical server was either repurposed or retired. With Windows Server 2012 Hyper-V, it is now easier than ever for organizations to take advantage of the cost savings of virtualization and make the optimum use of server hardware investments by consolidating multiple server roles as separate virtual machines. You can use Hyper-V to efficiently run multiple operating systems — Microsoft Windows, Linux, and others — in parallel, on a single server. Windows Server 2012 extends this with more features, greater scalability and built-in reliability mechanisms.
Today, we are going to focus on how Windows Server and Hyper-V help you better manage your existing datacenter’s infrastructure through:

• Support for important business workloads through scale and performance improvements
• Increased business flexibility with virtual machine mobility
• Continuous services to help meet availability and service-level agreements
• Open and extensible platform for performance management and automation
• Delivery of shared and multitenant environments with isolation


Note: This slide has 4 clicks
Cloud and mobility are two major trends that have started to affect the IT landscape in general, and the datacenter in particular. There are four key IT questions that customers claim are keeping them up at night:
[Click] How do I embrace the cloud? With a private cloud, you get many of the benefits of public cloud computing (including self-service, scalability, and elasticity) with the additional control and customization available from dedicated resources. Microsoft customers can build a private cloud today with Windows Server, Hyper-V, and Microsoft System Center, but there are many questions about how to best scale and secure workloads on private clouds and how to cost-effectively build private clouds, offer cloud services, and connect more securely to cloud services.
[Click] How do I increase the efficiency in my datacenter? Whether you are building your own private cloud, are in the business of offering cloud services, or simply want to improve the operations of your traditional datacenter, lowering infrastructure costs and operating expenses while increasing the overall availability of your production systems is critical. Microsoft understands that efficiency built into your server platform and good management of your cloud and datacenter infrastructure are important to achieving operational excellence.
[Click] How do I deliver next-generation applications? As the interest in cloud computing and providing web-based IT services grows, our customers tell us that they need a scalable web platform and the ability to build, deploy, and support cloud applications that can run on-premises or in the cloud. They also want to be able to use a broad range of tools and frameworks for their next-generation applications, including open source tools.
[Click] How do I enable modern work styles? As the lines between people's lives and their work blur, their personalities and individual work styles have an increasing impact on how they get their work done, and on which technologies they prefer to use. As a result, people increasingly want a say in what technologies they use to complete work. This trend is called "Consumerization of IT." As an example of consumerization, more and more people are bringing and using their own PCs, slates, and phones at work. Consumerization is great because it unleashes people's productivity, passion, innovation, and competitive advantage. We at Microsoft believe that there is power in saying "yes" to people and their technology requests in a responsible way. Our goal at Microsoft is to partner with you in IT to help you embrace these trends while ensuring that the environment is more secure and better managed.



Note: This slide has 4 clicks
Optimize your IT for the cloud with Windows Server 2012
When you optimize your IT for the cloud with Windows Server 2012, you take advantage of the skills and investment you've already made in building a familiar and consistent platform. Windows Server 2012 builds on that familiarity. With Windows Server 2012, you gain all the Microsoft experience behind building and operating private and public clouds, delivered as a dynamic, available, and cost-effective server platform. Windows Server 2012 delivers value in four key ways:
1. It takes you beyond virtualization. Windows Server 2012 offers a dynamic, multitenant infrastructure that goes beyond virtualization technology to a complete platform for building a private cloud.
2. It delivers the power of many servers, with the simplicity of one. Windows Server 2012 offers you excellent economics by integrating a highly available and easy-to-manage multiple-server platform.
3. It opens the door to every app on any cloud. Windows Server 2012 is a broad, scalable, and elastic web and application platform that gives you the flexibility to build and deploy applications on-premises, in the cloud, and in a hybrid environment through a consistent set of tools and frameworks.
4. It enables the modern work style. Windows Server 2012 empowers IT to provide users with flexible access to data and applications anywhere, on any device, while simplifying management and maintaining security, control, and compliance.

With Windows Server 2012, Microsoft has made significant investments in each of these four areas that allow customers to take their datacenter operations to the next level. Now, let's take a look at how Windows Server 2012 helps customers to:
• Build and deploy a modern datacenter infrastructure
• Build and run modern applications
• Enable modern work styles for their end users

We have listened to you about what you need to deliver to your customers. Some of the needs an organization has are:
• They need bigger, better, faster, and more available virtual machines
• They need more flexibility to deliver these solutions, so that they aren't locked into any particular solution and can easily handle the needs of the types of virtual machines that are requested
• They want to be able to handle the different storage and networking requirements, both for resources in place and for resources that may be purchased in the future
• They want the flexibility to move virtual machines wherever it is best to run them, whether on-premises or at a service provider
• They want to ensure that the virtualization solution they provide will allow them to handle the new hardware technologies that are coming from different manufacturers
With these needs come challenges:
• They want to meet their customers' SLAs; they must keep the servers and VMs up and running
• They want to decrease the capital cost and lower the operational cost of managing their infrastructure
• They want to use new servers as they come out and be able to fully leverage the raw power that those servers provide
• They must be able to keep using the servers that they have in place today
• They must be able to securely run a common infrastructure serving multiple groups or customers
With Windows Server 2012 Hyper-V, we are able to support these customers' needs and challenges.

Let's look at a few scenarios regarding Windows Server 2012 and Server Virtualization.

Now, we are going to talk about the different scenarios that Windows Server 2012 and Server Virtualization help you solve.
• In the first scenario, we are going to focus on the increased scale and capacity that an organization can support by running its virtual machines on Windows Server 2012. Through these massive scale capabilities within Windows Server 2012, nearly all workloads are virtualization candidates.
• Next, you can increase your business flexibility by leveraging the new virtual machine mobility enhancements within Windows Server 2012 Hyper-V. We'll discuss all the different types of virtual machine mobility.
• Many organizations rely on, or try to achieve, continuous services. With the enhancements we have made to virtual machine availability in Windows Server 2012, we can help an organization achieve greater uptime for these virtual machines.
• Whether you are a hosting provider or an enterprise, you want to be able to handle multitenant situations where your tenants have different organizational needs, such as conflicting IP schemes or multiple different domains, but need to run on a shared environment.
• Also, we will focus on the ecosystem and the extensibility that we have provided within Windows Server 2012 Hyper-V.

There are many new features within Windows Server 2012 Hyper-V to support the needs and challenges of an organization or a customer. These are just a few of the more important ones.
• Virtual machines running on Windows Server 2012 Hyper-V are much larger than they were with Windows Server 2008 R2 or even SP1. We have given customers larger virtual machines, and we have been able to increase performance through hardware offloading, allowing better performance and more scale within that physical host.
• Things like multiple simultaneous live migrations allow you to quickly move virtual machines between hosts, whether they are on the same cluster or, using technologies like shared nothing live migration, across multiple different clusters.
• You need these machines to be more available, so under VM availability we have added many clustering enhancements to keep the virtual machine up and running, as well as letting you make changes to the virtual machine without taking it down. Things like dynamic memory enhancements let us increase the memory capacity of a virtual machine without any downtime to the VM, thereby increasing availability.
• The many different VM availability options give you lots of flexibility and the ability to meet the availability demands that your organization requires.
• You need to be able to handle multitenant environments, and in those multitenant environments we use capabilities like network virtualization and resource metering to better support tenants and to report back on the quantity of resources your virtual machines are using.
• We don't do this all by ourselves; we have support from our customers and our partners to create extensions to their Hyper-V infrastructure. With an open and extensible virtual switch, we allow different third-party vendors to create plug-ins that handle specific tasks within the switch to help support security and management needs. This also helps organizations that want to customize aspects of running their infrastructure by giving them the ability to automate tasks through our enhanced support for Windows PowerShell. (A short example follows this list.)
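As an illustration of that PowerShell-based automation and of resource metering, here is a minimal sketch using the Hyper-V module cmdlets that ship with Windows Server 2012; the VM name is a placeholder, not something defined in this deck.

# Enable resource metering for a tenant's virtual machine (VM name is hypothetical)
Enable-VMResourceMetering -VMName "Tenant1-Web01"

# Later, report the CPU, memory, disk, and network usage accumulated for that VM
Measure-VM -VMName "Tenant1-Web01" | Format-List

# Reset the counters once the usage has been billed or recorded
Reset-VMResourceMetering -VMName "Tenant1-Web01"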

NOTE: This slide is animated and has 3 clicks
[Click] The first scenario we are going to talk about is how you can achieve greater densities and run more demanding workloads through the scale and performance improvements of Windows Server 2012 Hyper-V. Within your organization, as you virtualize more of your infrastructure, you need to have a platform, a hypervisor, that can support your most demanding workloads.
[Click] Also, as you adopt newer hardware, you will need to be able to utilize the advancements within the hardware to the fullest, without losing the capability of the existing investments in infrastructure you already have.
[Click] We do this through new features and updates delivered with Windows Server 2012 Hyper-V, like:
• Bigger, faster virtual machines
• Hardware offloading
• Non-Uniform Memory Access (NUMA) support

Before Windows Server 2012: Hyper-V in Windows Server 2008 R2 supported configuring virtual machines with a maximum of four virtual processors and up to 64 GB of memory. However, IT organizations increasingly want to use virtualization when they deploy mission-critical, tier-1 business applications. Large, demanding workloads such as online transaction processing (OLTP) databases and online transaction analysis (OLTA) solutions typically run on systems with 16 or more processors and demand large amounts of memory. For this class of workloads, more virtual processors and larger amounts of virtual machine memory are a core requirement.
With Windows Server 2012: Hyper-V in Windows Server 2012 greatly expands support for host processors and memory. New features include support for up to 64 virtual processors and 1 TB of memory for Hyper-V guests, a new VHDX virtual hard disk format with a larger disk capacity of up to 64 TB (see the section "New virtual hard disk format"), and additional resiliency. These features help ensure that your virtualization infrastructure can support the configuration of large, high-performance virtual machines to support workloads that might need to scale up significantly.
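To make those scale numbers concrete, here is a minimal sketch of sizing up a guest with the Windows Server 2012 Hyper-V cmdlets; the VM name and the exact sizes are placeholders, not recommendations.

# With the VM powered off, give it 64 virtual processors and 1 TB of startup memory
Set-VMProcessor -VMName "OLTP-SQL01" -Count 64
Set-VMMemory -VMName "OLTP-SQL01" -StartupBytes 1TB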

With the evolution of storage systems, and the ever-increasing reliance on virtualized enterprise workloads, the VHD format of Windows Server needed to also evolve. Hyper-V in Windows Server 2012 contains an update to the VHD format, called VHDX, that has much larger capacity and additional resiliency. The new format is better suited to address the current and future requirements for running enterprise-class workloads, specifically:
• Where the size of the VHD is larger than 2,040 GB. VHDX supports up to 64 terabytes of storage.
• To reliably protect against issues for dynamic and differencing disks during power failures.
• To prevent performance degradation issues on the new, large-sector physical disks.
VHDX also provides additional protection from corruption during power failures by logging updates to the VHDX metadata structures, and it prevents performance degradation on large-sector physical disks by optimizing structure alignment.
Technical description
The VHDX format's principal new features are:

• Support for virtual hard disk storage capacity of up to 64 terabytes.
• Protection against corruption during power failures by logging updates to the VHDX metadata structures.
• Optimal structure alignment of the virtual hard disk format to suit large-sector disks.
The VHDX format also provides the following features:
• Larger block sizes for dynamic and differencing disks, which lets these disks attune to the needs of the workload.
• A 4-KB logical sector virtual disk that results in increased performance when applications and workloads that are designed for 4-KB sectors use it.
• The ability to store custom metadata about the file that you might want to record, such as operating system version or patches applied.
• Efficiency (called trim) in representing data, which results in smaller files and lets the underlying physical storage device reclaim unused space. (Trim requires pass-through or SCSI disks and trim-compatible hardware.)
The figure illustrates the VHDX hard disk format. As you can see in the figure, most of the structures are large allocations and are MB aligned. This alleviates the alignment issue that is associated with virtual hard disks. If unaligned I/Os are issued to these disks, an associated performance penalty is caused by the Read-Modify-Write cycles that are required to satisfy these I/Os. The structures in the format are aligned to help ensure that no unaligned I/Os exist.
The different regions of the VHDX format are as follows:
• Header region. The header region is the first region of the file and identifies the location of the other structures, including the log, block allocation table (BAT), and metadata region. The header region contains two headers, only one of which is active at a time, to increase resiliency to corruptions.

• Intent log. Changes to the VHDX metastructures are written to the log before they are written to their final location. If corruption occurs during a power failure while an update is being written to the actual location, then on the subsequent open, the change is applied again from the log, and the VHDX file is brought back to a consistent state. The intent log is a circular ring buffer. The log does not track changes to the payload blocks, so it does not protect data contained within them.
• Data region. The BAT contains entries that point to both the user data blocks and the sector bitmap block locations within the VHDX file. This is an important difference from the VHD format, because sector bitmaps are aggregated into their own blocks instead of being appended in front of each payload block.
• Metadata region. The metadata region contains a table that points to both user-defined metadata and virtual hard disk file metadata such as block size, physical sector size, and logical sector size.
Benefits
VHDX, which is designed to handle current and future workloads, has a much larger storage capacity than the earlier formats and addresses the technological demands of evolving enterprises. Currently, when applications delete content within a virtual hard disk, the Windows storage stack in both the guest operating system and the Hyper-V host has limitations that prevent this information from being communicated to the virtual hard disk and the physical storage device. This prevents the Hyper-V storage stack from optimizing the space used and prevents the underlying storage device from reclaiming the space previously occupied by the deleted data, so the space a VHDX file consumes can grow quickly. In Windows Server 2012, Hyper-V now supports unmap notifications, which lets VHDX files be more efficient in representing the data within them. This results in smaller file sizes and lets the underlying physical storage device reclaim unused space. This optimization is also supported for natively attached VHDX-based virtual disks. The VHDX performance-enhancing features make it easier to handle large workloads, protect data better during power outages, and optimize structure alignment of dynamic and differencing disks to prevent performance degradation on new, large-sector physical disks.

Requirements
To take advantage of the new VHDX format, you need the following:
• Windows Server 2012 or Windows 8
• The Hyper-V server role
To take advantage of the trim feature, you need the following:
• VHDX-based virtual disks connected as virtual SCSI devices or as directly attached physical disks (sometimes referred to as pass-through disks)
• Trim-capable hardware
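As a rough illustration of working with the new format, here is a minimal sketch using the Hyper-V cmdlets in Windows Server 2012; the paths and sizes are placeholders.

# Create a new dynamic VHDX (the .vhdx extension selects the new format)
New-VHD -Path "D:\VHDs\Data01.vhdx" -SizeBytes 10TB -Dynamic

# Convert an existing VHD to the VHDX format
Convert-VHD -Path "D:\VHDs\Legacy.vhd" -DestinationPath "D:\VHDs\Legacy.vhdx"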

Current situation
Increases in storage density and reliability (among other factors) are driving the data storage industry to transition the physical format of hard disk drives from 512-byte sectors to 4,096-byte sectors (also known as 4-KB sectors). However, most of the software industry depends on 512-byte disk sectors. A change in sector size introduces major compatibility issues in many applications. To minimize the impact on the ecosystem, hard-drive vendors are introducing transitional "512-byte emulation drives," also known as "512e." These drives offer some of the advantages of 4-KB native drives, such as improved format efficiency and an improved scheme for error correction codes, but with fewer compatibility issues than exposing a 4-KB sector size at the disk interface.
With Windows Server 2012
In Windows Server 2012, Hyper-V supports 4-KB disk sectors.
Support for improved performance of virtual hard disks on 512e disks
A 512e disk can perform a write only in terms of a physical sector; that is, it can't directly complete a 512-byte write issued to it. The internal process in the disk that makes this write possible follows these steps:
1. The disk reads the 4-KB physical sector into its internal cache, which contains the 512-byte logical sector referred to in the write.

2. Data in the 4-KB buffer is modified to include the updated 512-byte sector.
3. The disk performs a write of the updated 4-KB buffer back to its physical sector on the disk.
This process, called a Read-Modify-Write (RMW), causes performance degradation in virtual hard disks for the following reasons:
• Dynamic and differencing virtual hard disks have a 512-byte sector bitmap in front of their data payload. In addition, footer, header, and parent locators all align to a 512-byte sector. It's common for the virtual hard disk driver to issue 512-byte writes to update these structures, resulting in the RMW behavior just described.
• Applications commonly issue reads and writes in multiples of 4-KB sizes (the default cluster size of NTFS). Because there's a 512-byte sector bitmap in front of the data payload block of dynamic and differencing virtual hard disks, the 4-KB blocks aren't aligned to the physical 4-KB boundary, as shown in the figure.
Hyper-V 4-KB disk sector support in Windows Server 2012 reduces the performance impact of 512e disks on the virtual hard disk stack.
Support for hosting virtual hard disks on native 4-KB disks
Hyper-V in Windows Server 2012 makes it possible to store virtual hard disks on 4-KB disks by implementing a software RMW algorithm in the virtual hard disk layer. This algorithm converts 512-byte access-and-update requests to corresponding 4-KB accesses and updates.
Benefits
The storage industry is introducing 4-KB physical format drives to provide increased capacity and reliability. Hyper-V in Windows Server 2012 lets you take advantage of this emerging innovation in storage hardware with support for improved performance of virtual hard disks on 512e disks and support for hosting virtual hard disks on native 4-KB disks, which lets workloads complete more quickly.
Requirements for 4-KB disk sector support:
• Windows Server 2012
• Physical disk drives that use either the 512e format or the native 4-KB format

Windows Server 2012 Hyper-V supports NUMA in a virtual machine.
What is NUMA? NUMA, or Non-Uniform Memory Access, refers to a computer architecture in multiprocessor systems in which the time required for a processor to access memory depends on the memory's location relative to the processor. With NUMA, a processor can access local memory (memory attached directly to the processor) faster than it can access remote memory (memory that is local to another processor in the system). Modern operating systems and high-performance applications such as SQL Server have developed optimizations to recognize the system's NUMA topology and consider NUMA when they schedule threads or allocate memory, to increase performance.
Guest NUMA: Projecting a virtual NUMA topology onto a virtual machine provides optimal performance and workload scalability in large virtual machine configurations. It does this by allowing the guest operating system and applications such as SQL Server to take advantage of their inherent NUMA performance optimizations (for example, making intelligent NUMA decisions about thread and memory allocation). The default virtual NUMA topology projected into a virtual machine running on Hyper-V is optimized to match the host's NUMA topology, as shown in the figure.

Note: This slide has 2 clicks for animation to describe how live migration works when you use Virtual Fibre Channel in the VM.
Current situation
You need your virtualized workloads to connect to your existing storage arrays with as little trouble as possible. Many enterprises have already invested in Fibre Channel SANs, deploying them in their datacenters to address their growing storage requirements. These customers often want the ability to use this storage from within their virtual machines instead of having it only accessible from and used by the Hyper-V host.
With Windows Server 2012
Virtual Fibre Channel for Hyper-V, a new feature of Windows Server 2012, provides Fibre Channel ports within the guest operating system, which lets you connect to Fibre Channel directly from within virtual machines. Virtual Fibre Channel support includes the following:
• Unmediated access to a SAN. Virtual Fibre Channel for Hyper-V provides the guest operating system with unmediated access to a SAN by using a standard World Wide Name (WWN) associated with a virtual machine. Hyper-V lets you use Fibre Channel SANs to virtualize workloads that require direct access to SAN logical unit numbers (LUNs).

Fibre Channel SANs also allow you to operate in new scenarios, such as running the Windows Failover Cluster Management feature inside the guest operating system of a virtual machine connected to shared Fibre Channel storage. Mid-range and high-end storage arrays include advanced storage functionality that helps offload certain management tasks from the hosts to the SANs. Virtual Fibre Channel presents an alternative, hardware-based I/O path to the Windows software virtual hard disk stack.
• A hardware-based I/O path to the Windows software virtual hard disk stack. This path lets you use the advanced functionality of your SANs directly from Hyper-V virtual machines. For example, Hyper-V users can offload storage functionality (such as taking a snapshot of a LUN) to the SAN hardware simply by using a hardware Volume Shadow Copy Service (VSS) provider from within a Hyper-V virtual machine.
• N_Port ID Virtualization (NPIV). NPIV is a Fibre Channel facility that allows multiple N_Port IDs to share a single physical N_Port. This allows multiple Fibre Channel initiators to occupy a single physical port, easing hardware requirements in SAN design, especially where virtual SANs are called for. Virtual Fibre Channel for Hyper-V guests uses NPIV (T11 standard) to create multiple NPIV ports on top of the host's physical Fibre Channel ports. A new NPIV port is created on the host each time a virtual host bus adapter (HBA) is created inside a virtual machine. When the virtual machine stops running on the host, the NPIV port is removed.
• A single Hyper-V host connected to different SANs with multiple Fibre Channel ports. Hyper-V allows you to define virtual SANs on the host to accommodate scenarios where a single Hyper-V host is connected to different SANs via multiple Fibre Channel ports. A virtual SAN defines a named group of physical Fibre Channel ports that are connected to the same physical SAN. For example, assume a Hyper-V host is connected to two SANs: a production SAN and a test SAN. The host is connected to each SAN through two physical Fibre Channel ports. In this example, you might configure two virtual SANs, one named "Production SAN" with the two physical Fibre Channel ports connected to the production SAN, and one named "Test SAN" with the two physical Fibre Channel ports connected to the test SAN. You can use the same technique to name two separate paths to a single storage target.
• Up to four virtual Fibre Channel adapters on a virtual machine. You can configure as many as four virtual Fibre Channel adapters on a virtual machine and associate each one with a virtual SAN. Each virtual Fibre Channel adapter is associated with one WWN address, or with two WWN addresses to support live migration. Each WWN address can be set automatically or manually.
• MPIO functionality. Hyper-V in Windows Server 2012 can use multipath I/O (MPIO) functionality to help ensure optimal connectivity to Fibre Channel storage from within a virtual machine. You can use MPIO functionality with Fibre Channel in the following ways:

o Virtualize workloads that use MPIO. Install multiple Fibre Channel ports in a virtual machine, and use MPIO to provide highly available connectivity to the LUNs accessible by the host.
o Configure multiple virtual Fibre Channel adapters inside a virtual machine, and use a separate copy of MPIO within the guest operating system of the virtual machine to connect to the LUNs the virtual machine can access. This configuration can coexist with a host MPIO setup.
o Use different device-specific modules (DSMs) for the host or for each virtual machine. This approach allows live migration of the virtual machine configuration, including the configuration of DSMs and connectivity between hosts, and compatibility with existing server configurations and DSMs.
• Live migration support with virtual Fibre Channel in Hyper-V. To support live migration of virtual machines across hosts running Hyper-V while maintaining Fibre Channel connectivity, two WWNs are configured for each virtual Fibre Channel adapter: Set A and Set B. Hyper-V automatically alternates between the Set A and Set B WWN addresses during a live migration. This helps to ensure that all LUNs are available on the destination host before the migration and that minimal downtime occurs during the migration.
Requirements for Virtual Fibre Channel in Hyper-V:
• One or more installations of Windows Server 2012 with the Hyper-V role installed. Hyper-V requires a computer with processor support for hardware virtualization.
• A computer with one or more Fibre Channel HBAs, each with an updated HBA driver that supports Virtual Fibre Channel. Updated HBA drivers are included with the in-box HBA drivers for some models.
• Windows Server 2008, Windows Server 2008 R2, or Windows Server 2012 as the guest operating system.
• Connection only to data LUNs. Storage accessed through a virtual Fibre Channel connection to a LUN can't be used as boot media.
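As a rough sketch of how this looks with the Windows Server 2012 Hyper-V cmdlets (the virtual SAN and VM names are placeholders, and this assumes a virtual SAN has already been defined on the host):

# List the virtual SANs defined on the Hyper-V host
Get-VMSan

# Add a virtual Fibre Channel adapter to a VM and attach it to the "ProductionSAN" virtual SAN
Add-VMFibreChannelHba -VMName "SQL01" -SanName "ProductionSAN"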

We only have a video capture of this demo at this time, but it is placed in "\\scdemostore01\demostore\Windows Server 2012\WS 2012 Demo Series\Click Thru Demos\Server Virtualization".

NOTE: This slide is animated and has 3 clicks
In this scenario we will discuss how you can achieve increased business flexibility with virtual machine mobility: the things that we have done with Windows Server 2012 Hyper-V that allow our customers to manage their virtual machines independently of the underlying physical infrastructure.
[Click] You may have a need to rebalance where the virtual machines are located, either across the servers the VMs reside on or across the storage resources used by the virtual machines. Also, you need to be able to handle changes in demand as they occur.
[Click] What we are going to talk about here are the different ways of moving a virtual machine around between different servers.
[Click] Within Windows Server 2012 we provide these values through:
• Live migration within a cluster
• Live migration of storage
• Shared nothing live migration

• Hyper-V Replica

NOTE: This slide is animated and has 5 clicks
To maintain optimal use of physical resources and to add new virtual machines easily, you must be able to move virtual machines whenever necessary, without disrupting your business. Windows Server 2008 R2 introduced live migration, which made it possible to move a running virtual machine from one physical computer to another with no downtime and no service interruption. However, this assumed that the virtual hard disk for the virtual machine remained consistent on a shared storage device such as a Fibre Channel or iSCSI SAN. In Windows Server 2012, Hyper-V builds on this feature, adding support for simultaneous live migrations, enabling you to move several virtual machines at the same time. Live migrations are no longer limited to a cluster, and virtual machines can be migrated across cluster boundaries, including to any Hyper-V host server in your environment. With Windows Server 2012 and SMB3, you can store your virtual machine hard disk files and configuration files on an SMB share and live migrate the VM to another host, whether that host is part of a cluster or not. When combined with features such as Network Virtualization, this feature even allows virtual machines to be moved between local and cloud hosts with ease. In this example, we are going to show how live migration works when connected to an SMB file share.
[Click]

Live migration setup: During the live migration setup stage, the source host creates a TCP connection with the destination host. This connection transfers the virtual machine configuration data to the destination host. A skeleton virtual machine is set up on the destination host, and memory is allocated to the destination virtual machine.
[Click] Memory page transfer: In the second stage of an SMB-based live migration, the memory that is assigned to the migrating virtual machine is copied over the network from the source host to the destination host. This memory is referred to as the "working set" of the migrating virtual machine. A page of memory is 4 KB. During this phase of the migration, the migrating virtual machine continues to run. Hyper-V iterates the memory copy process several times, with each iteration requiring a smaller number of modified pages to be copied. After the working set is copied to the destination host, the next stage of the live migration begins. The following figure shows this stage.
[Click] Memory page copy process: This stage is a memory copy process that duplicates the remaining modified memory pages for "Test VM" to the destination host. The source host transfers the CPU and device state of the virtual machine to the destination host. During this stage, the available network bandwidth between the source and destination hosts is critical to the speed of the live migration, so use of a 1-gigabit Ethernet (GbE) or faster connection is important. The faster the source host transfers the modified pages from the migrating virtual machine's working set, the more quickly the live migration is completed. The number of pages transferred in this stage is determined by how actively the virtual machine accesses and modifies the memory pages; the more modified pages there are, the longer it takes to transfer all pages to the destination host.
[Click] Moving the storage handle from source to destination: During this stage of a live migration, control of the storage that is associated with "Test VM", such as any virtual hard disk files or physical storage attached through a virtual Fibre Channel adapter, is transferred to the destination host. (Virtual Fibre Channel is also a new feature of Hyper-V; for more information, see "Virtual Fibre Channel in Hyper-V".)
[Click] Bringing the virtual machine online on the destination server: In this stage of a live migration, the destination server has the up-to-date working set for the virtual machine and access to any storage that the VM uses. At this time, the VM resumes operation.

Network cleanup: In the final stage of a live migration, the migrated virtual machine runs on the destination server. At this time, a message is sent to the network switch, which causes the switch to obtain the new MAC addresses of the migrated virtual machine so that network traffic to and from the VM can use the correct switch port.
The live migration process completes in less time than the TCP time-out interval for the virtual machine that is being migrated. TCP time-out intervals vary based on network topology and other factors.
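As a minimal sketch of kicking off such a live migration with the Windows Server 2012 Hyper-V cmdlets (the host and VM names are placeholders):

# Live migrate a running VM to another Hyper-V host; its storage stays on the SMB share
Move-VM -Name "Test VM" -DestinationHost "HV-Host02"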

NOTE: This slide is animated and has 3 clicks
Not only can we live migrate a virtual machine between two physical hosts; Hyper-V in Windows Server 2012 also introduces live storage migration, which lets you move virtual hard disks that are attached to a running virtual machine without downtime. Through this feature, you can transfer virtual hard disks, with no downtime, to a new location for upgrading or migrating storage, performing back-end storage maintenance, or redistributing your storage load. You can perform this operation by using a new wizard in Hyper-V Manager or the new Hyper-V cmdlets for Windows PowerShell. Live storage migration is available for both storage area network (SAN)-based and file-based storage.
When you move a running virtual machine's virtual hard disks, Hyper-V performs the following steps to move the storage:
[Click] After live storage migration is initiated, a new virtual hard disk is created on the target storage device. Throughout most of the move operation, disk reads and writes go to the source virtual hard disk. While reads and writes occur on the source virtual hard disk, the disk contents are copied to the new destination virtual hard disk.

[Click] After the initial disk copy is complete, disk writes are mirrored to both the source and destination virtual hard disks while outstanding disk changes are replicated.
[Click] After the source and destination virtual hard disks are synchronized, the virtual machine switches over to using the destination virtual hard disk, and the source virtual hard disk is deleted.
[Additional information] Updating the physical storage that is available to Hyper-V is the most common reason for moving a virtual machine's storage. You also may want to move virtual machine storage between physical storage devices, at runtime, to take advantage of new, lower-cost storage that is supported in this version of Hyper-V (such as SMB-based storage), or to respond to reduced performance that can result from bottlenecks in the storage throughput. Just as virtual machines might need to be dynamically moved in a cloud datacenter, allocated storage for running virtual hard disks might sometimes need to be moved for storage load distribution, storage device servicing, or other reasons. You can add physical storage to either a stand-alone system or to a Hyper-V cluster and then move the virtual machine's virtual hard disks to the new physical storage while the virtual machines continue to run.
Storage migration, combined with live migration, also lets you move a virtual machine between hosts on different servers that are not using the same storage. For example, if two Hyper-V servers are each configured to use different storage devices and a virtual machine must be migrated between these two servers, you can use storage migration to a shared folder on a file server that is accessible to both servers and then migrate the virtual machine between the servers (because they both have access to that share). Following the live migration, you can use another storage migration to move the virtual hard disk to the storage that is allocated for the target server. Windows Server 2012 provides the flexibility to move virtual hard disks both on shared storage subsystems and on non-shared storage, as long as a Windows Server 2012 SMB3 network shared folder is visible to both Hyper-V hosts.

You can easily perform live storage migration by using a wizard in Hyper-V Manager or the Hyper-V cmdlets for Windows PowerShell.
Benefits
Hyper-V in Windows Server 2012 lets you manage the storage of your cloud environment with greater flexibility and control while you avoid disruption of user productivity. Storage migration with Hyper-V in Windows Server 2012 gives you the flexibility to perform maintenance on storage subsystems, upgrade storage appliance firmware and software, and balance loads as capacity is used, all without shutting down virtual machines.
Requirements for live storage migration
• Windows Server 2012
• The Hyper-V role
• Virtual machines configured to use virtual hard disks for storage
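For example, a minimal sketch of the PowerShell route (the VM name and destination path are placeholders):

# Move all of a running VM's virtual hard disks and configuration to a new storage path
Move-VMStorage -VMName "Test VM" -DestinationStoragePath "\\FileServer01\VMShare\Test VM"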



NOTE: This slide is animated and has 4 clicks

With Windows Server 2012 Hyper-V, you can also perform a “Shared Nothing” Live Migration where you can move a virtual machine, live, from one physical system to another even if they don’t have connectivity to the same shared storage. This is useful, for example, in a branch office where you may be storing the virtual machines on local disk, and you want to move a VM from one node to another. This is also especially useful when you have two independent clusters and you want to move a virtual machine, live, between them, without having to expose their shared storage to one another. You can also use “Shared Nothing” Live Migration to migrate a virtual machine from one datacenter to another provided your bandwidth is large enough to transfer all of the data between the datacenters. As you can see in the animation, when you perform a live migration of a virtual machine between two computers that do not share an infrastructure, Hyper-V first performs a partial migration of the virtual machine’s storage by creating a virtual machine on the remote system and creating the virtual hard disk on the target storage device. [Click] While reads and writes occur on the source virtual hard disk, the disk contents are copied over the network to the new destination virtual hard disk.


This copy is performed by transferring the contents of the VHD between the two servers over the IP connection between the Hyper-V hosts. [Click] After the initial disk copy is complete, disk writes are mirrored to both the source and destination virtual hard disks while outstanding disk changes are replicated. This copy is performed by transferring the contents of the VHD between the two servers over the IP connection between the Hyper-V hosts. [Click] After the source and destination virtual hard disks are synchronized, the virtual machine live migration process is initiated, following the same process that was used for live migration with shared storage. After the virtual machine’s storage is migrated, the virtual machine migrates while it continues to run and provide network services. [Click] After the live migration is complete and the virtual machine is successfully running on the destination server, the files on the source server are deleted.
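A minimal sketch of a shared nothing live migration with the Hyper-V cmdlets, moving both the running VM and its storage to a destination host's local path (names, paths, and the -IncludeStorage usage are illustrative placeholders):

# Move the running VM and all of its storage to another host that shares no storage with this one
Move-VM -Name "BranchVM01" -DestinationHost "HV-Branch02" -IncludeStorage -DestinationStoragePath "D:\VMs\BranchVM01"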


Current situation
Business continuity is the ability to quickly recover business functions from a downtime event with minimal or no data loss. There are a number of reasons why businesses experience outages, including power failure, IT hardware failure, network outage, human errors, IT software failures, and natural disasters. Depending on the type of outage, customers need a high availability solution that simply restores the service. However, some outages that impact the entire datacenter, such as a natural disaster or an extended power outage, require a disaster recovery solution that restores data at a remote site in addition to bringing up the services and connectivity. Organizations need an affordable and reliable business continuity solution that helps them recover from a failure.
Before Windows Server 2012
Beginning with Windows Server 2008 R2, Hyper-V and Failover Clustering can be used together to make a virtual machine highly available and minimize disruptions. Administrators can seamlessly migrate their virtual machines to a different host in the cluster in the event of an outage, or to load balance their virtual machines without impacting virtualized applications. While this can protect virtualized workloads from a local host failure or scheduled maintenance of a host in a cluster, it does not protect businesses from an outage of an entire datacenter. While Failover Clustering can be used with hardware-based SAN replication across datacenters, these solutions are typically expensive. Hyper-V Replica fills an important gap in the Windows Server Hyper-V offering by providing an affordable in-box disaster recovery solution.

Windows Server 2012 Hyper-V Replica
Windows Server 2012 introduces Hyper-V Replica, a built-in feature that provides asynchronous replication of virtual machines for the purposes of business continuity and disaster recovery. It lets you replicate your Hyper-V virtual machines over a network link from one Hyper-V host at a primary site to another Hyper-V host at a Replica site, without reliance on storage arrays or other software replication technologies. The figure shows secure replication of virtual machines from different systems and clusters to a remote site over a WAN.
Hyper-V Replica tracks the write operations on the primary virtual machine and replicates these changes to the Replica server efficiently over a WAN. The network connection between the two servers uses the HTTP or HTTPS protocol and supports both integrated and certificate-based authentication. Connections configured to use integrated authentication are not encrypted; for an encrypted connection, you should choose certificate-based authentication.
In the event of failures (such as power failure, fire, or natural disaster) at the primary site, the administrator can manually fail over the production virtual machines to the Hyper-V server at the recovery site. During failover, the virtual machines are brought back to a consistent point in time, and within minutes they can be accessed by the rest of the network with minimal impact to the business. Once the primary site comes back, the administrators can manually revert the virtual machines to the Hyper-V server at the primary site.
Benefits of Hyper-V Replica
• Hyper-V Replica fills an important gap in the Windows Server Hyper-V offering by providing an affordable in-box business continuity and disaster recovery solution.
• Hyper-V Replica doesn't rely on storage arrays.
• More secure replication across the network.
• Hyper-V Replica is closely integrated with Windows failover clustering and provides easier replication across different migration scenarios in the primary and Replica servers.
• Failure recovery in minutes. In the event of an unplanned shutdown, Hyper-V Replica can restore your system in just minutes.

• Hyper-V Replica doesn't rely on other software replication technologies.
• Hyper-V Replica automatically handles live migration.
• Configuration and management are simpler with Hyper-V Replica:
o Integrated user interface (UI) with Hyper-V Manager.
o Failover Cluster Manager snap-in for Microsoft Management Console (MMC).
o Extensible WMI interface.
o Windows PowerShell command-line interface scripting capability.
Requirements
To use Hyper-V Replica, you need two physical computers configured with:
• Windows Server 2012.
• Hardware that supports the Hyper-V role.
• The Hyper-V server role.
• Sufficient storage to host the files that virtualized workloads use. Additional storage on the Replica server, based on the replication configuration settings, may be necessary.
• Sufficient network bandwidth among the locations that host the primary and Replica servers and sites.
• Firewall rules to permit replication between the primary and Replica servers and sites.
• The Failover Clustering feature, if you want to use Hyper-V Replica on a clustered virtual machine.
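A minimal sketch of enabling replication for one VM from the primary host, assuming the Replica server has already been configured to accept replication (server and VM names are placeholders):

# Enable replication of a VM to the Replica server over Kerberos-authenticated HTTP (port 80)
Enable-VMReplication -VMName "Finance-DB01" -ReplicaServerName "HV-Replica01.contoso.com" -ReplicaServerPort 80 -AuthenticationType Kerberos

# Send the initial copy of the VM's disks over the network
Start-VMInitialReplication -VMName "Finance-DB01"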

Click-through for this demo is located at:
\\scdemostore01\demostore\Windows Server 2012\WS 2012 Demo Series\Click Thru Demos\Server Virtualization\Hyper-V Shared Nothing Live Migration
Demo environment build instructions are located here:
\\scdemostore01\demostore\Windows Server 2012\WS 2012 Demo Series\Demo Builds

NOTE: This slide is animated and has 5 clicks
[Click] In this scenario, we are going to talk about how you can achieve greater uptime for your virtual environment by leveraging the availability improvements within Windows Server 2012. Here, the benefits are straightforward: you want to keep the virtual machines up and running and performing as well as possible.
[Click] This means minimizing downtime due to infrastructure changes.
[Click] Also, you may want to modify the virtual machine's configuration without having to shut down the VM.
[Click] If you have a multi-tenant environment, or need to guarantee minimum bandwidth, Windows Server 2012 has some improvements within its Quality of Service (QoS) settings.
[Click] We do this through new features delivered with Windows Server 2012 Hyper-V like:

• Clustering enhancements
• Quality of Service (QoS) minimum bandwidth
• Dynamic Memory improvements
• NIC Teaming
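As a quick sketch of the NIC Teaming piece, using the NetLbfo cmdlets built into Windows Server 2012 (the team and adapter names are placeholders):

# Team two physical adapters into one fault-tolerant, load-balanced team for the Hyper-V switch
New-NetLbfoTeam -Name "VMTeam" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent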


Clustering has provided organizations with protection against:

• Application and service failure
• System and hardware failure (such as CPUs, drives, memory, network adapters, and power supplies)
• Site failure (which could be caused by natural disaster, power outages, or connectivity outages)

Clustering enables high-availability solutions for many workloads, and it has included Hyper-V support since its initial release. By clustering your virtualized platform, you can increase availability and enable access to server-based applications in times of planned or unplanned downtime.
Other benefits of Hyper-V and Windows Server 2012:

• Extend clustered environment features to a new level
• Support greater access to storage
• Provide faster failover and migration of nodes


• Support for guest clustering via Fibre Channel. Windows Server 2012 provides Fibre Channel ports within the guest operating system, allowing you to connect to Fibre Channel directly from within virtual machines. This feature lets you virtualize workloads that use direct access to Fibre Channel storage and cluster guest operating systems over Fibre Channel. Virtual Fibre Channel also allows guest multipathing for high link availability using standard MPIO and DSMs.
• Clustered live migration enhancements. Live migrations in a clustered environment can now use higher network bandwidths (up to 10 GbE) to complete migrations faster.
• Encrypted cluster volumes. BitLocker-encrypted cluster disks enhance physical security for deployments outside secure datacenters, providing a critical safeguard for the cloud.
• Cluster Shared Volume (CSV) 2.0. The CSV feature, which simplifies the configuration and operation of virtual machines, has also been improved for greater security and performance. It also now integrates with storage arrays for replication and hardware snapshots out of the box.


• Transparent failover. SMB3 transparent failover lets file shares fail over to another cluster node with minimal interruption of the server applications that are storing data on these file shares. You can now more easily perform hardware or software maintenance of nodes in a File Server cluster (for example, one that stores virtual machine files such as configuration files, virtual hard disk files, and snapshots in file shares over the SMB3 protocol) by moving file shares between nodes with minimal interruption of the server applications using them. Also, if a hardware or software failure occurs on a cluster node, file shares fail over transparently.
• Hyper-V application monitoring. Hyper-V and failover clustering work together to bring higher availability to workloads that do not officially support clustering. By monitoring services and event logs inside the virtual machine, Hyper-V and failover clustering can detect whether the key services being provided by a virtual machine are healthy. If they are not healthy, automatic corrective action (restarting the virtual machine or moving it to a different Hyper-V server) can be taken.

• Virtual machine failover prioritization. Administrators can now configure virtual machine priorities to control the order in which virtual machines fail over or are started. This helps ensure that lower-priority virtual machines automatically release resources if they are needed for higher-priority virtual machines.
• Affinity (and anti-affinity) virtual machine rules. Administrators can now configure partnered virtual machines so that at failover the partnered machines are migrated together. For example, administrators can configure their SharePoint virtual machine and the partnered SQL Server virtual machine to fail over together to the same node. Administrators can also specify that two virtual machines cannot coexist on the same node in a failover scenario.
• In-box live migration queuing. Administrators can now perform large multiselect actions to queue live migrations of multiple virtual machines.
Requirements: Windows Server 2012 with the Hyper-V role installed.

Current situation
Public cloud hosting providers and large enterprises must often run multiple application servers on servers running Hyper-V. Hosting providers that host customers on a server running Hyper-V must deliver performance that's based on service level agreements (SLAs). Enterprises want to run multiple application servers on a server running Hyper-V with the confidence that each one will perform predictably. Most hosting providers and enterprises use a dedicated network adapter and a dedicated network for a specific type of workload, such as storage or live migration, to help achieve network performance isolation on a server running Hyper-V. This strategy works for 1-gigabit Ethernet (GbE) network adapters, but it becomes impractical for those using or planning to use 10-GbE network adapters. 10-GbE network adapters and switches are considerably more expensive than their 1-GbE counterparts. For most deployments, one or two 10-GbE network adapters provide enough bandwidth for all the workloads on a server running Hyper-V. To optimize the 10-GbE hardware, a server running Hyper-V requires new capabilities to manage bandwidth.
Windows Server 2008 R2
In Windows Server 2008 R2, QoS supports the enforcement of maximum bandwidth. This is known as rate limiting.

Consider a typical server running Hyper-V in which the following four types of network traffic share a single 10-GbE network adapter:
• Traffic between virtual machines and resources on other servers
• Traffic to and from storage
• Traffic for live migration of virtual machines between servers running Hyper-V
• Traffic to and from a CSV (intercommunication between nodes in a cluster)
If virtual machine data is rate limited to 3 gigabits per second (Gbps), the sum of the virtual machine data throughputs can't exceed 3 Gbps at any time, even if the other network traffic types don't use the remaining 7 Gbps of bandwidth. However, this also means the other types of traffic can reduce the actual amount of bandwidth available for virtual machine data to unacceptable levels, depending on how their maximum bandwidths are defined.
Windows Server 2012 solution
Windows Server 2012 introduces a new QoS bandwidth management feature, minimum bandwidth, that enables hosting providers and enterprises to provide services with predictable network performance to virtual machines on a server running Hyper-V.
Features of minimum bandwidth
Unlike maximum bandwidth, which is a bandwidth cap, minimum bandwidth is a bandwidth floor. It assigns a certain amount of bandwidth to a given type of traffic. In the event of congestion, when the desired network bandwidth exceeds the available bandwidth, minimum bandwidth is designed to help ensure that each type of network traffic receives at least its assigned bandwidth. If there's no congestion (that is, when there's sufficient bandwidth to accommodate all network traffic), each type of network traffic can exceed its quota and consume as much bandwidth as is available. This characteristic makes minimum bandwidth superior to maximum bandwidth in using available bandwidth. For this reason, minimum bandwidth is also known as fair sharing. This characteristic is essential to converging multiple types of network traffic on a single network adapter.

Two mechanisms
Windows Server 2012 offers two different mechanisms to enforce minimum bandwidth:
• The software solution: the newly enhanced packet scheduler in Windows Server 2012.
• The hardware solution: network adapters that support Data Center Bridging (DCB).
In both cases, network traffic needs to first be classified. The server either classifies a packet itself or gives instructions to a network adapter to classify it. The result of classification is a number of traffic flows in Windows, and a given packet can only belong to one of them. For example, a traffic flow could be a live migration connection, a file transfer between a server and a client, or a remote desktop connection. Based on how the bandwidth policies are configured, either the packet scheduler in Windows Server 2012 or the network adapter will dispatch the packets at a rate equal to or higher than the minimum bandwidth configured for the traffic flow.
Each of the two mechanisms has its own advantages and disadvantages:
• Packet scheduler. The software solution, which is built on the new packet scheduler in Windows Server 2012, provides a fine granularity of classification. It's the only viable choice if there are many traffic flows that require minimum bandwidth enforcement. A typical example is a server running Hyper-V hosting many virtual machines, where each virtual machine is classified as a traffic flow.
• Network adapter with DCB support. The hardware solution, which depends on DCB support on the network adapter, supports far fewer traffic flows but is able to classify network traffic that doesn't originate from the networking stack. A typical scenario involves a Converged Network Adapter that supports iSCSI offload, in which iSCSI traffic bypasses the networking stack and is framed and transmitted directly by the Converged Network Adapter. Because the packet scheduler in the networking stack doesn't process this offloaded traffic, DCB is the only viable choice to enforce minimum bandwidth.
If the importance of workloads in virtual machines is relative, you can use relative minimum bandwidth, where you assign a weight to each virtual machine, giving the more important ones a higher weight. You determine the bandwidth fraction that you assign to a virtual machine by dividing the virtual machine's weight by the sum of all the weights of the virtual machines that are attached to the Hyper-V Extensible Switch. The following figure illustrates relative minimum bandwidth. If you want to provide an exact bandwidth, you should use strict minimum bandwidth, where you assign an exact bandwidth quota to each virtual machine that is attached to the Hyper-V Extensible Switch.
Bandwidth oversubscription: The maximum amount of bandwidth that can be assigned to virtual machines is the bandwidth of a member network adapter in the network adapter team. The figure shows an invalid, oversubscribed configuration.

Both mechanisms can be employed on the same server. For example, a server running Hyper-V has two physical network adapters: one binds to a virtual switch and serves virtual machine data, and the other serves the rest of the traffic of the host server. You can enable the software-based minimum bandwidth in Hyper-V to help ensure bandwidth fair sharing among virtual machines and enable the hardware-based minimum bandwidth on the second network adapter to help ensure bandwidth fair sharing among various types of network traffic from the host server.
Microsoft doesn't recommend that you enable both mechanisms at the same time for a given type of network traffic:
• Using the previous example, live migration and storage traffic are configured to use the second network adapter on the server running Hyper-V.
• If you've already configured the network adapter to allocate bandwidth for live migration and storage traffic, you shouldn't also configure the packet scheduler in Windows Server 2012 to do the same, and vice versa.
Enabling both mechanisms at the same time for the same types of network traffic compromises the intended results.

8/29/2012
The figure shows how relative minimum bandwidth works for each of the four types of network traffic flows in three different time periods: T1, T2, and T3. In this figure, the table on the left shows the configuration of the minimum amount of required bandwidth a given type of network traffic flow needs. For example, storage is configured to have at least 40 percent of the bandwidth (4 Gbps of a 10-GbE network adapter) at any time. The table on the right shows the actual amount of bandwidth each type of network traffic has in T1, T2, and T3. In this example, storage is actually sent at 5 Gbps, 4 Gbps, and 6 Gbps, respectively, in the three periods.
QoS management
In Windows Server 2012, you manage QoS policies and settings dynamically with Windows PowerShell. The new QoS cmdlets support both the QoS functionalities available in Windows Server 2008 R2—such as maximum bandwidth and priority tagging—and the new features available in Windows Server 2012, such as minimum bandwidth (a configuration sketch follows the requirements below).
Benefits of QoS minimum bandwidth
QoS minimum bandwidth benefits vary from public cloud hosting providers to enterprises. Most hosting providers and enterprises today use a dedicated network adapter and a dedicated network for a specific type of workload such as storage or live migration to help achieve network performance isolation on a server running Hyper-V. Although this works for those using 1-GbE network adapters, it becomes impractical for those using or planning to use 10-GbE network adapters. Not only does one 10-GbE network adapter (or two for high availability) already provide

Page 30

sufficient bandwidth for all the workloads on a server running Hyper-V in most deployments, but 10-GbE network adapters and switches are considerably more expensive than their 1-GbE counterparts. To make the best use of 10-GbE hardware, a server running Hyper-V requires new capabilities to manage bandwidth across the infrastructure, which includes computing, storage, and network resources.
Benefits for public cloud hosting providers:
• Host customers on a server running Hyper-V and still be able to provide a certain level of performance based on SLAs.
• Help to ensure that customers won't be affected or compromised by other customers on their shared infrastructure.
Benefits for enterprises:
• Run multiple application servers on a server running Hyper-V and be confident that each application server will deliver predictable performance, eliminating the fear of virtualization due to lack of performance predictability.
Requirements
Minimum QoS can be enforced through the following two methods:
• The first method relies on software built into Windows Server 2012 and has no other requirements.
• The second method, which is hardware assisted, requires a network adapter that supports Data Center Bridging. For hardware-enforced minimum bandwidth, you must use a network adapter that supports DCB, and the miniport driver of the network adapter must implement the NDIS QoS APIs. A network adapter must support Enhanced Transmission Selection and Priority-Based Flow Control to pass the NDIS QoS logo test created for Windows Server 2012. Explicit Congestion Notification is not required for the logo. The IEEE Enhanced Transmission Selection specification includes a software protocol called Data Center Bridging Exchange (DCBX) to let a network adapter and switch exchange DCB configurations. DCBX is also not required for the logo.

Enabling QoS in Windows Server 2012 when it is running as a virtual machine is not recommended. The minimum bandwidth enforced by the packet scheduler works best on 1-GbE or 10-GbE network adapters.
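As a rough illustration of the Windows PowerShell QoS management mentioned above, the following sketch assigns minimum bandwidth weights to live migration and SMB storage traffic on the host; the weight values are illustrative assumptions, not recommendations:

    # Built-in match conditions identify the traffic; the weights define relative
    # minimum bandwidth shares during congestion.
    New-NetQosPolicy -Name "Live Migration" -LiveMigration -MinBandwidthWeightAction 30
    New-NetQosPolicy -Name "Storage (SMB)" -SMB -MinBandwidthWeightAction 40

    # Review the resulting policies.
    Get-NetQosPolicy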

8/29/2012
Note: This slide is animated and has 1 click
Dynamic Memory was introduced with Windows Server 2008 R2 SP1 and is used to reallocate memory between virtual machines that are running on a Hyper-V host. Improvements made within Windows Server 2012 Hyper-V include:
• Minimum memory setting – being able to set a minimum value for the memory assigned to a virtual machine that is lower than the startup memory setting
• Hyper-V smart paging – paging that is used to enable a virtual machine to reboot while the Hyper-V host is under extreme memory pressure
• Memory ballooning – the technique used to reclaim unused memory from a virtual machine to be given to another virtual machine that has memory needs
• Runtime configuration – the ability to adjust the minimum memory setting and the maximum memory configuration setting on the fly while the virtual machine is running, without requiring a reboot
In Windows Server 2008 R2 with SP1, a common challenge for administrators is upgrading the maximum amount of memory for a virtual machine as demand increases. For example, consider a virtual machine running SQL Server and configured with a maximum of 8 GB of RAM. Because of an increase in the size of the databases, the virtual machine now requires more memory. Because a memory upgrade requires shutting down the virtual machine, you must shut down the virtual machine to perform the

Page 31

upgrade, which requires planning for downtime and decreasing business productivity. With Windows Server 2012, an administrator can change the maximum memory value of the virtual machine while it is running, without any downtime to the VM, and the change is applied immediately. [Click] As memory pressure on the virtual machine increases, the Hot-Add memory process of the VM will ask for more memory, and that memory is now available for the virtual machine to use.
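For example, a minimal Windows PowerShell sketch of this runtime change might look like the following; the virtual machine name and sizes are placeholders:

    # Raise the Dynamic Memory maximum for a running virtual machine; no shutdown
    # or reboot of the guest is required.
    Set-VMMemory -VMName "SQL01" -DynamicMemoryEnabled $true -MaximumBytes 16GB

    # Confirm the new settings.
    Get-VMMemory -VMName "SQL01"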

8/29/2012
Note: This slide is animated and has 2 clicks
Hyper-V Smart Paging is a memory management technique that uses disk resources as additional, temporary memory when more memory is required to restart a virtual machine. It provides a reliable way to keep the virtual machines running when no physical memory is available. However, it can degrade virtual machine performance because disk access speeds are much slower than memory access speeds. To minimize the performance impact of Smart Paging, Hyper-V uses it only when all of the following occur:
• The virtual machine is being restarted.
• No physical memory is available.
• No memory can be reclaimed from other virtual machines that are running on the host.
Hyper-V Smart Paging is not used when:
• A virtual machine is being started from an off state (instead of a restart).
• Oversubscribing memory for a running virtual machine would result.
• A virtual machine is failing over in Hyper-V clusters.
Hyper-V continues to rely on internal guest paging when host memory is oversubscribed because it is more effective than Hyper-V Smart Paging. With internal guest paging, the paging operation inside virtual machines is performed by Windows Memory Manager. Windows Memory Manager has more information than does the

Page 32

Hyper-V host about memory use within the virtual machine, which means it can provide Hyper-V with better information to use when it chooses the memory to be paged. Because of this, internal guest paging incurs less overhead to the system than Hyper-V Smart Paging.
In this example, we have multiple VMs running, and we are restarting the last virtual machine. Normally, that VM would be using some amount of memory between the Minimum and Maximum values. [Click] When this occurs, the Hyper-V host is running fairly loaded and there isn't enough memory available to give the virtual machine all of the startup value needed to boot. In this case, a Hyper-V Smart Paging file is created for the VM to give it enough RAM to be able to start. [Click] After some time, the Hyper-V host will use the Dynamic Memory techniques like ballooning to pull the RAM away from this or other virtual machines to free up enough RAM to bring all of the Smart Paging contents back off of the disk.

8/29/2012
What is NIC Teaming?
NIC Teaming is a method of combining (aggregating) multiple network connections in parallel to increase throughput beyond what a single connection could sustain, and to provide redundancy in case one of the links fails. NIC Teaming is also known as "network adapter teaming technology" and "load balancing failover" (LBFO).
To help increase reliability and performance in virtualized environments, Windows Server 2012 includes built-in support for NIC Teaming–capable network adapter hardware. Although NIC Teaming in Windows Server 2012 is not a Hyper-V feature, it's important for business-critical Hyper-V environments because it can provide increased reliability and performance for virtual machines.
NIC Teaming in a Hyper-V environment
The failure of an individual Hyper-V port or virtual network adapter can cause a loss of connectivity for a virtual machine. By using two virtual network adapters in a team, you can protect against connectivity loss and, when both adapters are connected, double throughput.
Virtual network adapter virtual switch teaming
NIC Teaming in Windows Server 2012 lets a virtual machine have virtual network adapters connected to more than one Hyper-V Extensible Switch and maintain connectivity even if the network adapter under that virtual switch is disconnected.

Page 33

This is particularly important when working with features such as SR-IOV. SR-IOV traffic doesn't go through the virtual switch and thus can't be protected by a network adapter team that's under a virtual switch. With the NIC Teaming feature, you can set up two virtual switches, each connected to its own SR-IOV–capable network adapter. NIC Teaming then works in one of the following ways:
• Each virtual machine can install a virtual function from one or both SR-IOV network adapters and, if a network adapter disconnection occurs, fail over from the primary virtual function to the backup virtual function.
• Each virtual machine may have a virtual function from one network adapter and a non-virtual function interface to the other switch. If the network adapter associated with the virtual function becomes disconnected, the traffic can fail over to the other switch with minimal loss of connectivity.
• Because failover between network adapters in a virtual machine might result in traffic being sent with the MAC address of the other interface, each virtual switch port associated with a virtual machine that's using NIC Teaming must be set to allow MAC spoofing.
The Windows Server 2012 implementation of NIC Teaming supports up to 32 network adapters in a team.
Benefits
NIC Teaming benefits include:
• Higher reliability against failure. NIC Teaming gives you the benefit of network fault tolerance on your physical servers and virtual machines by using at least two receive-side scaling (RSS)–capable network adapters from vendors other than Microsoft.
• Better throughput.
NIC Teaming management
You can configure NIC Teaming in Windows Server 2012 through the NIC Teaming Server Manager configuration UI or with Windows PowerShell, with no need for a third-party teaming solution (a configuration sketch follows the requirements below).
Requirements
To implement NIC Teaming for a virtual machine, you need:
• Windows Server 2012.
• At least one network adapter, or two or more network adapters of the same speed.

• Two or more network adapters if you are seeking bandwidth aggregation or failover protection.
• One or more network adapters if you are seeking VLAN segregation for the network stack.
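For example, a minimal Windows PowerShell sketch of creating a host-side team and allowing MAC spoofing for a teamed virtual machine; the team, adapter, and VM names are placeholders:

    # Create a switch-independent team from two physical adapters, load balancing
    # by Hyper-V switch port.
    New-NetLbfoTeam -Name "VMTeam" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort

    # A virtual machine that teams its own virtual network adapters may send
    # frames with the MAC address of either interface, so allow MAC spoofing on
    # its switch ports.
    Set-VMNetworkAdapter -VMName "Web01" -MacAddressSpoofing On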

8/29/2012
Note: This slide is animated and has 2 clicks
The failure of an individual Hyper-V port or virtual network adapter can cause a loss of connectivity for a virtual machine. Using multiple virtual network adapters in a Network Interface Card (NIC) Teaming solution can prevent connectivity loss and, when multiple adapters are connected, multiply throughput.
To increase reliability and performance in virtualized environments, Windows Server 2012 includes built-in support for NIC Teaming–capable network adapter hardware. NIC Teaming is also known as "network adapter teaming technology" and "load balancing failover" (LBFO). Although NIC Teaming in Windows Server 2012 is not a Hyper-V feature, it is important for business-critical Hyper-V environments because it can provide increased reliability and performance for virtual machines.
NIC Teaming in Windows Server 2012 lets a virtual machine have virtual network adapters that are connected to more than one virtual switch and still have connectivity even if the network adapter under that virtual switch is disconnected. This is particularly important when working with features such as SR-IOV traffic, which does not go through the Hyper-V Extensible Switch and thus cannot be protected by a network adapter team that is under a virtual switch. With the virtual machine teaming option, you can set up two virtual switches, each connected to its own SR-IOV–capable network adapter. NIC Teaming then works in one

Page 34

of the following ways:
• Each virtual machine can install a virtual function from one or both SR-IOV network adapters and, if a network adapter disconnection occurs, fail over from the primary virtual function to the back-up virtual function.
• Each virtual machine may have a virtual function from one network adapter and a non-virtual function interface to the other switch. If the network adapter associated with the virtual function becomes disconnected, the traffic can fail over to the other switch without losing connectivity.
• Because failover between network adapters in a virtual machine might result in traffic being sent with the MAC address of the other interface, each virtual switch port associated with a virtual machine using NIC Teaming must be set to permit MAC spoofing.
The Windows Server 2012 implementation of NIC Teaming supports up to 32 network adapters in a team. As shown in the following figure, the Hyper-V Extensible Switch can take advantage of the native provider support for NIC Teaming, allowing high availability and load balancing across multiple physical network interfaces.
[Click] As you lose one of the NICs within the team…
[Click] The network traffic that was going through that adapter will now flow through one of the remaining adapters within the team.

8/29/2012
Click-through demo located at "\\scdemostore01\demostore\Windows Server 2012\WS 2012 Demo Series\Click Thru Demos\Server Virtualization\Hyper-V QOS"
Demo environment build instructions are located here: \\scdemostore01\demostore\Windows Server 2012\WS 2012 Demo Series\Demo Builds

Page 35

8/29/2012
NOTE: This slide is animated and has 3 clicks
[Click] Now, let's talk about things that we have done, where we've worked with our partners to increase the capabilities of Hyper-V for an organization as well as increase the performance of the virtual machines running within this Hyper-V environment.
[Click] We have also increased the management capabilities of a running Hyper-V environment through the tools provided inbox and through partner-generated tools.
[Click] Windows Server 2012 Hyper-V accomplishes this with:
• Hyper-V Extensible Switch
• Hardware offloading
• Windows PowerShell
• Integration with Microsoft System Center

Page 36

8/29/2012
Before Windows Server 2012
Many enterprises need the ability to extend virtual switch features with their own plug-ins to suit their virtual environment. If you're in charge of making IT purchasing decisions at your company, you want to know that the virtualization platform you choose won't lock you in to a small set of compatible features, devices, or technologies.
With Windows Server 2012
The Hyper-V Extensible Switch in Windows Server 2012 is a layer-2 virtual network switch that provides programmatically managed and extensible capabilities to connect virtual machines to the physical network. The Hyper-V Extensible Switch is an open platform that lets multiple vendors provide extensions that are written to standard Windows API frameworks. The reliability of extensions is strengthened through the Windows standard framework and reduction of required third-party code for functions, and is backed by the Windows Hardware Quality Labs (WHQL) certification program. You can manage the Hyper-V Extensible Switch and its extensions by using Windows PowerShell, programmatically with WMI, or through the Hyper-V Manager user interface.
Extensibility of the Hyper-V Extensible Switch

Page 37

Windows Server 2012 extends the virtual switch to provide new capabilities. The Hyper-V Extensible Switch architecture in Windows Server 2012 is an open framework that allows third parties to add new functionality such as monitoring, forwarding, and filtering to the virtual switch. The diagram shows the architecture of the Hyper-V Extensible Switch and the extensibility model.
Two platforms
Extensions are implemented using the following drivers:
• Network Device Interface Specification (NDIS) filter drivers are used to monitor or modify network packets in Windows. NDIS filters were introduced with the NDIS 6.0 specification.
• Windows Filtering Platform (WFP) callout drivers, introduced in Windows Vista and Windows Server 2008, let independent software vendors (ISVs) create drivers to filter and modify TCP/IP packets, monitor or authorize connections, filter IP security (IPsec)-protected traffic, and filter remote procedure calls (RPCs). Filtering and modifying TCP/IP packets provides unprecedented access to the TCP/IP packet processing path. In this path, you can examine or modify outgoing and incoming packets before additional processing occurs. By accessing the TCP/IP processing path at different layers, you can more easily create firewalls, antivirus software, diagnostic software, and other types of applications and services. For more information, see the Windows Filtering Platform.
Extensions may extend or replace these aspects of the switching process:
• Ingress filtering
• Destination lookup and forwarding
• Egress filtering
Only one instance of the forwarding extension may be used per switch instance, and it overrides the default switching of the Hyper-V Extensible Switch. Some other features of Hyper-V Extensible Switch extensibility are:

• Extension monitoring. Multiple monitoring and filtering extensions can be supported at the ingress and egress portions of the Hyper-V Extensible Switch. In addition, by monitoring extensions you can gather statistical data by monitoring traffic at different layers of the Hyper-V Extensible Switch.
• Extension uniqueness. Extension state/configuration is unique to each instance of an Extensible Switch on a machine.
• Extensions that learn virtual machine life cycle. Virtual machine activity cycle is similar to that of physical servers, having peak times during various parts of the day or night based on their core workloads. Extensions can learn the flow of network traffic based on the workload cycle of your virtual machines and optimize your virtual network for greater performance.
• Extensions that can veto state changes. Extensions can help ensure the security and reliability of your system by identifying harmful state changes and stopping them from being implemented.
• Multiple extensions on same switch. Multiple extensions can coexist on the same Hyper-V Extensible Switch. Extensions can implement monitoring, security, and other features to further improve the performance, management, and diagnostic enhancements of the Hyper-V Extensible Switch.
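As a brief, hedged illustration of managing these extensions with Windows PowerShell (the switch name is a placeholder, and the extension shown is one of the in-box examples):

    # List the extensions installed on a virtual switch and their state.
    Get-VMSwitchExtension -VMSwitchName "ExternalSwitch"

    # Enable or disable a specific extension, such as the in-box capture extension.
    Enable-VMSwitchExtension -VMSwitchName "ExternalSwitch" -Name "Microsoft NDIS Capture"
    Disable-VMSwitchExtension -VMSwitchName "ExternalSwitch" -Name "Microsoft NDIS Capture"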

8/29/2012
This table lists the various types of Hyper-V Extensible Switch extensions. These are some of the things that you can do when you have the extensible switch. Depending on which layer you filter and which layer you integrate into, you could help with things like network monitoring and packet filtering so you can increase security; you can also create virtual firewalls as well as things like intrusion detection within the systems.

Page 38

8/29/2012
Crucial maintenance tasks for virtual hard disks, such as merge, move, and compact, depend on copying large amounts of data. In Windows Server 2012, the speed of your virtualization platform should rival that of physical hardware. SAN vendors are working to provide near-instantaneous copy operations of large amounts of data, a hardware feature known as a copy offload. This storage lets the system above the disks specify the move of a specific data set from one location to another.
Offloaded Data Transfer (ODX) support is a feature of the storage stack of Hyper-V in Windows Server 2012. ODX, when used with offload-capable SAN storage hardware, lets a storage device perform a file copy operation without the main processor of the Hyper-V host actually reading the content from one storage place and writing it to another.
Technical description
The storage stack of Hyper-V in Windows Server 2012 supports ODX operations so that these operations can be passed from the guest operating system to the host hardware, letting the workload use ODX-enabled storage as if it were running in a non-virtualized environment. Whenever possible, Hyper-V takes advantage of the new SAN copy offload innovations to copy large amounts of data from one location to another. The

Page 39

Hyper-V storage stack also issues copy offload operations in VHD and VHDX maintenance operations, such as merging disks, and in storage migration meta-operations in which large amounts of data are moved from one virtual hard disk to another virtual hard disk or to another location.
ODX uses a token-based mechanism for reading and writing data within or between intelligent storage arrays. Instead of routing the data through the host, a small token is copied between the source and destination. The token simply serves as a point-in-time representation of the data. As an example, when you copy a file or migrate a virtual machine between storage locations (either within or between storage arrays), a token that represents the virtual machine file is copied, which removes the need to copy the underlying data through the servers.
In a token-based copy operation, the steps are as follows (see the following figure):
1. A user initiates a file copy or move in Windows Explorer, a command-line interface, or a virtual machine migration.
2. Windows Server automatically translates this transfer request into an ODX operation (if supported by the storage array) and receives a token representation of the data.
3. The token is copied between the source and destination systems.
4. The token is delivered to the storage array.
5. The storage array performs the copy internally and returns progress status.
ODX is especially significant in the cloud space when you must provision new virtual machines from virtual machine template libraries or when virtual hard disk operations are triggered and require large blocks of data to be copied, as in virtual hard disk merges, storage migration, and live migration. These copy operations are handled by the storage device, which must be able to perform offloads (such as an offload-capable iSCSI or Fibre Channel SAN, or a file server based on Windows Server 2012), and this frees up the Hyper-V host processors to carry more virtual machine workloads.
Benefits
ODX frees up the main processor to handle virtual machine workloads and lets you achieve native-like performance when your virtual machines read from and write to storage. Feature-level benefits of ODX are:
• Greatly reduced time to copy large amounts of data.

• Copy operations that don't use processor time.
• Virtualized workload that operates as efficiently as it would in a non-virtualized environment.
You can more rapidly perform crucial maintenance tasks for virtual hard disks (such as merge, move, and compact) that depend on copying large amounts of data without using processor time. Enabling ODX support in the Hyper-V storage stack makes it possible to complete these operations in a fraction of the time it would have taken without the support.
Requirements
ODX support in Hyper-V requires the following:
• ODX-capable hardware to host the virtual hard disk files, connected to the virtual machine as virtual SCSI devices or directly attached (sometimes referred to as pass-through disks).
• This optimization is also supported for natively attached, VHDX-based virtual disks.
• VHD- or VHDX-based virtual disks attached to virtual IDE do not support this optimization because IDE devices lack ODX support.

8/29/2012
The figure shows the architecture of SR-IOV support in Hyper-V.
Support for SR-IOV networking devices
Single Root I/O Virtualization (SR-IOV) is a standard introduced by the PCI-SIG, the special-interest group that owns and manages PCI specifications as open industry standards. SR-IOV works in conjunction with system chipset support for virtualization technologies that provide remapping of interrupts and Direct Memory Access, and allows SR-IOV-capable devices to be assigned directly to a virtual machine. Hyper-V in Windows Server 2012 enables support for SR-IOV-capable network devices and allows an SR-IOV virtual function of a physical network adapter to be assigned directly to a virtual machine. This increases network throughput and reduces network latency while also reducing the host CPU overhead required for processing network traffic.
Benefits
These new Hyper-V features let enterprises take full advantage of the largest available host systems to deploy mission-critical, tier-1 business applications with large, demanding workloads. You can configure your systems to maximize the use of host system processors and memory to

Page 40

effectively handle the most demanding workloads.
Requirements
To take advantage of the new Hyper-V features for host scale and scale-up workload support, you need the following:
• One or more Windows Server 2012 installations with the Hyper-V role installed. Hyper-V requires a server that provides processor support for hardware virtualization.
• The number of virtual processors that may be configured in a virtual machine depends on the number of processors on the physical machine. You must have at least as many logical processors in the virtualization host as the number of virtual processors required in the virtual machine. For example, to configure a virtual machine with the maximum of 32 virtual processors, you must be running Hyper-V in Windows Server 2012 on a virtualization host that has 32 or more logical processors.
SR-IOV networking requires the following:
• A host system that supports SR-IOV (such as Intel VT-d2), including chipset support for interrupt and DMA remapping and proper firmware support to enable and describe the platform's SR-IOV capabilities to the operating system.
• An SR-IOV–capable network adapter and driver in both the management operating system (which runs the Hyper-V role) and each virtual machine where a virtual function is assigned.
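For illustration, a minimal Windows PowerShell sketch of enabling SR-IOV for a virtual machine, assuming hypothetical adapter and VM names and SR-IOV–capable hardware:

    # SR-IOV can only be enabled when the virtual switch is created.
    New-VMSwitch -Name "SriovSwitch" -NetAdapterName "NIC2" -EnableIov $true

    # Request a virtual function for the virtual machine's network adapter.
    Set-VMNetworkAdapter -VMName "Web01" -IovWeight 100

    # Check whether the host reports SR-IOV support and why it might be unavailable.
    Get-VMHost | Select-Object IovSupport, IovSupportReasons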

8/29/2012
Before Windows Server 2012
Windows PowerShell is the scripting solution for automating tasks in Windows Server. However, in earlier versions of Windows Server, writing scripts for Hyper-V with in-box tools required you to learn WMI, which provides a very flexible set of interfaces that are designed for developers. IT professionals who are involved with virtualization need ways to easily automate a number of administrative tasks without having to learn developer skills.
New automation support for Hyper-V
Windows Server 2012 introduces more than 150 built-in Hyper-V cmdlets for Windows PowerShell. The new Hyper-V cmdlets for Windows PowerShell, designed for IT professionals, let you perform all available tasks in the GUI of Hyper-V Manager and several tasks exclusively through the cmdlets in Windows PowerShell. This design is reflected in the following ways:
• Task-oriented interface:
o Hyper-V cmdlets are designed to make it easier for IT professionals to go from thinking about the task to actually performing it.
o Hyper-V administrators often must manage more than just Hyper-V. By using the same verbs as other Windows PowerShell cmdlets, the Hyper-V cmdlets make it easier for you to extend your existing knowledge of Windows PowerShell. For example, if you are familiar with managing services by using Windows PowerShell, you can reuse the same verbs to perform the corresponding tasks on a virtual machine.

Page 41

• Consistent cmdlet nouns to simplify discoverability. There are many cmdlets to learn (more than 140). The nouns of the Hyper-V cmdlets make it easier for you to discover the cmdlets you need when you need them.
Requirements
To use the new Hyper-V cmdlets you need the following:
• Windows Server 2012.
• Hyper-V server role.
• Computer with processor support for hardware virtualization.
• Administrator or Hyper-V Administrator user account.
• Optionally, if you want to use the Hyper-V cmdlets remotely, install the Hyper-V Windows PowerShell cmdlets feature on a computer running Windows 8 and run the cmdlets as an Administrator or Hyper-V Administrator on the server.

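To make the verb-reuse point above concrete, here is a small sketch; the service and virtual machine names are placeholders:

    # The same verbs you already use for services apply to virtual machines.
    Get-Service  -Name "W32Time"
    Get-VM       -Name "Web01"

    Stop-Service -Name "W32Time"
    Stop-VM      -Name "Web01"

    # Discover the Hyper-V cmdlets by noun.
    Get-Command -Module Hyper-V -Noun VM*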
8/29/2012
When you have many Hyper-V hosts, administration across these environments can become very challenging. Microsoft System Center provides the management capabilities for Windows Server 2012. Not only are we providing access via PowerShell for organizations to support and create their own custom workflows in automation, but we are also building within System Center the ability itself to support these new features and advancements within Windows Server 2012.
The main component within System Center 2012 SP1 for managing Hyper-V is Virtual Machine Manager, which will support all of the new capabilities of Windows Server 2012. For example, a hosting provider may use System Center to configure the Network Virtualization components of Windows Server 2012 to support the multitenancy capabilities that Windows Server 2012 enables. Within Virtual Machine Manager you will be able to migrate virtual machines between hosts within a cluster. Another example would be the support of Shared Nothing Live Migration between hosts where the only connection between them is an Ethernet cable.

Page 42

8/29/2012
We only have a video capture of this demo at this time, but it is placed in "\\scdemostore01\demostore\Windows Server 2012\WS 2012 Demo Series\Click Thru Demos\Server Virtualization"

Page 43

8/29/2012
NOTE: This slide is animated and has 4 clicks
We have talked about how customers want to be able to handle a multitenant environment. Whether you are an enterprise organization or even a small organization, or you are a hosting provider, you may have multiple business units or customers that require you to support multiple different domains, or multiple customers that have separate distinct networks but may have the exact same IP space.
[Click] With Windows Server 2012, we've added some capabilities to help handle a multitenant environment, allowing you to have multiple customers or divisions on the same subnets without IP address conflicts. Previously, you would have had to do this with many different VLANs.
[Click] When hosting these many different customers it is imperative that you have knowledge regarding the usage of your resources, and that you have the ability to showback and chargeback to the organization or customers utilizing your resources.
[Click] And lastly, whether you have organizations running virtual machines on their premises and want to also run virtual machines at your site, you will need to be able to span or connect from outside networks to your existing networks. This becomes a challenge for

Page 44

organizations to be able to handle these requests, and Windows Server 2012 does a great job of supporting this type of environment.
[Click] Windows Server 2012 Hyper-V supports this with:
• Network Virtualization
• IP Portability
• Resource Metering

8/29/2012
Virtualized data centers are becoming more popular and practical every day. IT organizations and hosting providers have begun offering infrastructure as a service (IaaS), which provides more flexible, virtualized infrastructures to customers—"server instances on-demand." Because of this trend, IT organizations and hosting providers must offer customers enhanced security and isolation from one another, to meet customer expectations and not be a barrier to cloud adoption.
If you're hosting two companies, you must help ensure that each company is provided its own privacy and security. Before Windows Server 2012, server virtualization provided isolation between virtual machines, but the network layer of the data center was still not fully isolated and implied layer-2 connectivity between different workloads that run over the same infrastructure. For the hosting provider, isolation in the virtualized environment must be equal to isolation in the physical data center.
Isolation is almost as important in an enterprise environment. Although all internal departments belong to the same organization, certain workloads and environments (such as finance and human resource systems) must still be isolated from each other. IT departments that offer private

Page 45

clouds and move to an IaaS operational mode must consider this requirement and provide a way to isolate such highly sensitive workloads.
Windows Server 2012 contains new security and isolation capabilities through the Hyper-V Extensible Switch. The Hyper-V Extensible Switch is a layer-2 virtual network switch that provides programmatically managed and extensible capabilities to connect virtual machines to the physical network with policy enforcement for security and isolation. The figure shows a network with the Hyper-V Extensible Switch. With Windows Server 2012, you can configure Hyper-V servers to enforce network isolation among any set of arbitrary isolation groups, which are typically defined for individual customers or sets of workloads. Windows Server 2012 provides the isolation and security capabilities for multitenancy by offering the new features presented in the next slides.

8/29/2012
NOTE: This slide is animated and has 3 clicks
Multitenant security and isolation using the Hyper-V Extensible Switch is accomplished with private virtual LANs (PVLANs) (this slide) and other tools (next slide).
Virtual machine isolation with PVLANs. VLAN technology is traditionally used to subdivide a network and provide isolation for individual groups that share a single physical infrastructure. Windows Server 2012 introduces support for PVLANs, a technique used with VLANs that can be used to provide isolation between two virtual machines on the same VLAN. When a virtual machine doesn't need to communicate with other virtual machines, you can use PVLANs to isolate it from other virtual machines in your data center. By assigning each virtual machine in a PVLAN one primary VLAN ID and one or more secondary VLAN IDs, you can put the secondary PVLANs into one of three modes (as shown in the following table). These PVLAN modes determine which other virtual machines on the PVLAN a virtual machine can talk to. If you want to isolate a virtual machine, put it in isolated mode.
The animation shows how the three PVLAN modes can be used to isolate virtual machines that share a primary VLAN ID. In this example the primary VLAN ID is 2, and the two secondary VLAN IDs are 4 and 5.

Page 46

You can put the secondary PVLANs into one of three modes:
[Click] • Isolated. Isolated ports cannot exchange packets with each other at layer 2. In fact, isolated ports can only talk to promiscuous ports.
[Click] • Community. Community ports on the same VLAN ID can exchange packets with each other at layer 2. They can also talk to promiscuous ports. They cannot talk to isolated ports.
[Click] • Promiscuous. Promiscuous ports can exchange packets with any other port on the same primary VLAN ID (the secondary VLAN ID makes no difference).
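For example, a minimal Windows PowerShell sketch of the slide's configuration (primary VLAN ID 2, secondary IDs 4 and 5); the virtual machine names are placeholders:

    # Isolated: the VM can talk only to promiscuous ports.
    Set-VMNetworkAdapterVlan -VMName "VM1" -Isolated -PrimaryVlanId 2 -SecondaryVlanId 4

    # Community: VMs on secondary VLAN 5 can talk to each other and to promiscuous ports.
    Set-VMNetworkAdapterVlan -VMName "VM2" -Community -PrimaryVlanId 2 -SecondaryVlanId 5

    # Promiscuous: this VM can talk to any port on primary VLAN 2.
    Set-VMNetworkAdapterVlan -VMName "GW1" -Promiscuous -PrimaryVlanId 2 -SecondaryVlanIdList 4-5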

8/29/2012
Note: This slide is animated with 1 click
Before Windows Server 2012
Isolating the virtual machines of different departments or customers can be a challenge on a shared network. When these departments or customers must isolate entire networks of virtual machines, the challenge becomes even greater. Traditionally, VLANs are used to isolate networks, but VLANs are very complex to manage on a large scale. The following are the primary drawbacks of VLANs:
• Cumbersome reconfiguration of production switches is required whenever virtual machines or isolation boundaries must be moved, and the frequent reconfiguration of the physical network to add or modify VLANs increases the risk of an unplanned loss of service.
• VLANs have limited scalability because typical switches support only 1,000 VLAN IDs (with a maximum of 4,095).
• VLANs cannot span multiple subnets, which limits the number of nodes in a single VLAN and

Page 47

restricts the placement of virtual machines based on physical location.
In addition to the drawbacks of VLANs, virtual machine IP address assignment presents other key issues when organizations move to the cloud:
• Required renumbering of service workloads.
• Policies that are tied to IP addresses.
• Physical locations that determine virtual machine IP addresses.
• Topological dependency of virtual machine deployment and traffic isolation.
The IP address is the fundamental address that is used for layer-3 network communication, because most network traffic is TCP/IP. Unfortunately, when IP addresses are moved to the cloud, the addresses must be changed to accommodate the physical and topological restrictions of the data center. Renumbering IP addresses is cumbersome because the associated policies that are based on the IP addresses must also be updated. This renumbering overhead is so high that many enterprises choose to deploy only new services into the cloud and leave legacy applications unchanged. The physical layout of a data center also influences the permissible potential IP addresses for virtual machines that run on a specific server or blade server that is connected to a specific rack in the data center. A virtual machine that is provisioned and placed in the data center must adhere to the choices and restrictions regarding its IP address. Therefore, the typical result is that data center administrators assign IP addresses to the virtual machines and force virtual machine owners to adjust their policies that were based on the original IP address.
With Windows Server 2012

Hyper-V Network Virtualization solves these problems. With this feature, you can isolate network traffic from different business units or customers on a shared infrastructure and not be required to use VLANs. Hyper-V Network Virtualization also lets you move virtual machines as needed within your virtual infrastructure while preserving their virtual network assignments. Finally, you can even use Hyper-V Network Virtualization to transparently integrate these private networks into a preexisting infrastructure on another site.
[Click] Server Virtualization is a well-understood concept that allows multiple server instances to run on a single physical host concurrently, but isolated from each other, with each server instance essentially acting as if it's the only one running on the physical machine. Network Virtualization provides a similar capability. Hyper-V Network Virtualization extends the concept of server virtualization to allow multiple virtual networks, potentially with overlapping IP addresses, to be deployed on the same physical network. On the same physical network:
• You can run multiple virtual network infrastructures.
• You can have overlapping IP addresses.
• Each virtual network infrastructure acts as if it's the only one running on the shared physical network infrastructure.
• You can set policies that isolate traffic in your dedicated virtual network independently of the physical infrastructure.
The figure illustrates how Hyper-V Network Virtualization isolates network traffic belonging to two different customers. In it, Blue and Yellow virtual machines are hosted on a single physical network, or even on the same physical server. However, because they belong to separate Blue and Yellow virtual networks, the virtual machines can't communicate with each other even if the customers assign them IP addresses from the same address space.
How network virtualization works
• Two IP addresses for each virtual machine. To virtualize the network with Hyper-V Network Virtualization, each virtual machine is assigned two IP addresses:

o The Customer Address (CA) is the IP address that the customer assigns based on the customer's own intranet infrastructure. This address lets the customer exchange network traffic with the virtual machine as if it had not been moved to a public or private cloud. The CA is visible to the virtual machine and reachable by the customer.
o The Provider Address (PA) is the IP address that the host assigns based on the host's physical network infrastructure. The PA appears in the packets on the wire exchanged with the virtualization server that hosts the virtual machine. The PA is visible on the physical network, but not to the virtual machine.
The layer of CAs is consistent with the customer's network topology, which is virtualized and decoupled from the underlying physical network addresses, as implemented by the layer of PAs.
Problems solved
Network virtualization solves earlier problems by:
• Removing VLAN constraints.
• Eliminating hierarchical IP address assignment for virtual machines.
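As a hedged sketch of how a CA-to-PA mapping can be expressed with the Windows Server 2012 network virtualization cmdlets; every address, subnet ID, and MAC value below is a made-up placeholder, and a real deployment would typically be driven by System Center rather than hand-entered records:

    # Map a virtual machine's Customer Address (CA) to the host's Provider Address (PA)
    # on virtual subnet 5001, using NVGRE encapsulation.
    New-NetVirtualizationLookupRecord -CustomerAddress "10.0.0.5" -ProviderAddress "192.168.1.10" `
        -VirtualSubnetID 5001 -MACAddress "00155D010105" -Rule "TranslationMethodEncap"

    # Attach the virtual machine's network adapter to that virtual subnet.
    Set-VMNetworkAdapter -VMName "BlueVM01" -VirtualSubnetId 5001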

8/29/2012
Example (see figure)
In this scenario, Contoso Ltd. is a service provider that provides cloud services to businesses that need them. Blue Corp and Yellow Corp are two companies that want to move their Microsoft SQL Server infrastructures into the Contoso cloud, but they want to maintain their current IP addressing. With the new network virtualization feature of Hyper-V in Windows Server 2012, Contoso can do this, as shown in the figure.

Page 48

8/29/2012
Note: This slide is animated with 2 clicks
Your computing resources are limited. You need to know how different workloads draw upon these resources—even when they are virtualized. In Windows Server 2012, Hyper-V introduces Resource Metering, a technology that helps you track historical data of the use of virtual machines. With Resource Metering, you can gain insight into the resource use of specific servers. You can use this data to perform capacity planning, to monitor consumption by different business units or customers, or to capture data needed to help redistribute the costs of running a workload. You could also use the information that this feature provides to help build a billing solution, so that customers of your hosting services can be charged appropriately for resource usage.
Hyper-V in Windows Server 2012 lets providers build a multitenant environment, in which virtual machines can be served to multiple clients in a more isolated and secure way, as shown in the figure. Because a single client may have many virtual machines, aggregation of resource usage data can be a challenging task. However, Windows Server 2012 simplifies this task by using resource pools, a feature available in Hyper-V. Resource pools are logical containers that collect resources of the virtual machines that belong to one client, permitting single-point querying of the client's overall resource use.

Page 49

Hyper-V Resource Metering has the following features:
• Uses resource pools, logical containers that collect resources of the virtual machines that belong to one client and allow single-point querying of the client's overall resource use.
• Works with all Hyper-V operations. Movement of virtual machines between Hyper-V hosts (such as through live, offline, or storage migration) doesn't affect the collected data.
• Uses Network Metering Port ACLs to differentiate between Internet and intranet traffic, so providers can measure incoming and outgoing network traffic for a given IP address range.
Resource Metering can measure the following:
• Average CPU use. Average CPU, in megahertz, used by a virtual machine over a period of time.
• Average memory use. Average physical memory, in megabytes, used by a virtual machine over a period of time.
• Minimum memory use. Lowest amount of physical memory, in megabytes, assigned to a virtual machine over a period of time.
• Maximum memory use. Highest amount of physical memory, in megabytes, assigned to a virtual machine over a period of time.
• Maximum disk allocation. Highest amount of disk space capacity, in megabytes, allocated to a virtual machine over a period of time.
• Incoming network traffic. Total incoming network traffic, in megabytes, for a virtual network adapter over a period of time.
• Outgoing network traffic. Total outgoing network traffic, in megabytes, for a virtual network adapter over a period of time.
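For example, a minimal Windows PowerShell sketch of collecting these metrics for a single virtual machine; the VM name is a placeholder:

    # Turn on metering, let the workload run, then read and reset the counters.
    Enable-VMResourceMetering -VMName "Tenant01-Web"

    Measure-VM "Tenant01-Web"              # returns CPU, memory, disk, and network usage
    Reset-VMResourceMetering -VMName "Tenant01-Web"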

8/29/2012
Other tools
Other tools that provide enhanced multitenant security and isolation through the Hyper-V Extensible Switch are:
Protection from Address Resolution Protocol/Neighbor Discovery (ARP/ND) poisoning (ARP spoofing)
The Hyper-V Extensible Switch provides protection against a malicious virtual machine stealing IP addresses from other virtual machines through ARP spoofing (also known as ARP poisoning in IPv4). With this type of man-in-the-middle attack, a malicious virtual machine sends a fake ARP message, which associates its own MAC address to an IP address it doesn't own. Unsuspecting virtual machines send the network traffic targeted to that IP address to the MAC address of the malicious virtual machine instead of the intended destination. For IPv6, Windows Server 2012 provides equivalent protection for ND spoofing.
DHCP guard protection
Dynamic Host Configuration Protocol (DHCP) guard protection blocks unauthorized virtual machines from providing DHCP services to other virtual machines. In a DHCP environment, a rogue DHCP server could intercept client DHCP requests and provide incorrect address information. The rogue DHCP server could cause traffic to be routed to a malicious intermediary that sniffs all traffic before forwarding it to the legitimate destination. To protect against this particular type of man-in-the-middle attack, the Hyper-V administrator can designate which virtual switch ports can have DHCP servers connected to them; DHCP server traffic from other virtual switch ports is automatically dropped. The Hyper-V Extensible Switch thus protects against a rogue DHCP server attempting to provide IP addresses that would cause traffic to be rerouted.

Page 50
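As a brief illustration, a sketch of turning these protections on for a tenant virtual machine with Windows PowerShell; the VM name is a placeholder, and router guard is the related setting that blocks rogue router advertisements:

    # Drop DHCP server traffic and router advertisements originating from this VM's ports.
    Set-VMNetworkAdapter -VMName "Tenant01-Web" -DhcpGuard On -RouterGuard On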

Virtual port ACLs
Virtual port access control lists (ACLs) provide the ability to block traffic by source and destination virtual machine. Port ACLs provide a mechanism for network isolation and metering network traffic for a virtual port on the Hyper-V Extensible Switch. Each port ACL consists of a source or destination network address and a permit, deny, or meter action, and you can configure multiple port ACLs for a virtual port. By using port ACLs, you can meter the IP or MAC addresses that can (or can't) communicate with a virtual machine. For example, you can use port ACLs to enforce the isolation of a virtual machine by allowing it to talk only to the Internet or communicate only with a predefined set of addresses. By using the metering capability, you can measure network traffic going to or from a specific IP or MAC address, which allows you to report on traffic sent or received from the Internet or from network storage arrays. The metering capability also supplies information about the number of instances where traffic was attempted to or from a virtual machine from a restricted ("deny") address. This feature can help you shape network traffic and enhance multitenant security in your data center.
Trunk mode to virtual machines
A VLAN makes a set of host machines or virtual machines appear to be on the same local LAN, independent of their actual physical locations. With the Hyper-V Extensible Switch trunk mode, traffic from multiple VLANs can now be directed to a single network adapter in a virtual machine that could previously receive traffic from only one VLAN. As a result, traffic from different VLANs is consolidated, and a virtual machine can listen in on multiple VLANs.
Monitoring
Many physical switches can monitor the traffic from specific ports flowing through specific virtual machines on the switch. The Hyper-V Extensible Switch also provides this port mirroring. You can designate which virtual ports should be monitored and to which virtual port the monitored traffic should be delivered for further processing. For example, a security monitoring virtual machine can look for anomalous patterns in the traffic flowing through other specific virtual machines on the switch. In addition, you can diagnose network connectivity issues by monitoring traffic bound for a particular virtual switch port.
Windows PowerShell/Windows Management Instrumentation (WMI)
Windows Server 2012 now provides Windows PowerShell cmdlets for the Hyper-V Extensible Switch that let you build command-line tools or automated scripts for setup, configuration, monitoring, and troubleshooting. These cmdlets can be run remotely. Windows PowerShell also enables third parties to build their own tools to manage the Hyper-V Extensible Switch.
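For example, a minimal Windows PowerShell sketch of a port ACL and a port-mirroring pair; the VM names and address range are placeholders:

    # Allow the tenant VM to talk only to its own subnet; deny everything else
    # (the more specific address match takes precedence).
    Add-VMNetworkAdapterAcl -VMName "Tenant01-Web" -RemoteIPAddress "10.1.0.0/16" -Direction Both -Action Allow
    Add-VMNetworkAdapterAcl -VMName "Tenant01-Web" -RemoteIPAddress "0.0.0.0/0" -Direction Both -Action Deny

    # Mirror the tenant VM's traffic to a monitoring VM on the same switch.
    Set-VMNetworkAdapter -VMName "Tenant01-Web" -PortMirroring Source
    Set-VMNetworkAdapter -VMName "Monitor01" -PortMirroring Destination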

Benefits
Windows Server 2012 multitenant isolation keeps customer virtual machines isolated, even when they are stored on the same physical server. Windows Server 2012 provides better multitenant security for customers on a shared IaaS cloud through the new Hyper-V Extensible Switch. With Hyper-V in Windows Server 2012, you can now use port ACLs to isolate customers' networks from one another and not be required to set up and maintain VLANs, and your security needs are provided by protection against ARP spoofing and DHCP snooping.
Multitenant isolation in Windows Server 2012 addresses concerns that may have previously prevented organizations from deploying Hyper-V within their data centers. Two such concerns are (1) the additional management overhead of implementing VLANs on their Ethernet switching infrastructure to ensure isolation between their customers' virtual infrastructures, and (2) the security risk of a multitenant virtualized environment.
Benefits of the Hyper-V Extensible Switch for better multitenant security and isolation are:
• Security and isolation. The Hyper-V Extensible Switch provides better security and isolation for IaaS multitenancy with PVLAN support, protection against ARP poisoning and spoofing, protection against DHCP snooping, virtual port ACLs, and VLAN trunk mode support.
• Monitoring. With port mirroring, you can run security and diagnostics applications in virtual machines that can monitor virtual machine network traffic. Port mirroring also supports live migration of extension configurations.
• Manageability. You can now use Windows PowerShell and WMI support for command-line and automated scripting support, plus full event logging.
Requirements
The requirements for using the Hyper-V Extensible Switch for multitenant security and isolation are:
• Windows Server 2012
• The Hyper-V server role

8/29/2012
Click-through demo located at "\\scdemostore01\demostore\Windows Server 2012\WS 2012 Demo Series\Click Thru Demos\Management and Automation\Resource Metering"
Demo environment build instructions are located here: \\scdemostore01\demostore\Windows Server 2012\WS 2012 Demo Series\Demo Builds

Page 51

8/29/2012 Page 52 .

Page 53

8/29/2012

Here are the different scenarios where we see organizations using Hyper-V and the benefits they are going to get from leveraging the new capabilities within Windows Server 2012. But what we have talked about today is just the tip of the iceberg as far as the capabilities that Windows Server 2012 Hyper-V supports.

Page 54

8/29/2012

So I want to show you now some other features and capabilities that are also available within Windows Server 2012 Hyper-V that give organizations the ability to take virtualization from just simply running a bunch of VMs to gaining many more benefits in operational cost savings around virtualization.

8/29/2012 Page 55 .

8/29/2012 Page 56 .

8/29/2012 Page 57 .

8/29/2012
Importing a virtual machine from one physical host to another can expose file incompatibilities and other unforeseen complications. Administrators often think of a virtual machine as a single, stand-alone entity that they can move around to address their operational needs. In reality, a virtual machine consists of several parts:
• Virtual hard disks, stored as files in the physical storage.
• Virtual machine snapshots, stored as a special type of virtual hard disk file.
• The saved state of the different, host-specific devices.
• The memory file, or snapshot, for the virtual machine.
• The virtual machine configuration file, which organizes the preceding components and arranges them into a working virtual machine.
Each virtual machine and each snapshot that is associated with it use unique identifiers.

Page 58

Additionally, virtual machines store and use some host-specific information, such as the path that identifies the location for virtual hard disk files. When Hyper-V starts a virtual machine, it undergoes a series of validation checks before being started. Problems such as hardware differences that might exist when a virtual machine is imported to another host can cause these validation checks to fail. That, in turn, prevents the virtual machine from starting.
Windows Server 2012 includes an Import Wizard that helps you quickly and reliably import virtual machines from one server to another. The wizard detects and fixes problems: Hyper-V in Windows Server 2012 introduces a new Import Wizard that is designed to detect and fix more than 40 different types of incompatibilities. You don't have to worry ahead of time about the configuration that's associated with physical hardware, such as memory, virtual switches, and virtual processors. The Import Wizard guides you through the steps to resolve incompatibilities when you import the virtual machine to the new host.
The wizard also doesn't require the virtual machine to be exported. You can simply copy a virtual machine and its associated files to the new host and then use the Import Wizard to specify the location of the files. This "registers" the virtual machine with Hyper-V and makes it available for use. The flowchart shows the Import Wizard process.
When you import a virtual machine, the wizard does the following:
1. Creates a copy of the virtual machine configuration file. This is created as a precaution in case an unexpected restart occurs on the host, such as from a power outage.
2. Validates hardware. Information in the virtual machine configuration file is compared to hardware on the new host.
3. Compiles a list of errors. This list identifies what needs to be reconfigured and determines which pages appear next in the wizard.

4. Displays the relevant pages, one category at a time. The wizard identifies incompatibilities to help you reconfigure the virtual machine so it's compatible with the new host.
5. Removes the copy of the configuration file. After the wizard does this, the virtual machine is ready to start.
Benefits
The new Import Wizard is a simpler, better way to import or copy virtual machines. The wizard detects and fixes potential problems, such as hardware or file differences that might exist when a virtual machine is imported to another host. As an added safety feature, the wizard creates a temporary copy of a virtual machine configuration file in case an unexpected restart occurs on the host, such as from a power outage. The Windows PowerShell cmdlets for importing virtual machines let you automate the process.
Requirements
Import Wizard requirements are:
• Two installations of Windows Server 2012 with the Hyper-V role installed.
• A computer that has processor support for hardware virtualization.
• A virtual machine.
• A user account that belongs to the local Hyper-V administrators group.
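For example, a minimal Windows PowerShell sketch of the scripted equivalent; the path (including the configuration file's GUID name, shown here as a placeholder) is hypothetical:

    # Check a copied virtual machine for incompatibilities with the new host.
    Compare-VM -Path "D:\Copied-VMs\Web01\Virtual Machines\<GUID>.xml"

    # Import the copied virtual machine, giving it a new unique identifier.
    Import-VM -Path "D:\Copied-VMs\Web01\Virtual Machines\<GUID>.xml" -Copy -GenerateNewId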

Page 59

Before Windows Server 2012
Virtual machine snapshots capture the state, data, and hardware configuration of a running virtual machine. Snapshots have mainly been used for testing changes to existing virtual machine environments, as a way to return to a previous state or time if required. However, there are certain circumstances in which it may make sense to use snapshots in a production environment. For example, you can use snapshots to provide a way to revert a potentially risky operation in a production environment, such as applying an update to the software running in the virtual machine. Snapshots provide a faster and easier way to revert the virtual machine to a previous state, and having a straightforward way to revert a virtual machine can be very useful if you need to recreate a specific state or condition so that you can troubleshoot a problem.

After a successful test of new changes or updates, many customers merge their snapshots back into the original parent (to reduce storage space and increase virtual machine disk performance). However, merging a snapshot into the parent virtual machine requires downtime and results in virtual machine unavailability: the operation pauses the live virtual machine, effectively making it unavailable while the merge is taking place.

With Windows Server 2012
In Windows Server 2012, the Hyper-V Live Merge feature allows organizations to merge current snapshots back into the original parent while the virtual machine continues to run.

Technical description
The Hyper-V virtual machine snapshot feature provides a fast and straightforward way to revert the virtual machine to a previous state. Snapshot data files (the current leaf node of the virtual hard disk that is being forked into a read-only parent differential disk) are stored as .avhd files. When a snapshot is deleted, the associated .avhd disks cannot be removed while the virtual machine is running. Windows Server 2012 now provides the ability to merge the associated .avhd disk into the parent while the virtual machine continues to run.

When the leaf is being merged away, I/O is suspended to a small range while data in that range is read from the source and written to the destination. As the process proceeds, further writes to areas that have already been merged are redirected to the merge destination. Upon completion, the online merge fixes the running chain to unlink merged disks and closes those files.

Benefits
Virtual machine snapshots capture the state, data, and hardware configuration of a running virtual machine. Many organizations use snapshots in their current environments for testing updates and patches. However, merging a snapshot into the parent virtual machine used to require downtime and virtual machine unavailability. Now, with the Live Merge feature of Windows Server 2012 Hyper-V, you can merge snapshots into the virtual machine parent while the server is running, with little effect on users. Live merging of snapshots provides a faster, easier way to revert a virtual machine to a previous state.

Requirements
• Windows Server 2012
• The Hyper-V role
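If you automate this with the Hyper-V PowerShell module, deleting a snapshot while the virtual machine is running is what triggers the online merge of the .avhd chain. The following is a minimal sketch; the virtual machine and snapshot names are illustrative placeholders.

    # Minimal sketch (illustrative names): deleting a snapshot of a running VM
    # causes Hyper-V to live-merge the .avhd differences into the parent disk.
    Import-Module Hyper-V

    # List the snapshots attached to the virtual machine.
    Get-VMSnapshot -VMName "SQL01"

    # Remove one snapshot; the merge proceeds while "SQL01" keeps running.
    Get-VMSnapshot -VMName "SQL01" -Name "Pre-Update" | Remove-VMSnapshot

The same operation is available in Hyper-V Manager; the cmdlet form simply makes it scriptable.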

Page 60

Before Windows Server 2012
Separate, isolated connections for network, live migration, and management traffic make managing network switches and other networking infrastructure a challenge. As data centers evolve, IT organizations consider using some of the latest innovations in networking to reduce costs and management requirements. Data Center Bridging (DCB) refers to enhancements to Ethernet LANs for use in data center environments. These enhancements consolidate the various forms of network traffic into one technology, known as a Converged Network Adapter (CNA).

With Windows Server 2012
In the virtualized environment, Hyper-V in Windows Server 2012 can take advantage of DCB-capable hardware to converge multiple types of network traffic on a single network adapter, with a maximum level of service to each.

Technical description
Support for DCB-enabled 10-GbE network adapters is one of the new QoS bandwidth management features in Windows Server 2012 that let hosting providers and enterprises provide services with predictable network performance to virtual machines on a Hyper-V server.

DCB is a hardware mechanism that classifies and dispatches network traffic; it depends on DCB support on the network adapter. It converges different types of traffic, including network, storage, management, and live migration traffic, supporting far fewer traffic flows. It can also classify network traffic that doesn't originate from the networking stack. A typical scenario involves a CNA that supports iSCSI offload, in which iSCSI traffic bypasses the networking stack and is framed and transmitted directly by the CNA. Because the packet scheduler in the networking stack doesn't process this offloaded traffic, DCB is the only viable choice to enforce minimum bandwidth.

Benefits
The Hyper-V ability to take advantage of the latest DCB innovations (a converged modern 10-Gb LAN) lets you converge network, storage, management, and live migration traffic and helps ensure that each customer receives the required QoS in the virtualized environment. This helps reduce costs compared with maintaining separate connections in the data center. This approach also makes it easier to change allocations to different traffic flows when needed, because the allocation becomes software controlled and therefore more flexible and easier to modify.

Requirements
In the Hyper-V environment, you need the following to take advantage of DCB:
• Windows Server 2012
• The Hyper-V role
• A DCB-enabled network adapter
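On the host side, DCB works together with the Windows QoS policy cmdlets. The following is a minimal sketch of tagging one traffic class and reserving bandwidth for it, assuming the Data Center Bridging feature and a DCB-capable adapter; the adapter name, priority value, and bandwidth percentage are illustrative.

    # Minimal sketch (illustrative values): classify SMB storage traffic with an
    # 802.1p priority, reserve a share of the link for it, and enable DCB/QoS on
    # the converged network adapter.
    Install-WindowsFeature Data-Center-Bridging

    # Tag SMB Direct (RDMA) traffic on port 445 with priority 3.
    New-NetQosPolicy "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3

    # Reserve roughly 40 percent of the link for priority 3 using ETS.
    New-NetQosTrafficClass "SMB" -Priority 3 -BandwidthPercentage 40 -Algorithm ETS

    # Enable priority flow control for that priority and DCB on the adapter.
    Enable-NetQosFlowControl -Priority 3
    Enable-NetAdapterQos -Name "ConvergedNIC1"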

Page 61 . only the differences are backed up (green highlights in the figures). These are similar to regular virtual machine snapshots. Incremental backup can be independently enabled on each virtual machine through the backup software.8/29/2012 Before Windows Server 2012. During each incremental backup. the virtual machine must be configured to use incremental backup and a full backup must be performed after incremental backup is enabled (see Sunday). Monday. Technical description Incremental backup of virtual hard disks lets you perform backup operations more quickly and easily and saves network bandwidth and disk space. backing up your data required you to perform full file backups. backing up tenant virtual machines efficiently and offering additional layers of service to their customers without the need for a backup agent inside the virtual machines. hosting providers can run backups of the Hyper-V environment. Windows Server 2012 uses “recovery snapshots” to track the differences between backups. Windows Server 2012 supports incremental backup of virtual hard disks while the virtual machine is running. and Tuesday) followed by a restore (Friday). Because backups are VSS aware. Notes on the figure: • To enable change tracking. but they are managed directly by Hyper-V software. These figures consider a virtual machine with one virtual hard disk and show 3 days of backups (Sunday. This meant that you had to either (1) back up the virtual machine and snapshots as flat files when offline or (2) use Windows server or third party backup tools to back up the virtual machine itself with a normal backup of the operating system and data.

They merge the earlier recovery snapshot into the base virtual hard disk at the end of the backup process. so backups can be made more recent. • Benefits Incremental backup of virtual hard disks saves network bandwidth. and lowers the cost of each backup. you need Windows Server 2012 and the Hyper-V role. hosting providers can run backups of the entire Hyper-V environment. backing up tenant virtual machines in an efficient way and offering additional layers of service to their customers without the need for a backup-agent inside the virtual machines. reduces backup sizes and saves disk space.• During an incremental backup. The virtual machine’s configuration XML files are very small and are backed up often. It also lets you increase backup frequency because it is now faster and smaller. For simplicity. 61 . the virtual machine will briefly be running off of two levels of recovery snapshots. Because backups are VSS aware. Requirements To use incremental backup of virtual hard disks. they’re not shown in the figures.
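Which backups are incremental is decided by the backup application, but the recovery snapshots it creates are ordinary differencing disks, so you can inspect the resulting chain from PowerShell. The following is a minimal sketch, assuming the Hyper-V module; the virtual machine name is an illustrative placeholder.

    # Minimal sketch (illustrative VM name): walk each virtual hard disk chain of
    # a VM, showing any differencing disks (such as backup recovery snapshots)
    # back to the base disk.
    Import-Module Hyper-V

    foreach ($drive in Get-VMHardDiskDrive -VMName "Tenant-VM01") {
        $disk = Get-VHD -Path $drive.Path
        while ($disk) {
            "{0}  (type: {1})" -f $disk.Path, $disk.VhdType
            $disk = if ($disk.ParentPath) { Get-VHD -Path $disk.ParentPath } else { $null }
        }
    }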


Page 63

Hyper-V is an integral part of Windows Server and provides a foundational virtualization platform that enables you to transition to the cloud. Windows Server 2008 R2 provides a compelling solution for core virtualization scenarios: production server consolidation, dynamic data center, business continuity, Virtual Desktop Infrastructure (VDI), and testing and development. Hyper-V gives you better flexibility, with features like live migration for virtual machine mobility and Cluster Shared Volumes for storage flexibility. Windows Server 2008 R2 Hyper-V also delivers greater scalability, with support for up to 64 logical processors, and improved performance, with Dynamic Memory support and enhanced networking.

Page 64

Extensibility features
• Extension monitoring. By monitoring extensions you can gather statistical data by checking the traffic at different layers of the Hyper-V Extensible Switch. Multiple monitoring and filtering extensions can be supported at the ingress and egress portions of the Hyper-V Extensible Switch.
• Extension uniqueness. The extension state/configuration is unique to each instance of a Hyper-V Extensible Switch on a machine.
• Extensions that can learn the virtual machine lifecycle.
• Extensions that can veto state changes.
• Multiple extensions on the same switch. Multiple extensions can coexist on the same Hyper-V Extensible Switch.
• Capture extensions. Capture extensions can inspect traffic and generate new traffic for report purposes. Capture extensions do not modify existing Hyper-V Extensible Switch traffic.
• Integration with built-in features. Extensions integrate with built-in Hyper-V Extensible Switch features.

Manageability
Management features built into the Hyper-V Extensible Switch allow you to troubleshoot and resolve problems on Hyper-V Extensible Switch networks.
• Windows PowerShell and scripting support. Windows Server 2012 provides Windows PowerShell cmdlets for the Hyper-V Extensible Switch that let you build command-line tools or automated scripts for setup, configuration, monitoring, and troubleshooting (see the sketch at the end of this section).
• Unified tracing and enhanced diagnostics. The Hyper-V Extensible Switch includes unified tracing to provide two levels of troubleshooting, making it easier to pinpoint where an issue has occurred.
  o At the first level, the Event Tracing for Windows (ETW) provider for the Hyper-V Extensible Switch allows tracing packet events through the Hyper-V Extensible Switch and extensions.
  o The second level allows capturing packets for a full trace of events and traffic packets.

Benefits
Hyper-V Extensible Switch benefits include:
• Open platform to fuel plug-ins. The Hyper-V Extensible Switch is an open platform that allows plug-ins to sit in the virtual switch between all traffic, including virtual machine-to-virtual machine traffic. Extensions can provide traffic monitoring, firewall filters, and switch forwarding. To jump-start the ecosystem, several partners will announce extensions with the unveiling of the Hyper-V Extensible Switch. There's no "one-switch-only" solution for Hyper-V.
• Free core services. Core services for extensions are provided without charge. For example, all extensions have live migration support by default; no special coding for services is required.
• Windows reliability/quality. Extensions experience a high level of reliability and quality from the strength of the Windows platform and the Windows logo certification program, which set a high bar for extension quality.
• Unified management. The management of extensions is integrated into Windows management through Windows PowerShell cmdlets and WMI scripting. There's one management story for all.
• Easier support. Unified tracing means that it's quicker and easier to diagnose issues when they arise. Reduced downtime increases the availability of services.
• Live migration support. The Hyper-V Extensible Switch provides capabilities so extensions can participate in Hyper-V live migration.

Requirements
Hyper-V Extensible Switch extensibility is built into the Hyper-V server role and requires Windows Server 2012.
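As a concrete example of the PowerShell support described above, the switch extensions themselves can be listed and toggled from the command line. The following is a minimal sketch, assuming the Hyper-V module; the switch name is an illustrative placeholder, and the extension shown is the in-box NDIS capture extension.

    # Minimal sketch (illustrative switch name): list the extensions bound to a
    # Hyper-V Extensible Switch and enable one of them.
    Import-Module Hyper-V

    # Show every capture, filtering, and forwarding extension on the switch.
    Get-VMSwitchExtension -VMSwitchName "External"

    # Enable an extension by name; Disable-VMSwitchExtension reverses this.
    Enable-VMSwitchExtension -VMSwitchName "External" -Name "Microsoft NDIS Capture"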

Page 65

How Generic Routing Encapsulation (GRE) Works
• GRE is defined by RFC 2784 and RFC 2890.
• A tenant network ID is embedded in the GRE header Key field.
• A full MAC header is included in the packet.
• One customer address is required per virtual machine.
• One provider address is required per host and is shared by all virtual machines on the host.

Benefits of GRE
• As few as one IP address per host lowers the burden on the switches.
• Full MAC headers and explicit tenant network ID marking allow for traffic analysis, metering, and control.
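In Hyper-V Network Virtualization, the tenant network ID placed in the GRE Key field is the Virtual Subnet ID associated with the customer address. The following is a minimal sketch of the policy records that map a customer address to the shared provider address, assuming the network virtualization cmdlets available on a Windows Server 2012 Hyper-V host; the addresses, interface index, MAC address, subnet ID, and VM name are all illustrative placeholders.

    # Minimal sketch (illustrative values): assign the host's shared provider
    # address (PA) and map a tenant VM's customer address (CA) to it, tagged with
    # the Virtual Subnet ID that is carried in the GRE Key field.
    New-NetVirtualizationProviderAddress -InterfaceIndex 12 `
        -ProviderAddress "192.168.10.21" -PrefixLength 24

    New-NetVirtualizationLookupRecord -CustomerAddress "10.0.0.5" `
        -ProviderAddress "192.168.10.21" -VirtualSubnetID 6001 `
        -MACAddress "00155D010A05" -Rule "TranslationMethodEncap" -VMName "Tenant1-Web01"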
