
Disaster Tolerant Virtualization Architecture with HP StorageWorks Cluster Extension and Microsoft Hyper-V

White paper
Table of contents
Executive Summary
Applicability and Definitions
Target Audience
Introduction to Cluster Extension (CLX)
Server Virtualization
Benefits of Virtualization
Introduction to Hyper-V
HP supported Configuration/hardware for Windows Server 2008
    Server Support
    Storage Support
Hyper-V Implementation Considerations
    Disk Performance Optimization
    Storage Options in Hyper-V
        Virtual Hard Disk (VHD) choices
        Pass-through Disk
    Hyper-V Integration Services
    Memory Usage Maximization
    Network Configurations
CLX and Hyper-V Scenarios
    Scenario A: Disaster Recovery and High Availability for Virtual Machine
    Scenario B: Disaster Recovery and High Availability for Virtual Machine and applications hosted by Virtual Machine
    Scenario C: Failover Cluster between Virtual Machines
        Cluster between Virtual Machines hosted on the same physical server
        Cluster between Virtual Machines hosted on separate physical server
Step-by-step guide to deploy CLX Solution
    Deploying Windows Server 2008 Hyper-V
    Installing Hyper-V role
    Creating Geographically Dispersed MNS cluster
    Configuring Network for Virtual Machine
    Provisioning Storage for Virtual Machine
    Creating and Configuring Virtual Machine
    Installing Cluster Extension Software
    Configuring Cluster Extension Software
    Making Virtual Machine Highly Available
    Creating a CLX resource
    Set the dependency
    Test Failover or Quick Migration
Appendix A: Bill of Materials
References

Executive Summary
Server virtualization, also known as hardware virtualization, is a hot topic in the information technology (IT) world because of its potential for significant economic benefits. Server virtualization enables multiple operating systems to run on a single physical machine as Virtual Machines. With server virtualization, you can consolidate workloads spread across multiple underutilized servers onto a smaller number of machines. Fewer physical machines can reduce costs through lower hardware, energy, and management overhead, while creating a more dynamic IT infrastructure.

Windows Server 2008 Hyper-V, the next-generation hypervisor-based server virtualization technology, is available as an integral feature of Windows Server 2008 and enables you to implement server virtualization with ease. One of the key use cases of Hyper-V is achieving Business Continuity and disaster recovery through its complete integration with Microsoft Failover Cluster. The natural extension of this ability is to complement it with HP products such as HP StorageWorks Cluster Extension and HP StorageWorks Continuous Access to achieve a comprehensive and robust multi-site disaster recovery and high availability solution.

This paper briefly covers the key features, functionality, and best practices of Hyper-V and Cluster Extension (CLX), and describes the scenarios in which Hyper-V and CLX complement each other to achieve the highest level of disaster tolerance. It concludes with step-by-step details for creating a CLX solution in a Hyper-V environment from scratch. References to further technical detail are provided throughout.

Applicability and Definitions


This document is applicable for implementing both HP StorageWorks Cluster Extension EVA and HP StorageWorks Cluster Extension XP on the Microsoft Hyper-V platform. The following terms and abbreviations are used throughout this document:

CLX - Cluster Extension
XP CLX - HP StorageWorks XP Cluster Extension
EVA CLX - HP StorageWorks EVA Cluster Extension
MNS - Majority Node Set
MSCS - Microsoft Cluster Service
OS - Operating System
HP Storage - XP and EVA
SCVMM - Microsoft System Center Virtual Machine Manager
Hyper-V - Virtualization technology from Microsoft
Host OS - The physical server on which Hyper-V is installed
Guest OS - Virtual Machine created on the Host OS

Figure 1: Hyper-V Terminology. A Virtual Machine is also referred to as the Guest OS, guest partition, or child partition; the Hyper-V server is also referred to as the Host OS or host partition.

Target Audience
This document is for anyone who plans to implement Cluster Extension on a multi-site or stretched cluster with an array-based HP Continuous Access solution in a Hyper-V environment. The document describes Hyper-V configuration on HP ProLiant servers, explains how to create a Virtual Machine, provides a step-by-step method for adding a Virtual Machine to a Failover Cluster, and finally describes the CLX-supported scenario with an example. To learn more about Cluster Extension, please see the documentation available on the Cluster Extension Websites (http://www.hp.com/go/clxxp and http://www.hp.com/go/clxeva) using the Technical documentation link under Support.

Introduction to Cluster Extension (CLX)


HP StorageWorks Cluster Extension (CLX) software offers protection against system downtime from faults, failures, and disasters. It provides seamless integration of the remote mirroring capabilities of HP StorageWorks Continuous Access with the high-availability capabilities provided by the clustering software in Windows. This extended-distance solution allows for more robust disaster recovery topologies as well as automatic failover, failback, and redirection of mirrored pairs for a true Disaster Tolerant solution.

CLX preserves operations and delivers investment protection. The CLX solution monitors and recovers disk pair synchronization at the application level and offloads data replication tasks from the host. CLX automates the time-consuming, labor-intensive processes required to verify the status of the storage as well as the server cluster, allowing the correct failover and failback decisions to be made to minimize downtime. HP Cluster Extension offers protection against application downtime from faults, failures, and disasters by extending a local cluster between data centers over large geographical distances.

To summarize, the key features of CLX are as follows:

Protection against transaction data loss: Because the application cluster is extended over two sites and the storage is replicated to the second site, data exists at both sites with virtually no difference (depending on the replication method) between the data stored at site A and at site B. This no-single-point-of-failure (SPOF) solution increases the availability of company and customer data.

Fully automatic failover and failback: Automated failover and failback reduce the complexity involved in a disaster recovery situation and protect against the risk of downtime, whether planned or unplanned.

No server reboot during failover: Disks on the servers at both the primary and secondary sites are recognized during the initial system boot in an XP CLX environment; therefore, LUN presentation and LUN mapping changes are not necessary during failover or failback, for a truly hands-free Disaster Tolerant solution.

No single point of failure: Identical configuration of the established SAN infrastructure redundancies is implemented at site B.

MSCS Majority Node Set integration: CLX Windows uses the Microsoft Majority Node Set (MNS) and File Share Witness (FSW) quorum features available in Windows since Microsoft Windows Server 2003. These quorum models provide protection against split-brain situations without the single point of failure of a traditional quorum disk. Split-brain syndrome occurs when the servers at one site of the stretched cluster lose all connection with the servers at the other site, and the servers at each site form independent clusters. The serious consequence of split-brain is corruption of business data, since the data is no longer consistent.

EVA array family support: The entire EVA array family is supported at either end of the Disaster Tolerant configuration. Whether repurposing legacy units or simply adding newer-generation units to an existing CLX EVA configuration, both sites are covered. See the Software Support section for configuration requirements.

XP array support: XP24000/20000, XP12000/10000, XP1024/128, and XP512/48 arrays are supported at either end of the DT configuration. Whether repurposing legacy units or adding next-generation units to an existing XP CLX configuration, both sites are covered. See the Software Support section for configuration requirements.

One of the failover use cases of CLX is described pictorially below. Before site failure: applications are hosted at the primary site and the business-critical data is continuously replicated to the recovery site for disaster recovery.

Figure 2: Before Site Failure (App A and App B hosted at the primary site within a Microsoft Failover Cluster, Continuous Access replication to the recovery site, and an arbitrator node providing quorum)

After site failure: CLX automates the application storage failover and the change in replication direction, in conjunction with the Microsoft clustering solution, achieving Business Continuity and High Availability.

Figure 3: After Site Failure (App A and App B now run at the recovery site; CLX has automated the storage failover and reversed the Continuous Access replication direction)

Server Virtualization
In the computing realm, the term virtualization refers to presenting a single physical resource as many individual logical resources (such as platform virtualization), as well as making many physical resources appear to function as a single logical unit (such as resource virtualization). A virtualized environment may include servers and storage units, network connectivity and appliances, virtualization software, management software, and user applications. Basically, a virtual server, or VM, is an instance of an operating system platform running on a given configuration of server hardware, centrally managed by a Virtual Machine Manager (hypervisor) and consolidated management tools.

Figure 4: Server Virtualization (multiple Virtual Machines running on a single virtualized server, managed by a Virtualization Manager)

Benefits of Virtualization
Virtualized infrastructure can benefit companies and organizations of all sizes. Virtualization greatly simplifies a physical IT infrastructure to provide greater centralized management over your technology assets and better flexibility over the allocation of your computing resources. This enables your business to focus resources when and where they're needed most, without the limitations imposed by the traditional one computer per box model. Virtualization enables you to create VMs that share hardware resources and transparently function as individual entities on the network. Consolidating servers as VMs on a small number of physical computers can save money on hardware costs and make centralized server management easier. Server virtualization also makes backup and disaster recovery simpler and faster, providing for a high level of business continuity. In addition, virtual environments are ideal for testing new operating systems, service packs, applications and configurations before rolling them out on a production network.

Introduction to Hyper-V
Windows Server 2008 Hyper-V, the next-generation hypervisor-based server virtualization technology, is available as an integral feature of Windows Server 2008 and enables you to implement server virtualization with ease. Hyper-V allows you to make the best use of your server hardware investments by consolidating multiple server roles as separate Virtual Machines (VMs) running on a single physical machine. With Hyper-V, you can also efficiently run multiple different operating systems (Windows, Linux, and others) in parallel on a single server, and fully leverage the power of x64 computing. Hyper-V offers great advantages in scenarios such as server consolidation, Business Continuity and disaster recovery (through complete integration with Microsoft Failover Cluster), and testing and deployment (through its inherent ease of use and scaling capability). For more details on Hyper-V technology, refer to the Microsoft page at http://www.microsoft.com/windowsserver2008/en/us/hyperv.aspx

HP supported Configuration/hardware for Windows Server 2008


This section covers the HP hardware and related components for Windows Server 2008 Hyper-V. As a prerequisite, ensure that all hardware components are officially supported by HP. Although this document lists supported servers and StorageWorks arrays, as a best practice refer to the HP support streams for the latest information. The streams also list the supportability of other solution components such as Host Bus Adapters (HBAs), Network Interface Cards (NICs), and so on. The available resources are listed in the References section at the end of this document. Alternatively, contact your local HP account or service representative for definitive information about supported HP configurations.

Server Support
Refer to the HP white paper available at the following link to identify the HP servers supported for Hyper-V. It also covers other supported components such as HBAs, NICs, and storage controllers, as well as the steps to enable hardware-assisted virtualization for Hyper-V: http://h20000.www2.hp.com/bc/docs/support/SupportManual/c01516156/c01516156.pdf

Storage Support
Refer to the link below for a detailed list of HP storage products supported with Windows Hyper-V: http://h71028.www7.hp.com/enterprise/cache/612466-0-0-0-121.html

Hyper-V Implementation Considerations
Disk Performance Optimization
Hyper-V offers two kinds of disk controllers: IDE and SCSI. Both have distinct advantages and disadvantages. IDE controllers are the preferred choice for compatibility because they work with the widest range of guest operating systems. SCSI controllers, however, allow the virtual SCSI bus to handle multiple transactions simultaneously. If the workload is disk intensive, consider using virtual SCSI controllers where the guest OS supports that configuration; if that is not possible, attach the additional data VHDs to SCSI controllers. It is optimal to place SCSI-connected VHDs on separate physical spindles or arrays on the host server.

Storage Options in Hyper-V


Virtual Hard Disk (VHD) choices

There are three types of VHD files. The performance characteristics and tradeoffs of each VHD type are discussed below.

Dynamically expanding VHD: Space for the VHD is allocated on demand. The blocks in the disk start as zeroed blocks that are not backed by any actual space in the file; reads from such blocks return a block of zeros. When a block is first written to, the virtualization stack must allocate space within the VHD file for the block and then update the metadata. This increases the number of disk I/Os required for the write and increases CPU usage. Reads and writes to existing blocks also incur disk access and CPU overhead when they look up the block mapping in the metadata.

Fixed-size VHD: Space for the VHD is allocated when the VHD file is created. This type of VHD is less apt to fragment, which avoids the reduction in I/O throughput that occurs when a single I/O is split into multiple I/Os. It has the lowest CPU overhead of the three VHD types because reads and writes do not need to look up the block mapping.

Differencing VHD: The VHD points to a parent VHD file. Any write to a block never written to before causes space to be allocated in the VHD file, as with a dynamically expanding VHD. Reads are serviced from the VHD file if the block has been written to; otherwise, they are serviced from the parent VHD file. In both cases, the metadata is read to determine the mapping of the block. Reads and writes to this VHD type can consume more CPU resources and result in more I/Os than a fixed-size VHD.
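As an illustration of these choices, the following PowerShell sketch creates one VHD of each type. It assumes the Hyper-V PowerShell module that ships with later Windows Server releases (on Windows Server 2008, the same choices are made in the New Virtual Hard Disk wizard); the paths and sizes are hypothetical.

# Fixed-size VHD: space allocated up front, lowest CPU overhead
New-VHD -Path "E:\VHDs\data-fixed.vhd" -SizeBytes 20GB -Fixed

# Dynamically expanding VHD: space allocated when a block is first written
New-VHD -Path "E:\VHDs\data-dynamic.vhd" -SizeBytes 20GB -Dynamic

# Differencing VHD: reads fall through to the parent until a block is written
New-VHD -Path "E:\VHDs\data-diff.vhd" -ParentPath "E:\VHDs\data-fixed.vhd" -Differencing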


Pass-through Disk

One of the storage configuration options for Hyper-V child partitions is the ability to use pass-through disks. While the previous options all refer to VHD files, which are files on the file system of the parent partition, a pass-through disk routes a physical volume from the parent to the child partition. The volume must be configured offline on the parent partition for this operation. This simplifies the I/O path and, by design, the pass-through disk overhead is lower than the overhead with VHDs. However, Microsoft has invested in optimizing fixed-size VHD performance, which has significantly reduced the performance differential. Therefore, the more relevant reason to choose one over the other is the size of the data sets: if very large data sets must be stored, a pass-through disk is typically the more practical choice. In the Virtual Machine, the pass-through disk is attached through either an IDE controller or a SCSI controller.
Note: When presenting a pass-through disk, the disk should remain offline on the host OS. This prevents the host and guest OS from using the pass-through disk simultaneously. Pass-through disks do not support VHD-related features such as VHD snapshots, dynamic expansion, and differencing VHDs.

Further considerations for choosing pass-through disks versus VHDs are available on Microsoft TechNet. Detailed information about Hyper-V storage performance, based on measurements from the Microsoft performance lab, can be found in the following blog entry: http://blogs.msdn.com/tvoellm/archive/2008/09/24/what-hyper-v-storage-is-best-for-you-show-methe-numbers.aspx
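For illustration, the following sketch offlines a physical disk on the parent partition and attaches it to a Virtual Machine as a pass-through disk. It assumes the Hyper-V and Storage PowerShell modules from later Windows Server releases (on Windows Server 2008, use Disk Management and the Hyper-V Manager settings dialog instead); the VM name and disk number are hypothetical.

# The disk must be offline on the host before it can be passed through
Set-Disk -Number 2 -IsOffline $true

# Attach the raw physical disk to the VM's virtual SCSI controller
Add-VMHardDiskDrive -VMName "VM01" -ControllerType SCSI -DiskNumber 2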

Hyper-V Integration Services


The integration services provided with Hyper-V are critical to incorporate into general practice. These drivers improve performance when Virtual Machines make calls to the hardware. After installing a guest operating system, be sure to install the latest integration services. Integration services target very specific areas that enhance the functionality or management of supported guest operating systems. Periodically check for integration service updates between major releases of Hyper-V.

Memory Usage Maximization


In general, allocate as much memory to a Virtual Machine as you would for the same workload running on a physical machine, without wasting physical memory. If you know how much RAM is needed to run the guest OS with all required applications and services, begin with this amount and add a small amount of additional memory for virtualization overhead; 64 MB is typically sufficient. Insufficient memory can create many problems, including excessive paging within the guest OS. This issue can be confusing because it may appear to be a disk I/O performance problem, when in many cases the primary cause is that insufficient memory has been assigned to the Virtual Machine. It is very important to be aware of application and service needs before making changes throughout a data center. Sizing considerations are reviewed in the Hyper-V sizing white paper.
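As a hedged illustration of the sizing rule above (workload memory plus roughly 64 MB of virtualization overhead), the following line uses the Hyper-V PowerShell module from later Windows Server releases; the VM name and the 4 GB workload figure are hypothetical.

# Guest workload needs about 4 GB; add ~64 MB for virtualization overhead
Set-VMMemory -VMName "VM01" -StartupBytes (4GB + 64MB)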


Network Configurations
Virtual networks are configured in the Virtual Network Manager in the Hyper-V GUI, or in System Center Virtual Machine Manager in the properties of a physical server. There are three types of virtual networks:

- External: An external network is mapped to a physical NIC to give Virtual Machines access to one of the physical networks to which the parent partition is connected. Essentially, there is one external network and virtual switch per physical NIC.
- Internal: An internal network provides connectivity between Virtual Machines, and between Virtual Machines and the parent partition.
- Private: A private network provides connectivity between Virtual Machines only.

The solution network requirements for the specific scenarios should be understood before configuring networks of the various types, as in the sketch below.
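The following sketch shows how each virtual network type could be created. It assumes the Hyper-V PowerShell module from later Windows Server releases (on Windows Server 2008, these networks are created in Virtual Network Manager); the switch names and the physical adapter name are hypothetical.

# External: bound to a physical NIC so VMs can reach the physical network
New-VMSwitch -Name "External-LAN" -NetAdapterName "Ethernet 2" -AllowManagementOS $true

# Internal: connectivity between VMs and the parent partition only
New-VMSwitch -Name "Internal-Net" -SwitchType Internal

# Private: connectivity between VMs only
New-VMSwitch -Name "Private-Net" -SwitchType Private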

CLX and Hyper-V Scenarios


This section lists the scenarios in which CLX and Hyper-V Failover Clustering can together achieve disaster recovery and high availability. For each of these scenarios, it is important that the Virtual Machine (VM) data resides on a disk that is part of the storage replication group (more formally, a DR group for Continuous Access EVA and a device group for Continuous Access XP). This ensures that the VM data is replicated to the recovery site as well. As explained above, the main job of the CLX software is to respond to site failovers by reversing the Continuous Access replication direction between the sites. Through integration with Failover Clustering, this is performed automatically and completely transparently to the user applications (the VM in this case).

For proper functioning, the VM cluster resource, the VM data disk cluster resource, and the CLX resource managing the corresponding replication group must be in the same Failover Cluster resource group and must follow a specific order, determined by the dependency hierarchy of the cluster resources. The CLX resource is at the top of the hierarchy, the disk resource depends on the CLX resource, and the VM resource in turn depends on the disk resource.

When multiple VMs are created on a single vdisk (LUN), or on multiple vdisks that belong to a single replication group, all the VMs and the corresponding physical disks should be part of the same service group of the Failover Cluster. CLX performs the storage failover at the replication group level, and since all the disks that host the VMs are in a single replication group, only one CLX resource needs to be configured for all the VMs in a service group. All the VM resources should depend on their respective physical disks, and all the disks should in turn depend on the single CLX resource. For more detailed information, refer to the CLX Installation and Administrator Guide at the link provided in the References section at the end of this document.
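A minimal sketch of this ordering, using the FailoverClusters PowerShell module available from Windows Server 2008 R2 onward (on Windows Server 2008, the same dependencies are set in Failover Cluster Management or with cluster.exe); the resource names are hypothetical.

Import-Module FailoverClusters

# The replicated disk depends on the single CLX resource for its replication group
Add-ClusterResourceDependency -Resource "Cluster Disk 1" -Provider "CLX-ReplicationGroup"

# The VM resource depends on the disk that hosts its files
Add-ClusterResourceDependency -Resource "Virtual Machine VM01" -Provider "Cluster Disk 1"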


Scenario A: Disaster Recovery and High Availability for Virtual Machine


This is the most prominent scenario: the VM is hosted on a standard Windows Server 2008 installation (the host OS/host partition), which is clustered with a similar host OS. Whenever a failure is detected by the cluster, the VM can fail over from one host OS to the other. Every time a failover is performed across the nodes, CLX takes the appropriate actions to make the replicated storage accessible and highly available. The following diagram illustrates this scenario. A multi-site replicated disk is presented to two Hyper-V parent partitions that are clustered using Windows Failover Cluster, with one parent OS at each site. The VM is created on the replicated disk and added to the Failover Cluster as a service group on the parent partition. When a failure is detected by the Failover Cluster (for example, a host failure), the Failover Cluster moves the VM to the other cluster node and CLX ensures that the storage replication direction is reversed and the storage disk is made write-enabled.

Figure 5: Scenario A - Before Failover. The VM runs on host partition 1 at Site A, with its files on the replicated disk E:\; the underlying LUN on XP1 is replicated by Continuous Access (direction managed by CLX) to XP2 at Site B, where host partition 2 is the standby cluster node.


Figure 6: Scenario A - After Failover. The VM now runs on host partition 2 at Site B, and CLX has reversed the Continuous Access replication direction, from XP2 back to XP1.


Scenario B: Disaster Recovery and High Availability for Virtual Machine and applications hosted by Virtual Machine
This is an extension of scenario A. This scenario assumes that an application is hosted by the VM, and that this application also uses replicated SAN storage for disaster recovery. When the VM fails over, the application fails over with it to the other host OS. After failover, disk access must be available for the VM and the application to come online. This again is achieved through replicated disks and failover through CLX: a minimum of two physical disks with multi-site replication are used, one disk hosting the VM and the other hosting the application data. The following diagram illustrates scenario B. Two multi-site replicated disks are presented to two Hyper-V parent partitions that are clustered using Windows Failover Cluster. On the parent partition, the VM is created on one of the replicated disks, and the data for the application hosted by the VM is stored on the other replicated disk. The VM is added to the Failover Cluster as a service group. When a failure is detected by the Failover Cluster (for example, a host failure), the Failover Cluster moves the VM, and with it the application, to the other cluster node, and CLX ensures that the storage replication direction is changed, achieving automatic failover and high availability.

Figure 7: Scenario B - Before Failover. The VM and the application it hosts run on host partition 1 at Site A; the VM resides on one CLX-managed replicated disk (E:\) and the application data on a second (F:\). The corresponding Lun 1 and Lun 2 on XP1 are replicated by Continuous Access to XP2 at Site B.


Figure 8: Scenario B - After Failover. The VM and its application now run on host partition 2 at Site B, and CLX has reversed the Continuous Access replication direction for both LUNs.

For this scenario to function properly, it is important that both the VM and the application hosted by the VM use disks (LUNs) that belong to the same replication group. In that case, a single CLX resource can manage the failover of both disks as a single atomic unit. The hierarchy of the resources within the cluster should be maintained as described in the earlier section. For example, in the configuration above, both Lun 1 and Lun 2 should be part of the same replication group.


Scenario C: Failover Cluster between Virtual Machines


In this scenario the VMs hosted on the parent OS are clustered (not the parent OS itself), so that an application (MS Exchange, MS SQL, and so on) running in a VM can be failed over from one VM to the other. The VMs should be hosted on distinct physical servers, because a cluster cannot be created between VMs hosted on the same physical server using SCSI or FC SAN disks; a cluster between VMs on the same physical server is possible only if iSCSI disks are used. Let us look at these sub-scenarios separately.

Cluster between Virtual Machines hosted on the same physical server
As mentioned above, Windows Server 2008 does not allow a single FC SAN disk to be presented to multiple VMs on the same host, so it is not possible to create a cluster between VMs hosted on the same server. This restriction can be overcome if iSCSI disks are used. This is not a prominent scenario as far as CLX is concerned, but it can be very useful for demonstration and testing purposes.

Cluster between Virtual Machines hosted on separate physical servers
This is very similar to scenario A, except that the cluster is created between the Virtual Machines rather than the host partitions. There are, however, certain restrictions. If the VM OS is Windows Server 2008, the cluster can be created only using iSCSI disks: the storage options supported by Windows Server 2008 Failover Clustering do not include SCSI, whereas SAN storage can be provisioned to a VM only as a SCSI or IDE disk. A cluster can, however, be created between Windows Server 2003 VMs, as SCSI disks are allowed there.
Note: According to the HP support matrix for EVA Continuous Access and XP Continuous Access, clustering of VMs running on Hyper-V is not supported; hence this scenario is not currently supported by CLX. Refer to SPOCK to verify solution supportability. For iSCSI support, also refer to the SPOCK entries for CLX and the other solution components (arrays, iSCSI bridge, or MP Router).


The following diagram illustrates the sub-scenario in which the cluster is created between Virtual Machines hosted on different physical servers.

Figure 9: Scenario C - Before Failover. VM 1 on host partition 1 (Site A) and VM 2 on host partition 2 (Site B) form a guest Failover Cluster; the VM VHD files (C:\vm1.vhd) reside on local storage on each host. The application runs on VM 1, and its data disk E:\ is a CLX-managed LUN replicated by Continuous Access from XP1 to XP2.


Figure 10: Scenario C - After Failover. The application has failed over from VM 1 to VM 2 at Site B, and CLX has reversed the Continuous Access replication direction.


Step-by-step guide to deploy CLX Solution


This section describes the major steps required to build a CLX EVA solution from scratch. The steps are described briefly; for more details, refer to the relevant product manuals and specifications. The steps required to configure a CLX XP solution differ slightly in terms of storage provisioning and CLX configuration, but in general the sequence described below applies. This section assumes the configuration depicted in the figure below.

Figure 11: Typical CLX EVA Solution Configuration. Two cluster nodes, CLX20 and CLX23, one at each site, form a Microsoft Failover Cluster hosting the Virtual Machine "My First VM"; a Command View (CV) station manages the EVA at each site, a File Share Witness server resides at a third site (Site C), and the source vdisk VM01 on EVA A is replicated to the destination vdisk VM01 on EVA B using Continuous Access EVA.
The configuration involves a two-node FSW cluster with one node at each site and a third node providing a file share for arbitration at a third site. It is recommended to use the FSW cluster configuration for CLX, and to ensure redundancy for every component at each site. The inter-site links should also be configured redundantly so that there is no Single Point of Failure (SPOF). For details on SAN configuration, refer to the HP SAN Design Guide; a link is provided in the References section at the end of this document. Also verify the supportability of every component (servers, storage, HBAs, switches, and software) by referring to the SPOCK Website.

Deploying Windows Server 2008 Hyper-V


Refer to the section "Deploying Windows Server 2008 Hyper-V on ProLiant servers" of the HP white paper available at the following link: http://h20000.www2.hp.com/bc/docs/support/SupportManual/c01516156/c01516156.pdf


Installing Hyper-V role


Refer to the section "Installing Windows Server 2008 Hyper-V server role" of the HP white paper available at the following link: http://h20000.www2.hp.com/bc/docs/support/SupportManual/c01516156/c01516156.pdf

To install the Hyper-V role on Server Core, run the following command:

start /w OCSetup Microsoft-Hyper-V

After the command completes, reboot the server to complete the installation. After the server reboots, run the oclist command to verify that the role has been installed.

Creating Geographically Dispersed MNS cluster


Before forming a cluster, make sure the nodes meet the following requirements:

- The Hyper-V role is installed
- The Failover Cluster and MPIO features are installed
- KB950050 and KB951308 are installed
- The cluster validation tool runs successfully (storage validation tests in a multi-site cluster are an exception; refer to KB943984)

Follow these steps to launch the Failover Cluster Management tool: Start -> Programs -> Administrative Tools -> Failover Cluster Management. In the Actions pane, click Create a cluster and enter the cluster node names.

Figure 12: Cluster Creation Wizard


Follow the cluster installation wizard and provide inputs whenever prompted. When the wizard completes, a Failover Cluster is created and can be managed through the Failover Cluster Management MMC snap-in.
Note: If the cluster nodes are Windows Server 2008 Server Core machines, use the MMC from a different Windows Server 2008 machine.
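As an alternative sketch to the wizard, the cluster and its File Share Witness quorum could also be created with the FailoverClusters PowerShell module available from Windows Server 2008 R2 onward; the node names follow this example, while the cluster name and witness share path are hypothetical.

Import-Module FailoverClusters

# Create the two-node multi-site cluster without claiming any shared storage
New-Cluster -Name "CLXCLUSTER" -Node CLX20,CLX23 -NoStorage

# Use Node and File Share Majority quorum with a witness share at the third site
Set-ClusterQuorum -NodeAndFileShareMajority "\\FSWSERVER\ClusterWitness"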

Configuring Network for Virtual Machine


Before starting the network configuration, it is good practice to manage all the cluster nodes from one node. To do this, launch Hyper-V Manager from Start -> Programs -> Administrative Tools (Hyper-V Manager is a tool that ships with Windows Server 2008 and can be used for Virtual Machine management). In the Actions pane, click Connect to Server and connect to all nodes that belong to the cluster. This allows the Hyper-V functionality of all cluster nodes to be managed from a single window. In this example, Hyper-V Manager is used to manage the virtualization capabilities of nodes CLX20 and CLX23.

Figure 13: Hyper-V Manager


Now that all Hyper-V host servers can be managed from a single point, select a Hyper-V host server and right-click to open Virtual Network Manager. Create virtual networks as appropriate for the setup. In this example, two virtual networks are created, both of network type External.

Figure 14: Virtual Network Configuration

Provisioning Storage for Virtual Machine


Before creating the Virtual Machine, both cluster nodes must be provisioned with array-based replicated disks. The following steps describe how to add a data-replicated disk to the cluster nodes on an HP Enterprise Virtual Array (EVA) using the storage management software HP StorageWorks Command View EVA. For more details on how to use HP StorageWorks Command View EVA, refer to its product manuals available on www.hp.com.

- Add the host CLX20 to the EVA named CLXEVA01
- Add the host CLX23 to the EVA named CLXEVA02
- Create a virtual disk (vdisk) on CLXEVA01 and present it to host CLX20. In this example, a vdisk named vm01 of size 20 GB was created inside the folder "Hyper-V white paper" and presented to the host CLX20.


Figure 15: Storage Provisioning using HP StorageWorks Command View EVA

After creating the vdisk, create a data replication (DR) group. The source vdisk of the DR group is vm01 and the destination array is the EVA named CLXEVA02; in this example the DR group is named "Hyper-V white paper DR". When the DR group is created, a vdisk of the same size and name is automatically created on the remote EVA CLXEVA02. Present the vm01 vdisk of CLXEVA02 to the host CLX23. The 20 GB vm01 vdisk should now be visible on the cluster nodes. Bring the disk online, initialize it, and name it appropriately. In this example, the vdisk vm01 appears on node CLX20 as Disk 1; it was initialized and named "EVA Disk for VM". These changes are reflected automatically on the remote node CLX23 as well.
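A sketch of bringing the newly presented disk online from the host, assuming the Storage PowerShell module available from Windows Server 2012 onward (on Windows Server 2008, the equivalent steps are performed in the Disk Management snap-in or diskpart); the disk number and label follow this example.

# Bring the replicated EVA vdisk online and prepare it to hold the VM files
Set-Disk -Number 1 -IsOffline $false
Initialize-Disk -Number 1 -PartitionStyle MBR
New-Partition -DiskNumber 1 -UseMaximumSize -DriveLetter E
Format-Volume -DriveLetter E -FileSystem NTFS -NewFileSystemLabel "EVA Disk for VM"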


Figure 16: Disk Management Snap-in on Host Partitions


Creating and Configuring Virtual Machine


The Virtual Machine data should reside on the multi-site, storage-replicated disk provisioned in the previous step. This ensures that the VM-related data is available and accessible to all cluster nodes in the event of a failover. To create a Virtual Machine, follow these steps:

- Click to highlight the Hyper-V host server in Hyper-V Manager, then from the Action pane click New -> Virtual Machine. In this example, CLX20 was selected and Virtual Machine creation was initiated.
- On the Specify Name and Location page, specify a name for the Virtual Machine, such as VM02-win2k3. In this example, the Virtual Machine is named My First VM.
- Click Store the Virtual Machine in a different location, and then type the full path or click Browse and navigate to the shared storage. In this example, the Virtual Machine resides on the disk named EVA Disk for VM provisioned in the previous step.

Figure 17: Virtual Machine Creation Wizard

The New Virtual Machine wizard prompts for further inputs. On the Configure Networking page, select the virtual network created in the earlier step. On the Connect Virtual Hard Disk page, select an existing virtual hard disk or create a new one; the disk should be created on, or located on, the EVA vdisk. In this example, a new VHD is created on the provisioned EVA disk, and a locally stored Windows operating system image is attached for installing the operating system on the Virtual Machine.
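The same Virtual Machine creation expressed as a PowerShell sketch, assuming the Hyper-V module from later Windows Server releases; the VM name and location follow this example, while the memory size, VHD size, switch name, and ISO path are hypothetical.

# Create the VM with its files and a new VHD on the replicated EVA disk (E:\)
New-VM -Name "My First VM" -Path "E:\" -MemoryStartupBytes 1GB `
       -NewVHDPath "E:\My First VM\vm01.vhd" -NewVHDSizeBytes 20GB `
       -SwitchName "External-LAN"

# Attach a locally stored OS image to install the guest operating system
Add-VMDvdDrive -VMName "My First VM" -Path "C:\ISO\windows.iso"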


Figure 18: Virtual Machine Configuration

Note: The operating system for the Virtual Machine can be installed before or after adding the Virtual Machine to the cluster.


Installing Cluster Extension Software


Install the CLX software on the cluster nodes. CLX is available as a 60-day free trial version. The XP CLX software can be downloaded from http://h18006.www1.hp.com/products/storage/software/xpcluster/index.html and the EVA CLX software from http://h18000.www1.hp.com/products/storage/software/ceeva/index.html. Extract the files, launch the CLX installation, and follow the on-screen instructions to complete it. The latest version of CLX supports cluster-wide installation; in this example, the CLX installation was launched from CLX20 and installed simultaneously on CLX20 and CLX23. For more detailed information, refer to the CLX Installation Guide.

Figure 19: CLX Installation


Configuring Cluster Extension Software


After the CLX software installation completes, launch the CLX configuration tool and populate all of the data it requires. A snapshot of the CLX configuration tool is shown in the following figure. As shown, there are two CV stations managing two EVA arrays belonging to two different data centers or sites.

Figure 20: CLX Configuration

Note: For a more detailed description of CLX installation and configuration, refer to the CLX Installation Guide and Administrator Guide, respectively.


Making Virtual Machine Highly Available


- Launch the Failover Cluster Management tool and connect it to the cluster
- Select Services and Applications in the left-hand pane and, from the Actions pane, click Configure a Service or Application
- Choose Virtual Machine on the Select Service or Application page and click Next

Figure 21: Virtual Machine High Availability Wizard

On the Select Virtual Machine page, select the Virtual Machine created in the previous step and click Next. In this example, the Virtual Machine My First VM is selected. Once the wizard completes, the Virtual Machine resource appears as part of a new Services and Applications group.
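The same step expressed as a PowerShell sketch, assuming the FailoverClusters module from Windows Server 2008 R2 or later; the VM name follows this example.

# Make the existing VM highly available as a clustered service group
Add-ClusterVirtualMachineRole -VMName "My First VM"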


Creating a CLX resource


Select the Virtual Machine service group, then right-click and choose Add a resource -> More resources -> Add Cluster Extension EVA. A CLX resource is created in the selected Virtual Machine service group. The next steps are as follows:

- Configure the CLX resource properties: double-click the CLX resource
- Rename the CLX resource, select the Properties tab, and configure the CLX properties
- A completed CLX resource property tab is shown in the following figure; for details, refer to the CLX Administrator Guide

Figure 22: CLX Resource Configuration

Note: If the Failover Cluster (with the CLX resource) is managed from a remote node, the CLX resource parameter tab may appear different from the previous figure. For remote administration of a Failover Cluster with a CLX resource, refer to the latest XP CLX and EVA CLX Administrator Guides.
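A sketch of the same step from PowerShell, assuming the FailoverClusters module; the resource name is hypothetical, the group name follows this example, and the exact CLX resource type string should be confirmed in the CLX documentation.

# Add a CLX EVA resource to the Virtual Machine's service group
Add-ClusterResource -Name "CLX-VM01" -ResourceType "Cluster Extension EVA" -Group "My First VM"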


Set the dependency


Before bringing the CLX resource online, set a dependency between the VM disk and CLX such that CLX is brought online before the replicated disk. After creating the CLX resource and setting the dependency, the dependency report will appear similar to the one shown in the following figure.

Figure 23: CLX Resource Dependency
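The dependencies can also be set from PowerShell, assuming the FailoverClusters module; the resource names below follow this example and may differ in an actual cluster.

# The replicated disk must depend on the CLX resource ...
Add-ClusterResourceDependency -Resource "EVA Disk for VM" -Provider "CLX-VM01"

# ... and the Virtual Machine resource must depend on the disk
Add-ClusterResourceDependency -Resource "Virtual Machine My First VM" -Provider "EVA Disk for VM"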

At this point the CLX solution is essentially ready. It is always advisable to perform some test failovers to ensure that the solution behaves as expected.

Test Failover or Quick Migration


To test quick migration, select the Virtual Machine service and click Move this service or application to another node. In this example, the Virtual Machine is moved from CLX20 to CLX23: it is saved on CLX20 and then restored on CLX23.
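The quick migration test can also be driven from PowerShell, assuming the FailoverClusters module; the group name follows this example.

# Move the VM's service group to the other site's node and verify its state
Move-ClusterGroup -Name "My First VM" -Node CLX23
Get-ClusterGroup -Name "My First VM"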

Figure 24: Virtual Machine Test Failover

When the move completes, the Virtual Machine is no longer hosted on node CLX20 and is now running on CLX23.


Figure 25: After Test Failover


Appendix A: Bill of Materials


HP c-Class BladeSystem infrastructure

Part Number   Description
412152-B21    HP BLc7000 CTO Enclosure
410916-B21    HP BLc Cisco 1GbE 3020 Switch Opt Kit
AE371A        Brocade BladeSystem 4/24 SAN Swt Power Pk
378929-B21    Cisco Ethernet Fiber SFP Module
412138-B21    HP BLc7000 Encl Power Sply IEC320 Option
412140-B21    HP BLc Encl Single Fan Option
412142-B21    HP BLc7000 Encl Mgmt Module Option
459497-B21    HP ProLiant BL480c CTO Chassis
459502-B21    HP Intel E5450 Processor kit for BL480c
397415-B21    HP 8 GB FBD PC2-5300 2x4 GB Kit
431958-B21    146 GB 10K SAS 2.5 HP HDD
403621-B21    Emulex LPe1105-HP 4Gb FC HBA for HP BLc

HP StorageWorks Enterprise Virtual Array EVA4400 and related software

Part Number      Description
AF002A           HP Universal Rack 10642 G2 Shock Rack
AF002A 001       Factory Express Base Racking
AF062A           HP 10K G2 600W Stabilizer Kit
AF062A B01       Include with complete system
AG637A           HP EVA4400 Dual Controller Array
AG637A 0D1       Factory integrated
AG638A           HP M6412 Fibre Channel Drive Enclosure
AG638A 0D1       Factory integrated
AJ711A           HP 400 GB 10K FC EVA M6412 Enc HDD
AJ711A 0D1       Factory integrated
252663-D72       HP 24A High Voltage US/JP Modular PDU
252663-D72 0D2   Factory horizontal mount of PDU
AF054A           HP 10642 G2 Side panel Kit
AF054A 0D1       Factory integrated
221692-B21       StorageWorks LC/LC 2m Cable
T5497A           HP CV EVA4400 Unlimited LTU
T5486A           HP Continuous Access EVA4400 Unlimited LTU
T3667A           HP EVA Cluster Extension Windows LTU
T5505AAE         HP Smartstart EVA Storage E-Media Kit
U5466S           HP Care Pack


References
Step-by-Step Guide for Testing Hyper-V and Failover Clustering
http://www.microsoft.com/downloads/thankyou.aspx?familyId=cd828712-8d1e-45d1-a2907edadf1e4e9c&displayLang=en
Deploying Windows Server 2008 on HP ProLiant servers
http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00710606/c00710606.pdf
Microsoft Virtualization Resources
http://www.microsoft.com/virtualization/default.mspx
Microsoft Virtualization on HP Website
http://h18004.www1.hp.com/products/servers/software/microsoft/virtualization/
Hyper-V on HP Active Answers
http://h71019.www7.hp.com/ActiveAnswers/cache/604725-0-0-0-121.html
XP Cluster Extension Manuals
http://h20392.www2.hp.com/portal/swdepot/displayProductInfo.do?productNumber=CLXXP
EVA Cluster Extension Manuals
http://h20392.www2.hp.com/portal/swdepot/displayProductInfo.do?productNumber=CLXEVA
Comprehensive Hyper-V update list
http://technet.microsoft.com/en-us/library/dd430893.aspx
HP StorageWorks SPOCK (External)
http://www.hp.com/storage/spock
SAN Design Guide
http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00403562/c00403562.pdf?jumpid=reg_R1002_USEN
HP StorageWorks XP Continuous Access
http://h18006.www1.hp.com/products/storage/software/continuousaccess/index.html?jumpid=reg_R1002_USEN
HP StorageWorks EVA Continuous Access
http://h18006.www1.hp.com/products/storage/software/conaccesseva/?jumpid=reg_R1002_USEN
HP StorageWorks Command View EVA
http://h18006.www1.hp.com/products/storage/software/cmdvieweva/index.html?jumpid=reg_R1002_USEN
HP Multipathing solutions
http://h18006.www1.hp.com/products/sanworks/multipathoptions/index.html
Microsoft Multipathing solutions
http://www.microsoft.com/downloads/details.aspx?FamilyID=cbd27a84-23a1-4e88-b1986233623582f3&displaylang=en

Technology for better business outcomes


Copyright 2009 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein. Microsoft is a US registered trademark of Microsoft Corporation. 4AA2-6905ENW, August 2009
