VMware Infrastructure 3, deployment

Deploying a VMware Infrastructure with HP ProLiant servers, storage, and management products

Executive summary
    Audience
    This white paper
ESX server pre-deployment
    Compatibility and support
        HP ProLiant servers
        HP StorageWorks SAN
        HP I/O devices
    Server configuration options
        IOAPIC table setting
        Hyper-Threading Technology
        Node interleaving
    Platform specific considerations
        HP ProLiant servers with AMD Opteron processors
    SAN configuration
        Configuring HBAs
        Configuring clustering support
        Supported SAN topologies
        MSA configuration notes
        EVA configuration notes
        XP configuration notes
        Supported guest operating systems
        Boot From SAN
Deploying VMware ESX Server 3.0
    Installation methods
        HP Integrated Lights-Out
        Scripted installation
        HP ProLiant Essentials Rapid Deployment Pack (RDP)
    Installation considerations
        ATA/IDE and SATA drives
        SAN
        Boot From SAN
        Disk partitioning
Post-installation tasks
    HP Insight Management agents
        Obtaining IM agents for ESX Server
        Installing and configuring IM agents
        Silent installation
        Re-configuring the IM agents
    ESX Server configuration
Virtual Machine deployment
    Creating new VMs
        Using standard media
        Network deployment
        Using RDP
        Using templates
    Migrating VMs from ESX Server 2.x
    VMware Tools
        Using RDP to install VMware Tools
Using HP Systems Insight Manager
    Host management
        SNMP settings
        Trust relationship
    Troubleshooting
For more information

Executive summary
This white paper provides guidance on deploying a VMware Infrastructure environment based on HP servers, storage, and management products. The following key technology components are deployed:
• HP ProLiant servers
• HP management software (HP Systems Insight Manager (HP SIM) and OpenView)
• HP ProLiant Essentials software
• HP StorageWorks Storage Area Network (SAN) products
• VMware Infrastructure 3
• VMware ESX Server 3.0
• VMware VirtualCenter 2.0

This white paper is not designed to replace documentation supplied with individual solution components but, rather, is intended to serve as an additional resource to aid the IT professionals responsible for planning a VMware environment. This is the second in a series of documents on the planning, deployment, and operation of an Adaptive Infrastructure based on VMware Infrastructure and HP servers, storage, and management technologies.

Figure 1. Phases in implementing a VMware Infrastructure

The documents in this series are:
• An architecture guide (VMware Infrastructure 3, architecture)
• A planning guide (VMware Infrastructure 3, planning)
• A deployment guide (VMware Infrastructure 3, deployment)
• An operations guide (VMware Infrastructure 3, operations)

This white paper contains deployment information to help customers effectively deploy a VMware Infrastructure running on HP ProLiant servers, HP StorageWorks storage solutions, and HP ProLiant Essentials management software components. Prior to reading this guide, the reader should understand the VMware Infrastructure architecture and how it virtualizes the hardware. All of the HP guides, white papers, and technical documents for VMware ESX Server can be found at www.hp.com/go/vmware.

Audience
The deployment information contained in this white paper is intended for solutions architects, engineers, and project managers involved in the deployment of virtualization solutions. The reader should be familiar with networking in a heterogeneous environment and with virtualized infrastructures, and should have a basic knowledge of VMware ESX Server 3.0, VirtualCenter 2.0, HP ProLiant servers, HP StorageWorks storage, and HP software products.

This white paper
This white paper provides information on the following topics:
• Compatibility and support – Identifying and configuring HP ProLiant server platforms that are certified for VMware ESX Server 3.0
• SAN configuration – Configuring supported HP StorageWorks SAN arrays for connectivity with ESX Server systems
• Deploying VMware ESX Server 3.0 – Advanced methods for deploying ESX Server on HP ProLiant servers
• HP Insight Management agents – Installing HP management tools on an ESX Server system
• Virtual Machine deployment – Using conventional and advanced methods to deploy a virtual machine (VM)
• VMware Tools – Deploying management tools into a guest operating system

ESX server pre-deployment
This section contains configuration steps that should be performed before you deploy VMware ESX Server.

Compatibility and support
This section details HP servers, storage, and I/O devices that have been tested and are supported by HP for ESX Server 3.0.

HP ProLiant servers
For the most up-to-date list of supported platforms and important configuration notes, refer to the support matrix at http://h18004.www1.hp.com/products/servers/software/vmware/hpvmwarecert.html.

HP StorageWorks SAN
The following HP StorageWorks SAN array systems have been certified with VMware ESX Server 3.0. For the most up-to-date list of supported arrays and important configuration notes, refer to the Storage / SAN Compatibility Guide for ESX Server 3.0 at http://www.vmware.com/pdf/vi3_san_guide.pdf.
• HP StorageWorks 1500cs Modular Smart Array (MSA1500)
• HP StorageWorks 1000 Modular Smart Array (MSA1000)
• HP StorageWorks 4000 Enterprise Virtual Array (EVA4000)
• HP StorageWorks 6000 Enterprise Virtual Array (EVA6000)
• HP StorageWorks 8000 Enterprise Virtual Array (EVA8000)
• HP StorageWorks XP128 Disk Array (XP128)
• HP StorageWorks XP1024 Disk Array (XP1024)

HP I/O devices
For the most up-to-date list of supported devices and important configuration notes, refer to the HP ProLiant option support matrix at http://h18004.www1.hp.com/products/servers/software/vmware/hpvmware-options-matrix.html.

Server configuration options
This section provides information on configuring the IOAPIC table, Intel® Hyper-Threading, and node interleaving for AMD Opteron™-based systems. All of these options can be configured in the ROM-Based Setup Utility (RBSU). To access the RBSU, press F9 when prompted during the Power-On Self-Test (POST).

IOAPIC table setting
The IOAPIC (Input/Output Advanced Programmable Interrupt Controller) controls the flow of interrupt requests in a multi-processor system. It also affects the mapping of IRQs to interrupt-driven subsystems such as PCI or ISA devices. Full IOAPIC table support should be enabled for all HP ProLiant servers running VMware ESX Server. This option can be found in the Advanced Options menu of the RBSU and is enabled by default on current-generation ProLiant servers.

Note: Previous-generation ProLiant servers may refer to this option as MPS Table Mode.

Hyper-Threading Technology
Hyper-Threading Technology is an embedded Intel processor technology that allows the operating system to view a single physical CPU as two logical processors, enabling the processor to manage multiple tasks generated by different applications. Hyper-Threading is supported by ESX Server. To enable or disable Hyper-Threading at the system level, select Processor Hyper-Threading from the Advanced Options menu in the RBSU.

Node interleaving
To optimize performance over a wide variety of applications, the AMD Opteron processor supports two different types of memory access: Non-Uniform Memory Access (NUMA) and Uniform Memory Access (UMA), also known as node interleaving. NUMA is enabled by default on Opteron-based ProLiant servers. To place the server in UMA mode, enable Node Interleaving from the Advanced Options menu in the RBSU. Additional details about memory access and configuration for Opteron-based ProLiant servers are provided in the next section.

Platform specific considerations
The following section contains configuration details and considerations specific to various ProLiant server lines.

HP ProLiant servers with AMD Opteron processors
As mentioned above, the Opteron processor supports two different types of memory access: non-uniform memory access (NUMA) and sufficiently uniform memory access (SUMA), or node interleaving. A node consists of the processor cores, the embedded memory controller, and the attached DIMMs. The total memory attached to all the processors is divided into 4096-byte segments. With linear addressing (NUMA), consecutive 4096-byte segments reside on the same node; with node interleaving (SUMA), consecutive 4096-byte segments reside on different nodes.

Linear memory addressing (NUMA) starts the memory map at address 0 on node 0 and assigns sequential addresses up to the total memory on node 0. The memory on node 1 then starts with the next sequential address, and the process continues across the remaining nodes.

Node interleaving (SUMA) breaks memory into 4KB addressable entities. Addressing starts at address 0 on node 0: addresses 0 through 4095 go to node 0, addresses 4096 through 8191 to node 1, addresses 8192 through 12287 to node 2, and addresses 12288 through 16383 to node 3 (on a four-node system). Address 16384 is then assigned back to node 0, and the process continues until all memory has been assigned in this fashion. An illustrative address-to-node mapping is sketched below.

ESX Server currently offers NUMA support for Opteron-based systems and implements several optimizations designed to enhance virtual machine performance on NUMA systems. However, some virtual machine workloads may not benefit from these optimizations. For example, virtual machines that have more virtual processors than the number of processor cores available on a single hardware node cannot be managed automatically. Virtual machines that are not managed automatically by the NUMA scheduler still run correctly; they simply do not benefit from ESX Server's NUMA optimizations. In this case, performance may be improved by activating node interleaving.

For best performance, HP recommends configuring each node with an equal amount of RAM. Additionally, each ProLiant server may have its own rules and guidelines for configuring memory; see the QuickSpecs for each platform, available at http://h18000.www1.hp.com/products/quickspecs/ProductBulletin.html.

For more information on ESX Server and NUMA technology, refer to VMware Knowledge Base article 1570.
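To make the interleaving arithmetic concrete, the following illustrative mapping (a hypothetical four-node system, not taken from the original tables) shows how the first 32KB of physical memory is distributed in SUMA mode:

    Address range      Node
    0 – 4095           0
    4096 – 8191        1
    8192 – 12287       2
    12288 – 16383      3
    16384 – 20479      0
    20480 – 24575      1
    24576 – 28671      2
    28672 – 32767      3

In general, physical address A resides on node (A div 4096) mod N for an N-node system.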


SAN configuration
This section contains important information for both server and SAN administrators to use when configuring ESX Server hosts for SAN connectivity.

Configuring HBAs
This section contains information on obtaining the World-Wide Port Names (WWPNs) from your Fibre Channel HBAs and configuring them for clustering support.

Obtaining World-Wide Port Names (WWPNs)
In order to configure an HP StorageWorks SAN, you will need to know the WWPN for each HBA you intend to connect to the SAN. Follow the instructions for your HBA model in Table 1 to obtain the WWPN, and write it down for later use.
Table 1. Obtaining WWPNs for HBAs

HBA      How to obtain WWPN
QLogic   Enter the QLogic Fast!UTIL utility during server POST. Select the appropriate host adapter (if more than one is present), then go to Configuration Settings -> Host Adapter Settings and look for Adapter Port Name.
Emulex   Enter the Emulex BIOS Utility during server POST. Select the appropriate host adapter (if more than one is present). The WWPN is displayed at the top of the screen.
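If ESX Server is already installed, the WWPNs can usually also be read from the HBA driver's proc nodes in the service console, avoiding a reboot. This is a sketch under the assumption that the QLogic (qla2300) or Emulex (lpfc) driver is loaded; adapter numbering varies by system:

%> grep adapter-port /proc/scsi/qla2300/*    # QLogic: scsi-qla0-adapter-port=<WWPN>
%> grep Portname /proc/scsi/lpfc/*           # Emulex: Portname: <WWPN>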

Configuring clustering support
Clustering your virtual machines between ESX Server machines requires shared disks. To configure an HBA for clustering support, follow the instructions for your HBA listed in Table 2.
Table 2. Configuring clustering support

HBA      Configuring clustering support
QLogic   Enter the QLogic Fast!UTIL utility during server POST, then select the desired HBA. Select Configuration Settings -> Advanced Adapter Settings and ensure that the following settings are configured:
         • Enable LIP Reset is set to No
         • Enable LIP Full Login is set to Yes
         • Enable Target Reset is set to Yes
Emulex   N/A

Supported SAN topologies
All HP StorageWorks SANs are supported in both single-fabric and multi-fabric environments. Direct connect is not supported except when using the HP StorageWorks MSA SAN Switch 2/8. (The MSA SAN Switch 2/8 is a true Fibre Channel switch and thus does not represent a true direct connect architecture.)

For more information on specific SAN topologies, refer to the HP SAN Design Reference Guide available at http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00403562/c00403562.pdf.

IMPORTANT: For high availability, multi-path capability is provided natively by ESX Server. Do not attempt to install other multipathing software such as HP Secure Path or HP Auto Path.

MSA configuration notes
Before configuring an MSA1000 or MSA1500 SAN and an ESX Server machine, HP recommends upgrading the array controller to an appropriate firmware version, as specified in Table 3.
Table 3. Array controller firmware levels

Array     Minimum   Recommended
MSA1000   4.48      4.48
MSA1500   4.98      5.02

For more information on upgrading the firmware and configuring the MSA, see http://h18006.www1.hp.com/storage/arraysystems.html.

For proper communications between the MSA and an ESX Server host, set the connection profile for each HBA port to Linux. Follow these steps:

1. Determine the World-Wide Port Name (WWPN) for each HBA port to be connected to the array.
2. Set the profile for each HBA port using either the Command Line Interface (CLI) or the Array Configuration Utility (ACU).

CLI
– Provide a unique name for the connection and set the profile by typing the following command:
  CLI> add connection <unique_name> wwpn=<wwpn> profile=Linux
– To verify that each connection has been correctly set, type the following command:
  CLI> show connections
– For each connection, verify that the profile is set to Linux and that its status is Online. If there are any problems, refer to the MSA1000/MSA1500 documentation for troubleshooting guidelines.

ACU
– Enable Selective Storage Presentation (SSP).
– Review the list of HBAs connected to the array.
– Assign a unique name to each connection and select Linux from the drop-down list as the desired profile.
– Enable access to the LUNs you wish to present to the ESX Server systems.
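For example, to register the first HBA port of a host named esx01 using the CLI commands above (the connection name and WWPN shown are hypothetical):

  CLI> add connection esx01_port1 wwpn=210000E08B123456 profile=Linux
  CLI> show connections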


EVA configuration notes
Before configuring an EVA SAN to an ESX Server machine, HP recommends upgrading the array controller to the appropriate firmware version, as specified in Table 4.
Table 4. Array controller firmware levels

Array                        Minimum     Recommended
EVA3000 (HSV100)             VCS 3.028   VCS 4.004
EVA5000 (HSV110)             VCS 3.028   VCS 4.004
EVA4000 / EVA6000 (HSV200)   XCS 5.031   XCS 5.100
EVA8000 (HSV210)             XCS 5.031   XCS 5.100

For more information on upgrading the firmware of an EVA array and on configuring the EVA, see http://h18006.www1.hp.com/storage/arraysystems.html.

When adding an ESX Server system as a new host, Command View EVA may not populate all the HBAs in the WWPN drop-down list. The WWPN can be entered manually if this occurs. The connection type must be set according to Table 5.


Table 5. Array controller connection type

Array                         Firmware    Connection Type
EVA3000 / EVA5000             VCS 3.028   Custom: 000000002200282E
EVA3000 / EVA5000             VCS 4.004   VMware
EVA4000 / EVA6000 / EVA8000   XCS 5.031   Custom: 00000000220008BC
EVA4000 / EVA6000 / EVA8000   XCS 5.100   VMware

Note: Red Hat Advanced Server 2.1, Red Hat Enterprise Linux 3, and SuSE Linux Enterprise Server 8 guest VM support requires using the "vmxlsilogic" SCSI emulation.

XP configuration notes
Before configuring an XP SAN and ESX Server, HP recommends upgrading the array controller to an appropriate firmware version, as specified in Table 6.
Table 6. Array controller firmware levels

Array    Minimum          Recommended
XP128    21.14.18.00/00   21.14.18.00/00
XP1024   21.14.18.00/00   21.14.18.00/00

For more information on upgrading the firmware of an XP array and on configuring this array, see http://h18006.www1.hp.com/storage/arraysystems.html.

The host mode for all XP arrays should be set to “0x0C”.

Supported guest operating systems
The following guest operating systems are supported with HP StorageWorks SAN arrays and VMware ESX Server:


Table 7. Supported guest operating systems

Microsoft® Windows®           SuSE Linux            Red Hat
Windows 2000 SP3 and SP4      SLES 8 SP3            RHEL 2.1 U6 and U7
Windows 2003 base and SP1     SLES 9 SP1 and SP2    RHEL 3 U4 and U5
                                                    RHEL 4 U2

Note: RHEL 2.1, RHEL 3, and SLES 8 support requires using the "vmxlsilogic" SCSI emulation.

Boot From SAN
Enabling Boot From SAN on an HP ProLiant server is a two-stage process: enabling and configuring the QLogic BIOS, and configuring the server’s host boot order in the RBSU. Perform the following steps:

Configuring the BIOS
1. While the server is booting, press Ctrl-Q to enter Fast!UTIL.
2. From the Select Host Adapter menu, choose the adapter you want to boot from, then press Enter.
3. In the Fast!UTIL Options menu, choose Configuration Settings, then press Enter.
4. In the Configuration Settings menu, choose Host Adapter Settings, then press Enter.
5. In the Host Adapter Settings menu, change the Host Adapter BIOS setting to Enabled by pressing Enter.
6. Press ESC to go back to the Configuration Settings menu. Choose Selectable Boot Settings, then press Enter.
7. In the Selectable Boot Settings menu, enable the Selectable Boot option, then move the cursor to Primary Boot Port Name, LUN: and press Enter.
8. In the Select Fibre Channel Device menu, choose the device to boot from, then press Enter.
9. In the Select LUN menu, choose the supported LUN.
10. Save the changes by pressing ESC twice.

Configuring the host boot order
1. While the system is booting, press F9 to start the BIOS Setup Utility.
2. Choose Boot Controller Order.
3. Select the primary HBA (that is, the HBA dedicated to your SAN or presented to your LUN) and move it to Controller Order 1.
4. Disable the Smart Array controller.
5. Press F10 to save your configuration and exit the utility.


Deploying VMware ESX Server 3.0
Installation methods
VMware ESX Server 3 includes both a graphical and a text-mode installer. The graphical installer is the default and recommended method for installation. When using the VMware installation media, you will be presented with a boot prompt at system startup. Press Enter to start the graphical installer, or type “esx text” at the boot prompt to use the text-mode installer.

HP Integrated Lights-Out
Integrated Lights-Out (iLO) is a web-based remote management technology available on HP ProLiant servers. iLO offers complete control of the target server – as if you were physically standing in front of it – from any network-accessible location. The iLO Virtual Media feature offers a number of options for booting a remote machine in order to install ESX Server.
Table 8. Options for booting a remote machine

How?                                              Where?
Using a standard 1.44-MB floppy diskette          On a client machine
Using a CD-ROM                                    On a client machine
Using an image of the floppy diskette or CD-ROM   From anywhere on the network

ESX Server supports installation via iLO Virtual Media using either the physical installation CD or an ISO image (per the ISO 9660 standard) from a client machine or the network. Virtual Media requires an iLO/iLO2 Advanced license. For more information about iLO, refer to www.hp.com/servers/ilo.

Scripted installation
Once ESX Server has been deployed on an HP ProLiant server, IT staff can use this system to automate further deployments. This is particularly useful when deploying ESX Server instances on a number of similarly configured servers. See the Installation and Upgrade Guide, available from VMware at http://www.vmware.com/pdf/vi3_installation_guide.pdf, for more information on creating a scripted installation.

HP ProLiant Essentials Rapid Deployment Pack (RDP)
Beginning with Rapid Deployment Pack 3.1 (releasing in the 4th quarter of 2006), VMware ESX Server 3 is a supported operating system for deployment. The primary advantages of deploying ESX Server 3.0 with RDP are as follows: the HP Insight Management agents are deployed along with the operating system, which saves a step in the deployment process; numerous systems can be deployed and personalized simultaneously, eliminating the need for customized scripted installs; and RDP eliminates the need to configure individual hardware settings on HP ProLiant servers.

Installation considerations
This section highlights some additional items that you may wish to consider before beginning your deployment.

ATA/IDE and SATA drives
VMware supports booting from ATA/IDE and SATA devices; however, you cannot create a VMFS volume on these devices. An ESX Server host must have SCSI/SAS storage, NAS, or a SAN on which to store and run virtual machines.

SAN
Before installation, you should zone and mask all SAN LUNs away from your server except those needed during installation, including shared LUNs with existing VMFS partitions. This helps prevent accidental deletion of critical VMs and data. After installation, you may then unmask the shared LUNs. The maximum number of LUNs supported by the ESX Server installer is 128, and the maximum for ESX Server itself is 255; keep these maxima in mind when configuring and presenting LUNs.

Boot From SAN
ESX Server does not support booting from a shared LUN. Each ESX Server host should have its own boot volume, and this volume should be masked away from all other systems.

IMPORTANT: Unlike ESX Server 2.5, the ESX Server 3.0 Boot From SAN installation is integrated with the boot-from-local-disk installation. If you do not mask your LUNs properly during a manual installation, the SAN LUNs will be available for selection at the Advanced Options screen. If you select one of these LUNs and it houses virtual machines, they will be erased when the LUN is formatted during installation.

Disk partitioning
Table 9 shows how the ESX Server host’s storage should be partitioned. The sizes provided are recommended minima, and optional partitions are noted.


Table 9. Default storage configuration and partitioning for a VMFS volume on internal drives

Partition name     File system   Size      Description
/boot              ext3          100MB     The boot partition stores files required to boot ESX Server.
/                  ext3          2560MB    Called the “root” partition, this contains the ESX Server operating system and Web Center files. Allocate an additional 512MB if you plan to use this server for scripted installations.
NA                 swap          544MB     The swap partition is used by the service console and tools like the HP Insight Management agents.
vmkcore            vmkcore       100MB     This partition serves as a repository for the VMkernel core dump files in the event of a VMkernel core dump.
VMFS               VMFS-3        1200MB+   The VMFS file system for the storage of virtual machine disk files. Must be large enough to hold your VM disks.
/home (optional)   ext3          512MB     Storage for individual users.
/tmp (optional)    ext3          1024MB    Partition used for temporary storage.
/var (optional)    ext3          1024MB    Partition used for log file storage. HP recommends creating a /var partition to prevent unchecked log file growth from creating service interruptions.
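For the scripted installations described earlier, this layout can be expressed in the kickstart-style configuration file consumed by the ESX Server installer. The fragment below is an illustrative sketch only: the boot disk name (cciss/c0d0), the datastore label, and the exact directive spellings are assumptions, so generate and validate the real file using the method in the VMware Installation and Upgrade Guide.

# Illustrative partitioning fragment for an ESX 3 scripted install (assumed syntax)
part /boot      --fstype ext3    --size=100  --ondisk=cciss/c0d0
part /          --fstype ext3    --size=2560 --ondisk=cciss/c0d0
part swap       --size=544       --ondisk=cciss/c0d0
part None       --fstype vmkcore --size=100  --ondisk=cciss/c0d0
part datastore1 --fstype vmfs3   --size=1200 --grow --ondisk=cciss/c0d0
part /var       --fstype ext3    --size=1024 --ondisk=cciss/c0d0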


Post-installation tasks
HP Insight Management agents
HP Insight Management (IM) agents provide server management capabilities for ESX Server installed on supported server platforms. This section describes how to obtain, install, and configure the agents for a particular server environment.

Obtaining IM agents for ESX Server
The latest IM agents are available from the HP website. Follow these steps to download the agents:
1. Go to http://www.hp.com/servers/swdrivers.
2. Under Option 2: Locate by category, select the appropriate version of VMware ESX Server from the Operating system drop-down list.
3. From the Category drop-down list, select Software – System Management.
4. Click Locate software to obtain a download link.
5. Download the compressed tar file directly to the ESX Server system.
6. Unpack the archive with the command:
   %> tar zxvf hpmgmt-<version>-vmware.tgz

Note: Opening the tar file on a Windows server may corrupt files in the archive.

The contents of the archive are unpacked in the hpmgmt/<version>/ directory. Table 10 lists the packages included in the archive and their functions.
Table 10. Descriptions of packages included with the download

Package               Description
hpasm                 Provides server and storage management capabilities.
hprsm                 Provides rack and iLO management capabilities.
cmanic                Gathers critical HP ProLiant NIC hardware and software information to help IT administrators manage and troubleshoot their systems.
hpsmh                 The System Management Homepage provides a consolidated view for single-server management that highlights tightly-integrated management functionalities (such as performance, fault, security, diagnostic, configuration, and software change management).
expat                 Package dependency for hpsmh. Not installed on ESX Server 3.0.
ucd-snmp-cmaX         Contains the UCD-SNMP protocol and cmaX extensions. Not installed on ESX Server 3.0.
ucd-snmp-cmaX-utils   Contains tools for requesting or setting information from SNMP agents, tools for generating and handling SNMP traps, a version of the netstat command that uses SNMP, and a Tk/Perl MIB browser. Not installed on ESX Server 3.0.


Installing and configuring IM agents
After the tar file has been downloaded and unpacked, view the included README file for important installation and configuration information. Before starting the installation, you should consider or have available the following information:

• SNMP settings: During the installation, you will be asked to supply community strings (both read-only and read-write) for SNMP communication with the localhost and with a management station such as HP Systems Insight Manager. The settings you provide are written to the SNMP configuration file, /etc/snmp/snmpd.conf. You must specify a read-write community string for the localhost; this community string is used by the agents to write data to the SNMP Management Information Base (MIB) tree. If you are using HP SIM or other management software, see Central Management Server below.

• Central Management Server: If using a central management system such as HP Systems Insight Manager, you will need to provide the IP address or DNS name of the management server during the installation. Enter the management server’s IP address along with the community string that matches the settings in your management server. When using HP SIM, you only need to allow read-only access to the CMS.

• Firewall configuration: VMware ESX Server 3.0 uses a firewall to restrict network communications to and from the ESX Server host to essential services only. For full functionality of the health and management agents, the ports listed in Table 11 must be opened.
Table 11. Ports

Port   Protocol   Service   ESX Firewall Service   Description
161    tcp/udp    snmp      snmpd                  SNMP traffic
162    tcp/udp    snmp      snmpd                  SNMP traps
2381   tcp        https     N/A                    HP System Management Homepage
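If you prefer to open these ports yourself, the ESX Server 3.0 service console provides the esxcfg-firewall utility. The following is a minimal sketch, assuming the stock snmpd service definition and using an arbitrary rule label for the SMH port; verify the current state with esxcfg-firewall --query:

%> esxcfg-firewall -e snmpd                # enable the predefined snmpd service (ports 161/162)
%> esxcfg-firewall -o 2381,tcp,in,hpsmh    # open TCP port 2381 inbound, labeled hpsmh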

During the installation, you will be given the option to have the installer configure the ESX Server firewall for you.

For more information about the HP health and management agents, please see the Managing ProLiant Servers with Linux HOWTO at ftp://ftp.compaq.com/pub/products/servers/Linux/linux.pdf. Although this HOWTO was written for enterprise Linux systems, much of the information it contains is also applicable to VMware ESX Server environments.

To begin the installation, log in as root and run the following command:
%> ./installvm<version>.sh --install

The installation script performs some basic checks and guides you through the installation process. After the installation has completed, you may wish to configure the System Management Homepage (SMH). To start the SMH configuration wizard, run the following command:
%> /usr/local/hp/hpSMHSetup.pl


For detailed information on configuring SMH, refer to the System Management Homepage Installation Guide available at http://docs.hp.com/en/381372-002/381372-002.pdf.

Silent installation
The installation script may also be run in silent mode, installing and configuring the IM agents based on settings contained in an input file, without user interaction. To automate the installation of the agents, create an input file using the hpmgmt.conf.example file from the download package as a template. Information on the available options is given in the example file; at a minimum, you should configure local read-write community access for SNMP.

To automate the configuration of the System Management Homepage, place a copy of the SMH configuration file (smhpd.xml) with the desired settings into the same directory as the agent installation script. It is recommended to use a file from a pre-existing installation rather than edit the file by hand; the smhpd.xml file can be found in /opt/hp/hpsmh/conf/. During a silent installation, the installer checks for the presence of this file. If found, it is used to configure SMH; otherwise, SMH is configured with the default options.

When you are ready to begin the installation, log in as root and run the following command:
%> ./installvm<version>.sh --silent --inputfile input.file

The installation process starts immediately; you are not prompted for confirmation. However, if necessary information is missing from the configuration file, you may be prompted for it during the installation.

Re-configuring the IM agents
To change the configuration of the agents after the installation is complete, log in as root and run the following command:
%> service hpasm reconfigure

This command stops the agents and reruns the interactive configuration wizard. After reconfiguring the agents, you must restart the SNMP service.
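On the ESX 3.0 service console, the SNMP daemon is managed as a standard init service; assuming the stock service name (snmpd), the restart is:

%> service snmpd restart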

ESX Server configuration
After installing ESX Server, you will need to configure the host’s networking, storage, and security settings. Please see the Virtual Infrastructure: Server Configuration Guide for complete details on configuring your ESX Server host.


Virtual Machine deployment
The provisioning and deployment of a virtual machine (VM) is, in many ways, similar to the provisioning and deployment of a physical server. Servers are first configured with the desired hardware (such as CPUs, memory, disks, and NICs) then provisioned with an operating system – most likely via physical media such as a CD-ROM or DVD, or over the network. Likewise, a VM is created with a specific virtual hardware configuration; however, in addition to the more conventional methods of server provisioning, VMs offer some unique options. This section discusses conventional and advanced methods for deploying VMs.

Creating new VMs
Many of the deployment options used for physical servers are also available to virtual machines. Some of the more widely-used options (the use of standard media, network deployment, and the use of HP ProLiant Essentials Rapid Deployment Pack (RDP)) are discussed below.

Using standard media
The most basic method of installing an operating system is by using the physical install media – most likely a CD-ROM or DVD or, perhaps, a floppy diskette. A VM’s CD-ROM drive can be mapped directly to the CD-ROM drive of the host machine, permitting a very simple guest operating system installation. The VM’s CD-ROM drive can also be mapped to an image file on the host or the host’s network. With ESX Server 3.0 and the Virtual Infrastructure Client, you can now also use images on the client or the client’s network. By creating a repository for CD-ROM images on the network or SAN, you can maintain a central location for images to be shared by all ESX Server machines, eliminating the need to locate and swap CD-ROMs between hosts.

VMs can also access CD-ROMs and image files via the HP ProLiant server host’s iLO Virtual Media feature. When connecting a VM’s CD-ROM to the iLO Virtual CD, take note of the following:
• Use the special device /dev/scd0 rather than the standard /dev/cdrom.
• It is NOT necessary to first mount the device in the ESX service console.

When using iLO Virtual Floppy, the device is typically /dev/sda; however, if a SAN or some other SCSI device is attached, this may not be the case. To verify which device is attached to the Virtual Floppy, run the dmesg command in the service console after connecting the floppy in the iLO interface. Look for lines similar to the following example:

scsi3 : SCSI emulation for USB Mass Storage devices
  Vendor: HP        Model: Virtual Floppy    Rev: 0.01
  Type:   Direct-Access                      ANSI SCSI revision: 02
VMWARE: Unique Device attached as scsi disk sde at scsi3, channel 0, id 0, lun 0
Attached scsi removable disk sde at scsi3, channel 0, id 0, lun 0

Note that the fourth and fifth lines of this example show the Virtual Floppy attached as disk sde (/dev/sde).

Network deployment
Many operating systems now support some method of installation over the network (for example, Microsoft Remote Installation Service). This scenario is usually accomplished by remote booting with a Pre-Boot eXecution Environment (PXE) ROM or by using special boot media containing network support. PXE boot is supported by ESX Server VMs.


A VM with no guest operating system installed attempts to boot from devices (hard disk, CD-ROM drive, floppy drive, network adapter) in the order in which these devices appear in the boot sequence specified in the VM’s BIOS. As a result, if you plan to use PXE boot capability, HP recommends placing the network adapter at the top of the boot order. To achieve this, press F2 when the VM first boots to enter the VM’s BIOS; update the boot order in the BIOS. Note: The PXE boot image must contain drivers for the Universal Network Device Interface (UNDI) or the VM’s virtual network adapter to support network connectivity.

Using RDP
RDP includes predefined jobs for deploying an operating system to a VM. However, before using one of these jobs, you must perform some additional steps, as described below.

First, set the PXE NIC to appear first in the VM’s boot order:
1. Power on the VM.
2. Press F2 during POST to enter the VM’s BIOS configuration utility.
3. From the Boot menu, select Network Boot and then press the + (plus) key until the PXE NIC is first in the boot order.

Next, allow the VM to PXE boot and connect to the Deployment Server. Once connected, the VM is displayed under New Computers in the Deployment Server console as shown in Figure 2.

Figure 2. The VM is displayed in the Deployment Server console

The default deployment scripts use the console name as the system name. You should consider renaming the VM with a name that complies with the requirements of the operating system to be deployed, or modify the deployment job to use or create a valid system name. The deployment job may now be run on the VM.

To customize the deployment of the operating system on the VM, use the same procedures as you would for a physical server. For example, if installing a Windows operating system, you must create an unattend.txt file that is customized for the specific VM, then configure the deployment job to use your custom unattend.txt file, as sketched below.
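As an illustrative sketch (the computer name and values are hypothetical, and a complete answer file requires additional sections for your Windows version), the VM-specific portion of unattend.txt might contain:

; Illustrative fragment only – VM001 and the names below are hypothetical
[Unattended]
    UnattendMode = FullUnattended

[UserData]
    ComputerName = VM001
    FullName = "VM Administrator"
    OrgName = "Example Org"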


NOTE: Once you have completed installation of your guest OS you will likely want to change the boot order to prevent further PXE booting. Doing so will prevent excessive DHCP leases from a single system.

Using templates
VMware VirtualCenter supports the creation and deployment of templates after the first virtual machine has been created. For a complete discussion of using templates to deploy virtual machines, consult the VMware Infrastructure 3, operations guide at http://www.hp.com/go/vmware.

Migrating VMs from ESX Server 2.x
Virtual Machines created with ESX Server 2.x can be migrated to ESX Server 3.0 hosts. Follow the procedures in the VMware Installation and Upgrade Guide at http://www.vmware.com/pdf/vi3_installation_guide.pdf for migrating your ESX 2.x virtual machines.

VMware Tools
This section provides guidelines for deploying VMware Tools into a Windows guest operating system.

IMPORTANT: It is very important to install VMware Tools in the guest operating system. While the guest operating system can run without VMware Tools, significant functionality and convenience would be lost.

VMware Tools is a suite of utilities that can enhance the performance of the VM’s guest operating system and improve VM management. Features include:
• The VMware Tools service for Windows (or vmware-guestd on Linux guests)
• A set of VMware device drivers, including an SVGA display driver, the advanced networking driver for some guest operating systems, the BusLogic SCSI driver for some guest operating systems, the memory control driver for efficient memory allocation between virtual machines, the sync driver to quiesce I/O for Consolidated Backup, and the VMware mouse driver
• The VMware Tools control panel, which allows IT staff to modify settings, shrink virtual disks, and connect and disconnect virtual devices
• A set of scripts that helps automate guest operating system operations; the scripts run when the VM’s power state changes
• A component that supports copying and pasting text between the guest and managed host operating systems

For more information on installing VMware Tools, refer to the Basic System Administration guide available at http://www.vmware.com/pdf/vi3_admin_guide.pdf. See below for guidelines on using RDP to install VMware Tools.


Using RDP to install VMware Tools
RDP can be used to automate the installation of VMware Tools into a Windows guest OS.

Note: With ESX 3, it is no longer necessary to ignore the DriverSigningPolicy specified within Microsoft KB article 298503 for the installation of VMware Tools to be automated. All drivers within ESX 3 virtual machines are signed by Microsoft.

Perform the following steps:
1. To copy the VMware Tools files to the Deployment Server, first use the Virtual Infrastructure Client to attach to an existing VM’s console.
2. Connect the VM’s virtual CD-ROM to the VMware Tools ISO image at /usr/lib/vmware/isoimages/windows.iso.
3. Copy the contents of the CD-ROM to VMwareTools, a newly-created directory under <deployment_server>\lib\software on the Deployment Server.
4. Create a new Distribute Software task in a new or existing job.
5. Configure the task to support the silent installation of all VMware Tools components, as shown in Figure 3.


Figure 3. Configuring a task for the silent install of all VMware Tools components
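The command line configured in this task is a standard MSI-style silent invocation of the VMware Tools Windows installer; a sketch, assuming the VMwareTools directory created in step 3:

.\VMwareTools\setup.exe /s /v "/qn"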

Note: When scheduling this task, be aware that the VM reboots when the installation is complete.


Using HP Systems Insight Manager
Host management
This section provides details on configuring HP Systems Insight Manager (HP SIM) to manage your HP ProLiant servers running ESX Server 3.0.

SNMP settings
Before discovering your host, you should install and configure the HP Insight Management agents according to the instructions above. Verify that you have provided read-only SNMP access to the CMS and that the community string matches one provided to HP SIM. To check the community strings, go to Options -> Protocol Settings -> Global Protocol Settings.

Figure 4. Checking community strings

You can provide up to 8 global community strings. If you need more, you can provide additional strings for each host after discovery. Note: If you use “public” for a community string, HP recommends making it the lowest priority.


During discovery, HP SIM will attempt to use the first community string in the list. If it is unable to establish communication with the host, then it will try with the second string in the list and so on. Once HP SIM can establish a connection with a particular community string, no additional strings will be tried and that string will be used for all future SNMP communication (unless explicitly configured via the host’s protocol settings page). Once you have verified that your SNMP configuration is correct, you can then perform device discovery of your host. Note: Make sure that you have configured your ESX firewall to permit SNMP traffic. SNMP traffic operates on ports 161 and 162 and requires that both TCP and UDP be allowed.
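To confirm SNMP connectivity before running discovery, you can query the host from any machine with the net-snmp utilities installed; the hostname and community string below are placeholders:

%> snmpwalk -v 1 -c public esxhost01 system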

Trust relationship You may wish to establish a “trust relationship” between the HP SIM CMS and your ESX Server host. This will enable single sign-on (SSO) management between the CMS and the host and permit remote task execution. Trust can be established by Name or by Certificates and is configured through the System Management Homepage. To export the CMS certificate, go to Options Security Certificates Server Certificate and use the Export feature. This certificate can be saved, then uploaded to your ESX Server host. Refer to the System Management Homepage Installation Guide available at http://docs.hp.com/en/381372-002/381372-002.pdf for further details.

Troubleshooting
This section describes some common problems you may encounter while discovering your ESX Server hosts and VMs, and provides some tips on how to resolve them.

My ESX Server hosts do not display the ProLiant server model, or report as “unknown” or “Linux Server”:
• Verify that the HP Insight Management agents have been installed and are running.
• Verify that your ESX firewall is configured to permit SNMP traffic (ports 161 and 162).
• Open a browser to the System Management Homepage (https://hostname:2381). If you are unable to bring up SMH, or if any information is missing, restart the management agents.
• Navigate to the System Page for the host in HP SIM. Expand the Product Description area and verify that SNMP is listed as one of the Management Protocols.
• Navigate to the host’s System Protocol Settings page and verify that the SNMP community string matches the one you configured on your host. If not, override the global setting and supply the community string you wish to use.
• Re-run Identify Systems from the Options menu.


For more information
Resource description                       Web address
HP website                                 www.hp.com
HP ProLiant servers                        www.hp.com/go/proliant
HP ProLiant Server Management Software     www.hp.com/go/hpsim
HP StorageWorks                            www.hp.com/go/storageworks
VMware server virtualization               www.hp.com/go/vmware
VMware website                             www.vmware.com
VMware Infrastructure                      www.vmware.com/products/vi/
VMware Infrastructure Documentation        www.vmware.com/support/pubs/vi_pubs.html
VMware Knowledge Base                      www.vmware.com/kb
VMware and HP Alliance                     www.vmware.com/hp
Microsoft KB article 298503                http://support.microsoft.com/?kbid=298503

© 2006 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein. Intel is a trademark or registered trademark of Intel Corporation or its subsidiaries in the United States and other countries. AMD Opteron is a trademark of Advanced Micro Devices, Inc. Microsoft and Windows are U.S. registered trademarks of Microsoft Corporation. August 2006