
Overview_A.book Page 1 Friday, September 5, 2008 12:40 PM

Overview of VMware
Infrastructure 3
Instructor Manual
VMware ESX 3.5 and VirtualCenter 2.5

VMware® Education Services


VMware, Inc.
education@vmware.com

Overview of VMware
Infrastructure 3
VMware ESX 3.5 and VirtualCenter 2.5
Part Number EDU-ENG-A-OVW35-LECT-INST
Instructor Manual
Revision A

All rights reserved. This work and the computer programs to which it relates are the property of, and embody
trade secrets and confidential information proprietary to, VMware, Inc., and may not be reproduced, copied,
disclosed, transferred, adapted or modified without the express written approval of VMware, Inc.

Copyright/Trademark

This manual and its accompanying materials copyright © 2008 VMware, Inc. All rights reserved. Printed in
U.S.A. This document may not, in whole or in part, be copied, photocopied, reproduced, translated,
transmitted, or reduced to any electronic medium or machine-readable form without prior consent, in writing,
from VMware, Inc.

The training material is provided “as is,” and all express or implied conditions, representations, and
warranties, including any implied warranty of merchantability, fitness for a particular purpose or non-
infringement, are disclaimed, even if VMware, Inc., has been advised of the possibility of such claims.
This training material is designed to support an instructor-led training course and is intended to be used for
reference purposes in conjunction with the instructor-led training course. The training material is not a
standalone training tool. Use of the training material for self-study without class attendance is not
recommended.

Copyright © 2008 VMware, Inc. All rights reserved. VMware and the VMware boxes logo are registered
trademarks of VMware, Inc. MultipleWorlds, GSX Server, ESX Server, VMware ESX, and VMware ESXi are
trademarks of VMware, Inc. Microsoft, Windows and Windows NT are registered trademarks of Microsoft
Corporation. Linux is a registered trademark of Linus Torvalds. All other marks and names mentioned herein
may be trademarks of their respective owners.

education@vmware.com

CONTENTS

MODULE 1 Virtual Infrastructure Overview . . . . . . . . . . . . . . . . . . . . . . . . . . 1


What Is Virtual Infrastructure?. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
How Does Virtualization Work? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
ESX Uses a Hypervisor Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
VMware Infrastructure 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
VMware Infrastructure Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
Management Made Easy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
View VirtualCenter Inventory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
ESX Storage Choices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
VMFS and NFS Datastores . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
Virtual Network Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
Flexible Network Connectivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
Virtual Switches Support VLANs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
Lab for Module 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18

MODULE 2 Create a Virtual Machine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19


What Is a Virtual Machine? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
ESX Virtual Machine Hardware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
Centralized Virtual Machine Management . . . . . . . . . . . . . . . . . . . . . . . . . 22
Fast, Flexible Guest OS Installations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
Reducing Virtual Machine Deployment Time . . . . . . . . . . . . . . . . . . . . . . 24
Create a Template . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
Deploy a Virtual Machine from a Template . . . . . . . . . . . . . . . . . . . . . . . . 26
Automating Guest OS Customization . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
Lab for Module 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28

MODULE 3 CPU and Memory Resource Pools . . . . . . . . . . . . . . . . . . . . 29


CPU Management Supports Server Consolidation. . . . . . . . . . . . . . . . . . . 30
Flexible Resource Allocation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Virtual Machine CPU Resource Controls. . . . . . . . . . . . . . . . . . . . . . . . . . 32
Supporting Higher Consolidation Ratios (1) . . . . . . . . . . . . . . . . . . . . . . . 33
Supporting Higher Consolidation Ratios (2) . . . . . . . . . . . . . . . . . . . . . . . 34
Virtual Machine Memory Resource Controls. . . . . . . . . . . . . . . . . . . . . . . 35
Using Resource Pools to Meet Business Needs . . . . . . . . . . . . . . . . . . . . . 37




Configuring a Pool's Resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38


Viewing Resource Pool Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
Resource Pools Example: CPU Contention. . . . . . . . . . . . . . . . . . . . . . . . 40
Admission Control for CPU and Memory Reservations . . . . . . . . . . . . . . 41

MODULE 4 Migrate Virtual Machines Using VMotion . . . . . . . . . . . . . . 43


VMotion Migration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
How VMotion Works (1) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
How VMotion Works (2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
How VMotion Works (3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
How VMotion Works (4) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
How VMotion Works (5) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
How VMotion Works (6) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
ESX Host Requirements for VMotion . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
Virtual Machine Requirements for VMotion. . . . . . . . . . . . . . . . . . . . . . . 52
Lab for Module 4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53

MODULE 5 VMware DRS Clusters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55


What Is a DRS Cluster? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
Create a DRS Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
Automating Workload Balance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
Adding ESX Hosts to a DRS Cluster (1). . . . . . . . . . . . . . . . . . . . . . . . . . 60
Adding ESX Hosts to a DRS Cluster (2). . . . . . . . . . . . . . . . . . . . . . . . . . 61
Automating Workload Balance per VM . . . . . . . . . . . . . . . . . . . . . . . . . . 62
Adjusting DRS Operation for Performance or HA . . . . . . . . . . . . . . . . . . 63
Lab for Module 5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64

MODULE 6 Monitoring Virtual Machine Performance . . . . . . . . . . . . . 65


VirtualCenter Performance Graphs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
Example CPU Performance Issue Indicator . . . . . . . . . . . . . . . . . . . . . . . 67
Are VMs Being CPU Constrained?. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
Supporting Higher Consolidation Ratios. . . . . . . . . . . . . . . . . . . . . . . . . . 69
Are VMs Being Memory Constrained? . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
Are VMs Being Disk Constrained? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
Are VMs Being Network Constrained? . . . . . . . . . . . . . . . . . . . . . . . . . . 72
Lab for Module 6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73


MODULE 7 VirtualCenter Alarms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75


Proactive Datacenter Management. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
Preconfigured VirtualCenter Alarms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
Creating a Virtual Machine-Based Alarm . . . . . . . . . . . . . . . . . . . . . . . . . 78
Creating a Host-Based Alarm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
Actions to Take When an Alarm Is Triggered . . . . . . . . . . . . . . . . . . . . . . 80
Alarm Reporting Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
Configure VirtualCenter Notifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
Lab for Module 7 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83

MODULE 8 VMware HA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
What Is VMware HA? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
Architecture of a VMware HA Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
VMware HA Prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
Create a VMware HA Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
Add an ESX Host to the Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
Configure Cluster-Wide Admission Control . . . . . . . . . . . . . . . . . . . . . . . 92
Failover Capacity Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
Maintain Business Continuity if ESX Hosts Become Isolated . . . . . . . . . . 95
Lab for Module 8 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96

MODULE 9 VI3 Product and Feature Overview . . . . . . . . . . . . . . . . . . . . 97


Customers Move Rapidly Along the Adoption Curve . . . . . . . . . . . . . . . . 98
Standardizing on VMware Infrastructure . . . . . . . . . . . . . . . . . . . . . . . . . . 99
Functional Layers in a Virtual Infrastructure . . . . . . . . . . . . . . . . . . . . . . 100
New Virtualization Platform Layer Product . . . . . . . . . . . . . . . . . . . . . . . 102
ESXi Hypervisor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
From Server Boot to Virtual Machines in Minutes . . . . . . . . . . . . . . . . . 104
Additional VI Layer Products and Features . . . . . . . . . . . . . . . . . . . . . . . 105
VMware Consolidated Backup (VCB). . . . . . . . . . . . . . . . . . . . . . . . . . . 106
VMware Consolidated Backup Operation . . . . . . . . . . . . . . . . . . . . . . . . 108
Distributed Power Management (Experimental) . . . . . . . . . . . . . . . . . . . 109
Storage VMotion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
Management and Automation Layer Products . . . . . . . . . . . . . . . . . . . . . 111
VMware Update Manager (VUM) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
Update Manager and DRS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
VDI - Virtual Desktop Manager (VDM) . . . . . . . . . . . . . . . . . . . . . . . . . 114


Guided Consolidation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
VMware Site Recovery Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
Site Recovery Manager Key Components . . . . . . . . . . . . . . . . . . . . . . . . 118
VMware Converter Enterprise Capabilities . . . . . . . . . . . . . . . . . . . . . . . 119
Using Lab Manager with VMware Infrastructure . . . . . . . . . . . . . . . . . . 120
VMware Lifecycle Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
Lifecycle Workflow Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
Summary of VI3 Products and Features . . . . . . . . . . . . . . . . . . . . . . . . . 124



MODULE 1

Virtual Infrastructure Overview


Importance
Virtualization is a technology that is revolutionizing the computer industry.
It is the foundational technology for virtual infrastructure. This module
introduces virtualization and VMware® Infrastructure 3.

Objectives for the Learner


• Understand the concept of virtualization
• Understand the basic components of VMware Infrastructure 3
• Understand virtual network components
• Understand storage and datastores

Lesson Topics
• Define virtual infrastructure
• Describe how virtualization works
• Introduce VMware Infrastructure components
• Describe virtual network components
• Introduce storage, datastores, and VMFS

Course Presentation Guidance


Remember that this course is just an “overview” of VMware Infrastructure 3. It is not a system
administration course. This course does not aim to provide detailed technical information and
explanations. Customers who actually buy the product will have to send their administrators to one
of the system administration courses later on. The goal of this course is to provide a potential
customer with just enough technical background to do the following:
• Excite them about the product
• Give them some hands-on experience with the product
• Allow them to make informed decisions about purchasing the product
• Provide them with enough core concepts that they can understand the more advanced products
presented by sales, including Update Manager, Lab Manager, and Site Recovery Manager.
Based on the above points, you can see that this course is primarily a sales presentation with some
technical depth added. The course includes nearly three hours of hands-on labs. This means you will
have about three and a half hours to lecture from the material. There is not enough time to delve
into technical details.


What Is Virtual Infrastructure?


Slide 1-5

Virtual Infrastructure allows dynamic mapping of compute, storage,
and network resources to business applications. Virtual Infrastructure
provides the opportunity for both current server consolidation and
future server containment.

Instructor Note
Virtualization is an important concept to understand because it has become
widely deployed across both large, multinational corporations and
small-to-medium-sized businesses. Virtualization is also the foundational
technology of the VMware Infrastructure. Understanding VMware
Infrastructure well means understanding the basic concepts of
virtualization. This module introduces both general virtualization concepts
and information specific to VMware Infrastructure. The pages that follow
introduce the basic hardware and software components that comprise a
VMware Infrastructure. All the hardware and software components
introduced here will be used throughout the remainder of the course.

In traditional physical datacenters, there is a tight relationship between
servers, disk drives, and network ports, and the operating systems and
applications they support. A virtual infrastructure, like VMware
Infrastructure, allows us to break this tight relationship. VMware
Infrastructure allows the dynamic mapping of compute, storage, and
network resources to business applications.

No longer is there a one-to-one relationship between an operating system
and the physical hardware. In a VMware Infrastructure, multiple operating
systems and their applications share the hardware resources of a physical
host, physical LUN, or physical storage and network ports. Sharing
hardware resources provides the basis for server consolidation now and
server containment later.

Reducing the number of required servers, storage LUNs, storage ports, and
network ports reduces both capital and operational costs. Many companies
have now standardized on virtual infrastructures and have “virtualize first”
policies.

Course Flow
Modules 1-8 introduce only what might be considered the more “core” and “common” VI3 features
like encapsulation, virtualized network and storage components, VMotion, DRS, and HA. The
products and features built on these core features, like Storage VMotion, Update Manager, VCB,
etc., are mentioned only via a quick overview in Module 9.



How Does Virtualization Work?

Slide 1-6

• It allows multiple operating system instances to run concurrently on a
  single computer within virtual machines.
• A virtualization layer creates the virtual machines.
• The virtualization layer is implemented using either a hosted or
  bare-metal hypervisor architecture.

Virtualization is based on the concept of a virtual machine. Virtual machines
are created by a virtualization software layer installed above the hardware.
Virtual hardware in the virtual machines is created by the virtualization
software. Virtual machines share the actual physical hardware.
The virtualization software includes a hypervisor. The hypervisor is a
supervisory layer that schedules virtual machine access to the hardware,
allocating resources and preventing conflicts.
Multiple virtual machines share hardware resources without interfering with
each other. Apart from standard network connections, virtual machines have
no knowledge of each other and run in entirely separate partitions for
security purposes.
The hypervisor can run as an application on an existing operating system or
be implemented as its own operating system. A hypervisor running as an
application is referred to as a hosted hypervisor, while a standalone
hypervisor is sometimes referred to as a bare-metal hypervisor. Bare-metal
hypervisors incur less overhead. This typically translates to higher virtual
machine performance.
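The scheduling behavior described above can be illustrated with a short sketch (illustrative Python only, not VMware code; the VM names and share values are hypothetical): each virtual machine receives CPU time in proportion to its allocation, so no single virtual machine can monopolize the hardware.

```python
# Toy sketch of a hypervisor's proportional-share CPU scheduling.
# Each VM gets a slice of physical CPU capacity weighted by its shares.

def allocate_cpu(vms, cpu_mhz):
    """Divide physical CPU capacity among VMs by share weight."""
    total_shares = sum(vm["shares"] for vm in vms)
    return {vm["name"]: cpu_mhz * vm["shares"] // total_shares for vm in vms}

vms = [
    {"name": "Database01", "shares": 2000},   # higher-priority workload
    {"name": "FilePrint01", "shares": 1000},
    {"name": "Test01", "shares": 1000},
]

# Database01 receives half of a hypothetical 4000 MHz of CPU capacity.
print(allocate_cpu(vms, 4000))
```

The point of the sketch is only the contract: the hypervisor, not the guest operating systems, decides who runs and for how long.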


ESX Uses a Hypervisor Architecture


Slide 1-7

• A bare-metal hypervisor system does not require an operating system.
  The VMkernel is the hypervisor on an ESX host.
• The service console provides an extensible, secure management
  interface to the VMkernel.

VMware® ESX uses a bare-metal hypervisor that allows multiple virtual
machines to run simultaneously on the same physical server. ESX does not
require an operating system; ESX is the operating system.
The ESX hypervisor is called the VMkernel. The VMkernel is a proprietary,
high-performance kernel optimized for running virtual machines.
The ESX service console is used only to provide a management interface to
the VMkernel. The service console is a cut-down, secure version of Linux.
The service console is scheduled to use hardware resources in a manner
similar to normal virtual machines.
ESX 3 and ESXi are both bare-metal hypervisors that partition physical
servers into multiple virtual machines. The difference is that ESXi does not
use a Linux service console as a management interface. ESXi is managed
by VirtualCenter, using an embedded hardware agent.
ESXi is discussed later in the course.



VMware Infrastructure 3

Slide 1-8

• A software suite for optimizing and managing IT environments
  through virtualization
• VMware ESX or ESXi
• VMware Virtual SMP
• VMware High Availability (HA)
• VMware VMotion
• VMware Distributed Resource Scheduler (DRS)
• VMware VMFS
• VMware Consolidated Backup (VCB)
• VMware Update Manager
• VMware Storage VMotion
• VMware VirtualCenter
  • Provisions, monitors, and manages a virtualized IT environment

VMware Infrastructure 3 is VMware’s premier product family designed for
building and managing virtual infrastructures. It is a suite of software that
provides virtualization, management, resource optimization, application
availability, and migration capabilities.
VMware Infrastructure 3 consists of products and features that include the
following:

VMware ESX 3 – Platform on which virtual machines run
VMware ESXi – Alternate platform on which virtual machines run
VirtualCenter – Centralized management tool for ESX hosts and virtual
machines
Virtual SMP – Multiprocessor support (up to four) for virtual machines
VMware HA – VirtualCenter high availability feature for virtual machines
VMware VMotion – VirtualCenter feature that permits migrating
powered-on virtual machines with no disruption
VMware DRS – VirtualCenter feature that provides dynamic workload
balancing across ESX hosts


VMware VMFS – Cluster-aware file system optimized to hold virtual
machine, virtual machine template, and ISO files
VMware Consolidated Backup – Centralized, online backup framework
for virtual machines
VMware Update Manager – Automatic patch management of ESX hosts
and select guest operating systems
Storage VMotion – Migration of powered-on virtual machine files with
no disruption

Each of these products and features is discussed in more detail throughout
the course.



VMware Infrastructure Components

Slide 1-9

VMware Infrastructure consists of a number of hardware and software
components.
Servers running ESX provide hardware resources to multiple virtual
machines. ESX hosts use one or more datastores as repositories for virtual
machine, virtual machine template, and ISO files.
VirtualCenter is a Windows-based application that provides centralized
management of ESX hosts and virtual machines. VirtualCenter management
information is stored in a dedicated VirtualCenter database—both Oracle
and SQL databases are supported. VirtualCenter agents are configured on
the ESX hosts. VirtualCenter tasks sent to an ESX host are received by the
VirtualCenter agent. Connections between the VirtualCenter Server and the
VirtualCenter agents are secured by Secure Socket Layer (SSL).
Purchased VMware Infrastructure products and features are licensed
through one or more license files. License files are obtainable using a self-
service Web portal. Downloaded license files are typically stored in a
directory on a centralized license server. A centralized license server
simplifies license management. It is possible to store license files on
individual ESX hosts, but not all VMware Infrastructure products and
features are supported using this option.
Different licensing packages are available, depending on business
requirements and cost constraints. A 60-day evaluation license that enables
all features is available during installation.


One or more VMware Infrastructure Clients (VI Clients) provide graphical
user interface–based management of the VMware Infrastructure. The VI
Client is a Windows application. Connections between the VI Clients and
the VirtualCenter Server are secured by SSL.



Management Made Easy

Slide 1-10

• Use the VI Client to log in to VirtualCenter.
• VI Client and VirtualCenter provide easy, centralized, graphical
  management of the VMware Infrastructure.
  • ESX hosts
  • Virtual machines
  • Templates

VirtualCenter and the VI Client interface make centralized management of
the VMware Infrastructure easy. During a default VI Client installation, a
VI Client icon is added to the user desktop.
You use this icon to launch the VI Client interface. When prompted, you
enter a Windows-based user account name and password to log in to the
VirtualCenter Server.
The VI Client interface provides graphical management of VMware
Infrastructure components, including ESX hosts, virtual machines, and
virtual machine templates.


View VirtualCenter Inventory


Slide 1-11: Hosts and Clusters view; Virtual Machines and Templates view

The VI Client displays VMware Infrastructure management objects in a
hierarchical format called an inventory. Four different inventory views are
available to the user. This slide illustrates the two most common views used
to display the VirtualCenter inventory: the Hosts & Clusters view and the
Virtual Machines & Templates view. The other two views are the Networks
view and the Datastores view.
On the slide, Los Angeles is an example of a datacenter. ESX hosts and
their virtual machines are added to a datacenter that often corresponds to a
specific geographic location. Migration of running virtual machines is
permitted only between ESX hosts within the same datacenter.
Hosts and Clusters, Americas, and Discovered Virtual Machines are all
examples of folders. Folders are used to group objects in the inventory.
Folders beneath the datacenter in the Virtual Machines and Templates view
are rendered in blue.
The objects called kentfield03.priv.vmeduc.com and
kentfield04.priv.vmeduc.com are ESX hosts. ESX hosts do not appear in the
Virtual Machines and Templates view. ESX hosts are added to the
VirtualCenter inventory after they are installed.
Database01 and Database02 are virtual machines. The icons indicate that
Database01 is powered on, while Database02 is not. Virtual machines can
be organized differently in the different inventory views.
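The inventory hierarchy described above can be modeled as a simple tree. This is an illustrative Python sketch, not a VMware API: the object names come from the slide, while the class and method names are invented for the example.

```python
# Toy model of the VirtualCenter inventory: datacenters, folders, hosts,
# and virtual machines arranged as a tree, walked depth-first the way
# the Hosts & Clusters view presents it.

class InventoryObject:
    def __init__(self, name, kind):
        self.name, self.kind, self.children = name, kind, []

    def add(self, child):
        self.children.append(child)
        return child

    def find(self, kind):
        """Collect all objects of a given kind, depth-first."""
        found = [self] if self.kind == kind else []
        for child in self.children:
            found.extend(child.find(kind))
        return found

root = InventoryObject("Hosts and Clusters", "folder")
dc = root.add(InventoryObject("Los Angeles", "datacenter"))
host = dc.add(InventoryObject("kentfield03.priv.vmeduc.com", "host"))
host.add(InventoryObject("Database01", "vm"))
host.add(InventoryObject("Database02", "vm"))

print([vm.name for vm in root.find("vm")])
```

The same objects could be grouped under different folders in another view; only the tree shape changes, not the objects themselves.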



DatabaseTemplate and FilePrint Template are virtual machine templates.
Virtual machine templates can be used to quickly deploy additional virtual
machines. Templates do not appear in the Hosts and Clusters view.

The VirtualCenter security model allows permissions to be applied to any
object in the inventory. Permissions are beyond the scope of this course.


ESX Storage Choices


Slide 1-12

• ESX supports multiple types of storage:
  • Local SCSI, SATA, Fibre Channel, iSCSI, NFS
• Flexibility to meet cost, availability, and performance requirements

ESX storage support is flexible enough to meet most cost, availability, and
performance requirements. ESX supports the following kinds of storage:
• Direct-attached SCSI and SATA storage
• Fibre Channel storage
• iSCSI storage
• NFS storage
Storage accessed by ESX is called a datastore.
Multiple LUNs from different storage types can be combined to form a
single datastore. LUNs can be dynamically added to an existing datastore if
more storage space is required.



VMFS and NFS Datastores

Slide 1-13

• Direct attached, Fibre Channel, and iSCSI LUNs use
the cluster-aware VMFS file system.
• NFS storage is not formatted with VMFS.
• Both VMFS and NFS datastores provide shared
storage for virtual machine files, virtual machine
template files, and ISO files.
• Shared datastores support high availability and
resource management features.
• VMotion
• VMware DRS and HA clusters

Storage LUNs are typically formatted with the Virtual Machine File System
(VMFS). VMFS is a high-performance, cluster-aware file system designed
by VMware to hold virtual machine, virtual machine template, and ISO
files.
An alternative file system type is NFS. Like VMFS, NFS is a shared file
system and it also holds virtual machine, virtual machine template, and ISO
files.
Multiple ESX servers can simultaneously access the same VMFS or NFS
datastore. Simultaneous access is designed to support high availability and
resource-balancing features like VMotion or VMware DRS and HA
clusters, which are covered later in the course.


Virtual Network Components


Slide 1-14

ESX uses physical and virtual network components to provide connectivity
to virtual machines, the VMkernel, and the service console. One of the key
virtual components is the virtual switch.
A virtual switch is a software construct implemented in the VMkernel. A
virtual switch might be connected to one or more physical NICs, or none at
all. Physical NICs provide virtual switches with connectivity to external
devices.
The ability to connect multiple physical NICs to a single virtual switch is
called NIC teaming. NIC teams provide high availability through automatic
failover and failback. NIC teams also provide automatic load distribution
for better performance.
The VMkernel creates virtual NICs for virtual machines, the VMkernel, and
the service console. The VMkernel assigns a unique MAC address to each
virtual NIC.
The ability to connect multiple virtual machines, the VMkernel, and the
service console through a small number of physical NICs and physical
switch ports reduces both capital and operational costs.
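A minimal sketch of the NIC-teaming idea follows (illustrative Python; real ESX teaming policies are more sophisticated, and the vmnic names and simple round-robin choice here are assumptions for the example):

```python
# Toy NIC team: traffic is spread across the live physical NICs, and
# when one fails its load moves to the survivors automatically.

def active_uplink(team, vm_port_id):
    """Pick an uplink for a VM port by round-robin over live NICs."""
    live = [nic for nic in team if team[nic] == "up"]
    if not live:
        raise RuntimeError("no uplinks available")
    return live[vm_port_id % len(live)]

team = {"vmnic0": "up", "vmnic1": "up"}
print(active_uplink(team, 0))   # load spread across both NICs
print(active_uplink(team, 1))

team["vmnic0"] = "down"         # link failure: automatic failover
print(active_uplink(team, 0))   # all traffic now uses the surviving NIC
```

The virtual machines never notice the failover; they stay connected to the same virtual switch throughout.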



Flexible Network Connectivity

Slide 1-15

• There are three types of network connections:
  • Service console port – Access to ESX host management network
  • VMkernel port – Access to VMotion, iSCSI, and/or NFS/NAS networks
  • Virtual machine port group – Access to VM networks
• More than one connection type can exist on a single virtual switch.
  Or each connection type can exist on its own virtual switch.

(Slide diagram: a virtual switch with a service console port, a VMkernel
port, virtual machine port groups, and uplink ports)

Virtual switch architecture provides the IT architect the flexibility to meet
datacenter connection requirements in cost-effective ways.
Virtual switches include connections. There are three connection types:
• A service console port
• A VMkernel port
• A virtual machine port group
Connections must be defined before using a virtual switch. More than one
connection type can exist on a single virtual switch. Or each connection
type can exist on its own virtual switch.
As each connection is configured, it is assigned a unique name by the
administrator. This name is displayed in the VI Client interface and is used
when connecting virtual machines, the VMkernel, or the service console to
a network.
A service console port connection provides the access point for the service
console to connect to an external management network. Multiple service
console connections on different virtual switches and networks can be
created to support high availability.
A VMkernel port connection provides the access point for the VMkernel to
connect to external IP storage or VMotion networks. Multiple VMkernel
connections on different virtual switches and networks can be created to
support different types of network traffic. For example, one VMkernel port

could be configured to support iSCSI storage traffic while another
VMkernel port could be configured to support VMotion traffic. Different
types of network traffic should be segregated for security and performance
reasons.
A virtual machine port group connection provides an access point for a
virtual machine to connect to either internal or external virtual machines, or
to other external devices. Each virtual machine port group has its own
network configuration. For example, on the same virtual switch or on
different virtual switches, virtual machine port group Alpha can be assigned
to one network, while virtual machine port group Beta is assigned to
another network.
The ability to configure multiple virtual machine port group connections on
the same or different virtual switches provides the flexibility to meet most
business requirements.



Virtual Switches Support VLANs

Slide 1-16
• VMkernel, service
console, and virtual
machine port groups
support IEEE 802.1Q
VLAN tagging
• Example:
• Packets from a VM are
tagged as they exit the
virtual switch
• Packets are untagged as
they return to the VM

ESX supports virtual LANs with VLAN IDs between 1 and 4095 on
VMkernel, service console, and virtual machine connections. VLAN
functionality provides additional flexibility and cost savings in network
configuration.
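To make the tagging concrete, here is a small Python sketch (not VMkernel code) of how the 4-byte 802.1Q tag is inserted into an Ethernet frame on egress and stripped again on ingress; the function names are illustrative:

```python
# Sketch of IEEE 802.1Q tagging as a virtual switch might apply it.
# The tag (TPID 0x8100 + priority/VLAN ID) sits after the 12-byte
# destination/source MAC header. Frame-layout illustration only.

TPID = 0x8100  # 802.1Q tag protocol identifier

def tag_frame(frame: bytes, vlan_id: int, priority: int = 0) -> bytes:
    """Insert an 802.1Q tag after the dst/src MAC addresses (egress)."""
    if not 1 <= vlan_id <= 4095:
        raise ValueError("VLAN ID must be 1-4095")
    tci = (priority << 13) | vlan_id           # 3-bit priority + CFI + 12-bit VLAN ID
    tag = TPID.to_bytes(2, "big") + tci.to_bytes(2, "big")
    return frame[:12] + tag + frame[12:]

def untag_frame(frame: bytes) -> tuple[bytes, int]:
    """Strip the tag (ingress) and return the original frame plus VLAN ID."""
    assert int.from_bytes(frame[12:14], "big") == TPID
    vlan_id = int.from_bytes(frame[14:16], "big") & 0x0FFF
    return frame[:12] + frame[16:], vlan_id
```

A packet leaving a VM on port group Alpha (VLAN 100) would be tagged with ID 100 on the wire and untagged before delivery back to a VM.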


Lab for Module 1


Slide 1-17

• Using VirtualCenter
• In this lab, you perform the following tasks:
• Use the VI Client to log in to VirtualCenter
• View the VirtualCenter inventory
• View virtual network components
• View storage components



MODULE 2

Create a Virtual Machine

Importance

Virtual infrastructure is based on virtual machines. The ability to quickly
provision virtual machines is critical. To quickly provision multiple virtual
machines, you create a base image virtual machine. Once you have a base
image virtual machine, you can convert it to a template and provision
additional virtual machines from the template. This dramatically decreases
deployment time and reduces costly mistakes.

Objectives for the Learner


• Understand virtual machines and virtual machine hardware choices
• Create a template and deploy a virtual machine from a template

Lesson Topics
• Define a virtual machine
• Virtual machine hardware
• Installing a guest operating system into a virtual machine
• Creating templates
• Deploying virtual machines from a template
• Guest operating system customization


What Is a Virtual Machine?


Slide 2-5
• A software platform that, like a
physical computer, runs an
operating system and applications
• Encapsulated in a discrete set of
files. The main files are these:
• Configuration file (.vmx)
• Virtual disk file (.vmdk)
• Encapsulation supports disaster
recovery and high availability
features and products.
• Individually created using a VI
Client wizard or deployed from a
template

A virtual machine is a software construct controlled by the VMkernel. A
virtual machine has virtual hardware that appears as physical hardware to an
installed guest operating system and its applications. All virtual machine
configuration, state information, and data are encapsulated in a set of files
stored on a datastore. This encapsulation means virtual machines are
portable and can easily be backed up or cloned.
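For illustration, the configuration (.vmx) file is plain text. A minimal file might contain entries like the following; the exact keys vary by ESX version, and the values shown here are hypothetical:

```ini
config.version = "8"
virtualHW.version = "4"
displayName = "Web01"
guestOS = "winNetStandard"
memsize = "512"
scsi0.present = "true"
scsi0:0.present = "true"
scsi0:0.fileName = "Web01.vmdk"
ethernet0.present = "true"
ethernet0.networkName = "Production"
```

Because the virtual disk referenced by `scsi0:0.fileName` is itself just a file on a datastore, copying the whole set of files copies the whole machine.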
There are multiple methods to create virtual machines. One way to create a
virtual machine is by launching an easy-to-use wizard in the VI Client and
answering a few simple questions. A second—and faster—method is to use
the VI Client to deploy the new virtual machine from a virtual machine
template.



ESX Virtual Machine Hardware
Slide 2-6
The slide diagram summarizes ESX virtual machine hardware: up to 2 serial
ports, up to 2 parallel ports, 1–2 floppy drives, up to 4 CD-ROMs, up to
64GB RAM, the VM chipset, 1 CPU (2 or 4 CPUs with VMware SMP), 1–4
network adapters, and 1–4 SCSI adapters with 1–15 devices each. ESX
virtual machine hardware is scalable enough to meet most business and
application needs.

Each guest operating system sees ordinary hardware devices. It does not
know these devices are virtual. Furthermore, all VMware® ESX 3 virtual
machines have uniform hardware, except for a small number of variations
the system administrator can apply. This makes virtual machines uniform
and portable across ESX hosts.
Each virtual machine has a total of six virtual PCI slots. One of these is used
for the virtual video adapter. Therefore, the total number of virtual Ethernet
and SCSI host adapters cannot exceed five. The virtual chipset is an Intel
440BX–based motherboard with an NS338 SIO chip. This chipset ensures
compatibility for a wide range of supported guest operating systems,
including legacy operating systems like Windows NT.


Centralized Virtual Machine Management


Slide 2-7
• Send power changes to VM.
• Access VM’s guest OS.
• Send Ctrl+Alt+Del to guest OS (press Ctrl+Alt+Ins in VM console).
• Modify virtual machine hardware.

Virtual machine management access is provided through a virtual machine
console window available in the VI Client, or in an optional Web-based
interface (not shown here).
The virtual machine’s console provides the mouse, keyboard, and console
screen functionality. You can use the virtual machine console to access the
virtual machine BIOS, power-cycle the virtual machine, modify virtual
hardware, or install an operating system.
The virtual machine console is not normally used to access the virtual
machine’s applications. Tools such as VMware’s Virtual Desktop Manager,
RDP, VNC, X Windows, or Web browsers are typically used instead.



Fast, Flexible Guest OS Installations
Slide 2-8

Install from ISO image (mounted on virtual CD-ROM drive) to virtual disk.

Configure VMFS or NFS datastores with a local library of ISO images for
easy VM deployment and application installation.

It is easy to install a guest operating system or an application by using ISO
files and virtual CD-ROM devices. To install software, you connect an ISO
image loaded on an accessible datastore to the virtual CD-ROM device. To
simplify management, a library of ISO images can be written to a datastore
accessible to all ESX hosts. Although it is possible to map a physical CD in
a physical CD-ROM device to the virtual CD-ROM device, using ISO
images frees administration staff from having to be physically present in the
datacenter. This saves time and reduces costs.
VMware® Infrastructure 3 supports a large variety of operating systems
including Windows, Linux, Solaris, and Novell NetWare. For a complete list
of supported operating systems, see the Guest Operating System Installation
Guide available on the VMware Web site at http://www.vmware.com/pdf/
GuestOS_guide.pdf.


Reducing Virtual Machine Deployment Time


Slide 2-9
• Templates are a
VirtualCenter
feature used to
create commonly
deployed VMs.
• A template is a VM
marked as “never
to be powered on.”
• Disk files stored in
either normal or
compact disk
format
• Templates can be
stored in a VMFS
or NFS datastore.

Templates are a VirtualCenter feature. A template is a master image of a
virtual machine that is marked as “never to be powered on.” Templates
appear in the inventory only while you are using the Virtual Machines and
Templates view.
A new virtual machine can be quickly provisioned using a template. The
new virtual machine is essentially a clone of the template, although
VirtualCenter has the ability to apply guest operating system customization
to the clone. Creating a library of templates can dramatically decrease
provisioning time and reduce costly mistakes. Templates can be stored in
compact disk format to reduce storage costs.



Create a Template
Slide 2-10

• Create a base image VM, power it off, then …
• Two methods:
• Clone VM to
Template
• Convert VM to
Template
• Choose Clone to
Template if the
original VM is still
needed.

There are two ways to create a template using the VI Client: Clone to
Template and Convert to Template. Which method is chosen depends on
whether the original virtual machine is still needed. The original virtual
machine is no longer available when converted to a template.


Deploy a Virtual Machine from a Template


Slide 2-11

To deploy a VM, provide the following:
• Virtual machine name
• Inventory location
• ESX host, datastore
• Guest operating system
customization data

You use the VI Client to provision a new virtual machine from a template.
You launch the Deploy Template wizard by right-clicking the template, then
answer a few simple questions. The datacenter administrator can
choose the new virtual machine display name, its location in the
VirtualCenter inventory, its ESX host, and the datastore to use.



Automating Guest OS Customization
Slide 2-12

• VirtualCenter can automatically apply unique system information to a
virtual machine when it is deployed from a template.
• For guest operating system customization to work, it must be enabled in
VirtualCenter.
• To enable for Windows VMs, install sysprep files on
VirtualCenter Server.
• Already enabled for Linux VMs (open-source components
are installed on the VirtualCenter Server)

VirtualCenter can automatically apply unique system information to a
virtual machine deployed from a template. This saves time and prevents
costly human error. Customization exists for both Windows and Linux guest
operating systems. Linux guest customization is automatically enabled
when VirtualCenter is installed. Windows guest customization must be
manually enabled by installing the Windows sysprep software on the
VirtualCenter Server.


Lab for Module 2


Slide 2-13

Template Provisioning
In this lab, you perform the following tasks:
• Convert a virtual machine to a template
• Convert a template back to a virtual machine
• Deploy a virtual machine from a template



MODULE 3

CPU and Memory Resource Pools

Importance
Resource pools allow CPU and memory resources to be hierarchically
assigned to meet the business requirements of your enterprise. Virtual
machine CPU and memory resource controls provide finer-grained tuning
options to meet the business requirements of your applications.

Objectives for the Learner


• Understand available virtual machine CPU and memory resource controls
• Use resource pools for resource policy control

Lesson Topics
• How are virtual machines’ CPU and memory resources managed?
• What is a resource pool?
• Managing a pool’s resources
• A resource pool example
• Admission control


CPU Management Supports Server Consolidation


Slide 3-5

• A virtual machine can have 1, 2, or 4 virtual CPUs (VCPUs).
• When a VCPU needs to be scheduled, the VMkernel maps a VCPU to a
“hardware execution context” (H.E.C.).
• A hardware execution context is a processor’s capability to schedule one
thread of execution.
• A core or a hyperthread
• VMkernel load balances
• All the VCPUs in a VM must be simultaneously scheduled.

Physical servers in a typical datacenter use on average less than 10 percent
of their available CPU resources. Higher CPU utilization can be achieved
by combining multiple virtual machines on one physical server. This
efficient use of CPU resources reduces datacenter capital and operating
costs. VMware® ESX hosts often achieve and maintain 80–90 percent CPU
utilization.
A virtual machine is configured with at least one virtual CPU (VCPU).
When a VCPU needs to run, the VMkernel maps the VCPU to an available
“hardware execution context.” A hardware execution context is a
processor’s capability to schedule one thread of execution. A hardware
execution context is a CPU core or a hyperthread, if the CPU supports
hyperthreading. Hyperthreaded or multicore CPUs provide two or more
hardware execution contexts on which VCPUs can be scheduled to run.
Using ESX’s virtual symmetric multiprocessor (VSMP) feature means
virtual machines can be configured with one, two, or four VCPUs. A single-
VCPU virtual machine gets scheduled on one hardware execution context at
a time. A two-VCPU virtual machine gets scheduled on two hardware
execution contexts at a time, or not at all. A four-VCPU virtual machine
gets scheduled on four hardware execution contexts at a time, or not at all.
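The co-scheduling rule above can be sketched in a few lines of Python (a toy model; the real VMkernel scheduler also weighs shares, reservations, and load balancing):

```python
# Toy model of strict VCPU co-scheduling: a VM's VCPUs are placed on
# hardware execution contexts (HECs) all at once, or not at all.
# Illustrative only; not the actual VMkernel algorithm.

def schedule(vms: dict[str, int], total_hecs: int) -> dict[str, bool]:
    """vms maps VM name -> VCPU count; returns which VMs run this cycle."""
    free = total_hecs
    running = {}
    for name, vcpus in vms.items():
        if vcpus <= free:        # all of this VM's VCPUs fit simultaneously
            running[name] = True
            free -= vcpus
        else:                    # a VM is never partially scheduled
            running[name] = False
    return running
```

On a host with four free hardware execution contexts, a 1-VCPU and a 2-VCPU VM can run together, but a 4-VCPU VM must wait for four contexts to be free at once.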



Flexible Resource Allocation
Slide 3-6
• Proportional-share system for relative resource management
• Used to grant resources according to business requirements
• Applied during resource contention
• Prevents virtual machines from monopolizing resources
The slide graphic shows how a VM’s allocation changes as its number of
shares changes and as other VMs are powered on or off.

CPU and memory shares guarantee a virtual machine will be allocated a
certain amount of CPU and memory resources, even during periods of
contention. Shares have no effect on resource allocation until contention
occurs.
Consider VM B in the graphic. On the first line, VM B is assigned 1,000 of
the total 3,000 shares of a resource. All other factors being equal, VM B
will be allocated one-third of the resource during periods of contention.
On the second line, VM B’s share allocation has been increased to 3,000. At
this time, VM B would be allocated three-fifths of the resource during
periods of contention.
On the last two lines, VM B still has 3,000 assigned shares, although the
total number of assigned shares changes as VM D is powered on or VM C
is powered off. When VM D is powered on, VM B is entitled only to three-
quarters of the resource. When VM C is powered off, VM B is entitled to
three-fifths of the resource.
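The share arithmetic in this example can be checked with a short Python sketch (illustrative only; entitlement under contention is simply a VM’s shares divided by the total shares of all powered-on VMs):

```python
# Proportional-share arithmetic: under contention, each powered-on VM's
# entitlement is its shares over the total shares of all powered-on VMs.

def entitlement(shares: dict[str, int], vm: str) -> float:
    """Fraction of the contended resource VM `vm` is entitled to."""
    return shares[vm] / sum(shares.values())

# First line of the example: A, B, and C each hold 1,000 of 3,000 shares.
assert round(entitlement({"A": 1000, "B": 1000, "C": 1000}, "B"), 2) == 0.33

# Second line: B's shares are raised to 3,000, so B gets 3,000 of 5,000.
assert entitlement({"A": 1000, "B": 3000, "C": 1000}, "B") == 0.6
```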


Virtual Machine CPU Resource Controls


Slide 3-7

Limit
• A cap on the consumption of physical CPU
time by this VM, measured in MHz
Reservation
• A certain number of physical CPU cycles
reserved for this VM, measured in MHz
• The VMkernel chooses which CPUs, and may
migrate.
• A VM will power on only if the VMkernel can
guarantee the reservation.
• A reservation of 1,000MHz might be generous
for a 1-VCPU VM, but not for a 4-VCPU VM.
Shares
• More shares means this VM will win
competitions for physical CPU time more often.

ESX features three virtual machine CPU resource controls that are used to
tune virtual machine behavior. These CPU resource controls are dynamic in
that they can be modified while the virtual machine is powered on or off.
CPU “limit” defines the maximum amount of physical CPU, measured in
MHz, that a virtual machine is allowed.
CPU “reservation” defines the amount of physical CPU, measured in MHz,
reserved for this virtual machine at power up. As long as a virtual machine
is not using its total reservation, the unused portion is available for use by
other virtual machines. The VMkernel will not allow a virtual machine to
power on unless it can guarantee its CPU reservation.
Each virtual machine is assigned a number of CPU “shares.” The more
shares a virtual machine has relative to other virtual machines, the more
often it gets CPU time above its reservation when there is contention for
CPU resources.



Supporting Higher Consolidation Ratios (1)
Slide 3-8

Virtual memory
• Memory mapped by an application inside the guest operating system
Physical memory
• ESX presents the virtual machines and the service console with physical
pages.
• Identical pages might be shared by multiple virtual machines (transparent
page sharing).
Machine memory
• Actual pages allocated by ESX from RAM
• Multiple guest operating system pages might map to the same machine
page (transparent page sharing).

Although physical servers in a datacenter are often configured with large
amounts of memory, only a small portion is typically active at any one time.
Higher active memory utilization is achieved by combining multiple virtual
machines on one physical server. This efficient use of memory resources
reduces datacenter capital and operating costs.
The ESX VMkernel manages the server’s RAM. RAM in an ESX host is
called “machine memory.” Various pages of machine memory are collected
and presented as contiguous memory to each virtual machine. Virtual
machines may actually be sharing identical pages of read-only machine
memory. This is called “transparent page sharing” and is covered later in
this module.
Each virtual machine guest operating system manages the machine memory
presented to it by the VMkernel. Memory managed by the virtual machine
is called “physical memory” and is analogous to the memory available in a
physical server.
The definition of “virtual memory” remains unchanged in a virtual
datacenter. Virtual memory is managed by the virtual machine’s guest
operating system and is composed of both real memory and disk space.


Supporting Higher Consolidation Ratios (2)


Slide 3-9
Transparent page sharing
• Supports higher server-
consolidation ratios
• VMkernel detects identical
pages in VMs’ memory and
maps them to the same
underlying machine page.
• No changes to guest
operating system required
• VMkernel treats the shared
pages as copy-on-write.
• Read-only when shared
• Private copies after write

ESX uses several features designed by VMware to support efficient use of
RAM and higher consolidation ratios. Transparent page sharing is one of
these features.
Transparent page sharing helps to reduce the total amount of required RAM
by allowing virtual machines to share identical memory pages. The
VMkernel dynamically scans ESX memory for read-only pages with
identical content.
When pages are found, the duplicates are released and the virtual machines
are mapped to the single remaining page. If any virtual machine attempts to
modify a shared page, the VMkernel will create a new, private page for that
virtual machine to use.
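The mechanism can be modeled roughly in Python: pages with identical content (detected here by hashing) share one backing page, and a write breaks the sharing with a private copy. This is a conceptual sketch, not the VMkernel implementation:

```python
# Toy model of transparent page sharing with copy-on-write semantics.
# Content hashes stand in for the VMkernel's page scanning; names invented.

import hashlib

class PageStore:
    def __init__(self):
        self.pages = {}   # content hash -> shared machine page content
        self.refs = {}    # content hash -> number of VM pages mapped to it

    def map_page(self, content: bytes) -> str:
        """Map a guest page; identical content shares one machine page."""
        key = hashlib.sha256(content).hexdigest()
        if key not in self.pages:          # first copy becomes the shared page
            self.pages[key] = content
        self.refs[key] = self.refs.get(key, 0) + 1
        return key

    def write_page(self, key: str, new_content: bytes) -> str:
        """Copy-on-write: a write to a shared page gets a private copy."""
        self.refs[key] -= 1
        if self.refs[key] == 0:
            del self.pages[key], self.refs[key]
        return self.map_page(new_content)
```

Two VMs mapping an identical zeroed page consume one machine page; the moment one VM writes to it, that VM is remapped to its own private copy.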



Virtual Machine Memory Resource Controls
Slide 3-10
Available memory
• Memory size defined when the VM was
created
Limit
• A cap on the consumption of physical
memory by this VM, measured in MB

• Equal to available memory by default
Reservation
• A certain amount of physical memory
reserved for this VM, measured in MB
Shares
• More shares means this VM will win
competitions for physical memory more often
VMkernel allocates a per-VM swap file to
cover each VM’s range between available
memory and reservation.

ESX features four virtual machine memory resource controls that are used
to tune a virtual machine’s behavior. Three of these memory resource
controls are dynamic and can be modified while the virtual machine is
powered on.
“Available memory,” measured in megabytes, is assigned to the virtual
machine when it is created. It is the total amount of memory presented to
the guest operating system at boot-up. Available memory cannot be changed
while the virtual machine is powered on.
Memory “limit,” measured in megabytes, defines the maximum amount of
virtual machine memory that can reside in RAM. It never exceeds available
memory. By default, available memory and memory limit are set to the
same value.
Memory “reservation,” measured in megabytes, is the amount of RAM
reserved by the VMkernel for the virtual machine at power-on. As long as a
virtual machine has not used its total reservation, the unused portion is
available for use by other virtual machines. The VMkernel will not allow a
virtual machine to power on, unless it can guarantee the memory
reservation.
Each virtual machine is assigned a number of memory “shares.” The more
shares a virtual machine has relative to other virtual machines, the more
often it is allocated RAM above its reservation when there is memory
contention.


The VMkernel might use disk space as virtual machine virtual memory in
unusual circumstances. The reserved disk space is calculated per virtual
machine as the difference between the virtual machine’s available memory
and its memory reservation.
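Expressed as arithmetic, the per-VM swap file covers the gap between available memory and the reservation (a sketch of the sizing rule from the slide, not a VMware API):

```python
# Per-VM swap file sizing: the VMkernel reserves enough disk to back the
# VM memory that is not covered by the memory reservation.

def swap_file_mb(available_mb: int, reservation_mb: int) -> int:
    """Disk space reserved as swap for one VM, in MB."""
    return available_mb - reservation_mb

# A VM with 1,024MB available memory and a 256MB reservation gets a
# 768MB swap file; a fully reserved VM needs no swap file at all.
assert swap_file_mb(1024, 256) == 768
assert swap_file_mb(512, 512) == 0
```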



Using Resource Pools to Meet Business Needs
Slide 3-11

• A resource pool is a logical abstraction for hierarchically managing CPU
and memory resources.
• Configurable on a standalone host or a VMware DRS-enabled cluster
• Provides resources for VMs and child resource pools
The slide graphic shows a root resource pool divided into child resource
pools, organized, for example, by geography, department, function, or
hardware.

Resource pools provide a business with the ability to divide and allocate
CPU and memory resources hierarchically as required by business need.
Reasons to divide and allocate CPU and memory resources include such
things as maintaining administrative boundaries, enforcing charge-back
policies, or accommodating geographic locations or departmental divisions.
It is possible to further divide and allocate resources by creating child
resource pools.
Configuring CPU and memory resource pools is possible only on
nonclustered ESX hosts or on VMware® DRS-enabled clusters.
Clusters are indicated in the inventory with pie chart icons.


Configuring a Pool's Resources


Slide 3-12

• Resource pools have the following attributes:
• Shares
• Low, Normal, High, Custom
• Reservations, in MHz and MB
• Limits, in MHz and MB
• Unlimited access, by default (up to
maximum amount of resource
accessible)
• Expandable reservation?
• Yes: VMs and subpools can draw
from this pool’s parent.
• No: VMs and subpools can only
draw from this pool, even if its
parent has free resources.

Resource pools have CPU and memory resource controls that behave like
virtual machine CPU and memory controls. Resource pool resource controls
can be modified while virtual machines are running.
CPU and memory limits define the maximum amount of CPU or RAM a
resource pool is allowed.
CPU and memory reservations define the amount of CPU or RAM reserved
for the resource pool when it is created. The VI Client interface will not
allow resource pool creation unless the reservation can be guaranteed.
Each resource pool is assigned a number of CPU and memory shares. The
more shares a resource pool has relative to other resource pools (and,
possibly, virtual machines), the more often it is allocated CPU and memory
resources above its reservations during periods of contention.
Expandable reservations allow a resource pool with insufficient capacity to
borrow CPU or memory resources from a parent pool to satisfy reservation
requests from a child resource pool or virtual machine. Requests to borrow
resources proceed up the pool hierarchy until the top level is reached or a
pool with no expandable reservations is encountered. Expandable
reservations provide great flexibility but have the potential to be abused.



Viewing Resource Pool Information
Slide 3-13

• Display the resource pool’s Resource Allocation tab.

Use the resource pool’s Resource Allocation tab to view configuration and
current usage information for virtual machines and child pools.


Resource Pools Example: CPU Contention


Slide 3-14
The slide graphic shows host Svr001, where all VMs run on the same
physical CPU (PCPU). Two resource pools divide the host: Engineering
(CPU shares: 1,000, ~33% of the PCPU) and Finance (CPU shares: 2,000,
~67% of the PCPU). Within each pool, a Test child pool (1,000 shares) and
a Prod child pool (2,000 shares) split the pool’s allocation. Eng-Test gets
~33% of Engineering’s CPU allocation, or approximately 11% of the
PCPU; Eng-Prod gets ~22%; Fin-Test gets ~22%; and Fin-Prod gets ~45%.

In this example, Finance has been assigned twice as many shares as
Engineering because Finance supplies two-thirds of the ESX host’s budget.
In this scenario, Engineering virtual machines could actually use more than
one-third of the CPU resources as long as Finance does not use its two-
thirds.
Although this example focuses only on CPU allocation, memory is
allocated using similar methods.



Admission Control for CPU and Memory Reservations
Slide 3-15

Admission control minimizes operating system and application CPU and
memory starvation.

The slide flowchart covers three actions: powering on a VM, creating a new
subpool with its own reservation, and increasing a pool’s reservation. If the
pool can satisfy the reservation, the action succeeds. If it cannot, and the
pool’s reservation is expandable, the request moves up to the parent pool;
otherwise, the action fails.

Admission control affects whether a virtual machine is allowed to power on
or a resource pool can be created. Admission control is enforced by the
VMkernel and is based on reservations. If the VMkernel can guarantee the
reservation, the virtual machine will power on or the pool can be created.
Admission control, along with proper reservation settings, ensures
applications can meet predetermined service-level agreements.
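The admission-control walk described above can be sketched in Python (a toy model; the class and method names are invented for illustration):

```python
# Toy model of reservation-based admission control with expandable
# reservations: an unsatisfiable request walks up the pool hierarchy
# until a pool can cover it or a non-expandable pool runs out.

class Pool:
    def __init__(self, capacity_mhz: int, expandable: bool = False,
                 parent: "Pool | None" = None):
        self.free = capacity_mhz
        self.expandable = expandable
        self.parent = parent

    def admit(self, reservation_mhz: int) -> bool:
        """True if the reservation can be guaranteed (VM powers on)."""
        if reservation_mhz <= self.free:
            self.free -= reservation_mhz
            return True
        if self.expandable and self.parent:
            needed = reservation_mhz - self.free
            if self.parent.admit(needed):   # borrow the shortfall upward
                self.free = 0
                return True
        return False                        # power-on / pool creation fails
```

A child pool with an expandable reservation can admit a VM whose reservation exceeds the pool’s own free capacity by borrowing from its parent; a non-expandable pool in the same position refuses the power-on.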



MODULE 4

Migrate Virtual Machines Using VMotion

Importance
VMotion is a valuable tool for delivering higher service levels and
improving overall hardware utilization.

Objectives for the Learner

• Understand the VMotion migration process
• Migrate virtual machines with VMotion
• Understand VMotion requirements

Lesson Topics
• VMotion migration
• VMotion compatibility requirements


VMotion Migration
Slide 4-5

• A VMotion migration moves a powered-on virtual machine from one ESX
host to another.
• Why migrate using VMotion?
• Higher service levels by allowing continued VM
operation during scheduled hardware downtime
• To balance overall hardware utilization

VMotion migration is a VirtualCenter feature that moves a running virtual
machine from one VMware® ESX host to another with no virtual machine
downtime.
VMotion capitalizes on the fact that the entire state of a running virtual
machine is encapsulated in memory and in a set of files on a datastore.
VMotion uses a dedicated Gigabit Ethernet network to move the memory
from one ESX host to another. Virtual machine files do not have to be
moved, because both the source and target ESX hosts have access to the
datastore containing the virtual machine’s files. Migrated virtual machines
maintain their unique host name, IP address, and MAC address.
VMotion enables higher service levels. First, virtual machines can be
moved from one ESX host to another to accommodate planned downtime
for hardware maintenance. Second, virtual machines can be moved to
balance workloads across multiple ESX hosts.



How VMotion Works (1)
Slide 4-6

• Users currently accessing VM A on esx01
• Initiate migration of VM A from esx01 to esx02 while VM A is up and
running.
The slide diagram shows esx01 and esx02 connected by both the VMotion
network and the production network.

You initiate a VMotion migration using the VI Client. In the example
above, the source ESX host is esx01, and the target ESX host is esx02. Both
the source and target servers share access to a common datastore containing
the virtual machine’s files. VMotion is fully supported on NFS datastores
and VMFS datastores residing on either Fibre Channel or iSCSI storage
networks. Both servers also share access to the dedicated VMotion Gigabit
Ethernet network.


How VMotion Works (2)


Slide 4-7

• Pre-copy memory from esx01 to esx02.
• Log ongoing memory changes into a memory bitmap on esx01.

The virtual machine’s memory state is copied over the VMotion network
from the source to the target ESX host. While the virtual machine’s memory
is being copied, users continue to access the virtual machine and can update
pages in the source ESX host memory. A list of any modified memory
pages is kept in a memory bitmap on the source ESX host.



How VMotion Works (3)
Slide 4-8

• Quiesce virtual machine on esx01.
• Copy memory bitmap to esx02.

After most of the virtual machine’s memory is copied from the source to the
target ESX host, the virtual machine is quiesced: this means that the virtual
machine is temporarily placed in a state where no additional activity will
occur. This is the only time in the VMotion procedure in which the virtual
machine is unavailable. Quiescence typically lasts approximately one
second. During this period, VMotion begins to transfer the virtual machine
state to the target ESX host. The virtual machine device state and the
memory bitmap containing the list of pages that have changed are also
transferred during this time.
If a failure occurs during the VMotion migration, the virtual machine being
migrated is failed back to the source ESX host. For this reason, the source
virtual machine is maintained until the virtual machine on the target ESX
host starts running.


How VMotion Works (4)


Slide 4-9

• Copy virtual machine’s remaining memory (as listed in memory bitmap)
from esx01.

[Diagram: pages listed in the memory bitmap copied from esx01 to esx02
over the VMotion network]

The remaining memory identified in the bitmap is copied from the source to
the target ESX host.



How VMotion Works (5)
Slide 4-10

• Start VM A on esx02.

[Diagram: VM A started on esx02; VMotion and production networks shown]

Immediately after the virtual machine is quiesced on the source ESX host,
the virtual machine on the target ESX host is initialized and starts running.
A virtual machine’s entire network identity, including MAC and IP address,
is preserved during a VMotion.
To update the physical switch port, the VMkernel sends a Reverse Address
Resolution Protocol (RARP) request with the virtual machine’s MAC
address to the physical network.


How VMotion Works (6)


Slide 4-11

• Users now access VM A on esx02.


• Delete VM A from esx01.

[Diagram: users access VM A on esx02; VM A deleted from esx01]

The original virtual machine is finally deleted from the source ESX host.
Users now access the virtual machine on the target ESX host.



ESX Host Requirements for VMotion
Slide 4-12

• Source and destination ESX hosts must have the following:
• Visibility to all SAN LUNs (either FC or iSCSI) and NAS
datastores used by the VM
• A Gigabit Ethernet VMotion network
• Access to the same virtual machine networks
• Compatible CPUs
•Same vendor and features

Listed here are several important ESX host requirements for successful
VMotion migration. Groups of identical servers should be purchased at the
same time to better ensure VMotion compatibility.
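The host requirements above can be expressed as a simple pre-check. The sketch below is illustrative only; the dictionary keys are invented for this example and are not the VirtualCenter API.

```python
def vmotion_compatible(src, dst, vm):
    """Return the reasons two ESX hosts fail the VMotion host requirements.

    src/dst describe each ESX host; vm lists the datastores and networks
    the virtual machine uses. An empty list means the hosts qualify.
    """
    problems = []
    # Both hosts must see every SAN LUN (FC or iSCSI) or NAS datastore
    # used by the VM.
    for ds in vm["datastores"]:
        if ds not in src["datastores"] or ds not in dst["datastores"]:
            problems.append(f"datastore {ds} not visible on both hosts")
    # Both hosts need a Gigabit Ethernet VMotion network.
    if not (src["vmotion_gbit"] and dst["vmotion_gbit"]):
        problems.append("Gigabit VMotion network missing")
    # The destination must offer the VM's virtual machine networks.
    for net in vm["networks"]:
        if net not in dst["networks"]:
            problems.append(f"network {net} missing on destination")
    # CPUs must be compatible: same vendor and feature set.
    if src["cpu"] != dst["cpu"]:
        problems.append("incompatible CPUs")
    return problems
```

The CPU check is why identical servers bought together simplify VMotion planning: a single mismatched feature set fails the whole comparison.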


Virtual Machine Requirements for VMotion


Slide 4-13

Migrating a virtual machine with the following conditions produces
an error (red error icon and message in VMotion wizard):
• Virtual machine has an active connection to a local-only ESX resource.
• An internal-only virtual switch
• A CD-ROM or floppy device with a local image
• Virtual machine is in a cluster relationship (for example, using MSCS) with
another VM.
Migrating a virtual machine with the following conditions produces
a warning (yellow warning icon and message in VMotion wizard):
• Virtual machine has a configured but inactive connection to a local-only
ESX resource.
• An internal-only virtual switch
• A local CD-ROM or floppy image
• Virtual machine has one or more snapshots.

The VI Client has an easy-to-use VMotion wizard. A series of checks is
performed when you select a virtual machine and a destination ESX host for
VMotion. The wizard provides validation messages for both the source and
the destination ESX hosts once they pass the automatic VMotion
requirement checks.
The VMotion wizard also features user-friendly error and warning
messages. When an error is encountered, it must be fixed before proceeding.
When a warning is encountered, VMotion migration can proceed.
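The wizard's error-versus-warning behavior can be modeled roughly as follows. The condition names are hypothetical; only the severity semantics (errors block, warnings do not) come from the list above.

```python
ERROR, WARNING = "error", "warning"

def validate_vm_for_vmotion(vm):
    """Classify migration blockers the way the wizard does: errors must
    be fixed before migrating; warnings let the migration proceed."""
    findings = []
    if vm.get("active_local_device"):    # e.g. CD-ROM with a local image
        findings.append((ERROR, "active connection to local-only resource"))
    if vm.get("msc_cluster"):            # e.g. MSCS relationship with a VM
        findings.append((ERROR, "clustered with another VM"))
    if vm.get("inactive_local_device"):  # configured but not connected
        findings.append((WARNING, "inactive local-only resource"))
    if vm.get("snapshots"):
        findings.append((WARNING, "VM has snapshots"))
    return findings

def can_proceed(findings):
    """Migration may proceed as long as no finding is an error."""
    return all(level != ERROR for level, _ in findings)
```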



Lab for Module 4
Slide 4-14

• Migrate Virtual Machines Using VMotion
• In this lab, you perform the following task:
•Migrate a virtual machine using VMotion
[Diagram: ESX Host #1 and ESX Host #2 (students 01a/01b and 02a/02b)
managed by one VirtualCenter Server]



MODULE 5

VMware DRS Clusters

Importance
VMware® DRS-enabled clusters assist your system administration staff by
providing automated resource management for multiple ESX hosts. Less
management and more efficient use of existing hardware resources reduces
costs.

Objectives for the Learner


• To understand the functionality and benefits of a DRS cluster
• To create and configure a DRS cluster
• To create resource pools in a DRS cluster for multi-ESX host resource
policy control


Lesson Topics
• What is a DRS cluster?
• Creating a DRS cluster
• DRS cluster settings, including the following:
• Automation level
• Migration threshold
• Placement constraints


What Is a DRS Cluster?


Slide 5-5

• Cluster
•A collection of ESX hosts and
associated virtual machines
• DRS-enabled cluster
•Uses VMotion to balance workloads
across ESX hosts
•Enforces resource policies accurately
(reservations, limits, shares)
•Respects placement constraints
• Affinity rules and VMotion compatibility
•Managed by VirtualCenter
•Experimental support for automatic power management
• Powers off ESX hosts when not needed

A cluster is a collection of ESX hosts that have their resources managed
as a single unit. The goal of a DRS-enabled cluster is to balance the workload
generated by the virtual machines across the ESX hosts in the cluster. The
workload-balance computations are performed automatically by DRS.
VMware DRS considers user-defined resource policy settings and
placement constraints, along with VMotion compatibility, when deciding
how to balance ESX host workloads.
Once imbalance is detected and a solution is calculated, DRS can either
recommend or perform specific VMotion migrations, depending on the
DRS automation level settings.



Create a DRS Cluster
Slide 5-6

1. Right-click your datacenter.


2. Choose New Cluster.

Name your cluster, then enable VMware DRS by selecting the check box.
A DRS cluster is configured using a wizard in the VI Client. The user is
prompted to provide the cluster a unique name and enable it for DRS.


Automating Workload Balance


Slide 5-7

Configure the cluster-wide automation level for initial placement
and dynamic workload balancing while VMs are running.

Automation level      Initial VM placement   Dynamic balancing
Manual                Manual                 Manual
Partially automated   Automatic              Manual
Fully automated       Automatic              Automatic

Once DRS has been enabled on a cluster, new configuration choices appear
in the wizard’s left-side menu. These choices include VMware DRS, Rules,
Virtual Machine Options, and Power Management.
The VMware DRS menu option is where the cluster-wide automation level
is configured. The cluster-wide automation level affects how DRS performs
its two main functions. These are the two main DRS functions:

Initial placement    When a virtual machine is powered on, it must be
                     initially placed on an ESX host.
Dynamic balancing    The workloads created by running virtual machines
                     must be balanced across the ESX hosts in the cluster.

DRS features three different cluster-wide automation levels. The automation
level determines how much of the decision-making process is granted to
VMware DRS when it needs to perform initial placement and dynamic
balancing.

58 Overview of VMware Infrastructure 3


Overview_A.book Page 59 Friday, September 5, 2008 12:40 PM


Manual               When a virtual machine is powered on, DRS displays
                     a star-ranked list of the ESX servers based on their
                     current CPU and memory utilization. The user selects
                     which ESX server to use. When the workloads across
                     the ESX servers in the DRS cluster become unbalanced,
                     DRS displays a ranked list of VMotion recommendations.

Partially automated  When a virtual machine is powered on, DRS
                     automatically places it on the best-suited ESX server.
                     When the workloads across the ESX servers in the DRS
                     cluster become unbalanced, DRS displays a ranked list
                     of VMotion recommendations.

Fully automated      When a virtual machine is powered on, DRS
                     automatically places it on the best-suited ESX server.
                     When the workloads across the ESX servers in the DRS
                     cluster become unbalanced, DRS automatically VMotions
                     virtual machines to restore balance.
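The three automation levels can be captured in a small lookup. This is a conceptual model of the behavior described above, not VMware code; the strings are illustrative.

```python
# (automation level) -> (initial placement, dynamic balancing)
DRS_BEHAVIOR = {
    "manual":              ("manual", "manual"),
    "partially automated": ("automatic", "manual"),
    "fully automated":     ("automatic", "automatic"),
}

def drs_action(level, event):
    """Return whether DRS acts automatically or only recommends.

    event is 'power_on' (initial placement) or 'imbalance'
    (dynamic balancing).
    """
    placement, balancing = DRS_BEHAVIOR[level]
    mode = placement if event == "power_on" else balancing
    return "act" if mode == "automatic" else "recommend"
```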

DRS migration recommendations are ranked using a one-to-five-star metric.
Applying a four-star migration recommendation will restore more balance
than applying a one-star recommendation. Five-star recommendations occur
when a DRS affinity rule has been broken. Affinity rules are covered later
in this module.
The migration threshold determines how DRS responds to migration
recommendations when DRS is configured in fully automated mode. The
slider bar has five distinct positions, which correspond to the five-star
ranking system. For example, moving the slider all the way to the left
configures DRS to VMotion virtual machines only in response to a five-star
recommendation. Moving the slider all the way to the right configures DRS
to VMotion virtual machines in response to one-, two-, three-, four-, or
five-star recommendations.
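The slider-to-star relationship can be sketched as a filter. The `6 - threshold` mapping below is one plausible reading of the five slider positions described above, not a documented formula.

```python
def apply_recommendations(recommendations, threshold):
    """In fully automated mode, apply only recommendations whose star
    rank meets the migration threshold.

    recommendations: list of (vm_name, stars) with stars in 1..5.
    threshold: slider position 1 (leftmost, conservative) through
               5 (rightmost, aggressive).
    """
    minimum_stars = 6 - threshold   # leftmost position -> 5-star moves only
    return [vm for vm, stars in recommendations if stars >= minimum_stars]
```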


Adding ESX Hosts to a DRS Cluster (1)


Slide 5-8

• Drag and drop ESX host onto cluster, and …
• Use the Add Host wizard to complete the process.

To add an ESX host to a DRS cluster, you drag and drop the ESX host onto
the cluster icon in the VirtualCenter inventory. Supply the requested
information when prompted by the Add Host wizard.



Adding ESX Hosts to a DRS Cluster (2)
Slide 5-9

• When adding a new ESX host or moving an existing ESX host into the DRS
cluster, you have the option of keeping the resource pool hierarchy, if
there is one, of the existing ESX host.
• For example, add kentfield04 to Lab Cluster.

When adding the host, choose to create a new resource pool for this
host’s virtual machines and resource pools.


The Add Host wizard will prompt the user to choose how to handle any
ESX host resource pools. There are two choices. Existing ESX host
resource pools can be maintained as depicted in the graphic above. As an
alternative, existing ESX host resource pools can be removed. If the
resource pools are removed, the CPU and memory resources of the ESX
host are added to the cluster and become available for redistribution to child
objects.
When finished with the Add Host wizard, you monitor the DRS
configuration progress using the Recent Tasks pane, located at the bottom of
the VI Client window.


Automating Workload Balance per VM


Slide 5-10

• (Optional) Set automation level per virtual machine.


• Fine-grained workload control

Virtual machines added to ESX hosts in a DRS cluster are automatically
added to the DRS cluster as well. By default, each virtual machine is
initially placed and dynamically balanced using the cluster-wide automation
level. However, the cluster-wide automation level can be overridden per
virtual machine. This provides additional flexibility to meet business needs.
For example, a virtual machine running a business-critical application could
be configured for more manual migration control.



Adjusting DRS Operation for Performance or HA
Slide 5-11

• Affinity rules
• Run virtual machines
on same ESX host.
• Use for multi-VM
systems where
performance benefits.
• Anti-affinity rules
• Run virtual machines
on different ESX hosts.
• Use for multi-VM systems that load-balance or require high availability.


DRS cluster operation can be configured to support better application
performance or higher application availability through the use of virtual
machine affinity rules. There are two types of virtual machine affinity rules:
an affinity rule and an anti-affinity rule.
An affinity rule will try to keep the listed virtual machines together on the
same ESX host. This is typically done to enhance performance. For
example, two virtual machines that pass a lot of network traffic might
benefit from using a virtual switch implemented in fast memory, rather than
using slower, external physical network components.
An anti-affinity rule will try to keep the listed virtual machines on separate
ESX hosts. This is typically done to enhance availability. For example, two
virtual machines running a business-critical application can be kept on
separate ESX hosts to reduce the possibility of a service outage due to
hardware failure.
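Affinity and anti-affinity checking reduces to set logic over VM placements, as the sketch below shows. The rule representation and host names are invented for illustration.

```python
def rule_violations(placement, rules):
    """Check a VM-to-host placement against affinity rules.

    placement: dict vm_name -> host_name
    rules: list of ("affinity" | "anti-affinity", [vm_name, ...])
    Returns the indices of the violated rules.
    """
    violated = []
    for i, (kind, vms) in enumerate(rules):
        hosts = {placement[vm] for vm in vms}
        if kind == "affinity" and len(hosts) > 1:
            violated.append(i)      # VMs should share one ESX host
        if kind == "anti-affinity" and len(hosts) < len(vms):
            violated.append(i)      # VMs should be on separate ESX hosts
    return violated
```

Since a broken affinity rule yields a five-star recommendation, DRS treats fixing these violations as its highest-priority migrations.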


Lab for Module 5


Slide 5-12

• Create a DRS Cluster
• In this lab, you perform the following tasks:
•Create a DRS cluster
•Add ESX hosts to the DRS cluster
•Add resource pools to a DRS cluster
•Test the functionality of the resource pools
Two ESX host teams belong to one cluster team.
[Diagram: ESX Host 1 and ESX Host 2 (students 01a/01b and 02a/02b) form
one cluster team, managed by one VirtualCenter Server]



MODULE 6

Monitoring Virtual Machine Performance

Importance
Although the VMkernel and VirtualCenter work proactively to avoid
resource contention, maximizing and verifying performance levels requires
both analysis and ongoing monitoring.

Objectives for the Learner


• Understand VirtualCenter’s user-friendly performance monitoring
capabilities
• Monitor a virtual machine’s performance
• Determine whether a virtual machine is constrained by a resource, and
solve the problem if one exists

Lesson Topics
• Virtual machine performance graphs
• Monitoring a virtual machine’s usage of the following:
• CPU
• Memory
• Disk
• Network


VirtualCenter Performance Graphs


Slide 6-5

Instructor note: Provide students with a brief overview of the features of
VirtualCenter’s performance graphs. Specifically, point out:
- The graph
- The legend
- The Options link
- Save as .csv
- The tear-off chart
Be sure to explain the relationship between the graph and the legend.

VirtualCenter features performance graphs for VMware® ESX hosts, virtual
machines, clusters, and resource pools. ESX host and virtual machine
performance graphs display information about CPU, memory, disk I/O, and
network I/O usage. Cluster and resource pool performance graphs display
only CPU and memory usage.
Performance graphs provide an easy method to quickly display numerous
performance data points. To view a performance graph, select an object in
the VirtualCenter inventory and use its Performance tab. Graphs that display
real-time data, or historical data for the past day, week, month, or year, are
available.
It is possible to export a comma-separated-value file (.csv) using the
performance graph interface. The .csv file may be imported into programs
such as Microsoft Excel.
Many performance graphs offer not only a wide choice of data types to
display but also a choice in the type of graph to display. This flexibility
allows large amounts of data to be more easily viewed and interpreted,
resulting in better decisions.
The ability to display real-time data allows an enterprise to react to
situations as they occur. Capturing up to a year of performance data
provides information for trend analysis to better plan for the future.



Example CPU Performance Issue Indicator
Slide 6-6

• Ready Time
• The amount of time the virtual machine is ready to run
but cannot, because there is no available physical CPU
• High ready time indicates possible contention.

Understanding both ESX host operation and the data types displayed by
performance graphs is critical to properly interpreting information and
taking correct action. VMware provides many tools to gain this
understanding.
Administrators and operators have a large choice of resources to turn to.
VMware and its partners publish an array of online manuals, technical
papers, and knowledge base articles that feature performance monitoring
information and recommendations. Administrators and operators can also
attend VMware instructor-led or eLearning training courses. (The graphic
above is taken from a VMware training course.)
Virtual machine ready time is the amount of time, measured in milliseconds,
that a virtual machine is ready to run but cannot, because there is no
available physical CPU to be scheduled on. If all physical CPUs are busy
and ready time has increased, it is an indication of CPU contention.
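Because ready time is reported in milliseconds per sample, it is easiest to interpret as a percentage of the sampling interval. The sketch below assumes the 20-second real-time sample interval used by VirtualCenter charts; the 10% alert level is a common rule of thumb, not a VMware-mandated threshold.

```python
def ready_time_percent(ready_ms, interval_ms=20000):
    """CPU ready time expressed as a percentage of the sample interval."""
    return 100.0 * ready_ms / interval_ms

def likely_cpu_contention(ready_ms, interval_ms=20000, alert_pct=10.0):
    """Flag possible CPU contention when ready time crosses the
    rule-of-thumb alert level."""
    return ready_time_percent(ready_ms, interval_ms) >= alert_pct
```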


Are VMs Being CPU Constrained?


Slide 6-7

Task Manager inside VM


Virtual machine’s CPU ready graph in VI Client

If the virtual machine is constrained by CPU:


• Add shares or increase CPU reservation.
• VMotion this virtual machine.
• Shut down, VMotion, or remove shares from other VMs.

Above is an example of using VirtualCenter to monitor virtual machine
ready time. Monitoring virtual machine ready time is useful as an early
indicator of CPU contention. You select the virtual machine in the inventory
and click on its Performance tab. You adjust the graph settings to display
real-time CPU information that includes the value “CPU Ready.”
Performance monitoring can still be done using guest operating system or
application-based tools. For example, the graphic above shows a screen
capture of Windows Task Manager running inside a virtual machine. A CPU
usage value of 100 percent means that the virtual machine is using all the
CPU time currently allotted to it.



Supporting Higher Consolidation Ratios
Slide 6-8
• The VMware Tools vmmemctl balloon driver supports higher
consolidation ratios.
• VMware Tools is installed in the guest operating systems.
• Deallocate memory from selected virtual machines when
machine memory (RAM) is scarce.
[Diagram: with ample memory, the balloon remains uninflated. To reclaim
memory from a guest OS, the VMkernel inflates the balloon (the driver
demands memory); the guest is forced to page out to its own paging area.
When the VMkernel grants memory back, the balloon deflates (the driver
relinquishes memory) and the guest may page in.]

The memory “balloon” driver is another ESX feature that supports efficient
use of RAM and higher consolidation ratios. It is informally called the

balloon driver because of the way it operates. The balloon driver is part of
the VMware Tools software and operates as a native guest operating system
driver. A balloon driver exists for all supported guest operating systems.
The VMkernel uses the balloon driver to take memory from one virtual
machine and give it to another virtual machine when there is contention for
RAM. Which virtual machine must yield memory depends on each virtual
machine’s relative number of memory shares. The virtual machines with the
lower number of memory shares will be ballooned first. A virtual machine’s
reserved memory can never be ballooned.
When an ESX host is not under memory pressure, no virtual machine’s
balloon is inflated. But when memory becomes scarce, the VMkernel
chooses a virtual machine and inflates its balloon. The VMkernel tells the
balloon driver in the virtual machine to demand memory from the guest
operating system. The guest operating system complies by yielding memory
according to its own algorithms. The content of the yielded memory is
written to the guest’s paging device, which is normally its disk. The
relinquished memory can be assigned to other virtual machines.
When memory pressure diminishes, the relinquished memory is returned to
the virtual machine.
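The victim-selection policy described above (lowest memory shares first, reserved memory untouchable) can be sketched as follows. The field names and megabyte units are illustrative, not VMkernel data structures.

```python
def choose_balloon_target(vms):
    """Pick the VM to balloon when machine memory is scarce.

    vms: list of dicts with 'name', 'shares', 'allocated', 'reserved'
    (memory sizes in MB). Returns (name, reclaimable_mb), or None if
    no VM has memory above its reservation.
    """
    # Reserved memory can never be ballooned, so a VM whose allocation
    # equals its reservation is not a candidate.
    candidates = [vm for vm in vms if vm["allocated"] > vm["reserved"]]
    if not candidates:
        return None
    # The VM with the lowest number of memory shares yields first.
    victim = min(candidates, key=lambda vm: vm["shares"])
    return victim["name"], victim["allocated"] - victim["reserved"]
```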


Are VMs Being Memory Constrained?


Slide 6-9

Task Manager inside VM


If the virtual machine is constrained by memory:
• Add shares or raise memory reservation. Check for high
ballooning activity.
• VMotion this virtual machine.
• Shut down, VMotion, or remove shares from other virtual
machines.
• Add machine memory (RAM).

Above is an example of using VirtualCenter to monitor virtual machine
memory ballooning. Monitoring virtual machine memory ballooning is
useful as an early indicator of memory contention.
To monitor ballooning activity, you select the virtual machine in the
inventory and click on its Performance tab. You adjust the graph settings to
display real-time memory information that includes the values “Memory
Balloon Target” and “Memory Balloon.” Memory Balloon Target is how
much memory the VMkernel wants to balloon from the virtual machine.
Memory Balloon is how much memory has actually been ballooned from
the virtual machine.
Performance monitoring can still be done using guest operating system or
application-based tools. For example, the graphic above shows a screen
capture of Windows Task Manager running inside a virtual machine. Guest
operating systems tools can be used to determine how the memory is being
used.



Are VMs Being Disk Constrained?
Slide 6-10
• Disk-intensive applications can
saturate the storage or the path.
• If you suspect that a VM is
constrained by disk access:
• Measure the resource consumption
using performance graphs
• Measure the effective bandwidth
between VM and the storage
• To improve disk performance:
• Ensure VMware Tools is installed.
• Reduce competition.
• Move other VMs to other storage.
• Use other paths to storage.
• Reconfigure the storage.
• Ensure that the storage’s configuration
(RAID level, cache configuration, etc.)
is appropriate.

Above is an example of using VirtualCenter to monitor virtual machine disk
I/O. Monitoring virtual machine disk I/O is useful as an early indicator of
storage performance issues. You select the virtual machine in the inventory
and click on its Performance tab. You adjust the graph settings to display
real-time disk information that includes the values “Disk Read Rate”
and “Disk Write Rate,” measured in kilobytes per second.
Performance monitoring can still be done using guest operating system-
based, application-based, or storage-based tools. For example, the graphic
above shows a screen capture of Iometer running inside a virtual machine.
(For more information about Iometer, see http://sourceforge.net/projects/iometer.)


Are VMs Being Network Constrained?


Slide 6-11
• Network-intensive applications
will often bottleneck on path
segments outside ESX.
• Example: WAN links between
server and client
• If you suspect that a VM is
constrained by the network:
• Examine performance graphs.
• Measure the effective bandwidth
between VM and its peer system.
• To improve network
performance:
• Confirm that VMware Tools is
installed.
• Move VMs to another physical NIC.
• Traffic-shape other VMs.
• Reduce overall CPU utilization.

Above is an example of using VirtualCenter to monitor virtual machine
network I/O. Monitoring virtual machine network I/O is useful as an early
indicator of network performance issues. You select the virtual machine in
the inventory and click on its Performance tab. You adjust the graph settings
to display real-time network I/O information that includes the value
“Network Usage,” measured in kilobytes per second.
Performance monitoring can still be done using guest operating system–
based, application-based, or network-based tools. For example, the graphic
above shows a screen capture of Iometer running inside a virtual machine.
One of the suggestions above for improving virtual machine network
performance involves the use of traffic shaping. Unlike CPU, memory, and
disk bandwidth, network bandwidth cannot be allocated using shares.
Network bandwidth is allocated using traffic shaping, if it is enabled by the
administrator.
Using traffic shaping to divide network bandwidth between virtual machine
NICs is analogous to cutting a pie into sections and handing each person a
piece of the pie. Each person can eat only their section of the pie. However,
each person can choose not to eat his or her whole piece. Part of the pie
might be left over. In the same way, virtual NICs can use all the network
bandwidth they are allocated and no more. However, a virtual NIC might
not use all its bandwidth, leaving some available bandwidth unused.
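The pie analogy translates directly into code: each virtual NIC is granted the lesser of its demand and its allocated slice, and unused bandwidth in a slice is simply left over. This is a conceptual model of peak-bandwidth shaping, not the actual VMkernel scheduler.

```python
def shape(allocations, demands):
    """Grant each virtual NIC at most its allocated bandwidth.

    allocations: dict nic -> allocated bandwidth (e.g. Mbit/s)
    demands:     dict nic -> bandwidth the NIC is trying to use
    Unused allocation is not redistributed to other NICs.
    """
    return {nic: min(demands.get(nic, 0), cap)
            for nic, cap in allocations.items()}
```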



Lab for Module 6
Slide 6-12

• Monitor Virtual Machine Performance
• In this lab, you perform the following task:
•Monitor CPU ready time using VirtualCenter
This lab will be performed by each ESX host team separately.
[Diagram: ESX Host Team 1 and ESX Host Team 2, each with two students,
managed by one VirtualCenter Server]


MODULE 7

VirtualCenter Alarms

Importance
VirtualCenter alarms proactively monitor VMware® ESX and virtual
machine performance. Alarms allow your system administrators to be more
responsive to changes in the datacenter. Alarms send notifications when
either the ESX host or the virtual machine state changes or user-defined
thresholds are exceeded.

Objectives for the Learner


• Understand ESX host and virtual machine alarms
• Configure ESX host and virtual machine alarms
• Configure VirtualCenter SMTP and SNMP notification settings

Lesson Topics
• ESX host–based alarms
• Virtual machine–based alarms
• VirtualCenter SMTP and SNMP configuration


Proactive Datacenter Management


Slide 7-5

• VirtualCenter sends notifications when ESX host or VM state changes or
when user-defined thresholds are exceeded.
[Screen callouts: alarms are indicated in the inventory; status is
determined by threshold levels in the alarm definition; view of VMs’ CPU
and memory utilization on selected ESX host]

Alarms are asynchronous notifications of changes in host or virtual machine
state. When a host or virtual machine’s load passes certain configurable
thresholds, the VI Client displays messages to this effect. You can also
configure VirtualCenter to transmit these messages to external monitoring
systems.


Preconfigured VirtualCenter Alarms


Slide 7-6
• Default CPU and memory alarms are defined at the
top of the inventory.

• Add custom alarms anywhere in the inventory.

The highest point in the VirtualCenter inventory—Hosts & Clusters—is the
location of the default alarms. You can modify these alarms in place. You
can also define finer-grained alarms. For example, you might organize
several ESX hosts or clusters into a folder and apply an alarm to that
folder.


Creating a Virtual Machine-Based Alarm


Slide 7-7

• Right-click on a virtual machine and choose Add Alarm.
[Screen callouts: name and describe the new alarm; click any field to
modify; trigger values are percentages or states such as powered on,
powered off, suspended]

When you right-click on a virtual machine and choose Add Alarm, the
resulting window has four panels. You use the General panel to name this
alarm. You use the Triggers panel to control which load factors are
monitored and what the thresholds for the yellow and red states are. The
Reporting and Actions panels are discussed in upcoming slides.


Creating a Host-Based Alarm


Slide 7-8

• Right-click on an ESX host and choose Add Alarm.

[Screen callouts: name and describe the new alarm; click any field to
modify; trigger values are percentages or states such as connected,
disconnected, not responding]

The dialog box displayed when you right-click on an ESX host and choose
Add Alarm is very similar to that for a virtual machine. The key difference
is the list of available triggers.



Actions to Take When an Alarm Is Triggered


Slide 7-9

• Use the Actions tab to send external messages or to automate the
response to problems.
[Screen callout: some actions are available only for VM-based alarms]

You can specify actions to occur when an alarm is triggered (other than
simply displaying it in the VI Client). These actions include the following:
• Sending a notification email
• Sending a notification trap
• Running a script
• Powering on a virtual machine
• Powering off a virtual machine
• Suspending a virtual machine
• Resetting a virtual machine


Alarm Reporting Options


Slide 7-10

• Use the Reporting tab to avoid needless re-alarms.

[Screen callouts: one control avoids threshold repeat alarms; another
avoids state-change repeat alarms]

If you plan to transmit alarms to some external monitoring system, such as
an SNMP monitoring tool, someone’s email, or someone’s pager, you
probably want to avoid generating a flood of duplicate alarms. Use the
controls on the Reporting pane to avoid such a flood.
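A rough model of how such reporting controls suppress repeats is sketched below. The tolerance and frequency semantics are paraphrased from the slide, and the parameter names are invented; this is not VirtualCenter's implementation.

```python
def should_realert(value, threshold, last_alert_age_s,
                   tolerance_pct=0, frequency_s=0):
    """Suppress a repeat alarm unless the value has moved past the
    threshold by more than the tolerance range AND the frequency
    interval has elapsed since the last alert.

    value/threshold: e.g. CPU usage percentages
    last_alert_age_s: seconds since this alarm last fired
    """
    outside_tolerance = value >= threshold * (1 + tolerance_pct / 100.0)
    waited_long_enough = last_alert_age_s >= frequency_s
    return outside_tolerance and waited_long_enough
```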


Configure VirtualCenter Notifications


Slide 7-11
• Choose Administration > VirtualCenter Management Server Configuration.
• Click Mail to set SMTP parameters.
• Click SNMP to specify trap destinations.

If you want to transmit SNMP or email alarms, you must supply the IP
address of the destination server.
If your SNMP community string is not public, specify it here.
Specify the email address to be used for the From address of email alerts.


Lab for Module 7


Slide 7-12

ESX host-based and VM-based performance alarms
• In this lab, you perform the following tasks:
•Create ESX host-based and VM-based alarms in VirtualCenter
•Monitor CPU Usage alarms in VirtualCenter
This lab will be performed by each ESX host team separately.
[Diagram: ESX Host Team 1 and ESX Host Team 2, each with two students,
managed by one VirtualCenter Server]



MODULE 8

VMware HA

Importance
Services that are highly available are important to any business.
Configuring VMware® HA can increase service levels.

Objectives for the Learner


• Implement a VMware HA cluster

Lesson Topics
• Architecture of VMware HA
• VMware HA prerequisites
• Clustering virtual machines using VMware HA
• Admission control
• Restart priorities
• Isolation response




What Is VMware HA?


Slide 8-5

• A VirtualCenter feature
• Configuration, management, and monitoring done
through the VI Client
• Automatic restart of virtual machines in case of
physical ESX server failures
• Not VMotion
• Provides higher availability while reducing the need
for passive standby hardware and dedicated
administrators
• Provides restart capability to a range of applications
not configurable under MSCS
• Provides experimental support for per-VM failover

VMware High Availability (HA) provides easy-to-use, cost-effective high availability for applications running in virtual machines. In the event of
server failure, affected virtual machines are automatically restarted on other
production servers with spare capacity. VMware HA allows IT
organizations to minimize downtime and IT service disruption while
eliminating the need for dedicated standby hardware and installation of
additional software.
VMware HA continuously monitors all VMware ESX servers in a cluster
and detects server failures. An agent placed on each server maintains a
“heartbeat” with the other servers in the cluster. ESX server heartbeats are
sent every five seconds. If heartbeats are not received within the 15-second heartbeat timeout, the agent initiates the restart of all affected virtual machines on other servers. VMware HA ensures that
sufficient resources are available in the cluster at all times to be able to
restart virtual machines on different physical servers, in the event of server
failure. Restart of virtual machines is made possible by the distributed
locking mechanism in VMFS, which gracefully coordinates read-write
access to the same virtual machine files by multiple ESX hosts. VMware
HA is easily configured for a cluster through VirtualCenter.
In every cluster, the amount of downtime experienced depends on how long the affected workloads take to restart after failover. There is no single answer: restart time varies with the guest operating system and the applications running in the virtual machine.
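The failure-detection rule described above (5-second heartbeats, 15-second timeout) can be sketched as follows. The constants mirror the text; the function itself is an illustration, not VMware code.

```python
HEARTBEAT_INTERVAL = 5   # seconds between ESX server heartbeats (per the text)
FAILURE_TIMEOUT = 15     # seconds of silence before VM restarts begin

def host_failed(last_heartbeat: float, now: float) -> bool:
    """A host is declared failed once no heartbeat has arrived
    within the failure-detection window."""
    return (now - last_heartbeat) > FAILURE_TIMEOUT

# A host that heartbeated 5 seconds ago is still considered alive;
# one silent for 16 seconds triggers the restart process.
assert not host_failed(last_heartbeat=100.0, now=105.0)
assert host_failed(last_heartbeat=100.0, now=116.0)
```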



Virtual Machine Failure Monitoring
An additional VMware HA function called virtual machine failure
monitoring allows VMware HA to monitor whether a virtual machine is
available or not. VMware HA uses the heartbeat information that VMware
Tools captures to determine virtual machine availability.
On each virtual machine, VMware Tools sends a heartbeat every second.
Virtual machine failure monitoring checks for a heartbeat every 20 seconds.
If heartbeats have not been received within a specified (user-configurable) interval, virtual machine failure monitoring declares the virtual machine failed and resets it.
Virtual machine failure monitoring can distinguish between a virtual
machine that was powered on but has stopped sending heartbeats and a
virtual machine that is powered-off, suspended, or migrated.
Virtual machine failure monitoring is experimental and not supported for
production use. By default, virtual machine failure monitoring is disabled
but can be enabled by editing the VMware HA Virtual Machine Options in
the VI Client.
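The decision rule above can be summarized in a short sketch: a reset happens only for a powered-on virtual machine whose VMware Tools heartbeat has been silent past the configured interval. This is a conceptual illustration, not VMware code, and the interval value in the example is arbitrary.

```python
def vm_needs_reset(powered_on: bool, seconds_since_heartbeat: float,
                   failure_interval: float) -> bool:
    """Virtual machine failure monitoring resets a VM only when it is
    powered on and its heartbeat has been silent longer than the
    user-configurable interval; powered-off, suspended, or migrated
    VMs are excluded."""
    return powered_on and seconds_since_heartbeat > failure_interval

# Silent too long while powered on: reset.
assert vm_needs_reset(True, 45.0, failure_interval=30.0)
# A powered-off VM is never treated as failed.
assert not vm_needs_reset(False, 45.0, failure_interval=30.0)
```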


Module 8 VMware HA 87

Architecture of a VMware HA Cluster


Slide 8-6
VMware HA agents are configured by VirtualCenter, but failover is independent of VirtualCenter. (The slide shows a VirtualCenter Server managing a cluster of ESX servers.)

A key component of the VMware HA architecture is the cluster of ESX servers. VirtualCenter is used to configure the cluster, but the ability to
perform failovers is independent of VirtualCenter availability. VirtualCenter
is not a single point of failure for an HA cluster. In this example, the cluster
consists of three ESX servers. When each server was added to the cluster,
the VMware HA agent was uploaded to the server. A VMware HA agent on
each server provides a heartbeat mechanism on the service console network.



VMware HA Prerequisites
Slide 8-7

• You should be able to power on a virtual machine from all ESX servers in the cluster.
• Access to common resources (shared storage, VM networks)
• ESX servers should be configured for DNS.
• DNS resolution between all ESX servers in the cluster is needed during cluster configuration and startup.

For the HA cluster to work properly, there are two prerequisites:

• Each ESX server in the cluster must be configured to use DNS, and DNS resolution of the host’s fully qualified domain name must be successful because VMware HA relies on that name.
• Each ESX server in the cluster should have access to the virtual machines’ files and should be able to power on the virtual machines without a problem. Distributed locking prevents simultaneous access to virtual machines, thus protecting data integrity.


Create a VMware HA Cluster


Slide 8-8
Configure cluster for VMware HA and/or DRS.

Enable VMware HA by
selecting the check
box.

Creating a VMware HA cluster is very similar to creating a DRS cluster.


The first step is to select the cluster type. It is best to create a cluster that has
both VMware HA and DRS implemented: VMware HA for the reactive
solution and DRS for the proactive solution. The job of DRS is to VMotion
virtual machines to balance servers’ CPU and memory loads. The job of
VMware HA is to reboot virtual machines on a different ESX host when an
ESX host crashes. No VMotion is involved in VMware HA.
Why enable both VMware HA and DRS? Initial placement decisions for virtual machines are made only in DRS clusters, so users get DRS not just for overall cluster balance but also for initial placement. VMware HA plus DRS is thus a reactive-plus-proactive system, an ideal combination.



Add an ESX Host to the Cluster
Slide 8-9
• Drag and drop
ESX host onto
cluster and …
• Use the Add Host
Wizard to complete
the process.

• Consider
configuring enough
redundant ESX
host capacity to
restart virtual
machines.

To add an ESX host to the cluster, you drag and drop the existing standalone
server into the HA cluster, then use the Add Host wizard to complete the
process.


Configure Cluster-Wide Admission Control


Slide 8-10
Configure the number of tolerated ESX host failures and cluster admission control settings.

• How much redundant capacity should be maintained?
• Admission control can prevent human error from starting more VMs than can be restarted.
• Which is more important: uptime or resource fairness?

Cluster-wide settings are shown; per-VM settings are also available.

VMware HA cluster configuration requires cluster-wide policies and individual virtual machine customizations.
There are two cluster-wide policy settings: Number of host failures allowed
and Admission Control. The number of host failures to tolerate ranges from
1 to 4. For example, if one ESX host fails in the cluster, there should be
enough resources on the remaining servers in the cluster on which to run the
virtual machines that were on the failed server.
Admission control policies for VMware HA define when or when not to
power on a virtual machine. By default, if a virtual machine violates
availability constraints, the virtual machine will not be powered on.
Availability constraints refer to the cluster’s resource reservations as well as
the constraint specifying the number of host failures to tolerate. VMware
HA tries to maintain enough spare capacity across the cluster based on these
values. The actual spare capacity available can be monitored in the current
failover capacity field in a VMware HA cluster’s Summary tab in the VI
Client.
You can configure how VMware HA should respond in the event that an
ESX host failure occurs and there is insufficient capacity to restart all the
virtual machines. At the cluster level, you can specify what the default
priority is for virtual machine restarts. You can also specify—on a per-
virtual machine basis—how high the priority is to bring each particular
virtual machine back online.
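The admission-control check described above can be sketched as follows. This is an illustration of the strict policy (refuse a power-on that would violate the failover constraint), not the exact VMware algorithm; capacity units are arbitrary, and it pessimistically assumes the largest hosts are the ones that fail.

```python
def can_power_on(host_capacities: list, used: float, new_vm_demand: float,
                 host_failures_to_tolerate: int) -> bool:
    """Permit the power-on only if the surviving hosts could still run
    everything after losing the configured number of hosts."""
    surviving = sorted(host_capacities)
    if host_failures_to_tolerate:
        # Worst case: the largest hosts fail.
        surviving = surviving[:-host_failures_to_tolerate]
    return used + new_vm_demand <= sum(surviving)

# Three 10-unit hosts tolerating one failure leave 20 units of usable capacity:
assert can_power_on([10, 10, 10], used=15, new_vm_demand=4,
                    host_failures_to_tolerate=1)
assert not can_power_on([10, 10, 10], used=19, new_vm_demand=4,
                        host_failures_to_tolerate=1)
```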



Restart priority is based on the criticality of virtual machines.
For example, in a Windows environment, DNS and domain controllers
would normally be specified as the highest restoration priority, due to other
servers depending on those infrastructure services.
This priority decision may be influenced if you have redundant DNS and
domain controller elements that are forced to be resident on different servers
at all times, such as if an anti-affinity rule is applied at a DRS level. Note
that this will not prevent someone from manually invoking migrations that
cause these virtual machines to be on the same ESX host.
Some virtual machines are not essential in the event of a failure and may be excluded from restoration. If the HA cluster is left with drastically reduced resources, shedding these less-essential consumers reduces contention for what remains.
You set low, medium, and high restart priorities to customize failover ordering. The default is medium. High-priority virtual machines are restarted first. Nonessential virtual machines should be set to Disabled (automated restart will skip them).
You can set the default response in the event that an ESX host becomes
isolated. You can choose to do the following:
• Leave the virtual machines powered on
• Power off the virtual machines
This setting can be specified at the cluster level and on a per-virtual machine basis, as the following page illustrates.
The user can also determine whether or not to power down the virtual machines on node isolation. This is determined by the Isolation Response setting. The Isolation Response setting of Power off does just that: VMware HA does not do a clean shutdown of the virtual machines.
Isolation response is initiated when an ESX host experiences network
isolation from the rest of the cluster. Power off is the default response. The
Leave power on setting is intended for these cases:
• Where lack of redundancy and environmental factors make outages
likely
• Where virtual machine networks are separate from service console
networks (and more reliable)
Isolation events can be prevented if proper network redundancy is employed
from the start.


Failover Capacity Examples


Slide 8-11

Left example (failover capacity: 1 host failure): only 8 VMs could run and still be restarted.
Right example (failover capacity: 2 host failures): only 4 VMs could run and still be restarted.
(Each example shows a VMware HA cluster.)

In the first example, the VMware HA cluster has been set up to accommodate one host failure. Therefore, if any single ESX host fails in the
cluster, the remaining ESX hosts should have enough capacity to run the
virtual machines that are on the failed server.
In the second example, the VMware HA cluster has been set up to
accommodate up to two host failures. Therefore, if two ESX hosts fail, the
remaining ESX host in the cluster should have enough capacity to run all
virtual machines.

NOTE

Both of these examples assume that all virtual machines require the same
amount of resources.



Maintain Business Continuity if ESX Hosts Become Isolated
Slide 8-12

• A network failure might cause a “split-brain” condition.
• VMware HA waits 15 seconds by default before deciding that an ESX host is isolated.

Datacenters configured for high availability should include redundant management network connections between the ESX hosts. VMware HA
includes a recovery mechanism in the event redundant network connections
are not configured.
Network failures can cause “split-brain” conditions. In such cases, ESX hosts are unable to determine if the rest of the cluster has failed or has become unreachable.
Isolation response is used to prevent split-brain conditions and is started under the following conditions:


• An ESX host has stopped receiving heartbeats from other cluster nodes
and the isolation address cannot be pinged.
• The default isolation address is the service console gateway, and the
default isolation response time is 15 seconds.
Powering virtual machines off releases VMFS locks and enables other ESX
hosts to recover them. When the Leave power on option is set, virtual
machines may require manual power-off/migration in case of an actual
network isolation.

A different isolation address can be specified by using the advanced HA option das.isolationaddress.
A different isolation response time can also be specified by using the advanced HA option
das.failuredetectiontime. These are cluster-wide settings, which can be set in the Advanced Options
menu of the VMware HA properties.
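The isolation test described above combines two signals, and both must fail before isolation response starts. A minimal sketch of that logic (an illustration, not VMware code):

```python
def host_is_isolated(heartbeats_received: bool,
                     isolation_addr_pingable: bool) -> bool:
    """An ESX host declares itself isolated only when BOTH hold:
    no heartbeats from other cluster nodes AND no ping reply from the
    isolation address (by default, the service console gateway)."""
    return (not heartbeats_received) and (not isolation_addr_pingable)

# No heartbeats and no gateway: isolated, isolation response starts.
assert host_is_isolated(False, False)
# No heartbeats but the gateway answers: the rest of the cluster may have
# failed, so the host does not treat itself as isolated.
assert not host_is_isolated(False, True)
```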


Lab for Module 8


Slide 8-13

Using VMware HA. Two ESX host teams belong to one cluster team.

• In this lab, you perform the following tasks:
• Add VMware HA functionality to an existing cluster
• Cause VMware HA to restart virtual machines following the “crash” of an ESX host

(Diagram: a VirtualCenter Server managing ESX Host 1 and ESX Host 2; Students 01a, 01b, 02a, and 02b form the DRS/HA Cluster Team.)



MODULE 9

VI3 Product and Feature Overview

Importance
VMware’s latest products and features take full advantage of the
groundbreaking mobility and manageability characteristics of virtual
machines explored in the previous modules to deliver scalable, repeatable,
and efficient IT processes.

Objectives for the Learner


• Understand how other businesses have typically adopted VMware®
Infrastructure
• Learn how you can make more efficient use of your existing resources,
reduce costs, and respond to business needs faster with a VMware
Infrastructure

Lesson Topics
• Standardizing on virtualization
• VMware Infrastructure 3 products and features

Course Flow
The intent of this final module is to provide only a very brief introduction to the VMware VI3 product line. Many of these products leverage the core VI3 features introduced in the earlier modules. This module does not attempt to provide technical details of the products. It is meant only to inform customers that these products exist and provide a few facts about each product. If the customer has interest in learning more, then consider it an opportunity for sales to get involved.



Customers Move Rapidly Along the Adoption Curve


Slide 9-5
Customer Example: Large Wireless Technology Company

(Chart: active virtual machines and ESX host instances, 2003–2006, growing through the Proof of Concept, Departmental Rollout, Expanded Rollout, and Standardization phases.)

Over the last few years, virtualization has gone from a technology being tried out in test/dev to a production server consolidation technology. It is now gaining momentum as the industry-standard way of computing.
Early adopters of our technology used the hypervisor for basic partitioning. As VMware technology matured and provided means to aggregate multiple virtualized nodes and centralized management, customers rolled it out into mainstream production environments.
As our customers and our technology matured, virtualization began to go far
beyond its original use for server consolidation and live migration of virtual
machines. Ensuring availability and uptime helped customers achieve better
service levels. Using virtualization for business continuity and disaster
recovery helped customers achieve better recovery time objectives (RTOs)
and recovery point objectives (RPOs) at a fraction of the cost. With the end-
to-end management and automation capabilities available from VMware, it
became very easy for customers worldwide to make VMware virtualization
the default in the datacenter.
Forty-three percent of customers surveyed last year said that their default
policy for all or most new machines was a virtual machine.



Standardizing on VMware Infrastructure
Slide 9-6
• It takes more than just a hypervisor layer to create and successfully
manage a virtual infrastructure.

The three generations:
• Early Adoption (1st generation, 1998–2002): test and development; hypervisor only
• Mainstreaming (2nd generation, 2003–2005): server consolidation; virtual infrastructure on a hypervisor
• Standardization (3rd generation, 2006–2008): infrastructure management and high availability; management & automation on top of virtual infrastructure and hypervisor

This graph illustrates the typical customer adoption phases:
• Proof of concept
• Departmental rollout
• Expanded rollout
• Standardization

Who is this large wireless technology company? Answer: Qualcomm
This company started a proof of concept in the first half of 2003. Today, it
has implemented a VMware-first policy (that is, it has standardized on
VMware for x86 workloads). Today, 60 percent of its x86 environment is
virtualized (of 1,900 total servers, 1,150 are virtualized).
The number of physical servers has grown from 950 to 1,900 over the past 2.5 years. Because of the much simplified provisioning with virtualization,
the company has been able to maintain the same number of server
administrators. It provisions 68 new virtual machines per month. This
would be impossible in the physical world without dramatic staffing
increases. This means that the number of physical servers a single system
administrator can manage has more than doubled. This translates into
substantial operational savings for the company.

The information on this page is available at http://www.vmware.com/files/pdf/VI3_New_presentation.pdf.

Module 9 VI3 Product and Feature Overview 99



Functional Layers in a Virtual Infrastructure


Slide 9-7
• VMware provides advanced implementation and management features
at all three layers of the virtual infrastructure.

3. Management & Automation: Update Manager, Virtual Desktop Manager, Enterprise Converter, Lab Manager, Guided Consolidation, Lifecycle Manager, Site Recovery Manager
2. Virtual Infrastructure: VMware DRS, VMware HA, VMware Consolidated Backup, Distributed Power Management, VMotion, Storage VMotion
1. Virtualization Platform: ESX hypervisor, ESXi hypervisor, VMFS, VSMP

VMware Infrastructure 3 products and features provide a wealth of functionality to optimize datacenter operations. The products introduced later in this module include the following:

These products are introduced in the rest of this module. They are ordered according to their placement in the three layers of the Virtual Infrastructure. For example, the first product introduced is in the bottom layer, while the final products introduced are in the top layer. Only those products not covered in earlier modules are covered in this module.

Update Manager: Automates patch management and reduces manual tracking and patching of VMware ESX hosts and virtual machines
Virtual Desktop Manager: Provides an integrated desktop virtualization solution that delivers enterprise-class control and manageability with a familiar user experience
Guided Consolidation: Guides first-time virtualization users through the process of discovering and converting physical servers into virtual machines
Site Recovery Manager: Automates disaster recovery setup, testing, failover, and failback
Enterprise Converter: Simplifies the discovery and analysis of physical servers and converts these servers into virtual machines



Lab Manager: Automates the setup, capture, storage, and sharing of multimachine system configurations
Lifecycle Manager: Implements a consistent, automated workflow for provisioning, operating, and decommissioning virtual machines
VMware DRS: Monitors utilization continuously across resource pools and intelligently allocates available resources among the virtual machines based on predefined rules that reflect business needs and changing priorities
VMware HA: Delivers cost-effective high availability for any application running in a virtual machine, regardless of its operating system or underlying hardware configuration
VMotion: Migrates running virtual machines from one ESX host to another with no disruption
VMware Consolidated Backup: Enables LAN-free backup of virtual machines from a centralized proxy server
Distributed Power Management: Minimizes power consumption by consolidating workloads onto fewer ESX hosts while guaranteeing service levels
Storage VMotion: Performs live migration of virtual machine disk files across storage arrays with no disruption in service for critical applications
ESX: Forms the robust foundation of the VMware Infrastructure 3 suite
ESXi: Provides a hardware-integrated hypervisor built on a next-generation thin architecture
VMFS: Provides a high-performance cluster file system optimized for virtual machines
Virtual SMP: Allows a single virtual machine to use up to four physical processors simultaneously for increased application scalability

These products and technologies are discussed in the following pages.


New Virtualization Platform Layer Product


Slide 9-8

• ESX 3.5 introduces the new virtualization platform layer: the ESXi hypervisor.
• Next-generation, thin hypervisor integrated into server hardware, enabling rapid deployment.

ESX 3.5 introduces a new hardware-based hypervisor called ESXi. Because ESXi is delivered preinstalled by major OEMs, installation is not required.



ESXi Hypervisor
Slide 9-9

• Compact, 32MB footprint


• Only architecture with no reliance
on a general-purpose operating
system
• Integration in hardware eliminates
installation.
• Intuitive wizard-driven startup
experience dramatically reduces
deployment time.
• Simplified management
• Increased security and reliability

At 32MB, VMware ESXi weighs in at a fraction of the size of a general-purpose operating system.
purpose operating system. This compact footprint sets a new bar for security
due to a smaller “attack surface.” This small footprint and hardware-like
reliability also enable ESXi to be built directly into industry-standard x86
servers while continuing to provide the same great performance and
scalability of ESX.
The operating system–independent design of ESXi is optimized for
virtualization performance.
ESXi utilizes an intuitive wizard that dramatically reduces deployment time. This makes it possible to go from server boot to running virtual machines in minutes.

ESXi hosts are managed by VirtualCenter. This centralizes and simplifies management of the entire virtual infrastructure.
Both VMware ESX and VMware ESXi support the entire suite of VMware
Infrastructure 3 products, features, and solutions. You can use VMware
ESX and VMware ESXi side by side in your virtual infrastructure.


From Server Boot to Virtual Machines in Minutes


Slide 9-10

1. Power on server and boot into hypervisor.
2. Configure Admin password.
3. (Optional) Modify network configuration.
4. Connect VI Client to IP address (or manage with VirtualCenter).

Companies can quickly bring new ESXi servers online. All that is required
to do so is to power on the server, boot into the ESXi hypervisor, configure
an administrator password, optionally modify the network configuration,
and connect to the server through either the VI Client or VirtualCenter.
ESXi enables companies to quickly add additional computing resources to
their virtual infrastructure.



Additional VI Layer Products and Features
Slide 9-11

• The Virtual Infrastructure layer includes several products and features based on core features introduced earlier in the course.
• These additional Virtual Infrastructure products and features enable higher availability and greater cost savings.
• Virtual Infrastructure layer products: VMware Consolidated Backup, Distributed Power Management, Storage VMotion


VMware Consolidated Backup (VCB)


Slide 9-12

• An online backup solution for ESX host VMs


• Fibre Channel, iSCSI, NFS, and local storage support
• File system-consistent guest OS backup
• VMware Tools quiesces file system before backup.
• Supports different backup modes
• File-level backup (Windows guests)
• Full virtual machine backup (all guests)
• Works with major third-party backup software
• Backup is offloaded to a physical Windows 2003
server.
• VCB 1.1 is supported in a virtual machine.

VMware Consolidated Backup enables LAN-free backup of virtual machines from a centralized proxy server. VCB supports Fibre Channel, iSCSI, NFS, and local storage.
VCB performs file system–consistent backups of guest operating system
data, with VMware Tools quiescing the file systems before the backup
occurs.
VCB can perform full virtual machine backups of all supported guest
operating system types. And VCB can perform file-level backups of
supported Windows guest operating systems.
VCB integrates with the existing backup tools and technologies already in place. This improves manageability of existing IT resources and eliminates the need to run a backup agent on every virtual machine.



VCB offloads the processing associated with performing backups, leaving the computing power of ESX hosts available for running virtual machines. And VCB can bypass the local area network when performing backups so that network performance is not affected.

Recent changes to VCB that make it more attractive to the SMB market space:
• In addition to supporting SAN, VCB now supports iSCSI, NAS, and locally attached storage
(released in 3.0.2).
• VCB can run in a virtual machine, thereby eliminating the need for a dedicated backup proxy
server.
VMware Converter can be used to restore VCB images (released in 3.0.1). This provides a simple
graphical technique to restore virtual machines from tape and return them to operation in VI3.


VMware Consolidated Backup Operation


Slide 9-13

When a backup is performed using VCB, VMware Tools can be used to quiesce the file systems of guest operating systems. Once the file systems
have been quiesced, VCB takes a snapshot of each VMDK being backed up.
This enables VCB to back up the data without any disruption to the virtual
machines. Next, the VCB server mounts the VMFS file systems so that the
virtual disks can be seen. Finally, the third-party backup solution integrated
with the VCB server can perform the backup.
All of this occurs without disruption to running virtual machines and can be
performed in a LAN-free manner so that the network performance is not
affected.



Distributed Power Management (Experimental)
Slide 9-14

• Consolidates workloads
onto fewer servers when
the cluster needs fewer
resources
• Places unneeded servers
in standby mode
Resource Pool
• Brings servers back online
as workload needs
increase
• Minimizes power
consumption while
guaranteeing service
Physical Servers
levels
• No disruption or downtime
to virtual machines

VMware Distributed Power Management (DPM), which is supported experimentally, continuously monitors resource requirements and
power consumption across a DRS cluster. When the cluster needs fewer
resources, it consolidates workloads and puts ESX hosts in standby mode to
reduce power consumption. When resource requirements of workloads
increase, DPM brings powered-down ESX hosts back online to ensure
service levels are met.
Distributed Power Management allows IT organizations to do the following:
• Cut power and cooling costs in the datacenter during low-utilization periods
• Automate management of energy efficiency in the datacenter

With Distributed Power Management, administrators can define the following:
• Reserve capacity to always be available
• Time for which load history is monitored before power on/off decisions are made

Power-on will also be triggered when there aren’t enough resources available to power on a virtual machine or when more spare capacity is needed for HA.
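The core standby decision described above can be sketched as a simple capacity check: a host may power down only if the remaining hosts can absorb its workload. This is an illustration only, not the actual DPM algorithm; the 0.6 utilization target is an arbitrary example value, not a DPM default.

```python
def can_enter_standby(host_load: float, other_loads: list,
                      capacity: float = 1.0, target_util: float = 0.6) -> bool:
    """May a host be placed in standby? Only if the spare headroom on
    the remaining hosts (up to the target utilization) covers its load."""
    spare = sum(target_util * capacity - load for load in other_loads)
    return spare >= host_load

# A lightly loaded host can be absorbed by two peers with headroom...
assert can_enter_standby(0.2, [0.3, 0.3])
# ...but not when the peers are already near the target utilization.
assert not can_enter_standby(0.5, [0.5, 0.5])
```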


Storage VMotion
Slide 9-15

• Storage-independent
migration of virtual
machine disks
•Zero downtime to virtual
machines
•LUN-independent
•Supported for Fibre
Channel SANs
• Storage array migration
• Storage I/O
optimization

VMware Storage VMotion is a state-of-the-art solution that enables you to perform live migration of virtual machine disk files across heterogeneous
storage arrays with complete transaction integrity and no interruption in
service for critical applications.
By implementing VMware Storage VMotion in your virtual infrastructure,
you gain the ability to perform proactive storage migrations, simplify array
refreshes/retirements, improve virtual machine storage performance, and
free up valuable storage capacity in your datacenter.
Complete operating system and hardware independence allows Storage
VMotion to migrate any virtual machines running any operating system
across any type of hardware and storage supported by VMware ESX.
Storage VMotion does all this with zero downtime to the virtual machines.



Management and Automation Layer Products
Slide 9-16

• The Management & Automation layer also includes several products based on core features covered earlier in the course.
• These next-generation products reduce the cost and complexity of managing a VMware Infrastructure.
• Management & Automation layer products: Update Manager, Virtual Desktop Manager, Enterprise Converter, Lab Manager, Guided Consolidation, Lifecycle Manager, Site Recovery Manager


VMware Update Manager (VUM)


Slide 9-17

• Automates patch management for ESX hosts and select Microsoft and RHEL virtual machines
• Scans and remediates online as well as offline virtual machines and online ESX hosts
• Optional virtual machine snapshot before patching allows rollback
• Reduces manual tracking of patch levels of ESX hosts and virtual machines
• Automates enforcement of patch standards

(Diagram: an Update Manager server patching a host server.)

VMware Update Manager is an automated patch management solution for ESX hosts as well as for Microsoft and Linux virtual machines. It reduces risk by doing the following:
• Securing your datacenter from vulnerabilities
• Patching both online and offline virtual machines
• Snapshotting virtual machines before patching to allow rollback
• Patching noncompliant offline/suspended machines in a quarantined
state so that the rest of the network is not exposed to them
VMware Update Manager provides for automatic enforcement of patch
standards and eliminates cumbersome and error-prone manual tracking of
patch levels of ESX hosts and virtual machines.
RHEL guests can only be scanned, not remediated.



Update Manager and DRS
Slide 9-18

• Update Manager patches entire DRS clusters.
• Each host in the cluster enters DRS maintenance mode, one at a time.
• VMs are migrated off. The host is patched and rebooted if required.
• VMs are migrated back on.
• The next host is selected.
• Automates patching of a large number of hosts with zero downtime to virtual machines

When used in conjunction with VMware DRS, VMware Update Manager enables entire datacenters of ESX hosts to be patched automatically with zero downtime to the virtual machines running on those servers.
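The rolling-patch sequence on the slide can be sketched in a few lines of Python. This is an illustrative model only — the class and function names are assumptions, not a real VMware API; the actual orchestration is performed by Update Manager, DRS, and VMotion:

```python
class Host:
    """Toy model of an ESX host in a DRS cluster (illustrative only)."""
    def __init__(self, name, reboot_required=False):
        self.name = name
        self.reboot_required = reboot_required
        self.patched = False
        self.in_maintenance = False
        self.vms = []

    def enter_maintenance_mode(self, cluster):
        # DRS evacuates the host: VMotion migrates its VMs to another active host
        targets = [h for h in cluster if h is not self and not h.in_maintenance]
        while self.vms:
            targets[0].vms.append(self.vms.pop())
        self.in_maintenance = True

    def exit_maintenance_mode(self):
        self.in_maintenance = False


def patch_cluster(cluster, patch_id):
    """Rolling patch of a DRS cluster: one host at a time, VMs stay running."""
    for host in cluster:
        host.enter_maintenance_mode(cluster)   # VMs are migrated off
        host.patched = True                    # patch bundle applied here
        if host.reboot_required:
            pass                               # reboot while still in maintenance
        host.exit_maintenance_mode()           # DRS can migrate VMs back on
```

The key property the sketch illustrates is that at every step the VMs are running on some host, so the cluster is patched end to end without VM downtime.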


VDI - Virtual Desktop Manager (VDM)


Slide 9-19
• Enterprise-class, scalable connection broker
• Central administration and policy enforcement
• Automatic desktop provisioning with optional “smart pooling”
• Desktop persistence and secure tunneling options
• Microsoft AD integration and optional two-factor authentication via RSA SecurID

• End-to-end enterprise-class desktop control and manageability


• Familiar end-user experience
• Tightly integrated with VMware’s proven virtualization platform (VI3)
• Scalability, security, and availability suitable for organizations of all sizes

VMware VDI is an integrated desktop virtualization solution that delivers enterprise-class control and manageability with a familiar user experience. VMware VDI provides new levels of efficiency and reliability for your virtual desktop environment.
With VMware VDI, you get the proven VMware Infrastructure 3 software
along with VMware Virtual Desktop Manager (VDM), an enterprise-class
desktop management server that securely connects users to virtual desktops
in the datacenter and provides an easy-to-use, Web-based interface to
manage the centralized environment.
VMware VDI provides users with desktop business continuity, high
availability, and disaster recovery capabilities that until now were available
only for mission-critical server applications.
With VMware VDI, end users get a complete, unmodified virtual desktop
that behaves just like a normal PC. There is no change to the applications or
desktop environment, no application sharing, and no retraining required.
Administrators can allow users to install applications, customize their
desktop environment, and use local printers and USB devices.



Guided Consolidation
Slide 9-20

Discover → Analyze → Convert
• Automatically discovers physical servers
• Analyzes utilization and usage patterns
• Converts physical servers to VMs placed intelligently based on user response
• Lowers training requirements for new virtualization users
• Guides users through the entire consolidation process

Guided Consolidation is a new feature in VirtualCenter 2.5. It is intended to guide first-time virtualization users through the process of discovering physical servers suitable for virtualization, collecting performance data from these servers, and converting these servers to virtual machines placed intelligently on the most appropriate hosts.
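The discover → analyze → convert flow described above can be sketched as a small pipeline. Server names, the utilization threshold, and all helper functions here are illustrative assumptions — the real feature runs inside VirtualCenter 2.5 against performance data it collects over time:

```python
# Sketch of the Guided Consolidation pipeline (illustrative, not a VMware API).

def discover():
    # Real feature: scans the network/domain for physical servers
    return ["fileserver01", "web02", "db03"]

def analyze(server):
    # Real feature: collects utilization samples over days; here, canned averages
    cpu_avg = {"fileserver01": 0.08, "web02": 0.15, "db03": 0.85}
    return cpu_avg[server]

def candidates(servers, cpu_threshold=0.50):
    """Recommend mostly idle servers as consolidation candidates."""
    return [s for s in servers if analyze(s) < cpu_threshold]
```

The conversion step (physical-to-virtual) would then be run only on the recommended candidates, which is why the analysis phase matters: heavily loaded servers like the hypothetical db03 are filtered out.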
Guided Consolidation enables new users to quickly realize benefits from
server consolidation and reduces training requirements for first-time
“virtualizers.”
Guided Consolidation makes server consolidation easier by guiding new virtualization users through the consolidation process in a wizard-based, tutorial-like fashion.


VMware Site Recovery Manager


Slide 9-21
Site Recovery Manager leverages VMware Infrastructure
to transform disaster recovery.
• Simplifies and automates
disaster recovery workflows
• Setup, testing, failover, failback
• Provides central management of
recovery plans from
VirtualCenter
• Turns manual recovery
processes into automated
recovery plans
• Simplifies integration with third-party storage replication
• Makes disaster recovery rapid,
reliable, manageable, affordable

Until now, keeping recovery plans and the runbooks that documented them
accurate and up-to-date has been practically impossible because of the
complexity of plans and the dynamic environment in today’s datacenters.
Adding to that challenge, traditional solutions do not offer a central point of
management for recovery plans and make it difficult to integrate the
different tools and components of disaster recovery solutions.
VMware Site Recovery Manager simplifies and centralizes the creation and
ongoing management of disaster recovery plans. Site Recovery Manager
turns traditional oversized disaster recovery runbooks into automated plans
that are easy to manage, store, and document. And Site Recovery Manager
is tightly integrated with VMware Infrastructure 3, so you can create,
manage, and update recovery plans from the same place that you manage
your virtual infrastructure.
Testing disaster recovery plans and ensuring that they are executed correctly
are critical to making recovery reliable. However, testing is difficult with
traditional solutions because of the high cost, complexity, and disruption
associated with tests. Another challenge is ensuring that staff are trained and
prepared to successfully execute the complex process of recovery.



Site Recovery Manager helps you overcome these obstacles by enabling
realistic, frequent tests of recovery plans and eliminating common causes of
failures during recovery.
Site Recovery Manager provides built-in capabilities for executing realistic,
nondisruptive tests without the cost and complexity of traditional disaster
recovery testing. Because the recovery process is automated, you can also
ensure that the recovery plan will be carried out correctly in both testing and
failover scenarios.
Site Recovery Manager leverages VMware Infrastructure to provide
hardware-independent recovery to ensure successful recovery, even when
recovery hardware is not identical to production hardware.


Site Recovery Manager Key Components


Slide 9-22

This slide illustrates the key components of a Site Recovery Manager deployment.
Site Recovery Manager requires that the storage utilized by protected virtual
machines be replicated to a secondary site. This can be performed with a
variety of third-party replication solutions. VirtualCenter is required to
manage both sites.
Site Recovery Manager manages the mapping of components (virtual
machines, resource pools, networks, and the like) between the two sites and
provides workflow automation for setup, failover, failback, and testing of
the disaster recovery environment.



VMware Converter Enterprise Capabilities
Slide 9-23

• VMware Converter is a migration tool bundled in VirtualCenter 2.5 that aids in server consolidation.
• Imports physical machines to virtual machines
• Imports non-ESX VMware virtual machines
• Imports Microsoft Virtual Server 2005 virtual machines
• Converts third-party backup or disk images to virtual
machines
• Reconfigures virtual machines so that they are
bootable inside ESX
• Decreases the time and effort to migrate from a
physical infrastructure to a virtual infrastructure
• Preserves existing configurations while saving time
and reducing costs and complexity

VMware Converter Enterprise enables administrators to quickly and reliably convert local and remote physical machines into virtual machines without any disruption or downtime. Administrators can do the following:
• Import physical machines to virtual machines
• Import non-ESX VMware virtual machines
• Import Microsoft Virtual Server 2005 virtual machines
• Convert third-party backups or disk images to virtual machines
The centralized management console in Converter Enterprise allows users to queue up and monitor multiple simultaneous remote conversions as well as local conversions. This decreases the time and effort required in large-scale virtualization implementations.

Remote conversions are accomplished by the Converter Server downloading a Converter agent to
the source system.
Local conversions are accomplished by booting the source system from the Converter CD.


Using Lab Manager with VMware Infrastructure


Slide 9-24

• Provision new
environments
quickly.
• Test
• Development
• Support

VMware Lab Manager provides the ability to automate the setup, capture,
storage, and sharing of multimachine software configurations. Development
and test teams can access them on demand through a self-service, Web-
based portal. With its shared library and shared pool of virtualized servers
and templates, VMware Lab Manager lets you efficiently move and share
multimachine configurations across software development and test teams
and facilities.



VMware Lab Manager provides the ability to do the following:
• Allocate resources as needed instead of maintaining multiple static
systems that are only used sporadically. VMware Lab Manager lets you
pool and share resources between development and test teams for
maximum utilization—and increased cost savings.
• Provision new machines nearly instantly with VMware Lab Manager.
This eliminates the painstaking, multihour process of gathering
machines, installing operating systems, installing and configuring
applications, and establishing intermachine connections. Now software
developers and QA engineers can fulfill their own provisioning needs,
leaving IT in control of user management, storage quotas, and server
deployment policies—achieving the best of both worlds.
• Quickly reproduce software defects and resolve them earlier in the
software lifecycle—and ensure higher quality software and systems.
VMware Lab Manager enables “closed loop” defect reporting and
resolution through its unique ability to snapshot complex multimachine
configurations in an error state, capture them to the library, and make
them available for sharing—and troubleshooting—across development
and test teams.
You can give your outsourced partners secure, remote access to your
software lab—and maintain your flexibility to rapidly add, remove, or
replace outsourced resources as your needs change. Your intellectual
property remains securely in Lab Manager’s environment, and you
eliminate time-consuming and costly replication of equipment in your
partners’ labs.

VMware Lifecycle Manager


Slide 9-25

• Automate, manage, and control the life of virtual machines.
• Track and report on deployed virtual machines.
• Provide process and policies for the following:
• How virtual machines are created
• How virtual machines are deployed
• How virtual machines are changed
• How virtual machines are retired
Lifecycle: Create → Deploy → Change → Retire

VMware Lifecycle Manager enables administrators to track and control virtual machines through a consistent approval process throughout the entire lifecycle. Lifecycle Manager automates the steps within the workflow to improve efficiency and productivity, and to ensure strict corporate compliance with company policies.
Lifecycle Manager brings many benefits to the datacenter:
• It employs standardization and best practices for tracking and managing
virtual machine deployment and use.
• It eliminates manual and repetitive administrative tasks through
automation.
• It prevents virtual machine sprawl and ensures corporate IT compliance.
It leverages existing tools like VMware VirtualCenter, change management
software, and IT process/runbook automation tools.



Lifecycle Workflow Management
Slide 9-26

VM Tracking and Policy & Control (throughout)
Request for VM → Route for Approval → Automated Deployment → Intelligent Placement (Provisioning)
Decommission → Archive → Delete

VMware Lifecycle Manager allows administrators to implement a consistent, automated workflow for provisioning, operating, and decommissioning virtual machines. During setup, the IT administrator creates a catalog of virtual machine templates that users can view and select. The IT administrator also defines where virtual machines can be deployed and what types of approvals are required before virtual machine deployment.
Using a self-service portal, users request virtual machines and can track the status of any pending requests. During the request process, the user enters information to help Lifecycle Manager select the specific resources that best support the request. The user can log back in to Lifecycle Manager at any time to check on the request status.

An “approver” approves or denies requests for virtual machines. The approver can be from any department. If the request is approved, the virtual machine is deployed automatically, based on the user-defined criteria and the way in which IT staff has mapped those criteria to existing computing resources.
The final step within Lifecycle Manager is to decommission the virtual
machine. The decommissioning process, which consists of archiving and
ultimately deleting a virtual machine, provides better resource utilization by
ensuring that resources come back into the resource pool for future use. The
virtual machine will be decommissioned based on the end date the user
enters when first submitting a request.
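The request, approval, and decommissioning flow described above can be modeled as a simple state machine. This is a sketch under assumptions — the state and event names are invented for illustration and are not Lifecycle Manager's actual internal states:

```python
# Sketch of the Lifecycle Manager request workflow as a state machine.
# State names and transitions are illustrative assumptions.

TRANSITIONS = {
    "requested": {"approve": "deployed", "deny": "denied"},
    "deployed":  {"end_date_reached": "archived"},   # decommission begins
    "archived":  {"retention_expired": "deleted"},   # resources return to pool
}

def advance(state, event):
    """Apply one workflow event; reject events that are invalid in this state."""
    try:
        return TRANSITIONS[state][event]
    except KeyError:
        raise ValueError(f"event {event!r} not valid in state {state!r}")
```

Modeling the workflow as explicit transitions mirrors the point made above: a virtual machine cannot be deployed without approval, and decommissioning (archive, then delete) is driven by the end date the user supplied at request time.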


Summary of VI3 Products and Features


Slide 9-27
• VMware Infrastructure is much more than just a hypervisor. It is the most complete virtual infrastructure solution in the market today.

Management & Automation
• Update Manager
• Virtual Desktop Manager
• Enterprise Converter
• Guided Consolidation
• Site Recovery Manager
• Lab Manager
• Lifecycle Manager

Virtual Infrastructure
• VMware DRS
• VMware HA
• VMware Consolidated Backup
• Distributed Power Management
• VMotion
• Storage VMotion

Virtualization Platform
• ESX
• ESXi
• VMFS
• VSMP
