Fast Track
Lecture Manual – Volume 3
ESXi 5.1 and vCenter Server 5.1
Copyright/Trademark
Copyright © 2012 VMware, Inc. All rights reserved. This manual and its accompanying
materials are protected by U.S. and international copyright and intellectual property laws.
VMware products are covered by one or more patents listed at http://www.vmware.com/go/
patents. VMware is a registered trademark or trademark of VMware, Inc. in the United States
and/or other jurisdictions. All other marks and names mentioned herein may be trademarks
of their respective companies.
The training material is provided “as is,” and all express or implied conditions,
representations, and warranties, including any implied warranty of merchantability, fitness for
a particular purpose or noninfringement, are disclaimed, even if VMware, Inc., has been
advised of the possibility of such claims. This training material is designed to support an
instructor-led training course and is intended to be used for reference purposes in
conjunction with the instructor-led training course. The training material is not a standalone
training tool. Use of the training material for self-study without class attendance is not
recommended.
These materials and the computer programs to which they relate are the property of, and
embody trade secrets and confidential information proprietary to, VMware, Inc., and may not
be reproduced, copied, disclosed, transferred, adapted or modified without the express
written approval of VMware, Inc.
Course development: John Tuffin
Technical editing: PJ Schemenaur
Production and publishing: Ron Morton
www.vmware.com/education
TABLE OF CONTENTS
MODULE 13 Storage Scalability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .681
You Are Here . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .682
Importance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .683
Module Lessons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .684
Lesson 1: Storage APIs and Profile-Driven Storage . . . . . . . . . . . . . . .685
Learner Objectives. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .686
VMware vSphere Storage APIs: Array Integration. . . . . . . . . . . . . . . .687
VMware vSphere Storage APIs: Storage Awareness . . . . . . . . . . . . . .689
Benefits Provided by Storage Vendor Providers . . . . . . . . . . . . . . . . . .691
Configuring a Storage Vendor Provider . . . . . . . . . . . . . . . . . . . . . . . .692
Profile-Driven Storage. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .693
Storage Capabilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .695
Virtual Machine Storage Profiles . . . . . . . . . . . . . . . . . . . . . . . . . . . . .696
Overview of Steps for Configuring Profile-Driven Storage . . . . . . . . .697
Using the Virtual Machine Storage Profile . . . . . . . . . . . . . . . . . . . . . .699
Checking Virtual Machine Storage Compliance . . . . . . . . . . . . . . . . . .700
Identifying Advanced Storage Options . . . . . . . . . . . . . . . . . . . . . . . . .701
N_Port ID Virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .702
N_Port ID Virtualization Requirements . . . . . . . . . . . . . . . . . . . . . . . .703
vCenter Server Storage Filters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .704
Identifying and Tagging SSD Devices . . . . . . . . . . . . . . . . . . . . . . . . .706
Configuring Software iSCSI Port Binding . . . . . . . . . . . . . . . . . . . . . .707
VMFS Resignaturing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .709
Pluggable Storage Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .710
VMware Default Multipathing Plug-In . . . . . . . . . . . . . . . . . . . . . . . . .712
Overview of the MPP Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .713
Path Selection Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .715
Lab 25 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .716
Review of Learner Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .717
Lesson 2: Storage I/O Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .718
Learner Objectives. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .719
What Is Storage I/O Control? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .720
Storage I/O Control Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . .721
Storage I/O Control Automatic Threshold Detection . . . . . . . . . . . . . .722
Storage I/O Control Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . .723
Configuring Storage I/O Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .724
Review of Learner Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .726
Lesson 3: Datastore Clusters and Storage DRS . . . . . . . . . . . . . . . . . .727
Learner Objectives. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .728
What Is a Datastore Cluster? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .729
Datastore Cluster Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .730
Relationship of Host Cluster to Datastore Cluster . . . . . . . . . . . . . . . .731
Storage DRS Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .732
esxcfg Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .830
esxcfg Equivalent vicfg Commands Examples . . . . . . . . . . . . . . . . . . .831
Managing Hosts with vMA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .832
Common Connection Options for vCLI Execution (1) . . . . . . . . . . . . .833
Common Connection Options for vCLI Execution (2) . . . . . . . . . . . . .835
vicfg Command Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .836
Entering and Exiting Host Maintenance Mode . . . . . . . . . . . . . . . . . . .837
esxcli Command Hierarchies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .838
Example esxcli command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .839
resxtop Utility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .840
Using resxtop Interactively . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .841
Navigating resxtop. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .843
Sample Output from resxtop . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .844
Using resxtop in Batch and Replay Modes . . . . . . . . . . . . . . . . . . . . . .845
Lab 28 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .847
Review of Learner Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .848
Key Points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .849
Subsequent Boot of an Autodeployed ESXi Host: Step 4 . . . . . . . . . . .927
Auto Deploy Stateless Caching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .928
Stateless Caching Host Profile Configuration . . . . . . . . . . . . . . . . . . . .929
Auto Deploy Stateless Caching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .930
Auto Deploy Stateful Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . .931
Stateful Installation Host Profile Configuration . . . . . . . . . . . . . . . . . .932
Auto Deploy Stateful Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . .933
Managing the Auto Deploy Environment . . . . . . . . . . . . . . . . . . . . . . .934
Using Auto Deploy with Update Manager to Upgrade Hosts . . . . . . . .935
Lab 31 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .936
Review of Learner Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .937
Key Points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .938
Module 13: Storage Scalability

Slide 13-1
Slide 13-3
As the enterprise grows, new scalability features in VMware
vSphere® enable the infrastructure to handle the growth efficiently.
Datastore growth and balancing issues can be addressed
automatically with VMware vSphere® Storage DRS.
Slide 13-5
Lesson 1:
Storage APIs and Profile-Driven Storage
Slide 13-7
VAAI helps storage vendors provide hardware assistance to
accelerate VMware® I/O operations that are more efficiently
accomplished in the storage hardware.
VAAI includes the following API subsets:
Hardware Acceleration APIs:
Allows arrays to integrate with vSphere to transparently offload certain
storage operations to the array:
- This integration significantly reduces the CPU overhead on the host.
- Support for NAS plug-ins for array integration exists.
Array Thin Provisioning APIs:
Allows the monitoring of space on thin-provisioned storage arrays:
- This functionality helps to prevent out-of-space conditions and to perform
space reclamation.
Storage APIs is a family of APIs used by third-party hardware, software, and storage providers to
develop components that enhance several vSphere features and solutions. This module describes
two sets of Storage APIs: Array Integration and Storage Awareness. For a description of other APIs
from this family, see http://www.vmware.com/technical-resources/virtualization-topics/virtual-
storage/storage-apis.html.
VMware vSphere® Storage APIs – Array Integration (VAAI) is a set of protocol interfaces and
VMkernel APIs between VMware ESXi™ and storage arrays. In a virtualized environment, virtual
disks are files located on a VMware vSphere® VMFS datastore. Disk arrays cannot interpret the
VMFS datastore’s on-disk data layout, so the VMFS datastore cannot leverage hardware functions
per virtual machine or per virtual disk file. The goal of VAAI is to help storage vendors provide
hardware assistance to accelerate VMware I/O operations that are more efficiently accomplished in
the storage hardware. VAAI plug-ins can improve data transfer performance and are transparent to
the end user.
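On an ESXi 5.x host, per-device support for the VAAI primitives can be inspected with esxcli. The following is a sketch; the naa identifier is a placeholder for an actual device ID on your host:

```
# Show VAAI primitive support for a specific device
esxcli storage core device vaai status get -d naa.xxx
# The output reports the attached VAAI plug-in and the status
# (supported/unsupported) of the ATS, Clone, Zero, and Delete primitives.
```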
Slide 13-8
VMware vSphere Storage APIs – Storage Awareness (VASA) enables a storage vendor to develop a software component (known as a storage vendor provider) for its storage arrays.
A storage vendor provider gets information from the storage array about available storage topology, capabilities, and state.
(Diagram: the storage vendor provider sits between the storage device and vCenter Server, to which the vSphere Client connects.)
Without VASA, vSphere administrators have no visibility in VMware® vCenter Server™ into the storage
capabilities of the storage array on which their virtual machines are stored. Virtual machines are
provisioned to a storage black box. All the vSphere administrator sees of the storage is a logical unit
number (LUN) identifier, such as a Network Address Authority ID (NAA ID) or a T10 identifier.
VMware vSphere Storage APIs – Storage Awareness (VASA) is a set of software APIs that a storage
vendor can use to provide information about their storage array to vCenter Server. Information
includes storage topology, capabilities, and the state of the physical storage devices. Administrators
now have visibility into the storage on which their virtual machines are located because storage
vendors can make this information available.
vCenter Server gets the information from a storage array by using a software component called a
VASA provider. A VASA provider is written by the storage array vendor. The VASA provider can
exist on either the storage array processor or on a standalone host. This decision is made by the
storage vendor. Storage devices are identified to vCenter Server with a T10 identifier or an NAA ID.
VMware recommends that vendors use these types of identifiers so that devices can be matched
between the VASA provider and vCenter Server.
Slide 13-9
Storage vendor providers benefit vSphere administrators by:
Allowing administrators to be aware of the topology, capabilities, and
state of the physical storage devices on which their virtual machines
are located
Allowing them to monitor the health and usage of their physical storage
devices
Assisting administrators in choosing the right storage in terms of space,
performance, and service-level agreement requirements:
Done by using virtual machine storage profiles
A VASA provider supplies capability information in the form of descriptions of specific storage
attributes.
Types of capability information include the following:
• Performance capabilities, such as the number and type of spindles for a volume, or the I/O
operations per second or megabytes per second supported
• Disaster recovery information, such as recovery point objective and recovery time objective
metrics
• Space efficiency, such as the type of compression used or whether thick-provisioned format is used
This information allows vSphere administrators:
• To be more aware of the topology, capabilities, and state of the physical storage devices on
which their virtual machines are located
• To monitor the health and usage of their physical storage devices
• To choose the right storage in terms of space, performance, and service-level agreement
requirements
Storage capabilities can be displayed in the vSphere Client. Virtual machine storage profiles can be
created to make sure that the storage being used for virtual machines complies with the required
levels of service.
If your storage supports a VASA provider, use the vSphere Client to register and manage the VASA
provider. The Storage Providers icon on the vSphere Client Home page allows you to configure the
VASA provider. All system storage capabilities that are presented by the VASA provider are
displayed in the vSphere Client. The new Storage Capabilities panel appears in a datastore’s
Summary tab.
To register a VASA provider, the storage vendor provides a URL, a login account, and a password.
Users log in to the VASA provider to get array information. vCenter Server must trust the VASA
provider host, so a security certificate from the VASA provider must be installed on the vCenter
Server system. For procedures, see the VASA provider documentation.
Slide 13-11
Profile-driven storage enables the creation of datastores that provide different levels of service.
Profile-driven storage can be used to do the following:
• Categorize datastores based on system-defined or user-defined levels of service
(Diagram: datastores grouped into gold, silver, bronze, and uncategorized tiers.)
Profile-driven storage enables the creation of datastores that provide varying levels of service. With
profile-driven storage, you can use storage capabilities and virtual machine storage profiles to
ensure that virtual machines use storage that provides a certain level of capacity, performance,
availability, redundancy, and so on.
Profile-driven storage minimizes the amount of storage planning that the administrator must do for
each virtual machine. For example, the administrator can use profile-driven storage to create basic
storage tiers. Datastores with similar capabilities are tagged to form gold, silver, and bronze tiers.
Redundant, high-performance storage might be tagged as the gold tier. Nonredundant, medium-
performance storage might be tagged as the bronze tier.
Profile-driven storage can be used during the provisioning of a virtual machine to ensure that a
virtual machine’s disks are placed on the storage that is best for its situation. For example, profile-
driven storage can help you ensure that the virtual machine running a critical I/O-intensive database
is placed in the gold tier. Ideally, the administrator wants to create the best match of predefined
virtual machine storage requirements with available physical storage properties.
Slide 13-12
Storage capabilities:
• System-defined - from storage vendor providers
• User-defined
(Diagram: system-defined and user-defined storage capabilities flowing into vCenter Server.)
Profile-driven storage is achieved by using two key components: storage capabilities and virtual
machine storage profiles.
A storage capability outlines the quality of service that a storage system can deliver. It is a guarantee
that the storage system can provide a specific set of characteristics. The two types of storage
capabilities are system-defined and user-defined.
A system-defined storage capability is one that comes from a storage system that uses a VASA
vendor provider. The vendor provider informs vCenter Server that it can guarantee a specific set of
storage features by presenting them as a storage capability. vCenter Server recognizes the capability
and adds it to the list of storage capabilities for that storage vendor. vCenter Server assigns the
system-defined storage capability to each datastore that you create from that storage system.
A user-defined storage capability is one that you can define and associate with datastores. Examples
of user-defined capabilities are:
• Storage array type
• Replication status
• Storage tiers, such as gold, silver, and bronze datastores
A user-defined capability can be associated with multiple datastores. You can associate a user-
defined capability with a datastore that already has a system-defined capability.
A virtual machine storage profile can be used to test that virtual machines reside on compliant storage.
Storage capabilities are used to define a virtual machine storage profile. A virtual machine storage
profile lists the storage capabilities that virtual machine home files and virtual disks require to run
the applications in the virtual machine. A virtual machine storage profile is created by an
administrator, who can create different storage profiles to define different levels of storage
requirements. The virtual machine home files (.vmx, .vmsd, .nvram, .log, and so on) and the
virtual disks (.vmdk) can have separate virtual machine storage profiles.
With a virtual machine storage profile, a virtual machine can be checked for storage compliance. If
the virtual machine is placed on storage that has the same capabilities as those defined in the virtual
machine storage profile, the virtual machine is storage-compliant.
Slide 13-14
To configure profile-driven storage:
1. View existing storage capabilities.
2. (Optional) Create user-defined storage capabilities.
3. Associate user-defined storage capabilities with a datastore or
datastore cluster.
4. Enable the VM Storage Profiles function on a host or cluster.
5. Create a virtual machine storage profile.
6. Associate a virtual machine storage profile with a virtual machine.
Slide 13-15
Use the virtual machine storage profile when you create, clone, or
migrate a virtual machine.
When you create, clone, or migrate a virtual machine, you can associate the virtual machine with a
virtual machine storage profile. When you select a virtual machine storage profile, the vSphere
Client displays the datastores that are compatible with the capabilities of the profile. You can then
select a datastore or a datastore cluster. If you select a datastore that does not match the virtual
machine storage profile, the vSphere Client shows that the virtual machine is using noncompliant
storage.
When a virtual machine storage profile is selected, datastores are now divided into two categories:
compatible and incompatible. You can still choose other datastores outside of the virtual machine
storage profile, but these datastores put the virtual machine into a noncompliant state.
By using virtual machine storage profiles, you can easily see which storage is compatible and
incompatible. You can eliminate the need to ask the SAN administrator, or refer to a spreadsheet of
NAA IDs, each time that you deploy a virtual machine.
You can associate a virtual machine storage profile with a virtual machine or individual virtual
disks. When you select the datastore on which a virtual machine should be located, you can check
whether the selected datastore is compliant with the virtual machine storage profile.
To check the storage compliance of a virtual machine:
• In the Virtual Machines tab of the virtual machine storage profile, click the Check
Compliance Now link.
If you check the compliance of a virtual machine whose host or cluster has virtual machine storage
profiles disabled, the virtual machine will be noncompliant because the feature is disabled.
Virtual machine storage compliance can also be viewed from the virtual machine’s Summary tab.
Slide 13-17
Some advanced storage options include the following:
N_Port ID virtualization (NPIV)
vCenter Server storage filters
Identifying and tagging solid state drive (SSD) devices
Software iSCSI port binding
VMware vSphere® VMFS (VMFS) resignaturing
Pluggable storage architecture (PSA)
In normal ESXi operation, only the Fibre Channel HBA has a World Wide Name (WWN) and
N_Port ID. N_Port ID Virtualization (NPIV) is used to assign a virtual WWN and virtual N_Port ID
to a virtual machine. NPIV is most useful in two situations:
• Configure NPIV if there is a management requirement to be able to monitor SAN LUN usage
down to the virtual machine level. Because a WWN is assigned to an individual virtual
machine, the virtual machine’s LUN usage can be tracked by SAN management software.
• NPIV is also useful for access control. Because Fibre Channel zoning and array-based
LUN masking use WWNs, access control can be configured down to the individual virtual
machine level.
Slide 13-19
NPIV requires the following:
Virtual machines use RDMs.
Fibre Channel HBAs support NPIV.
Fibre Channel switches support NPIV.
ESXi hosts have access to all LUNs used by their virtual machines.
NPIV cannot be used with virtual machines configured with VMware
vSphere® Fault Tolerance (FT).
The requirements to configure NPIV are listed on the slide. For more information about NPIV, see the
vSphere Storage Guide at http://www.vmware.com/support/pubs/vsphere-esxi-vcenter-server-pubs.html.
vCenter Server provides storage filters to help you avoid storage device corruption or performance
degradation that can be caused by an unsupported use of storage devices. The filters below are
available by default:
• VMFS Filter - Filters out storage devices, or LUNs, that are already used by a VMFS datastore
on any host managed by vCenter Server. The LUNs do not show up as candidates to be
formatted with another VMFS datastore or to be used as an RDM.
• RDM Filter - Filters out LUNs that are already referenced by an RDM on any host managed by
vCenter Server. The LUNs do not show up as candidates to be formatted with VMFS or to be
used by a different RDM. If you need virtual machines to access the same LUN, the virtual
machines must share the same RDM mapping file. For information about this type of
configuration, see the vSphere Resource Management documentation.
• Same Host and Transports Filter - Filters out LUNs that cannot be used as VMFS datastore
extents because of host or storage type incompatibility. Prevents you from adding the following
LUNs as extents:
• LUNs not exposed to all hosts that share the original VMFS datastore.
• LUNs that use a storage type different from the one the original VMFS datastore uses. For
example, you cannot add a Fibre Channel extent to a VMFS datastore on a local storage
device.
• Host Rescan Filter - Automatically rescans and updates VMFS datastores after you perform
datastore management operations. The filter helps provide a consistent view of all VMFS
datastores on all hosts managed by vCenter Server.
To change the filter behavior:
1. In the vSphere Client, select Administration > vCenter Server Settings.
2. Select Advanced Settings.
3. In the Key text box, type the key you want to change.
4. In the Value text box, type False for that key.
5. Click Add.
6. Click OK.
Before making any changes to the device filters, consult with the VMware support team.
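For reference, these are the advanced setting keys that control the four filters. Setting a key to False disables the corresponding filter; verify the exact key names against the vSphere Storage documentation for your release before changing them:

```
config.vpxd.filter.vmfsFilter                   # VMFS Filter
config.vpxd.filter.rdmFilter                    # RDM Filter
config.vpxd.filter.SameHostAndTransportsFilter  # Same Host and Transports Filter
config.vpxd.filter.hostRescanFilter             # Host Rescan Filter
```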
The VMkernel can now automatically detect, tag, and enable an SSD. ESXi detects SSD devices
through an inquiry mechanism based on the T10 standard. This mechanism allows ESXi to discover
SSD devices on many storage arrays. Devices that cannot be autodetected (that is, arrays that are not
T10-compliant) can be tagged as SSD by setting up new Pluggable Storage Architecture Storage
Array Type Plug-in claim rules.
You can use the vSphere Client to identify your SSD storage. The storage section in the ESXi host’s
Summary tab identifies the drive type. The drive type shows you whether a storage device is SSD.
The benefits to using SSD include:
• Quicker VMware vSphere® Storage vMotion® migrations can occur among hosts that share
the same SSD.
• SSD can be used as swap space for improved system performance when under memory
contention.
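For devices that are not autodetected, the claim-rule tagging described above can be sketched with esxcli. In this sketch, naa.xxx is a placeholder device identifier, and VMW_SATP_LOCAL assumes a local non-T10-compliant device; use the SATP that actually claims your device:

```
# Add a claim rule that tags the device as SSD
esxcli storage nmp satp rule add --satp VMW_SATP_LOCAL --device naa.xxx --option "enable_ssd"
# Reclaim the device so the new rule takes effect
esxcli storage core claiming reclaim -d naa.xxx
# Verify: the device listing should report the device as SSD
esxcli storage core device list -d naa.xxx
```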
Slide 13-22
From the host's Configuration tab, select the Storage Adapters link.
Highlight the software iSCSI initiator and click Properties.
iSCSI port binding enables a software or a dependent hardware iSCSI initiator to be bound to a
specific VMkernel adapter. If you are using dependent hardware iSCSI adapters, you must bind
each adapter to a VMkernel port for it to function properly.
By default, all network adapters appear as active. If you are using multiple VMkernel ports on a
single switch, you must override this setup so that each VMkernel interface maps to only one
corresponding active NIC. For example, vmk1 maps to vmnic1 and vmk2 maps to vmnic2.
To bind an iSCSI adapter to a VMkernel port, create a VMkernel adapter for each physical
network adapter on your host. If you use multiple VMkernel adapters, set up the correct network
policy:
1. Click the Configuration tab, and click Storage Adapters in the Hardware panel.
2. Select the software or dependent iSCSI adapter to configure and click Properties.
3. In the iSCSI Initiator Properties dialog box, click the Network Configuration tab.
4. Click Add and select a VMkernel adapter to bind with the iSCSI adapter. You can bind the
software iSCSI adapter to one or more VMkernel adapters. For a dependent hardware iSCSI
adapter, only one VMkernel interface associated with the correct physical NIC is available.
5. Verify that the network connection appears on the list of VMkernel port bindings for the
iSCSI adapter.
6. Verify that the network policy for the connection is compliant with the binding requirements.
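The same binding can be created from the command line on ESXi 5.x. This is a sketch; vmhba33 and vmk1 are placeholder names for the software iSCSI adapter and the VMkernel port on your host:

```
# Bind VMkernel port vmk1 to the software iSCSI adapter
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk1
# List the current port bindings for the adapter
esxcli iscsi networkportal list --adapter vmhba33
```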
Slide 13-23
(Diagram: datastore VMFS_1 with VMFS UUID 4e26f26a-9fe2664c-c9c7-000c2988e4dd is replicated by the array; the resulting copy must be resignatured before it can be mounted as an independent datastore.)
When a LUN is replicated or a copy is made, the resulting LUN copy is identical, byte-for-byte,
with the original LUN. As a result, the original LUN contains a VMFS datastore with UUID X, and
the LUN copy appears to contain an identical copy of a VMFS datastore (a VMFS datastore with the
same UUID). ESXi can determine whether a LUN contains the VMFS datastore copy and does not
mount it automatically.
The LUN copy must be resignatured before it is mounted. When a datastore resignature is
performed, consider the following points:
• Datastore resignaturing is irreversible because it overwrites the original VMFS UUID.
• The LUN copy that contains the VMFS datastore that you resignature is no longer treated as a
LUN copy. Instead it appears as an independent datastore with no relation to the source of the
copy.
• A spanned datastore can be resignatured only if all its extents are online.
• The resignaturing process is crash-and-fault tolerant. If the process is interrupted, you can
resume it later.
The default format of the new label assigned to the datastore is snap-snapID-oldLabel (where
snapID is an integer and oldLabel is the label of the original datastore).
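Detecting and resignaturing unresolved VMFS copies can also be done with esxcli. A sketch, using the VMFS_1 label from the slide's example:

```
# List detected but unmounted VMFS snapshot/replica copies
esxcli storage vmfs snapshot list
# Resignature the copy of the datastore labeled VMFS_1
esxcli storage vmfs snapshot resignature -l VMFS_1
```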
The Pluggable Storage Architecture (PSA) sits in the SCSI middle layer of the VMkernel I/O stack.
The VMware Native Multipathing Plug-in (NMP) supports all storage arrays on the VMware
storage hardware compatibility list. The NMP also manages sub-plug-ins for handling multipathing
and load balancing.
The PSA discovers available storage paths and, based on a set of predefined rules, determines which
multipathing plug-in (MPP) should be given ownership of the path. The MPP associates a set of
physical paths with a specific storage device or LUN. The details of handling path failover for a
given storage array are delegated to a sub-plug-in called the Storage Array Type Plug-in (SATP).
The SATP is associated with paths. The details for determining which physical path is used to issue
an I/O request (load balancing) to a storage device are handled by a sub-plug-in called the Path
Selection Plug-in (PSP). The PSP is associated with logical devices.
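The plug-ins in use on a host can be inspected with esxcli, as in this sketch:

```
# Show the SATP and PSP assigned by the NMP to each device
esxcli storage nmp device list
# List the SATPs loaded on the host and their default PSPs
esxcli storage nmp satp list
# List the available path selection policies
esxcli storage nmp psp list
```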
PSA tasks:
• Load and unload multipathing plug-ins
• Handle physical path discovery and removal (through scanning)
• Route I/O requests for a specific logical device to an appropriate multipathing plug-in
• Handle I/O queuing to the physical storage HBAs and to the logical devices
• Implement logical device bandwidth sharing between virtual machines
• Provide logical device and physical path I/O statistics
NMP tasks:
• Manage physical path claiming and unclaiming
• Create, register, and deregister logical devices
• Associate physical paths with logical devices
• Process I/O requests to logical devices
• Select an optimal physical path for the request (load balancing)
• Perform actions necessary to handle failures and request retries
• Support management tasks, such as the abort or reset of logical devices
(Diagram: the PSA contains the VMware NMP; the default MPP is the NMP, which includes the VMware SATPs and PSPs.)
Slide 13-26
The PSA:
Discovers available storage (physical paths)
Uses predefined claim rules to assign each device to an MPP
An MPP claims a physical path and exports a logical device.
Details of path failover for a specific path are delegated to the SATP.
Details for determining which physical path is used to a storage
device (load balancing) are handled by the PSP.
The PSA has two major tasks. The first task is to discover what storage devices are available on a
system. Once storage is detected, the second task is to apply predefined claim rules to determine
which MPP controls each storage device.
Each device should be claimed by only one claim rule. Claim rules come from and are used by MPPs.
So when a device is claimed by a rule, it is being claimed by the MPP associated with that rule. The
MPP is actually claiming a physical path to a storage device. Once the path has been claimed, the
MPP exports a logical device. Only an MPP can associate a physical path with a logical device.
Within each MPP there are two sub-plug-in types. These are SATPs and PSPs.
The SATP is associated with physical paths and controls path failover. SATPs are covered in
detail later.
Slide 13-27
Information about all functional
paths is forwarded by the SATP to
the PSP. The PSP chooses which
path to use.
When a virtual machine issues an I/O request to a logical device managed by the NMP, the
following takes place:
1. The NMP calls the PSP assigned to this logical device.
2. The PSP selects an appropriate physical path to send the I/O, load-balancing the I/O if
necessary.
3. If the I/O operation is successful, the NMP reports its completion. If the I/O operation reports
an error, the NMP calls an appropriate SATP.
4. The SATP interprets the error codes and, when appropriate, activates inactive paths and fails
over to the new active path.
5. The PSP is then called to select a new active path from the available paths to send the I/O.
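Which PSP participates in this flow can be changed per device, for example with esxcli. A sketch; naa.xxx is a placeholder device identifier:

```
# Assign the Round Robin path selection policy to one device
esxcli storage nmp device set --device naa.xxx --psp VMW_PSP_RR
# Confirm the change
esxcli storage nmp device list --device naa.xxx
```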
Slide 13-29
You should be able to do the following:
Describe vSphere Storage APIs Array Integration.
Describe vSphere Storage APIs Storage Awareness.
Configure and use profile-driven storage.
Lesson 2:
Storage I/O Control
Slide 13-31
After this lesson, you should be able to do the following:
Describe VMware vSphere® Storage I/O Control.
Configure Storage I/O Control.
Helps reduce extra costs associated with overprovisioning.
Is used to balance I/O load in a datastore cluster enabled for Storage DRS.
VMware vSphere® Storage I/O Control extends the constructs of shares and limits to handle storage
I/O resources. Storage I/O Control is a proportional-share IOPS scheduler that, under contention,
throttles IOPS. You can control the amount of storage I/O that is allocated to virtual machines
during periods of I/O congestion. Controlling storage I/O ensures that more important virtual
machines get preference over less important virtual machines for I/O resource allocation.
You can use Storage I/O Control with or without VMware vSphere® Storage DRS™. There are two
thresholds: one for standalone Storage I/O Control and one for Storage DRS. For Storage DRS,
latency statistics are gathered by Storage I/O Control for an ESXi host and sent to vCenter Server
and stored in the vCenter Server database. With these statistics, Storage DRS can make the decision
on whether a virtual machine should be migrated to another datastore.
Slide 13-33
The latency threshold for Storage I/O Control can be set by using injector-based models or can be set manually.
With injector-based models, the latency threshold is measured at peak throughput.
The benefit of using injector-based models is that Storage I/O Control determines the best threshold for a datastore.
The latency threshold is set to the value determined by the injector when 90% of the peak throughput value is achieved.
You can also set the latency threshold manually:
The latency setting is 30 ms by default.
Storage I/O Control is set to stats only mode by default:
Storage I/O Control does not enforce throttling but does gather statistics.
Storage DRS then has statistics in advance for new datastores being added to the datastore cluster.
The default latency threshold for Storage I/O Control is 30 milliseconds. The default setting might be fine for some storage devices, but other devices reach their latency threshold well before or after the default is reached. For example, solid-state disks (SSDs) typically reach their contention point at a much lower latency than the default setting protects against. Not all devices are created equal.
Storage I/O Control can automatically determine an optimal latency threshold by using injector
based models to determine the latency setting. The injector determines and sets the latency threshold
when 90% of the throughput is reached.
Storage I/O Control is set to stats only mode by default. Stats only mode collects and stores statistics
but does not perform throttling on the storage device. Storage DRS can use the stored statistics
immediately after initial configuration or when new datastores are added.
Storage I/O Control determines the peak throughput of the device.
The injector-based models measure the peak latency value when the throughput is at its peak.
The threshold is then set (by default) to 90% of this value.
You can still do the following:
Change the percentage value.
Manually set the congestion threshold.
Storage I/O Control can have its latency threshold set automatically by using the I/O injector model to determine the peak throughput of a datastore. The resulting peak throughput measurement can be used to determine the peak latency of a datastore. Storage I/O Control can then set the latency threshold to the latency observed at 90% of the peak throughput.
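A minimal sketch of the injector model described above, assuming we have (throughput, latency) measurements for a datastore. The helper function and the sample curve are invented for illustration; they are not a VMware API.

```python
def injector_threshold(samples, percent=0.9):
    """samples: list of (throughput_iops, latency_ms) measurements.

    Find peak throughput, then return the latency observed when
    throughput reaches the given fraction (default 90%) of that peak.
    """
    t_peak = max(t for t, _ in samples)
    target = percent * t_peak
    for t, lat in sorted(samples):   # walk the curve from low to high load
        if t >= target:
            return lat
    return None


# Invented measurement curve: latency climbs steeply as the device saturates.
samples = [(1000, 2.0), (3000, 4.0), (4500, 8.0), (5000, 15.0)]
print(injector_threshold(samples))  # 8.0 (latency at 4500 IOPS, 90% of 5000)
```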
Slide 13-35
Datastores that are enabled for Storage I/O Control must be managed by a single vCenter Server system.
Storage I/O Control is supported for Fibre Channel, iSCSI, and NFS
storage.
Storage I/O Control does not support datastores with multiple
extents.
Verify whether your automated tiered storage array is certified as
compatible with Storage I/O Control.
Storage I/O Control provides quality-of-service capabilities for storage I/O in the form of I/O shares
and limits that are enforced across all virtual machines accessing a datastore, regardless of which
host they are running on. Using Storage I/O Control, vSphere administrators can ensure that the
most important virtual machines get adequate I/O resources even in times of congestion.
When you enable Storage I/O Control on a datastore, ESXi begins to monitor the device latency that hosts observe when communicating with that datastore. When device latency exceeds a threshold, the datastore is considered to be congested, and each virtual machine that accesses that datastore is allocated I/O resources in proportion to its shares.
machine. By default, the number of IOPS allowed for a virtual machine is unlimited. If the limit that you want to set for a virtual machine is in terms of megabytes per second instead of IOPS, you can convert megabytes per second into IOPS based on the typical I/O size for that virtual machine. For example, a backup application has a typical I/O size of 64KB. To restrict a backup application to 10MB per second, set a limit of 160 IOPS (10MB per second / 64KB I/O size = 160 I/Os per second).
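The conversion in this example can be expressed as a one-line helper (the function name is ours, for illustration):

```python
def mbps_to_iops(mb_per_sec, io_size_kb):
    """Convert a megabytes-per-second limit into an IOPS limit,
    given the workload's typical I/O size in KB (1MB = 1024KB)."""
    return (mb_per_sec * 1024) / io_size_kb


# The backup-application example from the text: 10MB/s at 64KB per I/O.
print(mbps_to_iops(10, 64))  # 160.0
```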
On the slide, virtual machines VM1 and VM2 are running an I/O load generator called Iometer.
Each virtual machine is running on a different host, but they are running the same type of workload:
16KB random reads. The shares of VM2 are set to twice as many shares as VM1, which implies that
VM2 is more important than VM1. With Storage I/O Control disabled, the IOPS that each virtual
machine achieves, as well as their I/O latency, is identical. But with Storage I/O Control enabled, the
IOPS achieved by the virtual machine with more shares (VM2) are greater than the IOPS of VM1.
The example assumes that each virtual machine is running enough load to cause a bottleneck on the
datastore.
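The proportional-share behavior in this example can be sketched as follows. The function and the IOPS numbers are illustrative, not part of any VMware API: under contention, each virtual machine receives device IOPS in proportion to its shares.

```python
def allocate_iops(total_iops, shares):
    """Split a congested device's IOPS among VMs in proportion to shares."""
    total_shares = sum(shares.values())
    return {vm: total_iops * s / total_shares for vm, s in shares.items()}


# VM2 has twice the shares of VM1, as in the Iometer example above.
print(allocate_iops(3000, {"VM1": 1000, "VM2": 2000}))
# {'VM1': 1000.0, 'VM2': 2000.0}
```

With Storage I/O Control disabled, both VMs would instead achieve roughly equal IOPS regardless of their share settings.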
To enable Storage I/O Control on a datastore:
1. In the Datastores and Datastore Clusters inventory view, select a datastore and click the
Configuration tab.
2. Click the Properties link.
3. In the Properties dialog box, select the Storage I/O Control Enabled check box.
4. Click Close.
To set the storage I/O shares and limits:
1. Right-click the virtual machine in the inventory and select Edit Settings.
2. In the Virtual Machine Properties dialog box, click the Resources tab.
By default, all virtual machine shares are set to Normal (1000), with unlimited IOPS.
For more about Storage I/O Control, see vSphere Resource Management Guide at
http://www.vmware.com/support/pubs/vsphere-esxi-vcenter-server-pubs.html.
Slide 13-38
Lesson 3:
Datastore Clusters and Storage DRS
Slide 13-40
A datastore cluster is a collection of datastores that are grouped together without functioning together.
A datastore cluster enabled for Storage DRS is a collection of datastores working together to balance:
Capacity
I/O latency
(In the slide diagram, four 500GB datastores form a 2TB datastore cluster.)
The datastore cluster serves as a container or folder. The user can store datastores in the container,
but the datastores work as separate entities.
A datastore cluster that is enabled for Storage DRS is a collection of datastores designed to work as
a single unit. In this type of datastore cluster, Storage DRS balances datastore use and I/O latency.
Datastores and hosts that are associated with a datastore cluster must meet certain requirements to
use datastore cluster features successfully.
A datastore cluster can contain a mix of datastores with different sizes and I/O capacities, and can be
from different arrays and vendors. However, LUNs with different performance characteristics can
cause performance problems.
All hosts attached to the datastores in a datastore cluster must be ESXi 5.0 and later. ESXi 4.x and
earlier hosts cannot be included in a datastore cluster.
NFS and VMFS datastores cannot be combined in the same datastore cluster enabled for Storage
DRS. Storage DRS cannot move virtual machines between NFS and VMFS datastores.
VMFS-3 and VMFS-5 datastores can be added to the same Storage DRS cluster. But performance of
these datastores should be similar.
Slide 13-42
The relationship between a VMware vSphere® High Availability /
VMware vSphere® Distributed Resource Scheduler cluster and a
datastore cluster can be one to one, one to many, or many to many.
Host clusters and datastore clusters can coexist in the virtual infrastructure. A host cluster refers to a
VMware vSphere® Distributed Resource Scheduler™ (DRS)/VMware vSphere® High Availability
(vSphere HA) cluster.
Load balancing by DRS and Storage DRS can occur at the same time. DRS balances virtual
machines across hosts based on CPU and memory usage. Storage DRS load-balances virtual
machines across storage, based on storage capacity and IOPS.
A host that is not part of a host cluster can also use a datastore cluster.
Storage DRS manages the placement of virtual machines in a datastore cluster, based on the space
usage of the datastores. It attempts to keep usage as even as possible across the datastores in the
datastore cluster.
Storage vMotion migration of virtual machines can also be a way of keeping the datastores
balanced.
Optionally, the user can configure Storage DRS to balance I/O latency across the members of the
datastore cluster as a way to help mitigate performance issues that are caused by I/O latency.
Storage DRS can be set up to work in either manual or fully automated mode:
• Manual mode presents migration and placement recommendations to the user, but nothing is
executed until the user accepts the recommendation.
• Fully automated mode automatically handles initial placement and migrations based on
runtime rules.
Slide 13-44
When virtual machines are created, cloned, or migrated:
You select a datastore cluster, rather than a single datastore.
Storage DRS selects a member datastore based on capacity and optionally
on IOPS load.
By default, a virtual machine's files are placed on the same datastore in the datastore cluster.
Storage DRS affinity and anti-affinity rules can be created to change this behavior.
When a virtual machine is created, cloned, or migrated, the user has the option of selecting a
datastore cluster on which to place the virtual machine files. When the datastore cluster is selected,
Storage DRS chooses a member datastore (a datastore in the datastore cluster) based on storage use.
Storage DRS attempts to keep the member datastores evenly used.
By default, Storage DRS locates all the files that make up a virtual machine on the same datastore.
However, Storage DRS anti-affinity rules can be created so that virtual machine disk files can be
placed on different datastores in the cluster.
Storage DRS provides as many recommendations as necessary to balance the space and, optionally,
the IOPS resources of the datastore cluster.
Reasons for migration recommendations include:
• Balancing space usage in the datastore
• Reducing datastore I/O latency
• Balancing datastore IOPS load
Storage DRS can also make mandatory recommendations based on whether:
• A datastore is out of space
• Storage DRS anti-affinity or affinity rules are being violated
• A datastore is entering maintenance mode
Storage DRS also considers moving powered-off virtual machines to balance datastores.
Slide 13-46
Datastore correlation refers to datastores that are created on the
same physical set of spindles.
Storage DRS detects datastore correlation by doing the following:
Measuring individual datastore performance
Measuring combined datastore performance
If latency increases on multiple datastores when a load is placed on one, the datastores are correlated.
Correlation is determined by a long-running background process.
Anti-affinity rules can use correlation detection to ensure that virtual machines or virtual disks are on different spindles.
Datastore correlation is enabled by default.
The purpose of datastore correlation is to help the decision making process in Storage DRS when
deciding where to move a virtual machine. For example, you gain little advantage by moving a
virtual machine from one datastore to another if both datastores are backed by the same set of
physical spindles on the array.
The datastore correlation detector uses the I/O injector to determine if a source and destination
datastore are using the same back-end spindles.
The detector works by monitoring the load on one datastore and monitoring the latency on another.
If latency increases on other datastores when a load is placed on one datastore, the datastores are
correlated.
The datastore correlation detector can also be used for Anti-Affinity rules, making sure that virtual
machines and virtual disks are not only kept on separate datastores, but also kept on different
spindles on the back-end.
Option for including I/O latency in balancing
Configuration settings for utilized space and latency thresholds
Advanced settings for latency thresholds
In the SDRS Runtime Rules page of the wizard, select or deselect the Enable I/O metric for SDRS
recommendations check box to enable or disable IOPS metric inclusion. When I/O load balancing
is enabled, Storage I/O Control is enabled for all the datastores in the datastore cluster if it is not
already enabled. When this option is deselected, you disable:
• IOPS load balancing among datastores in the datastore cluster
• Initial placement for virtual disks based on IOPS metric
Space is the only consideration when placement and balancing recommendations are made.
The following settings determine when Storage DRS performs initial placement or migration recommendations:
• Utilized Space – Determines the maximum percentage of consumed space allowed before
Storage DRS recommends or performs an action.
• I/O Latency – Indicates the maximum latency allowed before recommendations or migrations
are performed. This setting is applicable only if the Enable I/O metric for SDRS
recommendations check box is selected.
Click the Show Advanced Options link to view advanced options:
• No recommendations until utilization difference between source and destination is –
Defines the space utilization threshold. For example, datastore A is 80 percent full and datastore
B is 83 percent full. If you set the threshold to 5, no recommendations are made. If you set the
threshold to 2, a recommendation is made or a migration occurs.
• Evaluate I/O load every – Defines how often Storage DRS checks space or IOPS load
balancing or both.
• I/O imbalance threshold – Defines the aggressiveness of IOPS load balancing if the Enable
I/O metric for SDRS recommendations check box is selected.
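The utilization-difference rule above can be sketched as a simple check. The function is illustrative (not a VMware API): a migration is considered only if the source datastore's utilization exceeds the destination's by at least the threshold, in percentage points.

```python
def recommend_migration(source_pct, dest_pct, diff_threshold):
    """True if the utilization gap justifies a Storage DRS recommendation."""
    return (source_pct - dest_pct) >= diff_threshold


# The example from the text: datastore B is 83% full, datastore A is 80% full.
print(recommend_migration(83, 80, 5))  # False: a 3-point gap is below 5
print(recommend_migration(83, 80, 2))  # True: a 3-point gap meets 2
```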
Slide 13-49
Select the host cluster that will use the datastore cluster.
If no host clusters are created, the user can select individual ESXi hosts
to use the datastore cluster.
You can configure a host cluster or individual hosts to use the datastore cluster enabled for
Storage DRS.
VMware recommends
selecting datastores that
all hosts can access.
You can select one or more datastores in the Available Datastores pane. The Show Datastores
drop-down menu enables you to filter the list of datastores to display. VMware recommends that all
hosts have access to the datastores that you select.
In the example, all datastores accessed by all hosts in the vCenter Server inventory are displayed.
All datastores are accessible by all hosts, except for the datastores Local01 and Local02.
Slide 13-51
A panel on the datastore cluster's Summary tab displays the Storage
DRS settings.
The vSphere Storage DRS panel on the Summary tab of the datastore cluster displays the Storage
DRS settings:
• I/O metrics – Displays whether or not the I/O metric inclusion option is enabled
• Storage DRS – Indicates whether Storage DRS is enabled or disabled
• Automation level – Indicates either manual or fully automated mode
• Utilized Space threshold – Displays the space threshold setting
• I/O latency threshold – Displays the latency threshold setting
The Storage DRS tab displays the Recommendations view by default. In this view, datastore cluster
properties are displayed. Also displayed are the migration recommendations and the reasons for the
recommendations.
To refresh recommendations, click the Run Storage DRS link.
To apply recommendations, click Apply Recommendations.
The Storage DRS tab has two other views. The Faults view displays issues that occurred when
applying recommendations. The History view maintains a migration history.
Slide 13-53
Maintenance mode takes a datastore out of use in order to service it.
Storage DRS maintenance mode:
Evacuates virtual machines from a datastore placed in maintenance
mode:
Registered virtual machines (on or off) are moved.
Templates and unregistered virtual machines are not moved.
Storage DRS allows you to place a datastore in maintenance mode. A datastore enters or leaves
maintenance mode only as the result of your performing the task. Storage DRS maintenance mode is
available to datastores in a datastore cluster enabled for Storage DRS. Standalone datastores cannot
be placed in maintenance mode.
When a datastore enters Storage DRS maintenance mode, only registered virtual machines
are moved to other datastores in the datastore cluster. Unregistered virtual machines, templates, ISO
images, and other nonvirtual machine files are not moved. The datastore does not enter maintenance
mode until all files on the datastore are moved. So you must manually move these files off the
datastore in order for the datastore to enter Storage DRS maintenance mode.
If the datastore cluster is set to fully automated mode, virtual machines are automatically migrated
to other datastores.
If the datastore cluster is set to manual mode, migration recommendations are displayed in the
Storage DRS tab. The virtual disks cannot be moved until the recommendations are accepted.
To place a datastore into Storage DRS maintenance mode:
Scheduled tasks can be configured to change Storage DRS behavior. Scheduled tasks can be used to
change the Storage DRS configuration of the datastore cluster to match enterprise activity. For
example, if the datastore cluster is configured to perform migrations based on I/O latency, you might
disable the use of I/O metrics by Storage DRS during the backup window. You can reenable I/O
metrics use after the backup window ends.
To set up a Storage DRS scheduled task for a datastore cluster:
1. In the Datastores and Datastore Clusters inventory view, right-click the datastore cluster and
select Edit Settings.
2. In the left pane, select SDRS Scheduling and click Add.
3. In the Set Time page, enter the start time, end time, and days that the task should run. Click
Next.
4. In the Start Settings page, enter a description and modify the Storage DRS settings as you want
them to be when the task starts. Click Next.
5. In the End Settings page, enter a description and modify the Storage DRS settings as you want
them to be when the task ends. Click Next.
6. Click Finish to save the scheduled task.
Slide 13-55
Feature or product                       Supported/Not supported   Migration recommendation
VMware Thin-Provisioned Disks            Supported                 Fully Automated
VMware vSphere Linked Clones             Not Supported             Not Supported
VMware vSphere Storage Metro Cluster     Supported                 Manual
VMware® vCenter Site Recovery Manager    Not Supported             Not Supported
VMware vCloud® Director                  Not Supported             Not Supported
The table shows some features that are supported with Storage DRS. For information about Storage
DRS features and requirements, see vSphere Resource Management Guide at
http://www.vmware.com/support/pubs/vsphere-esxi-vcenter-server-pubs.html.
Feature or product              Initial placement   Migration recommendations
Array-Based Snapshots           Supported           Manual
Array-Based Deduplication       Supported           Manual
Array-Based Thin Provisioning   Supported           Manual
Array-Based Auto-Tiering        Supported           Manual (only capacity load balancing)
Array-Based Replication         Supported           Fully Automated
VMware vSphere® Replication     Not Supported       Not Supported
The table shows some array features that are supported with Storage DRS. For information about
Storage DRS and supported array features and requirements, go to the vSphere Storage DRS
Interoperability whitepaper at http://www.vmware.com/resources/techresources/10286
Slide 13-57
Storage DRS and Storage I/O Control are complementary solutions:
Storage I/O Control is set to stats only mode by default.
Storage DRS works to avoid I/O bottlenecks.
Storage I/O Control manages unavoidable I/O bottlenecks.
Storage I/O Control works in real time.
Storage DRS does not use real-time latency to calculate load
balancing.
Storage DRS and Storage I/O Control provide you with the
performance that you need in a shared environment, without having to
massively overprovision storage.
Both Storage DRS and Storage I/O Control work with IOPS and should be used together. Storage
DRS works to avoid IOPS bottlenecks. Storage I/O Control is enabled when you enable Storage
DRS. Storage I/O Control manages unavoidable IOPS bottlenecks, such as short, intermittent
bottlenecks and congestion, on every datastore in the datastore cluster.
Storage I/O Control runs in real time. It continuously checks for latency and controls I/O
accordingly.
Storage DRS uses IOPS load history to determine migrations. Storage DRS runs infrequently and
does analysis to determine long-term load balancing.
Storage I/O Control monitors the I/O metrics of the datastores. Storage DRS uses this information to
determine whether a virtual machine should be moved from one datastore to another.
Slide 13-59
You should be able to do the following:
Create a datastore cluster.
Configure Storage DRS.
Explain how Storage I/O Control and Storage DRS complement each
other.
Module 14
Data Protection
Slide 14-1
changes to its hardware or software configuration. In addition,
application data goes through constant change.
From a manageability perspective, making regular backups of your
vSphere environment is important.
Backing up virtual machines requires strategies that leverage
virtualization architecture to perform highly efficient backups.
(Slide diagram: a backup agent on the server sends data over a network connection to a backup server, which uses nearly 100 percent of server resources during backup and writes the backup to tape or disk.)
When you think of traditional backup and restore methods, you imagine a physical environment
with a single operating system running on a single physical server. The traditional backup and
restore process relies on a software backup agent that is installed in the operating system. The
backup and restore model requires a backup server to be configured with a tape or disk subsystem
for data to be written to. At regular intervals, the backup server establishes a TCP/IP session with
the backup agent. After the connection is established, the backup agent begins to read all the file
systems on all the disks that are configured for that host.
When you are considering traditional backup methods for virtual machines, the advantages to this
model include:
• A process that is well understood
• The ease of deploying and managing for administrators
• Features and functionality that you are accustomed to
Excessive physical resource use generated by each virtual machine.
Backup agents installed in virtual machines monopolize host CPU
resources during backups, which results in less CPU resource for
other virtual machines running on that ESXi host.
I/O resources like network and storage are also saturated with read
and write operations during backup.
Unlike virtual machines, physical servers do not share resources. A physical server has uninhibited
access to 100 percent of its resources. Virtual machines must share available server resources. As
with a single physical system, backing up a single virtual machine uses nearly all of the available
physical server resources. With physical servers, you can schedule backups concurrently, with
backups running on other physical servers in the environment.
Performing concurrent backup operations on virtual machines, especially those running on the same
physical server, places a heavy strain on CPU, network, and shared storage resources.
If you must install a backup agent into your virtual machines, consider a strategy that performs a
backup of virtual machines that uses different networks and datastores. And limit the number of
virtual machine backups to one per VMware ESXi™ host.
The backup process can strain I/O resources because the backup process copies data from client to
server. The backup process also requires a significant amount of CPU cycles to identify which data
to back up and which data to leave alone. Transfer of the data across the network can also be
burdensome. Adding to the overhead is the fact that some backup programs work to eliminate
redundancies in data, which requires additional CPU cycles to complete. Combined, all of this
processing consumes an excessive amount of server resources, especially CPU resources.
Because of the flexibility of virtual architectures, they have many advantages in regard to backup
strategies. These advantages offer the prospect of saving you time and money as well as introducing
new technologies, not available in physical architectures, into your datacenter.
Virtual machines always see the same set of virtual hardware, regardless of the hardware installed
on the physical server, which makes bare-metal recoveries easier. Physical servers require separate
processes to create bare-metal and file-level restores. Virtual machines require only an image-level
backup, which can be used for both bare-metal and file-level restoration. Virtual machines are not
required to have a backup agent installed, because backup solutions created for virtual architectures
can directly access VMware® vSphere® datastores. Direct access to the datastore enables
offloading of backup processing to a server other than the host on which the virtual machines are
running. Direct datastore access also means that the backups do not consume network bandwidth.
Enables backup and recovery of entire virtual machine images across
SAN storage or local area networks
Is an API that is directly integrated with backup tools from third-party
vendors
Enables you to remove load from the host and consolidate backup load
onto a central backup server
Protects virtual machines that use any type of storage supported by
ESXi (Fibre Channel, iSCSI, NAS, or local storage)
VMware vSphere® Storage APIs – Data Protection (VADP) requires no software installation,
because it is built in to the ESXi framework and can be used to run a full backup.
VADP provides the following features:
• Backing up VMware® guests does not use a temporary directory. So VADP does not use as
much storage on the proxy backup server.
• If you have VMware® vSphere Data Protection, you can restore individual files.
• If you have Data Protection, you can run incremental backups after running an initial full
backup.
For more about VADP, go to http://www.vmware.com/products/vstorage-apis-for-data-protection.
On this page, click backup software for a list of third-party vendors who have integrated VADP
into their backup tools.
(Slide diagram: virtual disks are mounted to the backup server, which writes the backup to tape or disk.)
scsi0:0.ctkEnabled=true
scsi1:0.ctkEnabled=true
(Each tracked virtual disk has an associated -ctk.vmdk file.)
Copy only file blocks that have changed since the last backup.
Changed block tracking enables faster incremental backups and
near-continuous data protection.
With changed block tracking (CBT), the VMkernel tracks changed blocks of a virtual machine’s
disk. The implementation of CBT alleviates the burden of the backup applications having to scan or
track changed blocks. The result is much quicker incremental backups because scanning an entire
virtual machine disk for changes since the last backup is no longer necessary.
The CBT feature can be accessed by third-party applications as part of VADP. Applications can use
the API to query the VMkernel to return the blocks of data that have changed on a virtual disk since
the last backup operation. You can use CBT on any type of virtual disk, thick or thin, and on any
datastore type, including NFS and iSCSI datastores. You cannot use CBT on physical mode raw
device mappings (RDMs).
Use of CBT depends on the virtual machine hardware version. Virtual machines created with
hardware version 7 or later can use CBT. The feature is not enabled by default. You must enable
CBT on each virtual machine.
The size of the -ctk.vmdk file is static. The file does not expand past the initial size unless the size
of the virtual disk is changed. The file size is based on the size of the virtual disk: about 0.5MB for
every 10GB of virtual disk. CBT cannot be used with virtual machines created with VMware
products before VMware vSphere 4.0. As a result, virtual machines created before vSphere 4.0 take
longer to back up.
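The sizing rule above is simple arithmetic; the helper below is ours, and the result is the rough estimate the text gives, not an exact file size.

```python
def ctk_file_size_mb(disk_gb):
    """Estimate the -ctk.vmdk size: roughly 0.5MB per 10GB of virtual disk."""
    return disk_gb / 10 * 0.5


print(ctk_file_size_mb(100))  # 5.0 MB for a 100GB virtual disk
```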
Data deduplication greatly minimizes the amount of storage for backups and reduces the overall cost
of ownership for data protection. Deduplicated backups mean that the backup operation:
• Evaluates blocks that will be saved
• Compares them to blocks that have already been saved
• Identifies blocks containing identical data
Duplicate blocks (blocks with the same information as a previous backup) are not stored twice.
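The evaluation steps above amount to block-level hashing. A minimal sketch, assuming SHA-256 digests identify identical blocks (real products use their own fingerprinting methods):

```python
import hashlib


def dedup_backup(blocks, store):
    """store: dict mapping block digest -> block data (the backup store).

    Returns the number of blocks actually written by this backup.
    """
    new_blocks = 0
    for block in blocks:
        digest = hashlib.sha256(block).hexdigest()
        if digest not in store:      # duplicate blocks are not stored twice
            store[digest] = block
            new_blocks += 1
    return new_blocks


store = {}
print(dedup_backup([b"aaaa", b"bbbb", b"aaaa"], store))  # 2
print(dedup_backup([b"aaaa", b"cccc"], store))           # 1
```

Backup 2 writes only one block: the other block already exists in the store from backup 1.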
Deduplication store technology completes three processes:
• Integrity check – Verifies and maintains the data integrity of the backup store
• Recatalog – Synchronizes restore points with the contents of the backup store
• Reclaim – Reclaims space on the backup store
A well-known limitation to backup of individual files is file locking. Files that are in use by the
operating system are typically locked and cannot be accessed by the backup agent. With special
programs that can be leveraged by the backup software, locked or opened files can be included in
the backup operation. The ability to back up these locked files results in application-consistent
backups. Microsoft Volume Shadow Copy Service (VSS) is an example of this kind of program.
With vSphere 5.1, VMware is releasing a new backup and recovery solution for virtual machines
called vSphere Data Protection (VDP). This solution is fully integrated with VMware® vCenter
Server™ and provides agentless, disk-based backup of virtual machines to deduplicated storage.
Benefits of VDP include the following:
• It ensures fast, efficient protection for virtual machines even if they are powered off.
• It uses patented deduplication technology across all backup jobs, significantly reducing disk
space consumption.
• VMware vSphere® APIs – Data Protection (VADP) and Changed Block Tracking (CBT) are
utilized to reduce load on the vSphere hosts and minimize backup window requirements.
• It performs full virtual machine and File-Level Restore (FLR) without installing an agent in
every virtual machine.
• Installation and configuration are simplified by using an appliance form factor.
• The VDP appliance and its backups are protected using a checkpoint and rollback mechanism.
• Windows and Linux files can easily be restored by the end user with a Web browser.
Scalability (maximums):
100 virtual machines per VDP appliance
10 appliances per vCenter Server
2 TB of deduplicated storage
Appliances deployed with 0.5 TB, 1 TB, or 2 TB
VADP enables backup software to perform centralized virtual machine backups without the
disruption and overhead of running backup tasks from inside each virtual machine.
A key factor in eliminating redundant data at a segment (or subfile) level is the method for
determining segment size. Fixed-block or fixed-length segments are commonly employed by
snapshot and some deduplication technologies. Unfortunately, even small changes to a dataset can
change all fixed-length segments in a dataset (for example, inserting data at the beginning of a file).
This change to all fixed-length segments occurs despite the fact that very little of the dataset has
been changed. VDP uses an intelligent variable-length method for determining segment size that
examines the data to determine logical boundary points, eliminating the inefficiency.
VDP uses a patented method for segment size determination designed to yield optimal efficiency
across all systems. VDP’s algorithm analyzes the binary structure of a data set (all the 0s and 1s that
make up a dataset) in order to determine segment boundaries that are context-dependent. Variable-
length segments average 24 KB in size and are compressed to an average of 12 KB. By analyzing
the binary structure within the VMDK files, VDP works for all file types and sizes and intelligently
de-duplicates the data.
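To illustrate the general idea of variable-length (content-defined) segmentation — this is a generic sketch, not VDP's patented algorithm — a segment boundary can be declared wherever a rolling hash over the most recent bytes matches a fixed bit pattern:

```python
def segments(data, min_size=2048, mask=0x0FFF):
    # Content-defined chunking sketch: the hash accumulates bytes with a
    # left shift, so contributions older than ~32 bytes fall out of the
    # 32-bit accumulator and boundaries depend only on nearby content.
    out, start, h = [], 0, 0
    for i, b in enumerate(data):
        h = ((h << 1) + b) & 0xFFFFFFFF
        # Declare a boundary when the chunk is long enough and the
        # low bits of the hash hit a fixed pattern.
        if i - start + 1 >= min_size and (h & mask) == mask:
            out.append(data[start:i + 1])
            start, h = i + 1, 0
    if start < len(data):
        out.append(data[start:])
    return out
```

Unlike fixed-length segmentation, prepending a byte changes only the leading segment here; later boundaries depend only on local content and quickly realign, which is what lets deduplication find the unchanged segments again.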
In vSphere 5.x, the SCSI HotAdd feature is enabled only for vSphere editions Enterprise and higher,
which have Hot Add licensing enabled. No separate Hot Add license is available for purchase as an
add-on.
VDP requires vCenter Server 5.1 or higher. vCenter Server can be the traditional Windows
implementation—or the Linux-based VMware® vCenter™ Server Appliance™.
VDP is deployed as a preconfigured Linux-based appliance. Each appliance supports as many as
100 virtual machines, and as many as 10 VDP appliances can be deployed per vCenter Server
instance. The Windows-based VMware vSphere® Client™ is used to deploy VDP. After the
appliance has been deployed, management is performed using the VMware vSphere® Web Client
with any supported Web browser. Adobe Flash must be installed in the Web browser.
The slide diagrams the VDP architecture: a SLES 11 64-bit appliance with 4 vCPUs and 4 GB RAM,
a deduplication store (.vmdk files) consuming 850GB, 1.6 TB, or 3.1 TB, the vSphere Web Client
connecting through vCenter Server 5.1, and protected virtual machines on vSphere 4.0 or higher.
The VDP appliance is deployed with four processors (vCPUs) and 4GB of RAM. Three
configurations of usable backup storage capacity are available: .5TB, 1TB, and 2TB, which
respectively consume 850GB, 1,600GB, and 3,100GB of actual storage capacity. Proper planning
should be performed to help ensure proper sizing, because additional storage capacity cannot be
added after the appliance is deployed. Storage capacity requirements are based on the number of
virtual machines being backed up, the amount of data, retention periods, and typical data change rates.
The deduplication store completes the following processes:
• Integrity check – This operation verifies and maintains data integrity on the deduplication store.
Data Recovery completes an incremental integrity check every 24 hours. These checks verify
the integrity of restore points that have been added to the deduplication store since the most
recent full or incremental integrity check. Data Recovery performs an integrity check of all
restore points once a week.
• Configuration – Network, vCenter, and system settings
• Rollback – Roll back the repository of backups (more on this later)
• Upgrade – Upgrade the VDP appliance
VDP is deployed using the vSphere Client from a prepackaged Open Virtualization Archive (.ova)
file. The .ova files are labeled to easily identify the amount of backup storage capacity included with
the appliance.
After the appliance is deployed and powered on, a Web browser is used to access the VDP-configure
user interface (UI) and perform the initial configuration. The first time the user connects to the
VDP-configure UI, it will be running in installation mode. With the installation mode wizard, items
such as IP address, host name, DNS, time zone and vCenter Server connection information are
configured. Upon successful completion of the installation mode wizard, the appliance must be
rebooted. This reboot can take up to 30 minutes to complete as the appliance finishes initial
configuration.
After the initial configuration, the VDP-configure utility runs in maintenance mode. In this mode,
the VDP-configure UI is used to perform functions such as starting and stopping services on the
appliance, collecting logs and rolling back the appliance to a previous valid configuration state.
The vSphere Web Client is used to create and maintain backup jobs and perform entire virtual
machine restores, as well as for reporting and configuration of VDP.
Slide callout: select objects – containers (datacenters, folders, clusters, and so on) and individual
virtual machines.
Creating and editing a backup job is accomplished using the Backup tab of the VDP UI in the
vSphere Web Client. Individual virtual machines can be selected for backup. Containers such as
datacenters, clusters, hosts, resource pools and folders also can be selected for backup. All virtual
machines in the container at the time the backup job runs are backed up. New virtual machines
added to the container are included when the next backup job runs. Similarly, any virtual machines
removed from the container no longer are backed up.
Backup jobs can be scheduled daily, weekly or monthly. Each job runs once on the day it is
scheduled and begins when the backup window opens (default is 8:00 p.m. local time). As many as
eight backup jobs can run simultaneously on each VDP appliance.
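The run-time resolution of container membership described above can be modeled as follows (a hypothetical sketch; the inventory structure is illustrative, not the vSphere API):

```python
def members_at_run_time(inventory, job_targets):
    # A backup job stores references to containers and individual VMs.
    # The set of VMs actually backed up is resolved when the job runs,
    # so VMs added to (or removed from) a container are picked up
    # automatically on the next run.
    vms = set()
    for target in job_targets:
        if target in inventory:          # container: expand to its VMs
            vms |= set(inventory[target])
        else:                            # individual virtual machine
            vms.add(target)
    return vms
```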
The restore of an entire virtual machine is performed using the Restore tab of the VDP UI in the
vSphere Web Client. The administrator can browse the list of virtual machines backed up by VDP
and then select one or more restore points. By leveraging CBT during a restore of a virtual machine
to its original location, VDP offers fast and efficient recovery. During the restore process, VDP
queries VADP to determine which blocks have changed since the selected restore point, and it
recovers only those blocks. This process reduces data transfer in the vSphere environment during a
recovery operation and decreases recovery time. VDP evaluates the workload of the two restore
methods (full-image restore and restore leveraging CBT) and uses the one that results in the fastest
restore time. This evaluation is useful in scenarios where the change rate since the selected restore
point is high and the overhead of a CBT analysis operation would be more costly than a full-image
recovery. A new virtual machine name and destination datastore also can be specified to prevent
overwriting an existing virtual machine. Choosing a restore location other than the original results
in a full-image restore.
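The selection between the two restore methods amounts to a cost comparison, roughly like this (a hypothetical model; VDP's actual heuristics are internal):

```python
def pick_restore_method(total_blocks, changed_blocks, analysis_cost):
    # Hypothetical cost model: a CBT restore transfers only the changed
    # blocks but pays an analysis overhead; a full-image restore
    # transfers every block. Use whichever is cheaper.
    return "cbt" if changed_blocks + analysis_cost < total_blocks else "full-image"
```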
It also is possible to restore individual files and folders/directories in a virtual machine. A File Level
Restore is performed using a Web-based tool called vSphere Data Protection Restore Client. The
process enables end users to perform restores on their own without the assistance of a VDP
administrator. The end user can select a restore point and then browse the file system as it looked at
the time that the backup was performed. After the end user locates the item or items to be restored, a
destination (on the local machine) is selected and the job is started. The progress of the restore job
can also be monitored in the tool.
Slide callouts: VDP appliance capacity; successes and failures at a glance; list of virtual machines,
backup jobs, and so on, with the ability to filter.
The Reports tab displays the following information: VDP Appliance Status, Used Capacity, backup
job information, virtual machine backup details, and so on. There are links to the Event Console and
Task Console for additional information and troubleshooting purposes. Users can filter the list of
virtual machines by means of several criteria, including Virtual Machine Name, Backup Jobs, and
Last Successful Backup date. The details section of the virtual machine information displays the
Virtual Machine Name, guest operating system, backup status, backup date, and other useful items.
In addition to the reporting capabilities of its UI, VDP can be configured to send email reports,
which can be scheduled at a specific time once per day on any or every day of the week. Similar to
the UI, these email messages contain details on the VDP appliance, backup jobs, and the virtual
machines that are backed up.
vCenter Server requires that the vCenter Server database and the VMware VCMSDS (Active
Directory Application Mode, or ADAM) data be backed up. The ADAM data is backed up every
5 minutes into the vCenter Server database. To back up the latest update of ADAM data, ensure that
the VMware VirtualCenter Management Webservices service runs for at least 5 minutes before
stopping the other vCenter Server services.
If you run vCenter Server in a virtual machine, use a backup solution designed for backing up
virtual infrastructure. But if you run vCenter Server on a physical server, use any backup solution
designed for backing up physical infrastructure. In both cases, you should obtain a full image of the
vCenter Server host.
For details on restoration of vCenter Server data, see VMware knowledge base article 1023985 at
http://www.vmware.com/kb/1023985.
Back up the host configuration after changing the configuration and after upgrading the host.
ESXi configuration:
• Use the vicfg-cfgbackup command.
• This command backs up and restores the host's configuration.
• Run the command from VMware vSphere® Command-Line Interface.
After you configure an ESXi host, back up your configuration. Always back up your host
configuration after you change the configuration or upgrade the ESXi image. When you perform a
configuration backup, the serial number is backed up with the configuration and is restored when
you restore the configuration. But the serial number is not preserved when you run the recovery CD
(ESXi Embedded) or perform the repair operation (ESXi Installable). The recommended procedure
is to first back up the configuration, run the recovery CD or repair operation if needed, and then
restore the configuration.
Use the vicfg-cfgbackup command to do the backup. Run this command from VMware
vSphere® Command-Line Interface (vCLI). You can install vCLI on your Windows or Linux
system or import VMware vSphere® Management Assistant. For information about importing or
installing vCLI, see vSphere Command-Line Interface Installation and Reference Guide at
http://www.vmware.com/support/pubs.
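As a sketch, a small helper could assemble the vicfg-cfgbackup command line, assuming the -s (save) and -l (load/restore) options; the host name and file path here are illustrative:

```python
def cfgbackup_cmd(server, backup_file, restore=False):
    # vicfg-cfgbackup -s <file> saves the host configuration;
    # vicfg-cfgbackup -l <file> loads (restores) it.
    flag = "-l" if restore else "-s"
    return ["vicfg-cfgbackup", "--server", server, flag, backup_file]
```

A wrapper like this could hand the list to a process runner; run the real command from a system with vCLI installed, or from vMA.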
You can use the recovery CD or the repair operation (on the ESXi installation CD) if the host does
not boot because the file partitions or Master Boot Record on the installation disk might be
corrupted. Perform this recovery procedure when VMware Customer Service directs you to.
When you plan your backup strategy for virtual machines, be aware of
the various techniques that complement virtual infrastructures.
Data Recovery is an agentless, 64-bit, Linux-based virtual appliance used for backing up and
recovering virtual machines.
Data Recovery uses deduplication store technology to make efficient use of the backup storage.
Data can be restored per virtual machine, per virtual disk, or per file.
Questions?
Module 15
Patch Management
• Eliminates many security breaches that exploit older vulnerabilities
Update Manager reduces the diversity of systems in an environment:
• Makes management easier
• Reduces security risks
Update Manager keeps machines running more smoothly:
• Patches include bug fixes
• Makes troubleshooting easier
VMware vSphere® Update Manager™ enables centralized, automated patch and version
management for VMware® vSphere® and supports VMware ESXi™ hosts, virtual machine
hardware, VMware® Tools™ and virtual appliances. Updates that you specify can be applied to
ESXi hosts, virtual machine hardware, and virtual appliances that you scan. With Update Manager,
you can perform the following tasks:
• Scan for compliance and apply updates to virtual machine hardware, appliances and hosts
• Directly upgrade hosts, virtual machine hardware, VMware Tools, and virtual appliances
• Apply third-party software on hosts
Keeping the patch versions up to date for virtual machine hardware and ESXi hosts helps reduce the
number of vulnerabilities in an environment and the range of problems requiring attention. All
systems require ongoing patching and reconfiguration. Reducing the diversity of systems in an
environment and keeping them in compliance are security best practices. Additionally, because
patches include bug fixes, Update Manager keeps environments operating properly, without service
interruptions or errors.
CAUTION
After you upgrade or migrate your host to ESXi 5.x, you cannot roll back to your version 4.x ESXi
software. Back up your host configuration before performing an upgrade or migration. If the
upgrade or migration fails, you can reinstall the 4.x ESXi software and restore your host
configuration.
In addition to patching your ESXi hosts, VMware Tools, and virtual machine hardware, you still
must continue to protect the guest operating system and applications running in the virtual machine.
Continue to protect the guest operating system and applications as you would on a physical system.
VMware® does provide solutions that will assist you with this. One example is to use VMware®
vCenter Configuration Manager™. For information about vCenter Configuration Manager, go to
http://www.vmware.com/products/configuration-manager. Another example is to use VMware®
vCenter™ Protect™ Update Catalog. For information about this product, go to http://
www.vmware.com/products/datacenter-virtualization/vcenter-protect-update-catalog.
NOTE
vCenter Configuration Manager can also be used for patching and patch management. This course
will deal specifically with how Update Manager is used to perform these functions.
• For VMware® patches: https://hostupdate.vmware.com
• For third-party patches: URL of the third-party source
• Creation of baselines and baseline groups
• Scanning – inventory systems are scanned for baseline compliance
• Remediation – inventory systems that are not current can be automatically patched
• Reduces the number of reboots required after VMware Tools updates
Update Manager uses a set of operations to ensure effective patch and upgrade management.
This process begins by downloading information about a set of security patches. One or more of
these patches are aggregated to form a baseline. Multiple baselines can be added to a baseline group.
You can use baseline groups to combine different types of baselines and then scan and remediate an
inventory object against all of them as a whole. If a baseline group contains both upgrade and patch
baselines, the upgrade runs first.
A collection of virtual appliances and ESXi hosts can be scanned for compliance with a baseline or
a baseline group and remediated (updated or upgraded). These processes can be started manually or
through scheduled tasks.
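The ordering rule — upgrade baselines run before patch and extension baselines within a baseline group — can be sketched as follows (an illustrative model, not Update Manager code):

```python
def remediation_order(baselines):
    # Within a baseline group, upgrade baselines run before patch and
    # extension baselines; the stable sort keeps the original order
    # within each kind.
    return sorted(baselines, key=lambda b: 0 if b["kind"] == "upgrade" else 1)
```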
The slide diagrams the Update Manager architecture: the vSphere Client with the Update Manager
plug-in, the Update Manager server with its patch database, the vCenter Server database, an optional
download server with its own patch database, and Internet connections to the VMware patch source
and third-party patch sources.
NOTE
UMDS (Update Manager Download Service) 5.1 can be installed only on 64-bit Windows operating
systems.
You can install Update Manager on the same computer as vCenter Server or on a different computer.
Update Manager runs on these Windows versions:
• Windows Server 2003 [Standard/Enterprise/Datacenter] 64-bit (SP2 required)
• Windows Server 2003 R2 [Standard/Enterprise/Datacenter] 64-bit (SP2 required)
• Windows Server 2008 [Standard/Enterprise/Datacenter] 64-bit
• Windows Server 2008 [Standard/Enterprise/Datacenter] 64-bit SP2
• Windows Server 2008 [Standard/Enterprise/Datacenter] 64-bit R2
• Windows Server 2008 [Standard/Enterprise/Datacenter] 64-bit R2 Service Pack 1
You can install Update Manager only on a 64-bit machine.
Gather information about the environment into which you are installing Update Manager, including:
• The vCenter Server system that Update Manager will work with. The necessary information
includes:
• The vCenter Server IP address or host name
• Port numbers (in most cases, the default Web service ports, 80 and 443, are used)
• Administrative credentials (the Administrator account is often used)
• The system DNS name plus the user name and password for the database that Update Manager
will work with.
During the installation, you can configure Update Manager to work with an Internet proxy server.
The Update Manager client component is delivered as a plug-in for the vSphere Client. After
installing Update Manager, install the Update Manager plug-in in any vSphere Client that you will
use to manage Update Manager.
In the vSphere Client menu bar, select Plug-ins > Manage Plug-ins. In the Plug-in Manager
window, click Download and Install for the Update Manager plug-in. The installed plug-in appears
under Installed Plug-ins.
The disk storage requirements for Update Manager vary depending on your deployment. Make sure
that you have at least 20GB of free space in which to store patch data. Depending on the size of your
deployment, Update Manager requires a minimum amount of free space per month for database
usage.
Before installing Update Manager, you must create a database instance and configure it to ensure
that all Update Manager database tables are placed in it. Update Manager can handle small-scale
environments using the bundled SQL Server 2008 R2 Express. For environments with more than
5 hosts and 50 virtual machines, create either an Oracle or a SQL Server database for Update
Manager. For large scale environments, you should set up the Update Manager database on a
different computer than the Update Manager server and the vCenter Server database.
Slide callout: modify Update Manager configuration properties.
You can modify the following administrative settings for Update Manager. Select Home >
Solutions and Applications > Update Manager and click the Configuration tab:
• Network Connectivity – Network settings, such as IP address or host name for patch store.
• Download Settings – Where to obtain patches and where to configure the proxy settings.
• Download Schedule – How frequently to download patches. This setting has no effect on an
optional download server, which is separate from the Update Manager server.
• Notification Check Schedule – How frequently to check for notifications about patch recalls,
patch fixes, and alerts.
• Virtual Machine Settings – Whether to take a snapshot of the virtual machines before
remediation to enable rollback and how long to keep snapshots. Snapshots use disk space, but
they also protect you if the upgrade fails.
• ESXi Host/Cluster Settings – How Update Manager responds to a failure that might occur
when placing an ESXi host in maintenance mode. This setting also allows you to temporarily
disable VMware vSphere® Distributed Power Management™ (DPM), VMware vSphere®
High Availability admission control, and VMware vSphere® Fault Tolerance for cluster updates
to succeed.
• vApp Settings – Enable or disable smart reboot of virtual appliances after remediation.
Slide callouts: virtual machine upgrade (for hardware or VMware Tools) and virtual appliance
upgrade.
Update Manager includes a number of default baselines.
A baseline group consists of multiple baselines:
• It can contain one upgrade baseline per type and one or more patch and extension baselines.
When you scan hosts, virtual machines, and virtual appliances, you evaluate them against baselines
and baseline groups to determine their level of compliance.
Baselines contain a collection of one or more patches, extensions, bug fixes, or upgrades. Baselines
can be classified as upgrade, extension, or patch baselines.
An extension refers to additional software for ESXi hosts. This additional software might be
VMware software or third-party software. Examples of extensions include the following:
• Additional features
• Updated drivers for hardware
• Common Information Model (CIM) providers for managing third-party modules on the host
• Improvements to the performance or usability of existing host features.
Baseline types:
• Host patch – A set of patches to apply to a host or set of hosts, based on applicability
• Host extension – A fixed set of extensions for your ESXi host
• Host upgrade – An upgrade release that allows you to upgrade hosts to a particular release
version
Slide callout: a host patch is added to this baseline.
To create a baseline, select Home > Solutions and Applications > Update Manager and click the
Baselines and Groups tab. Click the Create link to start the New Baseline wizard. Enter a name
and description for your baseline. Select one of the five baseline types.
If you are creating a patch baseline, you must also select a patch option: Fixed or Dynamic.
A fixed baseline remains the same even if new patches are added to the repository. With a fixed
patch baseline, the user manually specifies all updates included in the baseline from all the patches
available in Update Manager. Fixed updates are typically used to check whether systems are
prepared to deal with particular problems. For example, you might use fixed baselines to check for
compliance with patches to prevent computer worms.
A dynamic baseline is updated when new patches meeting the specified criteria are added to the
repository. The criteria that you can specify are patch vendor, product, severity, and release dates. As
the set of available updates changes, dynamic patch baselines are updated as well. You can explicitly
include or exclude an update.
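A dynamic baseline behaves like a saved query over the patch repository with explicit include and exclude overrides — roughly like this (field names are illustrative):

```python
def dynamic_baseline(repo, criteria, include=(), exclude=()):
    # Re-evaluated whenever the repository changes: patches matching
    # the criteria are in, minus explicit exclusions, plus explicit
    # inclusions.
    matched = {p["id"] for p in repo
               if p["vendor"] == criteria["vendor"]
               and p["severity"] in criteria["severities"]}
    return (matched - set(exclude)) | set(include)
```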
To view compliance information and remediate objects in the inventory against specific baselines
and baseline groups, attach existing baselines and baseline groups to these objects.
Although you can attach baselines and baseline groups to individual objects, attaching them to
container objects, such as folders, hosts, clusters, and datacenters, is more efficient. Attaching a
baseline to a container object attaches the baseline to all objects in the container. On the slide, a host
patch baseline named ESXi Host Update is attached to a cluster object named Lab Cluster. The host
patch baseline is attached to the two hosts in Lab Cluster: sc-goose01 and sc-goose02.
To attach baselines to ESXi hosts:
1. Go to the Hosts and Clusters inventory view and select an object, such as a host or cluster.
2. Click the Update Manager tab.
3. Click Attach.
4. Select the baselines or baseline group that you want to attach to the object.
To attach baselines to virtual machines, templates, and virtual appliances, go to the VMs and
Templates inventory view.
Scanning is the process in which attributes of a set of hosts, virtual machines, or virtual appliances
are evaluated against patches, extensions, and upgrades in the attached baselines and baseline
groups. You can configure Update Manager to scan virtual machines, virtual appliances, and ESXi
hosts against baselines and baseline groups by scheduling or manually initiating scans to generate
compliance information.
If the object that you select is a container object, all child objects are also scanned. The larger the
virtual infrastructure and the higher up in the object hierarchy that you begin the scan, the longer the
scan takes.
After you have an inventory object attached to a baseline, perform a scan by right-clicking the object
and selecting Scan for Updates. Or click the Scheduled Tasks button and create a scheduled task.
To schedule the scan, select Home > Management > Scheduled Tasks. In the toolbar, click New. In
the Schedule Task dialog box, select the task Scan for Updates. The Schedule a Scan wizard allows
you to define the object to scan, the type of scan to perform, and the time to perform the scan.
A scheduled task is useful because it can automatically scan an object for problems. This scan
catches new objects that do not match a defined baseline. Using a dynamic baseline, instead of a
fixed baseline, discovers new vulnerabilities and needed updates.
Slide callout: in this example, the scan found two noncompliant hosts.
To view compliance of different hosts or virtual machines with different Update Manager patch
baselines, select the object in the appropriate inventory view and click the Update Manager tab. To
view virtual machine compliance, you must use the VMs and Templates inventory view.
The results of the scan provide information on the degree of conformance with baselines and
baseline groups. Information includes the time the last scan was completed at this level and the total
number of compliant and noncompliant baselines. For each baseline or baseline group, the scan
results report the number of virtual machines, appliances, or hosts that are compliant, noncompliant,
or unknown.
On the slide, the hosts in the cluster named Lab Cluster were scanned. After viewing compliance
information, the next step is to remediate the host. Before remediation, you can perform an
additional step on host objects called staging.
Staging allows you to download the patches and extensions from the Update Manager server to the
ESXi hosts, without applying the patches and extensions immediately. Staging patches and
extensions speeds up the remediation process because the patches and extensions are already
available locally on the hosts. You can reduce the downtime during remediation by staging patches
and extensions whose installation requires that a host enter maintenance mode. Staging patches and
extensions itself does not require that the hosts enter maintenance mode.
You can remediate virtual machines, virtual appliances, and hosts by using either user-initiated
remediation or regularly scheduled remediation. To remediate an object, right-click the inventory
object and select Remediate.
For ESXi hosts in a cluster, the remediation process is sequential. When you remediate a cluster of
hosts and one of the hosts fails to enter maintenance mode, Update Manager reports an error and the
process fails. The hosts in the cluster that did get remediated stay at the updated level. The ones that
were to be remediated after the failed host are not updated. When you remediate hosts against
baseline groups containing an upgrade baseline and patch or extension baselines, the upgrade is
performed first.
For multiple clusters under a datacenter, the remediation processes run in parallel. If the remediation
process fails for one of the clusters in a datacenter, the remaining clusters are still remediated.
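The sequential, stop-on-failure behavior within a cluster can be sketched as follows (hypothetical logic, not Update Manager internals):

```python
def remediate_cluster(hosts, enter_maintenance, apply_updates):
    # Hosts in a cluster are remediated one at a time. A host that
    # cannot enter maintenance mode stops the process, leaving
    # already-remediated hosts updated and later hosts untouched.
    updated = []
    for host in hosts:
        if not enter_maintenance(host):
            return updated, host      # (hosts updated so far, failed host)
        apply_updates(host)
        updated.append(host)
    return updated, None              # all hosts updated
```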
To remediate virtual machines and virtual appliances together, they must be in one container, such as
a folder, a VMware vSphere® vApp™, or a datacenter. You must then attach a baseline group or a
set of individual virtual appliance or virtual machine baselines to the container. If you attach a
baseline group, it can contain both virtual machine and virtual appliance baselines. The virtual
machine baselines apply to virtual machines only. The virtual appliance baselines apply to virtual
appliances only.
Some updates require that a host enter maintenance mode before remediation. Virtual machines and
appliances cannot run while a host is in maintenance mode.
To reduce the host remediation downtime at the expense of virtual machine availability, you can
choose to shut down or suspend virtual machines and virtual appliances before remediation. In a
VMware vSphere® Distributed Resource Scheduler™ (DRS) cluster, if you do not power off the
virtual machines, the remediation takes longer but the virtual machines are available during the entire
remediation process, because they are migrated with VMware vSphere® vMotion® to other hosts.
Select Retry entering maintenance mode in case of failure, specify the number of retries, and
specify the time to wait between retries. Update Manager waits for the retry delay period and retries
putting the host into maintenance mode as many times as you indicate in the Number of retries field.
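The retry behavior can be sketched as follows (an illustrative model; the callback and parameters are hypothetical):

```python
import time

def retry_maintenance_mode(try_enter, retries, delay_s, sleep=time.sleep):
    # One initial attempt plus up to `retries` retries, waiting
    # `delay_s` between attempts. Returns True as soon as the host
    # enters maintenance mode, False if every attempt fails.
    for attempt in range(retries + 1):
        if try_enter():
            return True
        if attempt < retries:
            sleep(delay_s)
    return False
```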
Update Manager does not remediate hosts on which virtual machines have connected CD, DVD, or
floppy drives. In clustered environments, connected media devices might prevent vMotion if the
destination host does not have an identical device or mounted ISO image, which in turn prevents the
source host from entering maintenance mode.
15
Patch Management
Remediation of hosts in a cluster requires that you temporarily disable cluster features like DPM and
vSphere HA admission control. You should also turn off VMware vSphere Fault Tolerance if it is
enabled on any of the virtual machines on a host. Disconnect the removable devices connected to
the virtual machines on a host, so that they can be migrated with vMotion.
Before you start the remediation process, you can generate a report that shows which cluster, host,
or virtual machine has cluster features enabled. On the Cluster Remediation Options page of the
Remediate wizard, click Generate Report. The Cluster Remediation Options Report shows the
name of the cluster, host, or virtual machine on which a problem is reported. The report also
displays recommendations on how to fix the problem.
When a patch is recalled, Update Manager:
• No longer applies the recalled patch to any host: the patch is flagged as recalled in the database.
• Deletes the patch binaries from its patch repository.
• Does not uninstall recalled patches from ESXi hosts: instead, it waits for a newer patch and
applies that to make a host compliant.
Typically, hosts are put into maintenance mode before remediation if the update requires it. Virtual
machines cannot run when a host is in maintenance mode. vCenter Server migrates the virtual
machines to other hosts in a cluster before the noncompliant host is put in maintenance mode.
vCenter Server can migrate the virtual machines if the cluster is configured for vMotion and if DRS
and Enhanced vMotion Compatibility (EVC) are enabled. EVC is not a prerequisite for VMware
vSphere® Storage vMotion® migration. EVC guarantees that the CPUs of the hosts are compatible.
For other containers or individual hosts that are not in a cluster, migration with vMotion cannot be
performed.
Update Manager 5.1 can patch and upgrade your ESXi hosts based on available cluster capacity and
can remediate an optimal number of ESXi hosts simultaneously without virtual machine downtime.
Additionally, for scenarios where turnaround time is more important than virtual machine uptime,
you have the choice to remediate all ESXi hosts in a cluster simultaneously.
In this lab, you will install, configure, and use Update Manager.
1. Install Update Manager.
2. Install the Update Manager plug-in into the vSphere Client.
3. Modify cluster settings.
4. Configure Update Manager.
5. Create a patch baseline.
6. Attach a baseline and scan for updates.
7. Stage the patches onto the ESXi hosts.
8. Remediate the ESXi hosts.
Update Manager patches and updates ESXi 5.1 hosts, as well as earlier versions of hosts, virtual
machines, templates, and virtual appliances.
Update Manager reduces security vulnerabilities by keeping systems up to date and by reducing the
diversity of systems in an environment.
Update Manager no longer patches guest operating systems or the applications running within guest
operating systems.
Questions?
Module 16
VMware Management Assistant
A few methods exist for accessing the command prompt on a VMware ESXi™ host.
VMware® recommends that you use VMware vSphere® Command-Line Interface (vCLI) or
VMware vSphere® Management Assistant (vMA) to run commands against your ESXi hosts. Run
commands directly in VMware vSphere® ESXi™ Shell only in troubleshooting situations.
An ESXi system includes a direct console user interface (DCUI) that enables you to start and stop
the system and perform a limited set of maintenance and troubleshooting tasks. The DCUI includes
ESXi Shell, which is disabled by default. You can enable ESXi Shell in the DCUI or by using
VMware vSphere® Client™.
You can enable local shell access or remote shell access:
• Local shell access enables you to log in to the shell directly from the DCUI.
• Secure Shell (SSH) provides remote shell access, enabling you to connect to the host with an SSH client, such as PuTTY.
ESXi Shell includes all ESXCLI commands, a set of deprecated esxcfg- commands, and a set of
commands for troubleshooting and remediation.
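For illustration, a few representative commands of each kind, as they might be run in ESXi Shell in a troubleshooting situation (this is a sketch requiring a live ESXi host; the datastore name is a placeholder):

```shell
# Representative ESXi Shell commands (troubleshooting situations only).
esxcli system version get                # an ESXCLI namespace command
esxcfg-nics -l                           # a deprecated esxcfg- compatibility command
vmkfstools -P /vmfs/volumes/datastore1   # a troubleshooting utility; datastore name is a placeholder
```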
To access ESXi Shell locally, you must have physical access to the
DCUI and administrator privileges.
By default, the local ESXi Shell is disabled:
Enable the local ESXi Shell from the DCUI or from the VMware
vSphere® Client.
After you enable ESXi Shell access, you can access the local shell:
In the main DCUI screen, press Alt+F1 to open a virtual console
window to the host.
Local users with administrator privileges automatically have local
shell access:
If you have access to the DCUI, you can enable the ESXi Shell from there.
To enable the ESXi Shell in the DCUI:
1. In the DCUI of the ESXi host, press F2 and provide credentials when prompted.
You can access ESXi Shell remotely with a Secure Shell (SSH) client, such as PuTTY.
The SSH service must be enabled first.
This service is disabled by default.
Disable SSH access when you are done using it.
If you enable SSH access, do so only for a limited time. SSH should never be left open on an ESXi
host in a production environment.
If SSH is enabled for the ESXi Shell, you can run shell commands by using an SSH client, such as OpenSSH or PuTTY.
To enable SSH from the vSphere Client:
5. Change the SSH options. To change the Startup policy across reboots, click Start and stop
with host and reboot the host.
6. Click OK.
5. Change the ESXi Shell options. To change the Startup policy across reboots, click Start and
stop with host and reboot the host.
6. Click OK.
The ESXi Shell timeout setting specifies how long, in minutes, you can leave an unused session
open. By default, the timeout for the ESXi Shell is 0, which means the session remains open even if
it is unused. If you change the timeout, for example, to 30 minutes, you have to log in again after the
timeout period has elapsed.
To modify the ESXi Shell Timeout:
4. Click OK.
vCLI provides a command-line interface for ESXi hosts. Multiple ESXi hosts can be managed from
a central system on which vCLI is installed.
Normally, vCLI commands require you to enter options that specify the server name, the user name,
and the password for the server that you want to run the command against. Methods exist that enable
you to bypass entering the user name and password options, and, sometimes, the server name
option. Two of these methods are described later in this module.
For details about vCLI, see Getting Started with vSphere Command-Line Interfaces and vSphere Command-Line Interface Concepts and Examples at http://www.vmware.com/support/pubs/vsphere-esxi-vcenter-server-pubs.html.
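For illustration, a vCLI command with the connection options spelled out might look like the following sketch (this requires a system with vCLI installed; the server name and credentials are placeholders):

```shell
# Hypothetical vCLI invocation from the system where vCLI is installed.
# Server name, user name, and password are placeholders.
vicfg-nics --server esxi01.example.com --username root --password 'vmware1!' --list
```

Omitting --password causes the command to prompt for it without echoing the input.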
vMA is a downloadable appliance that includes several components, including vCLI. vMA enables
administrators to run scripts or agents that interact with ESXi hosts and VMware® vCenter
Server™ systems without having to authenticate each time. vMA is easy to download, install, and
configure through the vSphere Client.
Hardware requirements:
• AMD Opteron, rev E or later
• Intel processors with EM64T and VT enabled
Software requirements:
vMA can be deployed on the following:
• vSphere ESX 4.0 Update 2 or later
• vSphere ESXi 4.1, 5.0, and 5.1
• vCenter Server 4.0 Update 2 or later
• vCenter Server 4.1, 5.0, and 5.1
By default, vMA uses the following:
• One virtual processor
• 600MB of RAM
• 3GB virtual disk
To set up vMA, you must have an ESXi host. Because vMA runs a 64-bit Linux guest operating
system, the ESXi host on which it runs must support 64-bit virtual machines.
The 3GB virtual disk size requirement might increase, depending on the extent of centralized
logging enabled on the vMA appliance.
The recommended memory for vMA is 600MB.
3. Add target servers: the vCenter Server system, ESXi hosts, or both.
4. Initialize vi-fastpass authentication.
vMA commands directly targeted at the hosts are sent using the VMware vSphere® SDK for Perl
API. Commands sent to the host through the vCenter Server system are first sent to the vCenter
Server system, using the vSphere SDK for Perl API. Using a private protocol that is internal to
vCenter Server, commands are sent from the vCenter Server system to the host.
vMA is deployed like any other virtual appliance. After the appliance is deployed to the
infrastructure, the user can power it on and start configuring vMA.
The vMA appliance is available from the download page on the VMware Web site.
After vMA is deployed, your next step is to configure the appliance. When you start the vMA
virtual machine the first time, you can configure it. The appliance can be configured either by
opening a console to the appliance or by pointing a Web browser to the appliance.
The vi-admin account is the administrative account on the vMA appliance and exists by default.
During the initial power-on, you are prompted to choose a password for this user account.
Although the vMA appliance is Linux-based, logging in as root has been disabled.
3. Run vifp listservers to verify that the vCenter Server system has
been added as a target.
After you configure vMA, you can add target servers that run the supported vCenter Server or ESXi
versions. The vifp interface enables administrators to add, list, and remove target servers and to
manage the vi-admin user’s password.
After a server is added as a vMA target, you must run the vifptarget command. This command
enables seamless authentication for remote vCLI and vSphere SDK for Perl API commands. Run
vifptarget <server> before you run vCLI commands or vSphere SDK for Perl scripts against
that system. The system remains a vMA target across vMA reboots, but running vifptarget again
is required after each logout.
You can establish multiple servers as target servers and then call vifptarget once to initialize all
servers for vi-fastpass authentication. You can then run commands against any target server without
additional authentication. You can use the --server option to specify the server on which to run
commands.
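Put together, a first vMA session might look like the following sketch (this assumes a deployed vMA appliance; the server names are placeholders, and vifp addserver prompts for credentials):

```shell
# Sketch: register targets, enable vi-fastpass, then run a command.
vifp addserver vc01.example.com       # add a vCenter Server system as a target
vifp addserver esxi01.example.com     # add an ESXi host as a target
vifp listservers                      # verify that both targets are registered
vifptarget -s esxi01.example.com      # initialize vi-fastpass for this session
esxcli network nic list               # runs against esxi01 without further authentication
```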
The vMA authentication interface enables users and applications to authenticate with the target
servers by using vi-fastpass or Active Directory (AD). While adding a server as a target, the
administrator can determine whether the target must use vi-fastpass or AD authentication. For vi-
fastpass authentication, the credentials that a user has on the vCenter Server system or ESXi host are
stored in a local credential store. For AD authentication, the user is authenticated with an AD server.
When you add an ESXi host as a fastpass target server, vi-fastpass creates two users with obfuscated
passwords on the target server and stores the password information on vMA:
• vi-admin with administrator privileges
• vi-user with read-only privileges
The creation of vi-admin and vi-user does not apply for AD authentication targets. When you add a
system as an AD target, vMA does not store information about the credentials. To use the AD
authentication, the administrator must configure vMA for AD.
vMA can be configured for Active Directory (AD), so the ESXi hosts
and vCenter Server systems can be added to vMA without having to
store passwords in the vMA credential store.
Configure vMA for Active Directory authentication so that ESXi hosts and vCenter Server systems
added to Active Directory can be added to vMA. Joining the vMA to Active Directory prevents you
from having to store the passwords in the vMA credential store. This approach is a more secure way
of adding targets to vMA.
Ensure that the DNS server configured for vMA is the same as the DNS server of the domain. You
can change the DNS server by using the vMA console or the Web UI.
Ensure that the domain is accessible from vMA. Ensure that you can ping the ESXi and vCenter Server systems that you want to add to vMA, and that the ping resolves the target server's domain name to its IP address.
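These checks can be performed from the vMA console; for example (a sketch with placeholder host names):

```shell
# Verify reachability and name resolution of a target before adding it.
ping -c 2 esxi01.vmeduc.com    # the target must be reachable from vMA
nslookup esxi01.vmeduc.com     # its name must resolve through the domain's DNS server
```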
To add vMA to a domain:
3. Restart vMA.
For many of the vCLI commands, you might have used corresponding service console commands with an esxcfg prefix to manage ESX 3.x hosts. To facilitate easy migration from ESX/ESXi 3.x to later versions of ESXi, a copy of each vicfg- command that uses an esxcfg- prefix is included in the vCLI package.
Commands with the esxcfg prefix are available mainly for compatibility reasons and might
become obsolete.
esxcfg-advcfg    vicfg-advcfg
esxcfg-cfgbackup vicfg-cfgbackup
esxcfg-nics      vicfg-nics
esxcfg-vswitch   vicfg-vswitch
Host management commands can stop and reboot ESXi hosts, back up configuration information,
and manage host updates. You can also use a host management command to make your host join an
AD domain or exit from a domain.
--passthroughauth         Use Microsoft Windows Security Support Provider Interface (SSPI).
--passthroughauthpackage  Specifies the domain-level authentication protocol to be used.
The slide lists options that are available for all vCLI commands.
--savesessionfile Saves a session to the specified file. The session expires if it has been
unused for 30 minutes.
--server Uses the specified ESXi or vCenter Server system. Default is localhost.
--sessionfile Uses the specified session file to load a previously saved session. The
session must be unexpired.
--url Connects to the specified vSphere Web Services SDK URL.
--username Uses the specified user name. If you do not specify a user name and
password on the command line, the system prompts you and does not
echo your input to the screen.
--vihost When you run a vCLI command with the --server option pointing to a vCenter Server system, use --vihost to specify the ESXi host to run the command against.
Examples:
vicfg-hostops --server esxi01 --username mike --password vmware1! --operation shutdown
vicfg-hostops --server esxi01 --username mike --password vmware1! --operation reboot --force
vicfg-hostops --server esxi01 --username mike --operation shutdown --cluster "LabCluster"
The command prompts for user names and passwords if you do not specify them.
An ESXi host can be shut down and restarted by using the vicfg-hostops command options. If a host managed by vCenter Server is shut down by this command, the host is disconnected from vCenter Server but not removed from the inventory.
No equivalent ESXCLI command is available.
You can shut down or reboot all hosts in a cluster or datacenter by using the --cluster or
--datacenter option.
In the first and second examples, the connection options (<conn_options>) used are --server,
--username, and --password. But in the third example, the --password option is omitted. In
this case, you are prompted to enter the password when you run this command.
NOTE
vicfg- commands will be deprecated in future releases. Use esxcli commands instead where
possible.
Examples:
vicfg-hostops --server vc01 --username administrator --operation info --cluster "LabCluster"
vicfg-hostops --server vc01 --username administrator --operation enter --action poweroff
A host can be placed in maintenance mode by using the vicfg-hostops command. When the
command is run, the host does not enter maintenance mode until all of the virtual machines running
on the host are either shut down, migrated, or suspended.
vicfg-hostops does not work with VMware vSphere® Distributed Resource Scheduler™ (DRS).
You can put all hosts in a cluster or datacenter in maintenance mode by using the --cluster or
--datacenter option.
The --operation info option can be used to check whether the host is in maintenance mode or
in the Entering Maintenance Mode state.
esxcli namespace
esxcli fcoe namespace
esxcli hardware namespace
esxcli iscsi namespace
esxcli license namespace
esxcli network namespace
esxcli software namespace
esxcli storage namespace
esxcli system namespace
esxcli vm namespace
You can manage many aspects of an ESXi host with the ESXCLI command set. You can run
ESXCLI commands as vCLI commands or run them in the ESXi Shell in troubleshooting situations.
The slide lists the hierarchy of name spaces and commands for each ESXCLI name space.
NOTE
vicfg- commands will be deprecated in future releases. Use esxcli commands instead where possible.
Use the esxcli command with the vm namespace to list all the
virtual machine processes.
esxcli <conn_options> vm process list
With the esxcli vm command you can display all the virtual machine processes on the ESXi system.
This command lists only running virtual machines on the system.
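In a troubleshooting situation, the same namespace can also stop an unresponsive virtual machine. A sketch, with a placeholder server name and world ID (the kill types soft, hard, and force are progressively more forceful):

```shell
# List running virtual machine worlds, then stop one by its world ID.
# Server name and world ID are placeholders.
esxcli --server esxi01.example.com --username root vm process list
esxcli --server esxi01.example.com --username root vm process kill --type soft --world-id 12345
```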
The resxtop command enables command-line monitoring and collection of data for all system
resources: CPU, memory, disk, and network. When used interactively, this data can be viewed on
different types of screens, one each for CPU statistics, memory statistics, network statistics, and disk
adapter statistics. This data includes some metrics and views that cannot be accessed using the
overview or advanced performance charts. The three modes of execution for resxtop are the
following:
• Interactive mode (the default mode) – All statistics are displayed as they are collected, showing
how the ESXi host is running in real time.
• Batch mode – Statistics are collected so that the output can be saved in a file and processed later.
• Replay mode – Data that was collected by the vm-support command is interpreted and played
back as resxtop statistics. This mode does not process the output of batch mode.
For more details on resxtop, see vSphere Resource Management Guide at
http://www.vmware.com/support/pubs.
To run resxtop interactively, you must first log in to a system with VMware vSphere® Command-
Line Interface (vCLI) installed. Download and install a vCLI package on a Linux host or deploy
VMware vSphere® Management Assistant (vMA) to your ESXi host. vMA is a preconfigured
Linux appliance. Versions of the vCLI package are available for Linux and Windows systems.
However, because resxtop is based on a Linux tool, it is only available in the Linux version of
vCLI.
After vCLI is set up and you have logged in to the vCLI system, start resxtop from the command
prompt. For remote connections, you can connect to an ESXi host either directly or through vCenter
Server.
resxtop has the following connection parameters:
--server [server] --username [username] --password [password] --vihost
[vihost]
[server] – A required field that refers to the name of the remote host to connect to. If connecting
directly to the ESXi host, use the name of that host. If your connection to the ESXi host is indirect
(that is, through vCenter Server), use the name of the vCenter Server system for this option.
The following command line is another example of running resxtop to monitor the ESXi
host named esxi01.vmeduc.com. However, this time the user logs directly in to the ESXi host as
user root:
# resxtop --server esxi01.vmeduc.com --username root
In both examples, you are prompted to enter the password of the user that you are logging in as, for
example, administrator or root.
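For the indirect case, a sketch of the same monitoring session routed through vCenter Server (host names follow the example above; --vihost selects the managed ESXi host):

```shell
# Monitor esxi01 indirectly: --server points at vCenter Server and
# --vihost selects the managed ESXi host.
resxtop --server vc01.vmeduc.com --username administrator --vihost esxi01.vmeduc.com
```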
The slide shows sample resxtop output: the default CPU screen with host statistics, per-world statistics, and per-virtual-machine statistics. Interactive single-key commands include v (virtual disk view) and q (quit).
Here is an example of the output generated from resxtop. You can view several screens. The CPU
screen is the default. resxtop refreshes the screen every 5 seconds by default.
resxtop displays statistics based on worlds. A world is equivalent to a process in other operating
systems. A world can represent a virtual machine and a VMkernel component. The following
column headings help you understand worlds:
• ID – World ID. In some contexts, resource pool ID or virtual machine ID.
• GID – Resource pool ID of the running world’s resource pool or virtual machine.
• NAME – Name of running world. In some contexts, resource pool name or virtual machine
name.
To filter the output so that only virtual machines are shown, type V (uppercase V) in the resxtop
window. This command hides the system worlds so that you can concentrate on the virtual machine
worlds.
Use vm-support and resxtop to create a file with sampled
performance data and replay the file. For example:
resxtop can also be run in batch mode. In batch mode, the output is stored in a file, and the data
can be read by using the Windows Perfmon utility. You must prepare for running resxtop in batch
mode.
To prepare to run resxtop in batch mode
1. Start resxtop to redirect the output to a file, as shown in the slide. The filename must have a
.csv extension. The utility does not enforce this extension, but the postprocessing tools require
it.
2. Use tools like Microsoft Excel and Perfmon to process the statistics collected.
In batch mode, resxtop rejects interactive commands. In batch mode, the utility runs until it
produces the number of iterations requested (by using the -n option) or until you end the process by
pressing Ctrl+C.
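Batch-mode output is a Perfmon-style CSV: a header row of quoted counter names followed by one row per sample. As a minimal sketch, with a fabricated two-sample file standing in for real resxtop -b output, a single counter column can be extracted with awk before the file is loaded into Excel or Perfmon:

```shell
# Minimal sketch (fabricated two-sample file standing in for real
# "resxtop -b -n 12 > stats.csv" output): pull one counter column out of
# the quoted, comma-separated batch-mode CSV.
cat > stats.csv <<'EOF'
"Time","\\esxi01\Physical Cpu(_Total)\% Util Time"
"10:00:05","12.5"
"10:00:10","14.1"
EOF

# Print only the counter values: skip the header row, strip the quotes.
awk -F'","' 'NR > 1 { gsub(/"/, "", $2); print $2 }' stats.csv
```

The same one-liner generalizes to any column number once you know which counter it holds from the header row.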
1. Use the vm-support command to capture sampled performance data in a file. For example, the
vm-support -S -d 300 -l 30 command runs vm-support in snapshot mode. The -S
restricts the collection of diagnostic data, the -d 300 collects data for 300 seconds (five
minutes) and the -l 30 sets up a 30-second sampling interval.
2. Replay the file with the resxtop command. For example, resxtop -r <filename> replays the captured performance data in a resxtop window.
vm-support can be run from a remote command line. For details see:
• http://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&docType=kc&docTypeID=DT_KB_1_1&externalId=1010705
• http://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&docType=kc&externalId=1967&sliceId=1&docTypeID=DT_KB_1_1&dialogID=343976180&stateId=1%200%20343980403
In this lab, you will use vMA to manage networking, manage storage,
and monitor hosts.
1. Log in to vMA and connect to your vCenter Server and ESXi host.
2. Create a standard virtual switch.
3. Configure storage.
VMware Management Assistant
Questions?
Module 17
Installing VMware vSphere 5.1 Components
Lesson 1:
Installing ESXi
LUN with unpartitioned space: SATA, SCSI, or Serial Attached SCSI
In a typical interactive installation, you boot the ESXi installer and respond to the installer prompts
to install ESXi to the local host disk. The installer reformats and partitions the target disk and
installs the ESXi boot image. If you have not installed ESXi on the target disk before, all data
located on the drive is overwritten, including hardware vendor partitions, operating system
partitions, and associated data.
Observe the following considerations:
• In an interactive installation, the system prompts you for the required system information.
• Verify that the server hardware clock is set to UTC. This setting is in the system BIOS.
• Consider disconnecting your network storage. This action decreases the time that it takes the
installer to search for available disk drives. When you disconnect network storage, files on the
disconnected disks are unavailable at installation.
CAUTION
Do not disconnect a LUN that contains an existing ESXi installation. Do not disconnect a
VMware vSphere® VMFS datastore that contains the installation of another ESXi host. These
actions can affect the outcome of the installation.
1. Insert the ESXi installer CD/DVD into the CD/DVD drive, or attach the installer USB flash drive.
2. Restart the machine.
3. Set the BIOS to boot from the CD-ROM device or the USB flash drive.
You must have the ESXi 5.1 ISO file on CD, DVD, or USB flash drive
media.
Boot from the media to start the ESXi installer.
Make sure that you select a disk that is not formatted with VMware
vSphere® VMFS.
Be careful when choosing the disk on which to install ESXi. Do not rely on the disk order in the list
to select a disk. If the disk you selected contains data, the Confirm Disk Selection page is displayed.
For example, when installing ESXi on the local disk, the local disk might not be the first disk in
the list.
To select the disk on which to install ESXi:
1. On the Select a Disk page, select the drive on which to install ESXi and press Enter.
CAUTION
Upgraded systems do not use the GPT format but keep the older MS-DOS-based partition label.
Boot LUN
You can configure a boot LUN in situations where you do not want to configure local storage or are using diskless systems, such as blade servers. Consider the benefits of booting from SAN:
• Servers can be denser and run cooler without internal storage.
• You can replace servers and have the new server point to the old boot location.
• Servers without local disks often take up less rack space.
• You can back up the system boot images in the SAN as part of the overall SAN backup
procedures. You can also use advanced array features, such as snapshots, on the boot image.
• Creation and management of the operating system image is easier and more efficient.
• You can access the boot disk through multiple paths, which protects the disk from being a single
point of failure.
CAUTION
Multipathing to a boot LUN is supported only on active-active arrays.
cannot set up a diagnostic partition on the SAN LUN. Instead, you use the VMware vSphere®
Management Assistant (vMA) to collect diagnostic information from your host and store it for
analysis.
Lesson 2:
Installing vCenter Server
• Simple configuration through a Web browser
• Linux-based: offers the same user experience as the Windows-based version
The vSphere 5.1 Single Sign On feature simplifies the login process for the Cloud Infrastructure
Suite. You can log into the management infrastructure a single time through the vSphere Web Client
or the API. You can perform operations across all components of the Cloud Infrastructure Suite
without having to log into the components separately.
Single Sign On operates across all Cloud Infrastructure Suite components that support this feature. Its authentication services support multiple identity sources: Active Directory, LDAP, and NIS. Single Sign On is required for Inventory Service, vCenter Server, and VMware vSphere® Web Client.
The two installation modes available for Single Sign On are the following:
• Simple Install
• Individual component install
• Fully Qualified Domain Name – FQDN or IP address for the Single Sign On server.
• Service Account Information – Service account information for the service to run in.
To install vCenter Single Sign On as a new installation, create the only node in a basic vCenter
Single Sign On installation or the first node in a high availability or multisite installation.
Once the installation is complete, back up the vCenter Single Sign On configuration and database.
Single Sign On running on a separate host from vCenter Server has the following minimum
hardware requirements:
• Intel or AMD dual-core x64 processor with two or more logical cores
• 3GB of memory
• 2GB of disk storage
• 1Gbps network speed
Requirements are higher for disk and memory if the Single Sign On database runs on the same host
machine.
The Inventory Service stores vCenter Server application and inventory data, enabling you to search and access inventory objects across linked vCenter Server systems.
You can install the Inventory Service and vCenter Server together on a single host machine by using the vCenter Server Simple Install option. This option is appropriate for small deployments.
The Inventory Service running on a separate host has the following hardware requirements:
• Intel or AMD x64 processor with two or more logical cores each with a speed of 2GHz
• 3GB of memory
• 2GB of disk storage
• 1Gbps network speed
vCenter Server requires a database to store and organize server data. vCenter Server supports SQL
Server, Oracle, and IBM DB2 databases. You must have administration credentials to log in to these
databases. Contact your database administrator for these credentials.
Or you can install the bundled Microsoft SQL Server 2008 Express database. This database is
intended to be used for small deployments of up to 5 hosts and 50 virtual machines.
VMware vSphere® Update Manager™ also requires a database. Update Manager can use the
vCenter Server database. But VMware recommends using one database for vCenter Server and
another database for Update Manager. For smaller deployments, you might not require a separate
database for Update Manager.
For more about the vCenter Server database requirements, see the documentation at
http://www.vmware.com/support/pubs.
The size of the database varies with the number of hosts and virtual machines to manage and the
number of statistics to be collected. VMware provides tools to help you estimate the size of your
database.
The VMware vCenter Server 5.x Database Sizing Calculator (for Microsoft SQL Server or Oracle)
is an Excel spreadsheet that estimates the size of the vCenter Server database. This estimate is
calculated from the information that you enter, such as the number of hosts and virtual machines.
vCenter Server also provides you with a database estimation calculator in which you enter the
number of hosts and virtual machines in your inventory. The what-if calculator uses these numbers
to determine how much database space is required for the collection interval configuration that you
defined.
To access the what-if calculator:
2. Click Statistics in the left pane. The calculator does not change the size of the vCenter Server
database.
Before beginning the vCenter Server installation, make sure that the
following prerequisites are met:
Ensure that vCenter Server hardware and software requirements are
met.
Ensure that the vCenter Server system belongs to a domain rather than
a workgroup.
Create a vCenter Server database, unless you are using the default
database.
Obtain and assign a static IP address and a host name to the vCenter
Server system.
Before you begin the vCenter Server installation procedure, make sure that the following
prerequisites are met:
• Make sure that the system that you use for vCenter Server meets the hardware and software
requirements.
• Make sure that the system that you use for vCenter Server belongs to a domain and not a
workgroup. If the system is assigned to a workgroup, vCenter Server is unable to discover all
domains and systems available on the network when using such features as Guided
Consolidation.
• Create a vCenter Server database, unless you want to use SQL Server 2008 Express, the default
vCenter Server database.
• Obtain and assign a static IP address and host name to the Windows server that will host
vCenter Server. This IP address must have a valid (internal) DNS registration that resolves
properly from all managed ESXi hosts.
• You can deploy vCenter Server behind a firewall. But make sure that you do not have a network
address translation firewall between vCenter Server and the hosts that it will manage.
Use the VMware® vCenter Installer to install vCenter Server and its
components.
• Install vCenter Server.
• Install vSphere Update Manager.
• Install vSphere Client.
To install vCenter Server and its components, use the VMware vCenter Installer. The VMware
vCenter Installer enables you to install the vCenter Server software, the vSphere Client, and the
server components of vCenter Server modules.
To start the VMware vCenter Installer:
When first setting up your vCenter Server Linked Mode group, you must install the first vCenter
Server instance as a standalone instance. The reason for this requirement is that you do not yet have
a remote vCenter Server machine to join. Subsequent vCenter Server instances can join the first
vCenter Server instance or other vCenter Server instances that have joined the Linked Mode group
(as shown on the slide).
NOTE
DNS must be operational for Linked Mode replication to work.
The vCenter Server Installation wizard asks for the following data:
• Ephemeral port configuration (vCenter Server Web service) – Select if vCenter Server will manage hosts that power on more than 2,000 virtual machines simultaneously.
1. Click the vCenter Server link in the VMware vCenter Installer main window. The vCenter
Server installation wizard prompts you for the following information:
• User name, organization, and license key – If you omit the license key, vCenter Server is
installed in evaluation mode. After installation, you can use the vSphere Client to enter the
vCenter Server license.
• Database information – On the Database Options page of the vCenter Server installer, you
must choose between the default database or an existing supported database.
If you choose to use an existing SQL Server database, you must create a DSN. The DSN
contains specific information about the database that the ODBC driver requires to connect
to it. If you are using an existing supported database, you are also prompted to enter the
database user name and password.
• SYSTEM account or user-specified account – The vCenter Server Service page of the
vCenter Server installer gives you the option to use the Windows SYSTEM account or a
user-specified account for running the vCenter Server service.
Instead of using the vCenter Server Appliance, you can install vCenter Server on
a Windows system.
After vCenter Server is installed, a number of services start on reboot and can be
managed from Windows Control Panel (Administrative Tools > Services).
After vCenter Server is installed, several new Windows services are visible on the vCenter Server
system:
• VMware® vCenter™ Orchestrator™ Configuration – A service for VMware® vCenter™
Orchestrator™, a workflow engine that can help administrators automate existing manual tasks.
• VMware VirtualCenter Management Webservices – Enables configuration of vCenter
management services.
• VMware VirtualCenter Server – The heart of vCenter Server, this service provides centralized
management of virtual machines and ESXi hosts.
• VMware VCMSDS – Provides vCenter Server LDAP directory services.
The VMware Tools Service (shown on the slide) is not installed during the vCenter Server
installation. It is installed when VMware® Tools™ is installed into the guest operating system of a
virtual machine. VMware Upgrade Helper is not installed during the vCenter Server installation. It
is a service that VMware Tools uses whenever a virtual machine’s hardware is upgraded to a newer
version.
Lesson 3:
vCenter Linked Mode
vCenter Linked Mode enables VMware vSphere® administrators to search across multiple vCenter
Server system inventories from a single VMware vSphere® Client™ session. For example, you
might want to simplify management of inventories associated with remote offices or multiple
datacenters. Likewise, you might use vCenter Linked Mode to configure a recovery site for disaster
recovery purposes.
vCenter Linked Mode enables you to do the following:
• Log in simultaneously to all vCenter Server systems for which you have valid credentials
• Search the inventories of all vCenter Server systems in the group
• Search for user roles on all the vCenter Server systems in the group
• View the inventories of all vCenter Server systems in the group in a single inventory view
With vCenter Linked Mode, you can have up to ten linked vCenter Server systems and up to
3,000 hosts across the linked vCenter Server systems. For example, you can have ten linked
vCenter Server systems, each with 300 hosts, or five vCenter Server systems with 600 hosts
each. But you cannot have two vCenter Server systems with 1,500 hosts each, because that
exceeds the limit of 1,000 hosts per vCenter Server system. vCenter Linked Mode supports
30,000 powered-on virtual machines (and 50,000 registered virtual machines) across linked
vCenter Server systems.
The following data is replicated across the members of a Linked Mode group:
• Connection information
• Certificates and thumbprints
• Licensing information
• User roles
For inventory searches, vCenter Linked Mode relies on a Java-based Web application called the
query service, which runs in Tomcat Web services.
When you search for an object, the following operations take place:
1. The vSphere Client logs in to a vCenter Server system.
2. The vSphere Client obtains a ticket to connect to the local query service.
3. The local query service connects to the query services on other vCenter Server instances to do a
distributed search.
4. Before returning the results, vCenter Server filters the search results according to permissions.
The search service queries Active Directory (AD) for information about user permissions. So you
must be logged in to a domain account to search all vCenter Server systems in vCenter Linked
Mode. If you log in with a local account, searches return results only for the local vCenter Server
system, even if it is joined to other systems in a Linked Mode group.
To search the inventory, click the icon in the Search Inventory field at the top right of the vSphere Client window.
vCenter Linked Mode is implemented through the vCenter Server installer. You can create a Linked
Mode group during vCenter Server installation. Or you can use the installer to add or remove
instances.
When adding a vCenter Server instance to a Linked Mode group, the user running the installer must
be an administrator. Specifically, the user must be a local administrator on the machine where
vCenter Server is being installed and on the target machine of the Linked Mode group. Generally,
the installer must be run by a domain user who is an administrator on both systems.
The following requirements apply to each vCenter Server system that is a member of a Linked
Mode group:
• Do not join a version 5.1 vCenter Server to earlier versions of vCenter Server, or an earlier
version of vCenter Server to a version 5.1 vCenter Server. Upgrade any vCenter Server instance
to version 5.1 before joining it to a version 5.1 vCenter Server. The vSphere Client does not
function correctly with vCenter Server systems in groups that mix version 5.1 and earlier
versions of vCenter Server.
• DNS must be operational for Linked Mode replication to work.
For more about Linked Mode requirements, see vCenter Server and Host Management Guide at http://www.vmware.com/support/pubs/vsphere-
esxi-vcenter-server-pubs.html.
During the installation of vCenter Server, you have the option of joining a Linked Mode group. If
you do not join during installation, you can join a Linked Mode group after vCenter Server has
been installed.
To join a vCenter Server system to a Linked Mode group:
1. Select Start > Programs > VMware > vCenter Server Linked Mode Configuration. Click
Next.
2. Select Modify linked mode configuration. Click Next.
3. Click Join this vCenter Server instance to an existing linked mode group or another
instance. Click Next.
4. When prompted, enter the remaining networking information. Click Finish. The vCenter Server
instance is now part of a Linked Mode group.
After you form a Linked Mode group, you can log in to any single instance of vCenter Server. From
that single instance, you can view and manage the inventories of all the vCenter Server instances in
the group. Changes to the group's shared data become visible on the other instances after a short
delay, usually less than 15 seconds. A new vCenter Server instance might take a few minutes to be
recognized and published by the existing instances, because group members do not read the global
data often.
You can check the status of vCenter Server services to identify and correct failures.
• On the vSphere Client Home page, click vCenter Service Status. The vCenter Service Status
page enables you to view information that includes:
• A list of all vCenter Server systems and their services
• A list of all vCenter Server plug-ins
• The status of all listed items
• The date and time of the last change in status
• Messages associated with the change in status.
When you join a vCenter Server system to a Linked Mode group, the roles defined on each vCenter
Server system in the group are replicated to the other systems in the group.
If the roles defined on each vCenter Server system are different, the role lists of the systems are
combined into a single common list. For example, vCenter Server 1 has a role named Role A and
vCenter Server 2 has a role named Role B. Both systems will have both Role A and Role B after the
systems are joined in a Linked Mode group.
If two vCenter Server systems have roles with the same name, the roles are combined into a single
role if they contain the same privileges on each vCenter Server system. If two vCenter Server
systems have roles with the same name that contain different privileges, this conflict must be
resolved by renaming at least one of the roles. You can choose to resolve the conflicting roles either
automatically or manually.
If you choose to reconcile the roles automatically, the role on the joining system is renamed to
<vCenter_Server_system_name> <role_name>. <vCenter_Server_system_name> is the name of the
vCenter Server system that is joining the Linked Mode group, and <role_name> is the name of the
original role. To reconcile the roles manually, connect to one of the vCenter Server systems with the
vSphere Client. Then rename one instance of the role before joining the vCenter Server system to
the Linked Mode group. If you remove a vCenter Server system from a Linked Mode group, the
vCenter Server system retains all the roles that it had as part of the group.
You can isolate a vCenter Server instance from a Linked Mode group in two ways:
• Use the vCenter Server Linked Mode Configuration wizard.
• Use Windows Add/Remove Programs.
You can also isolate (remove) a vCenter Server instance from a Linked Mode group, for example, to
manage the vCenter Server instance as a standalone instance. One way to isolate the instance is to
start the vCenter Server Linked Mode Configuration wizard. Another way is to use the Windows
Add/Remove Programs (click Change) in the vCenter Server system operating system. Either
method starts the vCenter Server wizard. Modify the vCenter Server configuration as shown on the
slide.
To use the vCenter Server Linked Mode Configuration wizard to isolate a vCenter Server
instance from a Linked Mode group:
1. Select Start > All Programs > VMware > vCenter Server Linked Mode Configuration.
2. Select Modify linked mode configuration. Click Next.
3. Click Isolate this vCenter Server instance from linked mode group. Click Next.
4. Click Continue.
5. Click Finish. The vCenter Server instance is no longer part of the Linked Mode group.
Lesson 5:
Image Builder
An ESXi image is a customizable software bundle that contains all the software necessary to run
an ESXi host.
An ESXi image includes the following:
• The base ESXi software, also called the core hypervisor
• Specific hardware drivers
• Common Information Model (CIM) providers
• Specific applications or plug-in components
An ESXi image can be installed on a hard disk, or it can run entirely in memory.
The ESXi image includes one or more VMware installation bundles (VIBs).
A VIB is an ESXi software package. In VIBs, VMware and its partners package solutions, drivers,
CIM providers, and applications that extend the ESXi platform. An ESXi image should always
contain one base VIB. Other VIBs can be added to include additional drivers, CIM providers,
updates, patches, and applications.
The challenge of using a standard ESXi image is that the image might be missing desired
functionality, such as a CIM provider, a hardware driver, or a vendor plug-in that the base
providers and base drivers in the standard ESXi ISO image do not include.
Standard ESXi images are provided by VMware and are available on the VMware Web site. ESXi
images can also be provided by VMware partners.
The challenge that administrators face when using the standard ESXi image provided by VMware is
that the standard image is sometimes limited in functionality. For example, the standard ESXi image
might not contain all the drivers or CIM providers for a specific set of hardware. Or the standard
image might not contain vendor-specific plug-in components.
To create an ESXi image that contains custom components, use Image Builder.
Image Builder is a utility for customizing ESXi images. Image Builder consists of a server and
VMware vSphere® PowerCLI™ cmdlets. These cmdlets are used to create and manage VIBs,
image profiles, software depots, and software channels. Image Builder cmdlets are implemented as
Microsoft PowerShell cmdlets and are included in vSphere PowerCLI. Users of Image Builder
cmdlets can use all vSphere PowerCLI features.
To use Image Builder, the first step is to install vSphere PowerCLI and all prerequisite software. The
Image Builder snap-in is included with the vSphere PowerCLI installation.
To install Image Builder, you must install:
• Microsoft .NET 2.0
• Microsoft PowerShell 1.0 or 2.0
• vSphere PowerCLI, which includes the Image Builder cmdlets
After you start the vSphere PowerCLI session, the first task is to verify that the execution policy is set
to Unrestricted. For security reasons, Windows PowerShell supports an execution policy feature. It
determines whether scripts are allowed to run and whether they must be digitally signed. By default,
the execution policy is set to Restricted, which is the most secure policy. If you want to run scripts or
load configuration files, you can change the execution policy by using the Set-ExecutionPolicy
cmdlet. To view the current execution policy, use the Get-ExecutionPolicy cmdlet.
The next task is to connect to your vCenter Server system. The Connect-VIServer cmdlet enables
you to start a new session or reestablish a previous session with a vSphere server.
For more about installing Image Builder and its prerequisite software, see vSphere Installation and
Setup Guide. For more about vSphere PowerCLI, see vSphere PowerCLI Installation Guide. Both
books are at http://www.vmware.com/support/pubs/vsphere-esxi-vcenter-server-pubs.html.
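As a sketch, the session setup described above might look like the following in a vSphere PowerCLI window. The server name and credentials are placeholders for your environment, and these cmdlets require a reachable vCenter Server system.

```powershell
# View the current execution policy.
Get-ExecutionPolicy

# Allow scripts and configuration files to load, as described above.
Set-ExecutionPolicy Unrestricted

# Start a session with a vCenter Server system (placeholder values).
Connect-VIServer -Server vc01.example.com -User administrator -Password 'VMware1!'
```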
Before you create or customize an ESXi image, Image Builder must be able to access one or more
software depots.
The Add-EsxSoftwareDepot cmdlet enables you to add software depots or offline bundle ZIP
files to Image Builder. A software depot consists of one or more software channels. By default, this
cmdlet adds all software channels in the depot to Image Builder. The Get-EsxSoftwareChannel
cmdlet retrieves a list of currently connected software channels. The Remove-EsxSoftwareDepot
cmdlet enables you to remove software depots from Image Builder.
After adding the software depot to Image Builder, verify that you can read the software depot.
The Get-EsxImageProfile cmdlet retrieves a list of all published image profiles in the
software depot.
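As a minimal sketch of these steps, assuming an offline bundle ZIP file downloaded to a local path (the path is a placeholder):

```powershell
# Add the offline bundle as a software depot for Image Builder.
Add-EsxSoftwareDepot C:\depot\ESXi510-depot.zip

# Verify that the depot is readable by listing its published image profiles.
Get-EsxImageProfile | Select-Object Name, Vendor, AcceptanceLevel
```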
Other Image Builder cmdlets that might be useful include Set-EsxImageProfile and Compare-
EsxImageProfile. The Set-EsxImageProfile cmdlet modifies a local image profile and
performs validation tests on the modified profile. The cmdlet returns the modified object but does
not persist it. The Compare-EsxImageProfile cmdlet shows whether two image profiles have the
same VIB list and acceptance levels.
A common workflow is to start with a published image profile and then customize it, adding
driver VIBs or OEM VIBs as needed, before using Image Builder to produce a custom ISO image
or PXE-bootable image.
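A sketch of cloning and customizing an image profile in vSphere PowerCLI follows. The profile and package names are hypothetical, and the cmdlets assume a software depot has already been added.

```powershell
# Clone a published image profile so that it can be customized.
New-EsxImageProfile -CloneProfile "ESXi-5.1.0-standard" `
    -Name "ESXi-5.1.0-custom" -Vendor "MyOrg"

# Add a driver or CIM provider VIB from a connected depot to the clone.
Add-EsxSoftwarePackage -ImageProfile "ESXi-5.1.0-custom" `
    -SoftwarePackage "net-example-driver"
```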
Finally, after creating an image profile, you can generate an ESXi image. The Export-
EsxImageProfile cmdlet exports an image profile as an ISO image or ZIP file. An ISO image can
be used to boot an ESXi host. A ZIP file can be used by vSphere Update Manager for remediating
ESXi hosts. The exported image profile can also be used with Auto Deploy to boot ESXi hosts.
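For example, exporting the hypothetical custom profile from the previous sketch might look like this (file paths are placeholders):

```powershell
# Export the customized profile as a bootable ISO image.
Export-EsxImageProfile -ImageProfile "ESXi-5.1.0-custom" `
    -ExportToIso -FilePath C:\images\ESXi-5.1.0-custom.iso

# Or export it as a ZIP offline bundle for Update Manager or Auto Deploy.
Export-EsxImageProfile -ImageProfile "ESXi-5.1.0-custom" `
    -ExportToBundle -FilePath C:\images\ESXi-5.1.0-custom.zip
```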
For the complete list of Image Builder cmdlets, see vSphere Image Builder PowerCLI Reference at
http://www.vmware.com/support/pubs/vsphere-esxi-vcenter-server-pubs.html.
In this lab, you will use Image Builder to export an image profile.
1. Export an image profile to an ISO image.
Lesson 6:
Auto Deploy
Auto Deploy simplifies host deployment and recovery because a boot disk is not necessary.
Without the use of Auto Deploy, the ESXi host’s image (binaries, VIBs), configuration, state, and
log files are stored on the boot device.
With Auto Deploy, a boot device no longer holds the host’s information. Instead, the information is
stored off the host and managed by vCenter Server:
• Image state – Executable software to run on the ESXi host. The information is part of the image
profile, which can be created and customized with the Image Builder snap-in in vSphere
PowerCLI.
• Configuration state – Configurable settings that determine how the host is configured.
Examples include virtual switch settings, boot parameters, and driver settings. Host profiles are
created using the host profile user interface in the vSphere Client.
• Running state – Settings that apply while the ESXi host is up and running. This state also
includes the location of the virtual machine in the inventory and the virtual machine autostart
information. This state information is managed by the vCenter Server instance.
• Event recording – Information found in log files and core dumps. This information can be
managed by vCenter Server, using add-in components like the VMware vSphere® ESXi™ Dump
Collector and the Syslog Collector.
Auto Deploy has a rules engine that determines which ESXi image and host profile are used on
selected hosts. The rules engine maps software images and host profiles to hosts, based on the
attributes of the host:
• Rules can be based, for example, on the IP or MAC address of the host.
• The -AllHosts option applies a rule to every host.
For new hosts, the Auto Deploy server checks with the rules engine before serving the image
profile and host profile to a host. vSphere PowerCLI cmdlets are used to set, evaluate, and update
image profile and host profile rules. The rules engine includes rules and rule sets.
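As a sketch, such rules might be created with vSphere PowerCLI. All rule names, profile names, and addresses below are hypothetical placeholders.

```powershell
# Map an image profile, host profile, and cluster to hosts in an IP range.
New-DeployRule -Name "rule-prod" `
    -Item "ESXi-5.1.0-custom", "prod-host-profile", "cluster A" `
    -Pattern "ipv4=192.168.10.50-192.168.10.99"

# A rule that applies to every host uses -AllHosts instead of -Pattern.
New-DeployRule -Name "rule-all" -Item "ESXi-5.1.0-custom" -AllHosts

# Add a rule to the active rule set so the Auto Deploy server uses it.
Add-DeployRule -DeployRule "rule-prod"
```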
The Auto Deploy software can be installed on its own server, or it can be installed on the
same host as the vCenter Server instance. Setting up Auto Deploy includes installing the software
and registering Auto Deploy with a vCenter Server system.
The VMware® vCenter™ Server Appliance™ has the Auto Deploy software installed by default.
vSphere PowerCLI must be installed on a system that can be reached by the vCenter Server system
and the Auto Deploy server.
Installing vSphere PowerCLI includes installing Microsoft PowerShell. For Windows 7 and
Windows Server 2008, Windows PowerShell is installed by default. For Windows XP or Windows
2003, Windows PowerShell must be installed before installing vSphere PowerCLI.
The image profile can come from a public depot or it can be a zipped file stored locally. The local
image profile can be created and customized using vSphere PowerCLI. However, the base ESXi
image must be part of the image profile.
If you are using a host profile, save a copy of the host profile to a location that can be reached by the
Auto Deploy server.
For more about preparing your system and installing the Auto Deploy server, see vSphere Installation
and Setup Guide at http://www.vmware.com/support/pubs/vsphere-esxi-vcenter-server-pubs.html.
Autodeployed hosts perform a preboot execution environment (PXE) boot. PXE uses DHCP and
Trivial File Transfer Protocol (TFTP) to boot an operating system over a network.
A DHCP server and a TFTP server must be configured. The DHCP server assigns IP addresses to
each autodeployed host on startup and points the host to a TFTP server to download the gPXE
configuration files. The ESXi hosts can use the infrastructure’s existing DHCP and TFTP servers, or
new DHCP and TFTP servers can be created for use with Auto Deploy. Any DHCP server that
supports the next-server and filename options can be used.
The vCenter Server Appliance can be used as the Auto Deploy server, DHCP server, and TFTP
server. The Auto Deploy service, DHCP service, and TFTP service are included in the appliance.
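As an illustration only, a DHCP scope supporting Auto Deploy might be sketched as follows in ISC DHCP syntax. The addresses are placeholders, and the gPXE file name should be taken from your own Auto Deploy configuration rather than copied from this sketch.

```
subnet 192.168.10.0 netmask 255.255.255.0 {
  range 192.168.10.50 192.168.10.99;
  # next-server points the PXE client at the TFTP server.
  next-server 192.168.10.5;
  # filename names the gPXE binary that the host downloads over TFTP.
  filename "undionly.kpxe.vmw-hardwired";
}
```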
When an autodeployed ESXi host boots for the first time, the following sequence occurs:
1. The ESXi host starts the PXE boot sequence.
2. The DHCP server assigns an IP address to the ESXi host and instructs the host to contact the
TFTP server.
3. The ESXi host downloads the gPXE image file and the gPXE configuration file from the
TFTP server.
4. The gPXE configuration file instructs the host to make an HTTP boot request to the Auto
Deploy server.
5. The Auto Deploy server determines the host's image profile, host profile, and cluster.
6. The image is pushed to the host, and the host profile is applied.
The Auto Deploy server streams the VIBs specified in the image profile to the host. Optionally, a
host profile can also be streamed to the host.
The host boots based on the image profile and the host profile received by Auto Deploy. Auto
Deploy also assigns the host to the vCenter Server instance with which it is registered.
The host is placed in the appropriate cluster, if specified by a rule. The ESXi
image and information on the image profile, host profile, and cluster to use
are held on the Auto Deploy server.
When an autodeployed ESXi host is rebooted, a slightly different sequence of events takes place:
1. The ESXi host goes through the PXE boot sequence, as it does in the initial boot sequence.
2. The DHCP server assigns an IP address to the ESXi host and instructs the host to contact the
TFTP server.
3. The host downloads the gPXE image file and the gPXE configuration file from the
TFTP server.
4. The ESXi image is downloaded from the Auto Deploy server to the host.
5. The host profile is downloaded from vCenter Server to the host.
In this step, the subsequent boot sequence differs from the initial boot sequence.
When an ESXi host is booted for the first time, Auto Deploy queries the rules engine for
information about the host. The information about the host’s image profile, host profile, and cluster
is stored on the Auto Deploy server.
On subsequent reboots, the Auto Deploy server uses the saved information instead of using the rules
engine to determine this information. Using the saved information saves time during subsequent
boots because the host does not have to be checked against the rules in the active rule set. Auto
Deploy checks the host against the active rule set only once, during the initial boot.
Auto Deploy stateless caching PXE boots the ESXi host and loads the image in memory, as with
stateless ESXi hosts. However, when the host profile is applied to the ESXi host, the image running
in memory is copied to a boot device. The saved image acts as a backup in case the PXE
infrastructure or the Auto Deploy server is unavailable. If the host needs to reboot and it cannot
contact the DHCP, TFTP, or Auto Deploy server, the network boot times out and the host reboots
using the cached disk image.
Stateless caching can help ensure the availability of an autodeployed ESXi host by enabling the
host to boot during an outage affecting the DHCP, TFTP, or Auto Deploy server. But stateless
caching does not guarantee that the image is current or that the vCenter Server system is available
after the host boots. The primary benefit of stateless caching is that it enables you to boot the host
so that you can troubleshoot and resolve the problems that prevent a successful PXE boot.
Unlike stateless ESXi hosts, stateless caching requires that a dedicated boot device be assigned to
the host.
Stateless caching is configured using host profiles. When configuring stateless caching, you can
choose to save the image to a local boot device or a USB disk. You also have the option of leaving
the local VMFS intact or overwriting it.
To configure stateless caching:
1. On the vSphere Client Home page, click Host Profiles.
2. Select the host profile that you want to configure, and click Edit Profile on the toolbar.
3. Expand the System Image Cache Configuration tree and highlight System Image Cache
Profile Settings.
4. From the pull-down menu on the Configuration Details tab, select Enable stateless caching on
the host.
5. Enter first disk arguments if needed, indicate whether you want to overwrite the local VMFS,
and click OK.
The host copies the state locally. Reboots are stateless only if the
PXE and Auto Deploy servers are reached.
All subsequent reboots boot from the Auto Deploy server unless the DHCP, TFTP, or Auto Deploy
server is unavailable. If any of these servers is unavailable, the host boots from the locally saved
image.
No attempt is made to update the cached image. If the host regains the ability to boot stateless, the
local image does not change, even after the stateless image changes. The local image may become
stale as a result.
The ESXi host initially boots using Auto Deploy. All subsequent reboots use local disks.
The benefits of Auto Deploy stateful installations include the following:
• Hosts are provisioned quickly and efficiently.
• Once provisioned, hosts have no further requirement on the PXE and Auto Deploy servers.
Disadvantages of stateful installations include the following:
• Over time, the configuration might become out of sync with the Auto Deploy image.
• Patching and updating ESXi hosts must be done using traditional methods.
Stateful installation is configured using host profiles. When configuring stateful installation, you
can choose to save the image to a local boot device or a USB disk. You also have the option of
leaving the local VMFS intact or overwriting it.
To configure stateful installation:
1. On the vSphere Client Home page, click Host Profiles.
2. Select the host profile that you want to configure, and click Edit Profile on the toolbar.
3. Expand the System Image Cache Configuration tree and highlight System Image Cache
Profile Settings.
4. From the pull-down menu on the Configuration Details tab, select Enable stateful installs on
the host.
5. Enter first disk arguments if needed, indicate whether you want to overwrite the local VMFS,
and click OK.
Initial boot up uses Auto Deploy to install the image on the server.
Subsequent reboots are performed from local storage.
You can change a rule set, for example, to require a host to boot from a different image profile. You
can also require a host to use a different host profile. Unprovisioned hosts that you boot are
automatically provisioned according to these modified rules. For all other hosts, Auto Deploy
applies the new rules only when you test their rule compliance and perform remediation.
If the vCenter Server instance is unavailable, the stateless host contacts the Auto Deploy server to
determine which image and host profile to use. If a host is in a VMware vSphere® High Availability
cluster, Auto Deploy retains the state information so deployment works for the stateless host even if
the vCenter Server instance is not available. If the host is not in a vSphere HA cluster, the vCenter
Server system must be available to supply information to the Auto Deploy server.
If the Auto Deploy server becomes unavailable, stateless hosts that are already autodeployed remain
up and running. However, you will be unable to boot or reboot hosts. VMware recommends
installing Auto Deploy in a virtual machine and placing the virtual machine in a vSphere HA cluster
to keep it available.
For details about the procedure and the commands used for testing and repairing rule compliance,
see vSphere Installation and Setup Guide at http://www.vmware.com/support/pubs/vsphere-esxi-
vcenter-server-pubs.html.
vSphere Update Manager supports ESXi 5.0 and 5.1 hosts that use Auto Deploy to boot. Update
Manager can patch these hosts but cannot update the ESXi image used to boot the host. Update
Manager can remediate only patches that do not require a reboot (live-install). Patches that
require a reboot cannot be installed.
The workflow for patching includes the following steps:
1. Manually update the image that Auto Deploy uses with the patches. If a reboot is possible,
rebooting is all that is required to update the host.
2. If a reboot cannot be performed, create a baseline in Update Manager and remediate the host.
Update Manager now supports PXE installations. Update Manager updates the hosts but does not
update the image on the PXE server.
Only live-install patches can be remediated with Update Manager. Any patch that requires a reboot
cannot be installed on a PXE host. The live-install patches can be from VMware or a third party.
In this lab, you will configure Auto Deploy to boot ESXi hosts.
1. Install Auto Deploy.
2. Configure the DHCP server and TFTP server for Auto Deploy.
3. Use vSphere PowerCLI to configure Auto Deploy.
4. (For vClass users only) Configure the ESXi host to boot from the
network.
5. (For non-vClass users) Configure the ESXi host to boot from the
network.
6. View the autodeployed host in the vCenter Server inventory.
7. (Optional) Apply a host profile to the autodeployed host.
8. (Optional) Define a rule to apply the host profile to an autodeployed
host when it boots.