
VMware vSphere:

Fast Track
Lecture Manual – Volume 3
ESXi 5.1 and vCenter Server 5.1

VMware® Education Services


VMware, Inc.
www.vmware.com/education
VMware vSphere:
Fast Track
ESXi 5.1 and vCenter Server 5.1
Part Number EDU-EN-FT51-LECT3
Lecture Manual – Volume 3
Revision A

Copyright/Trademark
Copyright © 2012 VMware, Inc. All rights reserved. This manual and its accompanying
materials are protected by U.S. and international copyright and intellectual property laws.
VMware products are covered by one or more patents listed at http://www.vmware.com/go/
patents. VMware is a registered trademark or trademark of VMware, Inc. in the United States
and/or other jurisdictions. All other marks and names mentioned herein may be trademarks
of their respective companies.
The training material is provided “as is,” and all express or implied conditions,
representations, and warranties, including any implied warranty of merchantability, fitness for
a particular purpose or noninfringement, are disclaimed, even if VMware, Inc., has been
advised of the possibility of such claims. This training material is designed to support an
instructor-led training course and is intended to be used for reference purposes in
conjunction with the instructor-led training course. The training material is not a standalone
training tool. Use of the training material for self-study without class attendance is not
recommended.
These materials and the computer programs to which they relate are the property of, and
embody trade secrets and confidential information proprietary to, VMware, Inc., and may not
be reproduced, copied, disclosed, transferred, adapted, or modified without the express
written approval of VMware, Inc.
Course development: John Tuffin
Technical editing: PJ Schemenaur
Production and publishing: Ron Morton

www.vmware.com/education
TABLE OF CONTENTS
MODULE 13 Storage Scalability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .681
You Are Here . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .682
Importance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .683
Module Lessons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .684
Lesson 1: Storage APIs and Profile-Driven Storage . . . . . . . . . . . . . . .685
Learner Objectives. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .686
VMware vSphere Storage APIs: Array Integration. . . . . . . . . . . . . . . .687
VMware vSphere Storage APIs: Storage Awareness . . . . . . . . . . . . . .689
Benefits Provided by Storage Vendor Providers . . . . . . . . . . . . . . . . . .691
Configuring a Storage Vendor Provider . . . . . . . . . . . . . . . . . . . . . . . .692
Profile-Driven Storage. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .693
Storage Capabilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .695
Virtual Machine Storage Profiles . . . . . . . . . . . . . . . . . . . . . . . . . . . . .696
Overview of Steps for Configuring Profile-Driven Storage . . . . . . . . .697
Using the Virtual Machine Storage Profile . . . . . . . . . . . . . . . . . . . . . .699
Checking Virtual Machine Storage Compliance . . . . . . . . . . . . . . . . . .700
Identifying Advanced Storage Options . . . . . . . . . . . . . . . . . . . . . . . . .701
N_Port ID Virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .702
N_Port ID Virtualization Requirements . . . . . . . . . . . . . . . . . . . . . . . .703
vCenter Server Storage Filters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .704
Identifying and Tagging SSD Devices . . . . . . . . . . . . . . . . . . . . . . . . .706
Configuring Software iSCSI Port Binding . . . . . . . . . . . . . . . . . . . . . .707
VMFS Resignaturing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .709
Pluggable Storage Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .710
VMware Default Multipathing Plug-In . . . . . . . . . . . . . . . . . . . . . . . . .712
Overview of the MPP Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .713
Path Selection Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .715
Lab 25 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .716
Review of Learner Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .717
Lesson 2: Storage I/O Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .718
Learner Objectives. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .719
What Is Storage I/O Control? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .720
Storage I/O Control Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . .721
Storage I/O Control Automatic Threshold Detection . . . . . . . . . . . . . .722
Storage I/O Control Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . .723
Configuring Storage I/O Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .724
Review of Learner Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .726
Lesson 3: Datastore Clusters and Storage DRS . . . . . . . . . . . . . . . . . .727
Learner Objectives. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .728
What Is a Datastore Cluster? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .729
Datastore Cluster Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .730
Relationship of Host Cluster to Datastore Cluster . . . . . . . . . . . . . . . .731
Storage DRS Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .732



Initial Disk Placement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .733
Migration Recommendations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .734
Datastore Correlation Detector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .735
Configuration of Storage DRS Migration Thresholds. . . . . . . . . . . . . .736
Storage DRS Affinity Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .738
Adding Hosts to a Datastore Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . .739
Adding Datastores to the Datastore Cluster . . . . . . . . . . . . . . . . . . . . .740
Storage DRS Summary Information . . . . . . . . . . . . . . . . . . . . . . . . . . .741
Storage DRS Migration Recommendations . . . . . . . . . . . . . . . . . . . . .742
Storage DRS Maintenance Mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . .743
Backups and Storage DRS. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .744
Storage DRS: vSphere Technology Compatibility . . . . . . . . . . . . . . . .745
Storage DRS - Array Feature Compatibility . . . . . . . . . . . . . . . . . . . . .746
Storage DRS and Storage I/O Control. . . . . . . . . . . . . . . . . . . . . . . . . .747
Lab 26 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .748
Review of Learner Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .749
Key Points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .750

MODULE 14 Data Protection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .751


You Are Here . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .752
Importance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .753
Learner Objectives. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .754
Traditional Backup Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .755
Backup Challenges in Virtualized Environments . . . . . . . . . . . . . . . . .757
Virtual Architecture Advantages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .758
vSphere Storage APIs - Data Protection . . . . . . . . . . . . . . . . . . . . . . . .759
Offloaded Backup Processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .760
Changed Block Tracking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .761
Data Deduplication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .763
VMware vSphere Data Protection . . . . . . . . . . . . . . . . . . . . . . . . . . . . .764
vSphere Data Protection Details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .765
vSphere Data Protection Key Components . . . . . . . . . . . . . . . . . . . . . .766
VDP Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .767
VDP Deployment and Configuration . . . . . . . . . . . . . . . . . . . . . . . . . .769
Virtual Machine Backup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .770
Restoring a Virtual Machine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .771
File Level Restore (FLR). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .772
VDP Reporting: User Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .773
Backing Up vCenter Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .774
Backing Up ESXi Host Configuration Data . . . . . . . . . . . . . . . . . . . . .775
Review of Learner Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .776
Key Points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .777



MODULE 15 Patch Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .779
You Are Here . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .780
Importance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .781
Learner Objectives. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .782
Update Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .783
Update Manager Capabilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .785
Update Manager Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .786
Installing Update Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .788
Configuring Update Manager Settings . . . . . . . . . . . . . . . . . . . . . . . . .790
Baseline and Baseline Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .791
Creating a Baseline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .793
Attaching a Baseline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .794
Scanning for Updates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .795
Viewing Compliance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .797
Remediating Objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .798
Maintenance Mode and Remediation . . . . . . . . . . . . . . . . . . . . . . . . . .800
Remediation Options for a Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . .802
Patch Recall Notification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .803
Remediation Enabled for DRS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .804
Lab 27 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .805
Review of Learner Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .806
Key Points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .807

MODULE 16 VMware Management Assistant . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .809


You Are Here . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .810
Importance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 811
Learner Objectives. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .812
Methods to Run Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .813
ESXi Shell . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .814
Accessing ESXi Shell Locally. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .815
Accessing ESXi Shell Remotely . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .816
vCLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .818
vMA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .819
vMA Hardware and Software Requirements . . . . . . . . . . . . . . . . . . . .820
Configuring vMA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .821
Connecting to the Infrastructure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .822
Deploying vMA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .823
Configuring vMA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .824
Adding a Target Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .825
vMA Authentication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .826
Joining vMA to Active Directory . . . . . . . . . . . . . . . . . . . . . . . . . . . . .827
Command Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .828
vMA Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .829

esxcfg Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .830
esxcfg Equivalent vicfg Commands Examples . . . . . . . . . . . . . . . . . . .831
Managing Hosts with vMA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .832
Common Connection Options for vCLI Execution (1) . . . . . . . . . . . . .833
Common Connection Options for vCLI Execution (2) . . . . . . . . . . . . .835
vicfg Command Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .836
Entering and Exiting Host Maintenance Mode . . . . . . . . . . . . . . . . . . .837
esxcli Command Hierarchies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .838
Example esxcli command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .839
resxtop Utility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .840
Using resxtop Interactively . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .841
Navigating resxtop. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .843
Sample Output from resxtop . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .844
Using resxtop in Batch and Replay Modes . . . . . . . . . . . . . . . . . . . . . .845
Lab 28 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .847
Review of Learner Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .848
Key Points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .849

MODULE 17 Installing VMware vSphere 5.1 Components . . . . . . . . . . . . . . . . . . . .851


You Are Here . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .852
Importance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .853
Module Lessons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .854
Lesson 1: Installing ESXi . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .855
Learner Objectives. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .856
ESXi Hardware Prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .857
Installing ESXi 5.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .858
Installing ESXi . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .860
Booting from SAN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .862
Review of Learner Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .864
Lesson 2: Installing vCenter Server . . . . . . . . . . . . . . . . . . . . . . . . . . .865
Learner Objectives. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .866
vCenter Server Deployment Options. . . . . . . . . . . . . . . . . . . . . . . . . . .867
Single-Server Solution or Distributed Solution . . . . . . . . . . . . . . . . . . .868
vCenter Single Sign On . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .869
Single Sign On Installation Wizard . . . . . . . . . . . . . . . . . . . . . . . . . . . .870
vCenter Inventory Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .871
vCenter Server Hardware and Software Requirements . . . . . . . . . . . . .872
vCenter Database Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .874
Considerations for Calculating the Database Size. . . . . . . . . . . . . . . . .875
Before Installing vCenter Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .876
Installing vCenter Server and Its Components . . . . . . . . . . . . . . . . . . .877
Standalone Instance or Linked Mode Group . . . . . . . . . . . . . . . . . . . . .878
vCenter Server Installation Wizard . . . . . . . . . . . . . . . . . . . . . . . . . . . .879



vCenter Server Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .881
Lab 29 (Optional) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .882
Review of Learner Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .883
Lesson 3: vCenter Linked Mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .884
Learner Objectives. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .885
vCenter Linked Mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .886
vCenter Linked Mode Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . .887
Searching Across vCenter Server Instances . . . . . . . . . . . . . . . . . . . . .888
Basic Requirements for vCenter Linked Mode . . . . . . . . . . . . . . . . . . .890
Joining a Linked Mode Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .892
vCenter Service Monitoring: Linked Mode Groups . . . . . . . . . . . . . . .893
Resolving Role Conflicts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .895
Isolating a vCenter Server Instance . . . . . . . . . . . . . . . . . . . . . . . . . . . .896
Review of Learner Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .897
Lesson 4: Image Builder . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .898
Learner Objectives. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .899
What Is an ESXi Image? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .900
VMware Infrastructure Bundles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .901
ESXi Image Deployment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .902
What Is Image Builder? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .903
Image Builder Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .904
Building an ESXi Image: Step 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .905
Building an ESXi Image: Step 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .906
Building an ESXi Image: Step 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .907
Using Image Builder to Build an Image: Step 4 . . . . . . . . . . . . . . . . . .908
Lab 30 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .909
Review of Learner Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .910
Lesson 5: Auto Deploy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 911
Learner Objectives. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .912
What Is Auto Deploy? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .913
Where Are the Configuration and State Information Stored? . . . . . . . .914
Auto Deploy Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .915
Rules Engine Basics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .916
Software Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .917
PXE Boot Infrastructure Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .918
Initial Boot of an Autodeployed ESXi Host: Step 1 . . . . . . . . . . . . . . .919
Initial Boot of an Autodeployed ESXi Host: Step 2 . . . . . . . . . . . . . . .920
Initial Boot of an Autodeployed ESXi Host: Step 3 . . . . . . . . . . . . . . .921
Initial Boot of an Autodeployed ESXi Host: Step 4 . . . . . . . . . . . . . . .922
Initial Boot of an Autodeployed ESXi Host: Step 5 . . . . . . . . . . . . . . .923
Subsequent Boot of an Autodeployed ESXi Host: Step 1 . . . . . . . . . . .924
Subsequent Boot of an Autodeployed ESXi Host: Step 2 . . . . . . . . . . .925
Subsequent Boot of an Autodeployed ESXi Host: Step 3 . . . . . . . . . . .926

Subsequent Boot of an Autodeployed ESXi Host: Step 4 . . . . . . . . . . .927
Auto Deploy Stateless Caching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .928
Stateless Caching Host Profile Configuration . . . . . . . . . . . . . . . . . . . .929
Auto Deploy Stateless Caching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .930
Auto Deploy Stateful Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . .931
Stateful Installation Host Profile Configuration . . . . . . . . . . . . . . . . . .932
Auto Deploy Stateful Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . .933
Managing the Auto Deploy Environment . . . . . . . . . . . . . . . . . . . . . . .934
Using Auto Deploy with Update Manager to Upgrade Hosts . . . . . . . .935
Lab 31 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .936
Review of Learner Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .937
Key Points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .938



MODULE 13

Storage Scalability

Slide 13-1

Storage Scalability
Module 13



You Are Here
Slide 13-2

Course Introduction
Introduction to Virtualization
Creating Virtual Machines
VMware vCenter Server
Configuring and Managing Virtual Networks
Configuring and Managing Virtual Storage
Virtual Machine Management
Access and Authentication Control
Resource Management and Monitoring
High Availability and Fault Tolerance
Network Scalability
Host and Management Scalability
Storage Scalability (this module)
Data Protection
Patch Management
VMware Management Assistant
Installing VMware vSphere Components



Importance

Slide 13-3
As the enterprise grows, new scalability features in VMware
vSphere® enable the infrastructure to handle the growth efficiently.
Datastore growth and balancing issues can be addressed
automatically with VMware vSphere® Storage DRS™.



Module Lessons
Slide 13-4

Lesson 1: Storage APIs and Profile-Driven Storage


Lesson 2: VMware vSphere Storage I/O Control
Lesson 3: Datastore Clusters and Storage DRS



Lesson 1: Storage APIs and Profile-Driven Storage

Slide 13-5
Lesson 1:
Storage APIs and Profile-Driven Storage



Learner Objectives
Slide 13-6

After this lesson, you should be able to do the following:
• Describe VMware vSphere® Storage APIs – Array Integration (VAAI).
• Describe VMware vSphere Storage APIs – Storage Awareness (VASA).
• Configure and use profile-driven storage.



VMware vSphere Storage APIs: Array Integration

Slide 13-7
VAAI helps storage vendors provide hardware assistance to accelerate VMware® I/O operations that are more efficiently accomplished in the storage hardware.
VAAI includes the following API subsets:
• Hardware Acceleration APIs:
  • Allows arrays to integrate with vSphere to transparently offload certain storage operations to the array:
    - This integration significantly reduces the CPU overhead on the host.
    - Support for NAS plug-ins for array integration exists.
• Array Thin Provisioning APIs:
  • Allows the monitoring of space on thin-provisioned storage arrays:
    - This functionality helps to prevent out-of-space conditions and to perform space reclamation.

Storage APIs is a family of APIs used by third-party hardware, software, and storage providers to
develop components that enhance several vSphere features and solutions. This module describes
two sets of Storage APIs: Array Integration and Storage Awareness. For a description of other APIs
from this family, see http://www.vmware.com/technical-resources/virtualization-topics/virtual-storage/storage-apis.html.
VMware vSphere® Storage APIs – Array Integration (VAAI) is a set of protocol interfaces and
VMkernel APIs between VMware ESXi™ and storage arrays. In a virtualized environment, virtual
disks are files located on a VMware vSphere® VMFS datastore. Disk arrays cannot interpret the
VMFS datastore’s on-disk data layout, so the VMFS datastore cannot leverage hardware functions
per virtual machine or per virtual disk file. The goal of VAAI is to help storage vendors provide
hardware assistance to accelerate VMware I/O operations that are more efficiently accomplished in
the storage hardware. VAAI plug-ins can improve data transfer performance and are transparent to
the end user.



Storage vendors can take advantage of the following features:
• Hardware Acceleration for NAS – This plug-in enables NAS arrays to integrate with VMware®
vSphere® to transparently offload certain storage operations to the array, such as offline cloning
(cold migrations, cloning from templates). This integration reduces CPU overhead on the host.
Hardware Acceleration for NAS is deployed as a plug-in that is not shipped with ESXi. This
plug-in is developed and distributed by the storage vendor but signed by the VMware®
certification program. The array or device firmware must be enabled for Hardware Acceleration
for NAS in order to use the Hardware Acceleration for NAS features. The storage vendor is
responsible for the support of the plug-in.
• Array Thin Provisioning – This extension assists in monitoring disk space usage on thin-
provisioned storage arrays. Monitoring this usage helps prevent the condition where the disk is
out of space. Monitoring usage also helps when reclaiming disk space.
No installation steps are required for the Array Thin Provisioning extensions. Array Thin
Provisioning works on all VMFS-3 and VMFS-5 volumes. Device firmware enabled for this
API is required to take advantage of the Array Thin Provisioning features. ESXi continuously
checks for firmware that is compatible with Array Thin Provisioning. After the firmware is
upgraded, ESXi starts using the Array Thin Provisioning features.
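From the command line, you can confirm whether a device supports the VAAI primitives. The following is a minimal sketch using the esxcli namespaces available in ESXi 5.x; the naa identifier is a placeholder for a real device ID:

   # Show each device's hardware acceleration (VAAI) support status
   esxcli storage core device list
   # Show the status of the individual VAAI primitives for one device
   esxcli storage core device vaai status get -d naa.60060160a1b2c3d4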



VMware vSphere Storage APIs: Storage Awareness

Slide 13-8

VMware vSphere® Storage APIs – Storage Awareness (VASA) enables a storage vendor to develop a software component (known as a storage vendor provider) for its storage arrays.
• A storage vendor provider gets information from the storage array about available storage topology, capabilities, and state.
• VMware® vCenter Server™ connects to a storage vendor provider.
• Information from the storage vendor provider is displayed in the VMware vSphere® Client™.
(Diagram: the storage vendor provider sits between the storage device and vCenter Server; the vSphere Client displays the information.)

Today, vSphere administrators do not have visibility in VMware® vCenter Server™ into the storage
capabilities of the storage array on which their virtual machines are stored. Virtual machines are
provisioned to a storage black box. All the vSphere administrator sees of the storage is a logical unit
number (LUN) identifier, such as a Network Address Authority ID (NAA ID) or a T10 identifier.
VMware vSphere Storage APIs – Storage Awareness (VASA) is a set of software APIs that a storage
vendor can use to provide information about their storage array to vCenter Server. Information
includes storage topology, capabilities, and the state of the physical storage devices. Administrators
now have visibility into the storage on which their virtual machines are located because storage
vendors can make this information available.
vCenter Server gets the information from a storage array by using a software component called a
VASA provider. A VASA provider is written by the storage array vendor. The VASA provider can
exist on either the storage array processor or on a standalone host. This decision is made by the
storage vendor. Storage devices are identified to vCenter Server with a T10 identifier or an NAA ID.
VMware recommends that vendors use these types of identifiers so that devices can be matched
between the VASA provider and vCenter Server.



The VASA provider acts as a server in the vSphere environment. vCenter Server connects to the
VASA provider to obtain information about available storage topology, capabilities, and state. The
information is viewed in the VMware vSphere® Client™. A VASA provider can report information
about one or more storage devices. A VASA provider can support connections to a single or multiple
vCenter Server instances.
For information about the concepts of VASA and developing a VASA provider, see VASA
Programming Guide at http://www.vmware.com/support/pubs.



Benefits Provided by Storage Vendor Providers

Slide 13-9

Storage vendor providers benefit vSphere administrators by:
• Allowing administrators to be aware of the topology, capabilities, and state of the physical storage devices on which their virtual machines are located
• Allowing them to monitor the health and usage of their physical storage devices
• Assisting administrators in choosing the right storage in terms of space, performance, and service-level agreement requirements:
  • Done by using virtual machine storage profiles

A VASA provider supplies capability information in the form of descriptions of specific storage
attributes.
Types of capability information include the following:
• Performance capabilities, such as the number and type of spindles for a volume or the I/O
operations or megabytes/second
• Disaster recovery information, such as recovery point objective and recovery time objective
metrics for disaster recovery
• Space efficiency, such as type of compression used or if thick-provisioned format is used
This information allows vSphere administrators:
• To be more aware of the topology, capabilities, and state of the physical storage devices on
which their virtual machines are located.
• To monitor the health and usage of their physical storage devices.
• To choose the right storage in terms of space, performance, and service-level agreement requirements.
Storage capabilities can be displayed in the vSphere Client. Virtual machine storage profiles can be created to make sure that the storage being used for virtual machines complies with the required levels of service.



Configuring a Storage Vendor Provider
Slide 13-10

Select Home > Administration > Storage Providers.
After adding a storage provider, the storage vendor provider is listed in the Vendor Providers pane.

If your storage supports a VASA provider, use the vSphere Client to register and manage the VASA
provider. The Storage Providers icon on the vSphere Client Home page allows you to configure the
VASA provider. All system storage capabilities that are presented by the VASA provider are
displayed in the vSphere Client. The new Storage Capabilities panel appears in a datastore’s
Summary tab.
To register a VASA provider, the storage vendor provides a URL, a login account, and a password.
Users log in to the VASA provider to get array information. vCenter Server must trust the VASA
provider host. So a security certificate from the VASA provider must be installed on the vCenter
Server system. For procedures, see the VASA provider documentation.



Profile-Driven Storage

Slide 13-11

Profile-driven storage enables the creation of datastores that provide different levels of service.
Profile-driven storage can be used to do the following:
• Categorize datastores based on system-defined or user-defined levels of service:
  • For example, user-defined levels might be gold, silver, and bronze.
• Provision a virtual machine’s disks on “correct” storage.
• Check that virtual machines comply with user-defined storage requirements.
(Diagram: datastores tagged gold, silver, bronze, or uncategorized; virtual machines shown as compliant or not compliant.)

Profile-driven storage enables the creation of datastores that provide varying levels of service. With
profile-driven storage, you can use storage capabilities and virtual machine storage profiles to
ensure that virtual machines use storage that provides a certain level of capacity, performance,
availability, redundancy, and so on.
Profile-driven storage minimizes the amount of storage planning that the administrator must do for
each virtual machine. For example, the administrator can use profile-driven storage to create basic
storage tiers. Datastores with similar capabilities are tagged to form a gold, silver, and bronze tier.
Redundant, high-performance storage might be tagged as the gold tier. Nonredundant, medium-
performance storage might be tagged as the bronze tier.
Profile-driven storage can be used during the provisioning of a virtual machine to ensure that a
virtual machine’s disks are placed on the storage that is best for its situation. For example, profile-
driven storage can help you ensure that the virtual machine running a critical I/O-intensive database
is placed in the gold tier. Ideally, the administrator wants to create the best match of predefined
virtual machine storage requirements with available physical storage properties.



Finally, profile-driven storage can be used during the ongoing management of the virtual machines.
An administrator can periodically check whether a virtual machine has been migrated to or created
on inappropriate storage, potentially making it noncompliant. Storage information can also be used
to monitor the health and usage of the storage and report to the administrator if the virtual machine’s
storage is not compliant.



Storage Capabilities

Slide 13-12

Storage capabilities:
• System-defined – from storage vendor providers
• User-defined
(Diagram: storage vendor provider 1 and storage vendor provider 2 present system capabilities to vCenter Server; datastore A carries user-defined capabilities.)

Profile-driven storage is achieved by using two key components: storage capabilities and virtual
machine storage profiles.
A storage capability outlines the quality of service that a storage system can deliver. It is a guarantee
that the storage system can provide a specific set of characteristics. The two types of storage
capabilities are system-defined and user-defined.
A system-defined storage capability is one that comes from a storage system that uses a VASA
vendor provider. The vendor provider informs vCenter Server that it can guarantee a specific set of
storage features by presenting them as a storage capability. vCenter Server recognizes the capability
and adds it to the list of storage capabilities for that storage vendor. vCenter Server assigns the
system-defined storage capability to each datastore that you create from that storage system.
A user-defined storage capability is one that you can define and associate with datastores. Examples
of user-defined capabilities are:
• Storage array type
• Replication status
• Storage tiers, such as gold, silver, and bronze datastores
A user-defined capability can be associated with multiple datastores. You can associate a user-
defined capability with a datastore that already has a system-defined capability.



Virtual Machine Storage Profiles
Slide 13-13

Virtual machine storage profiles:
• Contain one or more storage capabilities
• Are associated with one or more virtual machines
• Can be used to test that virtual machines reside on compliant storage
(Diagram: system capabilities from storage vendor providers 1 and 2, and the user-defined capabilities of datastore A, feed virtual machine storage profiles; two virtual machines are compliant and one is not compliant.)

Storage capabilities are used to define a virtual machine storage profile. A virtual machine storage
profile lists the storage capabilities that virtual machine home files and virtual disks require to run
the applications in the virtual machine. A virtual machine storage profile is created by an
administrator, who can create different storage profiles to define different levels of storage
requirements. The virtual machine home files (.vmx, .vmsd, .nvram, .log, and so on) and the
virtual disks (.vmdk) can have separate virtual machine storage profiles.
With a virtual machine storage profile, a virtual machine can be checked for storage compliance. If
the virtual machine is placed on storage that has the same capabilities as those defined in the virtual
machine storage profile, the virtual machine is storage-compliant.



Overview of Steps for Configuring Profile-Driven Storage

Slide 13-14
To configure profile-driven storage:
1. View existing storage capabilities.
2. (Optional) Create user-defined storage capabilities.
3. Associate user-defined storage capabilities with a datastore or
datastore cluster.
4. Enable the VM Storage Profiles function on a host or cluster.
5. Create a virtual machine storage profile.
6. Associate a virtual machine storage profile with a virtual machine.

An overview of the steps to configure profile-driven storage:


1. Before you add your own user-defined storage capabilities, view the system-defined storage
capabilities that your storage system defines. You are checking to see whether any of the
system-defined storage capabilities match your virtual machines’ storage requirements. For you
to view system-defined storage capabilities, your storage system must use a VASA provider.
2. Create necessary user-defined storage capabilities based on the storage requirements of your
virtual machines.
3. After you create user-defined storage capabilities, associate these capabilities with datastores.
Whether or not a datastore has a system-defined storage capability, you can assign a user-
defined storage capability to it. A datastore can have only one user-defined and only one
system-defined storage capability at a time.
4. Virtual machine storage profiles are disabled by default. Before you can use virtual machine
storage profiles, you must enable them on a host or a cluster.



5. Create a virtual machine storage profile to define storage requirements for a virtual machine
and its virtual disks. Assign user-defined or system-defined storage capabilities or both to the
virtual machine storage profile.
6. Associate a virtual machine storage profile with a virtual machine to define the storage
   capabilities that are required by the applications running on the virtual machine. You can
   associate a virtual machine storage profile with a powered-off or powered-on virtual machine.



Using the Virtual Machine Storage Profile

Slide 13-15
Use the virtual machine storage profile when you create, clone, or
migrate a virtual machine.

When you create, clone, or migrate a virtual machine, you can associate the virtual machine with a
virtual machine storage profile. When you select a virtual machine storage profile, the vSphere
Client displays the datastores that are compatible with the capabilities of the profile. You can then
select a datastore or a datastore cluster. If you select a datastore that does not match the virtual
machine storage profile, the vSphere Client shows that the virtual machine is using noncompliant
storage.
When a virtual machine storage profile is selected, datastores are now divided into two categories:
compatible and incompatible. You can still choose other datastores outside of the virtual machine
storage profile, but these datastores put the virtual machine into a noncompliant state.
By using virtual machine storage profiles, you can easily see which storage is compatible and
incompatible. You can eliminate the need to ask the SAN administrator, or refer to a spreadsheet of
NAA IDs, each time that you deploy a virtual machine.



Checking Virtual Machine Storage Compliance
Slide 13-16

After clicking the Check Compliance Now link

You can associate a virtual machine storage profile with a virtual machine or individual virtual
disks. When you select the datastore on which a virtual machine should be located, you can check
whether the selected datastore is compliant with the virtual machine storage profile.
To check the storage compliance of a virtual machine:

• In the Virtual Machines tab of the virtual machine storage profile, click the Check
Compliance Now link.
If you check the compliance of a virtual machine whose host or cluster has virtual machine storage
profiles disabled, the virtual machine will be noncompliant because the feature is disabled.
Virtual machine storage compliance can also be viewed from the virtual machine’s Summary tab.



Identifying Advanced Storage Options

Slide 13-17
Some advanced storage options include the following:
• N_Port ID virtualization (NPIV)
• vCenter Server storage filters
• Identifying and tagging solid-state drive (SSD) devices
• Software iSCSI port binding
• VMware vSphere® VMFS (VMFS) resignaturing
• Pluggable storage architecture (PSA)



N_Port ID Virtualization
Slide 13-18

NPIV assigns a virtual World Wide Name (WWN) and virtual N_Port ID to an individual virtual machine.
• NPIV gives each virtual machine an identity on the SAN.
NPIV benefits:
• Track storage traffic per virtual machine.
• Zone and mask LUNs per virtual machine.
• Leverage SAN quality-of-service per virtual machine.
• Improve I/O performance through per-virtual-machine array-level caching.
Configure NPIV if you have the following ESXi host requirements:
• A management requirement to monitor SAN LUN usage at the virtual machine level
• A security requirement to be able to zone a specific LUN to a specific virtual machine
(Diagram: each virtual machine presents its own WWN to the SAN.)

In normal ESXi operation, only the Fibre Channel HBA has a World Wide Name (WWN) and
N_Port ID. N_Port ID Virtualization (NPIV) is used to assign a virtual WWN and virtual N_Port ID
to a virtual machine. NPIV is most useful in two situations:
• Configure NPIV if there is a management requirement to be able to monitor SAN LUN usage
down to the virtual machine level. Because a WWN is assigned to an individual virtual
machine, the virtual machine’s LUN usage can be tracked by SAN management software.
• NPIV is also useful for access control. Because Fibre Channel zoning and array-based
LUN masking use WWNs, access control can be configured down to the individual virtual
machine level.



N_Port ID Virtualization Requirements

Slide 13-19
NPIV requires the following:
• Virtual machines use RDMs.
• Fibre Channel HBAs support NPIV.
• Fibre Channel switches support NPIV.
• ESXi hosts have access to all LUNs used by their virtual machines.
NPIV cannot be used with virtual machines configured with VMware vSphere® Fault Tolerance (FT).

The requirements to configure NPIV are listed on the slide. For more about NPIV, see vSphere
Storage Guide at http://www.vmware.com/support/pubs/vsphere-esxi-vcenter-server-pubs.html.



vCenter Server Storage Filters
Slide 13-20

Filter Name: VMFS Filter
Key: config.vpxd.filter.vmfsFilter
Description: Filters out storage devices, or LUNs, that are already used by a VMware vSphere VMFS datastore on any host managed by vCenter Server.

Filter Name: RDM Filter
Key: config.vpxd.filter.rdmFilter
Description: Filters out LUNs that are already referenced by an RDM on any host managed by vCenter Server. The LUNs do not show up as candidates to be formatted with VMFS or to be used by a different RDM.

Filter Name: Same Host and Transports Filter
Key: config.vpxd.filter.SameHostAndTransportsFilter
Description: Filters out LUNs ineligible for use as VMFS datastore extents because of host or storage type incompatibility.

Filter Name: Host Rescan Filter
Key: config.vpxd.filter.hostRescanFilter
Description: Automatically rescans and updates VMFS datastores after you perform datastore management operations. The filter helps provide a consistent view of all VMFS datastores on all hosts managed by vCenter Server.

vCenter Server provides storage filters to help you avoid storage device corruption or performance
degradation that can be caused by the unsupported use of storage devices. The following filters are
available by default:
• VMFS Filter - Filters out storage devices, or LUNs, that are already used by a VMFS datastore
on any host managed by vCenter Server. The LUNs do not show up as candidates to be
formatted with another VMFS datastore or to be used as an RDM.
• RDM Filter - Filters out LUNs that are already referenced by an RDM on any host managed by
vCenter Server. The LUNs do not show up as candidates to be formatted with VMFS or to be
used by a different RDM. If you need virtual machines to access the same LUN, the virtual
machines must share the same RDM mapping file. For information about this type of
configuration, see the vSphere Resource Management documentation.



• Same Host and Transports Filter - Filters out LUNs ineligible for use as VMFS datastore
  extents because of host or storage type incompatibility. Prevents you from adding the following
  LUNs as extents:
  • LUNs not exposed to all hosts that share the original VMFS datastore.
  • LUNs that use a storage type different from the one the original VMFS datastore uses. For
    example, you cannot add a Fibre Channel extent to a VMFS datastore on a local storage
    device.
• Host Rescan Filter - Automatically rescans and updates VMFS datastores after you perform
  datastore management operations. The filter helps provide a consistent view of all VMFS
  datastores on all hosts managed by vCenter Server.
To change the filter behavior:

1. In the vSphere Client, select Administration > vCenter Server Settings.

2. In the settings list, select Advanced Settings.

3. In the Key text box, type the key you want to change.

4. To disable the setting, type False for the key.

5. Click Add.

6. Click OK.
Before making any changes to the device filters, consult with the VMware support team.
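If the vSphere Client procedure above is not practical, the same keys can be set directly in the vpxd configuration file on the vCenter Server system. The fragment below is a hedged sketch, assuming the standard vpxd.cfg layout in which a key such as config.vpxd.filter.hostRescanFilter maps to nested XML elements; verify the exact file location and nesting for your vCenter Server version, and restart the vCenter Server service after editing:

   <!-- Sketch of a vpxd.cfg fragment: disables the Host Rescan Filter -->
   <config>
     <vpxd>
       <filter>
         <hostRescanFilter>false</hostRescanFilter>
       </filter>
     </vpxd>
   </config>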



Identifying and Tagging SSD Devices
Slide 13-21

The VMkernel can automatically detect, tag, and enable an SSD.
Use the vSphere Client to identify an SSD: the Storage panel on the ESXi host’s Summary tab shows the drive type.
By knowing which storage is SSD, you can use that storage for the following:
• Quicker VMware vSphere® Storage vMotion® migrations among hosts that share the same SSD
• Improving a virtual machine’s performance by placing its swap file on it

The VMkernel can now automatically detect, tag, and enable an SSD. ESXi detects SSD devices
through an inquiry mechanism based on the T10 standard. This mechanism allows ESXi to discover
SSD devices on many storage arrays. Devices that cannot be autodetected (that is, arrays that are not
T10-compliant) can be tagged as SSD by setting up new Pluggable Storage Architecture Storage
Array Type Plug-in claim rules.
You can use the vSphere Client to identify your SSD storage. The storage section in the ESXi host’s
Summary tab identifies the drive type. The drive type shows you whether a storage device is SSD.
The benefits to using SSD include:
• Quicker VMware vSphere® Storage vMotion® migrations can occur among hosts that share
the same SSD.
• SSD can be used as swap space for improved system performance when under memory
contention.
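The claim-rule approach mentioned above can be used to tag a device that the host does not autodetect. A minimal sketch, assuming a local device claimed by VMW_SATP_LOCAL; the naa identifier is a placeholder:

   # Add a SATP claim rule that tags the device as SSD
   esxcli storage nmp satp rule add --satp VMW_SATP_LOCAL --device naa.60060160a1b2c3d4 --option "enable_ssd"
   # Reclaim the device so the new rule takes effect
   esxcli storage core claiming reclaim -d naa.60060160a1b2c3d4
   # Verify that the device now reports "Is SSD: true"
   esxcli storage core device list -d naa.60060160a1b2c3d4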



Configuring Software iSCSI Port Binding

Slide 13-22

From the Configuration tab, select the Storage Adapters link. Highlight the software initiator and click Properties, and then click the Network Configuration tab.

iSCSI port binding enables a software or a dependent hardware iSCSI initiator to be bound to a
specific VMkernel adapter. If you are using dependent hardware iSCSI adapters, you must bind
each adapter to a VMkernel port for it to function properly.
By default, all network adapters appear as active. If you are using multiple VMkernel ports on a
single switch, you must override this setup, so that each VMkernel interface maps to only one
corresponding active NIC. For example, vmk1 maps to vmnic1 and vmk2 maps to vmnic2.
To bind an iSCSI adapter to a VMkernel port, create a virtual VMkernel adapter for each physical
network adapter on your host. If you use multiple VMkernel adapters, set up the correct network
policy:
1. Click the Configuration tab, and click Storage Adapters in the Hardware panel.

2. The list of available storage adapters appears.

3. Select the software or dependent iSCSI adapter to configure and click Properties.

4. In the iSCSI Initiator Properties dialog box, click the Network Configuration tab.

5. Click Add and select a VMkernel adapter to bind with the iSCSI adapter. You can bind the
   software iSCSI adapter to one or more VMkernel adapters. For a dependent hardware iSCSI
   adapter, only one VMkernel interface associated with the correct physical NIC is available.



6. Click OK.

7. The network connection appears on the list of VMkernel port bindings for the iSCSI adapter.

8. Verify that the network policy for the connection is compliant with the binding requirements.
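The same binding can also be performed with esxcli. A sketch, assuming a software iSCSI adapter named vmhba33 and VMkernel adapters vmk1 and vmk2, each mapped to a single active uplink:

   # Bind two VMkernel adapters to the software iSCSI initiator
   esxcli iscsi networkportal add --adapter vmhba33 --nic vmk1
   esxcli iscsi networkportal add --adapter vmhba33 --nic vmk2
   # List the port bindings to verify
   esxcli iscsi networkportal list --adapter vmhba33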



VMFS Resignaturing

Slide 13-23

(Diagram: datastore VMFS_1, with VMFS UUID 4e26f26a-9fe2664c-c9c7-000c2988e4dd, is replicated by array replication from the protected site to the recovery site; after resignaturing, the LUN copy is mounted as snap-snapID#-VMFS_1 with a new UUID.)
Datastore resignaturing overwrites the original VMFS UUID:
• The LUN copy that contains the VMFS datastore that you resignature is no longer treated as a LUN copy. The LUN appears as an independent datastore with no relation to the source of the copy.
• A spanned datastore can be resignatured only if all its extents are online.

When a LUN is replicated or a copy is made, the resulting LUN copy is identical, byte-for-byte,
with the original LUN. As a result, the original LUN contains a VMFS datastore with UUID X, and
the LUN copy appears to contain an identical copy of a VMFS datastore (a VMFS datastore with the
same UUID). ESXi can determine whether a LUN contains the VMFS datastore copy and does not
mount it automatically.
The LUN copy must be resignatured before it is mounted. When a datastore resignature is
performed, consider the following points:
• Datastore resignaturing is irreversible because it overwrites the original VMFS UUID.
• The LUN copy that contains the VMFS datastore that you resignature is no longer treated as a
LUN copy. Instead it appears as an independent datastore with no relation to the source of the
copy.
• A spanned datastore can be resignatured only if all its extents are online.
• The resignaturing process is crash-and-fault tolerant. If the process is interrupted, you can
resume it later.
The default format of the new label assigned to the datastore is snap-snapID-oldLabel (where
snapID is an integer and oldLabel is the label of the original datastore).
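Resignaturing can also be performed from the command line. A sketch, assuming the original datastore label is VMFS_1; mounting without resignaturing applies only when the original volume is not also presented to the host:

   # List unresolved VMFS snapshot volumes (LUN copies) detected by the host
   esxcli storage vmfs snapshot list
   # Resignature the copy; it is mounted with a snap-snapID-VMFS_1 label
   esxcli storage vmfs snapshot resignature --volume-label VMFS_1
   # Alternatively, mount the copy without resignaturing (keeps the original UUID)
   esxcli storage vmfs snapshot mount --volume-label VMFS_1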



Pluggable Storage Architecture
Slide 13-24

PSA is a collection of vStorage APIs that allow third-party hardware vendors to insert code directly into the SCSI middle layer.
• The PSA allows third-party software developers to design their own load-balancing techniques and failover mechanisms for particular storage array types.
• Third-party vendors can add support for new arrays into the SCSI middle layer without having to provide internal information or intellectual property about the array to VMware.
VMware provides a generic multipathing plug-in (MPP) called the Native Multipathing Plug-in (NMP).
PSA coordinates the operation of the NMP and third-party MPPs or the built-in Storage Array Type Plug-in (SATP).

The Pluggable Storage Architecture (PSA) sits in the SCSI middle layer of the VMkernel I/O stack.
The VMware Native Multipathing Plug-in (NMP) supports all storage arrays on the VMware
storage hardware compatibility list. The NMP also manages sub-plug-ins for handling multipathing
and load balancing.
The PSA discovers available storage paths and, based on a set of predefined rules, determines which
multipathing plug-in (MPP) should be given ownership of the path. The MPP associates a set of
physical paths with a specific storage device or LUN. The details of handling path failover for a
given storage array are delegated to a sub-plug-in called the Storage Array Type Plug-in (SATP).
The SATP is associated with paths. The details for determining which physical path is used to issue
an I/O request (load balancing) to a storage device are handled by a sub-plug-in called the Path
Selection Plug-in (PSP). The PSP is associated with logical devices.
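You can inspect these assignments on a host with esxcli. A short sketch; the output varies by host and attached arrays:

   # Show, per device, which SATP and PSP the NMP has assigned
   esxcli storage nmp device list
   # List the SATPs and PSPs installed on this host
   esxcli storage nmp satp list
   esxcli storage nmp psp list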



PSA tasks:
• Load and unload multipathing plug-ins
• Handle physical path discovery and removal (through scanning)
• Route I/O requests for a specific logical device to an appropriate multipathing plug-in
• Handle I/O queuing to the physical storage HBAs and to the logical devices
• Implement logical device bandwidth sharing between virtual machines
• Provide logical device and physical path I/O statistics
NMP tasks:
• Manage physical path claiming and unclaiming
• Create, register, and deregister logical devices
• Associate physical paths with logical devices
• Process I/O requests to logical devices
• Select an optimal physical path for the request (load balancing)
• Perform actions necessary to handle failures and requests retries
• Support management tasks like abort or reset of logical devices



VMware Default Multipathing Plug-In
Slide 13-25

The top-level plug-in is the MPP.
The VMware default MPP is the NMP, which includes SATPs and Path Selection Plug-ins (PSPs).
(Diagram: the PSA contains the VMware NMP; the NMP includes multiple VMware SATPs and VMware PSPs.)

The PSA uses plug-ins to manage and access storage.


The top-level plug-in is the MPP. All storage is accessed through an MPP. MPPs can be supplied by
storage vendors or by VMware.
The VMware default MPP is the NMP. The NMP includes SATPs and PSPs.



Overview of the MPP Tasks

Slide 13-26
The PSA:
• Discovers available storage (physical paths)
• Uses predefined claim rules to assign each device to an MPP
An MPP claims a physical path and exports a logical device.
Details of path failover for a specific path are delegated to the SATP.
Details for determining which physical path is used to a storage device (load balancing) are handled by the PSP.

The PSA has two major tasks. The first task is to discover what storage devices are available on a
system. Once storage is detected, the second task is to apply predefined claim rules to determine
which MPP controls each storage device.
Each device should be claimed by only one claim rule. Claim rules come from and are used by MPPs.
So when a device is claimed by a rule, it is being claimed by the MPP associated with that rule. The
MPP is actually claiming a physical path to a storage device. Once the path has been claimed, the
MPP exports a logical device. Only an MPP can associate a physical path with a logical device.
Within each MPP there are two sub-plug-in types. These are SATPs and PSPs.
The SATP is associated with physical paths and controlling path failover. SATPs are covered in
detail later.



The PSP handles which physical path is used to issue an I/O request. This activity is load balancing.
PSPs are associated with logical devices. PSPs are covered in detail later.
A single MPP can support multiple SATPs and PSPs. The modular nature of the PSA allows for the
possibility of combining SATPs and PSPs from different vendors. For example, a storage device
could be configured to be managed by an MPP written by one vendor, while using a VMware
SATP and a PSP from some other vendor.
If the storage vendor has not supplied an MPP, SATP, or PSP, a VMware MPP, SATP, or PSP is
assigned by default.
This modularity can also cause problems. An MPP, SATP, or PSP might be assigned to a storage
device incorrectly. The physical hardware might not correctly support the feature set that has been
assigned. Troubleshooting might involve switching a device to a different MPP, SATP, or PSP.
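
As a troubleshooting sketch, the per-device assignments can be inspected and changed from the
ESXi Shell on an ESXi 5.x host. The device identifier naa.xxx is a hypothetical placeholder:

    # Show which SATP and PSP currently claim a given device.
    esxcli storage nmp device list --device naa.xxx

    # Reassign the device to a different PSP (here, Round Robin).
    esxcli storage nmp device set --device naa.xxx --psp VMW_PSP_RR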



Path Selection Example
Slide 13-27

Information about all functional paths is forwarded by the SATP to
the PSP. The PSP chooses which path to use.
[Diagram: a numbered I/O flow through the VMkernel storage stack,
the NMP within the PSA, and two HBAs, matching the five steps
described below.]

When a virtual machine issues an I/O request to a logical device managed by the NMP, the
following takes place:
1. The NMP calls the PSP assigned to this logical device.

2. The PSP selects an appropriate physical path to send the I/O, load-balancing the I/O if
necessary.
3. If the I/O operation is successful, the NMP reports its completion. If the I/O operation reports
an error, the NMP calls an appropriate SATP.
4. The SATP interprets the error codes and, when appropriate, activates inactive paths and fails
over to the new active path.
5. The PSP is then called to select a new active path from the available paths to send the I/O.
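
The path information that drives this flow can be viewed from the ESXi Shell. A minimal sketch
for an ESXi 5.x host, with naa.xxx as a hypothetical device identifier:

    # List all paths, their state (active, standby, dead), and plug-in owners.
    esxcli storage core path list

    # Limit the listing to the paths of a single device.
    esxcli storage core path list --device naa.xxx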



Lab 25
Slide 13-28

In this lab, you will work with profile-driven storage.


1. Create a VMFS datastore configuration for this lab.
2. Create a user-defined storage capability.
3. Create a virtual machine storage profile.
4. Enable your host to use virtual machine storage profiles.
5. Associate storage profiles with virtual machines.



Review of Learner Objectives
Slide 13-29

You should be able to do the following:
• Describe vSphere Storage APIs – Array Integration.
• Describe vSphere Storage APIs – Storage Awareness.
• Configure and use profile-driven storage.



Lesson 2: Storage I/O Control
Slide 13-30

Lesson 2:
Storage I/O Control



Learner Objectives
Slide 13-31

After this lesson, you should be able to do the following:
• Describe VMware vSphere® Storage I/O Control.
• Configure Storage I/O Control.



What Is Storage I/O Control?
Slide 13-32

Storage I/O Control allows cluster-wide storage I/O prioritization:
• Allows for better workload consolidation
• Helps reduce extra costs associated with overprovisioning
• Is used to balance I/O load in a datastore cluster enabled for
  Storage DRS
[Diagram: data mining, print server, online store, and mail server
virtual machines during high I/O from a noncritical application,
shown without and with Storage I/O Control.]

VMware vSphere® Storage I/O Control extends the constructs of shares and limits to handle storage
I/O resources. Storage I/O Control is a proportional-share IOPS scheduler that, under contention,
throttles IOPS. You can control the amount of storage I/O that is allocated to virtual machines
during periods of I/O congestion. Controlling storage I/O ensures that more important virtual
machines get preference over less important virtual machines for I/O resource allocation.
You can use Storage I/O Control with or without VMware vSphere® Storage DRS™. There are two
thresholds: one for standalone Storage I/O Control and one for Storage DRS. For Storage DRS,
latency statistics are gathered by Storage I/O Control for an ESXi host and sent to vCenter Server
and stored in the vCenter Server database. With these statistics, Storage DRS can make the decision
on whether a virtual machine should be migrated to another datastore.



Storage I/O Control Configuration
Slide 13-33

The latency thresholds for Storage I/O Control can be set using
injector-based models or can be set manually.
With injector-based models, the latency threshold is measured at the
point where peak throughput is measured.
• The benefit of using injector-based models is that Storage I/O Control
  determines the best threshold for a datastore.
  • The latency threshold is set to the value determined by the injector when
    90% of the throughput value is achieved.
• You can also set the latency threshold manually:
  • The latency setting is 30 ms by default.
Storage I/O Control is set to stats only mode by default.
• Storage I/O Control doesn't enforce throttling but does gather statistics.
• Storage DRS now has statistics in advance for new datastores being
  added to the datastore cluster.

The default latency threshold for Storage I/O Control is 30 milliseconds. The default setting might
be fine for some storage devices, but other devices might reach their latency threshold well before or
after the default setting is reached. For example, solid-state disks (SSDs) typically reach their
contention point well before the default threshold would trigger throttling. Not all devices are
created equal.
Storage I/O Control can automatically determine an optimal latency threshold by using injector-
based models to determine the latency setting. The injector determines and sets the latency threshold
when 90% of the throughput is reached.
Storage I/O Control is set to stats only mode by default. Stats only mode collects and stores statistics
but does not perform throttling on the storage device. Storage DRS can use the stored statistics
immediately after initial configuration or when new datastores are added.



Storage I/O Control Automatic Threshold Detection
Slide 13-34

Through device modeling, Storage I/O Control determines the peak
throughput of the device.
The injector-based models measure the peak latency value when the
throughput is at its peak.
The threshold is then set (by default) to 90% of this value.
You can still do the following:
• Change the percentage value.
• Manually set the congestion threshold.
[Graphs: latency versus load and throughput versus load, marking
the peak latency (Lpeak), the peak throughput (Tpeak), and the
derived threshold values (Lα, Tα).]

Storage I/O Control can have its latency threshold set automatically by using the I/O injector model
to determine the peak throughput of a datastore. The resulting peak throughput measurement can be
used to determine the peak latency of a datastore. Storage I/O Control can then set the latency
threshold to 90% of the peak latency.
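
A minimal worked sketch of that default calculation, in generic POSIX shell, assuming a
hypothetical measured peak latency of 20 milliseconds:

    # The congestion threshold defaults to 90% of the measured peak latency.
    PEAK_LATENCY_MS=20
    echo $(( PEAK_LATENCY_MS * 90 / 100 ))   # prints 18 (milliseconds)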



Storage I/O Control Requirements
Slide 13-35

Datastores that are enabled for Storage I/O Control must be managed
by a single vCenter Server system.
Storage I/O Control is supported for Fibre Channel, iSCSI, and NFS
storage.
Storage I/O Control does not support datastores with multiple
extents.
Verify whether your automated tiered storage array is certified as
compatible with Storage I/O Control.

Storage I/O Control has several requirements and limitations.


Storage I/O Control is not supported for raw device mappings.
Before using Storage I/O Control on datastores that are backed by arrays with automated storage
tiering capabilities, verify that your automated tiered storage array has been certified to be
compatible with Storage I/O Control. See the online VMware Compatibility Guide at
http://www.vmware.com/resources/compatibility.
Automated storage tiering is the ability of an array (or group of arrays) to migrate LUNs/volumes or
parts of LUNs/volumes to different types of storage media (solid-state drive, Fibre Channel, SAS,
SATA) based on user-set policies and current I/O patterns. No special certification is required for
arrays that do not have these automatic migration/tiering features, including those that provide the
ability to manually migrate data between different types of storage media.



Configuring Storage I/O Control
Slide 13-36

To configure Storage I/O Control:
1. Enable Storage I/O Control for the datastore.
2. Set the number of storage I/O shares and the upper limit of I/O
   operations per second (IOPS) allowed for each virtual machine.

Example: Two virtual machines running Iometer
(VM1: 1,000 shares, VM2: 2,000 shares)

            Without shares/limits        With shares/limits
            IOPS    Iometer latency      IOPS    Iometer latency
    VM1     1,500   20 ms                1,080   31 ms
    VM2     1,500   21 ms                1,900   16 ms

Storage I/O Control provides quality-of-service capabilities for storage I/O in the form of I/O shares
and limits that are enforced across all virtual machines accessing a datastore, regardless of which
host they are running on. Using Storage I/O Control, vSphere administrators can ensure that the
most important virtual machines get adequate I/O resources even in times of congestion.
When you enable Storage I/O Control on a datastore, ESXi begins to monitor the device latency that
hosts observe when communicating with that datastore. When device latency exceeds a threshold,
the datastore is considered to be congested, and each virtual machine that accesses that datastore is
allocated I/O resources in proportion to their shares.



When you allocate storage I/O resources, you can limit the IOPS that are allowed for a virtual
machine. By default, the number of IOPS allowed for a virtual machine is unlimited. If the limit that
you want to set for a virtual machine is in terms of megabytes per second instead of IOPS, you can
convert megabytes per second into IOPS based on the typical I/O size for that virtual machine. For
example, a backup application has a typical I/O size of 64KB. To restrict a backup application to
10MB per second, set a limit of 160 IOPS (10MB per second / 64KB I/O size = 160 I/Os per
second).
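
A minimal sketch of the same conversion in generic POSIX shell, with the bandwidth cap and
I/O size as adjustable values (the 10MB per second and 64KB figures come from the example
above):

    # IOPS limit = (bandwidth cap in KB per second) / (typical I/O size in KB)
    LIMIT_MB_PER_SEC=10
    IO_SIZE_KB=64
    echo $(( LIMIT_MB_PER_SEC * 1024 / IO_SIZE_KB ))   # prints 160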
On the slide, virtual machines VM1 and VM2 are running an I/O load generator called Iometer.
Each virtual machine is running on a different host, but they are running the same type of workload:
16KB random reads. The shares of VM2 are set to twice as many shares as VM1, which implies that
VM2 is more important than VM1. With Storage I/O Control disabled, the IOPS that each virtual
machine achieves, as well as their I/O latency, is identical. But with Storage I/O Control enabled, the
IOPS achieved by the virtual machine with more shares (VM2) are greater than the IOPS of VM1.
The example assumes that each virtual machine is running enough load to cause a bottleneck on the
datastore.
To enable Storage I/O Control on a datastore:

1. In the Datastores and Datastore Clusters inventory view, select a datastore and click the
Configuration tab.
2. Click the Properties link.

3. Under Storage I/O Control, select the Enabled check box.

4. Click Close.
To set the storage I/O shares and limits:

1. Right-click the virtual machine in the inventory and select Edit Settings.

2. In the Virtual Machine Properties dialog box, click the Resources tab.
By default, all virtual machine shares are set to Normal (1000), with unlimited IOPS.
For more about Storage I/O Control, see vSphere Resource Management Guide at
http://www.vmware.com/support/pubs/vsphere-esxi-vcenter-server-pubs.html.



Review of Learner Objectives
Slide 13-37

You should be able to do the following:
• Describe Storage I/O Control.
• Configure Storage I/O Control.



Lesson 3: Datastore Clusters and Storage DRS
Slide 13-38

Lesson 3:
Datastore Clusters and Storage DRS



Learner Objectives
Slide 13-39

After this lesson, you should be able to do the following:
• Create a datastore cluster.
• Configure Storage DRS.
• Explain how Storage I/O Control and Storage DRS complement each
  other.



What Is a Datastore Cluster?
Slide 13-40

A datastore cluster is a collection of datastores that are grouped
together without functioning together.
A datastore cluster enabled for Storage DRS is a collection of
datastores working together to balance:
• Capacity
• I/O latency
[Diagram: four 500GB datastores grouped into a 2TB datastore
cluster.]

The datastore cluster serves as a container or folder. The user can store datastores in the container,
but the datastores work as separate entities.
A datastore cluster that is enabled for Storage DRS is a collection of datastores designed to work as
a single unit. In this type of datastore cluster, Storage DRS balances datastore use and I/O latency.



Datastore Cluster Rules
Slide 13-41

General rules for datastore clusters (with or without Storage DRS):
• Datastores from different arrays can be added to the same datastore cluster.
  • LUNs from arrays of different types can adversely affect performance if they
    are not equally performing LUNs.
• Datastore clusters must contain similar or interchangeable datastores.
• Datastore clusters support only VMware vSphere® ESXi™ 5.0 and later hosts.
Rules specific to datastore clusters enabled for Storage DRS:
• Do not mix VMFS and NFS datastores in the same datastore cluster.
• Do not mix replicated datastores with nonreplicated datastores.
• You can mix VMFS-3 and VMFS-5 datastores in the same datastore cluster.

Datastores and hosts that are associated with a datastore cluster must meet certain requirements to
use datastore cluster features successfully.
A datastore cluster can contain a mix of datastores with different sizes and I/O capacities, and the
datastores can come from different arrays and vendors. However, LUNs with different performance
characteristics can cause performance problems.
All hosts attached to the datastores in a datastore cluster must be ESXi 5.0 and later. ESXi 4.x and
earlier hosts cannot be included in a datastore cluster.
NFS and VMFS datastores cannot be combined in the same datastore cluster enabled for Storage
DRS. Storage DRS cannot move virtual machines between NFS and VMFS datastores.
VMFS-3 and VMFS-5 datastores can be added to the same Storage DRS cluster. But performance of
these datastores should be similar.



Relationship of Host Cluster to Datastore Cluster
Slide 13-42

The relationship between a VMware vSphere® High Availability /
VMware vSphere® Distributed Resource Scheduler™ cluster and a
datastore cluster can be one to one, one to many, or many to many.
[Diagram: a host cluster mapped to one datastore cluster (one to
one), a host cluster mapped to many datastore clusters (one to
many), and many host clusters mapped to many datastore clusters
(many to many).]

Host clusters and datastore clusters can coexist in the virtual infrastructure. A host cluster refers to a
VMware vSphere® Distributed Resource Scheduler™ (DRS)/VMware vSphere® High Availability
(vSphere HA) cluster.
Load balancing by DRS and Storage DRS can occur at the same time. DRS balances virtual
machines across hosts based on CPU and memory usage. Storage DRS load-balances virtual
machines across storage, based on storage capacity and IOPS.
A host that is not part of a host cluster can also use a datastore cluster.



Storage DRS Overview
Slide 13-43

Storage DRS provides the following functions:
• Initial placement of virtual machines based on storage capacity and,
  optionally, on I/O latency
• Use of Storage vMotion to migrate virtual machines based on storage
  capacity
• Use of Storage vMotion to migrate virtual machines based on I/O
  latency
• Configuration in either manual or fully automated mode
• Use of affinity and anti-affinity rules to govern virtual disk location
• Use of fully automated storage maintenance mode to clear a LUN of
  virtual machine files

Storage DRS manages the placement of virtual machines in a datastore cluster, based on the space
usage of the datastores. It attempts to keep usage as even as possible across the datastores in the
datastore cluster.
Storage vMotion migration of virtual machines can also be a way of keeping the datastores
balanced.
Optionally, the user can configure Storage DRS to balance I/O latency across the members of the
datastore cluster as a way to help mitigate performance issues that are caused by I/O latency.
Storage DRS can be set up to work in either manual or fully automated mode:
• Manual mode presents migration and placement recommendations to the user, but nothing is
  executed until the user accepts the recommendation.
• Fully automated mode automatically handles initial placement and migrations based on
  runtime rules.



Initial Disk Placement
Slide 13-44

When virtual machines are created, cloned, or migrated:
• You select a datastore cluster, rather than a single datastore.
  • Storage DRS selects a member datastore based on capacity and
    optionally on IOPS load.
• By default, a virtual machine’s files are placed on the same datastore
  in the datastore cluster.
  • Storage DRS affinity and anti-affinity rules can be created to change
    this behavior.

When a virtual machine is created, cloned, or migrated, the user has the option of selecting a
datastore cluster on which to place the virtual machine files. When the datastore cluster is selected,
Storage DRS chooses a member datastore (a datastore in the datastore cluster) based on storage use.
Storage DRS attempts to keep the member datastores evenly used.
By default, Storage DRS locates all the files that make up a virtual machine on the same datastore.
However, Storage DRS anti-affinity rules can be created so that virtual machine disk files can be
placed on different datastores in the cluster.



Migration Recommendations
Slide 13-45

Migration recommendations are executed:
• When the IOPS response time is exceeded
• When the space utilization threshold is exceeded
• Space utilization is checked every five minutes.
• IOPS load history is checked every eight hours.
• Storage DRS selects a datastore based on utilization and IOPS load.
• Load balancing is based on IOPS workload, which ensures that no
  datastore exceeds a particular VMkernel I/O latency level.

Storage DRS provides as many recommendations as necessary to balance the space and, optionally,
the IOPS resources of the datastore cluster.
Reasons for migration recommendations include:
• Balancing space usage in the datastore
• Reducing datastore I/O latency
• Balancing datastore IOPS load
Storage DRS can also make mandatory recommendations based on whether:
• A datastore is out of space
• Storage DRS anti-affinity or affinity rules are being violated
• A datastore is entering maintenance mode
Storage DRS also considers moving powered-off virtual machines to balance datastores.



Datastore Correlation Detector
Slide 13-46

Datastore correlation refers to datastores that are created on the
same physical set of spindles.
Storage DRS detects datastore correlation by doing the following:
• Measuring individual datastore performance
• Measuring combined datastore performance
If latency increases on multiple datastores when a load is placed on
one, the datastores are correlated.
Correlation is determined by a long-running background process.
Anti-affinity rules can use correlation detection to ensure that
virtual machines or virtual disks are on different spindles.
Datastore correlation is enabled by default.

The purpose of datastore correlation is to help the decision-making process in Storage DRS when
deciding where to move a virtual machine. For example, you gain little advantage by moving a
virtual machine from one datastore to another if both datastores are backed by the same set of
physical spindles on the array.
The datastore correlation detector uses the I/O injector to determine if a source and destination
datastore are using the same back-end spindles.
The detector works by monitoring the load on one datastore and monitoring the latency on another.
If latency increases on other datastores when a load is placed on one datastore, the datastores are
correlated.
The datastore correlation detector can also be used for anti-affinity rules, making sure that virtual
machines and virtual disks are not only kept on separate datastores, but also kept on different
spindles on the back end.



Configuration of Storage DRS Migration Thresholds
Slide 13-47

Datastores and Datastore Clusters inventory view > right-click
datacenter > New Datastore Cluster.
[Screenshot callouts: the option for including I/O latency in
balancing, the configuration settings for utilized space and latency
thresholds, and the advanced settings for latency thresholds.]

In the SDRS Runtime Rules page of the wizard, select or deselect the Enable I/O metric for SDRS
recommendations check box to enable or disable IOPS metric inclusion. When I/O load balancing
is enabled, Storage I/O Control is enabled for all the datastores in the datastore cluster if it is not
already enabled. When this option is deselected, you disable:
• IOPS load balancing among datastores in the datastore cluster
• Initial placement for virtual disks based on IOPS metric
Space is the only consideration when placement and balancing recommendations are made.



Storage DRS thresholds can be configured to determine when Storage DRS recommends or
performs initial placement or migration recommendations:
• Utilized Space – Determines the maximum percentage of consumed space allowed before
  Storage DRS recommends or performs an action.
• I/O Latency – Indicates the maximum latency allowed before recommendations or migrations
  are performed. This setting is applicable only if the Enable I/O metric for SDRS
  recommendations check box is selected.
Click the Show Advanced Options link to view advanced options:
• No recommendations until utilization difference between source and destination is –
  Defines the space utilization threshold. For example, datastore A is 80 percent full and datastore
  B is 83 percent full. If you set the threshold to 5, no recommendations are made. If you set the
  threshold to 2, a recommendation is made or a migration occurs. (A worked sketch of this rule
  follows this list.)
• Evaluate I/O load every – Defines how often Storage DRS checks space or IOPS load
  balancing or both.
• I/O imbalance threshold – Defines the aggressiveness of IOPS load balancing if the Enable
  I/O metric for SDRS recommendations check box is selected.
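
A minimal sketch of the utilization-difference check in generic POSIX shell, using the
80 percent and 83 percent figures from the example above (a simplified illustration, not the
actual Storage DRS algorithm):

    # A recommendation is considered only when the utilization difference
    # between source and destination meets the configured threshold.
    SOURCE_UTIL=83; DEST_UTIL=80; THRESHOLD=5
    if [ $(( SOURCE_UTIL - DEST_UTIL )) -ge $THRESHOLD ]; then
        echo "migration recommendation possible"
    else
        echo "no recommendation"   # 83 - 80 = 3, below a threshold of 5
    fi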



Storage DRS Affinity Rules
Slide 13-48

[Diagram: three datastore clusters illustrating the three rule types.]
Intra-VM VMDK affinity:
• Keep a virtual machine’s VMDKs together on the same datastore.
• Maximize virtual machine availability when all disks are needed in
  order to run.
• Rule is on by default for all virtual machines.
Intra-VM VMDK anti-affinity:
• Keep a virtual machine’s VMDKs on different datastores.
• Rule can be applied to all or a subset of a virtual machine’s disks.
VM anti-affinity:
• Keep virtual machines on different datastores.
• Rule is similar to the DRS anti-affinity rule.
• Maximize availability of a set of redundant virtual machines.

By default, all of a virtual machine’s disks are kept on the same datastore.
A user might want the virtual disks on different datastores. For example, a user can place a system
disk on one datastore and place the data disks on another. In this case, the user can set up a virtual
machine disk (VMDK) anti-affinity rule, which keeps a virtual machine’s virtual disks on separate
datastores.
Virtual machine anti-affinity rules keep virtual machines on separate datastores. This rule is useful
when redundant virtual machines must always be available.



Adding Hosts to a Datastore Cluster
Slide 13-49

Select the host cluster that will use the datastore cluster.
• If no host clusters are created, the user can select individual ESXi
  hosts to use the datastore cluster.
[Screenshot: Datastores and Datastore Clusters inventory view >
right-click datacenter > New Datastore Cluster.]

You can configure a host cluster or individual hosts to use the datastore cluster enabled for
Storage DRS.



Adding Datastores to the Datastore Cluster
Slide 13-50

Select the datastores to add to the datastore cluster.
[Screenshot callout: VMware recommends selecting datastores that
all hosts can access.]

You can select one or more datastores in the Available Datastores pane. The Show Datastores
drop-down menu enables you to filter the list of datastores to display. VMware recommends that all
hosts have access to the datastores that you select.
In the example, all datastores accessed by all hosts in the vCenter Server inventory are displayed.
All datastores are accessible by all hosts, except for the datastores Local01 and Local02.



Storage DRS Summary Information
Slide 13-51

A panel on the datastore cluster’s Summary tab displays the Storage
DRS settings.

The vSphere Storage DRS panel on the Summary tab of the datastore cluster displays the Storage
DRS settings:
• I/O metrics – Displays whether or not the I/O metric inclusion option is enabled
• Storage DRS – Indicates whether Storage DRS is enabled or disabled
• Automation level – Indicates either manual or fully automated mode
• Utilized Space threshold – Displays the space threshold setting
• I/O latency threshold – Displays the latency threshold setting



Storage DRS Migration Recommendations
Slide 13-52

Use the Storage DRS tab to monitor for migration recommendations.

The Storage DRS tab displays the Recommendations view by default. In this view, datastore cluster
properties are displayed. Also displayed are the migration recommendations and the reasons for the
recommendations.
To refresh recommendations, click the Run Storage DRS link.
To apply recommendations, click Apply Recommendations.
The Storage DRS tab has two other views. The Faults view displays issues that occurred when
applying recommendations. The History view maintains a migration history.



Storage DRS Maintenance Mode
Slide 13-53

Storage DRS maintenance mode allows you to take a datastore out of
use in order to service it.
Storage DRS maintenance mode:
• Evacuates virtual machines from a datastore placed in maintenance
  mode:
  • Registered virtual machines (on or off) are moved.
  • Templates and unregistered virtual machines are not moved.

Storage DRS allows you to place a datastore in maintenance mode. A datastore enters or leaves
maintenance mode only as the result of your performing the task. Storage DRS maintenance mode is
available to datastores in a datastore cluster enabled for Storage DRS. Standalone datastores cannot
be placed in maintenance mode.
When a datastore enters Storage DRS maintenance mode, only registered virtual machines
are moved to other datastores in the datastore cluster. Unregistered virtual machines, templates, ISO
images, and other nonvirtual machine files are not moved. The datastore does not enter maintenance
mode until all files on the datastore are moved. So you must manually move these files off the
datastore in order for the datastore to enter Storage DRS maintenance mode.
If the datastore cluster is set to fully automated mode, virtual machines are automatically migrated
to other datastores.
If the datastore cluster is set to manual mode, migration recommendations are displayed in the
Storage DRS tab. The virtual disks cannot be moved until the recommendations are accepted.
To place a datastore into Storage DRS maintenance mode:
1. Go to the Datastores and Datastore Clusters inventory view.
2. Right-click the datastore in the datastore cluster enabled for Storage DRS and select Enter
   SDRS Maintenance Mode.
Backups and Storage DRS
Slide 13-54

Backing up virtual machines can add latency to a datastore.


You can schedule a task to disable Storage DRS behavior for the
duration of the backup.

Scheduled tasks can be configured to change Storage DRS behavior. Scheduled tasks can be used to
change the Storage DRS configuration of the datastore cluster to match enterprise activity. For
example, if the datastore cluster is configured to perform migrations based on I/O latency, you might
disable the use of I/O metrics by Storage DRS during the backup window. You can reenable I/O
metrics use after the backup window ends.
To set up a Storage DRS scheduled task for a datastore cluster:

1. In the Datastores and Datastore Clusters inventory view, right-click the datastore cluster and
select Edit Settings.
2. In the left pane, select SDRS Scheduling and click Add.
3. In the Set Time page, enter the start time, end time, and days that the task should run. Click
Next.
4. In the Start Settings page, enter a description and modify the Storage DRS settings as you want
them to be when the task starts. Click Next.
5. In the End Settings page, enter a description and modify the Storage DRS settings as you want
them to be when the task ends. Click Next.
6. Click Finish to save the scheduled task.



Storage DRS: vSphere Technology Compatibility
Slide 13-55

Feature or product                        Supported/Not supported   Migration recommendation
VMware snapshots                          Supported                 Fully Automated
Raw device mapping pointer files          Supported                 Fully Automated
VMware Thin-Provisioned Disks             Supported                 Fully Automated
VMware vSphere Linked Clones              Not Supported             Not Supported
VMware vSphere Storage Metro Cluster      Supported                 Manual
VMware® vCenter™ Site Recovery Manager    Not Supported             Not Supported
VMware vCloud® Director™                  Not Supported             Not Supported

The table shows some features that are supported with Storage DRS. For information about Storage
DRS features and requirements, see vSphere Resource Management Guide at
http://www.vmware.com/support/pubs/vsphere-esxi-vcenter-server-pubs.html.



Storage DRS: Array Feature Compatibility
Slide 13-56

Feature or product              Initial placement   Migration recommendations
Array-Based Snapshots           Supported           Manual
Array-Based Deduplication       Supported           Manual
Array-Based Thin Provisioning   Supported           Manual
Array-Based Auto-Tiering        Supported           Manual (only capacity load balancing)
Array-Based Replication         Supported           Fully Automated
VMware vSphere® Replication     Not Supported       Not Supported

The table shows some array features that are supported with Storage DRS. For information about
Storage DRS and supported array features and requirements, see the vSphere Storage DRS
Interoperability white paper at http://www.vmware.com/resources/techresources/10286.



Storage DRS and Storage I/O Control
Slide 13-57

Storage DRS and Storage I/O Control are complementary solutions:
• Storage I/O Control is set to stats only mode by default.
  • Storage DRS works to avoid I/O bottlenecks.
  • Storage I/O Control manages unavoidable I/O bottlenecks.
• Storage I/O Control works in real time.
• Storage DRS does not use real-time latency to calculate load
  balancing.
• Storage DRS and Storage I/O Control provide you with the
  performance that you need in a shared environment, without having
  to massively overprovision storage.

Both Storage DRS and Storage I/O Control work with IOPS and should be used together. Storage
DRS works to avoid IOPS bottlenecks. Storage I/O Control is enabled when you enable Storage
DRS. Storage I/O Control is used to manage unavoidable IOPS bottlenecks, such as short,
intermittent bottlenecks, and congestion on every datastore in the datastore cluster.
Storage I/O Control runs in real time. It continuously checks for latency and controls I/O
accordingly.
Storage DRS uses IOPS load history to determine migrations. Storage DRS runs infrequently and
does analysis to determine long-term load balancing.
Storage I/O Control monitors the I/O metrics of the datastores. Storage DRS uses this information to
determine whether a virtual machine should be moved from one datastore to another.



Lab 26
Slide 13-58

In this lab, you will create a datastore cluster and configure
Storage DRS:
1. Create a datastore cluster enabled for Storage DRS.
2. Perform a datastore evacuation with datastore maintenance mode.
3. Manually run Storage DRS and apply migration recommendations.
4. Acknowledge Storage DRS alarms.
5. Clean up for the next lab.



Review of Learner Objectives
Slide 13-59

You should be able to do the following:
• Create a datastore cluster.
• Configure Storage DRS.
• Explain how Storage I/O Control and Storage DRS complement each
  other.



Key Points
Slide 13-60

• vSphere Storage APIs – Array Integration consists of APIs for hardware
  acceleration and array thin provisioning.
• vSphere Storage APIs – Storage Awareness allows storage vendors to
  provide information about the capabilities of their storage arrays to
  vCenter Server.
• Profile-driven storage is a feature that introduces storage compliance to
  vCenter Server.
• Storage I/O Control allows cluster-wide storage I/O prioritization.
• Storage DRS provides an easy way for an organization to balance its
  storage utilization and minimize the effect of I/O latency.
• A datastore cluster enabled for Storage DRS is a collection of
  datastores working together to balance storage capacity and I/O
  latency.
Questions?



MODULE 14

Data Protection
Slide 14-1

Module 14
Data Protection


You Are Here
Slide 14-2

Course Introduction
Introduction to Virtualization
Creating Virtual Machines
VMware vCenter Server
Configuring and Managing Virtual Networks
Configuring and Managing Virtual Storage
Virtual Machine Management
Installing VMware vSphere Components
Access and Authentication Control
Resource Management and Monitoring
High Availability and Fault Tolerance
Network Scalability
Host and Management Scalability
Storage Scalability
Data Protection
Patch Management
VMware Management Assistant



Importance
Slide 14-3

Over time, your VMware vSphere® environment might undergo
changes to its hardware or software configuration. In addition,
application data goes through constant change.
From a manageability perspective, making regular backups of your
vSphere environment is important.
Backing up virtual machines requires strategies that leverage
virtualization architecture to perform highly efficient backups.



Learner Objectives
Slide 14-4

After this module, you should be able to do the following:
• Describe the problems when using traditional backup in virtual
  infrastructure.
• Describe solutions for backing up and restoring virtual machines that
  use virtual infrastructure.
• Discuss technologies that make virtual machine backup and restore
  operations faster and easier.
• Describe how to back up and restore a virtual machine.
• Describe how to back up and restore VMware® vCenter Server™.
• Discuss a strategy for backing up and restoring a VMware vSphere®
  ESXi™ host’s configuration data.



Traditional Backup Method
Slide 14-5

[Diagram: a backup server connects over a network connection to a
physical server running a backup agent, which uses nearly 100
percent of server resources during backup; data on the server’s disks
is backed up to tape or disk.]

When you think of traditional backup and restore methods, you imagine a physical environment
with a single operating system running on a single physical server. The traditional backup and
restore process relies on a software backup agent that is installed in the operating system. The
backup and restore model requires a backup server to be configured with a tape or disk subsystem
for data to be written to. At regular intervals, the backup server establishes a TCP/IP session with
the backup agent. After the connection is established, the backup agent begins to read all the file
systems on all the disks that are configured for that host.
When you are considering traditional backup methods for virtual machines, the advantages to this
model include:
• A process that is well understood
• The ease of deploying and managing for administrators
• Features and functionality that you are accustomed to



Among the disadvantages are:
• Licensing costs
• High host resource utilization
• Longer backup windows
• Slow recovery times
• Increased storage capacity requirements
• The need to install a backup agent



Backup Challenges in Virtualized Environments
Slide 14-6

[Diagram: the traditional backup method, with a backup server
backing up virtual machines running on VMware vSphere.]
Excessive physical resource use is generated by each virtual machine.
Backup agents installed in virtual machines monopolize host CPU
resources during backups, which results in less CPU resource for
other virtual machines running on that ESXi host.
I/O resources like network and storage are also saturated with read
and write operations during backup.

Unlike virtual machines, physical servers do not share resources. A physical server has uninhibited
access to 100 percent of its resources. Virtual machines must share available server resources. As
with a single physical system, backup of a single virtual machine uses nearly all of the available
physical server resources. With physical servers, you can schedule backups concurrently, with
backups running on other physical servers in the environment.
Performing concurrent backup operations on virtual machines, especially those running on the same
physical server, places a heavy strain on CPU, network, and shared storage resources.
If you must install a backup agent into your virtual machines, consider a strategy that performs a
backup of virtual machines that uses different networks and datastores. And limit the number of
virtual machine backups to one per VMware ESXi™ host.
The backup process can strain I/O resources because the backup process copies data from client to
server. The backup process also requires a significant amount of CPU cycles to identify which data
to back up and which data to leave alone. Transfer of the data across the network can also be
burdensome. Adding to the overhead is the fact that some backup programs work to eliminate
redundancies in data, which requires additional CPU cycles to complete. Combined, all of this
processing consumes an excessive amount of server resources, especially CPU resources.



Virtual Architecture Advantages
Slide 14-7

No backup agent installation is required.


Use of virtual machine snapshot functions is enabled.
Backup processing is offloaded from ESXi hosts to a backup server.
Virtual machines see the same virtual hardware.
Virtual disks can be thin-provisioned.
Faster backup and recovery times are enabled through the use of
changed block tracking and data deduplication.
A single backup image can be created.
Both image-level and file-level restoration can be performed.

Because of the flexibility of virtual architectures, they have many advantages in regard to backup
strategies. These advantages offer the prospect of saving you time and money as well as introducing
new technologies, not available in physical architectures, into your datacenter.
Virtual machines always see the same set of virtual hardware, regardless of the hardware installed
on the physical server, which makes bare-metal recoveries easier. Physical servers require separate
processes to create bare-metal and file-level restores. Virtual machines require only an image-level
backup, which can be used for both bare-metal and file-level restoration. Virtual machines are not
required to have a backup agent installed, because backup solutions created for virtual architectures
can directly access VMware® vSphere® datastores. Direct access to the datastore enables
offloading of backup processing to a server other than the host on which the virtual machines are
running. Direct datastore access also means that the backups do not consume network bandwidth.



vSphere Storage APIs - Data Protection
Slide 14-8

VMware vSphere® Storage APIs – Data Protection (VADP):
• Enables backup and recovery of entire virtual machine images across
  SAN storage or local area networks
• Is an API that is directly integrated with backup tools from third-party
  vendors
• Enables you to remove load from the host and consolidate backup load
  onto a central backup server
• Protects virtual machines that use any type of storage supported by
  ESXi (Fibre Channel, iSCSI, NAS, or local storage)

VMware vSphere® Storage APIs – Data Protection (VADP) requires no software installation,
because it is built in to the ESXi framework and can be used to run a full backup.
VADP provides the following features:
• Backing up VMware® guests does not use a temporary directory. So VADP does not use as
much storage on the proxy backup server.
• If you have VMware® vSphere Data Protection, you can restore individual files.
• If you have Data Protection, you can run incremental backups after running an initial full
backup.
For more about VADP, go to http://www.vmware.com/products/vstorage-apis-for-data-protection.
On this page, click backup software for a list of third-party vendors who have integrated VADP
into their backup tools.



Offloaded Backup Processing
Slide 14-9

[Diagram: virtual disks on VMware vSphere storage are mounted to
a backup server, which writes backups to tape or disk.]
Configure the storage environment so that the backup server can
access the storage volumes that are managed by the ESXi hosts.
Backup processing is offloaded from the ESXi host to the backup
server, which prevents local ESXi resources from becoming
overloaded.
Perhaps the biggest bottleneck is the backup server, which handles all the backup
coordination tasks. One of these backup tasks is copying data from point A to point B. Other backup
tasks do a lot of CPU processing. For example, tasks are performed to determine what data to back
up and what not to back up. Other tasks are performed to deduplicate data and compress data that is
written to the target.
A server with insufficient CPU resources can greatly reduce backup performance. It is important not
to skimp on resources for your backup server. A physical server or virtual machine with an ample
amount of memory and CPU capacity is necessary for the best backup performance possible.
The motivation to use LAN-free backups is to reduce the stress on the physical resources of the
ESXi host when virtual machines are backed up. LAN-free backups reduce the stress by offloading
backup processing from the ESXi host to a backup proxy server.
You can configure your environment for LAN-free backups to the backup server, also called the
backup proxy server. For LAN-free backups, the backup server must be able to access the storage
managed by the ESXi hosts on which the virtual machines to back up are running.
If you are using NAS or direct-attached storage, ensure that the backup proxy server is accessing the
volumes with a network-based transport. If you will be running a direct SAN backup, zone the SAN
and configure the disk subsystem host mappings. The host mappings must be configured so that all
ESXi hosts and the backup proxy server access the same disk volumes.
Changed Block Tracking
Slide 14-10

[Diagram: two virtual machines with CBT enabled (ctkEnabled=true
and scsi#:#.ctkEnabled=true), one with hardware version 7 and one
with hardware version 9; each tracked virtual disk is paired with a
-ctk.vmdk file.]
Copy only file blocks that have changed since the last backup.
Changed block tracking enables faster incremental backups and
near-continuous data protection.

With changed block tracking (CBT), the VMkernel tracks changed blocks of a virtual machine’s
disk. The implementation of CBT alleviates the burden of the backup applications having to scan or
track changed blocks. The result is much quicker incremental backups because scanning an entire
virtual machine disk for changes since the last backup is no longer necessary.
The CBT feature can be accessed by third-party applications as part of VADP. Applications can use
the API to query the VMkernel to return the blocks of data that have changed on a virtual disk since
the last backup operation. You can use CBT on any type of virtual disk, thick or thin, and on any
datastore type, including NFS and iSCSI datastores. You cannot use CBT on physical mode raw
device mappings (RDMs).
Use of CBT depends on the virtual machine hardware version. Virtual machines created with
hardware version 7 or later can use CBT. The feature is not enabled by default. You must enable
CBT on each virtual machine.



To enable CBT:
• Add ctkEnabled=true and scsi#:#.ctkEnabled=true (where #:# is the disk controller,
  followed by the disk target number) to the virtual machine configuration (a sketch of the
  resulting configuration lines appears below).
After a virtual machine is CBT-enabled, you must perform a task (such as a power cycle, suspend,
or resume operation) and create or delete a snapshot. You perform these tasks so that the virtual
machine’s disks are closed and reopened. Closing and reopening the virtual disks allows the change
tracking filter to be enabled for that virtual machine.
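
A minimal sketch of the resulting .vmx configuration lines, assuming a single disk at SCSI
controller 0, target 0 (the parameter names come from the bullet above):

    ctkEnabled = "true"
    scsi0:0.ctkEnabled = "true"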
The CBT feature stores information about changed blocks in a -ctk.vmdk file that is created for
each virtual disk for which CBT is enabled. The state of each file is located in this file. For tracking
purposes, each block is stored using sequence numbers that tell queries from applications whether
the block has been modified. A -ctk file exists for each virtual disk and snapshot disk, for example,
<VM_name>.vmdk, <VM_name>-flat.vmdk, <VM_name>-ctk.vmdk, <VM_name>-
000001.vmdk, <VM_name>-000001-delta.vmdk, and <VM_name>-000001-ctk.vmdk.
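
A hypothetical directory listing of such a CBT-enabled virtual machine, using the file names from
the example above (the datastore path and virtual machine name are placeholders):

    ls /vmfs/volumes/datastore1/vm01/
    # vm01.vmdk          vm01-flat.vmdk          vm01-ctk.vmdk
    # vm01-000001.vmdk   vm01-000001-delta.vmdk  vm01-000001-ctk.vmdk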

The size of the -ctk.vmdk file is static. The file does not expand past the initial size unless the size
of the virtual disks is changed. File size changes are based on the size of a virtual disk, which is
about .5MB for every 10GB. CBT cannot be used with virtual machines created with VMware
products before VMware vSphere 4.0. As a result, virtual machines created before vSphere 4.0 take
longer to back up.
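
A minimal sketch of that sizing rule of thumb in generic shell, using a hypothetical 100GB virtual
disk:

    # About .5MB of change-tracking data per 10GB of virtual disk.
    DISK_GB=100
    echo "scale=1; $DISK_GB / 10 * 0.5" | bc   # prints 5.0 (MB)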



Data Deduplication
Slide 14-11

[Diagram: three backups on a storage array reduced to a set of
unique file blocks.]
Data deduplication has the following characteristics:
• Does not store twice those blocks with the same information as a
  previous backup
• Reduces 12 file blocks to 4 unique file blocks in the example shown
• Saves storage capacity
• Provides faster backup performance

Data deduplication greatly minimizes the amount of storage for backups and reduces the overall cost
of ownership for data protection. Deduplicated backups mean that the backup operation:
• Evaluates blocks that will be saved
• Compares them to blocks that have already been saved
• Identifies blocks containing identical data
Duplicate blocks (blocks with the same information as a previous backup) are not stored twice.
Deduplication store technology completes three processes:
• Integrity check – Verifies and maintains the data integrity of the backup store
• Recatalog – Synchronizes restore points with the contents of the backup store
• Reclaim – Reclaims space on the backup store
A well-known limitation to backup of individual files is file locking. Files that are in use by the
operating system are typically locked and cannot be accessed by the backup agent. With special
programs that can be leveraged by the backup software, locked or opened files can be included in
the backup operation. The ability to back up these locked files results in application-consistent
backups. Microsoft Volume Shadow Copy Service (VSS) is an example of this kind of program.



VMware vSphere Data Protection
Slide 14-12

Easy, disk-based backup and recovery solution for virtual machines:
• Preconfigured virtual machine appliance
• Agentless, web-based backup and recovery management
• Patented deduplication technology
• Entire virtual machine and file-level restores
• Included with all versions of VMware vSphere (except Essentials)

With vSphere 5.1, VMware is releasing a new backup and recovery solution for virtual machines
called vSphere Data Protection (VDP). This solution is fully integrated with VMware® vCenter
Server™ and provides agentless, disk-based backup of virtual machines to deduplicated storage.
Benefits of VDP include the following:
• It ensures fast, efficient protection for virtual machines even if they are powered off.
• It uses patented deduplication technology across all backup jobs, significantly reducing disk
space consumption.
• VMware vSphere® APIs – Data Protection (VADP) and Changed Block Tracking (CBT) are
utilized to reduce load on the vSphere hosts and minimize backup window requirements.
• It performs full virtual machine and File-Level Restore (FLR) without installing an agent in
every virtual machine.
• Installation and configuration is simplified using an appliance form factor.
• The VDP appliance and its backups are protected using a checkpoint and rollback mechanism.
• Windows and Linux files can easily be restored by the end user with a Web browser.



vSphere Data Protection Details
Slide 14-13

Scalability (maximums):
• 100 virtual machines per VDP appliance
• 10 appliances per vCenter Server
• 2 TB of deduplicated storage
  • Appliances deployed with .5 TB, 1 TB, or 2 TB
Efficient backup mechanism:
• vSphere APIs for Data Protection (VADP)
• Changed Block Tracking (CBT)
• Variable-length segment size
• Deduplication across all virtual machines performed in VDP appliance
• SCSI hot add, which avoids copying entire .vmdk across network

VADP enables backup software to perform centralized virtual machine backups without the
disruption and overhead of running backup tasks from inside each virtual machine.
A key factor in eliminating redundant data at a segment (or subfile) level is the method for
determining segment size. Fixed-block or fixed-length segments are commonly employed by
snapshot and some deduplication technologies. Unfortunately, even small changes to a dataset can
change all fixed-length segments in a dataset (for example, inserting data at the beginning of a file).
This change to all fixed-length segments occurs despite the fact that very little of the dataset has
been changed. VDP uses an intelligent variable-length method for determining segment size that
examines the data to determine logical boundary points, eliminating the inefficiency.
VDP uses a patented method for segment size determination designed to yield optimal efficiency
across all systems. VDP’s algorithm analyzes the binary structure of a data set (all the 0s and 1s that
make up a dataset) in order to determine segment boundaries that are context-dependent. Variable-
length segments average 24 KB in size and are compressed to an average of 12 KB. By analyzing
the binary structure within the VMDK files, VDP works for all file types and sizes and intelligently
de-duplicates the data.
In vSphere 5.x, the SCSI HotAdd feature is enabled only for vSphere editions Enterprise and higher,
which have Hot Add licensing enabled. No separate Hot Add license is available for purchase as an
add-on.



vSphere Data Protection Key Components
Slide 14-14

Virtual Machine Appliance:
• Linux virtual machine .ova package
• Easy, fast deployment
vSphere Infrastructure:
• Changed Block Tracking
• VSS in VMware Tools
Appliance Storage Sizes:
• .5 TB
• 1 TB
• 2 TB
vCenter Integration:
• Manage through the VMware vSphere® Web Client
[Diagram: VDP appliances on vSphere hosts, managed through
vCenter Server and the vSphere Web Client.]

VDP requires vCenter Server 5.1 or higher. vCenter Server can be the traditional Windows
implementation—or the Linux-based VMware® vCenter™ Server Appliance™.
VDP is deployed as a preconfigured Linux-based appliance. Each appliance supports as many as
100 virtual machines, and as many as 10 VDP appliances can be deployed per vCenter Server
instance. The Windows-based VMware vSphere® Client™ is used to deploy VDP. After the
appliance has been deployed, management is performed using the VMware vSphere® Web Client
with any supported Web browser. Adobe Flash must be installed in the Web browser.



VDP Architecture
Slide 14-15

VDP Appliance:
• 4 vCPUs, 4 GB RAM
• 850GB, 1.6 TB, or 3.1 TB
• SLES 11 64-bit
[Diagram: the vSphere Web Client connects to vCenter Server 5.1
with vSphere Data Protection; the VDP appliance runs on vSphere
4.0 or higher and writes to a deduplication store of .vmdk files.]
The VDP appliance is deployed with four processors (vCPUs) and 4GB of RAM. Three
configurations of usable backup storage capacity are available: .5TB, 1TB and 2TB, which
respectively consume 850GB, 1,300GB and 3,100GB of actual storage capacity. Proper planning
should be performed to help ensure proper sizing, because additional storage capacity cannot be
added after the appliance is deployed. Storage capacity requirements are based on the number of
virtual machines being backed up, amount of data, retention periods and typical data change rates.
The deduplication store completes the following processes:
• Integrity check – This operation verifies and maintains data integrity on the deduplication store.
Data Recovery completes an incremental integrity check every 24 hours. These checks verify
the integrity of restore points that have been added to the deduplication store since the most
recent full or incremental integrity check. Data Recovery performs an integrity check of all
restore points once a week.



• Recatalog – This operation ensures that the catalog of restore points is synchronized with the
contents of the deduplication store. This operation runs when an inconsistency is detected
between the catalog and the deduplication store. While the recatalog operation is in progress, no
other operation is allowed on the deduplication store.
• Reclaim – This operation reclaims space on the deduplication store. The reclaim process can be
the result of the appliance enforcing the retention policy and deleting expired restore points.
This operation runs daily or when a backup job requires more space than is available on the
deduplication store. While the reclaim operation is in progress, backups to the deduplication
store are not allowed. But restore operations from the deduplication store are allowed.



VDP Deployment and Configuration
Slide 14-16

After installation, VDP-configure is in “maintenance mode”:
• Status - Services and logs
• Configuration - Network, vCenter, and system settings
• Rollback - Roll back repository of backups (more on this later)
• Upgrade - Upgrade VDP appliance

VDP is deployed using the vSphere Client from a prepackaged Open Virtualization Archive (.ova)
file. The .ova files are labeled to easily identify the amount of backup storage capacity included with
the appliance.
After the appliance is deployed and powered on, a Web browser is used to access the VDP-configure
user interface (UI) and perform the initial configuration. The first time the user connects to the
VDP-configure UI, it will be running in installation mode. With the installation mode wizard, items
such as IP address, host name, DNS, time zone and vCenter Server connection information are
configured. Upon successful completion of the installation mode wizard, the appliance must be
rebooted. This reboot can take up to 30 minutes to complete as the appliance finishes initial
configuration.
After the initial configuration, the VDP-configure utility runs in maintenance mode. In this mode,
the VDP-configure UI is used to perform functions such as starting and stopping services on the
appliance, collecting logs and rolling back the appliance to a previous valid configuration state.
The vSphere Web Client is used to create and maintain backup jobs and perform entire virtual
machine restores, as well as for reporting and configuration of VDP.
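Although this course deploys the .ova through the vSphere Client, OVF packages can generally also be imported with the standalone VMware OVF Tool. A minimal sketch, assuming a hypothetical file name and inventory path:
ovftool vSphereDataProtection-0.5TB.ova vi://administrator@vc01.vclass.local/Training/host/Lab_Cluster/
ovftool prompts for the password and imports the appliance into the specified cluster; the initial VDP-configure steps described above are still required afterward.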



Virtual Machine Backup
Slide 14-17

Select objects: containers (datacenter, folder, clusters, and so on) and individual virtual machines.

Creating and editing a backup job is accomplished using the Backup tab of the VDP UI in the
vSphere Web Client. Individual virtual machines can be selected for backup. Containers such as
datacenters, clusters, hosts, resource pools and folders also can be selected for backup. All virtual
machines in the container at the time the backup job runs are backed up. New virtual machines
added to the container are included when the next backup job runs. Similarly, any virtual machines
removed from the container no longer are backed up.
Backup jobs can be scheduled daily, weekly or monthly. Each job runs once on the day it is
scheduled and begins when the backup window opens (default is 8:00 p.m. local time). As many as
eight backup jobs can run simultaneously on each VDP appliance.



Restoring a Virtual Machine
Slide 14-18

The restore of an entire virtual machine is performed using the Restore tab of the VDP UI in the
vSphere Web Client. The administrator can browse the list of virtual machines backed up by VDP
and then select one or more restore points. By leveraging CBT during a restore of a virtual machine
to its original location, VDP offers fast and efficient recovery. During the restore process, VDP
queries VADP to determine which blocks have changed since the selected restore point, and it
recovers only those blocks. This process reduces data transfer in the vSphere environment during a
recovery operation and decreases recovery time. VDP compares the workload of the two restore
methods (full-image restore and restore leveraging CBT) and uses the method that results in the
fastest restore time. This evaluation is useful in scenarios where the change rate since the
selected restore point is high and the overhead of a CBT analysis operation would be more costly
than that of a full-image recovery. A new virtual machine name and destination datastore also can be
specified to prevent overwriting an existing virtual machine. Choosing a restore location other than
the original results in a full-image restore.



File Level Restore (FLR)
Slide 14-19

Restore individual files from backup:
• VMware vSphere® Data Protection™ Restore Client:
  • http://<VDP ip address>:8580/flr
• VMware® Tools™ must be installed.
• Windows NTFS and Linux LVM, Ext 2, Ext 3 are supported (basic disks).
View all restore points or filter by date.

It also is possible to restore individual files and folders/directories in a virtual machine. A File Level
Restore is performed using a Web-based tool called vSphere Data Protection Restore Client. The
process enables end users to perform restores on their own without the assistance of a VDP
administrator. The end user can select a restore point and then browse the file system as it looked at
the time that the backup was performed. After the end user locates the item or items to be restored, a
destination (on the local machine) is selected and the job is started. The progress of the restore job
can also be monitored in the tool.



VDP Reporting: User Interface
Slide 14-20

• VDP appliance capacity
• Success and failures at a glance
• List of virtual machines, Backup Jobs, and so on, with the ability to filter
• Details for object

The Reports tab displays the following information: VDP Appliance Status, Used Capacity, backup
job information, virtual machine backup details, and so on. There are links to the Event Console and
Task Console for additional information and troubleshooting purposes. Users can filter the list of
virtual machines by means of several criteria, including Virtual Machine Name, Backup Jobs, and
Last Successful Backup date. The details section of the virtual machine information displays the
Virtual Machine Name, guest operating system, backup status, backup date, and other useful items.
In addition to the reporting capabilities of its UI, VDP can be configured to send email reports,
which can be scheduled at a specific time once per day on any or every day of the week. Similar to
the UI, these email messages contain details on the VDP appliance, backup jobs, and the virtual
machines that are backed up.



Backing Up vCenter Server
Slide 14-21

Choose a complementary backup solution that matches your vCenter Server deployment.
Perform a full backup of your vCenter Server system.
Before starting backups, stop these services:
• VMware vCenter Server
• VMware VCMSDS (the Active Directory Application Mode database)
• Database service (SQL, Oracle, DB2)
Back up the vCenter Server database, using recommended practices provided by the database vendor.
Back up the SSL certificates.
Back up the vpxd.cfg file.

vCenter Server requires that the vCenter Server database and the VMware VCMSDS (Active Directory
Application Mode (ADAM)) data be backed up. The ADAM data is backed up every 5 minutes into
the vCenter Server database. To back up the latest update of ADAM data, ensure that the VMware
VirtualCenter Management Webservices service is running for at least 5 minutes before stopping the
other vCenter Server services.
If you run vCenter Server in a virtual machine, use a backup solution designed for backing up
virtual infrastructure. But if you run vCenter Server on a physical server, use any backup solution
designed for backing up physical infrastructure. In both cases, you should obtain a full image of the
vCenter Server host.
For details on restoration of vCenter Server data, see VMware knowledge base article 1023985 at
http://www.vmware.com/kb/1023985.
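How the services are stopped depends on your backup product, but as a minimal sketch, they can be stopped and restarted from an administrative command prompt on a Windows-based vCenter Server system. The service names shown are typical for vCenter Server 5.x and should be verified in the Windows Services console first:
net stop vpxd
net stop VMwareVCMSDS
rem Back up the database, SSL certificates, and vpxd.cfg here.
rem The database service name varies by database product and instance.
net start VMwareVCMSDS
net start vpxd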



Backing Up ESXi Host Configuration Data
Slide 14-22

Always back up your host configuration after installation, after changing the configuration, and after upgrading the host.
ESXi configuration:
• Use the vicfg-cfgbackup command.
  • This command backs up and restores the host’s configuration.
  • Run the command from VMware vSphere® Command-Line Interface.

After you configure an ESXi host, back up your configuration. Always back up your host
configuration after you change the configuration or upgrade the ESXi image. When you perform a
configuration backup, the serial number is backed up with the configuration and is restored when
you restore the configuration. But the serial number is not preserved when you run the recovery CD
(ESXi Embedded) or perform the repair operation (ESXi Installable). The recommended procedure
is to first back up the configuration, run the recovery CD or repair operation if needed, and then
restore the configuration.
Use the vicfg-cfgbackup command to do the backup. Run this command from VMware
vSphere® Command-Line Interface (vCLI). You can install vCLI on your Windows or Linux
system or import VMware vSphere® Management Assistant. For information about importing or
installing vCLI, see vSphere Command-Line Interface Installation and Reference Guide at
http://www.vmware.com/support/pubs.
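As a minimal sketch, with a placeholder host name and backup path, saving and later restoring a host configuration from a vCLI system might look like this:
vicfg-cfgbackup --server esxi01.vclass.local --username root -s /tmp/esxi01_config.bak
vicfg-cfgbackup --server esxi01.vclass.local --username root -l /tmp/esxi01_config.bak
The -s option saves the host configuration to the named file, and -l loads (restores) it, which reboots the host.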
You can use the recovery CD or the repair operation (on the ESXi installation CD) if the host does
not boot because the file partitions or Master Boot Record on the installation disk might be
corrupted. Perform this recovery procedure when VMware Customer Service directs you to.



Review of Learner Objectives
Slide 14-23

You should be able to do the following:
• Describe the problems when using traditional backup in virtual infrastructure.
• Describe solutions for backing up and restoring virtual machines that use virtual infrastructure.
• Discuss technologies that make virtual machine backup and restore operations faster and easier.
• Describe how to back up and restore a virtual machine.
• Describe how to back up and restore vCenter Server.
• Discuss a strategy for backing up and restoring an ESXi host’s configuration data.



Key Points
Slide 14-24

• When you plan your backup strategy for virtual machines, be aware of the various techniques that complement virtual infrastructures.
• VDP is an agentless, 64-bit, Linux-based virtual appliance used for backing up and recovering virtual machines.
• VDP uses deduplication store technology to make efficient use of the backup storage.
• Data can be restored per virtual machine, per virtual disk, or per file.
Questions?

MODULE 15

Patch Management
Slide 15-1


You Are Here
Slide 15-2

Course Introduction
Introduction to Virtualization
Creating Virtual Machines
VMware vCenter Server
Configuring and Managing Virtual Networks
Configuring and Managing Virtual Storage
Virtual Machine Management
Access and Authentication Control
Resource Management and Monitoring
High Availability and Fault Tolerance
Network Scalability
Host and Management Scalability
Storage Scalability
Data Protection
Patch Management
VMware Management Assistant
Installing VMware vSphere Components



Importance
Slide 15-3

Over time, your VMware vSphere® environment might undergo change in its hardware or software configuration, or in the form of software updates or patches.
From a manageability and scalability perspective, you should implement changes to your vSphere environment in an orderly, controlled, and systematic fashion.



Learner Objectives
Slide 15-4

After this module, you should be able to do the following:
• Describe VMware vSphere® Update Manager™.
• List the steps to install Update Manager.
• Use Update Manager:
  • Create and attach a baseline.
  • Scan an inventory object.
  • Remediate an inventory object.



Update Manager
Slide 15-5

Update Manager enables centralized, automated patch and version management for VMware vSphere® ESXi™ hosts, virtual machine hardware, VMware® Tools™, and virtual appliances.
Update Manager reduces security risks:
• Reduces the number of vulnerabilities.
• Eliminates many security breaches that exploit older vulnerabilities.
Update Manager reduces the diversity of systems in an environment:
• Makes management easier
• Reduces security risks
Update Manager keeps machines running more smoothly:
• Patches include bug fixes
• Makes troubleshooting easier

VMware vSphere® Update Manager™ enables centralized, automated patch and version
management for VMware® vSphere® and supports VMware ESXi™ hosts, virtual machine
hardware, VMware® Tools™ and virtual appliances. Updates that you specify can be applied to
ESXi hosts, virtual machine hardware, and virtual appliances that you scan. With Update Manager,
you can perform the following tasks:
• Scan for compliance and apply updates to virtual machine hardware, appliances and hosts
• Directly upgrade hosts, virtual machine hardware, VMware Tools, and virtual appliances
• Apply third-party software on hosts
Keeping the patch versions up to date for virtual machine hardware and ESXi hosts helps reduce the
number of vulnerabilities in an environment and the range of problems requiring solutions. All
systems require ongoing patching and reconfiguration or other solutions. Reducing the diversity of
systems in an environment and keeping them in compliance are security best practices. Additionally,
since patches include bug fixes, Update Manager keeps environments operating properly and
without service interruption or errors.



Update Manager 5.1 can scan and remediate hosts, virtual machines, and virtual appliances:
• ESXi 3.5, 4.x, and 5.x
• Host upgrades of VMware® ESX® or ESXi 4.x to ESXi 5.x
• Upgrades of VMware Tools and virtual machine hardware for virtual machines
• Upgrades of virtual appliances
• Bug fixes

CAUTION
After you upgrade or migrate your host to ESXi 5.x, you cannot roll back to your version 4.x ESXi
software. Back up your host configuration before performing an upgrade or migration. If the
upgrade or migration fails, you can reinstall the 4.x ESXi software and restore your host
configuration.
In addition to patching your ESXi hosts, VMware Tools, and virtual machine hardware, you still
must continue to protect the guest operating system and applications running in the virtual machine.
Continue to protect the guest operating system and applications as you would on a physical system.
VMware® does provide solutions that will assist you with this. One example is to use VMware®
vCenter Configuration Manager™. For information about vCenter Configuration Manager, go to
http://www.vmware.com/products/configuration-manager. Another example is to use VMware®
vCenter™ Protect™ Update Catalog. For information about this product, go to http://www.vmware.com/products/datacenter-virtualization/vcenter-protect-update-catalog.

NOTE
vCenter Configuration Manager can also be used for patching and patch management. This course
will deal specifically with how Update Manager is used to perform these functions.



Update Manager Capabilities
Slide 15-6

Enables cross-platform upgrade from VMware® ESX® to VMware ESXi™
Automated patch downloading:
• Begins with information-only downloading
• Is scheduled at regular configurable intervals
• Contacts the following sources for patching ESXi hosts:
  • For VMware® patches: https://hostupdate.vmware.com
  • For third-party patches: URL of third-party source
Creation of baselines and baseline groups
Scanning:
• Inventory systems are scanned for baseline compliance.
Remediation:
• Inventory systems that are not current can be automatically patched.
Reduces the number of reboots required after VMware Tools updates

Update Manager uses a set of operations to ensure effective patch and upgrade management.
This process begins by downloading information about a set of security patches. One or more of
these patches are aggregated to form a baseline. Multiple baselines can be added to a baseline group.
You can use baseline groups to combine different types of baselines and then scan and remediate an
inventory object against all of them as a whole. If a baseline group contains both upgrade and patch
baselines, the upgrade runs first.
A collection of virtual appliances and ESXi hosts can be scanned for compliance with a baseline or
a baseline group and remediated (updated or upgraded). These processes can be started manually or
through scheduled tasks.



Update Manager Components
Slide 15-7

The slide diagram shows the Update Manager components: the VMware vCenter Server™ system with its vCenter Server database, the Update Manager server with its patch database, the VMware vSphere® Client™ with the Update Manager plug-in, an optional download server with its own patch database, and Internet connections to the VMware patch source and third-party patch sources.

The major components of Update Manager:


• Update Manager server – The Update Manager server can be installed directly on the
VMware® vCenter Server™ system or on a separate system. The system can be either a
physical or a virtual machine. Update Manager 5.1 can only be installed on a 64-bit operating
system. If you upgrade an existing 32-bit Update Manager server, you must backup and restore
the previous patch database or migrate the database using the migration tool.
• Patch database – You can use the same database server that is used by vCenter Server
(Windows-based or Linux appliance), but the server will require a unique database with a DSN
system ODBC connection already configured. For a Windows vCenter Server system, if you do
not specify an existing database server, the software installs the bundled SQL Server 2008 R2 Express.
• Update Manager plug-in – This plug-in runs on the same system on which the VMware
vSphere® Client™ is installed. The Update Manager 5.1 Client can be installed on both 32-bit
and 64-bit operating systems, and must be the same version as the Update Manager server.
• Guest agents – Guest agents are installed into virtual machines from the Update Manager server
and are used in the scanning and remediation operations.



• (Optional) Download server – If your Update Manager server lacks direct access to the Internet,
you can create a download server outside the internal network for downloading patches. You
then load them to the Update Manager server by using portable media, such as DVDs, or a
shared repository, such as a shared folder or URL.
• The Update Manager Download Service (UMDS) is an optional module of Update Manager,
which is used on the download server to download patches. With the UMDS in Update Manager
5.1 you can:
• Configure multiple download URLs
• Restrict downloads to product versions and types that are relevant to your environment

NOTE
UMDS 5.1 can be installed only on 64-bit Windows operating systems.



Installing Update Manager
Slide 15-8

Update Manager must be installed on a Windows 64-bit machine.
To install, start the VMware vCenter Installer and click VMware vSphere Update Manager.
Information needed during the installation:
• vCenter Server host name, user name, and password
• Choice of database: use default or existing database
• Update Manager port settings:
  • Host name, ports, proxy settings (if necessary)
• Destination folder and location for downloading patches
To install the Update Manager client:
• Install the Update Manager Extension plug-in into the vSphere Client.

You can install Update Manager on the same computer as vCenter Server or on a different computer.
Update Manager runs on these Windows versions:
• Windows Server 2003 [Standard/Enterprise/Datacenter] 64-bit (SP2 required)
• Windows Server 2003 R2 [Standard/Enterprise/Datacenter] 64-bit (SP2 required)
• Windows Server 2008 [Standard/Enterprise/Datacenter] 64-bit
• Windows Server 2008 [Standard/Enterprise/Datacenter] 64-bit SP2
• Windows Server 2008 [Standard/Enterprise/Datacenter] 64-bit R2
• Windows Server 2008 [Standard/Enterprise/Datacenter] 64-bit R2 Service Pack 1
You can install Update Manager only on a 64-bit machine.



If the vCenter Server database is installed on the same machine as the Update Manager database,
the memory requirements are higher. For best performance, the following minimums apply:
• Two or more logical cores, each with a speed of 2GHz
• 2GB of RAM if Update Manager and vCenter Server are on different machines
• 4GB of RAM if Update Manager and vCenter Server are on the same machine
• VMware recommends that you use a Gigabit connection between Update Manager and the
ESXi hosts, although a 10/100 Mbps connection is acceptable.
To install Update Manager, start the VMware vCenter Installer and click the VMware vSphere
Update Manager link.
Gather information about the environment into which you are installing Update Manager, including:
• The vCenter Server system that Update Manager will work with. The necessary information
includes:
• The vCenter Server IP address or host name
• Port numbers (in most cases, the default Web service ports, 80 and 443, are used)
• Administrative credentials (the Administrator account is often used)
• The system DNS name plus the user name and password for the database that Update Manager
will work with.
During the installation, you can configure Update Manager to work with an Internet proxy server.
The Update Manager client component is delivered as a plug-in for the vSphere Client. After
installing Update Manager, install the Update Manager plug-in in any vSphere Client that you will
use to manage Update Manager.
In the vSphere Client menu bar, select Plug-ins > Manage Plug-ins. In the Plug-in Manager
window, click Download and Install for the Update Manager plug-in. The installed plug-in appears
under Installed Plug-ins.
The disk storage requirements for Update Manager vary depending on your deployment. Make sure
that you have at least 20GB of free space in which to store patch data. Depending on the size of your
deployment, Update Manager requires a minimum amount of free space per month for database
usage.
Before installing Update Manager, you must create a database instance and configure it to ensure
that all Update Manager database tables are placed in it. Update Manager can handle small-scale
environments using the bundled SQL Server 2008 R2 Express. For environments with more than
5 hosts and 50 virtual machines, create either an Oracle or a SQL Server database for Update
Manager. For large scale environments, you should set up the Update Manager database on a
different computer than the Update Manager server and the vCenter Server database.



Configuring Update Manager Settings
Slide 15-9

By default, all patch sources are enabled. Additional patch sources can be added if necessary.
Modify Update Manager configuration properties.

You can modify the following administrative settings for Update Manager. Select Home >
Solutions and Applications > Update Manager and click the Configuration tab:
• Network Connectivity – Network settings, such as IP address or host name for patch store.
• Download Settings – Where to obtain patches and where to configure the proxy settings.
• Download Schedule – How frequently to download patches. This setting has no effect on an
optional download server, which is separate from the Update Manager server.
• Notification Check Schedule – How frequently to check for notifications about patch recalls,
patch fixes, and alerts.
• Virtual Machine Settings – Whether to take a snapshot of the virtual machines before
remediation to enable rollback and how long to keep snapshots. Snapshots use disk space, but
they also protect you if the upgrade fails.
• ESXi Host/Cluster Settings – How Update Manager responds to a failure that might occur
when placing an ESXi host in maintenance mode. This setting also allows you to temporarily
disable VMware vSphere® Distributed Power Management™ (DPM), VMware vSphere®
High Availability admission control, and VMware vSphere® Fault Tolerance for cluster updates
to succeed.
• vApp Settings – Enable or disable smart reboot of virtual appliances after remediation.



Baseline and Baseline Groups
Slide 15-10

A baseline consists of one or more patches, extensions, or upgrades.
Five types of baselines:
• Host patch
• Host extension
• Host upgrade
• Virtual machine upgrade for hardware or VMware Tools
• Virtual appliance upgrade
Update Manager includes a number of default baselines.
A baseline group consists of multiple baselines:
• Can contain one upgrade baseline per type and one or more patch and extension baselines

When you scan hosts, virtual machines, and virtual appliances, you evaluate them against baselines
and baseline groups to determine their level of compliance.
Baselines contain a collection of one or more patches, extensions, bug fixes, or upgrades. Baselines
can be classified as upgrade, extension, or patch baselines.
An extension refers to additional software for ESXi hosts. This additional software might be
VMware software or third-party software. Examples of extensions include the following:
• Additional features
• Updated drivers for hardware
• Common Information Model (CIM) providers for managing third-party modules on the host
• Improvements to the performance or usability of existing host features.
Baseline types:
• Host patch – A set of patches to apply to a host or set of hosts, based on applicability
• Host extension – A fixed set of extensions for your ESXi host
• Host upgrade – An upgrade release that allows you to upgrade hosts to a particular release
version



• VMware Tools upgrade (to match host) – An upgrade release that checks virtual machines for
compliance with the latest VMware Tools version on the host. Update Manager supports
upgrading of VMware Tools for virtual machines on hosts that are running ESXi 4.0 and later.
• Virtual machine hardware upgrade (to match host) – An upgrade release that checks the virtual
hardware of a virtual machine for compliance with the latest version supported by the host.
Update Manager supports upgrading to virtual hardware version 8.0 on hosts that are running
ESXi 5.x.
• Virtual appliance upgrade – A set of patches to the operating system or application in the virtual
appliance
Baseline groups are assembled from existing baselines. They might contain one upgrade baseline
per type and one or more patch and extension baselines, or a combination of multiple patch and
extension baselines.
Administrators can create, edit, delete, attach, or detach baselines and baseline groups. For large
organizations with different groups or divisions, each group can define its own baselines.



Creating a Baseline
Slide 15-11
To create a baseline:
1. Click Create.
2. Specify name and description.
3. Choose a baseline type.
4. For a patch baseline, select a patch option: Fixed or Dynamic.
5. Select patches to add to the baseline.
In the example on the slide, a host patch is added to the baseline.

To create a baseline, select Home > Solutions and Applications > Update Manager and click the
Baselines and Groups tab. Click the Create link to start the New Baseline wizard. Enter a name
and description for your baseline. Select one of the five baseline types.
If you are creating a patch baseline, you must also select a patch option: Fixed or Dynamic.
A fixed baseline remains the same even if new patches are added to the repository. With a fixed
patch baseline, the user manually specifies all updates included in the baseline from all the patches
available in Update Manager. Fixed updates are typically used to check whether systems are
prepared to deal with particular problems. For example, you might use fixed baselines to check for
compliance with patches to prevent computer worms.
A dynamic baseline is updated when new patches meeting the specified criteria are added to the
repository. The criteria that you can specify are patch vendor, product, severity, and release dates. As
the set of available updates changes, dynamic patch baselines are updated as well. You can explicitly
include or exclude an update.



Attaching a Baseline
Slide 15-12
To view compliance information and remediate inventory objects, first attach
a baseline or baseline group to an object.
For improved efficiency, attach a baseline to a container object instead of to
an individual object.

To view compliance information and remediate objects in the inventory against specific baselines
and baseline groups, attach existing baselines and baseline groups to these objects.
Although you can attach baselines and baseline groups to individual objects, attaching them to
container objects, such as folders, hosts, clusters, and datacenters, is more efficient. Attaching a
baseline to a container object attaches the baseline to all objects in the container. On the slide, a host
patch baseline named ESXi Host Update is attached to a cluster object named Lab Cluster. The host
patch baseline is attached to the two hosts in Lab Cluster: sc-goose01 and sc-goose02.
To attach baselines to ESXi hosts:

1. Go to the Hosts and Clusters inventory view.

2. Select the object and click the Update Manager tab.

3. Click Attach.

4. Select the baselines or baseline group that you want to attach to the object.
To attach baselines to virtual machines, templates, and virtual appliances, go to the VMs and
Templates inventory view.



Scanning for Updates
Slide 15-13

Scanning evaluates the inventory object against the baseline or baseline group.
A scan can be performed manually or automatically, using a scheduled task.
Scanning is the process in which attributes of a set of hosts, virtual machines, or virtual appliances
are evaluated against patches, extensions, and upgrades in the attached baselines and baseline
groups. You can configure Update Manager to scan virtual machines, virtual appliances, and ESXi
hosts against baselines and baseline groups by scheduling or manually initiating scans to generate
compliance information.
If the object that you select is a container object, all child objects are also scanned. The larger the
virtual infrastructure and the higher up in the object hierarchy that you begin the scan, the longer the
scan takes.
After you have an inventory object attached to a baseline, perform a scan by right-clicking the object
and selecting Scan for Updates. Or click the Scheduled Tasks button and create a scheduled task.
To schedule the scan, select Home > Management > Scheduled Tasks. In the toolbar, click New. In
the Schedule Task dialog box, select the task Scan for Updates. The Schedule a Scan wizard allows
you to define the object to scan, the type of scan to perform, and the time to perform the scan.
A scheduled task is useful because it can automatically scan an object for problems. This scan
catches new objects that do not match a defined baseline. Using a dynamic baseline, instead of a
fixed baseline, discovers new vulnerabilities and needed updates.



To upgrade VMware Tools and virtual machine hardware, a supported guest operating system must
be running in the virtual machine. The following list identifies the supported guest operating
systems included with the initial release of Update Manager 5.1:
• Windows XP Professional 32-bit (SP3 required)
• Windows XP Professional 64-bit (SP2 required)
• Windows Server 2003 [Standard/Enterprise/Datacenter] 32-bit (SP2 required)
• Windows Server 2003 [Standard/Enterprise/Datacenter] 64-bit (SP2 required)
• Windows Server 2003 R2 [Standard/Enterprise/Datacenter] 32-bit (SP2 required)
• Windows Server 2003 R2 [Standard/Enterprise/Datacenter] 64-bit (SP2 required)
• Windows Vista [Business/Enterprise] 32-bit (SP2 required)
• Windows Vista [Business/Enterprise] 64-bit (SP2 required)
• Windows Server 2008 [Standard/Enterprise/Datacenter] 32-bit
• Windows Server 2008 [Standard/Enterprise/Datacenter] 64-bit
• Windows Server 2008 [Standard/Enterprise/Datacenter] 32-bit SP2
• Windows Server 2008 [Standard/Enterprise/Datacenter] 64-bit SP2
• Windows 7 [Professional/Enterprise] 32-bit
• Windows 7 [Professional/Enterprise] 64-bit
• Windows 7 [Professional/Enterprise] 32-bit SP1
• Windows 7 [Professional/Enterprise] 64-bit SP1
• Windows Server 2008 [Standard/Enterprise/Datacenter] 64-bit R2
• Windows Server 2008 [Standard/Enterprise/Datacenter] 64-bit R2 Service Pack 1
• Red Hat Enterprise Linux 4
• Red Hat Enterprise Linux 5
• Red Hat Enterprise Linux 6



Viewing Compliance
Slide 15-14

In this example, the scan found two noncompliant hosts.
After the scan, patches and updates can be staged first and then remediated at a later time.

To view compliance of different hosts or virtual machines with different Update Manager patch
baselines, select the object in the appropriate inventory view and click the Update Manager tab. To
view virtual machine compliance, you must use the VMs and Templates inventory view.
The results of the scan provide information on the degree of conformance with baselines and
baseline groups. Information includes the time the last scan was completed at this level and the total
number of compliant and noncompliant baselines. For each baseline or baseline group, the scan
results report the number of virtual machines, appliances, or hosts that are compliant, noncompliant,
or unknown.
On the slide, the hosts in the cluster named Lab Cluster were scanned. After viewing compliance
information, the next step is to remediate the host. Before remediation, you can perform an
additional step on host objects called staging.
Staging allows you to download the patches and extensions from the Update Manager server to the
ESXi hosts, without applying the patches and extensions immediately. Staging patches and
extensions speeds up the remediation process because the patches and extensions are already
available locally on the hosts. You can reduce the downtime during remediation by staging patches
and extensions whose installation requires that a host enter maintenance mode. Staging patches and
extensions itself does not require that the hosts enter maintenance mode.



Remediating Objects
Slide 15-15

You can remediate virtual machines, templates, virtual appliances, and hosts.
You can perform the remediation immediately or schedule it for a later date.

You can remediate virtual machines, virtual appliances, and hosts by using either user-initiated
remediation or regularly scheduled remediation. To remediate an object, right-click the inventory
object and select Remediate.
For ESXi hosts in a cluster, the remediation process is sequential. When you remediate a cluster of
hosts and one of the hosts fails to enter maintenance mode, Update Manager reports an error and the
process fails. The hosts in the cluster that did get remediated stay at the updated level. The ones that
were to be remediated after the failed host are not updated. When you remediate hosts against
baseline groups containing an upgrade baseline and patch or extension baselines, the upgrade is
performed first.
For multiple clusters under a datacenter, the remediation processes run in parallel. If the remediation
process fails for one of the clusters in a datacenter, the remaining clusters are still remediated.
To remediate virtual machines and virtual appliances together, they must be in one container, such as
a folder, a VMware vSphere® vApp™, or a datacenter. You must then attach a baseline group or a
set of individual virtual appliance or virtual machine baselines to the container. If you attach a
baseline group, it can contain both virtual machine and virtual appliance baselines. The virtual
machine baselines apply to virtual machines only. The virtual appliance baselines apply to virtual
appliances only.



Update Manager supports remediation for the following inventory objects:
• Powered-on, suspended, or powered-off virtual machines and templates for VMware Tools and
virtual machine hardware upgrade.
• Powered-on virtual appliances that are created with VMware® Studio™ 2.0 and later, for
virtual appliance upgrade.
• ESXi hosts for patch, extension, and upgrade remediation.



Maintenance Mode and Remediation
Slide 15-16

Power off or suspend virtual machines
Option for PXE-booted ESXi 5.0

Some updates require that a host enters maintenance mode before remediation. Virtual machines and
appliances cannot run when a host is in maintenance mode.
To reduce the host remediation downtime at the expense of virtual machine availability, you can
choose to shut down or suspend virtual machines and virtual appliances before remediation. In a
VMware vSphere® Distributed Resource Scheduler™ (DRS) cluster, if you do not power off the
virtual machines, the remediation takes longer but the virtual machines are available during the entire
remediation process, because they are migrated with VMware vSphere® vMotion® to other hosts.
Select Retry entering maintenance mode in case of failure, specify the number of retries, and
specify the time to wait between retries. Update Manager waits for the retry delay period and retries
putting the host into maintenance mode as many times as you indicate in the Number of retries field.
Update Manager does not remediate hosts on which virtual machines have connected CD, DVD, or
floppy drives. In clustered environments, connected media devices might prevent vMotion if the
destination host does not have an identical device or mounted ISO image, which in turn prevents the
source host from entering maintenance mode.



The option Disable any removable media devices connected to the virtual machine on the host
exists for this reason. After remediation, Update Manager reconnects the removable media devices
if they are still available.
The check box under ESXi 5.x Patch Settings to enable Update Manager to patch powered-on PXE
booted ESXi hosts appears only when you remediate hosts against patch or extension baselines.



Remediation Options for a Cluster
Slide 15-17

When remediating hosts in a cluster, you must temporarily disable certain cluster features: VMware vSphere® Distributed Power Management™, VMware vSphere® High Availability, and VMware vSphere® Fault Tolerance.
You can generate a report that identifies problems before remediation occurs.

Remediation of hosts in a cluster requires that you temporarily disable cluster features like DPM and
vSphere HA admission control. You should also turn off VMware vSphere Fault Tolerance if it is
enabled on any of the virtual machines on a host. Disconnect the removable devices connected to
the virtual machines on a host, so that they can be migrated with vMotion.
Before you start the remediation process, you can generate a report that shows which clusters, hosts,
or virtual machines still have these cluster features enabled. On the Cluster Remediation Options page of the
Remediate wizard, click Generate Report. The Cluster Remediation Options Report shows the
name of the cluster, host, or virtual machine on which a problem is reported. The report also
displays recommendations on how to fix the problem.



Patch Recall Notification
Slide 15-18

At regular intervals, Update Manager contacts VMware to download notifications about patch recalls, new fixes, and alerts.
• Notification Check Schedule is selected by default.
On receiving patch recall notifications, Update Manager:
• Generates a notification in the notification tab
• No longer applies the recalled patch to any host:
  • Patch is flagged as recalled in the database.
• Deletes the patch binaries from its patch repository
• Does not uninstall recalled patches from ESXi hosts:
  • Instead, it waits for a newer patch and applies that to make a host compliant.

At regular intervals, Update Manager contacts VMware to download information (notifications)
about patch recalls, new fixes, and alerts. You can change the schedule by modifying the
Notification Check Schedule setting in the Update Manager Configuration tab.
When patches with problems or potential problems are released, these patches are recalled in the
metadata, and Update Manager marks them as recalled. If you try to install a recalled patch, Update
Manager notifies you that the patch is recalled and does not install it on the host. If you have already
installed such a patch, Update Manager notifies you that the recalled patch is installed on certain
hosts. Update Manager also deletes all the recalled patches from the Update Manager patch
repository.
When a new patch is released, Update Manager downloads it and prompts you to install it to fix the
problems that the recalled patch might cause. If you try to install the recalled patch, Update Manager
alerts you that the patch is recalled and that you must install a fix.



Remediation Enabled for DRS
Slide 15-19

Eliminate downtime for virtual machines when patching ESXi hosts:
1. Update Manager puts host in maintenance mode.
2. VMware vSphere® Distributed Resource Scheduler™ moves virtual machines to an available host.
3. Update Manager patches host and then exits maintenance mode.
4. DRS moves virtual machines back per rule.

Typically, hosts are put into maintenance mode before remediation if the update requires it. Virtual
machines cannot run when a host is in maintenance mode. vCenter Server migrates the virtual
machines to other hosts in a cluster before the noncompliant host is put in maintenance mode.
vCenter Server can migrate the virtual machines if the cluster is configured for vMotion and if DRS
and Enhanced vMotion Compatibility (EVC) are enabled. EVC is not a prerequisite for VMware
vSphere® Storage vMotion® migration. EVC guarantees that the CPUs of the hosts are compatible.
For other containers or individual hosts that are not in a cluster, migration with vMotion cannot be
performed.
Update Manager 5.1 can patch and upgrade your ESXi hosts based on available cluster capacity and
can remediate an optimal number of ESXi hosts simultaneously without virtual machine downtime.
Additionally, for scenarios where turnaround time is more important than virtual machine uptime,
you have the choice to remediate all ESXi hosts in a cluster simultaneously.



Lab 27
Slide 15-20

In this lab, you will install, configure, and use Update Manager.
1. Install Update Manager.
2. Install the Update Manager plug-in into the vSphere Client.
3. Modify cluster settings.
4. Configure Update Manager.
5. Create a patch baseline.
6. Attach a baseline and scan for updates.
7. Stage the patches onto the ESXi hosts.
8. Remediate the ESXi hosts.



Review of Learner Objectives
Slide 15-21

You should be able to do the following:
• Describe Update Manager.
• List the steps to install Update Manager.
• Use Update Manager:
  • Create and attach a baseline.
  • Scan an inventory object.
  • Remediate an inventory object.



Key Points
Slide 15-22

• Update Manager patches and updates ESXi 5.1 hosts as well as earlier versions of hosts, virtual machines, templates, and virtual appliances.
• Update Manager reduces security vulnerabilities by keeping systems up to date and by reducing the diversity of systems in an environment.
• Update Manager no longer patches guest operating systems or the applications running within guest operating systems.
Questions?

MODULE 16

VMware Management Assistant
Slide 16-1


You Are Here
Slide 16-2

Course Introduction
Introduction to Virtualization
Creating Virtual Machines
VMware vCenter Server
Configuring and Managing Virtual Networks
Configuring and Managing Virtual Storage
Virtual Machine Management
Access and Authentication Control
Resource Management and Monitoring
High Availability and Fault Tolerance
Network Scalability
Host and Management Scalability
Storage Scalability
Data Protection
Patch Management
VMware Management Assistant
Installing VMware vSphere Components



Importance
Slide 16-3

Performing configuration and troubleshooting tasks from the command line is a very useful skill that a VMware vSphere® administrator should have.
VMware vSphere® Command-Line Interface (vCLI), which is available with VMware vSphere® Management Assistant (vMA), provides the administrator with these command-line capabilities.



Learner Objectives
Slide 16-4

After this module, you should be able to do the following:
• Understand the purpose of the vCLI commands.
• Discuss the options for running commands.
• Deploy and configure vMA.



Methods to Run Commands
Slide 16-5

Ways to get command-line access on a VMware ESXi™ host:
• VMware vSphere® ESXi™ Shell
• vMA, which includes the vCLI package
A few methods exist for accessing the command prompt on a VMware ESXi™ host.
VMware® recommends that you use VMware vSphere® Command-Line Interface (vCLI) or
VMware vSphere® Management Assistant (vMA) to run commands against your ESXi hosts. Run
commands directly in VMware vSphere® ESXi™ Shell only in troubleshooting situations.



ESXi Shell
Slide 16-6

ESXi Shell includes a set of fully supported ESXCLI commands and a set of commands for diagnosing and repairing ESXi hosts.
Use ESXi Shell only at the request of VMware® technical support.
• You should be familiar with how ESXi Shell works in case VMware technical support directs you to use it.
ESXi Shell can be accessed:
• Locally, from the direct console user interface (DCUI)
• Remotely, from a Secure Shell (SSH) session

An ESXi system includes a direct console user interface (DCUI) that enables you to start and stop
the system and perform a limited set of maintenance and troubleshooting tasks. The DCUI includes
ESXi Shell, which is disabled by default. You can enable ESXi Shell in the DCUI or by using
VMware vSphere® Client™.
You can enable local shell access or remote shell access:
• Local shell access enables you to log in to the shell directly from the DCUI.
• Secure Shell (SSH) is a remote shell that enables you to connect to the host with a shell, such as
PuTTY.
ESXi Shell includes all ESXCLI commands, a set of deprecated esxcfg- commands, and a set of
commands for troubleshooting and remediation.
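As a quick illustration of the ESXCLI command set, the following command, run locally in ESXi Shell, reports the host's product version and build number:
esxcli system version get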



Accessing ESXi Shell Locally
Slide 16-7

To access ESXi Shell locally, you must have physical access to the DCUI and administrator privileges.
By default, the local ESXi Shell is disabled:
• Enable the local ESXi Shell from the DCUI or from the VMware vSphere® Client™.
After you enable ESXi Shell access, you can access the local shell:
• In the main DCUI screen, press Alt+F1 to open a virtual console window to the host.
Local users with administrator privileges automatically have local shell access:
• Shared root access is no longer required.

If you have access to the DCUI, you can enable the ESXi Shell from there.
To enable the ESXi Shell in the DCUI:

1. In the DCUI of the ESXi host, press F2 and provide credentials when prompted.

2. Scroll to Troubleshooting Options and press Enter.

3. Select Enable ESXi Shell and press Enter.


On the left, Enable ESXi Shell changes to Disable ESXi Shell. On the right, ESXi Shell is
Disabled changes to ESXi Shell is Enabled.
4. Press Esc until you return to the main DCUI screen.
Local users that are assigned to the administrator role automatically have local shell access.
Assigning local shell access to the administrator role prevents the root account from being shared by
multiple users. Sharing the root account presents security issues and makes auditing the host
difficult.



Accessing ESXi Shell Remotely
Slide 16-8

You can access ESXi Shell remotely with a secure shell client like SSH or PuTTY.
• The SSH service must be enabled first.
  • This service is disabled by default.
• Disable SSH access when you are done using it.
Enable SSH on an ESXi host only as a last resort for troubleshooting. Enabling SSH creates a major security vulnerability and reduces ESXi resources.

If you enable SSH access, do so only for a limited time. SSH should never be left open on an ESXi
host in a production environment.
If SSH is enabled for the ESXi Shell, you can run shell commands by using an SSH client, such as
SSH or PuTTY.
To enable SSH from the vSphere Client:

1. Select the host and click the Configuration tab.

2. Click Security Profile in the Software panel.

3. In Services, click Properties.

4. Select SSH and click Options.

5. Change the SSH options. To change the Startup policy across reboots, click Start and stop
with host and reboot the host.
6. Click OK.



To enable the local or remote ESXi Shell from the vSphere Client:

1. Select the host and click the Configuration tab.


2. Click Security Profile in the Software panel.
3. In Services, click Properties.

4. Select ESXi Shell and click Options.

5. Change the ESXi Shell options. To change the Startup policy across reboots, click Start and
stop with host and reboot the host.
6. Click OK.
The ESXi Shell timeout setting specifies how long, in minutes, you can leave an unused session
open. By default, the timeout for the ESXi Shell is 0, which means the session remains open even if
it is unused. If you change the timeout, for example, to 30 minutes, you have to log in again after the
timeout period has elapsed.

To modify the ESXi Shell timeout in the Direct Console:

1. Select Modify ESXi Shell timeout and press Enter.

2. Enter the timeout value in minutes and press Enter.


In the vSphere Client, follow these steps:
1. In the Configuration tab’s Software panel, click Advanced Settings.

2. In the left panel, click UserVars.

3. Find UserVars.ESXiShellTimeOut and enter the timeout value in minutes.

4. Click OK.
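Where remote ESXCLI access is available, the same advanced setting can also be changed from the command line. A minimal sketch, assuming a placeholder host name and a 30-minute timeout (the option path follows the UserVars name shown above):
esxcli --server esxi01.vclass.local --username root system settings advanced set --option=/UserVars/ESXiShellTimeOut --int-value=30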



vCLI
Slide 16-9

• The vCLI command set enables you to run common system administration commands against ESXi hosts.
• You can run most vCLI commands against a VMware® vCenter Server™ system and target the ESXi hosts that it manages.
• vCLI commands normally require the following options to connect and log in to a server:
  • --server <name>
  • --username <user>
  • --password <string>
• vCLI commands run on top of the VMware vSphere® SDK for Perl.
• vCLI commands are available as a standalone installation package for Linux or Windows systems and are also packaged with vMA.

vCLI provides a command-line interface for ESXi hosts. Multiple ESXi hosts can be managed from
a central system on which vCLI is installed.
Normally, vCLI commands require you to enter options that specify the server name, the user name,
and the password for the server that you want to run the command against. Methods exist that enable
you to bypass entering the user name and password options, and, sometimes, the server name
option. Two of these methods are described later in this module.
For details about vCLI, see Getting Started with vSphere Command-Line Interfaces and vSphere Command-Line Interface Concepts and Examples at http://www.vmware.com/support/pubs/vsphere-esxi-vcenter-server-pubs.html.
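In addition to the two methods described later in this module, vCLI also honors connection environment variables, so individual commands can omit the connection options. A minimal sketch with placeholder values:
export VI_SERVER=vc01.vclass.local
export VI_USERNAME=administrator
export VI_PASSWORD=vmware
vicfg-nics -l --vihost esxi01.vclass.local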



vMA
Slide 16-10

vMA is a virtual appliance that includes the following:
• SUSE Linux Enterprise Server 11 SP1
• VMware® Tools™
• vCLI
• vSphere SDK for Perl
• Java JRE version 1.6
• vi-fastpass, an authentication component for the appliance
vMA is a downloadable appliance that includes several components, including vCLI. vMA enables
administrators to run scripts or agents that interact with ESXi hosts and VMware® vCenter
Server™ systems without having to authenticate each time. vMA is easy to download, install, and
configure through the vSphere Client.



vMA Hardware and Software Requirements
Slide 16-11

Hardware requirements:
• AMD Opteron, rev E or later
• Intel processors with EM64T and VT enabled
Software requirements:
• vMA can be deployed on the following:
  • vSphere ESX 4.0 Update 2 or later
  • vSphere ESXi 4.1, 5.0, and 5.1
  • vCenter Server 4.0 Update 2 or later
  • vCenter Server 4.1, 5.0, and 5.1
By default, vMA uses the following:
• One virtual processor
• 600MB of RAM
• 3GB virtual disk

To set up vMA, you must have an ESXi host. Because vMA runs a 64-bit Linux guest operating
system, the ESXi host on which it runs must support 64-bit virtual machines.
The 3GB virtual disk size requirement might increase, depending on the extent of centralized
logging enabled on the vMA appliance.
The recommended memory for vMA is 600MB.



Configuring vMA
Slide 16-12

Deploy → Configure → Add Targets → Authenticate

To set up a vMA appliance:
1. Deploy vMA from a URL or a downloaded file.
2. Configure vMA virtual machine network and time-zone settings.
3. Add target servers to vMA. Target servers include the vCenter Server system or ESXi hosts or both.
4. Initialize vi-fastpass authentication.


You have to initialize vi-fastpass only if you want to enter vCLI commands without specifying a
user name and password for the vCenter Server system or an ESXi host.



Connecting to the Infrastructure
Slide 16-13

vMA command paths: vMA connects directly to an ESXi host through the vSphere SDK for Perl API, or to a vCenter Server system through the vSphere SDK for Perl API, which then reaches the ESXi host through a private vCenter protocol.

vMA commands directly targeted at the hosts are sent using the VMware vSphere® SDK for Perl
API. Commands sent to the host through the vCenter Server system are first sent to the vCenter
Server system, using the vSphere SDK for Perl API. Using a private protocol that is internal to
vCenter Server, commands are sent from the vCenter Server system to the host.



Deploying vMA
Slide 16-14

Deploy vMA like any other virtual appliance.
vMA is deployed like any other virtual appliance. After the appliance is deployed to the
infrastructure, the user can power it on and start configuring vMA.
The vMA appliance is available from the download page on the VMware Web site.



Configuring vMA
Slide 16-15

Configure vMA at the command prompt or through the Web interface:
• https://<appliance_name_or_IP_address>:5480 (for example, https://vma.vclass.local:5480)
• Log in as vi-admin.
From the Web interface, you can do the following:
• Configure time-zone settings
• Configure network and proxy server settings
• Update vMA to the latest version

After vMA is deployed, your next step is to configure the appliance. When you start the vMA
virtual machine the first time, you can configure it. The appliance can be configured either by
opening a console to the appliance or by pointing a Web browser to the appliance.
The vi-admin account is the administrative account on the vMA appliance and exists by default.
During the initial power-on, you are prompted to choose a password for this user account.
Although the vMA appliance is Linux-based, logging in as root has been disabled.



Adding a Target Server
Slide 16-16

A target server is a server that you access from vMA:
• Either a vCenter Server system or ESXi host
To add a vCenter Server system as a target server:
1. Log in as vi-admin.
2. Run vifp addserver <vCenter_Server_system>.
   a. Enter a vCenter Server user name with administrator privilege.
   b. Enter the user’s password.
   c. Agree to store this information in the credential store.
3. Run vifp listservers to verify that the vCenter Server system has been added as a target.
4. Run vifptarget -s <vCenter_Server_system> to set the target as the default for the current vMA session.
5. Test operation by running vicfg-nics -l --vihost <ESXi_host>.

After you configure vMA, you can add target servers that run the supported vCenter Server or ESXi
versions. The vifp interface enables administrators to add, list, and remove target servers and to
manage the vi-admin user’s password.
After a server is added as a vMA target, you must run the vifptarget command. This command
enables seamless authentication for remote vCLI and vSphere SDK for Perl API commands. Run
vifptarget <server> before you run vCLI commands or vSphere SDK for Perl scripts against
that system. The system remains a vMA target across vMA reboots, but running vifptarget again
is required after each logout.
You can establish multiple servers as target servers and then call vifptarget once to initialize all
servers for vi-fastpass authentication. You can then run commands against any target server without
additional authentication. You can use the --server option to specify the server on which to run
commands.
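
For example, the following sequence is a minimal sketch of this workflow. The host names vc01.vmeduc.com and esxi01.vmeduc.com are placeholders for your own systems:

vifp addserver vc01.vmeduc.com
vifp addserver esxi01.vmeduc.com
vifp listservers
vifptarget -s vc01.vmeduc.com
vicfg-nics -l --vihost esxi01.vmeduc.com
vicfg-nics -l --server esxi01.vmeduc.com

After vifptarget sets vc01.vmeduc.com as the default target, the first vicfg-nics command is forwarded through the vCenter Server system, and the second uses the --server option to run directly against the ESXi host target. Neither command prompts for credentials.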



vMA Authentication
Slide 16-17

The vi-fastpass authentication component supports unattended authentication to vCenter Server system or ESXi host targets:
ƒ Prevents the user from having to continually add login credentials to every command being executed
ƒ Facilitates unattended scripted operations
Slide diagram: vMA sends authenticated commands, with logging, to ESXi hosts and vCenter Server systems.

The vMA authentication interface enables users and applications to authenticate with the target
servers by using vi-fastpass or Active Directory (AD). While adding a server as a target, the
administrator can determine whether the target must use vi-fastpass or AD authentication. For vi-
fastpass authentication, the credentials that a user has on the vCenter Server system or ESXi host are
stored in a local credential store. For AD authentication, the user is authenticated with an AD server.
When you add an ESXi host as a fastpass target server, vi-fastpass creates two users with obfuscated
passwords on the target server and stores the password information on vMA:
• vi-admin with administrator privileges
• vi-user with read-only privileges
The creation of vi-admin and vi-user does not apply for AD authentication targets. When you add a
system as an AD target, vMA does not store information about the credentials. To use the AD
authentication, the administrator must configure vMA for AD.



Joining vMA to Active Directory
Slide 16-18

vMA can be configured for Active Directory (AD), so the ESXi hosts
and vCenter Server systems can be added to vMA without having to
store passwords in the vMA credential store.

Slide diagram: vMA, an ESXi host, and a vCenter Server system joined to Active Directory.

Configure vMA for Active Directory authentication so that ESXi hosts and vCenter Server systems
added to Active Directory can be added to vMA. Joining the vMA to Active Directory prevents you
from having to store the passwords in the vMA credential store. This approach is a more secure way
of adding targets to vMA.
Ensure that the DNS server configured for vMA is the same as the DNS server of the domain. You can change the DNS server by using the vMA console or the Web UI.
Ensure that the domain is accessible from vMA. Ensure that you can ping the ESXi and vCenter Server systems that you want to add to vMA, and that the target servers' domain names resolve to the correct IP addresses.
To add vMA to a domain:

1. From the vMA console, run the following command:


sudo domainjoin-cli join <domain_name> <domain_admin_user>

2. When prompted, provide the Active Directory administrator’s password.

3. Restart vMA.



Command Structure
Slide 16-19

vCLI syntax on a vMA appliance:


<command> <conn_options> <target_option> <command_options>

A vCLI command targeted directly at an ESXi host:


vicfg-nics --server ESXa --username mike --password vmware -l

A vCLI command targeted at an ESXi host through a vCenter Server


instance:
vicfg-nics --server vC1 --username vcadmin --password vmware --vihost ESXa -l

The slide shows syntax and vCLI command examples.


The <target_option> is necessary only if you are sending the command to an ESXi host through
the vCenter Server system. In this case, the <target_option> specifies to which host the vCenter
Server system should forward the command.
The example command (vicfg-nics -l) displays information about the physical network
interface cards on an ESXi host.



vMA Commands
Slide 16-20

vMA includes the following commands:


ƒ esxcli
ƒ resxtop
ƒ svmotion
ƒ vicfg-* commands
ƒ esxcfg-* commands (deprecated)
ƒ vifs
ƒ vihostupdate

ƒ vmkfstools



The vCLI command set is part of vMA. For more information about the commands included in
vMA, see Getting Started with vSphere Command-Line Interfaces and vSphere Command-Line
Interface Concepts and Examples at http://www.vmware.com/support/pubs/vsphere-esxi-vcenter-
server-pubs.html.



esxcfg Commands
Slide 16-21

Many scripts used esxcfg commands to manage ESX or ESXi 3.x and 4.x hosts:
ƒ In vCLI, many vicfg commands are equivalent to esxcfg commands.
ƒ Commands that use esxcfg are still available for compatibility reasons and might become obsolete.
ƒ Use vicfg commands when developing new scripts.

For many of the vCLI commands, you might have used scripts with corresponding service console commands with an esxcfg prefix to manage ESX 3.x hosts. To facilitate easy migration from ESX/ESXi 3.x to later versions of ESXi, a copy of each vicfg- command that uses an esxcfg- prefix is included in the vCLI package.
Commands with the esxcfg prefix are available mainly for compatibility reasons and might become obsolete.
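
For example, assuming an ESXi host named esxi01 (a placeholder), the following two commands are equivalent; both list the host's physical network adapters:

vicfg-nics --server esxi01 --username root -l
esxcfg-nics --server esxi01 --username root -l

Use the vicfg- form in new scripts, because the esxcfg- copies might become obsolete.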



esxcfg Equivalent vicfg Commands Examples
Slide 16-22

esxcfg Command Equivalent vicfg Command

esxcfg-advcfg vicfg-advcfg

esxcfg-cfgbackup vicfg-cfgbackup

esxcfg-nics vicfg-nics

esxcfg-vswitch vicfg-vswitch



The slide lists some examples of vicfg commands for which an esxcfg prefix is available.



Managing Hosts with vMA
Slide 16-23

Host management task vMA command
Reboot and shut down hosts. vicfg-hostops
Enter and exit maintenance mode. vicfg-hostops
Back up and restore host configuration settings. vicfg-cfgbackup
Add ESXi hosts to an Active Directory domain. vicfg-authconfig

Host management commands can stop and reboot ESXi hosts, back up configuration information,
and manage host updates. You can also use a host management command to make your host join an
AD domain or exit from a domain.
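For example, the following commands sketch one way to save and restore a host configuration with vicfg-cfgbackup; the host name and backup file path are placeholders:

vicfg-cfgbackup --server esxi01 --username root -s /tmp/esxi01_config.bak
vicfg-cfgbackup --server esxi01 --username root -l /tmp/esxi01_config.bak

The -s option saves the host configuration to the specified file, and -l loads (restores) it.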



Common Connection Options for vCLI Execution (1)
Slide 16-24

Connection Option Description

--cacertsfile Specifies the CA certificate file

--config Path to a configuration file

--credstore Name of credential store file

--encoding Specifies the encoding to use

--passthroughauth Use Microsoft Windows Security SSPI

--passthroughauthpackage Specifies domain-level authentication protocol to be used

--password Login password

--portnumber Uses specified port to connect

The slide lists options that are available for all vCLI commands.

--cacertsfile Used to specify the Certificate Authority file in PEM format,


to verify the identity of the vCenter Server system or the
ESXi host to run the command on. Can be used, for example,
to prevent man in the middle attacks.

--config Uses the configuration file at the specified location. Specify a path that is readable from the current directory.

--credstore Name of credential store file. Defaults to <HOME>/


.vmware/credstore/vicredentials.xml on Linux and
<APPDATA>/VMware/credstore/vicredentials.xml
on Windows. Commands for setting up the credential store
are included in the vSphere SDK for Perl.

--encoding Specifies the encoding to be used. Use --encoding to specify


the encoding vCLI should map to when it is running on a
foreign language system.



--passthroughauth This option specifies that the system should use the Microsoft Windows Security Support Provider Interface (SSPI) for authentication. Trusted users are not prompted for a user name and password.

--passthroughauthpackage Use this option with --passthroughauth to specify a domain-level authentication protocol to be used by Windows. By default, SSPI uses the Negotiate protocol, which means that the client and server negotiate a protocol that both support.

--password Uses the specified password when used with --username to


log in to the server.

--portnumber Uses the specified port to connect to the system specified by


--server. Default port is 443.



Common Connection Options for vCLI Execution (2)
Slide 16-25

Connection Option Description

--protocol Uses the specified protocol to connect

--savesessionfile Saves the session to the specified file

--server The ESXi or vCenter Server host

--sessionfile Uses the specified file to load a saved session

--url Connects to URL for VMware vSphere® Web Services SDK


--username User name to log in to system.

--vihost Name of ESXi host to run the command against

--protocol Uses the specified protocol to connect to the system specified by


--server. Default is HTTPS.

--savesessionfile Saves a session to the specified file. The session expires if it has been
unused for 30 minutes.
--server Uses the specified ESXi or vCenter Server system. Default is localhost.
--sessionfile Uses the specified session file to load a previously saved session. The
session must be unexpired.
--url Connects to the specified vSphere Web Services SDK URL.
--username Uses the specified user name. If you do not specify a user name and
password on the command line, the system prompts you and does not
echo your input to the screen.
--vihost When you run a vCLI command with the --server option pointing to a vCenter Server system, use --vihost to specify the ESXi host to run the command against.
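
For example, the session file options can be combined to authenticate once and then run several commands without re-entering credentials. This is a sketch; the host names, credentials, and session file path are placeholders:

vicfg-nics --server vc01 --username administrator --password vmware1! --savesessionfile /tmp/vc01_session --vihost esxi01 -l
vicfg-nics --sessionfile /tmp/vc01_session --vihost esxi01 -l

The first command logs in, saves the session to /tmp/vc01_session, and lists the NICs of esxi01. The second command reuses the saved session, so no user name or password is needed until the session expires.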



vicfg Command Example
Slide 16-26

Use the vicfg-hostops command with the shutdown or reboot


operation:
ƒ vicfg-hostops <conn_options> --operation shutdown <cmd_options>
ƒ vicfg-hostops <conn_options> --operation reboot <cmd_options>

Examples:
ƒ vicfg-hostops --server esxi01 --username mike --password vmware1! --operation shutdown
ƒ vicfg-hostops --server esxi01 --username mike --password vmware1! --operation reboot --force
ƒ vicfg-hostops --server esxi01 --username mike --operation shutdown --cluster "LabCluster"
The command prompts for user names and passwords if you do not specify them.

An ESXi host can be shut down and restarted by using the vicfg-hostops command options. If a host managed by vCenter Server is shut down by this command, the host is disconnected from vCenter Server but not removed from the inventory.
No equivalent ESXCLI command is available.
You can shut down or reboot all hosts in a cluster or datacenter by using the --cluster or
--datacenter option.
In the first and second examples, the connection options (<conn_options>) used are --server,
--username, and --password. But in the third example, the --password option is omitted. In
this case, you are prompted to enter the password when you run this command.

NOTE
vicfg- commands will be deprecated in future releases. Use esxcli commands instead where
possible.



Entering and Exiting Host Maintenance Mode
Slide 16-27

Use the vicfg-hostops command with the enter, exit, or info


operations:
ƒ vicfg-hostops <conn_options> --operation enter <cmd_options>
ƒ vicfg-hostops <conn_options> --operation exit <cmd_options>
ƒ vicfg-hostops <conn_options> --operation info <cmd_options>

Examples:
ƒ vicfg-hostops --server vc01 --username administrator --operation info --cluster "LabCluster"
ƒ vicfg-hostops --server vc01 --username administrator --operation enter --action poweroff

vicfg-hostops:
ƒ Does not work with VMware vSphere® Distributed Resource Scheduler™ (DRS)
ƒ Suspends the virtual machines by default:
• Use the --action poweroff option to power off virtual machines.

A host can be placed in maintenance mode by using the vicfg-hostops command. When the
command is run, the host does not enter maintenance mode until all of the virtual machines running
on the host are either shut down, migrated, or suspended.
vicfg-hostops does not work with VMware vSphere® Distributed Resource Scheduler™ (DRS).
You can put all hosts in a cluster or datacenter in maintenance mode by using the --cluster or
--datacenter option.
The --operation info option can be used to check whether the host is in maintenance mode or
in the Entering Maintenance Mode state.
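For example, the following sequence (with a placeholder host name) puts a host in maintenance mode while powering off its virtual machines, checks the state, and then exits maintenance mode:

vicfg-hostops --server esxi01 --username root --operation enter --action poweroff
vicfg-hostops --server esxi01 --username root --operation info
vicfg-hostops --server esxi01 --username root --operation exit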



esxcli Command Hierarchies
Slide 16-28

ƒ esxcli namespace
ƒ esxcli fcoe namespace
ƒ esxcli hardware namespace
ƒ esxcli iscsi namespace
ƒ esxcli license namespace
ƒ esxcli network namespace
ƒ esxcli software namespace
ƒ esxcli storage namespace
ƒ esxcli system namespace
ƒ esxcli vm namespace

You can manage many aspects of an ESXi host with the ESXCLI command set. You can run
ESXCLI commands as vCLI commands or run them in the ESXi Shell in troubleshooting situations.
The slide lists the hierarchy of name spaces and commands for each ESXCLI name space.

NOTE
vicfg- commands will be deprecated in future releases. Use esxcli commands instead where
possible.



Example esxcli Command
Slide 16-29

Use the esxcli command with the vm namespace to list all the
virtual machine processes.
ƒ esxcli <conn_options> vm process list

With the esxcli vm command you can display all the virtual machine processes on the ESXi system.
This command lists only running virtual machines on the system.
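
For example, the following commands sketch typical use of the vm namespace; the host names and the world ID are placeholders:

esxcli --server vc01 --username administrator --vihost esxi01 vm process list
esxcli --server esxi01 --username root vm process kill --type soft --world-id 12345

The list operation reports each running virtual machine with its world ID, which you can pass to the kill operation to stop an unresponsive virtual machine. The soft kill type is the most graceful; hard and force types are available for virtual machines that do not respond.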



resxtop Utility
Slide 16-30

Use the resxtop utility to examine real-time resource usage for


ESXi hosts.
resxtop can be run in these modes:
ƒ Interactive mode
ƒ Batch mode
ƒ Replay mode

The resxtop commands enable command-line monitoring and collection of data for all system
resources: CPU, memory, disk, and network. When used interactively, this data can be viewed on
different types of screens, one each for CPU statistics, memory statistics, network statistics, and disk
adapter statistics. This data includes some metrics and views that cannot be accessed using the
overview or advanced performance charts. The three modes of execution for resxtop are the
following:
• Interactive mode (the default mode) – All statistics are displayed as they are collected, showing
how the ESXi host is running in real time.
• Batch mode – Statistics are collected so that the output can be saved in a file and processed later.
• Replay mode – Data that was collected by the vm-support command is interpreted and played
back as resxtop statistics. This mode does not process the output of batch mode.
For more details on resxtop, see vSphere Resource Management Guide at
http://www.vmware.com/support/pubs.



Using resxtop Interactively
Slide 16-31

To start resxtop interactively:


ƒ Log in to a system installed with vCLI.
ƒ Run resxtop with one or more connection parameters. Example:
• # resxtop --server vc01.vmeduc.com
--username administrator --vihost esxi01.vmeduc.com
• # resxtop --server esxi01.vmeduc.com --username mike

To run resxtop interactively, you must first log in to a system with VMware vSphere® Command-
Line Interface (vCLI) installed. Download and install a vCLI package on a Linux host or deploy
VMware vSphere® Management Assistant (vMA) to your ESXi host. vMA is a preconfigured
Linux appliance. Versions of the vCLI package are available for Linux and Windows systems.
However, because resxtop is based on a Linux tool, it is only available in the Linux version of
vCLI.
After vCLI is set up and you have logged in to the vCLI system, start resxtop from the command
prompt. For remote connections, you can connect to an ESXi host either directly or through vCenter
Server.
resxtop has the following connection parameters:
--server [server] --username [username] --password [password] --vihost
[vihost]
• [server] – A required field that refers to the name of the remote host to connect to. If connecting directly to the ESXi host, use the name of that host. If your connection to the ESXi host is indirect (that is, through vCenter Server), use the name of the vCenter Server system for this option.



• [vihost] – If connecting indirectly (through vCenter Server), this option refers to the name of
the ESXi host that you want to monitor. You must use the name of the ESXi host as shown in
the vCenter Server inventory.
If connecting directly to the ESXi host, this option is not used.
• [portnumber] – Port number to connect to on the remote server. The default port is 443, and
unless this is changed on the server, this option is not needed.
• [username] – User name to be authenticated when connecting to the remote host. The remote
server prompts you for a password.
The following command line is an example of running resxtop to monitor the ESXi host named
esxi01.vmeduc.com. Instead of logging in to the ESXi host, the user logs in to the vCenter Server
system named vc01.vmeduc.com as user administrator to access the ESXi host:
# resxtop --server vc01.vmeduc.com
--username administrator --vihost esxi01.vmeduc.com

The following command line is another example of running resxtop to monitor the ESXi
host named esxi01.vmeduc.com. However, this time the user logs directly in to the ESXi host as
user root:
# resxtop --server esxi01.vmeduc.com --username root

In both examples, you are prompted to enter the password of the user that you are logging in as, for
example, administrator or root.



Navigating resxtop
Slide 16-32

When using resxtop in interactive mode, type a character to change


the screen or behavior. Commands are case-sensitive.

c CPU view (default)
m Memory view
d Disk (adapter) view
u Disk (device) view
v Virtual disk view
n Network view
f/F Add or remove statistic columns
V Virtual machine view
h Help
q Quit


resxtop supports several single-key commands when run in interactive mode. Type these
characters to change the screen or behavior:
• c – Switch to the CPU resource utilization screen (this is the default screen).
• m – Switch to the memory resource utilization screen.
• d – Switch to the storage (disk) adapter resource utilization screen.
• u – Switch to the storage (disk) device resource utilization screen.
• v – Switch to the virtual disk resource utilization screen.
• n – Switch to the network resource utilization screen.
• f/F – Display a panel for adding or removing statistics columns on the current panel.
• V – Display only virtual machines in the screen.
• h – Display the help screen.
• q – Quit interactive mode.
The single-key commands are case-sensitive. Using the wrong case can produce unexpected results.



Sample Output from resxtop
Slide 16-33

Slide screenshot: the resxtop CPU screen (the default), showing host statistics and per-world statistics. Typing V (uppercase V) switches to per-virtual-machine statistics.

Here is an example of the output generated from resxtop. You can view several screens. The CPU
screen is the default. resxtop refreshes the screen every 5 seconds by default.
resxtop displays statistics based on worlds. A world is equivalent to a process in other operating systems. A world can represent a virtual machine or a VMkernel component. The following column headings help you understand worlds:
• ID – World ID. In some contexts, resource pool ID or virtual machine ID.
• GID – Resource pool ID of the running world’s resource pool or virtual machine.
• NAME – Name of running world. In some contexts, resource pool name or virtual machine
name.
To filter the output so that only virtual machines are shown, type V (uppercase V) in the resxtop
window. This command hides the system worlds so that you can concentrate on the virtual machine
worlds.



Using resxtop in Batch and Replay Modes
Slide 16-34

To run resxtop in batch mode and print all performance counters:
ƒ resxtop -a -b > analysis.csv
ƒ The -a option shows all statistics.
Always start your virtual machines before running resxtop in batch mode.
ƒ resxtop will produce virtual machine data based only on the virtual machines that were running at the time the command was launched.
To run resxtop in replay mode:
ƒ Use vm-support and resxtop to create a file with sampled performance data and replay the file. For example:
• vm-support -S -d 300 -l 30
• resxtop -r <filename>

resxtop can also be run in batch mode. In batch mode, the output is stored in a file, and the data
can be read by using the Windows Perfmon utility. You must prepare for running resxtop in batch
mode.
To prepare to run resxtop in batch mode

1. Run resxtop in interactive mode.


2. In each screen, select the columns you want.

3. Type W (uppercase) to save this configuration to a file (by default ~/.esxtop4rc).


To run resxtop in batch mode

1. Start resxtop to redirect the output to a file, as shown in the slide. The filename must have a
.csv extension. The utility does not enforce this extension, but the postprocessing tools require
it.
2. Use tools like Microsoft Excel and Perfmon to process the statistics collected.
In batch mode, resxtop rejects interactive commands. In batch mode, the utility runs until it
produces the number of iterations requested (by using the -n option) or until you end the process by
pressing Ctrl+C.
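For example, the following command is a minimal batch-mode sketch; the host name is a placeholder:

resxtop --server esxi01.vmeduc.com --username root -b -a -n 12 -d 10 > analysis.csv

The -b option selects batch mode, -a includes all statistics, -n 12 collects 12 iterations, and -d 10 sets a 10-second delay between samples, so the command runs for about two minutes and then exits.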



To run resxtop in replay mode:

1. Use the vm-support command to capture sampled performance data in a file. For example, the
vm-support -S -d 300 -l 30 command runs vm-support in snapshot mode. The -S
restricts the collection of diagnostic data, the -d 300 collects data for 300 seconds (five
minutes) and the -l 30 sets up a 30-second sampling interval.
2. Replay the file with the resxtop command. For example, resxtop -r <filename> replays
the captured performance data in an resxtop window.
vm-support can be run from a remote command line. For details, see:
• http://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&docType=kc&docTypeID=DT_KB_1_1&externalId=1010705
• http://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&docType=kc&externalId=1967&sliceId=1&docTypeID=DT_KB_1_1&dialogID=343976180&stateId=1%200%20343980403



Lab 28
Slide 16-35

In this lab, you will use vMA to manage networking, manage storage,
and monitor hosts.
1. Log in to vMA and connect to your vCenter Server and ESXi host.
2. Create a standard virtual switch.
3. Configure storage.



Review of Learner Objectives
Slide 16-36

You should be able to do the following:


ƒ Understand the purpose of the vCLI commands.
ƒ Discuss the options for running commands.
ƒ Deploy and configure vMA.



Key Points
Slide 16-37

ƒ vCLI is a command-line interface to manage the infrastructure, either by


running commands or by executing scripts.
ƒ vMA is a virtual appliance that is used to manage the infrastructure at
the command prompt. vMA includes vCLI.
ƒ vMA can be used to monitor, configure, and manage hosts, storage,
and virtual networking.

Questions?



MODULE 17

Installing VMware vSphere 5.1 Components
Slide 17-1

Module 17
Installing VMware vSphere 5.1 Components


You Are Here
Slide 17-2

Course Introduction
Introduction to Virtualization
Creating Virtual Machines
VMware vCenter Server
Configuring and Managing Virtual Networks
Configuring and Managing Virtual Storage
Virtual Machine Management
Access and Authentication Control
Resource Management and Monitoring
High Availability and Fault Tolerance
Network Scalability
Host and Management Scalability
Storage Scalability
Data Protection
Patch Management
VMware Management Assistant
Installing VMware vSphere Components (this module)



Importance
Slide 17-3

Understanding the options in deploying VMware ESXi™ hosts gives the user the ability to select deployment options that best fit the enterprise.


Module Lessons
Slide 17-4

Lesson 1: Installing ESXi


Lesson 2: Installing vCenter Server
Lesson 3: vCenter Server Linked Mode
Lesson 4: Image Builder
Lesson 5: Auto Deploy



Lesson 1: Installing ESXi
Slide 17-5

Lesson 1:
Installing ESXi



Learner Objectives
Slide 17-6

After this lesson, you should be able to do the following:


ƒ Describe how to install VMware ESXi™ interactively.
ƒ Identify the basic requirements for a boot-from-SAN configuration.



ESXi Hardware Prerequisites
Slide 17-7

Processor – 64-bit x86 CPU:


ƒ Requires at least two cores
ƒ ESXi supports a broad range of x64 multicore processors
Memory – 2GB RAM minimum
One or more Ethernet controllers:
ƒ Gigabit and 10 Gigabit Ethernet controllers are supported.
ƒ For best performance and security, use separate Ethernet controllers for the
management network and the virtual machine networks.
Disk storage:
ƒ A SCSI adapter, Fibre Channel adapter, converged network adapter, iSCSI
adapter, or internal RAID controller
ƒ A SCSI disk, Fibre Channel logical unit number (LUN), iSCSI disk, or RAID LUN with unpartitioned space: SATA, SCSI, or Serial Attached SCSI


VMware ESXi™ requires a 64-bit server (AMD Opteron, Intel Xeon, or Intel Nehalem). The server
can have up to 160 logical CPUs (cores or hyperthreads) and can support up to 512 virtual CPUs per
host. A minimum of 2GB of memory is required. An ESXi host can have up to 2TB of memory.
The ESXi host must have:
• One or more Ethernet controllers
• A basic SCSI controller
• An internal RAID controller
• A SCSI disk or a local RAID logical unit number (LUN)
ESXi supports installing on and booting from SATA disk drives, SCSI disk drives, or Serial
Attached SCSI disk drives.
For more about the installation and setup of ESXi, see vSphere Installation and Setup Guide at
http://www.vmware.com/support/pubs. For more about configuration maximums, see Configuration
Maximums for VMware vSphere 5.1 at
http://www.vmware.com/support/pubs/vsphere-esxi-vcenter-server-pubs.html.



Installing ESXi 5.1
Slide 17-8

Installation option (required or optional, default selection, comments):
ƒ Host name – Required for static IP settings. Default: None.
ƒ Install location – Required. Default: None. Must be at least 5GB if you install the components on a single disk.
ƒ Keyboard layout – Required. Default: U.S. English.
ƒ VLAN ID – Optional. Default: None. VLAN ID range: 0–4094.
ƒ IP address – Optional. Default: DHCP. Configure a static IP address or use DHCP to configure the network.
ƒ Subnet mask – Optional. Default: calculated based on the IP address.
ƒ Gateway – Optional. Default: based on the IP address and subnet mask.
ƒ Primary DNS – Optional. Default: based on the IP address and subnet mask. A secondary DNS server can also be defined.
ƒ Root password – Optional. Default: None. Must contain 6–64 characters.
IP address, subnet mask, gateway, and DNS network settings can be changed after installation.
password

In a typical interactive installation, you boot the ESXi installer and respond to the installer prompts
to install ESXi to the local host disk. The installer reformats and partitions the target disk and
installs the ESXi boot image. If you have not installed ESXi on the target disk before, all data
located on the drive is overwritten, including hardware vendor partitions, operating system
partitions, and associated data.
Observe the following considerations:
• In an interactive installation, the system prompts you for the required system information.
• Verify that the server hardware clock is set to UTC. This setting is in the system BIOS.
• Consider disconnecting your network storage. This action decreases the time that it takes the
installer to search for available disk drives. When you disconnect network storage, files on the
disconnected disks are unavailable at installation.

CAUTION
Do not disconnect a LUN that contains an existing ESXi installation. Do not disconnect a
VMware vSphere® VMFS datastore that contains the installation of another ESXi host. These
actions can affect the outcome of the installation.



Be prepared to record the values that you enter during the installation. These notes are useful if you
must reinstall and re-enter the values that you originally chose.
If you are installing ESXi on a disk that contains a previous installation of ESXi or a VMFS
datastore, the installer provides you with options for upgrading. You are prompted to migrate the
existing ESXi settings and asked whether to preserve existing VMFS datastores.
To begin an ESXi installation:

1. Insert the ESXi installer CD/DVD into the CD/DVD drive or attach the installer USB flash drive.
2. Restart the machine.

3. Set the BIOS to boot from the CD-ROM device or the USB flash drive.



Installing ESXi
Slide 17-9

You must have the ESXi 5.1 ISO file on CD, DVD, or USB flash drive
media.
Boot from the media to start the ESXi installer.
Make sure that you select a disk that is not formatted with VMware
vSphere® VMFS.

Be careful when choosing the disk on which to install ESXi. Do not rely on the disk order in the list
to select a disk. If the disk you selected contains data, the Confirm Disk Selection page is displayed.
For example, when installing ESXi on the local disk, the local disk might not be the first disk in
the list.
To select the disk on which to install ESXi:

1. On the Select a Disk page, select the drive on which to install ESXi and press Enter.

2. Press F1 for information about the selected disk.


All newly installed hosts in VMware® vSphere® 5.1 use the GUID Partition Table (GPT) format
instead of the MS-DOS-style partition label. This change supports ESXi installation on disks larger
than 2TB, up to a maximum of 64TB.



The partition table itself is fixed as part of the binary image and is written to the disk at the time the
system is installed. The ESXi installer leaves the scratch and VMFS partitions blank. ESXi creates
them when the host is rebooted for the first time after installation or upgrade. The scratch partition is
used to store the output of the vm-support command, a command that you need when you create a
support bundle for VMware® technical support. The scratch partition is 4GB. The rest of the disk is
formatted as a VMFS-5 partition.

CAUTION
Upgraded systems do not use the GPT format but keep the older MS-DOS-based partition label.



Booting from SAN
Slide 17-10

ESXi can be booted from SAN:
ƒ Supported for Fibre Channel SAN
ƒ Supported for iSCSI and Fibre Channel over Ethernet SAN for certain qualified storage adapters
SAN connections must be made through a switched topology unless the array is certified for direct-connect.
The ESXi host must have exclusive access to its own boot LUN.
Use different LUNs for VMFS datastores and boot partitions.

Configuring a boot LUN can be used in situations where you do not want to configure local storage
or are using diskless systems, such as blade servers. Consider the benefits of booting from SAN:
• Servers can be denser and run cooler without internal storage.
• You can replace servers and have the new server point to the old boot location.
• Servers without local disks often take up less rack space.
• You can back up the system boot images in the SAN as part of the overall SAN backup
procedures. You can also use advanced array features, such as snapshots, on the boot image.
• Creation and management of the operating system image is easier and more efficient.
• You can access the boot disk through multiple paths, which protects the disk from being a single
point of failure.

CAUTION
Multipathing to a boot LUN is supported only on active-active arrays.



To enable boot from SAN, you must perform several tasks. The tasks depend on which storage
protocol that you are using. Boot from SAN is supported for the following storage protocols:
• Fibre Channel and Fibre Channel over Ethernet (FCoE)
• Hardware iSCSI
• Software and dependent hardware iSCSI
You must configure a diagnostic partition on a shared SAN LUN. The diagnostic partition is
accessed by multiple hosts and can store fault information for more than one host.
• If more than one ESXi host uses the same LUN as the diagnostic partition, that LUN must be
zoned so that all the servers can access it.
• Each server requires 100MB of space, so the size of the LUN determines how many servers can
share it. Each ESXi host is mapped to a diagnostic slot. VMware recommends at least 16 slots
(1,600MB) of disk space if servers share a diagnostic partition.
• If the device has only one diagnostic slot, all ESXi hosts sharing that device map to the same
slot. This setup can easily create problems. If two ESXi hosts perform a core dump at the same
time, the core dumps are overwritten on the diagnostic partition.
• If you use iSCSI Boot Firmware Table (iBFT) to boot an ESXi host from a SAN LUN, you

cannot set up a diagnostic partition on the SAN LUN. Instead, you use the VMware vSphere®
Management Assistant (vMA) to collect diagnostic information from your host and store it for
analysis.



Finally, when setting up your host to boot from SAN, you must first boot the host from the VMware
installation media. This action requires changing the system boot sequence in the BIOS. Because
changing the boot sequence in the BIOS is vendor-specific, see the vendor documentation for
instructions.
For complete details about configuring boot from SAN using Fibre Channel, FCoE, or iSCSI, see
the guides at http://www.vmware.com/support/pubs.



Review of Learner Objectives
Slide 17-11

You should be able to do the following:


ƒ Describe how to install ESXi interactively.
ƒ Identify the basic requirements for a boot-from-SAN configuration.



Lesson 2: Installing vCenter Server
Slide 17-12

Lesson 2:
Installing vCenter Server



Learner Objectives
Slide 17-13

After this lesson, you should be able to do the following:


ƒ Identify system requirements to install VMware® vCenter Server™.
ƒ Install vCenter Server on a supported Windows operating system.



vCenter Server Deployment Options
Slide 17-14

Windows-based (physical host or virtual machine):
ƒ Deployed on a physical host or virtual machine and installed with a supported version of Windows.
ƒ Reasons to use Windows-based VMware vCenter Server instead of the VMware® vCenter™ Server Appliance™:
• Support staff trained only on Windows operating systems
• Applications that depend on a specific Windows virtual machine version
• You prefer to use a physical host
Linux-based:
ƒ Deployed as a virtual appliance that runs the SuSE Linux operating system:
• No operating system license required
• Simple configuration through a Web browser
• Offers same user experience as Windows-based version


vCenter Server can run on a physical machine or a virtual machine.
When using a physical machine for the VMware® vCenter Server™ system:
• A dedicated physical server is required.
• vCenter Server is not susceptible to potential VMware® vSphere® outage.
• vCenter Server performance is limited only by the system hardware.
When using a virtual machine for the vCenter Server system:
• A dedicated physical server is not required.
• vCenter Server is susceptible to potential vSphere outage.
• The vCenter Server instance can be migrated from one system to another during maintenance
activities.
• vCenter Server must contend for resources with the other virtual machines on the host.



Single-Server Solution or Distributed Solution
Slide 17-15

Slide diagram: a single vCenter Server 5.1 solution, with the Single Sign-On Server, Inventory Service, vCenter Server, vSphere Web Client, VMware vSphere® Update Manager™, and database server on one virtual machine, compared with a distributed vCenter Server 5.1 solution, with these components spread across separate virtual machines.
In vSphere versions before vSphere 5.1, vCenter Server was installed in a single operation that also silently installed the Inventory Service on the same host machine. For small vSphere deployments, vCenter Server 5.1 provides a vCenter Server Simple Install option that installs VMware® vCenter™ Single Sign-On, Inventory Service, and vCenter Server on the same host or virtual machine.
Alternatively, to customize the location and setup of each component, you can install the
components separately by selecting the individual installation options, in the following order:
vCenter Single Sign On, Inventory Service, and vCenter Server. Each component can be installed in
a different host or virtual machine.
For the first installation of vCenter Server with vCenter Single Sign On, you must install all three
components in the vSphere environment: Single Sign On Server, Inventory Service, and vCenter
Server. In subsequent installations of vCenter Server in your environment, you do not need to install
Single Sign On. One Single Sign On server can serve your entire vSphere environment. After you
install vCenter Single Sign On once, you can connect all new vCenter Server instances to the same
authentication server.



vCenter Single Sign On
Slide 17-16

Two installation modes are available:


ƒ Simple Install
ƒ Individual component install

The vSphere 5.1 Single Sign On feature simplifies the login process for the Cloud Infrastructure
Suite. You can log into the management infrastructure a single time through the vSphere Web Client
or the API. You can perform operations across all components of the Cloud Infrastructure Suite
without having to log into the components separately.
Single Sign On operates across all Cloud Infrastructure Suite components that support this feature.
Its authentication services support multiple identity sources, such as Active Directory, LDAP, and NIS. Single Sign On is required for the Inventory Service, vCenter Server, and VMware vSphere® Web Client.
The two installation modes available for Single Sign On are the following:
• Simple Install
• Individual component install



Single Sign On Installation Wizard
Slide 17-17

Parameter Description

Single Sign On Deployment Type Create a new Single Sign-On installation or join an existing installation.
Select Node Type Basic Single Sign On with one node, or create a multinode installation.
Password for the Administrator Account Set the password for the Administrator account.
Single Sign-On Database Database type for Single Sign On.
Fully Qualified Domain Name FQDN or IP address for the Single Sign-On server.
Service Account Information Service account information for the service to run in.
Destination Folder Program folder to install Single Sign-On program files.
Port Settings HTTPS port to connect to the Single Sign-On server.

To install vCenter Single Sign On as a new installation, create the only node in a basic vCenter
Single Sign On installation or the first node in a high availability or multisite installation.
Once the installation is complete, back up the vCenter Single Sign On configuration and database.
Single Sign On running on a separate host from vCenter Server has the following minimum
hardware requirements:
• Intel or AMD dual core x64 processor with two or more logical cores
• 3GB of memory
• 2GB disk storage
• 1Gbps network speed
Requirements are higher for disk and memory if the Single Sign On database runs on the same host
machine.



vCenter Inventory Services
Slide 17-18

Stores vCenter Server application and inventory data:


ƒ Enables you to search and access inventory objects across linked
vCenter Servers
Required with vCenter Server 5.1:
ƒ Supports login by Single Sign On
Used by the vSphere Web Client.
Can be deployed on the same host as vCenter Server or on a separate host.
Part of vSphere Simple Install or installed as a separate component.
Inventory Service is included with the vCenter Server Appliance.

Inventory Services store vCenter Server application and inventory data, enabling you to search and
access inventory objects across linked vCenter Servers.
You can install Inventory Services and vCenter Server together on a single host machine using the
vCenter Server Simple Install option. This option is appropriate for small deployments.
Inventory Services running on a separate host has the following hardware requirements:
• Intel or AMD x64 processor with two or more logical cores each with a speed of 2GHz
• 3GB of memory
• 2GB of disk storage
• 1Gbps network speed



vCenter Server Hardware and Software Requirements
Slide 17-19

Hardware requirements (physical or virtual machine):


ƒ Number of CPUs – Two 64-bit CPUs or one 64-bit dual-core processor
ƒ Processor – 2.0GHz or higher Intel or AMD processor*
ƒ Memory – 4GB RAM minimum*
ƒ Disk storage – 4GB minimum*
ƒ Networking – Gigabit connection recommended
* Higher if the database, SSO, and Inventory Service run on the same machine
Software requirements:
ƒ 64-bit operating system is required.
ƒ See “vSphere Compatibility Matrixes.”

vCenter Server hardware must meet the following requirements:


• CPU – Two 64-bit CPUs or one 64-bit dual-core processor
• Processor – 2.0GHz or faster Intel or AMD processor
• Memory – 4GB RAM
• Disk storage – 4GB
• Networking – Gigabit connection recommended (10/100 Ethernet adapter minimum)
Processor, memory, and disk requirements increase if the database, SSO, and Inventory Services run
on the same machine. They might also increase because of the number of hosts and virtual machines
that are managed. For example, to manage up to 1,000 hosts and 10,000 powered-on virtual
machines, the vCenter Server system should have eight cores, 16GB of memory, and 10GB of disk
space.
Make sure that your operating system supports vCenter Server. vCenter Server requires a 64-bit
operating system, and the 64-bit system database source name (DSN) is required for vCenter Server
to connect to its database.



vCenter Server requires the Microsoft .NET 3.5 SP1 Framework. If the .NET 3.5 SP1 Framework is
not installed on your system, the vCenter Server installer installs it. The .NET 3.5 SP1 Framework
installation might require Internet connectivity to download more files.
If you plan to use the Microsoft SQL Server 2008 R2 Express database that is bundled with vCenter
Server, Microsoft Windows Installer version 4.5 (MSI 4.5) is required on your system. You can
download MSI 4.5 from the Microsoft Web site. You can also install MSI 4.5 directly from the
vCenter Server autorun.exe installer.
Both vCenter Server and Microsoft Internet Information Services (IIS) use port 80 as the default
port for direct HTTP connections. This conflict can cause vCenter Server to fail to restart after the
installation of vSphere Authentication Proxy. To resolve a conflict between IIS and vCenter Server
for port 80, take one of the following actions:
• If you installed IIS before installing vCenter Server, change the port for vCenter Server direct
HTTP connections from 80 to another value.
• If you installed vCenter Server before installing IIS, before restarting vCenter Server, change
the binding port of the IIS default Web site from 80 to another value.




vCenter Database Requirements
Slide 17-20

Each vCenter Server instance must have a connection to a database to


organize all the configuration data.
Supported databases:
ƒ Microsoft SQL Server 2005 (SP3 required; SP4 recommended)
ƒ Microsoft SQL Server 2008
ƒ Microsoft SQL Server 2008 R2 Express
ƒ Oracle 10g R2 and 11g
ƒ IBM DB2 9.5 and 9.7
Default database – Microsoft SQL Server 2008 R2 Express:
ƒ Included with vCenter Server
ƒ Used for product evaluations and demonstrations
ƒ Also used for small deployments (up to five hosts and 50 virtual machines)

vCenter Server requires a database to store and organize server data. vCenter Server supports SQL
Server, Oracle, and IBM DB2 databases. You must have administration credentials to log in to these
databases. Contact your database administrator for these credentials.
Or you can install the bundled Microsoft SQL Server 2008 R2 Express database. This database is intended to be used for small deployments of up to 5 hosts and 50 virtual machines.
VMware vSphere® Update Manager™ also requires a database. Update Manager can use the
vCenter Server database. But VMware recommends using one database for vCenter Server and
another database for Update Manager. For smaller deployments, you might not require a separate
database for Update Manager.
For more about the vCenter Server database requirements, see the documentation at
http://www.vmware.com/support/pubs.



Considerations for Calculating the Database Size
Slide 17-21

Use the vCenter Server Database Sizing Calculator:
ƒ For Microsoft SQL Server and Oracle
Or use the what-if calculator built into vCenter Server.
The size of the database varies with the number of hosts and virtual machines to manage and the
number of statistics to be collected. VMware provides tools to help you estimate the size of your
database.
The VMware vCenter Server 5.x Database Sizing Calculator (for Microsoft SQL Server or Oracle)
is an Excel spreadsheet that estimates the size of the vCenter Server database. This estimate is
calculated from the information that you enter, such as the number of hosts and virtual machines.
vCenter Server also provides you with a database estimation calculator in which you enter the
number of hosts and virtual machines in your inventory. The what-if calculator uses these numbers
to determine how much database space is required for the collection interval configuration that you
defined.
To access the what-if calculator:

1. Select Administration > vCenter Server Settings in the menu bar.

2. Click Statistics in the left pane. The calculator does not change the size of the vCenter Server
database.



Before Installing vCenter Server
Slide 17-22

Before beginning the vCenter Server installation, make sure that the
following prerequisites are met:
ƒ Ensure that vCenter Server hardware and software requirements are
met.
ƒ Ensure that the vCenter Server system belongs to a domain rather than
a workgroup.
ƒ Create a vCenter Server database, unless you are using the default
database.
ƒ Obtain and assign a static IP address and a host name to the vCenter
Server system.

Before you begin the vCenter Server installation procedure, make sure that the following
prerequisites are met:
• Make sure that the system that you use for vCenter Server meets the hardware and software
requirements.
• Make sure that the system that you use for vCenter Server belongs to a domain and not a
workgroup. If the system is assigned to a workgroup, vCenter Server is unable to discover all
domains and systems available on the network when using such features as Guided
Consolidation.
• Create a vCenter Server database, unless you want to use SQL Server 2008 Express, the default
vCenter Server database.
• Obtain and assign a static IP address and host name to the Windows server that will host
vCenter Server. This IP address must have a valid (internal) DNS registration that resolves
properly from all managed ESXi hosts.
• You can deploy vCenter Server behind a firewall. But make sure that you do not have a network
address translation firewall between vCenter Server and the hosts that it will manage.
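
For example, before starting the installer, you might verify name resolution from the vCenter Server system. The host names below are placeholders for your environment:

nslookup vc01.vmeduc.com
nslookup esxi01.vmeduc.com
ping esxi01.vmeduc.com

The nslookup commands confirm that the vCenter Server system and the managed hosts resolve correctly in DNS, and ping confirms basic network reachability.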



Installing vCenter Server and Its Components
Slide 17-23

Use the VMware® vCenter™ Installer to install vCenter Server and its
components.

From the installer, you can:
ƒ Install vCenter Server.
ƒ Install vSphere Update Manager.
ƒ Install vSphere Client.
To install vCenter Server and its components, use the VMware vCenter Installer. The VMware
vCenter Installer enables you to install the vCenter Server software, the vSphere Client, and the
server components of vCenter Server modules.
To start the VMware vCenter Installer:

• Run autorun.exe from the installation media.



Standalone Instance or Linked Mode Group
Slide 17-24

When first setting up your vCenter Server Linked Mode group, you must install the first vCenter
Server instance as a standalone instance. The reason for this requirement is that you do not yet have
a remote vCenter Server machine to join. Subsequent vCenter Server instances can join the first
vCenter Server instance or other vCenter Server instances that have joined the Linked Mode group
(as shown on the slide).

NOTE
DNS must be operational for Linked Mode replication to work.



vCenter Server Installation Wizard
Slide 17-25

The vCenter Server Installation wizard asks for the following data.
Parameter Description

User name and organization User identification
License key Evaluation or valid license key
Database information Default database or remote database connection information
SYSTEM account information User for running the vCenter Server service
Destination folder Software location
Standalone or join a Linked Mode group Standalone instance, or enable two or more vCenter Server inventories to be visible from the vSphere Client
Ports Ports used for communicating with client interfaces and managed hosts
JVM memory Java Virtual Machine memory configuration for the vCenter Server Web service
Ephemeral port configuration Select if vCenter Server will manage hosts that power on more than 2,000 virtual machines simultaneously


To start vCenter Server installation:

1. Click the vCenter Server link in the VMware vCenter Installer main window. The vCenter
Server installation wizard prompts you for the following information:
• User name, organization, and license key – If you omit the license key, vCenter Server is
installed in evaluation mode. After installation, you can use the vSphere Client to enter the
vCenter Server license.
• Database information – On the Database Options page of the vCenter Server installer, you
must choose between the default database or an existing supported database.
If you choose to use an existing SQL Server database, you must create a DSN. The DSN
contains specific information about the database that the ODBC driver requires to connect
to it. If you are using an existing supported database, you are also prompted to enter the
database user name and password.
• SYSTEM account or user-specified account – The vCenter Server Service page of the
vCenter Server installer gives you the option to use the Windows SYSTEM account or a
user-specified account for running the vCenter Server service.



The primary reason to use a user-specified account is to enable the use of Windows
authentication for SQL Server. Security is another reason. The built-in SYSTEM account
has more permissions and rights on the system than vCenter Server requires, which can
contribute to security problems.
Even if you do not use Windows authentication for SQL Server, you might want to set up a
local user-specified account for vCenter Server. The only requirement is that the user-
specified account must be an administrator on the local machine.
• Destination folder for software – The name of the default folder in which the vCenter
Server software is installed. You can change the folder name during installation.
• Whether to install a standalone vCenter Server instance or to join it to a Linked Mode
group – If this instance of vCenter Server is the first instance that you are installing, install
vCenter Server as a standalone instance. A Linked Mode group enables you to view and
manage the inventories of multiple vCenter Server instances.
• vCenter ports – vCenter Server must be able to send data to every managed host and to
receive data from every client interface. VMware uses the following ports for
communication:
• 443 (HTTPS)
• 80 (HTTP)
• 902 (User Datagram Protocol heartbeat)
• 8080 (Web Services HTTP)
• 8443 (Web Services HTTPS)
• 60099 (Web Services Change Service Notification)
• 389 (LDAP)
• 636 (SSL)
Unless you have a specific reason to change the ports, use the default ports assigned.
• JVM memory – vCenter Server includes a service called VMware VirtualCenter
Management Webservices. This service requires 1–4GB of additional memory. To
optimally configure Webservices, during installation you can specify the maximum
Webservices JVM memory in relation to the inventory size. For example, if you have a
small inventory (fewer than 100 hosts), select a JVM memory size of 1,024MB. If you
have a large inventory (more than 400 hosts), select a JVM memory size of 4,096MB.
• Ephemeral port configuration – This option prevents the pool of available ephemeral ports
from being exhausted in situations where vCenter Server will manage hosts on which more
than 2000 virtual machines will power on simultaneously.
For more about vCenter Server installation, see vSphere Installation and Setup Guide at
http://www.vmware.com/support/pubs/vsphere-esxi-vcenter-server-pubs.html.



vCenter Server Services
Slide 17-26

Instead of using the vCenter Server Appliance, you can install vCenter Server on
a Windows system.

After vCenter Server is installed, a number of services start on reboot and can be
managed from Windows Control Panel (Administrative Tools > Services).

After vCenter Server is installed, several new Windows services are visible on the vCenter Server
system:
• VMware® vCenter™ Orchestrator™ Configuration – A service for VMware® vCenter™
Orchestrator™, a workflow engine that can help administrators automate existing manual tasks.
• VMware VirtualCenter Management Webservices – Enables configuration of vCenter
management services.
• VMware VirtualCenter Server – The heart of vCenter Server, this service provides centralized
management of virtual machines and ESXi hosts.
• VMware VCMSDS – Provides vCenter Server LDAP directory services.
The VMware Tools Service (shown on the slide) is not installed during the vCenter Server
installation. It is installed when VMware® Tools™ is installed into the guest operating system of a
virtual machine. VMware Upgrade Helper is not installed during the vCenter Server installation. It
is a service that VMware Tools uses whenever a virtual machine’s hardware is upgraded to a newer
version.



Lab 29 (Optional)
Slide 17-27

In this lab, you will install vCenter Server components.


1. Uninstall vCenter Server.
2. Uninstall the vCenter Inventory Service.
3. Uninstall vCenter Single Sign-On.
4. Uninstall the SQL Server Instance.
5. Install vCenter Server.



Review of Learner Objectives
Slide 17-28

You should be able to do the following:


ƒ Identify system requirements to install vCenter Server.
ƒ Install vCenter Server on a supported Windows operating system.



Lesson 3: vCenter Linked Mode
Slide 17-29

Lesson 3:
vCenter Linked Mode



Learner Objectives
Slide 17-30

After this lesson, you should be able to do the following:


• Describe vCenter Linked Mode.
• List vCenter Linked Mode benefits.
• List vCenter Linked Mode requirements.
• Join a vCenter Server system to a Linked Mode group.
• View and manage vCenter Server inventories in a Linked Mode group.
• Isolate a vCenter Server system from a Linked Mode group.



vCenter Linked Mode
Slide 17-31

vCenter Linked Mode helps you manage multiple vCenter Server instances.
vCenter Linked Mode enables you to do the following:
• Log in simultaneously to all vCenter Server systems
• View and search the inventories of all vCenter Server systems
You cannot migrate hosts or virtual machines between vCenter Server systems in vCenter Linked Mode.

vCenter Linked Mode enables VMware vSphere® administrators to search across multiple vCenter
Server system inventories from a single VMware vSphere® Client™ session. For example, you
might want to simplify management of inventories associated with remote offices or multiple
datacenters. Likewise, you might use vCenter Linked Mode to configure a recovery site for disaster
recovery purposes.
vCenter Linked Mode enables you to do the following:
• Log in simultaneously to all vCenter Server systems for which you have valid credentials
• Search the inventories of all vCenter Server systems in the group
• Search for user roles on all the vCenter Server systems in the group
• View the inventories of all vCenter Server systems in the group in a single inventory view
With vCenter Linked Mode, you can have up to ten linked vCenter Server systems and up to 3,000 hosts across the linked vCenter Server systems. For example, you can have ten linked vCenter Server systems, each with 300 hosts, or five vCenter Server systems with 600 hosts each. But you cannot have two vCenter Server systems with 1,500 hosts each, because that exceeds the limit of 1,000 hosts per vCenter Server system. vCenter Linked Mode supports 30,000 powered-on virtual machines (and 50,000 registered virtual machines) across linked vCenter Server systems.



vCenter Linked Mode Architecture
Slide 17-32

(Slide diagram: the vSphere Client connects to multiple vCenter Server instances, each running a Tomcat Web service and an AD LDS instance. The replicated global data comprises connection information, certificates and thumbprints, licensing information, and user roles.)

vCenter Linked Mode uses Microsoft Active Directory Lightweight Directory Services (AD LDS) to
store and synchronize data across multiple vCenter Server instances. AD LDS is an implementation
of Lightweight Directory Access Protocol (LDAP). AD LDS is installed as part of the vCenter
Server installation. Each AD LDS instance stores data from all the vCenter Server instances in the
group, including information about roles and licenses. This information is regularly replicated across
all the AD LDS instances in the connected group to keep them in sync. After you have configured
multiple vCenter Server instances in a group, the group is called a Linked Mode group.
Using peer-to-peer networking, the vCenter Server instances in a Linked Mode group replicate
shared global data to the LDAP directory. The global data for each vCenter Server instance includes:
• Connection information (IP addresses and ports)
• Certificates and thumbprints
• Licensing information
• User roles
The vSphere Client can connect to other vCenter Server instances by using the connection information retrieved from AD LDS. The Apache Tomcat Web service running on vCenter Server enables the search capability across multiple vCenter Server instances. All vCenter Server instances in a Linked Mode group can access a common view of the global data.
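When Linked Mode replication misbehaves, one low-level check is whether the local AD LDS (VMwareVCMSDS) instance answers LDAP queries at all. The following is a minimal sketch, assuming the default LDAP port 389 and a PowerShell session on the vCenter Server system; it is a troubleshooting aid, not a documented interface.

    # Bind to the RootDSE of the local AD LDS instance.
    # Any response confirms that the LDAP directory behind Linked Mode is up.
    $rootDse = [ADSI]"LDAP://localhost:389/RootDSE"
    $rootDse.namingContexts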



Searching Across vCenter Server Instances
Slide 17-33

(Slide diagram: the vSphere Client connects to one vCenter Server instance; the query service in its Tomcat Web service fans the search out to the query services on the other instances, following steps 1 through 4 below.)

For inventory searches, vCenter Linked Mode relies on a Java-based Web application called the
query service, which runs in Tomcat Web services.
When you search for an object, the following operations take place:
1. The vSphere Client logs in to a vCenter Server system.

2. The vSphere Client obtains a ticket to connect to the local query service.

3. The local query service connects to the query services on other vCenter Server instances to do a
distributed search.
4. Before returning the results, vCenter Server filters the search results according to permissions.
The search service queries Active Directory (AD) for information about user permissions. So you
must be logged in to a domain account to search all vCenter Server systems in vCenter Linked
Mode. If you log in with a local account, searches return results only for the local vCenter Server
system, even if it is joined to other systems in a Linked Mode group.



The vSphere Client provides a search-based interface that enables an administrator to search through
the entire inventory and its attributes. For example, an administrator might search for virtual
machines that match a search string and that reside on hosts whose names match another search
string.
1. Select Home > Inventory > Search to display the search page.

2. Click the icon in the Search Inventory field at the top right of the vSphere Client window.

3. Select the type of inventory item to search for:


• Virtual Machines
• Folders
• Hosts
• Datastores
• Networks
• Inventory, which finds matches to the search criteria in any of the available managed object
types



Basic Requirements for vCenter Linked Mode
Slide 17-34

vCenter Server version:
• Linked Mode groups that contain both vCenter Server 5.0 and earlier versions of vCenter Server are not supported.
Domain controller:
• The domain user account requires the following privileges:
  • Member of the Admin group
  • Act as part of operating system
  • Log in as a service
• vCenter Server instances can be in different domains if the domains trust one another.
DNS server:
• The DNS name must match the machine name.
Clocks must be synchronized across instances.

vCenter Linked Mode is implemented through the vCenter Server installer. You can create a Linked
Mode group during vCenter Server installation. Or you can use the installer to add or remove
instances.
When adding a vCenter Server instance to a Linked Mode group, the user running the installer must
be an administrator. Specifically, the user must be a local administrator on the machine where
vCenter Server is being installed and on the target machine of the Linked Mode group. Generally,
the installer must be run by a domain user who is an administrator on both systems.
The following requirements apply to each vCenter Server system that is a member of a Linked
Mode group:
• Do not join a version 5.0 vCenter Server to earlier versions of vCenter Server, or an earlier
version of vCenter Server to a version 5.0 vCenter Server. Upgrade any vCenter Server instance
to version 5.0 before joining it to a version 5.0 vCenter Server. The vSphere Client does not
function correctly with vCenter Server systems in groups that have both version 5.0 and earlier
versions of vCenter Server.
• DNS must be operational for Linked Mode replication to work.



• The vCenter Server instances in a Linked Mode group can be in different domains if the
domains have a two-way trust relationship. Each domain must trust the other domains on which
vCenter Server instances are installed.
• All vCenter Server instances must have network time synchronization. The vCenter Server
installer validates that the machine clocks are no more than 5 minutes apart.
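Because a clock offset greater than 5 minutes causes the join to fail, it is worth checking the offset before running the installer. A minimal sketch using the built-in w32tm tool; vc02.corp.local is a placeholder for an existing member of the group.

    # Sample this machine's clock offset against another vCenter Server system.
    # The installer rejects machines that are more than 5 minutes apart.
    w32tm /stripchart /computer:vc02.corp.local /samples:3 /dataonly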
Consider the following before you configure a Linked Mode group:
• vCenter Server users see the vCenter Server instances on which they have valid permissions.
• You can join a vCenter Server instance to a standalone instance that is not part of a domain. If
you do so, add the standalone instance to a domain and add a domain user as an administrator.
• The vCenter Server instances in a Linked Mode group do not have to have the same domain
user login. The instances can run under different domain accounts. By default, they run as the
LocalSystem account of the machine on which they are running, which means that they are
different accounts.
• During vCenter Server installation, if you enter an IP address for the remote instance of vCenter
Server, the installer converts it into a fully qualified domain name.
For a complete list of the requirements and considerations for implementing vCenter Linked Mode, see vCenter Server and Host Management Guide at http://www.vmware.com/support/pubs/vsphere-esxi-vcenter-server-pubs.html.



Joining a Linked Mode Group
Slide 17-35

Join a system to a Linked Mode group.


• After installing vCenter Server: Start > Programs > VMware > vCenter Server Linked Mode Configuration
• During installation of vCenter Server

During the installation of vCenter Server, you have the option of joining a Linked Mode group. If
you do not join during installation, you can join a Linked Mode group after vCenter Server has
been installed.
To join a vCenter Server system to a Linked Mode group:

1. Select Start > Programs > VMware > vCenter Server Linked Mode Configuration. Click
Next.
2. Select Modify linked mode configuration. Click Next.

3. Click Join this vCenter Server instance to an existing linked mode group or another
instance. Click Next.
4. When prompted, enter the remaining networking information. Click Finish. The vCenter Server
instance is now part of a Linked Mode group.
After you form a Linked Mode group, you can log in to any single instance of vCenter Server. From that single instance, you can view and manage the inventories of all the vCenter Server instances in the group. The delay before a newly joined instance's data becomes visible is usually less than 15 seconds. A new vCenter Server instance might take a few minutes to be recognized and published by the existing instances, because group members do not read the global data often.



vCenter Service Monitoring: Linked Mode Groups
Slide 17-36

Use the vCenter Service Status window to quickly identify and correct failures.


When logged in to a vCenter Server system that is part of a Linked Mode group, you can monitor
the health of services running on each server in the group.
To display the status of vCenter services:

• On the vSphere Client Home page, click vCenter Service Status. The vCenter Service Status
page enables you to view information that includes:
• A list of all vCenter Server systems and their services
• A list of all vCenter Server plug-ins
• The status of all listed items
• The date and time of the last change in status
• Messages associated with the change in status.



The vCenter Service Status plug-in is used to track multiple vCenter Server extensions and view
the overall health of the vCenter Server system. The plug-in is also useful for confirming that
communications are valid when a Linked Mode configuration is enabled. In this way, an
administrator can, at a glance, determine the service status for each member of a Linked
Mode group:
• VirtualCenter Management Service (the main vCenter Server service)
• vCenter Management Webservices
• Query Service
• Ldap health monitors
The Ldap health monitor is the component that represents AD LDS (VMwareVCMSDS in Windows
Services). It is the LDAP directory service that vCenter Server uses. This health monitor can be
helpful in troubleshooting communication issues among vCenter Server instances that have been
configured in a Linked Mode group.



Resolving Role Conflicts
Slide 17-37

Roles are replicated when a vCenter Server system is joined to a Linked Mode group.
• If role names differ on vCenter Server systems, they are combined into a single common list and each system will have all the user roles.
• If role names are identical, they are combined into a single role (if they contain the same privileges).
• If role names are identical and the roles contain different privileges, these roles must be reconciled:
  • One of the roles must be renamed.

When you join a vCenter Server system to a Linked Mode group, the roles defined on each vCenter
Server system in the group are replicated to the other systems in the group.
If the roles defined on each vCenter Server system are different, the role lists of the systems are
combined into a single common list. For example, vCenter Server 1 has a role named Role A and
vCenter Server 2 has a role named Role B. Both systems will have both Role A and Role B after the
systems are joined in a Linked Mode group.
If two vCenter Server systems have roles with the same name, the roles are combined into a single
role if they contain the same privileges on each vCenter Server system. If two vCenter Server
systems have roles with the same name that contain different privileges, this conflict must be
resolved by renaming at least one of the roles. You can choose to resolve the conflicting roles either
automatically or manually.
If you choose to reconcile the roles automatically, the role on the joining system is renamed to
<vCenter_Server_system_name> <role_name>. <vCenter_Server_system_name> is the name of the
vCenter Server system that is joining the Linked Mode group, and <role_name> is the name of the
original role. To reconcile the roles manually, connect to one of the vCenter Server systems with the
vSphere Client. Then rename one instance of the role before joining the vCenter Server system to
the Linked Mode group. If you remove a vCenter Server system from a Linked Mode group, the
vCenter Server system retains all the roles that it had as part of the group.



Isolating a vCenter Server Instance
Slide 17-38

You can isolate a vCenter Server instance from a Linked Mode group in two ways:
• Use the vCenter Server Linked Mode Configuration wizard.
• Use Windows Add/Remove Programs.

You can also isolate (remove) a vCenter Server instance from a Linked Mode group, for example, to
manage the vCenter Server instance as a standalone instance. One way to isolate the instance is to
start the vCenter Server Linked Mode Configuration wizard. Another way is to use Windows
Add/Remove Programs (click Change) in the vCenter Server system’s operating system. Either
method starts the vCenter Server wizard. Modify the vCenter Server configuration as shown on the
slide.
To use the vCenter Server Linked Mode Configuration wizard to isolate a vCenter Server
instance from a Linked Mode group:

1. Select Start > All Programs > VMware > vCenter Server Linked Mode Configuration.

2. Click Modify linked mode configuration. Click Next.

3. Click Isolate this vCenter Server instance from linked mode group. Click Next.

4. Click Continue.
5. Click Finish. The vCenter Server instance is no longer part of the Linked Mode group.



Review of Learner Objectives
Slide 17-39

You should be able to do the following:


• Describe vCenter Linked Mode.
• List vCenter Linked Mode benefits.
• List vCenter Linked Mode requirements.
• Join a vCenter Server system to a Linked Mode group.
• View and manage vCenter Server inventories in a Linked Mode group.
• Isolate a vCenter Server system from a Linked Mode group.



Lesson 4: Image Builder
Slide 17-40

Lesson 4:
Image Builder



Learner Objectives
Slide 17-41

After this lesson, you should be able to use VMware vSphere® ESXi™ Image Builder CLI to create an ESXi image.



What Is an ESXi Image?
Slide 17-42

An ESXi image is a software bundle that consists of four main components:
• Core hypervisor
• CIM providers
• Drivers
• Plug-in components

An ESXi image is a customizable software bundle that contains all the software necessary to run on
an ESXi host.
An ESXi image includes the following:
• The base ESXi software, also called the core hypervisor
• Specific hardware drivers
• Common Information Model (CIM) providers
• Specific applications or plug-in components
ESXi images can be installed on a hard disk, or the ESXi image can run entirely in memory.



VMware Infrastructure Bundles
Slide 17-43

VMware® infrastructure bundles (VIBs) are software packages that are added to an ESXi image.
A VIB is used to package any of the following ESXi image components:
• ESXi base image
• Drivers
• CIM providers
• Plug-ins and other components
A VIB specifies relationships with other VIBs:
• VIBs that the VIB depends on
• VIBs that the VIB conflicts with

The ESXi image includes one or more VMware installation bundles (VIBs).
A VIB is an ESXi software package. In VIBs, VMware and its partners package solutions, drivers,
CIM providers, and applications that extend the ESXi platform. An ESXi image should always
contain one base VIB. Other VIBs can be added to include additional drivers, CIM providers,
updates, patches, and applications.



ESXi Image Deployment
Slide 17-44

The challenge of using a standard ESXi image is that the image might be missing desired functionality.
(Slide diagram: a standard ESXi ISO image, containing base providers and base drivers, might be missing a CIM provider, a driver, or a vendor plug-in.)

Standard ESXi images are provided by VMware and are available on the VMware Web site. ESXi
images can also be provided by VMware partners.
The challenge that administrators face when using the standard ESXi image provided by VMware is
that the standard image is sometimes limited in functionality. For example, the standard ESXi image
might not contain all the drivers or CIM providers for a specific set of hardware. Or the standard
image might not contain vendor-specific plug-in components.
To create an ESXi image that contains custom components, use Image Builder.



What Is Image Builder?
Slide 17-45

Image Builder is a set of command-line utilities that are used to create and manage image profiles.
• An image profile is a group of VIBs that are used to create an ESXi image.
Image Builder enables the administrator to build customized ESXi boot images:
• Used for booting disk-based ESXi installations
• Used by VMware vSphere® Auto Deploy™ to boot an ESXi host in memory
Image Builder is based on VMware vSphere® PowerCLI™.
• The Image Builder cmdlets are included with the vSphere PowerCLI tools.

Image Builder is a utility for customizing ESXi images. Image Builder consists of a server and
VMware vSphere® PowerCLI™ cmdlets. These cmdlets are used to create and manage VIBs,
image profiles, software depots, and software channels. Image Builder cmdlets are implemented as
Microsoft PowerShell cmdlets and are included in vSphere PowerCLI. Users of Image Builder
cmdlets can use all vSphere PowerCLI features.



Image Builder Architecture
Slide 17-46

(Slide diagram: a software depot contains image profiles and VIBs, including ESXi VIBs, driver VIBs, OEM VIBs, and security VIBs, grouped into software channels.)
• VIB: an ESXi software package, provided by VMware and its partners.
• Image profile: defines an ESXi image and consists of one or more VIBs.
• Software depot: a logical grouping of VIBs and image profiles; can be online or offline.
• Software channel: used to group different types of VIBs at a software depot.

The Image Builder architecture consists of the following components:


• VIB – VIBs are software packages that consist of packaged solutions, drivers, CIM providers,
and applications that extend the ESXi platform. VIBs are available in software depots.
• Image profile – Image profiles define ESXi images. Image profiles consist of one or more
VIBs. An image profile always includes a base VIB and might include other VIBs. You use
vSphere PowerCLI to examine and define an image profile.
• Software depot – A software depot is a logical grouping of VIBs and image profiles. The
software depot is a hierarchy of files and folders and can be available through an HTTP URL
(online depot) or a ZIP file (offline depot). VMware and its partners make software depots
available to users.
• Software channel – VMware and its partners that are hosting software depots use software
channels to group different types of VIBs. For example, a software channel can be created to
group all security updates. A VIB can be in multiple software channels. You do not have to
connect to a software channel to use the associated VIBs. Software channels are available to
facilitate filtering.



Building an ESXi Image: Step 1
Slide 17-47

Start the vSphere PowerCLI session.
1. Verify that the execution policy is set to unrestricted.
   • Cmdlet: Get-ExecutionPolicy
2. Connect to your vCenter Server system.
   • Cmdlet: Connect-VIServer
(Slide diagram: a Windows host running vSphere PowerCLI with the Image Builder snap-in.)

To use Image Builder, the first step is to install vSphere PowerCLI and all prerequisite software. The
Image Builder snap-in is included with the vSphere PowerCLI installation.
To install Image Builder, you must install:
• Microsoft .NET 2.0
• Microsoft PowerShell 1.0 or 2.0
• vSphere PowerCLI, which includes the Image Builder cmdlets
After you start the vSphere PowerCLI session, the first task is to verify that the execution policy is set
to Unrestricted. For security reasons, Windows PowerShell supports an execution policy feature. It
determines whether scripts are allowed to run and whether they must be digitally signed. By default,
the execution policy is set to Restricted, which is the most secure policy. If you want to run scripts or
load configuration files, you can change the execution policy by using the Set-ExecutionPolicy
cmdlet. To view the current execution policy, use the Get-ExecutionPolicy cmdlet.
The next task is to connect to your vCenter Server system. The Connect-VIServer cmdlet enables
you to start a new session or reestablish a previous session with a vSphere server.
For more about installing Image Builder and its prerequisite software, see vSphere Installation and
Setup Guide. For more about vSphere PowerCLI, see vSphere PowerCLI Installation Guide. Both
books are at http://www.vmware.com/support/pubs/vsphere-esxi-vcenter-server-pubs.html.
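A minimal sketch of this first step in a vSphere PowerCLI session follows; vc01.corp.local and the credentials are placeholders for your environment.

    # Check the current execution policy; relax it only if scripts are blocked
    # (this may prompt for confirmation and requires administrator rights).
    Get-ExecutionPolicy
    Set-ExecutionPolicy Unrestricted
    # Start a session with the vCenter Server system.
    Connect-VIServer -Server vc01.corp.local -User administrator -Password 'vmware'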



Building an ESXi Image: Step 2
Slide 17-48

Connect to a software depot.
1. Add a software depot to Image Builder.
   • Cmdlet: Add-EsxSoftwareDepot
2. Verify that the software depot can be read.
   • Cmdlet: Get-EsxImageProfile
(Slide diagram: the Windows host running vSphere PowerCLI with the Image Builder snap-in reads the software depot, which holds an image profile and its ESXi, driver, and OEM VIBs.)

Before you create or customize an ESXi image, Image Builder must be able to access one or more
software depots.
The Add-EsxSoftwareDepot cmdlet enables you to add software depots or offline bundle ZIP
files to Image Builder. A software depot consists of one or more software channels. By default, this
cmdlet adds all software channels in the depot to Image Builder. The Get-EsxSoftwareChannel
cmdlet retrieves a list of currently connected software channels. The Remove-EsxSoftwareDepot
cmdlet enables you to remove software depots from Image Builder.
After adding the software depot to Image Builder, verify that you can read the software depot.
The Get-EsxImageProfile cmdlet retrieves a list of all published image profiles in the
software depot.
Other Image Builder cmdlets that might be useful include Set-EsxImageProfile and Compare-EsxImageProfile. The Set-EsxImageProfile cmdlet modifies a local image profile and
performs validation tests on the modified profile. The cmdlet returns the modified object but does
not persist it. The Compare-EsxImageProfile cmdlet shows whether two image profiles have the
same VIB list and acceptance levels.
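A minimal sketch of step 2, assuming an offline depot; the ZIP path is a placeholder.

    # Attach an offline software depot (ZIP file) to Image Builder.
    Add-EsxSoftwareDepot C:\depots\ESXi510-depot.zip
    # Verify that the depot can be read by listing its published image profiles.
    Get-EsxImageProfile | Select-Object Name, Vendor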



Building an ESXi Image: Step 3
Slide 17-49

Clone and modify an image profile.
1. Clone an image profile.
   • Cmdlet: New-EsxImageProfile
2. Modify an image profile.
   • Cmdlets: Add-EsxSoftwarePackage, Remove-EsxSoftwarePackage
Clone the default ESXi image provided by VMware and then customize it.
(Slide diagram: as in step 2, the Windows host with vSphere PowerCLI and the Image Builder snap-in works against the software depot.)


Cloning a published profile is the easiest way to create a custom image profile. Cloning a profile is
useful when you want to remove a few VIBs from a profile. Cloning is also useful when you want to
use hosts from different vendors and want to use the same basic profile, with the addition of vendor-
specific VIBs. VMware partners or large installations might consider creating a profile from the
beginning.
The New-EsxImageProfile cmdlet enables you to create an image profile or clone an
image profile.
To add one or more software packages (VIBs) to an image profile, use the Add-EsxSoftwarePackage cmdlet. Likewise, the Remove-EsxSoftwarePackage cmdlet enables you
to remove software packages from an image profile. The Get-EsxSoftwarePackage cmdlet
retrieves a list of VIBs in an image profile.
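A minimal sketch of step 3; the profile, vendor, and package names are placeholders, and the VIB to add must already exist in an attached depot.

    # Clone a published image profile under a new name.
    New-EsxImageProfile -CloneProfile "ESXi-5.1.0-799733-standard" `
        -Name "ESXi51-Custom" -Vendor "MyOrg"
    # Add a driver VIB from an attached depot to the clone.
    Add-EsxSoftwarePackage -ImageProfile "ESXi51-Custom" -SoftwarePackage "net-driver-example"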



Using Image Builder to Build an Image: Step 4
Slide 17-50

Generate a new ESXi image.
• Cmdlet: Export-EsxImageProfile
(Slide diagram: Image Builder exports the image profile from the software depot as an ISO image or a PXE-bootable image.)

Finally, after creating an image profile, you can generate an ESXi image. The Export-EsxImageProfile cmdlet exports an image profile as an ISO image or ZIP file. An ISO image can
be used to boot an ESXi host. A ZIP file can be used by vSphere Update Manager for remediating
ESXi hosts. The exported image profile can also be used with Auto Deploy to boot ESXi hosts.
For the complete list of Image Builder cmdlets, see vSphere Image Builder PowerCLI Reference at
http://www.vmware.com/support/pubs/vsphere-esxi-vcenter-server-pubs.html.
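A minimal sketch of step 4; the paths and profile name are placeholders.

    # Export the customized profile as a bootable ISO image.
    Export-EsxImageProfile -ImageProfile "ESXi51-Custom" -ExportToIso -FilePath C:\images\ESXi51-Custom.iso
    # Export the same profile as a ZIP bundle for vSphere Update Manager.
    Export-EsxImageProfile -ImageProfile "ESXi51-Custom" -ExportToBundle -FilePath C:\images\ESXi51-Custom.zip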



Lab 30
Slide 17-51

In this lab, you will use Image Builder to export an image profile.
1. Export an image profile to an ISO image.



Review of Learner Objectives
Slide 17-52

You should be able to use Image Builder to create an ESXi image.



Lesson 5: Auto Deploy
Slide 17-53

Lesson 5:
Auto Deploy



Learner Objectives
Slide 17-54

After this lesson, you should be able to do the following:


• Understand the purpose of Auto Deploy.
• Configure Auto Deploy.
• Use Auto Deploy to deploy stateless ESXi.



What Is Auto Deploy?
Slide 17-55

Auto Deploy is a method introduced in vSphere 5.0 for deploying ESXi hosts:
• The ESXi host’s state and configuration run entirely in memory.
• When the ESXi host is shut down, the state information is cleared from memory.
• Based on preboot execution environment (PXE) boot
• Works with Image Builder, vCenter Server, and Host Profiles
Benefits of Auto Deploy:
• Large numbers of ESXi hosts can be deployed quickly and easily.
• A standard ESXi image can be shared across many hosts.
• The host image is decoupled from the physical server:
  • Simplifies host recovery
• A boot disk is not necessary.


VMware vSphere® Auto Deploy™ is a new method for provisioning ESXi hosts in vSphere 5.0.
With Auto Deploy, vCenter Server loads the ESXi image directly into the host memory. When the
host boots, the software image and optional configuration are streamed into memory. All changes
made to the state of the host are stored in RAM. When the host is shut down, the state of the host is
lost but can be streamed into memory again when the host is powered back on.
Unlike the other installation options, Auto Deploy does not store the ESXi state on the host disk.
vCenter Server stores and manages ESXi updates and patching through an image profile and,
optionally, the host configuration through a host profile.
Auto Deploy enables the rapid deployment of many hosts. Auto Deploy simplifies ESXi host
management by eliminating the necessity to maintain a separate boot image for each host. A
standard ESXi image can be shared across many hosts. When a host is provisioned using Auto
Deploy, the host image is decoupled from the physical server. The host can be recovered without
having to recover the hardware or restore from a backup. Finally, Auto Deploy eliminates the need
for a dedicated boot device, thus freeing the boot device for other uses.



Where Are the Configuration and State Information Stored?
Slide 17-56

Because Auto Deploy can be configured without a boot disk, all information on the state of the host is stored in or managed by vCenter Server:
• Image state (ESXi base, drivers, CIM providers, …) – image profile
• Configuration state (networking, storage, date and time, firewall, admin password, …) – host profile
• Running state (VM inventory, vSphere HA state, license, vSphere DPM configuration) – vCenter Server
• Event recording (log files, core dump) – add-in components

Without the use of Auto Deploy, the ESXi host’s image (binaries, VIBs), configuration, state, and
log files are stored on the boot device.
With Auto Deploy, a boot device no longer holds the host’s information. Instead, the information is
stored off the host and managed by vCenter Server:
• Image state – Executable software to run on the ESXi host. The information is part of the image
profile, which can be created and customized with the Image Builder snap-in in vSphere
PowerCLI.
• Configuration state – Configurable settings that determine how the host is configured.
Examples include virtual switch settings, boot parameters, and driver settings. Host profiles are
created using the host profile user interface in the vSphere Client.
• Running state – Settings that apply while the ESXi host is up and running. This state also
includes the location of the virtual machine in the inventory and the virtual machine autostart
information. This state information is managed by the vCenter Server instance.
• Event recording – Information found in log files and core dumps. This information can be managed by vCenter Server, using add-in components like the VMware vSphere® ESXi™ Dump Collector and the Syslog Collector.



Auto Deploy Architecture
Slide 17-57

(Slide diagram: the host profiles UI and engine supply host profiles and answer files; Image Builder PowerCLI supplies image profiles and VIBs, fetched from a public depot. The Auto Deploy rules engine and Auto Deploy PowerCLI feed the Auto Deploy server, which serves images over HTTP and host profiles to ESXi hosts.)


The Auto Deploy infrastructure consists of several components:
• Auto Deploy server – Serves images and host profiles to ESXi hosts. The Auto Deploy server is
at the heart of the Auto Deploy infrastructure.
• Auto Deploy rules engine – Tells the Auto Deploy server which image and which host profiles
to serve to which host. You use Auto Deploy PowerCLI to define the rules that assign image
profiles and host profiles to hosts.
• Image profiles – Define the set of VIBs with which to boot ESXi hosts. VMware and its
partners make image profiles and VIBs available in public software depots. Use Image Builder
PowerCLI to examine the depot and the Auto Deploy rules engine to specify which image
profile to assign to which host. You can create a custom image profile based on the public
image profiles and VIBs in the depot and apply that image profile to the host.
• Host profiles – Templates that define an ESXi host’s configuration, such as networking or
storage setup. You can save the host profile for an individual host and reuse it to reprovision
that host. You can save the host profile of a template host and use that profile for other hosts.
• Answer files – Store information that the user provides during the boot process. Only one
answer file exists for each host.



Rules Engine Basics
Slide 17-58

Auto Deploy has a rules engine that determines which ESXi image and host profiles can be used on selected hosts.
The rules engine maps software images and host profiles to hosts, based on the attributes of the host:
• For example, rules can be based on IP or MAC address.
• The -AllHosts option can be used for every host.
For new hosts, the Auto Deploy server checks with the rules engine before serving image and host profiles to a host.
vSphere PowerCLI cmdlets are used to set, evaluate, and update image profile and host profile rules.
The rules engine includes rules and rule sets.

The rules engine includes the following:


• Rules – Rules can assign image profiles and host profiles to a set of hosts or specify the
inventory location (folder or cluster) of a host on the target vCenter Server system. A rule can
identify target hosts by boot MAC address, SMBIOS asset tag, BIOS UUID, or fixed DHCP IP
address. In most cases, rules apply to multiple hosts. You use vSphere PowerCLI to create rules.
After you create a rule, you must add it to a rule set. After you add a rule to a rule set, you
cannot edit it.
• Active rule set – When a newly started host contacts the Auto Deploy server with a request for
an image, the Auto Deploy server checks the active rule set for matching rules. The image
profile, host profile, and vCenter Server inventory location that are mapped by matching rules
are then used to boot the host. If more than one item is mapped by the rules, the Auto Deploy
server uses the item that is first in the rule set.
• Working rule set – The working rule set enables you to test changes to rules before making
them active. For example, you can use vSphere PowerCLI commands for testing compliance
with the working rule set.
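A minimal sketch of defining and activating a rule in vSphere PowerCLI; the item names and the IP range are placeholders, and the image profile and host profile must already exist.

    # Map an image profile, a host profile, and a cluster to a range of hosts.
    New-DeployRule -Name "ProdRule" -Item "ESXi51-Custom", "ProdHostProfile", "ClusterA" `
        -Pattern "ipv4=192.168.10.50-192.168.10.99"
    # Move the rule from the working rule set into the active rule set.
    Add-DeployRule -DeployRule "ProdRule"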



Software Configuration
Slide 17-59

Install Auto Deploy and register it with a vCenter Server instance:
• Can be installed on the vCenter Server system
• Included with the vCenter Server Appliance
The installation binary is included with the vCenter Server installation media.
Install vSphere PowerCLI on a server that can reach the vCenter Server system and the Auto Deploy server.
The user can set up an online or offline software depot:
• An online depot is a URL where the image is stored.
• An offline depot is a local zipped file that contains the image.
• Both are configured and maintained using Image Builder.

When installing Auto Deploy, the software can be on its own server or it can be installed on the
same host as the vCenter Server instance. Setting up Auto Deploy includes installing the software
and registering Auto Deploy with a vCenter Server system.
The VMware® vCenter™ Server Appliance™ has the Auto Deploy software installed by default.
vSphere PowerCLI must be installed on a system that can be reached by the vCenter Server system
and the Auto Deploy server.
Installing vSphere PowerCLI includes installing Microsoft PowerShell. For Windows 7 and
Windows Server 2008, Windows PowerShell is installed by default. For Windows XP or Windows
2003, Windows PowerShell must be installed before installing vSphere PowerCLI.
The image profile can come from a public depot or it can be a zipped file stored locally. The local
image profile can be created and customized using vSphere PowerCLI. However, the base ESXi
image must be part of the image profile.
If you are using a host profile, save a copy of the host profile to a location that can be reached by the
Auto Deploy server.
For more about preparing your system and installing the Auto Deploy server, see vSphere Installation
and Setup Guide at http://www.vmware.com/support/pubs/vsphere-esxi-vcenter-server-pubs.html.



PXE Boot Infrastructure Setup
Slide 17-60

Auto Deploy requires a PXE boot infrastructure.
• The ESXi host must get its IP address from a DHCP server.
• The DHCP server must be configured to direct the ESXi host to a Trivial File Transfer Protocol (TFTP) server to download the PXE file.
• The following DHCP services can be used:
  • The organization’s current DHCP server
  • A new DHCP server installed for Auto Deploy use
  • The DHCP service included with the vCenter Server Appliance
• A TFTP server must be accessible from the DHCP server and the vCenter Server instance.
  • TFTP services are included with vCenter Server Appliance.

Autodeployed hosts perform a preboot execution environment (PXE) boot. PXE uses DHCP and
Trivial File Transfer Protocol (TFTP) to boot an operating system over a network.
A DHCP server and a TFTP server must be configured. The DHCP server assigns IP addresses to
each autodeployed host on startup and points the host to a TFTP server to download the gPXE
configuration files. The ESXi hosts can use the infrastructure’s existing DHCP and TFTP servers, or
new DHCP and TFTP servers can be created for use with Auto Deploy. Any DHCP server that
supports the next-server and filename options can be used.
The vCenter Server Appliance can be used as the Auto Deploy server, DHCP server, and TFTP
server. The Auto Deploy service, DHCP service, and TFTP service are included in the appliance.
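On a Windows DHCP server, the two relevant settings are option 66 (boot server host name) and option 67 (bootfile name). A minimal sketch follows, with placeholder server name, scope, and addresses; confirm the exact gPXE bootfile name against your Auto Deploy server’s documentation before relying on it.

    # Point PXE clients at the TFTP server and the gPXE bootfile.
    netsh dhcp server \\dhcp01 scope 192.168.10.0 set optionvalue 066 STRING "192.168.10.5"
    netsh dhcp server \\dhcp01 scope 192.168.10.0 set optionvalue 067 STRING "undionly.kpxe.vmw-hardwired"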

918 VMware vSphere: Fast Track


Initial Boot of an Autodeployed ESXi Host: Step 1
Slide 17-61

The ESXi host boots from the PXE boot server.
(Slide diagram: the ESXi host sends a DHCP request, then downloads the gPXE image from the TFTP server; the Auto Deploy server, holding the image profile, host profile, and rules engine alongside vCenter Server, waits for the host.)


When an autodeployed host boots up for the first time, a certain sequence of events occurs:
1. When the ESXi host is powered on, the ESXi host starts a PXE boot sequence.

2. The DHCP server assigns an IP address to the ESXi host and instructs the host to contact the
TFTP server.
3. The ESXi host downloads the gPXE image file and the gPXE configuration file from the
TFTP server.



Initial Boot of an Autodeployed ESXi Host: Step 2
Slide 17-62

The ESXi host contacts the Auto Deploy server.
(Slide diagram: the same components as in step 1; the host’s boot request reaches the Auto Deploy server “waiter”.)

The gPXE configuration file instructs the host to make an HTTP boot request to the Auto Deploy
server.



Initial Boot of an Autodeployed ESXi Host: Step 3
Slide 17-63

The host’s image profile, host profile, and cluster are determined.
(Slide diagram: the rules engine maps the host to image profile X, host profile 1, and cluster B.)


The Auto Deploy server queries the rules engine for the following information about the host:
• The image profile to use
• The host profile to use
• Which cluster the host belongs to (if any)
The rules engine maps software and configuration settings to hosts, based on the attributes of the
host. For example, you can deploy image profiles or host profiles to two clusters of hosts by writing
two rules, each matching on the network address of one cluster.
For hosts that have not yet been added to vCenter Server, Auto Deploy checks with the rules engine before serving image profiles and host profiles to hosts.

Module 17 Installing VMware vSphere 5.1 Components 921


Initial Boot of an Autodeployed ESXi Host: Step 4
Slide 17-64

The image is pushed to the host, and the host profile is applied.
(Slide diagram: the Auto Deploy server streams the image profile, host profile, and cluster information to the ESXi host.)

The Auto Deploy server streams the VIBs specified in the image profile to the host. Optionally, a
host profile can also be streamed to the host.
The host boots based on the image profile and the host profile received from Auto Deploy. Auto Deploy also assigns the host to the vCenter Server instance with which it is registered.



Initial Boot of an Autodeployed ESXi Host: Step 5
Slide 17-65

The host is placed in the appropriate cluster, if specified by a rule. The ESXi image and information on the image profile, host profile, and cluster to use are held on the Auto Deploy server.
(Slide diagram: the Auto Deploy server now caches the ESXi image with the image profile, host profile, and cluster information; the host joins cluster B.)


If a rule specifies a target folder or a cluster on the vCenter Server instance, the host is added to that
location. If no rule exists, Auto Deploy adds the host to the first datacenter.
If a host profile was used without an answer file and the profile requires specific information from
the user, the host is placed in maintenance mode. The host remains in maintenance mode until the
user reapplies the host profile and answers any outstanding questions.
If the host is part of a fully automated DRS cluster, the cluster rebalances itself based on the new
host, moving virtual machines onto the new host.
To make subsequent boots quicker, the Auto Deploy server stores the ESXi image as well as the
following information:
• The image profile to use
• The host profile to use
• The location of the host in the vCenter Server inventory



Subsequent Boot of an Autodeployed ESXi Host: Step 1
Slide 17-66

The autodeployed host is rebooted, and the PXE boot sequence starts.
(Slide diagram: as in the initial boot, the host sends a DHCP request and downloads the gPXE image from the TFTP server; the Auto Deploy server retains the cached ESXi image and host information.)

When an autodeployed ESXi host is rebooted, a slightly different sequence of events takes place:
1. The ESXi host goes through the PXE boot sequence, as it does in the initial boot sequence.

2. The DHCP server assigns an IP address to the ESXi host and instructs the host to contact the
TFTP server.
3. The host downloads the gPXE image file and the gPXE configuration file from the
TFTP server.



Subsequent Boot of an Autodeployed ESXi Host: Step 2
Slide 17-67

The Auto Deploy server is contacted by the ESXi host.
(Slide diagram: the host’s HTTP boot request reaches the Auto Deploy server “waiter”.)


As in the initial boot sequence, the gPXE configuration file instructs the host to make an HTTP boot
request to the Auto Deploy server.



Subsequent Boot of an Autodeployed ESXi Host: Step 3
Slide 17-68

The ESXi image is downloaded from the Auto Deploy server to the host. The host profile is downloaded from vCenter Server to the host.
(Slide diagram: the cached ESXi image streams from the Auto Deploy server, and the host profile comes from vCenter Server.)

In this step, the subsequent boot sequence differs from the initial boot sequence.
When an ESXi host is booted for the first time, Auto Deploy queries the rules engine for
information about the host. The information about the host’s image profile, host profile, and cluster
is stored on the Auto Deploy server.
On subsequent reboots, the Auto Deploy server uses the saved information instead of using the rules
engine to determine this information. Using the saved information saves time during subsequent
boots because the host does not have to be checked against the rules in the active rule set. Auto
Deploy checks the host against the active rule set only once, during the initial boot.

926 VMware vSphere: Fast Track


Subsequent Boot of an Autodeployed ESXi Host: Step 4
Slide 17-69

The host is placed into the appropriate cluster.
(Slide diagram: the ESXi host joins its assigned cluster.)


Finally, the ESXi host is placed in its assigned cluster on the vCenter Server instance.



Auto Deploy Stateless Caching
Slide 17-70

Auto Deploy stateless caching saves the image and configuration to a local disk, but the host continues to perform stateless reboots.
Requirements include a dedicated boot device.
If the host is unable to reach the PXE host or the Auto Deploy server, the host boots using the local image:
• The image on the local disk can be used as a backup.
• Stateless caching can be configured to overwrite or preserve existing VMFS.

Auto Deploy stateless caching PXE boots the ESXi host and loads the image into memory, as for stateless ESXi hosts. However, when the host profile is applied to the ESXi host, the image running in memory is copied to a boot device. The saved image acts as a backup in the event that the PXE infrastructure or the Auto Deploy server is unavailable. If the host needs to reboot and cannot contact the DHCP, TFTP, or Auto Deploy server, the network boot times out and the host reboots using the cached disk image.
Although stateless caching can help ensure the availability of an autodeployed ESXi host by enabling the host to boot during an outage affecting the DHCP, TFTP, or Auto Deploy servers, stateless caching does not guarantee that the image is current or that the vCenter Server system is available after boot. The primary benefit of stateless caching is that it enables you to boot the host so that you can troubleshoot and resolve the problems that prevent a successful PXE boot.
Also, unlike stateless ESXi hosts, stateless caching requires that a dedicated boot device be assigned to the host.



Stateless Caching Host Profile Configuration
Slide 17-71

Configuring an Auto Deploy enabled server for stateless caching includes doing the following:
• Creating a host profile with stateless caching configured.
• Booting an ESXi host using Auto Deploy:
  • The host profile is applied.
  • The ESXi image is cached to disk.
The host runs stateless under normal operations.
(Slide screenshot: the System Image Cache Profile Settings page, with first disk arguments esx,local.)

Stateless caching is configured using host profiles. When configuring stateless caching, you can choose to save the image to a local boot device or a USB disk. You also have the option of leaving the local VMFS intact or overwriting it.
To configure stateless caching:

1. From the vCenter Server home window, click Host Profiles.

2. Select the host profile that you want to configure, and click Edit Profile on the toolbar.

3. Expand the System Image Cache Configuration tree and highlight System Image Cache Profile Settings.

4. From the pull-down menu on the Configuration Details tab, select Enable stateless caching on the host.

5. Enter first disk arguments if needed, indicate whether to overwrite the local VMFS, and click OK.



Auto Deploy Stateless Caching
Slide 17-72

The host copies the state locally. Reboots are stateless only if the PXE and Auto Deploy servers are reached.
(Slide diagram: the Auto Deploy server streams the image profile and host profile; the host caches the image and configuration on its local disk.)

All subsequent reboots boot from the Auto Deploy server unless the DHCP, TFTP, or Auto Deploy server is unavailable. If any of these infrastructure servers is unavailable, the host boots from the locally saved image.
No attempt is made to update the cached image. If the host regains the ability to boot stateless, the local image does not change, even after the stateless image changes. The local image might become stale as a result.



Auto Deploy Stateful Installation
Slide 17-73

The ESXi host initially boots using Auto Deploy. All subsequent reboots use local disks.
The benefits of Auto Deploy stateful installations include the following:
• Hosts are provisioned quickly and efficiently.
• Once provisioned, hosts have no further dependency on the PXE and Auto Deploy servers.
Some of the disadvantages of using stateful installations include the following:
• Over time, the configuration might become out of sync with the Auto Deploy image.
• Patching and updating ESXi hosts must be done using traditional methods.


Setting up stateful installation is similar to configuring stateless caching. The difference is that instead of using Auto Deploy on every boot, the host performs a one-time PXE boot to install ESXi. After the image is cached to disk, the host boots from the disk image on all subsequent reboots.



Stateful Installation Host Profile Configuration
Slide 17-74

Stateful installation host profile configuration includes the following:
• Create the host profile with stateful install configured.
• Boot the ESXi host using Auto Deploy:
  • The ESXi image is saved to disk.
  • The host profile is applied.
• Future boots are directed to the local disk:
  • No future attempts are made to contact the PXE or Auto Deploy servers.

Stateful installation is configured using host profiles. When configuring stateful installation, you can choose to save the image to a local boot device or a USB disk. You also have the option of leaving the local VMFS intact or overwriting it.
To configure stateful installation:

1. From the vCenter Server home window, click Host Profiles.

2. Select the host profile that you want to configure, and click Edit Profile on the toolbar.

3. Expand the System Image Cache Configuration tree and highlight System Image Cache Profile Settings.

4. From the pull-down menu on the Configuration Details tab, select Enable stateful installs on the host.

5. Enter first disk arguments if needed, indicate whether to overwrite the local VMFS, and click OK.



Auto Deploy Stateful Installation
Slide 17-75

Initial boot up uses Auto Deploy to install the image on the server. Subsequent reboots are performed from local storage.
(Slide diagram: the Auto Deploy server streams the image profile and host profile once; the host caches the image and configuration to its local disk and boots from it thereafter.)


With stateful installation, the Auto Deploy server is used to provision new ESXi hosts. The first
time the host boots, it uses the PXE host, DHCP server, and Auto Deploy server just like a stateless
host. All subsequent reboots use the image saved to the local disk.
After the image is written to the host’s local storage, the image is used for all future reboots. The initial boot of a system is the equivalent of an ESXi host installation to a local disk. No attempt is made to update the image, which means that over time the image can become stale. Manual processes must be implemented to manage configuration updates and patches, which forfeits most of the key benefits of Auto Deploy.



Managing the Auto Deploy Environment
Slide 17-76

If you change a rule set:
• Unprovisioned hosts boot automatically according to the new rules.
• For all other hosts, Auto Deploy applies new rules only when you test a host’s rule compliance and perform remediation.
If vCenter Server becomes unavailable:
• Host deployment continues to work because the Auto Deploy server retains the state information:
  • Hosts must be part of a VMware vSphere® High Availability cluster.
• The host contacts the Auto Deploy server to determine which image and host profile to use.
If Auto Deploy becomes unavailable:
• You will be unable to boot or reboot a host.
The vCenter Server and Auto Deploy servers do not need to be available for hosts that use stateless caching or stateful installation.

You can change a rule set, for example, to require a host to boot from a different image profile. You
can also require a host to use a different host profile. Unprovisioned hosts that you boot are
automatically provisioned according to these modified rules. For all other hosts, Auto Deploy
applies the new rules only when you test their rule compliance and perform remediation.
If the vCenter Server instance is unavailable, the stateless host contacts the Auto Deploy server to
determine which image and host profile to use. If a host is in a VMware vSphere® High Availability
cluster, Auto Deploy retains the state information so deployment works for the stateless host even if
the vCenter Server instance is not available. If the host is not in a vSphere HA cluster, the vCenter
Server system must be available to supply information to the Auto Deploy server.
If the Auto Deploy server becomes unavailable, stateless hosts that are already autodeployed remain
up and running. However, you will be unable to boot or reboot hosts. VMware recommends
installing Auto Deploy in a virtual machine and placing the virtual machine in a vSphere HA cluster
to keep it available.
For details about the procedure and the commands used for testing and repairing rule compliance,
see vSphere Installation and Setup Guide at http://www.vmware.com/support/pubs/vsphere-esxi-vcenter-server-pubs.html.
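A minimal sketch of the compliance test and remediation in vSphere PowerCLI; esxi01.corp.local is a placeholder for a host already known to vCenter Server.

    # Test whether a host still matches the active rule set.
    $result = Test-DeployRuleSetCompliance -VMHost (Get-VMHost esxi01.corp.local)
    $result.ItemList
    # Remediate the host so that the new rules apply on its next boot.
    Repair-DeployRuleSetCompliance $result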



Using Auto Deploy with Update Manager to Upgrade Hosts
Slide 17-77

vSphere Update Manager supports ESXi 5.0 and 5.1 hosts that use Auto Deploy to boot.
Update Manager can patch hosts but cannot update the ESXi image used to boot the host.
Update Manager can remediate only patches that do not require a reboot (live-install).
• Patches that require a reboot cannot be installed.
The workflow for patching includes the following steps:
1. Manually update the image that Auto Deploy uses with the patches. If a reboot is possible, rebooting is all that is required to update the host.
2. If a reboot cannot be performed, create a baseline in Update Manager and remediate the host.

Update Manager now supports PXE installations. Update Manager updates the hosts but does not
update the image on the PXE server.
Only live-install patches can be remediated with Update Manager. Any patch that requires a reboot
cannot be installed on a PXE host. The live-install patches can be from VMware or a third party.



Lab 31
Slide 17-78

In this lab, you will configure Auto Deploy to boot ESXi hosts.
1. Install Auto Deploy.
2. Configure the DHCP server and TFTP server for Auto Deploy.
3. Use vSphere PowerCLI to configure Auto Deploy.
4. (For vClass users only) Configure the ESXi host to boot from the
network.
5. (For non-vClass users) Configure the ESXi host to boot from the
network.
6. View the autodeployed host in the vCenter Server inventory.
7. (Optional) Apply a host profile to the autodeployed host.
8. (Optional) Define a rule to apply the host profile to an autodeployed
host when it boots.



Review of Learner Objectives
Slide 17-79

You should be able to do the following:


• Understand the purpose of Auto Deploy.
• Configure Auto Deploy.
• Use Auto Deploy to deploy stateless ESXi.



Key Points
Slide 17-80

• vCenter Linked Mode enables a single vSphere Client to view and manage the inventories of multiple vCenter Server systems.
• ESXi installation requires little configuration during installation.
• vCenter Server installed on Windows operating systems can run on physical machines or virtual machines.
• Image Builder enables the administrator to create customized ESXi boot images.
• Auto Deploy is a new method for deploying ESXi hosts, where the ESXi host’s state and configuration run entirely in memory.
Questions?
