
VMware vSphere®

Knowledge Transfer Kit


Overview

© 2016 VMware Inc. All rights reserved.


Agenda
• Overview
• Install ESXi
• Install and deploy vCenter Server
• Product Updates
– VMware ESXi™ Host
– VMware vCenter Server™
– VMware vCenter Appliance dramatically enhanced
– VMware vSphere Web Client / HTML5 Client
– VMware vSphere vMotion®
– Availability Features
• VMware vSphere High Availability
– Virtual SAN
– Storage

2
Overview
Virtualization Overview
• Virtualization
– Abstracts traditional physical machine resources and runs workloads as virtual machines
– Each virtual machine runs a guest operating system and applications
– The operating system and applications don’t know that they are virtualized

4
Virtualization Overview (cont.)
• Hypervisors
– Partition computing resources of a server
for multiple virtual machines
– Hypervisors alone lack coordination for
higher availability and efficiency
– The VMware vSphere Hypervisor is ESXi

• VMware vSphere
– A suite of software that extends beyond basic
host partitioning by aggregating infrastructure
resources and providing additional services
such as dynamic resource scheduling
– Serves as the foundation of the software-
defined data center (SDDC)

5
Cloud Computing and the SDDC
• IT as a Service (ITaaS)
– Abstracts complexity in the enterprise data center
– Achieves economies of scale
– Renews focus on application services
• Availability
• Security
• Scalability

Figure: a cloud OS providing management for the enterprise cloud.

6
vSphere – Use Case Examples
Foundation for Business Solutions – “Adopt a More Agile Infrastructure”
• Security and Compliance: deploy cost-effective and adaptive security services built on vSphere
• Business Continuity: slash downtime with VMware vSphere High Availability, VMware vSphere Fault
Tolerance, VMware vSphere Distributed Resource Scheduler™, VMware vSphere vMotion, and
VMware vCenter™ Site Recovery Manager™
• Server Consolidation: cut capital and operating costs while increasing IT service delivery
• Business Critical Applications: increased agility and outstanding reliability

Foundation for Virtual Desktop – “Enable Data to Follow the User”


• vSphere is the Supporting Infrastructure for any VMware Horizon™ View™ Deployment
• Access a Virtual Desktop from Anywhere and with Any Device

Cloud Computing – “The Foundation for the VMware vRealize® Suite”


• vSphere Enables the Cloud and Choice (Private, Public, or Hybrid)
• Thousands of vCloud Providers Available Today
• Support for Existing and Future Applications

7
Foundation for the Software-Defined Enterprise
Figure: layers of the software-defined enterprise:
• End-User Computing – desktop and mobile, delivered through a virtual workspace
• Applications – traditional, modern, and SaaS
• Software-Defined Data Center – policy-based management and automation (cloud automation, cloud operations, cloud business)
• Virtualized Infrastructure – abstract and pool compute (server virtualization), network (virtual networking), and storage (software-defined storage)
• Hybrid Cloud – private and public clouds from VMware and vCloud Data Center partners
• Physical Hardware – compute, network, and storage

8
Software-Defined Data Center – IT Outcomes
Figure: IT outcomes of the SDDC – a new model of IT:
• IT service delivery time in minutes – secure, faster delivery of mobile apps
• OpEx reduction – streamlined and automated data center operations; improved infrastructure delivery-to-effort ratio through automation
• App and business mobility
• Improved security – security controls native to the infrastructure
• CapEx reduction – data center virtualization and hybrid cloud extensibility
• Improved uptime – high availability and resilient infrastructure

New Model of IT

9
vSphere Installation and Setup Process

vSphere is a sophisticated product with multiple components to install and set up. To ensure a successful vSphere deployment, understand the sequence of tasks required.

Installing the vSphere platform includes the following tasks:

Install ESXi host
vSphere System Requirements
• ESXi Hardware Requirements
• vCenter Server Appliance Requirements
• Install ESXi Interactively
• Customize ESXi
ESXi Hardware Requirements
• A supported server platform
• At least two CPU cores
• A minimum of 4 GB of physical RAM
• To support 64-bit virtual machines, hardware virtualization (Intel VT-x or AMD RVI) must be enabled on x64 CPUs
• One or more Gigabit or faster Ethernet controllers
• A boot disk of at least 8 GB for USB or SD devices, and 32 GB for other device types such as HDD, SSD, or NVMe
• A SCSI disk or a local, non-network RAID LUN with unpartitioned space for the virtual machines
• For Serial ATA (SATA), a disk connected through supported SAS controllers or supported onboard SATA controllers

https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.esxi.upgrade.doc/GUID-DEB8086A-306B-4239-BF76-E354679202FC.html
vCenter Server Appliance Requirement
Form factor                                                          Number of vCPUs   Memory
Tiny environment (up to 10 hosts or 100 virtual machines)            2                 12 GB
Small environment (up to 100 hosts or 1,000 virtual machines)        4                 19 GB
Medium environment (up to 400 hosts or 4,000 virtual machines)       8                 28 GB
Large environment (up to 1,000 hosts or 10,000 virtual machines)     16                37 GB
X-Large environment (up to 2,000 hosts or 35,000 virtual machines)   24                56 GB
vCenter Server Appliance Requirement (cont)
Form factor                                                          Default Storage Size   Large Storage Size   X-Large Storage Size
Tiny environment (up to 10 hosts or 100 virtual machines)            415 GB                 1490 GB              3245 GB
Small environment (up to 100 hosts or 1,000 virtual machines)        480 GB                 1535 GB              3295 GB
Medium environment (up to 400 hosts or 4,000 virtual machines)       700 GB                 1700 GB              3460 GB
Large environment (up to 1,000 hosts or 10,000 virtual machines)     1065 GB                1765 GB              3525 GB
X-Large environment (up to 2,000 hosts or 35,000 virtual machines)   1805 GB                1905 GB              3665 GB
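A quick worked example of the two sizing tables: the helper below (a hypothetical sketch, not part of the kit) picks the smallest form factor that covers a given host and VM count and returns its vCPU, memory, and default storage figures.

```python
# Values transcribed from the two sizing tables above; default storage size shown.
SIZES = [
    # (form factor, max hosts, max VMs, vCPUs, memory GB, default storage GB)
    ("Tiny",     10,    100,  2, 12,  415),
    ("Small",   100,   1000,  4, 19,  480),
    ("Medium",  400,   4000,  8, 28,  700),
    ("Large",  1000,  10000, 16, 37, 1065),
    ("X-Large", 2000, 35000, 24, 56, 1805),
]

def pick_form_factor(hosts: int, vms: int):
    """Return the smallest form factor that covers the given inventory."""
    for name, max_hosts, max_vms, vcpus, mem_gb, storage_gb in SIZES:
        if hosts <= max_hosts and vms <= max_vms:
            return name, vcpus, mem_gb, storage_gb
    raise ValueError("environment exceeds the largest supported form factor")

print(pick_form_factor(250, 2500))   # ('Medium', 8, 28, 700)
```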
vCenter Server Appliance Requirement (cont)
Use of the vSphere Client requires a supported web browser.

VMware has tested and supports the following guest operating systems and browser
versions for the vSphere Client.

Supported Guest Operating Systems


• Windows 32-bit and 64-bit
• Mac OS

Supported Browser Versions


• Google Chrome 75 or later
• Mozilla Firefox 60 or later
• Microsoft Edge 44 or later
Note: Later versions of these browsers are likely to work, but have not been
tested.
Install ESXi Interactively
1. Insert the ESXi installer CD/DVD into the CD/DVD-ROM drive, or attach the Installer
USB flash drive and restart the machine.
2. Set the BIOS to boot from the CD-ROM device or the USB flash drive. See your
hardware vendor documentation for information on changing boot order.
3. On the Select a Disk page, select the drive on which to install ESXi, and press Enter.
4. Select the keyboard type for the host. You can change the keyboard type after
installation in the direct console.
5. Enter the root password for the host. You can change the password after installation in
the direct console.
6. Press Enter to start the installation.
7. When the installation is complete, remove the installation CD, DVD, or USB flash drive.
8. Press Enter to reboot the host.
Customize ESXi

• Press F2 to customize the system
• Log in with the root username and password
• Configure Management Network
  – Network Adapters
  – VLAN
  – IPv4/IPv6 configuration
  – DNS
  – Custom DNS Suffixes
• Restart and test the management network
• Reset System Configuration
Customize ESXi (cont)

• Test management network


• Restart management network
• Network Restore Options
• Configure Keyboard
• Troubleshooting Options
• View System Logs
• View Support Information
• Reset System Configuration
Connect to ESXi with browser

• Allows you to manage the ESXi host with the vSphere Web Client
• Provides the simplest way to operate its virtual machines
ESXi settings

From the web interface, go to Host -> Manage -> System.
ESXi datastore
Datastores in VMware vSphere are storage containers for files. They could be located on a local server hard drive or across the network on a SAN. Datastores hide the specifics of each storage device and provide a uniform model for storing virtual machine files.

Datastores are used to hold virtual machine files, templates, and ISO images.

Types of Datastores
- VMFS (version 5 and 6)
- NFS (version 3 and 4.1)
- vSAN
- vVol

https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.storage.doc/GUID-D5AB2BAD-C69A-4B8D-B468-25D86B8D39CE.html
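As a hedged illustration of working with datastores programmatically, the sketch below lists the datastores visible to a host using pyVmomi; the host name and credentials are placeholders, not values from this kit.

```python
# Minimal pyVmomi sketch: list the datastores an ESXi host (or vCenter) sees.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # lab only; validate certificates in production
si = SmartConnect(host="esxi01.example.com", user="root", pwd="***", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)
    for ds in view.view:
        info = ds.summary
        print(f"{info.name:20} type={info.type:5} "
              f"capacity={info.capacity // 2**30} GB free={info.freeSpace // 2**30} GB")
    view.Destroy()
finally:
    Disconnect(si)
```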
Create a virtual machine in ESXi

• Click the Create/Register VM button at the


top of the page
• Select the Create a new virtual
machine option from the pop-up menu
• Select a name and Guest OS
• Select the datastore on which you wish
to store your VM
• Customize settings (CPU, memory,
disk, network interfaces, so)
• Install an OS in VMware ESXI

Ref: https://support.us.ovhcloud.com/hc/en-us/articles/360003263859-How-to-Create-a-VM-in-VMware-ESXi-6-5
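A hedged pyVmomi sketch of the same wizard flow: it creates a minimal VM shell directly on a standalone ESXi host. The datastore name, guest OS identifier, and credentials are assumptions to adapt to your environment.

```python
# Sketch: create an empty VM on a standalone ESXi host with pyVmomi.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # lab only
si = SmartConnect(host="esxi01.example.com", user="root", pwd="***", sslContext=ctx)
try:
    content = si.RetrieveContent()
    dc = content.rootFolder.childEntity[0]              # standalone host: single datacenter
    pool = dc.hostFolder.childEntity[0].resourcePool    # the host's root resource pool

    spec = vim.vm.ConfigSpec(
        name="demo-vm",                                  # name chosen in the wizard
        guestId="otherGuest64",                          # assumed guest OS identifier
        numCPUs=1,
        memoryMB=1024,
        files=vim.vm.FileInfo(vmPathName="[datastore1]"))  # datastore selected in the wizard

    WaitForTask(dc.vmFolder.CreateVM_Task(config=spec, pool=pool))
finally:
    Disconnect(si)
```

The resulting VM has no disks or NICs; further device specs (or the Host Client wizard) would be used to customize CPU, memory, disk, and network before installing a guest OS.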
Install and deploy vCenter Server
vCenter Server
The vCenter Server Appliance is a preconfigured virtual machine that is optimized for running vCenter Server and the associated services.

vCenter Server provides a centralized platform for management, operation, resource provisioning, and performance evaluation of virtual machines and hosts.

The vCenter Server Appliance package contains the following software:
• Photon OS 3.0
• The vSphere authentication services
• PostgreSQL
• VMware vSphere Lifecycle Manager Extension
• VMware vCenter Lifecycle Manager

Prerequisites
To deploy the vCenter Server 7.0 appliance, you need:
• An ESXi host 6.5 or later, or a vCenter Server instance 6.5 or later
• A fully qualified domain name for vCenter that is reachable from the client machine from which you are deploying the appliance
• The client machine, the ESXi host, and the vCenter appliance must use the same DNS server
vCenter Server requirements

Hardware and storage requirements
You select the vCenter Server Appliance size based on the size of your vSphere environment. The option that you select determines the number of CPUs and the amount of memory for the appliance. The storage requirements are different for each vSphere environment size and depend on your database size requirements.

Network requirements
An unused static IP with an FQDN, and the designated ports must be open for communication (refer to the ports information).

Software requirements
vCenter Server 7.0 software, which can be downloaded from the VMware site.
vCenter Server deployment
There are two types of deployment available; in our scenario we are using GUI mode, and this deployment process consists of two stages.

Stage 1 – OVA Deployment
In the first stage, you choose the deployment type and appliance settings in the deployment wizard.

Stage 2 – Appliance Setup
The second stage provides a setup wizard to configure the appliance time synchronization and vCenter Single Sign-On. This stage completes the initial setup and starts the services of the newly deployed appliance.
Prerequisites for Deploying the vCenter Server Appliance
General Prerequisites
• Download and mount the vCenter Server Installer.

Target System Prerequisites


• Verify that your system meets the minimum software and hardware
requirements.
• If you want to deploy the appliance on an ESXi host, verify that the ESXi host is
not in lockdown or maintenance mode and not part of a fully automated DRS
cluster.
• If you want to deploy the appliance on a DRS cluster of the inventory of a
vCenter Server instance, verify that the cluster contains at least one ESXi host
that is not in lockdown or maintenance mode.
• If you plan to use NTP servers for time synchronization, verify that the NTP
servers are running and that the time between the NTP servers and the target
server on which you want to deploy the appliance is synchronized.
Prerequisites for Deploying the vCenter Server Appliance (cont)
vCenter Enhanced Linked Mode Prerequisites
• When deploying a new vCenter Server as part of an Enhanced Linked
Mode deployment, create an image-based backup of the existing
vCenter Server nodes in your environment. You can use the backup as
a precaution in case there is a failure during the deployment process.

Network Prerequisites
• If you plan to assign a static IP address and an FQDN as a system
name in the network settings of the appliance, verify that you have
configured the forward and reverse DNS records for the IP address.

• The deployment process typically takes 20 to 30 minutes


Set Up vCenter Server
• vCenter Server configuration
• Time synchronization mode
• SSH access
• SSO configuration
• Configure CEIP
• Ready to complete

This stage can take between 10 and 20 minutes on average.

Rules for the domain name:

The domain name must conform to the RFC 1035 standard:
- It must have at least two names, separated by a dot (.)
- Each name can include letters, numbers, and/or a dash (-)
- The first and last character of each name must be a letter or a number
- Each name must not exceed 63 characters, and the entire domain name must not exceed 253 characters
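For illustration, the small check below applies the label rules listed above (at least two dot-separated names, letters/digits/dashes, no leading or trailing dash, 63-character labels, 253 characters overall); it is a quick sanity test, not a full DNS validator.

```python
# Quick check of the domain name rules listed above.
import re

LABEL = re.compile(r"^[A-Za-z0-9]([A-Za-z0-9-]{0,61}[A-Za-z0-9])?$")

def is_valid_domain(name: str) -> bool:
    if len(name) > 253:                 # entire domain name must not exceed 253 characters
        return False
    labels = name.split(".")
    if len(labels) < 2:                 # at least two names separated by a dot
        return False
    return all(LABEL.fullmatch(label) for label in labels)

print(is_valid_domain("vcenter.example.com"))   # True
print(is_valid_domain("-bad-.example.com"))     # False (label starts/ends with a dash)
```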
Adding ESXi to vCenter Server
• New Datacenter
• (Optional) New Cluster
• Add Host
  – Name and location
    • Host name or IP address
    • Location
  – Connection settings
    • Username
    • Password
  – Host summary
  – Assign license
  – Lockdown mode
  – VM location
  – Ready to complete
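A hedged pyVmomi sketch of the same workflow driven from a script: create a datacenter and a cluster, then add an ESXi host. The vCenter and ESXi names and credentials are placeholders, and force=True (which skips SSL thumbprint verification) is for lab use only.

```python
# Sketch: datacenter + cluster + add host, mirroring the wizard steps above.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local", pwd="***", sslContext=ctx)
try:
    content = si.RetrieveContent()
    dc = content.rootFolder.CreateDatacenter(name="Datacenter")           # New Datacenter
    cluster = dc.hostFolder.CreateClusterEx("Cluster01",
                                            vim.cluster.ConfigSpecEx())   # (Optional) New Cluster

    host_spec = vim.host.ConnectSpec(hostName="esxi01.example.com",       # Host name or IP
                                     userName="root", password="***",
                                     force=True)                          # lab only
    WaitForTask(cluster.AddHost_Task(spec=host_spec, asConnected=True))   # Add Host
finally:
    Disconnect(si)
```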
Auto start VM
1. Access the settings:
1.1 Give focus to the host in vCenter.
1.2 Click the Configuration tab.
1.3 Select Virtual Machine Startup / Shutdown under Software.
1.4 Click Properties in the upper right hand side of the window.

2. Select the options you want:


2.1 In order to be able to configure any options, enable Allow virtual machines to start and stop automatically with
the system.
2.2 Enter a value for the Default Startup Delay, in order to delay the startup activity for a period of time after the
boot process completes.
2.3 Enter a value for the Default Shutdown Delay, in order to delay the shutdown activity for a period of time
2.4 To start up the virtual machines in a particular order, configure the three Startup Order categories:
Automatic: This category allows you to choose the sequence, by moving machines into this category, then
arranging them in order.
Any order: In this category, the machines are started in whatever sequence the host prefers (more or less
randomized).
Manual: In this category, the default, the machines are not automatically restarted. You must power them on
manually.
https://kb.vmware.com/s/article/850
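A hedged pyVmomi equivalent of the steps above: enable autostart on the host and place one VM in the Automatic startup order. The VM name and delay values are placeholders.

```python
# Sketch: configure VM autostart on an ESXi host with pyVmomi.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="esxi01.example.com", user="root", pwd="***", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.HostSystem], True)
    host = view.view[0]
    view.Destroy()

    vm = next(v for v in host.vm if v.name == "demo-vm")   # placeholder VM name

    defaults = vim.host.AutoStartManager.SystemDefaults(
        enabled=True, startDelay=120, stopDelay=120)        # steps 2.1-2.3 above
    power_info = vim.host.AutoStartManager.AutoPowerInfo(
        key=vm, startOrder=1, startAction="powerOn",        # step 2.4: Automatic, order 1
        startDelay=-1, stopDelay=-1, stopAction="systemDefault",
        waitForHeartbeat="systemDefault")
    spec = vim.host.AutoStartManager.Config(defaults=defaults, powerInfo=[power_info])
    host.configManager.autoStartManager.ReconfigureAutostart(spec=spec)
finally:
    Disconnect(si)
```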
vCenter Server Management UI
Prerequisites
Verify that the vCenter Server Appliance is successfully deployed and running.
If you are using Internet Explorer, verify that TLS 1.0, TLS 1.1, and TLS 1.2 are enabled
in the security settings.

Procedure
In a Web browser, go to the vCenter Server Appliance Management Interface,
https://appliance-IP-address-or-FQDN:5480.
Log in as root.
The default root password is the password you set while deploying the vCenter Server
Appliance.
Backing up VCSA
Prerequisites
• You must have an FTP, FTPS, HTTP, HTTPS, or SCP server up and running with
sufficient disk space to store the backup.
• Dedicate a separate folder on your server for each file-based backup.

Procedure
• In a Web browser, go to the vCenter Server Appliance Management Interface, https://appliance-IP-address-or-FQDN:5480.
• Log in as root.
• In the vCenter Server Appliance Management Interface, click Summary.
• Click Backup. The Backup Appliance wizard opens.
• Enter the backup protocol and location details.

Option            Description
Backup protocol   Select the protocol to use to connect to your backup server. You can select FTP, FTPS, HTTP, HTTPS, or SCP. For FTP, FTPS, HTTP, or HTTPS the path is relative to the home directory configured for the service. For SCP, the path is absolute to the remote system's root directory.
Backup location   Enter the server address and backup folder in which to store the backup files.
Port              Enter the default or custom port of the backup server.
User name         Enter a user name of a user with write privileges on the backup server.
Password          Enter the password of the user with write privileges on the backup server.
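A hedged sketch of driving the same file-based backup from the appliance REST API (vSphere 6.7/7.0-style endpoints). The endpoint paths, FTP location, and credentials are assumptions to verify against your appliance's API documentation before use.

```python
# Sketch: start a VCSA file-based backup job through the REST API.
import requests

VCSA = "https://vcenter.example.com"   # placeholder appliance FQDN

s = requests.Session()
s.verify = False                       # lab only; validate certificates in production

# Authenticate and obtain an API session token (assumed legacy /rest session endpoint)
r = s.post(f"{VCSA}/rest/com/vmware/cis/session",
           auth=("administrator@vsphere.local", "***"))
s.headers["vmware-api-session-id"] = r.json()["value"]

# Start a backup job to an FTP location (same fields as the Backup wizard above)
body = {"piece": {
    "location_type": "FTP",
    "location": "ftp://backup.example.com/vcsa-backups/2024-01-01",
    "location_user": "backup",
    "location_password": "***"}}
job = s.post(f"{VCSA}/rest/appliance/recovery/backup/job", json=body)
print(job.status_code, job.json())
```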
Restore vCenter Server Appliance
Prerequisites
• Verify that your system meets the
minimum software and hardware
requirements. See System Requirements for
the vCenter Server Appliance and Platform
Services Controller Appliance.
• Download and Mount the vCenter Server
Appliance Installer.
• If the vCenter Server instance is part of a
vCenter High Availability cluster, you must
power off the active, passive, and witness
nodes of the cluster before restoring the
vCenter Server.
Restore vCenter Server Appliance (cont)
Stage 1 – Deploy a New Appliance
In stage 1 of the restore process, you deploy the OVA file, which is included in the vCenter Server Appliance GUI installer.

Stage 2 – Transfer Data to the Newly Deployed Appliance
After the OVA deployment finishes, you are redirected to stage 2 of the restore process, in which the data from the backup location is copied to the newly deployed vCenter Server Appliance.

Wizard steps:
• On the Home page, click Restore.
• Review the Introduction page to understand the restore process and click Next.
• Read and accept the license agreement, and click Next.
• On the Enter backup details page, enter the details of the backup file that you want to restore, and click Next.
• Review the backup details and click Next.
• On the Ready to complete page, review the details, click Finish, and click OK to complete stage 2 of the restore process. The restore process restarts the vCenter Server Appliance Management Service.
• Click Close to exit the wizard. You are redirected to the vCenter Server Appliance Getting Started page.

Ref: https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vcenter.install.doc/GUID-3EAED005-B0A3-40CF-B40D-85AD247D7EA4.html
Product Updates
ESXi Enhancements
• Scalability improvements
– Maximum cluster size increased to 64 hosts
– 8000 virtual machines per cluster
– 576 logical CPUs per host
– 12 TB of RAM per host
– 1024 virtual machines per host

• Security enhancements
– VM Encryption
– Encrypted vMotion
– UEFI / Secure Boot
– Enhanced VIB authentication
– Multi-factor Authentication
– Role-based Access Control
– Audit Quality Logging
38
vCenter Server Enhancements
• vCenter architecture and deployment in 6.x and later has been simplified to two deployment models:
  – Embedded: the Platform Services Controller and vCenter Server run on the same virtual machine or server
  – External: the Platform Services Controller runs on its own virtual machine or server, separate from vCenter Server

39
vCenter Server Enhancements (cont.)
• The services are split out as follows
– The Platform Services Controller installation is a set of infrastructure services, which include
• VMware vCenter Single Sign-On™
• License service
• Lookup service
• VMware Directory Services
• VMware Certificate Authority (CA)
– The vCenter service installation is the remainder of the vCenter supporting services including
• vCenter Server
• vSphere Web Client
• Inventory Service
• VMware vSphere Auto Deploy™
• VMware vSphere ESXi Dump Collector
• vSphere Syslog Collector on Windows and vSphere Syslog Service for the VMware vCenter Server Appliance™

40
vCenter Server Enhancements (cont.)
• Platform Services Controller introduced in vSphere 6.0, enhanced in 6.5
  – In addition to hosting the services, the Platform Services Controller replicates information such as licenses, roles and permissions, and tags with other PSCs
  – This allows for a single pane of glass of the environment with Enhanced Linked Mode

• Enhanced Linked Mode
  – Linked Mode prior to 6.x used Microsoft ADS/ADAM, which has been replaced with Enhanced Linked Mode
  – Platform Services Controllers now replicate all information required for Linked Mode
  – Enhanced Linked Mode is now enabled by default in an environment
  – The vCenter Server Appliance is now supported with Enhanced Linked Mode
  – Mixing Windows and Appliance platforms is supported

Figure: a single Platform Services Controller serving two vCenter Server instances.

41
vCenter Server Enhancements (cont.)
• vSphere Update Manager
– Now built into the vCenter Appliance, no additional installation required
– vSphere Update Manager for Windows has not changed and is still a separate install if running on Windows
• vCenter Appliance Enhancements:
– No longer requires an external database. It uses a built in PostgreSQL database.
– vCenter High Availability a built in solution for protecting vCenter only available with the appliance
– vCenter Migration tool allows migration between Windows and Appliance installations
– Backup and restore a native feature of vCenter.

42
vCenter Server Enhancements (cont.)
• vCenter for Windows and vCenter Appliance support the same scalability numbers and
features

Metric Maximum

Hosts Per vCenter Server System 2,000

Powered-on VMs per vCenter Server System 25,000

Hosts Per Cluster 64

Virtual Machines Per Cluster 8,000

Enhanced Linked Mode Yes

43
vSphere Web Client Enhancements
• vSphere Web Client performance and
usability improvements
– Login times have been improved significantly
– GUI interface has been improved such that it
more closely mimics the VMware vSphere
Client™
– Right click menus are better organized and
more consistent across interfaces
– New drop-down menu for fast navigation
– Task pane has been relocated to the bottom of
the screen
– Dockable UI for easy interface customization

• In addition…..something awesome…

44
Introducing vSphere Client (HTML5)

• Feedback tool for providing instant feedback directly to product teams
• Search is made more prominent for large-scale environments
• Alarms moved to the bottom to expand the horizontal working area

45
VMware vSphere
vMotion Enhancements

• vSphere vMotion has been enhanced


– Encrypted vMotion now supported
– Cross-Cloud vMotion allowing vMotion of workloads between on-premises and off-premises clouds

46
HA / DRS / vSAN Feature Enhancements
• vSphere HA has been enhanced
– Proactive HA now available allowing for stats to be taken and used for High Availability protection

• vSphere DRS has also been enhanced


– DRS Policies allow for customization of DRS Functionality
– Network Aware DRS, allowing for network statistics to be used for DRS Calculations

• VMware vSAN Enhancements


– Allows for systems to use it as an iSCSI target.
– 2-node direct connect networking allowed
– Supports Cloud Native Applications

47
Other Storage Enhancements
• Virtual Volumes has been redesigned to allow for better utilization of storage resources
• Storage Policies now support Virtual Volumes Replication
• Enhancements to the components allowed in storage policies.

48
Questions

49
VMware vSphere®
Knowledge Transfer Kit
Architecture Overview

© 2015 VMware Inc. All rights reserved.


Agenda
• Architecture Overview
• VMware ESXi™
• Virtual machines
• VMware vCenter Server™
  – New Platform Services Controller Recommendations
• VMware vSphere® vMotion®
• Availability
  – VMware vSphere High Availability
  – VMware vSphere Fault Tolerance
  – VMware vSphere Distributed Resource Scheduler™
• Content Library
• VMware Certificate Authority (CA)
• Storage
  – iSCSI Storage Architecture
  – NFS Storage Architecture
  – Fibre Channel Architecture
  – Other Storage Architectural Concepts
• Networking
51
Architecture Overview
High-Level VMware vSphere Architectural Overview
Figure: VMware vSphere. VMware vCenter Server manages a cluster of ESXi hosts running on physical resources and provides:
• Application services
  – Availability: vSphere vMotion, vSphere Storage vMotion, vSphere High Availability, vSphere FT, Content Library, VMware Data Recovery
  – Scalability: DRS and DPM, Hot Add, overcommitment
• Infrastructure services
  – Storage: vSphere VMFS, VMware Virtual Volumes, VMware vSAN, thin provisioning, vSphere Storage I/O Control
  – Network: standard vSwitch, distributed vSwitch, VMware NSX, vSphere Network I/O Control

53
VMware ESXi
ESXi
• ESXi is the bare-metal VMware vSphere hypervisor
• ESXi installs directly onto the physical server, enabling direct access to all server resources
  – ESXi is in control of all CPU, memory, network, and storage resources
  – Allows virtual machines to run at near-native performance, unlike hosted hypervisors
• ESXi 6.x allows
  – Utilization of up to 576 physical CPUs per host
  – Utilization of up to 12 TB of RAM per host
  – Deployment of up to 1024 virtual machines per host

55
ESXi Architecture

Figure: the ESXi host architecture. The VMkernel provides network and storage access; a local support console (ESXi Shell) and CLI commands are available for configuration and support; agentless systems management and agentless hardware monitoring are provided through the Common Information Model (CIM) and the VMware management frameworks.

56


Virtual Machines
Virtual Machines
• The software computer and consumer of the resources that ESXi is in charge of
• VMs are containers that can run almost any operating system and application
• A segregated environment that does not cross boundaries unless permitted via the network or otherwise through SDK access
• Each VM has access to its own resources: CPU, RAM, disk, network and video cards, keyboard, mouse, SCSI controller, and CD/DVD
• VMs generally do not realize that they are virtualized

58
Virtual Machine Architecture
• Virtual machines consist of files stored on a vSphere VMFS or NFS datastore
– Configuration file (.vmx)
– Swap files (.vswp)
– BIOS files (.nvram)
– Log files (.log)
– Template file (.vmtx)
– Raw device map file (<VM_name>-rdm.vmdk)
– Disk descriptor file (.vmdk)
– Disk data file (<VM_name>-flat.vmdk)
– Suspend state file (.vmss)
– Snapshot data file (.vmsd)
– Snapshot state file (.vmsn)
– Snapshot disk file (<VM_name>-delta.vmdk)

59
VMware vCenter Server
VMware vCenter™
• vCenter is the management platform for
vSphere environments
• Provides much of the feature set that comes
with vSphere, such as vSphere High
Availability
• Also provides SDK access into the
environment for solutions such as VMware
vRealize™ Automation™
• vCenter Server is available in two flavors
– vCenter for Windows
– vCenter Server Appliance
• A single vCenter Server running version 6.5 can manage
  – 2,000 hosts
  – 25,000 virtual machines

61
vCenter Architecture

• In vCenter 6.x, the architecture has changed dramatically compared to 5.x
• All services are provided from either a Platform Services Controller or a vCenter Server instance

• Provided by the Platform Services Controller
  – VMware vCenter Single Sign-On™
  – License service
  – Lookup service
  – VMware Directory Services
  – VMware Certificate Authority
• Provided by vCenter Server
  – vCenter Server
  – VMware vSphere Web Client
  – VMware vSphere Auto Deploy™
  – VMware vSphere ESXi Dump Collector
  – vSphere Syslog Collector on Windows and vSphere Syslog Service for the VMware vCenter Server Appliance™
  – vSphere Update Manager (included with appliance only)
62
vCenter Architecture (cont.)
• Two basic architectures are supported as a result of this change
• Platform Services Controller is either Embedded or External to vCenter Server
• Choosing a mode depends on the size and feature requirements for the environment

Figure: in the external deployment, the Platform Services Controller runs on its own virtual machine or server and vCenter Server runs on a separate one; in the embedded deployment, both run on the same virtual machine or server.

63
vCenter Architecture (cont.)
These architectures are Recommended
• Enhanced Linked Mode is a major feature that impacts the architecture
– When using Enhanced Linked Mode it is recommended to use an external Platform Service Controller
– For details about architectures that VMware recommends and the Implications of using them, see
VMware KB article, List of Recommended topologies for vSphere 6.x (2108548) (
http://kb.vmware.com/kb/2108548)
Figure: recommended topologies. Enhanced Linked Mode without high availability connects multiple vCenter Server instances to a single external Platform Services Controller. Enhanced Linked Mode with high availability places two or more external Platform Services Controllers behind a load balancer, with multiple vCenter Server instances connecting through it.

64
vCenter Architectures (cont.)
These architectures are Not Recommended
Figure: topologies that are not recommended include Enhanced Linked Mode with embedded Platform Services Controllers, an embedded Platform Services Controller combined with an external vCenter Server, and an embedded Platform Services Controller linked with an external Platform Services Controller.

65
vCenter Architecture (cont.)
• Enhanced Linked Mode has the following maximums
– The architecture should also adhere to these maximums to be supported

Description Scalability Maximum

Number of Platform Services Controllers per domain 10

Maximum Platform Services Controllers per vSphere site (behind a single load balancer) 4

Maximum objects in a vSphere domain (users, groups, solution users) 1,000,000

Maximum number of VMware solutions connected to a single Platform Services Controller 10

Maximum number of VMware products/solutions per vSphere domain 10

66
vCenter Architecture – vCenter Server Components

Figure: vCenter Server components. Users reach the environment through the vSphere Web Client and the HTML5 vSphere Client, subject to user access control. The Platform Services Controller (including vCenter Single Sign-On) and the vCenter Server core and distributed services integrate with a database server (Windows only), Microsoft Active Directory domain services, third-party applications, and ESXi management plug-ins, and manage the ESXi hosts through the API. Additional services include VMware vSphere Update Manager™ for Windows and vRealize Orchestrator.
67
vCenter Architecture – ESXi and vCenter Server Communication
How vCenter Server components and ESXi hosts communicate

Figure: the vpxd service on the vCenter Server (with its Platform Services Controller) communicates with the vpxa agent on each ESXi host over TCP 443 and 9443 and TCP/UDP 902; vpxa in turn communicates with the host's hostd service.

68
VMware vSphere vMotion
vSphere vMotion
• vSphere vMotion allows for live migration
of virtual machines between compatible
ESXi hosts
– Compatibility determined by CPU, network,
and storage access
• Encrypted vMotion is a feature of
vSphere 6.5
• With vSphere 6.5, migrations can occur
– Between clusters
– Between datastores
– Between networks
– Between vCenter Servers
– Over long distances as long as RTT is
<100ms
– Cross-Cloud
70
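As a hedged illustration, the pyVmomi sketch below triggers a compute vMotion of a running VM to another compatible host; the VM and host names are placeholders.

```python
# Sketch: live-migrate a VM to another host (compute vMotion) with pyVmomi.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local", pwd="***", sslContext=ctx)
try:
    content = si.RetrieveContent()

    def find(vimtype, name):
        """Locate an inventory object by name (placeholder names below)."""
        view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
        try:
            return next(o for o in view.view if o.name == name)
        finally:
            view.Destroy()

    vm = find(vim.VirtualMachine, "demo-vm")
    dest_host = find(vim.HostSystem, "esxi02.example.com")

    WaitForTask(vm.MigrateVM_Task(host=dest_host,
                                  priority=vim.VirtualMachine.MovePriority.highPriority))
finally:
    Disconnect(si)
```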
vSphere vMotion Architecture
• vSphere vMotion involves transferring the
entire execution state of the virtual machine
from the source host to the destination
• Primarily happens over a high-speed network
• The execution state primarily consists of the
following components
– The virtual device state, including the state of the
CPU, network and disk adaptors, SVGA, and so
on
– External connections with devices, including
networking and SCSI devices
– The virtual machine’s physical memory

• Generally a single ping is lost, and users do


not even know a VM has changed hosts

71
vSphere vMotion Architecture – Pre-Copy
When a vSphere vMotion is initiated a second VM container is started and pre-copy of the memory is
initiated

Figure: VM A runs on ESXi Host 1 while a second VM A container is created on ESXi Host 2. A memory bitmap tracks changes while memory is pre-copied over the vMotion network; the end user continues to reach the VM over the production network.
72
vSphere vMotion Architecture – Memory Checkpoint
• When enough data is copied, the VM is quiesced
• Checkpoint data is sent with the final changes
• ARP is sent and VM is active on the destination host
• The source VM is stopped
Figure: the checkpoint data and final memory changes are sent from ESXi Host 1 to ESXi Host 2 over the vMotion network; the end user's connection continues over the production network.

73
VMware vSphere Storage vMotion Architecture
• vSphere Storage vMotion works in much the same way as vSphere vMotion, only the disks are migrated instead
• It works as follows
  1. Initiate storage migration
  2. Use the VMkernel data mover or VMware vSphere Storage APIs - Array Integration (VAAI) to copy data
  3. Start a new virtual machine process
  4. Use the mirror driver to mirror I/O calls to file blocks that have already been copied to the virtual disk on the destination datastore
  5. Cut over to the destination VM process to begin accessing the virtual disk copy

Figure: read/write I/O to the virtual disk passes through the mirror driver while the VMkernel data mover (or VAAI on the storage array) copies data from the source datastore to the destination datastore.

74
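A hedged pyVmomi sketch of the equivalent Storage vMotion: relocate a VM's disks to another datastore while it keeps running. The VM and datastore names are placeholders.

```python
# Sketch: Storage vMotion a VM's disks to another datastore with pyVmomi.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local", pwd="***", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine, vim.Datastore], True)
    objs = {o.name: o for o in view.view}      # simple name lookup for the sketch
    view.Destroy()

    spec = vim.vm.RelocateSpec(datastore=objs["datastore2"])   # destination datastore
    WaitForTask(objs["demo-vm"].RelocateVM_Task(spec=spec))
finally:
    Disconnect(si)
```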
vSphere Storage vMotion Architecture –
Simultaneously Change
• vSphere vMotion also allows both storage and host to be changed at the same time
• In vSphere 6.x – the VM can be migrated between networks and vCenter Servers

Figure: a single migration can change the ESXi host, datastore, network, and vCenter Server at once, moving a VM from Network A on one vCenter Server and ESXi host to Network B on another, over the vSphere vMotion network.
75
Availability
VMware vSphere High Availability
VMware vSphere Fault Tolerance
VMware vSphere Distributed Resource Scheduler
vSphere vMotion Architecture – Long-Distance vSphere vMotion

Cross-continental
• Targeting cross-continental USA
• Up to 100 ms RTT

Performance
• Maintains standard vSphere vMotion guarantees

77
Availability
VMware vSphere High Availability
vSphere High Availability
• vSphere High Availability is an availability solution that monitors hosts and restarts virtual machines in the case of a host failure
• VM Component Protection
• Proactive HA
• OS- and application-independent, requiring no complex configuration changes
• Agents on the ESXi hosts monitor for the following types of failures:

Infrastructure    Connectivity                         Application
Host failures     Host network isolated                Guest OS hangs/crashes
VM crashes        Datastore incurs PDL or APD event    Application hangs/crashes
79
vSphere High Availability Architecture – Overview
• Cluster of ESXi hosts created up to 64 hosts
– One of the hosts is elected as master when HA is enabled

• Availability heartbeats occur through network and storage


• HA’s agent communicates on the following networks by default
– Management network (or)
– VMware vSAN™ network (if vSAN is enabled)

Network heartbeats

Storage heartbeats

Master

80
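A hedged pyVmomi sketch of enabling vSphere HA (with host monitoring) on an existing cluster; the cluster name is a placeholder.

```python
# Sketch: enable vSphere HA on a cluster with pyVmomi.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local", pwd="***", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    cluster = next(c for c in view.view if c.name == "Cluster01")   # placeholder
    view.Destroy()

    das = vim.cluster.DasConfigInfo(enabled=True, hostMonitoring="enabled")
    spec = vim.cluster.ConfigSpecEx(dasConfig=das)
    WaitForTask(cluster.ReconfigureComputeResource_Task(spec=spec, modify=True))
finally:
    Disconnect(si)
```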
vSphere High Availability Architecture – Host Failures

Master

81
vSphere High Availability Architecture – Host Failures

Master declares
Master slave host dead

82
vSphere High Availability Architecture – Host Failures

New master elected


and resumes master
Master duties

83
vSphere High Availability Architecture – Network Partition

A B

Master

84
vSphere High Availability Architecture – Host Isolation

Master

85
vSphere High Availability Architecture – VM Monitoring

Master

86
vSphere High Availability Architecture – VM Component Protection

Master
87
Availability
VMware vSphere Fault Tolerance
vSphere FT
• vSphere FT is an availability solution that
provides continuous availability for virtual
machines
– Zero downtime
– Zero data loss

• No loss of TCP connections


• Completely transparent to guest software
• No dependency on guest OS, applications
• No application specific management and
learning
• Supports up to 4 vCPUs per VM with vSphere 6.x
  – Uses fast checkpointing rather than record/replay functionality

89
vSphere FT Architecture
• vSphere FT creates two complete virtual machines when enabled with vSphere 6.x
• This includes a complete copy of
– VMX configuration files
– VMDK files including the ability to use separate datastores

Primary VM Secondary VM

.vmx file .vmx file

VMDK VMDK VMDK VMDK


Datastore 1 VM Network Datastore 2 VM Network

90
vSphere FT Architecture – Memory Checkpoint
• vSphere FT in vSphere 6.5 uses fast checkpoint technology
– This is similar to how vSphere vMotion works, but it is done continuously (rather than once)
– The fast checkpoint is a snapshot of all data not just memory (memory, disks, devices, and so on)
– vSphere FT logging network has a minimum requirement of 10 Gbps NIC

ESXi Host 1 ESXi Host 2

VM A VM A

Memory
bitmap

Fast Checkpoint Data


vSphere FT
Logging network

Production
network

VM End User 91
Availability
VMware vSphere Distributed Resource Scheduler
DRS
• DRS is a technology that monitors load and resource usage and uses vSphere vMotion to balance virtual machines across hosts in a cluster
  – DRS also includes VMware Distributed Power Management (DPM), which allows hosts to be evacuated and powered off during periods of low utilization
• DRS uses vSphere vMotion functionality to migrate VMs
• Can be used in three ways
  – Fully automated – where DRS acts on recommendations automatically
  – Partially automated – where DRS only acts for initial VM power-on placement and an administrator has to approve recommendations
  – Manual – where administrator approval is required for all movements
93
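A hedged pyVmomi sketch of enabling DRS in fully automated mode, along with DPM in automatic mode, on a cluster; the cluster name and migration threshold are placeholders.

```python
# Sketch: enable DRS (fully automated) and DPM (automatic) on a cluster.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local", pwd="***", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    cluster = next(c for c in view.view if c.name == "Cluster01")   # placeholder
    view.Destroy()

    spec = vim.cluster.ConfigSpecEx(
        drsConfig=vim.cluster.DrsConfigInfo(
            enabled=True, defaultVmBehavior="fullyAutomated", vmotionRate=3),
        dpmConfig=vim.cluster.DpmConfigInfo(
            enabled=True, defaultDpmBehavior="automated"))
    WaitForTask(cluster.ReconfigureComputeResource_Task(spec=spec, modify=True))
finally:
    Disconnect(si)
```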
DRS Architecture
• DRS generates migration recommendations based on how aggressively it has been configured
• For example
  – The three hosts on the left side of the figure are unbalanced
  – Host 1 has six virtual machines; its resources might be overused while ample resources are available on Host 2 and Host 3
  – DRS migrates (or recommends the migration of) virtual machines from Host 1 to Host 2 and Host 3
  – On the right side of the figure, the resulting, properly load-balanced configuration of the hosts appears

94
Distributed Power Management Architecture
• DPM generates migration recommendations similar to DRS, but in terms of achieving power savings
  – It can be configured for how aggressively you want to save power
• For example
  – The three hosts on the left side of the figure have virtual machines running, but they are mostly idle
  – DPM determines that, given the load of the environment, shutting down Host 3 will not impact the level of performance for the VMs
  – DPM migrates (or recommends the migration of) virtual machines from Host 3 to Host 2 and Host 1 and puts Host 3 into standby mode
  – On the right side of the figure, the resulting power-managed configuration of the hosts appears
95
Content Library
Content Library
• The Content Library is a distributed template, media and script library for vCenter Server
• Similar to the VMware vCloud® 5.5 Content Catalog and VMware vCloud Connector® Content Sync
• Tracks versions for generational content; it cannot be used to revert to older versions

Figure: a Content Library on one vCenter Server acts as the publisher; a Content Library on another vCenter Server subscribes to it and syncs its items.

97
Content Library Architecture – Publication and Subscription
• Publication and subscription allow libraries to be shared between vCenter Servers
• Provides a single source for information that can be configured to download and sync
according to schedules or timeframes
Figure: the publishing vCenter Server exposes templates and other items through its Content Library Service and Transfer Service; the subscribing vCenter Server subscribes using a subscription URL (to lib.json), optionally with a password, and its Transfer Service retrieves content with HTTP GET.

98
Content Library Architecture – Content Synchronization
• Content Synchronization occurs when content changes
• Simple versioning used to denote the modification, and the item is transferred

Figure: when an item changes, the Content Library Services on the publisher and subscriber exchange metadata (lib.json, items.json, item.json) over the VMware Content Subscription Protocol (vCSP), the Transfer Services copy the modified item with HTTP GET, and library metadata is stored in each vCenter Server database (VCDB).

99
VMware Certificate Authority
Certificates in vSphere 6.x
• vCenter 5.x solutions had their TCP/IP connections secured with SSL
  – Required a unique certificate for each solution

• In vSphere 6.x, the various listening ports have been replaced with a single endpoint

Figure: a reverse web proxy on port 443 fronts the vCenter Server service, Inventory Service, vCenter Single Sign-On, the vSphere Web Client, vSphere Update Manager, and the Storage Policy Service.

• This is the reverse HTTP proxy, which routes traffic to the appropriate service based on the type of request
• This means only one endpoint certificate is needed
101
VMware Certificate Authority
• In vSphere 6.x, vCenter ships with an internal Certificate Authority (CA)
– Called the VMware Certificate Authority

• An instance of the VMware CA is included with each Platform Services Controller node
• Issues certificates for VMware components under its personal authority in the vSphere ecosystem
• Runs as part of the Infrastructure Identity Core Service Group
– Directory service
– Certificate service
– Authentication framework

• VMware CA issues certificates only to clients that present credentials from VMDirectory in its
own identity domain
– It also posts its root certificate to its own server node in VMware Directory Services

102
How is the VMware Certificate Authority Used?
• Machine’s SSL certificate
– Used by reverse proxy on every vSphere node
– Used by the VMware Directory Service on Platform Services Controller and Embedded nodes
– Used by VPXD on Management and Embedded nodes

• Solution users’ certificates


• Single Sign-On signing certificates

103
VMware Endpoint Certificate Store
• Certificate Storage and Trust are now handled by the VMware Endpoint Certificate Store
• Serves as a local “wallet” for certificates, for private keys and secret keys, which can be stored
in key stores
• Runs as part of the Authentication Framework Service
• Runs on every Embedded, Platform Services Controller and Management node
• Some key-stores are special
– Trusted certificates key store
– Machine SSL cert key store

104
How is VMware Endpoint Certificate Store Used?
• Machine SSL store
– Holds the machine SSL certificate

• Trusted roots store


– Holds trusted root certificates from all VMware CA instances running on every infrastructure controller in
the SSO identity domain
– Holds third-party trusted root certificates that were uploaded to VMDir and were downloaded to every
VMware Endpoint Certificate Store instance
– Solutions use the contents of this key-store to verify certificates

• Solution key-stores
– Following key stores hold private keys and solution user certificates
• Machine Account Key Store (Platform Service Controller, Management, Embedded nodes)
• VPXD Key Store (Management, Embedded nodes)
• VPXD Extension Key Store (Management, Embedded nodes)
• VMware vSphere Client™ Key Store (Management, Embedded nodes)

105
Storage
iSCSI Storage Architecture
NFS Storage Architecture
Fibre Channel Architecture
Other Storage Architectural Concepts
Storage
• Both local and shared storage are a core requirement for full utilization of ESXi features
• Many kinds of storage can be used with vSphere
  – Local disks
  – Fibre Channel (FC) SANs
  – iSCSI SANs
  – NAS
  – Virtual SAN
  – Virtual Volumes (VVOLs)
• Datastores are generally formatted with either:
  – A VMFS file system
  – The file system of the NFS server

Figure: ESXi hosts consume datastores of type VMware vSphere VMFS or NFS, backed by local disks, FC, FCoE, iSCSI, or NAS storage technologies, as well as vSAN and VVOLs.
107
Storage – Protocol Features
• Each different protocol has its own set of supported features
• All major features are supported by all protocols

Storage Protocol          Supports Boot from SAN   Supports vSphere vMotion   Supports vSphere High Availability   Supports DRS   Supports Raw Device Mapping
Fibre Channel             ●                        ●                          ●                                    ●              ●
FCoE                      ●                        ●                          ●                                    ●              ●
iSCSI                     ●                        ●                          ●                                    ●              ●
NFS                                                ●                          ●                                    ●
Direct Attached Storage   ●                        ●
vSAN                                               ●                          ●                                    ●
VMware Virtual Volumes                             ●                          ●                                    ●

108
Storage
iSCSI Storage Architecture
Storage Architecture – iSCSI
• iSCSI storage utilizes regular IP traffic over a standard network to transport iSCSI commands
• The ESXi host connects through one of several types of iSCSI initiator

110
Storage Architecture – iSCSI Components
• All iSCSI systems share a common set of components that are used to provide the storage
access

111
Storage Architecture – iSCSI Addressing
• Other than the standard IP addresses, iSCSI targets are identified by names as well

iSCSI target name:


iqn.1992-08.com.mycompany:stor1-47cf3c25
or
eui.fedcba9876543210
iSCSI alias: stor1
IP address: 192.168.36.101

iSCSI initiator name:


iqn.1998-01.com.vmware:train1-64ad4c29
or
eui.1234567890abcdef
iSCSI alias: train1
IP address: 192.168.36.88

112
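A small illustration of the iqn naming convention shown above: the fields after the iqn prefix are the registration date, the reversed domain of the naming authority, and an optional unique string chosen by that authority.

```python
# Parse an iqn-format iSCSI name into its components.
def parse_iqn(name: str) -> dict:
    prefix, rest = name.split(".", 1)
    assert prefix == "iqn", "only iqn-format names handled in this sketch"
    date, rest = rest.split(".", 1)                 # registration date (yyyy-mm)
    authority, _, unique = rest.partition(":")      # reversed domain : unique string
    return {"date": date, "naming_authority": authority, "unique_name": unique}

print(parse_iqn("iqn.1992-08.com.mycompany:stor1-47cf3c25"))
# {'date': '1992-08', 'naming_authority': 'com.mycompany', 'unique_name': 'stor1-47cf3c25'}
```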
Storage
NFS Storage Architecture
Storage Architecture – NFS Components
• Much like iSCSI, NFS accesses storage over the network

Figure: a NAS device (or a server with storage) shares a directory over the network with the ESXi host; the host reaches it through a VMkernel port defined on a virtual switch, with a NIC mapped to that virtual switch.

114
Storage Architecture – Addressing and Access Control with NFS
• ESXi accesses NFS through the NFS server address or name, through a VMkernel port
• NFS version 4.1 and NFS version 3 are available with vSphere 6.x
• Different features are supported with different versions of the protocol
  – NFS 4.1 supports multipathing, unlike NFS 3
  – NFS 3 supports all features; NFS 4.1 does not support Storage DRS, VMware vSphere Storage I/O Control, VMware vCenter Site Recovery Manager™, or Virtual Volumes
• Dedicated switches are not required for NFS configurations

Figure: the VMkernel port is configured with an IP address (for example, 192.168.81.72) to reach the NFS server (for example, 192.168.81.33).

115
Storage
Fibre Channel Architecture
Storage Architecture – Fibre Channel
• Unlike network storage such as NFS or iSCSI, Fibre Channel does not generally use an IP network for storage access
  – The exception is when using Fibre Channel over Ethernet (FCoE)

117
Storage Architecture – Fibre Channel Addressing and Access
Control
• Zoning and LUN masking are used for access control to storage LUNs

118
Storage Architecture – FCoE Adapters
• FCoE adapters allow access to Fibre Channel storage over Ethernet connections
• Enables expansion to Fibre Channel SANs when no Fibre Channel infrastructure exists, in many cases
• Both hardware and software adapters are allowed
  – Hardware adapters are often called converged network adapters (CNAs)
  – In many cases, both a NIC and an HBA are presented to clients from the single card

Figure: with hardware FCoE, a converged network adapter in the ESXi host presents both a network driver and an FC driver; with software FCoE, a 10 Gigabit Ethernet adapter with FCoE support is paired with a software FC driver. An FCoE switch forwards FC frames to the FC SAN and storage arrays and Ethernet IP frames to LAN devices.
119
Storage
Other Storage Architectural Concepts
Multipathing
• Multipathing enables continued access to
SAN LUNs if hardware fails
• It also can provide load balancing based on
the path policy selected

121
vSphere Storage I/O Control
• vSphere Storage I/O Control allows traffic to be prioritized during periods of contention
  – Brings the compute-style shares/limits model to the storage infrastructure
• Monitors device latency and acts when it exceeds a threshold
• Allows important virtual machines to have priority access to resources during high I/O from non-critical applications

Figure: without Storage I/O Control, data mining, print server, online store, and Microsoft Exchange workloads compete equally for datastore I/O; with Storage I/O Control, the more important workloads receive priority.

122
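A hedged pyVmomi sketch of enabling Storage I/O Control on a datastore with a custom congestion threshold; the datastore name and threshold value are placeholders, and the exact spec type should be checked against your pyVmomi version.

```python
# Sketch: enable Storage I/O Control on a datastore.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local", pwd="***", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.Datastore], True)
    ds = next(d for d in view.view if d.name == "datastore1")   # placeholder
    view.Destroy()

    spec = vim.StorageResourceManager.IORMConfigSpec(
        enabled=True, congestionThreshold=30)          # act above ~30 ms device latency
    WaitForTask(content.storageResourceManager.ConfigureDatastoreIORM_Task(
        datastore=ds, spec=spec))
finally:
    Disconnect(si)
```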
Datastore Clusters
• A collection of datastores with shared resources similar to ESXi host clusters
• Allow for management to be done as a shared management interface
• Storage DRS can be used to manage the resource and ensure they are balanced
• Can be managed by using the following constructs
– Space utilization
– I/O latency load balancing
– Affinity rules for virtual disks

123
Software-Defined Storage
• Software-defined storage is a software
construct which is used by
– Virtual Volumes
– vSAN

• Uses storage policy-based management to


assign policies to virtual machines for
storage access
• Policies are assigned on a per disk basis,
rather than a per datastore basis
• Key tenet of the software-defined data center
• vSAN is discussed in much greater detail in
the VMware vSAN Knowledge Transfer Kit.

124
Networking
Networking
• Networking is also a core resource for vSphere
• Two core types of switches are provided
– Standard virtual switches
• Virtual switch configuration for a single host
– Distributed virtual switches
• Data center level virtual switches that provide a consistent network configuration for virtual machines as they
migrate across multiple hosts
• Third-Party switches are NO LONGER allowed in vSphere 6.5 including:
– Cisco Nexus 1000v

• There are two basic types of connectivity as well


– Virtual machine port groups
– VMkernel port groups
• For IP storage, vSphere vMotion migration, vSphere FT, Virtual SAN, provisioning, and so on
• For the ESXi management network

126
Networking Architecture

Figure: VM1, VM2, and VM3 connect to virtual machine port groups on Test VLAN 101 and Production VLAN 102, while VMkernel ports carry the management network (Management VLAN 104) and IP storage (IP Storage VLAN 103).
127
Network Architecture – Standard Compared to Distributed
Distributed vSwitch
Standard vSwitch

128
Network Architecture – NIC Teaming and Load Balancing
• NIC teaming enables multiple NICs to be connected to a single virtual switch for continued access to networks if hardware fails
  – This can also enable load balancing (if appropriate)

• Load balancing policies
  – Route based on Originating Virtual Port
  – Route based on Source MAC Hash
  – Route based on IP Hash
  – Route based on Physical NIC Load
  – Use Explicit Failover Order

• Most of the available policies can be configured on either type of switch
  – Route based on Physical NIC Load is only available on VMware vSphere Distributed Switch™

129
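A hedged pyVmomi sketch of applying a teaming policy on a standard vSwitch: attach two uplinks and select Route based on Originating Virtual Port. The vSwitch and vmnic names are placeholders (the vSwitch is assumed to exist already); distributed switches use a different (dvs) API.

```python
# Sketch: NIC teaming on a standard vSwitch with the originating-port-ID policy.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="esxi01.example.com", user="root", pwd="***", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
    host = view.view[0]
    view.Destroy()

    net_sys = host.configManager.networkSystem
    spec = vim.host.VirtualSwitch.Specification(
        numPorts=128,
        bridge=vim.host.VirtualSwitch.BondBridge(nicDevice=["vmnic0", "vmnic1"]),
        policy=vim.host.NetworkPolicy(
            nicTeaming=vim.host.NetworkPolicy.NicTeamingPolicy(
                policy="loadbalance_srcid",              # Route based on Originating Virtual Port
                nicOrder=vim.host.NetworkPolicy.NicOrderPolicy(
                    activeNic=["vmnic0", "vmnic1"]))))
    net_sys.UpdateVirtualSwitch(vswitchName="vSwitch1", spec=spec)   # existing vSwitch assumed
finally:
    Disconnect(si)
```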
VMware vSphere Network I/O Control

• vSphere Network I/O Control allows traffic to be prioritized during periods of contention
  – Brings the compute-style shares/limits model to the network infrastructure
• Monitors device latency and acts when it exceeds a threshold
• Allows important virtual machines or services to have priority access to resources

Figure: multiple traffic types share a virtual switch with 10 GigE uplinks.

130
Software-Defined Networking
• Software-Defined Networking is a software
construct that allows your physical network
to be treated as a pool of transport capacity,
with network and security services attached
to VMs with a policy-driven approach
• Decouples the network configuration from
the physical infrastructure
• Allows for security and micro-segmentation
of traffic
• Key tenet of the software-defined data center (SDDC)

131
Questions

132
