
VMware vSphere 5.5 Virtual SAN Hosted Beta

Lab Guide

Table of Contents

Lab Overview ... 7
    How to submit a bug in Hosted Beta ... 7
    Virtual SAN ... 8
Virtual SAN Overview ... 9
    What is Virtual SAN ... 9
    Key Components ... 10
    Customer Benefits ... 10
    Primary Use Cases ... 11
Virtual SAN Requirements ... 12
    vCenter Server ... 12
    vSphere ... 12
    Disk & Network ... 12
Module 1 Virtual SAN Setup and Enable ... 14
    Setup of Virtual SAN Network and Enable Cluster ... 15
        Easy Setup ... 15
        Setup Virtual SAN Network ... 15
        Navigate from the Home to Hosts & Clusters ... 16
        Navigate to esx-01a.corp.local ... 16
        Add Virtual SAN Network ... 17
        Virtual SAN traffic ... 17
        Select Target Device ... 18
        Select Network ... 18
        Target Device Selected ... 19
        Specify Virtual SAN for Port Group ... 19
        Use IPv4 DHCP ... 20
        Ready to complete ... 20
        vmk3 VSAN Network Added ... 21
        Enable Virtual SAN ... 21
        Turn ON Virtual SAN ... 22
        Refresh ... 22
        All hosts participating in Virtual SAN ... 23
        Create Disk Group for Virtual SAN ... 23
        Claim Disks for Virtual SAN Use ... 24
        Hosts and Disks Selected ... 25
        Task Begins ... 25
        Refresh ... 26
        vsanDatastore ... 27
        Verify Storage Provider Status ... 28
        Select VM Storage Policies ... 29
        VM Storage Policies in vCenter ... 30
        Create my first VM storage policy ... 31
        Create a new VM Storage Policy ... 31
        Rule-Sets ... 31
        Create a Rule ... 32
        Default Virtual Machine Storage Policies ... 32
        Hosts and Clusters View ... 33
        Default Storage Service Level during Storage vMotion ... 33
        Select Migration Type ... 34
        Move the VM to a Virtual SAN Datastore ... 34
        Review Selection ... 34
        Verification ... 35
        Physical Disk Placement ... 36
        Storage vMotion from a Virtual SAN datastore ... 37
        Select Migration Type ... 37
        Move the VM to a non Virtual SAN datastore ... 38
    Useful Virtual SAN CLI Commands ... 39
        Open PuTTY ... 39
        ssh to esx-01a.corp.local ... 40
        Login to esx-01a.corp.local ... 40
        vsan Commands ... 41
        vsan cluster ... 41
        vsan network ... 42
        vsan storage ... 42
        vsan policy ... 43
        Conclusion ... 43
Module 2 Virtual SAN with vMotion, Storage vMotion and HA Interoperability ... 44
    Build VM Storage Policies ... 45
        Enable Storage Policies ... 45
        Select VM Storage Policies ... 45
        Create VM storage policy ... 46
        Create a new rule for Tier 2 Apps ... 46
        Rule-Sets ... 47
        Create a Rule on Number of Failures to Tolerate ... 48
        How many failures to tolerate? ... 48
        Matching Resources ... 49
        Ready to complete ... 49
        Tier 2 Apps Rule Ready ... 50
    vMotion & Storage vMotion ... 51
        Virtual SAN Interoperability ... 51
        Storage vMotion from NFS to vsanDatastore ... 52
        Migrate base-sles VM ... 52
        Change datastore ... 53
        Select vsanDatastore ... 53
        Review and Finish ... 54
        Storage vMotion underway ... 54
        Review the new destination ... 55
        vMotion from host with local storage to host without local storage ... 55
        Change Host ... 56
        Allow Host Selection ... 56
        Select esx-04 Host ... 57
        Migrate VM back to esx-01a ... 58
    vSphere HA and Virtual SAN Interoperability ... 59
        vSphere HA & Virtual SAN Interoperability ... 59
        Enable HA on the cluster ... 60
        Turn ON vSphere HA ... 60
        HA Enabled ... 61
        Host Failure No running VMs ... 62
        Reboot esx-02a ... 63
        esx-02 Host Failure ... 63
        Other hosts in Virtual SAN cluster status ... 64
        Check base-sles VM Home ... 64
        Check base-sles Hard Disk 1 ... 65
        Host Failure with Running VMs ... 65
        Start VM ... 66
        Identify Host ... 66
        VM Storage Policies ... 67
        vMotion to esx-03a ... 67
        Select esx-03 as the host ... 68
        Confirm esx-03a.corp.local ... 69
        Reboot esx-03a ... 69
        Host Status ... 70
        base-sles Status ... 71
        Refresh ... 72
        base-sles has restarted on another host ... 73
        Quorum to run the VM ... 73
        Conclusion ... 73
Module 3 Virtual SAN Storage Level Agility ... 74
    Setting up our environment ... 75
    Enter into Maintenance Mode ... 76
        Moving a vSphere Host out of the cluster ... 77
    Defining your VM Storage Policies ... 78
        Decisions when creating a VM Storage Policy ... 78
        Storage Policies ... 78
    Creating a VM Storage Policy (1) ... 79
        Creating a VM Storage Policy (2) ... 79
        Creating a VM Storage Policy (3) ... 80
        Creating a VM Storage Policy (4) ... 80
        Creating a VM Storage Policy (5) ... 81
        Creating a VM Storage Policy (6) ... 81
        Creating a VM Storage Policy (7) ... 82
        Create a Virtual Machine and apply VM Storage Policy ... 82
        Create a Virtual Machine and apply VM Storage Policy (2) ... 83
        Create a Virtual Machine and apply VM Storage Policy (3) ... 84
        Create a Virtual Machine and apply VM Storage Policy (4) ... 85
        View Physical Disk Placement of the VM ... 86
    Understanding the storage requirements of a VM ... 87
        Overview of the Capabilities of a VM Storage Policy ... 87
        Understanding VM Storage Policies ... 88
        Modify VM Storage Policies (1) ... 90
        Modify VM Storage Policies (2) ... 90
        Resync VM with the Policy Change (1) ... 91
        Resync VM with the Policy Change (2) ... 91
        View Physical Disk Placement of the VM ... 92
    Scaling out your Compute and Storage resources ... 93
        Adding a Compute Node ... 93
        Verify vsanDatastore access ... 93
        Add a Compute Node with Local Storage (1) ... 94
        Add a Compute Node with Local Storage (2) ... 94
        Add a Compute Node with Local Storage (3) ... 95
        Add a Compute Node with Local Storage (4) ... 95
        Add a Compute Node with Local Storage (5) ... 96
        Add a Compute Node with Local Storage (6) ... 98
        Add a Compute Node with Local Storage (7) ... 99
        Add a Compute Node with Local Storage (8) ... 100
        Add a Compute Node with Local Storage (9) ... 101
        Add a Compute Node with Local Storage (10) ... 102
        Add a Compute Node with Local Storage (11) ... 102
        Verify vsanDatastore Disk Groups ... 104
        View vsanDatastore Capacity ... 105
    Changing VM Storage Policies on the fly ... 106
        Modify the VM Storage Policy (1) ... 106
        Modify the VM Storage Policy (2) ... 106
        Resync Virtual Machine with Policy Changes (1) ... 107
        Resync Virtual Machine with Policy Changes (2) ... 108
    Virtual SAN Command Line and Troubleshooting ... 110
        Which interface is Virtual SAN using for communication? ... 110
        Which disks have been claimed by Virtual SAN? ... 111
        Get Cluster details ... 112
        Conclusion ... 112
    Virtual SAN Summary ... 113


Lab Overview
How to submit a bug in Hosted Beta
We want to make the best use of your time while getting valuable feedback from you on your experience of using our products. As you progress through the Hosted Beta lab, we'll record all of your lab activity. When you're ready to give us some feedback, use the vSubABug tool to tell us what you think. When you click submit, we'll grab the last few minutes of your lab activity and all the relevant logs so our engineers can get some great context on what you did in the lead-up to that feedback. Double-click the vSubABug icon on your VMware View Desktop control center.

You will be prompted to enter your email or station number. Enter your email and click OK. Describe the bug. Click Add for each one, then click Submit TheBugs!

It's that simple.


Virtual SAN
This lab is focused on a new storage feature in vSphere: Virtual SAN (VSAN). The lab is broken up into three modules. Each module builds on the one before it, so it is preferred that they be taken in order. The three modules together will take about 120 minutes to complete. Please be aware of some reboot time needed at the end of Module 2; if you plan to continue from Module 2 to Module 3, factor in a few extra minutes before you can start Module 3. The modules are:

- Module 1: Virtual SAN Setup, Enable and Build Storage Policies (60 minutes)
- Module 2: Virtual SAN with vMotion, Storage vMotion and HA Interoperability (30 minutes)
- Module 3: Virtual SAN Storage Level Agility (30 minutes)

The times listed next to each module are averages; depending on your experience, your time may be more or less.


Virtual SAN Overview


VMware's plan for software-defined storage is to focus on a set of VMware initiatives around local storage, shared storage and storage/data services. In essence, we want to make vSphere a platform for storage services. Software-defined storage aims to provide storage services and service-level-agreement automation through a software layer on the hosts that integrates with, and abstracts, the underlying hardware.

What is Virtual SAN


Virtual SAN (VSAN) is a new storage solution from VMware that is fully integrated with vSphere. It is an object-based storage system and a platform for VM Storage Policies that aims to simplify virtual machine storage placement decisions for vSphere administrators. It is fully integrated with core vSphere features such as vSphere High Availability, DRS and vMotion. Its goal is to provide both high availability and scale-out storage functionality. It can also be thought of in the context of Quality of Service (QoS), insofar as VM Storage Policies can be created that define the level of performance and availability required on a per-virtual-machine basis.

Virtual SAN (VSAN) is many things:

- A storage solution that is fully integrated with vSphere
- A platform for policy-based storage to simplify virtual machine deployment decisions
- A highly available clustered storage solution
- A scale-out storage system
- A Quality of Service implementation (for its storage objects)


Key Components

- Hypervisor-based software-defined storage
- Aggregates local HDDs to provide a clustered datastore for VM consumption
- Leverages local SSDs as a cache
- Distributed, object-based RAID (Redundant Array of Independent Disks) architecture with no single point of failure
- Policy-based VM storage management for end-to-end SLA enforcement
- Integrated with vCenter
- Integrated with vSphere HA, DRS and vMotion
- Scale-out storage: 3-8 nodes

Customer Benefits


VMware recognizes the significant cost of storage in many virtualization projects. Many projects stall or are canceled because the storage needed to meet the project's requirements simply becomes too expensive. Using a hybrid approach of SSD for performance and HDD for capacity, Virtual SAN (VSAN) is aimed at re-enabling projects that require a less expensive storage solution.

- Easy to set up, configure and manage
- Eliminates performance bottlenecks and single points of failure
- Lowers storage TCO

Primary Use Cases


Virtual SAN Requirements


The following section details the hardware and software requirements necessary to create a Virtual SAN (VSAN) cluster.

vCenter Server
Virtual SAN (VSAN) requires vCenter Server version 5.5. Virtual SAN can be managed by both the Windows version of vCenter Server and the vCenter Server Appliance (VCSA). Virtual SAN is configured and monitored via the vSphere Web Client, which also needs to be version 5.5.

vSphere
Virtual SAN (VSAN) requires at least 3 vSphere hosts (where each host has local storage) in order to form a supported Virtual SAN cluster. This allows the cluster to meet the minimum availability requirement of tolerating at least one component failure; with fewer hosts, there is a risk to the availability of virtual machines if a component is unavailable. The vSphere hosts must be running vSphere version 5.5 at a minimum. The maximum number of hosts supported in the initial release of Virtual SAN is 8. Each vSphere host in the cluster that contributes local storage to Virtual SAN must have at least one hard disk drive (HDD) and at least one solid state disk drive (SSD).
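The three-host minimum follows from how Virtual SAN places object components: tolerating n failures takes n + 1 data replicas plus n witness components for quorum, i.e. 2n + 1 hosts. A quick sketch of that rule (the 2n + 1 formula is the commonly documented one; the function name is ours):

```python
def min_hosts(failures_to_tolerate):
    """Minimum number of hosts needed to tolerate the given number
    of failures: n + 1 data replicas plus n witnesses = 2n + 1."""
    return 2 * failures_to_tolerate + 1

# Tolerating one failure requires 3 hosts, matching the cluster minimum.
print(min_hosts(1))  # 3
```

This is also why the supported cluster size starts at 3: anything smaller cannot keep a majority of components available through a single host failure.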

Disk & Network

- An HBA or a pass-thru RAID controller is required (a RAID controller which can present disks directly to the host without a RAID configuration).
- A combination of HDD and SSD devices is required: a minimum of 1 HDD and 1 SSD (SAS or SATA). VMware recommends a 1:10 ratio between SSD and HDD capacity. The SSD provides both a write buffer and a read cache; the more SSD capacity in the host, the greater the performance, since more I/O can be cached.
- Not every node in a Virtual SAN (VSAN) cluster needs to have local storage. Hosts with no local storage can still leverage the distributed datastore.
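The 1:10 SSD:HDD guideline above can be turned into a quick per-host sizing check. This is an illustrative sketch only (the function name and the GB examples are ours, not from the guide):

```python
def recommended_ssd_gb(hdd_capacity_gb, ratio=0.10):
    """Recommended flash capacity per host under VMware's
    1:10 SSD-to-HDD capacity guideline (10% of HDD capacity)."""
    return hdd_capacity_gb * ratio

# A host contributing 2000 GB of HDD capacity would want roughly
# 200 GB of SSD for its write buffer and read cache.
print(recommended_ssd_gb(2000))  # 200.0
```

Treat the ratio as a floor rather than a target: since the SSD is a cache, more flash generally means more cached I/O and better performance.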

- Each vSphere host must have at least one network interface card (NIC). The NIC must be at least 1 Gb capable; however, as a best practice, VMware recommends 10 Gb network interface cards.
- With a Distributed Switch, NIOC can also be enabled to dedicate bandwidth to the Virtual SAN network. A Distributed Switch can optionally be configured between all hosts in the Virtual SAN cluster, although VMware Standard Switches (VSS) will also work.
- A Virtual SAN VMkernel port must be configured on each host.

The VMkernel port is labeled Virtual SAN. This port is used for inter-node cluster communication, and also for reads and writes when one of the vSphere hosts in the cluster owns a particular virtual machine but the actual data blocks making up the virtual machine files are located on a different vSphere host in the cluster. In this case, I/O must traverse the network configured between the hosts in the cluster.


Module 1 Virtual SAN Setup and Enable


Setup of Virtual SAN Network and Enable Cluster


In this chapter we will add a VMkernel adapter to the esx-01a host for Virtual SAN use, enable Virtual SAN and create a disk group.

Easy Setup

Virtual SAN (VSAN) is configured in just a few clicks.

Setup Virtual SAN Network

Begin by launching the Firefox browser and logging in to the vSphere Web Client. We are using the Windows-based vCenter, so your login will be Administrator. FYI: Virtual SAN (VSAN) is supported on both the Windows and appliance versions of vCenter Server.

User name: Administrator
Password: VMware1!

Navigate from the Home to Hosts & Clusters

Double-click 'Hosts & Clusters'.

Navigate to esx-01a.corp.local

Expand the navigation on the left side and click esx-01a.corp.local


Add Virtual SAN Network

With esx-01a.corp.local selected, navigate to Manage -> Networking -> VMkernel adapters. We must now add a VMkernel adapter for the Virtual SAN traffic. Click the icon to add a new adapter.

Virtual SAN traffic

Select VMkernel Network Adapter. Click Next.


Select Target Device

We have already attached each host to a distributed switch and created a Virtual SAN port group. You must select the port group to use for this host. Click 'Browse'.

Select Network

Select the VSAN Network and click 'OK'.


Target Device Selected

After VSAN Network is selected, your screen should look like the above. Then click Next.

Specify Virtual SAN for Port Group

Keep the default settings, but select Virtual SAN traffic. Click Next.


Use IPv4 DHCP

Keep the default settings. Click Next.

Ready to complete

Click Finish if your screen looks like the above.


vmk3 VSAN Network Added

You should now see vmk3 added for the VSAN Network. A VMkernel adapter for Virtual SAN Traffic must be added to each host in the cluster. We have already repeated the above steps for esx-02a, esx-03a, and esx-04a for you. Feel free to click on each host to see the VSAN VMkernel adapter. If you don't add this to each host a "mis-configuration warning" will appear on the Virtual SAN General tab.

Enable Virtual SAN



Once our network adapters are in place, we can turn on Virtual SAN at the cluster level. Select Cluster Site A, then navigate to Manage > Settings > Virtual SAN > General > Edit.


Turn ON Virtual SAN

Check "Turn ON Virtual SAN", then click OK. We are going to keep "Manual" selected for this lab, which means we must add disks manually. The Automatic option would instead claim all empty disks on the hosts for Virtual SAN. IGNORE the license warning "You must assign a license key to the cluster before the evaluation period of Virtual SAN expires". A valid license has been applied for you.

Refresh

Click the Refresh icon to see the changes


All hosts participating in Virtual SAN

After the refresh you should see all 4 hosts in the Virtual SAN cluster, but no disks are yet in use.

Create Disk Group for Virtual SAN


From here we will create a new disk group that will use all eligible disks. Select Cluster Site A > Manage > Settings > Virtual SAN > Disk Management > Claim Disks.


Claim Disks for Virtual SAN Use

Click "Select all eligible disks". In this lab we will claim all free disks that meet the Virtual SAN (VSAN) rules. Note the rules that must apply for a host and disk to be seen on this page. Each disk group must contain at least one SSD. The SSD is used as a write cache/read buffer, and the HDDs are used as data disks for capacity. VMware recommends, as a best practice, a 1:10 ratio between SSD and HDD capacity.
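The 1:10 best-practice ratio is easy to check with simple arithmetic. A minimal sketch, assuming made-up disk sizes in GB (the function name and sizes are illustrative, not from this lab):

```python
# Sketch: check VMware's recommended ~1:10 SSD-to-HDD capacity ratio
# for a disk group. All sizes below are hypothetical GB values.

def ssd_ratio_ok(ssd_gb, hdd_gb_list, recommended=0.10):
    """True if flash capacity is at least ~10% of the HDD capacity it fronts."""
    return ssd_gb >= recommended * sum(hdd_gb_list)

print(ssd_ratio_ok(100, [300, 300, 400]))  # 100 GB SSD fronting 1000 GB HDD -> True
print(ssd_ratio_ok(50, [300, 300, 400]))   # 50 GB SSD fronting 1000 GB HDD  -> False
```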


Hosts and Disks Selected

All hosts and disks should now be selected. Click "OK". Note that you can select any combination of eligible hosts and disks to meet your requirements. In this lab we will take all unclaimed disks.

Task Begins

Recent Tasks will show work underway. Due to the number of hosts and disks selected, this process will take about 2 minutes to complete.


Refresh

After 2 minutes, refresh the web client to show the newly created disk groups. Congratulations, your Virtual SAN is enabled with multiple valid disk groups.


vsanDatastore


A vsanDatastore has also been created. To see the capacity, navigate to Datastores > Manage > Settings > vsanDatastore > General. Ignore the ds-site-nfs01 (inactive) message; this is a result of the lab environment, and you will find this datastore active. The capacity shown is an aggregate of the HDDs taken from each of the vSphere hosts in the cluster (less some vsanDatastore overhead). The SSD volumes are not considered when the capacity calculation is made.
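The capacity figure described above can be sketched as: sum the HDDs from every host, ignore SSDs, subtract some overhead. The disk sizes and the overhead fraction below are illustrative assumptions, not the lab's real numbers.

```python
# Sketch: how the vsanDatastore capacity is assembled per the text above.
# Sum HDD capacity across hosts, skip SSDs, subtract an (assumed) overhead.

def vsan_datastore_capacity_gb(hosts, overhead_fraction=0.01):
    hdd_total = sum(size_gb for disks in hosts.values()
                    for kind, size_gb in disks if kind == "hdd")
    return hdd_total * (1 - overhead_fraction)

hosts = {
    "esx-01a": [("ssd", 10), ("hdd", 40)],   # hypothetical sizes in GB
    "esx-02a": [("ssd", 10), ("hdd", 40)],
    "esx-03a": [("ssd", 10), ("hdd", 40)],
}
print(vsan_datastore_capacity_gb(hosts))  # 120 GB of HDD, roughly 118.8 GB usable
```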


Verify Storage Provider Status


For each vSphere host to be aware of the capabilities of Virtual SAN, and for vCenter to communicate with the storage layer, a Storage Provider is created. Each vSphere host has a storage provider once the Virtual SAN cluster is formed. The storage providers are registered automatically with the SMS (Storage Management Service) by vCenter. However, it is best to verify that the storage provider on one of the vSphere hosts has successfully registered and is active, and that the storage providers from the remaining vSphere hosts in the cluster are registered and in standby mode. Navigate to the vCenter server > Manage > Storage Providers to check the status. In this four-node cluster, one of the Virtual SAN providers is online and active, while the other three are in standby. Each vSphere host participating in the Virtual SAN cluster has a provider, but only one needs to be active to provide Virtual SAN datastore capability information. Should the active provider fail for some reason, one of the standby storage providers will take over.
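The active/standby behavior above can be sketched as a tiny state machine. This is a hypothetical model (vCenter's SMS does this automatically; the dict and function below are illustrative only):

```python
# Sketch: one storage provider is active; if it fails, a standby is promoted.
# Provider states are modeled as plain strings in a dict.

def failover(providers):
    """If no provider is active, promote the first standby."""
    if "active" not in providers.values():
        for host, state in providers.items():
            if state == "standby":
                providers[host] = "active"
                break
    return providers

providers = {"esx-01a": "active", "esx-02a": "standby",
             "esx-03a": "standby", "esx-04a": "standby"}
providers["esx-01a"] = "failed"          # the active provider goes away
print(failover(providers)["esx-02a"])    # -> active
```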


View Storage Policies

VM Storage Policies are similar in some respects to the vSphere 5.0 & 5.1 Profile Driven Storage feature. VM Storage Policies are enabled on vSphere 5.5 when you enable a Virtual SAN cluster. To begin, return to the Home screen and select VM Storage Policies.

Click VM Storage Policies


VM Storage Polices in vCenter

Click the "Enable VM Storage Policies per compute resource" icon (the icon with the check mark).

You will notice that VM Storage Policies are already enabled on Cluster Site A: a VSAN cluster, when enabled, turns on VM Storage Policies automatically. You can see this in the dialog, where host esx-05a.corp.local (not part of the Virtual SAN cluster) has a status of Unknown. Your screen should look like the above; you can close the window. The capabilities of the vsanDatastore should now be visible during VM Storage Policy creation. By using a subset of the capabilities, a vSphere admin will be able to create a storage policy for their VM to guarantee Quality of Service (QoS).

Create my first VM storage policy

You should be back at VM Storage Policies. Click the icon with the plus sign to create a new storage policy.

Create a new VM Storage Policy

In this example we walk through creating a new storage policy for a print server. In the Name field enter Print Server, then click Next to continue.

Rule-Sets

Spend a moment reading this page to learn about rule-sets. Click Next when ready

Create a Rule


Select VSAN from the capabilities list (1), then click "Add capability..." (2) to view the available capabilities. Click Cancel to exit the wizard. In Module 3 you will take a deeper dive into rule capabilities.

Default Virtual Machine Storage Policies

When you enable Virtual SAN, a VM Storage Policy with the following capabilities is created by default: Number of failures to tolerate = 1 and Force provisioning = 1. We will verify this behavior when doing a Storage vMotion.


Hosts and Clusters View

Click on the Home icon in the vSphere Client. Then click Hosts and Clusters on the Main Screen.

Default Storage Service Level during Storage vMotion

Select Cluster Site A and navigate to base-sles. Right-click base-sles and select the Migrate option.

Select Migration Type

Select the Change Datastore option and click Next.

Move the VM to a Virtual SAN Datastore

Select None from the VM Storage Policy dropdown. Pick the vsanDatastore from the compatible datastores and click Next.

Review Selection

Review the changes being made and click 'Finish' to migrate the VM.


Verification

The Storage vMotion will take a few minutes to complete. On the Summary page of the base-sles VM you will notice that the Storage Policy is blank and that the VM now resides on the vsanDatastore. Now let's verify that the default VM Storage Policy is applied to the objects belonging to base-sles on the vsanDatastore.


Physical Disk Placement



As a final step, you might be interested in seeing how your virtual machine's objects have been placed on the vsanDatastore. To view the placement, select base-sles > Manage > VM Storage Policies > Hard disk 1. The Physical Disk Placement view shows on which hosts the components of your objects reside. The RAID 1 entry indicates that the VMDK has a replica. By default, any VM deployed to the vsanDatastore is mirrored for availability, as the default Virtual SAN VM Storage Policy is enforced.
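The mirroring described above follows the standard quorum arithmetic: tolerating n failures means n+1 replicas plus witnesses, spread so a majority of components survives any n host failures. A one-line sketch of that rule (the function is illustrative, not a VSAN API):

```python
# Sketch: minimum hosts for a mirrored object with witnesses, per the
# standard "majority must survive n failures" quorum rule.

def hosts_required(failures_to_tolerate):
    return 2 * failures_to_tolerate + 1

print(hosts_required(1))  # default policy: 2 replicas + 1 witness on 3 hosts -> 3
print(hosts_required(2))  # -> 5
```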


Storage vMotion from a Virtual SAN datastore

In preparation for Module 2, we will Storage vMotion the virtual machine base-sles back to the NFS datastore.

Select Cluster Site A and navigate to base-sles. Right-click base-sles and select the Migrate option.

Select Migration Type

Select the Change Datastore option and click Next


Move the VM to a non Virtual SAN datastore

Keep the default virtual disk format and the "Keep existing VM storage policies" option for the VM Storage Policy. Pick ds-site-a-nfs01 from the list of datastores and click Next.


Useful Virtual SAN CLI Commands


In this lesson, we will provide some useful commands to use with Virtual SAN. Feel free to follow along. Do note that if you run any commands outside the scope of this lesson, you could potentially have an adverse effect on the lab and may not be able to continue with any remaining modules. The esxcli commands included with vsan are:

vsan datastore
vsan network
vsan storage
vsan cluster
vsan policy

Open PuTTY

From the Desktop, open PuTTY.


ssh to esx-01a.corp.local

Double-click on esx-01a.

Login to esx-01a.corp.local
Using username "root".
Using keyboard-interactive authentication.
Password:
The time and date of this login have been sent to the system logs.

VMware offers supported, powerful system administration tools.
Please see www.vmware.com/go/sysadmintools for details.

The ESXi Shell can be disabled by an administrative user.
See the vSphere Security documentation for more information.
~ #

Log in to esx-01a as root with the password VMware1!


vsan Commands
~ # esxcli vsan
Usage: esxcli vsan {cmd} [cmd options]

Available Namespaces:
  datastore        Commands for VSAN datastore configuration
  network          Commands for VSAN host network configuration
  storage          Commands for VSAN physical storage configuration
  cluster          Commands for VSAN host cluster configuration
  maintenancemode  Commands for VSAN maintenance mode operation
  policy           Commands for VSAN storage policy configuration
  trace            Commands for VSAN trace configuration

Typing esxcli vsan gives you a list of all the possible esxcli commands related to vsan, with a brief description of each.

vsan cluster
~ # esxcli vsan cluster get
Cluster Information
   Enabled: true
   Current Local Time: 2013-09-18T09:55:40Z
   Local Node UUID: 5228df36-776b-505a-35cd-005056808f33
   Local Node State: AGENT
   Local Node Health State: HEALTHY
   Sub-Cluster Master UUID: 52290240-9add-3201-0a17-00505680ff72
   Sub-Cluster Backup UUID: 5228efe9-3da8-ff3b-44d7-0050568033b1
   Sub-Cluster UUID: 52d1c8ca-c7b4-8853-d6f4-159265c9554e
   Sub-Cluster Membership Entry Revision: 8
   Sub-Cluster Member UUIDs: 52290240-9add-3201-0a17-00505680ff72, 5228efe9-3da8-ff3b-44d7-0050568033b1, 5228df36-776b-505a-35cd-005056808f33, 5228eece-e9ba-0af2-8616-005056809b63, 5228f336-8733-e2d9-0ea5-00505680d045
   Sub-Cluster Membership UUID: fb582f52-71e8-f226-b5a7-00505680ff72

To view details about the Virtual SAN Cluster, like its Health or whether it is a Master or Backup Node, you can type the following: esxcli vsan cluster get


vsan network
~ # esxcli vsan network list
Interface
   VmkNic Name: vmk3
   IP Protocol: IPv4
   Interface UUID: e5072952-1cc0-ee9c-b96f-005056808f33
   Agent Group Multicast Address: 224.2.3.4
   Agent Group Multicast Port: 23451
   Master Group Multicast Address: 224.1.2.3
   Master Group Multicast Port: 12345
   Multicast TTL: 5

To view networking details, you can execute this command: esxcli vsan network list

vsan storage
~ # esxcli vsan storage list
mpx.vmhba2:C0:T1:L0
   Device: mpx.vmhba2:C0:T1:L0
   Display Name: mpx.vmhba2:C1:T0:L0
   Is SSD: false
   VSAN UUID: 523c0dc6-9744-c275-ef38-f195d5c22682
   VSAN Disk Group UUID: 52777487-f70a-0af3-198e-9ffc747ab13b
   VSAN Disk Group Name: mpx.vmhba2:C1:T0:L0
   Used by this host: true
   In CMMDS: true
   Checksum: 14554848699992102318
   Checksum OK: true

mpx.vmhba2:C0:T0:L0
   Device: mpx.vmhba2:C0:T0:L0
   Display Name: mpx.vmhba2:C0:T0:L0
   Is SSD: true
   VSAN UUID: 52777487-f70a-0af3-198e-9ffc747ab13b
   VSAN Disk Group UUID: 52777487-f70a-0af3-198e-9ffc747ab13b
   VSAN Disk Group Name: mpx.vmhba2:C0:T0:L0
   Used by this host: true
   In CMMDS: true
   Checksum: 654352745454525052
   Checksum OK: true

To view the details on the physical storage devices on this host that are part of the Virtual SAN, you can use this command: esxcli vsan storage list
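Notice in the listing that the HDD's "VSAN Disk Group UUID" matches the SSD's own "VSAN UUID" — the SSD anchors the disk group. A minimal grouping of that data (the dict layout is an illustrative model of the output, not a parser of the real command):

```python
# Sketch: group the disks from the listing above by disk-group UUID and
# confirm the group is keyed by its SSD. UUIDs copied from the output above.

disks = [
    {"device": "mpx.vmhba2:C0:T1:L0", "is_ssd": False,
     "uuid": "523c0dc6-9744-c275-ef38-f195d5c22682",
     "group": "52777487-f70a-0af3-198e-9ffc747ab13b"},
    {"device": "mpx.vmhba2:C0:T0:L0", "is_ssd": True,
     "uuid": "52777487-f70a-0af3-198e-9ffc747ab13b",
     "group": "52777487-f70a-0af3-198e-9ffc747ab13b"},
]

groups = {}
for disk in disks:
    groups.setdefault(disk["group"], []).append(disk)

for group_uuid, members in groups.items():
    ssds = [d for d in members if d["is_ssd"]]
    assert len(ssds) == 1                  # each disk group contains exactly one SSD
    assert ssds[0]["uuid"] == group_uuid   # ...and the SSD's UUID names the group

print(len(groups))  # -> 1 disk group on this host
```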


vsan policy
~ # esxcli vsan policy getdefault
Policy Class  Policy Value
------------  --------------------------------------------------------
cluster       (("hostFailuresToTolerate" i1) ("forceProvisioning" i1))
vdisk         (("hostFailuresToTolerate" i1) ("forceProvisioning" i1))
vmnamespace   (("hostFailuresToTolerate" i1) ("forceProvisioning" i1))
vmswap        (("hostFailuresToTolerate" i1) ("forceProvisioning" i1))

To view the default policies in effect, such as how many failures the VSAN can tolerate, you can execute this command: esxcli vsan policy getdefault
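The policy values are printed in a small S-expression-like form, `(("name" iN) ...)`. A quick, hypothetical parse of that text with a regex (this is a convenience sketch for reading the output, not an official VMware API):

```python
# Sketch: pull the integer settings out of a default-policy string in the
# (("name" iN) ...) form shown in the output above.
import re

def parse_policy(policy_str):
    return {name: int(value)
            for name, value in re.findall(r'\("(\w+)" i(\d+)\)', policy_str)}

default = '(("hostFailuresToTolerate" i1) ("forceProvisioning" i1))'
print(parse_policy(default))  # -> {'hostFailuresToTolerate': 1, 'forceProvisioning': 1}
```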

Conclusion
This concludes Module 1 Virtual SAN Setup and Enable


Module 2 Virtual SAN with vMotion, Storage vMotion and HA Interoperability


Build VM Storage Policies


In this chapter we will create a storage policy for Tier 2 Apps.

Enable Storage Policies

VM Storage Policies are similar in some respects to the vSphere 5.0 & 5.1 Profile Driven Storage feature.

To begin, return to the Home screen and select VM Storage Policies.

Click VM Storage Policies


Create VM storage policy

You should be back at VM Storage Policies. Click the icon with the plus sign to create a new storage policy.

Create a new rule for Tier 2 Apps

In this example we will create a new storage policy for our Tier 2 Apps. In the Name field enter Tier 2 Apps, and for the description enter "Storage Policy for Tier 2 Apps". Click Next to continue.


Rule-Sets

Rule-sets are a way of using storage from different vendors. For example, you can have a single bronze policy with one Virtual SAN Rule-Set and one 3rd-party storage vendor Rule-Set. When Tier 2 Apps is chosen as the storage service level at VM deployment time, both Virtual SAN and the 3rd-party storage will match the requirements in the policy. We already looked briefly at rule-sets in Module 1. Click Next when ready.


Create a Rule on Number of Failures tolerate

I want the VMs which have this policy associated with them to tolerate at least one component failure (host, network or disk). Select VSAN from the capabilities list (1), then select Number of failures to tolerate (2).

How many failures to tolerate?

The Number of failures to tolerate field appears. Enter 1 and click Next.


Matching Resources

The nice thing about this is that I can immediately tell, in the Matching Resources window, whether or not any datastores are capable of understanding the requirements. As you can see, vsanDatastore is capable of understanding the requirements that I have placed in the VM Storage Policy. Note that there is no guarantee that the datastore can meet the requirements in the VM Storage Policy; it simply means that the requirements in the storage policy can be understood by the datastores which show up in the matching resources. Click Next.

Ready to complete

Review the rules added and click Finish


Tier 2 Apps Rule Ready

This is where we start to define the requirements for our VMs and the applications running in the VMs. Now we simply tell the storage layer what our requirements are by selecting the appropriate VM Storage Policy during VM deployment and the storage layer takes care of deploying the VM in such a way that it meets those requirements.


vMotion & Storage vMotion


In this chapter we will examine the interoperability of Virtual SAN with core vSphere features such as vMotion & Storage vMotion. You will power on a virtual machine called base-sles which resides on host esx-01a.corp.local. This is a very small virtual machine, but it will be sufficient for the purposes of this lab.

Virtual SAN Interoperability

Supported
  VM Snapshots
  vSphere HA
  vSphere DRS
  vMotion
  Storage vMotion
  SRM/VR
  VDP/VDPA

Not Applicable
  SIOC
  Storage DRS
  Fault Tolerance (FT)
  vSphere Flash Read Cache

Futures
  Horizon View
  vCloud Director
  >2TB VMDKs

Virtual SAN is fully integrated with many of VMware's storage and availability features. In this module we will turn on HA and use vMotion, but you will notice that many other availability features are supported. SIOC is not applicable because Virtual SAN takes its performance requirements from policy settings. Storage DRS is not applicable because Virtual SAN (VSAN) presents a single datastore. DPM may include hosts in a VSAN cluster, so we do not want to power off hosts, as that may impact the storage policy.


Storage vMotion from NFS to vsanDatastore

Navigate to Hosts & Clusters > base-sles > Summary.

Migrate base-sles VM

Right-Click base-sles and select Migrate


Change datastore

Choose the option to Change datastore. Click "Next".

Select vsanDatastore

In the VM Storage Policy drop-down, select Tier 2 Apps. Based on the storage policy, the disk format and destination datastore will be selected. Click Next.


Review and Finish

Click Finish.

Storage vMotion underway

In this example the migration will take about 2 minutes to complete.


Review the new destination

Notice in the Summary screen the Storage Policy is now compliant and applied against the vsanDatastore. This demonstrates that you can migrate from traditional datastore formats such as NFS & VMFS to the new vsanDatastore format.

vMotion from host with local storage to host without local storage


Now let's take a look at hosts which are in the Virtual SAN cluster but do not have any local storage. These hosts can still use the vsanDatastore to host VMs. At this point, the virtual machine base-sles resides on the vsanDatastore. The VM is currently on a host that contributes local storage to the vsanDatastore (esx-01a.corp.local). We will now move it to a host (esx-04a.corp.local) that does not contribute any local storage. Once again select the base-sles virtual machine from the inventory. From the Actions drop-down menu, once again select Migrate. This time we choose the option to Change host.

Change Host

Select Change Host. Click Next.

Allow Host Selection


Select Cluster Site A (1) and check "Allow host selection within this Cluster" at the bottom of the screen. Click Next.

Select esx-04 Host

Select esx-04a.corp.local. Click Next, then Finish. When the migration has completed, you will see how hosts that do not contribute any local storage to the vsanDatastore can still run virtual machines. This means that Virtual SAN can be scaled out on a compute-only basis.


Migrate VM back to esx-01a

To complete this chapter, migrate the VM back to esx-01a.corp.local, which has local storage making up part of the Virtual SAN datastore. Follow the steps above and then click Finish.


vSphere HA and Virtual SAN Interoperability


In this final step we will provide details on how to evaluate Virtual SAN with vSphere HA.

vSphere HA & Virtual SAN Interoperability

First, let's examine the object layout of the base-sles virtual machine. Select base-sles > Manage > VM Storage Policies > VM Home. This storage object has 3 components, two of which are replicas making up a RAID-1 mirror. The third is a witness disk that is used for tie-breaking. The next object is the disk, which you looked at in Module 1. Just to recap, this has "Number of Disk Stripes Per Object" set to 2; therefore there is a RAID-0 stripe component across two disks. To mirror an object with a stripe width of 2, four disks are required. Again, since Number of Failures to Tolerate is set to 1, there is also a RAID-1 configuration to replicate the stripe. So we have two RAID-0 (stripe) configurations, and a RAID-1 to mirror the stripes. The witnesses are once again used for tie-breaking in the event of failures. The next step is to invoke some failures in the cluster to see how this impacts the components that make up our virtual machine storage objects, and also how Virtual SAN & vSphere HA interoperate to enable availability.
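The component arithmetic above can be sketched in two lines: each of the FTT+1 mirrors is a RAID-0 of "stripe width" components, so the data component count is their product (witnesses come on top and vary, so they are left out of this illustrative function):

```python
# Sketch: data-component math for the layout described above.
# FTT = 1 and stripe width = 2 gives two 2-disk stripes, mirrored.

def data_components(failures_to_tolerate, stripe_width):
    replicas = failures_to_tolerate + 1   # RAID-1 mirrors
    return replicas * stripe_width        # RAID-0 stripe components per mirror

print(data_components(1, 2))  # -> 4 (four disks needed, plus witnesses)
print(data_components(1, 1))  # -> 2 (a simple mirror, plus a witness)
```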


Enable HA on the cluster

Navigate to Cluster Site A > Manage > Settings > vSphere HA > Edit.

Turn ON vSphere HA

Check the box to Turn ON vSphere HA. Click OK. By default, the vSphere HA Admission Controls have been set to tolerate a single host failure. You can examine this if you wish by opening the Admission Control settings to verify.


HA Enabled

Select Cluster Site A > Summary. After enabling HA you will see a warning about insufficient resources to satisfy the vSphere HA failover level. This is a transient warning and will go away after a few moments, once the HA cluster has finished configuring. You can try refreshing to remove it. The cluster Summary tab will show a vSphere HA overview (3).


Host Failure No running VMs

In this first failure scenario we will take one of the hosts out of the cluster. This host does not have any running VMs, but we will use it to examine how the Virtual SAN (VSAN) replicas provide continuous availability for the VM, and how the Admission Control setting in vSphere HA and the Number of Failures to Tolerate are met. Select esx-02a.corp.local > Reboot.


Reboot esx-02a

Click OK.

esx-02 Host Failure

Navigate to esx-02a.corp.local > Summary. In a short time we see warnings and errors related to the fact that vCenter can no longer reach the HA agent, and then we see errors related to host connection and power status.


Other hosts in Virtual SAN cluster status

If we check on other hosts in the cluster, we see VSAN communication warnings. Navigate to esx-01a.corp.local > Summary.

Check base-sles VM Home


Navigate to base-sles > Manage > VM Storage Policies > VM Home.

With one host out of the cluster, object components that were held on that host are displayed as Absent and Object not found.

Check base-sles Hard Disk 1

For base-sles, take a look at Hard disk 1. Basically, any components on the rebooted host show up as Absent. When the host rejoins the cluster, all components are resynchronized and put back in the Active state once the resync completes. A bitmap of blocks that have changed between replicas is maintained, and this is referenced to resynchronize the components. Now we can see one part of Virtual SAN availability: how virtual machines continue to run even if components go absent.

Host Failure with Running VMs


Wait for the host to finish rebooting from the previous test and for all components to show up as Active before continuing. Remember that we have only set Number of Failures to Tolerate to 1. In this next example we will halt the vSphere host that contains the running VM base-sles. Here we will see interoperability between HA and Virtual SAN.


Start VM

Navigate to the base-sles Summary page and right-click to Power On.

Identify Host

Once started make a note of the host it's running on. In this case it's esx-04a.corp.local. If you completed the migration step earlier it may show esx-01a.


VM Storage Policies

Navigate to base-sles > Manage > VM Storage Policies > Hard disk 1. We can see which host is acting as Witness and which hosts provide the RAID 1 components. Just for fun, we will vMotion the VM to a RAID 1 component host and halt that host.

vMotion to esx-03a


If the VM is already running on esx-03a.corp.local (a host which also has a RAID 1 component) you can skip this step; otherwise migrate it to esx-03a.corp.local: right-click base-sles > Migrate.

Select esx-03 as the host

Complete the Wizard to Migrate the VM to esx-03a.corp.local. Remember to check "Allow host selection within this cluster" on Step 2


Confirm esx-03a.corp.local

Navigate back to the Summary screen to confirm that esx-03a.corp.local is the host.

Reboot esx-03a

Right-Click esx-03a host and Reboot


Host Status

The Host and VM will become unreachable.


base-sles Status


Navigate to base-sles > Manage > VM Storage Policies > Hard disk 1. Depending on how quickly you navigate in the browser, you may notice that base-sles is disconnected and the RAID 1 component is Absent. Why is the RAID 1 component termed Absent? In this particular failure scenario, i.e. a host failure, Virtual SAN identifies which objects are out of compliance and starts a timer with a timeout period of 60 minutes. If the component, in this case a mirror, comes back within 60 minutes, any differences will be synchronized and the object, in this case the VM Home or Hard disk, will return to compliance. If the component does not return within 60 minutes, Virtual SAN will create a new mirror copy.
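The 60-minute decision above boils down to a single threshold check. A minimal sketch (the function and its return strings are illustrative, not VSAN internals):

```python
# Sketch of the repair decision described above: a component that goes
# absent starts a 60-minute clock. Back in time -> resync the deltas;
# away longer -> rebuild a new mirror elsewhere.

REPAIR_DELAY_MINUTES = 60

def repair_action(absent_minutes):
    if absent_minutes < REPAIR_DELAY_MINUTES:
        return "resynchronize deltas"
    return "rebuild new mirror copy"

print(repair_action(10))  # host back after a reboot -> 'resynchronize deltas'
print(repair_action(90))  # prolonged failure -> 'rebuild new mirror copy'
```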


Refresh

Refresh and you will soon see a change to the RAID 1 components and that base-sles is available again. Alarms will also be generated, and a warning will be found on the Summary page of each host (1): "The vSphere HA agent on this host cannot reach some of the management network addresses of other hosts... Host cannot communicate with all other nodes in the VSAN enabled cluster".


base-sles has restarted on another host



You will notice that HA has kicked in and restarted base-sles on another host, esx-04a.corp.local. Navigate to base-sles > Summary.

Quorum to run the VM

Finally, navigate to Datastores > vsanDatastore > Manage > Settings. Notice that the halted host is no longer responding under Disk Groups. If the failure persists for longer than 60 minutes, the components will be rebuilt on the remaining disks in the cluster.

Conclusion
This concludes Module 2 Virtual SAN with vMotion, Storage vMotion and HA Interoperability


Module 3 Virtual SAN Storage Level Agility


Setting up our environment

From the Main screen, select the Home tab, and then click Hosts and Clusters. We will carry out the following steps to prepare our lab environment for additional exercises later. As Cluster Site A has DRS set to Partially Automated due to lab requirements, we will have to migrate any VMs manually to ensure the host can enter maintenance mode. NOTE, however, that vSphere DRS is fully supported with Virtual SAN (VSAN). If no VM is running on esx-04a.corp.local you can skip this step; otherwise right-click your VM (in this case base-sles) > Migrate and complete the wizard to migrate the VM to esx-03a.corp.local. Remember to check "Allow host selection within this cluster".


Enter into Maintenance Mode


1. Put the vSphere host esx-04a.corp.local into Maintenance Mode. Right-click the vSphere host esx-04a.corp.local and select the option Enter Maintenance Mode. You may have to de-select "Move powered-off and suspended virtual machines to other hosts in the cluster".

2. Notice the message pertaining to the maintenance mode request, as the host esx-04a.corp.local is part of a Virtual SAN enabled cluster. Click OK.


Since the host esx-04a.corp.local is part of a DRS cluster, you will see a warning popup. 3. Click OK.

Moving a vSphere Host out of the cluster

1. Move the vSphere host esx-04a.corp.local out of the cluster. We can use a drag-and-drop operation for this. 2. Select the vSphere host esx-04a.corp.local and drag it onto Datacenter Site A.


Defining your VM Storage Policies


This lesson will walk through defining storage policies for your virtual machines.

Decisions when creating a VM Storage Policy

Storage Policies

To begin, from the Home screen select VM Storage Policies. By default, when you enable Virtual SAN (VSAN) on a cluster, VM Storage Policies are automatically enabled. By using a subset of the capabilities, a vSphere admin will be able to create a storage policy for their VM to guarantee Quality of Service (QoS).


Creating a VM Storage Policy (1)

You should be back at VM Storage Policies. Click the icon with the plus sign to create a new storage policy. This icon represents Create New VM Storage Policy.

Creating a VM Storage Policy (2)

Give the VM Storage Policy a name. Enter VDI-Desktops as the name, and enter "VM Storage Policy for VDI Desktops" as the description. Click Next to continue.

LAB GUIDE /79

Virtual SAN Hosted Beta

Creating a VM Storage Policy (3)

Next we get a description of rule-sets. Rule-sets are a way of using storage from different vendors; for example, you can have a single bronze policy with one VSAN Rule-Set and one 3rd-party storage vendor Rule-Set. When bronze is chosen as the storage service level at VM deployment time, both VSAN and the 3rd-party storage will match the requirements in the policy. Spend a moment reading this page to learn more about rule-sets. Click Next when ready.

Creating a VM Storage Policy (4)


The next step is to select a subset of the vendor-specific capabilities. To begin, you need to select the vendor, in this case VSAN. Select Number of failures to tolerate.

Creating a VM Storage Policy (5)

The next step is to add the capabilities required for the virtual machines that you wish to deploy in your environment. In this particular example, I wish to specify an availability requirement. In this case, I want the VMs which have this policy associated with them to be tolerant of at least one component failure (host, network or disk). Click "Next".

Creating a VM Storage Policy (6)

The nice thing about this is that I can immediately tell, in the matching resources window, whether or not any datastores are capable of understanding the requirements. As you can see, my vsanDatastore is capable of understanding the requirements that I have placed in the VM Storage Policy. Note that this is no guarantee that the datastore can meet the requirements in the storage service level.

It simply means that the requirements in the VM Storage Policy can be understood by the datastores which show up in the matching resources. This is where we start to define the requirements for our VMs and the applications running in the VMs. Now we simply tell the storage layer what the requirements are by selecting the appropriate VM Storage Policy during VM deployment, and the storage layer takes care of deploying the VM in such a way that it meets those requirements. Click Next, then click Finish once you have reviewed the rules.

Creating a VM Storage Policy (7)

Complete the creation of the VM Storage Policy. This new policy should now appear in the list of VM Storage Policies.

Create a Virtual Machine and apply VM Storage Policy

Create a virtual machine which uses the VDI-Desktops policy created earlier. Right-click a vSphere host in the cluster and select New Virtual Machine. Give the VM a name, e.g. Windows 2008.

Create a Virtual Machine and apply VM Storage Policy (2)

When it comes to selecting storage, you can now specify a VM Storage Policy (in this case VDI-Desktops). This will show that vsanDatastore is Compatible as a storage device, meaning once again that it understands the requirements placed in the storage policy. It does not mean that the vsanDatastore will implicitly be able to accommodate the requirements just that it understands them. This is an important point to understand about Virtual SAN (VSAN).


Create a Virtual Machine and apply VM Storage Policy (3)

Continue with the creation of this virtual machine, selecting the defaults for the remaining steps, including compatibility with vSphere 5.5 and later and Windows 2008 R2 (64-bit) as the Guest OS. When you get to the 2f. Customize hardware step, in the Virtual Hardware tab, expand the New Hard Disk virtual hardware and you will see the storage service level set to VDI-Desktops. Reduce the Memory to 512 MB. Reduce the Hard Disk size to 1 GB in order for it to be replicated across hosts (the default size is 40 GB; we want to reduce this as this is a small lab environment, but needless to say it will work just fine in a physical environment). Click Next and click Finish.

Create a Virtual Machine and apply VM Storage Policy (4)

Complete the wizard. When the VM is created, look at its Summary tab and check the compliance state in the VM Storage Policies window. It should say Compliant with a green check mark.

View Physical Disk Placement of the VM

As a final step, you might be interested in seeing how your virtual machine's objects have been placed on the vsanDatastore. To view the placement, select your Virtual Machine > Manage > VM Storage Policies. If you select one of your objects, the Physical Disk Placement view will show you on which hosts the components of your object reside, as shown in the example. The RAID 1 indicates that the VMDK has a replica; this satisfies the Number of Failures To Tolerate value of 1 set in the policy, so we can continue to run if there is a single failure in the cluster. The witness is there to act as a tiebreaker: if one host fails and one component is lost, the witness allows a quorum of storage objects to still reside in the cluster. Notice that all three components are on different hosts for this exact reason. At this point, we have successfully deployed a virtual machine with a level of availability that can be used as the base image for our VDI desktops. Examining the layout of the object above, we can see that Virtual SAN has put a RAID 1 configuration in place, placing each replica on a different host. This means that in the event of a host, disk or network failure on one of the hosts, the virtual machine will still be available.
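The quorum arithmetic described above can be sketched in a few lines of Python. This is purely illustrative, not VSAN code; the host names are just the lab's, and real VSAN votes per component rather than per object.

```python
# Illustrative sketch of the VSAN quorum rule for an FTT=1 object:
# two replicas plus a witness, each on a different host. The object
# stays accessible while more than half of its components survive.

components = {"replica-1": "esx-01a", "replica-2": "esx-02a", "witness": "esx-03a"}

def accessible_after_host_failure(components, failed_host):
    """True if a majority of components survives the failure of one host."""
    surviving = [c for c, host in components.items() if host != failed_host]
    return len(surviving) > len(components) / 2

# Any single host failure leaves 2 of 3 components, so the object survives.
assert all(accessible_after_host_failure(components, h)
           for h in ("esx-01a", "esx-02a", "esx-03a"))
```

Without the witness there would be only two components: losing either host would leave exactly half, which is not a majority, and the object would become inaccessible.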
Understanding the storage requirements of a VM


In this lesson, we will walk through the Storage requirements of a VM.

Overview of the Capabilities of a VM Storage Policy

When you use Virtual SAN, you can define virtual machine storage requirements, such as performance and availability, in the form of a policy. The policy requirements are then pushed down to the Virtual SAN layer when a virtual machine is being created, and the virtual disk is distributed across the Virtual SAN datastore to meet those requirements. When you enable Virtual SAN on a host cluster, a single Virtual SAN datastore is created. In addition, enabling Virtual SAN configures and registers the Virtual SAN storage provider, which uses VASA to communicate a set of datastore capabilities to vCenter Server. When you know the storage requirements of your virtual machines, you can create a storage policy referencing the capabilities that the datastore advertises. You can create several policies to capture different types or classes of requirements.

Understanding VM Storage Policies


Number of Failures To Tolerate: Defines the number of host, disk or network failures a storage object can tolerate. For n failures tolerated, "n+1" copies of the object are created and "2n+1" hosts contributing storage are required. Default value: 1, Maximum value: 3. This capability sets a requirement on the storage object to tolerate at least Number of Failures To Tolerate concurrent host, network or disk failures in the cluster while still ensuring the availability of the object. If this property is populated, it specifies that configurations must contain at least Number of Failures To Tolerate + 1 replicas and may also contain an additional number of witnesses to ensure that the object's data remains available (maintains quorum). Witness disks provide a quorum when failures occur in the cluster or when a decision has to be made in a split-brain situation. One aspect worth noting is that any disk failure on a single host is treated as a failure for this metric. Therefore the object cannot persist if there is a disk failure on host A and a host failure on host B when Number of Failures To Tolerate is set to 1.

Number of Disk Stripes Per Object: The number of HDDs across which each replica of a storage object is striped. A value higher than 1 may result in better performance (e.g. when flash read cache misses need to be serviced from HDD), but also results in higher use of system resources. Default value: 1, Maximum value: 12. To understand the impact of stripe width, let us examine it first in the context of write operations and then in the context of read operations. Since all writes go to SSD (the write buffer), an increased stripe width may or may not improve performance. This is because there is no guarantee that the new stripe will use a different SSD; the new stripe may be placed on an HDD in the same disk group, in which case it will use the same SSD. The only occasion where an increased stripe width could add value for writes is when a large number of writes need to be destaged from SSD to HDD. From a read perspective, an increased stripe width will help when you are experiencing many read cache misses. Take the example of a virtual machine consuming 2,000 read operations per second with a hit rate of 90%: 200 read operations must then be serviced from HDD. In this case a single HDD, which can provide 150 IOPS, cannot service all of those read operations, so an increase in stripe width would help to meet the virtual machine's I/O requirements. In general, the default stripe width of 1 should meet most, if not all, virtual machine workloads. Stripe width is a capability that should only be changed when write destaging or read cache misses are identified as a performance constraint.
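The read-cache-miss arithmetic above can be captured in a short Python sketch. It is illustrative only; the 150 IOPS per HDD figure is the assumption the text uses, not a fixed property of all disks.

```python
import math

def required_stripe_width(read_iops, cache_hit_rate, hdd_iops=150):
    """HDDs needed to absorb the reads that miss the flash read cache."""
    misses = read_iops * (1 - cache_hit_rate)   # reads/sec served from HDD
    return max(1, math.ceil(misses / hdd_iops))

# The example from the text: 2,000 read IOPS at a 90% hit rate leaves
# ~200 reads/sec for HDD; one 150-IOPS disk is not enough, two are.
assert required_stripe_width(2000, 0.90) == 2
```

The same function shows why the default of 1 usually suffices: at a 95% hit rate the same VM generates only ~100 HDD reads per second, well within a single disk's capability.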

Flash Read Cache Reservation: Flash capacity reserved as read cache for the storage object, specified as a percentage of the logical size of the object. To be used only for addressing read performance issues. Reserved flash capacity cannot be used by other objects; unreserved flash is shared fairly among all objects. Default value: 0%, Maximum value: 100%. The reservation is specified as a percentage of the logical size of the storage object (i.e. the VMDK), with up to 4 decimal places. This fine-grained unit size is needed so that administrators can express sub-1% units. Take the example of a 1TB disk: if we limited the read cache reservation to 1% increments, cache would be reserved in increments of roughly 10GB, which in most cases is far too much for a single virtual machine. Note: You do not have to set a reservation in order to get cache. All virtual machines equally share the read cache of an SSD. The reservation should be left at 0 (the default) unless you are trying to solve a real performance problem and you believe dedicating read cache is the solution. In the initial version of Virtual SAN, there is no proportional share mechanism for this resource.
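The granularity argument is simple arithmetic; this illustrative Python sketch shows why sub-1% values matter for large VMDKs.

```python
def cache_reservation_bytes(object_size_bytes, reservation_pct):
    """Flash read cache reserved for an object, given a percentage that
    the UI accepts with up to four decimal places."""
    return object_size_bytes * reservation_pct / 100

TB = 1024**4
GB = 1024**3

# A whole 1% of a 1TB VMDK pins ~10GB of flash, which is why fractions
# such as 0.05% (about 0.5GB here) are allowed.
assert abs(cache_reservation_bytes(1 * TB, 1) / GB - 10.24) < 1e-9
```

A 0.05% reservation on the same 1TB object works out to roughly 0.5GB, a far more typical per-VM amount.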

Object Space Reservation: Percentage of the logical size of the storage object that will be reserved (thick provisioned) upon VM provisioning; the rest of the storage object is thin provisioned. Default value: 0%, Maximum value: 100%. All objects deployed on VSAN are thinly provisioned by default. The Object Space Reservation is the amount of space to reserve, specified as a percentage of the total object address space, and is the property used to specify a thick provisioned storage object. If Object Space Reservation is set to 100%, all of the storage capacity requirements of the VM are reserved up front (thick).

Force Provisioning: If this option is set to Yes, the object will be provisioned even if the policy specified in the storage policy is not satisfiable with the resources currently available in the cluster. VSAN will try to bring the object into compliance if and when resources become available. Default value: No. If this parameter is set to Yes, the object will be provisioned even if the policy specified in the VM Storage Policy is not satisfied by the datastore. The virtual machine will be shown as non-compliant in the VM Summary tab and in the relevant VM Storage Policy views in the UI. When additional resources become available in the cluster, VSAN will bring this object to a compliant state. However, if there is not enough space in the cluster to satisfy the reservation requirements of at least one replica, provisioning will fail even if Force Provisioning is turned on.
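The thick/thin split defined by Object Space Reservation is likewise just a percentage of the object's logical size, as this minimal sketch (illustrative only) shows:

```python
GB = 1024**3

def reserved_capacity_bytes(vmdk_size_bytes, object_space_reservation_pct):
    """Capacity thick-provisioned up front; the remainder stays thin."""
    return vmdk_size_bytes * object_space_reservation_pct / 100

# Default 0% leaves the whole object thin; 100% reserves the full
# logical size (e.g. an entire 40GB VMDK) at provisioning time.
assert reserved_capacity_bytes(40 * GB, 0) == 0
assert reserved_capacity_bytes(40 * GB, 100) == 40 * GB
```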

Modify VM Storage Policies (1)


Scenario: Customer notices that the VM deployed with the VDI-Desktops policy is getting a 90% read cache hit rate, which implies that 10% of reads must be serviced from HDD. At peak times, this VM does 3,000 IOPS, so 300 reads must be serviced from HDD. The specifications of the HDDs indicate that each disk can do 150 IOPS, meaning a single disk cannot service these additional 300 IOPS. Meeting the I/O requirements of the VM therefore requires a stripe width of two disks.

Modify VM Storage Policies (2)

The first step is to edit the VDI-Desktops policy created earlier and add a stripe width requirement to it. Navigate back to Rules & Profiles, select VM Storage Policies, select the VDI-Desktops policy and click Edit. In Rule-Set 1, add a new capability called Number of disk stripes per object and set the value to 2. This is the number of disks that the stripe will span. Click OK.

Resync VM with the Policy Change (1)

You will observe a popup stating that the policy is already in use by a number of virtual machines. We will need to synchronize the virtual machines with the policy after saving the changes. You have two options: Manually later or Now. Select Now and click Yes.

Resync VM with the Policy Change (2)


Staying on the VDI-Desktops policy, click on the Monitor tab. In the VMs & Virtual Disks view, you will see that the Compliance Status is Compliant.

View Physical Disk Placement of the VM

This task may take a little time. We will now re-examine the layout of the storage object to see if the request for a stripe width of 2 has been implemented. From the VM Storage Policies view, select Virtual Machines > Windows 2008 > VM Storage Policies and select the Hard Disk 1 object. Now we can see that the disk layout has changed significantly. Because we have requested a stripe width of two, the components that make up the stripe are placed in a RAID-0 configuration. And since we still have our failures-to-tolerate requirement, these RAID-0s must be mirrored by a RAID-1. And because we now have multiple components distributed across the 3 hosts, additional witnesses are needed in case of a host failure.
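The component arithmetic behind this layout can be sketched as follows. This is illustrative only; witness counts are decided by VSAN's placement logic and are deliberately not modelled here.

```python
def data_component_count(stripe_width, failures_to_tolerate):
    """Each RAID-0 stripe of `stripe_width` components is mirrored
    failures_to_tolerate + 1 times under a RAID-1 tree."""
    return stripe_width * (failures_to_tolerate + 1)

# Stripe width 2 with FTT=1: two RAID-0 legs of two components each,
# i.e. four data components, plus however many witnesses VSAN adds.
assert data_component_count(2, 1) == 4
```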

Scaling out your Compute and Storage resources


In this lesson we will show that you can increase the compute power of the Virtual SAN cluster by adding an additional host. This host will not have any local storage and thus will not contribute to the vsanDatastore, but it will be able to run VMs from the vsanDatastore. We will then show how adding another host, this time with local storage, increases the storage capacity of the Virtual SAN datastore.

Adding a Compute Node

We previously moved the vSphere host called esx-04a.corp.local out of the cluster. This vSphere host does not have any local storage, so it cannot contribute storage to the vsanDatastore, but it can contribute compute resources. Drag the vSphere host back into the cluster. Then take the host out of Maintenance Mode: right click the host and select Exit Maintenance Mode.

Verify vsanDatastore access


With the esx-04a.corp.local host selected, select the Related Objects tab and select Datastores. Here you will see that the vSphere host has access to the vsanDatastore, even though it did not contribute storage to the Datastore. Notice our vsanDatastore capacity is still around 118GB (less some vsanDatastore overhead)

Add a Compute Node with Local Storage (1)

We are now going to add another vSphere host with storage to the Virtual SAN (VSAN) cluster and observe the scale-out capabilities of VSAN. At this point, we have four vSphere hosts in the cluster, although only three are contributing local storage to the Virtual SAN datastore. Let's check the status of the vsanDatastore. Navigate to the vsanDatastore > Summary tab. The ~5GB consumed reflects the stripe and replicas for the VM created earlier.

Add a Compute Node with Local Storage (2)

There is a fifth vSphere host (esx-05a.corp.local) in your inventory that has not yet been added to the cluster. We will do that now and examine how the vsanDatastore seamlessly grows to include this new capacity. Navigate to the cluster object in the inventory, right click and select the action Move hosts into cluster.

Add a Compute Node with Local Storage (3)

From the list of available hosts (you should only see esx-05a.corp.local), select this host and click OK. Select "Put all of this host's virtual machines in the cluster's root resource pool. Resource pools currently present on the host will be deleted". Click OK.

Add a Compute Node with Local Storage (4)

Once the vSphere host esx-05a.corp.local has been added to the cluster, you will notice an alert stating that the host cannot communicate with all the other nodes in the VSAN enabled cluster and that the VSAN network is not configured.

Add a Compute Node with Local Storage (5)

The next step is to add a Virtual SAN network to this host by creating a VSAN VMkernel network adapter that uses the distributed port group called VSAN Network. Select the esx-05a.corp.local host. Select the Manage tab, and select Networking. Select VMkernel Adapters and click the Add Host Networking icon. Select VMkernel Network Adapter. Under Select an existing distributed port group, click Browse... and pick the port group called VSAN Network. In the Enable services section, select Virtual SAN traffic. Leave the IPv4 settings as Obtain IPv4 settings automatically.

Once completed, you may have to reconfigure esx-05a.corp.local for vSphere HA

Navigate to esx-05a.corp.local, right click, and select All vCenter Actions > Reconfigure for vSphere HA

A task will be launched, Reconfigure vSphere HA host


Once completed, the alert will disappear and esx-05a.corp.local will have been added successfully to the Virtual SAN enabled cluster.

Add a Compute Node with Local Storage (6)


Now that we have the network and HA configured, let's look at Disk Management. Select Cluster Site A > Manage > Settings > Virtual SAN > Disk Management.

Select the host called esx-05a.corp.local. In the Show dropdown, select Ineligible. Here we can see that there are 3 disks ineligible (3 non-SSD, i.e. magnetic, disks) to be added to the vsanDatastore. One of the requirements for adding disks to a Virtual SAN cluster is that they must be blank, with no disk partitions present. These disks have VMFS partitions; we will look at that in a little while. The 2GB disk is actually our vSphere boot disk. The disks that we are interested in are the 2 x 10GB disks, e.g. mpx.vmhba1:C0:T1:L0 and mpx.vmhba2:C0:T1:L0. FYI: This lab simply reinforces the fact that disks need to be blank, with no partitions on them, before being added to a Virtual SAN cluster. In a production environment, consult with your storage admin before removing any partitions from disks: they may be valid VMFS partitions that are in use by virtual machines. Exercise caution when deleting VMFS partitions.

Add a Compute Node with Local Storage (7)


Open the PuTTY application on your desktop. Find the saved session called esx-05a, select it and click Open. Log in with a username of root and a password of VMware1!

Add a Compute Node with Local Storage (8)


~ # esxcli storage core device partition list
Device               Partition  Start Sector  End Sector  Type         Size
-------------------  ---------  ------------  ----------  ----  -----------
mpx.vmhba2:C0:T1:L0          0             0    20971520     0  10737418240
mpx.vmhba2:C0:T1:L0          2          6144    20971487    fb  10734255616
mpx.vmhba2:C0:T0:L0          0             0    20971520     0  10737418240
mpx.vmhba1:C0:T1:L0          0             0    20971520     0  10737418240
mpx.vmhba1:C0:T1:L0          2          6144    20971487    fb  10734255616
mpx.vmhba1:C0:T0:L0          0             0     4194304     0   2147483648
mpx.vmhba1:C0:T0:L0          1            64        8192     0      4161536
mpx.vmhba1:C0:T0:L0          5          8224      520192     6    262127616
mpx.vmhba1:C0:T0:L0          6        520224     1032192     6    262127616
mpx.vmhba1:C0:T0:L0          7       1032224     1257472    fc    115326976
mpx.vmhba1:C0:T0:L0          8       1257504     1843200     6    299876352

Let's now look at the disk partitions that are on these disks. Run the following command: esxcli storage core device partition list. We can then use the partedUtil command to interrogate the partitions on the disks.
~ # partedUtil getptbl /vmfs/devices/disks/mpx.vmhba1\:C0\:T1\:L0
gpt
1305 255 63 20971520
2 6144 20971486 AA31E02A400F11DB9590000C2911D1B8 vmfs 0

~ # partedUtil getptbl /vmfs/devices/disks/mpx.vmhba2\:C0\:T1\:L0
gpt
1305 255 63 20971520
2 6144 20971486 AA31E02A400F11DB9590000C2911D1B8 vmfs 0

Run the following commands against each disk:


partedUtil getptbl /vmfs/devices/disks/mpx.vmhba1\:C0\:T1\:L0 partedUtil getptbl /vmfs/devices/disks/mpx.vmhba2\:C0\:T1\:L0

Here we can see that we have a VMFS Partition on partition 2 on each of the disks.
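If you wanted to script this check across many disks, the getptbl output above is easy to parse. The helper below is a hypothetical sketch, not a VMware-supplied tool; it simply picks out partition numbers whose type label reads "vmfs".

```python
def vmfs_partitions(getptbl_output):
    """Return the partition numbers labelled 'vmfs' in the output of
    `partedUtil getptbl <device>` (label line, geometry line, then
    one line per partition: num, start, end, GUID, type, attributes)."""
    parts = []
    for line in getptbl_output.strip().splitlines()[2:]:  # skip label + geometry
        fields = line.split()
        if len(fields) >= 5 and fields[4] == "vmfs":
            parts.append(int(fields[0]))
    return parts

# The transcript above for either 10GB disk:
sample = """gpt
1305 255 63 20971520
2 6144 20971486 AA31E02A400F11DB9590000C2911D1B8 vmfs 0"""
assert vmfs_partitions(sample) == [2]
```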

Add a Compute Node with Local Storage (9)

~ # partedUtil delete /vmfs/devices/disks/mpx.vmhba1\:C0\:T1\:L0 2
~ # partedUtil delete /vmfs/devices/disks/mpx.vmhba2\:C0\:T1\:L0 2

Let's now remove these VMFS partitions so that the disks will be eligible for use with Virtual SAN (VSAN). FYI: This lab simply reinforces the fact that disks need to be blank, with no partitions on them, before being added to a Virtual SAN cluster. In a production environment, consult with your storage admin before removing any partitions from disks: they may be valid VMFS partitions that are in use by virtual machines. Exercise caution when deleting VMFS partitions. Run the following commands. In these commands we are deleting partition 2 (the VMFS partition); note there is a space between the disk's MPX reference and the partition number.
partedUtil delete /vmfs/devices/disks/mpx.vmhba1\:C0\:T1\:L0 2 partedUtil delete /vmfs/devices/disks/mpx.vmhba2\:C0\:T1\:L0 2

This will remove the VMFS partitions from the disks.

Add a Compute Node with Local Storage (10)


Back in the vSphere Web Client, select Cluster Site A > Manage > Settings > Disk Management > esx-05a.corp.local. In the Show: section, select Not in use. Here we can see that we now have 3 disks (1 SSD and 2 magnetic disks) available to use for Virtual SAN.

Add a Compute Node with Local Storage (11)

Select Create a new disk group

Select the SSD disk from the top section and the 2 HDD Disks from the lower section Click OK

Create a new disk group task is initiated

Verify vsanDatastore Disk Groups

Here you will see that the vSphere host called esx-05a.corp.local is now contributing its local storage to the Virtual SAN cluster. We can see that the disk group on this host is made up of one 10GB SSD and two 10GB HDDs.

View vsanDatastore Capacity


Revisit the vsanDatastore Summary view and check whether the size has increased with the addition of the new host and disks. Select Storage > vsanDatastore > Summary. You should observe that the capacity of the vsanDatastore has seamlessly increased from ~118GB to ~138GB with the addition of two 10GB HDDs (remember that SSDs do not contribute towards capacity).
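The capacity arithmetic is straightforward: only magnetic disks count. A minimal sketch (illustrative only, using the lab's approximate numbers):

```python
def capacity_after_adding_host(current_gb, new_hdd_sizes_gb, new_ssd_sizes_gb=()):
    """vsanDatastore capacity after a host joins: HDDs add capacity,
    SSDs act purely as cache and are deliberately ignored."""
    return current_gb + sum(new_hdd_sizes_gb)

# ~118GB before, plus the new host's two 10GB HDDs (its 10GB SSD is
# cache only), gives ~138GB.
assert capacity_after_adding_host(118, [10, 10], new_ssd_sizes_gb=[10]) == 138
```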

Changing VM Storage Policies on the fly


In this lesson, we will once again modify the VM Storage Policy and make some more changes. This time we will apply the VM Storage Policy manually and watch what happens to the physical disk layout of the VM.

Modify the VM Storage Policy (1)

Navigate back to Rules & Profiles, select VM Storage Policies, select the VDI-Desktops policy and click Edit. In Rule-Set 1, set the Number of disk stripes per object capability to 3. This is the number of disks that the stripe will span. Click OK.

Modify the VM Storage Policy (2)

You will observe a popup stating that the VM Storage Policy is in use. We will need to synchronize the virtual machine with the policy after saving the changes. Select Manually later and click Yes.

Resync Virtual Machine with Policy Changes (1)


2 3

Back in the inventory tree, select the virtual machine that you created earlier, e.g. Windows 2008. Select Manage > VM Storage Policies. Since we changed the VM Storage Policy capabilities, you will notice that the Compliance Status is now Out of Date. This means that we need to reapply the VM Storage Policy to all out-of-date entities.

Resync Virtual Machine with Policy Changes (2)

Click on the Reapply the VM Storage Policy to all out of date entities icon (3rd from left) to reapply policy to all out of date entities.

Answer Yes to the popup. The compliance state should now change once the updated policy is applied. Now we can see that the disk layout has changed significantly. Because we have requested a stripe width of three, the components that make up the stripe are placed in a RAID-0 configuration. And since we still have our failures to tolerate requirement, these RAID-0s must be mirrored by a RAID-1. And because we now have multiple components distributed across the 5 hosts, additional witnesses are needed in case of a host failure.

Virtual SAN Command Line and Troubleshooting


You can use esxcli commands to troubleshoot your Virtual SAN environment. The following commands are available:

esxcli vsan network list: Verify which VMkernel adapters are used for Virtual SAN communication.
esxcli vsan storage list: List the storage disks that were claimed by Virtual SAN.
esxcli vsan cluster get: Get Virtual SAN cluster information.

You can run these commands on a vSphere host command line, locally or remotely. On your desktop, the PuTTY application is available for you to open an SSH session to a vSphere host. Launch the PuTTY application. In the Saved Sessions section, select a vSphere host that is contributing local storage to the Virtual SAN cluster, e.g. esx-01a, and click Open. Log in with the following credentials: Login as: root, Password: VMware1! You are now logged in to the vSphere host.

Which interface is Virtual SAN using for communication?


Run the following command: esxcli vsan network list
~ # esxcli vsan network list
Interface
   VmkNic Name: vmk3
   IP Protocol: IPv4
   Interface UUID: e5072952-1cc0-ee9c-b96f-005056808f33
   Agent Group Multicast Address: 224.2.3.4
   Agent Group Multicast Port: 23451
   Master Group Multicast Address: 224.1.2.3
   Master Group Multicast Port: 12345
   Multicast TTL: 5

Here we can see that VMkernel interface vmk3 is used for Virtual SAN traffic.
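When auditing many hosts, the same check can be scripted by parsing the command output. The helper below is a hypothetical sketch, not part of esxcli; it simply extracts the VMkernel interface names from a captured transcript.

```python
def vsan_vmknics(esxcli_output):
    """Pull the VMkernel interface names out of a captured
    `esxcli vsan network list` transcript."""
    return [line.split(":", 1)[1].strip()
            for line in esxcli_output.splitlines()
            if line.strip().startswith("VmkNic Name:")]

# Trimmed version of the output shown above:
sample = """Interface
   VmkNic Name: vmk3
   IP Protocol: IPv4"""
assert vsan_vmknics(sample) == ["vmk3"]
```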

Which disks have been claimed by Virtual SAN?


Run the following command: esxcli vsan storage list
~ # esxcli vsan storage list
mpx.vmhba2:C0:T1:L0
   Device: mpx.vmhba2:C0:T1:L0
   Display Name: mpx.vmhba2:C1:T0:L0
   Is SSD: false
   VSAN UUID: 523c0dc6-9744-c275-ef38-f195d5c22682
   VSAN Disk Group UUID: 52777487-f70a-0af3-198e-9ffc747ab13b
   VSAN Disk Group Name: mpx.vmhba2:C1:T0:L0
   Used by this host: true
   In CMMDS: true
   Checksum: 14554848699992102318
   Checksum OK: true

mpx.vmhba2:C0:T0:L0
   Device: mpx.vmhba2:C0:T0:L0
   Display Name: mpx.vmhba2:C0:T0:L0
   Is SSD: true
   VSAN UUID: 52777487-f70a-0af3-198e-9ffc747ab13b
   VSAN Disk Group UUID: 52777487-f70a-0af3-198e-9ffc747ab13b
   VSAN Disk Group Name: mpx.vmhba2:C0:T0:L0
   Used by this host: true
   In CMMDS: true
   Checksum: 654352745454525052
   Checksum OK: true

mpx.vmhba1:C0:T1:L0
   Device: naa.6000c29545c09f34844bdc1ccaf7a7b9
   Display Name: mpx.vmhba1:C0:T1:L0
   Is SSD: false
   VSAN UUID: 52fa0fd3-4a0a-0f03-ab62-cc0ccda18410
   VSAN Disk Group UUID: 52777487-f70a-0af3-198e-9ffc747ab13b
   VSAN Disk Group Name: mpx.vmhba1:C0:T1:L0
   Used by this host: true
   In CMMDS: true
   Checksum: 15060996719604146982
   Checksum OK: true

Here we can see some interesting information about the disks used by Virtual SAN (VSAN): the device information, whether it is an SSD, the VSAN disk group information, and whether the disk is in use. Note here that one of the disks is an SSD and the other two are not.

Get Cluster details


Run the following command: esxcli vsan cluster get
~ # esxcli vsan cluster get
Cluster Information
   Enabled: true
   Current Local Time: 2013-09-18T09:55:40Z
   Local Node UUID: 5228df36-776b-505a-35cd-005056808f33
   Local Node State: AGENT
   Local Node Health State: HEALTHY
   Sub-Cluster Master UUID: 52290240-9add-3201-0a17-00505680ff72
   Sub-Cluster Backup UUID: 5228efe9-3da8-ff3b-44d7-0050568033b1
   Sub-Cluster UUID: 52d1c8ca-c7b4-8853-d6f4-159265c9554e
   Sub-Cluster Membership Entry Revision: 8
   Sub-Cluster Member UUIDs: 52290240-9add-3201-0a17-00505680ff72, 5228efe9-3da8-ff3b-44d7-0050568033b1, 5228df36-776b-505a-35cd-005056808f33, 5228eece-e9ba-0af2-8616-005056809b63, 5228f336-8733-e2d9-0ea5-00505680d045
   Sub-Cluster Membership UUID: fb582f52-71e8-f226-b5a7-00505680ff72

Here we can get some information about the Virtual SAN cluster: 1. The Local Node UUID of the vSphere host you ran the command on. 2. The Local Node State, which can be Master, Backup or Agent. 3. The Node Health State. 4. The UUIDs of the Master and Backup nodes. 5. The number of members in the cluster (Sub-Cluster Member UUIDs); in our case, we have 5 nodes in the Virtual SAN cluster.

Conclusion
This concludes Module 3, Virtual SAN Storage Level Agility.

Virtual SAN Summary

Here is a quick summary of what you have learned.

VMware, Inc. 3401 Hillview Avenue Palo Alto CA 94304 USA Tel 877-486-9273 Fax 650-427-5001 www.vmware.com
Copyright 2013 VMware, Inc. All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws. VMware products are covered by one or more patents listed at http://www.vmware.com/go/patents. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions. All other marks and names mentioned herein may be trademarks of their respective companies.
