
Hyper-converged Virtual SAN

Deployment Best Practices

April 2020

This document covers best practices and design guidelines for deploying
SANsymphony in Hyper-converged or Hybrid-converged configurations.

The authority on real-time data


Table of contents

Changes to this document 3


Storage virtualization deployment models 4
Microsoft Hyper-V Role 6
The SANsymphony virtual machine settings 7
Microsoft Hyper-V deployment examples 8
Using Fibre Channel mirrors 8
With Microsoft Cluster Services 8
Without Microsoft Cluster Services 10
Using iSCSI mirrors 12
With Microsoft Cluster Services 12
Without Microsoft Cluster Services 14
Microsoft Hyper-V deployment technical notes 16
NIC Teaming 16
MS Initiator ‘ports’ 16
Creating Loopback connections 16
Using Virtual NIC connections (vNICs) 16
Using Physical NIC connections (pNICs) 17
Maintaining Persistent Reservation connections (Microsoft Cluster Services) 17
SANsymphony’s Management connection 18
VMware ESXi 19
The ESXi Host’s settings 19
The SANsymphony virtual machine settings 21
VMware ESXi deployment examples 23
A single DataCore Server 23
A pair of highly-available DataCore Servers 24
Known issues 25
Microsoft Windows 26
iSCSI Loopback connections 26
SANsymphony’s built-in Loopback port 27
VMware ESXi 28
iSCSI connections (general) 28
Server Hardware 29
VMware Tools 29
Failover 29
Citrix Hypervisor 30
Storage Repositories 30

Previous Changes 31
Changes to this document
The most recent version of this document is available from here:
https://datacore.custhelp.com/app/answers/detail/a_id/838

Changes since December 2019


Updated
Microsoft Hyper-V Role
Clarified support of standalone product “Microsoft Hyper-V Server”.
VMware ESXi
Added statement about vCenter alerts of “high memory usage” for the SSY VM.
Added link to VMware’s “Performance Best Practices” document.

For previous changes made to this document please see page 31


Storage virtualization deployment models
This document is only concerned with either the ‘Hyper-converged’ or ‘Hybrid-converged’
deployment models. However, for completeness, all of the possible deployments are listed
below.

Traditional Storage Virtualization

The Host servers and the Storage arrays, managed by SANsymphony, are all physically
separate from the DataCore Server.

Converged

Only the Host servers are physically separate from the DataCore Server. All Storage arrays
managed by SANsymphony are internal to the DataCore Server.


Hyper-Converged

The Host servers are all guest virtual machines that run on the same hypervisor server as
SANsymphony. All Storage arrays managed by SANsymphony are internal to the DataCore
Server.

On the root partition of a Windows server (e.g. Microsoft Hyper-V)

In a virtual machine (e.g. VMware ESXi, Citrix Hypervisor, Oracle VM Server, Microsoft Hyper-V)

Hybrid-Converged

A combination of both Hyper-converged and Traditional Storage Virtualization.



Microsoft Hyper-V Role
Two different configurations are possible when using Windows with the Hyper-V Role:

Running on the Root/Parent Partition

SANsymphony runs in the root/parent partition of the Windows operating system along with
the Hyper-V feature.

Running inside a virtual machine

SANsymphony runs in its own dedicated, Hyper-V Windows virtual machine.

Note: The standalone “Microsoft Hyper-V Server” products, which consist of a cut-down
version of Windows containing only the hypervisor, Windows Server driver model, and
virtualization components, are not supported.


The SANsymphony virtual machine settings


Memory
When running a DataCore Server inside a Virtual Machine, do not enable Hyper-V's 'Dynamic
Memory' setting for the SANsymphony Virtual Machine as this may cause less memory than
expected to be assigned to SANsymphony’s cache driver.

Fix the 'Startup RAM' to the required amount.

For example, a SANsymphony Virtual Machine could have its ‘Startup RAM’ fixed so that it
always uses 30 GB on startup.
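For illustration only, a minimal PowerShell sketch of the same setting using Hyper-V’s built-in cmdlets (the virtual machine name is an assumption, not taken from this document):

# Disable Dynamic Memory and fix the Startup RAM of the SANsymphony virtual machine
Set-VMMemory -VMName "SSY-VM1" -DynamicMemoryEnabled $false -StartupBytes 30GB

# Verify the result
Get-VMMemory -VMName "SSY-VM1" | Select-Object DynamicMemoryEnabled, Startup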

This does not apply to other Hosts running in Hyper-V Virtual Machines.



Microsoft Hyper-V deployment examples
Using Fibre Channel mirrors
With Microsoft Cluster Services
Here is an overview of best practices for a ‘minimum configuration’ highly available, hyper-converged deployment installed on
the Root/Parent partition when using Fibre Channel mirrors with Microsoft Cluster Services.

An example of IP networking settings for this configuration is shown on the next page.


An example of IP networking settings when using Fibre Channel mirrors with Microsoft Cluster Services.

More explanatory technical notes can be found on page 16.


Without Microsoft Cluster Services


Here is an overview of best practices for a ‘minimum configuration’ highly available, hyper-converged deployment installed on
the Root/Parent partition when using Fibre Channel mirrors without Microsoft Cluster Services.

An example of IP networking settings for this configuration is shown on the next page.


An example of IP networking settings when using Fibre Channel mirrors without Microsoft Cluster Services.

More explanatory technical notes can be found on page 16.



Using iSCSI mirrors
With Microsoft Cluster Services
Here is an overview of best practices for a ‘minimum configuration’ highly available, hyper-converged deployment installed on
the Root/Parent partition when using iSCSI mirrors with Microsoft Cluster Services.

An example of IP networking settings for this configuration is shown on the next page.


An example of IP networking settings when using iSCSI mirrors with Microsoft Cluster Services.

More explanatory technical notes can be found on page 16.


Without Microsoft Cluster Services


Here is an overview of best practices for a ‘minimum configuration’ highly available, hyper-converged deployment installed on
the Root/Parent partition when using iSCSI mirrors without Microsoft Cluster Services.

An example of IP networking settings for this configuration is shown on the next page.


An example of IP networking settings when using iSCSI mirrors without Microsoft Cluster Services.

More explanatory technical notes can be found on page 16.



Microsoft Hyper-V deployment technical notes
NIC Teaming
DataCore do not recommend using NIC teaming with iSCSI initiator or target ports.

Also see
On using NIC Teaming with SANsymphony
https://datacore.custhelp.com/app/answers/detail/a_id/1300
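For illustration only, a quick PowerShell check (using Windows’ built-in NetLbfo cmdlets) to confirm that the NICs dedicated to iSCSI initiator or target ports are not members of any team:

# List configured NIC teams and the adapters that belong to them
Get-NetLbfoTeam | Format-Table Name, TeamingMode, LoadBalancingAlgorithm
Get-NetLbfoTeamMember | Format-Table Name, Team, OperationalStatus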

MS Initiator ‘ports’
While the MS Initiator is technically available from ‘any’ network interface on the DataCore
Server, the convention here is to give MS Initiator connections used explicitly for iSCSI
Loopback connections their own separate, dedicated IP address.

Creating Loopback connections


Using SANsymphony’s built-in loopback ports
Do not use SANsymphony’s built-in loopback ports for serving virtual disks to the hypervisor.
Please see the ‘Known Issues’ section on page 26 for more information.

Using iSCSI connections


Always connect to a DataCore Front End port, never from it, when creating an iSCSI loopback
(i.e. never initiate from the same IP address as a DataCore Front End port). Also make sure
that the Microsoft Initiator connection is initiating from a different MAC address than the
DataCore Front End port (i.e. do not create the Loopback ‘within’ the same NIC that has
multiple IP addresses configured on it).

While looping back on the same IP/MAC address technically works, the overall performance
for virtual disks using this I/O path will be severely restricted compared to looping virtual
disks between different MAC addresses.
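For illustration only, a minimal PowerShell sketch of such a loopback using the Microsoft iSCSI initiator cmdlets (all IP addresses and the IQN are assumptions): the dedicated MS Initiator address on one NIC logs in to the local DataCore Front End port on a different NIC.

# Register the local DataCore Front End port as a target portal
New-IscsiTargetPortal -TargetPortalAddress "192.168.10.101"

# Log in from the dedicated MS Initiator address (a different NIC/MAC) to the Front End port
Connect-IscsiTarget -NodeAddress "iqn.2000-08.com.datacore:dcs1-fe1" `
                    -TargetPortalAddress "192.168.10.101" `
                    -InitiatorPortalAddress "192.168.10.1" `
                    -IsPersistent $true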

Using Virtual NIC connections (vNICs)


The following illustration is an example showing the correct initiator/target loopback
connections (highlighted in red) when using vNICs.

Note that the DataCore Front End ports are never used as the initiator - only the target - and
that the interface used for the Microsoft Initiator never connects back to itself.

Here is the same example but with the incorrect initiator/target loopback connections
(highlighted in red) when using vNICs.


Using Physical NIC connections (pNICs)


With a pNIC loopback connection, the Microsoft Initiator and the DataCore Front End port are
each assigned their own physical NIC; both NICs are connected to an IP switch and
logically connected to each other, creating the ‘loop’.

The following illustration is an example showing the correct initiator/target loopback
connections (highlighted in red) when using pNICs.

Note that the DataCore Front End ports are never used as the initiator - only the target - and
that the interface used for the Microsoft Initiator never connects back to itself.

Performance using a vNIC Loopback compared to a pNIC Loopback


While DataCore have no preference between using virtual or physical NICs as loopback
mappings, internal testing has shown that pNIC loopback configurations can give as much
as 8 times more throughput than vNIC loopback configurations under the same I/O
workloads.

Maintaining Persistent Reservation connections (Microsoft Cluster Services)


If the DataCore Servers are configured within a Microsoft Cluster, an additional iSCSI Initiator
connection is required for each MS Initiator designated vNIC/pNIC to ensure that all Cluster
Reservations can be checked for/released by any server should the SANsymphony software
be stopped (or unavailable) on one side of the highly available pair.

Using the same MS Initiator IP addresses as those configured for iSCSI Loopback connections,
connect them to the corresponding ‘remote’ DataCore Server’s Front End port as shown
below. The following illustration is an example showing the correct initiator/remote target
connections (highlighted in red) when using vNICs.


The same correct initiator/remote target connections (highlighted in red) when using pNICs.

Note that unlike vNICs, an IP switch is required to provide both the ‘loop’ connection
between the physical NICs and a route to the ‘remote’ DataCore Front End port on the other
DataCore Server.
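In either case, a hedged PowerShell sketch of the additional ‘remote’ connection (all IP addresses and the IQN below are illustrative): the same dedicated MS Initiator address logs in to the remote DataCore Server’s Front End port.

# Register the remote DataCore Front End port and log in from the local MS Initiator address
New-IscsiTargetPortal -TargetPortalAddress "192.168.10.102"
Connect-IscsiTarget -NodeAddress "iqn.2000-08.com.datacore:dcs2-fe1" `
                    -TargetPortalAddress "192.168.10.102" `
                    -InitiatorPortalAddress "192.168.10.1" `
                    -IsPersistent $true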

SANsymphony’s Management connection


Avoid using the same IP address of any iSCSI port as SANsymphony’s Management IP
address. Heavy iSCSI I/O can interfere with the passing of configuration information and
status updates between DataCore Servers in the same Server Group. This can lead to an
unresponsive SANsymphony Console, problems connecting to other DataCore Servers in the
same Server Group (or remote Replication Group) and can also prevent SANsymphony
PowerShell Cmdlets from working as expected as they also need to ‘connect’ to the Server
Group to perform their functions.



VMware ESXi
Each SANsymphony instance runs in a dedicated Windows guest virtual machine on the
ESXi Host.

The ESXi Host’s settings


Host Server BIOS

 Turbo Boost = Disabled.


Although VMware recommend enabling this feature when tuning for ‘latency sensitive’
workloads, DataCore recommend disabling it to prevent disruption on the CPUs used by
the SANsymphony virtual machine.

Physical Network Interfaces

 ‘Interrupt moderation/throttle’ should be ‘disabled’

esxcli system module parameters set -m=module_name -p="parameter_string"

Here is an example using a specific Intel 10GbE interface:

esxcli system module parameters set -m ixgbe -p "InterruptThrottleRate=0"

Please refer to VMware’s own documentation on how to discover and disable this setting
for your own Network Interface.
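Where VMware PowerCLI is preferred over the ESXi shell, a sketch of the equivalent change might look like this (the host name, module name and option string are assumptions and depend on the NIC and driver in use; a host reboot may be required):

# Set the driver module option for an Intel 10GbE driver on one ESXi host (PowerCLI)
Connect-VIServer -Server "esxi-a.example.local"
$vmhost = Get-VMHost -Name "esxi-a.example.local"
$module = Get-VMHostModule -VMHost $vmhost -Name "ixgbe"
Set-VMHostModule -HostModule $module -Options "InterruptThrottleRate=0"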

vCenter

 ‘Virtual machine memory usage’ should be ‘disabled’ for the SANsymphony virtual
machines

See: “Memory usage alarm triggers for certain types of Virtual Machines in ESXi 6.x”
https://kb.vmware.com/s/article/2149787


VMkernel

 iSCSI adapter (vmhba)
‘Delayed ACK’ = ‘Disabled’
Via ‘vSphere Client → Advanced Settings → Delayed Ack’
Then uncheck both ‘Inherit from parent’ and ‘DelayedAck’

A reboot of the ESXi host will be needed.

 Port Binding
This is only supported with SANsymphony version 10.0 PSP 6 Update 5 or later.

This applies to all iSCSI Ports configured in the SANsymphony virtual machine.
For earlier versions of SANsymphony please refer to the ‘Known Issues - VMware ESXi
Hosts/ iSCSI’ section of this document.

Also see

Configuring advanced driver module parameters in ESX/ESXi


https://kb.vmware.com/s/article/1017588

vSphere Command-Line Interface Reference


https://code.vmware.com/docs/6676/vsphere-command-line-interface-reference

ESX/ESXi hosts might experience read or write performance issues with certain storage arrays
https://kb.vmware.com/s/article/1002598


The SANsymphony virtual machine settings


CPU
 Set to ‘High Shares’.
 Reserve all CPUs used for the SANsymphony virtual machine.
Example: if one CPU runs at 2 GHz and four CPUs are to be reserved then set the reserved
value to 8 GHz (i.e. 4 x 2 GHz).

Memory
 Set to ‘High Shares’.
 Set ‘Memory Reservation’ to ‘All’.
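For illustration only, a PowerCLI sketch that applies the shares and reservations above to the SANsymphony virtual machine (the VM name and sizes are assumptions, following the 4 x 2 GHz = 8 GHz example):

# Set CPU/memory shares to High and reserve all CPU and memory for the SANsymphony VM
Get-VM -Name "SSY-VM1" | Get-VMResourceConfiguration |
    Set-VMResourceConfiguration -CpuSharesLevel High -CpuReservationMhz 8000 `
                                -MemSharesLevel High -MemReservationGB 30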

Latency Sensitivity
 Set to ‘High’. [1]
Via ‘VM settings → Options tab → Latency Sensitivity’.

See also:
“Performance Best Practices for VMware vSphere 6.7”
https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/techpaper/performance/vsphere-esxi-vcenter-server-67-performance-best-practices.pdf

Virtual Network Interfaces (vNIC)


 Use VMXNET3
 ‘Interrupt coalescing’ must be set to ‘Disabled’.
Via ‘VM settings → Options tab → Advanced General → Configuration Parameters →
ethernetX.coalescingScheme’
(where ‘X’ is the number of the NIC to apply the parameter)
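For illustration only, a PowerCLI sketch of the same parameter (the VM name and NIC index are assumptions; ‘ethernet0’ refers to the first vNIC):

# Disable virtual interrupt coalescing on the first vNIC of the SANsymphony VM
New-AdvancedSetting -Entity (Get-VM -Name "SSY-VM1") `
                    -Name "ethernet0.coalescingScheme" -Value "disabled" -Confirm:$false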

UEFI Secure Boot


 Disable any ‘Secure Boot’ option for virtual machines that will be used as SANsymphony
DataCore Servers otherwise the installation of DataCore’s system drivers will fail.

Also see:
Enable or Disable UEFI Secure Boot for a Virtual Machine
ESXi 6.7
https://docs.vmware.com/en/VMware-vSphere/6.7/com.vmware.vsphere.security.doc/GUID-898217D4-689D-4EB5-866C-888353FE241C.html

ESXi 6.5
https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.security.doc/GUID-898217D4-689D-4EB5-866C-888353FE241C.html

iSCSI Settings - General


 DataCore Servers in virtual machines running SANsymphony 10.0 PSP 7 update 2 or
earlier should run the appropriate PowerShell scripts found attached to the Technical
Support Answer: SANsymphony - iSCSI Best Practices
https://datacore.custhelp.com/app/answers/detail/a_id/1626

[1] Testing is recommended. Depending on the underlying hardware, better performance may be possible by
setting this to ‘Normal’.


Disk storage

 PCIe connected - SSD/Flash/NVMe


Internal storage to the ESXi host. DataCore recommend using VMware’s VMDirectPath I/O
pass-through.
Also see
Configuring VMDirectPath I/O pass-through devices
https://kb.vmware.com/s/article/1010789

VMware vSphere VMDirectPath I/O: Requirements for Platforms and Devices


https://kb.vmware.com/s/article/2142307

Fibre Channel HBAs supported by VMware using VMDirectPath I/O


https://www.vmware.com/resources/compatibility/search.php

 Fibre Channel attached storage


Storage attached directly to the ESXi Host using Fibre Channel connections.
DataCore recommend using VMware’s VMDirectPath I/O pass-through, assigning all
physical Fibre Channel HBAs to the SANsymphony virtual machine. This allows
SANsymphony to detect and bind the DataCore Fibre Channel driver as a Back-end
connection to the storage.

 SAS/SATA/SCSI attached storage


Storage attached directly to the ESXi Host using either SAS/SATA/SCSI connections.
If VMware’s VMDirectPath I/O pass-through is not appropriate for the storage array then
contact the storage vendor to find out which is their own preferred VMware ‘SCSI Adaptor’
for highest performance (e.g. VMware’s own Paravirtual SCSI Adapter or the LSI Logic SCSI
Adaptor).

 ESXi Raw Device Mapping (RDM)


‘Raw’ storage devices presented to the SANsymphony virtual machine via the ESXi
hypervisor.

When using RDM as storage for use in SANsymphony Disk Pools, disable ESXi’s ‘SCSI
INQUIRY Caching’. This allows SANsymphony to detect and report any unexpected changes
in storage device paths managed by ESXi’s RDM feature.
Also see
Virtual Machines with RDMs Must Ignore SCSI INQUIRY Cache
https://docs.vmware.com/en/VMware-vSphere/6.7/com.vmware.vsphere.storage.doc/GUID-2852DA1C-507B-4B85-B211-B5CD3C8DC6F2.html

If VMware’s VMDirectPath I/O pass-through is not appropriate for the storage being used as
RDM then contact the storage vendor to find out which is their own preferred VMware ‘SCSI
Adaptor’ for highest performance (e.g. VMware’s own Paravirtual SCSI Adapter or the LSI
Logic SCSI Adaptor).

 VMDK disk devices


Storage created from VMFS datastores presented to the SANsymphony virtual machine via
the ESXi hypervisor. DataCore recommend using VMware’s Paravirtual SCSI (PVSCSI)
adapter, using one VMDK per Datastore, and provisioning them as ‘Eager Zeroed Thick’.
Also see
Configuring disks to use VMware Paravirtual SCSI (PVSCSI) adapters
https://kb.vmware.com/kb/1010398
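For illustration only, a PowerCLI sketch of this recommendation (the VM name, datastore name and capacity are assumptions): create an Eager Zeroed Thick VMDK on its own datastore and place it on a Paravirtual SCSI controller.

# Add an Eager Zeroed Thick VMDK from a dedicated datastore to the SANsymphony VM
$disk = New-HardDisk -VM (Get-VM -Name "SSY-VM1") -CapacityGB 1024 `
                     -Datastore "Datastore-Pool1" -StorageFormat EagerZeroedThick

# Attach the new disk to a VMware Paravirtual SCSI controller
New-ScsiController -HardDisk $disk -Type ParaVirtual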



VMware ESXi deployment examples
A single DataCore Server
A single virtual machine configured with virtual disks served over one or more iSCSI
Loopback connections. This provides very reliable and extremely fast DataCore caching as
well as the entire suite of DataCore Server features.

More physical DataCore Servers can be added to create a highly available configuration. See
the following pages.


A pair of highly-available DataCore Servers


Two virtual machines configured with virtual disks served to the local ESXi Servers (ESXi A
and ESXi B) over one or more iSCSI Loopback connections. The virtual disks are also
synchronously mirrored between the DataCore Servers using either iSCSI or Fibre Channel
connections.

Each ESXi host’s own VMkernel is configured to login to two separate iSCSI targets:

 The first path is connected to the local DataCore Server


 The second path is connected to the remote DataCore Server

Notes

For clarity, so as not to make the diagram too complicated, the example shows just one
SANsymphony Front-End port per DataCore Server. Use more Front-End port connections to
increase overall throughput.
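For illustration only, a PowerCLI sketch of the two iSCSI target logins described above (the host name is an assumption; the Front End port addresses reuse the style of the iSCSI examples later in this document):

# Point the ESXi host's software iSCSI adapter at the local and remote DataCore Front End ports
$hba = Get-VMHost -Name "esxi-a.example.local" | Get-VMHostHba -Type IScsi
New-IScsiHbaTarget -IScsiHba $hba -Address "192.168.1.101" -Type Send   # local DataCore Server
New-IScsiHbaTarget -IScsiHba $hba -Address "192.168.1.102" -Type Send   # remote DataCore Server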



Known issues
The following is intended to make DataCore Software users aware of any issues that affect
performance or access, or that may give unexpected results under particular conditions when
SANsymphony is used in Hyper-converged configurations.

These Known Issues may have been found during DataCore’s own testing, but others may
have been reported by our users where the solution was found to be unrelated to
DataCore’s own products.

DataCore cannot be held responsible for incorrect information regarding another vendor’s
products and no assumptions should be made that DataCore has any communication with
these other vendors regarding the issues listed here.

We always recommend that the vendors are contacted directly for more information
on anything listed in this section.

For ‘Known issues’ that apply to DataCore Software’s own products, please refer to the
relevant DataCore Software Component’s release notes.


Microsoft Windows
iSCSI Loopback connections

SANsymphony’s ‘Preferred Server’ setting is ignored for virtual disks using iSCSI
Loopback connections.

For SANsymphony 10.0 PSP 5 Update 2 and earlier


This issue affects all hyper-converged DataCore Servers in the same Server Group.
Set MPIO path state to ‘Standby’ for each virtual disk (see below for instructions).

For SANsymphony 10.0 PSP 6 and later


This issue affects only hyper-converged DataCore Servers in the same Server Group
where the DataCore Server has one or more virtual disks served to it from another
DataCore Server in the same group, i.e. any Server Group with three or more DataCore
Servers where one or more of the virtual disks is served to all DataCore Servers (instead of just
the two that are part of the mirror pair).

The two DataCore Servers that are managing the mirrored storage sources of a virtual disk
will be able to detect the Preferred Server setting; the ‘other’ DataCore Servers will not (for
that same virtual disk). Set the MPIO path state to ‘Standby’ for each virtual disk served to
the ‘other’ DataCore Server(s). See below.

1. Open the MS iSCSI initiator UI and under Targets, select the remote target IQN
address

2. Click devices and identify the Device target port with disk index to update

3. Navigate to MS MPIO and view DataCore virtual disk.


4. Select a path and click Edit

5. Verify the correct Target device

6. Set the path state to Standby and click OK to save the change


SANsymphony’s built-in Loopback port

Affects clustered, hyper-converged DataCore Servers


Do not use the DataCore Loopback port to serve mirrored Virtual Disks to Hyper-V
Virtual Machines

This known issue only applies when SANsymphony is installed in the root partitions of
clustered, Hyper-V Windows servers where virtual disks are ‘looped-back’ to the Windows
Operating system for use by Hyper-V Virtual Machines.

Also see: https://docs.datacore.com/SSV-WebHelp/Hyper-converged_virtual_san_software.htm

A limitation exists in the DataCore SCSI Port driver - used by the DataCore Loopback Ports -
whereby if the DataCore Server providing the ‘Active’ cluster path is stopped, the remaining
DataCore Server providing the ‘Standby’ path for the Hyper-V VMs is unable to take the SCSI
Reservation (previously held by the stopped DataCore Server). This will result in a SCSI
Reservation Conflict and prevent any Hyper-V VM from being able to access the DataCore
Disks presented by the remaining DataCore Server.

In this case please use iSCSI connections as ‘Loopbacks’ for SANsymphony DataCore Disks
presented to Hyper-V Virtual Machines.


VMware ESXi
iSCSI connections (general)
Affects ESX 6.x and 5.x
Applies to SANsymphony 10.0 PSP 6 Update 5 and earlier

ESX Hosts with multiple IP addresses that share the same IQN connecting to the same DataCore
Server Front End port are not supported (this also includes ESXi 'Port Binding'). The Front End
port will only accept the ‘first’ login and a unique iSCSI Session ID (ISID) will be created. All
subsequent connections coming from the same IQN, even from a different interface, will
result in an iSCSI Session ID (ISID) ‘conflict’ and the login attempt will be rejected by the
DataCore iSCSI Target. No further iSCSI logins will be allowed for this IQN while there is
already one active ISID connected.

Also note that if an unexpected iSCSI event results in a logout of an already-connected iSCSI
session then, during the reconnection phase, one of the other interfaces that shares the
same IQN but was rejected previously may now be able to log in, and this will prevent the
previously connected interface from being able to reconnect.

See the following examples of supported and unsupported configurations when using
SANsymphony 10.0 PSP 6 Update 5 or earlier:

Example 1 – The supported configuration


The ESX Host has four network interfaces, each with its own IP address, each with the same IQN:
192.168.1.1 (iqn.esx1)
192.168.2.1 (iqn.esx1)
192.168.1.2 (iqn.esx1)
192.168.2.2 (iqn.esx1)

Two DataCore Servers each have two front end ports, each with their own IP address and each with their own IQN:
192.168.1.101 (iqn.dcs1-1)
192.168.2.101 (iqn.dcs1-2)
192.168.1.102 (iqn.dcs2-1)
192.168.2.102 (iqn.dcs2-2)

Each interface of the ESX Host connects to a separate Port on each DataCore Server:
(iqn.esx1) 192.168.1.1 ← ISCSI Fabric 1 → 192.168.1.101 (iqn.dcs1-1)
(iqn.esx1) 192.168.2.1 ← ISCSI Fabric 2 → 192.168.2.101 (iqn.dcs1-2)
(iqn.esx1) 192.168.1.2 ← ISCSI Fabric 1 → 192.168.1.102 (iqn.dcs2-1)
(iqn.esx1) 192.168.2.2 ← ISCSI Fabric 2 → 192.168.2.102 (iqn.dcs2-2)
This type of configuration is very easy to manage, especially if there are any connection problems.

Example 2 – The unsupported configuration


Using the same IP addresses as in the example above, here is a possible scenario that would be unsupported:

(iqn.esx1) 192.168.1.1 ← ISCSI Fabric 1 → 192.168.1.101 (iqn.dcs1-1)
(iqn.esx1) 192.168.2.1 ← ISCSI Fabric 2 → 192.168.2.101 (iqn.dcs1-2)
(iqn.esx1) 192.168.1.2 ← ISCSI Fabric 1 → 192.168.1.101 (iqn.dcs1-1)
(iqn.esx1) 192.168.2.2 ← ISCSI Fabric 2 → 192.168.2.102 (iqn.dcs2-2)

Note that in this ‘unsupported’ example, two of the ESX1 interfaces (sharing the same IQN) are connected to the same
Front End port on the DataCore Server.


Server Hardware
Affects ESX 6.5
HPE ProLiant Gen10 Servers Running VMware ESXi 6.5 (or Later) and Configured with a
Gen10 Smart Array Controller may lose connectivity to Storage Devices.
Search https://support.hpe.com/hpesc/public/home using keyword a00041660en_us

Affects ESX 6.x and 5.x


vHBAs and other PCI devices may stop responding when using Interrupt Remapping
See https://kb.vmware.com/s/article/1030265

VMware Tools
Affects ESX 6.5
VMware Tools Version 10.3.0 Recall and Workaround Recommendations
VMware has been made aware of issues in some vSphere ESXi 6.5 configurations with the
VMXNET3 network driver for Windows that was released with VMware Tools 10.3.0.
As a result, VMware has recalled the VMware Tools 10.3.0 release. This release has been
removed from the VMware Downloads page - see: https://kb.vmware.com/s/article/57796

Affects ESX 6.x and 5.x


Ports are exhausted on Guest VM after a few days when using VMware Tools 10.2.0
VMware Tools 10.2.0 version is not recommended by VMware - see:
https://kb.vmware.com/s/article/54459

Failover
Affects ESX 6.7 only
Failover/Failback takes significantly longer than expected.
Users have reported to DataCore that before applying ESXi 6.7, Patch Release ESXi-6.7.0-
20180804001 (or later) failover could take in excess of 5 minutes. DataCore are
recommending (as always) to apply the most up-to-date patches to your ESXi operating
system. Also see: https://kb.vmware.com/s/article/56535


Citrix Hypervisor

Affects the hyper-converged DataCore Server virtual machine


Storage Repositories

SRs may not get re-attached automatically to a virtual machine after a reboot of the Citrix
Hypervisor Host.

This prevents the DataCore Server’s virtual machine from being able to access any storage used by
SANsymphony’s own Disk Pools, which in turn prevents other virtual machines (running on the
same Citrix Hypervisor Host as the DataCore Server) from being able to access their virtual
disks. DataCore recommend serving all virtual disks to virtual machines that are running on
the ‘other’ Citrix Hypervisor Hosts rather than directly to virtual machines running on the
same Citrix Hypervisor Host as the DataCore Server virtual machine.

Please consult with Citrix for more information.



Previous Changes
December 2019
Updated
Microsoft Hyper-V deployment technical notes
Added a statement on NIC Teaming with iSCSI connections.

October 2019
Added
General
This document has been reviewed for SANsymphony 10.0 PSP 9.
No additional settings or configurations are required.

June 2019
Added
Microsoft Hyper-V
The SANsymphony virtual machine settings - Memory
When running a DataCore Server inside a Virtual Machine, do not enable Hyper-V's 'Dynamic Memory' setting for
the SANsymphony Virtual Machine as this may cause less memory than expected to be assigned to
SANsymphony’s cache driver.

VMware ESXi
Disk storage - ESXi Raw Device Mapping (RDM)
When using RDM as storage for use in SANsymphony Disk Pools, disable ESXi’s ‘SCSI INQUIRY Caching’. This allows
SANsymphony to detect and report any unexpected changes in storage device paths managed by ESXi’s RDM
feature.
Also see
Virtual Machines with RDMs Must Ignore SCSI INQUIRY Cache
https://docs.vmware.com/en/VMware-vSphere/6.7/com.vmware.vsphere.storage.doc/GUID-2852DA1C-507B-4B85-B211-B5CD3C8DC6F2.html

May 2019
Added
Storage virtualization deployment models
New section added.

Microsoft Hyper-V deployment examples
The information on Hyper-converged Hyper-V deployment has been rewritten along with new detailed diagrams
and explanatory technical notes.

October 2018
Added
The SANsymphony virtual machine settings - ISCSI Settings - General
DataCore Servers in virtual machines running SANsymphony 10.0 PSP 7 Update 2 or earlier should run the
appropriate PowerShell scripts found attached to the Customer Services FAQ:
SANsymphony - iSCSI Best Practices
https://datacore.custhelp.com/app/answers/detail/a_id/1626

Known Issues – Failover


Affects ESX 6.7 only
Failover/Failback takes significantly longer than expected.
Users have reported to DataCore that before applying ESXi 6.7, Patch Release ESXi-6.7.0-20180804001 (or later)
failover could take in excess of 5 minutes. DataCore are recommending (as always) to apply the most up-to-date
patches to your ESXi operating system. Also see: https://kb.vmware.com/s/article/56535

September 2018
Added
VMware ESXi deployment
The information in this section was originally titled ‘Configure ESXi host and Guest VM for DataCore Server for low
latency’. The following settings have been added:
 Turbo Boost should be disabled on the ESXi Host


 Latency Sensitivity should be set to ‘High’

Known Issues
Refer to the specific entries in this document for more details.
 ESX Hosts with IP addresses that share the same IQN connecting to the same DataCore Server Front-end
port is not supported (this also includes ESXi 'Port Binding'). Applies to SANsymphony 10.0 PSP 6 Update 5
and earlier.
 HPE ProLiant Gen10 Servers Running VMware ESXi 6.5 (or Later) may lose connectivity to Storage Devices.
 vHBAs and other PCI devices may stop responding when using Interrupt Remapping
 VMware Tools 10.2.0 issues and 10.3.0 recall with workaround.

Updated
Configure ESXi host and Guest VM for DataCore Server for low latency
This section has been renamed to ‘VMware ESXi deployment’. The following settings have been updated:
 CPU
 Virtual SCSI Controller
 Disable interrupt coalescing
Refer to the specific entries in this document for more details and make sure that your existing settings are
changed/updated accordingly.

Removed
Microsoft Windows – Configuration Examples
Examples using the DataCore Loopback Port in a Hyper-V Cluster configuration have been removed as
SANsymphony is currently unable to assign PRES registrations to all other non-active Loopback connections for a
virtual disk. This could lead to certain failure scenarios where MSCS Reservation Conflicts prevent access for
some/all of the virtual disks on the remaining Hyper-V Cluster. Use iSCSI Loopback connections instead.

VMware ESXi deployment


The information in this section was originally titled ‘Configure ESXi host and Guest VM for DataCore Server for low
latency’. The following settings have been removed:
 NUMA/vCPU affinity

DataCore are no longer making recommendations regarding NUMA/vCPU affinity settings as we have found that
different server vendors have different NUMA settings that can be changed, and many users have reported
that making these changes made no difference and, in extreme cases, caused slightly worse performance than
before. Users who may have already changed these settings based on the previous document and are running
without issue do not need to revert anything.

SANsymphony vs Virtual SAN Licensing


This section has been removed as it no longer applies to the current licensing model.

Appendix A
This has been removed. The information that was there has been incorporated into relevant sections throughout
this document.

May 2018
Added
Updated the VSAN License feature to include 1 registered host
Removed example with MS Cluster using Loopback ports (currently not supported)

Updated
DataCore Server in Guest VM on ESXi table (corrected errors, added references to VMware documentation and
added Disable Delayed ACK)

Removed
ESXi 5.5 because of issues only fixed in ESXi 6.0 update2
https://kb.vmware.com/s/article/2129176

September 2016

Added
First publication of document



The authority on real-time data

Copyright © 2019 by DataCore Software Corporation. All rights reserved.

DataCore, the DataCore logo and SANsymphony are trademarks of DataCore Software Corporation. Other DataCore
product or service names or logos referenced herein are trademarks of DataCore Software Corporation. All other
products, services and company names mentioned herein may be trademarks of their respective owners.

ALTHOUGH THE MATERIAL PRESENTED IN THIS DOCUMENT IS BELIEVED TO BE ACCURATE, IT IS PROVIDED “AS IS”
AND USERS MUST TAKE ALL RESPONSIBILITY FOR THE USE OR APPLICATION OF THE PRODUCTS DESCRIBED AND
THE INFORMATION CONTAINED IN THIS DOCUMENT. NEITHER DATACORE NOR ITS SUPPLIERS MAKE ANY
EXPRESS OR IMPLIED REPRESENTATION, WARRANTY OR ENDORSEMENT REGARDING, AND SHALL HAVE NO
LIABILITY FOR, THE USE OR APPLICATION OF ANY DATACORE OR THIRD PARTY PRODUCTS OR THE OTHER
INFORMATION REFERRED TO IN THIS DOCUMENT. ALL SUCH WARRANTIES (INCLUDING ANY IMPLIED
WARRANTIES OF MERCHANTABILITY, NON-INFRINGEMENT, FITNESS FOR A PARTICULAR PURPOSE AND AGAINST
HIDDEN DEFECTS) AND LIABILITY ARE HEREBY DISCLAIMED TO THE FULLEST EXTENT PERMITTED BY LAW.

No part of this document may be copied, reproduced, translated or reduced to any electronic medium or machine-
readable form without the prior written consent of DataCore Software Corporation
