October 2019
Updated
General
This document has been reviewed for SANsymphony 10.0 PSP 9.
No additional settings or configurations are required.
Removed
General
Any information that is specific to ESXi versions 5.0 and 5.1 has been removed as these are
considered 'End of General Support'. See: https://kb.vmware.com/s/article/2145103
ESXi
5.5: Tested/Works
6.x: Tested/Works

ESXi
5.5: Tested/Works
6.0: Tested/Works

Notes:
Requires the 'DataCore SANsymphony Storage Replication Adapter'; please see the release
notes from https://datacore.custhelp.com/app/downloads
Storage IO Control
Applies to all versions of SANsymphony 10.x

ESXi
6.0 and earlier: N/A
6.5: Tested/Works
6.7: Tested/Works

Notes:
No additional configuration is required on the DataCore Server.
1 Only supported/works with DataCore's SANsymphony Storage Replication Adaptor 2.0 or later
Qualified
This combination has been tested by DataCore, with all the host-specific settings listed in
this document applied, using non-mirrored, Mirrored and Dual Virtual Disks.
Not Qualified
This combination has not yet been tested by DataCore using Mirrored or Dual Virtual Disk
types. DataCore cannot guarantee 'high availability' (failover/failback, continued access etc.)
even if the host-specific settings listed in this document are applied. Self-qualification may
be possible; please see Technical Support FAQ #1506.
Mirrored or Dual Virtual Disk types are configured at the user's own risk; however, any
problems that are encountered while using VMware versions that are 'Not Qualified' will still
get root-cause analysis.
Non-mirrored Virtual Disks are always considered 'Qualified' - even for 'Not Qualified'
combinations of VMware/SANsymphony.
Not Supported
This combination has either failed 'high availability' testing by DataCore using Mirrored or
Dual Virtual Disk types, or the operating system's own requirements/limitations (e.g. age,
specific hardware requirements) make it impractical to test. DataCore will not guarantee
'high availability' (failover/failback, continued access etc.) if the host-specific settings listed
in this document are applied. Mirrored or Dual Virtual Disk types are configured at the
user's own risk. Self-qualification is not possible.
Mirrored or Dual Virtual Disk types are configured at the user's own risk; however, any
problems that are encountered while using VMware versions that are 'Not Supported' will
get best-effort Technical Support (e.g. to regain access to Virtual Disks) but no root-cause
analysis will be done.
Non-mirrored Virtual Disks are always considered 'Qualified' – even for 'Not Supported'
combinations of VMware/SANsymphony.
For any problems that are encountered while using VMware versions that are EOSL, EOA or
EOD with DataCore Software, only best-effort Technical Support will be performed (e.g. to
get access to Virtual Disks). Root-cause analysis will not be done.
Port roles
Ports that are used to serve Virtual Disks to Hosts should only have the Front End role
checked. While it is technically possible to check additional roles on a Front End port (i.e.
Mirror and Backend), this may cause unexpected results after stopping the SANsymphony
software.
Any port that has the Front End role (and is serving Virtual Disks to Hosts) and also has the
Mirror and/or Backend role enabled will remain 'active' even when the SANsymphony
software is stopped. There are slight differences in behavior depending on the version of
SANsymphony installed.
Front-end ports that are serving Virtual Disks but remain active after the SANsymphony
software has been stopped can cause unexpected results for some Host operating systems
as they continue to try to access Virtual Disks from the ‘active’ port on the now-stopped
DataCore Server. This, in turn, may end up delaying Host fail-over or result in complete loss of
access from the Host’s application/Virtual Machines.
Multipathing
The Multipathing Support option should be enabled so that Mirrored Virtual Disks or Dual
Virtual Disks can be served to Hosts from all available DataCore FE ports. Also see the
Multipathing Support section from the SANsymphony Help: https://docs.datacore.com/SSV-
WebHelp/Hosts.htm
ALUA support
The ALUA support option (Asymmetrical Logical Unit Access) should be enabled if required
and if Multipathing Support has also been enabled (see above). Please refer to the
Operating system compatibility table on page 4 to see which combinations of VMware ESXi
and SANsymphony support ALUA. More information on Preferred Servers and Preferred
Paths used by the ALUA function can be found in Appendix A on page 27.
While DataCore cannot guarantee that a disk device's NAA is used by a Host's operating
system to identify a disk device served to it over different paths, generally we have found that
it is. And while there is sometimes a convention that all paths for the same disk device should
always use the same LUN 'number' to guarantee consistency for device identification, this
may not be technically required. Always refer to the Host operating system vendor's own
documentation for advice on this.
DataCore's software does, however, always try to create mappings between the Host's ports
and the DataCore Server's Front-end (FE) ports for a Virtual Disk using the same LUN number
where it can. The software will first find the next available (lowest) LUN 'number' for the Host-
DataCore FE mapping combination being applied and will then try to apply that same LUN
number to all other mappings that are being attempted when the Virtual Disk is being
served. If any Host-DataCore FE port combination being requested at that moment is already
using that same LUN number (e.g. if a Host has other Virtual Disks served to it from before),
then the software will find the next available LUN number and apply that to those specific
Host-DataCore FE mappings only.
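The LUN-selection behavior just described can be sketched as follows (a simplified illustration, not DataCore's actual implementation; all names are invented for the example):

```python
def assign_luns(in_use, pairs):
    """Choose LUN numbers for a Virtual Disk being served over several
    Host-port/FE-port pairs, mimicking the behavior described above.

    in_use: dict mapping (host_port, fe_port) -> set of LUN numbers
            already used by that mapping combination
    pairs:  list of (host_port, fe_port) combinations to map
    """
    # Step 1: find the next available (lowest) LUN number for the
    # first Host-DataCore FE mapping combination being applied.
    preferred = 0
    while preferred in in_use.get(pairs[0], set()):
        preferred += 1
    # Step 2: try to apply that same LUN number to every mapping; if a
    # combination already uses it, fall back to the next available
    # number for that specific mapping only.
    chosen = {}
    for pair in pairs:
        lun = preferred
        while lun in in_use.get(pair, set()):
            lun += 1
        chosen[pair] = lun
    return chosen
```

So if one FE mapping already uses LUNs 0 and 1, the Virtual Disk is offered as LUN 2 on every mapping where that number is still free, and only conflicting mappings fall back to the next available number.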
Advanced settings
DiskMaxIOSize
By default, an ESXi Host will send I/O requests up to a maximum of 32MB in size. SANsymphony
will split 'large' I/O requests into much smaller request sizes; for sequential I/O patterns this
will not usually have any noticeable effect on overall latency on the ESXi Host. For other,
more random I/O patterns, however, the Host may have to wait longer for each of its large
I/O requests to complete, because all of the now-smaller requests must each be completed
on the DataCore Server(s) before SANsymphony can allow the next 'large' I/O request to be
sent from the Host, and this can significantly increase overall latency between the Host and
the DataCore Server.
Therefore, DataCore strongly recommend that the DiskMaxIOSize setting be reduced so that,
in the case of non-sequential I/O, there is no significant additional wait time for I/O requests
to complete. Using vSphere, select the ESX Host and click on the ‘Configure’ tab. From the
System options, choose ‘Advanced System Settings’:
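Assuming the standard esxcli advanced-settings interface, the same change can also be made from the ESXi console; the value 512 (KB) below is an illustrative example, not necessarily DataCore's recommended figure:

```shell
# Show the current maximum I/O size (in KB) that the ESXi Host will send
esxcli system settings advanced list -o /Disk/DiskMaxIOSize

# Reduce it, e.g. to 512 KB (illustrative value; check DataCore's
# current recommendation before applying in production)
esxcli system settings advanced set -o /Disk/DiskMaxIOSize -i 512
```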
Also see
Tuning ESX/ESXi for better storage performance by modifying the maximum I/O block
size
https://kb.vmware.com/s/article/1003469
If the PSP requires a different SATP than the original PSP setting, then DataCore recommend
that the Virtual Disks are first unserved from the Host, then the old SATP is removed, the new
SATP is applied and finally the Virtual Disks are re-served and re-discovered by the Host.
Using different PSPs for the same Virtual Disk across multiple Hosts
While technically possible, this is not supported by DataCore as we cannot guarantee the
behavior of any/all of the VMware ESXi Hosts sharing this Virtual Disk.
Always verify that DataCore Virtual Disks have been set correctly
Either use the vSphere client UI or run the following command in the ESXi console:
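The command itself is not reproduced at this point in this copy of the document; a command of the following form (assuming DataCore Virtual Disks report NAA identifiers beginning with the vendor prefix naa.60030d9, which should be verified for your installation) would produce the output described:

```shell
# List all NMP devices, keeping only DataCore entries plus 3 lines of
# context above and below (which include the SATP and PSP in use)
esxcli storage nmp device list | grep -i -B3 -A3 naa.60030d9
```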
This command lists all devices that only use DataCore’s unique NAA identifier - part of a
SANsymphony Virtual Disk’s SCSI Standard Inquiry Data - with an additional 3 lines of output
above and below that should also include the PSP and the SATP.
Also see
esxcli storage nmp satp rule add -V DataCore -M "Virtual Disk" -s VMW_SATP_ALUA -c
tpgs_on -P VMW_PSP_RR -O iops=10
DataCore recommends changing the IOPS value to 10 (from the default of 1000) as this has
been found in testing to improve performance.
Notes
SATP rules are persistent and will be claimed during the boot process for all existing
Virtual Disks and any new Virtual Disks served later.
It is possible to apply these additional settings on-the-fly to a running ESXi Host (i.e.
without any IO disruption) by using esxcli. However, this is not the same as
configuring an SATP rule and, if done this way, will have no persistence over the next
reboot. On the 'next' reboot, the rule will be claimed as it had originally been
defined, and these on-the-fly changes will no longer be in place (and will need to be
remade manually).
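For reference, on-the-fly (non-persistent) changes of the kind mentioned above would look like the following; the device identifier is a placeholder:

```shell
# Set the PSP for one existing device (not persistent across reboots)
esxcli storage nmp device set --device naa.60030d90xxxxxxxx --psp VMW_PSP_RR

# Set the Round Robin IOPS limit for that device (also not persistent)
esxcli storage nmp psp roundrobin deviceconfig set \
  --device naa.60030d90xxxxxxxx --type iops --iops 10
```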
For Round Robin, Hosts must have the ALUA setting enabled in the SANsymphony console.
For Round Robin, DataCore recommends configuring Hosts with an explicit Preferred Server
or using the 'Auto select' setting.
When the Host's Preferred Server setting is 'Auto Select', the Host's paths to the
DataCore Server listed first in the Virtual Disk's details tab will be set as 'Active Optimized'. In
the case of a named DataCore Server, that DataCore Server's paths will be used as the
'Active Optimized' side, regardless of its order in the Virtual Disk's details tab.
All paths from the Host to the other DataCore Server(s) will be set as 'Active Non-optimized'.
Setting the Preferred Server to 'All' will set all paths on all DataCore Servers to be 'Active
Optimized'; while this may seem ideal at first, it may end up causing worse performance than expected.
When there are significant distances between DataCore Servers and Hosts (e.g. across links
between remote data centers), then sending I/O Round Robin to ‘remote’ DataCore Servers
compared to a Host’s location may cause noticeable delays/latency while the I/O travels over
the remote links and back compared to just sending the I/O to DataCore Servers that are
‘local’ to the Host.
Therefore, testing is advised before using the 'All' preferred setting in production to make
sure that the I/O speeds between servers are adequate.
Also see
Fixed PSP
esxcli storage nmp satp rule add -V DataCore -M "Virtual Disk" -s VMW_SATP_ALUA -c
tpgs_on -P VMW_PSP_FIXED
For Fixed, Hosts must have the ALUA setting enabled in the SANsymphony console.
For Fixed, Hosts must use the 'All' setting, as this will set all paths on all DataCore Servers to
be 'Active Optimized', which Fixed always expects when failing over to/from a DataCore
Server. Using a different 'Preferred Server' setting could leave one or more of the DataCore
Servers 'Active Non-Optimized', which may cause failover/failback to not work as expected.
Unlike Round Robin, however, Hosts will not send IO to all paths of the preferred DataCore
Server when using Fixed, and the 'active/preferred' paths are not actually controlled by the
DataCore Server's 'Preferred Server' setting but by the Fixed PSP on the ESXi Host.
Please refer to VMware's own documentation on how to configure 'active' paths when using
Fixed PSP.
Also see
esxcli storage nmp satp rule add -V DataCore -M "Virtual Disk" -s VMW_SATP_DEFAULT_AA
-P VMW_PSP_MRU
For MRU, Hosts must not have the ALUA setting enabled in the SANsymphony console.
As there is no ALUA setting enabled, the 'Preferred Server' setting is ignored. The
'active/preferred' paths are not actually controlled by the DataCore Server's 'Preferred Server'
setting but by the MRU PSP on the ESXi Host.
Please refer to VMware's own documentation on how to configure 'active' paths when using
the MRU PSP.
Also see
Known issues
The following is intended to make DataCore Software users aware of any issues that affect
performance, access or may give unexpected results under particular conditions when
SANsymphony is used in configurations with VMware ESXi Hosts.
These Known Issues may have been found during DataCore's own testing, but others may
have been reported by our users where the solution turned out to be unrelated to
DataCore's own products.
DataCore cannot be held responsible for incorrect information regarding another vendor’s
products and no assumptions should be made that DataCore has any communication with
these other vendors regarding the issues listed here.
We always recommend that the vendors be contacted directly for more information
on anything listed in this section.
For ‘Known issues’ that apply to DataCore Software’s own products, please refer to the
relevant DataCore Software Component’s release notes.
Failover
Affects ESX 6.x and 5.5
Sharing the same ‘inter-site’ connection for both Front end and Mirror ports may result
in loss of access to Virtual Disks for ESXi Hosts if a failure occurs on that shared
connection.
Sharing the same physical connection for both FE and MR ports will work as expected as
long as everything is healthy. Any kind of failure event over this 'single' link may cause both
mirror and front end I/O to fail at the same time, and this will result in one or more Virtual
Disks being unexpectedly inaccessible to one or more ESXi Hosts even though there is an
available I/O path to one of the DataCore Servers.
Even though the DataCore Server will issue the correct SCSI notification back to the ESXi
Hosts (i.e. 'LUN_NOT_AVAILABLE') to tell them that the path to the Virtual Disk is no longer
available, the ESXi Host will ignore this SCSI response and continue to try to access the
Virtual Disks on a path that will be reported by VMware as either 'Permanent Device Loss'
(PDL) or 'All-Paths-Down' (APD). ESXi will not attempt any 'failover' (HA) or 'move' of the VM
(Fault Tolerance) and will lose access to the Virtual Disk.
Because of this ESXi failover limitation, DataCore cannot guarantee failover for a
configuration where ESX Hosts are serving Virtual Disks over physical link(s) where, at the
same time, the DataCore Servers are using these same physical link(s) for Mirror I/O
(between DataCore Servers).
Therefore, DataCore recommend that at least two physically separate links are used: one
for Mirror I/O and the other for FE I/O.
See: https://kb.vmware.com/s/article/56535
See https://kb.vmware.com/s/article/2144657.
ISCSI connections
Affects ESX 6.x and 5.5
ESXi Hosts experience degraded IO performance on the iSCSI network when Delayed ACK is
'enabled' on the ESXi Host's software iSCSI initiator.
For more specific information and how to disable the 'Delayed ACK' feature on ESXi Hosts:
See https://kb.vmware.com/s/article/1002598
The Front End port will only accept the ‘first’ login and a unique iSCSI Session ID (ISID) will be created.
All subsequent connections coming from a different interface (but that shares the same IQN as the
first login) will create an ISID conflict and be rejected by the SANsymphony software. No further iSCSI
logins will be allowed. Also note that if a SCSI event causes a logout of the iSCSI session then another
interface (sharing the same IQN) may be able to login and prevent a previously connected iSCSI
interface from being able to re-connect.
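The single-session-per-IQN behavior described above can be modeled roughly as follows (a toy sketch of the described rule, not DataCore's implementation):

```python
class FrontEndPort:
    """Toy model of the pre-10.0 PSP 6 Update 5 behavior described
    above: one iSCSI session per initiator IQN per Front End port."""

    def __init__(self):
        self.sessions = {}  # iqn -> interface currently logged in

    def login(self, iqn, interface):
        # Only the first interface to log in with a given IQN is
        # accepted; later logins from a different interface sharing
        # that IQN create an ISID conflict and are rejected.
        if iqn in self.sessions and self.sessions[iqn] != interface:
            return False  # ISID conflict: rejected
        self.sessions[iqn] = interface
        return True

    def logout(self, iqn):
        # After a logout, another interface sharing the same IQN may
        # log in first and lock out the previously connected one.
        self.sessions.pop(iqn, None)
```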
See the following examples of supported and not-supported configurations when using SANsymphony
10.0 PSP 6 Update 5 or earlier.
One ESX Host has four interfaces that all share the same IQN:
192.168.1.1 (iqn.esx1)
192.168.2.1 (iqn.esx1)
192.168.1.2 (iqn.esx1)
192.168.2.2 (iqn.esx1)
Two DataCore Servers each have 2 FE ports with their own IP address and own IQN:
192.168.1.101 (iqn.dcs1-1)
192.168.2.101 (iqn.dcs1-2)
192.168.1.102 (iqn.dcs2-1)
192.168.2.102 (iqn.dcs2-2)
Each Interface of the ESX Host connects to a separate FE Port on each DataCore Server:
This type of configuration is very easy to manage, especially if there are any connection problems.
Both Interfaces from ESX1 are connected to the same Interface on the DataCore Server.
Server Hardware
Affects ESX 6.5
HPE ProLiant Gen10 Servers Running VMware ESXi 6.5 (or Later) and Configured with a
Gen10 Smart Array Controller may lose connectivity to Storage Devices.
See https://kb.vmware.com/s/article/1030265.
VAAI
Affects ESX 6.x and 5.5
Thin Provisioning Block Space Reclamation (VAAI UNMAP) does not work if the volume
is not native VMFS-5 (i.e. it is converted from VMFS-3) or the partition table of the LUN
was created manually
See: https://kb.vmware.com/s/article/2048466
See: https://kb.vmware.com/s/article/1005009
See https://kb.vmware.com/s/article/2113956.
If ESXi Hosts are connected to other storage arrays, contact VMware to see if it is safe to
disable this setting for those arrays.
VMotion
Affects ESX 6.x
VMs get corrupted on vVOL datastores after vMotion
When a VM residing on a vVOL datastore is migrated using vMotion to another host, either
by DRS or manually, and the VM has one or more of the following features enabled:
CBT
VFRC
IOFilter
VM Encryption
then corruption of data/backups/replicas and/or performance degradation is experienced
after vMotion.
See: https://kb.vmware.com/s/article/55800
The following VMware article provides steps to work around the issue:
https://kb.vmware.com/s/article/1011754
Notes
While SANsymphony will always attempt to match a LUN on all Hosts for a Virtual Disk, in
some cases it is not possible to do so – e.g. a vDisk is served to an ESXi host that already has
vDisks mapped and the vDisk had already been served to other ESXi hosts previously;
matching the LUN across all ESXi hosts may not be possible because this would conflict
with existing mappings for other vDisks.
Also see: Serving Virtual Disks - To more than one Host port on page 8
This is what the Paths should look like (see following page):
Notes
Use the ESXi 'esxtop' command (e.g. using either ‘d’ or ‘u’ switches) to show actual activity
on the expected paths and/or devices.
VMware Tools
Affects ESX 6.5
VMware Tools Version 10.3.0 Recall and Workaround Recommendations
VMware has been made aware of issues in some vSphere ESXi 6.5 configurations with the
VMXNET3 network driver for Windows that was released with VMware Tools 10.3.0.
This release has been removed from the VMware Downloads page.
See: https://kb.vmware.com/s/article/57796
See: https://kb.vmware.com/s/article/54459
See https://kb.vmware.com/s/article/2144153.
See https://kb.vmware.com/s/article/1037959
Specifically read the 'additional notes' (under the section 'VMware vSphere support for
running Microsoft clustered configurations').
See https://kb.vmware.com/s/article/1016106.
If for any reason the Storage Source on the preferred DataCore Server becomes unavailable,
and the Host Access for the Virtual Disk is set to Offline or Disabled, then the other DataCore
Server will be designated the ‘Active Optimized’ side. The Host will be notified by both
DataCore Servers that there has been an ALUA state change, forcing the Host to re-check the
ALUA state of both DataCore Servers and act accordingly.
If the Storage Source on the preferred DataCore Server becomes unavailable but the Host
Access for the Virtual Disk remains Read/Write (for example, if only the Storage behind the
DataCore Server is unavailable but the FE and MR paths are all connected, or if the Host
physically becomes disconnected from the preferred DataCore Server, e.g. a Fibre Channel
or iSCSI cable failure), then the ALUA state will not change for the remaining, 'Active Non-
optimized' side. However, in this case, the DataCore Server will not prevent access by the
Host, nor will it change the way READ or WRITE IO is handled compared to the 'Active
Optimized' side; but the Host will still register this DataCore Server's paths as 'Active
Non-Optimized', which may (or may not) affect how the Host behaves generally.
Also see
In the case where the Preferred Server is set to ‘All’, then both DataCore Servers are
designated ‘Active Optimized’ for Host IO.
All IO requests from a Host will use all Paths to all DataCore Servers equally, regardless of the
distance that the IO has to travel to the DataCore Server. For this reason, the 'All' setting is
not normally recommended. If a Host has to send a WRITE IO to a 'remote' DataCore Server
(where the IO path is significantly distant compared to the other, 'local' DataCore Server),
the accrued WAIT times can be significant: the IO must be sent across the SAN to the
remote DataCore Server, the remote DataCore Server must mirror the write back to the local
DataCore Server, the mirror write must be acknowledged from the local DataCore Server to
the remote DataCore Server, and finally the acknowledgement must be sent back to the
Host across the SAN.
The benefits of being able to use all Paths to all DataCore Servers for all Virtual Disks are not
always clear cut. Testing is advised.
So, for example, if the Preferred Server is designated as DataCore Server A and the Preferred
Paths are designated as DataCore Server B, then DataCore Server B will be the 'Active
Optimized' side, not DataCore Server A.
In a two-node Server group there is usually nothing to be gained by making the Preferred
Path setting different to the Preferred Server setting and it may also cause confusion when
trying to diagnose path problems, or when redesigning your DataCore SAN with regard to
Host IO Paths.
For Server Groups that have three or more DataCore Servers, and where one (or more) of
these DataCore Servers shares Mirror Paths with other DataCore Servers, setting the
Preferred Path makes more sense.
So, for example, if DataCore Server A has two mirrored Virtual Disks, one with DataCore
Server B and one with DataCore Server C, and DataCore Server B also has a mirrored Virtual
Disk with DataCore Server C, then using just the Preferred Server setting to designate the
'Active Optimized' side for the Host's Virtual Disks becomes more complicated. In this case
the Preferred Path setting can be used to override the Preferred Server setting for a much
more granular level of control.
Manual Reclamation
SANsymphony checks for ‘zero’ block data by sending read I/O to the storage. When all the
blocks of an allocated SAU are detected as having ‘zero’ data on them, the storage used by
the SAU is then reclaimed.
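The zero-detection step can be illustrated like this (a simplified sketch; `read_block` stands in for the read I/O that SANsymphony sends to the storage, and all names are invented for the example):

```python
def reclaimable_saus(sau_offsets, sau_size, read_block):
    """Return indices of allocated Storage Allocation Units whose
    blocks are all zero, per the manual-reclamation check described
    above.

    sau_offsets: storage offset of each allocated SAU
    sau_size:    size of one SAU in bytes
    read_block:  function(offset, length) -> bytes read from storage
    """
    zeros = bytes(sau_size)  # an all-zero SAU for comparison
    return [i for i, off in enumerate(sau_offsets)
            if read_block(off, sau_size) == zeros]
```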
Mirrored Virtual Disks will receive the manual reclamation ‘request’ on all DataCore Servers
involved in the mirror configuration at the same time and each DataCore Server will read
from its own storage. The Manual reclamation ‘request’ is not sent to replication destination
DataCore Servers from the source. Replication destinations will need to be manually
reclaimed separately.
Important Note
Thin Provisioning Block Space Reclamation (VAAI UNMAP) does not work if the volume
is not native VMFS-5 (i.e. it is converted from VMFS-3) or the partition table of the LUN
was created manually
See: https://kb.vmware.com/s/article/2048466
Using vSphere
Add a new ‘Hard Disk’ to the ESXi Datastore of a size less than or equal to the free space
reported by ESXi and choose ‘Disk Provisioning: Thick Provisioned Eager Zero’. Once the
creation of the VMDK has completed (and storage has been reclaimed from the Disk
Pool), this VMDK can be deleted.
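The console command referred to below is not preserved in this copy of the document; using vmkfstools, a command of this form (datastore path and file name are illustrative) matches the description:

```shell
# Create an eager-zeroed thick VMDK of [size] on the datastore; writing
# the zeros allows the Disk Pool to reclaim the space. The file can be
# deleted once creation completes. Path and name are illustrative.
vmkfstools -c [size] -d eagerzeroedthick /vmfs/volumes/Datastore1/reclaim.vmdk
```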
Where ‘[size]’ is less than or equal to the free space reported by ESXi.
Once the creation of the VMDK has completed (and storage has been reclaimed from the
Disk Pool), this VMDK can be deleted.
Also see
ESXi 6.5 Hosts: Space Reclamation Requests from Guest Operating Systems
https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.storage.doc/GUID-
5E1396BE-6EA8-4A6B-A458-FC9718E2C55B.html
Also see
Updated
The DataCore Server’s settings – Port Roles
Removed
General
All information regarding SANsymphony-V 9.x as this version is end of life (EOL).
Please see: End of life notifications for DataCore Software products
https://datacore.custhelp.com/app/answers/detail/a_id/1329
For more information.
June 2019
Updated
Appendix B - Reclaiming Storage from Disk Pools
Defragmenting data on Virtual Disks
March 2019
Updated
The VMware ESXi Host's settings
Advanced settings – DiskMaxIOSize
An explanation has been added on why DataCore recommend that the default value for an ESXi host is changed.
December 2018
Updated
VMware Path Selection Policies – Round Robin PSP
November 2018
Updated
VMware ESXi compatibility lists - VMware Site Recovery Manager (SRM)
ESXi SANsymphony 9.0 PSP 4 Update 4 (1) SANsymphony 10.0 (all versions)
ESX 6.5 is now supported using DataCore’s SANsymphony Storage Replication Adaptor 2.0. ESX 6.7 is currently not
qualified.
Please see DataCore’s SANsymphony Storage Replication Adaptor 2.0 release notes from
https://datacore.custhelp.com/app/downloads
October 2018
Added
Known Issues - Failover
Affects ESX 6.7 only
Failover/Failback takes significantly longer than expected.
Users have reported to DataCore that before applying ESXi 6.7 Patch Release ESXi-6.7.0-20180804001 (or later),
failover could take in excess of 5 minutes. DataCore recommend (as always) applying the most up-to-date
patches to your ESXi operating system. Also see: https://kb.vmware.com/s/article/56535
Updated
VMware ESXi compatibility lists
VMware VVOL VASA API 2.0
ESXi 9.0 PSP 4 Update 4 10.0 PSP 3 and earlier 10.0 PSP 4 and later
Previously ESX 5.x was incorrectly listed as VVOL/VASA compatible with 10.0 PSP 4 and later.
September 2018
Added
Known Issues
vSphere Client and vSphere Web Client
Affects ESX 6.x and 5.x
Cannot extend datastore through vCenter Server
If a SANsymphony Virtual Disk served to more than one ESX Host is not using the same LUN on all Front End
paths for all Hosts and then has its logical size extended, vSphere may not be able to display the LUN in its UI to
then expand the VMware datastore. This article provides steps to work around the issue.
See: https://kb.vmware.com/s/article/1011754
VMware Tools
Affects ESX 6.5
VMware Tools Version 10.3.0 Recall and Workaround Recommendations
VMware has been made aware of issues in some vSphere ESXi 6.5 configurations with the VMXNET3 network
driver for Windows that was released with VMware Tools 10.3.0.
As a result, VMware has recalled the VMware Tools 10.3.0 release. This release has been removed from the VMware
Downloads page - see: https://kb.vmware.com/s/article/57796
August 2018
Added
Known Issues
Affects ESX 6.5
HPE ProLiant Gen10 Servers Running VMware ESXi 6.5 (or Later) and Configured with a Gen10 Smart Array
Controller may lose connectivity to Storage Devices.
Search https://support.hpe.com/hpesc/public/home using keyword a00041660en_us
ESXi SANsymphony 9.0 PSP 4 Update 4 (2) SANsymphony 10.0 (all versions)
Updated
Appendix B - Reclaiming storage from Disk Pools
Reclaiming storage on the Host using VAAI - Space reclamation granularity setting
DataCore now recommend using a setting of 1MB.
Previously the recommendation was 4MB (to reflect the smallest possible Disk Pool SAU size) however VMware
disable UNMAP commands to the storage if this setting is greater than 1MB. See VMware’s own documentation
‘Thin Provisioning and Space Reclamation/ Storage Space Reclamation/ Space Reclamation Requests from VMFS
Datastores’ for more information.
July 2018
Added
VMware ESXi compatibility lists - VSphere 6.5 – Storage IO Control
2
Earlier versions are ‘End of Life’. See: https://datacore.custhelp.com/app/answers/detail/a_id/1329
May 2018
Added
The VMware ESXi Host's settings – Advanced Settings
Enable VM Component Protection (VMCP)
VMware ESXi 6.x only. Enable VM Component Protection (VMCP) on your HA cluster to allow the cluster to react to
“all paths down” and “permanent device loss” conditions by restarting the VMs.
ESXi hosts need to perform a rescan whenever Virtual Disks are unserved
See https://kb.vmware.com/s/article/2004605 and https://kb.vmware.com/s/article/1003988. Without a rescan on
the Host, ESXi will continue to send SCSI commands to DataCore Server Frontend Ports for LUNs that are no
longer served. This causes the DataCore Server to have to send back an appropriate 'ILLEGAL_REQUEST' SCSI
response each time the missing LUN is probed for by the Host. In extreme cases, when large numbers of Virtual
Disks are unserved, the amount of SCSI commands generated by this send-and-response will significantly affect
the performance of any Host that is using the Front End Port(s) for existing Virtual Disks.
Updated
VMware Path Selection Policies - Configuring the Round Robin Path Selection Policy
While it is still possible to use VMware ESXi’s built-in, generic 'VMW_SATP_ALUA' rule, e.g.:
DataCore are now recommending that this custom SATP rule be used instead:
esxcli storage nmp satp rule add -V DataCore -M "Virtual Disk" -s VMW_SATP_ALUA -c tpgs_on -P
VMW_PSP_RR -O iops=10
Important notes for users that were already using the previously documented custom SATP rule:
The difference between the old and new custom rule is the addition of the -O iops=10
Remove the existing custom rule before trying to add the new one e.g:
esxcli storage nmp satp rule remove -V DataCore -M "Virtual Disk" -s VMW_SATP_ALUA -c tpgs_on -P
VMW_PSP_RR
While it is possible to adjust the existing rule using the command line - see
https://kb.vmware.com/s/article/2069356 - this is not persistent over a reboot, therefore DataCore do not
recommend following the command in the VMware KB article.
February 2018
Added
The VMware ESXi Host's settings – Advanced Settings
Updated
General
This document has been reviewed for SANsymphony 10.0 PSP 7.
Removed
General – ESXi 4.x and earlier
All references to ESXi version 4.x have now been removed as this product is considered end of technical guidance
from VMware. Note: SANsymphony-V 9.0 PSP 4 Update 4 is still considered qualified with ESXi 4.1.x; earlier
versions of ESXi are all considered not supported. However, if there is still a specific requirement to use ESXi 4.1
with SANsymphony-V 9.0 PSP 4 Update 4, then contact DataCore Technical Support, who will be able to give
advice on any relevant information that has now been removed from this document.
October 2017
Updated
When connecting ESXi Hosts to DataCore Servers
ESX Hosts with IP addresses that share the same IQN connecting to the same DataCore Server Front-end port is
not supported in SANsymphony versions 10.0 PSP 6 Update 5 and earlier (this also includes ESXi 'Port Binding').
Please see the ISCSI Connections section for more specific information, with examples.
August 2017
Added
Known Issues – applies to all versions of ESXi
ESXi Hosts experience degraded IO performance on the iSCSI network when Delayed ACK is 'enabled' on the ESXi
Host's software iSCSI initiator.
See https://kb.vmware.com/s/article/1002598 for more specific information and how to disable the 'Delayed ACK'
feature on ESXi Hosts. A reboot of the ESXi Host will be required.
Updated
Appendix C - Reclaiming storage
Added updated information specific to ESX 6.5 with regard to VMware's 'Space Reclamation Requests from Guest
Operating Systems' feature with VMFS6.
June 2017
Updated
Compatibility List – VMware ESXi Path Selection Policies
There was an inconsistency between what was reported in the table and the compatibility notes. Previously the
MRU PSP had stated, in the table, that it was 'Qualified' with ESX versions 4.x and 5.x. However the compatibility
notes stated that MRU (while not on VMware's Hardware Compatibility List), was actually considered 'Not Qualified'
by DataCore Software, rather than 'Not Supported'. The table has now been corrected to reflect the compatibility
notes.
May 2017
Added
Known Issues – all ESX versions – When connecting ESXi Hosts to DataCore Servers
After upgrading to VMware ESXi 6.0 Update 3 ESX paths will only report as 'Active'. No paths will report as 'Active
(I/O)' - regardless of the Path Selection Policy.
April 2017
Added
Known Issues – all ESX versions – Converged Network Adaptors
When using QLogic's Dual-Port, 10Gbps Ethernet-to-PCIe Converged Network Adaptor (CNA)
Disable both the adaptor's BIOS and the 'Select a LUN to Boot from' option.
This was previously documented in 'Known Issues - Third Party Hardware and Software'
https://datacore.custhelp.com/app/answers/detail/a_id/1277
Updated
VMware ESXi Compatibility lists – VMware ESXi Path Selection Policies (PSP)
The information regarding the Most Recently Used (MRU) PSP and ESXi 6.x was incorrectly listed as 'Supported'. It
has been corrected to 'Not Qualified'.
February 2017
Added
VMware ESXi compatibility notes
VMware 'Fault Tolerant' or 'High Available' Clusters
Explained a specific configuration setup that DataCore cannot support when using VMware FT or HA clusters and
the reasons for that. This is also referred to again in the 'Known Issues' section.
November 2016
Updated
Appendix C - Reclaiming storage
Automatic and Manual reclamation
These two sections have been re-written with more detailed explanations and technical notes.
October 2016
Updated
The VMware ESXi Host's settings - ISCSI Connections
ESXi Hosts whose IP addresses share the same IQN and connect to the same DataCore Server Front-end port are
not supported (this also includes ESXi 'Port Binding'). The supported configuration example has been updated to
make it more obvious as to what is required (along with the same, corresponding changes made to the
unsupported example so that the comparison is easy to spot).
September 2016
Added
Known Issues - general
There has been a general re-organization of this section separating all issues into subsections determined by the
version of ESXi that the known issue refers to.
Updated
The VMware ESXi Host's settings – ISCSI Connections
The information that was previously in the 'Known Issues' section regarding connections from multiple NICs
sharing the same IQN has been moved to this section, as it affects all versions of ESX and is not so much a 'Known
Issue' as a configuration requirement.
August 2016
Added
Known Issues
ESXi/ESX hosts with visibility to RDM LUNs being used by MSCS nodes with RDMs may take a long time to start or
during LUN rescan. Applies to ESX 6.x, 5.x and 4.x. Please see: https://kb.vmware.com/s/article/1016106
July 2016
Added
The DataCore Server's settings
Added link:
Video: Configuring ESX Hosts in the DataCore Management Console
https://datacore.custhelp.com/app/answers/detail/a_id/1637
Updated
This document has been reviewed for SANsymphony 10.0 PSP 5.
Known Issues
vMotion causing loss of access to filesystem for MSCS cluster nodes (2144153)
This was previously listed as "Running Microsoft Cluster Services in a Virtual Machine on Virtual Disks that have
more than one Front End mapping to each DataCore Server may cause unexpected loss of access". A
Knowledgebase article has now been released by VMware https://kb.vmware.com/s/article/2144153
April 2016
Updated
Known Issues - VMware 6.0
Storage PDL responses may not trigger path failover in vSphere 6.0
https://kb.vmware.com/s/article/2144657.
Note: This affects both vSphere 6.0 and 6.0 U1 customers. A fix is available in 6.0 U2.
February 2016
Updated
List of qualified VMware Versions - Qualification notes on VMware-specific functions
Removed references specific to 'End of Life' versions of SANsymphony-V – this includes all versions of
SANsymphony-V 8.x and any version of 9.x that are PSP 3 or earlier.
December 2015
Updated
List of qualified VMware Versions - Qualification notes on VMware-specific functions
Path Selection Policies and VMware ESX 6.x
For ESX 6.x, Fixed and Round Robin Path Selection Policies are both tested and supported by DataCore and both
are also listed on VMware's own Hardware Compatibility List.
November 2015
Updated
SANsymphony-V 8.x and all versions of SANsymphony 9.x before PSP4 Update 4 are now ‘End of Life’. Please see:
End of Life Notifications https://datacore.custhelp.com/app/answers/detail/a_id/1329
October 2015
Updated
Known Issues – VMware ESXi 5.x and 6.x
DataCore have been informed that there is now a ‘hotfix’ from VMware for the previously documented known issue
“Running Microsoft Cluster Services in a Virtual Machine on Virtual Disks that have more than one Front End
mapping to each DataCore Server may cause unexpected loss of access” (VMware’s own SR#15597438602). Contact
VMware for more information.
July 2015
Added
List of qualified VMware ESXi Versions - Notes on qualification
This section has been updated and new information added regarding the definitions of all 'qualified', 'unqualified'
and 'not supported…' labels. A new section on VMware ESXi versions that are no longer in development has also
been added at the end of this section.
Known Issues
Moved some of the information from the Host Configuration section, where problems can arise, into the 'Known
Issues' section. iSCSI Port Binding is no longer considered supported: even if it is configured to use different
subnets (as previously recommended), the sharing of IQNs by different iSCSI Initiators on the ESXi Hosts cannot be
avoided, and this can lead to situations where different IP Addresses with the same IQN try to log into the same
DataCore FE Port and are unable to. Please read the 'Known Issues' section for more detail.
May 2015
Added
Known Issues – VMware ESXi 5.x and 6.x
An issue has been identified by VMware regarding Microsoft Clusters in Virtual Machines using SANsymphony-V
Virtual Disks served to more than one path on the same ESX host, which can lead to unexpected loss of access.
Under 'heavy' load the new VMFS heartbeat process used by ESX 5.5 Update 2 and 6.x may fail with a false 'ATS
miscompare' message.
Updated
VMware ESXi 6.x - generally
Sections that apply to only VMware ESXi 6.x have been explicitly labelled to avoid ambiguity.
April 2015
Added
VMware ESXi Path Selection Policies (all)
It has been observed that different versions of ESXi may or may not 'auto configure' the correct SATP claim rule for
Round Robin or Fixed Path Selection Policies when presented with a Virtual Disk from SANsymphony-V. Therefore
more explicit instructions on how to create a custom rule have been added.
Note: Existing SANsymphony-V installations probably do not need to worry about this new information as it does
not conflict with what was stated previously; but DataCore recommend that you review the section just to make
sure that your Virtual Disks are correctly configured.
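A custom SATP claim rule of the kind described can be sketched as below. The vendor string 'DataCore', the model string 'Virtual Disk' and the choice of VMW_PSP_RR are assumptions for illustration; take the exact strings and the PSP you want from the Path Selection Policies section of this document:

```shell
# Add a custom claim rule so that SANsymphony-V Virtual Disks are claimed
# by VMW_SATP_ALUA with Round Robin as the default Path Selection Policy.
# Vendor/model strings and the chosen PSP are assumptions to confirm
# against the instructions in the Path Selection Policies section.
esxcli storage nmp satp rule add -s VMW_SATP_ALUA -P VMW_PSP_RR \
    -V DataCore -M "Virtual Disk" -c tpgs_on

# Verify the rule was created
esxcli storage nmp satp rule list -s VMW_SATP_ALUA
```

As a sketch only: run on the ESXi host, then rescan (or reboot) so newly presented Virtual Disks are claimed by the rule; existing devices may need to be unclaimed and reclaimed first.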
Updated
List of qualified VMware ESXi Versions
Added VMware ESXi 6.x
Updated
Appendix D - Moving from Most Recently Used to Round Robin or Fixed Path Selection Policies
Added more information about how to reduce the likelihood for downtime (by using vMotion).
November 2014
Added
Known Issues
Most of the information was moved from the ‘Known Issues: Third Party Hardware and Software’ document:
https://datacore.custhelp.com/app/answers/detail/a_id/1277
Updated
List of qualified VMware ESXi versions
‘Not Supported’ has now been changed to mean explicitly ‘Not Supported for Mirrored or Dual Virtual Disks’. Single
Virtual Disks are now always considered supported.
July 2014
Updated
VMware ESXi Path Selection Policies – all types
The command to verify that a given SATP type had been set was incorrect for the later versions of VMware ESXi. It
was listed as:
esxcli nmp satp listrules -s [SATP_Type]
and should have been listed as:
esxcli storage nmp satp rule list -s [SATP_Type]
June 2014
Updated
List of qualified VMware ESXi Versions
Updated to include SANsymphony-V 10.x
May 2014
This document combines all of DataCore’s VMware information from older Technical Bulletins into a single
document including:
Added
Host Settings: VMware ESXi All Versions:
Notes on VMware iSCSI Port Binding
‘Fixed’ is supported (this was inconsistently documented across the different Technical Bulletins) but only with the
Preferred Server setting set to ‘All’.
‘Most Recently Used’ must only be used without the ALUA option set on the Host. However, no versions of VMware
ESXi, without the ALUA option set, have been qualified with SANsymphony-V, so this Path Selection Policy is
considered ‘unqualified’.
Appendix A: This section gives more detail on the Preferred Server and Preferred Path settings with regard to how it
may affect a Host.
Appendix B: This section incorporates information regarding “Reclaiming Space in Disk Pools” (from Technical
Bulletin 16) that is specific to VMware Hosts.
Appendix C: This section adds additional information regarding “VMware’s vStorage APIs for Array Integration (VAAI)
with SANsymphony-V”.
Appendix D: This section adds more comprehensive steps for “Moving from Most Recently Used to Fixed or Round
Robin Path Selection Policy”.
Updated
DataCore Server Settings: VMware ESXi 4.0.x Hosts: Regarding Virtual Disk Names.
Host Settings: SCSI Reservation locking between VMware ESXi Hosts.
VMware ESXi Path Selection Policies: Previously the Preferred Server setting of 'All' was explicitly stated not to be
used within the SANsymphony-V Management Console. However, 'Fixed' requires that the Host's Preferred Server
setting is set to 'All'. 'Round Robin' may use the 'All' setting, although caution is advised and more explanation is
provided in Appendix A as to why it may not be advisable.
An overall improvement of the explanations to most of the required Host Settings and DataCore Server Settings.
January 2014
Updated
The note on how to move a DataCore Disk from 'Most Recently Used' (with the ALUA option not checked) to
'Fixed'/Round Robin (with the ALUA option checked), with regard to SANsymphony-V 9.0 PSP3 and later versions.
December 2013
Added
vSphere ESXi 5.5 is qualified and no additional settings (from all previous 5.x versions) are needed. The SCSI UNMAP
primitive is supported from SANsymphony-V 9.0 PSP4.
Updated
DataCore Server configuration settings section ('Virtual Disks mapped to more than one Host may need to use the
same LUN 'number' …') for SANsymphony-V. Added a 'warning' note at the start of each Path Selection Policy (PSP),
cautioning the user that a VM's Operating System configuration may not be supported by VMware for a particular
PSP (i.e. at the time of publication, VMware state that MSCS VMs are not supported with the Round Robin PSP).
April 2013
Removed
All references to SANmelody as this product is now ‘End of Life’ as of December 31, 2012
March 2013
Added
Use VMFS5 for vSphere Metro Storage Clusters (vMSC).
February 2013
Updated
The ‘General notes on path selection policies’. To allow for different behavior with the VMware vCenter Integration
function of SANsymphony-V.
October 2012
Removed
All but one of the Advanced Settings; all other settings are no longer needed and can be ignored (there is no
requirement to reset or change the existing values for these other settings and they can be left as they are).
July 2012
Added
Support for SANsymphony-V 9.x; no new technical information. Added extra steps to set the default path selection
policy to 'Fixed' instead of 'MRU' under the 'Fixed/Round Robin path selection policy' section. Added note under the
'General' section that:
i. VAAI is now supported - with SANsymphony-V 9.x and ESXi 5.x.
ii. Strengthened warning that MRU is not supported with ALUA
June 2012
Added
Two new settings to be applied under the 'General' section for the Hosts (Disk.UseLunReset and Disk.UseDeviceReset).
May 2012
Updated
The DataCore Server and Host minimum requirements.
Removed
All references to 'End of Life' versions that are no longer supported as of December 31, 2011. Updated notes at the
start of 'General notes for Path Selection Policies'. Updated copyright. Added a note to 'General notes on path
selection policies for ESXi 5.x' on selecting the preferred path, for VMW_PSP_FIXED, of a Virtual Disk with multiple
connections to the same DataCore Server.
December 2011
Initial publication of Technical Bulletin.
June 2013
Added
A 'warning' note at the start of each Path Selection Policy (PSP), cautioning the user that a VM's Operating System
configuration may not be supported by VMware for a particular PSP (i.e. at the time of publication, VMware state
that MSCS VMs are not supported with the Round Robin PSP).
April 2013
Removed
All references to SANmelody as this product is now 'End of Life' as of December 31, 2012. Updated the DataCore
Server Configuration Settings and added Preferred Server notes.
July 2012
Added
Support for SANsymphony-V 9.x. No new settings required. Added notes under ‘General’ section that:
i. VAAI is not supported with SANsymphony-V and ESXi 4.1.
ii. Strengthened warning that MRU is not supported with ALUA
June 2012
Added
Two new settings to be applied under the 'General' section for the Hosts (Disk.UseLunReset and Disk.UseDeviceReset).
May 2012
Updated
The DataCore Server and Host minimum requirements. Removed all references to ‘End of Life’ SANsymphony and
SANmelody versions that are no longer supported as of December 31 2011. Added notes at the start of ‘General’ notes
for Path Selection Policies. Updated copyright. Updated ‘Fixed AP and Round Robin Path Selection Policy’ with
regard to ‘preferred paths’. Existing users should re-check their configurations and make any appropriate changes
as necessary.
November 2011
Updated
October 2011
Removed
All references to ‘End of Life’ SANsymphony and SANmelody versions that are no longer supported as of July 31 2011.
Moved known issues out of this Technical Bulletin and into the ‘Known Issues: Third Party Software/Hardware with
DataCore Servers’ document. Added MRU path policy. Added important note on how to verify path selection policy
in each case. For SANsymphony-V the first 12 characters of the Virtual Disk name no longer needs to be unique.
February 2011
Added
Support for SANsymphony-V 8.x.
September 2010
Initial publication of Technical Bulletin.
June 2013
Added
A 'warning' note at the start of each Path Selection Policy (PSP), cautioning the user that a VM's Operating System
configuration may not be supported by VMware for a particular PSP (i.e. at the time of publication, VMware state
that MSCS VMs are not supported with the Round Robin PSP).
April 2013
Removed
All references to SANmelody as this product is now ‘End of Life’ as of December 31, 2012
July 2012
Added
Support for SANsymphony-V 9.x. No new settings required. Corrected option for SCSI.CRTimeoutDuringBoot and
added back SCSI.ConflictRetries in ESX(i) Host configuration settings - General.
June 2012
Added
Two new settings to be applied under the 'General' section for the Hosts (Disk.UseLunReset and Disk.UseDeviceReset).
November 2011
Updated
URI to VMware SAN Configuration guides changed.
October 2011
Removed
All references to 'End of Life' versions that are no longer supported as of July 31 2011. Moved all issues not specific to
configuring Hosts or DataCore Servers out of this Technical Bulletin and into the ‘Known Issues: Third Party
Software/Hardware with DataCore Servers’ document. Added important note on how to verify path selection policy
in each case. Changed requirement for Most Recently Used managed path policy – do not use the ‘ALUA’ option.
March 2011
Added
Support for SANsymphony-V 8.x
June 2010
Added
Support for 'Round-Robin' path selection policy with SANsymphony 7.0 PSP 3 Update 4 and SANmelody 3.0 PSP 3
update 4.
December 2009
Added
Support for 'Fixed Path' path selection policy with SANsymphony 7.0 PSP 3 and SANmelody 3.0 PSP 3. Previously
only MRU was supported.
October 2009
Initial publication of Technical Bulletin.
DataCore, the DataCore logo and SANsymphony are trademarks of DataCore Software Corporation. Other DataCore
product or service names or logos referenced herein are trademarks of DataCore Software Corporation. All other
products, services and company names mentioned herein may be trademarks of their respective owners.
ALTHOUGH THE MATERIAL PRESENTED IN THIS DOCUMENT IS BELIEVED TO BE ACCURATE, IT IS PROVIDED “AS IS”
AND USERS MUST TAKE ALL RESPONSIBILITY FOR THE USE OR APPLICATION OF THE PRODUCTS DESCRIBED AND
THE INFORMATION CONTAINED IN THIS DOCUMENT. NEITHER DATACORE NOR ITS SUPPLIERS MAKE ANY
EXPRESS OR IMPLIED REPRESENTATION, WARRANTY OR ENDORSEMENT REGARDING, AND SHALL HAVE NO
LIABILITY FOR, THE USE OR APPLICATION OF ANY DATACORE OR THIRD PARTY PRODUCTS OR THE OTHER
INFORMATION REFERRED TO IN THIS DOCUMENT. ALL SUCH WARRANTIES (INCLUDING ANY IMPLIED
WARRANTIES OF MERCHANTABILITY, NON-INFRINGEMENT, FITNESS FOR A PARTICULAR PURPOSE AND AGAINST
HIDDEN DEFECTS) AND LIABILITY ARE HEREBY DISCLAIMED TO THE FULLEST EXTENT PERMITTED BY LAW.
No part of this document may be copied, reproduced, translated or reduced to any electronic medium or machine-
readable form without the prior written consent of DataCore Software Corporation