2. Introduction .................................................................................................. 6
3. Solution Overview ........................................................................................ 7
7. Results ....................................................................................................... 44
9. Conclusion ................................................................................................. 52
Extensive testing has been performed to simulate the real-world workloads and conditions of an MSSQL environment on Nutanix. The sizing data and recommendations made in this document are based upon multiple testing iterations and thorough technical validation. The solution and testing data were gathered with MSSQL deployed on VMware vSphere, both running on the Nutanix Virtual Computing Platform. Testing validation was done using SQLIO, SQLIOSim, and HammerDB.
The Nutanix platform offers the ability to run both MSSQL and general VM workloads simultaneously on the same platform. Density for MSSQL deployments is primarily driven by the databases' CPU and storage requirements. Test validation has shown that it is preferable to increase the number of MSSQL VMs on the Nutanix platform, rather than scaling the number of SQL instances per VM, to take full advantage of the platform's performance and capabilities. From an I/O standpoint, the Nutanix platform was able to handle the throughput and transaction requirements of a demanding MSSQL server, given NDFS localized I/O and server-attached flash.
A single VM running SQLIO achieved over 35,000 random IOPS and over 1.2 GBps of sequential throughput. During the tests, read latencies peaked at 1 ms but were normally in the microsecond range. Write latencies were around 1 ms for 8 KB and 64 KB block sizes and peaked below 5 ms for 512 KB.
Sizing for the pods was determined after careful consideration of performance as well as accounting for additional resources for N+1 failover capability.
Audience
This best practices document is part of the Nutanix Solutions Library and is intended for those architecting, designing, managing, and/or supporting Nutanix infrastructures. Consumers of this document should already be familiar with VMware vSphere, Microsoft SQL Server (MSSQL), and Nutanix.
This document is broken down to address the key items for each role, focusing on enabling a successful design, implementation, and transition to operation.
Purpose
If you are interested in the high-level best practices continue with the ‘Best Practice Checklist’
section below.
[Figure: VM I/O passthrough — user VM I/O passes through the hypervisor via the SCSI controller to the Nutanix Controller VM, which manages the node's local HDDs and SSDs]
In addition, local storage from all nodes is virtualized into a unified pool by the Nutanix
Distributed File System (NDFS). In effect, NDFS acts like an advanced NAS that uses local
SSDs and disks from all nodes to store virtual machine data. Virtual machines running on the
cluster write data to NDFS as if they were writing to shared storage.
[Figure: NDFS scale-out — on each node, user VM I/O flows through the hypervisor and SCSI controller to the local Controller VM (CVM), whose cache and local SSD/HDD storage are pooled into NDFS across all nodes]
Figure 3 Elastic Deduplication Engine
Inspired by the Google File System, NDFS delivers a unified pool of storage from all nodes across the cluster, leveraging techniques including striping, replication, auto-tiering, error detection, failover, and automatic recovery. This pool can then be presented as shared storage resources to VMs for seamless support of features like vMotion, HA, and DRS, along with industry-leading data management features. Additional nodes can be added in a plug-and-play manner in this high-performance scale-out architecture to build a cluster that will easily grow as your needs do.
The Nutanix platform operates and scales Microsoft SQL Server (MSSQL) in conjunction with the other hosted services, providing a single scalable platform for all deployments. For existing sources and platforms, interaction with the MSSQL platform on Nutanix occurs over the network. The figure below shows a high-level view of the MSSQL on Nutanix solution:
[Figure: High-level MSSQL on Nutanix solution — users and external sources (desktops, applications/data, servers) interact with the scale-out MSSQL VMs through operational intelligence, service interaction, and data integration]
The Nutanix approach of modular scale-out enables customers to select any initial deployment
size and grow in more granular data and compute increments. This removes the hurdle of a
large up-front infrastructure purchase that a customer will need many months or years to grow
into, ensuring a faster time-to-value for the implementation.
[Figure: NDFS IO path — writes land in the OpLog on SSD and drain to the Extent Store (SSD/HDD), while the in-memory Content Cache and the NDFS Elastic Deduplication Engine serve reads and NDFS ILM manages tier placement]
Data IO Detail
The figure below describes the high-level IO path for VMs and MSSQL VMs running on Nutanix. As shown, all IO operations are handled by NDFS and occur on the local node to provide the highest possible IO performance. Writes to the MSSQL VMs are handled locally for all VMs on the same ESXi node and travel over 10GbE only for VMs and sources hosted on another node or remotely.
Figure 6 Data IO Detail
The figure below describes the detailed IO path for VMs and MSSQL VMs running on Nutanix. All write IOs, including data being loaded into the MSSQL VMs, occur on the local node's SSD tier to provide the highest possible performance. Read requests for the MSSQL VMs are served locally from the high-performance in-memory read cache (if cached) or from the SSD or HDD tier depending on placement. Each node also caches frequently accessed local data (VM data, MSSQL Server data) in the read cache. Nutanix ILM constantly monitors data and IO patterns to choose the appropriate tier placement.
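As a purely illustrative sketch (not Nutanix's actual implementation), the read-serving order described above can be expressed as follows; the function and tier names are ours:

```python
def serve_read(block, content_cache, ssd_tier, hdd_tier):
    """Illustrative read path per the description above: in-memory cache
    first, then SSD, then HDD. Names and structure are hypothetical."""
    if block in content_cache:
        return "memory"          # high-performance in-memory read cache
    if block in ssd_tier:
        return "ssd"             # hot data kept on the SSD tier by ILM
    return "hdd"                 # cold data served from the HDD tier

# Example: block "a" is cached, "b" sits on SSD, anything else on HDD
print(serve_read("a", {"a"}, {"b"}, {"c"}))  # memory
print(serve_read("b", {"a"}, {"b"}, {"c"}))  # ssd
print(serve_read("z", {"a"}, {"b"}, {"c"}))  # hdd
```

In the real platform, ILM moves data between tiers based on observed access patterns; this sketch only captures the lookup order a read would follow.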
Figure 7 Data IO Detail Expanded
Nutanix enables you to run multiple workloads all on the same scalable converged infrastructure:
o Modular incremental scale: With the Nutanix solution you can start small and scale. A single Nutanix block (four nodes) provides 20-40+ TB of storage and up to 80+ cores in a compact footprint. Given the modularity of the solution, you can scale granularly per node, giving you the ability to accurately match supply with demand and minimize upfront CapEx.
o High performance: Up to 100,000 random read IOPS and up to 3 GB/s of sequential throughput in a compact 2U cluster. ILM keeps indexes and heavily accessed data in the high-performance SSD and cache tiers.
o Integrated: The Nutanix platform provides full support for VAAI, allowing you to leverage the latest advancements from VMware and take your solution to the next level.
o Elastic Deduplication: The Nutanix Elastic Deduplication Engine provides granular deduplication of data to increase cache efficiency. The engine utilizes the unique fingerprints of data and brings only one copy up into the Nutanix Content Cache. This allows for the highest possible cache utilization and higher performance for VMs accessing common data, eliminating the issues normally seen with full clones or P2V migrations.
o Data efficiency: The Nutanix solution is truly VM-centric for all compression and
deduplication policies. Unlike traditional solutions that perform these tasks mainly at the
LUN level, the Nutanix solution provides all of these capabilities at the VM and file level,
greatly increasing efficiency and simplicity. By allowing for both inline and post-process
compression capabilities and cache and on-disk deduplication, the Nutanix solution
breaks the bounds set by traditional solutions.
o Effective Information Lifecycle Management: Nutanix incorporates heat-optimized
tiering (HOT), which leverages multiple tiers of storage and optimally places data on the
tier that provides the best performance. The architecture was built to support local disks
attached to the controller VM (SSD, HDD) as well as remote (NAS) and cloud-based
source targets. The tiering logic is fully extensible, allowing new tiers to be dynamically
added and extended. The Nutanix system continuously monitors data-access patterns to
determine whether access is random, sequential, or a mixed workload. Random I/O
workloads are maintained in an SSD tier to minimize latencies. Sequential workloads
can be automatically placed into HDD to improve endurance.
o Business continuity and data protection: Native snapshotting and replication features
provide an extensive DR and protection capability. VSS provides integration for
application consistent snapshots and a SRA for VMware SRM integration.
o Enterprise-grade cluster management: A simplified and intuitive Apple-like approach to managing large clusters, including a converged GUI that serves as a single pane of glass for servers and storage, alert notifications, and a Bonjour mechanism to auto-detect new nodes in the cluster.
o High-density architecture: Nutanix uses an advanced server architecture in which 8 Intel CPUs (up to 80+ cores) and up to 1 TB of memory are integrated into a single 2U appliance. Coupled with data archiving and compression, Nutanix can reduce hardware footprints by up to 4x.
Core Components
o MSSQL
▫ Performance and Scalability
o Utilize multiple drives for TempDB Log/Data and Database Log/Data
Start with a minimum of 2 for small environments or 4 for larger
environments
Look for PAGEIOLATCH_XX contention and scale number of drives
as necessary
o Utilize a 64KB NTFS allocation unit size for MSSQL drives
o Enable locked pages in memory for the MSSQL Server service account (NOTE: if this setting is used the VM's memory must be locked; only applies with memory > 8 GB)
o TempDB Data Files
Set TempDB size between 1% and 10% of the instance's database sizes
If number of cores < 8
# of cores = # of data files
If number of cores >= 8
Use 8 data files to begin with
Look for contention for in-memory allocation (PAGELATCH_XX) and scale by 4 files at a time until contention is eliminated
o Database Data files
Size appropriately and enable AUTOGROW in line with expected database growth
Do not AUTOSHRINK data and log files
Supporting Components
o Network
▫ Utilize and optimize QoS for NDFS and database traffic
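The TempDB data-file rule in the checklist above can be summarized in a small sketch; the function name and the `contention_rounds` parameter are ours, introduced only to model the "scale 4 files at a time" step:

```python
def tempdb_data_files(cores, contention_rounds=0):
    """Initial TempDB data-file count per the checklist above:
    one data file per core when cores < 8; otherwise begin with 8
    files and add 4 per round of observed PAGELATCH_XX contention
    until contention is eliminated."""
    if cores < 8:
        return cores
    return 8 + 4 * contention_rounds

print(tempdb_data_files(4))      # 4 files for a 4-core instance
print(tempdb_data_files(16))     # start with 8 files
print(tempdb_data_files(16, 2))  # 16 after two contention-driven increases
```

The contention check itself (watching PAGELATCH_XX waits) still happens in SQL Server; this sketch only captures the counting rule.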
Scenario Definition
OLTP – Transactional: These workloads are transactional in nature and can have a good deal of updates and inserts depending on the workload. Depending on size and workload, write I/O and latency are extremely important for the highest possible application performance. The quantity of transactions processed per second is a key metric for OLTP databases.
OLAP – Analytics & Reporting: These workloads are analytical in nature and rely a great deal on batched ETL or data loading and batched reporting. These workloads are primarily sequential in nature and can require reading great quantities of data. Sequential write performance is critical for data ingest, and quantity can be determined by load frequency and quantity. These workloads are traditionally utilized for data warehousing and reporting.
Below are some initial recommendations for MSSQL Server VM sizing based upon assumed
workload characterizations:
Note: These are recommendations for sizing and should be modified after a current state
analysis.
Table 3: VM Configuration
Parameter Configuration
Network adapter: VMXNET3
Storage adapter: Minimum of 2 x PVSCSI (OS + DB), 4 for larger databases
VMware tools: Latest installed
Memory: Locked (preferred)
VM logging: Disabled
Latency sensitivity: High (vSphere 5.5 only)
Advanced configuration parameters: Name: numa.vcpu.min; Value: for fewer than 8 vCPUs, set to the number of vCPUs (e.g., 4)
OS Optimization
Configure your Windows image per Microsoft Windows Server best practices, with the following additional recommendations:
o Set display to "Adjust for best performance"
o Enable locked pages in memory
o Enable instant file initialization for the MSSQL Server service account
o Try to minimize utilization of the Windows page file; if necessary, put the Windows page file on a dedicated disk
Configure applicable startup options for MSSQL Server through the MSSQL Server
Configuration Manager. Below are some commonly utilized trace flags:
As mentioned, the MSSQL VMs will co-exist with other VMs running on the Nutanix platform. DRS anti-affinity rules are utilized to split the MSSQL VMs amongst the ESXi hosts in the cluster. This keeps a 1:1 ratio between MSSQL VMs and ESXi nodes/Nutanix Controllers to ensure the highest possible performance.
DRS anti-affinity rules for SQL Server VMs
Figure 8 MSSQL + VM Node Placement
VM Component Mapping
The figure below shows the mapping for the MSSQL VM's storage. The quantity of VMDKs will be driven by the estimated database size and I/O requirements. As mentioned above, any PAGEIOLATCH_XX waits would signify a need for more disks. A minimum of six drives for MSSQL VMs is recommended; however, this will vary. More information on disk quantity and sizing can be found in the MSSQL Component Sizing section below.
[Figure: SQL VM storage mapping — one VMDK for OS + SQL + pagefile, plus dedicated nGB VMDKs for TempDB data, TempDB log, DB data, and DB log ([DATA|LOG] 1..N), all on an NFS datastore]
The following section covers the storage sizing and considerations for running MSSQL on Nutanix. NOTE: It is always a good practice to add a buffer for contingency and growth.
Step 1 - Calculate the estimated required storage:
Required Storage = (Number of Databases x Average Database Size) x (1 + Average growth rate per year)
For example, if there are 10 databases with an average size of 500 GB and a 20% growth rate per year:
Required Storage = (10 x 500 GB) x 1.2 = 6 TB
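Step 1 can be sketched as follows; the variable names are ours, and we assume the yearly growth is applied as a (1 + rate) factor:

```python
databases = 10
avg_db_size_gb = 500
annual_growth_rate = 0.20   # 20% growth per year

# Required Storage = (Number of Databases x Average Database Size) x (1 + growth rate)
required_storage_gb = databases * avg_db_size_gb * (1 + annual_growth_rate)
print(required_storage_gb)  # 6000.0 GB, i.e. ~6 TB
```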
Step 2 - Calculate the total required number of Nutanix nodes by storage capacity:
Required Nodes = (Required Storage x Replication Factor) / Storage per Node
For example, if there is 20 TB of required storage, the replication factor is 2 (default), and there is 4 TB of storage per node (varies per model):
Required Nodes = (20 TB x 2) / 4 TB = 10 nodes
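Step 2 can be sketched the same way (variable names are ours); the replicated data must fit within the per-node capacity, and partial nodes round up:

```python
import math

required_storage_tb = 20
replication_factor = 2      # Nutanix default RF2
storage_per_node_tb = 4     # varies per model

# Raw capacity must cover the storage after replication; round up to whole nodes
required_nodes = math.ceil(required_storage_tb * replication_factor / storage_per_node_tb)
print(required_nodes)  # 10
```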
With Nutanix you have the ability to start with a certain capacity and incrementally scale as needed. If all nodes are deployed initially, this step can be skipped.
For example, if 10 Nutanix nodes are required, 50% of the nodes are initially deployed, and there is 4 TB of storage per node (varies per model):
This means that after 40 days the required storage capacity would exceed the initially deployed capacity, and Nutanix nodes must be added to meet demand, up to the full 100% of the required nodes based upon storage capacity.
The Nutanix Virtual Computing Platform provides an ideal combination of high-performance compute with localized storage to meet any demand. True to this capability, this document contains zero reconfiguration of, or customization to, the Nutanix product to optimize for this use case.
The figure below shows a high-level example of the relationship between a Nutanix block, node, storage pool, and container.
Figure 10 Nutanix Component Architecture
The table below shows the Nutanix storage pool and container configuration.
Designed for true linear scaling, a Leaf-Spine network architecture is leveraged. A Leaf-Spine architecture consists of two network tiers: an L2 Leaf and an L3 Spine based on 40GbE and non-blocking switches. This architecture maintains consistent performance without any throughput reduction due to a static maximum of three hops from any node in the network.
The figure below shows a design of a scale-out Leaf-Spine network architecture that provides 20Gb active throughput from each node to its L2 Leaf and scalable 80Gb active throughput from each Leaf to Spine switch, providing scale from one Nutanix block to thousands without any impact to available bandwidth.
A Nutanix NX-3450 was utilized as the target environment and provided all MSSQL VM hosting.
The Nutanix block was connected to an Arista 7050S top-of-rack switch via 10GbE.
[Figure: Test environment — test clients driving sessions against the test targets]
Assumptions
o SQLIO size: 2x memory and 2x cache size
o SQLIOSim size: 2x memory and 2x cache size
o HammerDB size: 32 warehouses
o Disk configurations: 1, 2, 4, 8
Hardware
o Storage/Compute: 1 Nutanix NX-3450
o Network: Arista 7050Q (L3 Spine) / 7050S (L2 Leaf) Series Switches
Nutanix
o Version: 3.5
SQLIO
o Version: 1.5
SQLIOSim
o Version: 9.00.1399.05
HammerDB
o Version: 2.14
#########################################################
##
## Script: AddDisks
## Language: PowerCLI
##
##########################################################
# Inputs
$vmPrefix = "TM4-MSSQL-"
$numDisks = 8
$diskCapacity = 100

# Find the target VMs by name prefix (assumes an existing vCenter connection)
$vms = Get-VM "$vmPrefix*"

$vms | %{
	$vm = $_
	# Add disks
	1..$numDisks | %{
		New-HardDisk -VM $vm -CapacityGB $diskCapacity
	}
}
#########################################################
##
##
##########################################################
select disk 1
online disk
assign letter=F
active
select disk 2
online disk
assign letter=G
active
select disk 3
online disk
assign letter=H
active
select disk 4
online disk
assign letter=I
active
select disk 5
online disk
assign letter=J
active
select disk 6
online disk
assign letter=K
active
select disk 7
online disk
assign letter=L
format fs=ntfs label="64K_3" unit=64K quick
active
select disk 8
online disk
assign letter=M
active
#########################################################
##
## Script: NTNX-Run-SQLIO
## Author: Steven Poitras
## Description: Automate SQLIO testing using Powershell
## Language: Powershell
##
#########################################################
function NTNX-Run-SQLIO {
<#
.NAME
NTNX-Run-SQLIO
.SYNOPSIS
Runs SQLIO with the supplied parameters and parses the results
.DESCRIPTION
Wraps sqlio.exe, running the requested number of iterations and
recording the parsed IOPS, throughput, and latency values to a CSV
.NOTES
Authors: VMwareDude
.LINK
www.nutanix.com
.PARAMETER Iterations
Number of test iterations to run
.PARAMETER Duration
Duration of each test run in seconds
.PARAMETER RorW
Read (R) or write (W) workload
.PARAMETER RandOrSeq
Random or sequential I/O pattern
.PARAMETER BlockSize
I/O block size in KB (8, 64, or 512)
.PARAMETER OutstandingOps
Number of outstanding I/O operations per thread
.PARAMETER TargetFile
This parameter specifies the target file or param file to use (eg.
test.dat)
.PARAMETER TargetType
Whether the target is a param file or a data file
.PARAMETER OutputCSV
Path of the CSV file to write results to
.EXAMPLE
#>
Param(
	[Parameter(Mandatory=$True,ValueFromPipeline=$True)]
	[int]$Iterations,
	[Parameter(Mandatory=$True,ValueFromPipeline=$True)]
	[int]$Duration,
	[Parameter(Mandatory=$True,ValueFromPipeline=$True)]
	[ValidateSet("R","W")]
	[string]$RorW,
	[ValidateSet("random","sequential")]
	[string]$RandOrSeq,
	[Parameter(Mandatory=$True,ValueFromPipeline=$True)]
	[ValidateSet("8","64","512")]
	[string]$BlockSize,
	[Parameter(Mandatory=$True,ValueFromPipeline=$True)]
	[string]$Threads,
	[Parameter(Mandatory=$True,ValueFromPipeline=$True)]
	[string]$OutstandingOps,
	[Parameter(Mandatory=$True,ValueFromPipeline=$True)]
	[string]$TargetFile,
	[Parameter(Mandatory=$True,ValueFromPipeline=$True)]
	[ValidateSet("param","file")]
	[string]$TargetType,
	[Parameter(Mandatory=$False,ValueFromPipeline=$True)]
	[string]$TargetNum,
	[Parameter(Mandatory=$True,ValueFromPipeline=$True)]
	[string]$OutputCSV
)
BEGIN {
	# Build the SQLIO argument strings
	$fSecs = "-s$Duration"
	$fRorW = "-k$RorW"
	$fRandOrSeq = "-f$RandOrSeq"
	$fBlockSize = "-b$BlockSize"
	$fThreads = "-t$Threads"
	$fOps = "-o$OutstandingOps"
	$resultsCSV = $OutputCSV
	$results = @()
} PROCESS {
	1..$Iterations | %{
		# Run SQLIO and capture its console output
		$r = & sqlio.exe $fSecs $fRorW $fRandOrSeq $fBlockSize $fThreads $fOps $TargetFile | Out-String
		# Parse IOPS, throughput, and latency; the line offsets in the
		# SQLIO output vary with the number of target disks
		if ($TargetNum -eq 2) {
			$iops = $r.Split("`n")[14].Split(":")[1].Trim()
			$mbps = $r.Split("`n")[15].Split(":")[1].Trim()
			$lat = $r.Split("`n")[18].Split(":")[1].Trim()
		} elseif ($TargetNum -eq 4) {
			$iops = $r.Split("`n")[18].Split(":")[1].Trim()
			$mbps = $r.Split("`n")[19].Split(":")[1].Trim()
			$lat = $r.Split("`n")[22].Split(":")[1].Trim()
		} elseif ($TargetNum -eq 6) {
			$iops = $r.Split("`n")[22].Split(":")[1].Trim()
			$mbps = $r.Split("`n")[23].Split(":")[1].Trim()
			$lat = $r.Split("`n")[26].Split(":")[1].Trim()
		} else {
			$iops = $r.Split("`n")[10].Split(":")[1].Trim()
			$mbps = $r.Split("`n")[11].Split(":")[1].Trim()
			$lat = $r.Split("`n")[14].Split(":")[1].Trim()
		}
		# Record the parsed values for this iteration
		$results += New-Object PSObject -Property @{
			File = $TargetFile
			NumDisks = $TargetNum
			ReadWrite = $RorW
			RanSeq = $RandOrSeq
			BlockSize = $BlockSize
			NumThreads = $Threads
			OutstandingOps = $OutstandingOps
			IOPS = $iops
			Throughput = $mbps
			Latency = $lat
		}
	}
} END {
	# Write the accumulated results out to CSV
	$results | Export-Csv $resultsCSV -NoTypeInformation
}
}
#########################################################
##
## Language: Powershell
##
#########################################################
# Global test matrix inputs
$gOutputCSV = "X:\TESTING\tm4sql201-results2.csv"
$gIterations = 5
$gDuration = 30
$gVarIterations = 1,8,16,32
$gFiles = ("J:\Testfile.Dat","file",1),("C:\SQLIO\param2d.txt","param",2),("C:\SQLIO\param4d.txt","param",4)
$gBlockSizes = "8","64","512"
$gRorW = "R","W"
$gRandOrSeq = "random","sequential"

# For each target file, block size, read/write mode, I/O pattern, and
# outstanding-op count, run the SQLIO wrapper
$gFiles | %{
	$tFile = $_[0]
	$tFileType = $_[1]
	$tFileNum = $_[2]
	$gBlockSizes | %{
		$tBlockSize = $_
		$gRorW | %{
			$tRorW = $_
			# For random and sequential
			$gRandOrSeq | %{
				$tRandOrSeq = $_
				$gVarIterations | %{
					$tVarInt = $_
					# Invocation reconstructed; the thread count of 8 is an assumption
					NTNX-Run-SQLIO -Iterations $gIterations -Duration $gDuration `
						-RorW $tRorW -RandOrSeq $tRandOrSeq -BlockSize $tBlockSize `
						-Threads 8 -OutstandingOps $tVarInt -TargetFile $tFile `
						-TargetType $tFileType -TargetNum $tFileNum -OutputCSV $gOutputCSV
				}
			}
		}
	}
}
SQLIO Benchmark
SQLIO is a performance benchmarking tool for benchmarking and comparing storage I/O subsystem performance. NOTE: Make-a-file.exe should be used to generate the files used by SQLIO with random real data to simulate a real workload; by default zeros are used.
For more information about SQLIO visit the Microsoft SQLIO Disk Subsystem Benchmark Tool page: http://www.microsoft.com/en-us/download/details.aspx?id=20163
SQLIOSim Benchmark
For more information about SQLIOSim visit the Microsoft SQLIOSim Utility support site:
http://support.microsoft.com/kb/231619
HammerDB Benchmark
For more information about HammerDB visit the HammerDB SourceForge site:
http://hammerora.sourceforge.net/
SQLIO
Based on user experience and industry standards, it is recommended that these values be kept below the following thresholds:
SQLIOSim
Based on user experience and industry standards, it is recommended that these values be kept below the following thresholds:
Performance Graphs
The performance graphs show a plot of the data as well as a trend line. Below various aspects
of the graphs are highlighted:
The single VM SQLIO results highlight the performance for a single VM with 8 vCPUs and 8 GB of memory running on the Nutanix platform. The size of the test data file generated using Make-a-file.exe was 16 GB, double the Windows memory and Nutanix cache sizes.
Over 20,000 SQLIO test runs were completed to stress the environment and eliminate any variance or performance outliers. The platform showed ample IOPS and throughput across a varying set of block sizes and workloads.
NOTE: The results displayed below are for a single VM running on a single Nutanix node. By default a Nutanix deployment will have a minimum of 3 nodes, so the results below do not represent the full platform's performance.
The figure below shows the SQLIO IOPS and throughput performance based upon block size. Results showed the single SQLIO VM was able to deliver ~35,000 random IOPS with an 8 KB block size and ~16,000 IOPS with a 64 KB block size. Sequential I/O peaked with a 512 KB block size at ~1,200 MBps (1.2 GBps) and was ~950 MBps (0.95 GBps) with a 64 KB block size.
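The IOPS and throughput figures are related through block size (throughput in MBps = IOPS x block size in KB / 1024), which makes a quick sanity check of the numbers above possible; the function name is ours:

```python
def throughput_mbps(iops, block_size_kb):
    """MBps implied by an IOPS figure at a given block size."""
    return iops * block_size_kb / 1024

# Check against the reported single-VM results
print(round(throughput_mbps(35968, 8)))   # ~281 MBps at 8 KB random
print(round(throughput_mbps(2433, 512)))  # ~1216 MBps (~1.2 GBps) at 512 KB
```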
[Figure: SQLIO - Single VM - Aggregate I/O by Block Size (IOPS and throughput (MBps) for 8 KB, 64 KB, and 512 KB block sizes)]
[Figure: SQLIO - Single VM - Latency by Block Size (read and write latency in ms for 8 KB, 64 KB, and 512 KB block sizes)]
The figure below shows the SQLIO operation latency for read and write workloads based upon the number of outstanding ops. Results showed read latency remained consistently under 1 ms, and in the microseconds under 16 outstanding ops. Write latency varied between <1 ms and 5 ms during the tests.
[Figure: SQLIO - Single VM - Latency by Outstanding Ops (read and write latency in ms for 1, 8, and 16 outstanding ops)]
This highlights the ability to incrementally scale out the number of SQL servers and maintain performance as the number of databases/instances scales, as well as the ability to eliminate any noisy-neighbor scenarios.
Figure 16 SQLIO - Single VM - Controller VM CPU Usage
The SQLIO random tests were run to simulate a random I/O workload on NDFS. Random I/O workloads are important given the random nature of database data file I/Os. Testing showed ample IOPS for a single VM, which will scale linearly with the number of SQL servers or Nutanix nodes.
Block Size (KB) | Read I/O (IOPS) | Write I/O (IOPS) | Read Throughput (MBps) | Write Throughput (MBps)
Random
8 | 35,968 | 16,437 | 281 | 281
64 | 15,496 | 6,236 | 919 | 919
512 | 2,433 | 1,145 | 1,216 | 1,216
The figure below shows the SQLIO random I/O test performance for read and write workloads
based upon block size. IOPS peaked using a smaller (8 KB) block size.
Figure 17 SQLIO - Single VM - Random IOPS by Block Size
[Figure: SQLIO - Single VM - Read IOPS by Outstanding Ops (by block size for 1, 8, and 16 outstanding ops)]
The figure below shows the SQLIO random I/O test performance for read and write workloads based upon the number of disks. Results showed IOPS increasing with disk quantity and indicated that a minimum of 4 disks is necessary for ideal performance.
[Figure: SQLIO - Single VM - IOPS by Disk Quantity (read and write IOPS for 1, 2, 4, and 6 disks)]
The SQLIO sequential tests were run to simulate a sequential I/O workload on NDFS. Sequential I/O workloads are important given the sequential nature of database log file and TempDB I/Os. Testing showed ample throughput numbers for a single VM, which will scale linearly with the number of SQL servers or Nutanix nodes.
Table 11: SQLIO Sequential I/O Results – Single VM
Block Size (KB) | Read I/O (IOPS) | Write I/O (IOPS) | Read Throughput (MBps) | Write Throughput (MBps)
Sequential
8 | 35,905 | 15,341 | 280 | 280
64 | 15,496 | 11,120 | 968 | 968
512 | 2,391 | 1,741 | 1,741 | 1,195
The figure below shows the SQLIO sequential I/O test performance for read and write workloads based upon block size. Throughput peaked using a larger (512 KB) block size; however, 64 KB block sizes also showed ample throughput.
Figure 20 SQLIO - Single VM - Sequential Throughput by Block Size
[Figure: SQLIO - Single VM - Read Throughput by Outstanding Ops (MBps by block size for 1, 8, and 16 outstanding ops)]
The figure below shows the SQLIO sequential I/O test performance for read and write workloads based upon the number of disks. Results showed throughput increasing with disk quantity, primarily for write I/O, and indicated that a minimum of 4 disks is necessary for ideal performance.
[Figure: SQLIO - Single VM - Sequential Throughput by Disk Quantity (read and write MBps for 1, 2, 4, and 6 disks)]
A single VM running SQLIO achieved over 35,000 random IOPS and over 1.2 GBps of sequential throughput. During the tests, read latencies peaked at 1 ms but were normally in the microsecond range. Write latencies were around 1 ms for 8 KB and 64 KB block sizes and peaked below 5 ms for 512 KB.
Sizing for the pods was determined after careful consideration of performance as well as accounting for additional resources for N+1 failover capability.
The MSSQL on Nutanix solution provides a single high-density platform for MSSQL, VM hosting, and application delivery. This modular, pod-based approach enables these deployments to be easily scaled.
Hardware
o Storage / Compute
▫ Nutanix NX-3450
o Per node specs (4 nodes per 2U block):
CPU: 2x Intel Xeon E5-2670
Memory: 256 GB Memory
SSD: 2 x 400 GB Intel S3700
HDD: 4 x 1 TB SATA Drives
o Network
▫ Arista 7050Q - L3 Spine
▫ Arista 7050S - L2 Leaf
Software
o Nutanix
▫ Version: NOS 3.5
o Windows Server
▫ 2008 R2
o MSSQL
▫ 2008
o Infrastructure
▫ ESXi 5.1.0 patch 2
▫ vCenter 5.1.0 patch 2
VM
o Nutanix Controller
▫ CPU: 8 vCPU
▫ Memory: 48 GB
o SQLIO Server Configuration:
▫ OS: Windows Server 2008 R2
▫ 4 vCPU / 8 GB Memory
About Nutanix
Nutanix is the recognized leader in the emerging Virtual Computing Platform market. The
Nutanix solution converges compute and storage resources into a single appliance, delivering a
powerful, modular building block for virtual datacenters. It incorporates the same advanced,
distributed software architecture that powers leading IT innovators such as Google, Facebook
and Amazon – but is tailored for mainstream enterprises and government agencies. The
Nutanix solution enables easy deployment of any virtual workload, including large-scale virtual
desktop initiatives (VDI), development/test apps, big data (Hadoop) projects and more. Nutanix
customers can radically simplify and scale out their datacenter infrastructures with cost-effective
appliances that can be deployed in under 30 minutes for rapid time to value.