
Reference Architecture

Optimizing VMware vSAN™


Performance on a Dell EMC®
PowerEdge™ R7525 Server
with NVMe™ Namespaces
and KIOXIA CM6 Series SSDs

Published May 2021


KIOXIA:
Tyler Nelson
Adil Rahman
Scott Harlin

Participating Companies:
AMD, Inc.
Dell, Inc.
VMware®, Inc.
Table of Contents

Introduction
Background
The Industry Challenge
System Architecture
System Configuration
Hardware Configuration
Host Installation / VMware vCenter Setup
Network Configuration
VMkernel Adapter Configuration
NVMe Namespace Configuration
Cluster Configuration
VMware vSAN Configuration
Creating Fault Domains
Creating VM Storage Policies
Test Methodology
Test Cases
Test Results
Without NVMe Namespaces
With Two (2) NVMe Namespaces
With Five (5) NVMe Namespaces
With Eight (8) NVMe Namespaces
Summary of IOPS Performance
Summary of Throughput Performance
Test Analysis
Recommendations
Summary
Summary of Tuning Parameters and Settings

Introduction
This reference architecture (RA) presents the synergy between NVMe protocol namespaces and VMware vSAN virtualization software to demonstrate
increases in SSD input/output operations per second (IOPS) performance. It includes hardware and software tuning, and the test process, methodology,
results and analysis for a configuration that features a Dell EMC PowerEdge R7525 rack server with two-socket AMD EPYC™ CPUs and the ability to utilize
up to twenty-four (24) PCIe® 4.0 NVMe SSDs per server. The testing conducted showcases the raw computing power of a hyper-converged infrastructure
(HCI) in combination with fast, scalable local storage. Using an aggregate of twelve (12) KIOXIA CM6 Series PCIe 4.0 NVMe SSDs, the configuration was
able to achieve random read performance up to 896,555 IOPS1.

Configuration variability is also addressed to enable storage architects, system integrators, channel partners and other key stakeholders to create flexible
designs that scale up to meet data center compute and storage needs. An optimal configuration is also presented to help application administrators
and storage architects obtain the best storage performance from this setup. Additional benefits include:

Summary

A Dell EMC PowerEdge R7525 server cluster with KIOXIA CM6 Series PCIe 4.0 NVMe SSDs
demonstrates impressive IOPS and throughput performance, with low-latencies in a vSAN HCI environment

More clusters and drives increase system performance as vSAN virtualization performance
scales with the cluster size and available SSDs

NVMe protocol namespaces with vSAN virtualization enable full utilization of CM6 Series SSD capacity:

• Across both cache and capacity tiers

• Paramount in scaling local storage when duplicated in HCIs

Efficient scaling of compute and storage resources reduces data center cost without sacrificing quality

vSAN virtualization with NVMe namespaces is applicable to many different real-world workloads

Background
Businesses rely on IT specialists to assemble the compute, storage and networking resources for their respective data centers and to address critical
demands such as exponential data growth, technological advances and constant hardware upgrades. Having to upgrade hardware for each resource
pool is expensive, both in cost and in lost productivity, as the upgrades require additional testing to verify compatibility with the pre-existing
infrastructure.

These hardware challenges have pushed many IT departments to move to a software-defined, HCI approach that virtualizes compute and storage
resources using a combination of commercially-available server hardware and very fast SSDs for local-attached storage. Each hyper-converged compute/
storage node in the virtualized cluster runs a hypervisor that contributes its resources into a cluster of other hyper-converged nodes. The hypervisors
are controllable by management software to form and manage the server clusters, and to assign resource allocations depending on individual virtual
machine (VM) needs. The HCI approach enables data centers to evolve with small server clusters and/or scaled storage based on their unique business
requirements.

For this RA, KIOXIA selected VMware Inc. components that included the VMware ESXi™ hypervisor, VMware vSphere® enterprise-scale virtualization
platform, and VMware vCenter® management software, as they are helping to evolve HCIs. These platforms serve to abstract compute resources from
various hosts into a cluster of resources that can be provisioned for use by VMs or containers. When VMware vSAN enterprise-class, storage virtualization
software is combined with VMware vSphere, IT departments can manage compute and storage resources within a single platform. VMware vSAN
software integrates seamlessly with the VMware vSphere stack for easy management, and the KIOXIA CM6 Series SSDs and namespace configuration
help to minimize latency and deliver fast IOPS performance. vSAN can also utilize NVMe SSDs as local storage, enabling them to be divided into cache
and capacity tiers to increase performance.

© 2021 KIOXIA Europe GmbH. All rights reserved.


Reference Architecture | Optimizing VMware vSAN™ Performance on a Dell EMC® PowerEdge™R7525 Server with NVMe™ Namespaces and KIOXIA CM6 Series SSDs | May 2021 | Rev. 1.0

business.kioxia.com 3

NVMe SSDs are recommended for HCIs and represent the latest and fastest protocol for flash-based storage media. They are built on the high-speed
PCIe interface, which uses lanes to connect directly to the CPU and delivers significantly improved performance and lower latency versus SAS and SATA
options. The NVMe protocol can support up to 65,535 input/output (I/O) queues, each with 65,535 commands per queue, and multiple CPU cores
can access the queues in parallel. The NVMe protocol also supports namespaces, which divide a drive into sets of logical block addresses (LBAs)
accessible to host software; each namespace has an ID that the underlying storage controller uses to present it as a unique drive.

Before NVMe namespaces, an SSD needed to be dedicated as either a cache OR a capacity drive, and VMware vSAN software was only able to utilize
600 gigabytes2 (GB) of a cache drive. Cache drives therefore commonly left capacity unused, which negatively impacted the ability to scale
local storage. Using namespaces, SSD capacity can be fully utilized as both cache AND data storage.

The Industry Challenge


In an HCI, some disks are dedicated entirely to caching while others are used for data. There are also limits to the number of cache devices and disk groups that
can be used. Each cache device in VMware software requires that an entire disk be allocated to cache, with a usable cache size limited to 600GB. To obtain
optimal performance from a dedicated cache SSD, a system administrator would need to purchase one 3 DWPD3, 800GB or 1.6 terabyte2 (TB) SSD and allocate
the entire disk to cache. This process is highly inefficient: it under-utilizes the capacity of higher-end, faster-performing SSDs and makes it difficult to
scale storage capacity independently of nodes, forcing system admins to use lower-performing SSDs to meet the cache requirements.

System Architecture
The system architecture and associated SSDs were selected, and different configurations tested, to determine an optimal setup. KIOXIA CM6 Series
PCIe 4.0 NVMe SSDs with 7.68TB capacities were chosen for their PCIe 4.0 capability and drive writes per day (DWPD) rating. In order to utilize
namespaces (so that each SSD can serve as both cache and data storage), larger capacity points were selected: instead of a smaller 3 DWPD SSD for
cache, a larger 1 DWPD drive was chosen so that the SSD delivers a higher overall volume of data written per day, as follows:

7.68TB x 1 DWPD = 7.68TB written per day
vs.
800GB x 3 DWPD = 2.4TB written per day

The larger drive allows a higher overall daily data write than a smaller 3 DWPD SSD.
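The daily write budget comparison above follows directly from capacity multiplied by DWPD; a quick sketch to reproduce the arithmetic:

```shell
# Daily write budget (TB/day) = drive capacity (TB) x DWPD rating.
awk 'BEGIN { printf "%.2f\n", 7.68 * 1 }'   # 7.68 TB/day: 7.68TB CM6 at 1 DWPD
awk 'BEGIN { printf "%.2f\n", 0.80 * 3 }'   # 2.40 TB/day: 800GB drive at 3 DWPD
```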

Testing was then conducted to demonstrate the performance of the configuration running NVMe namespaces and VMware vSAN software. The system
configuration used for testing included:

System Configuration

Server Configuration: Dell EMC PowerEdge R7525


Total number of systems 3
BIOS and version PowerEdge R7525 Version 1.7.3
Operating system and version VMware ESXi 7.0.1 17325551 U1 P25
Last date of OS patching 10/05/2020
Power management policy High performance
Processor Configuration:
Number of processors 2
Vendor and model AMD EPYC 7552 48-Core Processor
Core count 48
Core frequency 2200 MHz
Stepping 0
Memory Module Configuration:
Total system memory 256 GB
Number of memory modules 16
Vendor and model Micron® MTA18ASF2G72PDZ-2G9E1TI 16 GB 2RX8 PC4-2933Y-RE2-12


Size 16 GB
Type ECC DDR4
Speed (MHz) 2933 MHz
NVMe SSD Configuration:
Number of SSDs 12 (4 per system)
Drive vendor and model KIOXIA KCM6XRUL7T68 (CM6 Series)
Capacity (GB) 7680
SSD information 2.5-inch4 (15mm) PCIe Gen 4.0, NVMe v1.4

Hardware Configuration

BIOS Processor Settings: Dell EMC PowerEdge R7525


Logical Processor Enabled
Virtualization Technology Enabled
IOMMU Support Enabled
L1 Stream HW Prefetcher Enabled
L2 Stream HW Prefetcher Enabled
MADT Core Enumeration Linear
NUMA Nodes Per Socket 1
L3 cache as NUMA Domain Disabled
Minimum SEV non-ES ASID 1
x2APIC Mode Enabled
Number of CCDs per Processor All
Number of Cores per CCD All
BIOS Memory Settings:
System Memory Testing Disabled
DRAM Refresh Delay Performance
Memory Operating Mode Optimizer Mode
Memory Interleaving Auto
Opportunistic Self-Refresh Disabled
Correctable Error Logging Enabled

There are many key hardware settings that can be tuned to obtain optimal performance from a VMware vSAN setup; sizing and selecting the right
hardware for the application are critical.

The AMD EPYC CPU family was selected because it is aimed at the data center server market for HCI applications. The EPYC 7552 processor has 48
cores; in a dual-socket configuration, this provides a total of 96 cores and 128 PCIe Gen4 lanes. This high core count, coupled with a 2.2GHz clock
speed, increases performance and complements NVMe storage by allowing a larger number of connected NVMe drives and denser configurations.
As a result, business-critical and performance-centric workloads may thrive in this vSAN environment.

The test setup also utilized a balanced memory configuration consisting of eight (8) equal memory modules per processor to maximize memory
performance. In addition, the Memory Operating Mode was set to ‘Optimizer mode,’ enabling the DRAM controllers to operate independently in
64-bit mode for optimized memory performance. The R7525 server used a 16 x 16GB DIMM configuration in the memory DIMM locations shown in
Figure 1.

Other memory configurations were tested during the test setup phase, including four (4), eight (8) and sixteen (16) DIMM configurations; however,
testing showed that the 16 x 16GB DIMM configuration produced the best performance results. In order to optimize the memory and CPU performance
of the cluster for varying workloads, it is essential to ensure that the non-uniform memory access (NUMA) of the CPU is configured properly for the
memory modules. The NUMA memory design dictates memory access times based on memory locations relative to each individual processor. The
NUMA nodes per socket (NPS) setting enables configuration of the NUMA memory domains per socket. An NPS setting of ‘1’ was selected for testing;
it interleaves eight (8) memory channels and creates two NUMA nodes for the system (one per physical socket).

Figure 1 highlights the DIMM locations (in red) used for the 16 x 16GB DIMM configuration in the R7525 server
(Source: Dell EMC PowerEdge Installation and Service Manual5)

The final part of the test setup is SSD placement; it is important that the NVMe device I/O to each processor is separated. This is achieved by installing
the NVMe drives into specific locations in the front bay of the server that map to one processor or the other. The ideal configuration for a 4-drive setup
in a dual-socket server is to connect two (2) drives to each processor. Drives are placed in the front bay of each server, with a boot drive (highlighted in
yellow) in Bay #1 and four (4) KIOXIA CM6 PCIe 4.0 NVMe SSDs for vSAN (highlighted in red) in Bays #2, #3, #23 and #24 (Figure 2).

Figure 2 represents SSD placement with a boot drive (in yellow) and four SSDs for vSAN (in red)
(Source: Dell EMC PowerEdge Installation and Service Manual6)

Host Installation / VMware vCenter Setup


Once the hardware has been selected, rack and stack7 completed, and the BIOS updated, VMware vSphere can be installed. For the RA tests
conducted, VMware ESXi 7.0.1 17325551 U1 P25 was installed locally on each of the three (3) hosts, with each host’s ESXi hypervisor installed on an
NVMe SSD placed in drive bay number 1 of the server. This installation did not provide the redundancy of a Boot Optimized Server Storage (BOSS)
card and was deployed primarily for testing purposes.

Once the ESXi hypervisor was installed on the three hosts, the first management IP address was assigned for basic management. A VMware vCenter
Server® Appliance (VCSA) version 7.0.1.00200 - Build 17327517 was installed and the hosts were connected to it. The VCSA is a preconfigured virtual
machine (VM) optimized for running VMware vCenter Server and other services on a Linux® OS. Once installed, all other host configuration and
management was performed using VMware vCenter.

Network Configuration
The standard configuration for a VMware host typically includes a minimum of four (4) Virtual Local Area Networks (VLANs), in addition to the VM
networks. Many system administrators prefer to segment their Intelligent Platform Management Interface (IPMI), VMware management, VMware
vMotion and vSAN traffic into different VLANs to keep the data traffic separated. Many configurations utilize at least two network ports, sometimes
three, with one dedicated to IPMI. For example, an integrated Dell Remote Access Controller (iDRAC), used to update and manage Dell systems, or
other out-of-band solutions usually reside on a separate 1 gigabit per second (Gb/s) interface.

The standard configuration for vSAN traffic, as recommended by VMware, Inc., requires 10Gb/s networking at a minimum. For many configurations,
at least two (2) network ports would be required, connected to different switches for redundancy. If 10Gb/s networking is utilized, then four (4) network
ports are recommended: two (2) ports carrying a trunk of vSAN, management and vMotion traffic, and the other two (2) ports used for VM traffic.
The recommended configurations can be found in VMware’s vSAN network design documentation.

If 100Gb/s networking is used, two (2) ports are recommended for redundancy, and all traffic can be handled by two (2) trunk ports. Network sizing
should depend on total VM traffic requirements, VM density, and account for vSAN, management and vMotion traffic.


Once the cabling, VLANs, trunks and switch ports have been set up, the host network needs to be configured. For RA testing, each of the three hosts
was configured with dual 100Gb/s networks connected to physical 100Gb/s-capable network switches, as depicted in the screen shot below:

A VMware Distributed vSwitch was created and configured with 9,000 maximum transmission units (MTUs), the VMware-allowed maximum, which
helps eliminate packet overhead for faster storage transactions. The vSwitch and network ports should therefore be set to a 9,000 MTU; VM and
management traffic can use a lower MTU, if needed, by configuring the port groups and kernel adapters accordingly. The Distributed vSwitch was
configured with two (2) VLANs: one dedicated to vSAN traffic, the other for handling VM data, VMware management, vMotion and provisioning traffic.

The hosts were then added to the Distributed vSwitch, and the management VMkernel port was migrated to it. The two (2) 100Gb/s NICs
were also added to the Distributed vSwitch as shown in the screen shot below:
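For reference, equivalent MTU settings can be applied from the ESXi shell on a standard vSwitch; this is a sketch only (the RA configured a Distributed vSwitch through vCenter), and the vSwitch and VMkernel interface names here are assumptions:

```shell
# Set a 9,000 MTU on a standard vSwitch (vSwitch1 is a placeholder name).
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000

# Match the MTU on the vSAN VMkernel interface (vmk1 is a placeholder).
esxcli network ip interface set --interface-name=vmk1 --mtu=9000
```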

VMkernel Adapter Configuration


Once the physical network and Distributed vSwitch are configured, the next step is to set up the VMkernel ports. Technically, only one VMkernel port is
required, but depending on the network setup, more can be used. If vMotion, vSAN and management are all on three (3) different VLANs, then three (3)
VMkernel ports would be necessary, one for each service. The VMkernel adapters are used by the ESXi hypervisor to communicate with the outside
world and the VMware infrastructure.

For RA testing, two (2) VMkernel adapters were configured. The first adapter was set up for host management, vMotion and provisioning, enabling the
host to be managed by the VCSA. It also enables live workload migration from one server to another without downtime (vMotion), as well as automated
VM provisioning, as depicted in the screen shot below:


The second VMkernel adapter was configured strictly to address VMware vSAN virtualization traffic as seen in the screen shot below:

For RA testing, vSphere Replication, vSAN witness and VM fault tolerance were not enabled on the server cluster. If these features are required in a
configuration, they will typically be segmented into another VLAN with a VMkernel port configured for that segment of traffic.

NVMe Namespace Configuration


For each KIOXIA CM6 Series PCIe 4.0 NVMe SSD deployed for RA testing, a variety of namespaces were configured through the ESXi hypervisor
shell using ‘esxcli nvme’ commands. The first namespace of each SSD was allocated 600GB of capacity for caching, while the remaining namespaces
were allocated the rest of the capacity for data storage, in 925GB chunks in the largest configuration. The ‘esxcli nvme’ commands and tables below
outline the varying namespace configurations.

To Add Namespaces:
For Caching (600GB):
esxcli nvme device namespace create -A vmhba5 -c 1258291200 -p 0 -f 0 -m 0 -s 1258291200
esxcli nvme device namespace attach -A vmhba5 -c 1 -n 1

For Data Storage (925GB):


esxcli nvme device namespace create -A vmhba5 -c 1939865600 -p 0 -f 0 -m 0 -s 1939865600
esxcli nvme device namespace attach -A vmhba5 -c 1 -n 2

To Delete Namespaces:
esxcli nvme device namespace detach -A vmhba5 -c 1 -n 1
esxcli nvme device namespace delete -A vmhba5 -n 1
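The block counts passed to ‘-c’ and ‘-s’ above are the namespace sizes expressed in 512-byte logical blocks; a small sketch showing how they are derived (assuming the 600 and 925 values are GiB and a 512-byte LBA format):

```shell
# Convert a namespace size in GiB to the 512-byte block count used by the
# -c/-s arguments of 'esxcli nvme device namespace create'.
gib_to_blocks() {
  echo $(( $1 * 1024 * 1024 * 1024 / 512 ))
}

gib_to_blocks 600   # -> 1258291200 (cache namespace)
gib_to_blocks 925   # -> 1939865600 (data namespace)
```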

Summary of NVMe Namespace Configurations:

1+1 Namespaces Configured in CM6 Series SSDs
Namespace ID  Tier Type  Size (GB)
1             Cache      600
2             Capacity   6400

1+4 Namespaces Configured in CM6 Series SSDs
Namespace ID  Tier Type  Size (GB)
1             Cache      600
2             Capacity   1580
3             Capacity   1580
4             Capacity   1580
5             Capacity   1580

1+7 Namespaces Configured in CM6 Series SSDs
Namespace ID  Tier Type  Size (GB)
1             Cache      600
2             Capacity   925
3             Capacity   925
4             Capacity   925
5             Capacity   925
6             Capacity   925
7             Capacity   925
8             Capacity   925


In the first configuration, without NVMe namespaces, each CM6 Series SSD was recognized by the host as an NVMe controller. In the configurations with
namespaces, each CM6 Series SSD was likewise recognized by the host as an NVMe controller, represented as a single target with a number of
devices and paths corresponding to the number of namespaces configured on the drive. This is depicted in the screen shot below:

When first installed in the host, each drive appears as a single disk device. Once the namespaces were created and attached on the CM6 Series SSDs, they
appeared as unique local storage drives in VMware vCenter, as seen in the screen shot below:
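The namespace layout can also be spot-checked from the ESXi shell; a hedged sketch using the same command family as above (the adapter name vmhba5 follows the earlier examples):

```shell
# List the namespaces attached behind one NVMe adapter.
esxcli nvme device namespace list -A vmhba5

# Confirm each namespace surfaces as its own local disk device.
esxcli storage core device list | grep -i nvme
```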

Cluster Configuration

The next step in the RA testing process is to create a server cluster. A server cluster was created in the VMware vSphere virtualization platform using
three (3) Dell EMC PowerEdge R7525 servers and twelve (12) KIOXIA CM6 Series PCIe 4.0 NVMe SSDs split evenly amongst the hosts. The server
cluster was created with all services disabled, and the hosts were added to the cluster while in maintenance mode. Once hosts are added to the
cluster, High Availability (HA) and Distributed Resource Scheduler (DRS) services can be enabled. For the RA tests, both HA and DRS services were
disabled to prevent automatic migrations. The total compute and storage resources available to the created cluster are depicted in the screen shot below:


VMware vSAN Configuration


The next step in the RA testing process is to enable the vSAN services and create a vSAN datastore. The standard options available within VMware vSAN
software, along with VMware Skyline™ health diagnostics8, enabled the virtualized datastore to be created.

vSAN Configuration Settings


Space Efficiency None
Data-at-rest encryption Off
Data-in-transit encryption Off
Large scale cluster support Off

Upon initial creation of the vSAN platform, only a single CM6 Series SSD, with its respective namespaces, was added from each host to create the first
disk group. This process was repeated for the remaining three drives per host, adding one SSD at a time, so that each host contained four (4) disk groups
built from four (4) CM6 Series SSDs, each with either one (1), two (2), four (4), five (5) or eight (8) namespaces.

By configuring the CM6 Series SSDs in this manner, each disk group on the host consists of all namespaces from a single drive, which promotes efficient
cache de-staging between the cache and data namespaces. When the four (4) disk groups are combined, the four (4) CM6 Series SSDs are
represented as thirty-two (32) drives per host, as depicted in the screen shot below:

Though a total of ninety-six (96) disks appear in the vSAN platform, there are only twelve (12) physical disks in the cluster. The vSAN virtualized platform
was successfully created, as shown in the storage tab of the VMware vCenter management software (see ‘R7525’ under the highlighted
R7525-Storage folder).
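Disk groups of this shape can also be created from the ESXi shell with ‘esxcli vsan storage add’; this is a sketch only, since the device identifiers below are placeholders for the cache and capacity namespaces of a single CM6 Series SSD:

```shell
# -s names the cache-tier device (namespace 1); each -d adds a
# capacity-tier device (namespaces 2..n of the same SSD).
esxcli vsan storage add \
  -s t10.NVMe____KCM6XRUL7T68_ns1 \
  -d t10.NVMe____KCM6XRUL7T68_ns2 \
  -d t10.NVMe____KCM6XRUL7T68_ns3
```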


Creating Fault Domains

A single fault domain was established for each host in the three-node cluster to ensure that data can be replicated to another host within the vSAN
configuration, as depicted in the screen shot below:

Creating VM Storage Policies

The RA test was performed using varying storage policies for the VMs. All storage policy parameters were kept at their defaults. Thick provisioning
was also enabled in the storage policy to reserve capacity; however, in vSAN environments all objects are treated as thin-provisioned and zeroes are
not written, so this setting should not impact test performance.

Varying stripe numbers from ‘1’ to ‘5’ were selected and tested for each drive configuration and their respective namespaces. Before each run, the proper
storage policy was chosen by changing the cluster performance service under vSAN services in the configuration tab of the cluster, and by selecting
which storage policy to use for newly created VMs in HCIBench9. Aside from the varying stripe numbers, the storage policy for each VM was configured
per the table below and depicted in the screen shot below:


vSAN Storage Policy Settings


Policy Structure
  Host based services
    Enable host based rules  OFF
  Datastore specific rules
    Enable rules for “vSAN” storage  ON
    Enable tag based placement rules  OFF
vSAN
  Availability
    Site disaster tolerance  None – standard cluster
    Failures to tolerate  1 failure – RAID-1 (Mirroring)
  Advanced Policy Rules
    IOPS limit for object  0
    Object space reservation  Thick provisioning
    Flash read cache reservation (%)  0
    Disable object checksum  OFF
    Force provisioning  OFF


Test Methodology
The system configuration was tested using HCIBench version 2.5.1, which deploys VMs and generates the benchmark workload using VDBENCH as
the benchmarking tool. The storage policy striping for the vSAN datastore was also adjusted, along with the number of disk groups utilized within the
vSAN, based on the number of drives installed. For the RA test runs, the tuning parameters included the following:

HCIBench Test Parameters Values


Number of VMs 12, 24, 30
Number of CPU 4
Size of RAM in GB 8
Number of Data Disk 1, 4, 8, 12, 16, 20, 24
Size of Data Disk in GiB10 14
Additional Test Parameters Values
Number of Disk Groups 2, 4
Stripe Number 1, 2, 3, 4, 5

The standard ‘easy run’ workload examples within HCIBench were also used, as they closely replicate real-world workloads.
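As a sanity check on cache sizing, the aggregate HCIBench working set for any parameter combination above can be computed as VMs multiplied by data disks per VM multiplied by disk size; a quick sketch using one combination from the table:

```shell
# Aggregate working set (GiB) = number of VMs x data disks per VM x disk size.
vms=24
disks_per_vm=8
disk_gib=14
total_gib=$(( vms * disks_per_vm * disk_gib ))
echo "${total_gib} GiB"   # -> 2688 GiB across the cluster
```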

Test Cases
HCIBench test cases were used to measure storage performance for the following workloads: 100% sequential writes, 100% random reads, 70%
read/30% write mixed random, and 50% read/50% write mixed random. Testing was conducted with and without NVMe namespaces as follows:

Test 1: Simulated a large data upload or VM creation
(utilized large 256k block size sequential writes)

Test 2: Emulated a common random read environment
(includes many real-world scenarios, such as analytic workloads, in 4k block sizes)

Test 3: Emulated the most common VM workload for enterprise data centers
(delivered a workload similar to a large cluster of diverse VMs: 70% random reads in 4k block sizes)

Test 4: Emulated a common read/write database workload
(delivered 50% random reads and writes in 8k blocks)

Test Results
For the following RA test results, each individual configuration was tested with a variety of parameters, with and without namespaces, as described in
the test cases above. The best test results were chosen and are depicted in the performance charts below.

Without NVMe Namespaces

HCIBench Test Parameters Value


Number of VMs 24
Number of CPUs 4
Size of RAM in GB 8
Number of Data Disks 8
Size of Data Disk in GiB 14
Additional Test Parameters Value
Number of Disk Groups 4
Stripe Number 1


Without NVMe Namespaces Performance Chart

HCIBench Test 1 (vdb-8vmdk-100ws-256k-0rdpct-0randompct-1threads-1601400604):
  Sequential, 0% read / 100% write, 256k block size, 1 thread
  IOPS: 1,987.4 | Throughput: 582.8 MB/s | Latency: 41.14 ms

HCIBench Test 2 (vdb-8vmdk-100ws-4k-100rdpct-100randompct-4threads-1601402722):
  Random, 100% read / 0% write, 4k block size, 4 threads
  IOPS: 232,483.0 | Throughput: 881.6 MB/s | Latency: 1.71 ms

HCIBench Test 3 (vdb-8vmdk-100ws-4k-70rdpct-100randompct-4threads-1601404873):
  Random, 70% read / 30% write, 4k block size, 4 threads
  IOPS: 96,241.3 | Throughput: 396.2 MB/s | Latency: 3.78 ms

HCIBench Test 4 (vdb-8vmdk-100ws-8k-50rdpct-100randompct-4threads-1601407263):
  Random, 50% read / 50% write, 8k block size, 4 threads
  IOPS: 35,209.9 | Throughput: 333.2 MB/s | Latency: 9.02 ms

With Two (2) NVMe Namespaces

HCIBench Test Parameters        Value
Number of VMs                   24
Number of CPUs                  4
Size of RAM in GB               8
Number of Data Disks            8
Size of Data Disk in GiB        14

Additional Test Parameters      Value
Number of Disk Groups           4
Stripe Number                   4

With Two (2) NVMe Namespaces Performance Chart

HCIBench Test 1 (vdb-8vmdk-100ws-256k-0rdpct-0randompct-1threads-1601400604):
  Sequential, 0% read / 100% write, 256k block size, 4 threads
  IOPS: 10,300.6 | Throughput: 2,575.2 MB/s | Latency: 74.524 ms

HCIBench Test 2 (vdb-8vmdk-100ws-4k-100rdpct-100randompct-4threads-1601402722):
  Random, 100% read / 0% write, 4k block size, 4 threads
  IOPS: 697,25.8 | Throughput: 2,723.2 MB/s | Latency: 1.1245 ms

HCIBench Test 3 (vdb-8vmdk-100ws-4k-70rdpct-100randompct-4threads-1601404873):
  Random, 70% read / 30% write, 4k block size, 4 threads
  IOPS: 379,858.8 | Throughput: 1,483.8 MB/s | Latency: 2.0448 ms

HCIBench Test 4 (vdb-8vmdk-100ws-8k-50rdpct-100randompct-4threads-1601407263):
  Random, 50% read / 50% write, 8k block size, 4 threads
  IOPS: 236,972.6 | Throughput: 1,851.3 MB/s | Latency: 3.2308 ms
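The IOPS and throughput columns in the charts are related through the block size. Assuming the MB/s figures are binary (MiB-based) - an assumption, since the charts label them MB/s - a simple converter reproduces the mixed-workload rows of the two-namespace chart above. This is a sanity check on the reported numbers, not part of the RA's tooling:

```python
def iops_to_mibps(iops, block_bytes):
    """Throughput implied by an IOPS figure, in binary megabytes per second."""
    return iops * block_bytes / 2**20

# Cross-checks against the two-namespace chart above:
#   Test 3: 379,858.8 IOPS at 4k -> ~1,483.8 MB/s
#   Test 4: 236,972.6 IOPS at 8k -> ~1,851.3 MB/s
test3 = round(iops_to_mibps(379_858.8, 4096), 1)
test4 = round(iops_to_mibps(236_972.6, 8192), 1)
```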

With Five (5) NVMe Namespaces

HCIBench Test Parameters        Value
Number of VMs                   24
Number of CPUs                  4
Size of RAM in GB               8
Number of Data Disks            8
Size of Data Disk in GiB        14

Additional Test Parameters      Value
Number of Disk Groups           4
Stripe Number                   2


With Five (5) NVMe Namespaces Performance Chart

HCIBench Test 1 (vdb-8vmdk-100ws-256k-0rdpct-0randompct-1threads-1601400604):
  Sequential, 0% read / 100% write, 256k block size, 4 threads
  IOPS: 8,688.6 | Throughput: 2,172.0 MB/s | Latency: 88.5394 ms

HCIBench Test 2 (vdb-8vmdk-100ws-4k-100rdpct-100randompct-4threads-1601402722):
  Random, 100% read / 0% write, 4k block size, 4 threads
  IOPS: 693,480.5 | Throughput: 2,708.9 MB/s | Latency: 1.1052 ms

HCIBench Test 3 (vdb-8vmdk-100ws-4k-70rdpct-100randompct-4threads-1601404873):
  Random, 70% read / 30% write, 4k block size, 4 threads
  IOPS: 437,519.9 | Throughput: 1,709.0 MB/s | Latency: 1.747 ms

HCIBench Test 4 (vdb-8vmdk-100ws-8k-50rdpct-100randompct-4threads-1601407263):
  Random, 50% read / 50% write, 8k block size, 4 threads
  IOPS: 277,119.2 | Throughput: 2,165.0 MB/s | Latency: 2.7601 ms

With Eight (8) NVMe Namespaces

HCIBench Test Parameters        Value
Number of VMs                   24
Number of CPUs                  4
Size of RAM in GB               8
Number of Data Disks            8
Size of Data Disk in GiB        14

Additional Test Parameters      Value
Number of Disk Groups           4
Stripe Number                   1

With Eight (8) NVMe Namespaces Performance Chart

HCIBench Test 1 (vdb-8vmdk-100ws-256k-0rdpct-0randompct-1threads-1601400604):
  Sequential, 0% read / 100% write, 256k block size, 4 threads
  IOPS: 15,700.7 | Throughput: 3,925.1 MB/s | Latency: 49.0193 ms

HCIBench Test 2 (vdb-8vmdk-100ws-4k-100rdpct-100randompct-4threads-1601402722):
  Random, 100% read / 0% write, 4k block size, 4 threads
  IOPS: 896,555.8 | Throughput: 3,502.1 MB/s | Latency: 0.8637 ms

HCIBench Test 3 (vdb-8vmdk-100ws-4k-70rdpct-100randompct-4threads-1601404873):
  Random, 70% read / 30% write, 4k block size, 4 threads
  IOPS: 480,309.0 | Throughput: 1,876.1 MB/s | Latency: 1.5959 ms

HCIBench Test 4 (vdb-8vmdk-100ws-8k-50rdpct-100randompct-4threads-1601407263):
  Random, 50% read / 50% write, 8k block size, 4 threads
  IOPS: 421,026.9 | Throughput: 3,289.2 MB/s | Latency: 1.8408 ms

Summary: Increasing NVMe Namespaces IOPS Performance


The IOPS performance advantage of using NVMe namespaces was significant in all cases across the four workloads when compared to not using NVMe namespaces. The difference in performance between an eight (8) namespace configuration and a configuration without namespaces is illustrated below:

Test Results            With 8 Namespaces   Without Namespaces   Namespace Advantage
                        (in IOPS)           (in IOPS)            (higher is better)
Test 1 - 256KB Write    15,700.7            1,987.4              ~790%
Test 2 - Read           896,555.8           232,483.0            ~385%
Test 3 - Mixed VM       480,309.0           96,241.3             ~499%
Test 4 - Database       421,026.9           35,209.9             ~1,195%

The RA test results indicate that a four-drive configuration using namespaces outperforms an identical configuration without them. As the number of namespaces increases, so does IOPS performance for the configurations tested. Depending on the workload, IOPS performance was 385% to 1,195% better when namespaces were used.

The throughput performance advantage of using NVMe namespaces was also significant in all cases across the four workloads when compared to not using NVMe namespaces, as illustrated below:

Summary: Increasing NVMe Namespaces Throughput Performance


Test Analysis
Extensive testing was performed with varying parameters across two (2), three (3), four (4) and five (5) disk groups in VMware vSAN software. Testing was also performed with drives in different slots. The results of the RA testing show that a four (4) disk group configuration is optimal, especially when the four (4) drives are installed in bays such that two (2) drives are connected to each processor. The striping of the VM storage policies was also tested, ranging from ‘1’ to ‘5’. For these tests, there was no noticeable trend as to which stripe number performed best. VMware recommends setting the stripe policy to ‘1,’ but it can be tuned depending on the number of SSDs installed in each server, as well as for read-intensive versus write-intensive operations. Higher stripe numbers yielded faster writes, but at the cost of a read penalty.

Providing individual VMs with additional CPUs, more RAM or larger data disks produced no noticeable change in performance. As a result, these values were held constant at four (4) vCPUs, eight (8) GB of RAM and fourteen (14) GiB per data disk, respectively. It was noted that as the number of VMs increased, IOPS performance plateaued at twenty-four (24) VMs, indicating a bottleneck at the vSAN level. In addition, performance decreased when the number of data disks per VM exceeded eight (8).

With performance as fast as 896,555.8 IOPS, the RA-tested system configuration can handle a large number of VMs while still delivering latency of only ~0.86ms, maintained on average for the duration of the test run. This result compares very favorably to the sequential write latency measured in the write-intensive environment (~49ms). The use of NVMe namespaces enables CM6 Series SSDs to be utilized as both cache and capacity storage while delivering very fast response times.

The storage policy was set to ‘mirroring’ to enable redundancy, a common practice in many data centers. Mirroring requires that each write I/O be replicated to a different host, while read I/O can be issued directly from a single SSD. Since VMware vSAN virtualization delivers software-based replication, local SSDs can be used like a Storage Area Network (SAN). To achieve this, each VM write operation must follow this path:

1. A write operation from a VM requires a ‘write to cache’ on the local host

2. The write operation is then de-staged from cache and requires a ‘write to capacity disk’ on the local host

3. A second write operation from cache is sent to a second host over the network for redundancy
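As a simplified model of the mirrored write path above (assuming the remote copy is also de-staged from cache to capacity on its host), each front-end VM write fans out to four back-end device writes. This is an illustration of the overhead, not a description of vSAN internals:

```python
def backend_device_writes(frontend_writes, replicas=2, tiers_per_host=2):
    """Back-end device writes for mirrored vSAN I/O: each replica host
    performs a cache write followed by a de-staged capacity write."""
    return frontend_writes * replicas * tiers_per_host

# One VM write -> 4 device-level writes across the two mirrored hosts
```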


The manner in which data replication is handled between hosts can introduce significant overhead in each SSD stack. Since data must be read and re-written by each of the three SSDs per host, each representing one-third of the host's total storage performance, the combined process of reading and writing local data while also writing remote data from other hosts can adversely affect overall storage performance. Because the drives used in the vSAN platform were software-limited to a queue depth of 2,000 per device, the use of namespaces enabled each CM6 Series SSD to present a logical queue depth of 16,000, or eight times (8x) that of the physical drive, which can significantly boost performance.
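The queue-depth arithmetic from the paragraph above: with the vSAN software limit of 2,000 outstanding commands per presented device, splitting one physical SSD into namespaces multiplies the logical queue depth available to the host:

```python
PER_DEVICE_QUEUE_DEPTH = 2_000  # vSAN software limit per presented device

def logical_queue_depth(namespaces):
    """Aggregate queue depth for one physical SSD split into namespaces,
    each of which vSAN treats as an independent device."""
    return PER_DEVICE_QUEUE_DEPTH * namespaces

# 8 namespaces -> 16,000, eight times the physical drive's software limit
```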

Recommendations
From the performance testing, the utilization of KIOXIA CM6 Series SSDs and NVMe namespaces with VMware vSAN virtualization demonstrated a storage performance gain of nearly 500% on a mixed VM workload and an almost 1,200% gain in database performance. The results of these two workloads are representative of the real-world storage performance of many organizations that run a broad mix of VMs on their virtualization clusters. The data also shows decreased latency in these tests as a result of using namespaces. These advantages are significant in enabling higher storage performance on vSAN clusters.

VMware administrators should strongly consider utilizing namespaces in their vSAN configurations to maximize the storage performance available to their VMs. The performance results captured from three servers and twelve SSDs show that this configuration can compete with some all-flash array solutions available in the market today. These recommendations are best implemented when namespaces are factored into server cluster design and hardware purchasing decisions, and run on the recommended hardware featured in this RA.

Summary
The Dell EMC PowerEdge R7525 server (with AMD PCIe 4.0 CPUs), VMware vSAN software and KIOXIA CM6 Series PCIe 4.0 SSDs are a powerful
combination to deliver optimal performance in an HCI environment. The participating companies collaborated to tune and optimize hardware and software
with NVMe namespaces to create a reference architecture with previously unpublished settings. Utilizing and tuning the use of NVMe namespaces
greatly improves vSAN performance and enables full utilization of KIOXIA CM6 Series PCIe 4.0 SSDs in both cache and capacity tiers.

Tuned performance resulted in increases of 385% to nearly 1,200% across various operational workloads compared to the same configuration without tuning.

KIOXIA was the first storage vendor to deliver U.3-conformant enterprise SSDs based on the PCIe 4.0 interface and NVMe specification with its CM6 Series SSDs, which are an integral part of this high-performance solution using vSAN virtualization and NVMe namespaces on a Dell EMC PowerEdge R7525 server.

Tuning of the CPU, vSAN software and CM6 Series SSDs, in conjunction with proper BIOS and vSAN settings, achieved optimal vSAN performance in this configuration (PowerEdge R7525 server with NVMe namespaces and CM6 Series SSDs). A summary of the tuning parameters and settings follows:


Summary of Tuning Parameters and Settings

Optimized AMD EPYC CPU Tuning Parameters: Dell EMC PowerEdge R7525
IOMMU Support Enabled
L1 Stream HW Prefetcher Enabled
L2 Stream HW Prefetcher Enabled
NUMA Nodes Per Socket 1
L3 cache as NUMA Domain Disabled
Minimum SEV non-ES ASID 1
x2APIC Mode Enabled
Number of CCDs per Processor All
Number of Cores per CCD All
Memory Selection:
Number of DIMMs 16
BIOS Memory Settings:
System Memory Testing Disabled
DRAM Refresh Delay Performance
Memory Operating Mode Optimizer Mode
Memory Interleaving Auto
BIOS Power Policy:
BIOS Power Setting Maximum Performance
KIOXIA CM6 SSD Tuning Parameters:
Drive Cache Namespace 1 x 600GB
Drive Capacity Namespace 7 x 925GB
Drive Power Level 25W
VMware vSAN Tuning Parameters:
VLAN Configuration Separate VLAN and VMK for vSAN
Network MTU 9000
Configure > Hardware > Overview > Power Management Policy High Performance
VMware vSAN Settings:
Enable host based rules OFF
Enable rules for “vSAN” Storage ON
Enable tag based placement rules OFF
IOPS limit for object 0
Object space reservation Thick provisioning
Flash read cache reservation (%) 0
Disable object checksum OFF
Force provisioning OFF
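The CM6 namespace layout from the tuning table above (one 600 GB cache namespace plus seven 925 GB capacity namespaces per drive) can be tallied as follows; the total provisioned figure is derived, not stated in the table:

```python
CACHE_NS_GB = 600        # 1 x 600 GB cache namespace per drive
CAPACITY_NS_GB = 925     # 7 x 925 GB capacity namespaces per drive
CAPACITY_NS_COUNT = 7

namespaces = [CACHE_NS_GB] + [CAPACITY_NS_GB] * CAPACITY_NS_COUNT
total_gb = sum(namespaces)  # 600 + 7 * 925 = 7,075 GB across 8 namespaces
```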

More CM6 Series SSD information is available at business.kioxia.com.


TRADEMARKS

The following trademarks, service and/or company names – AMD, the AMD Arrow logo, AMD EPYC and combinations thereof, Advanced Micro Devices, Inc. Dell, Dell EMC, the Dell logo, PowerEdge, Dell Inc., Linux, Linus Torvalds, Micron,
Micron Technology, Inc., NVMe, NVM Express, Inc., PCIe, PCI-SIG, SK hynix, SK Holdings Co., Ltd., VMware vCenter, VMware vCenter Server, VMware ESXi, VMware vSAN, VMware Skyline, VMware vSphere, the VMware logo, VMware, Inc. – are
not applied, registered, created and/or owned by KIOXIA Europe GmbH or by affiliated KIOXIA group companies. However, they may be applied, registered, created and/or owned by third parties in various jurisdictions and therefore protected
against unauthorized use.

NOTES

1. The test configurations were finalized on September 10, 2020 and the testing itself was concluded on September 21, 2020. During testing, the latest hardware updates and software revisions were applied as they became available, but may not represent the latest updates or revisions available upon publication of this reference architecture.

2. Definition of capacity - KIOXIA defines a megabyte (MB) as 1,000,000 bytes, a gigabyte (GB) as 1,000,000,000 bytes and a terabyte (TB) as 1,000,000,000,000 bytes. A computer operating system, however, reports storage capacity using powers of 2, where 1Gbit = 2^30 bits = 1,073,741,824 bits, 1GB = 2^30 bytes = 1,073,741,824 bytes and 1TB = 2^40 bytes = 1,099,511,627,776 bytes, and therefore shows less storage capacity. Available storage capacity (including examples of various media files) will vary based on file size, formatting, settings, software and operating system, and/or pre-installed software applications, or media content. Actual formatted capacity may vary.

3. Drive Write(s) per Day (DWPD): One full drive write per day means the drive can be written and re-written to full capacity once a day, every day, for the specified lifetime. Actual results may vary due to system configuration, usage, and other factors.

4. 2.5-inch indicates the form factor of the SSD and not the drive's physical size.

5. Source: Dell EMC PowerEdge Installation and Service Manual, page 71, https://dl.dell.com/topicspdf/poweredge-r7525_owners-manual_en-us.pdf.

6. Source: Dell EMC PowerEdge Installation and Service Manual, page 8, https://dl.dell.com/topicspdf/poweredge-r7525_owners-manual_en-us.pdf.

7. Equipment in a server rack is mounted before being moved to the data center for deployment.

8. VMware Skyline health diagnostics for the VMware vSphere virtualization platform is a self-service tool that detects issues using log bundles and provides suggestions to remediate them.

9. The Hyper-Converged Infrastructure Benchmark (HCIBench) is a testing tool designed for HCI clusters. It fully automates the end-to-end process of deploying test VMs, coordinating workload runs, aggregating test results, delivering performance analysis and collecting necessary data for troubleshooting purposes, all in a consistent and controlled way.

10. A gibibyte (GiB) is a base-2 digital data storage unit where 1 GiB = 2^30 (1,073,741,824) bytes.

11. Linear scalability up to maximum configurations is addressed in two VMware, Inc. papers: (1) VMware Virtual SAN™ 6.0 Performance – Scalability and Best Practices, https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/whitepaper/products/vsan/vmware-virtual-san6-scalability-performance-white-paper.pdf, and (2) Options in Scalability with vSAN, https://blogs.vmware.com/virtualblocks/2016/09/15/options-scalability-virtual-san/.

DISCLAIMERS

The technology tested in this RA is not currently supported by VMware, Inc. and may never be. The test configuration is not recommended for use in any production environment especially where VMware support is required.

Dell has certified the KIOXIA CM6 Series SSDs as part of its PowerEdge agnostic (Dell-branded) drive offering. For more information about agnostic SSDs, see https://downloads.dell.com/manuals/common/dfd-advantages_config_poweredge_agnostic_ssd.pdf.

KIOXIA may make changes to specifications and product descriptions at any time. The information presented in this best practices reference architecture is for informational purposes only and may contain technical inaccuracies, omissions and
typographical errors. Any performance tests and ratings are measured using systems that reflect the approximate performance of KIOXIA products as measured by those tests. Any differences in software or hardware configuration may affect
actual performance, and KIOXIA does not control the design or implementation of third party benchmarks or websites referenced in this document. The information contained herein is subject to change and may be rendered inaccurate for many
reasons, including but not limited to any changes in product and/or roadmap, component and hardware revision changes, new model and/or product releases, software changes, firmware changes, or the like. KIOXIA assumes no obligation to
update or otherwise correct or revise this information.

KIOXIA makes no representations or warranties with respect to the contents herein and assumes no responsibility for any inaccuracies, errors or omissions that may appear in this information.

KIOXIA specifically disclaims any implied warranties of merchantability or fitness for any particular purpose. In no event will KIOXIA be liable to any person for any direct, indirect, special or other consequential damages arising from the use of
any information contained herein, even if KIOXIA is expressly advised of the possibility of such damages.
