
Installation Guide FUJITSU Integrated System PRIMEFLEX for Microsoft Storage Spaces Direct (4-node)

Installation Guide
PRIMEFLEX for Microsoft Storage Spaces Direct
4-node configuration
This guide shows all required steps to install and configure Storage Spaces Direct (S2D) on a 4-node
PRIMEFLEX for Microsoft Storage Spaces Direct configuration.

Contents
Introduction 2
Selecting a PRIMEFLEX for Microsoft Storage Spaces Direct
configuration 2
Installing and cabling the hardware 2
Configuring the server UEFI BIOS 2
Preparing the environment and additional requirements 3
Installing Windows Server 2016 Datacenter on server nodes 4
Install roles and features 5
Install latest Windows Server updates 5
Install latest supported firmware and driver versions 5
Configure the storage network 5
Configure Windows Server cluster 7
Enable Storage Spaces Direct 8
Validate your S2D deployment 10
Plan volumes 10
Create volumes 14
Install Windows Admin Center to manage S2D 17

Page 1 of 17 https://partners.ts.fujitsu.com/com/products/sol/infra/Pages/PF4StorageSpaces.aspx

Introduction
PRIMEFLEX for Microsoft Storage Spaces Direct is an integrated system that includes all the hardware and software to simplify the deployment of
a Microsoft-based hyper-converged IT infrastructure. It uses powerful, energy-efficient FUJITSU PRIMERGY x86 standard servers and Microsoft®
Windows Server® 2016 Datacenter integrated software-defined server and storage technology to reduce complexity and TCO in the data center.

This Installation Guide describes how to configure your 4-Node PRIMEFLEX for Microsoft Storage Spaces Direct solution with Windows Server 2016
Datacenter and Microsoft Storage Spaces Direct.
Please refer to the “Solution Design Guide FUJITSU Integrated System PRIMEFLEX for Microsoft Storage Spaces Direct” if you need assistance in
selecting server nodes and configurations for your planned Microsoft Storage Spaces Direct cluster.

Selecting a PRIMEFLEX for Microsoft Storage Spaces Direct configuration


To select a suitable PRIMEFLEX for Microsoft Storage Spaces Direct configuration refer to the “Solution Design Guide FUJITSU Integrated System
PRIMEFLEX for Microsoft Storage Spaces Direct”. It shows all available configurations and configuration options. In this installation guide, we will
only describe the installation and configuration of a 4-node configuration for Microsoft Storage Spaces Direct.

Installing and cabling the hardware


Install the four server nodes in your physical environment. Refer to the “FUJITSU Server PRIMERGY RX2540 M4 Upgrade and Maintenance Manual”
if you have questions regarding these steps. The minimum requirement is two NIC ports in each server, each with at least 10 Gbit port speed,
and a 10 Gbit capable networking infrastructure. Regarding the minimum hardware requirements, you can use any existing 10 Gbit network
infrastructure. In the 4-node configuration, we strongly recommend using switches that support RDMA traffic via RoCE / RoCE v2. The main
advantages of using RDMA are reduced CPU load, higher port speed and reduced latency – all important aspects when using Storage Spaces
Direct. If you want to use RDMA traffic via RoCE / RoCE v2, your switches must support the Data Center Bridging (DCB) protocol IEEE 802.1Qbb
for Priority Flow Control. Fujitsu offers several different DCB capable switches. Please refer to the “Solution Design Guide FUJITSU Integrated
System PRIMEFLEX for Microsoft Storage Spaces Direct” for details about offered compatible switches and network cabling samples. For
configuring your switches to enable RDMA traffic, please refer to the configuration guide of the switch vendor.

Configuring the server UEFI BIOS


For Storage Spaces Direct, Microsoft and Fujitsu recommend the following settings for your UEFI server BIOS. Open the UEFI BIOS
configuration on your PRIMERGY Server system and go to the menu “CPU Configuration”. Configure the following values:
■ Hyper Threading: “Enabled”
■ Intel Virtualization Technology: “Enabled”
■ VT-d: “Enabled”
■ Turbo Mode: “Enabled”
■ Override OS Energy Performance: “Enabled”
■ Energy Performance (this field appears after enabling the option above): “Performance”
■ CPU C1E Support: “Enabled”
■ Autonomous C-State Support: “Enabled”
■ Package C-State limit: “C2”


The final “CPU Configuration” should look like this:

Figure: UEFI BIOS configuration settings

You must configure these values on all four Servers. You can also configure the settings on one server, export the configuration and import the
configuration on the remaining three servers.

Preparing the environment and additional requirements


In Windows Server 2012 R2 and previous versions, a cluster could only be created between member nodes joined to the same domain. Windows
Server 2016 breaks down these barriers and introduces the ability to create a Failover Cluster without Active Directory dependencies. Failover
Clusters can now therefore be created in the following configurations:

 Single-domain Clusters: Clusters with all nodes joined to the same domain
 Multi-domain Clusters: Clusters with nodes which are members of different domains
 Workgroup Clusters: Clusters with nodes which are member servers / workgroup (not domain joined)

Multi-domain clusters and workgroup clusters do not support Hyper-V live migration and file share witness (FSW) quorum configuration.
Therefore Microsoft does not recommend these models for Storage Spaces Direct.

For more information, please see the following Microsoft information on Workgroup and Multi-domain clusters in Windows Server 2016.

The recommendation is one single-domain cluster for the server nodes. The Active Directory infrastructure must be outside of the Storage Spaces
Direct cluster. The infrastructure – including Active Directory Domain Services and DNS Services – must be redundant. If Active Directory
services are not available, no Kerberos-based authentication is possible: live migration, SMB Direct communication and the file share witness
will not work. The cluster resources themselves will not fail, and the VMs will remain online.

Fujitsu recommends the use of one physical management server for both Active Directory authentication and the File Share Witness (FSW). The
management server is also recommended for managing the infrastructure using third-party software components as well as Fujitsu ServerView
software, e.g. ServerView Operations Manager. The new Windows Admin Center can also be hosted on this management server.

For the next steps we expect you to have a Windows Server based Active Directory domain with a domain and forest functional level of at least
“Windows Server 2008 R2” (we will use “Fujitsu-S2D.com” as the domain name) and a server where you can place a file share for the witness.


Installing Windows Server 2016 Datacenter on server nodes


Install Windows Server 2016 Datacenter on your four server nodes. Ensure all nodes are correctly licensed for Windows Server 2016 Datacenter
based on the license terms. We strongly recommend using the FUJITSU Reseller Option Kit (ROK) OEM Version of Windows Server 2016
Datacenter because this is the optimized license for your FUJITSU PRIMERGY server system. As a minimum requirement each of the PRIMERGY
Systems needs a Windows Server 2016 Datacenter Base License with 16 cores. If your servers have more than 16 physical cores you need to
license these additional cores in each of the four servers with additional core licenses for Windows Server 2016 Datacenter.
It is important to install Windows Server 2016 Datacenter because Storage Spaces Direct is only included in Datacenter Edition. We recommend
installing Windows Server 2016 Datacenter with Desktop Experience. Most Windows Administrators are familiar with the Desktop Experience.

Figure: Windows Server 2016 Datacenter Setup – select installation option.

From a technical perspective, we would recommend the Server Core installation option – Server Core has a smaller footprint, needs fewer patches
and has a reduced attack surface – but most Windows administrators are not familiar with Windows Server as a Server Core installation. The
following steps assume that you have installed your four nodes with Windows Server 2016 (Desktop Experience). If you use the Server Core
installation option, the following steps are similar, but you must execute some of them remotely.
Use appropriate Windows computer names for the servers in your environment. We will use the computer names “S2DNode01”,
“S2DNode02”, “S2DNode03” and “S2DNode04” for the four nodes.
When your servers are installed, join them to your domain. We use the domain name “Fujitsu-S2D.com” for this Installation Guide. You can
replace “Fujitsu-S2D.com” with your individual domain name. Ensure that the account you use for configuring Storage Spaces Direct is a member
of the local Administrators group on all four servers.


Install roles and features


The next step is to install required server roles and features. You can do this by using Windows Admin Center, Server Manager, or PowerShell.
Here are the roles and features to install for the 4-node hyper-converged configuration:
■ Failover Clustering
■ Hyper-V
■ Data-Center-Bridging
■ RSAT-Clustering-PowerShell
■ Hyper-V-PowerShell

To install via PowerShell, use the Install-WindowsFeature cmdlet. Open PowerShell ISE with administrative rights and execute the following
command on each of your four server nodes:

Install-WindowsFeature -Name "Hyper-V", "Failover-Clustering", "Data-Center-Bridging", "RSAT-Clustering-PowerShell", "Hyper-V-PowerShell"
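If you prefer to run this once from a management machine instead of logging on to each node, a minimal sketch using PowerShell remoting (the node names are the sample names used in this guide; assumes remoting is enabled and you run as a domain administrator):

```powershell
# Sketch: install the required roles and features on all four nodes remotely.
# Node names are the sample names from this guide - replace with your own.
$Nodes = "S2DNode01", "S2DNode02", "S2DNode03", "S2DNode04"
Invoke-Command -ComputerName $Nodes -ScriptBlock {
    Install-WindowsFeature -Name "Hyper-V", "Failover-Clustering", "Data-Center-Bridging",
        "RSAT-Clustering-PowerShell", "Hyper-V-PowerShell"
}
# Installing Hyper-V requires a reboot of each node afterwards, e.g.:
# Restart-Computer -ComputerName $Nodes -Force
```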

(This part was inspired by the “Deploy Storage Spaces Direct” section of the Storage for Windows documentation and adapted to the FUJITSU
PRIMEFLEX for Microsoft Storage Spaces Direct 4-Node Configuration)

Install latest Windows Server updates


Now ensure you have installed the latest Windows Server updates. There are several fixes and extensions for Storage Spaces Direct available. Use
Windows Update to search for the latest updates and to install them. Ensure all your nodes have installed the latest updates for Windows Server
2016 before you proceed with the next steps.

Install latest supported firmware and driver versions


Install the latest supported driver and firmware versions. Refer to the FUJITSU document “Support Matrix for PRIMEFLEX for Microsoft Storage
Spaces Direct” to find information about the latest supported versions. Especially for NICs, HBAs and NVMe disks it is very important to work with
the correct supported firmware and driver versions for Storage Spaces Direct.

Configure the storage network


Redundant connections must be provided for both networks. The two network connections for the east-west connection use SMB Direct. It is
mandatory to use the PLAN EP MCX4-LX 25 Gb 2p SFP28 LP controller for this connection. It is SDDC AQ Premium qualified for Storage Spaces
Direct and can be used with RDMA and RoCEv2 as well as without RDMA.

There are different network controller options and different configuration options for the north-south connection. The ports used should be
secured against failure by enabling the teaming option – you can use Load Balancing Failover Teaming (LBFO) or Switch Embedded Teaming
(SET). We will not cover the configuration of the NIC Ports for north-south traffic in this installation guide.

We recommend renaming the two Mellanox network connections to “Slot 03 Port 1 - SMB01” and “Slot 03 Port 2 - SMB02”. You can use your own
names, but they should help you to easily identify the correct port on the NICs:

Figure: Renaming Mellanox NICs in “Network Connections”.

Ensure you renamed the connection with device name “Mellanox ConnectX-4 Lx Ethernet Adapter” to “Slot 03 Port 1 - SMB01” and the connection
with device name “Mellanox ConnectX-4 Lx Ethernet Adapter #2” to “Slot 03 Port 2 - SMB02”.
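As an alternative to the “Network Connections” dialog, the renaming can be scripted; a minimal sketch using the device names quoted above:

```powershell
# Rename the two Mellanox ports based on their interface description.
Rename-NetAdapter -InterfaceDescription "Mellanox ConnectX-4 Lx Ethernet Adapter" -NewName "Slot 03 Port 1 - SMB01"
Rename-NetAdapter -InterfaceDescription "Mellanox ConnectX-4 Lx Ethernet Adapter #2" -NewName "Slot 03 Port 2 - SMB02"
```

Run this on each of the four nodes.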

Now open the advanced settings for your Mellanox NIC ports and configure the following values (the configuration must be done on each of your four
servers on both ports of your Mellanox NICs):


Figure: Advanced settings for Mellanox NIC

■ DcbxMode: “Host in Charge”
■ Flow Control: “Disabled”
■ NetworkDirect Functionality: “Enabled”
■ Packet Direct: “Enabled”
■ Priority & VLAN Tag: “Priority & VLAN Enabled”
■ Quality of Service: “Enabled”
■ R/RoCE Max Frame Size: “2048” (ensure this value is compatible with the switch configuration and the supported frame size on the switch)
■ VLAN ID: Refer to the following two tables and use the sample values or the values you prefer in your environment

When you have configured these options, navigate to “Power Management” and unselect the option “Allow the computer to turn off this device
to save power”:

Figure: Configure Power Management for Mellanox NIC
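The advanced settings and the power management option above can also be set from PowerShell instead of Device Manager; a hedged sketch (display names and registry keywords can vary between driver versions, so verify them with Get-NetAdapterAdvancedProperty first):

```powershell
# List the advanced properties and their registry keywords for one port:
Get-NetAdapterAdvancedProperty -Name "Slot 03 Port 1 - SMB01" |
    ft DisplayName, RegistryKeyword, DisplayValue

# Example: set one property by its display name (repeat per setting and per port):
Set-NetAdapterAdvancedProperty -Name "Slot 03 Port 1 - SMB01" `
    -DisplayName "NetworkDirect Functionality" -DisplayValue "Enabled"

# Unselect "Allow the computer to turn off this device to save power":
Disable-NetAdapterPowerManagement -Name "Slot 03 Port 1 - SMB01"
```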


For your storage network based on the Mellanox NICs you will find a configuration example in the following table:
Server Node NIC Port IP Address Subnet Mask VLAN ID
S2DNode01 Slot 03 Port 1 - SMB01 192.168.100.111 255.255.255.0 10
S2DNode01 Slot 03 Port 2 - SMB02 192.168.200.111 255.255.255.0 20
S2DNode02 Slot 03 Port 1 - SMB01 192.168.100.112 255.255.255.0 10
S2DNode02 Slot 03 Port 2 - SMB02 192.168.200.112 255.255.255.0 20
S2DNode03 Slot 03 Port 1 - SMB01 192.168.100.113 255.255.255.0 10
S2DNode03 Slot 03 Port 2 - SMB02 192.168.200.113 255.255.255.0 20
S2DNode04 Slot 03 Port 1 - SMB01 192.168.100.114 255.255.255.0 10
S2DNode04 Slot 03 Port 2 - SMB02 192.168.200.114 255.255.255.0 20

You can use the following table to enter your planned values for server name, IP addresses, subnet mask and VLAN ID:
Server Node NIC Port IP Address Subnet Mask VLAN ID
Slot 03 Port 1 - SMB01
Slot 03 Port 2 - SMB02
Slot 03 Port 1 - SMB01
Slot 03 Port 2 - SMB02
Slot 03 Port 1 - SMB01
Slot 03 Port 2 - SMB02
Slot 03 Port 1 - SMB01
Slot 03 Port 2 - SMB02

Configure the NIC ports with your IP address and VLAN ID information. When you have configured your values, try to ping Mellanox Port 1 on
S2DNode01 from Mellanox Port 1 on S2DNode02 and vice versa. In addition, try to ping Mellanox Port 2 on S2DNode01 from Mellanox Port 2 on
S2DNode02 and vice versa. If you are not successful, check your cabling, IP configuration, VLAN IDs and the Windows Server firewall, and repeat
the ping test. Repeat this test for each port on each node.
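The table values can be applied per node with a few NetAdapter and NetTCPIP cmdlets; a sketch for S2DNode01 using the sample values above (repeat with the matching addresses on the other nodes):

```powershell
# Assign the storage IP addresses from the sample table (S2DNode01):
New-NetIPAddress -InterfaceAlias "Slot 03 Port 1 - SMB01" -IPAddress 192.168.100.111 -PrefixLength 24
New-NetIPAddress -InterfaceAlias "Slot 03 Port 2 - SMB02" -IPAddress 192.168.200.111 -PrefixLength 24

# Tag each port with its VLAN ID:
Set-NetAdapter -Name "Slot 03 Port 1 - SMB01" -VlanID 10
Set-NetAdapter -Name "Slot 03 Port 2 - SMB02" -VlanID 20
```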

Open an administrative PowerShell ISE session and execute the following command:

Get-NetAdapterRdma | ft Name, InterfaceDescription, Enabled

You should see a list of your RDMA enabled NICs in your server. If you configured everything correctly, your two Mellanox NIC ports should be in
the list. Depending on the configuration of additional NICs in the server, you might see additional NICs in the list.

For more information about network configuration options for Storage Spaces Direct refer to Windows Server 2016 Converged NIC and Guest
RDMA Deployment Guide.

Configure Windows Server cluster


In this step, we configure the Windows Server Cluster. The Windows Server Cluster is required to enable Storage Spaces Direct. Before we can
configure our Cluster, the configuration must be validated.
In this step, you'll run the cluster validation tool to ensure that the server nodes are configured correctly to create a cluster using Storage Spaces
Direct. When cluster validation (Test-Cluster) is run before the cluster is created, it runs the tests that verify that the configuration appears
suitable to successfully function as a failover cluster.

Open PowerShell ISE and use the following PowerShell command to validate if your server nodes are suitable for Windows Server Clustering and
Storage Spaces Direct (replace the server names with your server names):

Test-Cluster -Node S2DNode01, S2DNode02, S2DNode03, S2DNode04 -Include "Storage Spaces Direct",
"Inventory", "Network", "System Configuration"

Open and review the validation report. You should not see any warnings or errors. If you have errors, you must fix them before you can proceed.
If you have warnings, you should check if they are expected.

Now we are ready to configure the Windows Server Cluster. In the next step, you'll create a cluster with the four nodes that you have validated for
cluster creation. When creating the cluster, you'll get a warning that states - "There were issues while creating the clustered role that may


prevent it from starting. For more information, view the report file below." You can safely ignore this warning. It's due to no disks being available
for the cluster quorum witness. It’s recommended that a file share witness or cloud witness is configured after creating the cluster.
Open PowerShell ISE with administrative rights and execute the following command (replace “Name” with the name of your cluster, “Node” with
the names of your four cluster nodes and “StaticAddress” with an IP address in your domain environment):

New-Cluster -Name S2DCluster01 -Node S2DNode01, S2DNode02, S2DNode03, S2DNode04 -NoStorage -StaticAddress 172.18.100.101

A 4-node deployment requires a cluster witness, otherwise you have no guarantee that your cluster can survive two simultaneous node failures.

Figure: 4-Node Cluster with Witness (Source: Microsoft: Understanding cluster and pool quorum)

A 4-node deployment with a cluster witness
■ can survive one server failure
■ can survive one server failure, then another
■ can survive two server failures at once

 Please note: It is strongly recommended to configure a cluster witness!


Read more about cluster and pool quorum in the following Microsoft document: Understanding cluster and pool quorum

We recommend configuring a File Share Witness. Open an administrative PowerShell ISE session and execute the following command to
configure a File Share Witness with the Share name “S2D-Witness” and the Server name “FS01.Fujitsu-S2D.com” (replace the values with the
values used in your environment):

Set-ClusterQuorum -FileShareWitness \\FS01.Fujitsu-S2D.com\S2D-Witness

For more information about configuring a file share witness, see Configuring a File Share Witness on a Scale-Out File Server. If you want to
configure a Cloud Witness, see Deploy a Cloud Witness for a Failover Cluster.

(This part was inspired by the “Deploy Storage Spaces Direct” section of the Storage for Windows documentation and adapted to the FUJITSU
PRIMEFLEX for Microsoft Storage Spaces Direct 4-Node Direct Attached Configuration)

Enable Storage Spaces Direct


Before you enable Storage Spaces Direct, ensure your disk drives are empty. Usually they are empty when you configure your PRIMEFLEX for
Microsoft Storage Spaces Direct solution for the first time. However, you may already have tested something on the server nodes. Run the following
script to ensure the drives you will use for S2D are empty. Substitute “S2DNode01”, “S2DNode02”, “S2DNode03” and “S2DNode04” in the second
line of the script with your server names; it removes everything on the disks, including partitions and metadata.

Important notice: The following script will permanently remove any data on any disk drives other than the operating system boot drive!

# Fill in these variables with your values
$ServerList = "S2DNode01", "S2DNode02", "S2DNode03", "S2DNode04"

Invoke-Command ($ServerList) {
    Update-StorageProviderCache
    Get-StoragePool | ? IsPrimordial -eq $false | Set-StoragePool -IsReadOnly:$false -ErrorAction SilentlyContinue
    Get-StoragePool | ? IsPrimordial -eq $false | Get-VirtualDisk | Remove-VirtualDisk -Confirm:$false -ErrorAction SilentlyContinue
    Get-StoragePool | ? IsPrimordial -eq $false | Remove-StoragePool -Confirm:$false -ErrorAction SilentlyContinue
    Get-PhysicalDisk | Reset-PhysicalDisk -ErrorAction SilentlyContinue
    Get-Disk | ? Number -ne $null | ? IsBoot -ne $true | ? IsSystem -ne $true | ? PartitionStyle -ne RAW | % {
        $_ | Set-Disk -isoffline:$false
        $_ | Set-Disk -isreadonly:$false
        $_ | Clear-Disk -RemoveData -RemoveOEM -Confirm:$false
        $_ | Set-Disk -isreadonly:$true
        $_ | Set-Disk -isoffline:$true
    }
    Get-Disk | Where Number -Ne $Null | Where IsBoot -Ne $True | Where IsSystem -Ne $True | Where PartitionStyle -Eq RAW | Group -NoElement -Property FriendlyName
} | Sort -Property PsComputerName, Count

After creating the cluster, use the Enable-ClusterStorageSpacesDirect PowerShell cmdlet, which will put the storage system into the Storage
Spaces Direct mode and do the following automatically:
■ Creates a pool: Creates a single large pool with a name like "S2D on S2DCluster01".
■ Configures the Storage Spaces Direct caches: If there is more than one media (drive) type available for Storage Spaces Direct use, it enables
the fastest as cache devices (read and write in most cases).
■ Creates tiers: Creates two default tiers, one called "Capacity" and the other called "Performance". The cmdlet analyzes the devices and
configures each tier with the mix of device types and resiliency.

Open an administrative PowerShell ISE window on one of your four cluster nodes and execute the following command:

Enable-ClusterStorageSpacesDirect

When this command is finished, which may take several minutes, the system will be ready to create volumes.
(This part was inspired by the “Deploy Storage Spaces Direct” section of the Storage for Windows documentation and adapted to the FUJITSU
PRIMEFLEX for Microsoft Storage Spaces Direct 4-Node Direct Attached Configuration)


Validate your S2D deployment


Use Failover Cluster Manager to validate your Windows Server Cluster and S2D Configuration. When you open Failover Cluster Manager you will
see your cluster. When you select your Cluster, you can review your witness configuration and you will see Storage Spaces Direct is enabled:

Figure: Sample screenshot from Failover Cluster manager after enabling S2D

Also review “Enclosures” and ensure you can see your four server nodes, review “Pools” and ensure your previously created storage pool is
displayed, and review “Disks” to ensure all drives you expect are listed here.

Plan volumes
Volumes are the datastores where you put the files your workloads need, such as VHD or VHDX files for Hyper-V virtual machines. Volumes
combine the drives in the storage pool to introduce the fault tolerance, scalability, and performance benefits of Storage Spaces Direct.
All volumes are accessible by all servers in the cluster at the same time. Once created, they show up at C:\ClusterStorage\ on all nodes of the S2D
cluster.
We recommend making the number of volumes a multiple of the number of servers in your cluster. If you have 4 servers, you will experience
more consistent performance with 4 or 8 total volumes than with 3 or 5. This allows the cluster to distribute volume "ownership" (one server
handles metadata orchestration for each volume) evenly among servers.
Microsoft recommends using the new Resilient File System (ReFS) for Storage Spaces Direct. ReFS is the premier file system purpose-built for
virtualization and offers many advantages, including dramatic performance accelerations and built-in protection against data corruption.
However, it does not yet support certain features, such as Data Deduplication (Data Deduplication Support will be added in Windows Server
2019). If your workload requires a feature that ReFS doesn't support yet, you can use NTFS instead.
With four or more servers, you can choose for each volume whether to use three-way mirroring, dual parity (often called "erasure coding"), or mix
both resiliency types.
Dual parity provides the same fault tolerance as three-way mirroring but with better storage efficiency. With four servers, its storage efficiency is
50% – to store 2 TB of data, you need 4 TB of physical storage capacity in the storage pool. This increases to 66.7% storage efficiency with seven
servers, and continues up to 80.0% storage efficiency. The tradeoff is that parity encoding is more compute-intensive, which can limit its
performance.


Figure: Resiliency with four servers (Source: Microsoft: Planning volumes in Storage Spaces Direct)

Which resiliency type to use depends on the needs of your workload. The following table summarizes which workloads are a good fit for each
resiliency type, as well as the performance and storage efficiency of each resiliency type.

Figure: Comparison of Resiliency types with four servers (Source: Microsoft: Planning volumes in Storage Spaces Direct)

When performance matters most


Workloads that have strict latency requirements or need lots of mixed random IOPS, such as SQL Server databases or performance-sensitive
Hyper-V virtual machines, should run on volumes that use mirroring to maximize performance.

When capacity matters most


Workloads that write infrequently, such as data warehouses or "cold" storage, should run on volumes that use dual parity to maximize storage
efficiency. Certain other workloads, such as traditional file servers, virtual desktop infrastructure (VDI), or others that don't create lots of
fast-drifting random IO traffic and/or don't require the best performance may also use dual parity, at your discretion. Parity inevitably increases
CPU utilization and IO latency, particularly on writes, compared to mirroring.

When data is written in bulk


Workloads that write in large, sequential passes, such as archival or backup targets, have another option: a volume can mix mirroring and dual
parity (mirror accelerated parity). Writes land first in the mirrored portion and are gradually moved into the parity portion later. This accelerates
ingestion and reduces resource utilization when large writes arrive by allowing the compute-intensive parity encoding to happen over a longer
time. When sizing the portions, consider that the quantity of writes that happen at once (such as one daily backup) should comfortably fit in the
mirror portion. For example, if you ingest 100 GB once daily, consider using mirroring for 150 GB to 200 GB, and dual parity for the rest.

We recommend evaluating your workload thoroughly to select the appropriate resiliency type. If you do not have enough
details about the expected workload, we recommend using only volumes with the resiliency type mirror (this means three-way mirror in
a configuration with four nodes). You can specify the resiliency type for each volume – depending on your hardware configuration you could
configure two volumes with three-way mirror and two volumes with mirror-accelerated parity.

Size of volumes and Footprint


The size of a volume refers to its usable capacity, the amount of data it can store. It is provided by the -Size parameter of the New-Volume
cmdlet and then appears in the Size property when you run the Get-Volume cmdlet. Size is distinct from the volume's footprint, the total physical
storage capacity it occupies in the storage pool. The footprint depends on the resiliency type. For example, volumes that use three-way mirroring


have a footprint three times their size. If you create a volume with 2 TB size in your 4-node S2D configuration, the footprint will be 6 TB. The
footprints of your volumes need to fit in the storage pool.
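The footprint arithmetic from this example can be checked in a few lines (a sketch; three-way mirroring stores three copies of all data, so the footprint is three times the usable size):

```python
def mirror_footprint_tb(size_tb: float, copies: int = 3) -> float:
    """Physical pool capacity consumed by a mirrored volume (TB)."""
    return size_tb * copies

# A 2 TB three-way-mirror volume occupies 6 TB in the storage pool:
print(mirror_footprint_tb(2))  # 6
```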

Figure: Footprint of Volumes (Source: Microsoft: Planning volumes in Storage Spaces Direct)

Reserve capacity for rebuilds


Storage Spaces Direct does not use “hot spare” disks. Instead, it uses reserve capacity for parallel rebuilds. Leaving some capacity in the
storage pool unallocated gives volumes space to repair "in-place" after drives fail, improving data safety and performance. If there is enough
capacity, an immediate, in-place, parallel repair can restore volumes to full resiliency even before the failed drives are replaced. This repair
process happens automatically.
Microsoft recommends reserving the equivalent of one capacity drive per server for parallel rebuild actions. You may reserve more at your
discretion, but this minimum recommendation guarantees an immediate, in-place, parallel repair can succeed after the failure of any drive.

Figure: Reserve capacity (Source: Microsoft: Planning volumes in Storage Spaces Direct)

The reserve capacity is not reserved automatically. Therefore, we plan for the reserve and volume sizes in the next steps.

We recommend using the Storage Spaces Direct Calculator Preview to calculate your total capacity, reserve capacity and size of your volumes.


Our sample configuration shows a 4-Node S2D cluster with 2x 1.6 TB NVMe (used as Cache) and 8x 4 TB HDD in each of the four nodes:

Figure: Sample capacity planning (based on Storage Spaces Direct Calculator Preview)

Based on this sample configuration, the usable capacity will be 37.3 TB and the reserve capacity 16 TB.
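These sample numbers can be reproduced with simple arithmetic (a sketch of the underlying calculation; the NVMe cache devices contribute no capacity, the reserve equals one capacity drive per server, and three-way mirroring divides the remaining raw capacity by three):

```python
# Sample configuration from this guide: 4 nodes, 8x 4 TB HDD each
# (the 2x 1.6 TB NVMe per node serve only as cache, not capacity).
servers = 4
hdd_per_server = 8
hdd_size_tb = 4.0

raw_tb = servers * hdd_per_server * hdd_size_tb  # total raw capacity
reserve_tb = servers * hdd_size_tb               # one capacity drive per server
usable_tb = (raw_tb - reserve_tb) / 3            # three-way mirror stores 3 copies

print(raw_tb, reserve_tb, round(usable_tb, 1))  # 128.0 16.0 37.3
```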

Figure: Sample capacity planning (based on Storage Spaces Direct Calculator Preview)

(This part was inspired by the “Deploy Storage Spaces Direct” section of the Storage for Windows documentation and adapted to the FUJITSU
PRIMEFLEX for Microsoft Storage Spaces Direct 4-Node Direct Attached Configuration)


Create volumes
We recommend using the New-Volume cmdlet to create volumes for Storage Spaces Direct. It provides the fastest and most straightforward
experience. This single cmdlet automatically creates the virtual disk, partitions and formats it, creates the volume with matching name, and
adds it to cluster shared volumes – all in one easy step.
The New-Volume cmdlet has five parameters you'll always need to provide:
■ FriendlyName: Any string you want, for example "Volume01"
■ FileSystem: Either CSVFS_ReFS (recommended) or CSVFS_NTFS
■ StoragePoolFriendlyName: The name of your storage pool, for example "S2D on S2DCluster01"
■ Size: The size of the volume, for example "10TB"
■ ResiliencySettingName: for example, Mirror

Open an administrative PowerShell ISE window and execute the following lines. Based on our sample planning in the previous section
(Plan volumes), create the following volumes:

New-Volume -FriendlyName "Reserve" -FileSystem CSVFS_ReFS -StoragePoolFriendlyName S2D* -Size 5.4TB -ResiliencySettingName Mirror
New-Volume -FriendlyName "Volume01" -FileSystem CSVFS_ReFS -StoragePoolFriendlyName S2D* -Size 9TB -ResiliencySettingName Mirror
New-Volume -FriendlyName "Volume02" -FileSystem CSVFS_ReFS -StoragePoolFriendlyName S2D* -Size 9TB -ResiliencySettingName Mirror
New-Volume -FriendlyName "Volume03" -FileSystem CSVFS_ReFS -StoragePoolFriendlyName S2D* -Size 9TB -ResiliencySettingName Mirror
New-Volume -FriendlyName "Volume04" -FileSystem CSVFS_ReFS -StoragePoolFriendlyName S2D* -Size 9TB -ResiliencySettingName Mirror

The volume “Reserve” will have a footprint of 16.2 TB; this volume will be deleted later. “Volume01”, “Volume02”, “Volume03” and “Volume04”
will be used for your workload.

An important note on your volume sizes:

Windows, including PowerShell, counts using binary (base-2) numbers, whereas drives are often labeled using decimal (base-10) numbers.
This explains why a "one terabyte" drive, defined as 1,000,000,000,000 bytes, appears in Windows as about "909 GB". This is expected.
When creating volumes using New-Volume, you should specify the Size parameter in binary (base-2) numbers. For example, specifying
"909GB" or "0.909495TB" will create a volume of approximately 1,000,000,000,000 bytes.
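The unit conversion behind this note can be sketched in a few lines (Python shown for illustration; in PowerShell the 1GB/1TB literals already mean the binary values used below):

```python
# Decimal (drive-label) vs. binary (Windows/PowerShell) units.
decimal_tb = 10**12   # bytes in a "one terabyte" drive label (base-10)
binary_tb = 2**40     # bytes PowerShell means by its "1TB" literal (base-2)

# Size to pass to New-Volume (in binary TB) for a 10**12-byte volume:
size_in_binary_tb = decimal_tb / binary_tb
print(f"{size_in_binary_tb:.6f}")  # prints: 0.909495
```

So passing -Size 0.909495TB to New-Volume yields a volume of approximately 1,000,000,000,000 bytes.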

Ensure you use the correct values based on this information for “Reserve” and “Volume01”, “Volume02”, “Volume03” and “Volume04”. It is
important to keep a reserve capacity of at least one disk in each server to ensure a rebuild is possible. If you are unsure about your
calculation, add some buffer to the size of the “Reserve” volume.

In the last step, it is important to delete the “Reserve” volume. If you do not delete the reserve volume, S2D will never be able to run an
automatic repair action if a disk fails. You also have to ensure that the free capacity in the S2D pool after deleting the “Reserve” volume is
never consumed by new volumes or by extending existing volumes.
Open an administrative PowerShell ISE window and execute the following line to remove the “Reserve” volume:

Remove-VirtualDisk -FriendlyName "Reserve"

Only for deployments with NVMe, SSD and HDD drives:

In deployments with all three types of drives, one volume can span the SSD and HDD tiers to reside partially on each. Likewise, in deployments
with four or more servers, one volume can mix mirroring and dual parity to reside partially on each.
To help you create such volumes, Storage Spaces Direct provides default tier templates called Performance and Capacity. The two storage tiers
are always available when you have four or more nodes. If you use only one media type in your capacity layer (e.g. only HDDs or only SSDs),
you can use 100% three-way mirror, 100% dual parity, or X% three-way mirror and Y% dual parity (any values between 0 and 100 for X and Y).
In a configuration with both SSDs and HDDs in your capacity layer, Storage Spaces Direct automatically places the three-way mirror storage
tier on the SSDs and the dual parity storage tier on the HDDs. So when you use SSDs and HDDs in your capacity layer, the default ratio between
three-way mirror and dual parity is defined by the capacity of the SSDs (used for three-way mirror) and the HDDs (used for dual parity).
To create tiered volumes, reference these tier templates using the StorageTierFriendlyNames and StorageTierSizes parameters of the New-Volume
cmdlet. For example, the following cmdlet creates one volume, which mixes three-way mirroring and dual parity in 30:70 proportions.

New-Volume -FriendlyName "VolumeXY" -FileSystem CSVFS_ReFS -StoragePoolFriendlyName S2D* -StorageTierFriendlyNames Performance, Capacity -StorageTierSizes 300GB, 700GB


The next cmdlet creates one volume, which mixes three-way mirroring and dual parity in 70:30 proportions.

New-Volume -FriendlyName "VolumeXY" -FileSystem CSVFS_ReFS -StoragePoolFriendlyName S2D* -StorageTierFriendlyNames Performance, Capacity -StorageTierSizes 700GB, 300GB

The next cmdlet creates one volume, which mixes three-way mirroring and dual parity in 50:50 proportions.

New-Volume -FriendlyName "VolumeXY" -FileSystem CSVFS_ReFS -StoragePoolFriendlyName S2D* -StorageTierFriendlyNames Performance, Capacity -StorageTierSizes 500GB, 500GB

The next cmdlet creates one volume, which uses only three-way mirroring.

New-Volume -FriendlyName "VolumeXY" -FileSystem CSVFS_ReFS -StoragePoolFriendlyName S2D* -StorageTierFriendlyNames Performance -StorageTierSizes 800GB

The next cmdlet creates one volume, which uses only dual parity.

New-Volume -FriendlyName "VolumeXY" -FileSystem CSVFS_ReFS -StoragePoolFriendlyName S2D* -StorageTierFriendlyNames Capacity -StorageTierSizes 800GB

When you configure volumes with storage tiers, you have to ensure that there is enough space for the footprint in both the Performance tier and
the Capacity tier. Because of the different resiliency levels, it is more difficult to calculate the footprint in each storage tier. You can again use the
Storage Spaces Direct Calculator Preview for capacity calculation including storage tiers.
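As a rough sanity check before using the calculator, the per-tier footprint can be estimated as follows. This Python sketch (the helper tier_footprints_gb is hypothetical, for illustration only) assumes a 4-node cluster, where three-way mirror stores three copies and dual parity is about 50% efficient; larger clusters achieve better parity efficiency, so treat these factors as a 4-node approximation:

```python
def tier_footprints_gb(performance_gb, capacity_gb):
    """Estimate raw GB consumed per tier on a 4-node S2D cluster."""
    mirror_footprint = performance_gb * 3  # three-way mirror: 3 copies
    parity_footprint = capacity_gb * 2     # dual parity at 4 servers: ~50% efficient
    return mirror_footprint, parity_footprint

# The 30:70 sample volume above: 300 GB Performance, 700 GB Capacity
perf, cap = tier_footprints_gb(300, 700)
print(perf, cap)  # prints: 900 1400
```

So the 1 TB sample volume consumes roughly 900 GB of raw SSD capacity and 1,400 GB of raw HDD capacity; verify exact numbers with the Storage Spaces Direct Calculator Preview.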


In the following calculation sample, we have four servers, each with 2x 2 TB NVMe drives, 4x 2 TB SSDs and 8x 4 TB HDDs:

Figure: Sample capacity planning (based on Storage Spaces Direct Calculator Preview)

By default, the Enable-ClusterStorageSpacesDirect command configures the Performance (three-way mirror) tier on the SSD drives
(20% in our sample calculation) and the Capacity (dual parity) tier on the HDD drives (80% in our sample). The NVMe drives are used for
caching.

Figure: Sample capacity planning (based on Storage Spaces Direct Calculator Preview)

The usable capacity in this calculation sample is 65.5 TB.


Windows Admin Center can also help you in configuring volumes with storage tiers.

With storage tiers, we also recommend making the number of volumes a multiple of the number of servers in your cluster. If you have 4 servers,
you will experience more consistent performance with 4 or 8 total volumes than with 3 or 5.

(This part was inspired by the “Deploy Storage Spaces Direct” section of the Storage for Windows documentation and adapted to the FUJITSU
PRIMEFLEX for Microsoft Storage Spaces Direct 4-Node Direct Attached Configuration)

Install Windows Admin Center to manage S2D

This step is optional but strongly recommended, because several management actions for Storage Spaces Direct are much easier with
Windows Admin Center than with other tools. Refer to “FUJITSU PRIMEFLEX for Microsoft Storage Spaces Direct extend your solution with
Windows Admin Center” to read how to install Windows Admin Center, install the Fujitsu extensions for Windows Admin Center, and add your
hyper-converged S2D cluster.

Published by
Fujitsu Technology Solutions GmbH
Mies-van-der-Rohe-Strasse 8
D-80807 Munich
www.fujitsu.com/primeflex

© 2018 Fujitsu, the Fujitsu logo, and other Fujitsu trademarks are trademarks or registered trademarks of Fujitsu Limited in Japan and other
countries. PRIMEFLEX is a registered trademark in Europe and other countries. Other company, product and service names may be trademarks
or registered trademarks of their respective owners. Technical data subject to modification and delivery subject to availability. Any liability that
the data and illustrations are complete, actual or correct is excluded. Designations may be trademarks and/or copyrights of the respective
manufacturer, the use of which by third parties for their own purposes may infringe the rights of such owner.

2018-12-05 WW EN
