
Azure Stack HCI OS v2

Sustaining Q2 Release (Hatchet) - EKT
E810, Single Node, Capri 2.3
2-June-2022
EKT Overview

This session provides an overview of the Intel E810 adapters, Single Node cluster configuration, and some of the new features and enhancements of OMIMSWAC 2.3 from a solutions perspective.



EKT Agenda

• Introduction
• E810
• Single Node
• Capri 2.3



Q&A



E810
Mohaimin



Intel E810-XXV Dual Port 10/25GbE
• Intel E810-XXV Dual Port 10/25GbE SFP28 Adapter, PCIe Full Height
  • Part Number CD16M
  • Supported in AX-750 and AX-7525
• Intel E810-XXV Dual Port 10/25GbE SFP28 Adapter, PCIe Low Profile
  • Part Number 6J1N1
  • Supported in AX-650, AX-750, and AX-7525
• A-rev driver support for Azure Stack HCI OS 21H2 and Windows Server 2022 (SWB: G9KVP). No plans to support earlier OSes.
• Validated for iWARP support only.
• Currently supports scalable and stretched cluster deployments only; switchless is targeted for a later PI.
Intel E810-XXV Dual Port 10/25GbE
• Switch configuration is the same as for the QLogic FastLinQ 41262 NICs. Documents will be updated with references to the E810 closer to RTS.
• RDMA activity on the physical NICs is not visible in PerfMon when they are part of a SET team; RDMA activity can be observed on the corresponding virtual NICs instead (see the sketch below).

• BIOS settings (see screenshot)
  – Enable NIC + RDMA Mode in Device Settings in System Setup.
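A minimal verification sketch in PowerShell (not from the original deck); it assumes the E810 ports have NIC + RDMA Mode enabled and host vNICs have been created on the SET switch:

    # List RDMA-capable adapters (physical ports and host vNICs) and whether RDMA is enabled
    Get-NetAdapterRdma | Format-Table Name, Enabled

    # RDMA traffic appears under the "RDMA Activity" counter set for the vNICs, not the SET member physical NICs
    Get-Counter -Counter "\RDMA Activity(*)\RDMA Inbound Bytes/sec","\RDMA Activity(*)\RDMA Outbound Bytes/sec" -SampleInterval 2 -MaxSamples 5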



AX-750 – 24 drive All-SSD/All-NVMe PCIe Slot Matrix (Riser Config 2, HL or FL)
[Slot diagram: E810/CX5/CX6LX and E810/CX5/CX6 NIC slots; SW GPU capable slots (HL riser) or DW GPU capable slots (FL riser); BOSS; 2 x 1 GbE LOM; OCP options]
Count: up to 4x
Slots: 6, 3 LP (2x max); 5, 4 FH (2x max)

AX-750 – 24 drive Hybrid NVMe+HDD PCIe Slot Matrix (Riser Config 5)
[Slot diagram: E810 and E810/CX5/CX6 NIC slots; HBA355i; BOSS; 2 x 1 GbE LOM; OCP options]
Count: up to 2x
Slots: 6 (LP); 7 (FH)



AX-750 – Hybrid SSD + HDD PCIe Slot Matrix (Riser Config 1)
[Slot diagram: E810 and E810/CX5/CX6 NIC slots; HBA355i; BOSS; 2 x 1 GbE LOM; OCP options]
Count: up to 6x
Slots: 6, 3 (LP); 5, 4, 7, 1 (FH)

AX-650 – 10 drive PCIe Slot Matrix (Riser Config 0-2 – All SSD/SSD+HDD; Riser Config 4 – All NVMe)
[Slot diagram: E810/CX5/CX6 or SW GPU slots; E810/CX5/CX6 slot; BOSS; 2 x 1 GbE LOM; OCP options]
Count: up to 3x
Slots: 2, 1, 3 (LP)



AX-7525 – 24 drive All NVMe PCIe Slot Matrix (Riser Config 7)
[Slot diagram: E810/CX5/CX6 (required) and E810/CX5/CX6 NIC slots; BOSS; 2 x 1 GbE LOM; OCP options]
Count: up to 2x
Slots: 3, 6 (LP)



AX-7525 – 16 drive All NVMe PCIe Slot Matrix

Riser Config 3 HL
[Slot diagram: three FH NIC or SW GPU slots; LP NIC (required); BOSS; 2 x 1 GbE LOM; OCP options]
Count: up to 4x
Slots: 3 (LP, required); 2, 5, 7 (FH)

Riser Config 3 FL
[Slot diagram: three DW GPU capable slots; E810/CX5/CX6 (required); BOSS; 2 x 1 GbE LOM; OCP options]
Count: up to 1x
Slots: 3 (LP, required)



AX-7525 – 8 x NVMe + 16 x SSD PCIe Slot Matrix

Riser Config 3 HL
[Slot diagram: three E810/CX5/CX6 or SW GPU slots; two E810/CX5/CX6 slots; BOSS; 2 x 1 GbE LOM; OCP options]
Count: up to 4x
Slots: 3, 6 (LP, 1x required); 2, 5, 7 (FH)

Riser Config 3 FL
[Slot diagram: three DW GPU capable slots; E810/CX5/CX6 (required) and E810/CX5/CX6 slots; BOSS; 2 x 1 GbE LOM; OCP options]
Count: up to 2x
Slots: 3 (required), 6 (LP)



Q&A



Single Node
Arvind



What is Single Node Cluster?
• Cluster with a single node
• No failover to another node is possible
• The physical disk is the fault domain, as opposed to StorageScaleUnit in a multi-node cluster
• Only single-tier configurations are supported at this point for Single Node HCI
  • All-Flash (SSD or NVMe)
• Only supported on 15G platforms
• Supported on 21H2
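For reference, a minimal creation sketch in PowerShell; the cluster and node names are placeholders, and the deck does not prescribe these exact commands:

    # Create a one-node cluster, then enable Storage Spaces Direct on it
    New-Cluster -Name "SNCluster" -Node "SN-Node1" -NoStorage
    Enable-ClusterStorageSpacesDirect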



How HA?

• Node-level high availability cannot be supported

• Application-level HA (such as SQL Always On or multiple web front ends) with VMs running on different Single Node clusters can be used
  • Hyper-V Replica across different single node clusters (see the sketch below)
  • Storage Replica across different single node clusters
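A minimal Hyper-V Replica sketch in PowerShell, assuming a VM named "VM1" on one single node cluster replicating over Kerberos to a second single node cluster reachable as "ReplicaHost01" (for a clustered target this would be the Hyper-V Replica Broker name); all names and the port are placeholders, not values from the deck:

    # On the replica side (run once): allow incoming replication over Kerberos
    # (authorization entries and firewall rules are configured separately)
    Set-VMReplicationServer -ReplicationEnabled $true -AllowedAuthenticationType Kerberos

    # On the primary single node cluster: enable replication for the VM and start the initial copy
    Enable-VMReplication -VMName "VM1" -ReplicaServerName "ReplicaHost01" -ReplicaServerPort 80 -AuthenticationType Kerberos
    Start-VMInitialReplication -VMName "VM1"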



Volumes

• A Single Node cluster can sustain multiple drive failures, depending on the type of volume created
  • A two-way mirror volume can sustain a single drive failure at a time
  • A three-way mirror volume can sustain two drive failures at a time
• The Cluster Performance History volume is always a two-way mirror volume, and as it stands there is no way to create it as a three-way mirror



Volumes (contd.)

• Commands used to create three-way and nested two-way mirror volumes (see the sketch below).
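The original slide shows these commands as a screenshot. The PowerShell sketch below follows the commonly documented Storage Spaces Direct pattern; the pool wildcard, tier and volume names, media type, and sizes are placeholders and may differ from the exact commands in the deck:

    # Three-way mirror volume (keeps three data copies, so it can sustain two drive failures)
    New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "ThreeWayMirror01" -FileSystem CSVFS_ReFS -ResiliencySettingName Mirror -PhysicalDiskRedundancy 2 -Size 1TB

    # Nested two-way mirror: define a mirror tier with four data copies, then create the volume from it
    New-StorageTier -StoragePoolFriendlyName "S2D*" -FriendlyName "NestedTwoWayMirror" -ResiliencySettingName Mirror -MediaType SSD -NumberOfDataCopies 4
    New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "NestedMirror01" -StorageTierFriendlyNames "NestedTwoWayMirror" -StorageTierSizes 500GB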



Network Topology/Why RDMA NICs
• RDMA NICs will continue to be part of the order codes for Single Node
• This will help if
  • Customers want to expand a single node later (when MSFT supports this)
  • Adapters with higher bandwidth (25/100 GbE) can be used for
    • Compute/VM traffic
    • Replica traffic (Hyper-V Replica/Storage Replica)
    • Application-level replication
    • Shared Nothing Live Migration, etc.

• RDMA switches are optional if the customer wants to use existing switches at the customer site, as Single Node does not use RDMA.



Network Topology (contd.)
• Here are a couple of network topologies/configurations one could use to take advantage of the RDMA adapters in the system.

Topology 1

Topology 2
Network Topology (contd.)
• The following commands can be used to set the bandwidth weight for egress traffic (see the sketch below)
• The network adapter can be either an RDMA or a non-RDMA adapter
• One can also segregate the traffic between the rNDC (non-RDMA) and the RDMA adapters
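A minimal sketch in PowerShell, assuming the virtual switch is created in Weight minimum-bandwidth mode; the switch, adapter, VM, and vNIC names and the weight values are placeholders, not values from the deck:

    # Create a SET switch over the two NIC ports with weight-based minimum bandwidth
    New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "NIC1","NIC2" -EnableEmbeddedTeaming $true -MinimumBandwidthMode Weight -AllowManagementOS $true

    # Add a host (management OS) vNIC for replica traffic
    Add-VMNetworkAdapter -ManagementOS -Name "Replica" -SwitchName "ConvergedSwitch"

    # Assign relative egress bandwidth weights to a VM network adapter and to the host vNIC
    Set-VMNetworkAdapter -VMName "SQLVM01" -MinimumBandwidthWeight 40
    Set-VMNetworkAdapter -ManagementOS -Name "Replica" -MinimumBandwidthWeight 20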



Updates (Windows/Hardware)

• In a Single Node cluster, both Windows and hardware updates are supported only from the Server Manager view in Windows Admin Center
• Windows updates and hardware updates have to be run independently
• VMs/cluster resources will go down without warning when you trigger an update on a Single Node



Updates (Windows/Hardware)



Q&A



OMIWAC 2.3
Rajeev Kumar



Contents

• Expand storage
• Overview
• Steps to expand storage

• Onboarding HCP policies to Azure


• Overview
• Steps to onboard HCP policies



Expand Storage - Overview
• The Dell EMC HCI Configuration Profile (HCP) is the specification (collection) from Dell EMC that captures the best-practice configuration recommendations for Azure Stack HCI and Windows-based HCI solutions from Dell EMC. This feature gives MSFT HCI Solutions customers the ability to expand their cluster storage without adding nodes: it intelligently detects the node/cluster configuration and guides the expansion, removing the complexity by leveraging the HCP.
• Prerequisites:
  • All nodes have an "OMIWAC Premium" license
  • All nodes are either AX or RN
  • The model of every node is supported as per the support matrix
  • The OS of every node is supported as per the support matrix



Expand Storage - steps



Expand Storage - steps (contd.)



Available Recommendations



Available Recommendations (contd.)



Resiliency Type     Minimum Number of Nodes     Total Capacity                                   Storage Efficiency
Two-way mirror      2                           Usable Capacity x 2                              50%
Three-way mirror    3                           Usable Capacity x 3                              33.3%
Dual Parity         4                           Usable Capacity x 1.25 to Usable Capacity x 2    50% - 80%
Mixed               4                           Usable Capacity x 1.25 to Usable Capacity x 3    33.3% - 80%
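For example, under these efficiencies 10 TB of usable capacity consumes roughly 20 TB of raw pool capacity as a two-way mirror (50%), 30 TB as a three-way mirror (33.3%), and between 12.5 TB and 20 TB as dual parity (50% - 80%).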

Dual parity for hybrid (SSD/HDD): https://docs.microsoft.com/en-us/azure-stack/hci/concepts/fault-tolerance#dual-parity-efficiency-for-hybrid-deployments

Dual parity for all-flash (all SSD): https://docs.microsoft.com/en-us/azure-stack/hci/concepts/fault-tolerance#dual-parity-efficiency-for-all-flash-deployments



Onboarding HCP Policies into Azure - Overview
• Azure Arc is one of the primary management tools for resource management across cloud and hybrid platforms, so it is essential that the Dell Azure Policies help administrators maintain compliance with the HCP throughout the lifecycle of the HCI cluster

• This feature is supported for clusters running on Dell EMC Integrated System for Microsoft Azure Stack HCI 21H2. It is a licensed feature that requires the "OMIWAC Premium" node-based license

• OMIWAC helps the administrator onboard the Dell Azure Policies at the Azure Arc level so that the administrator can leverage those policies and check compliance against the cluster

• The following are some of the prerequisites (a registration sketch follows this list):
  • The user needs to have an Azure subscription
  • The Azure Stack HCI cluster and the WAC gateway must be registered in Azure
  • Azure Stack HCI cluster nodes must be a supported model, and all nodes must be the same model
  • Azure Stack HCI cluster nodes must run a supported OS, and all nodes must run the same OS
  • An "OMIWAC Premium" license is required for each node
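A minimal sketch of registering the cluster to Azure with the Az.StackHCI PowerShell module (the WAC gateway itself is registered to Azure from the WAC gateway Settings UI); the subscription ID, region, and node name are placeholders, not values from the deck:

    # Register the Azure Stack HCI cluster with Azure (run on a cluster node, or point -ComputerName at one)
    Install-Module -Name Az.StackHCI
    Register-AzStackHCI -SubscriptionId "<subscription-id>" -Region "eastus" -ComputerName "Node1"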



Onboarding policies into Azure – Landing Page
1) Sign in to Azure
   • Capri uses the MS API to sign in to Azure

2) Onboarding checklist
   • The user should have permission to create and manage the policy assignments, policy definitions, policy exemptions, and policy sets in the resource group of the cluster nodes in Azure Arc
   • The cluster is registered and connected to Azure Arc

3) Onboard policies
   • Configure the cluster settings
   • Onboard the policies



Registering WAC to Azure



Onboarding HCP Policies – Onboard Checklist Failure



Onboarded HCP policies (WAC)



Onboarded HCP policies (Azure Portal)



Onboarded HCP policies (Azure Portal)



Q&A

