
Microsoft Exchange Server
Nutanix Best Practices

Version 2.2 • December 2020 • BP-2036



Copyright
Copyright 2020 Nutanix, Inc.
Nutanix, Inc.
1740 Technology Drive, Suite 150
San Jose, CA 95110
All rights reserved. This product is protected by U.S. and international copyright and intellectual
property laws.
Nutanix is a trademark of Nutanix, Inc. in the United States and/or other jurisdictions. All other
marks and names mentioned herein may be trademarks of their respective companies.


Contents

1. Executive Summary
2. Introduction
  2.1. Audience
  2.2. Purpose
3. Nutanix Enterprise Cloud Overview
  3.1. Nutanix HCI Architecture
4. Physical Hardware Design
  4.1. NUMA Architecture and Best Practice
  4.2. Understanding RAM Configuration
  4.3. BIOS or UEFI Firmware
  4.4. Power Policy Settings
  4.5. Nutanix CVM Considerations
5. Virtual Hardware Design
  5.1. CPU Definitions
  5.2. Hyperthreading Technology
  5.3. CPU Hot-Add
  5.4. CPU Ready Time
  5.5. Memory Configuration
  5.6. Network Configuration
6. Designing a Microsoft Exchange Solution
  6.1. Exchange Server Overview
  6.2. Exchange Server Sizing
  6.3. Exchange Server Storage Configuration Best Practices
  6.4. Scalability
7. Recommended Deployment Options
  7.1. Single Site: Two DAG Copies in One Failure Domain (Least Recommended)
  7.2. Single Site: Two DAG Copies in Two Failure Domains
  7.3. Multisite: Two or More DAG Copies in Two or More Failure Domains (Most Recommended)
8. Backup and Disaster Recovery Options
  8.1. Native Exchange Data Protection
  8.2. Third-Party Vendor Support
9. MS Exchange on Nutanix Best Practices Checklist
  9.1. General
  9.2. Drive Configuration
  9.3. VMware
  9.4. RAM
  9.5. vCPUs
  9.6. Networking
  9.7. Power Policy
  9.8. High Availability
  9.9. Monitoring
  9.10. Manageability
  9.11. Nutanix Recommendations
  9.12. Backup and Disaster Recovery
10. Conclusion
Appendix
  References
  About the Authors
  About Nutanix
List of Figures
List of Tables

1. Executive Summary
This document makes recommendations for designing, optimizing, and scaling Microsoft
Exchange Server deployments on the Nutanix enterprise cloud. Historically, it has been a
challenge to virtualize Exchange Server because of the high cost of traditional virtualization
stacks and the impact that a SAN-based architecture can have on performance. Businesses and
their IT departments have constantly fought to balance cost, operational simplicity, and consistent
predictable performance.
Nutanix removes many of these challenges and makes virtualizing a business-critical application
such as Exchange Server much easier. The Nutanix distributed storage fabric is a software-
defined solution that provides all the features one typically expects in an enterprise SAN, without
a SAN’s physical limitations and bottlenecks. Exchange Server particularly benefits from the
following features:
• Localized I/O and the use of flash for index and database files to lower operation latency.
• A highly distributed approach that can handle both random and sequential workloads.
• The ability to add new nodes and scale the infrastructure without system downtime or
performance impact.
• Nutanix data protection and disaster recovery workflows that simplify backup operations and
business continuity processes.
Nutanix lets you run both Microsoft Exchange Server and other VM workloads simultaneously on
the same platform. Density for MS Exchange Server deployments is driven by CPU and storage
requirements. To take full advantage of the system’s performance and capabilities, validated
testing shows that it is better to scale out and increase the number of MS Exchange multirole
server VMs on the Nutanix platform than to scale up individual VMs. The Nutanix platform
handles Exchange Server’s demanding storage and compute requirements with localized I/O,
server-attached flash, and distributed data protection capabilities.


2. Introduction

2.1. Audience
This best practice document is part of the Nutanix Solutions Library. We wrote it for those
architecting, designing, managing, and supporting Nutanix infrastructures. Readers should
already be familiar with a hypervisor (VMware vSphere, Microsoft Hyper-V, or the native Nutanix
hypervisor, AHV), Microsoft Exchange Server, and Nutanix.
The document addresses key items for each role, enabling a successful design, implementation,
and transition to operation. Most of the recommendations apply equally to all currently supported
versions of Microsoft Exchange Server. We call out differences between versions as needed.

2.2. Purpose
This document covers the following subject areas:
• Overview of the Nutanix solution.
• The benefits of running Microsoft Exchange Server on Nutanix.
• Overview of high-level MS Exchange Server best practices for Nutanix.
• Design, sizing, and configuration considerations when architecting an Exchange Server
solution on Nutanix.
• Virtualization optimizations for VMware ESXi, Microsoft Hyper-V, and Nutanix AHV.

Table 1: Document Version History

Version Number | Published | Notes
1.0 | February 2016 | Original publication.
1.1 | July 2016 | Updated platform overview.
2.0 | December 2019 | Major updates throughout. Updated for Exchange 2016 and 2019 and extended to cover all hypervisors.
2.1 | June 2020 | Updated platform overview, terminology, and jumbo frames guidance.
2.2 | December 2020 | Updated the CPU Hot-Add section.

3. Nutanix Enterprise Cloud Overview


Nutanix delivers a web-scale, hyperconverged infrastructure solution purpose-built for
virtualization, containerized workloads, and private cloud environments. This solution brings the
scale, resilience, and economic benefits of web-scale architecture to the enterprise through the
Nutanix enterprise cloud platform, which combines the core HCI product families—Nutanix AOS
and Nutanix Prism management—with other software products that automate, secure, and
back up cost-optimized infrastructure.
Available attributes of the Nutanix enterprise cloud OS stack include:
• Optimized for storage and compute resources.
• Machine learning to plan for and adapt to changing conditions automatically.
• Intrinsic security features and functions for data protection and cyberthreat defense.
• Self-healing to tolerate and adjust to component failures.
• API-based automation and rich analytics.
• Simplified one-click upgrades and software life cycle management.
• Native file services for user and application data.
• Native backup and disaster recovery solutions.
• Powerful and feature-rich virtualization.
• Flexible virtual networking for visualization, automation, and security.
• Cloud automation and life cycle management.
The Nutanix solution can be broken down into three main components: an HCI-based
distributed storage fabric, management and operational intelligence from Prism,
and AHV virtualization. Nutanix Prism furnishes one-click infrastructure management for
virtual environments running on AOS. AOS is hypervisor agnostic, supporting two third-party
hypervisors—VMware ESXi and Microsoft Hyper-V—in addition to the native Nutanix hypervisor,
AHV.


Figure 1: Nutanix Enterprise Cloud OS Stack

3.1. Nutanix HCI Architecture


Nutanix does not rely on traditional SAN or network-attached storage (NAS) or expensive storage
network interconnects. It combines highly dense storage and server compute (CPU and RAM)
into a single platform building block. Each building block delivers a unified, scale-out, shared-
nothing architecture with no single points of failure.
The Nutanix solution requires no SAN constructs, such as LUNs, RAID groups, or expensive
storage switches. All storage management is VM-centric, and I/O is optimized at the VM virtual
disk level. The software solution runs on nodes from a variety of manufacturers that are either
entirely solid-state storage with NVMe for optimal performance or a hybrid combination of SSD
and HDD storage that provides a combination of performance and additional capacity. The
storage fabric automatically tiers data across the cluster to different classes of storage devices
using intelligent data placement algorithms. For best performance, algorithms make sure the
most frequently used data is available in memory or in flash on the node local to the VM.
To learn more about the Nutanix enterprise cloud, visit the Nutanix Bible and Nutanix.com.


4. Physical Hardware Design


For organizations to successfully deploy Microsoft Exchange Server solutions, it is important to
start with a solid foundation for the database and applications. We cover several considerations
related to the platform’s physical hardware in this section.
When you choose Nutanix, one of the key benefits is that the physical hardware (a “node” in
Nutanix terminology) either arrives preconfigured from the factory (NX and OEM appliances) or
can be automatically configured using the Nutanix Foundation process. When using third-party
hardware from the Nutanix compatibility list, you may need to validate these physical hardware
configurations and settings.

4.1. NUMA Architecture and Best Practice


Nonuniform memory access (NUMA) is a computer memory design where the memory access
time depends on the memory location relative to the processor. With NUMA, a processor can
access its own local memory faster than nonlocal memory (memory local to another processor
or memory shared between processors). Because Exchange Server uses a lot of memory and
is not aware of the NUMA topology of the underlying host, it is important to ensure that CPU and
physical memory are properly installed and configured.
The following diagram shows a logical view of NUMA topology on a standard Intel-based dual-
socket system.

Figure 2: NUMA Architecture


A NUMA topology consists of multiple nodes, each a localized structure within the physical host.
Each NUMA node has its own memory controllers and bus between the CPU and its local memory.
When a process on one NUMA node accesses nonlocal (remote) memory, access times can be many
times slower than for local memory. Because Exchange Server is not NUMA-aware, size and place
its VMs so that most of their work occurs on a single local node; preventing remote memory
access is critical.

NUMA in a Virtualized Environment


Virtualization allows organizations to create isolated units of compute (commonly referred to as
VMs) that can vary in size and dimensions and do not have to align with the underlying physical
server.
At the hypervisor layer that provides this compute environment for VMs, try to map the NUMA
topology of the physical host to the guest OS. Because the three hypervisors supported with
Nutanix (VMware ESXi, Microsoft Hyper-V, and Nutanix AHV) are all NUMA compatible, they can
present a virtual NUMA (vNUMA) topology to the VM for NUMA-aware applications such as SQL
Server to consume. Because MS Exchange is not aware of the vNUMA topology, each Exchange VM
should fit within a single NUMA node, which places restrictions on the maximum number of
multirole servers that each node can host.

4.2. Understanding RAM Configuration


The physical memory of a server is critical in any virtualized environment, but particularly so
for Exchange Server, which requires a careful balance between memory capacity and memory
speed. The number of memory channels available to a system depends on the generation
of its CPUs. As an example, Intel Xeon processors with the Skylake microarchitecture have
six memory channels per CPU, each with up to two DIMMs per channel (DPC), supporting a
maximum of 12 DIMMs per CPU or 24 DIMMs per server (for a two-socket server). A previous
generation (in Nutanix nodes, the Intel E5 Broadwell processor family) has four memory channels
per CPU, each with three DPC, supporting a maximum of 12 DIMMs per CPU.


Figure 3: Intel Memory Architecture

To reduce stress on the memory controllers and CPUs, ensure that memory is in a balanced
configuration—where DIMMs across all memory channels and CPUs are populated identically.
Balanced memory configurations enable optimal interleaving across all memory channels,
maximizing memory bandwidth. To further optimize bandwidth, configure both memory
controllers on the same physical processor socket to match. For the best system-level memory
performance, ensure that each physical processor socket has the same physical memory
capacity.
If DIMMs with different memory capacities populate the memory channels attached to a memory
controller, or if different numbers of DIMMs with identical capacity populate the memory channels,
the memory controller must create multiple interleaved sets. Managing multiple interleaved sets
creates overhead for the memory controller, which in turn reduces memory bandwidth.
For example, the following image shows the relative bandwidth difference between Skylake and
Broadwell systems, each configured with six DIMMs per CPU.


Figure 4: Example DIMM Population Between Skylake and Broadwell

Populating DIMMs across all channels delivers an optimized memory configuration. When
following this approach with Skylake CPUs, populate memory slots in sets of three per memory
controller. If you are populating a server that has two CPUs with 32 GB DIMMs, the six DIMMs
across all channels per CPU deliver a total host capacity of 384 GB, providing the highest level
of performance for that server. For a Broadwell system, a more optimized configuration has
either four or eight DIMMs (as opposed to the six in Skylake). If you are populating a two-socket
system with the same 32 GB DIMMs, eight DIMMs across all channels per CPU deliver a total
host capacity of 512 GB, which is a higher performance configuration.

Note: Each generation of CPUs may change the way it uses memory, so it’s
important to always check the documentation from the hardware vendor to validate
the configuration of the components.


4.3. BIOS or UEFI Firmware


To ensure that all components operate optimally and in a supported manner, Nutanix
recommends always keeping system firmware up to date. Nutanix life cycle management (LCM)
can assist administrators in keeping firmware up to date across supported Nutanix and OEM
platforms.
The Nutanix node installation process applies appropriate BIOS or UEFI settings. Third-party
hardware platforms may need some additional configuration checks. As a best practice for
Nutanix and MS Exchange Server, ensure that NUMA and hyperthreading are enabled, along
with CPU virtualization features such as Intel VT-x. Validate that all CPU C-states are disabled, as
these power management policies can impact CPU efficiency and memory latency, resulting in
poor application and VM performance.

4.4. Power Policy Settings


In addition to disabling the C-states at the BIOS or UEFI level, Nutanix recommends setting the
host power management policy to High Performance for critical applications. This setting only
applies to VMware ESXi and Microsoft Hyper-V; Nutanix AHV enables the high-performance
mode automatically.
In ESXi, select the host and navigate to the Configure tab. Select Power Management and click
High Performance.
In Hyper-V, open the Windows Control Panel and navigate to Power Options, then select High
Performance. You can apply this setting globally using Group Policy.
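As a hedged command-line alternative to the steps above (verify the exact syntax against your ESXi and Windows Server versions), the same policies can be applied as follows:

# ESXi: set the host power policy to High Performance.
esxcli system settings advanced set -o /Power/CpuPolicy -s "High Performance"
# Hyper-V host: activate the built-in High Performance power plan by its well-known GUID.
powercfg /setactive 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c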

4.5. Nutanix CVM Considerations


Every host in a Nutanix cluster has a CVM that consumes some of the host’s CPU and memory
to provide all the Nutanix services. The CVM can’t live-migrate to other hosts, as the physical
drives pass through to the CVM using the host hypervisor’s PCI passthrough capability.


Figure 5: Nutanix CVMs in a Cluster

The size of the CVM varies depending on the deployment; however, the typical size is 8 vCPU
and 32 GB of memory. During initial cluster deployment, Nutanix Foundation automatically
configures the vCPU and memory assigned to the CVM. Consult your Nutanix partner
and Nutanix Support if you need to make any changes to these values to ensure that the
configuration is supported.
vCPUs are assigned to the CVM and not necessarily consumed. If the CVM has a low-end
workload, it uses the CPU cycles it needs. If the workload becomes more demanding, the CVM
uses more CPU cycles. Think of the number of CPUs you assign as the maximum number of
CPUs the CVM can use, not the number of CPUs immediately consumed.
When you size any Nutanix cluster, ensure that there are enough resources in the cluster
(especially failover capacity) to handle situations where a node fails or goes down for
maintenance. Sufficient resources are even more important when you’re sizing a solution to host
a critical workload like Microsoft SQL Server or Microsoft Exchange Server.
From Nutanix AOS 5.11 onward, AHV clusters can include a new type of node that does not
run a local CVM. These compute-only (CO) nodes are for specialized use cases (typically, for
large-scale Oracle deployments) and have specific initial cluster requirements with strict scaling
constraints, so engage with your Nutanix partner or Nutanix account manager to see if CO is
advantageous for your environment. Because standard HCI nodes (running CVMs) provide
storage from the distributed fabric, CO nodes cannot benefit from data locality.

Note: CO nodes are not supported for running MS Exchange Server. There are no
plans to support MS Exchange Server on CO nodes in the future.


Figure 6: Nutanix AHV Cluster with HCI and Compute Nodes

Note: CO nodes are only available when running Nutanix AHV as the hypervisor.


5. Virtual Hardware Design

5.1. CPU Definitions


In a virtualized environment, many different components make up a CPU for an application
such as Exchange Server to consume. The following diagram shows the physical and logical
constructs of a CPU.

Figure 7: CPU Terminology

• CPU package
The CPU package is the physical device that contains all the components and cores and
sits on the motherboard. Many different types of CPU packages are available, ranging from
low-core, high-frequency models to high-core, low-frequency models. The right balance
between core and frequency depends on the requirements of the Exchange Server workload,
considering factors such as the number of users, mailbox limit, message size and quantity,
and so on. There are typically two or more CPU packages in a physical host.
• Core


A CPU package typically has two or more cores. A core is a physical subsection of a
processing chip and contains one interface for memory and peripherals.
• Logical CPU (LCPU)
The LCPU construct presented to the host operating system is a duplication of the core
that allows it to run multiple threads concurrently using hyperthreading (covered in the next
section). Logical CPUs do not have characteristics or performance identical to the physical
core, so do not consider an LCPU as a core in its own right. For example, if a CPU package
has 10 cores, enabling hyperthreading presents 20 logical CPUs to the host.
• Arithmetic logic unit (ALU)
The ALU is the heart of the CPU, responsible for performing mathematical operations on
binary numbers. Typically, the multiple logical CPUs on the core share one ALU.
• Hypervisor
In a virtualized environment, the hypervisor is a piece of software that sits between the
hardware and one or more guest operating systems. Each of these guest operating systems
can run its own programs, as the hypervisor presents to it the host hardware’s processor,
memory, and resources.
• vSocket
Each VM running on the hypervisor has a vSocket construct. vCPUs virtually “plug in” to
a vSocket, which helps present the vNUMA topology to the guest OS. There can be many
vSockets per guest VM, or just one.
• vCPU
Guest VMs on the hypervisor use vCPUs as the processing unit and map them to available
LCPUs through the hypervisor’s CPU scheduler. When determining how many vCPUs to
assign to an Exchange Server, always size assuming 1 vCPU = 1 physical core. For example,
if the physical host contains a single 10-core CPU package, do not assign more than 10
vCPUs to the Exchange Server VM. When considering how much to oversubscribe vCPUs in the
virtual environment, use the ratio of vCPUs to physical cores, not logical CPUs.

5.2. Hyperthreading Technology


Hyperthreading is Intel’s proprietary simultaneous multithreading (SMT) implementation used
to improve computing parallelization. Hyperthreading’s main function is to deliver two distinct
pipelines into a single processor execution core. From the perspective of the hypervisor running
on the physical host, enabling hyperthreading makes twice as many processors available.


In theory, because we ideally want to use all available processor cycles, we should use
hyperthreading all the time. However, in practice, using this technology requires some additional
consideration. There are two key issues when assuming that hyperthreading simply doubles the
amount of CPU on a system:
1. How the hypervisor schedules threads to the physical CPU.
2. The length of time a unit of work stays on a processor without interruption.
Because a processor is still a single entity, appropriately scheduling processes from guest VMs
is critically important. For example, the system should not schedule two threads from the same
process on the same physical core. In such a case, each thread takes turns stopping the other
to change the core’s architectural state, with a negative impact on performance. There is also
a hypervisor CPU scheduling function to keep in mind, which is abstracted from the guest OS
altogether.
Because of this complexity, be sure to size for physical cores and not hyperthreaded cores when
sizing MS Exchange Server in a virtual environment. For mission-critical deployments, start with
no vCPU oversubscription.

5.3. CPU Hot-Add


Nutanix AHV, VMware ESXi, and Microsoft Hyper-V support the ability to dynamically add CPU
resources to running VMs. Although the hot-add function may seem attractive, we strongly
recommend disabling it. When CPU hot-add is enabled, vNUMA for the guest VM is disabled,
which has serious performance implications for both CPU scheduling and accessing physical
memory. These performance implications apply even if you have configured the VM to be within
the NUMA node boundary.
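The following PowerCLI sketch shows one way to verify and disable CPU hot-add on a vSphere VM; the VM name EXCH01 is hypothetical, and the VM must be powered off for the change to take effect:

$vm = Get-VM -Name "EXCH01"
# Check the current hot-add setting (vcpu.hotadd = true means it is enabled).
Get-AdvancedSetting -Entity $vm -Name "vcpu.hotadd"
# Disable CPU hot-add so the guest keeps its vNUMA topology.
New-AdvancedSetting -Entity $vm -Name "vcpu.hotadd" -Value "false" -Confirm:$false -Force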

5.4. CPU Ready Time


One of the key statistics that administrators use to determine overall health in a virtualized
environment is CPU ready time. This metric tracks the amount of time a VM is ready to process
a thread but is instead waiting for the hypervisor scheduler to grant it access to a physical processor.
When the guest VM (and thus Exchange Server) is in this waiting state, no operations occur, and
the mail database’s overall processing throughput drops.
Host vCPU oversubscription is the main reason administrators might see a high CPU ready time
—if there is a high ratio of vCPUs to available physical cores and the guest VMs need to use
CPU, then the hypervisor scheduler is under pressure to allot time to the guest’s vCPU. The
only effective way to avoid this contention is to right-size Exchange Server VMs to use what they
need, not some larger amount.


As a rule, if CPU ready time is over 5 percent for a given VM, there is cause for concern. Always
ensure that the size of the physical CPU and the guest VM is in line with Exchange Server
requirements to avoid this problem.
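As an illustrative PowerCLI sketch (EXCH01 is a hypothetical VM name; the calculation assumes the default 20-second real-time sampling interval), you can approximate a VM's CPU ready percentage as follows:

$vm = Get-VM -Name "EXCH01"
# cpu.ready.summation reports milliseconds of ready time per sampling interval.
$samples = Get-Stat -Entity $vm -Stat "cpu.ready.summation" -Realtime -MaxSamples 180
$avgReadyMs = ($samples | Measure-Object -Property Value -Average).Average
# Convert milliseconds per 20-second interval into a percentage.
"CPU ready: {0:N2}%" -f (($avgReadyMs / 20000) * 100)

For multi-vCPU VMs, divide the result by the vCPU count for a per-vCPU figure before comparing it to the 5 percent threshold.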

5.5. Memory Configuration


Virtual Memory Overview
In addition to providing CPU time for guest VMs, the hypervisor also virtualizes VM memory to
isolate workloads from each other and to provide a contiguous, zero-based memory space for
each guest operating system. Each supported hypervisor uses different techniques to manage
memory, but the overall concept is the same.
The following diagram demonstrates a conceptual overview of memory management in a virtual
environment.

Figure 8: Virtual Memory Overview

One of the key differences between Nutanix AHV and VMware ESXi or Microsoft Hyper-V is that
AHV does not support swapping VM memory to disk. With ESXi, it is possible to assign more


memory to all VMs on a host than there is physical host memory. This overcommitment can be
dangerous, as physical RAM is much faster than even the fastest storage available on Nutanix.
Similarly, with Hyper-V, technologies such as Dynamic Memory and Smart Paging allow the
system to share memory allocation between all guest VMs with physical disks as an overflow.

Memory Reservations
Because memory is one of the most important resources for MS Exchange Server, do not
oversubscribe memory for this application (or for any business-critical applications). As
mentioned earlier, Nutanix AHV does not support memory oversubscription, so all memory is
effectively reserved for all VMs. When using VMware ESXi or Microsoft Hyper-V, ensure that the
host memory is always greater than the total memory assigned to all VMs. If there is memory contention on the
hypervisor, guest VMs could start to swap memory space to disk, which has a significant negative
impact on Exchange Server performance.
If you are deploying Exchange Server in an environment where oversubscription may occur,
Nutanix recommends reserving 100 percent of all Exchange Server VM memory to ensure
a consistent level of performance. This reservation prevents the Exchange Server VMs from
swapping virtual memory to disk; however, other VMs that don’t have this reservation may
swap. As virtualization allows you to right-size your MS Exchange deployment, memory
overcommitment offers no potential benefit. Because oversizing causes decreased performance,
be sure to avoid it.
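On ESXi, one way to apply a full memory reservation is with PowerCLI, as in this sketch (EXCH01 is a hypothetical VM name):

$vm = Get-VM -Name "EXCH01"
# Reserve all configured memory so the hypervisor never swaps this VM's memory to disk.
$vm | Get-VMResourceConfiguration | Set-VMResourceConfiguration -MemReservationGB $vm.MemoryGB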

5.6. Network Configuration


Networking Overview
Along with compute resources such as CPU and memory, the network is also a critical piece of
any successful Exchange Server deployment. Without a robust, fully redundant network, access
to databases and supporting applications can be slow, intermittent, or even unavailable.


Figure 9: Virtual Networking Overview

Although each hypervisor implements virtual networking constructs slightly differently, the basic
concepts remain the same.

Networking Best Practices


Virtual network design is not specific to Microsoft Exchange Server. The following are generic
best practices for designing virtual networks for business-critical application VMs:
• Always use redundant physical network connections to mitigate any failure of either a physical
switch or physical NIC interface on the host.
• Always size network bandwidth to ensure that the network can maintain the same level of
bandwidth during a component failure. For example, if the physical servers have two 10 Gbps
interfaces, do not configure teaming policies to provide 20 Gbps of bandwidth to the VMs. If the


VMs expect 20 Gbps of bandwidth and one of the links fails, performance then drops by 50
percent.
• Consider using hypervisor-based network I/O control to ensure that transient high-throughput
operations such as vMotion or live migration do not negatively impact critical traffic.
• For VMware-based environments, consider using the VMware Distributed Switch. This
construct provides many benefits in a VMware environment, including the ability to use
technologies such as LACP and maintain TCP state during vMotion.
For more detailed information about networking in a Nutanix environment for supported
hypervisors, please refer to the following resources:
• Nutanix AHV Networking best practice guide
• vSphere Networking with Nutanix best practice guide

Jumbo Frames
Note: Exchange Server does not require jumbo frames.

The Nutanix CVM uses the standard Ethernet MTU (maximum transmission unit) of 1,500
bytes for all the network interfaces by default. The standard 1,500 byte MTU delivers excellent
performance and stability. Nutanix does not support configuring the MTU on a CVM's network
interfaces to higher values.
You can enable jumbo frames (MTU of 9,000 bytes) on the physical network interfaces of AHV,
ESXi, or Hyper-V hosts and user VMs if the applications on your user VMs require them. If you
choose to use jumbo frames on hypervisor hosts, be sure to enable them end to end in the
desired network and consider both the physical and virtual network infrastructure impacted by the
change.
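If your user VMs genuinely require jumbo frames, the ESXi host-side change can look like the following PowerCLI sketch; esxi01 and vSwitch0 are hypothetical names, and the upstream physical switches must also be configured for an MTU of 9,000 or higher:

# Raise the MTU on a standard vSwitch to 9,000 bytes.
Get-VMHost -Name "esxi01" | Get-VirtualSwitch -Name "vSwitch0" | Set-VirtualSwitch -Mtu 9000 -Confirm:$false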


6. Designing a Microsoft Exchange Solution

6.1. Exchange Server Overview


The Microsoft Exchange Server messaging platform provides email, scheduling, and tools
for custom collaboration and messaging service applications. The primary design goals for
Exchange are simplicity of scale, hardware utilization, and failure isolation.

Exchange 2016 Architecture


In Exchange 2016, Microsoft has reduced the number of server roles to two: Mailbox and Edge
Transport.
The Mailbox server in Exchange 2016 includes the server components from the Exchange 2013
Mailbox and Client Access server roles:
• Client Access services, which provide authentication, limited redirection, and proxy services.
They offer all the usual client access protocols (HTTP, POP, IMAP, and SMTP) and don't do
any data rendering.
• Mailbox services, which include the backend client access protocols, Transport service,
Mailbox databases, and Unified Messaging. The Mailbox server handles all activity for the
active mailboxes on that server.
To minimize the attack surface of your Exchange deployment, the Edge Transport role is typically
deployed in the perimeter network, outside the internal Active Directory forest. By handling
all Internet-facing mail flow, it adds layers of message protection and security against viruses
and spam and can apply mail flow rules (also known as transport rules) to control message
movement.
For more information about the Exchange 2016 architecture, see the Microsoft Exchange 2016
architecture document.


Figure 10: Exchange 2016 Architecture

Along with the Mailbox role, Exchange 2016 allows you to proxy traffic from Exchange 2013
Client Access servers to Exchange 2016 mailboxes. This capability lets you move to Exchange
2016 without having to worry about deploying enough front-end capacity to service new
Exchange 2016 servers.

Updates in Exchange 2019


There is no architectural difference between Exchange 2016 and Exchange 2019. However,
Exchange 2019 removes support for Unified Messaging and includes the security, performance,
and client updates described in Microsoft documentation.


6.2. Exchange Server Sizing


The cornerstone for any successful MS Exchange deployment is the sizing of the solution.
Microsoft recommends using the Exchange Sizing Calculator (2016 link) to size the solution for
all deployments and for upgrades from one Exchange version to another. A typical Exchange
Server sizing requires changes to multiple fields in the calculator. For simplicity, we categorize
our sizing examples into single and multisite solutions. The sizing calculator for Exchange 2019
is currently only available in the current Cumulative Update (CU) download. To obtain the CU
downloads, you must have access to the Microsoft Volume Licensing Service Center. To see the
calculator used with the Nutanix MS Exchange ESRP for 65,000 mailboxes, visit Nutanix.com.

Compute Sizing Configuration


Microsoft has strict requirements for the maximum and minimum resources that you can allocate
to each Exchange Server. The following tables detail the configuration parameters for each
Exchange Server VM.

Table 2: Exchange 2016 Compute Sizing Configuration

Description | Value
Minimum core count per VM | 4
Maximum core count per VM | 24
Recommended minimum memory per VM (GB) | 8 (Mailbox Server role) or 4 (Edge Transport role)
Recommended maximum memory per VM (GB) | 192
Specification within NUMA architecture | Strictly required. Each VM must fit within the NUMA boundaries.

Table 3: Exchange 2019 Compute Sizing Configuration

Description | Value
Minimum core count per VM | 4
Maximum core count per VM | 24
Recommended minimum memory per VM (GB) | 128 (Mailbox Server role) or 64 (Edge Transport role)
Recommended maximum memory per VM (GB) | 256
Specification within NUMA architecture | Strictly required. Each VM must fit within the NUMA boundaries.

Multisite Solution Sizing Example


A multisite solution provides application layer high availability at the same site and offers the
option to have automated DR for the application by spreading the Exchange Servers across
multiple sites (typically two sites). Change the following fields according to your requirements.

Table 4: Multisite Solution Sizing

Description | Value | Default Value or Change
Total Number of Tier-X User Mailboxes / Environment | <Enter the number of mailboxes in the environment.> | Input value
Projected Mailbox Number Growth Percentage | 0% | Use default value
Total Send / Receive Capability / Mailbox / Day | <Enter the number of messages sent per mailbox per day. Exchange admin should have details.> | Input value
Average Message Size (KB) | <Enter the average message size. Exchange admin should have details.> | Input value
Initial Mailbox Size (MB) | <Desired mailbox size. This value is directly proportional to the amount of storage required per Exchange VM. Start small and scale up mailbox size.> | Input value
Mailbox Size Limit (MB) | <Desired mailbox size. This value is directly proportional to the amount of storage required per Exchange VM. Start small and scale up mailbox size.> | Input value (should be the same as the value above)
Personal Archive Mailbox Size Limit (MB) | 0 | Use default value
Deleted Item Retention Window (Days) | 14 | Use default value
Single Item Recovery | Enabled | Use default value
Calendar Version Storage | Enabled | Use default value
Multiplication Factor User Percentage | 100% | Use default value
IOPS Multiplication Factor | 1.00 | Use default value
Megacycles Multiplication Factor | 1.00 | Use default value
Desktop Search Engines Enabled (for Online Mode Clients) | No | Use default value
Predict IOPS Value? | Yes | Use default value

Change the values only for the cells highlighted with arrows in the following screenshot, unless
your Exchange administrator has provided more information.


Figure 11: Tier-1 User Mailbox Configuration Calculator Settings

Note: Always ensure that the initial mailbox size and mailbox limit are both set to
the maximum value that you want. For example, if you want a 1 GB mailbox for each
user, then set both the size and the limit values to 1,024.

If there are multiple profiles for mailbox users (up to a maximum of four), User Mailbox
Configuration allows you to set different configuration settings for each profile. However, to keep
this sizing example simple, we use a single configuration for all the mailboxes.


CPU and Memory Sizing


In the Server Configuration section, provide the processor information. As an example, for the
Exchange ESRP we used Intel Xeon Gold 6148 processors with 20 cores at 2.4 GHz and a SPECint
rate of 590 for 12 cores.

Figure 12: Server Configuration

A resource for determining a processor's SPECint rate is http://ewams.net/specintd/. Multiply the
SPECint rate per core for your processor by the number of cores you intend to use for each VM.
In this example, we have 12 cores, each with a SPECint rate of 49.15, giving us a total SPECint
rate of about 590.

Note: Exchange Calculator v9.1 (2016 version) uses SPECint 2006 rates, but the
Exchange 2019 calculator uses SPECint 2017 values. Use the appropriate values for
the calculator version you're using.

Once you’ve determined and entered the SPECint rate, proceed to the Exchange DAG
Configuration section.

Figure 13: Exchange DAG Configuration

Always keep the database availability group (DAG) multiplier at one. For the number of mailbox
servers hosting active mailboxes, start with two, but be aware that this value may need to
change. For this example, two servers are not enough to host the active mailboxes, even with
a VM with 17 vCPU and 256 GB RAM, as validated in the Role Requirements tab shown in the
following screenshot. The calculator highlights the problem areas in red.


Figure 14: Role Requirements Results Pane: Server Configuration

When we change the number of servers to four in the Exchange DAG Configuration section, the
Role Requirements tab updates to look like the following screenshot.


Figure 15: Role Requirements Results Pane: Updated Server Configuration

From the results pane, we know that each Exchange VM in this environment needs to have 10
vCPU and 128 GB RAM, with at least 10,392 GB (approximately 11 TB) of storage. To calculate
the storage required for each Exchange VM, add the disk space requirements in the following
three fields:
• Transport Database Space Required.
• Database Space Required.
• Log Space Required.
To validate these results and get the total number of Exchange servers required for the solution,
check the Environment Configuration section in the Role Requirements calculations pane.

Figure 16: Role Requirements Calculations Pane

To summarize, for this example, we need 16 Exchange VMs (8 VMs per site, with 4 active and
4 passive on each site). We also need at least nine cores of Active Directory services dedicated
to Exchange per site, which we can achieve by adding Active Directory servers with a CPU
configuration of either 3 VMs with 4 vCPU each or 5 VMs with 2 vCPU each. This configuration
is presented as the Recommended Minimum Number of Global Catalog Cores in the Role
Requirements section.
For a single-site deployment, set the value for site resilient deployment in the Input Sheet to
No. Although Nutanix and Microsoft support single-site deployments, we do not recommend
deploying an Exchange solution without DAGs or application-level high availability.


6.3. Exchange Server Storage Configuration Best Practices


The following sections outline the storage best practices for running Exchange Server on
Nutanix.

Drive Layout and Configuration


Nutanix is a scale-out platform that distributes disks and I/O across a shared-nothing cluster that
scales in a linear fashion. It is a general best practice to create many smaller disks as opposed
to a few large disks. One key reason for this recommendation is that each vDisk on Nutanix
(regardless of the hypervisor) receives a fixed share of access to the random write buffer
(called the oplog) on the underlying host. The optimal number and size of the virtual disks and
their corresponding Exchange Server data or log files depend on the size and activity of the
Exchange database.
In keeping with the scale-out mindset, it is also important to ensure that the different types of
disks (system, Exchange database files, and logs) are all placed across multiple virtual storage
controllers. This consideration is not relevant for Nutanix AHV, as AHV does not have the
construct of a virtual storage controller. However, for VMware ESXi and Microsoft Hyper-V,
ensure there are multiple virtual SCSI adapters for Exchange Server.
As a starting point for storage, Nutanix recommends the following design choices (an illustrative AHV example follows the note below):
• Separate drives for the OS and Exchange Server install binaries. Place these drives together on the same virtual storage controller (for example, controller 0).
• With AHV, use a dedicated vDisk for each Exchange database. With vSphere or Hyper-V, place each Exchange database on a dedicated virtual disk.
• Use a dedicated vDisk for recovery with Exchange native data protection.
• If using volume groups on vSphere, create one volume group with multiple disks inside it for each node, and map each volume group to only one Exchange Server.
• With AHV, use a dedicated vDisk for transaction log files. With vSphere or Hyper-V, place transaction log files on dedicated virtual disks.
• When using a backup solution based on Volume Shadow Copy Service (VSS), limit the number of databases per Exchange instance (VM) to 12 to ensure that the number of vDisks is less than or equal to 24 (12 for databases and 12 for logs). An alternative valid configuration that can help avoid known VSS issues places 24 databases on 24 vDisks, with each log sharing the vDisk with its database.

Note: Because having fewer vDisks can impact performance, scale out the number
of Exchange VMs and use 12 databases per VM for optimal performance and
resiliency in high-performance environments.
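To illustrate this layout on AHV, the following acli commands (run from any CVM) add a dedicated database vDisk and a dedicated log vDisk to a hypothetical VM named EXCH01 in a container named exchange-ctr. The sizes are placeholders, not sizing guidance:

# Dedicated database vDisk.
acli vm.disk_create EXCH01 container=exchange-ctr create_size=1024G
# Dedicated transaction log vDisk.
acli vm.disk_create EXCH01 container=exchange-ctr create_size=200G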


The following diagram shows the suggested initial drive layout for Exchange Server VMs on
Nutanix AHV.

Figure 17: Suggested Initial Drive Layout for Exchange Server VMs on Nutanix AHV

The following diagram shows the disk layout for Exchange Server VMs on VMware vSphere
using vmdk or for Microsoft Hyper-V using multiple SCSI controllers.

Figure 18: Suggested Disk Layout for Exchange Server VMs on VMware vSphere or Microsoft Hyper-V

The following diagram shows the disk layout for Exchange Server VMs on vSphere with volumes.


Figure 19: Suggested Disk Layout for Exchange Server VMs on VMware vSphere with Volumes

Map the virtual disks configured as volumes (not volume groups) as iSCSI LUNs to each
Exchange VM. Each disk here is a native vDisk mapped as an iSCSI LUN to an Exchange VM
using Windows in-guest iSCSI mapping. When installing Exchange 2016 or 2019, ensure that
the binaries are on a disk backed by a SCSI controller, not on an iSCSI volume. This placement
ensures that MS Exchange services are not impacted if an iSCSI connection issue occurs during
the regular Microsoft patching process, which reboots the Exchange VM. While not strictly a
requirement, a delayed or manual service start is recommended during patching windows to
ensure that Exchange services come up after the iSCSI configuration settings take effect.
Consider setting all Exchange services to delayed automatic start using the following command
(note the required space after start=):
sc config SVCNAME start= delayed-auto
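A short PowerShell sketch that applies the delayed start to every Exchange service at once follows (run elevated on each Exchange VM; the MSExchange* filter and the StartType property assume Windows PowerShell 5.0 or later):

Get-Service -Name "MSExchange*" | Where-Object { $_.StartType -eq "Automatic" } | ForEach-Object {
    # Note the required space after start= in sc.exe syntax.
    sc.exe config $_.Name start= delayed-auto
}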

MetaCacheDatabase in Exchange 2019


MetaCacheDatabase (MCDB) provides flash-tier acceleration for JBOD MS Exchange
deployments. Although MCDB doesn't provide large performance benefits on intelligently tiered
systems like the Nutanix distributed storage fabric, it can be configured on the Nutanix platform.
Note the following recommendations when deploying MCDB:
• If using MCDB in a mixed workload environment, ensure that 10 percent of the system’s SSD
capacity is sized exclusively for Exchange MCDB. If in doubt, add more flash capacity to the
nodes.
• Configure MCDB on a separate vDisk.
• Start with 1 x 1 TB vDisk for MCDB per Exchange VM. If this sizing doesn’t meet the
requirements, reach out to the Solutions team for assistance.
• With vSphere, use PVSCSI as the storage controller for the MCDB vDisk. Alternatively,
configure the MCDB vDisk as a volume disk for iSCSI in-guest mapping.


• Avoid running multiple Exchange VMs on the same node even if they are all within NUMA
boundaries. With vSphere, use Distributed Resource Scheduler (DRS) rules to split Exchange
VMs onto multiple hosts.

Node and Platform Guidance


Nutanix has node types available in a variety of compute and storage capacities and
performance capabilities to suit almost any virtual workload. All Nutanix OEMs also offer relevant
hardware platforms to meet your compute and storage requirements. When deploying Microsoft
Exchange on Nutanix, consider these key factors:
• The potential Microsoft Exchange configurations (for example, DAG with and without lagged
copies, datacenter configuration, and so on).
• Microsoft licensing implications.
• User Mailbox profiles, including mailbox size and messages per day.
For deployments where you want to maximize the number of mailboxes per Exchange Mailbox
server, Nutanix recommends the NX-8150 node. Nutanix designed the NX-8150 node specifically
for business-critical applications with high compute, storage capacity, performance, and working
set requirements such as Exchange 2016 or Exchange 2019. You can also configure OEM
hardware with 24-disk storage systems for Exchange deployments. For more information on
Nutanix NX and OEM hardware options, please see Nutanix Hardware Platforms.
The NX-8150 or similar hardware from an OEM partner is a great fit for Microsoft Exchange
workloads, as it provides high compute performance with enough usable capacity to support
larger mailbox sizes. Although Office 365 currently supports up to 50 GB mailboxes, an on-
premises Exchange solution can be configured with any mailbox limit (or even unlimited mailbox
size).
The Nutanix distributed storage fabric amplifies the storage performance you can achieve with
any hardware platform. Nutanix storage uses data locality and intelligent life cycle management
(ILM) for fully automated tiering across the SSDs and HDDs and in-line proactive disk balancing
to ensure that capacity is used evenly throughout the cluster. In extreme cases, the storage fabric
automatically manages imbalances with reactive disk balancing.
To maximize performance and operational efficiency, you can mix the NX-8150 with other node
types to form a single large Nutanix cluster. This design allows you to add and balance storage
and compute capacity intelligently in a Nutanix cluster.

6.4. Scalability
In our earlier sizing example, we were working with 65,000 Exchange users with a 1 GB mailbox
for each user.


A common question we see is how to expand the solution to support future mailbox growth.
With Nutanix the answer is very simple—add storage nodes to expand capacity. There is no
theoretical limit to the number of storage nodes you can add to a cluster, which means that you
can:
• Start small and scale only when required.
• Avoid Day One oversizing, reducing CapEx.
• Avoid the ongoing OpEx required to maintain oversized infrastructure.
• Enjoy the benefits of the latest CPU, RAM, networking, and flash technologies as you scale.
To increase mailbox size from 1 GB to 2 GB, 4 GB, or even 8 GB, progressively add more
storage nodes to the cluster. The following figure details the number of HCI and storage nodes
required for a variety of mailbox sizes.

Figure 20: Node Counts by Mailbox Size

Storage nodes used in this example have the following minimum specifications:
• 1 Intel Skylake Processor (2.4 GHz or above)
• 96 GB RAM (preferably 192 GB)
• 4 x 1,920 GB SSD
• 8 x 12 TB HDD
• 2 x 10 GbE Network Interface Card
When adding new nodes to meet growing mailbox capacity requirements, you can migrate MS
Exchange VMs to newer nodes without any service interruption, increasing performance without
any changes at the application layer.
Similarly, when nodes reach end-of-life (EOL), you can add new nodes to the cluster and move
MS Exchange VMs off EOL hardware without any data migration or downtime, resulting in
increased performance without any expert storage, virtualization, or application-level knowledge.


7. Recommended Deployment Options


In the following recommended deployment options, all data—including the operating system and
Exchange databases and logs—is protected at the infrastructure layer to ensure that no single
component failure (for example, SSD or HDD) impacts the application.

Tip: Be sure that you also safeguard data at the Exchange (application) layer using
DAGs to protect against application and operating system-level failures.

7.1. Single Site: Two DAG Copies in One Failure Domain (Least
Recommended)
In this scenario, the administrator deploys a single Nutanix cluster that hosts two or more
Exchange Mailbox VMs configured at the application layer with two DAG copies.
The advantages of this deployment type are:
• Simple, cost-effective solution without silos of storage capacity.
• Infrastructure delivering ~99.999 percent availability.
The disadvantage of this deployment type is:
• In the unlikely event of a site or cluster-wide failure, the failure affects all Mailbox servers.

7.2. Single Site: Two DAG Copies in Two Failure Domains


In this scenario, the administrator deploys two clusters, each hosting at least one Exchange
Mailbox VM configured at the application layer with two DAG copies spread equally across the
two clusters.
The advantage of this deployment type is:
• In the unlikely event of a total cluster failure, mail services continue to function, though in a
degraded state.
The disadvantages of this deployment type are:
• Potential for significantly increased cost to protect against an extremely unlikely scenario.
• Creation of compute and storage silos that may lead to less technical and business efficiency.


7.3. Multisite: Two or More DAG Copies in Two or More Failure Domains
(Most Recommended)
In this scenario, the administrator deploys two or more clusters across two or more sites that
each host Exchange Mailbox VMs. Exchange is configured at the application layer with two or
more DAG copies spread equally across the sites and clusters.
The advantage of this deployment type is:
• In the unlikely event of a total site or cluster failure, mail services continue to function, though
in a potentially degraded state.
The disadvantages of this deployment type are:
• Requires multiple sites with WAN connectivity.
• Significantly increased cost.

Table 5: Risk Profiles and Implementation Options

Deployment Option | Fault Tolerance | Availability | Site Resiliency
Single site: two DAG copies in one cluster and one physical failure domain | High: cluster provides node and HDD tolerance; rack or site issues can have an effect | High availability due to multiple DAG and distributed storage fabric copies | None
Single site: two DAG copies in two clusters and two physical failure domains | High: cluster provides node and HDD tolerance; separation of copies minimizes vulnerability to physical failure | High availability due to multiple DAG and distributed storage fabric copies | Site resiliency limited to datacenter fire zone resiliency and rack resiliency
Multisite: two or more DAG copies in two or more physical failure domains | Very high: cluster provides node and HDD tolerance, rack and site-level tolerance | High availability due to multiple DAG and distributed storage fabric copies | Intrasite and intersite recoverability


8. Backup and Disaster Recovery Options


Nutanix takes a VM-centric approach to data protection and disaster recovery, complementing
Microsoft Exchange DAGs. Nutanix uses VM-level snapshots, along with protection domains,
to back up your Microsoft Exchange deployment. A snapshot is a point-in-time copy of a
single vDisk or VM, or of a group of vDisks and VMs. vDisks are grouped together in a Nutanix
protection domain. As RPO and RTO needs change, administrators can move different VMs
between protection domains without the need to move or copy any data.
The unique design of Nutanix storage lets you take many snapshots without a significant
performance impact. This scalable approach, combined with application-level resiliency features,
alleviates the need for dedicated storage systems for short-term backups and compliance.
Instead, the Nutanix cluster can store VM snapshots across all nodes in the cluster. External
tools can use snapshots and clones of the Microsoft Exchange VMs to restore a single mailbox or
files within a mailbox.

Note: A snapshot is not a backup of the environment; it is a recovery point. Nutanix
recommends regular backups using either Exchange native data protection or a third-
party vendor.

8.1. Native Exchange Data Protection


Exchange native data protection uses lagged copies of Exchange databases to provide native
recoverability in some scenarios. The Nutanix solution fully supports this approach, which can
scale using our unique storage nodes to support up to the maximum number of lagged copies
supported by Microsoft Exchange (two).
You can scale out the Exchange native data protection solution simply by adding Nutanix nodes
and Exchange multirole server (MSR) instances to support the CPU, RAM, storage capacity, and
performance requirements, the same way you would scale for additional users or mailbox capacity.
You can also configure Exchange with deleted item retention, either in addition to or as a
substitute for lagged database copies, and the scalable and repeatable model easily supports
this configuration as well.
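As a minimal sketch of both options (the server, database, and retention values are
hypothetical placeholders), you could configure a lagged database copy and deleted item
retention from the Exchange Management Shell:

# Add a lagged database copy with a 7-day replay lag (names are placeholders)
Add-MailboxDatabaseCopy -Identity "DB01" -MailboxServer "EXCH02" -ReplayLagTime 7.00:00:00

# Configure 14 days of deleted item retention on the same database
Set-MailboxDatabase -Identity "DB01" -DeletedItemRetention 14.00:00:00

Both values are examples; align the replay lag and retention windows with your recovery
objectives.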

8.2. Third-Party Vendor Support


The Nutanix solution supports several industry-standard backup technologies, many of which
have direct (API) integration with Nutanix AOS, at both the application layer and the storage
layer. Strategic technology partners like Veeam, HYCU, and Commvault all have solutions that
directly integrate with both Nutanix AOS and with MS Exchange at the application layer. With
these alliance partners, backing up MS Exchange no longer depends on a single vendor. Visit the
Nutanix Technology Alliance Partners page for more information.


9. MS Exchange on Nutanix Best Practices Checklist
The following is a consolidated list of the best practices covered in this document.

9.1. General
• Perform a current state analysis to identify workloads and sizing.
• Spend time up front to architect a solution that meets both current and future needs.
• Design to deliver consistent performance, reliability, and scale.
• Don’t undersize, don’t oversize—right-size.
• Start with a proof of concept, then test, optimize, iterate, and scale.

9.2. Drive Configuration


• Distribute databases and log files across multiple vDisks (AHV).
• Distribute vDisks across multiple PVSCSI controllers on VMware vSphere (across multiple
SCSI controllers on MS Hyper-V).
• Use the VMware PVSCSI controller for database and log disks, or for all drives if possible.
• Use a 64 KB NTFS allocation unit size for database drives (see the sketch after this list).
• Size for at least 20 percent free disk space on all drives.
• Do not use Windows dynamic disks or other in-guest volume managers.
• Use a maximum of 12 drives for databases and a maximum of 12 drives for logs per
Exchange VM to avoid potential VSS scale issues.
⁃ Alternatively, use a maximum of 24 drives combining databases and logs. Note that this
option yields lower performance.
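As a minimal PowerShell sketch (the drive letter and volume label are placeholders), a
database drive can be formatted with a 64 KB allocation unit size as follows:

# Format a database volume with NTFS and a 64 KB allocation unit size
Format-Volume -DriveLetter E -FileSystem NTFS -AllocationUnitSize 65536 -NewFileSystemLabel "ExchDB01"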

9.3. VMware
• Use the VMXNET3 NIC.
• Use the latest VMware VM hardware version.

• Use the PVSCSI controller when possible.
• Remove unneeded hardware (floppy drive, serial port, and so on).
• Do not enable CPU hot-add.
• If evacuating an ESXi or AHV host for maintenance, manually vMotion or live-migrate the
Exchange VMs before putting the host in maintenance mode.
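As an illustrative sketch using VMware PowerCLI (assuming PowerCLI is installed; the vCenter,
VM, and host names are placeholders), you can migrate an Exchange VM before host maintenance:

# Connect to vCenter and vMotion the Exchange VM to another host
Connect-VIServer -Server "vcenter.example.local"
Move-VM -VM "EXCH01" -Destination (Get-VMHost "esx02.example.local")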

9.4. RAM
• Do not overcommit RAM at the hypervisor host level on ESXi or Hyper-V. (RAM
overcommitment is not possible on AHV.)
• Size each VM to fit within a NUMA node’s memory footprint.

9.5. vCPUs
• Do not overallocate vCPUs on Exchange VMs.
• Do not oversubscribe CPU.
• Account for Nutanix CVM core usage.
• Size VMs to fit within one NUMA node.

9.6. Networking
• Use hypervisor network control mechanisms (for example, VMware NIOC) to ensure minimum
bandwidth for vMotion.
• Use VMware load-based teaming with the VMware vSphere Distributed Switch (vDS).
• Connect Nutanix nodes with redundant 10 Gbps connections as a minimum.
• Use multi-NIC vMotion or live migration.

9.7. Power Policy


• If using Hyper-V hosts, use the high-performance power option.
• If using VMware hosts, use balanced power mode unless the database is highly latency
sensitive.
• Always set the power policy to high performance in the VM guest OS.
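For example, you can set the guest OS power plan with powercfg; SCHEME_MIN is the built-in
alias for the High performance plan:

# Set the active power plan to High performance and confirm it
powercfg /setactive SCHEME_MIN
powercfg /getactivescheme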


9.8. High Availability


• Use Exchange database availability groups (DAGs) for Exchange Server high availability.
• Leave VMware DRS automation at the default level (3).
• To prevent vMotion from causing DAG database flipping, run the following cluster.exe
commands from a PowerShell prompt on one Exchange multirole VM DAG member. If there are
multiple DAGs in the environment, run the commands on at least one DAG node in each DAG.
PS C:\> cluster /cluster:yourDAGname /prop SameSubnetDelay=2000
PS C:\> cluster /cluster:yourDAGname /prop SameSubnetThreshold=10
PS C:\> cluster /cluster:yourDAGname /prop CrossSubnetDelay=4000
PS C:\> cluster /cluster:yourDAGname /prop CrossSubnetThreshold=10

• Confirm the settings with the following command:
PS C:\> cluster /cluster:yourDAGname /prop
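As an alternative sketch, the same values can be set and verified with the FailoverClusters
PowerShell module (assuming the module is available on the DAG member):

# Tune cluster heartbeat settings through the cluster object
Import-Module FailoverClusters
$dag = Get-Cluster -Name "yourDAGname"
$dag.SameSubnetDelay = 2000
$dag.SameSubnetThreshold = 10
$dag.CrossSubnetDelay = 4000
$dag.CrossSubnetThreshold = 10
# Verify the heartbeat-related properties
Get-Cluster -Name "yourDAGname" | Format-List *Subnet*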

• Let the hypervisor and DRS manage noisy neighbors and workload spikes.
• Design for and maintain a minimum of N+1 redundancy in all Nutanix clusters.
• Use hypervisor antiaffinity rules to keep DAG member VMs on separate hosts.
• Use a percentage-based admission control policy for VMware environments.

9.9. Monitoring
• Choose an enterprise monitoring solution for all Exchange Servers.
• Closely monitor drive space (see the sketch after this list).
• Closely monitor application logs during maintenance windows (especially on iSCSI in-guest
volumes).
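As a simple sketch for ad hoc checks (the 20 percent threshold is an example; an enterprise
monitoring tool should track this continuously), PowerShell can flag volumes that fall below
20 percent free space:

# List volumes with less than 20 percent free space
Get-Volume | Where-Object { $_.Size -gt 0 -and ($_.SizeRemaining / $_.Size) -lt 0.2 } |
    Select-Object DriveLetter, FileSystemLabel,
        @{Name = "FreePercent"; Expression = { [math]::Round(100 * $_.SizeRemaining / $_.Size, 1) }}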

9.10. Manageability
• Standardize on your Exchange Server build and cumulative updates (see the check after this
list).
• Use standard drive letters or mount points.
• Use VM templates.
• Join the Exchange Server to the domain and use Windows authentication.
• Test patches and roll them out in a staggered manner during maintenance windows.
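To help verify build standardization, a quick check from the Exchange Management Shell lists
each server's version:

# Audit Exchange Server versions across the organization
Get-ExchangeServer | Sort-Object Name | Format-Table Name, Edition, AdminDisplayVersion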


9.11. Nutanix Recommendations


• Configure the Nutanix container for replication factor 2.
• Enable compression at the container level.
• Do not use container-level deduplication.
• Enable erasure coding (EC-X).
• Use a single container for the Windows OS, Exchange binaries, databases, and logs where
possible.
• Take Exchange database growth into account when calculating sizing estimates.
• Ensure that a guest VM doesn't have more than 26 disks at the Windows layer, as too many
disks may cause issues with VSS (Volume Shadow Copy Service).
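As a minimal in-guest sketch (run inside the Exchange VM), you can confirm the Windows disk
count stays at or below 26:

# Warn if the VM exceeds the recommended Windows disk count
$diskCount = (Get-Disk | Measure-Object).Count
if ($diskCount -gt 26) {
    Write-Warning "VM has $diskCount disks; VSS issues are possible above 26."
}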

9.12. Backup and Disaster Recovery


• Establish RTOs and RPOs by mailbox type (Tier-1, Tier-2, and so on).
• Validate that the backup system can meet RTOs and RPOs.
• Use DAGs as a built-in disaster recovery solution.
• Use a lagged copy for quick recovery of Exchange databases.
• Use third-party backups or an archiving solution like Enterprise Vault for long-term retention of
Exchange email data.


10. Conclusion
Microsoft Exchange Server deployments are crucial to organizations, as email is the backbone
of business communication. At the same time, enterprises are virtualizing Exchange Server to
shrink their datacenter footprint, control costs, and accelerate provisioning. The Nutanix platform
provides the ability to:
• Consolidate all types of Exchange Server VMs onto a single converged platform with excellent
performance.
• Start small and scale Exchange Server VMs as your needs grow. Resize the solution when
the requirements change instead of guessing the number of Exchange Server VMs required.
• Eliminate planned downtime and protect against unplanned issues to deliver continuous
availability of critical databases.
• Reduce operational complexity by using simple, consumer-grade management with complete
insight into application and infrastructure components and performance.
• Keep pace with rapidly growing business needs, without large up-front investments or
disruptive forklift upgrades.


Appendix

References
Exchange Server Resources
1. 2016 Sizing Guidelines
2. 2019 Sizing Guidelines
3. Exchange Supportability Matrix

Other Resources
1. Nutanix Third-Party Hardware Compatibility Lists
2. Nutanix Compatibility Matrix (including guest OS options supported on Nutanix AHV)
3. VMware Compatibility Guide (including guest OS options supported on VMware ESXi)
4. Hyper-V Supported Guest OS
5. Nutanix Software Documents (including release notes)
6. Nutanix End-of-Life Information
7. Microsoft Life Cycle Policy (to validate that your versions of Exchange Server and Windows
Server are in support)
8. Server Memory Configuration Options

Nutanix Networking
1. Nutanix AHV Networking best practice guide
2. vSphere Networking with Nutanix best practice guide

About the Authors


Harsha Hosur is a Senior Solutions Architect in the Business-Critical Applications Engineering
organization at Nutanix. Follow Harsha on Twitter at @harsha_hosur.
Josh Odgers is a Principal Architect at Nutanix and a Nutanix Platform Expert (NPX-001). Follow
Josh on Twitter at @josh_odgers.
Michael Webster is the Global Technical Director of Business-Critical Applications Engineering for
Nutanix and a Nutanix Platform Expert (NPX-007). Follow Michael on Twitter at @vcdxnz001.


About Nutanix
Nutanix makes infrastructure invisible, elevating IT to focus on the applications and services that
power their business. The Nutanix enterprise cloud software leverages web-scale engineering
and consumer-grade design to natively converge compute, virtualization, and storage into
a resilient, software-defined solution with rich machine intelligence. The result is predictable
performance, cloud-like infrastructure consumption, robust security, and seamless application
mobility for a broad range of enterprise applications. Learn more at www.nutanix.com or follow us
on Twitter @nutanix.


List of Figures
Figure 1: Nutanix Enterprise Cloud OS Stack................................................................... 9

Figure 2: NUMA Architecture...........................................................................................10

Figure 3: Intel Memory Architecture................................................................................ 12

Figure 4: Example DIMM Population Between Skylake and Broadwell...........................13

Figure 5: Nutanix CVMs in a Cluster...............................................................................15

Figure 6: Nutanix AHV Cluster with HCI and Compute Nodes........................................16

Figure 7: CPU Terminology..............................................................................................17

Figure 8: Virtual Memory Overview................................................................................. 20

Figure 9: Virtual Networking Overview............................................................................ 22

Figure 10: Exchange 2016 Architecture.......................................................................... 25

Figure 11: Tier-1 User Mailbox Configuration Calculator Settings................................... 29

Figure 12: Server Configuration.......................................................................................30

Figure 13: Exchange DAG Configuration........................................................................ 30

Figure 14: Role Requirements Results Pane: Server Configuration................................31

Figure 15: Role Requirements Results Pane: Updated Server Configuration................. 32

Figure 16: Role Requirements Calculations Pane...........................................................33

Figure 17: Suggested Initial Drive Layout for Exchange Server VMs on Nutanix AHV.... 35

Figure 18: Suggested Disk Layout for Exchange Server VMs on VMware vSphere or
Microsoft Hyper-V........................................................................................................ 35

Figure 19: Suggested Disk Layout for Exchange Server VMs on VMware vSphere with
Volumes........................................................................................................................ 36


Figure 20: Node Counts by Mailbox Size........................................................................38


List of Tables
Table 1: Document Version History................................................................................... 6

Table 2: Exchange 2016 Compute Sizing Configuration................................................. 26

Table 3: Exchange 2019 Compute Sizing Configuration................................................. 26

Table 4: Multisite Solution Sizing..................................................................................... 27

Table 5: Risk Profiles and Implementation Options......................................................... 40

