Nutanix Best Practices
Copyright
Copyright 2020 Nutanix, Inc.
Nutanix, Inc.
1740 Technology Drive, Suite 150
San Jose, CA 95110
All rights reserved. This product is protected by U.S. and international copyright and intellectual
property laws.
Nutanix is a trademark of Nutanix, Inc. in the United States and/or other jurisdictions. All other
marks and names mentioned herein may be trademarks of their respective companies.
Contents
1. Executive Summary.................................................................................5
2. Introduction.............................................................................................. 6
2.1. Audience.........................................................................................................................6
2.2. Purpose.......................................................................................................................... 6
7.1. Single Site: Two DAG Copies in One Failure Domain (Least Recommended)............ 39
7.2. Single Site: Two DAG Copies in Two Failure Domains............................................... 39
7.3. Multisite: Two or More DAG Copies in Two or More Failure Domains (Most
Recommended)...............................................................................................................40
10. Conclusion............................................................................................47
Appendix..........................................................................................................................48
References...........................................................................................................................48
About the Authors............................................................................................................... 48
About Nutanix...................................................................................................................... 49
List of Figures................................................................................................................ 50
List of Tables.................................................................................................................. 52
1. Executive Summary
This document makes recommendations for designing, optimizing, and scaling Microsoft
Exchange Server deployments on the Nutanix enterprise cloud. Historically, it has been a
challenge to virtualize Exchange Server because of the high cost of traditional virtualization
stacks and the impact that a SAN-based architecture can have on performance. Businesses and
their IT departments have constantly fought to balance cost, operational simplicity, and
consistent, predictable performance.
Nutanix removes many of these challenges and makes virtualizing a business-critical application
such as Exchange Server much easier. The Nutanix distributed storage fabric is a software-
defined solution that provides all the features one typically expects in an enterprise SAN, without
a SAN’s physical limitations and bottlenecks. Exchange Server particularly benefits from the
following features:
• Localized I/O and the use of flash for index and database files to lower operation latency.
• A highly distributed approach that can handle both random and sequential workloads.
• The ability to add new nodes and scale the infrastructure without system downtime or
performance impact.
• Nutanix data protection and disaster recovery workflows that simplify backup operations and
business continuity processes.
Nutanix lets you run both Microsoft Exchange Server and other VM workloads simultaneously on
the same platform. Density for MS Exchange Server deployments is driven by CPU and storage
requirements. Validated testing shows that, to take full advantage of the system's performance
and capabilities, it is better to scale out and increase the number of MS Exchange multirole
server VMs on the Nutanix platform than to scale up individual VMs. The Nutanix platform
handles Exchange Server’s demanding storage and compute requirements with localized I/O,
server-attached flash, and distributed data protection capabilities.
2. Introduction
2.1. Audience
This best practice document is part of the Nutanix Solutions Library. We wrote it for those
architecting, designing, managing, and supporting Nutanix infrastructures. Readers should
already be familiar with a hypervisor (VMware vSphere, Microsoft Hyper-V, or the native Nutanix
hypervisor, AHV), Microsoft Exchange Server, and Nutanix.
The document addresses key items for each role, enabling a successful design, implementation,
and transition to operation. Most of the recommendations apply equally to all currently supported
versions of Microsoft Exchange Server. We call out differences between versions as needed.
2.2. Purpose
This document covers the following subject areas:
• Overview of the Nutanix solution.
• The benefits of running Microsoft Exchange Server on Nutanix.
• Overview of high-level MS Exchange Server best practices for Nutanix.
• Design, sizing, and configuration considerations when architecting an Exchange Server
solution on Nutanix.
• Virtualization optimizations for VMware ESXi, Microsoft Hyper-V, and Nutanix AHV.
Table 1: Document Version History

Version Number   Published       Notes
1.0              February 2016   Original publication.
1.1              July 2016       Updated platform overview.
2.0              December 2019   Major updates throughout. Updated for Exchange 2016 and 2019 and extended to cover all hypervisors.
2.1              June 2020       Updated platform overview, terminology, and jumbo frames guidance.
2.2              December 2020   Updated the CPU Hot-Add section.
A NUMA topology has multiple nodes that are localized structures making up the physical host.
Each NUMA node has its own memory controllers and bus between the CPU and actual memory.
If a NUMA node accesses nonlocal (remote) memory, access times can be many times slower
than for local memory. Because Exchange Server is not NUMA-aware, most of its work should
occur on a local node, so preventing remote memory access is critical.
To reduce stress on the memory controllers and CPUs, ensure that memory is in a balanced
configuration—where DIMMs across all memory channels and CPUs are populated identically.
Balanced memory configurations enable optimal interleaving across all memory channels,
maximizing memory bandwidth. To further optimize bandwidth, configure both memory
controllers on the same physical processor socket to match. For the best system-level memory
performance, ensure that each physical processor socket has the same physical memory
capacity.
If DIMMs with different memory capacities populate the memory channels attached to a memory
controller, or if different numbers of DIMMs with identical capacity populate the memory channels,
the memory controller must create multiple interleaved sets. Managing multiple interleaved sets
creates overhead for the memory controller, which in turn reduces memory bandwidth.
For example, the following image shows the relative bandwidth difference between Skylake and
Broadwell systems, each configured with six DIMMs per CPU.
Populating DIMMs across all channels delivers an optimized memory configuration. When
following this approach with Skylake CPUs, populate memory slots in sets of three per memory
controller. If you are populating a server that has two CPUs with 32 GB DIMMs, the six DIMMs
across all channels per CPU deliver a total host capacity of 384 GB, providing the highest level
of performance for that server. For a Broadwell system, a more optimized configuration has
either four or eight DIMMs (as opposed to the six in Skylake). If you are populating a two-socket
system with the same 32 GB DIMMs, eight DIMMs across all channels per CPU deliver a total
host capacity of 512 GB, which is a higher performance configuration.
Note: Each generation of CPUs may change the way it uses memory, so it’s
important to always check the documentation from the hardware vendor to validate
the configuration of the components.
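As a quick validation step on Microsoft Hyper-V hosts (or any Windows machine with direct
hardware access), a PowerShell sketch along the following lines lists the installed DIMMs so you
can confirm that channels and sockets are populated identically:

# List DIMM slots and sizes to confirm a balanced memory configuration.
Get-CimInstance -ClassName Win32_PhysicalMemory |
    Select-Object BankLabel, DeviceLocator,
        @{ Name = 'CapacityGB'; Expression = { $_.Capacity / 1GB } } |
    Sort-Object DeviceLocator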
The size of the CVM varies depending on the deployment; however, the typical size is 8 vCPU
and 32 GB of memory. During initial cluster deployment, Nutanix Foundation automatically
configures the vCPU and memory assigned to the CVM. Consult your Nutanix partner
and Nutanix Support if you need to make any changes to these values to ensure that the
configuration is supported.
vCPUs are assigned to the CVM and not necessarily consumed. If the CVM has a low-end
workload, it uses the CPU cycles it needs. If the workload becomes more demanding, the CVM
uses more CPU cycles. Think of the number of CPUs you assign as the maximum number of
CPUs the CVM can use, not the number of CPUs immediately consumed.
When you size any Nutanix cluster, ensure that there are enough resources in the cluster
(especially failover capacity) to handle situations where a node fails or goes down for
maintenance. Sufficient resources are even more important when you’re sizing a solution to host
a critical workload like Microsoft SQL Server or Microsoft Exchange Server.
From Nutanix AOS 5.11 onward, AHV clusters can include a new type of node that does not
run a local CVM. These compute-only (CO) nodes are for specialized use cases (typically, for
large-scale Oracle deployments) and have specific initial cluster requirements with strict scaling
constraints, so engage with your Nutanix partner or Nutanix account manager to see if CO is
advantageous for your environment. Because CO nodes rely on standard HCI nodes (running
CVMs) to provide their storage from the distributed fabric, CO nodes cannot benefit from data
locality.
Note: CO nodes are not supported for running MS Exchange Server. There are no
plans to support MS Exchange Server on CO nodes in the future.
Note: CO nodes are only available when running Nutanix AHV as the hypervisor.
• CPU package
The CPU package is the physical device that contains all the components and cores and
sits on the motherboard. Many different types of CPU packages are available, ranging from
low-core, high-frequency models to high-core, low-frequency models. The right balance
between core and frequency depends on the requirements of the Exchange Server workload,
considering factors such as the number of users, mailbox limit, message size and quantity,
and so on. There are typically two or more CPU packages in a physical host.
• Core
A CPU package typically has two or more cores. A core is a physical subsection of a
processing chip and contains one interface for memory and peripherals.
• Logical CPU (LCPU)
The LCPU construct presented to the host operating system is a duplication of the core
that allows it to run multiple threads concurrently using hyperthreading (covered in the next
section). Logical CPUs do not have characteristics or performance identical to the physical
core, so do not consider an LCPU as a core in its own right. For example, if a CPU package
has 10 cores, enabling hyperthreading presents 20 logical CPUs to the guest OS.
• Arithmetic logic unit (ALU)
The ALU is the heart of the CPU, responsible for performing mathematical operations on
binary numbers. Typically, the multiple logical CPUs on the core share one ALU.
• Hypervisor
In a virtualized environment, the hypervisor is a piece of software that sits between the
hardware and one or more guest operating systems. Each of these guest operating systems
can run its own programs, as the hypervisor presents to it the host hardware’s processor,
memory, and resources.
• vSocket
Each VM running on the hypervisor has a vSocket construct. vCPUs virtually “plug in” to
a vSocket, which helps present the vNUMA topology to the guest OS. There can be many
vSockets per guest VM, or just one.
• vCPU
Guest VMs on the hypervisor use vCPUs as the processing unit and map them to available
LCPUs through the hypervisor’s CPU scheduler. When determining how many vCPUs to
assign to an Exchange Server VM, always size assuming 1 vCPU = 1 physical core. For example,
if the physical host contains a single 10-core CPU package, do not assign more than 10
vCPUs to the Exchange Server VM. When considering how much to oversubscribe vCPUs in the
virtual environment, use the ratio of vCPUs to physical cores, not logical CPUs, as in the
sketch after this list.
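As a rough illustration, the following VMware PowerCLI sketch (the host name is hypothetical)
computes that ratio for a single host; the same arithmetic applies on any hypervisor:

# Compare total vCPUs of powered-on VMs against the host's physical core count.
$esx   = Get-VMHost "esx01.example.com"
$cores = $esx.NumCpu    # physical cores, not hyperthreaded logical CPUs
$vcpus = (Get-VM -Location $esx |
          Where-Object { $_.PowerState -eq "PoweredOn" } |
          Measure-Object -Property NumCpu -Sum).Sum
"{0} vCPUs on {1} physical cores = {2:N2}:1 oversubscription" -f $vcpus, $cores, ($vcpus / $cores)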
In theory, because we ideally want to use all available processor cycles, we should use
hyperthreading all the time. However, in practice, using this technology requires some additional
consideration. There are two key issues when assuming that hyperthreading simply doubles the
amount of CPU on a system:
1. How the hypervisor schedules threads to the physical CPU.
2. The length of time a unit of work stays on a processor without interruption.
Because a processor is still a single entity, appropriately scheduling processes from guest VMs
is critically important. For example, the system should not schedule two threads from the same
process on the same physical core. In such a case, each thread takes turns stopping the other
to change the core’s architectural state, with a negative impact on performance. There is also
a hypervisor CPU scheduling function to keep in mind, which is abstracted from the guest OS
altogether.
Because of this complexity, be sure to size for physical cores and not hyperthreaded cores when
sizing MS Exchange Server in a virtual environment. For mission-critical deployments, start with
no vCPU oversubscription.
As a rule, if CPU ready time is over 5 percent for a given VM, there is cause for concern. Always
ensure that the size of the physical CPU and the guest VM is in line with Exchange Server
requirements to avoid this problem.
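If you run on VMware, a PowerCLI sketch like the following (the VM name is hypothetical)
converts real-time cpu.ready.summation samples, which are reported in milliseconds per
20-second interval, into a percentage:

# Ready % = ready ms / 20,000 ms x 100, which simplifies to Value / 200.
# For multi-vCPU VMs, divide again by the vCPU count for a per-vCPU figure.
Get-Stat -Entity (Get-VM "EXCH01") -Stat "cpu.ready.summation" -Realtime -MaxSamples 12 |
    Select-Object Timestamp,
        @{ Name = 'ReadyPct'; Expression = { [math]::Round($_.Value / 200, 2) } }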
One of the key differences between Nutanix AHV and VMware ESXi or Microsoft Hyper-V is that
AHV does not support swapping VM memory to disk. With ESXi, it is possible to assign more
memory to all VMs on a host than there is physical host memory. This overcommitment can be
dangerous, as physical RAM is much faster than even the fastest storage available on Nutanix.
Similarly, with Hyper-V, technologies such as Dynamic Memory and Smart Paging allow the
system to share memory allocation between all guest VMs with physical disks as an overflow.
Memory Reservations
Because memory is one of the most important resources for MS Exchange Server, do not
oversubscribe memory for this application (or for any business-critical applications). As
mentioned earlier, Nutanix AHV does not support memory oversubscription, so all memory is
effectively reserved for all VMs. When using VMware ESXi or Microsoft Hyper-V, ensure that the
host memory is always greater than the sum of all VMs. If there is memory contention on the
hypervisor, guest VMs could start to swap memory space to disk, which has a significant negative
impact on Exchange Server performance.
If you are deploying Exchange Server in an environment where oversubscription may occur,
Nutanix recommends reserving 100 percent of all Exchange Server VM memory to ensure
a consistent level of performance. This reservation prevents the Exchange Server VMs from
swapping virtual memory to disk; however, other VMs that don’t have this reservation may
swap. As virtualization allows you to right-size your MS Exchange deployment, memory
overcommitment offers no potential benefit. Because oversizing causes decreased performance,
be sure to avoid it.
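On ESXi, for example, you could apply the full reservation with a short PowerCLI sketch like
this one (the VM naming pattern is an assumption):

# Reserve 100 percent of each Exchange VM's configured memory so the
# hypervisor can never swap it to disk.
Get-VM -Name "EXCH*" | ForEach-Object {
    $_ | Get-VMResourceConfiguration |
        Set-VMResourceConfiguration -MemReservationGB $_.MemoryGB
}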
Although each hypervisor implements virtual networking constructs slightly differently, the basic
concepts remain the same.
• If VMs expect 20 Gbps of bandwidth and one of the links fails, performance drops by 50
percent.
• Consider using hypervisor-based network I/O control to ensure that transient high-throughput
operations such as vMotion or live migration do not negatively impact critical traffic.
• For VMware-based environments, consider using the VMware Distributed Switch. This
construct provides many benefits in a VMware environment, including the ability to use
technologies such as LACP and maintain TCP state during vMotion.
For more detailed information about networking in a Nutanix environment for supported
hypervisors, please refer to the following resources:
• Nutanix AHV Networking best practice guide
• vSphere Networking with Nutanix best practice guide
Jumbo Frames
Note: Exchange Server does not require jumbo frames.
The Nutanix CVM uses the standard Ethernet MTU (maximum transmission unit) of 1,500
bytes for all the network interfaces by default. The standard 1,500 byte MTU delivers excellent
performance and stability. Nutanix does not support configuring the MTU on a CVM's network
interfaces to higher values.
You can enable jumbo frames (MTU of 9,000 bytes) on the physical network interfaces of AHV,
ESXi, or Hyper-V hosts and user VMs if the applications on your user VMs require them. If you
choose to use jumbo frames on hypervisor hosts, be sure to enable them end to end in the
desired network and consider both the physical and virtual network infrastructure impacted by the
change.
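A simple way to confirm a jumbo frame path end to end from a Windows guest is a
don't-fragment ping sized just under the MTU (the target address below is hypothetical):

# 8,972 bytes of ICMP payload + 20-byte IP header + 8-byte ICMP header = 9,000
# bytes on the wire. -f sets Don't Fragment, so the ping fails if any hop in
# the path still uses a smaller MTU.
ping -f -l 8972 192.168.10.20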
Along with the Mailbox role, Exchange 2016 allows you to proxy traffic from Exchange 2013
Client Access servers to Exchange 2016 mailboxes. This capability lets you move to Exchange
2016 without having to worry about deploying enough front-end capacity to service new
Exchange 2016 servers.
The following table lists the compute and memory specifications for Exchange 2016 VMs.

Description                               Value
Minimum core count per VM                 4
Maximum core count per VM                 24
Recommended minimum memory per VM (GB)    8 (Mailbox Server role) or 4 (Edge Transport role)
Recommended maximum memory per VM (GB)    192
Specification within NUMA architecture    Strictly required. Each VM must fit within the NUMA boundaries.

The following table lists the compute and memory specifications for Exchange 2019 VMs.

Description                               Value
Minimum core count per VM                 4
Maximum core count per VM                 24
Recommended minimum memory per VM (GB)    128 (Mailbox Server role) or 64 (Edge Transport role)
Recommended maximum memory per VM (GB)    256
Specification within NUMA architecture    Strictly required. Each VM must fit within the NUMA boundaries.
Change the values only for the cells highlighted with arrows in the following screenshot, unless
your Exchange administrator has provided more information.
Note: Always ensure that the initial mailbox size and mailbox limit are both set to
the maximum value that you want. For example, if you want a 1 GB mailbox for each
user, then set both the size and the limit values to 1,024.
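When you later enforce these values on the deployed mailboxes, an Exchange Management Shell
sketch such as the following (the mailbox identity is hypothetical) sets the same 1 GB limit,
with a warning at roughly 90 percent:

# Enforce a 1 GB mailbox limit that matches the calculator inputs.
Set-Mailbox -Identity "jsmith" -UseDatabaseQuotaDefaults $false `
    -IssueWarningQuota 900MB -ProhibitSendQuota 1GB -ProhibitSendReceiveQuota 1GB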
If there are multiple profiles for mailbox users (up to a maximum of four), User Mailbox
Configuration allows you to set different configuration settings for each profile. However, to keep
this sizing example simple, we use a single configuration for all the mailboxes.
Note: Exchange Calculator v9.1 (the 2016 version) uses SPECint 2006 rate values, but
the Exchange 2019 calculator uses SPECint 2017 values. Use the values appropriate for
the calculator version you're using.
Once you’ve determined and entered the SPECint rate, proceed to the Exchange DAG
Configuration section.
Always keep the database availability group (DAG) multiplier at one. For the number of mailbox
servers hosting active mailboxes, start with two, but be aware that this value may need to
change. For this example, two servers are not enough to host the active mailboxes, even with
a VM with 17 vCPU and 256 GB RAM, as validated in the Role Requirements tab shown in the
following screenshot. The calculator highlights the problem areas in red.
When we change the number of servers to four in the Exchange DAG Configuration section, the
Role Requirements tab updates to look like the following screenshot.
From the results pane, we know that each Exchange VM in this environment needs to have 10
vCPU and 128 GB RAM, with at least 10,392 GB (approximately 11 TB) of storage. To calculate
the storage required for each Exchange VM, add the disk space requirements in the following
three fields:
• Transport Database Space Required.
To summarize, for this example, we need 16 Exchange VMs (8 VMs per site, with 4 active and
4 passive on each site). We also need at least nine cores of Active Directory services dedicated
to Exchange per site, which we can achieve by adding Active Directory servers with a CPU
configuration of either 3 VMs with 4 vCPU each or 5 VMs with 2 vCPU each. This configuration
is presented as the Recommended Minimum Number of Global Catalog Cores in the Role
Requirements section.
For a single-site deployment, set the value for site resilient deployment in the Input Sheet to
No. Although Nutanix and Microsoft support single-site deployments, we do not recommend
deploying an Exchange solution without DAGs or application-level high availability.
Note: Because having fewer vDisks can impact performance, scale out the number
of Exchange VMs and use 12 databases per VM for optimal performance and
resiliency in high-performance environments.
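As an illustration only, an Exchange Management Shell sketch like the following (the server
name and folder paths are hypothetical) creates 12 mailbox databases on one VM, one per
dedicated vDisk:

# Create MDB01 through MDB12, each with its own database and log location.
1..12 | ForEach-Object {
    $name = "MDB{0:D2}" -f $_
    New-MailboxDatabase -Name $name -Server "EXCH01" `
        -EdbFilePath "C:\ExchangeDatabases\$name\$name.edb" `
        -LogFolderPath "C:\ExchangeLogs\$name"
}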
The following diagram shows the suggested initial drive layout for Exchange Server VMs on
Nutanix AHV.
Figure 17: Suggested Initial Drive Layout for Exchange Server VMs on Nutanix AHV
The following diagram shows the disk layout for Exchange Server VMs on VMware vSphere
using vmdk or for Microsoft Hyper-V using multiple SCSI controllers.
Figure 18: Suggested Disk Layout for Exchange Server VMs on VMware vSphere or Microsoft Hyper-V
The following diagram shows the disk layout for Exchange Server VMs on vSphere with volumes.
Figure 19: Suggested Disk Layout for Exchange Server VMs on VMware vSphere with Volumes
Map the virtual disks configured as volumes (not volume groups) as iSCSI LUNs to each
Exchange VM. Each disk here is a native vDisk mapped as an iSCSI LUN to an Exchange VM
using Windows in-guest iSCSI mapping. When installing Exchange 2016 or 2019, ensure that
the binaries are on a disk backed by a SCSI controller, not on an iSCSI volume. This placement
ensures that MS Exchange services are not impacted if an iSCSI connection issue occurs during
the regular Microsoft patching process, which reboots the Exchange VM. While not strictly a
requirement, we recommend a delayed or manual service start during patching windows so that
Exchange services come up only after the iSCSI configuration takes effect. Consider setting
all Exchange services to delayed start using the following command (note that sc.exe requires
a space after start=):
sc config SVCNAME start= delayed-auto
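To apply the delayed start to every Exchange service at once, a short PowerShell sketch such
as the following works (it assumes the default MSExchange* service-name prefix):

# Set all Microsoft Exchange services on this VM to delayed automatic start.
Get-Service -Name "MSExchange*" | ForEach-Object {
    sc.exe config $_.Name start= delayed-auto
}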
• Avoid running multiple Exchange VMs on the same node, even if they all fit within NUMA
boundaries. With vSphere, use Distributed Resource Scheduler (DRS) rules to keep Exchange
VMs on separate hosts, as in the sketch that follows.
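In a vSphere environment, a PowerCLI sketch along these lines (the cluster and VM names are
hypothetical) creates such a rule:

# Keep Exchange DAG members on separate hosts so a single host failure
# affects at most one database copy.
New-DrsRule -Cluster (Get-Cluster "NTNX-C1") -Name "Separate-Exchange-VMs" `
    -KeepTogether $false -VM (Get-VM "EXCH01", "EXCH02", "EXCH03", "EXCH04")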
6.4. Scalability
In our earlier sizing example, we were working with 65,000 Exchange users with a 1 GB mailbox
for each user.
A common question we see is how to expand the solution to support future mailbox growth.
With Nutanix, the answer is simple: add storage nodes to expand capacity. There is no
theoretical limit to the number of storage nodes you can add to a cluster, which means that you
can:
• Start small and scale only when required.
• Avoid Day One oversizing, reducing CapEx.
• Avoid the ongoing OpEx required to maintain oversized infrastructure.
• Enjoy the benefits of the latest CPU, RAM, networking, and flash technologies as you scale.
To increase mailbox size from 1 GB to 2 GB, 4 GB, or even 8 GB, progressively add more
storage nodes to the cluster. The following figure details the number of HCI and storage nodes
required for a variety of mailbox sizes.
Storage nodes used in this example have the following minimum specifications:
• 1 Intel Skylake processor (2.4 GHz or above)
• 96 GB RAM (preferably 192 GB)
• 4 x 1,920 GB SSD
• 8 x 12 TB HDD
• 2 x 10 GbE Network Interface Card
When adding new nodes to meet growing mailbox capacity requirements, you can migrate MS
Exchange VMs to newer nodes without any service interruption, increasing performance without
any changes at the application layer.
Similarly, when nodes reach end-of-life (EOL), you can add new nodes to the cluster and move
MS Exchange VMs off EOL hardware without any data migration or downtime, resulting in
increased performance without any expert storage, virtualization, or application-level knowledge.
Tip: Be sure that you also safeguard data at the Exchange (application) layer using
DAGs to protect against application and operating system-level failures.
7.1. Single Site: Two DAG Copies in One Failure Domain (Least
Recommended)
In this scenario, the administrator deploys a single Nutanix cluster that hosts two or more
Exchange Mailbox VMs configured at the application layer with two DAG copies.
The advantages of this deployment type are:
• Simple, cost-effective solution without silos of storage capacity.
• Infrastructure delivering ~99.999 percent availability.
The disadvantage of this deployment type is:
• In the unlikely event of a site or cluster-wide failure, the failure affects all Mailbox servers.
7.3. Multisite: Two or More DAG Copies in Two or More Failure Domains
(Most Recommended)
In this scenario, the administrator deploys two or more clusters across two or more sites that
each host Exchange Mailbox VMs. Exchange is configured at the application layer with two or
more DAG copies spread equally across the sites and clusters.
The advantage of this deployment type is:
• In the unlikely event of a total site or cluster failure, mail services continue to function, though
in a potentially degraded state.
The disadvantages of this deployment type are:
• Requires multiple sites with WAN connectivity.
• Significantly increased cost.
Multisite: two or more DAG copies in two or more physical failure domains | High availability due to multiple DAG and distributed storage fabric copies | Very high: cluster provides node and HDD tolerance, rack and site-level tolerance | Intrasite and intersite recoverability
directly integrate with both Nutanix AOS and MS Exchange at the application layer. With
these alliance partners, backing up MS Exchange no longer depends on a single vendor. Visit the
Nutanix Technology Alliance Partners page for more information.
9.1. General
• Perform a current state analysis to identify workloads and sizing.
• Spend time up front to architect a solution that meets both current and future needs.
• Design to deliver consistent performance, reliability, and scale.
• Don’t undersize, don’t oversize—right-size.
• Start with a proof of concept, then test, optimize, iterate, and scale.
9.3. VMware
• Use the VMXNET3 NIC.
• Use the latest VMware VM hardware version.
9.4. RAM
• Do not overcommit RAM at the hypervisor host level on ESXi or Hyper-V. (RAM
overcommitment is not possible on AHV.)
• Size each VM to fit within a NUMA node’s memory footprint.
9.5. vCPUs
• Do not overallocate vCPUs on Exchange VMs.
• Do not oversubscribe CPU.
• Account for Nutanix CVM core usage.
• Size VMs to fit within one NUMA node.
9.6. Networking
• Use hypervisor network control mechanisms (for example, VMware NIOC) to ensure minimum
bandwidth for vMotion.
• Use VMware load-based teaming with the VMware vSphere Distributed Switch (vDS).
• Connect Nutanix nodes with redundant 10 Gbps connections as a minimum.
• Use multi-NIC vMotion or live migration.
• Let the hypervisor and DRS manage noisy neighbors and workload spikes.
• Design for and maintain a minimum of N+1 redundancy in all Nutanix clusters.
• Use hypervisor antiaffinity rules to separate Exchange DAG member VMs.
• Use a percentage-based admission control policy for VMware environments.
9.9. Monitoring
• Choose an enterprise monitoring solution for all Exchange Servers.
• Closely monitor drive space (see the sketch after this list).
• Closely monitor application logs during maintenance windows (especially on iSCSI in-guest
volumes).
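As a starting point, a PowerShell sketch like the following (the 20 percent threshold is an
assumption) flags volumes on an Exchange VM that are running low on space:

# Report fixed volumes with less than 20 percent free space.
Get-Volume | Where-Object { $_.DriveType -eq 'Fixed' -and $_.Size -gt 0 } |
    Where-Object { ($_.SizeRemaining / $_.Size) -lt 0.2 } |
    Select-Object DriveLetter, FileSystemLabel,
        @{ Name = 'FreePct'; Expression = { [math]::Round(100 * $_.SizeRemaining / $_.Size, 1) } }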
9.10. Manageability
• Standardize on your Exchange Server build and cumulative updates.
• Use standard drive letters or mount points.
• Use VM templates.
• Join the Exchange Server to the domain and use Windows authentication.
• Test patches and roll them out in a staggered manner during maintenance windows.
10. Conclusion
Microsoft Exchange Server deployments are crucial to organizations, as email is the backbone
of business communication. At the same time, enterprises are virtualizing Exchange Server to
shrink their datacenter footprint, control costs, and accelerate provisioning. The Nutanix platform
provides the ability to:
• Consolidate all types of Exchange Server VMs onto a single converged platform with excellent
performance.
• Start small and scale Exchange Server VMs as your needs grow. Resize the solution when
the requirements change instead of guessing the number of Exchange Server VMs required.
• Eliminate planned downtime and protect against unplanned issues to deliver continuous
availability of critical databases.
• Reduce operational complexity by using simple, consumer-grade management with complete
insight into application and infrastructure components and performance.
• Keep pace with rapidly growing business needs, without large up-front investments or
disruptive forklift upgrades.
Appendix
References
Exchange Server Resources
1. 2016 Sizing Guidelines
2. 2019 Sizing Guidelines
3. Exchange Supportability Matrix
Other Resources
1. Nutanix Third-Party Hardware Compatibility Lists
2. Nutanix Compatibility Matrix (including guest OS options supported on Nutanix AHV)
3. VMware Compatibility Guide (including guest OS options supported on VMware ESXi)
4. Hyper-V Supported Guest OS
5. Nutanix Software Documents (including release notes)
6. Nutanix End-of-Life Information
7. Microsoft Life Cycle Policy (to validate that your versions of Exchange Server and Windows
Server are in support)
8. Server Memory Configuration Options
Nutanix Networking
1. Nutanix AHV Networking best practice guide
2. vSphere Networking with Nutanix best practice guide
About Nutanix
Nutanix makes infrastructure invisible, elevating IT to focus on the applications and services that
power their business. The Nutanix enterprise cloud software leverages web-scale engineering
and consumer-grade design to natively converge compute, virtualization, and storage into
a resilient, software-defined solution with rich machine intelligence. The result is predictable
performance, cloud-like infrastructure consumption, robust security, and seamless application
mobility for a broad range of enterprise applications. Learn more at www.nutanix.com or follow us
on Twitter @nutanix.
List of Figures
Figure 1: Nutanix Enterprise Cloud OS Stack................................................................... 9
Figure 17: Suggested Initial Drive Layout for Exchange Server VMs on Nutanix AHV.... 35
Figure 18: Suggested Disk Layout for Exchange Server VMs on VMware vSphere or
Microsoft Hyper-V........................................................................................................ 35
Figure 19: Suggested Disk Layout for Exchange Server VMs on VMware vSphere with
Volumes........................................................................................................................ 36
List of Tables
Table 1: Document Version History................................................................................... 6