
HPE Reference Architecture for
Microsoft SharePoint Server 2016 on
HPE BladeSystem with MSA 2040 Storage

Reference Architecture

Contents
Executive summary ................................................................................................................................................................................................................................................................................................................................ 3
Solution overview ..................................................................................................................................................................................................................................................................................................................................... 3
Solution components............................................................................................................................................................................................................................................................................................................................6
HPE BladeSystem c7000 architecture ..........................................................................................................................................................................................................................................................................6
HPE ProLiant BL460c Gen9 Server Blade ................................................................................................................................................................................................................................................................ 7
HPE MSA 2040 Storage system ......................................................................................................................................................................................................................................................................................... 8
D2700 Disk Enclosure ..................................................................................................................................................................................................................................................................................................................9
VMware vSphere clusters ...................................................................................................................................................................................................................................................................................................... 10
SharePoint Server 2016 components ........................................................................................................................................................................................................................................................................ 10
Design principles .................................................................................................................................................................................................................................................................................................................................. 10
High availability requirements ........................................................................................................................................................................................................................................................................................... 10
High availability features ........................................................................................................................................................................................................................................................................................................ 10
SQL Server AlwaysOn HA features ............................................................................................................................................................................................................................................................................... 11
SharePoint HA features ........................................................................................................................................................................................................................................................................................................... 12
Best practices and configuration guidance .................................................................................................................................................................................................................................................................. 13
Server power mode...................................................................................................................................................................................................................................................................................................................... 13
Configuring HPE MSA 2040 storage.......................................................................................................................................................................................................................................................................... 14
SharePoint............................................................................................................................................................................................................................................................................................................................................20
SQL Server AlwaysOn ............................................................................................................................................................................................................................................................................................................... 21
SharePoint Server 2016 capacity and sizing ............................................................................................................................................................................................................................................................. 24
Workload overview ....................................................................................................................................................................................................................................................................................................................... 24
Sizing the solution ........................................................................................................................................................................................................................................................................................................................ 25
Sizing the storage ......................................................................................................................................................................................................................................................................................................................... 26
Content storage calculations .............................................................................................................................................................................................................................................................................................. 27
Networks ............................................................................................................................................................................................................................................................................................................................................... 28
Lab test environment ................................................................................................................................................................................................................................................................................................................ 28
SharePoint Server 2016 testing overview ............................................................................................................................................................................................................................................................. 33
SharePoint 2016 test results and analysis ............................................................................................................................................................................................................................................................ 34
Summary ...................................................................................................................................................................................................................................................................................................................................................... 42
Implementing a proof-of-concept......................................................................................................................................................................................................................................................................................... 43
Appendix A: Bill of materials ...................................................................................................................................................................................................................................................................................................... 43
Appendix B: SharePoint test methodology .................................................................................................................................................................................................................................................................. 44
Test process ....................................................................................................................................................................................................................................................................................................................................... 44
SharePoint workload description .................................................................................................................................................................................................................................................................................... 47
Resources and additional links ................................................................................................................................................................................................................................................................................................ 50

Executive summary
Balancing the requirements of mission-critical applications on a hardware infrastructure presents numerous challenges and requires significant
upfront planning to ensure acceptable performance, uptime, and resources. This paper presents a virtualized design for implementing and
integrating Microsoft® SharePoint Server 2016 with SQL Server 2014 on an HPE BladeSystem Gen9 c-Class architecture. The design leverages
highly available components and incorporates Hewlett Packard Enterprise and Microsoft best practices for SharePoint 2016. This design was
created by an HPE UC&C Solutions Engineering Team to deliver performance and stability, as well as to provide reserve capacity on the HPE
ProLiant BL460c Gen9 host servers to allow for running additional applications, while also allowing sufficient capacity to accommodate future
solution expansion based on evolving business needs.

HPE ProLiant BL460c server blades provide essentially the same performance as their rack-mounted HPE ProLiant DL380 equivalents, but are
an ideal choice if the need is to deploy multiple CPU-intense applications, such as SharePoint, in a smaller “compute-dense” footprint and with
better power/cooling efficiency. Test results demonstrated that Gen9 server blades can deliver about a 20% performance (SharePoint
throughput) increase over Gen8 servers. This is an advantage if you are considering updating to SharePoint 2016 from a previous version, as
new versions typically require some degree of increased resources. A set of virtualized Web Server VMs running a total of 24 cores has been
shown during lab tests to deliver over 500 requests per second (RPS) when running a broadly applicable collaboration and document
management workload. Note that performance results will always be predicated on overall solution configuration, VM tuning and the workload
applied – thus your resulting performance will vary.

The HPE MSA 2040 SAN storage was chosen to provide an appropriate balance of price/performance suitable for the mid-market, while
delivering features such as tiered storage, high performance dual controllers and integrated management. Details of the MSA 2040 features and
capabilities are presented later in the “Solution components” section.

The solution described in this paper represents a Small and Medium Enterprise sized configuration of SharePoint Server 2016 Release
Candidate (RC) with SQL Server 2014 SP1, designed to support 1,000 users at a concurrency of 20%; with demonstrated scale-up to 2,500
users. The SharePoint and SQL applications are deployed as Virtual Machines (VMs) on HPE ProLiant BL460c Gen9 server blades, within a
single HPE BladeSystem c7000 enclosure on dual 8-node VMware® clusters (16 nodes total) providing a virtualization solution. The workload
consumes the resources of two physical HPE ProLiant BL460c Gen9 server blades, where each of the two server blades resides in a different
VMware vSphere Host cluster in the c7000 enclosure. The remaining server blades in the c7000 can be used by the customer to deploy other
virtualized applications by taking advantage of the two 8–node VMware clusters. Each of the HPE ProLiant BL460c Gen9 virtualization host
servers are configured with 24 cores and 256GB memory. The storage solution consists of one HPE MSA 2040 FC storage enclosure and a SAS
connected HPE D2700 Disk Enclosure. Each of the two enclosures can contain 24 small form factor (SFF) disks comprising a mix of drive types
(SSD, 10K, 7.2K); thus providing multiple tiers of storage for different purposes; and two identical sets of storage for use by the two clusters. Disk
configurations will vary depending on specific customer needs.
Target audience: The target audience for this Reference Architecture is Small and Mid-Size Enterprises that are considering the use of
virtualization (VMware) technology to deploy Microsoft SharePoint Server 2016 with SQL Server 2014 or later. It will be of particular interest to
those considering virtualizing applications on the HPE BladeSystem c7000 Enclosure. A working knowledge of virtualization technologies and
SharePoint deployment concepts is recommended.
Document purpose: This Reference Architecture provides a best practice design for Microsoft SharePoint Server 2016 configuration with
VMware vSphere 6.0, and describes the benefits of deploying that on a Gen9 HPE BladeSystem and HPE MSA 2040 Storage at mid-market
solution scale, with allowance in the design for future solution growth and running additional applications.

This white paper describes a project developed by Hewlett Packard Enterprise in April 2016.
Disclaimer: Products sold prior to the separation of Hewlett-Packard Company into Hewlett Packard Enterprise Company and HP Inc. on
November 1, 2015 may have a product name and model number that differ from current models.

Solution overview
Microsoft SharePoint 2016 facilitates business collaboration in the broadest sense and helps company workers, partners, and customers work
together more effectively. The solution presented in this paper describes a reference architecture providing high availability both from the
infrastructure and application level, based on best practices developed by HPE and Microsoft.

This Reference Architecture demonstrates a highly available solution designed to support 1,000 SharePoint 2016 users at a concurrency of
20% (200 active users at peak), and discusses scaling this up to 2,500 users (500 active users at peak load at the same 20% concurrency level)
performing a broadly applicable mix of document management and collaboration tasks.

The goal of system sizing and design is to define a solution that provides predictable, stable performance in support of the business workload. In
SharePoint, the workload is typically defined by the mix of functions the users will execute and by the required throughput. Throughput defines
the function frequency and intensity and is commonly defined in “requests per second” (RPS). The combination of functions and their RPS drives
the solution resource capacities required in terms of server CPU, memory, storage performance and capacity, and network traffic for the
application services that will be deployed on each server in the solution. SharePoint is ideally suited to scale-out deployment strategies using
multiple Virtual Machines (VMs) where each VM can be defined as to its service-specific resource needs (scale-up). Specific services can be
moved off the Front End (FE) servers and onto dedicated application servers to provide the required service capacities. The SharePoint 2016
concept of “MinRole” further reinforces this separation of services onto specific role servers (or VMs) to provide enhanced performance – the
reader is encouraged to read the Microsoft article regarding SharePoint MinRole for more details. IT policies supporting business imperatives
may further define the configuration – a typical requirement being for a highly-available solution.

High availability (HA) can be defined as the solution having no single point of failure that would prevent users from accessing the solution
services at any point in time. Providing HA in a physical server solution involves multiple servers of each role type. For SharePoint this translates
to two SQL servers in an AlwaysOn configuration, a minimum of two SharePoint FE servers, and a minimum of two servers supporting the Search
and Application server roles. In a physical server solution this can easily result in a 6+ server deployment (11+ servers if full SharePoint 2016
MinRoles are adhered to) and the likely under-utilization of some of the server resources. An increasing number of SharePoint deployments are
now leveraging a virtualized environment with the various role services being deployed on specifically configured virtual machines (VMs). This
approach has a number of key advantages. By virtualizing SharePoint roles such as Web Server, Application servers and SQL Server you can
deploy fewer HPE ProLiant virtualization host servers to handle all the tasks previously supported by multiple physical servers. This in turn
reduces the costs around day-to-day operations (power, cooling, physical management, etc.). A further advantage is that each VM can be defined
as to exactly the resources needed by the services running on that VM. Further, as workloads, user behavior and business needs change over
time, the VM resources can be fine-tuned as part of a proactive capacity planning activity that ensures capacity and performance can meet
ongoing demands. Additional VMs can also be created to support changes in service needs, or to quickly deploy temporary requirements such as
a development or test server farm. It is common today for many SharePoint customers to deploy a Quality Assurance / User Acceptance Test
(QA/UAT) farm in parallel with their production farm, so the remaining capacity in the c7000 could be used in part for that purpose. High
availability can be provided by an appropriate combination of infrastructure technologies and by leveraging the HA design principles of
SharePoint and SQL Server. Virtualization can therefore provide a more cost-effective solution with more efficient use of resources and easier
management. The HPE BladeSystem allows in-enclosure expansion up to 16 servers total, so by deploying additional BL460c servers (if your
c7000 enclosure has available space) or assigning/re-purposing existing un-used servers, as may be needed, you can scale-up/-out the virtual
machines (VMs) to support an increased workload or more applications as business needs evolve over time.

The solution described in this reference architecture guide assumes that the customer has already invested in, or plans to deploy HPE blade
technology using a c7000 blade enclosure containing up to 16 BL460c Gen9 servers. The server blades in the c7000 enclosure are divided into
two clusters to establish a cluster failure domain and to provide high availability at the virtualization layer. For purposes of discussion and
solution design, it is assumed the c7000 contains 16 BL460c servers arranged into two identical VMware clusters on nodes 1…8 and nodes
9…16. The FC-connected MSA 2040 and D2700 based SAN storage volumes are configured to be presented to appropriate servers in each
cluster. Further details as to storage design are contained in a later section of this paper. Figure 1 shows example physical ESXi server names and
grouping within each of the two VMware vSphere clusters in the BladeSystem c7000 enclosure.

Figure 1. Example cluster and server location within BladeSystem c7000 enclosure

The highly available deployment model developed for SharePoint 2016 consisted of four SharePoint VMs (for the Web Server and Custom roles)
and two SQL Server 2014 VMs configured with AlwaysOn clustering, split across two hosts. This requires a total of 14 cores per host for 1,000 users,
scaling up to 22 cores per host for 2,500 users, on each of two ESXi host servers split evenly across two VMware clusters. The solution needs an estimated
3TB of storage capacity overall for databases, content and backups for the configured SharePoint workload. As SQL AlwaysOn is being used to
provide two separate replicas, the intent is to provide this storage for the two SQL VMs from both the MSA and the D2700 storage enclosures.
The later section covering Design principles describes how the solution components were leveraged to provide all the HA and redundancy
requirements of a mid-market sized solution. Although your specific configuration needs (users and workload) may differ from those discussed,
the design discussion and process of determining a solution will still be relevant and broadly applicable.

The physical servers shown in Figure 1 as VMware host SCS01 and host SCS09 are assumed to be ESXi hosts in separate VMware clusters to
establish a deliberate failure boundary. Hosts SCS08 and SCS16 were designated as the hot spare servers to provide reserve capacity in the
event one of the host servers failed or had to be taken out of service for maintenance (firmware updates, etc.); requiring the hosted VMs to be
moved to a temporary host. Alternate methods are provided by VMware to reserve capacity, and these are discussed later.

Figure 2 shows the highly available SharePoint farm configuration. It provides a separation of various SharePoint service roles (i.e., Web Server
and Search/Application services) onto separate VMs, thus maximizing efficiency and allowing for precise VM tuning matching each role. The
design also leverages the built-in SharePoint Application Role Balancing Service to apportion the service load across multiple App Server VMs.
This design, with deliberate over-sizing, also handles the unlikely event of a VM failure whereby the surviving VMs can handle the total load; or
the deliberate event of taking down a VM or host for periodic maintenance while continuing to provide the service to users.

[Figure content: two VMware hosts, each hosting a Front End Server VM, a Custom Server VM, and a SQL Database Server VM. SQL Server 2014 is
installed and configured to support SQL AlwaysOn, which requires SQL Server 2012 or later.]
Figure 2. SharePoint/SQL role VM design

Note that the above is a simplified version of the full MinRole scale-out that can be accomplished with SharePoint 2016: the Front End servers run
the Front End MinRole with only the related services on those VMs (plus the Search Query component and Distributed Cache service), while the
remaining services run on "Custom" MinRole servers. The later section called Design principles discusses SharePoint 2016 MinRole and its impact
and options in more detail.

Solution components
The collaboration solution presented herein is comprised of the SharePoint Server 2016 RC (Release Candidate, as tested) and SQL Server 2014
SP1 applications running on a Windows Server® 2012 R2 guest OS, on multiple VMware vSphere 6.0 (or later) virtual machines (VMs) on HPE
ProLiant BL460c Gen9 servers. The following sections summarize the key features of these components.

HPE BladeSystem c7000 architecture


The HPE BladeSystem c-Class architecture is a flexible infrastructure that makes the computing, network, and storage resources easy to install
and arrange. It creates a general-purpose infrastructure that accommodates your changing business needs. Shared cooling, power, and
management resources support the flexible, modular components. The BladeSystem c-Class architecture is implemented in the HPE
BladeSystem c7000 Enclosure. A 10U-sized unit can support:

• Half-height blades, full-height blades, or a mix of both. The blades can be ProLiant server blades, ProLiant workstation blades, Integrity server blades, or
storage blades
• Up to eight interconnect modules. They can be Virtual Connect Fibre Channel Modules, Virtual Connect FlexFabric Modules, Virtual Connect
Flex-10/10D Modules or Ethernet Switches
• One or two Onboard Administrator (OA) management modules. A second Onboard Administrator module acts as a redundant controller in an
active-standby mode
• The Insight Display panel on the front of the enclosure provides an easy way to access the Onboard Administrator locally
• Up to 10 Active Cooling Fans
• Up to 6 power supplies

Front-loading server or storage blades and rear-loading interconnect modules slide into the enclosure and connect with a single mid-plane that
provides data connections, including those for optional interfaces. LAN on Motherboard (LOM) or FlexibleLOM adapters and optional mezzanine
cards on the server blades route network signals to the interconnect modules in the rear of the enclosure. Half-height server blades have two
mezzanine slots; full-height blades have three slots. BladeSystem interconnect modules support a variety of networking standards, such as
Ethernet, Fibre Channel, Fibre Channel over Ethernet (FCoE), iSCSI, and Serial Attached SCSI (SAS). FlexibleLOM adapters are available for
different fabrics such as Ethernet or Fibre Channel over Ethernet (FCoE).

[Figure content: front view of the c7000 enclosure showing half-height and full-height server blades; back view showing interconnect modules,
Onboard Administrators, redundant fans, and redundant power supplies.]
Figure 3. Front and back views of the BladeSystem c7000 Enclosure (General Component overview)

The solution designed in this document consists of 16 Half-Height ProLiant BL460c Gen9 Server Blades connected in a single HPE
BladeSystem c7000 Enclosure. It uses HPE Virtual Connect FlexFabric 20/40 F8 Modules connected at Bay 1 and Bay 2 of the interconnect
bays at the rear of the c7000 Enclosure. There is one OA module and an option for a second OA module with the second one acting as a
redundant controller.

HPE ProLiant BL460c Gen9 Server Blade


The HPE ProLiant BL460c Gen9 Server Blade provides an enterprise-class server blade for the data center. It adapts to any demanding
environment, including virtualization, IT and Web infrastructure, collaborative systems, cloud, and high-performance computing. The Gen9 model
delivers a performance increase compared to previous models, using the Intel® Xeon® E5-2600 v3 and v4 processors.

Figure 4. HPE ProLiant BL460c Gen9 Server Blade (Component Layout)

For purposes of performance test comparison, tests were run using both Gen8 and Gen9 BL460c Server Blades, and results clearly show the
performance increase realized by running the latest Gen9 servers.

• The HPE ProLiant BL460c Gen8 Servers use two Intel Xeon E5-2650 @ 2.0GHz processors per server blade. Each processor has 8 cores with
Hyper-Threading enabled
• The HPE ProLiant BL460c Gen9 Servers use two Intel Xeon E5-2690 v3 @ 2.60GHz processors per server blade. Each processor has 12
cores with Hyper-Threading enabled

Each Gen9 server blade has HPE SmartMemory with DDR4 which runs at speeds up to 2,133MHz.

HPE MSA 2040 Storage system


The HPE MSA 2040 Storage arrays address HPE ProLiant customers' shared storage and data protection needs, reducing TCO while dramatically
increasing performance through technologies such as Solid State Drives (SSDs), Snap and Volume copy, replication and Self-Encrypting Drives
(SEDs).

The MSA 2040 Storage arrays are positioned to provide an excellent value for customers needing increasing performance to support initiatives
such as consolidation and virtualization. The MSA 2040 delivers this performance by leveraging a new controller architecture with a new
processor, four ports with 4GB cache per controller and using drive technologies such as SSDs.

The MSA 2040 Storage ships standard with 64 snapshots and Volume Copy enabled for increased data protection. It also supports replication
between arrays with the optional Remote Snap software (FC or iSCSI only).
MSA 2040 key benefits
Simple: Flexible architecture. Easy to set up. Easy to manage.

• Deploy single or dual controllers depending on high-availability and budgetary requirements


• Select disk enclosures with Large or Small Form Factor drives – choice of high-performance Solid State Drives, or Enterprise-class SAS
• Configure and manage the storage solution – integrated web-based management tools are easy to use
Fast: The MSA 2040 sets new standards for $/IOPS in entry SAN, up to 4X today’s competition.

• New high-performance Converged SAN controller and SAS controller offer 4x the performance of today’s other entry-level SAN arrays
• 4-port Converged SAN controller supporting Fibre Channel and iSCSI and a SAS Controller with 4GB Cache translating into better application
response time and ability to support more virtualized environments
• SSD support with integrated “wear gauge” helps improve application performance and allows customers to reduce their operating costs by
reducing foot print and power consumption
• Self-Encrypting Drives (SEDs) designed to safeguard critical personal and business information and to comply with regulatory mandates

Future proof: 2X the bandwidth – the MSA 2040 was the first entry SAN with 16Gb FC and 12Gb SAS

• Choice of 8Gb/16Gb FC, 1GbE/10GbE iSCSI and 12Gb SAS to match your configuration needs for storage connectivity
• The 4th generation of MSA’s unique Data-in-Place upgrades provides investment protection
• Host ports upgradeable via SFPs (8Gb/16Gb FC or 1GbE/10GbE iSCSI) help customers lower their total cost of ownership

Figure 5. HPE MSA 2040 SFF Front View

D2700 Disk Enclosure


The HPE D2700 Disk Enclosure is a 6Gb SAS low cost, high capacity, tiered external storage system. It is a Small Form Factor (SFF) Enclosure
with 25 drive bays and offers flexible, modular solutions to simplify capacity expansion of HPE ProLiant server environments and MSA Storage
arrays (SFF only).

Figure 6. HPE D2700 SFF Front View



In the solution example presented in this paper, the MSA 2040 and D2700 were both configured with multiple drive types to provide tiered
storage (4 x 400GB SSD, 16 x 600GB 10K, 4 x 2TB 7.2K disks). This provides more total storage than is needed for the SharePoint databases
and content, but it is assumed the storage is also available to other nodes in the clusters and thus for use by other applications as needed. The
mix of drive types also facilitates tiered storage, although the number and mix of disks will depend on specific customer needs. A later section
presents a methodology and example for sizing the storage.

VMware vSphere clusters


Each HPE ProLiant BL460c Gen9 server in the tested solution is running VMware ESXi 6.0 U1, with the 16 servers running in two separate 8-
node vSphere clusters (nodes 1…8 and nodes 9…16). This provides highly available host servers for virtual machines (VMs), as VMs running on a
specific server in the cluster can be manually or automatically migrated via vMotion to a different host server in the cluster if needed. This may
be because of a deliberate server shutdown to perform updates or maintenance, or an unplanned server shutdown. The vSphere clusters and
vMotion form a part of the infrastructure-provided high availability (HA) capabilities.

Reserve capacity can also be allocated in the event VMs need to be temporarily migrated to a different host. Several methods exist, supported by
VMware 5.5 or later, including simply defining two nodes (one in each cluster) to be “Hot spares”. Another method might be to use VMware
“Admission Control” to define a specific number of vCPU cores to be held in reserve such that VM hosts are not over-committed in the event of
needing to migrate VMs.
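
To make the Admission Control option concrete, the following minimal sketch (illustrative arithmetic only, not a VMware API example) uses this
paper's host specification of 24 cores and 256GB memory in 8-node clusters to show the percentage of cluster resources that would be reserved
to keep one host's worth of capacity free:

```python
# Sketch: percentage-based admission control reserve equal to one host's capacity.
# Host values (24 cores, 256 GB RAM) and cluster size (8 nodes) come from this paper;
# the calculation itself is generic.

HOSTS_PER_CLUSTER = 8
CORES_PER_HOST = 24
MEMORY_GB_PER_HOST = 256

def reserve_for_n_host_failures(n_failures: int = 1):
    """Return (cpu_reserve_pct, mem_reserve_pct) that keeps n hosts' worth of capacity free."""
    cpu_pct = 100.0 * n_failures / HOSTS_PER_CLUSTER
    mem_pct = 100.0 * n_failures / HOSTS_PER_CLUSTER
    return cpu_pct, mem_pct

if __name__ == "__main__":
    cpu_pct, mem_pct = reserve_for_n_host_failures(1)
    print(f"Reserve {cpu_pct:.1f}% CPU ({CORES_PER_HOST} cores) and "
          f"{mem_pct:.1f}% memory ({MEMORY_GB_PER_HOST} GB) per 8-node cluster")
```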

SharePoint Server 2016 components


The components used to deploy the single-site SharePoint 2016 farm in this RA comprise the following products and versions:
Database server:

• Microsoft SQL Server 2014 SP1 or later


• Windows Server 2012 R2 Standard
• Note that SQL Server 2012 or later is required to deploy AlwaysOn, as described in this document

SharePoint 2016 server:

• SharePoint Server 2016 RC (Release Candidate, as tested)


• Windows Server 2012 R2 Standard

The SharePoint 2016 farm databases were stored on a SQL Server 2014 AlwaysOn highly-available two-node configuration using two replicas
and separate storage datastores for each node. This ensures both the availability of the SQL service and also ensures the databases will be
available from at least one source (replica) should an issue occur. All VMs were using Windows Server 2012 R2 as the guest OS.

Design principles
These sections describe the various component features and how they were leveraged to provide a solution with HA and redundancy to meet
mid-market requirements. They summarize the key principles behind the overall solution design.

High availability requirements


HA can be defined as the elimination of a single point of failure, generally accomplished by the appropriate rapid recovery from failure of a
component, and the provisioning of multiple redundant components where required to prevent loss of service. HA can be designed into both the
underlying hardware infrastructure (compute/memory, storage, networks) and into the configuration of the applications (multiple database
servers/replicas and multiple redundant load-balanced services). For example, part of the solution design involved determining the specific VM
locations on the hosts so that the types of resources used were balanced across the clusters and cluster hosts. This resulted in balancing of CPU,
memory, and storage access all while taking into account cluster failure domains to ensure that no single point of failure was introduced into the
solution.

High availability features


The configuration design as shown in this paper employs high availability both from the hardware/infrastructure level as well as from the
application level. It is intended that the 16 server blades in the HPE BladeSystem c7000 Enclosure are divided into two clusters with each cluster
having a hot spare server (or other method such as VMware Admission Control used to ensure reserve vCPU resources). The application
software configuration leverages these capabilities while also employing HA features of the software to minimize impact to the user and provide
a predictable level of availability and performance.

An analysis of the intended SQL and SharePoint services deployment across a total of 6 VMs, plus required core counts and separation of
services equally across 2 clusters, yielded a design with the VMs hosted on one server in each cluster – Host 1 in cluster 1; and Host 9 in cluster
2. Servers 8 and 16 are assumed to be the “hot spares” as shown in Figure 7. An alternate method for providing reserve resources would be to
use the VMware “Admission Control” capability which can be configured to keep a specific number of “cores” in reserve spread across multiple
hosts if required.

Either method of designating reserve capacity ensures sufficient resources are available for each VM from each host server should the migration
(vMotion in VMware parlance) of a VM be required from one server to another. This will usually be to support planned maintenance activity
requiring a Host server be temporarily removed from service (e.g., software/driver/firmware updates). Figure 7 illustrates the clusters and the
servers hosting the required VMs.

[Figure content: Cluster01 (sHosts01) comprises host servers SCS01 through SCS08, with the Web Server A, APP Server A, and SQL Server A VMs
running on SCS01. Cluster02 (sHosts02) comprises host servers SCS09 through SCS16, with the Web Server B, APP Server B, and SQL Server B
VMs running on SCS09. SCS01 is the host server used from Cluster01 and SCS09 is the host server used from Cluster02.]
Figure 7. SQL/SharePoint VM deployment into two VMware clusters

The two VMware clusters provide a deliberate failure boundary that can be leveraged by the applications.

SQL Server AlwaysOn HA features


SQL Server AlwaysOn provides a highly-available solution that both ensures service availability and protects the integrity and availability of the
stored data by leveraging multiple replicas stored on separate storage. It is intended to replace the previously popular “mirroring” capability that
has now been deprecated in favor of this new technology.

The feature provides the SQL Availability Group capability, not dissimilar in concept to that used by Microsoft Exchange Database Availability
Groups (DAGs). It is based on a two-node SQL Server deployment where the servers (VMs in this case) reside in a Windows® 2012 R2 Failover
Cluster; however, the storage for the two nodes is not shared but rather deliberately separate, enabling the SQL Availability Group to provide two
replicas, one each on the Primary and Secondary SQL nodes. The Primary node is in communication with SharePoint and performs the
database/file operations as needed, and the Secondary node replicates the operations to ensure both replicas are in constant total sync.
SharePoint can leverage this technology directly. In our example configuration the data stores for each replica were located on separate virtual
volumes comprising disks housed in the MSA and the D2700 enclosures.

SharePoint HA features
Based on the underlying VMware cluster architecture providing HA capabilities along with SQL Server AlwaysOn providing highly available
services and data, and having knowledge of some of the SharePoint configuration options, we designed the SharePoint 2016 deployment and
configuration on HPE BladeSystem c7000 architecture. An HA mid-market sized deployment will only take a modest portion of the total
resources available on the BL460c Gen9 server blades in the enclosure; however the solution can be appropriately scaled up and out as needed
by expanding further into BL460c Gen9 server resources and deploying additional VMs or VMs with more resources assigned. As
configured in this paper, a 1,000 user configuration uses at most about 50% of each host server's resources. This allows for in-place scale-up of
the SharePoint configuration by increasing core count per VM – the examples shown illustrate scaling up to 2,500 users by leveraging more
vCPUs. The solution also allows for the deployment of other applications on the resources in the other BL460c Gen9 servers deployed in the
c7000 enclosure.

A solution supporting mid-market sized needs is shown in Figure 8. Note that a total of two active host servers are needed to support
deployment of the various VMs onto different hosts. Scale-up from 1,000 to 2,500 users can be accomplished in-server by simply increasing the
vCPU count as shown in detail later.

[Figure content: two VMware hosts, each hosting a Front End Server VM, a Custom Server VM, and a SQL Database Server VM. SQL Server 2014 is
installed and configured to support SQL AlwaysOn, which requires SQL Server 2012 or later.]

Figure 8. Extended HA SharePoint 2016 VM configuration

The easiest way to develop a practical SharePoint configuration is to use the HPE Sizer for Microsoft SharePoint 2016, which takes detailed
workload input and provides guidance on a physical or virtual solution design, including deployment of specific VMs on the various solution
servers.
SharePoint 2016 MinRole
In prior versions of SharePoint the so-called server "role" was implied by the collection of services running on that server – thus names like Web
Front End, Application server, Search server, etc. became common and their purpose was well understood. With the advent of MinRole, SharePoint
2016 changes this: a MinRole name is selected when running the configuration wizard, which causes the server to adopt that role and run only the
specific services associated with it, for reasons of performance optimization. For more information, refer to this Microsoft article.
MinRole server roles are:
• Front End
• Application
• Distributed cache
• Search
• Custom

In order to achieve HA, a minimum of two (2) servers of each role type (and 3 for Distributed cache) is required for a full MinRole deployment,
totaling 11 servers or VMs. SharePoint can be initially deployed with fewer roles and servers (VMs) and later be scaled out, with roles further
defined as requirements change over time. In the example presented herein, we have used a streamlined model with 2 x Front End VMs (also
running the Search Query component and Distributed Cache service) and 2 x Custom VMs running all remaining services. If needs changed in the
future as demand grew, further VMs could be added and set up to act as dedicated Search or Application roles, for example.
SharePoint 2016 Front End Server HA
The SharePoint 2016 Front End Server acts as a gateway for all requests from the clients for content or services. Every client’s request passes
through a Front End Server and every response to a client is sent from an FE server. Front End Servers host web pages, web services and web
parts that are required to process requests from users; and in this design also host the Distributed Cache Service and the Search Query
components. The Front End Server redirects the requests to the application server which processes the requests and gives the results back to
the client via the Front End server.

In this design, we have two FE servers running on virtual machines hosted on two blades which are part of separate VMware clusters. The Front
End servers are sized to handle a throughput load of 100 RPS for 1,000 users which can be scaled up to 250 RPS for 2,500 users. The section
discussing Capacity and sizing explains the algorithms for converting users and workload into a goal throughput requirement.
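
As a simple illustration of that conversion, the sketch below derives the throughput goals quoted above; note that the per-active-user request rate
of 0.5 RPS is an assumed value chosen to reproduce the 100 RPS and 250 RPS targets, not a figure taken from the HPE Sizer:

```python
# Sketch: converting a user population into a target throughput (requests per second).
# The 20% concurrency comes from this paper; the per-concurrent-user request rate of
# 0.5 RPS is an assumed value that reproduces the 100/250 RPS goals quoted above.

def target_rps(total_users: int, concurrency: float = 0.20,
               rps_per_active_user: float = 0.5) -> float:
    active_users = total_users * concurrency       # users generating load at peak
    return active_users * rps_per_active_user      # aggregate farm throughput goal

for users in (1000, 2500):
    print(f"{users} users -> {target_rps(users):.0f} RPS target")
# 1000 users -> 100 RPS target
# 2500 users -> 250 RPS target
```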

Application services HA
Application servers host SharePoint service applications, rather than the web server related services. In this mid-market design we deployed the
majority of search and application services to these two dedicated servers running as “Custom” MinRole servers. There are two such “custom”
application servers running on virtual machines hosted on two different VMware host server blades which are part of separate VMware clusters.
The built-in SharePoint Application Services Load Balancing service handles the balancing of services requests across the two Application
Servers and their services.

Best practices and configuration guidance


There are a number of configuration options when deploying SharePoint onto a virtualized solution using HPE ProLiant BL460c Gen9 server
blades. This section highlights the best practices used as part of the lab deployment with regard to configuring the host servers, MSA 2040
storage, the VMware datastores, and the virtual machines (VMs). Examples, commentary, and configuration guidance are provided, and speak to
many practical aspects of employing the design principles discussed previously.

Server power mode


The default power mode for the HPE ProLiant servers as delivered from the factory is HPE Dynamic Power Savings Mode. The recommendation
from the HPE UC&C Solutions team is to set the server BIOS to allow the operating system to manage power (OS Control Mode), and use the
“High performance” power plan in Windows. Since these servers will be hosting various virtual machines including the potentially high CPU
workload of the Web Server role, the power mode was changed, as shown in Figure 9, to OS Control Mode.

Note
Changing the server power mode from default HPE Dynamic Power Savings Mode to OS Control Mode has a significant benefit in performance
and should not be overlooked unless the end user validates that the power saving profile can sustain the production workload.

The default for Windows Server 2012 R2 is balanced power mode and this needs to be changed to high performance based on HPE’s
recommendation.
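
For reference, the Windows power plan change can also be scripted rather than set through the GUI. The sketch below is an assumption about
how an administrator might automate the step on each guest (it is not part of the tested deployment) and simply drives the built-in powercfg
utility, for which SCHEME_MIN is the documented alias of the High performance plan:

```python
# Sketch: switch Windows Server 2012 R2 from the default Balanced plan to High performance.
# Run locally on each guest with administrative rights; powercfg is built into Windows.
import subprocess

# SCHEME_MIN is the powercfg alias for the "High performance" power plan.
subprocess.run(["powercfg", "/setactive", "SCHEME_MIN"], check=True)

# Confirm the change by printing the now-active scheme.
result = subprocess.run(["powercfg", "/getactivescheme"],
                        capture_output=True, text=True, check=True)
print(result.stdout.strip())
```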

Figure 9. Changing Server power mode to OS Control Mode

Configuring HPE MSA 2040 storage


Configuration of the MSA 2040 is performed through the Storage Management Utility (SMU) v3, which is a web-based application for
configuring, monitoring and managing the storage system. Each controller module in the MSA 2040 has a web interface, a CLI interface, and an
FTP interface.

There are several steps required in configuring the MSA 2040 storage in order to create storage volumes and ultimately present these to the
desired BladeSystem host servers. Key steps are:
• Configuring the Host Bus Adapter (HBA) Initiators – this includes creating friendly nicknames for each initiator, rather than referring to
their complex WWN (World-Wide Names); and creating Host and Group collections that will make assigning volumes to sets of hosts/initiators
much easier
• Creating Disk Groups – this is forming physical disk drives into RAID sets, with optional dedicated spare drives
• Creating Volumes from the Disk Groups – each disk group can host single or multiple disk volumes
• Mapping Volumes to the Hosts – the first step of creating the Group/Host/Nicknames groupings greatly assists this final step

The MSA 2040 Storage Management Utility (SMU) presents views of the various steps needed to configure the storage, with relevant action
buttons for each screen. These screens are:

• System
• Hosts
• Pools
• Volumes
• Mapping

The following sections describe each screen and the actions performed.
System view
Figure 10 shows the System view which displays the slot location and the various disk types in the MSA 2040 (above) and D2700 (below)
enclosures. Each enclosure contains 4 x 400GB SSD, 16 x 600GB 10K, and 4 x 2TB 7.2K disks. The yellow colored highlights indicate disks that
are part of a disk group, with the gray highlights indicating dedicated spare disks. Despite the different orientations of the upper and lower
enclosures, you will note the same disk groups and spares have been created in each enclosure. These are described in more detail in the related
sections below. There are no relevant actions associated with this view – it is informational only.

Figure 10. System view of MSA 2040 and D2700 enclosures

Hosts and HBA grouping


Figure 11 shows the Hosts view. This contains tabular information about HBA initiators, hosts and host groups that are defined.

The table columns are:


• ID (Initiator WWNs) – this lists the World-Wide Names (WWNs) of the HBA Adapters present in the server blades that are discovered
automatically by the storage system controller. Each of the 8 BL460c server blades has 2 HBA adapters, thus a total of 16 initiators are
discovered as shown.
• Discovered – shows Yes for a discovered initiator, or No for manually created Initiator
• Mapped – Shows Yes for an Initiator that is mapped to Volumes, or No for an initiator that is not mapped
• Nickname – this is an alphanumeric name that is provided by the user to more easily identify the initiators; as opposed to using their complex
WWN. In the example shown we have chosen names that identify the HBA (1 or 2), the Cluster the server runs in (1 or 2), and the server slot
number (Node 9…16).
• Host – this is a further level of grouping for convenience that the user can employ to simplify volume target definitions. In the example shown
we have grouped the same HBAs and Clusters, thus we have reduced from 16 individual initiators down to 4 sets
• Group – a final level of grouping where in the example we have now combined both sets of HBAs into each of the two Clusters. It is now very
easy to assign volumes to all HBAs and hosts in either cluster by referencing Cluster1.*.* or Cluster2.*.*. The Related Maps section at the
bottom of the screen shows an example where the highlighted Cluster:Host:Nickname entry has been used to assign 5 specific volumes to
Cluster1.

Figure 11. Hosts view



Disk Groups
Figure 12 shows the Pools screen. This contains a tabular view of information about the pools of storage, referred to as disk groups, that are
created, and the specific physical disks used in each group, including dedicated spares. The table columns are:
• Name – the disk pool name
• Health – nominally OK
• Total size – the total useable space in the pool
• Class – all are Linear in our example (option is dynamic)
• Avail – the current remaining available space in the pool (not used by volumes)
• Volumes – the number of volumes created using the pool
• Disk Group – nominally 1 in our example

Various groups were created for each cluster, per the values in Table 1. The Cluster 1 groups were created using disks in the MSA 2040
enclosure, and the Cluster 2 groups using the D2700 disks. Table 1 shows examples for Cluster 1, with the pools for Cluster 2 being identical in
size/type. Note that we elected to configure a dedicated spare disk for each group, thus enabling auto-failover to the spare in the event of an
issue with a disk in the group.

It is a generally accepted practice to use RAID-5 to provide HA at a higher usable disk capacity than possible with RAID-1; however best
practices recommend that in higher I/O environments (larger solutions or more I/O intense workloads) RAID-1 or RAID-10 be used. Lab testing
of a solution of the size presented revealed that RAID-5 was adequate for providing good I/O performance in this case; however, the reader
should be aware that RAID-1 or RAID-10 may be required if their environment is more I/O intense; especially for LUNs (disk pools) holding SQL
Logs or highly active data. For details of SharePoint 2016 databases, their sizing and recommended placement, refer to this Microsoft article.
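
The usable-capacity trade-off between these RAID levels can be estimated with simple arithmetic. The sketch below is illustrative only; it ignores
formatting overhead and the dedicated spare disk, and uses the 600GB 10K drives from this configuration:

```python
# Sketch: approximate usable capacity per RAID level (formatting overhead and the
# dedicated spare disk are ignored). Drive size matches the 600GB 10K disks used here.

def usable_gb(raid_level: str, disk_count: int, disk_gb: int = 600) -> int:
    if raid_level == "RAID1":                    # mirrored pair
        return disk_gb
    if raid_level == "RAID5":                    # one disk's worth of parity
        return (disk_count - 1) * disk_gb
    if raid_level == "RAID10":                   # striped mirrors, half the raw space
        return (disk_count // 2) * disk_gb
    raise ValueError(raid_level)

print(usable_gb("RAID5", 3))    # ~1200 GB, comparable to the Cluster1-SPSQLVMDK pool
print(usable_gb("RAID5", 5))    # ~2400 GB, comparable to the Cluster1-SQLdata pool
print(usable_gb("RAID10", 4))   # ~1200 GB from the same drives at higher write performance
```
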
Table 1. Disk Pool definitions

Disk Pool            Class    Size (GB)   RAID level
Cluster1-Backups     Linear   1998        1
Cluster1-SPSQLVMDK   Linear   1199        5
Cluster1-SQLdata     Linear   2398        5
Cluster1-SQLlogs     Linear   599         5

The ClusterN-SPSQLVMDK pool was used to contain two volumes, with the other three pools containing one volume each. The Backups pools
leveraged the larger, slower 2TB 7.2K MDL disks.

Figure 12. Pools view

Note that selecting a specific Pool (Cluster1-SPSQLVMDK in the example above) also displays the related Disk Group (Pool name and
attributes), and also a list of the specific physical disks used in the pool (Related Disks). In the example shown, the Location column shows the
pool comprises disks 5, 6, and 7 from enclosure 1 (the MSA 2040); with disk 8 being the dedicated spare.

Disk Volumes
Figure 13 shows the Volumes screen. This lists the volumes that have been created in the disk pools, with key information being their Name, Size,
and Pool (disk group used to host the volume). Selecting (highlighting) a specific volume also shows the volume-to-host mapping in the Related
Maps section of the display.

Five volumes were created from the previously created pools for each cluster. Note Cluster1 examples are shown but Cluster 2 are created
similarly from the Cluster2-xxx pools.
• Cluster1-OSVMDK and Cluster1-SPDATA volumes are both created from pool Cluster1-SPSQLVMDK.
• Cluster1-SQLdata1 volume is created from pool Cluster1-SQLdata.
• Cluster1-SQLlogs1 is created from pool Cluster1-SQLlogs.
• Cluster1-Backups1 Volume is created from the Cluster1-Backups pool.

Figure 13. Volumes view



Mapping volumes to initiators, hosts, and clusters


Having defined the initiator nicknames and host groups, defined the disk pools, and created the disk volumes, we can now assign the
volumes to specific hosts. In our case we want to assign the 5 volumes to all nodes in each cluster, so we can use the ClusterN.*.* designation to
indicate all HBAs in all nodes in each cluster. The LUN number is assigned automatically by the configuration tool, starting with LUN 0 for the first
volume mapped, and auto-incrementing for each subsequent volume associated with that host (Group:Host:Nickname definition).

Figure 14 shows the final state of the storage configuration with each of the 5 volumes being mapped to each cluster.

Figure 14. Mapping view

At this point all MSA-related storage configuration is done, including mapping volumes to HBAs/Hosts/Clusters. We can now create the required
VMware Datastores from the volumes presented to the hosts in each cluster using the vSphere Web Client, and reference those Datastores when
defining the required SQL and SharePoint VM storage. The following sections describe related best practices for SharePoint and SQL Server.

SharePoint
The two SharePoint FE servers, two SharePoint application (custom) servers and the two SQL Server virtual machines are created with the
following resources as shown in Table 2. Note that core/socket counts (vCPUs) are shown for both 1,000 and 2,500 user configurations. The
proposed memory should be adequate for either user population, although proactive monitoring and tuning should always be performed on the
running systems to ensure resources meet the workload requirements.
Table 2. SQL and SharePoint Virtual Machine specifications

VM Names    VM Roles               ESXi Host   Cores per VM      Cores per VM      vRAM   System      Data disk 1            Data disk 2
                                   Blade       for 1,000 users   for 2,500 users   (GB)   Disk (GB)
FE1-Gen8    SharePoint Front End   1           6                 12                20     120         200 GB (I:)            n/a
FE2-Gen9    SharePoint Front End   9           6                 12                20     120         200 GB (I:)            n/a
APP1-Gen8   SharePoint Custom      1           4                 6                 20     120         200 GB (I:)            n/a
APP2-Gen9   SharePoint Custom      9           4                 6                 20     120         200 GB (I:)            n/a
SQL1-Gen8   SQL Server             1           4                 4                 32     120         2 TB (M:, SQL data)    500 GB (L:, SQL Logs)
SQL2-Gen9   SQL Server             9           4                 4                 32     120         2 TB (M:, SQL data)    500 GB (L:, SQL Logs)

Note
While testing found that 2-core SQL Servers proved more than adequate to support the workload and population as tested, Microsoft TechNet
recommends that a minimum of 4 cores (vCPUs) be allocated to any role. Allocating 4 vCPUs to the SQL VMs
would also allow reserve capacity for that role should the workload and/or population increase in the future.
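
As a quick cross-check of the per-host totals quoted earlier (14 cores per host for 1,000 users and 22 cores for 2,500 users), the following sketch
simply sums the per-VM vCPU counts from Table 2 for the one Front End, one Custom and one SQL VM placed on each host:

```python
# Sketch: per-host vCPU totals derived from Table 2 (one FE, one Custom and one SQL VM per host).
vcpus_per_vm = {
    "Front End":  {"1000_users": 6, "2500_users": 12},
    "Custom":     {"1000_users": 4, "2500_users": 6},
    "SQL Server": {"1000_users": 4, "2500_users": 4},
}

for population in ("1000_users", "2500_users"):
    total = sum(role[population] for role in vcpus_per_vm.values())
    print(f"{population}: {total} vCPUs per host")
# 1000_users: 14 vCPUs per host
# 2500_users: 22 vCPUs per host
```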

SQL Server AlwaysOn


The following sequence of steps is required for the deployment of SQL Server 2014 (or later version) AlwaysOn Availability Groups.

• Creation of a two-node Windows Server 2012 R2 Failover Cluster with a FileShare witness for Quorum
• Installation of SQL Server 2014 on each cluster node
• Enabling SQL Server 2014 AlwaysOn Availability Group features
• Creating and configuring the Availability Group
• Adding the SharePoint 2016 databases to the Availability Group, as part of the SharePoint 2016 installation, configuration and sites creation

The Windows Server Failover Clustering (WSFC) two-node cluster is created using the VMs previously created for SQL Server. Each of the cluster nodes is part of the same Active Directory domain that represents the customer’s data center as part of the lab test environment. Note that each SQL VM is hosted on a server that is in a different VMware cluster, thus providing a deliberate failure boundary. Each SQL Server also has its own dedicated storage volumes/datastores that are presented to each physical host in the relevant cluster, thus providing dedicated storage for each of the two AlwaysOn replicas.

The following are the key steps for enabling AlwaysOn Availability Groups:

• On the Primary replica server, launch SQL Server Configuration Manager.


• Create a remote network share which is accessible from all the replicas. This share will be used for data synchronization.
• In the console pane, select “SQL Server Services”. On the right side of the screen, right-click the SQL Server (<instance name>) service and click Properties.
• Select the “AlwaysOn High Availability” tab.
• Select the “Enable AlwaysOn Availability Groups” checkbox. Click OK.
• Manually restart the “SQL Server (<instance name>)” service to commit the change.
• Repeat the above steps on the second SQL Server Node in the Cluster.

Then specify the required parameters for the two replicas, as shown in the example in Figure 15.

Figure 15. Specifying replicas



An Availability Group Listener is a virtual network name that directs incoming client connections to the primary replica or to a read-only secondary replica. It consists of a DNS listener name, a listener port designation, and one or more IP addresses, and it supports only the TCP/IP protocol. If the primary replica fails, the listener enables fast application failover from the failed primary replica on one instance of SQL Server to the new primary replica on another instance. Figure 16 shows the configuration of the Availability Group Listener.

Figure 16. Specifying the AG Listener



Part of the Availability Group creation process includes taking a backup of the database files, which is then used to seed the secondary replica. Select the path to a remote network share used for backups, as shown in Figure 17.

Figure 17. Specifying a backup location

Once the SQL Server Availability Group is created you can install SharePoint in your environment.

SharePoint Server 2016 capacity and sizing


The following sections provide a practical example of how to define a SharePoint Server 2016 virtualized configuration based on a broadly-applicable workload and a requirement for HA in a mid-market sized solution ranging from 1,000 users up to 2,500 users. The exact workload specified and the requirements for content storage and data retention policies may not match your own needs; however, you can substitute your own requirements and values in the various examples shown to yield a recommended solution.

Workload overview
This workload summary is included as a reference point to which you can compare your own expected workload. If your workload is markedly different, then the VM resource allocations may need to be larger or smaller than those tested in the lab during this study. One advantage of deploying SharePoint services on VMs is that each VM’s resources can be finely tuned to meet the service resource needs driven by the workload requirements. A detailed workload definition is provided in Appendix B.

The workload was designed to run against a SharePoint test farm supporting five main site collections that provide typical functionality for this
type of solution. The sites comprise the following types and their relative use:
• Document Center (30%) – provides document libraries with version control and check-out/-in enabled. The content is heavily modified (high
write: read ratio). The key functions utilized include document check-out, download, upload, and check-in. There is also a modest percentage
of task management involving displaying a task list and opening a random task to view status. The emulated user may also delete a random
task and create a new one.
• Team sites (20%) (20 sub-sites with multiple users per site) – provides team document libraries and collaboration capabilities such as team
calendars, threaded discussions, etc. Moderate document and list content modification. The workload functions include reading and modifying
calendar entries, reading and randomly replying to discussion topics and modest document upload.
• A Portal site (20%) – intended to provide corporate-wide communication, events and announcement lists, surveys, etc. Low content
modification – mostly read. Functions include reading events lists and randomly reading the details of a specific event, reading
announcements, and reading and responding to survey questions.

• My Sites (10%) – each user has a personal “my site” used to contain personal documents and work in progress. Note that these sites are created prior to the performance tests being run, so that the time and resources required to initialize a My Site the first time a user accesses it are not captured during the test. Moderate document content modification is performed, mostly doclib listing and uploads of random documents of different sizes.
• Search Center (20%) – provides standard search capabilities enabling users to discover content stored across the farm, and optionally
open/read/download discovered content. The emulated user submits a search query based on a simple word or phrase that is randomly
selected from a pre-created list. This query yields a results set listing up to 20 documents (initial results page) sorted in most relevant match
order (high to low). 25% of the time the user then downloads a document chosen at random from the first 8 search results (document size
ranges from 1MB to 16MB), before navigating back to the home page.

Each of the site actions described above is broken down into a set of workload transactions to record the response time of each step of the
action. For example, in the portal workload the first transaction is the navigation into the events calendar, the next transaction is the opening of a
single event; and closing the event is recorded as yet another transaction.

Between transactions, random wait times of 1-4 seconds, designed to simulate users reading or thinking, are introduced after the response time for the preceding transaction has been recorded.
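The pattern can be sketched in a few lines of pseudo-test code (a simplified stand-in for the Visual Studio Webtests described in Appendix B; the transaction names and the stub request function are illustrative only):

```python
import random
import time

def run_portal_action(execute_transaction):
    """Simplified emulated-user loop: each step is timed as its own transaction,
    and a random 1-4 second think time is inserted only after the response time
    for that transaction has been recorded."""
    transactions = ["goto_events_calendar", "open_event", "close_event"]
    for name in transactions:
        elapsed = execute_transaction(name)      # response time recorded here
        print(f"{name}: {elapsed:.3f} s")
        time.sleep(random.uniform(1, 4))         # think time, excluded from the timing

if __name__ == "__main__":
    # Stub standing in for a real HTTP request against the Portal site.
    run_portal_action(lambda name: random.uniform(0.05, 0.30))
```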

Sizing the solution


The easiest way to develop a practical SharePoint configuration is to use the HPE Sizer for Microsoft SharePoint 2016, which takes detailed
workload input and provides guidance on a physical or virtualized solution, including deployment of specific VMs on the various solution servers.
This tool has been developed by the HPE UC&C solutions design team to provide suitable deployment configurations based on a set of workload
and IT requirements entered by the user. The tool leverages sizing data obtained by the HPE UC&C solutions design team by running extensive
performance tests against many different server and storage configurations using the workload described above. The tests yield key sizing data
such as the throughput capacity “Requests per Second” (RPS) that a configuration can achieve. The sizing data and algorithms are used in the
sizer in conjunction with a set of configuration best practices logic that produces a configuration that meets both the capacity needs and also
other imperatives such as a need for HA, requirement for virtualization, preference for BladeSystem, etc.

It is known from prior BL460c Gen9 testing that a total of 24 cores across multiple FE servers can support a throughput of 500 RPS. For this
specific solution designed to support 1,000 to 2,500 users it was determined that a throughput of 100 RPS or 250 RPS respectively was
required – an example detailed calculation is shown below. This goal throughput takes into consideration a concurrency of 20% of active users
versus total user accounts – i.e., 200 to 500 active users; and a workload frequency of 1 request per user every 2 seconds – an intense workload.
Running a sizing exercise based on the “500 RPS = 24 cores” performance data from above, yielded a need for two FE servers configured with a
combined total of 6 cores for 1,000 users, or 12 cores for 2,500 users. After applying redundancy needs (doubling the capacity per FE server to
ensure a survivor could handle the whole load), the requirements are for a total of two FE server VMs each configured with either 6 cores for
1,000 users, or 12 cores for 2,500 users. The two FE server VMs are hosted across two physical servers and each physical server is located in a
different host cluster so as to provide a failure boundary as discussed earlier in the “Design principles” section. The above sizing calculation
considers that if one FE server fails or needs to be taken out of service for maintenance, the other FE server can handle the whole load. This
calculation can be adjusted such that the survivor can support whatever percentage of total load is critical to the business (likely expressed as a
Service Level Objective or Agreement), commensurate with the cost of providing that level of service.

The Search and Application services components for SharePoint 2016 are moved to separate Custom Servers and were likewise split across two
VMs hosted on two physical hosts. Each physical host is in a different host cluster to provide both redundancy and separation of application
services to optimize VM performance. Each “custom” VM has 4 cores for 1,000 users; and 6 cores for 2,500 users.
SQL Server was deployed as an AlwaysOn cluster as discussed earlier, with each of the two VMs using 4 cores for either 1,000 or 2,500 users. The calculations (and lab testing) show that 2 cores are adequate for 1,000 users; however, Microsoft recommends not less than 4 cores for any SharePoint 2016 or SQL Server VM. It is known from test and observation that a typical core ratio of Web server to SQL Server is about 4:1, so if a total of 12 web server cores are used, then a total of 4 SQL Server cores is appropriate. Note that while the two web servers and two application servers are all active, only one of the two SQL Servers is fully active and responding to SharePoint data requests. The second SQL Server acts as a secondary, replicating the Primary database changes to its own Secondary replica.

The following summarizes the RPS and VM cores calculations yielding the various VM core counts. You can replace the cited values with your
own to determine the goal throughput and VM vCPU sizing. Note that the example presents a very intense workload, and also makes the
assumption that a single FE server should be able to carry the entire workload in the unlikely event of one of the two FE servers failing or
needing to be shut down for unscheduled maintenance, without loss of performance.

• 1,000 users @ 20% concurrency = 200 active users. Likewise 2,500 @ 20% concurrency = 500 active users
• Assuming a workload intensity of one request per two seconds per active user yields a requirement of 100 to 250 RPS. Note that this is an
extremely high intensity.
• We know from other testing that 24 cores (vCPUs) of FE Server resource yields about 500 RPS throughput.
• Therefore 100 RPS requires about 6 cores, and 250 RPS requires about 12 cores total (rounding up as needed).
• If we double the capacity of the FE server to ensure a survivor could handle the whole load, we would therefore configure each FE VM with 6
or 12 cores as needed.
• From observation of other tests, Custom Server resources are about 25-50% of the FE, thus configure 4 or 6 cores as needed per Custom
server.
• FE:SQL ratio is known to be between 4:1 and 6:1 depending upon the intensity of content modification, therefore configure 4 cores per SQL
VM, per Microsoft recommendation regarding minimum cores per SharePoint or SQL VM.

The total core (vCPU) requirement per host server is either 14 (6+4+4 for 1,000 users) or 22 (12+6+4 for 2,500 users), and each physical BL460c Gen9 server has 24 cores. Each server therefore supports the required number of cores, taking into consideration that about 10% of the total vCPU resources in the physical servers should be allocated for VMware hypervisor overhead. Further, a solution starting at the 14-core level supporting 1,000 users can easily be scaled up in-server to the 22-core requirement for 2,500 users simply by changing the vCPU count in the VM definitions.
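For experimentation, the whole calculation can be captured in a small helper (a sketch built on the example's assumptions – 20% concurrency, one request per two seconds per active user, 24 FE cores per 500 RPS, FE capacity doubled for a surviving node, and Microsoft's 4-core minimum per VM; the function and parameter names are illustrative and not part of the HPE Sizer):

```python
import math

def size_farm(total_users, concurrency=0.20, secs_per_request=2.0,
              fe_cores_per_500_rps=24, fe_vm_count=2,
              custom_ratio=0.5, min_cores_per_vm=4):
    """Reproduces the example sizing: users -> goal RPS -> FE cores, then
    doubles FE capacity so one surviving FE VM can carry the whole load."""
    active_users = total_users * concurrency
    goal_rps = active_users / secs_per_request
    raw_fe_cores = goal_rps * fe_cores_per_500_rps / 500.0
    # Round the combined FE total up so it divides evenly across the FE VMs.
    fe_cores_total = fe_vm_count * math.ceil(raw_fe_cores / fe_vm_count)
    fe_cores_per_vm = fe_cores_total            # doubled: each VM sized for the full load
    custom_cores_per_vm = max(min_cores_per_vm,
                              math.ceil(fe_cores_per_vm * custom_ratio))
    sql_cores_per_vm = max(min_cores_per_vm, math.ceil(fe_cores_per_vm / 4))
    per_host = fe_cores_per_vm + custom_cores_per_vm + sql_cores_per_vm
    return dict(goal_rps=goal_rps, fe=fe_cores_per_vm, custom=custom_cores_per_vm,
                sql=sql_cores_per_vm, vcpus_per_host=per_host)

print(size_farm(1000))   # 100 RPS -> 6 FE, 4 custom, 4 SQL = 14 vCPUs per host
print(size_farm(2500))   # 250 RPS -> 12 FE, 6 custom, 4 SQL = 22 vCPUs per host
```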

Note also that the allocation should be ONE core (or socket as VMware does not differentiate between cores and sockets) per vCPU with NO
over-subscription.

Sizing the storage


The storage sizing used is presented as an example of how the HPE MSA 2040 disks can be used to create multiple virtual volumes and thus
VMware datastores. Two datastores were created per SQL Server to follow the long-used best practice of separating SQL database and log files.
While a total of close to 20 SharePoint databases were in use (including both services and content DBs), it was not felt necessary in this example to separate out the SQL storage LUNs further – there is a RAID-5 array supporting the datastores providing very high I/O bandwidth. Your IT policies and maintenance/security procedures may dictate further separation or the use of RAID-1 or RAID-10; this can be easily achieved by creating virtual volumes and VMware datastores as required and presenting these to the relevant nodes in each host cluster.

Note that in the examples presented, we located the service databases and content databases on the same LUNs, separating the data and log
files per SQL best practices. In a larger scale solution some of the services databases can become quite I/O intense, and in that event it would be
a best practice to separate the service and content databases onto separate LUNs, and thus separate disk groups.

Each enclosure (MSA or D2700) in our solution example contains 24 disks as follows:
• 4 x 400GB SSDs
• 16 x 600GB 10K disks
• 4 x 2TB 7.2K disks

Each blade (each hosting its set of 3 VMs) requires its own storage volume for both the Operating System partition (C: LUN) and Data Partition
(I: LUN) for the SharePoint VMs.

In addition to its OS LUN the SQL Server VM requires two more separate partitions for its Database Data (M:) and Database Log (L:) file storage.
We therefore need the following separate storage volumes from both the MSA 2040 storage and D2700 storage.

• OS/Data LUNs
• SQL Data LUNs
• SQL Log LUNs
• Backup LUNs

We use the MSA 2040 disk array for the first VMware host cluster, and the D2700 disk enclosure for the second VMware host cluster. Each
volume was created from a pre-configured disk group that included a dedicated spare disk drive. Multiple datastores were created from each
volume carved out from each of the storage arrays as follows:

• A 1.4TB volume (3 x 600GB RAID5) for OS/Data LUNs. This provides space for multiple datastores totaling 760GB as follows:
– 3 x 120GB (operating system LUNs, C:)
– 2 x 200GB (data drive LUNs, I:)
• A 2.4TB volume (5 x 600GB RAID5) for SQL Server database data file LUNs. This provides space for a single 2TB datastore.
• A 600GB volume (2 x 600GB RAID1) for SQL Server database log file LUNs. This provides space for a single 500GB datastore.
• Volumes for backups were also created on the 2TB 7.2K disks (a total of 4 x 2TB in RAID10).

Table 3 shows the details of the storage assigned.


Table 3. Storage assigned for Virtual Machines

| Role / VM | C:\ (OS / System disk) | I:\ (Data disk) | M:\ (SQL Server Data disk) | L:\ (SQL Server Log disk) |
| Front end | 120 GB | 200 GB | N/A | N/A |
| Custom | 120 GB | 200 GB | N/A | N/A |
| SQL | 120 GB | N/A | 2 TB | 500 GB |
| TOTAL | 360 GB | 400 GB | 2 TB | 500 GB |
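A quick sanity check that the datastores carved from each volume fit within the stated capacities can be scripted as follows (sizes in GB, taken from the layout above; illustrative only, not an HPE tool):

```python
# Volumes created per array versus the datastores carved from them (GB).
volumes_gb = {
    "OS/Data (RAID5)":  1400,
    "SQL Data (RAID5)": 2400,
    "SQL Logs (RAID1)":  600,
}
datastores_gb = {
    "OS/Data (RAID5)":  [120, 120, 120, 200, 200],   # 3 x C: plus 2 x I: per cluster
    "SQL Data (RAID5)": [2000],                      # single 2 TB SQL data datastore
    "SQL Logs (RAID1)": [500],                       # single 500 GB SQL log datastore
}

for volume, capacity in volumes_gb.items():
    used = sum(datastores_gb[volume])
    assert used <= capacity, f"{volume} is over-committed"
    print(f"{volume}: {used} GB of {capacity} GB allocated ({capacity - used} GB free)")
```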

Content storage calculations


The sizing shown above was based on a set of assumptions; however, every customer likely has different needs based on their business model. The quantity of actual content (documents, graphics, etc.) to be stored in SharePoint will vary based on business processes, retention policies, and to some degree the number of users creating and modifying content. The following presents a hypothetical calculation, and the process shown may be re-used with your own values if they differ from those shown in the example. The intent is to estimate the actual storage space required for the farm content, plus space for the other databases and data structures to be located on the MSA storage array. This determination leads towards sizing an appropriate MSA 2040 solution.

While only active users require compute resources, even inactive users require storage; therefore we have to consider the needs of all 1,000
users in a typical SMB business generating content. We also need to make a set of reasonable assumptions (supported by anecdotal data from
HPE field personnel and various customers):

• For a starting point; assume a need for 2TB of content to be stored


• If we assume a collaboration site is between 1.0 - 1.5GB, then we can store between 1,200 and 2,000 such sites
• Assume four organizational units each running 100 projects/year
• Each project content size can be about 500MB (Office files, PDFs, small images, Visio diagrams, etc.)
• Thus the users generate about 200GB of total content per year
• Assume 50% of the storage volume is allocated for content, and the other 50% for Service Application databases, TempDB, Search databases
and the other data structures SharePoint requires to function (a rule-of-thumb from the field)

To summarize, this means 2TB of storage space could support the 1,000-user content for about 5 years (considering retention policies). The above assumptions can be revised to calculate the 2,500 user case and/or to take specific customer needs into account.
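The same estimate can be written as a short worked calculation (using the assumptions above; substitute your own values):

```python
def years_of_content_capacity(total_storage_gb=2000, content_share=0.5,
                              org_units=4, projects_per_unit_per_year=100,
                              gb_per_project=0.5):
    """Estimate how long the content portion of the farm storage will last,
    given the annual content-generation assumptions in the text."""
    content_budget_gb = total_storage_gb * content_share                            # ~1 TB for content
    content_per_year_gb = org_units * projects_per_unit_per_year * gb_per_project   # ~200 GB/yr
    return content_budget_gb / content_per_year_gb

print(f"~{years_of_content_capacity():.0f} years of content capacity")              # ~5 years
```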

Backup also requires consideration. For the mid-market segment it is common to do simple SQL DB backups, typically leveraging SharePoint
Admin features, and likely back up to disk then to tape. For many companies going direct to tape may not be an issue given the amount of
content created in a time period, and differential backups may be only a few MB. However if backup to disk is required, then we also need to
allocate about 1TB from the 2TB 7.2K disks.

Networks
With a non-virtualized deployment of SharePoint 2016, the best practice recommendation is to use two network adapters in the FE and
Application servers to segment the traffic. This has to do with the amount of network bandwidth the IIS / Client communication can consume,
and with not combining that load with the IIS / SQL Server communication. However, in a virtualized deployment, when the entire farm can exist
within the virtual network, the only real concern for deployment is the amount of network capacity that can be handled by the physical NIC. In
cases where multiple FEs will be deployed, and there is concern that a single physical network connection cannot support the amount of network
load required, the use of multiple physical NICs and multiple virtual networks is recommended. Additionally, in cases where more than a single
host server is being used to support the farm, segmenting traffic is still recommended.
Server blade management IPs and vMotion/FT logging IPs
Each blade used 2 IP addresses associated with its physical NIC. One is the management IP address (we usually think of this as the server IP
address), and the other is a related address used for the vMotion and FT logging VMkernel network defined on each ESXi server.

The example in Figure 18 shows the vSphere network configuration for server ESX9. Note the 3 separate networks on the server:

• The VM Network with associated VMs


• The vMotion/FT logging network
• The Management Network

Figure 18. VMware Networking example showing Cluster 1

Lab test environment


The lab test bed for performance testing of SharePoint 2016 comprised SharePoint VMs created on the c7000-based host servers using the vCenter client application to define and deploy the VMs. Data disks were defined and deployed for SharePoint FE/App data needs (e.g., Indexes, Analytics), and for the SQL Server database data and log files for each of the two SQL Server VMs (separate datastores hosting the primary and secondary replicas as part of SQL AlwaysOn).

The lab test configuration comprised a pair of 4-node clusters, each containing a mix of Gen8 and Gen9 BL460c servers, purely to be able to test and evaluate the performance difference. It is expected that a real-world deployment would employ all Gen9 servers in two 8-node clusters, as depicted earlier in Figure 1.

The SQL and SharePoint infrastructure deployed on the c7000/MSA 2040 solution was integrated into a separate Active Directory and DNS infrastructure intended to represent the customer data center network and domain.

Microsoft Visual Studio Ultimate 2015 U1 was used as the SharePoint workload engine emulating users in the customer domain, running load
tests comprised of various Webtest components developed to represent a broadly applicable SharePoint collaboration workload. Use of Visual
Studio as a user emulation tool is discussed in detail in a later section.
Hosts, Clusters, and VMs
For test purposes the host servers were a mix of 8 x BL460c Gen8 and 8 x BL460c Gen9 blades; the purpose being to test both variants and
determine performance differences. These blades were grouped into two vSphere clusters, with 2 x Gen8 and 2 x Gen9 blades in each cluster.
This allowed moving VMs from a Gen8 host to a Gen9 host to allow testing on both. The MSA 2040 based storage was presented to all nodes in
each cluster, with separate storage defined for each cluster to establish both compute and storage failure boundaries. Figure 19 illustrates the
clusters and the hosts used for the VMs.

Figure 19. Lab test hosts and clusters deployment design

Important note
The host/cluster arrangement shown in Figure 19 was configured purely for lab test purposes to evaluate both Gen8 and Gen9 BL460c Server
Blades in each cluster. Configuring this way allowed for quick/easy migration of VMs to/from different generation host servers in each cluster to
facilitate easier testing. It is expected that a real-world deployment would use all Gen9 servers in each of two 8-node clusters as illustrated
previously in Figure 1.

Figure 20 shows the VMware vSphere Client Hosts and Clusters view of the solution, illustrating 4 nodes in each cluster and the nominal location
of the VMs.

Figure 20. Lab test VM deployment design (Host view)



In the example shown below in Figure 21, the WFE1-Gen8 VM has been moved via vMotion from its initial Gen8-based host (10.172.40.88) to a Gen9 host in the same cluster (10.172.40.93) so that comparative performance could be tested. All other VM characteristics and parameters (vCPUs, RAM, storage, network, etc.) remained the same.

Figure 21. Lab test VM deployment design (VM view)

SQL Server AlwaysOn lab configuration


The database management solution for SharePoint 2016 is implemented using SQL Server 2014 AlwaysOn technology providing database high
availability and integrity to the complete solution. There are two SQL Server Virtual Machines (VMs), one in each VMware cluster. The 2 TB
database partition and 500 GB Log partition for each SQL Server are configured from dedicated Datastores for each node (cluster), hosted on
volumes located on MSA 2040 and D2700 respectively.

With 2-node SQL Server AlwaysOn technology, one node acts as Primary and the other node acts as Secondary at any point in time. The Primary receives requests from SharePoint to modify content or to return requested information. In the case of a write or modify operation, the change is also performed synchronously by the Secondary node and acknowledged back to the Primary as complete before subsequent requests can proceed. This ensures that both replicas are exactly in sync at all times. In the event of a failure of the Primary node, the Secondary seamlessly takes over as Primary with no action by, or awareness on the part of, SharePoint.

The SQL Server Availability Group is configured with following properties:

• Availability Mode – Synchronous Commit


• Failover Mode – Automatic
• Readable Secondary – Yes

The SharePoint 2016 databases are added to the Availability Group so that they are highly available as a result of the synchronization, and also have high integrity due to the separate replicas hosted on separate storage (an error cannot be propagated across the storage). The SharePoint servers connect to the Availability Group Listener in order to access the databases in either replica without needing to know the name of the physical SQL Server instance to which they are connecting (the active Primary replica).
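As an illustration of that behavior, a client simply targets the listener name; a minimal sketch using pyodbc is shown below (the listener name, database, and ODBC driver version are assumptions for illustration, and the MultiSubnetFailover keyword is passed through to the Microsoft ODBC driver):

```python
import pyodbc

# Connect to the Availability Group listener, not to SQL1-Gen8 or SQL2-Gen9 directly;
# the listener routes the connection to the active primary replica.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=SP16-AG-Listener,1433;"      # hypothetical listener DNS name and port
    "DATABASE=WSS_Content;"              # example SharePoint content database name
    "Trusted_Connection=Yes;"
    "MultiSubnetFailover=Yes;"           # speeds up reconnection after a failover
)
print(conn.execute("SELECT @@SERVERNAME").fetchone()[0])   # reports the active primary
conn.close()
```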

SQL Server AlwaysOn technology is implemented over Windows Server Failover Cluster (WSFC) technology. In the lab we used “Node and File
Share Majority” for Quorum configuration with a remote File Share configured as a voting witness for the 2-node cluster. For more information
about WSFC and Quorum requirements and options, please see this Microsoft TechNet article.
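The witness matters because the cluster only stays up while a strict majority of configured votes remains online; a short check makes the arithmetic explicit (illustrative only):

```python
def has_quorum(votes_online, total_votes):
    """Node and File Share Majority: quorum requires a strict majority of the
    configured votes (cluster nodes plus the file share witness)."""
    return votes_online > total_votes // 2

total_votes = 2 + 1                      # two SQL nodes + one file share witness
print(has_quorum(2, total_votes))        # True  - one node down, witness still online
print(has_quorum(1, total_votes))        # False - quorum (and the cluster) is lost
```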

Figure 22 shows the Availability Group and Replicas hierarchy in the left pane, and the list of synchronized SharePoint service and content
databases in the right pane.

Figure 22. Lab test SQL Server AlwaysOn configuration



Test environment deployment


The major steps for deploying the test environment are summarized below. Other than the creation of a representative customer domain (which would already exist in a real-world deployment) and the use of Visual Studio as a client emulation tool, the steps and procedures shown follow best practices for a real-world deployment and can be viewed as a practical step-by-step example of how a single-site SharePoint 2016 farm can be deployed in a c7000 blade environment.

Key steps for SharePoint and SQL Server are:


• Deploying the Active Directory and DNS infrastructure domain to represent the existing customer data center
• Creation of Storage Disk Groups and Virtual Volumes using the MSA 2040 Storage Management Utility
• Creation of VMware Datastores from those volumes using the vCenter Client
• Creating the two required vSphere clusters using the vCenter Client
• Creating the SQL Server 2014 and SharePoint Server 2016 VMs on the vSphere clusters using the vCenter Client
• Deploying SQL Server AlwaysOn Availability Groups and a SharePoint 2016 farm, and test Web Applications and Site Collections
• Deploying Microsoft Visual Studio 2015 as the workload driving tool and developing the workload – this is covered in the section discussing
test process, in Appendix B.

SharePoint Server 2016 testing overview


The purpose of the performance tests detailed below was to gain understanding around configuring and sizing SharePoint 2016 virtualized mid-
market (SMB) solutions and to develop best practice configurations showing examples of tested results. Key goals were:

• Determine the difference in performance between Gen8 and Gen9 BL460c servers, using the same SharePoint 2016 configuration and
workload.
• Determine performance differences between a previously tested SharePoint 2013 solution and a SharePoint 2016 solution – both on similarly
configured Gen9 BL460c servers.

The SharePoint FE VMs were initially configured with 6 virtual CPUs, and later also tested with 12 vCPUs to determine core scalability. A 6-core
VM appears to be a common “building block” cited in many Microsoft TechNet articles, and it provides a baseline for performance and capacity.
Knowing the expected CPU ratio of FE to SQL is typically between 4:1 and 6:1, the SQL Server VMs were configured with 2 cores each. Both the
SQL and FE VMs can be scaled up (more cores and memory as needed) and also duplicated (scaled-out) to provide further VM redundancy.
This test provides baseline performance information for a single 6-core SharePoint FE VM.

Individual Webtests were first run using the 5 individual workload components (Search, Portal, Docs, Teams, and MySite) to determine the
resource requirement and throughput differences, as each workload component leverages different SharePoint functions that in turn use
differing CPU, storage and network I/O.

Loadtests were then run using the full collaboration workload mix of the 5 Webtest components against both 6-core and 12-core FE VMs. This
determines the core scalability, and also validates the design goals of a 1,000 user solution scaling up to a 2,500 user solution.

Each of the above two types of tests were run on both Gen8 and then Gen9 BL460c host servers.

HPE has developed a test workload that includes a set of functions and workload mix that is intended to represent how users might employ
SharePoint 2016 features. The workload is believed to be broadly applicable to customers who are using SharePoint primarily as a collaboration
and document management solution. For workload and test methodology details, see Appendix B.

It is also important to note that this workload was designed to replicate the peak usage time of the user population. Peak usage time is defined as
the time when most users are logged on and are working simultaneously. Typically, peak usage times are sporadic; however, it is important to
size solutions based on the peaks to ensure adequate solution performance under worst-case situations.

Performance metrics
Analysis of the data collected provided a set of key metrics that collectively described the characteristics of the functions as regards CPU,
response time, throughput, etc. These data provide an overview of how each function uses various system resources and its typical performance.
Note that the data collected for these metrics relates to the individual VMs, and not physical host servers, unless otherwise stated.

The key metrics are:


• SharePoint VM %CPU Avg.
• SQL VM %CPU Avg.
• Throughput per SharePoint server – requests per second (RPS) @ 80% CPU avg.
• SharePoint/SQL CPU ratio – different functions will require more or less SharePoint or SQL resources. A low ratio will typically indicate that
higher SQL CPU resources are used (more database activity). This number also indicates how many FE servers with a given number of cores
can be supported by a single active SQL server with the appropriate number of cores as determined by the core ratio.
• Typical response times – these may be classed into three groups. Functions that are sub-second, that take a few seconds, or that take “many”
seconds. Functions with longer response times are more complex and will require more server resources. Some functions (e.g., file
upload/download) will be impacted by available network bandwidth.
• Client-FE-SQL Network traffic – the network traffic volume sent from the client to FEs due to a user request, traffic between the FEs and other
role servers, and the data volume returned from the FE to the client to render the page
• Storage subsystem volume I/O data (read/write) – read and write I/O rates, IOPS sizes and response time for the storage volumes

The set of metrics above defines the characteristics of the tested SharePoint functions and indicates their resource consumption and typical performance.

SharePoint 2016 test results and analysis


The following are the test results obtained from running the site-specific and full Collab workloads against three specific configurations – SharePoint 2016 on Gen8 hosts, SharePoint 2016 on Gen9 hosts, and SharePoint 2013 on Gen9 hosts. The VM characteristics are described in detail in the Best practices – SharePoint section of this document.

The data comprise:

• %CPU busy (Avg.) and Throughput (RPS) data


• Network traffic (Mbps) for the Client, FE and SQL servers
• Storage IOPS data – IOPS response times, IOPS frequencies, and IOPS transfer sizes
• Example response times for the various site functions

Each is shown in a tabular format alongside a definition of the specific test mix, allowing comparison of the data. Comments and analysis are
presented for each section to explain the observations.
CPU and throughput
A key metric, used for capacity planning and solution sizing, is the achievable throughput expressed in Requests per Second (RPS) at a nominal
FE %CPU busy (Avg.) of 80%. Each of the tests shown was run at a specific emulated user population to provide a load on the system to yield
about 80% FE CPU busy. Note that these emulated users were applying a significantly higher load than real users (almost 1.5 RPS per user).
SharePoint, employing a stateless web protocol, does not really care how many users are requesting information – it only cares about how many
requests per second are being submitted by whatever population is busy at the time. The tests follow the best practice of so-called “univariate
analysis” – simply put, changing one thing between tests and observing the result of the change.
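The "Calc @ 80% CPU" column in the tables that follow is consistent with a simple linear scaling of the observed throughput to the nominal 80% point, as in this example (values from the Gen8 Search-only row in Table 4a):

```python
def rps_at_target_cpu(observed_rps, observed_fe_cpu_pct, target_cpu_pct=80.0):
    """Linearly normalize observed throughput to the nominal 80% FE CPU point."""
    return observed_rps * target_cpu_pct / observed_fe_cpu_pct

print(round(rps_at_target_cpu(132, 77.6)))   # ~136, matching Table 4a
```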

Tables 4a, 4b, and 4c show the results for SharePoint 2016 on Gen8, SharePoint 2016 on Gen9, and SharePoint 2013 on Gen9 respectively.

Note
The column designated FE x Cores shows two numbers representing respectively the number of Web servers and the number of cores per Web
server for that specific test. Initial tests were run using a single SharePoint FE server that ran all services (Web, Search, App, etc.). This was tested
using both 6 and 12 cores to determine VM core scalability.

Table 4a. VM %CPU usage and RPS throughput results – SP2016 – Gen8

| Test | Users | FE x Cores | FE %CPU Avg | SQL %CPU Avg | Observed RPS | RPS Calc @ 80% CPU |
| Search only | 38 | 1 x 6 | 77.6 | 30.1 | 132 | 136 |
| Portal only | 75 | 1 x 6 | 79.9 | 42.4 | 93 | 93 |
| Teams only | 25 | 1 x 6 | 79.3 | 27.9 | 75 | 76 |
| Docs only | 26 | 1 x 6 | 76.6 | 38.8 | 71 | 74 |
| Mysites only | 35 | 1 x 6 | 74.7 | 41.5 | 145 | 155 |
| Collab 1x6 | 34 | 1 x 6 | 81.3 | 37.5 | 91 | 90 |
| Collab 1x12 | 70 | 1 x 12 | 83.6 | 34.8 | 181 | 173 |

Table 4b. VM %CPU usage and RPS throughput results – SP2016 – Gen9

| Test | Users | FE x Cores | FE %CPU Avg | SQL %CPU Avg | Observed RPS | RPS Calc @ 80% CPU |
| Search only | 50 | 1 x 6 | 78.1 | 29.8 | 180 | 184 |
| Portal only | 96 | 1 x 6 | 82.5 | 61.4 | 116 | 112 |
| Teams only | 32 | 1 x 6 | 77.7 | 25.9 | 100 | 103 |
| Docs only | 36 | 1 x 6 | 79.5 | 35.4 | 92 | 93 |
| Mysites only | 48 | 1 x 6 | 79.9 | 42.1 | 199 | 199 |
| Collab 1x6 | 44 | 1 x 6 | 82.0 | 37.4 | 118 | 115 |
| Collab 1x12 | 84 | 1 x 12 | 79.9 | 32.9 | 226 | 226 |

Table 4c. VM %CPU usage and RPS throughput results – SP2013 – Gen9

| Test | Users | FE x Cores | FE %CPU Avg | SQL %CPU Avg | Observed RPS | RPS Calc @ 80% CPU |
| Search only | 200 | 1 x 6 | 76.0 | 13.0 | 292 | 307 |
| Portal only | 110 | 1 x 6 | 76.1 | 12.0 | 143 | 151 |
| Teams only | 70 | 1 x 6 | 76.2 | 13.0 | 120 | 126 |
| Docs only | 90 | 1 x 6 | 74.0 | 17.0 | 125 | 135 |
| Mysites only | 70 | 1 x 6 | 78.0 | 11.0 | 60 | 62 |
| Collab 1x6 | 100 | 1 x 6 | 73.0 | 15.0 | 142 | 156 |
| Collab 1x12 | 200 | 1 x 12 | 71.0 | 20.0 | 270 | 304 |

The right-most column in the tables shows the key throughput metric – RPS @ FE 80% CPU busy. The results for the individual workloads demonstrate the difference in resource intensity – how many RPS are possible at a consistent 80% CPU. Search is clearly much less resource-intensive. Another contributing factor is the degree of file upload and the document sizes: the MySites workload includes upload of large documents, thus the available network bandwidth and latency can also affect what is possible per unit of time. Results for the full Collab workload show that the %CPU and RPS vary as might be expected from the changes in the number of VM CPU cores and the applied load. Results show that cores scale almost linearly.

Analysis and comparison of the data shows the SharePoint 2016 solution taking somewhat more resources and providing slightly less
throughput than the SharePoint 2013 solution – both run on Gen9 servers to ensure fair comparison. This is quite common for a new version of
SharePoint where new and enhanced functionality has been added. Note also that tests were performed on the RC (Release Candidate) version,
as the RTM version was not yet available. It is therefore possible that the RTM version might show slightly different performance to that seen
here.

However, note also that the Gen9 results versus Gen8 results when running SharePoint 2016 showed an increase in performance of approx. 20%
(increased throughput when measured at 80% avg. CPU busy on the FE server). Thus customers running older versions of SharePoint on older
HPE server generations, and considering updating to SharePoint 2016, could achieve better price/performance by investing in the newer Gen9
servers for that deployment.

The resulting network traffic for each test is shown below.
Network traffic
SharePoint is “chatty”. Analysis of the network traffic, especially that received by the client (user), reveals that it takes a lot of HTTP/HTML data to present results to a user; some functions, such as presenting a complex list of information, especially so. Tables 5a, 5b, and 5c present the commensurate network traffic for the Client, FE, and SQL servers for SharePoint 2016 on Gen8, SharePoint 2016 on Gen9, and SharePoint 2013 on Gen9 respectively.
Table 5a. Network traffic (Mbps) results – SP2016 – Gen8

| Test | Users | FE x Cores | Client Send | Client Receive | FE Send | FE Receive | SQL Send | SQL Receive |
| Search only | 38 | 1 x 6 | 1.59 | 34.39 | 37.83 | 39.43 | 36.24 | 4.11 |
| Portal only | 75 | 1 x 6 | 0.97 | 59.90 | 67.62 | 67.78 | 64.76 | 10.07 |
| Teams only | 25 | 1 x 6 | 1.08 | 38.34 | 48.74 | 46.89 | 44.71 | 12.00 |
| Docs only | 26 | 1 x 6 | 22.65 | 56.96 | 65.43 | 94.88 | 70.30 | 10.91 |
| Mysites only | 35 | 1 x 6 | 109.48 | 20.89 | 29.95 | 166.44 | 51.09 | 9.56 |
| Collab 1x6 | 34 | 1 x 6 | 18.46 | 45.30 | 54.11 | 77.51 | 57.38 | 10.65 |
| Collab 1x12 | 70 | 1 x 12 | 39.68 | 89.68 | 106.20 | 153.60 | 109.98 | 19.97 |

Table 5b. Network traffic (Mbps) results – SP2016 – Gen9

| Test | Users | FE x Cores | Client Send | Client Receive | FE Send | FE Receive | SQL Send | SQL Receive |
| Search only | 50 | 1 x 6 | 2.15 | 47.06 | 52.35 | 54.19 | 50.25 | 6.29 |
| Portal only | 96 | 1 x 6 | 1.21 | 75.33 | 85.65 | 134.22 | 128.69 | 13.34 |
| Teams only | 32 | 1 x 6 | 1.43 | 51.51 | 65.01 | 62.75 | 59.65 | 15.60 |
| Docs only | 36 | 1 x 6 | 31.88 | 79.70 | 91.27 | 129.44 | 94.54 | 15.44 |
| Mysites only | 48 | 1 x 6 | 148.74 | 27.85 | 40.77 | 227.01 | 70.97 | 13.59 |
| Collab 1x6 | 44 | 1 x 6 | 23.99 | 58.89 | 71.98 | 107.71 | 80.53 | 13.51 |
| Collab 1x12 | 84 | 1 x 12 | 43.79 | 107.55 | 131.54 | 199.49 | 145.55 | 24.33 |

Table 5c. Network traffic (Mbps) results – SP2013 – Gen9

| Test | Users | FE x Cores | Client Send | Client Receive | FE Send | FE Receive | SQL Send | SQL Receive |
| Search only | 200 | 1 x 6 | 2.94 | 25.33 | 42.62 | 54.78 | 24.24 | 5.87 |
| Portal only | 110 | 1 x 6 | 1.26 | 26.09 | 37.58 | 46.73 | 39.60 | 11.49 |
| Teams only | 70 | 1 x 6 | 1.51 | 18.71 | 40.77 | 53.27 | 44.29 | 20.72 |
| Docs only | 90 | 1 x 6 | 36.07 | 64.43 | 67.45 | 145.21 | 98.65 | 19.13 |
| Mysites only | 70 | 1 x 6 | 110.73 | 10.99 | 19.46 | 169.46 | 54.53 | 10.91 |
| Collab 1x6 | 100 | 1 x 6 | 13.25 | 35.40 | 52.60 | 84.81 | 60.90 | 17.62 |
| Collab 1x12 | 200 | 1 x 12 | 24.58 | 65.35 | 92.78 | 143.28 | 108.39 | 32.55 |

Network data from the various tests once again shows the differences in the workloads, their functions and the impact on the network. Some
values stick out, and deserve further explanation.
• The MySites workload includes a lot of file upload; therefore client send traffic is high.
• The Docs workload includes a lot of check-out, download, upload new doc version, check-in functions, thus the Client and FE receive network
traffic is higher.
• In general when running the full Collab mix the highest traffic will be seen as FE Receive. This is traffic received by the FE from both the Client
and SQL.

In general the data also show we have more than enough network bandwidth provided to handle desired load levels. A further analysis of this
data, correlated with the RPS data, will reveal the network traffic required per RPS for each function type. These data, among others, are used in
the HPE Sizer for Microsoft SharePoint 2016.
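As a worked example of that correlation, dividing the FE receive traffic by the observed throughput gives an approximate per-request network cost (values taken from the Gen9 Collab 1x6 rows of Tables 4b and 5b):

```python
def mbps_per_request(mbps, rps):
    """Approximate network bandwidth consumed per request served."""
    return mbps / rps

# Gen9, Collab 1x6: 107.71 Mbps received by the FE at 118 observed RPS.
print(f"{mbps_per_request(107.71, 118):.2f} Mbps of FE receive traffic per request")  # ~0.91
```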
Storage IOPS
Of equal interest is the demand placed on the MSA 2040 storage system. Table 6 shows the leading indicator of I/O performance – the response time of a disk transfer. A generally accepted standard is that the response time should be below 20 milliseconds on average for good performance. Results for both server generations and both SharePoint versions are shown.
Table 6. Disk transfer (IOPS) response time (seconds) results

| Test | FE x Cores | SQL Data (M:) | SQL Logs (L:) | FE OS (C:) | FE Data (I:) |
| SP2016 1x6 Gen8 | 1 x 6 | 0.006 | 0.001 | 0.001 | 0.002 |
| SP2016 1x6 Gen9 | 1 x 6 | 0.015 | 0.001 | 0.001 | 0.010 |
| SP2013 1x6 Gen9 | 1 x 6 | 0.007 | 0.001 | 0.002 | 0.003 |
| SP2016 1x12 Gen8 | 1 x 12 | 0.007 | 0.001 | 0.002 | 0.005 |
| SP2016 1x12 Gen9 | 1 x 12 | 0.015 | 0.002 | 0.001 | 0.005 |
| SP2013 1x12 Gen9 | 1 x 12 | 0.008 | 0.002 | 0.003 | 0.005 |

In general the response time for the FE System (C: OS) and SQL Log (L: LOGS) disks is negligible. The worst response time of 0.015 seconds for
the SQL Data disk is below the performance threshold of 0.020 seconds. The Docs workload includes upload of new document versions and
check-in. The Search crawl service was set to run in “continuous mode”, thus each change or upload of new content caused a crawl of that
content and an update to the Search Index information on the FE Data (I: DATA) disk.
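A simple pass/fail check of the worst observed latencies against the 20 ms guideline can be expressed as follows (values in seconds, taken from Table 6):

```python
LATENCY_THRESHOLD_S = 0.020   # generally accepted ceiling for good disk performance

worst_observed_s = {
    "SQL Data (M:)": 0.015,
    "SQL Logs (L:)": 0.002,
    "FE OS (C:)":    0.003,
    "FE Data (I:)":  0.010,
}
for disk, latency in worst_observed_s.items():
    status = "OK" if latency < LATENCY_THRESHOLD_S else "INVESTIGATE"
    print(f"{disk}: {latency * 1000:.0f} ms -> {status}")
```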

Table 7 shows some further analysis of the disk data, presenting the average disk reads/writes per second. This is a measure of the I/O intensity.
Table 7. Disk Average reads/writes (IOPS) per second results

| Test | FE x Cores | SQL Data (M:) Read | SQL Data (M:) Write | SQL Logs (L:) Write | FE Data (I:) Write |
| SP2016 1x6 Gen8 | 1 x 6 | 0.08 | 10.50 | 36.00 | 1.16 |
| SP2016 1x6 Gen9 | 1 x 6 | 0.02 | 10.60 | 42.80 | 0.62 |
| SP2013 1x6 Gen9 | 1 x 6 | 0.13 | 23.60 | 54.30 | 0.48 |
| SP2016 1x12 Gen8 | 1 x 12 | 0.77 | 17.30 | 58.90 | 1.92 |
| SP2016 1x12 Gen9 | 1 x 12 | 1.64 | 18.80 | 66.50 | 1.88 |
| SP2013 1x12 Gen9 | 1 x 12 | 0.28 | 33.70 | 81.00 | 0.77 |

Results show the majority of the I/O operations occurring to the SQL Data and especially Log disks. Logs will be written when any content is
changed or added, so this behavior is expected.

The FE Data disk shows very modest I/O frequency, so more data is needed to better understand I/O behavior for this disk. Table 8 shows the
average I/O size (bytes) for the various disk read and write operations for the full load test run.
Table 8. Disk Average Bytes per IOPS results

| Test | FE x Cores | SQL Data (M:) Read | SQL Data (M:) Write | SQL Logs (L:) Write | FE Data (I:) Write |
| SP2016 1x6 Gen8 | 1 x 6 | 61,189 | 27,250 | 30,006 | 143,611 |
| SP2016 1x6 Gen9 | 1 x 6 | 53,248 | 28,507 | 40,666 | 178,944 |
| SP2013 1x6 Gen9 | 1 x 6 | 65,193 | 39,011 | 55,258 | 384,007 |
| SP2016 1x12 Gen8 | 1 x 12 | 48,336 | 36,118 | 28,408 | 210,043 |
| SP2016 1x12 Gen9 | 1 x 12 | 56,016 | 30,468 | 29,597 | 235,223 |
| SP2013 1x12 Gen9 | 1 x 12 | 65,193 | 39,011 | 55,258 | 384,007 |

The data shows about even-sized I/O operations to the SQL Data and Log disks – in the range of 30-60KB or so. The size of the write operations
to the FE Data disk is more significant at as much as 380KB; however, analysis of the disk queue depth showed no appreciable queue (about
0.06). We can conclude that the behavior is therefore as expected and is not a cause for concern.

The final resource of interest is the memory usage for the SQL Server and SharePoint FE VMs.
Memory usage
Memory is very important to SharePoint deployments and should never be under-configured. Much of the good performance of SharePoint is due to caches deployed at every level – SQL Server, the FE Distributed Cache service, IIS disk- and memory-based caches, and browser caches. A significant amount of memory goes towards these caches. Note that VM dynamic memory (also sometimes referred to as balloon memory) was not employed, as SQL and SharePoint heavily leverage memory for caching and we want to avoid any cache flushing.

SQL Server VMs


Figure 23 shows the memory consumption of SQL Server AlwaysOn Primary node (SQL1-Gen8) after a number of tests had been performed
(SharePoint 2016 running on Gen9 servers). The VM was configured with 32GB, and the SQL process alone is shown to be using about 30GB of
that total. Unless deliberately limited, SQL memory usage will tend to increase and must be monitored to ensure the VM memory allocation is not
causing an artificial bottleneck by being inadequate for the workload applied.

Figure 23. SQL Server Memory usage

Memory is also critical for SharePoint VMs.


SharePoint VMs
Figure 24 shows a similar Task Manager display of memory usage for an example SharePoint 2016 FE VM (Gen9 servers). Note that each of the 5 site collections created was based on a separate Web Application, each of which had a separate application pool and thus a dedicated IIS Worker Process. These may be seen in the figure below, and the memory consumed by each will grow as they perform more and varied functions.
In addition, other processes supporting SharePoint services require memory. Figure 24 shows one of the IIS Worker processes consuming as
much as 2.7GB and one of the Search processes consuming about 1.7GB. At the time of the screenshot, the VM was configured with 20GB
memory and Task Manager shows 70% of that being used – about 14GB. After further tests this usage grew further to a total exceeding 16GB. It
is for that reason HPE recommends that the FE VM be initially sized at 16GB memory, but then be monitored to see if the workloads presented
require additional memory above that value (e.g., 20GB) as was demonstrated by these tests.

Figure 24. SharePoint FE Memory usage



This concludes discussion about the key server and solution resources, and the next section presents data relating to the user-perceived
performance – the response times.
Response times
Response times in SharePoint can be grouped into 3 types:

• Sub-second – many functions, such as simple list presentation (e.g., a Doclib list) are sub-second assuming no noticeable network propagation
delays or a fast LAN connection.
• Few seconds – some functions, such as a document check-in or similarly more complex functions can take 1-2 seconds.
• Many seconds – functions such as a file upload of a large document can take longer, simply due to the network time required to transfer the
file.

The key in providing good performance is that the response times are stable and predictable – the user’s expectations can therefore be set.
Inconsistent performance – a function taking 1 second at one time and several seconds on another occasion – is usually the most frustrating
thing for a user as their expectations cannot be set and experience is inconsistent.

The following presents the response times for the various test mixes and workloads, showing typical functions executed by the emulated users
during activity on each of the 5 test sites. Refer to Appendix B describing the workload details for more information about the detailed operations
implied by each response time name, and the exact functions executed on each site.

Response time results for Gen8 versus Gen9 servers, and SharePoint 2016 versus 2013 were broadly similar; thus only the results for
SharePoint 2016 running on Gen9 servers are presented as examples of the type of response times that should be expected.

Table 9 shows the timers for the Search and Portal workloads, both as run individually and also as part of the full Collab workload mix. Overall
these are simple functions that are typically sub-second and degrade very slightly as load is increased.
Table 9. Response times (seconds) – Search and Portal functions

| Test | FE x Cores | Goto Search | Execute Search | Open Item | Survey Respond | Goto Tasks | Open Event |
| Search only | 1 x 6 | 0.07 | 0.13 | 0.03 | – | – | – |
| Portal only | 1 x 6 | – | – | – | 0.33 | 0.14 | 0.27 |
| Collab 1x6 | 1 x 6 | 0.11 | 0.15 | 0.06 | 0.25 | 0.11 | 0.25 |
| Collab 1x12 | 1 x 12 | 0.10 | 0.14 | 0.10 | 0.24 | 0.11 | 0.25 |

Table 10 shows the MySite functions. Most are simple sub-second functions; however the “Upload Doc” function uploaded a new version of a
4.5MB document thus taking longer due to the file size. Even so, the response time was very stable and predictable.
Table 10. Response times (seconds) – MySites functions

| Test | FE x Cores | Goto MySite | Social Sites | Goto Folder | Upload Doc |
| Mysites only | 1 x 6 | 0.38 | 0.14 | 0.26 | 0.50 |
| Collab 1x6 | 1 x 6 | 0.41 | 0.15 | 0.27 | 0.50 |
| Collab 1x12 | 1 x 12 | 0.38 | 0.14 | 0.26 | 0.45 |

Table 11 shows the Teams site functions, with the “Upload Doc” function taking a little longer although still sub-second as the average file size
was only 40KB. Replying to a discussion takes the longest as there is a threaded structure to the discussion, and over the period of the test
replies are added at several levels in the discussion thread which is a more complex operation.
Table 11. Response times (seconds) – Teams functions

| Test | FE x Cores | Goto SubSite | Doclib List | Upload Doc | Open Item | Reply to Discussion |
| Teams only | 1 x 6 | 0.20 | 0.10 | 0.31 | 0.10 | 0.77 |
| Collab 1x6 | 1 x 6 | 0.24 | 0.11 | 0.34 | 0.10 | 0.79 |
| Collab 1x12 | 1 x 12 | 0.23 | 0.10 | 0.32 | 0.10 | 0.76 |

Table 12 shows the Docs site functions. In this case we see some more complex functions (check-out/-in) taking a little longer, as well as upload
of a document ranging between 1-3MB.
Table 12. Response times (seconds) – Docs functions

| Test | FE x Cores | Goto Doclib | Open Task | Checkout Doc | Download Doc | Upload Doc | Checkin Doc |
| Docs only | 1 x 6 | 0.17 | 0.12 | 0.31 | 0.27 | 0.91 | 0.37 |
| Collab 1x6 | 1 x 6 | 0.12 | 0.11 | 0.31 | 0.25 | 0.90 | 0.36 |
| Collab 1x12 | 1 x 12 | 0.13 | 0.11 | 0.31 | 0.24 | 0.92 | 0.39 |

To summarize the response time data – performance of SharePoint 2016 functions is broadly similar to that of SharePoint 2013, with the
servers running at the same 80% avg. CPU busy. The majority of simple functions are still sub-second when measured in a LAN environment,
with slightly longer times for more complex functions and document up-/down-loads as would be expected. The bandwidth and speed of a WAN
can also of course have an effect on the total response time as perceived by the user.

Summary
This reference architecture shows the design of an overall deployment to support Microsoft SQL Server 2014 and SharePoint Server 2016 in a
virtualized environment by leveraging both the HA features provided by HPE BladeSystem and HPE MSA storage SAN, coupled with best-
practice configurations for both applications. Resources also remain to either scale-up/-out these applications further should workload growth
require this; or to deploy other applications as may be needed to support the business.

The following are key challenges in deploying business process applications, such as SharePoint Server 2016, in a virtualized environment:

• Determining solution sizing and configuration to support the business workload in an HA environment
• Managing design/deployment/hardware/operations costs by leveraging efficient use of virtualized resources
• Achieving rapid deployment to minimize time-to-productivity, yet ensuring consistency during the build process, reducing problems from
configuration errors
• Defining a strategy to handle workload change and growth over time

By virtualizing SharePoint roles such as Web Server, Application servers, and SQL Server you can deploy fewer HPE ProLiant virtualization host
servers to handle all the tasks previously supported by physical servers. This in turn reduces the costs around day-to-day operations (power,
cooling, physical management, etc.). A further advantage is that each VM can be defined as to exactly the resources needed by the services
running on that VM. Further, as workloads, user behavior, and business needs change over time, the VM resources can be fine-tuned as part of a
proactive capacity planning activity that ensures capacity and performance can meet ongoing demands. Additional VMs can also be created to
support changes in service needs, or to quickly deploy temporary requirements such as a development or test server farm. High availability can
be provided by an appropriate combination of infrastructure technologies and by leveraging the HA design principles of SharePoint and SQL
Server. Virtualization can therefore provide an improved solution with more efficient use of resources and easier management. The HPE
BladeSystem allows in-enclosure expansion up to 16 servers, so by deploying additional BL460c servers (if your c7000 enclosure has available space) or assigning/re-purposing existing unused servers as needed, you can scale up/out the virtual machines (VMs) to support an increased workload or more applications as business needs evolve over time.

Results suggest that SharePoint 2016 requires increased resources compared to previous versions – a common trait for new application versions
when functionality is added. If you are running a prior SharePoint version on Gen8 servers (or older), then upgrading to SharePoint 2016 and
running it on the more powerful HPE Gen9 servers would be very appropriate and yield better price/performance – perhaps as much as 20% as
our findings show.

The configuration presented in this paper demonstrates SQL Server 2014 and SharePoint Server 2016 scaled to deliver performance suitable
for mid-market solutions, with a simple process for scaling-up the configuration to support additional users over time. You can work with your
local HPE Sales professional to determine the right configuration for your current needs.

Implementing a proof-of-concept
As a matter of best practice for all deployments, HPE recommends implementing a proof-of-concept using a test environment that matches as
closely as possible the planned production environment. In this way, appropriate performance and scalability characterizations can be obtained.
For help with a proof-of-concept, contact an HPE Services representative (hpe.com/us/en/services/consulting.html) or your HPE partner.

Appendix A: Bill of materials

Note
Part numbers are current at the time of testing and subject to change. The bill of materials does not include complete support options or rack and power requirements. For questions regarding ordering, please consult with your HPE Reseller or HPE Sales Representative for more details.

Table 13. Bill of materials


Qty Product number Description

Rack and power

1 BW908A HPE 42U 600x1200mm Enterprise Shock Rack


4 H8B50A HPE 4.9kVA 208V 30A NA/JP maPDU
1 BW930A HPE Air Flow Optimization Kit
1 BW909A HPE 42U 1200mm Side Panel Kit
4 142257-006 HPE C13-C14 WW 250V 10Amp 1.4m Jumper Crd
BladeSystem enclosure and servers

1 681844-B21 HPE BLc7000 CTO 3 IN LCD Plat Enclosure


16 727021-B21 HPE BL460c Gen9 10Gb/20Gb FLB CTO Blade
16 726987-L21 HPE BL460c Gen9 E5-2690v3 FIO Kit
16 726987-B21 HPE BL460c Gen9 E5-2690v3 Kit
256 726719-B21 HPE 16GB 2Rx4 PC4-2133P-R Kit
32 759208-B21 HPE 300GB 12G SAS 15K 2.5in SC ENT HDD
16 700764-B21 HPE FlexFabric 20Gb 2P 650FLB FIO Adptr
16 761871-B21 HPE Smart Array P244br/1G FIO Controller
2 691367-B21 HPE BLc VC FlexFabric-20/40 F8 Module
1 733460-B21 HPE 6X 2650W Plat Ht Plg FIO Pwr Sply Kit
1 433718-B21 HPE BLc7000 10K Rack Ship Brkt Opt Kit
1 677595-B21 HPE BLc 1PH Intelligent Power Mod FIO Opt
1 517520-B21 HPE BLc 6X Active Cool 200 FIO Fan Opt



Storage

1 K2R80A HPE MSA 2040 ES SAN DC SFF Storage


4 J9F37A HPE MSA 400GB 12G SAS ME 2.5in EM SSD
16 J9F46A HPE MSA 600GB 12G SAS 10K 2.5in ENT HDD
4 J9F51A HPE MSA 2TB 12G SAS 7.2K 2.5in 512e HDD
1 C8R23A HPE MSA 2040 8Gb SW FC SFP 4 Pk
1 AJ941A HPE D2700 Disk Enclosure
4 J9F37A HPE MSA 400GB 12G SAS ME 2.5in EM SSD
16 J9F46A HPE MSA 600GB 12G SAS 10K 2.5in ENT HDD
4 J9F51A HPE MSA 2TB 12G SAS 7.2K 2.5in 512e HDD
2 407337-B21 HPE Ext Mini SAS 1m Cable

Appendix B: SharePoint test methodology


This Appendix provides details of the SharePoint test methodology using Microsoft Visual Studio, and of the workload developed by HPE to
represent broadly applicable usage of SharePoint as a collaboration and document management solution.

Test process
These sections discuss best practices for running performance tests, the tools and components involved, user emulation, and the processes for
running the tests and collection and analysis of the resulting data.
Visual Studio Enterprise 2015 Update 1
Microsoft Visual Studio Enterprise 2015 Update 1 was used as the tool to emulate web-based user activity and workloads. This tool provides a
rich set of recording, customization and test execution capabilities, coupled with built-in data collection, analysis, and reporting. It also contains
many features designed specifically to make performance and load testing of SharePoint easier to develop and execute. Its capabilities and
concepts will be familiar to anyone who has used similar load generation tools, and the learning curve is not steep.

Its methodology involves the recording and customization of “Webtests,” each of which can represent a portion of the total desired
workload. Our test bed included five SharePoint site collections (Docs, Portal, Teams, MySite, and Search), and a Webtest workload was developed
for each site. Details of the workload can be found in a later section.

Once the Webtests have been developed and fully tested (single-user), they can be incorporated into a multi-user “Loadtest,” which defines
which Webtests are included (and in what percentage mix), along with other relevant test run parameters and the definition of the data collection
metrics. The next sections provide more details and examples.
Workload component Webtests
Figure 25 shows a fragment of one of the HPE Collab Workload components, in this example the “Teams” workload. The list of web functions
largely results from recording user activity, whereby the user executes the functions as defined by the workload. Some editing and customization is
required to add elements such as randomization and variability, the percentage probability of executing various sub-sections, and so on. Data
sources (e.g., CSV files) can also be bound to script variables to provide randomized or sequential data as needed. In the example fragment
shown, a user connects to a Team Sites host site and then picks a random sub-site to visit and execute further actions. User “think time” can also
be incorporated into the functions to emulate users reading screen output or making decisions before proceeding.

Figure 25. Visual Studio – Example Webtest workload component

Once each Webtest is developed and proven to work fully, it can be incorporated into a multi-user Loadtest.
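For readers without access to the Visual Studio project, the following minimal Python sketch illustrates what such a recorded fragment does. It uses the requests library, a hypothetical teams.example.com URL, and a hypothetical subsites.csv data file; it is an analogy to the recorded Webtest, not the Webtest itself.

```python
import csv
import random
import time

import requests  # assumption: a plain HTTP client standing in for the Visual Studio Webtest engine

# Hypothetical data source: a CSV file of sub-site URLs with a "url" column,
# analogous to a data source bound to Webtest variables.
with open("subsites.csv", newline="") as f:
    subsites = [row["url"] for row in csv.DictReader(f)]

session = requests.Session()

# Connect to the Team Sites host site (hypothetical URL, not the tested farm's address).
session.get("http://teams.example.com/", timeout=30)
time.sleep(3)  # emulated user "think time" before the next action

# Pick a random sub-site to visit, as the recorded Webtest fragment does.
session.get(random.choice(subsites), timeout=30)
time.sleep(random.uniform(1, 4))  # further think time before subsequent actions
```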

Workload Loadtests
Multiple Loadtests were defined for the tests required. These included five individual tests, each using only one of the workloads. This allowed
examination of the resources used by each of the different sites and the web parts used on those sites. The resource demands of, say, Search
activity are quite different from those of list presentation or file upload. A further test was defined to represent the full HPE Collab
Workload, combining the five workload components in a specific percentage mix, as per the workload definition.
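Visual Studio applies the percentage mix natively within the Loadtest definition, but as a rough illustration of the idea, the Python sketch below picks a workload component for each virtual-user iteration, weighted by the site percentages given in the workload description later in this appendix; it is an analogy, not part of the test harness.

```python
import random
from collections import Counter

# Workload components and their share of the overall HPE Collab Workload,
# taken from the workload description later in this appendix.
WORKLOAD_MIX = {
    "Document Center": 30,
    "Team Sites": 20,
    "Portal": 20,
    "My Sites": 10,
    "Search": 20,
}

def pick_workload():
    """Choose the Webtest a virtual user runs next, weighted by the defined mix."""
    return random.choices(list(WORKLOAD_MIX), weights=list(WORKLOAD_MIX.values()), k=1)[0]

# Example: the distribution over 10,000 iterations approximates the defined mix.
print(Counter(pick_workload() for _ in range(10_000)))
```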

Figure 26 shows the definition of the Collab workload in Visual Studio. Also defined in the Loadtest are such items as warm-up and stable test
times, user population, server-specific metric data collection, etc. Visual Studio provides all the data collection, aggregation and analysis/reporting
tools and can capture and report on a substantial amount of data.

Figure 26. Visual Studio – Loadtest definition



Test procedures
The test procedures are in large part defined in the Loadtest. The process is to define a workload mix as desired, and then to ramp up the user
population over a period of time so as to avoid “shock” to the system and to enable a stable state to be reached. The test then runs for a period of
time at a constant user population with the desired mix of Webtests (and thus site-related functions) being executed. Data are collected at a
defined sample interval during this time. The length of the test is determined by the quantity of data samples required to provide confidence in
the statistics. As this workload is fairly intense (each emulated user executing about 1.4 requests per second), thousands of data samples can be
generated in only a 10-minute test, especially as the workload is quite stable over time. At the conclusion of the test, Visual Studio collates the
data from the multiple servers as defined in the Loadtest and creates summary and graphical reports of the key resources, response times, etc.
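To give a feel for the data volumes involved, the short sketch below estimates ramp-up time and steady-state request volume. The user population and ramp-up step values are hypothetical; only the figure of about 1.4 requests per second per user comes from this appendix.

```python
# Illustrative arithmetic only: the user population, ramp-up step, and
# steady-state duration below are hypothetical values, not figures from
# the tested configuration.
USERS = 500                  # hypothetical steady-state user population
RAMP_STEP_USERS = 50         # hypothetical: add 50 users...
RAMP_STEP_SECONDS = 60       # ...every 60 seconds during warm-up
REQ_PER_USER_PER_SEC = 1.4   # workload intensity quoted in this appendix
STEADY_SECONDS = 600         # a 10-minute measurement window

ramp_up_seconds = USERS / RAMP_STEP_USERS * RAMP_STEP_SECONDS
total_requests = USERS * REQ_PER_USER_PER_SEC * STEADY_SECONDS

print(f"Ramp-up time: {ramp_up_seconds:.0f} s")                          # 600 s
print(f"Requests generated in steady state: {total_requests:,.0f}")      # 420,000
```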

Figure 27 shows an example Visual Studio screen during a test run. Graphs and tabular data show real-time information regarding key
performance indicators, response times, system-under-test CPU and memory consumption, etc. The example below shows a test approaching
the end of a 90-minute run. Some typical behavior stands out, such as the periodic Search Service incremental crawls occurring every 15 minutes.
These show both as increased Search Service VM CPU “spikes” (blue lines in the bottom-left graph) and as increased network traffic from SQL
(corresponding spikes on the green line in the top-left graph).

Figure 27. Visual Studio – Workload real-time monitoring

SharePoint workload description


This workload description is included as a reference point against which you can compare your own expected workload. If your workload is markedly
different or more intense, the VMs may require more resources than those tested in the labs during this study. One
advantage of deploying SharePoint services on VMs is that each VM’s resources can be finely tuned to meet the service resource needs driven
by the workload requirements.

The workload was designed to run against a SharePoint test farm supporting five main site collections that provide typical functionality for this
type of solution. The sites comprise:
• A Document Center – provides document libraries with version control and check-out/-in capability. The content is heavily modified (high
write:read ratio).
• Team sites (50 sub-sites with multiple users per site) – provides team document libraries and collaboration capabilities such as team
calendars, threaded discussions, etc. Moderate document and list content modification.
• A Portal site – intended to provide corporate-wide communication, events and announcements lists, surveys, etc. Low content modification –
mostly read.
• My Sites – each user has a personal “my site” used to contain personal documents and work in progress. Moderate document content
modification.
• A Search Center – provides standard search capabilities enabling users to discover content stored across the farm, and optionally
open/read/download discovered content.

The following sections describe the workload functionality associated with each of the five sites. Note that the percentage figure shown with each
site indicates the amount each contributes to the overall mixed workload. Within each site workload description there are also typically other
percentages indicating probabilities that sub-functions will be executed.
Document Center – 30%
The Document Center contains multiple document libraries and is intended as a central repository for content being developed or modified. It
also contains a task list with tasks assigned to specific users. The simulated user accesses the Docs.com site, and then performs document
or task-related functions according to the percentages shown (an illustrative sketch of this branching logic follows the list):
• Check-out/-in document – 75%
– A simulated user navigates to the Document Center site (Docs.com) and then into a specific folder in the site document library.
– The user then checks out a user-specific document, causing a change to the state of the document in the list view.
– 50% of the time, the user will then download the document; and then re-upload it as a new version. Document size is 2.5MB.
– The user then checks the document back in as a new version before navigating back to the Document Center site home page. Note that the
maximum number of major versions in the document libraries was set to 3, thus preventing library size growth that would occur if versions
were unrestricted.

• Tasks – 25%
– 50% of the time, the user navigates to the site task list and opens a random task from the list (simulating reading), and then closes it.
– 50% of the time, the user navigates to the task list and creates a new task; and then deletes a random task.
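The sketch below is an illustrative rendering of this branching logic in Python. The step labels are descriptive placeholders for the recorded web requests; the sketch is not part of the Visual Studio Webtest itself.

```python
import random

def document_center_iteration():
    """Return the sequence of emulated user steps for one pass through the
    Document Center workload, following the probabilities described above.
    The steps are labels only; in the real test each corresponds to recorded
    web requests within the Webtest."""
    steps = ["navigate to Docs.com", "open folder in document library"]
    if random.random() < 0.75:                       # check-out/-in path (75%)
        steps.append("check out user-specific document")
        if random.random() < 0.50:                   # 50%: download and re-upload (2.5MB document)
            steps += ["download document", "re-upload as new version"]
        steps += ["check document back in", "return to Document Center home page"]
    else:                                            # task path (25%)
        if random.random() < 0.50:
            steps += ["open random task (read)", "close task"]
        else:
            steps += ["create new task", "delete random task"]
    return steps

if __name__ == "__main__":
    for step in document_center_iteration():
        print(step)
```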

Team Sites – 20%


A simulated user accesses the Teams.com site, which contains 25 sub-sites. The user randomly navigates to one of these sites. The user's
actions then relate to Calendar, Discussions, or Document-related tasks according to defined percentages as follows:
• Calendar – 30%:
– 75% of the time, the user navigates to the Team Site Calendar and chooses a random entry from the list of 25 entries. The user will then
open the chosen calendar entry (simulating reading), then close the entry; and finally navigate back to the home page.
– 25% of the time, the user creates a new calendar entry, then deletes a random calendar entry; and then navigates back to the home page.

• Discussions – 50%:
– The user navigates to the discussion list in the site and then opens a random discussion (simulating reading).
– 33% of the time, the user replies to that discussion. The user then navigates back to the Teams home page.

• Documents – 20%:
– The user navigates to the team site document library and then uploads a random Microsoft Excel 2010 worksheet (xlsx file) ranging in size
from 10KB to 16KB.

Portal – 20%
Simulated users navigate to the portal.com site and then perform the following actions based on the percentage probabilities:
• Events – 40%
– The simulated user navigates into a list that contains 50 events. The user chooses a single event and opens it (simulating reading). The
user then closes the event and navigates back to the home page.

• Announcements – 40%
– The simulated user navigates into a list that contains 50 general announcements. The user chooses a single announcement and opens it
(simulating reading). The user then closes the announcement and navigates back to the home page.

• Surveys – 20%
– The simulated user navigates to the Surveys list, then opens and responds to a survey; and then navigates back to the portal site.

My Sites – 10%
Each simulated user has a personal “my site”, used to store personal documents or work in progress. Note that these sites are created prior to the
performance tests being run, so that the time and resources required to initialize a My Site the first time a user accesses it are not captured during
the test.

• The simulated user navigates to their personal My Site, and then into their personal documents folder. 50% of the time the user uploads a
random document ranging in size from 1MB to 10MB. Upon completion, the user then navigates back to the Mysite.com home page.

Search – 20%
The Search Center provides the user the ability to submit a search query based on a simple word or phrase. This query yields a result set listing
up to 20 documents (the initial results page), sorted in most-relevant-match order (high to low).

• The simulated user navigates to the Search Center site and performs a search with a word or phrase picked randomly from a list of 40
examples. 25% of the time the user then downloads a document chosen at random from the first 8 search results (document size ranges from
1MB to 16MB), before navigating back to the home page.

Each of the site actions described above is broken down into a set of workload transactions to record the response time of each step of the
action. For example, in the portal workload the first transaction is the navigation into the events calendar; the next transaction is the opening of a
single event; and closing the event is recorded as yet another transaction.

Between each of the transactions, random wait times, designed to simulate users reading or thinking, are introduced after the response time is
recorded for the transaction. For this workload the random wait times are in the range of 1 to 4 seconds.
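As a minimal illustration of this pattern, the Python sketch below times a single transaction and then applies a random 1-4 second think time. The requests library and the portal.example.com URL are assumptions for the example and are not part of the Visual Studio test harness, which handles timing and think times internally.

```python
import random
import time

import requests  # assumption: any HTTP client would do; Visual Studio handles this internally

def timed_transaction(name, url, session):
    """Record the response time of one transaction, then apply a random
    1-4 second think time before the next transaction, mirroring the
    behavior described above (think time is not counted in the response time)."""
    start = time.perf_counter()
    response = session.get(url, timeout=30)
    elapsed = time.perf_counter() - start
    print(f"{name}: HTTP {response.status_code} in {elapsed:.3f} s")
    time.sleep(random.uniform(1, 4))
    return elapsed

# Example usage with a hypothetical portal URL:
# with requests.Session() as s:
#     timed_transaction("open events list", "http://portal.example.com/Lists/Events", s)
```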

Resources and additional links


To read more about HPE solutions for Microsoft SharePoint, please refer to
http://h17007.www1.hpe.com/us/en/enterprise/reference-architecture/info-library/index.aspx?app=ms_sp

HPE BladeSystem
hpe.com/info/bladesystem

HPE MSA Storage
hpe.com/info/msa

HPE Sizer for Microsoft SharePoint 2016
hpe.com/info/sizers

HPE Services
hpe.com/services

HPE and VMware
hpe.com/partners/vmware

HPE Networking
hpe.com/networking

HPE Servers
hpe.com/servers

Microsoft SharePoint on Microsoft Docs
https://docs.microsoft.com/en-us/SharePoint/sharepoint-server

HPE Reference Architectures
hpe.com/info/ra

HPE Technology Consulting Services
hpe.com/us/en/services/consulting.html

To help us improve our documents, please provide feedback at hpe.com/contact/feedback.

Sign up for updates

© Copyright 2016-2018 Hewlett Packard Enterprise Development LP. The information contained herein is subject to change without
notice. The only warranties for Hewlett Packard Enterprise products and services are set forth in the express warranty statements
accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. Hewlett Packard
Enterprise shall not be liable for technical or editorial errors or omissions contained herein.

Intel and Xeon are trademarks of Intel Corporation in the U.S. and other countries. Microsoft, Windows, and Windows Server are either
registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries. VMware is a registered
trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions.

4AA6-5186ENW, October 2018, Rev. 2
